Compare commits


74 Commits

Author SHA1 Message Date
Prowler Bot 322a500352 fix(ui): centralize default muted findings filter on finding groups (#10819)
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-21 14:33:42 +02:00
Prowler Bot ea09ff8902 perf(api): speed up finding-groups /resources endpoint (#10817)
Co-authored-by: Adrián Peña <adrianjpr@gmail.com>
2026-04-21 13:37:52 +02:00
Prowler Bot 24ce8d268b fix(changelog): relocate entries for the SDK (#10813)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-21 08:20:47 +02:00
Prowler Bot 0eb7b34207 chore(deps): bump pyasn1 from 0.6.2 to 0.6.3 in /api (#10805)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Adrián Peña <adrianjpr@gmail.com>
2026-04-20 17:58:18 +02:00
Prowler Bot f6b9d8611c fix(api): align latest_resources scan selection with completed_at (#10804)
Co-authored-by: Adrián Peña <adrianjpr@gmail.com>
2026-04-20 17:35:40 +02:00
Prowler Bot 28175170ce chore(api): Bump version to v1.25.2 (#10796)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-20 15:41:52 +02:00
Prowler Bot f5cb033f91 chore(release): Bump version to v5.24.2 (#10793)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-20 15:41:20 +02:00
Prowler Bot 558e292a2a docs: Update version to v5.24.1 (#10795)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-20 15:40:52 +02:00
Prowler Bot a4938897ac chore(ui): Bump version to v5.24.2 (#10794)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-20 15:40:15 +02:00
Prowler Bot 2cb8179477 chore: review changelog for v5.24.1 (#10792)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-20 14:10:04 +02:00
Prowler Bot c9bbe7033b fix(ui): sorting and filtering for findings (#10790)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2026-04-20 13:46:36 +02:00
Prowler Bot 76ecb30968 fix(api): detect silent failures in ResourceFindingMapping (#10781)
Co-authored-by: Pedro Martín <pedromarting3@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-20 09:15:49 +02:00
Prowler Bot 84a60fe06b fix(ui): correct IaC findings counters (#10773)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2026-04-17 13:55:17 +02:00
Prowler Bot f71743b95b fix(cloudflare): guard validate_credentials against paginator infinite loops (#10772)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2026-04-17 11:38:12 +02:00
Prowler Bot 68dcc5a75c fix(ui): exclude muted findings and polish filter selectors (#10770)
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
2026-04-17 11:16:41 +02:00
Prowler Bot 407ae24f04 perf(attack-paths): cleanup task prioritization, restore default batch sizes to 1000, upgrade Cartography to 0.135.0 (#10768)
Co-authored-by: Josema Camacho <josema@prowler.com>
2026-04-17 11:01:19 +02:00
Prowler Bot 17c4a286af chore(deps): bump msgraph-sdk to 1.55.0 and azure-mgmt-resource to 24.0.0, remove marshmallow (#10766)
Co-authored-by: Josema Camacho <josema@prowler.com>
2026-04-17 10:22:17 +02:00
Prowler Bot 69ee2cdcef fix(googleworkspace): treat secure Google defaults as PASS for Drive checks (#10765)
Co-authored-by: lydiavilchez <114735608+lydiavilchez@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-17 09:12:57 +02:00
Prowler Bot 3544ff5e75 fix: CHANGELOG minor issue (#10759)
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
2026-04-16 17:10:44 +02:00
Prowler Bot 69287dc3a1 fix(api): exclude muted findings from pass_count, fail_count and manual_count (#10755) 2026-04-16 16:16:25 +02:00
Prowler Bot cf5848d11d fix(ui): upgrade React 19.2.5 and Next.js 16.2.3 to mitigate CVE-2026-23869 (#10754)
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
2026-04-16 15:39:30 +02:00
Prowler Bot 8ead3fa6bb fix(api): add fallback handling for missing resources in findings (#10751)
Co-authored-by: Adrián Peña <adrianjpr@gmail.com>
2026-04-16 14:54:27 +02:00
Prowler Bot 21483cc12f fix(googleworkspace): treat secure Google defaults as PASS for Calendar checks (#10735)
Co-authored-by: lydiavilchez <114735608+lydiavilchez@users.noreply.github.com>
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2026-04-16 13:36:14 +02:00
Prowler Bot 628de4bd06 fix(image): --registry-list crashes with AttributeError on global_provider (#10730)
Co-authored-by: Erich Blume <725328+eblume@users.noreply.github.com>
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2026-04-16 13:31:08 +02:00
Prowler Bot 043f1ef138 fix(sdk): allow account-scoped tokens in Cloudflare connection test (#10731)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2026-04-16 13:25:09 +02:00
Prowler Bot a120da9409 fix(db): add missing tenant_id filter in queries (#10725)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-16 12:11:28 +02:00
Prowler Bot d5b71c6436 chore(ui): Bump version to v5.24.1 (#10713)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-15 20:14:37 +02:00
Prowler Bot 9114d09ba5 docs: Update version to v5.24.0 (#10716)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-15 20:14:27 +02:00
Prowler Bot d2b1224a30 chore(release): Bump version to v5.24.1 (#10712)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-15 20:13:54 +02:00
Prowler Bot 54b54e25e2 chore(api): Bump version to v1.25.1 (#10717)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-15 20:13:43 +02:00
Prowler Bot 1b45724ca8 chore(api): Update prowler dependency to v5.24 for release 5.24.0 (#10709)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-15 18:57:37 +02:00
Pepe Fagoaga ba5b23245f chore: review changelog for v5.24 (#10707) 2026-04-15 18:05:55 +02:00
Daniel Barranquero 43913b1592 feat(aws): support excluding regions from scans via CLI, env var, and config (#10688) 2026-04-15 17:59:46 +02:00
Alan Buscaglia 9e31160887 fix(ui): improve attack paths scan table UX and fix info banner variant (#10704)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2026-04-15 17:33:29 +02:00
Pepe Fagoaga 9a0c73256e chore: delete .opencode (#10702) 2026-04-15 15:10:40 +02:00
Alejandro Bailo 2a160a10df refactor(ui): remove legacy side drawers and clean code (#10692) 2026-04-15 13:55:57 +02:00
Alan Buscaglia 8d8bee165b feat(ui): improve attack paths scan selection UX (#10685) 2026-04-15 13:54:25 +02:00
Alan Buscaglia 606efec9f8 fix(ui): keep update credentials wizard open (#10675) 2026-04-15 13:50:20 +02:00
Alan Buscaglia d5354e8b1d feat(ui): add syntax highlighting to finding groups remediation code (#10698) 2026-04-15 12:58:35 +02:00
Rubén De la Torre Vico a96e5890dc docs: replace Excalidraw diagrams with Mermaid and fix architecture connections (#10697) 2026-04-15 12:51:29 +02:00
Pepe Fagoaga bb81c5dd2d docs: add contextual menu for copy and issue/feat (#10699) 2026-04-15 12:50:29 +02:00
Daniel Barranquero c3acb818d9 fix(vercel): handle team-scoped firewall config responses (#10695) 2026-04-15 11:59:20 +02:00
Andoni Alonso e6fc59267b docs: add Finding Groups documentation page (#10696)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-15 11:58:39 +02:00
Josema Camacho 62f114f5d0 refactor(api): remove dead cleanup_findings no-op from attack-paths module (#10684) 2026-04-15 09:16:38 +02:00
Pepe Fagoaga 392ffd5a60 fix(beat): make it dependent on API service (#10603)
Co-authored-by: Josema Camacho <josema@prowler.com>
2026-04-14 18:35:26 +02:00
Alejandro Bailo 507b0882d5 fix(ui): fix findings group resource filters and mute modal migration (#10662) 2026-04-14 18:01:45 +02:00
Alejandro Bailo 89d72cf8fd feat(ui): new resources side drawer with redesigned detail panel (#10673) 2026-04-14 17:20:19 +02:00
Rubén De la Torre Vico f3a042933f chore(deps): replace pre-commit and husky with prek (#10601) 2026-04-14 16:34:54 +02:00
stepsecurity-app[bot] 96e7d6cb3a feat(security): security best practices from StepSecurity (#10682)
Signed-off-by: StepSecurity Bot <bot@stepsecurity.io>
Co-authored-by: stepsecurity-app[bot] <188008098+stepsecurity-app[bot]@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-04-14 15:13:12 +02:00
Hugo Pereira Brito a82eaa885d refactor(m365): normalize CA platforms at model level (#10635)
Co-authored-by: Hugo P.Brito <hugopbrito@Mac.home>
2026-04-14 15:00:23 +02:00
Hugo Pereira Brito 90a619a8b4 feat(m365): add entra_conditional_access_policy_block_unknown_device_platforms security check (#10615)
Co-authored-by: Hugo P.Brito <hugopbrito@Mac.home>
2026-04-14 14:32:37 +02:00
Hugo Pereira Brito 638bf62d76 feat(entra): directory sync account exclusion (#10620)
Co-authored-by: Hugo P.Brito <hugopbrito@Mac.home>
2026-04-14 14:16:32 +02:00
Pablo Fernandez Guerra (PFE) 962615ca1f chore(ui): bump serialize-javascript override from 7.0.3 to 7.0.5 (#10653)
Co-authored-by: Pablo F.G <pablo.fernandez@prowler.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 14:11:59 +02:00
Hugo Pereira Brito 5610f5ad90 feat(m365): add entra_conditional_access_policy_corporate_device_sign_in_frequency_enforced security check (#10618)
Co-authored-by: Hugo P.Brito <hugopbrito@Mac.home>
2026-04-14 14:10:00 +02:00
Pepe Fagoaga be6fe1db04 chore(security): bump pytest to 9.0.3 (#10678) 2026-04-14 13:59:30 +02:00
Hugo Pereira Brito 92b838866a feat(m365): add entra_conditional_access_policy_mfa_enforced_for_guest_users security check (#10616)
Co-authored-by: Hugo P.Brito <hugopbrito@Mac.home>
2026-04-14 13:45:12 +02:00
Josema Camacho 51591cb8cd build: bump poetry to 2.3.4 and consolidate SDK workflows (#10681) 2026-04-14 13:32:46 +02:00
Hugo Pereira Brito e24e1ab771 feat(m365): add exchange_organization_delicensing_resiliency_enabled security check (#10608) 2026-04-14 13:30:45 +02:00
Hugo Pereira Brito bc3fd79457 feat(intune): add device compliance policy marks noncompliant check (#10599) 2026-04-14 13:01:47 +02:00
Hugo Pereira Brito 4941ed5797 feat(entra): add new check entra_conditional_access_policy_all_apps_all_users (#10619)
Co-authored-by: Hugo P.Brito <hugopbrito@Mac.home>
2026-04-14 12:47:57 +02:00
Daniel Barranquero 0f4d8ff891 feat(aws): add bedrock_vpc_endpoints_configured security check (#10591) 2026-04-14 12:22:22 +02:00
Daniel Barranquero d1ab8b8ae5 feat(aws): add iam_policy_no_wildcard_marketplace_subscribe and iam_inline_policy_no_wildcard_marketplace_subscribe checks (#10525)
Co-authored-by: Hugo P.Brito <hugopbrit@gmail.com>
2026-04-14 12:08:40 +02:00
Daniel Barranquero 65e9593b41 feat(aws): add bedrock_access_not_stale security check (#10536) 2026-04-14 11:20:40 +02:00
Daniel Barranquero 131112398b feat(aws): add bedrock_full_access_policy_attached security check (#10577) 2026-04-14 11:00:40 +02:00
Pedro Martín c952ea018e fix(ui): reflect actual provider in compliance detail header (#10674)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
2026-04-14 10:22:42 +02:00
Pedro Martín 31b645ee53 chore(github): allow GitHub release CDN in trivy scan allowlist (#10679) 2026-04-14 10:09:54 +02:00
harshadkhetpal 0123e603d8 fix: replace bare except with except Exception in prowler-wrapper (#10499) 2026-04-14 08:11:53 +02:00
Prowler Bot b65265da4b feat(aws): Update regions for AWS services (#10659)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-14 08:03:14 +02:00
Prowler Bot 1335332fe9 chore(api): Bump version to v1.25.0 (#10668)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-13 22:18:59 +02:00
Prowler Bot f37a2a1efe chore(release): Bump version to v5.24.0 (#10666)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-13 22:18:54 +02:00
Prowler Bot 3e0e1398c4 docs: Update version to v5.23.0 (#10667)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-13 22:18:13 +02:00
Prowler Bot a4ad9ba01f chore(ui): Bump version to v5.24.0 (#10665)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-04-13 22:17:44 +02:00
Adrián Peña c6d5f44c5e chore: update pyjwt (#10661) 2026-04-13 14:09:37 +02:00
Adrián Peña 5d24a41625 feat(api): add sort support for all finding group counter fields (#10655) 2026-04-13 13:34:35 +02:00
375 changed files with 22830 additions and 7925 deletions
+1 -1
@@ -145,7 +145,7 @@ SENTRY_RELEASE=local
NEXT_PUBLIC_SENTRY_ENVIRONMENT=${SENTRY_ENVIRONMENT}
#### Prowler release version ####
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.16.0
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.24.2
# Social login credentials
SOCIAL_GOOGLE_OAUTH_CALLBACK_URL="${AUTH_URL}/api/auth/callback/google"
@@ -13,11 +13,15 @@ inputs:
poetry-version:
description: 'Poetry version to install'
required: false
default: '2.1.1'
default: '2.3.4'
install-dependencies:
description: 'Install Python dependencies with Poetry'
required: false
default: 'true'
update-lock:
description: 'Run `poetry lock` during setup. Only enable when a prior step mutates pyproject.toml (e.g. API `@master` VCS rewrite). Default: false.'
required: false
default: 'false'
runs:
using: 'composite'
@@ -74,7 +78,7 @@ runs:
grep -A2 -B2 "resolved_reference" poetry.lock
- name: Update poetry.lock (prowler repo only)
if: github.repository == 'prowler-cloud/prowler'
if: github.repository == 'prowler-cloud/prowler' && inputs.update-lock == 'true'
shell: bash
working-directory: ${{ inputs.working-directory }}
run: poetry lock
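
The API workflows later in this compare opt in to the new input per job. A minimal caller of the updated composite action, assembled from those hunks (only the step name is illustrative):

```yaml
- name: Setup Python with Poetry
  uses: ./.github/actions/setup-python-poetry
  with:
    python-version: '3.12'
    working-directory: ./api
    update-lock: 'true'  # only when a prior step mutates pyproject.toml
```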
+2
@@ -13,6 +13,8 @@ env:
PROWLER_VERSION: ${{ github.event.release.tag_name }}
BASE_BRANCH: master
permissions: {}
jobs:
detect-release-type:
runs-on: ubuntu-latest
+3
@@ -17,6 +17,8 @@ concurrency:
env:
API_WORKING_DIR: ./api
permissions: {}
jobs:
api-code-quality:
runs-on: ubuntu-latest
@@ -67,6 +69,7 @@ jobs:
with:
python-version: ${{ matrix.python-version }}
working-directory: ./api
update-lock: 'true'
- name: Poetry check
if: steps.check-changes.outputs.any_changed == 'true'
+2
@@ -24,6 +24,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
api-analyze:
name: CodeQL Security Analysis
@@ -33,6 +33,8 @@ env:
PROWLERCLOUD_DOCKERHUB_REPOSITORY: prowlercloud
PROWLERCLOUD_DOCKERHUB_IMAGE: prowler-api
permissions: {}
jobs:
setup:
if: github.repository == 'prowler-cloud/prowler'
@@ -18,6 +18,8 @@ env:
API_WORKING_DIR: ./api
IMAGE_NAME: prowler-api
permissions: {}
jobs:
api-dockerfile-lint:
if: github.repository == 'prowler-cloud/prowler'
+5 -1
@@ -17,6 +17,8 @@ concurrency:
env:
API_WORKING_DIR: ./api
permissions: {}
jobs:
api-security-scans:
runs-on: ubuntu-latest
@@ -70,6 +72,7 @@ jobs:
with:
python-version: ${{ matrix.python-version }}
working-directory: ./api
update-lock: 'true'
- name: Bandit
if: steps.check-changes.outputs.any_changed == 'true'
@@ -77,9 +80,10 @@ jobs:
- name: Safety
if: steps.check-changes.outputs.any_changed == 'true'
run: poetry run safety check --ignore 79023,79027,86217
run: poetry run safety check --ignore 79023,79027,86217,71600
# TODO: 79023 & 79027 knack ReDoS until `azure-cli-core` (via `cartography`) allows `knack` >=0.13.0
# TODO: 86217 because `alibabacloud-tea-openapi == 0.4.3` don't let us upgrade `cryptography >= 46.0.0`
# TODO: 71600 CVE-2024-1135 false positive - fixed in gunicorn 22.0.0, project uses 23.0.0
- name: Vulture
if: steps.check-changes.outputs.any_changed == 'true'
+3
@@ -30,6 +30,8 @@ env:
VALKEY_DB: 0
API_WORKING_DIR: ./api
permissions: {}
jobs:
api-tests:
runs-on: ubuntu-latest
@@ -116,6 +118,7 @@ jobs:
with:
python-version: ${{ matrix.python-version }}
working-directory: ./api
update-lock: 'true'
- name: Run tests with pytest
if: steps.check-changes.outputs.any_changed == 'true'
+2
@@ -17,6 +17,8 @@ env:
BACKPORT_LABEL_PREFIX: backport-to-
BACKPORT_LABEL_IGNORE: was-backported
permissions: {}
jobs:
backport:
if: github.event.pull_request.merged == true && !(contains(github.event.pull_request.labels.*.name, 'backport')) && !(contains(github.event.pull_request.labels.*.name, 'was-backported'))
+2
@@ -21,6 +21,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
zizmor:
if: github.repository == 'prowler-cloud/prowler'
@@ -9,6 +9,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.issue.number }}
cancel-in-progress: false
permissions: {}
jobs:
update-labels:
if: contains(github.event.issue.labels.*.name, 'status/awaiting-response')
@@ -16,6 +16,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
permissions: {}
jobs:
conventional-commit-check:
runs-on: ubuntu-latest
@@ -13,6 +13,8 @@ env:
BACKPORT_LABEL_PREFIX: backport-to-
BACKPORT_LABEL_COLOR: B60205
permissions: {}
jobs:
create-label:
runs-on: ubuntu-latest
+2
@@ -13,6 +13,8 @@ env:
PROWLER_VERSION: ${{ github.event.release.tag_name }}
BASE_BRANCH: master
permissions: {}
jobs:
detect-release-type:
runs-on: ubuntu-latest
+2
@@ -14,6 +14,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
scan-secrets:
runs-on: ubuntu-latest
+2
@@ -21,6 +21,8 @@ concurrency:
env:
CHART_PATH: contrib/k8s/helm/prowler-app
permissions: {}
jobs:
helm-lint:
if: github.repository == 'prowler-cloud/prowler'
+2
@@ -13,6 +13,8 @@ concurrency:
env:
CHART_PATH: contrib/k8s/helm/prowler-app
permissions: {}
jobs:
release-helm-chart:
if: github.repository == 'prowler-cloud/prowler'
@@ -9,6 +9,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.issue.number }}
cancel-in-progress: false
permissions: {}
jobs:
lock:
if: |
+2
@@ -15,6 +15,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
permissions: {}
jobs:
labeler:
runs-on: ubuntu-latest
@@ -32,6 +32,8 @@ env:
PROWLERCLOUD_DOCKERHUB_REPOSITORY: prowlercloud
PROWLERCLOUD_DOCKERHUB_IMAGE: prowler-mcp
permissions: {}
jobs:
setup:
if: github.repository == 'prowler-cloud/prowler'
@@ -18,6 +18,8 @@ env:
MCP_WORKING_DIR: ./mcp_server
IMAGE_NAME: prowler-mcp
permissions: {}
jobs:
mcp-dockerfile-lint:
if: github.repository == 'prowler-cloud/prowler'
@@ -87,6 +89,9 @@ jobs:
api.github.com:443
mirror.gcr.io:443
check.trivy.dev:443
get.trivy.dev:443
release-assets.githubusercontent.com:443
objects.githubusercontent.com:443
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+2
@@ -14,6 +14,8 @@ env:
PYTHON_VERSION: "3.12"
WORKING_DIRECTORY: ./mcp_server
permissions: {}
jobs:
validate-release:
if: github.repository == 'prowler-cloud/prowler'
+2
@@ -16,6 +16,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
permissions: {}
jobs:
check-changelog:
if: contains(github.event.pull_request.labels.*.name, 'no-changelog') == false
@@ -16,6 +16,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
permissions: {}
jobs:
check-compliance-mapping:
if: contains(github.event.pull_request.labels.*.name, 'no-compliance-check') == false
@@ -15,6 +15,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
permissions: {}
jobs:
check-conflicts:
runs-on: ubuntu-latest
+2
@@ -12,6 +12,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: false
permissions: {}
jobs:
trigger-cloud-pull-request:
if: |
+5 -7
@@ -17,6 +17,8 @@ concurrency:
env:
PROWLER_VERSION: ${{ inputs.prowler_version }}
permissions: {}
jobs:
prepare-release:
if: github.event_name == 'workflow_dispatch' && github.repository == 'prowler-cloud/prowler'
@@ -38,15 +40,11 @@ jobs:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
- name: Setup Python with Poetry
uses: ./.github/actions/setup-python-poetry
with:
python-version: '3.12'
- name: Install Poetry
run: |
python3 -m pip install --user poetry==2.1.1
echo "$HOME/.local/bin" >> $GITHUB_PATH
install-dependencies: 'false'
- name: Configure Git
run: |
+2
@@ -13,6 +13,8 @@ env:
PROWLER_VERSION: ${{ github.event.release.tag_name }}
BASE_BRANCH: master
permissions: {}
jobs:
detect-release-type:
runs-on: ubuntu-latest
@@ -10,6 +10,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
check-duplicate-test-names:
if: github.repository == 'prowler-cloud/prowler'
+4 -13
@@ -14,6 +14,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
sdk-code-quality:
if: github.repository == 'prowler-cloud/prowler'
@@ -69,22 +71,11 @@ jobs:
contrib/**
**/AGENTS.md
- name: Install Poetry
- name: Setup Python with Poetry
if: steps.check-changes.outputs.any_changed == 'true'
run: pipx install poetry==2.1.1
- name: Set up Python ${{ matrix.python-version }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
uses: ./.github/actions/setup-python-poetry
with:
python-version: ${{ matrix.python-version }}
cache: 'poetry'
- name: Install dependencies
if: steps.check-changes.outputs.any_changed == 'true'
run: |
poetry install --no-root
poetry run pip list
- name: Check Poetry lock file
if: steps.check-changes.outputs.any_changed == 'true'
+2
@@ -30,6 +30,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
sdk-analyze:
if: github.repository == 'prowler-cloud/prowler'
@@ -47,6 +47,8 @@ env:
# AWS configuration (for ECR)
AWS_REGION: us-east-1
permissions: {}
jobs:
setup:
if: github.repository == 'prowler-cloud/prowler'
@@ -74,15 +76,14 @@ jobs:
with:
persist-credentials: false
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
- name: Setup Python with Poetry
uses: ./.github/actions/setup-python-poetry
with:
python-version: ${{ env.PYTHON_VERSION }}
install-dependencies: 'false'
- name: Install Poetry
run: |
pipx install poetry==2.1.1
pipx inject poetry poetry-bumpversion
- name: Inject poetry-bumpversion plugin
run: pipx inject poetry poetry-bumpversion
- name: Get Prowler version and set tags
id: get-prowler-version
@@ -17,6 +17,8 @@ concurrency:
env:
IMAGE_NAME: prowler
permissions: {}
jobs:
sdk-dockerfile-lint:
if: github.repository == 'prowler-cloud/prowler'
@@ -85,6 +87,7 @@ jobs:
check.trivy.dev:443
debian.map.fastlydns.net:80
release-assets.githubusercontent.com:443
objects.githubusercontent.com:443
pypi.org:443
files.pythonhosted.org:443
www.powershellgallery.com:443
+8 -10
@@ -13,6 +13,8 @@ env:
RELEASE_TAG: ${{ github.event.release.tag_name }}
PYTHON_VERSION: '3.12'
permissions: {}
jobs:
validate-release:
if: github.repository == 'prowler-cloud/prowler'
@@ -73,13 +75,11 @@ jobs:
with:
persist-credentials: false
- name: Install Poetry
run: pipx install poetry==2.1.1
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
- name: Setup Python with Poetry
uses: ./.github/actions/setup-python-poetry
with:
python-version: ${{ env.PYTHON_VERSION }}
install-dependencies: 'false'
- name: Build Prowler package
run: poetry build
@@ -111,13 +111,11 @@ jobs:
with:
persist-credentials: false
- name: Install Poetry
run: pipx install poetry==2.1.1
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
- name: Setup Python with Poetry
uses: ./.github/actions/setup-python-poetry
with:
python-version: ${{ env.PYTHON_VERSION }}
install-dependencies: 'false'
- name: Install toml package
run: pip install toml
@@ -13,6 +13,8 @@ env:
PYTHON_VERSION: '3.12'
AWS_REGION: 'us-east-1'
permissions: {}
jobs:
refresh-aws-regions:
if: github.repository == 'prowler-cloud/prowler'
@@ -12,6 +12,8 @@ concurrency:
env:
PYTHON_VERSION: '3.12'
permissions: {}
jobs:
refresh-oci-regions:
if: github.repository == 'prowler-cloud/prowler'
+4 -11
@@ -14,6 +14,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
sdk-security-scans:
if: github.repository == 'prowler-cloud/prowler'
@@ -69,20 +71,11 @@ jobs:
contrib/**
**/AGENTS.md
- name: Install Poetry
- name: Setup Python with Poetry
if: steps.check-changes.outputs.any_changed == 'true'
run: pipx install poetry==2.1.1
- name: Set up Python 3.12
if: steps.check-changes.outputs.any_changed == 'true'
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
uses: ./.github/actions/setup-python-poetry
with:
python-version: '3.12'
cache: 'poetry'
- name: Install dependencies
if: steps.check-changes.outputs.any_changed == 'true'
run: poetry install --no-root
- name: Security scan with Bandit
if: steps.check-changes.outputs.any_changed == 'true'
+4 -11
@@ -14,6 +14,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
sdk-tests:
if: github.repository == 'prowler-cloud/prowler'
@@ -90,20 +92,11 @@ jobs:
contrib/**
**/AGENTS.md
- name: Install Poetry
- name: Setup Python with Poetry
if: steps.check-changes.outputs.any_changed == 'true'
run: pipx install poetry==2.1.1
- name: Set up Python ${{ matrix.python-version }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
uses: ./.github/actions/setup-python-poetry
with:
python-version: ${{ matrix.python-version }}
cache: 'poetry'
- name: Install dependencies
if: steps.check-changes.outputs.any_changed == 'true'
run: poetry install --no-root
# AWS Provider
- name: Check if AWS files changed
@@ -31,6 +31,8 @@ on:
description: "Whether there are UI E2E tests to run"
value: ${{ jobs.analyze.outputs.has-ui-e2e }}
permissions: {}
jobs:
analyze:
runs-on: ubuntu-latest
+2
@@ -13,6 +13,8 @@ env:
PROWLER_VERSION: ${{ github.event.release.tag_name }}
BASE_BRANCH: master
permissions: {}
jobs:
detect-release-type:
runs-on: ubuntu-latest
+2
@@ -26,6 +26,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
ui-analyze:
if: github.repository == 'prowler-cloud/prowler'
@@ -35,6 +35,8 @@ env:
# Build args
NEXT_PUBLIC_API_BASE_URL: http://prowler-api:8080/api/v1
permissions: {}
jobs:
setup:
if: github.repository == 'prowler-cloud/prowler'
@@ -18,6 +18,8 @@ env:
UI_WORKING_DIR: ./ui
IMAGE_NAME: prowler-ui
permissions: {}
jobs:
ui-dockerfile-lint:
if: github.repository == 'prowler-cloud/prowler'
@@ -89,6 +91,8 @@ jobs:
mirror.gcr.io:443
check.trivy.dev:443
get.trivy.dev:443
release-assets.githubusercontent.com:443
objects.githubusercontent.com:443
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+2
@@ -15,6 +15,8 @@ on:
- 'ui/**'
- 'api/**' # API changes can affect UI E2E
permissions: {}
jobs:
# First, analyze which tests need to run
impact-analysis:
+2
@@ -18,6 +18,8 @@ env:
UI_WORKING_DIR: ./ui
NODE_VERSION: '24.13.0'
permissions: {}
jobs:
ui-tests:
runs-on: ubuntu-latest
+1
@@ -84,6 +84,7 @@ continue.json
.continuerc.json
# AI Coding Assistants - OpenCode
.opencode/
opencode.json
# AI Coding Assistants - GitHub Copilot
+3 -2
@@ -70,7 +70,7 @@ repos:
args: ["--ignore=E266,W503,E203,E501,W605"]
- repo: https://github.com/python-poetry/poetry
rev: 2.1.1
rev: 2.3.4
hooks:
- id: poetry-check
name: API - poetry-check
@@ -128,7 +128,8 @@ repos:
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
# TODO: 79023 & 79027 knack ReDoS until `azure-cli-core` (via `cartography`) allows `knack` >=0.13.0
# TODO: 86217 because `alibabacloud-tea-openapi == 0.4.3` don't let us upgrade `cryptography >= 46.0.0`
entry: bash -c 'safety check --ignore 70612,66963,74429,76352,76353,77744,77745,79023,79027,86217'
# TODO: 71600 CVE-2024-1135 false positive - fixed in gunicorn 22.0.0, project uses 23.0.0
entry: bash -c 'safety check --ignore 70612,66963,74429,76352,76353,77744,77745,79023,79027,86217,71600'
language: system
- id: vulture
+1 -1
@@ -13,7 +13,7 @@ build:
post_create_environment:
# Install poetry
# https://python-poetry.org/docs/#installing-manually
- python -m pip install poetry
- python -m pip install poetry==2.3.4
post_install:
# Install dependencies with 'docs' dependency group
# https://python-poetry.org/docs/managing-dependencies/#dependency-groups
+3 -3
@@ -140,7 +140,7 @@ Prowler is an open-source cloud security assessment tool supporting AWS, Azure,
| Component | Location | Tech Stack |
|-----------|----------|------------|
| SDK | `prowler/` | Python 3.10+, Poetry |
| SDK | `prowler/` | Python 3.10+, Poetry 2.3+ |
| API | `api/` | Django 5.1, DRF, Celery |
| UI | `ui/` | Next.js 15, React 19, Tailwind 4 |
| MCP Server | `mcp_server/` | FastMCP, Python 3.12+ |
@@ -153,12 +153,12 @@ Prowler is an open-source cloud security assessment tool supporting AWS, Azure,
```bash
# Setup
poetry install --with dev
poetry run pre-commit install
poetry run prek install
# Code quality
poetry run make lint
poetry run make format
poetry run pre-commit run --all-files
poetry run prek run --all-files
```
---
+1 -1
@@ -68,7 +68,7 @@ ENV HOME='/home/prowler'
ENV PATH="${HOME}/.local/bin:${PATH}"
#hadolint ignore=DL3013
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir poetry
pip install --no-cache-dir poetry==2.3.4
RUN poetry install --compile && \
rm -rf ~/.cache/pip
+1 -8
@@ -246,14 +246,7 @@ Some pre-commit hooks require tools installed on your system:
1. **Install [TruffleHog](https://github.com/trufflesecurity/trufflehog#install)** (secret scanning) — see the [official installation options](https://github.com/trufflesecurity/trufflehog#install).
2. **Install [Safety](https://github.com/pyupio/safety)** (dependency vulnerability checking):
```console
# Requires a Python environment (e.g. via pyenv)
pip install safety
```
3. **Install [Hadolint](https://github.com/hadolint/hadolint#install)** (Dockerfile linting) — see the [official installation options](https://github.com/hadolint/hadolint#install).
2. **Install [Hadolint](https://github.com/hadolint/hadolint#install)** (Dockerfile linting) — see the [official installation options](https://github.com/hadolint/hadolint#install).
## Prowler CLI
### Pip package
+46
@@ -2,6 +2,51 @@
All notable changes to the **Prowler API** are documented in this file.
## [1.25.2] (Prowler v5.24.2)
### 🔄 Changed
- Finding groups `/resources` endpoints now materialize the filtered finding IDs into a Python list before filtering `ResourceFindingMapping`, so PostgreSQL switches from a Merge Semi Join that read hundreds of thousands of RFM index entries to a Nested Loop Index Scan over `finding_id`. The `has_mappings.exists()` pre-check is removed, and a request-scoped cache deduplicates the finding-id round-trip across the helpers that build different RFM querysets [(#10816)](https://github.com/prowler-cloud/prowler/pull/10816)
### 🐞 Fixed
- `/finding-groups/latest/<check_id>/resources` now selects the latest completed scan per provider by `-completed_at` (then `-inserted_at`) instead of `-inserted_at`, matching the `/finding-groups/latest` summary path and the daily-summary upsert so overlapping scans no longer produce diverging `delta`/`new_count` between the two endpoints [(#10802)](https://github.com/prowler-cloud/prowler/pull/10802)
---
## [1.25.1] (Prowler v5.24.1)
### 🔄 Changed
- Attack Paths: Restore `SYNC_BATCH_SIZE` and `FINDINGS_BATCH_SIZE` defaults to 1000, upgrade Cartography to 0.135.0, enable Celery queue priority for cleanup task, rewrite Finding insertion, remove AWS graph cleanup and add timing logs [(#10729)](https://github.com/prowler-cloud/prowler/pull/10729)
### 🐞 Fixed
- Finding group resources endpoints now include findings without associated resources (orphaned IaC findings) as simulated resource rows, and return one row per finding when multiple findings share a resource [(#10708)](https://github.com/prowler-cloud/prowler/pull/10708)
- Attack Paths: Missing `tenant_id` filter while getting related findings after scan completes [(#10722)](https://github.com/prowler-cloud/prowler/pull/10722)
- Finding group counters `pass_count`, `fail_count` and `manual_count` now exclude muted findings [(#10753)](https://github.com/prowler-cloud/prowler/pull/10753)
- Silent data loss in `ResourceFindingMapping` bulk insert that left findings orphaned when `INSERT ... ON CONFLICT DO NOTHING` dropped rows without raising; added explicit `unique_fields` [(#10724)](https://github.com/prowler-cloud/prowler/pull/10724)
---
## [1.25.0] (Prowler v5.24.0)
### 🔄 Changed
- Bump Poetry to `2.3.4` in Dockerfile and pre-commit hooks. Regenerate `api/poetry.lock` [(#10681)](https://github.com/prowler-cloud/prowler/pull/10681)
- Attack Paths: Remove dead `cleanup_findings` no-op and its supporting `prowler_finding_lastupdated` index [(#10684)](https://github.com/prowler-cloud/prowler/pull/10684)
### 🐞 Fixed
- Worker-beat race condition on cold start: replaced `sleep 15` with API service healthcheck dependency (Docker Compose) and init containers (Helm), aligned Gunicorn default port to `8080` [(#10603)](https://github.com/prowler-cloud/prowler/pull/10603)
- API container startup crash on Linux due to root-owned bind-mount preventing JWT key generation [(#10646)](https://github.com/prowler-cloud/prowler/pull/10646)
### 🔐 Security
- `pytest` from 8.2.2 to 9.0.3 to fix CVE-2025-71176 [(#10678)](https://github.com/prowler-cloud/prowler/pull/10678)
---
## [1.24.0] (Prowler v5.23.0)
### 🚀 Added
@@ -12,6 +57,7 @@ All notable changes to the **Prowler API** are documented in this file.
- Finding groups list and latest endpoints support `sort=delta`, ordering by `new_count` then `changed_count` so groups with the most new findings rank highest [(#10606)](https://github.com/prowler-cloud/prowler/pull/10606)
- Finding group resources endpoints (`/finding-groups/{check_id}/resources` and `/finding-groups/latest/{check_id}/resources`) now expose `finding_id` per row, pointing to the most recent matching Finding for each resource. UUIDv7 ordering guarantees `Max(finding__id)` resolves to the latest snapshot [(#10630)](https://github.com/prowler-cloud/prowler/pull/10630)
- Handle CIS and CISA SCuBA compliance framework from google workspace [(#10629)](https://github.com/prowler-cloud/prowler/pull/10629)
- Sort support for all finding group counter fields: `pass_muted_count`, `fail_muted_count`, `manual_muted_count`, and all `new_*`/`changed_*` status-mute breakdown counters [(#10655)](https://github.com/prowler-cloud/prowler/pull/10655)
### 🔄 Changed
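
A minimal sketch of the query pattern behind the 1.25.2 performance entry above, assuming Django ORM querysets; `findings_qs`, the helper name, and the cache attribute are illustrative — only `ResourceFindingMapping` comes from the changelog:

```python
# Assumes the ResourceFindingMapping model is importable from the API app.
def rfm_for_findings(request, findings_qs):
    # Materialize the filtered finding IDs once per request. Passing a plain
    # list to `finding_id__in` lets PostgreSQL use a Nested Loop Index Scan
    # over finding_id instead of a Merge Semi Join across the RFM index.
    finding_ids = getattr(request, "_finding_ids_cache", None)
    if finding_ids is None:
        finding_ids = list(findings_qs.values_list("id", flat=True))
        request._finding_ids_cache = finding_ids
    return ResourceFindingMapping.objects.filter(finding_id__in=finding_ids)
```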
+1 -1
@@ -71,7 +71,7 @@ RUN mkdir -p /tmp/prowler_api_output
COPY pyproject.toml ./
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir poetry
pip install --no-cache-dir poetry==2.3.4
ENV PATH="/home/prowler/.local/bin:$PATH"
-1
@@ -56,7 +56,6 @@ start_worker() {
start_worker_beat() {
echo "Starting the worker-beat..."
sleep 15
poetry run python -m celery -A config.celery beat -l "${DJANGO_LOGGING_LEVEL:-info}" --scheduler django_celery_beat.schedulers:DatabaseScheduler
}
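
Per the v1.25.0 changelog entry above, the removed `sleep 15` is replaced at the orchestration level: a healthcheck dependency in Docker Compose and init containers in Helm. A minimal Compose sketch (service names are illustrative):

```yaml
worker-beat:
  depends_on:
    api:
      condition: service_healthy  # start beat only once the API reports healthy
```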
+184 -132
@@ -682,21 +682,21 @@ requests = ">=2.21.0,<3.0.0"
[[package]]
name = "alibabacloud-tea-openapi"
version = "0.4.1"
version = "0.4.4"
description = "Alibaba Cloud openapi SDK Library for Python"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "alibabacloud_tea_openapi-0.4.1-py3-none-any.whl", hash = "sha256:e46bfa3ca34086d2c357d217a0b7284ecbd4b3bab5c88e075e73aec637b0e4a0"},
{file = "alibabacloud_tea_openapi-0.4.1.tar.gz", hash = "sha256:2384b090870fdb089c3c40f3fb8cf0145b8c7d6c14abbac521f86a01abb5edaf"},
{file = "alibabacloud_tea_openapi-0.4.4-py3-none-any.whl", hash = "sha256:cea6bc1fe35b0319a8752cb99eb0ecb0dab7ca1a71b99c12970ba0867410995f"},
{file = "alibabacloud_tea_openapi-0.4.4.tar.gz", hash = "sha256:1b0917bc03cd49417da64945e92731716d53e2eb8707b235f54e45b7473221ce"},
]
[package.dependencies]
alibabacloud-credentials = ">=1.0.2,<2.0.0"
alibabacloud-gateway-spi = ">=0.0.2,<1.0.0"
alibabacloud-tea-util = ">=0.3.13,<1.0.0"
cryptography = ">=3.0.0,<45.0.0"
cryptography = {version = ">=3.0.0,<47.0.0", markers = "python_version >= \"3.8\""}
darabonba-core = ">=1.0.3,<2.0.0"
[[package]]
@@ -1526,19 +1526,19 @@ typing-extensions = ">=4.6.0"
[[package]]
name = "azure-mgmt-resource"
version = "23.3.0"
version = "24.0.0"
description = "Microsoft Azure Resource Management Client Library for Python"
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "azure_mgmt_resource-23.3.0-py3-none-any.whl", hash = "sha256:ab216ee28e29db6654b989746e0c85a1181f66653929d2cb6e48fba66d9af323"},
{file = "azure_mgmt_resource-23.3.0.tar.gz", hash = "sha256:fc4f1fd8b6aad23f8af4ed1f913df5f5c92df117449dc354fea6802a2829fea4"},
{file = "azure_mgmt_resource-24.0.0-py3-none-any.whl", hash = "sha256:27b32cd223e2784269f5a0db3c282042886ee4072d79cedc638438ece7cd0df4"},
{file = "azure_mgmt_resource-24.0.0.tar.gz", hash = "sha256:cf6b8995fcdd407ac9ff1dd474087129429a1d90dbb1ac77f97c19b96237b265"},
]
[package.dependencies]
azure-common = ">=1.1"
azure-mgmt-core = ">=1.3.2"
azure-mgmt-core = ">=1.5.0"
isodate = ">=0.6.1"
typing-extensions = ">=4.6.0"
@@ -1822,19 +1822,19 @@ crt = ["awscrt (==0.27.6)"]
[[package]]
name = "cartography"
version = "0.132.0"
version = "0.135.0"
description = "Explore assets and their relationships across your technical infrastructure."
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "cartography-0.132.0-py3-none-any.whl", hash = "sha256:c070aa51d0ab4479cb043cae70b35e7df49f2fb5f1fa95ccf10000bbeb952262"},
{file = "cartography-0.132.0.tar.gz", hash = "sha256:7c6332bc57fd2629d7b83aee7bd95a7b2edb0d51ef746efa0461399e0b66625c"},
{file = "cartography-0.135.0-py3-none-any.whl", hash = "sha256:c62c32a6917b8f23a8b98fe2b6c7c4a918b50f55918482966c4dae1cf5f538e1"},
{file = "cartography-0.135.0.tar.gz", hash = "sha256:3f500cd22c3b392d00e8b49f62acc95fd4dcd559ce514aafe2eb8101133c7a49"},
]
[package.dependencies]
adal = ">=1.2.4"
aioboto3 = ">=13.0.0"
aioboto3 = ">=15.0.0"
azure-cli-core = ">=2.26.0"
azure-identity = ">=1.5.0"
azure-keyvault-certificates = ">=4.0.0"
@@ -1852,9 +1852,9 @@ azure-mgmt-keyvault = ">=10.0.0"
azure-mgmt-logic = ">=10.0.0"
azure-mgmt-monitor = ">=3.0.0"
azure-mgmt-network = ">=25.0.0"
azure-mgmt-resource = ">=10.2.0,<25.0.0"
azure-mgmt-resource = ">=24.0.0,<25"
azure-mgmt-security = ">=5.0.0"
azure-mgmt-sql = ">=3.0.1,<4"
azure-mgmt-sql = ">=3.0.1"
azure-mgmt-storage = ">=16.0.0"
azure-mgmt-synapse = ">=2.0.0"
azure-mgmt-web = ">=7.0.0"
@@ -1862,38 +1862,39 @@ azure-synapse-artifacts = ">=0.17.0"
backoff = ">=2.1.2"
boto3 = ">=1.15.1"
botocore = ">=1.18.1"
cloudflare = ">=4.1.0,<5.0.0"
cloudflare = ">=4.1.0"
crowdstrike-falconpy = ">=0.5.1"
cryptography = "*"
dnspython = ">=1.15.0"
duo-client = "*"
google-api-python-client = ">=1.7.8"
cryptography = ">=45.0.0"
dnspython = ">=2.0.0"
duo-client = ">=5.5.0"
google-api-python-client = ">=2.0.0"
google-auth = ">=2.37.0"
google-cloud-asset = ">=1.0.0"
google-cloud-resource-manager = ">=1.14.2"
httpx = ">=0.24.0"
kubernetes = ">=22.6.0"
marshmallow = ">=3.0.0rc7"
msgraph-sdk = "*"
marshmallow = ">=4.0.0"
msgraph-sdk = ">=1.53.0"
msrestazure = ">=0.6.4"
neo4j = ">=6.0.0"
oci = ">=2.71.0"
okta = "<1.0.0"
packageurl-python = "*"
packaging = "*"
packageurl-python = ">=0.17.0"
packaging = ">=26.0.0"
pagerduty = ">=4.0.1"
policyuniverse = ">=1.1.0.0"
PyJWT = {version = ">=2.0.0", extras = ["crypto"]}
python-dateutil = "*"
python-dateutil = ">=2.9.0"
python-digitalocean = ">=1.16.0"
pyyaml = ">=5.3.1"
requests = ">=2.22.0"
scaleway = ">=2.10.0"
slack-sdk = ">=3.37.0"
statsd = "*"
statsd = ">=4.0.0"
typer = ">=0.9.0"
types-aiobotocore-ecr = "*"
xmltodict = "*"
types-aiobotocore-ecr = ">=3.1.0"
workos = ">=5.44.0"
xmltodict = ">=1.0.0"
[[package]]
name = "celery"
@@ -2503,62 +2504,74 @@ dev = ["bandit", "coverage", "flake8", "pydocstyle", "pylint", "pytest", "pytest
[[package]]
name = "cryptography"
version = "44.0.3"
version = "46.0.6"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = "!=3.9.0,!=3.9.1,>=3.7"
python-versions = "!=3.9.0,!=3.9.1,>=3.8"
groups = ["main", "dev"]
files = [
{file = "cryptography-44.0.3-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:962bc30480a08d133e631e8dfd4783ab71cc9e33d5d7c1e192f0b7c06397bb88"},
{file = "cryptography-44.0.3-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4ffc61e8f3bf5b60346d89cd3d37231019c17a081208dfbbd6e1605ba03fa137"},
{file = "cryptography-44.0.3-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:58968d331425a6f9eedcee087f77fd3c927c88f55368f43ff7e0a19891f2642c"},
{file = "cryptography-44.0.3-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:e28d62e59a4dbd1d22e747f57d4f00c459af22181f0b2f787ea83f5a876d7c76"},
{file = "cryptography-44.0.3-cp37-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:af653022a0c25ef2e3ffb2c673a50e5a0d02fecc41608f4954176f1933b12359"},
{file = "cryptography-44.0.3-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:157f1f3b8d941c2bd8f3ffee0af9b049c9665c39d3da9db2dc338feca5e98a43"},
{file = "cryptography-44.0.3-cp37-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:c6cd67722619e4d55fdb42ead64ed8843d64638e9c07f4011163e46bc512cf01"},
{file = "cryptography-44.0.3-cp37-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:b424563394c369a804ecbee9b06dfb34997f19d00b3518e39f83a5642618397d"},
{file = "cryptography-44.0.3-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:c91fc8e8fd78af553f98bc7f2a1d8db977334e4eea302a4bfd75b9461c2d8904"},
{file = "cryptography-44.0.3-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:25cd194c39fa5a0aa4169125ee27d1172097857b27109a45fadc59653ec06f44"},
{file = "cryptography-44.0.3-cp37-abi3-win32.whl", hash = "sha256:3be3f649d91cb182c3a6bd336de8b61a0a71965bd13d1a04a0e15b39c3d5809d"},
{file = "cryptography-44.0.3-cp37-abi3-win_amd64.whl", hash = "sha256:3883076d5c4cc56dbef0b898a74eb6992fdac29a7b9013870b34efe4ddb39a0d"},
{file = "cryptography-44.0.3-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:5639c2b16764c6f76eedf722dbad9a0914960d3489c0cc38694ddf9464f1bb2f"},
{file = "cryptography-44.0.3-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3ffef566ac88f75967d7abd852ed5f182da252d23fac11b4766da3957766759"},
{file = "cryptography-44.0.3-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:192ed30fac1728f7587c6f4613c29c584abdc565d7417c13904708db10206645"},
{file = "cryptography-44.0.3-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:7d5fe7195c27c32a64955740b949070f21cba664604291c298518d2e255931d2"},
{file = "cryptography-44.0.3-cp39-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:3f07943aa4d7dad689e3bb1638ddc4944cc5e0921e3c227486daae0e31a05e54"},
{file = "cryptography-44.0.3-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:cb90f60e03d563ca2445099edf605c16ed1d5b15182d21831f58460c48bffb93"},
{file = "cryptography-44.0.3-cp39-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:ab0b005721cc0039e885ac3503825661bd9810b15d4f374e473f8c89b7d5460c"},
{file = "cryptography-44.0.3-cp39-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:3bb0847e6363c037df8f6ede57d88eaf3410ca2267fb12275370a76f85786a6f"},
{file = "cryptography-44.0.3-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:b0cc66c74c797e1db750aaa842ad5b8b78e14805a9b5d1348dc603612d3e3ff5"},
{file = "cryptography-44.0.3-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:6866df152b581f9429020320e5eb9794c8780e90f7ccb021940d7f50ee00ae0b"},
{file = "cryptography-44.0.3-cp39-abi3-win32.whl", hash = "sha256:c138abae3a12a94c75c10499f1cbae81294a6f983b3af066390adee73f433028"},
{file = "cryptography-44.0.3-cp39-abi3-win_amd64.whl", hash = "sha256:5d186f32e52e66994dce4f766884bcb9c68b8da62d61d9d215bfe5fb56d21334"},
{file = "cryptography-44.0.3-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:cad399780053fb383dc067475135e41c9fe7d901a97dd5d9c5dfb5611afc0d7d"},
{file = "cryptography-44.0.3-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:21a83f6f35b9cc656d71b5de8d519f566df01e660ac2578805ab245ffd8523f8"},
{file = "cryptography-44.0.3-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:fc3c9babc1e1faefd62704bb46a69f359a9819eb0292e40df3fb6e3574715cd4"},
{file = "cryptography-44.0.3-pp310-pypy310_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:e909df4053064a97f1e6565153ff8bb389af12c5c8d29c343308760890560aff"},
{file = "cryptography-44.0.3-pp310-pypy310_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:dad80b45c22e05b259e33ddd458e9e2ba099c86ccf4e88db7bbab4b747b18d06"},
{file = "cryptography-44.0.3-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:479d92908277bed6e1a1c69b277734a7771c2b78633c224445b5c60a9f4bc1d9"},
{file = "cryptography-44.0.3-pp311-pypy311_pp73-macosx_10_9_x86_64.whl", hash = "sha256:896530bc9107b226f265effa7ef3f21270f18a2026bc09fed1ebd7b66ddf6375"},
{file = "cryptography-44.0.3-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:9b4d4a5dbee05a2c390bf212e78b99434efec37b17a4bff42f50285c5c8c9647"},
{file = "cryptography-44.0.3-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:02f55fb4f8b79c1221b0961488eaae21015b69b210e18c386b69de182ebb1259"},
{file = "cryptography-44.0.3-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:dd3db61b8fe5be220eee484a17233287d0be6932d056cf5738225b9c05ef4fff"},
{file = "cryptography-44.0.3-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:978631ec51a6bbc0b7e58f23b68a8ce9e5f09721940933e9c217068388789fe5"},
{file = "cryptography-44.0.3-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:5d20cc348cca3a8aa7312f42ab953a56e15323800ca3ab0706b8cd452a3a056c"},
{file = "cryptography-44.0.3.tar.gz", hash = "sha256:fe19d8bc5536a91a24a8133328880a41831b6c5df54599a8417b62fe015d3053"},
{file = "cryptography-46.0.6-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:64235194bad039a10bb6d2d930ab3323baaec67e2ce36215fd0952fad0930ca8"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:26031f1e5ca62fcb9d1fcb34b2b60b390d1aacaa15dc8b895a9ed00968b97b30"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:9a693028b9cbe51b5a1136232ee8f2bc242e4e19d456ded3fa7c86e43c713b4a"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:67177e8a9f421aa2d3a170c3e56eca4e0128883cf52a071a7cbf53297f18b175"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:d9528b535a6c4f8ff37847144b8986a9a143585f0540fbcb1a98115b543aa463"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:22259338084d6ae497a19bae5d4c66b7ca1387d3264d1c2c0e72d9e9b6a77b97"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:760997a4b950ff00d418398ad73fbc91aa2894b5c1db7ccb45b4f68b42a63b3c"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:3dfa6567f2e9e4c5dceb8ccb5a708158a2a871052fa75c8b78cb0977063f1507"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:cdcd3edcbc5d55757e5f5f3d330dd00007ae463a7e7aa5bf132d1f22a4b62b19"},
{file = "cryptography-46.0.6-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:d4e4aadb7fc1f88687f47ca20bb7227981b03afaae69287029da08096853b738"},
{file = "cryptography-46.0.6-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2b417edbe8877cda9022dde3a008e2deb50be9c407eef034aeeb3a8b11d9db3c"},
{file = "cryptography-46.0.6-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:380343e0653b1c9d7e1f55b52aaa2dbb2fdf2730088d48c43ca1c7c0abb7cc2f"},
{file = "cryptography-46.0.6-cp311-abi3-win32.whl", hash = "sha256:bcb87663e1f7b075e48c3be3ecb5f0b46c8fc50b50a97cf264e7f60242dca3f2"},
{file = "cryptography-46.0.6-cp311-abi3-win_amd64.whl", hash = "sha256:6739d56300662c468fddb0e5e291f9b4d084bead381667b9e654c7dd81705124"},
{file = "cryptography-46.0.6-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:2ef9e69886cbb137c2aef9772c2e7138dc581fad4fcbcf13cc181eb5a3ab6275"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7f417f034f91dcec1cb6c5c35b07cdbb2ef262557f701b4ecd803ee8cefed4f4"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d24c13369e856b94892a89ddf70b332e0b70ad4a5c43cf3e9cb71d6d7ffa1f7b"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:aad75154a7ac9039936d50cf431719a2f8d4ed3d3c277ac03f3339ded1a5e707"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:3c21d92ed15e9cfc6eb64c1f5a0326db22ca9c2566ca46d845119b45b4400361"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:4668298aef7cddeaf5c6ecc244c2302a2b8e40f384255505c22875eebb47888b"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:8ce35b77aaf02f3b59c90b2c8a05c73bac12cea5b4e8f3fbece1f5fddea5f0ca"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:c89eb37fae9216985d8734c1afd172ba4927f5a05cfd9bf0e4863c6d5465b013"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:ed418c37d095aeddf5336898a132fba01091f0ac5844e3e8018506f014b6d2c4"},
{file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:69cf0056d6947edc6e6760e5f17afe4bea06b56a9ac8a06de9d2bd6b532d4f3a"},
{file = "cryptography-46.0.6-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:8e7304c4f4e9490e11efe56af6713983460ee0780f16c63f219984dab3af9d2d"},
{file = "cryptography-46.0.6-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:b928a3ca837c77a10e81a814a693f2295200adb3352395fad024559b7be7a736"},
{file = "cryptography-46.0.6-cp314-cp314t-win32.whl", hash = "sha256:97c8115b27e19e592a05c45d0dd89c57f81f841cc9880e353e0d3bf25b2139ed"},
{file = "cryptography-46.0.6-cp314-cp314t-win_amd64.whl", hash = "sha256:c797e2517cb7880f8297e2c0f43bb910e91381339336f75d2c1c2cbf811b70b4"},
{file = "cryptography-46.0.6-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:12cae594e9473bca1a7aceb90536060643128bb274fcea0fc459ab90f7d1ae7a"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:639301950939d844a9e1c4464d7e07f902fe9a7f6b215bb0d4f28584729935d8"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ed3775295fb91f70b4027aeba878d79b3e55c0b3e97eaa4de71f8f23a9f2eb77"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:8927ccfbe967c7df312ade694f987e7e9e22b2425976ddbf28271d7e58845290"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:b12c6b1e1651e42ab5de8b1e00dc3b6354fdfd778e7fa60541ddacc27cd21410"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:063b67749f338ca9c5a0b7fe438a52c25f9526b851e24e6c9310e7195aad3b4d"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:02fad249cb0e090b574e30b276a3da6a149e04ee2f049725b1f69e7b8351ec70"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:7e6142674f2a9291463e5e150090b95a8519b2fb6e6aaec8917dd8d094ce750d"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:456b3215172aeefb9284550b162801d62f5f264a081049a3e94307fe20792cfa"},
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:341359d6c9e68834e204ceaf25936dffeafea3829ab80e9503860dcc4f4dac58"},
{file = "cryptography-46.0.6-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:9a9c42a2723999a710445bc0d974e345c32adfd8d2fac6d8a251fa829ad31cfb"},
{file = "cryptography-46.0.6-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:6617f67b1606dfd9fe4dbfa354a9508d4a6d37afe30306fe6c101b7ce3274b72"},
{file = "cryptography-46.0.6-cp38-abi3-win32.whl", hash = "sha256:7f6690b6c55e9c5332c0b59b9c8a3fb232ebf059094c17f9019a51e9827df91c"},
{file = "cryptography-46.0.6-cp38-abi3-win_amd64.whl", hash = "sha256:79e865c642cfc5c0b3eb12af83c35c5aeff4fa5c672dc28c43721c2c9fdd2f0f"},
{file = "cryptography-46.0.6-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:2ea0f37e9a9cf0df2952893ad145fd9627d326a59daec9b0802480fa3bcd2ead"},
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:a3e84d5ec9ba01f8fd03802b2147ba77f0c8f2617b2aff254cedd551844209c8"},
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:12f0fa16cc247b13c43d56d7b35287ff1569b5b1f4c5e87e92cc4fcc00cd10c0"},
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:50575a76e2951fe7dbd1f56d181f8c5ceeeb075e9ff88e7ad997d2f42af06e7b"},
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:90e5f0a7b3be5f40c3a0a0eafb32c681d8d2c181fc2a1bdabe9b3f611d9f6b1a"},
{file = "cryptography-46.0.6-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:6728c49e3b2c180ef26f8e9f0a883a2c585638db64cf265b49c9ba10652d430e"},
{file = "cryptography-46.0.6.tar.gz", hash = "sha256:27550628a518c5c6c903d84f637fbecf287f6cb9ced3804838a1295dc1fd0759"},
]
[package.dependencies]
cffi = {version = ">=1.12", markers = "platform_python_implementation != \"PyPy\""}
cffi = {version = ">=2.0.0", markers = "python_full_version >= \"3.9.0\" and platform_python_implementation != \"PyPy\""}
[package.extras]
docs = ["sphinx (>=5.3.0)", "sphinx-rtd-theme (>=3.0.0) ; python_version >= \"3.8\""]
docs = ["sphinx (>=5.3.0)", "sphinx-inline-tabs", "sphinx-rtd-theme (>=3.0.0)"]
docstest = ["pyenchant (>=3)", "readme-renderer (>=30.0)", "sphinxcontrib-spelling (>=7.3.1)"]
nox = ["nox (>=2024.4.15)", "nox[uv] (>=2024.3.2) ; python_version >= \"3.8\""]
pep8test = ["check-sdist ; python_version >= \"3.8\"", "click (>=8.0.1)", "mypy (>=1.4)", "ruff (>=0.3.6)"]
nox = ["nox[uv] (>=2024.4.15)"]
pep8test = ["check-sdist", "click (>=8.0.1)", "mypy (>=1.14)", "ruff (>=0.11.11)"]
sdist = ["build (>=1.0.0)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==44.0.3)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test = ["certifi (>=2024)", "cryptography-vectors (==46.0.6)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]
[[package]]
@@ -3740,19 +3753,19 @@ urllib3 = ["packaging", "urllib3"]
[[package]]
name = "google-auth-httplib2"
version = "0.2.1"
version = "0.2.0"
description = "Google Authentication Library: httplib2 transport"
optional = false
python-versions = ">=3.7"
python-versions = "*"
groups = ["main"]
files = [
{file = "google_auth_httplib2-0.2.1-py3-none-any.whl", hash = "sha256:1be94c611db91c01f9703e7f62b0a59bbd5587a95571c7b6fade510d648bc08b"},
{file = "google_auth_httplib2-0.2.1.tar.gz", hash = "sha256:5ef03be3927423c87fb69607b42df23a444e434ddb2555b73b3679793187b7de"},
{file = "google-auth-httplib2-0.2.0.tar.gz", hash = "sha256:38aa7badf48f974f1eb9861794e9c0cb2a0511a4ec0679b1f886d108f5640e05"},
{file = "google_auth_httplib2-0.2.0-py2.py3-none-any.whl", hash = "sha256:b65a0a2123300dd71281a7bf6e64d65a0759287df52729bdd1ae2e47dc311a3d"},
]
[package.dependencies]
google-auth = ">=1.32.0,<3.0.0"
httplib2 = ">=0.19.0,<1.0.0"
google-auth = "*"
httplib2 = ">=0.19.0"
[[package]]
name = "google-cloud-access-context-manager"
@@ -5181,24 +5194,16 @@ files = [
[[package]]
name = "marshmallow"
version = "3.26.2"
version = "4.3.0"
description = "A lightweight library for converting complex datatypes to and from native Python datatypes."
optional = false
python-versions = ">=3.9"
python-versions = ">=3.10"
groups = ["main", "dev"]
files = [
{file = "marshmallow-3.26.2-py3-none-any.whl", hash = "sha256:013fa8a3c4c276c24d26d84ce934dc964e2aa794345a0f8c7e5a7191482c8a73"},
{file = "marshmallow-3.26.2.tar.gz", hash = "sha256:bbe2adb5a03e6e3571b573f42527c6fe926e17467833660bebd11593ab8dfd57"},
{file = "marshmallow-4.3.0-py3-none-any.whl", hash = "sha256:46c4fe6984707e3cbd485dfebbf0a59874f58d695aad05c1668d15e8c6e13b46"},
{file = "marshmallow-4.3.0.tar.gz", hash = "sha256:fb43c53b3fe240b8f6af37223d6ef1636f927ad9bea8ab323afad95dff090880"},
]
[package.dependencies]
packaging = ">=17.0"
[package.extras]
dev = ["marshmallow[tests]", "pre-commit (>=3.5,<5.0)", "tox"]
docs = ["autodocsumm (==0.2.14)", "furo (==2024.8.6)", "sphinx (==8.1.3)", "sphinx-copybutton (==0.5.2)", "sphinx-issues (==5.0.0)", "sphinxext-opengraph (==0.9.1)"]
tests = ["pytest", "simplejson"]
[[package]]
name = "matplotlib"
version = "3.10.8"
@@ -5492,14 +5497,14 @@ dev = ["bumpver", "isort", "mypy", "pylint", "pytest", "yapf"]
[[package]]
name = "msgraph-sdk"
version = "1.23.0"
version = "1.55.0"
description = "The Microsoft Graph Python SDK"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "msgraph_sdk-1.23.0-py3-none-any.whl", hash = "sha256:58e0047b4ca59fd82022c02cd73fec0170a3d84f3b76721e3db2a0314df9a58a"},
{file = "msgraph_sdk-1.23.0.tar.gz", hash = "sha256:6dd1ba9a46f5f0ce8599fd9610133adbd9d1493941438b5d3632fce9e55ed607"},
{file = "msgraph_sdk-1.55.0-py3-none-any.whl", hash = "sha256:c8e68ebc4b88af5111de312e7fa910a4e76ddf48a4534feadb1fb8a411c48cfc"},
{file = "msgraph_sdk-1.55.0.tar.gz", hash = "sha256:6df691a31954a050d26b8a678968017e157d940fb377f2a8a4e17a9741b98756"},
]
[package.dependencies]
@@ -5925,23 +5930,24 @@ signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "oci"
version = "2.160.3"
version = "2.169.0"
description = "Oracle Cloud Infrastructure Python SDK"
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "oci-2.160.3-py3-none-any.whl", hash = "sha256:858bff3e697098bdda44833d2476bfb4632126f0182178e7dbde4dbd156d71f0"},
{file = "oci-2.160.3.tar.gz", hash = "sha256:57514889be3b713a8385d86e3ba8a33cf46e3563c2a7e29a93027fb30b8a2537"},
{file = "oci-2.169.0-py3-none-any.whl", hash = "sha256:c71bb5143f307791082b3e33cc1545c2490a518cfed85ab1948ef5107c36d30b"},
{file = "oci-2.169.0.tar.gz", hash = "sha256:f3c5fff00b01783b5325ea7b13bf140053ec1e9f41da20bfb9c8a349ee7662fa"},
]
[package.dependencies]
certifi = "*"
circuitbreaker = {version = ">=1.3.1,<3.0.0", markers = "python_version >= \"3.7\""}
cryptography = ">=3.2.1,<46.0.0"
pyOpenSSL = ">=17.5.0,<25.0.0"
cryptography = ">=3.2.1,<47.0.0"
pyOpenSSL = ">=17.5.0,<27.0.0"
python-dateutil = ">=2.5.3,<3.0.0"
pytz = ">=2016.10"
urllib3 = {version = ">=2.6.3", markers = "python_version >= \"3.10.0\""}
[package.extras]
adk = ["docstring-parser (>=0.16) ; python_version >= \"3.10\" and python_version < \"4\"", "mcp (>=1.6.0) ; python_version >= \"3.10\" and python_version < \"4\"", "pydantic (>=2.10.6) ; python_version >= \"3.10\" and python_version < \"4\"", "rich (>=13.9.4) ; python_version >= \"3.10\" and python_version < \"4\""]
@@ -6445,6 +6451,33 @@ docs = ["sphinx (>=1.7.1)"]
redis = ["redis"]
tests = ["pytest (>=5.4.1)", "pytest-cov (>=2.8.1)", "pytest-mypy (>=0.8.0)", "pytest-timeout (>=2.1.0)", "redis", "sphinx (>=6.0.0)", "types-redis"]
[[package]]
name = "prek"
version = "0.3.9"
description = "A Git hook manager written in Rust, designed as a drop-in alternative to pre-commit."
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
{file = "prek-0.3.9-py3-none-linux_armv6l.whl", hash = "sha256:3ed793d51bfaa27bddb64d525d7acb77a7c8644f549412d82252e3eb0b88aad8"},
{file = "prek-0.3.9-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:399c58400c0bd0b82a93a3c09dc1bfd88d8d0cfb242d414d2ed247187b06ead1"},
{file = "prek-0.3.9-py3-none-macosx_11_0_arm64.whl", hash = "sha256:e2ea1ffb124e92f081b8e2ca5b5a623a733efb3be0c5b1f4b7ffe2ee17d1f20c"},
{file = "prek-0.3.9-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:aaf639f95b7301639298311d8d44aad0d0b4864e9736083ad3c71ce9765d37ab"},
{file = "prek-0.3.9-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ff104863b187fa443ea8451ca55d51e2c6e94f99f00d88784b5c3c4c623f1ebe"},
{file = "prek-0.3.9-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:039ecaf87c63a3e67cca645ebd5bc5eb6aafa6c9d929e9a27b2921e7849d7ef9"},
{file = "prek-0.3.9-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3bde2a3d045705095983c7f78ba04f72a7565fe1c2b4e85f5628502a254754ff"},
{file = "prek-0.3.9-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:28a0960a21543563e2c8e19aaad176cc8423a87aac3c914d0f313030d7a9244a"},
{file = "prek-0.3.9-py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:0dfb5d5171d7523271909246ee306b4dc3d5b63752e7dd7c7e8a8908fc9490d1"},
{file = "prek-0.3.9-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:82b791bd36c1430c84d3ae7220a85152babc7eaf00f70adcb961bd594e756ba3"},
{file = "prek-0.3.9-py3-none-musllinux_1_1_armv7l.whl", hash = "sha256:6eac6d2f736b041118f053a1487abed468a70dd85a8688eaf87bb42d3dcecf20"},
{file = "prek-0.3.9-py3-none-musllinux_1_1_i686.whl", hash = "sha256:5517e46e761367a3759b3168eabc120840ffbca9dfbc53187167298a98f87dc4"},
{file = "prek-0.3.9-py3-none-musllinux_1_1_x86_64.whl", hash = "sha256:92024778cf78683ca32687bb249ab6a7d5c33887b5ee1d1a9f6d0c14228f4cf3"},
{file = "prek-0.3.9-py3-none-win32.whl", hash = "sha256:7f89c55e5f480f5d073769e319924ad69d4bf9f98c5cb46a83082e26e634c958"},
{file = "prek-0.3.9-py3-none-win_amd64.whl", hash = "sha256:7722f3372eaa83b147e70a43cb7b9fe2128c13d0c78d8a1cdbf2a8ec2ee071eb"},
{file = "prek-0.3.9-py3-none-win_arm64.whl", hash = "sha256:0bced6278d6cc8a4b46048979e36bc9da034611dc8facd77ab123177b833a929"},
{file = "prek-0.3.9.tar.gz", hash = "sha256:f82b92d81f42f1f90a47f5fbbf492373e25ef1f790080215b2722dd6da66510e"},
]
[[package]]
name = "prompt-toolkit"
version = "3.0.52"
@@ -6632,7 +6665,7 @@ files = [
[[package]]
name = "prowler"
version = "5.23.0"
version = "5.24.0"
description = "Prowler is an Open Source security tool to perform AWS, GCP and Azure security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness. It contains hundreds of controls covering CIS, NIST 800, NIST CSF, CISA, RBI, FedRAMP, PCI-DSS, GDPR, HIPAA, FFIEC, SOC2, GXP, AWS Well-Architected Framework Security Pillar, AWS Foundational Technical Review (FTR), ENS (Spanish National Security Scheme) and your custom security frameworks."
optional = false
python-versions = ">=3.10,<3.13"
@@ -6652,7 +6685,7 @@ alibabacloud-rds20140815 = "12.0.0"
alibabacloud_sas20181203 = "6.1.0"
alibabacloud-sls20201230 = "5.9.0"
alibabacloud_sts20150401 = "1.1.6"
alibabacloud_tea_openapi = "0.4.1"
alibabacloud_tea_openapi = "0.4.4"
alibabacloud_vpc20160428 = "6.13.0"
alive-progress = "3.3.0"
awsipranges = "0.3.3"
@@ -6674,7 +6707,7 @@ azure-mgmt-postgresqlflexibleservers = "1.1.0"
azure-mgmt-rdbms = "10.1.0"
azure-mgmt-recoveryservices = "3.1.0"
azure-mgmt-recoveryservicesbackup = "9.2.0"
azure-mgmt-resource = "23.3.0"
azure-mgmt-resource = "24.0.0"
azure-mgmt-search = "9.1.0"
azure-mgmt-security = "7.0.0"
azure-mgmt-sql = "3.0.1"
@@ -6687,29 +6720,29 @@ boto3 = "1.40.61"
botocore = "1.40.61"
cloudflare = "4.3.1"
colorama = "0.4.6"
cryptography = "44.0.3"
cryptography = "46.0.6"
dash = "3.1.1"
dash-bootstrap-components = "2.0.3"
defusedxml = ">=0.7.1"
defusedxml = "0.7.1"
detect-secrets = "1.5.0"
dulwich = "0.23.0"
google-api-python-client = "2.163.0"
google-auth-httplib2 = ">=0.1,<0.3"
google-auth-httplib2 = "0.2.0"
h2 = "4.3.0"
jsonschema = "4.23.0"
kubernetes = "32.0.1"
markdown = "3.10.2"
microsoft-kiota-abstractions = "1.9.2"
msgraph-sdk = "1.23.0"
msgraph-sdk = "1.55.0"
numpy = "2.0.2"
oci = "2.160.3"
oci = "2.169.0"
openstacksdk = "4.2.0"
pandas = "2.2.3"
py-iam-expand = "0.1.0"
py-ocsf-models = "0.8.1"
pydantic = ">=2.0,<3.0"
pydantic = "2.12.5"
pygithub = "2.8.0"
python-dateutil = ">=2.9.0.post0,<3.0.0"
python-dateutil = "2.9.0.post0"
pytz = "2025.1"
schema = "0.7.5"
shodan = "1.31.0"
@@ -6721,8 +6754,8 @@ uuid6 = "2024.7.10"
[package.source]
type = "git"
url = "https://github.com/prowler-cloud/prowler.git"
reference = "master"
resolved_reference = "6ac90eb1b58590b6f2f51645dbef17b9231053f4"
reference = "v5.24"
resolved_reference = "ba5b23245f4805f46d67e67fc059aefd6831f7b3"
[[package]]
name = "psutil"
@@ -6887,14 +6920,14 @@ pydantic = ">=2.12.0,<3.0.0"
[[package]]
name = "pyasn1"
version = "0.6.2"
version = "0.6.3"
description = "Pure-Python implementation of ASN.1 types and DER/BER/CER codecs (X.208)"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "pyasn1-0.6.2-py3-none-any.whl", hash = "sha256:1eb26d860996a18e9b6ed05e7aae0e9fc21619fcee6af91cca9bad4fbea224bf"},
{file = "pyasn1-0.6.2.tar.gz", hash = "sha256:9b59a2b25ba7e4f8197db7686c09fb33e658b98339fadb826e9512629017833b"},
{file = "pyasn1-0.6.3-py3-none-any.whl", hash = "sha256:a80184d120f0864a52a073acc6fc642847d0be408e7c7252f31390c0f4eadcde"},
{file = "pyasn1-0.6.3.tar.gz", hash = "sha256:697a8ecd6d98891189184ca1fa05d1bb00e2f84b5977c481452050549c8a72cf"},
]
[[package]]
@@ -7129,14 +7162,14 @@ windows-terminal = ["colorama (>=0.4.6)"]
[[package]]
name = "pyjwt"
version = "2.11.0"
version = "2.12.1"
description = "JSON Web Token implementation in Python"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pyjwt-2.11.0-py3-none-any.whl", hash = "sha256:94a6bde30eb5c8e04fee991062b534071fd1439ef58d2adc9ccb823e7bcd0469"},
{file = "pyjwt-2.11.0.tar.gz", hash = "sha256:35f95c1f0fbe5d5ba6e43f00271c275f7a1a4db1dab27bf708073b75318ea623"},
{file = "pyjwt-2.12.1-py3-none-any.whl", hash = "sha256:28ca37c070cad8ba8cd9790cd940535d40274d22f80ab87f3ac6a713e6e8454c"},
{file = "pyjwt-2.12.1.tar.gz", hash = "sha256:c74a7a2adf861c04d002db713dd85f84beb242228e671280bf709d765b03672b"},
]
[package.dependencies]
@@ -7183,7 +7216,7 @@ description = "The MSALRuntime Python Interop Package"
optional = false
python-versions = ">=3.6"
groups = ["main"]
markers = "(platform_system == \"Windows\" or platform_system == \"Darwin\" or platform_system == \"Linux\") and sys_platform == \"win32\""
markers = "sys_platform == \"win32\" and (platform_system == \"Windows\" or platform_system == \"Darwin\" or platform_system == \"Linux\")"
files = [
{file = "pymsalruntime-0.18.1-cp310-cp310-macosx_14_0_arm64.whl", hash = "sha256:0c22e2e83faa10de422bbfaacc1bb2887c9025ee8a53f0fc2e4f7db01c4a7b66"},
{file = "pymsalruntime-0.18.1-cp310-cp310-macosx_14_0_x86_64.whl", hash = "sha256:8ce2944a0f944833d047bb121396091e00287e2b6373716106da86ea99abf379"},
@@ -7261,18 +7294,19 @@ tests = ["hypothesis (>=3.27.0)", "pytest (>=7.4.0)", "pytest-cov (>=2.10.1)", "
[[package]]
name = "pyopenssl"
version = "24.3.0"
version = "26.0.0"
description = "Python wrapper module around the OpenSSL library"
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "pyOpenSSL-24.3.0-py3-none-any.whl", hash = "sha256:e474f5a473cd7f92221cc04976e48f4d11502804657a08a989fb3be5514c904a"},
{file = "pyopenssl-24.3.0.tar.gz", hash = "sha256:49f7a019577d834746bc55c5fce6ecbcec0f2b4ec5ce1cf43a9a173b8138bb36"},
{file = "pyopenssl-26.0.0-py3-none-any.whl", hash = "sha256:df94d28498848b98cc1c0ffb8ef1e71e40210d3b0a8064c9d29571ed2904bf81"},
{file = "pyopenssl-26.0.0.tar.gz", hash = "sha256:f293934e52936f2e3413b89c6ce36df66a0b34ae1ea3a053b8c5020ff2f513fc"},
]
[package.dependencies]
cryptography = ">=41.0.5,<45"
cryptography = ">=46.0.0,<47"
typing-extensions = {version = ">=4.9", markers = "python_version < \"3.13\" and python_version >= \"3.8\""}
[package.extras]
docs = ["sphinx (!=5.2.0,!=5.2.0.post0,!=7.2.5)", "sphinx_rtd_theme"]
@@ -7324,24 +7358,25 @@ files = [
[[package]]
name = "pytest"
version = "8.2.2"
version = "9.0.3"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.8"
python-versions = ">=3.10"
groups = ["main", "dev"]
files = [
{file = "pytest-8.2.2-py3-none-any.whl", hash = "sha256:c434598117762e2bd304e526244f67bf66bbd7b5d6cf22138be51ff661980343"},
{file = "pytest-8.2.2.tar.gz", hash = "sha256:de4bb8104e201939ccdc688b27a89a7be2079b22e2bd2b07f806b6ba71117977"},
{file = "pytest-9.0.3-py3-none-any.whl", hash = "sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9"},
{file = "pytest-9.0.3.tar.gz", hash = "sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c"},
]
[package.dependencies]
colorama = {version = "*", markers = "sys_platform == \"win32\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=1.5,<2.0"
colorama = {version = ">=0.4", markers = "sys_platform == \"win32\""}
iniconfig = ">=1.0.1"
packaging = ">=22"
pluggy = ">=1.5,<2"
pygments = ">=2.7.2"
[package.extras]
dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"]
dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "requests", "setuptools", "xmlschema"]
[[package]]
name = "pytest-celery"
@@ -8779,6 +8814,23 @@ markupsafe = ">=2.1.1"
[package.extras]
watchdog = ["watchdog (>=2.3)"]
[[package]]
name = "workos"
version = "6.0.4"
description = "WorkOS Python Client"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "workos-6.0.4-py3-none-any.whl", hash = "sha256:548668b3702673536f853ba72a7b5bbbc269e467aaf9ac4f477b6e0177df5e21"},
{file = "workos-6.0.4.tar.gz", hash = "sha256:b0bfe8fd212b8567422c4ea3732eb33608794033eb3a69900c6b04db183c32d6"},
]
[package.dependencies]
cryptography = ">=46.0,<47.0"
httpx = ">=0.28,<1.0"
pyjwt = ">=2.12,<3.0"
[[package]]
name = "wrapt"
version = "1.17.3"
@@ -9372,4 +9424,4 @@ files = [
[metadata]
lock-version = "2.1"
python-versions = ">=3.11,<3.13"
content-hash = "167d4549788b8bc8bb7772b9a81ade1eab73d8f354251a8d6af4901223cc7f67"
content-hash = "5781e74b0692aed541fe445d6713d2dfd792bb226789501420aac4a8cb45aa2a"
+5 -5
@@ -25,7 +25,7 @@ dependencies = [
"defusedxml==0.7.1",
"gunicorn==23.0.0",
"lxml==5.3.2",
"prowler @ git+https://github.com/prowler-cloud/prowler.git@master",
"prowler @ git+https://github.com/prowler-cloud/prowler.git@v5.24",
"psycopg2-binary==2.9.9",
"pytest-celery[redis] (==1.3.0)",
"sentry-sdk[django] (==2.56.0)",
@@ -38,7 +38,7 @@ dependencies = [
"matplotlib (==3.10.8)",
"reportlab (==4.4.10)",
"neo4j (==6.1.0)",
"cartography (==0.132.0)",
"cartography (==0.135.0)",
"gevent (==25.9.1)",
"werkzeug (==3.1.7)",
"sqlparse (==0.5.5)",
@@ -50,7 +50,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
version = "1.24.0"
version = "1.25.2"
[project.scripts]
celery = "src.backend.config.settings.celery"
@@ -62,10 +62,9 @@ django-silk = "5.3.2"
docker = "7.1.0"
filelock = "3.20.3"
freezegun = "1.5.1"
marshmallow = "==3.26.2"
mypy = "1.10.1"
pylint = "3.2.5"
pytest = "8.2.2"
pytest = "9.0.3"
pytest-cov = "5.0.0"
pytest-django = "4.8.0"
pytest-env = "1.1.3"
@@ -75,3 +74,4 @@ ruff = "0.5.0"
safety = "3.7.0"
tqdm = "4.67.1"
vulture = "2.14"
prek = "0.3.9"
@@ -0,0 +1,23 @@
from django.db import migrations
TASK_NAME = "attack-paths-cleanup-stale-scans"
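# With the Redis broker running queue_order_strategy="priority" (see the
# Celery settings change later in this diff), lower numbers are consumed
# first, so pinning this beat task to priority=0 moves it ahead of the
# default priority of 6.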
def set_cleanup_priority(apps, schema_editor):
PeriodicTask = apps.get_model("django_celery_beat", "PeriodicTask")
PeriodicTask.objects.filter(name=TASK_NAME).update(priority=0)
def unset_cleanup_priority(apps, schema_editor):
PeriodicTask = apps.get_model("django_celery_beat", "PeriodicTask")
PeriodicTask.objects.filter(name=TASK_NAME).update(priority=None)
class Migration(migrations.Migration):
dependencies = [
("api", "0089_backfill_finding_group_status_muted"),
]
operations = [
migrations.RunPython(set_cleanup_priority, unset_cleanup_priority),
]
+1 -1
@@ -1,7 +1,7 @@
openapi: 3.0.3
info:
title: Prowler API
version: 1.24.0
version: 1.25.2
description: |-
Prowler API specification.
+191 -1
@@ -57,6 +57,7 @@ from api.models import (
ProviderGroupMembership,
ProviderSecret,
Resource,
ResourceFindingMapping,
Role,
RoleProviderGroupRelationship,
SAMLConfiguration,
@@ -15465,7 +15466,7 @@ class TestFindingGroupViewSet:
attrs = data[0]["attributes"]
assert attrs["status"] == "FAIL"
assert attrs["muted"] is True
assert attrs["fail_count"] == 2
assert attrs["fail_count"] == 0
assert attrs["fail_muted_count"] == 2
assert attrs["pass_muted_count"] == 0
assert attrs["manual_muted_count"] == 0
@@ -16030,6 +16031,36 @@ class TestFindingGroupViewSet:
# s3_bucket_public_access has 2 findings with 2 different resources
assert len(data) == 2
def test_resources_id_matches_resource_id_for_mapped_findings(
self, authenticated_client, finding_groups_fixture
):
"""Findings with a resource expose the resource id as row id (hot path contract)."""
response = authenticated_client.get(
reverse(
"finding-group-resources", kwargs={"pk": "s3_bucket_public_access"}
),
{"filter[inserted_at]": TODAY},
)
assert response.status_code == status.HTTP_200_OK
data = response.json()["data"]
assert data, "expected resources in response"
resource_ids = set(
ResourceFindingMapping.objects.filter(
finding__check_id="s3_bucket_public_access",
).values_list("resource_id", flat=True)
)
finding_ids = set(
Finding.objects.filter(
check_id="s3_bucket_public_access",
).values_list("id", flat=True)
)
returned_ids = {item["id"] for item in data}
assert returned_ids <= {str(rid) for rid in resource_ids}
assert returned_ids.isdisjoint({str(fid) for fid in finding_ids})
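# `<=` is the set-subset operator: every returned row id must be one of the
# mapped resource ids, and `isdisjoint` guarantees that no row id is a
# finding id.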
def test_resources_fields(self, authenticated_client, finding_groups_fixture):
"""Test resource fields (uid, name, service, region, type) have valid values."""
response = authenticated_client.get(
@@ -17046,6 +17077,57 @@ class TestFindingGroupViewSet:
# Descending boolean: True (1) before False (0)
assert muted_values == sorted(muted_values, reverse=True)
@pytest.mark.parametrize(
"endpoint_name", ["finding-group-list", "finding-group-latest"]
)
@pytest.mark.parametrize(
"sort_field",
[
"pass_muted_count",
"fail_muted_count",
"manual_muted_count",
"new_fail_count",
"new_fail_muted_count",
"new_pass_count",
"new_pass_muted_count",
"new_manual_count",
"new_manual_muted_count",
"changed_fail_count",
"changed_fail_muted_count",
"changed_pass_count",
"changed_pass_muted_count",
"changed_manual_count",
"changed_manual_muted_count",
],
)
def test_finding_groups_sort_by_counter_fields(
self,
authenticated_client,
finding_groups_fixture,
endpoint_name,
sort_field,
):
"""All counter fields are accepted as sort parameters (asc and desc)."""
params = {"sort": f"-{sort_field}"}
if endpoint_name == "finding-group-list":
params["filter[inserted_at]"] = TODAY
response = authenticated_client.get(reverse(endpoint_name), params)
assert response.status_code == status.HTTP_200_OK
data = response.json()["data"]
assert len(data) > 0
desc_values = [item["attributes"][sort_field] for item in data]
assert desc_values == sorted(desc_values, reverse=True)
params["sort"] = sort_field
response = authenticated_client.get(reverse(endpoint_name), params)
assert response.status_code == status.HTTP_200_OK
asc_values = [
item["attributes"][sort_field] for item in response.json()["data"]
]
assert asc_values == sorted(asc_values)
@pytest.mark.parametrize(
"endpoint_name", ["finding-group-list", "finding-group-latest"]
)
@@ -17189,3 +17271,111 @@ class TestFindingGroupViewSet:
attrs = item["attributes"]
assert "finding_id" in attrs
assert attrs["finding_id"] in rds_finding_ids
def test_latest_resources_picks_scan_by_completed_at_when_overlap(
self,
authenticated_client,
tenants_fixture,
providers_fixture,
resources_fixture,
):
"""Overlapping scans on the same provider must resolve to the scan
with the latest completed_at, matching the /latest summary path and
the daily-summary upsert (keyed on midnight(completed_at)). Picking
by inserted_at here caused /resources and /latest to read from
different scans and report diverging delta/new counts.
"""
tenant = tenants_fixture[0]
provider = providers_fixture[0]
resource = resources_fixture[0]
check_id = "overlap_regression_check"
t0 = datetime.now(timezone.utc) - timedelta(hours=5)
t1 = t0 + timedelta(hours=1)
t1_end = t1 + timedelta(minutes=30)
t2 = t0 + timedelta(hours=4)
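# Timeline under test (sketch): the long scan spans [t0 .......... t2];
# the short scan [t1 .. t1_end] nests inside it, inserted later (t1 > t0)
# but completed earlier (t1_end < t2).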
scan_long = Scan.objects.create(
name="long overlap scan",
provider=provider,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant_id=tenant.id,
started_at=t0,
completed_at=t2,
)
scan_short = Scan.objects.create(
name="short overlap scan",
provider=provider,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant_id=tenant.id,
started_at=t1,
completed_at=t1_end,
)
# inserted_at is auto_now_add, so override it with .update() to recreate
# the overlap shape: short scan inserted later but completed earlier.
Scan.all_objects.filter(pk=scan_long.pk).update(inserted_at=t0)
Scan.all_objects.filter(pk=scan_short.pk).update(inserted_at=t1)
scan_long.refresh_from_db()
scan_short.refresh_from_db()
assert scan_short.inserted_at > scan_long.inserted_at
assert scan_long.completed_at > scan_short.completed_at
long_finding = Finding.objects.create(
tenant_id=tenant.id,
uid=f"{check_id}_long",
scan=scan_long,
delta=None,
status=Status.FAIL,
status_extended="long scan finding",
impact=Severity.high,
impact_extended="high",
severity=Severity.high,
raw_result={"status": Status.FAIL, "severity": Severity.high},
check_id=check_id,
check_metadata={
"CheckId": check_id,
"checktitle": "Overlap regression",
"Description": "Overlapping scan regression.",
},
first_seen_at=t0,
muted=False,
)
long_finding.add_resources([resource])
short_finding = Finding.objects.create(
tenant_id=tenant.id,
uid=f"{check_id}_short",
scan=scan_short,
delta="new",
status=Status.FAIL,
status_extended="short scan finding",
impact=Severity.high,
impact_extended="high",
severity=Severity.high,
raw_result={"status": Status.FAIL, "severity": Severity.high},
check_id=check_id,
check_metadata={
"CheckId": check_id,
"checktitle": "Overlap regression",
"Description": "Overlapping scan regression.",
},
first_seen_at=t1,
muted=False,
)
short_finding.add_resources([resource])
response = authenticated_client.get(
reverse(
"finding-group-latest_resources",
kwargs={"check_id": check_id},
),
)
assert response.status_code == status.HTTP_200_OK
data = response.json()["data"]
assert len(data) == 1
attrs = data[0]["attributes"]
assert attrs["finding_id"] == str(long_finding.id)
assert attrs["delta"] is None
+3 -2
@@ -4225,10 +4225,11 @@ class FindingGroupResourceSerializer(BaseSerializerV1):
Serializer for Finding Group Resources - resources within a finding group.
Returns individual resources with their current status, severity,
and timing information.
and timing information. Orphan findings (without any resource) expose the
finding id as `id` so the row stays identifiable in the UI.
"""
id = serializers.UUIDField(source="resource_id")
id = serializers.UUIDField(source="row_id")
resource = serializers.SerializerMethodField()
provider = serializers.SerializerMethodField()
finding_id = serializers.UUIDField()
+312 -67
@@ -35,11 +35,13 @@ from django.db.models import (
CharField,
Count,
DecimalField,
Exists,
ExpressionWrapper,
F,
IntegerField,
Max,
Min,
OuterRef,
Prefetch,
Q,
QuerySet,
@@ -415,7 +417,7 @@ class SchemaView(SpectacularAPIView):
def get(self, request, *args, **kwargs):
spectacular_settings.TITLE = "Prowler API"
spectacular_settings.VERSION = "1.24.0"
spectacular_settings.VERSION = "1.25.2"
spectacular_settings.DESCRIPTION = (
"Prowler API specification.\n\nThis file is auto-generated."
)
@@ -7125,17 +7127,16 @@ class FindingGroupViewSet(BaseRLSViewSet):
output_field=IntegerField(),
)
# `pass_count`, `fail_count` and `manual_count` count *every* finding
# for the check (muted or not) so the aggregated `status` reflects the
# underlying check outcome regardless of mute state. Whether the group
# is actionable is signalled by the orthogonal `muted` flag below.
# `pass_count`, `fail_count` and `manual_count` only count non-muted
# findings. Muted findings are tracked separately via the
# `*_muted_count` fields.
return (
queryset.values("check_id")
.annotate(
severity_order=Max(severity_case),
pass_count=Count("id", filter=Q(status="PASS")),
fail_count=Count("id", filter=Q(status="FAIL")),
manual_count=Count("id", filter=Q(status="MANUAL")),
pass_count=Count("id", filter=Q(status="PASS", muted=False)),
fail_count=Count("id", filter=Q(status="FAIL", muted=False)),
manual_count=Count("id", filter=Q(status="MANUAL", muted=False)),
pass_muted_count=Count("id", filter=Q(status="PASS", muted=True)),
fail_muted_count=Count("id", filter=Q(status="FAIL", muted=True)),
manual_muted_count=Count("id", filter=Q(status="MANUAL", muted=True)),
@@ -7280,12 +7281,14 @@ class FindingGroupViewSet(BaseRLSViewSet):
# finding-level aggregation path.
row.pop("nonmuted_count", None)
# Compute aggregated status. Counts are inclusive of muted findings,
# so the underlying check outcome surfaces even when the group is
# fully muted.
if row.get("fail_count", 0) > 0:
# Compute aggregated status from non-muted counts first, then
# fall back to muted counts so fully-muted groups still reflect
# the underlying check outcome.
total_fail = row.get("fail_count", 0) + row.get("fail_muted_count", 0)
total_pass = row.get("pass_count", 0) + row.get("pass_muted_count", 0)
if total_fail > 0:
row["status"] = "FAIL"
elif row.get("pass_count", 0) > 0:
elif total_pass > 0:
row["status"] = "PASS"
else:
row["status"] = "MANUAL"
@@ -7311,8 +7314,23 @@ class FindingGroupViewSet(BaseRLSViewSet):
"pass_count": "pass_count",
"manual_count": "manual_count",
"muted_count": "muted_count",
"pass_muted_count": "pass_muted_count",
"fail_muted_count": "fail_muted_count",
"manual_muted_count": "manual_muted_count",
"new_count": "new_count",
"new_fail_count": "new_fail_count",
"new_fail_muted_count": "new_fail_muted_count",
"new_pass_count": "new_pass_count",
"new_pass_muted_count": "new_pass_muted_count",
"new_manual_count": "new_manual_count",
"new_manual_muted_count": "new_manual_muted_count",
"changed_count": "changed_count",
"changed_fail_count": "changed_fail_count",
"changed_fail_muted_count": "changed_fail_muted_count",
"changed_pass_count": "changed_pass_count",
"changed_pass_muted_count": "changed_pass_muted_count",
"changed_manual_count": "changed_manual_count",
"changed_manual_muted_count": "changed_manual_muted_count",
"resources_total": "resources_total",
"resources_fail": "resources_fail",
"first_seen_at": "agg_first_seen_at",
@@ -7370,9 +7388,12 @@ class FindingGroupViewSet(BaseRLSViewSet):
if computed_params.get("status") or computed_params.getlist("status__in"):
queryset = queryset.annotate(
total_fail=F("fail_count") + F("fail_muted_count"),
total_pass=F("pass_count") + F("pass_muted_count"),
).annotate(
aggregated_status=Case(
When(fail_count__gt=0, then=Value("FAIL")),
When(pass_count__gt=0, then=Value("PASS")),
When(total_fail__gt=0, then=Value("FAIL")),
When(total_pass__gt=0, then=Value("PASS")),
default=Value("MANUAL"),
output_field=CharField(),
)
@@ -7392,6 +7413,25 @@ class FindingGroupViewSet(BaseRLSViewSet):
return filterset.qs
def _resolve_finding_ids(self, filtered_queryset):
"""
Materialize and request-cache the finding_ids list used to anchor
RFM lookups.
Turning `finding_id__in=Subquery(findings_qs)` into
`finding_id__in=[uuid, ...]` nudges PostgreSQL out of a Merge Semi Join
that ends up reading hundreds of thousands of RFM index entries just to
post-filter tenant_id. Caching on the ViewSet instance (one instance
per request) avoids duplicating the findings round-trip when several
helpers build different RFM querysets from the same filtered set.
"""
cached = getattr(self, "_finding_ids_cache", None)
if cached is not None and cached[0] is filtered_queryset:
return cached[1]
finding_ids = list(filtered_queryset.order_by().values_list("id", flat=True))
self._finding_ids_cache = (filtered_queryset, finding_ids)
return finding_ids
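# Sketch of the two query shapes the docstring contrasts (queryset names
# assumed; not part of the diff).
# Subquery form: the planner may pick a Merge Semi Join over the RFM index.
slow_qs = ResourceFindingMapping.objects.filter(
    finding_id__in=Subquery(filtered_queryset.order_by().values("id"))
)
# Materialized form: a plain IN (uuid, ...) list it can probe directly.
fast_qs = ResourceFindingMapping.objects.filter(
    finding_id__in=list(filtered_queryset.order_by().values_list("id", flat=True))
)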
def _build_resource_mapping_queryset(
self, filtered_queryset, resource_ids=None, tenant_id: str | None = None
):
@@ -7401,10 +7441,10 @@ class FindingGroupViewSet(BaseRLSViewSet):
Starting from ResourceFindingMapping avoids scanning all mappings
before applying check_id/date filters on findings.
"""
finding_ids = filtered_queryset.order_by().values("id")
finding_ids = self._resolve_finding_ids(filtered_queryset)
mapping_queryset = ResourceFindingMapping.objects.filter(
finding_id__in=Subquery(finding_ids)
finding_id__in=finding_ids
)
if tenant_id:
mapping_queryset = mapping_queryset.filter(tenant_id=tenant_id)
@@ -7563,6 +7603,53 @@ class FindingGroupViewSet(BaseRLSViewSet):
.order_by(*ordering)
)
def _orphan_findings_queryset(self, filtered_queryset, finding_ids=None):
"""Findings in the filtered set with no ResourceFindingMapping entries."""
orphan_qs = filtered_queryset.filter(
~Exists(ResourceFindingMapping.objects.filter(finding_id=OuterRef("pk")))
)
if finding_ids is not None:
orphan_qs = orphan_qs.filter(id__in=finding_ids)
return orphan_qs
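# The ~Exists anti-join above compiles to roughly (sketch, table names
# illustrative):
#   SELECT ... FROM findings f
#   WHERE NOT EXISTS (SELECT 1 FROM resource_finding_mapping m
#                     WHERE m.finding_id = f.id)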
def _has_orphan_findings(self, filtered_queryset) -> bool:
"""Return True if any finding in the filtered set has no resource mapping."""
return self._orphan_findings_queryset(filtered_queryset).exists()
def _orphan_aggregation_values(self, orphan_queryset):
"""Raw rows for orphan findings; resource payload synthesized from metadata.
check_metadata is stored with lowercase keys (see
`prowler.lib.outputs.finding.Finding.get_metadata`) and
`Finding.resource_groups` is already denormalized at ingest time.
"""
return orphan_queryset.annotate(
_provider_type=F("scan__provider__provider"),
_provider_uid=F("scan__provider__uid"),
_provider_alias=F("scan__provider__alias"),
_svc=KeyTextTransform("servicename", "check_metadata"),
_region=KeyTextTransform("region", "check_metadata"),
_rtype=KeyTextTransform("resourcetype", "check_metadata"),
_rgroup=F("resource_groups"),
).values(
"id",
"uid",
"status",
"severity",
"delta",
"muted",
"muted_reason",
"first_seen_at",
"inserted_at",
"_provider_type",
"_provider_uid",
"_provider_alias",
"_svc",
"_region",
"_rtype",
"_rgroup",
)
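# KeyTextTransform reads a top-level key of the check_metadata JSON column
# as text (Postgres ->> semantics), so the resource-like payload is
# synthesized in SQL without parsing metadata per row in Python.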
def _post_process_resources(self, resource_data):
"""Convert resource aggregation rows to API output."""
results = []
@@ -7584,9 +7671,13 @@ class FindingGroupViewSet(BaseRLSViewSet):
else:
delta = None
resource_id = row["resource_id"]
finding_id = str(row["finding_id"]) if row.get("finding_id") else None
results.append(
{
"resource_id": row["resource_id"],
"row_id": resource_id,
"resource_id": resource_id,
"resource_uid": row["resource_uid"],
"resource_name": row["resource_name"],
"resource_service": row["resource_service"],
@@ -7605,9 +7696,46 @@ class FindingGroupViewSet(BaseRLSViewSet):
"muted": bool(row.get("muted", False)),
"muted_reason": row.get("muted_reason"),
"resource_group": row.get("resource_group", ""),
"finding_id": (
str(row["finding_id"]) if row.get("finding_id") else None
),
"finding_id": finding_id,
}
)
return results
def _post_process_orphans(self, orphan_rows):
"""Convert orphan finding rows into the same API shape as mapping rows."""
results = []
for row in orphan_rows:
status_val = row["status"]
status = status_val if status_val in ("FAIL", "PASS") else "MANUAL"
muted = bool(row["muted"])
delta_val = row.get("delta")
delta = delta_val if delta_val in ("new", "changed") and not muted else None
finding_id = str(row["id"])
results.append(
{
"row_id": finding_id,
"resource_id": None,
"resource_uid": row["uid"],
"resource_name": row["uid"],
"resource_service": row["_svc"] or "",
"resource_region": row["_region"] or "",
"resource_type": row["_rtype"] or "",
"provider_type": row["_provider_type"],
"provider_uid": row["_provider_uid"],
"provider_alias": row["_provider_alias"],
"status": status,
"severity": row["severity"],
"delta": delta,
"first_seen_at": row["first_seen_at"],
"last_seen_at": row["inserted_at"],
"muted": muted,
"muted_reason": row.get("muted_reason"),
"resource_group": row["_rgroup"] or "",
"finding_id": finding_id,
}
)
@@ -7668,26 +7796,19 @@ class FindingGroupViewSet(BaseRLSViewSet):
sort_param, self._FINDING_GROUP_SORT_MAP
)
if ordering:
# status_order is annotated on demand so groups can be sorted by
# their aggregated status (FAIL > PASS > MANUAL), mirroring the
# priority used in _post_process_aggregation. Counts are
# inclusive of muted findings, so the underlying check outcome
# surfaces even for fully muted groups.
if any(field.lstrip("-") == "status_order" for field in ordering):
aggregated_queryset = aggregated_queryset.annotate(
total_fail_for_sort=F("fail_count") + F("fail_muted_count"),
total_pass_for_sort=F("pass_count") + F("pass_muted_count"),
).annotate(
status_order=Case(
When(fail_count__gt=0, then=Value(3)),
When(pass_count__gt=0, then=Value(2)),
When(total_fail_for_sort__gt=0, then=Value(3)),
When(total_pass_for_sort__gt=0, then=Value(2)),
default=Value(1),
output_field=IntegerField(),
)
)
# delta_order is a virtual sort field: expand it to a
# lexicographic ordering by (new_count, changed_count) so groups
# with more new findings rank higher, with changed_count as the
# tie-breaker (preserves the "new > changed" priority used by
# the resources endpoint, but driven by the actual counters).
expanded_ordering = []
for field in ordering:
if field.lstrip("-") == "delta_order":
@@ -7721,41 +7842,65 @@ class FindingGroupViewSet(BaseRLSViewSet):
def _paginated_resource_response(
self, request, filtered_queryset, resource_ids, tenant_id
):
"""Paginate and return resources.
"""Paginate and return resources, appending orphan findings when present.
Without sort: paginate lightweight resource IDs first, aggregate only the page.
With sort: build a lightweight ordering subquery (resource_id + sort keys),
paginate that, then aggregate full details only for the page.
Hot path (no orphans, or resource filter applied): resources come from
ResourceFindingMapping aggregation. Untouched pre-existing behaviour.
Orphan fallback: findings without a mapping (e.g. IaC) are appended
after mapping rows as synthesized resource-like rows so they remain
visible in the UI without paying the aggregation cost on the hot path.
"""
sort_param = request.query_params.get("sort")
ordering = None
if sort_param:
ordering = self._validate_sort_fields(sort_param, self._RESOURCE_SORT_MAP)
if ordering:
if "resource_id" not in {field.lstrip("-") for field in ordering}:
ordering.append("resource_id")
validated = self._validate_sort_fields(sort_param, self._RESOURCE_SORT_MAP)
ordering = validated if validated else None
# Phase 1: lightweight aggregation with only sort keys, paginate
ordering_qs = self._build_resource_ordering_queryset(
filtered_queryset,
resource_ids=resource_ids,
tenant_id=tenant_id,
ordering=ordering,
)
page = self.paginate_queryset(ordering_qs)
if page is not None:
page_ids = [row["resource_id"] for row in page]
resource_data = self._build_resource_aggregation(
filtered_queryset, resource_ids=page_ids, tenant_id=tenant_id
)
# Re-sort to match the page ordering
id_order = {rid: idx for idx, rid in enumerate(page_ids)}
results = self._post_process_resources(resource_data)
results.sort(key=lambda r: id_order.get(r["resource_id"], 0))
serializer = FindingGroupResourceSerializer(results, many=True)
return self.get_paginated_response(serializer.data)
# Resource filters can only match findings with resources; skip orphan
# detection entirely when they are present.
if resource_ids is not None:
return self._mapping_paginated_response(
request, filtered_queryset, resource_ids, tenant_id, ordering
)
page_ids = [row["resource_id"] for row in ordering_qs]
# Serve the mapping response directly and piggyback on the paginator
# count to detect orphan-only groups, instead of paying a separate
# has_mappings.exists() semi-join over ResourceFindingMapping on
# every non-IaC request. TODO: once the ephemeral resources strategy
# is decided, mixed groups should route to _combined_paginated_response.
response = self._mapping_paginated_response(
request, filtered_queryset, resource_ids, tenant_id, ordering
)
page = getattr(self.paginator, "page", None)
mapping_total = page.paginator.count if page is not None else None
if mapping_total == 0:
# Pure orphan group (e.g. IaC): synthesize resource-like rows.
return self._combined_paginated_response(
request, filtered_queryset, tenant_id, ordering
)
return response
def _mapping_paginated_response(
self, request, filtered_queryset, resource_ids, tenant_id, ordering
):
"""Mapping-only paginated response (original fast path)."""
if ordering:
if "resource_id" not in {field.lstrip("-") for field in ordering}:
ordering.append("resource_id")
# Phase 1: lightweight aggregation with only sort keys, paginate
ordering_qs = self._build_resource_ordering_queryset(
filtered_queryset,
resource_ids=resource_ids,
tenant_id=tenant_id,
ordering=ordering,
)
page = self.paginate_queryset(ordering_qs)
if page is not None:
page_ids = [row["resource_id"] for row in page]
resource_data = self._build_resource_aggregation(
filtered_queryset, resource_ids=page_ids, tenant_id=tenant_id
)
@@ -7763,10 +7908,18 @@ class FindingGroupViewSet(BaseRLSViewSet):
results = self._post_process_resources(resource_data)
results.sort(key=lambda r: id_order.get(r["resource_id"], 0))
serializer = FindingGroupResourceSerializer(results, many=True)
return Response(serializer.data)
return self.get_paginated_response(serializer.data)
page_ids = [row["resource_id"] for row in ordering_qs]
resource_data = self._build_resource_aggregation(
filtered_queryset, resource_ids=page_ids, tenant_id=tenant_id
)
id_order = {rid: idx for idx, rid in enumerate(page_ids)}
results = self._post_process_resources(resource_data)
results.sort(key=lambda r: id_order.get(r["resource_id"], 0))
serializer = FindingGroupResourceSerializer(results, many=True)
return Response(serializer.data)
# No sort (or only empty sort fragments): paginate lightweight resource IDs
# first, aggregate only the page.
mapping_qs = self._build_resource_mapping_queryset(
filtered_queryset, resource_ids=resource_ids, tenant_id=tenant_id
)
@@ -7794,6 +7947,95 @@ class FindingGroupViewSet(BaseRLSViewSet):
serializer = FindingGroupResourceSerializer(results, many=True)
return Response(serializer.data)
def _combined_paginated_response(
self, request, filtered_queryset, tenant_id, ordering
):
"""Mapping rows + orphan findings appended at end.
Orphans sit after mapping rows regardless of sort. This keeps the
mapping-only code path intact for checks that have no orphans (the
common case) and avoids paying UNION/coalesce costs there.
"""
mapping_qs = self._build_resource_mapping_queryset(
filtered_queryset, resource_ids=None, tenant_id=tenant_id
)
mapping_count = mapping_qs.values("resource_id").distinct().count()
orphan_ids = list(
self._orphan_findings_queryset(filtered_queryset)
.order_by("id")
.values_list("id", flat=True)
)
orphan_count = len(orphan_ids)
total = mapping_count + orphan_count
# Paginate a simple [0..total) index sequence so DRF produces proper
# links/meta; then slice mapping / orphan sources accordingly.
page = self.paginate_queryset(range(total))
page_indices = list(page) if page is not None else list(range(total))
mapping_indices = [i for i in page_indices if i < mapping_count]
orphan_positions = [
i - mapping_count for i in page_indices if i >= mapping_count
]
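# e.g. mapping_count=18, orphan_count=7, page_size=10: page 2 covers indices
# 10..19, so mapping rows 10..17 and orphan positions 0..1 are stitched
# together in order.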
mapping_results = []
if mapping_indices:
start = mapping_indices[0]
stop = mapping_indices[-1] + 1
if ordering:
ordering_fields = list(ordering)
if "resource_id" not in {
field.lstrip("-") for field in ordering_fields
}:
ordering_fields.append("resource_id")
ordered_qs = self._build_resource_ordering_queryset(
filtered_queryset,
resource_ids=None,
tenant_id=tenant_id,
ordering=ordering_fields,
)
slice_rids = [row["resource_id"] for row in ordered_qs[start:stop]]
else:
slice_rids = list(
mapping_qs.values_list("resource_id", flat=True)
.distinct()
.order_by("resource_id")[start:stop]
)
if slice_rids:
resource_data = self._build_resource_aggregation(
filtered_queryset,
resource_ids=slice_rids,
tenant_id=tenant_id,
)
rows_by_rid = {row["resource_id"]: row for row in resource_data}
ordered_rows = [
rows_by_rid[rid] for rid in slice_rids if rid in rows_by_rid
]
mapping_results = self._post_process_resources(ordered_rows)
orphan_results = []
if orphan_positions:
slice_fids = [orphan_ids[pos] for pos in orphan_positions]
raw_rows = list(
self._orphan_aggregation_values(
self._orphan_findings_queryset(
filtered_queryset, finding_ids=slice_fids
)
)
)
rows_by_fid = {row["id"]: row for row in raw_rows}
ordered_rows = [
rows_by_fid[fid] for fid in slice_fids if fid in rows_by_fid
]
orphan_results = self._post_process_orphans(ordered_rows)
results = mapping_results + orphan_results
serializer = FindingGroupResourceSerializer(results, many=True)
if page is not None:
return self.get_paginated_response(serializer.data)
return Response(serializer.data)
def list(self, request, *args, **kwargs):
"""
List finding groups with aggregation and filtering.
@@ -7923,10 +8165,13 @@ class FindingGroupViewSet(BaseRLSViewSet):
tenant_id = request.tenant_id
queryset = self._get_finding_queryset()
# Get latest completed scan for each provider
# Order by -completed_at (matching the /latest summary path and the
# daily summary upsert keyed on midnight(completed_at)) so that
# overlapping scans do not make /resources and /latest read from
# different scans and report diverging counts.
latest_scan_ids = (
Scan.objects.filter(tenant_id=tenant_id, state=StateChoices.COMPLETED)
.order_by("provider_id", "-inserted_at")
.order_by("provider_id", "-completed_at", "-inserted_at")
.distinct("provider_id")
.values_list("id", flat=True)
)
+3 -1
@@ -17,8 +17,10 @@ celery_app.config_from_object("django.conf:settings", namespace="CELERY")
celery_app.conf.update(result_extended=True, result_expires=None)
celery_app.conf.broker_transport_options = {
"visibility_timeout": BROKER_VISIBILITY_TIMEOUT
"visibility_timeout": BROKER_VISIBILITY_TIMEOUT,
"queue_order_strategy": "priority",
}
celery_app.conf.task_default_priority = 6
celery_app.conf.result_backend_transport_options = {
"visibility_timeout": BROKER_VISIBILITY_TIMEOUT
}
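A minimal sketch of how these broker settings behave (the task name below is hypothetical, not from this diff): with the Redis transport, queue_order_strategy "priority" makes workers drain lower-numbered priority levels first, so priority-0 work is consumed before the default of 6.
from celery import shared_task
@shared_task(priority=0)  # 0 outranks the default priority of 6 on Redis
def cleanup_stale_scans():
    ...
# The priority can also be set per call, overriding the task default:
cleanup_stale_scans.apply_async(priority=0)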
+1 -1
@@ -15,7 +15,7 @@ from config.django.production import LOGGING as DJANGO_LOGGERS, DEBUG # noqa: E
from config.custom_logging import BackendLogger # noqa: E402
BIND_ADDRESS = env("DJANGO_BIND_ADDRESS", default="127.0.0.1")
PORT = env("DJANGO_PORT", default=8000)
PORT = env("DJANGO_PORT", default=8080)
# Server settings
bind = f"{BIND_ADDRESS}:{PORT}"
+46 -10
@@ -1,6 +1,8 @@
# Portions of this file are based on code from the Cartography project
# (https://github.com/cartography-cncf/cartography), which is licensed under the Apache 2.0 License.
import time
from typing import Any
import aioboto3
@@ -33,7 +35,7 @@ def start_aws_ingestion(
For the scan progress updates:
- The caller of this function (`tasks.jobs.attack_paths.scan.run`) has set it to 2.
- When control returns to the caller, it will be set to 95.
- When control returns to the caller, it will be set to 93.
"""
# Initialize variables common to all jobs
@@ -89,34 +91,50 @@ def start_aws_ingestion(
logger.info(
f"Syncing function permission_relationships for AWS account {prowler_api_provider.uid}"
)
t0 = time.perf_counter()
cartography_aws.RESOURCE_FUNCTIONS["permission_relationships"](**sync_args)
logger.info(
f"Synced function permission_relationships for AWS account {prowler_api_provider.uid} in {time.perf_counter() - t0:.3f}s"
)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 88)
if "resourcegroupstaggingapi" in requested_syncs:
logger.info(
f"Syncing function resourcegroupstaggingapi for AWS account {prowler_api_provider.uid}"
)
t0 = time.perf_counter()
cartography_aws.RESOURCE_FUNCTIONS["resourcegroupstaggingapi"](**sync_args)
logger.info(
f"Synced function resourcegroupstaggingapi for AWS account {prowler_api_provider.uid} in {time.perf_counter() - t0:.3f}s"
)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 89)
logger.info(
f"Syncing ec2_iaminstanceprofile scoped analysis for AWS account {prowler_api_provider.uid}"
)
t0 = time.perf_counter()
cartography_aws.run_scoped_analysis_job(
"aws_ec2_iaminstanceprofile.json",
neo4j_session,
common_job_parameters,
)
logger.info(
f"Synced ec2_iaminstanceprofile scoped analysis for AWS account {prowler_api_provider.uid} in {time.perf_counter() - t0:.3f}s"
)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 90)
logger.info(
f"Syncing lambda_ecr analysis for AWS account {prowler_api_provider.uid}"
)
t0 = time.perf_counter()
cartography_aws.run_analysis_job(
"aws_lambda_ecr.json",
neo4j_session,
common_job_parameters,
)
logger.info(
f"Synced lambda_ecr analysis for AWS account {prowler_api_provider.uid} in {time.perf_counter() - t0:.3f}s"
)
if all(
s in requested_syncs
@@ -125,25 +143,34 @@ def start_aws_ingestion(
logger.info(
f"Syncing lb_container_exposure scoped analysis for AWS account {prowler_api_provider.uid}"
)
t0 = time.perf_counter()
cartography_aws.run_scoped_analysis_job(
"aws_lb_container_exposure.json",
neo4j_session,
common_job_parameters,
)
logger.info(
f"Synced lb_container_exposure scoped analysis for AWS account {prowler_api_provider.uid} in {time.perf_counter() - t0:.3f}s"
)
if all(s in requested_syncs for s in ["ec2:network_acls", "ec2:load_balancer_v2"]):
logger.info(
f"Syncing lb_nacl_direct scoped analysis for AWS account {prowler_api_provider.uid}"
)
t0 = time.perf_counter()
cartography_aws.run_scoped_analysis_job(
"aws_lb_nacl_direct.json",
neo4j_session,
common_job_parameters,
)
logger.info(
f"Synced lb_nacl_direct scoped analysis for AWS account {prowler_api_provider.uid} in {time.perf_counter() - t0:.3f}s"
)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 91)
logger.info(f"Syncing metadata for AWS account {prowler_api_provider.uid}")
t0 = time.perf_counter()
cartography_aws.merge_module_sync_metadata(
neo4j_session,
group_type="AWSAccount",
@@ -152,24 +179,23 @@ def start_aws_ingestion(
update_tag=cartography_config.update_tag,
stat_handler=cartography_aws.stat_handler,
)
logger.info(
f"Synced metadata for AWS account {prowler_api_provider.uid} in {time.perf_counter() - t0:.3f}s"
)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 92)
# Removing the added extra field
del common_job_parameters["AWS_ID"]
logger.info(f"Syncing cleanup_job for AWS account {prowler_api_provider.uid}")
cartography_aws.run_cleanup_job(
"aws_post_ingestion_principals_cleanup.json",
neo4j_session,
common_job_parameters,
)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 93)
logger.info(f"Syncing analysis for AWS account {prowler_api_provider.uid}")
t0 = time.perf_counter()
cartography_aws._perform_aws_analysis(
requested_syncs, neo4j_session, common_job_parameters
)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 94)
logger.info(
f"Synced analysis for AWS account {prowler_api_provider.uid} in {time.perf_counter() - t0:.3f}s"
)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 93)
return failed_syncs
@@ -234,6 +260,8 @@ def sync_aws_account(
)
try:
func_t0 = time.perf_counter()
# `ecr:image_layers` uses `aioboto3_session` instead of `boto3_session`
if func_name == "ecr:image_layers":
cartography_aws.RESOURCE_FUNCTIONS[func_name](
@@ -257,7 +285,15 @@ def sync_aws_account(
else:
cartography_aws.RESOURCE_FUNCTIONS[func_name](**sync_args)
logger.info(
f"Synced function {func_name} for AWS account {prowler_api_provider.uid} in {time.perf_counter() - func_t0:.3f}s"
)
except Exception as e:
logger.info(
f"Synced function {func_name} for AWS account {prowler_api_provider.uid} in {time.perf_counter() - func_t0:.3f}s (FAILED)"
)
exception_message = utils.stringify_exception(
e, f"Exception for AWS sync function: {func_name}"
)
@@ -8,9 +8,9 @@ from tasks.jobs.attack_paths import aws
# Batch size for Neo4j write operations (resource labeling, cleanup)
BATCH_SIZE = env.int("ATTACK_PATHS_BATCH_SIZE", 1000)
# Batch size for Postgres findings fetch (keyset pagination page size)
FINDINGS_BATCH_SIZE = env.int("ATTACK_PATHS_FINDINGS_BATCH_SIZE", 500)
FINDINGS_BATCH_SIZE = env.int("ATTACK_PATHS_FINDINGS_BATCH_SIZE", 1000)
# Batch size for temp-to-tenant graph sync (nodes and relationships per cursor page)
SYNC_BATCH_SIZE = env.int("ATTACK_PATHS_SYNC_BATCH_SIZE", 250)
SYNC_BATCH_SIZE = env.int("ATTACK_PATHS_SYNC_BATCH_SIZE", 1000)
# Neo4j internal labels (Prowler-specific, not provider-specific)
# - `Internet`: Singleton node representing external internet access for exposed-resource queries
@@ -5,7 +5,6 @@ This module handles:
- Adding resource labels to Cartography nodes for efficient lookups
- Loading Prowler findings into the graph
- Linking findings to resources
- Cleaning up stale findings
"""
from collections import defaultdict
@@ -13,6 +12,7 @@ from typing import Any, Generator
from uuid import UUID
import neo4j
from cartography.config import Config as CartographyConfig
from celery.utils.log import get_task_logger
from tasks.jobs.attack_paths.config import (
@@ -24,7 +24,6 @@ from tasks.jobs.attack_paths.config import (
)
from tasks.jobs.attack_paths.queries import (
ADD_RESOURCE_LABEL_TEMPLATE,
CLEANUP_FINDINGS_TEMPLATE,
INSERT_FINDING_TEMPLATE,
render_cypher_template,
)
@@ -88,18 +87,21 @@ def analysis(
prowler_api_provider: Provider,
scan_id: str,
config: CartographyConfig,
) -> None:
) -> tuple[int, int]:
"""
Main entry point for Prowler findings analysis.
Adds resource labels, loads findings, and cleans up stale data.
Adds resource labels and loads findings.
Returns (labeled_nodes, findings_loaded).
"""
add_resource_label(
total_labeled = add_resource_label(
neo4j_session, prowler_api_provider.provider, str(prowler_api_provider.uid)
)
findings_data = stream_findings_with_resources(prowler_api_provider, scan_id)
load_findings(neo4j_session, findings_data, prowler_api_provider, config)
cleanup_findings(neo4j_session, prowler_api_provider, config)
total_loaded = load_findings(
neo4j_session, findings_data, prowler_api_provider, config
)
return total_labeled, total_loaded
def add_resource_label(
@@ -149,12 +151,11 @@ def load_findings(
findings_batches: Generator[list[dict[str, Any]], None, None],
prowler_api_provider: Provider,
config: CartographyConfig,
) -> None:
) -> int:
"""Load Prowler findings into the graph, linking them to resources."""
query = render_cypher_template(
INSERT_FINDING_TEMPLATE,
{
"__ROOT_NODE_LABEL__": get_root_node_label(prowler_api_provider.provider),
"__NODE_UID_FIELD__": get_node_uid_field(prowler_api_provider.provider),
"__RESOURCE_LABEL__": get_provider_resource_label(
prowler_api_provider.provider
@@ -163,7 +164,6 @@ def load_findings(
)
parameters = {
"provider_uid": str(prowler_api_provider.uid),
"last_updated": config.update_tag,
"prowler_version": ProwlerConfig.prowler_version,
}
@@ -181,28 +181,7 @@ def load_findings(
neo4j_session.run(query, parameters)
logger.info(f"Finished loading {total_records} records in {batch_num} batches")
def cleanup_findings(
neo4j_session: neo4j.Session,
prowler_api_provider: Provider,
config: CartographyConfig,
) -> None:
"""Remove stale findings (classic Cartography behaviour)."""
parameters = {
"last_updated": config.update_tag,
"batch_size": BATCH_SIZE,
}
batch = 1
deleted_count = 1
while deleted_count > 0:
logger.info(f"Cleaning findings batch {batch}")
result = neo4j_session.run(CLEANUP_FINDINGS_TEMPLATE, parameters)
deleted_count = result.single().get("deleted_findings_count", 0)
batch += 1
return total_records
# Findings Streaming (Generator-based)
@@ -273,7 +252,9 @@ def _fetch_findings_batch(
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
# Use `all_objects` to get `Findings` even on soft-deleted `Providers`
# However, the provider is already validated as active in this context
qs = FindingModel.all_objects.filter(scan_id=scan_id).order_by("id")
qs = FindingModel.all_objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).order_by("id")
if after_id is not None:
qs = qs.filter(id__gt=after_id)
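# Keyset-pagination sketch of the loop that drives this helper (the real
# generator is stream_findings_with_resources; the batch shape and argument
# order below are assumptions for illustration):
after_id = None
while True:
    findings = _fetch_findings_batch(tenant_id, scan_id, after_id)
    if not findings:
        break
    yield findings
    after_id = findings[-1].id  # resume strictly after the last id seen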
@@ -13,14 +13,13 @@ from tasks.jobs.attack_paths.config import (
logger = get_task_logger(__name__)
# Indexes for Prowler findings and resource lookups
# Indexes for Prowler Findings and resource lookups
FINDINGS_INDEX_STATEMENTS = [
# Resource indexes for Prowler Finding lookups
"CREATE INDEX aws_resource_arn IF NOT EXISTS FOR (n:_AWSResource) ON (n.arn);",
"CREATE INDEX aws_resource_id IF NOT EXISTS FOR (n:_AWSResource) ON (n.id);",
# Prowler Finding indexes
f"CREATE INDEX prowler_finding_id IF NOT EXISTS FOR (n:{PROWLER_FINDING_LABEL}) ON (n.id);",
f"CREATE INDEX prowler_finding_lastupdated IF NOT EXISTS FOR (n:{PROWLER_FINDING_LABEL}) ON (n.lastupdated);",
f"CREATE INDEX prowler_finding_status IF NOT EXISTS FOR (n:{PROWLER_FINDING_LABEL}) ON (n.status);",
# Internet node index for MERGE lookups
f"CREATE INDEX internet_id IF NOT EXISTS FOR (n:{INTERNET_NODE_LABEL}) ON (n.id);",
@@ -32,17 +32,14 @@ ADD_RESOURCE_LABEL_TEMPLATE = """
"""
INSERT_FINDING_TEMPLATE = f"""
MATCH (account:__ROOT_NODE_LABEL__ {{id: $provider_uid}})
UNWIND $findings_data AS finding_data
OPTIONAL MATCH (account)-->(resource_by_uid:__RESOURCE_LABEL__)
WHERE resource_by_uid.__NODE_UID_FIELD__ = finding_data.resource_uid
WITH account, finding_data, resource_by_uid
OPTIONAL MATCH (resource_by_uid:__RESOURCE_LABEL__ {{__NODE_UID_FIELD__: finding_data.resource_uid}})
WITH finding_data, resource_by_uid
OPTIONAL MATCH (account)-->(resource_by_id:__RESOURCE_LABEL__)
OPTIONAL MATCH (resource_by_id:__RESOURCE_LABEL__ {{id: finding_data.resource_uid}})
WHERE resource_by_uid IS NULL
AND resource_by_id.id = finding_data.resource_uid
WITH account, finding_data, COALESCE(resource_by_uid, resource_by_id) AS resource
WITH finding_data, COALESCE(resource_by_uid, resource_by_id) AS resource
WHERE resource IS NOT NULL
MERGE (finding:{PROWLER_FINDING_LABEL} {{id: finding_data.id}})
@@ -80,17 +77,6 @@ INSERT_FINDING_TEMPLATE = f"""
rel.lastupdated = $last_updated
"""
CLEANUP_FINDINGS_TEMPLATE = f"""
MATCH (finding:{PROWLER_FINDING_LABEL})
WHERE finding.lastupdated < $last_updated
WITH finding LIMIT $batch_size
DETACH DELETE finding
RETURN COUNT(finding) AS deleted_findings_count
"""
# Internet queries (used by internet.py)
# ---------------------------------------
+38 -10
@@ -55,6 +55,7 @@ exception propagates to Celery.
import logging
import time
from typing import Any
from cartography.config import Config as CartographyConfig
@@ -144,6 +145,12 @@ def run(tenant_id: str, scan_id: str, task_id: str) -> dict[str, Any]:
attack_paths_scan, task_id, tenant_cartography_config
)
scan_t0 = time.perf_counter()
logger.info(
f"Starting Attack Paths scan ({attack_paths_scan.id}) for "
f"{prowler_api_provider.provider.upper()} provider {prowler_api_provider.id}"
)
subgraph_dropped = False
sync_completed = False
provider_gated = False
@@ -169,6 +176,7 @@ def run(tenant_id: str, scan_id: str, task_id: str) -> dict[str, Any]:
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 2)
# The real scan, which iterates over the cloud services
t0 = time.perf_counter()
ingestion_exceptions = utils.call_within_event_loop(
cartography_ingestion_function,
tmp_neo4j_session,
@@ -177,19 +185,23 @@ def run(tenant_id: str, scan_id: str, task_id: str) -> dict[str, Any]:
prowler_sdk_provider,
attack_paths_scan,
)
logger.info(
f"Cartography ingestion completed in {time.perf_counter() - t0:.3f}s "
f"(failed_syncs={len(ingestion_exceptions)})"
)
# Post-processing: Just keeping it to be more Cartography compliant
logger.info(
f"Syncing Cartography ontology for AWS account {prowler_api_provider.uid}"
)
cartography_ontology.run(tmp_neo4j_session, tmp_cartography_config)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 95)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 94)
logger.info(
f"Syncing Cartography analysis for AWS account {prowler_api_provider.uid}"
)
cartography_analysis.run(tmp_neo4j_session, tmp_cartography_config)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 96)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 95)
# Creating Internet node and CAN_ACCESS relationships
logger.info(
@@ -198,14 +210,20 @@ def run(tenant_id: str, scan_id: str, task_id: str) -> dict[str, Any]:
internet.analysis(
tmp_neo4j_session, prowler_api_provider, tmp_cartography_config
)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 96)
# Adding Prowler Finding nodes and relationships
logger.info(
f"Syncing Prowler analysis for AWS account {prowler_api_provider.uid}"
)
findings.analysis(
t0 = time.perf_counter()
labeled_nodes, findings_loaded = findings.analysis(
tmp_neo4j_session, prowler_api_provider, scan_id, tmp_cartography_config
)
logger.info(
f"Prowler analysis completed in {time.perf_counter() - t0:.3f}s "
f"(findings={findings_loaded}, labeled_nodes={labeled_nodes})"
)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 97)
logger.info(
@@ -227,22 +245,33 @@ def run(tenant_id: str, scan_id: str, task_id: str) -> dict[str, Any]:
logger.info(f"Deleting existing provider graph in {tenant_database_name}")
db_utils.set_provider_graph_data_ready(attack_paths_scan, False)
provider_gated = True
graph_database.drop_subgraph(
t0 = time.perf_counter()
deleted_nodes = graph_database.drop_subgraph(
database=tenant_database_name,
provider_id=str(prowler_api_provider.id),
)
logger.info(
f"Deleted existing provider graph in {time.perf_counter() - t0:.3f}s "
f"(deleted_nodes={deleted_nodes})"
)
subgraph_dropped = True
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 98)
logger.info(
f"Syncing graph from {tmp_database_name} into {tenant_database_name}"
)
sync.sync_graph(
t0 = time.perf_counter()
sync_result = sync.sync_graph(
source_database=tmp_database_name,
target_database=tenant_database_name,
tenant_id=str(prowler_api_provider.tenant_id),
provider_id=str(prowler_api_provider.id),
)
logger.info(
f"Synced graph in {time.perf_counter() - t0:.3f}s "
f"(nodes={sync_result['nodes']}, relationships={sync_result['relationships']})"
)
sync_completed = True
db_utils.set_graph_data_ready(attack_paths_scan, True)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 99)
@@ -250,17 +279,16 @@ def run(tenant_id: str, scan_id: str, task_id: str) -> dict[str, Any]:
logger.info(f"Clearing Neo4j cache for database {tenant_database_name}")
graph_database.clear_cache(tenant_database_name)
logger.info(
f"Completed Cartography ({attack_paths_scan.id}) for "
f"{prowler_api_provider.provider.upper()} provider {prowler_api_provider.id}"
)
logger.info(f"Dropping temporary Neo4j database {tmp_database_name}")
graph_database.drop_database(tmp_database_name)
db_utils.finish_attack_paths_scan(
attack_paths_scan, StateChoices.COMPLETED, ingestion_exceptions
)
logger.info(
f"Attack Paths scan completed in {time.perf_counter() - scan_t0:.3f}s "
f"(state=completed, failed_syncs={len(ingestion_exceptions)})"
)
return ingestion_exceptions
except Exception as e:
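The timing changes in this file all repeat one pattern: capture `time.perf_counter()` before a phase, then log the elapsed seconds together with a result count. A small context-manager sketch (not in the diff; the name is illustrative) that captures the same pattern:
```python
import logging
import time
from contextlib import contextmanager

logger = logging.getLogger(__name__)

@contextmanager
def timed_phase(label: str):
    """Log how long the wrapped phase took, mirroring the scan's log lines."""
    t0 = time.perf_counter()
    try:
        yield
    finally:
        logger.info(f"{label} completed in {time.perf_counter() - t0:.3f}s")

# Usage:
# with timed_phase("Cartography ingestion"):
#     ingestion_exceptions = run_ingestion()
```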
@@ -5,6 +5,8 @@ This module handles syncing graph data from temporary scan databases
to the tenant database, adding provider isolation labels and properties.
"""
import time
from collections import defaultdict
from typing import Any
@@ -81,6 +83,7 @@ def sync_nodes(
Source and target sessions are opened sequentially per batch to avoid
holding two Bolt connections simultaneously for the entire sync duration.
"""
t0 = time.perf_counter()
last_id = -1
total_synced = 0
@@ -117,7 +120,7 @@ def sync_nodes(
total_synced += batch_count
logger.info(
f"Synced {total_synced} nodes from {source_database} to {target_database}"
f"Synced {total_synced} nodes from {source_database} to {target_database} in {time.perf_counter() - t0:.3f}s"
)
return total_synced
@@ -136,6 +139,7 @@ def sync_relationships(
Source and target sessions are opened sequentially per batch to avoid
holding two Bolt connections simultaneously for the entire sync duration.
"""
t0 = time.perf_counter()
last_id = -1
total_synced = 0
@@ -166,7 +170,7 @@ def sync_relationships(
total_synced += batch_count
logger.info(
f"Synced {total_synced} relationships from {source_database} to {target_database}"
f"Synced {total_synced} relationships from {source_database} to {target_database} in {time.perf_counter() - t0:.3f}s"
)
return total_synced
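Both helpers page with a keyset cursor (`last_id`) rather than an OFFSET, so each batch costs the same regardless of how deep into the graph the sync is. A generic sketch of the loop, with hypothetical `fetch_batch`/`write_batch` callables standing in for the per-batch source and target sessions:
```python
import time

def sync_in_batches(fetch_batch, write_batch, logger, batch_size=1000):
    """Keyset-paginated copy: fetch rows with id > last_id, write them,
    advance the cursor, and stop when a batch comes back empty."""
    t0 = time.perf_counter()
    last_id = -1
    total_synced = 0
    while True:
        rows = fetch_batch(last_id, batch_size)  # one source session per batch
        if not rows:
            break
        write_batch(rows)                        # one target session per batch
        last_id = rows[-1]["id"]
        total_synced += len(rows)
    logger.info(f"Synced {total_synced} rows in {time.perf_counter() - t0:.3f}s")
    return total_synced
```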
+15 -9
View File
@@ -752,11 +752,19 @@ def _process_finding_micro_batch(
)
if mappings_to_create:
ResourceFindingMapping.objects.bulk_create(
created_mappings = ResourceFindingMapping.objects.bulk_create(
mappings_to_create,
batch_size=SCAN_DB_BATCH_SIZE,
ignore_conflicts=True,
unique_fields=["tenant_id", "resource_id", "finding_id"],
)
inserted = sum(1 for m in created_mappings if m.pk)
if inserted != len(mappings_to_create):
logger.error(
f"scan {scan_instance.id}: expected "
f"{len(mappings_to_create)} ResourceFindingMapping rows, "
f"inserted {inserted}. Rolling back micro-batch."
)
# Update finding denormalized arrays
findings_to_update = []
@@ -1804,11 +1812,9 @@ def aggregate_finding_group_summaries(tenant_id: str, scan_id: str):
)
# Aggregate findings by check_id for this scan.
# `pass_count`, `fail_count` and `manual_count` count *every* finding
# in this group, regardless of mute state, so the aggregated `status`
# always reflects the underlying check outcome (FAIL > PASS > MANUAL)
# even when the group is fully muted. The orthogonal `muted` flag is
# what tells whether the group has any actionable (non-muted) findings.
# `pass_count`, `fail_count` and `manual_count` only count non-muted
# findings. Muted findings are tracked separately via the
# `*_muted_count` fields.
aggregated = (
Finding.objects.filter(
tenant_id=tenant_id,
@@ -1817,9 +1823,9 @@ def aggregate_finding_group_summaries(tenant_id: str, scan_id: str):
.values("check_id")
.annotate(
severity_order=Max(severity_case),
pass_count=Count("id", filter=Q(status="PASS")),
fail_count=Count("id", filter=Q(status="FAIL")),
manual_count=Count("id", filter=Q(status="MANUAL")),
pass_count=Count("id", filter=Q(status="PASS", muted=False)),
fail_count=Count("id", filter=Q(status="FAIL", muted=False)),
manual_count=Count("id", filter=Q(status="MANUAL", muted=False)),
pass_muted_count=Count("id", filter=Q(status="PASS", muted=True)),
fail_muted_count=Count("id", filter=Q(status="FAIL", muted=True)),
manual_muted_count=Count("id", filter=Q(status="MANUAL", muted=True)),
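Each status is now counted twice, split on the `muted` flag; Django's `Count(..., filter=Q(...))` compiles to a filtered SQL aggregate, so the split is computed in a single pass. A standalone sketch of the same split, assuming a queryset over the `Finding` model:
```python
from django.db.models import Count, Q

def status_counts(findings_qs):
    """Count PASS/FAIL findings separately for muted and non-muted rows."""
    return findings_qs.aggregate(
        pass_count=Count("id", filter=Q(status="PASS", muted=False)),
        fail_count=Count("id", filter=Q(status="FAIL", muted=False)),
        pass_muted_count=Count("id", filter=Q(status="PASS", muted=True)),
        fail_muted_count=Count("id", filter=Q(status="FAIL", muted=True)),
    )
```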
@@ -38,11 +38,14 @@ class TestAttackPathsRun:
@patch("tasks.jobs.attack_paths.scan.db_utils.finish_attack_paths_scan")
@patch("tasks.jobs.attack_paths.scan.db_utils.update_attack_paths_scan_progress")
@patch("tasks.jobs.attack_paths.scan.db_utils.starting_attack_paths_scan")
@patch("tasks.jobs.attack_paths.scan.sync.sync_graph")
@patch("tasks.jobs.attack_paths.scan.graph_database.drop_subgraph")
@patch(
"tasks.jobs.attack_paths.scan.sync.sync_graph",
return_value={"nodes": 0, "relationships": 0},
)
@patch("tasks.jobs.attack_paths.scan.graph_database.drop_subgraph", return_value=0)
@patch("tasks.jobs.attack_paths.scan.indexes.create_sync_indexes")
@patch("tasks.jobs.attack_paths.scan.internet.analysis")
@patch("tasks.jobs.attack_paths.scan.findings.analysis")
@patch("tasks.jobs.attack_paths.scan.findings.analysis", return_value=(0, 0))
@patch("tasks.jobs.attack_paths.scan.indexes.create_findings_indexes")
@patch("tasks.jobs.attack_paths.scan.cartography_ontology.run")
@patch("tasks.jobs.attack_paths.scan.cartography_analysis.run")
@@ -188,7 +191,7 @@ class TestAttackPathsRun:
@patch("tasks.jobs.attack_paths.scan.db_utils.set_provider_graph_data_ready")
@patch("tasks.jobs.attack_paths.scan.db_utils.update_attack_paths_scan_progress")
@patch("tasks.jobs.attack_paths.scan.db_utils.starting_attack_paths_scan")
@patch("tasks.jobs.attack_paths.scan.findings.analysis")
@patch("tasks.jobs.attack_paths.scan.findings.analysis", return_value=(0, 0))
@patch("tasks.jobs.attack_paths.scan.internet.analysis")
@patch("tasks.jobs.attack_paths.scan.indexes.create_findings_indexes")
@patch("tasks.jobs.attack_paths.scan.cartography_analysis.run")
@@ -287,7 +290,7 @@ class TestAttackPathsRun:
@patch("tasks.jobs.attack_paths.scan.db_utils.set_provider_graph_data_ready")
@patch("tasks.jobs.attack_paths.scan.db_utils.update_attack_paths_scan_progress")
@patch("tasks.jobs.attack_paths.scan.db_utils.starting_attack_paths_scan")
@patch("tasks.jobs.attack_paths.scan.findings.analysis")
@patch("tasks.jobs.attack_paths.scan.findings.analysis", return_value=(0, 0))
@patch("tasks.jobs.attack_paths.scan.internet.analysis")
@patch("tasks.jobs.attack_paths.scan.indexes.create_findings_indexes")
@patch("tasks.jobs.attack_paths.scan.cartography_analysis.run")
@@ -390,7 +393,7 @@ class TestAttackPathsRun:
@patch("tasks.jobs.attack_paths.scan.db_utils.set_provider_graph_data_ready")
@patch("tasks.jobs.attack_paths.scan.db_utils.update_attack_paths_scan_progress")
@patch("tasks.jobs.attack_paths.scan.db_utils.starting_attack_paths_scan")
@patch("tasks.jobs.attack_paths.scan.findings.analysis")
@patch("tasks.jobs.attack_paths.scan.findings.analysis", return_value=(0, 0))
@patch("tasks.jobs.attack_paths.scan.internet.analysis")
@patch("tasks.jobs.attack_paths.scan.indexes.create_findings_indexes")
@patch("tasks.jobs.attack_paths.scan.cartography_analysis.run")
@@ -489,14 +492,17 @@ class TestAttackPathsRun:
@patch("tasks.jobs.attack_paths.scan.db_utils.set_provider_graph_data_ready")
@patch("tasks.jobs.attack_paths.scan.db_utils.update_attack_paths_scan_progress")
@patch("tasks.jobs.attack_paths.scan.db_utils.starting_attack_paths_scan")
@patch("tasks.jobs.attack_paths.scan.sync.sync_graph")
@patch(
"tasks.jobs.attack_paths.scan.sync.sync_graph",
return_value={"nodes": 0, "relationships": 0},
)
@patch(
"tasks.jobs.attack_paths.scan.graph_database.drop_subgraph",
side_effect=RuntimeError("drop failed"),
)
@patch("tasks.jobs.attack_paths.scan.indexes.create_sync_indexes")
@patch("tasks.jobs.attack_paths.scan.internet.analysis")
@patch("tasks.jobs.attack_paths.scan.findings.analysis")
@patch("tasks.jobs.attack_paths.scan.findings.analysis", return_value=(0, 0))
@patch("tasks.jobs.attack_paths.scan.indexes.create_findings_indexes")
@patch("tasks.jobs.attack_paths.scan.cartography_ontology.run")
@patch("tasks.jobs.attack_paths.scan.cartography_analysis.run")
@@ -609,7 +615,7 @@ class TestAttackPathsRun:
@patch("tasks.jobs.attack_paths.scan.graph_database.drop_subgraph")
@patch("tasks.jobs.attack_paths.scan.indexes.create_sync_indexes")
@patch("tasks.jobs.attack_paths.scan.internet.analysis")
@patch("tasks.jobs.attack_paths.scan.findings.analysis")
@patch("tasks.jobs.attack_paths.scan.findings.analysis", return_value=(0, 0))
@patch("tasks.jobs.attack_paths.scan.indexes.create_findings_indexes")
@patch("tasks.jobs.attack_paths.scan.cartography_ontology.run")
@patch("tasks.jobs.attack_paths.scan.cartography_analysis.run")
@@ -718,11 +724,14 @@ class TestAttackPathsRun:
@patch("tasks.jobs.attack_paths.scan.db_utils.set_provider_graph_data_ready")
@patch("tasks.jobs.attack_paths.scan.db_utils.update_attack_paths_scan_progress")
@patch("tasks.jobs.attack_paths.scan.db_utils.starting_attack_paths_scan")
@patch("tasks.jobs.attack_paths.scan.sync.sync_graph")
@patch(
"tasks.jobs.attack_paths.scan.sync.sync_graph",
return_value={"nodes": 0, "relationships": 0},
)
@patch("tasks.jobs.attack_paths.scan.graph_database.drop_subgraph")
@patch("tasks.jobs.attack_paths.scan.indexes.create_sync_indexes")
@patch("tasks.jobs.attack_paths.scan.internet.analysis")
@patch("tasks.jobs.attack_paths.scan.findings.analysis")
@patch("tasks.jobs.attack_paths.scan.findings.analysis", return_value=(0, 0))
@patch("tasks.jobs.attack_paths.scan.indexes.create_findings_indexes")
@patch("tasks.jobs.attack_paths.scan.cartography_ontology.run")
@patch("tasks.jobs.attack_paths.scan.cartography_analysis.run")
@@ -833,14 +842,17 @@ class TestAttackPathsRun:
@patch("tasks.jobs.attack_paths.scan.db_utils.set_provider_graph_data_ready")
@patch("tasks.jobs.attack_paths.scan.db_utils.update_attack_paths_scan_progress")
@patch("tasks.jobs.attack_paths.scan.db_utils.starting_attack_paths_scan")
@patch("tasks.jobs.attack_paths.scan.sync.sync_graph")
@patch(
"tasks.jobs.attack_paths.scan.sync.sync_graph",
return_value={"nodes": 0, "relationships": 0},
)
@patch(
"tasks.jobs.attack_paths.scan.graph_database.drop_subgraph",
side_effect=RuntimeError("drop failed"),
)
@patch("tasks.jobs.attack_paths.scan.indexes.create_sync_indexes")
@patch("tasks.jobs.attack_paths.scan.internet.analysis")
@patch("tasks.jobs.attack_paths.scan.findings.analysis")
@patch("tasks.jobs.attack_paths.scan.findings.analysis", return_value=(0, 0))
@patch("tasks.jobs.attack_paths.scan.indexes.create_findings_indexes")
@patch("tasks.jobs.attack_paths.scan.cartography_ontology.run")
@patch("tasks.jobs.attack_paths.scan.cartography_analysis.run")
@@ -1274,10 +1286,6 @@ class TestAttackPathsFindingsHelpers:
mock_session = MagicMock()
with (
patch(
"tasks.jobs.attack_paths.findings.get_root_node_label",
return_value="AWSAccount",
),
patch(
"tasks.jobs.attack_paths.findings.get_node_uid_field",
return_value="arn",
@@ -1294,27 +1302,9 @@ class TestAttackPathsFindingsHelpers:
assert mock_session.run.call_count == 2
for call_args in mock_session.run.call_args_list:
params = call_args.args[1]
assert params["provider_uid"] == str(provider.uid)
assert params["last_updated"] == config.update_tag
assert "findings_data" in params
def test_cleanup_findings_runs_batches(self, providers_fixture):
provider = providers_fixture[0]
config = SimpleNamespace(update_tag=1024)
mock_session = MagicMock()
first_batch = MagicMock()
first_batch.single.return_value = {"deleted_findings_count": 3}
second_batch = MagicMock()
second_batch.single.return_value = {"deleted_findings_count": 0}
mock_session.run.side_effect = [first_batch, second_batch]
findings_module.cleanup_findings(mock_session, provider, config)
assert mock_session.run.call_count == 2
params = mock_session.run.call_args.args[1]
assert params["last_updated"] == config.update_tag
def test_stream_findings_with_resources_returns_latest_scan_data(
self,
tenants_fixture,
@@ -1690,10 +1680,6 @@ class TestAttackPathsFindingsHelpers:
yield # Make it a generator
with (
patch(
"tasks.jobs.attack_paths.findings.get_root_node_label",
return_value="AWSAccount",
),
patch(
"tasks.jobs.attack_paths.findings.get_node_uid_field",
return_value="arn",
@@ -34,6 +34,10 @@ spec:
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.worker.initContainers }}
initContainers:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: worker
{{- with .Values.worker.securityContext }}
@@ -32,6 +32,10 @@ spec:
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.worker_beat.initContainers }}
initContainers:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: worker-beat
{{- with .Values.worker_beat.securityContext }}
+1 -1
View File
@@ -220,7 +220,7 @@ def _send_prowler_results(prowler_results, _prowler_version, options):
try:
_debug("RESULT MSG --- {0}".format(_check_result), 2)
_check_result = json.loads(TEMPLATE_CHECK.format(_check_result))
except:
except Exception:
_debug(
"INVALID JSON --- {0}".format(TEMPLATE_CHECK.format(_check_result)), 1
)
+17 -10
View File
@@ -1,4 +1,11 @@
services:
api-dev-init:
image: busybox:1.37.0
volumes:
- ./_data/api:/data
command: ["sh", "-c", "chown -R 1000:1000 /data"]
restart: "no"
api-dev:
hostname: "prowler-api"
image: prowler-api-dev
@@ -21,12 +28,20 @@ services:
- ./_data/api:/home/prowler/.config/prowler-api
- outputs:/tmp/prowler_api_output
depends_on:
api-dev-init:
condition: service_completed_successfully
postgres:
condition: service_healthy
valkey:
condition: service_healthy
neo4j:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "wget -q -O /dev/null http://127.0.0.1:${DJANGO_PORT:-8080}/api/v1/ || exit 1"]
interval: 10s
timeout: 5s
retries: 12
start_period: 60s
entrypoint:
- "/home/prowler/docker-entrypoint.sh"
- "dev"
@@ -139,11 +154,7 @@ services:
- ./api/docker-entrypoint.sh:/home/prowler/docker-entrypoint.sh
- outputs:/tmp/prowler_api_output
depends_on:
valkey:
condition: service_healthy
postgres:
condition: service_healthy
neo4j:
api-dev:
condition: service_healthy
ulimits:
nofile:
@@ -165,11 +176,7 @@ services:
- path: ./.env
required: false
depends_on:
valkey:
condition: service_healthy
postgres:
condition: service_healthy
neo4j:
api-dev:
condition: service_healthy
ulimits:
nofile:
+17 -6
View File
@@ -5,6 +5,13 @@
# docker compose -f docker-compose-dev.yml up
#
services:
api-init:
image: busybox:1.37.0
volumes:
- ./_data/api:/data
command: ["sh", "-c", "chown -R 1000:1000 /data"]
restart: "no"
api:
hostname: "prowler-api"
image: prowlercloud/prowler-api:${PROWLER_API_VERSION:-stable}
@@ -17,12 +24,20 @@ services:
- ./_data/api:/home/prowler/.config/prowler-api
- output:/tmp/prowler_api_output
depends_on:
api-init:
condition: service_completed_successfully
postgres:
condition: service_healthy
valkey:
condition: service_healthy
neo4j:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "wget -q -O /dev/null http://127.0.0.1:${DJANGO_PORT:-8080}/api/v1/ || exit 1"]
interval: 10s
timeout: 5s
retries: 12
start_period: 60s
entrypoint:
- "/home/prowler/docker-entrypoint.sh"
- "prod"
@@ -114,9 +129,7 @@ services:
volumes:
- "output:/tmp/prowler_api_output"
depends_on:
valkey:
condition: service_healthy
postgres:
api:
condition: service_healthy
ulimits:
nofile:
@@ -132,9 +145,7 @@ services:
- path: ./.env
required: false
depends_on:
valkey:
condition: service_healthy
postgres:
api:
condition: service_healthy
ulimits:
nofile:
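The new healthcheck gates `worker` and `worker-beat` on the API actually answering HTTP, not merely on its container starting. A rough Python equivalent of the `wget` probe, handy for local debugging (URL and retry values mirror the compose settings above):
```python
import time
import urllib.request

def wait_for_api(url="http://127.0.0.1:8080/api/v1/", retries=12, interval=10):
    """Poll the API root until it responds, like the compose healthcheck."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=5):
                return True
        except OSError:
            time.sleep(interval)
    return False
```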
+7 -3
View File
@@ -118,18 +118,22 @@ In case you have any doubts, consult the [Poetry environment activation guide](h
### Pre-Commit Hooks
This repository uses Git pre-commit hooks managed by the [pre-commit](https://pre-commit.com/) tool, it is installed with `poetry install --with dev`. Next, run the following command in the root of this repository:
This repository uses Git pre-commit hooks managed by the [prek](https://prek.j178.dev/) tool, which is installed with `poetry install --with dev`. Next, run the following command in the root of this repository:
```shell
pre-commit install
prek install
```
Successful installation should produce the following output:
```shell
pre-commit installed at .git/hooks/pre-commit
prek installed at `.git/hooks/pre-commit`
```
<Warning>
If pre-commit hooks were previously installed, run `prek install --overwrite` to replace the existing hook. Otherwise, both tools will run on each commit.
</Warning>
### Code Quality and Security Checks
Before merging pull requests, several automated checks and utilities ensure code security and updated dependencies:
@@ -15,8 +15,7 @@ This document describes the internal architecture of Prowler Lighthouse AI, enab
Lighthouse AI operates as a Langchain-based agent that connects Large Language Models (LLMs) with Prowler security data through the Model Context Protocol (MCP).
<img className="block dark:hidden" src="/images/lighthouse-architecture-light.png" alt="Prowler Lighthouse Architecture" />
<img className="hidden dark:block" src="/images/lighthouse-architecture-dark.png" alt="Prowler Lighthouse Architecture" />
![Prowler Lighthouse Architecture](/images/lighthouse-architecture.png)
### Three-Tier Architecture
+19
View File
@@ -12,6 +12,24 @@
"dark": "/images/prowler-logo-white.png",
"light": "/images/prowler-logo-black.png"
},
"contextual": {
"options": [
"copy",
"view",
{
"title": "Request a feature",
"description": "Open a feature request on GitHub",
"icon": "plus",
"href": "https://github.com/prowler-cloud/prowler/issues/new?template=feature-request.yml"
},
{
"title": "Report an issue",
"description": "Open a bug report on GitHub",
"icon": "bug",
"href": "https://github.com/prowler-cloud/prowler/issues/new?template=bug_report.yml"
}
]
},
"navigation": {
"tabs": [
{
@@ -133,6 +151,7 @@
]
},
"user-guide/tutorials/prowler-app-attack-paths",
"user-guide/tutorials/prowler-app-finding-groups",
"user-guide/tutorials/prowler-cloud-public-ips",
{
"group": "Tutorials",
@@ -121,8 +121,8 @@ To update the environment file:
Edit the `.env` file and change version values:
```env
PROWLER_UI_VERSION="5.22.0"
PROWLER_API_VERSION="5.22.0"
PROWLER_UI_VERSION="5.24.1"
PROWLER_API_VERSION="5.24.1"
```
<Note>
@@ -59,6 +59,10 @@ Prowler Lighthouse AI is powerful, but there are limitations:
- **NextJS session dependence**: If your Prowler application session expires or you log out, Lighthouse AI will error out. Refresh and log back in to continue.
- **Response quality**: Response quality depends on the selected LLM provider and model. Choose models with strong tool-calling capabilities for best results. We recommend the `gpt-5` model from OpenAI.
## Architecture
![Prowler Lighthouse Architecture](/images/lighthouse-architecture.png)
## Extending Lighthouse AI
Lighthouse AI retrieves data through Prowler MCP. To add new capabilities, extend the Prowler MCP Server with additional tools, and Lighthouse AI will discover them automatically, as the sketch below illustrates.
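As a rough illustration (using the MCP Python SDK; the server name and tool below are hypothetical, not part of Prowler MCP), a new tool is a decorated function on the server:
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("prowler-extras")  # hypothetical extension server

@mcp.tool()
def count_failing_checks(provider: str) -> int:
    """Return how many checks are failing for a provider (illustrative stub)."""
    # A real tool would query the Prowler API here.
    return 0

if __name__ == "__main__":
    mcp.run()
```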
@@ -46,8 +46,7 @@ Search and retrieve official Prowler documentation:
The following diagram illustrates the Prowler MCP Server architecture and its integration points:
<img className="block dark:hidden" src="/images/prowler_mcp_schema_light.png" alt="Prowler MCP Server Schema" />
<img className="hidden dark:block" src="/images/prowler_mcp_schema_dark.png" alt="Prowler MCP Server Schema" />
![Prowler MCP Server Schema](/images/prowler_mcp_schema.png)
The architecture shows how AI assistants connect through the MCP protocol to access Prowler's three main components:
- Prowler Cloud/App for security operations
Binary file not shown. (After: 755 KiB)
Binary file not shown. (After: 146 KiB)
Binary file not shown. (After: 340 KiB)
Binary file not shown. (After: 410 KiB)
Binary file not shown. (Before: 267 KiB)
Binary file not shown. (Before: 265 KiB)
+37
View File
@@ -0,0 +1,37 @@
flowchart TB
browser([Browser])
subgraph NEXTJS["Next.js Server"]
route["API Route<br/>(auth + context assembly)"]
agent["LangChain Agent"]
subgraph TOOLS["Agent Tools"]
metatools["Meta-tools<br/>describe_tool / execute_tool / load_skill"]
end
mcpclient["MCP Client<br/>(HTTP transport)"]
end
llm["LLM Provider<br/>(OpenAI / Bedrock / OpenAI-compatible)"]
subgraph MCP["Prowler MCP Server"]
app_tools["prowler_app_* tools<br/>(auth required)"]
hub_tools["prowler_hub_* tools<br/>(no auth)"]
docs_tools["prowler_docs_* tools<br/>(no auth)"]
end
api["Prowler API"]
hub["hub.prowler.com"]
docs["docs.prowler.com<br/>(Mintlify)"]
browser <-->|SSE stream| route
route --> agent
agent <-->|LLM API| llm
agent --> metatools
metatools --> mcpclient
mcpclient -->|MCP HTTP · Bearer token<br/>for prowler_app_* only| app_tools
mcpclient -->|MCP HTTP| hub_tools
mcpclient -->|MCP HTTP| docs_tools
app_tools -->|REST| api
hub_tools -->|REST| hub
docs_tools -->|REST| docs
Binary file not shown. (After: 286 KiB)
@@ -23,6 +23,8 @@ flowchart TB
user --> ui
user --> cli
ui -->|REST| api
ui -->|MCP HTTP| mcp
mcp -->|REST| api
api --> pg
api --> valkey
beat -->|enqueue jobs| valkey
@@ -31,7 +33,5 @@ flowchart TB
worker -->|Attack Paths| neo4j
worker -->|invokes| sdk
cli --> sdk
api -. AI tools .-> mcp
mcp -. context .-> api
sdk --> providers
Binary file not shown. (Before: 268 KiB, After: 348 KiB)
+29
View File
@@ -0,0 +1,29 @@
flowchart LR
subgraph HOSTS["MCP Hosts"]
chat["Chat Interfaces<br/>(Claude Desktop, LobeChat)"]
ide["IDEs and Code Editors<br/>(Claude Code, Cursor)"]
apps["Other AI Applications<br/>(5ire, custom agents)"]
end
subgraph MCP["Prowler MCP Server"]
app_tools["prowler_app_* tools<br/>(JWT or API key auth)<br/>Findings · Providers · Scans<br/>Resources · Muting · Compliance<br/>Attack Paths"]
hub_tools["prowler_hub_* tools<br/>(no auth)<br/>Checks Catalog · Check Code<br/>Fixers · Compliance Frameworks"]
docs_tools["prowler_docs_* tools<br/>(no auth)<br/>Search · Document Retrieval"]
end
api["Prowler API<br/>(REST)"]
hub["hub.prowler.com<br/>(REST)"]
docs["docs.prowler.com<br/>(Mintlify)"]
chat -->|STDIO or HTTP| app_tools
chat -->|STDIO or HTTP| hub_tools
chat -->|STDIO or HTTP| docs_tools
ide -->|STDIO or HTTP| app_tools
ide -->|STDIO or HTTP| hub_tools
ide -->|STDIO or HTTP| docs_tools
apps -->|STDIO or HTTP| app_tools
apps -->|STDIO or HTTP| hub_tools
apps -->|STDIO or HTTP| docs_tools
app_tools -->|REST| api
hub_tools -->|REST| hub
docs_tools -->|REST| docs
Binary file not shown. (After: 371 KiB)
Binary file not shown. (Before: 328 KiB)
Binary file not shown. (Before: 332 KiB)
@@ -33,6 +33,41 @@ To scan a particular AWS region with Prowler, use:
prowler aws -f/--region eu-west-1 us-east-1
```
### Excluding Specific Regions
To scan all supported AWS regions except a specific subset, use the `--excluded-region` flag:
```console
prowler aws --excluded-region eu-west-1 me-south-1
```
You can also configure the exclusion list with the `PROWLER_AWS_DISALLOWED_REGIONS` environment variable as a comma-separated list:
```console
export PROWLER_AWS_DISALLOWED_REGIONS="eu-west-1,me-south-1"
prowler aws
```
Or with the AWS provider configuration in `config.yaml`:
```yaml
aws:
disallowed_regions:
- eu-west-1
- me-south-1
```
When more than one source is set, precedence is as follows (see the sketch after the note below):
1. `--excluded-region`
2. `PROWLER_AWS_DISALLOWED_REGIONS`
3. `aws.disallowed_regions` in `config.yaml`
<Note>
For self-hosted App or API-triggered scans, set `PROWLER_AWS_DISALLOWED_REGIONS` in the runtime environment of the backend scan containers such as `api` and `worker`. The `ui` container does not enforce AWS region selection.
</Note>
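A minimal sketch of this precedence, with the CLI and config values passed in by hypothetical callers:
```python
import os

def resolve_excluded_regions(cli_regions=None, config_regions=None):
    """Resolve AWS region exclusions: CLI flag, then env var, then config.yaml."""
    if cli_regions:                    # 1. --excluded-region
        return list(cli_regions)
    env_value = os.environ.get("PROWLER_AWS_DISALLOWED_REGIONS", "")
    if env_value:                      # 2. PROWLER_AWS_DISALLOWED_REGIONS
        return [r.strip() for r in env_value.split(",") if r.strip()]
    return list(config_regions or [])  # 3. aws.disallowed_regions in config.yaml
```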
### AWS Credentials Configuration
For details on configuring AWS credentials, refer to the following [Botocore](https://github.com/boto/botocore) [file](https://github.com/boto/botocore/blob/22a19ea7c4c2c4dd7df4ab8c32733cba0c7597a4/botocore/data/partitions.json).
@@ -0,0 +1,119 @@
---
title: 'Finding Groups'
description: 'Organize and triage security findings by check to reduce noise and prioritize remediation effectively.'
---
import { VersionBadge } from "/snippets/version-badge.mdx"
<VersionBadge version="5.23.0" />
Finding Groups transforms security findings triage by grouping findings by check instead of displaying them as a flat list. This dramatically reduces noise and enables faster, more effective prioritization.
## Triage Challenges with Flat Finding Lists
A real cloud environment produces thousands of findings per scan. A flat list makes it impossible to triage effectively:
- **Signal buried in noise**: the same misconfiguration repeated across 200 resources shows up as 200 rows, burying the signal in repetitive data
- **Prioritization guesswork**: without grouping, understanding which issues affect the most resources requires manual counting and correlation
- **Tedious muting**: muting a false positive globally requires manually acting on each individual finding across the list
- **Lost context**: when investigating a single resource, related findings are scattered across the same flat list, making it hard to see the full picture
## How Finding Groups Addresses These Challenges
Finding Groups addresses these challenges by grouping findings by check.
### Grouped View at a Glance
Each row represents a single check, with key information immediately visible:
- **Severity** indicator for quick risk assessment
- **Impacted providers** showing which cloud platforms are affected
- **X of Y impacted resources** counter displaying how many resources fail this check
For example, `Vercel project has the Web Application Firewall enabled` across every affected project collapses to a single row — not one per project. Sort or filter by severity, provider, or status at the group level to triage top-down instead of drowning in per-resource rows.
![Finding Groups list view](/images/finding-groups-list.png)
### Expanding Groups for Details
Expand any group inline to see the failing resources with detailed information:
| Column | Description |
|--------|-------------|
| **UID** | Unique identifier for the resource |
| **Service** | The cloud service the resource belongs to |
| **Region** | Geographic region where the resource is deployed |
| **Severity** | Risk level of the finding |
| **Provider** | Cloud provider (AWS, Azure, GCP, Kubernetes, etc.) |
| **Last Seen** | When the finding was last detected |
| **Failing For** | Duration the resource has been in a failing state |
![Finding Groups expanded view](/images/finding-groups-expanded.png)
### Resource Detail Drawer
Select any resource to open the detail drawer with full finding context:
- **Risk**: the security risk associated with this finding
- **Description**: detailed explanation of what was detected
- **Status Extended**: additional status information and context
- **Remediation**: step-by-step guidance to resolve the issue
- **View in Prowler Hub**: direct link to explore the check in Prowler Hub
- **Analyze This Finding With Lighthouse AI**: one-click AI-powered analysis for deeper insights
![Finding Groups resource detail drawer](/images/finding-groups-drawer.png)
### Bulk Actions
Bulk-mute an entire group instead of chasing duplicates across the list. This is especially useful for:
- Known false positives that appear across many resources
- Findings in development or test environments
- Accepted risks that have been documented and approved
<Warning>
Muting findings does not resolve underlying security issues. Review each finding carefully before muting to ensure it represents an acceptable risk or has been properly addressed.
</Warning>
## Other Findings for This Resource
Inside the resource detail drawer, the **Other Findings For This Resource** tab lists every finding that hits the same resource — passing, failing, and muted — alongside the one currently being reviewed.
![Other Findings For This Resource tab](/images/finding-groups-other-findings.png)
### Why This Matters
When reviewing "WAF not enabled" on a Vercel project, the tab immediately shows:
- Skew protection status
- Rate limiting configuration
- IP blocking settings
- Custom firewall rules
- Password protection findings
All for that same project, without navigating back to the main list and filtering by resource UID.
### Complete Context Within the Drawer
Pair the Other Findings tab with:
- **Scans tab**: scan history for this resource
- **Events tab**: changes and events over time
This provides full context without leaving the drawer.
## Best Practices
1. **Start with high severity groups**: focus on critical and high severity groups first for maximum impact.
2. **Use filters strategically**: filter by provider or status at the group level to narrow the triage scope.
3. **Leverage bulk mute**: when a finding represents a confirmed false positive, mute the entire group at once.
4. **Check related findings**: review the Other Findings tab to understand the full security posture of a resource.
5. **Track failure duration**: use the "Failing For" column to prioritize long-standing issues that may indicate systemic problems.
## Getting Started
1. Navigate to the **Findings** section in Prowler Cloud/App.
2. Toggle to the **Grouped View** to see findings organized by check.
3. Select any group row to expand and see affected resources.
4. Select a resource to open the detail drawer with full context.
5. Use the **Other Findings For This Resource** tab to see all findings for that resource.

Some files were not shown because too many files have changed in this diff.