Compare commits

...

35 Commits

Author SHA1 Message Date
Pepe Fagoaga e07e45c8e5 chore(api): update lock for SDK 2025-12-23 16:28:14 +01:00
Pepe Fagoaga a37aea84e7 chore: changelog for v5.16.1 (#9661) 2025-12-23 12:51:47 +01:00
Pedro Martín 8d1d041092 chore(aws): support new eusc partition (#9649)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-12-23 12:28:10 +01:00
Rubén De la Torre Vico 6f018183cd ci(mcp): add GitHub Actions workflow for PyPI release (#9660) 2025-12-23 12:27:08 +01:00
Pedro Martín 8ce56b5ed6 feat(ui): add search bar when adding a provider (#9634)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-12-23 12:09:55 +01:00
lydiavilchez ad5095595c feat(gcp): add compute check to ensure VM disks have auto-delete disabled (#9604)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-12-23 10:57:11 +01:00
Alejandro Bailo 3fbe157d10 feat(ui): add shadcn Alert component (#9655)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-23 10:52:48 +01:00
Rubén De la Torre Vico 83d04753ef docs: add resource types for new providers (#9113) 2025-12-23 10:19:53 +01:00
Ulissis Correa de8e2219c2 fix(ui): add API docs URL build arg for self-hosted deployments (#9388)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-12-23 09:54:04 +01:00
dependabot[bot] 2850c40dd5 build(deps): bump trufflesecurity/trufflehog from 3.90.12 to 3.91.1 (#9395)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-23 09:51:30 +01:00
dependabot[bot] e213afd4e1 build(deps): bump aws-actions/configure-aws-credentials from 5.1.0 to 5.1.1 (#9392)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-23 09:50:49 +01:00
dependabot[bot] deada62d66 build(deps): bump peter-evans/repository-dispatch from 4.0.0 to 4.0.1 (#9391)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-23 09:50:36 +01:00
dependabot[bot] b8d9860a2f build(deps): bump github/codeql-action from 4.31.2 to 4.31.6 (#9393)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-23 09:38:13 +01:00
Pedro Martín be759216c4 fix(compliance): handle ZeroDivision error from Prowler ThreatScore (#9653) 2025-12-23 09:29:14 +01:00
dependabot[bot] ca9211b5ed build(deps): bump actions/setup-python from 6.0.0 to 6.1.0 (#9390)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-23 09:26:54 +01:00
dependabot[bot] 3cf7f7845e build(deps): bump actions/checkout from 5.0.0 to 6.0.0 (#9397)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-23 09:20:19 +01:00
Ryan Nolette 81e046ecf6 feat(bedrock): API pagination (#9606)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-12-23 09:06:19 +01:00
Ryan Nolette 0d363e6100 feat(sagemaker): parallelize tag listing for better performance (#9609)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-12-23 08:51:16 +01:00
Pepe Fagoaga 0719e31b58 chore(security-hub): handle SecurityHubNoEnabledRegionsError (#9635) 2025-12-22 16:50:36 +01:00
StylusFrost 19ceb7db88 docs: add end-to-end testing documentation for Prowler App (#9557) 2025-12-22 16:39:53 +01:00
lydiavilchez 43875b6ae7 feat(gcp): add check to ensure Managed Instance Groups span multiple zones (#9566)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-12-22 15:12:08 +01:00
Adrián Peña 641dc78c3a fix(api): add cleanup for orphan scheduled scans caused by transaction isolation (#9633) 2025-12-22 14:11:50 +01:00
Prowler Bot 57b9a2ea10 feat(aws): Update regions for AWS services (#9631)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
Co-authored-by: pedrooot <pedromarting3@gmail.com>
2025-12-22 13:31:58 +01:00
Rubén De la Torre Vico 19e9a9965b chore(aws): enhance metadata for secretsmanager service (#9408)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-12-22 13:20:46 +01:00
Pedro Martín 3eb2595f6d feat(api): support alibabacloud provider (#9485) 2025-12-22 12:46:50 +01:00
Rubén De la Torre Vico d776356d16 chore(aws): enhance metadata for shield service (#9427)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-12-22 12:33:55 +01:00
Rubén De la Torre Vico 5118d0ecb4 chore(lighthouse): change meta tools descriptions to be more accurate (#9632) 2025-12-22 10:57:04 +01:00
mchennai df8e465366 fix(s3): remediation URL for s3_bucket_object_versioning (#9605) 2025-12-22 09:53:07 +01:00
César Arroba f4a78d64f1 chore(github): bump version for API, UI and Docs (#9601) 2025-12-22 09:35:00 +01:00
Alejandro Bailo e5cd25e60c docs: simple mutelist added and advanced changed (#9600) 2025-12-19 16:01:21 +01:00
Rubén De la Torre Vico 7d963751aa chore(aws): enhance metadata for sqs service (#9429)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-12-19 11:18:50 +01:00
Rubén De la Torre Vico fa4371bbf6 chore(aws): enhance metadata for route53 service (#9406)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-12-19 11:00:05 +01:00
Rubén De la Torre Vico ff6fbcbf48 chore(aws): enhance metadata for stepfunctions service (#9432)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-12-19 10:39:29 +01:00
Pedro Martín 9bf3702d71 feat(compliance): add Prowler ThreatScore for the AlibabaCloud provider (#9511) 2025-12-19 09:36:42 +01:00
Prowler Bot ec32be2f1d chore(release): Bump version to v5.17.0 (#9597)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-12-18 18:38:31 +01:00
158 changed files with 11162 additions and 2601 deletions
+1 -1
View File
@@ -119,7 +119,7 @@ NEXT_PUBLIC_SENTRY_ENVIRONMENT=${SENTRY_ENVIRONMENT}
#### Prowler release version ####
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.12.2
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.16.0
# Social login credentials
SOCIAL_GOOGLE_OAUTH_CALLBACK_URL="${AUTH_URL}/api/auth/callback/google"
+254
View File
@@ -0,0 +1,254 @@
name: 'API: Bump Version'
on:
release:
types:
- 'published'
concurrency:
group: ${{ github.workflow }}-${{ github.event.release.tag_name }}
cancel-in-progress: false
env:
PROWLER_VERSION: ${{ github.event.release.tag_name }}
BASE_BRANCH: master
jobs:
detect-release-type:
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
contents: read
outputs:
is_minor: ${{ steps.detect.outputs.is_minor }}
is_patch: ${{ steps.detect.outputs.is_patch }}
major_version: ${{ steps.detect.outputs.major_version }}
minor_version: ${{ steps.detect.outputs.minor_version }}
patch_version: ${{ steps.detect.outputs.patch_version }}
current_api_version: ${{ steps.get_api_version.outputs.current_api_version }}
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get current API version
id: get_api_version
run: |
CURRENT_API_VERSION=$(grep -oP '^version = "\K[^"]+' api/pyproject.toml)
echo "current_api_version=${CURRENT_API_VERSION}" >> "${GITHUB_OUTPUT}"
echo "Current API version: $CURRENT_API_VERSION"
- name: Detect release type and parse version
id: detect
run: |
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
MAJOR_VERSION=${BASH_REMATCH[1]}
MINOR_VERSION=${BASH_REMATCH[2]}
PATCH_VERSION=${BASH_REMATCH[3]}
echo "major_version=${MAJOR_VERSION}" >> "${GITHUB_OUTPUT}"
echo "minor_version=${MINOR_VERSION}" >> "${GITHUB_OUTPUT}"
echo "patch_version=${PATCH_VERSION}" >> "${GITHUB_OUTPUT}"
if (( MAJOR_VERSION != 5 )); then
echo "::error::Releasing another Prowler major version, aborting..."
exit 1
fi
if (( PATCH_VERSION == 0 )); then
echo "is_minor=true" >> "${GITHUB_OUTPUT}"
echo "is_patch=false" >> "${GITHUB_OUTPUT}"
echo "✓ Minor release detected: $PROWLER_VERSION"
else
echo "is_minor=false" >> "${GITHUB_OUTPUT}"
echo "is_patch=true" >> "${GITHUB_OUTPUT}"
echo "✓ Patch release detected: $PROWLER_VERSION"
fi
else
echo "::error::Invalid version syntax: '$PROWLER_VERSION' (must be X.Y.Z)"
exit 1
fi
bump-minor-version:
needs: detect-release-type
if: needs.detect-release-type.outputs.is_minor == 'true'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Calculate next API minor version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
CURRENT_API_VERSION="${{ needs.detect-release-type.outputs.current_api_version }}"
# API version follows Prowler minor + 1
# For Prowler 5.17.0 -> API 1.18.0
# For next master (Prowler 5.18.0) -> API 1.19.0
NEXT_API_VERSION=1.$((MINOR_VERSION + 2)).0
echo "CURRENT_API_VERSION=${CURRENT_API_VERSION}" >> "${GITHUB_ENV}"
echo "NEXT_API_VERSION=${NEXT_API_VERSION}" >> "${GITHUB_ENV}"
echo "Prowler release version: ${MAJOR_VERSION}.${MINOR_VERSION}.0"
echo "Current API version: $CURRENT_API_VERSION"
echo "Next API minor version (for master): $NEXT_API_VERSION"
- name: Bump API versions in files for master
run: |
set -e
sed -i "s|version = \"${CURRENT_API_VERSION}\"|version = \"${NEXT_API_VERSION}\"|" api/pyproject.toml
sed -i "s|spectacular_settings.VERSION = \"${CURRENT_API_VERSION}\"|spectacular_settings.VERSION = \"${NEXT_API_VERSION}\"|" api/src/backend/api/v1/views.py
sed -i "s| version: ${CURRENT_API_VERSION}| version: ${NEXT_API_VERSION}|" api/src/backend/api/specs/v1.yaml
echo "Files modified:"
git --no-pager diff
- name: Create PR for next API minor version to master
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: master
commit-message: 'chore(api): Bump version to v${{ env.NEXT_API_VERSION }}'
branch: api-version-bump-to-v${{ env.NEXT_API_VERSION }}
title: 'chore(api): Bump version to v${{ env.NEXT_API_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
Bump Prowler API version to v${{ env.NEXT_API_VERSION }} after releasing Prowler v${{ env.PROWLER_VERSION }}.
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
- name: Checkout version branch
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
ref: v${{ needs.detect-release-type.outputs.major_version }}.${{ needs.detect-release-type.outputs.minor_version }}
- name: Calculate first API patch version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
CURRENT_API_VERSION="${{ needs.detect-release-type.outputs.current_api_version }}"
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
# API version follows Prowler minor + 1
# For Prowler 5.17.0 release -> version branch v5.17 should have API 1.18.1
FIRST_API_PATCH_VERSION=1.$((MINOR_VERSION + 1)).1
echo "CURRENT_API_VERSION=${CURRENT_API_VERSION}" >> "${GITHUB_ENV}"
echo "FIRST_API_PATCH_VERSION=${FIRST_API_PATCH_VERSION}" >> "${GITHUB_ENV}"
echo "VERSION_BRANCH=${VERSION_BRANCH}" >> "${GITHUB_ENV}"
echo "Prowler release version: ${MAJOR_VERSION}.${MINOR_VERSION}.0"
echo "First API patch version (for ${VERSION_BRANCH}): $FIRST_API_PATCH_VERSION"
echo "Version branch: $VERSION_BRANCH"
- name: Bump API versions in files for version branch
run: |
set -e
sed -i "s|version = \"${CURRENT_API_VERSION}\"|version = \"${FIRST_API_PATCH_VERSION}\"|" api/pyproject.toml
sed -i "s|spectacular_settings.VERSION = \"${CURRENT_API_VERSION}\"|spectacular_settings.VERSION = \"${FIRST_API_PATCH_VERSION}\"|" api/src/backend/api/v1/views.py
sed -i "s| version: ${CURRENT_API_VERSION}| version: ${FIRST_API_PATCH_VERSION}|" api/src/backend/api/specs/v1.yaml
echo "Files modified:"
git --no-pager diff
- name: Create PR for first API patch version to version branch
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: 'chore(api): Bump version to v${{ env.FIRST_API_PATCH_VERSION }}'
branch: api-version-bump-to-v${{ env.FIRST_API_PATCH_VERSION }}
title: 'chore(api): Bump version to v${{ env.FIRST_API_PATCH_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
Bump Prowler API version to v${{ env.FIRST_API_PATCH_VERSION }} in version branch after releasing Prowler v${{ env.PROWLER_VERSION }}.
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
bump-patch-version:
needs: detect-release-type
if: needs.detect-release-type.outputs.is_patch == 'true'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Calculate next API patch version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
PATCH_VERSION=${{ needs.detect-release-type.outputs.patch_version }}
CURRENT_API_VERSION="${{ needs.detect-release-type.outputs.current_api_version }}"
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
# Extract current API patch to increment it
if [[ $CURRENT_API_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
API_PATCH=${BASH_REMATCH[3]}
# API version follows Prowler minor + 1
# Keep same API minor (based on Prowler minor), increment patch
NEXT_API_PATCH_VERSION=1.$((MINOR_VERSION + 1)).$((API_PATCH + 1))
echo "CURRENT_API_VERSION=${CURRENT_API_VERSION}" >> "${GITHUB_ENV}"
echo "NEXT_API_PATCH_VERSION=${NEXT_API_PATCH_VERSION}" >> "${GITHUB_ENV}"
echo "VERSION_BRANCH=${VERSION_BRANCH}" >> "${GITHUB_ENV}"
echo "Prowler release version: ${MAJOR_VERSION}.${MINOR_VERSION}.${PATCH_VERSION}"
echo "Current API version: $CURRENT_API_VERSION"
echo "Next API patch version: $NEXT_API_PATCH_VERSION"
echo "Target branch: $VERSION_BRANCH"
else
echo "::error::Invalid API version format: $CURRENT_API_VERSION"
exit 1
fi
- name: Bump API versions in files for version branch
run: |
set -e
sed -i "s|version = \"${CURRENT_API_VERSION}\"|version = \"${NEXT_API_PATCH_VERSION}\"|" api/pyproject.toml
sed -i "s|spectacular_settings.VERSION = \"${CURRENT_API_VERSION}\"|spectacular_settings.VERSION = \"${NEXT_API_PATCH_VERSION}\"|" api/src/backend/api/v1/views.py
sed -i "s| version: ${CURRENT_API_VERSION}| version: ${NEXT_API_PATCH_VERSION}|" api/src/backend/api/specs/v1.yaml
echo "Files modified:"
git --no-pager diff
- name: Create PR for next API patch version to version branch
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: 'chore(api): Bump version to v${{ env.NEXT_API_PATCH_VERSION }}'
branch: api-version-bump-to-v${{ env.NEXT_API_PATCH_VERSION }}
title: 'chore(api): Bump version to v${{ env.NEXT_API_PATCH_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
Bump Prowler API version to v${{ env.NEXT_API_PATCH_VERSION }} after releasing Prowler v${{ env.PROWLER_VERSION }}.
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
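The new 'API: Bump Version' workflow above encodes a fixed mapping between the Prowler release tag and the internal API version (API minor = Prowler minor + 1, only Prowler 5.x.y handled). Below is a minimal sketch of that arithmetic under the same rules stated in the workflow comments; the function and variable names are illustrative and not part of the repository.

import re

def next_api_versions(prowler_tag: str, current_api_version: str) -> dict:
    """Mirror the workflow's rules: only Prowler 5.x.y is handled and the
    API minor always equals the Prowler minor + 1."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)", prowler_tag)
    if not m:
        raise ValueError(f"Invalid version syntax: '{prowler_tag}' (must be X.Y.Z)")
    major, minor, patch = map(int, m.groups())
    if major != 5:
        raise ValueError("Releasing another Prowler major version, aborting...")
    if patch == 0:
        # Minor release: master moves ahead to the next API minor, while the
        # new version branch v5.<minor> starts at the first API patch.
        return {
            "master": f"1.{minor + 2}.0",
            f"v{major}.{minor}": f"1.{minor + 1}.1",
        }
    # Patch release: keep the API minor derived from the Prowler minor and
    # increment the patch of the API version currently in api/pyproject.toml.
    api_patch = int(current_api_version.split(".")[2])
    return {f"v{major}.{minor}": f"1.{minor + 1}.{api_patch + 1}"}

# Consistent with the comments in the workflow:
#   Prowler 5.17.0 -> master gets API 1.19.0, branch v5.17 gets API 1.18.1
#   Prowler 5.17.1 with current API 1.18.1 -> branch v5.17 gets API 1.18.2
print(next_api_versions("5.17.0", "1.18.0"))
print(next_api_versions("5.17.1", "1.18.1"))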
+1 -1
View File
@@ -33,7 +33,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check for API changes
id: check-changes
+3 -3
View File
@@ -42,15 +42,15 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Initialize CodeQL
uses: github/codeql-action/init@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
uses: github/codeql-action/init@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v4.31.9
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/api-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
uses: github/codeql-action/analyze@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v4.31.9
with:
category: '/language:${{ matrix.language }}'
@@ -57,7 +57,7 @@ jobs:
message-ts: ${{ steps.slack-notification.outputs.ts }}
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Notify container push started
id: slack-notification
@@ -93,7 +93,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Login to DockerHub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
@@ -170,7 +170,7 @@ jobs:
timeout-minutes: 5
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Determine overall outcome
id: outcome
@@ -207,7 +207,7 @@ jobs:
steps:
- name: Trigger API deployment
uses: peter-evans/repository-dispatch@5fc4efd1a4797ddb68ffd0714a238564e4cc0e6f # v4.0.0
uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
+2 -2
View File
@@ -28,7 +28,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check if Dockerfile changed
id: dockerfile-changed
@@ -63,7 +63,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check for API changes
id: check-changes
+1 -1
View File
@@ -33,7 +33,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check for API changes
id: check-changes
+1 -1
View File
@@ -73,7 +73,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check for API changes
id: check-changes
+247
View File
@@ -0,0 +1,247 @@
name: 'Docs: Bump Version'
on:
release:
types:
- 'published'
concurrency:
group: ${{ github.workflow }}-${{ github.event.release.tag_name }}
cancel-in-progress: false
env:
PROWLER_VERSION: ${{ github.event.release.tag_name }}
BASE_BRANCH: master
jobs:
detect-release-type:
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
contents: read
outputs:
is_minor: ${{ steps.detect.outputs.is_minor }}
is_patch: ${{ steps.detect.outputs.is_patch }}
major_version: ${{ steps.detect.outputs.major_version }}
minor_version: ${{ steps.detect.outputs.minor_version }}
patch_version: ${{ steps.detect.outputs.patch_version }}
current_docs_version: ${{ steps.get_docs_version.outputs.current_docs_version }}
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get current documentation version
id: get_docs_version
run: |
CURRENT_DOCS_VERSION=$(grep -oP 'PROWLER_UI_VERSION="\K[^"]+' docs/getting-started/installation/prowler-app.mdx)
echo "current_docs_version=${CURRENT_DOCS_VERSION}" >> "${GITHUB_OUTPUT}"
echo "Current documentation version: $CURRENT_DOCS_VERSION"
- name: Detect release type and parse version
id: detect
run: |
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
MAJOR_VERSION=${BASH_REMATCH[1]}
MINOR_VERSION=${BASH_REMATCH[2]}
PATCH_VERSION=${BASH_REMATCH[3]}
echo "major_version=${MAJOR_VERSION}" >> "${GITHUB_OUTPUT}"
echo "minor_version=${MINOR_VERSION}" >> "${GITHUB_OUTPUT}"
echo "patch_version=${PATCH_VERSION}" >> "${GITHUB_OUTPUT}"
if (( MAJOR_VERSION != 5 )); then
echo "::error::Releasing another Prowler major version, aborting..."
exit 1
fi
if (( PATCH_VERSION == 0 )); then
echo "is_minor=true" >> "${GITHUB_OUTPUT}"
echo "is_patch=false" >> "${GITHUB_OUTPUT}"
echo "✓ Minor release detected: $PROWLER_VERSION"
else
echo "is_minor=false" >> "${GITHUB_OUTPUT}"
echo "is_patch=true" >> "${GITHUB_OUTPUT}"
echo "✓ Patch release detected: $PROWLER_VERSION"
fi
else
echo "::error::Invalid version syntax: '$PROWLER_VERSION' (must be X.Y.Z)"
exit 1
fi
bump-minor-version:
needs: detect-release-type
if: needs.detect-release-type.outputs.is_minor == 'true'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Calculate next minor version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
CURRENT_DOCS_VERSION="${{ needs.detect-release-type.outputs.current_docs_version }}"
NEXT_MINOR_VERSION=${MAJOR_VERSION}.$((MINOR_VERSION + 1)).0
echo "CURRENT_DOCS_VERSION=${CURRENT_DOCS_VERSION}" >> "${GITHUB_ENV}"
echo "NEXT_MINOR_VERSION=${NEXT_MINOR_VERSION}" >> "${GITHUB_ENV}"
echo "Current documentation version: $CURRENT_DOCS_VERSION"
echo "Current release version: $PROWLER_VERSION"
echo "Next minor version: $NEXT_MINOR_VERSION"
- name: Bump versions in documentation for master
run: |
set -e
# Update prowler-app.mdx with current release version
sed -i "s|PROWLER_UI_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_UI_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
sed -i "s|PROWLER_API_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_API_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
echo "Files modified:"
git --no-pager diff
- name: Create PR for documentation update to master
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: master
commit-message: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
branch: docs-version-update-to-v${{ env.PROWLER_VERSION }}
title: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
Update Prowler documentation version references to v${{ env.PROWLER_VERSION }} after releasing Prowler v${{ env.PROWLER_VERSION }}.
### Files Updated
- `docs/getting-started/installation/prowler-app.mdx`: `PROWLER_UI_VERSION` and `PROWLER_API_VERSION`
- All `*.mdx` files with `<VersionBadge>` components
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
- name: Checkout version branch
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
ref: v${{ needs.detect-release-type.outputs.major_version }}.${{ needs.detect-release-type.outputs.minor_version }}
- name: Calculate first patch version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
CURRENT_DOCS_VERSION="${{ needs.detect-release-type.outputs.current_docs_version }}"
FIRST_PATCH_VERSION=${MAJOR_VERSION}.${MINOR_VERSION}.1
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
echo "CURRENT_DOCS_VERSION=${CURRENT_DOCS_VERSION}" >> "${GITHUB_ENV}"
echo "FIRST_PATCH_VERSION=${FIRST_PATCH_VERSION}" >> "${GITHUB_ENV}"
echo "VERSION_BRANCH=${VERSION_BRANCH}" >> "${GITHUB_ENV}"
echo "First patch version: $FIRST_PATCH_VERSION"
echo "Version branch: $VERSION_BRANCH"
- name: Bump versions in documentation for version branch
run: |
set -e
# Update prowler-app.mdx with current release version
sed -i "s|PROWLER_UI_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_UI_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
sed -i "s|PROWLER_API_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_API_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
echo "Files modified:"
git --no-pager diff
- name: Create PR for documentation update to version branch
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
branch: docs-version-update-to-v${{ env.PROWLER_VERSION }}-branch
title: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
Update Prowler documentation version references to v${{ env.PROWLER_VERSION }} in version branch after releasing Prowler v${{ env.PROWLER_VERSION }}.
### Files Updated
- `docs/getting-started/installation/prowler-app.mdx`: `PROWLER_UI_VERSION` and `PROWLER_API_VERSION`
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
bump-patch-version:
needs: detect-release-type
if: needs.detect-release-type.outputs.is_patch == 'true'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Calculate next patch version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
PATCH_VERSION=${{ needs.detect-release-type.outputs.patch_version }}
CURRENT_DOCS_VERSION="${{ needs.detect-release-type.outputs.current_docs_version }}"
NEXT_PATCH_VERSION=${MAJOR_VERSION}.${MINOR_VERSION}.$((PATCH_VERSION + 1))
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
echo "CURRENT_DOCS_VERSION=${CURRENT_DOCS_VERSION}" >> "${GITHUB_ENV}"
echo "NEXT_PATCH_VERSION=${NEXT_PATCH_VERSION}" >> "${GITHUB_ENV}"
echo "VERSION_BRANCH=${VERSION_BRANCH}" >> "${GITHUB_ENV}"
echo "Current documentation version: $CURRENT_DOCS_VERSION"
echo "Current release version: $PROWLER_VERSION"
echo "Next patch version: $NEXT_PATCH_VERSION"
echo "Target branch: $VERSION_BRANCH"
- name: Bump versions in documentation for patch version
run: |
set -e
# Update prowler-app.mdx with current release version
sed -i "s|PROWLER_UI_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_UI_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
sed -i "s|PROWLER_API_VERSION=\"${CURRENT_DOCS_VERSION}\"|PROWLER_API_VERSION=\"${PROWLER_VERSION}\"|" docs/getting-started/installation/prowler-app.mdx
echo "Files modified:"
git --no-pager diff
- name: Create PR for documentation update to version branch
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
branch: docs-version-update-to-v${{ env.PROWLER_VERSION }}
title: 'docs: Update version to v${{ env.PROWLER_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
Update Prowler documentation version references to v${{ env.PROWLER_VERSION }} after releasing Prowler v${{ env.PROWLER_VERSION }}.
### Files Updated
- `docs/getting-started/installation/prowler-app.mdx`: `PROWLER_UI_VERSION` and `PROWLER_API_VERSION`
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
+2 -2
View File
@@ -23,11 +23,11 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- name: Scan for secrets with TruffleHog
uses: trufflesecurity/trufflehog@b84c3d14d189e16da175e2c27fa8136603783ffc # v3.90.12
uses: trufflesecurity/trufflehog@aade3bff5594fe8808578dd4db3dfeae9bf2abdc # v3.91.1
with:
extra_args: '--results=verified,unknown'
@@ -56,7 +56,7 @@ jobs:
message-ts: ${{ steps.slack-notification.outputs.ts }}
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Notify container push started
id: slack-notification
@@ -91,7 +91,7 @@ jobs:
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Login to DockerHub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
@@ -176,7 +176,7 @@ jobs:
timeout-minutes: 5
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Determine overall outcome
id: outcome
@@ -213,7 +213,7 @@ jobs:
steps:
- name: Trigger MCP deployment
uses: peter-evans/repository-dispatch@5fc4efd1a4797ddb68ffd0714a238564e4cc0e6f # v4.0.0
uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
+2 -2
View File
@@ -28,7 +28,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check if Dockerfile changed
id: dockerfile-changed
@@ -62,7 +62,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check for MCP changes
id: check-changes
+81
View File
@@ -0,0 +1,81 @@
name: "MCP: PyPI Release"
on:
release:
types:
- "published"
concurrency:
group: ${{ github.workflow }}-${{ github.event.release.tag_name }}
cancel-in-progress: false
env:
RELEASE_TAG: ${{ github.event.release.tag_name }}
PYTHON_VERSION: "3.12"
WORKING_DIRECTORY: ./mcp_server
jobs:
validate-release:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
contents: read
outputs:
prowler_version: ${{ steps.parse-version.outputs.version }}
major_version: ${{ steps.parse-version.outputs.major }}
steps:
- name: Parse and validate version
id: parse-version
run: |
PROWLER_VERSION="${{ env.RELEASE_TAG }}"
echo "version=${PROWLER_VERSION}" >> "${GITHUB_OUTPUT}"
# Extract major version
MAJOR_VERSION="${PROWLER_VERSION%%.*}"
echo "major=${MAJOR_VERSION}" >> "${GITHUB_OUTPUT}"
# Validate major version (only Prowler 3, 4, 5 supported)
case ${MAJOR_VERSION} in
3|4|5)
echo "✓ Releasing Prowler MCP for tag ${PROWLER_VERSION}"
;;
*)
echo "::error::Unsupported Prowler major version: ${MAJOR_VERSION}"
exit 1
;;
esac
publish-prowler-mcp:
needs: validate-release
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
id-token: write
environment:
name: pypi-prowler-mcp
url: https://pypi.org/project/prowler-mcp/
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Install uv
uses: astral-sh/setup-uv@v7
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Build prowler-mcp package
working-directory: ${{ env.WORKING_DIRECTORY }}
run: uv build
- name: Publish prowler-mcp package to PyPI
uses: pypa/gh-action-pypi-publish@ed0c53931b1dc9bd32cbe73a98c7f6766f8a527e # v1.13.0
with:
packages-dir: ${{ env.WORKING_DIRECTORY }}/dist/
print-hash: true
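The 'MCP: PyPI Release' workflow above gates publishing on the major component of the release tag, extracted with bash's "${PROWLER_VERSION%%.*}". A minimal sketch of the same gate, assuming the tag uses the X.Y.Z form seen elsewhere in this diff (the helper name is illustrative):

def mcp_release_allowed(release_tag: str) -> bool:
    # bash "${PROWLER_VERSION%%.*}" keeps everything before the first dot; the
    # workflow then only accepts Prowler major versions 3, 4 and 5.
    major = release_tag.split(".", 1)[0]
    return major in {"3", "4", "5"}

assert mcp_release_allowed("5.17.0")
assert not mcp_release_allowed("6.0.0")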
+1 -1
View File
@@ -29,7 +29,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
+1 -1
View File
@@ -25,7 +25,7 @@ jobs:
steps:
- name: Checkout PR head
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
+5 -2
View File
@@ -13,7 +13,10 @@ concurrency:
jobs:
trigger-cloud-pull-request:
if: github.event.pull_request.merged == true && github.repository == 'prowler-cloud/prowler'
if: |
github.event.pull_request.merged == true &&
github.repository == 'prowler-cloud/prowler' &&
!contains(github.event.pull_request.labels.*.name, 'skip-sync')
runs-on: ubuntu-latest
timeout-minutes: 10
permissions:
@@ -26,7 +29,7 @@ jobs:
echo "SHORT_SHA=${SHORT_SHA::7}" >> $GITHUB_ENV
- name: Trigger Cloud repository pull request
uses: peter-evans/repository-dispatch@5fc4efd1a4797ddb68ffd0714a238564e4cc0e6f # v4.0.0
uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
+2 -2
View File
@@ -27,13 +27,13 @@ jobs:
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: '3.12'
+6 -9
View File
@@ -67,7 +67,7 @@ jobs:
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Calculate next minor version
run: |
@@ -86,7 +86,6 @@ jobs:
sed -i "s|version = \"${PROWLER_VERSION}\"|version = \"${NEXT_MINOR_VERSION}\"|" pyproject.toml
sed -i "s|prowler_version = \"${PROWLER_VERSION}\"|prowler_version = \"${NEXT_MINOR_VERSION}\"|" prowler/config/config.py
sed -i "s|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${PROWLER_VERSION}|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${NEXT_MINOR_VERSION}|" .env
echo "Files modified:"
git --no-pager diff
@@ -100,7 +99,7 @@ jobs:
commit-message: 'chore(release): Bump version to v${{ env.NEXT_MINOR_VERSION }}'
branch: version-bump-to-v${{ env.NEXT_MINOR_VERSION }}
title: 'chore(release): Bump version to v${{ env.NEXT_MINOR_VERSION }}'
labels: no-changelog
labels: no-changelog,skip-sync
body: |
### Description
@@ -111,7 +110,7 @@ jobs:
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
- name: Checkout version branch
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
ref: v${{ needs.detect-release-type.outputs.major_version }}.${{ needs.detect-release-type.outputs.minor_version }}
@@ -135,7 +134,6 @@ jobs:
sed -i "s|version = \"${PROWLER_VERSION}\"|version = \"${FIRST_PATCH_VERSION}\"|" pyproject.toml
sed -i "s|prowler_version = \"${PROWLER_VERSION}\"|prowler_version = \"${FIRST_PATCH_VERSION}\"|" prowler/config/config.py
sed -i "s|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${PROWLER_VERSION}|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${FIRST_PATCH_VERSION}|" .env
echo "Files modified:"
git --no-pager diff
@@ -149,7 +147,7 @@ jobs:
commit-message: 'chore(release): Bump version to v${{ env.FIRST_PATCH_VERSION }}'
branch: version-bump-to-v${{ env.FIRST_PATCH_VERSION }}
title: 'chore(release): Bump version to v${{ env.FIRST_PATCH_VERSION }}'
labels: no-changelog
labels: no-changelog,skip-sync
body: |
### Description
@@ -169,7 +167,7 @@ jobs:
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Calculate next patch version
run: |
@@ -193,7 +191,6 @@ jobs:
sed -i "s|version = \"${PROWLER_VERSION}\"|version = \"${NEXT_PATCH_VERSION}\"|" pyproject.toml
sed -i "s|prowler_version = \"${PROWLER_VERSION}\"|prowler_version = \"${NEXT_PATCH_VERSION}\"|" prowler/config/config.py
sed -i "s|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${PROWLER_VERSION}|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${NEXT_PATCH_VERSION}|" .env
echo "Files modified:"
git --no-pager diff
@@ -207,7 +204,7 @@ jobs:
commit-message: 'chore(release): Bump version to v${{ env.NEXT_PATCH_VERSION }}'
branch: version-bump-to-v${{ env.NEXT_PATCH_VERSION }}
title: 'chore(release): Bump version to v${{ env.NEXT_PATCH_VERSION }}'
labels: no-changelog
labels: no-changelog,skip-sync
body: |
### Description
+2 -2
View File
@@ -31,7 +31,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check for SDK changes
id: check-changes
@@ -62,7 +62,7 @@ jobs:
- name: Set up Python ${{ matrix.python-version }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: ${{ matrix.python-version }}
cache: 'poetry'
+3 -3
View File
@@ -49,15 +49,15 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Initialize CodeQL
uses: github/codeql-action/init@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
uses: github/codeql-action/init@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v4.31.9
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/sdk-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
uses: github/codeql-action/analyze@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v4.31.9
with:
category: '/language:${{ matrix.language }}'
@@ -61,10 +61,10 @@ jobs:
stable_tag: ${{ steps.get-prowler-version.outputs.stable_tag }}
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: ${{ env.PYTHON_VERSION }}
@@ -115,7 +115,7 @@ jobs:
message-ts: ${{ steps.slack-notification.outputs.ts }}
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Notify container push started
id: slack-notification
@@ -151,7 +151,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Login to DockerHub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
@@ -252,7 +252,7 @@ jobs:
timeout-minutes: 5
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Determine overall outcome
id: outcome
@@ -294,7 +294,7 @@ jobs:
- name: Dispatch v3 deployment (latest)
if: github.event_name == 'push'
uses: peter-evans/repository-dispatch@5fc4efd1a4797ddb68ffd0714a238564e4cc0e6f # v4.0.0
uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}
@@ -303,7 +303,7 @@ jobs:
- name: Dispatch v3 deployment (release)
if: github.event_name == 'release'
uses: peter-evans/repository-dispatch@5fc4efd1a4797ddb68ffd0714a238564e4cc0e6f # v4.0.0
uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}
+2 -2
View File
@@ -27,7 +27,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check if Dockerfile changed
id: dockerfile-changed
@@ -62,7 +62,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check for SDK changes
id: check-changes
+4 -4
View File
@@ -59,13 +59,13 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Install Poetry
run: pipx install poetry==2.1.1
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'poetry'
@@ -91,13 +91,13 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Install Poetry
run: pipx install poetry==2.1.1
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'poetry'
@@ -25,12 +25,12 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
ref: 'master'
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'pip'
@@ -39,7 +39,7 @@ jobs:
run: pip install boto3
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
uses: aws-actions/configure-aws-credentials@61815dcd50bd041e203e49132bacad1fd04d2708 # v5.1.1
with:
aws-region: ${{ env.AWS_REGION }}
role-to-assume: ${{ secrets.DEV_IAM_ROLE_ARN }}
+2 -2
View File
@@ -24,7 +24,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check for SDK changes
id: check-changes
@@ -55,7 +55,7 @@ jobs:
- name: Set up Python 3.12
if: steps.check-changes.outputs.any_changed == 'true'
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: '3.12'
cache: 'poetry'
+2 -2
View File
@@ -31,7 +31,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check for SDK changes
id: check-changes
@@ -62,7 +62,7 @@ jobs:
- name: Set up Python ${{ matrix.python-version }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: ${{ matrix.python-version }}
cache: 'poetry'
+221
View File
@@ -0,0 +1,221 @@
name: 'UI: Bump Version'
on:
release:
types:
- 'published'
concurrency:
group: ${{ github.workflow }}-${{ github.event.release.tag_name }}
cancel-in-progress: false
env:
PROWLER_VERSION: ${{ github.event.release.tag_name }}
BASE_BRANCH: master
jobs:
detect-release-type:
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
contents: read
outputs:
is_minor: ${{ steps.detect.outputs.is_minor }}
is_patch: ${{ steps.detect.outputs.is_patch }}
major_version: ${{ steps.detect.outputs.major_version }}
minor_version: ${{ steps.detect.outputs.minor_version }}
patch_version: ${{ steps.detect.outputs.patch_version }}
steps:
- name: Detect release type and parse version
id: detect
run: |
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
MAJOR_VERSION=${BASH_REMATCH[1]}
MINOR_VERSION=${BASH_REMATCH[2]}
PATCH_VERSION=${BASH_REMATCH[3]}
echo "major_version=${MAJOR_VERSION}" >> "${GITHUB_OUTPUT}"
echo "minor_version=${MINOR_VERSION}" >> "${GITHUB_OUTPUT}"
echo "patch_version=${PATCH_VERSION}" >> "${GITHUB_OUTPUT}"
if (( MAJOR_VERSION != 5 )); then
echo "::error::Releasing another Prowler major version, aborting..."
exit 1
fi
if (( PATCH_VERSION == 0 )); then
echo "is_minor=true" >> "${GITHUB_OUTPUT}"
echo "is_patch=false" >> "${GITHUB_OUTPUT}"
echo "✓ Minor release detected: $PROWLER_VERSION"
else
echo "is_minor=false" >> "${GITHUB_OUTPUT}"
echo "is_patch=true" >> "${GITHUB_OUTPUT}"
echo "✓ Patch release detected: $PROWLER_VERSION"
fi
else
echo "::error::Invalid version syntax: '$PROWLER_VERSION' (must be X.Y.Z)"
exit 1
fi
bump-minor-version:
needs: detect-release-type
if: needs.detect-release-type.outputs.is_minor == 'true'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Calculate next minor version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
NEXT_MINOR_VERSION=${MAJOR_VERSION}.$((MINOR_VERSION + 1)).0
echo "NEXT_MINOR_VERSION=${NEXT_MINOR_VERSION}" >> "${GITHUB_ENV}"
echo "Current version: $PROWLER_VERSION"
echo "Next minor version: $NEXT_MINOR_VERSION"
- name: Bump UI version in .env for master
run: |
set -e
sed -i "s|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${PROWLER_VERSION}|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${NEXT_MINOR_VERSION}|" .env
echo "Files modified:"
git --no-pager diff
- name: Create PR for next minor version to master
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: master
commit-message: 'chore(ui): Bump version to v${{ env.NEXT_MINOR_VERSION }}'
branch: ui-version-bump-to-v${{ env.NEXT_MINOR_VERSION }}
title: 'chore(ui): Bump version to v${{ env.NEXT_MINOR_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
Bump Prowler UI version to v${{ env.NEXT_MINOR_VERSION }} after releasing Prowler v${{ env.PROWLER_VERSION }}.
### Files Updated
- `.env`: `NEXT_PUBLIC_PROWLER_RELEASE_VERSION`
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
- name: Checkout version branch
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
ref: v${{ needs.detect-release-type.outputs.major_version }}.${{ needs.detect-release-type.outputs.minor_version }}
- name: Calculate first patch version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
FIRST_PATCH_VERSION=${MAJOR_VERSION}.${MINOR_VERSION}.1
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
echo "FIRST_PATCH_VERSION=${FIRST_PATCH_VERSION}" >> "${GITHUB_ENV}"
echo "VERSION_BRANCH=${VERSION_BRANCH}" >> "${GITHUB_ENV}"
echo "First patch version: $FIRST_PATCH_VERSION"
echo "Version branch: $VERSION_BRANCH"
- name: Bump UI version in .env for version branch
run: |
set -e
sed -i "s|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${PROWLER_VERSION}|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${FIRST_PATCH_VERSION}|" .env
echo "Files modified:"
git --no-pager diff
- name: Create PR for first patch version to version branch
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: 'chore(ui): Bump version to v${{ env.FIRST_PATCH_VERSION }}'
branch: ui-version-bump-to-v${{ env.FIRST_PATCH_VERSION }}
title: 'chore(ui): Bump version to v${{ env.FIRST_PATCH_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
Bump Prowler UI version to v${{ env.FIRST_PATCH_VERSION }} in version branch after releasing Prowler v${{ env.PROWLER_VERSION }}.
### Files Updated
- `.env`: `NEXT_PUBLIC_PROWLER_RELEASE_VERSION`
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
bump-patch-version:
needs: detect-release-type
if: needs.detect-release-type.outputs.is_patch == 'true'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Calculate next patch version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
PATCH_VERSION=${{ needs.detect-release-type.outputs.patch_version }}
NEXT_PATCH_VERSION=${MAJOR_VERSION}.${MINOR_VERSION}.$((PATCH_VERSION + 1))
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
echo "NEXT_PATCH_VERSION=${NEXT_PATCH_VERSION}" >> "${GITHUB_ENV}"
echo "VERSION_BRANCH=${VERSION_BRANCH}" >> "${GITHUB_ENV}"
echo "Current version: $PROWLER_VERSION"
echo "Next patch version: $NEXT_PATCH_VERSION"
echo "Target branch: $VERSION_BRANCH"
- name: Bump UI version in .env for version branch
run: |
set -e
sed -i "s|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${PROWLER_VERSION}|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${NEXT_PATCH_VERSION}|" .env
echo "Files modified:"
git --no-pager diff
- name: Create PR for next patch version to version branch
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: 'chore(ui): Bump version to v${{ env.NEXT_PATCH_VERSION }}'
branch: ui-version-bump-to-v${{ env.NEXT_PATCH_VERSION }}
title: 'chore(ui): Bump version to v${{ env.NEXT_PATCH_VERSION }}'
labels: no-changelog,skip-sync
body: |
### Description
Bump Prowler UI version to v${{ env.NEXT_PATCH_VERSION }} after releasing Prowler v${{ env.PROWLER_VERSION }}.
### Files Updated
- `.env`: `NEXT_PUBLIC_PROWLER_RELEASE_VERSION`
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
+3 -3
View File
@@ -45,15 +45,15 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Initialize CodeQL
uses: github/codeql-action/init@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
uses: github/codeql-action/init@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v4.31.9
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/ui-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
uses: github/codeql-action/analyze@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v4.31.9
with:
category: '/language:${{ matrix.language }}'
@@ -59,7 +59,7 @@ jobs:
message-ts: ${{ steps.slack-notification.outputs.ts }}
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Notify container push started
id: slack-notification
@@ -95,7 +95,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Login to DockerHub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
@@ -175,7 +175,7 @@ jobs:
timeout-minutes: 5
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Determine overall outcome
id: outcome
@@ -212,7 +212,7 @@ jobs:
steps:
- name: Trigger UI deployment
uses: peter-evans/repository-dispatch@5fc4efd1a4797ddb68ffd0714a238564e4cc0e6f # v4.0.0
uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
+2 -2
View File
@@ -28,7 +28,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check if Dockerfile changed
id: dockerfile-changed
@@ -63,7 +63,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check for UI changes
id: check-changes
+1 -1
View File
@@ -54,7 +54,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Create k8s Kind Cluster
uses: helm/kind-action@v1
with:
+1 -1
View File
@@ -30,7 +30,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Check for UI changes
id: check-changes
+17
View File
@@ -2,6 +2,23 @@
All notable changes to the **Prowler API** are documented in this file.
## [1.18.0] (Prowler UNRELEASED)
### Added
- Support AlibabaCloud provider [(#9485)](https://github.com/prowler-cloud/prowler/pull/9485)
---
## [1.17.1] (Prowler v5.16.1)
### Changed
- Security Hub integration error when no regions [(#9635)](https://github.com/prowler-cloud/prowler/pull/9635)
### Fixed
- Orphan scheduled scans caused by transaction isolation during provider creation [(#9633)](https://github.com/prowler-cloud/prowler/pull/9633)
---
## [1.17.0] (Prowler v5.16.0)
### Added
+2960 -1909
View File
File diff suppressed because it is too large

+1 -1
View File
@@ -44,7 +44,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
version = "1.16.0"
version = "1.18.0"
[project.scripts]
celery = "src.backend.config.settings.celery"
@@ -0,0 +1,37 @@
# Generated by Django migration for Alibaba Cloud provider support
from django.db import migrations
import api.db_utils
class Migration(migrations.Migration):
dependencies = [
("api", "0064_finding_categories"),
]
operations = [
migrations.AlterField(
model_name="provider",
name="provider",
field=api.db_utils.ProviderEnumField(
choices=[
("aws", "AWS"),
("azure", "Azure"),
("gcp", "GCP"),
("kubernetes", "Kubernetes"),
("m365", "M365"),
("github", "GitHub"),
("mongodbatlas", "MongoDB Atlas"),
("iac", "IaC"),
("oraclecloud", "Oracle Cloud Infrastructure"),
("alibabacloud", "Alibaba Cloud"),
],
default="aws",
),
),
migrations.RunSQL(
"ALTER TYPE provider ADD VALUE IF NOT EXISTS 'alibabacloud';",
reverse_sql=migrations.RunSQL.noop,
),
]
+10
@@ -287,6 +287,7 @@ class Provider(RowLevelSecurityProtectedModel):
MONGODBATLAS = "mongodbatlas", _("MongoDB Atlas")
IAC = "iac", _("IaC")
ORACLECLOUD = "oraclecloud", _("Oracle Cloud Infrastructure")
ALIBABACLOUD = "alibabacloud", _("Alibaba Cloud")
@staticmethod
def validate_aws_uid(value):
@@ -391,6 +392,15 @@ class Provider(RowLevelSecurityProtectedModel):
pointer="/data/attributes/uid",
)
@staticmethod
def validate_alibabacloud_uid(value):
if not re.match(r"^\d{16}$", value):
raise ModelValidationError(
detail="Alibaba Cloud account ID must be exactly 16 digits.",
code="alibabacloud-uid",
pointer="/data/attributes/uid",
)
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
updated_at = models.DateTimeField(auto_now=True, editable=False)
+75 -1
@@ -1,7 +1,7 @@
openapi: 3.0.3
info:
title: Prowler API
version: 1.17.0
version: 1.18.0
description: |-
Prowler API specification.
@@ -894,6 +894,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -904,6 +905,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -921,6 +923,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -933,6 +936,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -1447,6 +1451,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -1457,6 +1462,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -1474,6 +1480,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -1486,6 +1493,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -1908,6 +1916,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -1918,6 +1927,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -1935,6 +1945,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -1947,6 +1958,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -2367,6 +2379,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -2377,6 +2390,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -2394,6 +2408,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -2406,6 +2421,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -2814,6 +2830,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -2824,6 +2841,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -2841,6 +2859,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -2853,6 +2872,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -4947,6 +4967,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -4957,6 +4978,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -4974,6 +4996,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -4986,6 +5009,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -5136,6 +5160,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -5146,6 +5171,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -5163,6 +5189,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -5175,6 +5202,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -5319,6 +5347,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -5329,6 +5358,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -5345,6 +5375,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -5357,6 +5388,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- name: filter[search]
@@ -5543,6 +5575,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -5553,6 +5586,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -5570,6 +5604,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -5582,6 +5617,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -5715,6 +5751,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -5725,6 +5762,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -5742,6 +5780,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -5754,6 +5793,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -6534,6 +6574,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -6544,6 +6585,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider__in]
schema:
@@ -6561,6 +6603,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -6573,6 +6616,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -6590,6 +6634,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -6600,6 +6645,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -6617,6 +6663,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -6629,6 +6676,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- name: filter[search]
@@ -7240,6 +7288,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -7250,6 +7299,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -7267,6 +7317,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -7279,6 +7330,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -7623,6 +7675,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -7633,6 +7686,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -7650,6 +7704,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -7662,6 +7717,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -7901,6 +7957,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -7911,6 +7968,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -7928,6 +7986,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -7940,6 +7999,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -8185,6 +8245,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -8195,6 +8256,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -8212,6 +8274,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -8224,6 +8287,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -9032,6 +9096,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
* `aws` - AWS
* `azure` - Azure
@@ -9042,6 +9107,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
- in: query
name: filter[provider_type__in]
schema:
@@ -9059,6 +9125,7 @@ paths:
- m365
- mongodbatlas
- oraclecloud
- alibabacloud
description: |-
Multiple values may be separated by commas.
@@ -9071,6 +9138,7 @@ paths:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
explode: false
style: form
- in: query
@@ -16566,6 +16634,7 @@ components:
- mongodbatlas
- iac
- oraclecloud
- alibabacloud
type: string
description: |-
* `aws` - AWS
@@ -16577,6 +16646,7 @@ components:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
x-spec-enum-id: eca8c51e6bd28935
uid:
type: string
@@ -16692,6 +16762,7 @@ components:
- mongodbatlas
- iac
- oraclecloud
- alibabacloud
type: string
x-spec-enum-id: eca8c51e6bd28935
description: |-
@@ -16706,6 +16777,7 @@ components:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
uid:
type: string
title: Unique identifier for the provider, set by the provider
@@ -16752,6 +16824,7 @@ components:
- mongodbatlas
- iac
- oraclecloud
- alibabacloud
type: string
x-spec-enum-id: eca8c51e6bd28935
description: |-
@@ -16766,6 +16839,7 @@ components:
* `mongodbatlas` - MongoDB Atlas
* `iac` - IaC
* `oraclecloud` - Oracle Cloud Infrastructure
* `alibabacloud` - Alibaba Cloud
uid:
type: string
minLength: 3
+2
@@ -16,6 +16,7 @@ from api.utils import (
return_prowler_provider,
validate_invitation,
)
from prowler.providers.alibabacloud.alibabacloud_provider import AlibabacloudProvider
from prowler.providers.aws.aws_provider import AwsProvider
from prowler.providers.aws.lib.security_hub.security_hub import SecurityHubConnection
from prowler.providers.azure.azure_provider import AzureProvider
@@ -116,6 +117,7 @@ class TestReturnProwlerProvider:
(Provider.ProviderChoices.MONGODBATLAS.value, MongodbatlasProvider),
(Provider.ProviderChoices.ORACLECLOUD.value, OraclecloudProvider),
(Provider.ProviderChoices.IAC.value, IacProvider),
(Provider.ProviderChoices.ALIBABACLOUD.value, AlibabacloudProvider),
],
)
def test_return_prowler_provider(self, provider_type, expected_provider):
+79 -4
@@ -1165,6 +1165,11 @@ class TestProviderViewSet:
"uid": "64b1d3c0e4b03b1234567890",
"alias": "Atlas Organization",
},
{
"provider": "alibabacloud",
"uid": "1234567890123456",
"alias": "Alibaba Cloud Account",
},
]
),
)
@@ -1514,6 +1519,36 @@ class TestProviderViewSet:
"mongodbatlas-uid",
"uid",
),
# Alibaba Cloud UID validation - too short (not 16 digits)
(
{
"provider": "alibabacloud",
"uid": "123456789012345",
"alias": "test",
},
"alibabacloud-uid",
"uid",
),
# Alibaba Cloud UID validation - too long (not 16 digits)
(
{
"provider": "alibabacloud",
"uid": "12345678901234567",
"alias": "test",
},
"alibabacloud-uid",
"uid",
),
# Alibaba Cloud UID validation - contains non-digits
(
{
"provider": "alibabacloud",
"uid": "123456789012345a",
"alias": "test",
},
"alibabacloud-uid",
"uid",
),
]
),
)
@@ -1687,21 +1722,21 @@ class TestProviderViewSet:
(
"uid.icontains",
"1",
7,
8,
),
("alias", "aws_testing_1", 1),
("alias.icontains", "aws", 2),
("inserted_at", TODAY, 8),
("inserted_at", TODAY, 9),
(
"inserted_at.gte",
"2024-01-01",
8,
9,
),
("inserted_at.lte", "2024-01-01", 0),
(
"updated_at.gte",
"2024-01-01",
8,
9,
),
("updated_at.lte", "2024-01-01", 0),
]
@@ -2251,6 +2286,46 @@ class TestProviderSecretViewSet:
"atlas_private_key": "private-key",
},
),
# Alibaba Cloud credentials (with access key only)
(
Provider.ProviderChoices.ALIBABACLOUD.value,
ProviderSecret.TypeChoices.STATIC,
{
"access_key_id": "LTAI5t1234567890abcdef",
"access_key_secret": "my-secret-access-key",
},
),
# Alibaba Cloud credentials (with STS security token)
(
Provider.ProviderChoices.ALIBABACLOUD.value,
ProviderSecret.TypeChoices.STATIC,
{
"access_key_id": "LTAI5t1234567890abcdef",
"access_key_secret": "my-secret-access-key",
"security_token": "my-security-token-for-sts",
},
),
# Alibaba Cloud RAM Role Assumption (minimal required fields)
(
Provider.ProviderChoices.ALIBABACLOUD.value,
ProviderSecret.TypeChoices.ROLE,
{
"role_arn": "acs:ram::1234567890123456:role/ProwlerRole",
"access_key_id": "LTAI5t1234567890abcdef",
"access_key_secret": "my-secret-access-key",
},
),
# Alibaba Cloud RAM Role Assumption (with optional role_session_name)
(
Provider.ProviderChoices.ALIBABACLOUD.value,
ProviderSecret.TypeChoices.ROLE,
{
"role_arn": "acs:ram::1234567890123456:role/ProwlerRole",
"access_key_id": "LTAI5t1234567890abcdef",
"access_key_secret": "my-secret-access-key",
"role_session_name": "ProwlerAuditSession",
},
),
],
)
def test_provider_secrets_create_valid(
+12 -8
@@ -11,6 +11,7 @@ from api.exceptions import InvitationTokenExpiredException
from api.models import Integration, Invitation, Processor, Provider, Resource
from api.v1.serializers import FindingMetadataSerializer
from prowler.lib.outputs.jira.jira import Jira, JiraBasicAuthError
from prowler.providers.alibabacloud.alibabacloud_provider import AlibabacloudProvider
from prowler.providers.aws.aws_provider import AwsProvider
from prowler.providers.aws.lib.s3.s3 import S3
from prowler.providers.aws.lib.security_hub.security_hub import SecurityHub
@@ -63,8 +64,9 @@ def merge_dicts(default_dict: dict, replacement_dict: dict) -> dict:
def return_prowler_provider(
provider: Provider,
) -> [
AwsProvider
) -> (
AlibabacloudProvider
| AwsProvider
| AzureProvider
| GcpProvider
| GithubProvider
@@ -73,14 +75,14 @@ def return_prowler_provider(
| M365Provider
| MongodbatlasProvider
| OraclecloudProvider
]:
):
"""Return the Prowler provider class based on the given provider type.
Args:
provider (Provider): The provider object containing the provider type and associated secrets.
Returns:
AwsProvider | AzureProvider | GcpProvider | GithubProvider | IacProvider | KubernetesProvider | M365Provider | OraclecloudProvider | MongodbatlasProvider: The corresponding provider class.
AlibabacloudProvider | AwsProvider | AzureProvider | GcpProvider | GithubProvider | IacProvider | KubernetesProvider | M365Provider | MongodbatlasProvider | OraclecloudProvider: The corresponding provider class.
Raises:
ValueError: If the provider type specified in `provider.provider` is not supported.
@@ -104,6 +106,8 @@ def return_prowler_provider(
prowler_provider = IacProvider
case Provider.ProviderChoices.ORACLECLOUD.value:
prowler_provider = OraclecloudProvider
case Provider.ProviderChoices.ALIBABACLOUD.value:
prowler_provider = AlibabacloudProvider
case _:
raise ValueError(f"Provider type {provider.provider} not supported")
return prowler_provider
@@ -169,7 +173,8 @@ def initialize_prowler_provider(
provider: Provider,
mutelist_processor: Processor | None = None,
) -> (
AwsProvider
AlibabacloudProvider
| AwsProvider
| AzureProvider
| GcpProvider
| GithubProvider
@@ -186,9 +191,8 @@ def initialize_prowler_provider(
mutelist_processor (Processor): The mutelist processor object containing the mutelist configuration.
Returns:
AwsProvider | AzureProvider | GcpProvider | GithubProvider | IacProvider | KubernetesProvider | M365Provider | OraclecloudProvider | MongodbatlasProvider: An instance of the corresponding provider class
(`AwsProvider`, `AzureProvider`, `GcpProvider`, `GithubProvider`, `IacProvider`, `KubernetesProvider`, `M365Provider`, `OraclecloudProvider` or `MongodbatlasProvider`) initialized with the
provider's secrets.
AlibabacloudProvider | AwsProvider | AzureProvider | GcpProvider | GithubProvider | IacProvider | KubernetesProvider | M365Provider | MongodbatlasProvider | OraclecloudProvider: An instance of the corresponding provider class
initialized with the provider's secrets.
"""
prowler_provider = return_prowler_provider(provider)
prowler_provider_kwargs = get_prowler_provider_kwargs(provider, mutelist_processor)
@@ -304,6 +304,48 @@ from rest_framework_json_api import serializers
},
"required": ["atlas_public_key", "atlas_private_key"],
},
{
"type": "object",
"title": "Alibaba Cloud Static Credentials",
"properties": {
"access_key_id": {
"type": "string",
"description": "The Alibaba Cloud access key ID for authentication.",
},
"access_key_secret": {
"type": "string",
"description": "The Alibaba Cloud access key secret for authentication.",
},
"security_token": {
"type": "string",
"description": "The STS security token for temporary credentials (optional).",
},
},
"required": ["access_key_id", "access_key_secret"],
},
{
"type": "object",
"title": "Alibaba Cloud RAM Role Assumption",
"properties": {
"role_arn": {
"type": "string",
"description": "The ARN of the RAM role to assume (e.g., acs:ram::1234567890123456:role/ProwlerRole).",
},
"access_key_id": {
"type": "string",
"description": "The Alibaba Cloud access key ID of the RAM user that will assume the role.",
},
"access_key_secret": {
"type": "string",
"description": "The Alibaba Cloud access key secret of the RAM user that will assume the role.",
},
"role_session_name": {
"type": "string",
"description": "An identifier for the role session (optional, defaults to 'ProwlerSession').",
},
},
"required": ["role_arn", "access_key_id", "access_key_secret"],
},
]
}
)
+40 -1
@@ -1390,12 +1390,23 @@ class BaseWriteProviderSecretSerializer(BaseWriteSerializer):
serializer = OracleCloudProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.MONGODBATLAS.value:
serializer = MongoDBAtlasProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.ALIBABACLOUD.value:
serializer = AlibabaCloudProviderSecret(data=secret)
else:
raise serializers.ValidationError(
{"provider": f"Provider type not supported {provider_type}"}
)
elif secret_type == ProviderSecret.TypeChoices.ROLE:
serializer = AWSRoleAssumptionProviderSecret(data=secret)
if provider_type == Provider.ProviderChoices.AWS.value:
serializer = AWSRoleAssumptionProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.ALIBABACLOUD.value:
serializer = AlibabaCloudRoleAssumptionProviderSecret(data=secret)
else:
raise serializers.ValidationError(
{
"secret_type": f"Role assumption not supported for provider type: {provider_type}"
}
)
elif secret_type == ProviderSecret.TypeChoices.SERVICE_ACCOUNT:
serializer = GCPServiceAccountProviderSecret(data=secret)
else:
@@ -1532,6 +1543,34 @@ class OracleCloudProviderSecret(serializers.Serializer):
resource_name = "provider-secrets"
class AlibabaCloudProviderSecret(serializers.Serializer):
access_key_id = serializers.CharField()
access_key_secret = serializers.CharField()
security_token = serializers.CharField(required=False)
class Meta:
resource_name = "provider-secrets"
class AlibabaCloudRoleAssumptionProviderSecret(serializers.Serializer):
role_arn = serializers.CharField(
help_text="Access Key ID of the RAM user that will assume the role"
)
access_key_id = serializers.CharField(
help_text="Access Key ID of the RAM user that will assume the role"
)
access_key_secret = serializers.CharField(
help_text="Access Key Secret of the RAM user that will assume the role"
)
role_session_name = serializers.CharField(
required=False,
help_text="Session name for the assumed role session (optional, defaults to 'ProwlerSession')",
)
class Meta:
resource_name = "provider-secrets"
class AWSRoleAssumptionProviderSecret(serializers.Serializer):
role_arn = serializers.CharField()
external_id = serializers.CharField()
+1 -1
@@ -359,7 +359,7 @@ class SchemaView(SpectacularAPIView):
def get(self, request, *args, **kwargs):
spectacular_settings.TITLE = "Prowler API"
spectacular_settings.VERSION = "1.17.0"
spectacular_settings.VERSION = "1.18.0"
spectacular_settings.DESCRIPTION = (
"Prowler API specification.\n\nThis file is auto-generated."
)
+7
@@ -517,6 +517,12 @@ def providers_fixture(tenants_fixture):
alias="mongodbatlas_testing",
tenant_id=tenant.id,
)
provider9 = Provider.objects.create(
provider="alibabacloud",
uid="1234567890123456",
alias="alibabacloud_testing",
tenant_id=tenant.id,
)
return (
provider1,
@@ -527,6 +533,7 @@ def providers_fixture(tenants_fixture):
provider6,
provider7,
provider8,
provider9,
)
+11
@@ -27,6 +27,7 @@ from prowler.lib.outputs.compliance.c5.c5_gcp import GCPC5
from prowler.lib.outputs.compliance.ccc.ccc_aws import CCC_AWS
from prowler.lib.outputs.compliance.ccc.ccc_azure import CCC_Azure
from prowler.lib.outputs.compliance.ccc.ccc_gcp import CCC_GCP
from prowler.lib.outputs.compliance.cis.cis_alibabacloud import AlibabaCloudCIS
from prowler.lib.outputs.compliance.cis.cis_aws import AWSCIS
from prowler.lib.outputs.compliance.cis.cis_azure import AzureCIS
from prowler.lib.outputs.compliance.cis.cis_gcp import GCPCIS
@@ -50,6 +51,9 @@ from prowler.lib.outputs.compliance.mitre_attack.mitre_attack_azure import (
AzureMitreAttack,
)
from prowler.lib.outputs.compliance.mitre_attack.mitre_attack_gcp import GCPMitreAttack
from prowler.lib.outputs.compliance.prowler_threatscore.prowler_threatscore_alibaba import (
ProwlerThreatScoreAlibaba,
)
from prowler.lib.outputs.compliance.prowler_threatscore.prowler_threatscore_aws import (
ProwlerThreatScoreAWS,
)
@@ -128,6 +132,13 @@ COMPLIANCE_CLASS_MAP = {
"oraclecloud": [
(lambda name: name.startswith("cis_"), OracleCloudCIS),
],
"alibabacloud": [
(lambda name: name.startswith("cis_"), AlibabaCloudCIS),
(
lambda name: name == "prowler_threatscore_alibabacloud",
ProwlerThreatScoreAlibaba,
),
],
}
+16 -15
@@ -19,6 +19,9 @@ from prowler.providers.aws.aws_provider import AwsProvider
from prowler.providers.aws.lib.s3.s3 import S3
from prowler.providers.aws.lib.security_hub.security_hub import SecurityHub
from prowler.providers.common.models import Connection
from prowler.providers.aws.lib.security_hub.exceptions.exceptions import (
SecurityHubNoEnabledRegionsError,
)
logger = get_task_logger(__name__)
@@ -222,8 +225,9 @@ def get_security_hub_client_from_integration(
)
return True, security_hub
else:
# Reset regions information if connection fails
# Reset regions information if connection fails and integration is not connected
with rls_transaction(tenant_id, using=MainRouter.default_db):
integration.connected = False
integration.configuration["regions"] = {}
integration.save()
@@ -330,15 +334,18 @@ def upload_security_hub_integration(
)
if not connected:
logger.error(
f"Security Hub connection failed for integration {integration.id}: "
f"{security_hub.error}"
)
with rls_transaction(
tenant_id, using=MainRouter.default_db
if isinstance(
security_hub.error,
SecurityHubNoEnabledRegionsError,
):
integration.connected = False
integration.save()
logger.warning(
f"Security Hub integration {integration.id} has no enabled regions"
)
else:
logger.error(
f"Security Hub connection failed for integration {integration.id}: "
f"{security_hub.error}"
)
break # Skip this integration
security_hub_client = security_hub
@@ -409,22 +416,16 @@ def upload_security_hub_integration(
logger.warning(
f"Failed to archive previous findings: {str(archive_error)}"
)
except Exception as e:
logger.error(
f"Security Hub integration {integration.id} failed: {str(e)}"
)
continue
result = integration_executions == len(integrations)
if result:
logger.info(
f"All Security Hub integrations completed successfully for provider {provider_id}"
)
else:
logger.error(
f"Some Security Hub integrations failed for provider {provider_id}"
)
return result
+1
@@ -3531,6 +3531,7 @@ def generate_compliance_reports(
"gcp",
"m365",
"kubernetes",
"alibabacloud",
]:
logger.info(
f"Provider {provider_id} ({provider_type}) is not supported for ThreatScore report"
+60
@@ -61,6 +61,58 @@ from prowler.lib.outputs.finding import Finding as FindingOutput
logger = get_task_logger(__name__)
def _cleanup_orphan_scheduled_scans(
tenant_id: str,
provider_id: str,
scheduler_task_id: int,
) -> int:
"""
TEMPORARY WORKAROUND: Clean up orphan AVAILABLE scans.
Detects and removes AVAILABLE scans that were never used due to an
issue during the first scheduled scan setup.
An AVAILABLE scan is considered orphan if there's also a SCHEDULED scan for
the same provider with the same scheduler_task_id. This situation indicates
that the first scan execution didn't find the AVAILABLE scan (because it
wasn't committed yet, probably) and created a new one, leaving the AVAILABLE orphaned.
Args:
tenant_id: The tenant ID.
provider_id: The provider ID.
scheduler_task_id: The PeriodicTask ID that triggers these scans.
Returns:
Number of orphan scans deleted (0 if none found).
"""
orphan_available_scans = Scan.objects.filter(
tenant_id=tenant_id,
provider_id=provider_id,
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.AVAILABLE,
scheduler_task_id=scheduler_task_id,
)
scheduled_scan_exists = Scan.objects.filter(
tenant_id=tenant_id,
provider_id=provider_id,
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.SCHEDULED,
scheduler_task_id=scheduler_task_id,
).exists()
if scheduled_scan_exists and orphan_available_scans.exists():
orphan_count = orphan_available_scans.count()
logger.warning(
f"[WORKAROUND] Found {orphan_count} orphan AVAILABLE scan(s) for "
f"provider {provider_id} alongside a SCHEDULED scan. Cleaning up orphans..."
)
orphan_available_scans.delete()
return orphan_count
return 0
def _perform_scan_complete_tasks(tenant_id: str, scan_id: str, provider_id: str):
"""
Helper function to perform tasks after a scan is completed.
@@ -247,6 +299,14 @@ def perform_scheduled_scan_task(self, tenant_id: str, provider_id: str):
return serializer.data
next_scan_datetime = get_next_execution_datetime(task_id, provider_id)
# TEMPORARY WORKAROUND: Clean up orphan scans from transaction isolation issue
_cleanup_orphan_scheduled_scans(
tenant_id=tenant_id,
provider_id=provider_id,
scheduler_task_id=periodic_task_instance.id,
)
scan_instance, _ = Scan.objects.get_or_create(
tenant_id=tenant_id,
provider_id=provider_id,
@@ -1199,9 +1199,6 @@ class TestSecurityHubIntegrationUploads:
)
assert result is False
# Integration should be marked as disconnected
integration.save.assert_called_once()
assert integration.connected is False
@patch("tasks.jobs.integrations.ASFF")
@patch("tasks.jobs.integrations.FindingOutput")
+344
@@ -4,11 +4,13 @@ from unittest.mock import MagicMock, patch
import openai
import pytest
from botocore.exceptions import ClientError
from django_celery_beat.models import IntervalSchedule, PeriodicTask
from tasks.jobs.lighthouse_providers import (
_create_bedrock_client,
_extract_bedrock_credentials,
)
from tasks.tasks import (
_cleanup_orphan_scheduled_scans,
_perform_scan_complete_tasks,
check_integrations_task,
check_lighthouse_provider_connection_task,
@@ -22,6 +24,8 @@ from api.models import (
Integration,
LighthouseProviderConfiguration,
LighthouseProviderModels,
Scan,
StateChoices,
)
@@ -1715,3 +1719,343 @@ class TestRefreshLighthouseProviderModelsTask:
assert result["deleted"] == 0
assert "error" in result
assert result["error"] is not None
@pytest.mark.django_db
class TestCleanupOrphanScheduledScans:
"""Unit tests for _cleanup_orphan_scheduled_scans helper function."""
def _create_periodic_task(self, provider_id, tenant_id):
"""Helper to create a PeriodicTask for testing."""
interval, _ = IntervalSchedule.objects.get_or_create(every=24, period="hours")
return PeriodicTask.objects.create(
name=f"scan-perform-scheduled-{provider_id}",
task="scan-perform-scheduled",
interval=interval,
kwargs=f'{{"tenant_id": "{tenant_id}", "provider_id": "{provider_id}"}}',
enabled=True,
)
def test_cleanup_deletes_orphan_when_both_available_and_scheduled_exist(
self, tenants_fixture, providers_fixture
):
"""Test that AVAILABLE scan is deleted when SCHEDULED also exists."""
tenant = tenants_fixture[0]
provider = providers_fixture[0]
periodic_task = self._create_periodic_task(provider.id, tenant.id)
# Create orphan AVAILABLE scan
orphan_scan = Scan.objects.create(
tenant_id=tenant.id,
provider=provider,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.AVAILABLE,
scheduler_task_id=periodic_task.id,
)
# Create SCHEDULED scan (next execution)
scheduled_scan = Scan.objects.create(
tenant_id=tenant.id,
provider=provider,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.SCHEDULED,
scheduler_task_id=periodic_task.id,
)
# Execute cleanup
deleted_count = _cleanup_orphan_scheduled_scans(
tenant_id=str(tenant.id),
provider_id=str(provider.id),
scheduler_task_id=periodic_task.id,
)
# Verify orphan was deleted
assert deleted_count == 1
assert not Scan.objects.filter(id=orphan_scan.id).exists()
assert Scan.objects.filter(id=scheduled_scan.id).exists()
def test_cleanup_does_not_delete_when_only_available_exists(
self, tenants_fixture, providers_fixture
):
"""Test that AVAILABLE scan is NOT deleted when no SCHEDULED exists."""
tenant = tenants_fixture[0]
provider = providers_fixture[0]
periodic_task = self._create_periodic_task(provider.id, tenant.id)
# Create only AVAILABLE scan (normal first scan scenario)
available_scan = Scan.objects.create(
tenant_id=tenant.id,
provider=provider,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.AVAILABLE,
scheduler_task_id=periodic_task.id,
)
# Execute cleanup
deleted_count = _cleanup_orphan_scheduled_scans(
tenant_id=str(tenant.id),
provider_id=str(provider.id),
scheduler_task_id=periodic_task.id,
)
# Verify nothing was deleted
assert deleted_count == 0
assert Scan.objects.filter(id=available_scan.id).exists()
def test_cleanup_does_not_delete_when_only_scheduled_exists(
self, tenants_fixture, providers_fixture
):
"""Test that nothing is deleted when only SCHEDULED exists."""
tenant = tenants_fixture[0]
provider = providers_fixture[0]
periodic_task = self._create_periodic_task(provider.id, tenant.id)
# Create only SCHEDULED scan (normal subsequent scan scenario)
scheduled_scan = Scan.objects.create(
tenant_id=tenant.id,
provider=provider,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.SCHEDULED,
scheduler_task_id=periodic_task.id,
)
# Execute cleanup
deleted_count = _cleanup_orphan_scheduled_scans(
tenant_id=str(tenant.id),
provider_id=str(provider.id),
scheduler_task_id=periodic_task.id,
)
# Verify nothing was deleted
assert deleted_count == 0
assert Scan.objects.filter(id=scheduled_scan.id).exists()
def test_cleanup_returns_zero_when_no_scans_exist(
self, tenants_fixture, providers_fixture
):
"""Test that cleanup returns 0 when no scans exist."""
tenant = tenants_fixture[0]
provider = providers_fixture[0]
periodic_task = self._create_periodic_task(provider.id, tenant.id)
# Execute cleanup with no scans
deleted_count = _cleanup_orphan_scheduled_scans(
tenant_id=str(tenant.id),
provider_id=str(provider.id),
scheduler_task_id=periodic_task.id,
)
assert deleted_count == 0
def test_cleanup_deletes_multiple_orphan_available_scans(
self, tenants_fixture, providers_fixture
):
"""Test that multiple AVAILABLE orphan scans are all deleted."""
tenant = tenants_fixture[0]
provider = providers_fixture[0]
periodic_task = self._create_periodic_task(provider.id, tenant.id)
# Create multiple orphan AVAILABLE scans
orphan_scan_1 = Scan.objects.create(
tenant_id=tenant.id,
provider=provider,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.AVAILABLE,
scheduler_task_id=periodic_task.id,
)
orphan_scan_2 = Scan.objects.create(
tenant_id=tenant.id,
provider=provider,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.AVAILABLE,
scheduler_task_id=periodic_task.id,
)
# Create SCHEDULED scan
scheduled_scan = Scan.objects.create(
tenant_id=tenant.id,
provider=provider,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.SCHEDULED,
scheduler_task_id=periodic_task.id,
)
# Execute cleanup
deleted_count = _cleanup_orphan_scheduled_scans(
tenant_id=str(tenant.id),
provider_id=str(provider.id),
scheduler_task_id=periodic_task.id,
)
# Verify all orphans were deleted
assert deleted_count == 2
assert not Scan.objects.filter(id=orphan_scan_1.id).exists()
assert not Scan.objects.filter(id=orphan_scan_2.id).exists()
assert Scan.objects.filter(id=scheduled_scan.id).exists()
def test_cleanup_does_not_affect_different_provider(
self, tenants_fixture, providers_fixture
):
"""Test that cleanup only affects scans for the specified provider."""
tenant = tenants_fixture[0]
provider1 = providers_fixture[0]
provider2 = providers_fixture[1]
periodic_task1 = self._create_periodic_task(provider1.id, tenant.id)
periodic_task2 = self._create_periodic_task(provider2.id, tenant.id)
# Create orphan scenario for provider1
orphan_scan_p1 = Scan.objects.create(
tenant_id=tenant.id,
provider=provider1,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.AVAILABLE,
scheduler_task_id=periodic_task1.id,
)
scheduled_scan_p1 = Scan.objects.create(
tenant_id=tenant.id,
provider=provider1,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.SCHEDULED,
scheduler_task_id=periodic_task1.id,
)
# Create AVAILABLE scan for provider2 (should not be affected)
available_scan_p2 = Scan.objects.create(
tenant_id=tenant.id,
provider=provider2,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.AVAILABLE,
scheduler_task_id=periodic_task2.id,
)
# Execute cleanup for provider1 only
deleted_count = _cleanup_orphan_scheduled_scans(
tenant_id=str(tenant.id),
provider_id=str(provider1.id),
scheduler_task_id=periodic_task1.id,
)
# Verify only provider1's orphan was deleted
assert deleted_count == 1
assert not Scan.objects.filter(id=orphan_scan_p1.id).exists()
assert Scan.objects.filter(id=scheduled_scan_p1.id).exists()
assert Scan.objects.filter(id=available_scan_p2.id).exists()
def test_cleanup_does_not_affect_manual_scans(
self, tenants_fixture, providers_fixture
):
"""Test that cleanup only affects SCHEDULED trigger scans, not MANUAL."""
tenant = tenants_fixture[0]
provider = providers_fixture[0]
periodic_task = self._create_periodic_task(provider.id, tenant.id)
# Create orphan AVAILABLE scheduled scan
orphan_scan = Scan.objects.create(
tenant_id=tenant.id,
provider=provider,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.AVAILABLE,
scheduler_task_id=periodic_task.id,
)
# Create SCHEDULED scan
scheduled_scan = Scan.objects.create(
tenant_id=tenant.id,
provider=provider,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.SCHEDULED,
scheduler_task_id=periodic_task.id,
)
# Create AVAILABLE manual scan (should not be affected)
manual_scan = Scan.objects.create(
tenant_id=tenant.id,
provider=provider,
name="Manual scan",
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.AVAILABLE,
)
# Execute cleanup
deleted_count = _cleanup_orphan_scheduled_scans(
tenant_id=str(tenant.id),
provider_id=str(provider.id),
scheduler_task_id=periodic_task.id,
)
# Verify only scheduled orphan was deleted
assert deleted_count == 1
assert not Scan.objects.filter(id=orphan_scan.id).exists()
assert Scan.objects.filter(id=scheduled_scan.id).exists()
assert Scan.objects.filter(id=manual_scan.id).exists()
def test_cleanup_does_not_affect_different_scheduler_task(
self, tenants_fixture, providers_fixture
):
"""Test that cleanup only affects scans with the specified scheduler_task_id."""
tenant = tenants_fixture[0]
provider = providers_fixture[0]
periodic_task1 = self._create_periodic_task(provider.id, tenant.id)
# Create another periodic task
interval, _ = IntervalSchedule.objects.get_or_create(every=24, period="hours")
periodic_task2 = PeriodicTask.objects.create(
name=f"scan-perform-scheduled-other-{provider.id}",
task="scan-perform-scheduled",
interval=interval,
kwargs=f'{{"tenant_id": "{tenant.id}", "provider_id": "{provider.id}"}}',
enabled=True,
)
# Create orphan scenario for periodic_task1
orphan_scan = Scan.objects.create(
tenant_id=tenant.id,
provider=provider,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.AVAILABLE,
scheduler_task_id=periodic_task1.id,
)
scheduled_scan = Scan.objects.create(
tenant_id=tenant.id,
provider=provider,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.SCHEDULED,
scheduler_task_id=periodic_task1.id,
)
# Create AVAILABLE scan for periodic_task2 (should not be affected)
available_scan_other_task = Scan.objects.create(
tenant_id=tenant.id,
provider=provider,
name="Daily scheduled scan",
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.AVAILABLE,
scheduler_task_id=periodic_task2.id,
)
# Execute cleanup for periodic_task1 only
deleted_count = _cleanup_orphan_scheduled_scans(
tenant_id=str(tenant.id),
provider_id=str(provider.id),
scheduler_task_id=periodic_task1.id,
)
# Verify only periodic_task1's orphan was deleted
assert deleted_count == 1
assert not Scan.objects.filter(id=orphan_scan.id).exists()
assert Scan.objects.filter(id=scheduled_scan.id).exists()
assert Scan.objects.filter(id=available_scan_other_task.id).exists()
@@ -0,0 +1,28 @@
import warnings
from dashboard.common_methods import get_section_containers_threatscore
warnings.filterwarnings("ignore")
def get_table(data):
aux = data[
[
"REQUIREMENTS_ID",
"REQUIREMENTS_DESCRIPTION",
"REQUIREMENTS_ATTRIBUTES_SECTION",
"REQUIREMENTS_ATTRIBUTES_SUBSECTION",
"CHECKID",
"STATUS",
"REGION",
"ACCOUNTID",
"RESOURCEID",
]
].copy()
return get_section_containers_threatscore(
aux,
"REQUIREMENTS_ATTRIBUTES_SECTION",
"REQUIREMENTS_ATTRIBUTES_SUBSECTION",
"REQUIREMENTS_ID",
)
+46 -8
View File
@@ -407,9 +407,11 @@ def display_data(
compliance_module = importlib.import_module(
f"dashboard.compliance.{current}"
)
data = data.drop_duplicates(
subset=["CHECKID", "STATUS", "MUTED", "RESOURCEID", "STATUSEXTENDED"]
)
# Build subset list based on available columns
dedup_columns = ["CHECKID", "STATUS", "RESOURCEID", "STATUSEXTENDED"]
if "MUTED" in data.columns:
dedup_columns.insert(2, "MUTED")
data = data.drop_duplicates(subset=dedup_columns)
if "threatscore" in analytics_input:
data = get_threatscore_mean_by_pillar(data)
@@ -652,6 +654,7 @@ def get_table(current_compliance, table):
def get_threatscore_mean_by_pillar(df):
score_per_pillar = {}
max_score_per_pillar = {}
counted_findings_per_pillar = {}
for _, row in df.iterrows():
pillar = (
@@ -663,6 +666,18 @@ def get_threatscore_mean_by_pillar(df):
if pillar not in score_per_pillar:
score_per_pillar[pillar] = 0
max_score_per_pillar[pillar] = 0
counted_findings_per_pillar[pillar] = set()
# Skip muted findings for score calculation
is_muted = "MUTED" in df.columns and row.get("MUTED") == "True"
if is_muted:
continue
# Create unique finding identifier to avoid counting duplicates
finding_id = f"{row.get('CHECKID', '')}_{row.get('RESOURCEID', '')}"
if finding_id in counted_findings_per_pillar[pillar]:
continue
counted_findings_per_pillar[pillar].add(finding_id)
level_of_risk = pd.to_numeric(
row["REQUIREMENTS_ATTRIBUTES_LEVELOFRISK"], errors="coerce"
@@ -706,6 +721,10 @@ def get_table_prowler_threatscore(df):
score_per_pillar = {}
max_score_per_pillar = {}
pillars = {}
counted_findings_per_pillar = {}
counted_pass = set()
counted_fail = set()
counted_muted = set()
df_copy = df.copy()
@@ -720,6 +739,24 @@ def get_table_prowler_threatscore(df):
pillars[pillar] = {"FAIL": 0, "PASS": 0, "MUTED": 0}
score_per_pillar[pillar] = 0
max_score_per_pillar[pillar] = 0
counted_findings_per_pillar[pillar] = set()
# Create unique finding identifier
finding_id = f"{row.get('CHECKID', '')}_{row.get('RESOURCEID', '')}"
# Check if muted
is_muted = "MUTED" in df_copy.columns and row.get("MUTED") == "True"
# Count muted findings (separate from score calculation)
if is_muted and finding_id not in counted_muted:
counted_muted.add(finding_id)
pillars[pillar]["MUTED"] += 1
continue # Skip muted findings for score calculation
# Skip if already counted for this pillar
if finding_id in counted_findings_per_pillar[pillar]:
continue
counted_findings_per_pillar[pillar].add(finding_id)
level_of_risk = pd.to_numeric(
row["REQUIREMENTS_ATTRIBUTES_LEVELOFRISK"], errors="coerce"
@@ -738,13 +775,14 @@ def get_table_prowler_threatscore(df):
max_score_per_pillar[pillar] += level_of_risk * weight
if row["STATUS"] == "PASS":
pillars[pillar]["PASS"] += 1
if finding_id not in counted_pass:
counted_pass.add(finding_id)
pillars[pillar]["PASS"] += 1
score_per_pillar[pillar] += level_of_risk * weight
elif row["STATUS"] == "FAIL":
pillars[pillar]["FAIL"] += 1
if "MUTED" in row and row["MUTED"] == "True":
pillars[pillar]["MUTED"] += 1
if finding_id not in counted_fail:
counted_fail.add(finding_id)
pillars[pillar]["FAIL"] += 1
result_df = []
+2 -1
@@ -312,7 +312,8 @@ The type of resource being audited. This field helps categorize and organize fin
- **Azure**: Use types from [Azure Resource Graph](https://learn.microsoft.com/en-us/azure/governance/resource-graph/reference/supported-tables-resources), for example: `Microsoft.Storage/storageAccounts`.
- **Google Cloud**: Use [Cloud Asset Inventory asset types](https://cloud.google.com/asset-inventory/docs/asset-types), for example: `compute.googleapis.com/Instance`.
- **Kubernetes**: Use types shown under `KIND` from `kubectl api-resources`.
- **M365 / GitHub**: Leave empty due to lack of standardized types.
- **Oracle Cloud Infrastructure**: Use types from [Oracle Cloud Infrastructure documentation](https://docs.public.oneportal.content.oci.oraclecloud.com/en-us/iaas/Content/Search/Tasks/queryingresources_topic-Listing_Supported_Resource_Types.htm).
- **M365 / GitHub / MongoDB Atlas**: Leave empty due to lack of standardized types.
#### Description
+327
@@ -0,0 +1,327 @@
---
title: 'End-2-End Tests for Prowler App'
---
End-to-end (E2E) tests validate complete user flows in Prowler App (UI + API). These tests are implemented with [Playwright](https://playwright.dev/) under the `ui/tests` folder and are designed to run against a Prowler App environment.
## General Recommendations
When adding or maintaining E2E tests for Prowler App, follow these guidelines:
1. **Test real user journeys**
Focus on full workflows (for example, sign-up → login → add provider → launch scan) instead of low-level UI details already covered by unit or integration tests.
2. **Group tests by entity or feature area**
- Organize E2E tests by entity or feature area (for example, `providers.spec.ts`, `scans.spec.ts`, `invitations.spec.ts`, `sign-up.spec.ts`).
- Each entity should have its own test file and corresponding page model class (for example, `ProvidersPage`, `ScansPage`, `InvitationsPage`).
- Related tests for the same entity should be grouped together in the same test file to improve maintainability and make it easier to find and update tests for a specific feature.
3. **Use a Page Model (Page Object Model)**
- Encapsulate selectors and common actions in page classes instead of repeating them in each test.
- Leverage and extend the existing Playwright page models in `ui/tests`—such as `ProvidersPage`, `ScansPage`, and others—which are all based on the shared `BasePage`.
- Page models for Prowler App pages should be placed in their respective entity folders (for example, `ui/tests/providers/providers-page.ts`).
- Page models for external pages (not part of Prowler App) should be grouped in the `external` folder (for example, `ui/tests/external/github-page.ts`).
- This approach improves readability, reduces duplication, and makes refactors safer.
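**Example:** a minimal page model sketch (the `ExamplePage` class and its selectors are illustrative, not actual code from `ui/tests`; the real page models extend the shared `BasePage`):
```typescript
import { type Page, expect } from "@playwright/test";

// Illustrative page model: selectors and common actions for one entity page live here
// instead of being repeated in every test.
export class ExamplePage {
  constructor(private readonly page: Page) {}

  async goto() {
    await this.page.goto("/providers");
  }

  async openAddProviderDialog() {
    // Role-based selector; the button name is an assumption for illustration
    await this.page.getByRole("button", { name: "Add Provider" }).click();
  }

  async expectLoaded() {
    await expect(this.page.locator("main")).toBeVisible();
  }
}
```
A test then instantiates the model with the `page` fixture and calls these methods instead of inlining selectors.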
4. **Reuse authentication states (StorageState)**
- Multiple authentication setup projects are available that generate pre-authenticated state files stored in `playwright/.auth/`. Each project requires specific environment variables:
- `admin.auth.setup`: Admin users with full system permissions (requires `E2E_ADMIN_USER` / `E2E_ADMIN_PASSWORD`)
- `manage-scans.auth.setup`: Users with scan management permissions (requires `E2E_MANAGE_SCANS_USER` / `E2E_MANAGE_SCANS_PASSWORD`)
- `manage-integrations.auth.setup`: Users with integration management permissions (requires `E2E_MANAGE_INTEGRATIONS_USER` / `E2E_MANAGE_INTEGRATIONS_PASSWORD`)
- `manage-account.auth.setup`: Users with account management permissions (requires `E2E_MANAGE_ACCOUNT_USER` / `E2E_MANAGE_ACCOUNT_PASSWORD`)
- `manage-cloud-providers.auth.setup`: Users with cloud provider management permissions (requires `E2E_MANAGE_CLOUD_PROVIDERS_USER` / `E2E_MANAGE_CLOUD_PROVIDERS_PASSWORD`)
- `unlimited-visibility.auth.setup`: Users with unlimited visibility permissions (requires `E2E_UNLIMITED_VISIBILITY_USER` / `E2E_UNLIMITED_VISIBILITY_PASSWORD`)
- `invite-and-manage-users.auth.setup`: Users with user invitation and management permissions (requires `E2E_INVITE_AND_MANAGE_USERS_USER` / `E2E_INVITE_AND_MANAGE_USERS_PASSWORD`)
<Note>
If fixtures have been applied (fixtures are used to populate the database with initial development data), you can use the user `e2e@prowler.com` with password `Thisisapassword123@` to configure the Admin credentials by setting `E2E_ADMIN_USER=e2e@prowler.com` and `E2E_ADMIN_PASSWORD=Thisisapassword123@`.
</Note>
- Within test files, use `test.use({ storageState: "playwright/.auth/admin_user.json" })` to load the pre-authenticated state, avoiding redundant authentication steps in each test. This call must be placed at the file or `describe` level (not inside a test function) so the authentication state applies to all tests in that scope. This approach is preferred over declaring dependencies in `playwright.config.ts` because it provides more control over which authentication states are used in specific tests.
**Example:**
```typescript
// Use admin authentication state for all tests in this scope
test.use({ storageState: "playwright/.auth/admin_user.json" });
test("should perform admin action", async ({ page }) => {
// Test implementation
});
```
5. **Tag and document scenarios**
- Follow the existing naming convention for suites and test cases (for example, `SCANS-E2E-001`, `PROVIDER-E2E-003`) and use tags such as `@e2e`, `@serial`, and feature tags (for example, `@providers`, `@scans`, `@aws`) to filter and organize tests.
**Example:**
```typescript
test(
"should add a new AWS provider with static credentials",
{
tag: [
"@critical",
"@e2e",
"@providers",
"@aws",
"@serial",
"@PROVIDER-E2E-001",
],
},
async ({ page }) => {
// Test implementation
}
);
```
- Document each one in the Markdown files under `ui/tests`, including **Priority**, **Tags**, **Description**, **Preconditions**, **Flow steps**, **Expected results**, **Key verification points** and **Notes**.
**Example**
```Markdown
## Test Case: `SCANS-E2E-001` - Execute On-Demand Scan
**Priority:** `critical`
**Tags:**
- type → @e2e, @serial
- feature → @scans
**Description/Objective:** Validates the complete flow to execute an on-demand scan selecting a provider by UID and confirming success on the Scans page.
**Preconditions:**
- Admin user authentication required (admin.auth.setup setup)
- Environment variables configured for: E2E_AWS_PROVIDER_ACCOUNT_ID, E2E_AWS_PROVIDER_ACCESS_KEY and E2E_AWS_PROVIDER_SECRET_KEY
- Remove any existing AWS provider with the same Account ID before starting the test
- This test must be run serially and never in parallel with other tests, as it requires the provider with that Account ID to be already registered.
### Flow Steps:
1. Navigate to Scans page
2. Open provider selector and choose the entry whose text contains E2E_AWS_PROVIDER_ACCOUNT_ID
3. Optionally fill scan label (alias)
4. Click "Start now" to launch the scan
5. Verify the success toast appears
6. Verify a row in the Scans table contains the provided scan label (or shows the new scan entry)
### Expected Result:
- Scan is launched successfully
- Success toast is displayed to the user
- Scans table displays the new scan entry (including the alias when provided)
### Key verification points:
- Scans page loads correctly
- Provider select is available and lists the configured provider UID
- "Start now" button is rendered and enabled when form is valid
- Success toast message: "The scan was launched successfully."
- Table contains a row with the scan label or new scan state (queued/available/executing)
### Notes:
- The table may take a short time to reflect the new scan; assertions look for a row containing the alias.
- Provider cleanup performed before each test to ensure clean state
- Tests should run serially to avoid state conflicts.
```
6. **Use environment variables for secrets and dynamic data**
Credentials, provider identifiers, secrets, tokens must come from environment variables (for example, `E2E_AWS_PROVIDER_ACCOUNT_ID`, `E2E_AWS_PROVIDER_ACCESS_KEY`, `E2E_AWS_PROVIDER_SECRET_KEY`, `E2E_GCP_PROJECT_ID`).
<Warning>
Never commit real secrets, tokens, or account IDs to the repository.
</Warning>
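**Example:** a minimal sketch of consuming these variables inside a test (the variable names are the ones mentioned above; skipping when they are missing is an illustrative choice, not a project requirement):
```typescript
import { test } from "@playwright/test";

// Secrets and dynamic data come from the environment, never from the repository
const accountId = process.env.E2E_AWS_PROVIDER_ACCOUNT_ID;
const accessKey = process.env.E2E_AWS_PROVIDER_ACCESS_KEY;
const secretKey = process.env.E2E_AWS_PROVIDER_SECRET_KEY;

test("should add a new AWS provider with static credentials", async ({ page }) => {
  // Skip (rather than fail) when the AWS E2E variables are not configured
  test.skip(!accountId || !accessKey || !secretKey, "AWS E2E variables are not configured");
  // ... drive the flow through the page models using accountId / accessKey / secretKey
});
```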
7. **Keep tests deterministic and isolated**
- Use Playwright's `test.beforeEach()` and `test.afterEach()` hooks to manage test state:
- **`test.beforeEach()`**: Execute cleanup or setup logic before each test runs (for example, delete existing providers with a specific account ID to ensure a clean state).
- **`test.afterEach()`**: Execute cleanup logic after each test completes (for example, remove test data created during the test execution to prevent interference with subsequent tests).
- Define tests as serial using `test.describe.serial()` when they share state or resources that could interfere with parallel execution (for example, tests that use the same provider account ID or create dependent resources). This ensures tests within the serial group run sequentially, preventing race conditions and data conflicts.
- Use unique identifiers (for example, random suffixes for emails or labels) to prevent data collisions.
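**Example:** a minimal sketch combining a serial group with cleanup hooks (`deleteProviderByUid` is a hypothetical helper standing in for a page-model or API cleanup call):
```typescript
import { test } from "@playwright/test";

// Hypothetical cleanup helper: remove any provider left over from previous runs,
// either through the API or by driving the UI via a page model.
async function deleteProviderByUid(uid: string | undefined): Promise<void> {
  // ... cleanup implementation
}

// Serial group: these tests share the same provider account ID, so they must not run in parallel.
test.describe.serial("AWS provider lifecycle", () => {
  test.beforeEach(async () => {
    // Ensure a clean state before each test
    await deleteProviderByUid(process.env.E2E_AWS_PROVIDER_ACCOUNT_ID);
  });

  test.afterEach(async () => {
    // Remove data created during the test to avoid interfering with later tests
    await deleteProviderByUid(process.env.E2E_AWS_PROVIDER_ACCOUNT_ID);
  });

  test("should add a new AWS provider", async ({ page }) => {
    // Test implementation
  });
});
```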
8. **Use explicit waiting strategies**
- Avoid using `waitForLoadState('networkidle')` as it is unreliable and can lead to flaky tests or unnecessary delays.
- Leverage Playwright's auto-waiting capabilities by waiting for specific elements to be actionable (for example, `locator.click()`, `locator.fill()`, `locator.waitFor()`).
- **Prioritize selector strategies**: Prefer `page.getByRole()` over other approaches like `page.getByText()`. `getByRole()` is more resilient to UI changes, aligns with accessibility best practices, and better reflects how users interact with the application (by role and accessible name rather than implementation details).
- For dynamic content, wait for specific UI elements that indicate the page is ready (for example, button becoming enabled, a specific text appearing, etc).
- This approach makes tests more reliable, faster, and aligned with how users actually interact with the application.
**Common waiting patterns used in Prowler E2E tests:**
- **Element visibility assertions**: Use `expect(locator).toBeVisible()` or `expect(locator).not.toBeVisible()` to wait for elements to appear or disappear (Playwright automatically waits for these conditions).
- **URL changes**: Use `expect(page).toHaveURL(url)` or `page.waitForURL(url)` to wait for navigation to complete.
- **Element states**: Use `locator.waitFor({ state: "visible" })` or `locator.waitFor({ state: "hidden" })` when you need explicit state control.
- **Text content**: Use `expect(locator).toHaveText(text)` or `expect(locator).toContainText(text)` to wait for specific text to appear.
- **Element attributes**: Use `expect(locator).toHaveAttribute(name, value)` to wait for attributes like `aria-disabled="false"` indicating a button is enabled.
- **Custom conditions**: Use `page.waitForFunction(() => condition)` for complex conditions that cannot be expressed with locators (for example, checking DOM element dimensions or computed styles).
- **Retryable assertions**: Use `expect(async () => { ... }).toPass({ timeout })` for conditions that may take time to stabilize (for example, waiting for table rows to filter after a server request).
- **Scroll into view**: Use `locator.scrollIntoViewIfNeeded()` before interacting with elements that may be outside the viewport.
**Example from Prowler tests:**
```typescript
// Wait for page to load by checking main content is visible
await expect(page.locator("main")).toBeVisible();
// Wait for URL change after form submission
await expect(page).toHaveURL("/providers");
// Wait for button to become enabled
await expect(submitButton).toHaveAttribute("aria-disabled", "false");
// Wait for loading spinner to disappear
await expect(page.getByText("Loading")).not.toBeVisible();
// Wait for custom condition
await page.waitForFunction(() => {
const main = document.querySelector("main");
return main && main.offsetHeight > 0;
});
// Wait for retryable condition (e.g., table filtering)
await expect(async () => {
const rowCount = await tableRows.count();
expect(rowCount).toBeLessThanOrEqual(1);
}).toPass({ timeout: 20000 });
```
## Running Prowler Tests
E2E tests for Prowler App run from the `ui` project using Playwright. The Playwright configuration lives in `ui/playwright.config.ts` and defines:
- `testDir: "./tests"`: location of E2E test files (relative to the `ui` project root, so `ui/tests`).
- `webServer`: how to start the Next.js development server and connect to the Prowler API.
- `use.baseURL`: base URL for browser interactions (`AUTH_URL` if set, otherwise `http://localhost:3000`).
- `reporter: [["list"]]`: uses the list reporter to display test results in a concise format in the terminal. Other reporter options are available (for example, `html`, `json`, `junit`, `github`), and multiple reporters can be configured simultaneously. See the [Playwright reporter documentation](https://playwright.dev/docs/test-reporters) for all available options.
- `expect.timeout: 20000`: timeout for assertions (20 seconds). This is the maximum time Playwright will wait for an assertion to pass before considering it failed.
- **Test artifacts** (in `use` configuration): By default, `trace`, `screenshot`, and `video` are set to `"off"` to minimize resource usage. To review test failures or debug issues, these can be enabled in `playwright.config.ts` by changing them to `"on"`, `"on-first-retry"`, or `"retain-on-failure"` depending on your needs.
- `outputDir: "/tmp/playwright-tests"`: directory where Playwright stores test artifacts (screenshots, videos, traces) during test execution.
- **CI-specific configuration**: The configuration uses different settings when running in CI environments (detected via `process.env.CI`):
- **Retries**: `2` retries in CI (to handle flaky tests), `0` retries locally (for faster feedback during development).
- **Workers**: `1` worker in CI (sequential execution for stability), `undefined` locally (parallel execution by default for faster test runs).
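Putting those settings together, a condensed sketch of the shape of `ui/playwright.config.ts` looks roughly like this (values are illustrative; the file in the repository is the source of truth):
```typescript
import { defineConfig } from "@playwright/test";

// Condensed sketch of the options described above, not the full file.
export default defineConfig({
  testDir: "./tests",
  outputDir: "/tmp/playwright-tests",
  reporter: [["list"]],
  expect: { timeout: 20000 },
  retries: process.env.CI ? 2 : 0,
  workers: process.env.CI ? 1 : undefined,
  use: {
    baseURL: process.env.AUTH_URL || "http://localhost:3000",
    trace: "off",
    screenshot: "off",
    video: "off",
  },
  webServer: {
    command: "pnpm run dev",
    url: "http://localhost:3000",
    reuseExistingServer: !process.env.CI, // reuse a dev server already running locally
  },
});
```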
### Prerequisites
Before running E2E tests:
- **Install root and UI dependencies**
- Follow the [developer guide introduction](/developer-guide/introduction#getting-the-code-and-installing-all-dependencies) to clone the repository and install core dependencies.
- From the `ui` directory, install frontend dependencies:
```bash
cd ui
pnpm install
pnpm run test:e2e:install # Install Playwright browsers
```
- **Ensure Prowler API is available**
- By default, Playwright uses `NEXT_PUBLIC_API_BASE_URL=http://localhost:8080/api/v1` (configured in `playwright.config.ts`).
- Start Prowler API so it is reachable on that URL (for example, via `docker-compose-dev.yml` or the development orchestration used locally).
- If a different API URL is required, set `NEXT_PUBLIC_API_BASE_URL` accordingly before running the tests.
- **Ensure Prowler App UI is available**
- Playwright automatically starts the Next.js server through the `webServer` block in `playwright.config.ts` (`pnpm run dev` by default).
- If the UI is already running on `http://localhost:3000`, Playwright will reuse the existing server when `reuseExistingServer` is `true`.
- **Configure E2E environment variables**
- Suite-specific variables (for example, provider account IDs, credentials, and E2E user data) must be provided before running tests.
- They can be defined either:
- As exported environment variables in the shell before executing the Playwright commands, or
- In a `.env.local` or `.env` file under `ui/`, and then loaded into the shell before running tests, for example:
```bash
cd ui
set -a
source .env.local # or .env
set +a
```
- Refer to the Markdown documentation files in `ui/tests` for each E2E suite (for example, the `*.md` files that describe sign-up, providers, scans, invitations, and other flows) to see the exact list of required variables and their meaning.
- Each E2E test suite explicitly checks that its required environment variables are defined at runtime and will fail with a clear error message if any mandatory variable is missing, making misconfiguration easy to detect.
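As an illustration of that runtime guard, a suite might validate its variables up front like this (the variable names are hypothetical; each suite's `*.md` file in `ui/tests` lists the real ones):
```typescript
// Hypothetical guard at the top of a suite's spec or fixture file.
const REQUIRED_ENV_VARS = ["E2E_USER_EMAIL", "E2E_USER_PASSWORD"];

for (const name of REQUIRED_ENV_VARS) {
  if (!process.env[name]) {
    throw new Error(
      `Missing required environment variable "${name}". ` +
        "See the suite's Markdown documentation under ui/tests.",
    );
  }
}
```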
### Executing Tests
To execute E2E tests for Prowler App:
1. **Run the full E2E suite (headless)**
From the `ui` directory:
```bash
pnpm run test:e2e
```
This command runs Playwright headlessly across all configured projects.
2. **Run E2E tests with the Playwright UI runner**
```bash
pnpm run test:e2e:ui
```
This opens the Playwright test runner UI to inspect, debug, and rerun specific tests or projects.
3. **Debug E2E tests interactively**
```bash
pnpm run test:e2e:debug
```
Use this mode to step through flows, inspect selectors, and adjust timings. It runs tests in headed mode with debugging tools enabled.
4. **Run tests in headed mode without debugger**
```bash
pnpm run test:e2e:headed
```
This is useful to visually confirm flows while still running the full suite.
5. **View previous test reports**
```bash
pnpm run test:e2e:report
```
This opens the latest Playwright HTML report, including traces and screenshots when enabled.
6. **Run specific tests or subsets**
In addition to the predefined scripts, Playwright allows filtering which tests run. These examples use the Playwright CLI directly through `pnpm`:
- **By test ID (`@ID` in the test metadata or description)**
To run a single test case identified by its ID (for example, `@PROVIDER-E2E-001` or `@SCANS-E2E-001`):
```bash
pnpm playwright test --grep @PROVIDER-E2E-001
```
- **By tags**
To run all tests that share a common tag (for example, all provider E2E tests tagged with `@providers`):
```bash
pnpm playwright test --grep @providers
```
This is useful to focus on a specific feature area such as providers, scans, invitations, or sign-up.
- **By Playwright project**
To run only the tests associated with a given project defined in `playwright.config.ts` (for example, `providers` or `scans`):
```bash
pnpm playwright test --project=providers
```
Combining project and grep filters is also supported, enabling very narrow runs (for example, a single test ID within the `providers` project). For additional CLI options and combinations, see the [Playwright command line documentation](https://playwright.dev/docs/test-cli).
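Both filters work because `--grep` matches against the test title and any declared tags. A hypothetical test declaration that would be picked up by the commands above might look like this (the ID, tag, and selectors are illustrative):
```typescript
import { expect, test } from "@playwright/test";

// Including the ID in the title and declaring a tag makes both
// `--grep @PROVIDER-E2E-001` and `--grep @providers` match this test.
test(
  "@PROVIDER-E2E-001 user can add a cloud provider",
  { tag: "@providers" },
  async ({ page }) => {
    await page.goto("/providers");
    await expect(page.getByRole("heading", { name: "Providers" })).toBeVisible();
  },
);
```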
<Note>
For detailed flows, preconditions, and environment variable requirements per feature, always refer to the Markdown files in `ui/tests`. Those documents are the single source of truth for business expectations and validation points in each E2E suite.
</Note>
+1
@@ -220,6 +220,7 @@ The function returns a JSON file containing the list of regions for the provider
"sa-east-1", "us-east-1", "us-east-2", "us-west-1", "us-west-2"
],
"aws-cn": ["cn-north-1", "cn-northwest-1"],
"aws-eusc": ["eusc-de-east-1"],
"aws-us-gov": ["us-gov-east-1", "us-gov-west-1"]
}
}
+34 -10
@@ -19,7 +19,9 @@
"groups": [
{
"group": "Welcome",
"pages": ["introduction"]
"pages": [
"introduction"
]
},
{
"group": "Prowler Cloud",
@@ -49,7 +51,9 @@
},
{
"group": "Prowler Lighthouse AI",
"pages": ["getting-started/products/prowler-lighthouse-ai"]
"pages": [
"getting-started/products/prowler-lighthouse-ai"
]
},
{
"group": "Prowler MCP Server",
@@ -95,7 +99,14 @@
},
"user-guide/tutorials/prowler-app-rbac",
"user-guide/tutorials/prowler-app-api-keys",
"user-guide/tutorials/prowler-app-mute-findings",
{
"group": "Mutelist",
"expanded": true,
"pages": [
"user-guide/tutorials/prowler-app-simple-mutelist",
"user-guide/tutorials/prowler-app-mute-findings"
]
},
{
"group": "Integrations",
"expanded": true,
@@ -149,7 +160,9 @@
"user-guide/cli/tutorials/quick-inventory",
{
"group": "Tutorials",
"pages": ["user-guide/cli/tutorials/parallel-execution"]
"pages": [
"user-guide/cli/tutorials/parallel-execution"
]
}
]
},
@@ -237,7 +250,9 @@
},
{
"group": "LLM",
"pages": ["user-guide/providers/llm/getting-started-llm"]
"pages": [
"user-guide/providers/llm/getting-started-llm"
]
},
{
"group": "Oracle Cloud Infrastructure",
@@ -250,7 +265,9 @@
},
{
"group": "Compliance",
"pages": ["user-guide/compliance/tutorials/threatscore"]
"pages": [
"user-guide/compliance/tutorials/threatscore"
]
}
]
},
@@ -291,7 +308,8 @@
"group": "Testing",
"pages": [
"developer-guide/unit-testing",
"developer-guide/integration-testing"
"developer-guide/integration-testing",
"developer-guide/end2end-testing"
]
},
"developer-guide/debugging",
@@ -304,15 +322,21 @@
},
{
"tab": "Security",
"pages": ["security"]
"pages": [
"security"
]
},
{
"tab": "Contact Us",
"pages": ["contact"]
"pages": [
"contact"
]
},
{
"tab": "Troubleshooting",
"pages": ["troubleshooting"]
"pages": [
"troubleshooting"
]
},
{
"tab": "About Us",
@@ -115,8 +115,8 @@ To update the environment file:
Edit the `.env` file and change version values:
```env
PROWLER_UI_VERSION="5.15.0"
PROWLER_API_VERSION="5.15.0"
PROWLER_UI_VERSION="5.16.0"
PROWLER_API_VERSION="5.16.0"
```
<Note>
@@ -93,6 +93,11 @@ The following list includes all the Azure checks with configurable variables tha
## GCP
### Configurable Checks
The following list includes all the GCP checks with configurable variables that can be changed in the configuration yaml file:
| Check Name | Value | Type |
|---------------------------------------------------------------|--------------------------------------------------|-----------------|
| `compute_instance_group_multiple_zones` | `mig_min_zones` | Integer |
## Kubernetes
@@ -548,6 +553,9 @@ gcp:
# GCP Compute Configuration
# gcp.compute_public_address_shodan
shodan_api_key: null
# gcp.compute_instance_group_multiple_zones
# Minimum number of zones a MIG should span for high availability
mig_min_zones: 2
# Kubernetes Configuration
kubernetes:
@@ -6,15 +6,16 @@ By default Prowler is able to scan the following AWS partitions:
- Commercial: `aws`
- China: `aws-cn`
- European Sovereign Cloud: `aws-eusc`
- GovCloud (US): `aws-us-gov`
<Note>
To check the available regions for each partition and service, refer to: [aws\_regions\_by\_service.json](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/aws_regions_by_service.json)
</Note>
## Scanning AWS China and GovCloud Partitions in Prowler
## Scanning AWS China, European Sovereign Cloud and GovCloud Partitions in Prowler
When scanning the China (`aws-cn`) or GovCloud (`aws-us-gov`), ensure one of the following:
When scanning the China (`aws-cn`), European Sovereign Cloud (`aws-eusc`) or GovCloud (`aws-us-gov`) partitions, ensure one of the following:
- Your AWS credentials include a valid region within the desired partition.
@@ -83,6 +84,29 @@ To scan an account in the AWS GovCloud (US) partition (`aws-us-gov`):
<Note>
With this configuration, all partition regions will be scanned without needing the `-f/--region` flag
</Note>
### AWS European Sovereign Cloud
To scan an account in the AWS European Sovereign Cloud partition (`aws-eusc`):
- By using the `-f/--region` flag:
```
prowler aws --region eusc-de-east-1
```
- By using the region configured in your AWS profile at `~/.aws/credentials` or `~/.aws/config`:
```
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXX
region = eusc-de-east-1
```
<Note>
With this configuration, all partition regions will be scanned without needing the `-f/--region` flag
</Note>
### AWS ISO (US \& Europe)
@@ -99,6 +123,9 @@ The AWS ISO partitions—commonly referred to as "secret partitions"—are air-g
"cn-north-1",
"cn-northwest-1"
],
"aws-eusc": [
"eusc-de-east-1"
],
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
@@ -1,20 +1,26 @@
---
title: 'Mute Findings (Mutelist)'
title: 'Advanced Mutelist (YAML)'
---
import { VersionBadge } from "/snippets/version-badge.mdx"
<VersionBadge version="5.9.0" />
Prowler App allows users to mute specific findings to focus on the most critical security issues. This comprehensive guide demonstrates how to effectively use the Mutelist feature to manage and prioritize security findings.
Prowler App allows users to mute specific findings to focus on the most critical security issues. This guide demonstrates how to use the Advanced Mutelist feature with YAML configuration for complex, pattern-based muting rules.
## What Is the Mutelist Feature?
<Note>
For muting individual findings without YAML configuration, use [Simple Mutelist](/user-guide/tutorials/prowler-app-simple-mutelist) to mute findings directly from the Findings table.
The Mutelist feature enables users to:
</Note>
- **Suppress specific findings** from appearing in future scans
- **Focus on critical issues** by hiding resolved or accepted risks
## What Is Advanced Mutelist?
Advanced Mutelist enables users to create powerful, pattern-based muting rules using YAML configuration:
- **Define complex muting patterns** using regular expressions
- **Mute findings by check, region, resource, or tag** across multiple accounts
- **Apply wildcards** to mute entire categories of findings
- **Create exceptions** within broad muting rules
- **Maintain audit trails** of muted findings for compliance purposes
- **Streamline security workflows** by reducing noise from non-critical findings
## Prerequisites
@@ -28,46 +34,51 @@ Before muting findings, ensure:
Muting findings does not resolve underlying security issues. Review each finding carefully before muting to ensure it represents an acceptable risk or has been properly addressed.
</Warning>
## Step 1: Add a provider
## Step 1: Connect a Provider
To configure Mutelist:
To configure Advanced Mutelist:
1. Log into Prowler App
2. Navigate to the providers page
2. Navigate to the Providers page
![Add provider](/images/mutelist-ui-1.png)
3. Add a provider, then "Configure Muted Findings" button will be enabled in providers page and scans page
3. Connect a provider to enable Mutelist configuration
![Button enabled in providers page](/images/mutelist-ui-2.png)
![Button enabled in scans pages](/images/mutelist-ui-3.png)
## Step 2: Configure Mutelist
## Step 2: Configure Advanced Mutelist
1. Open the modal by clicking "Configure Muted Findings" button
![Open modal](/images/mutelist-ui-4.png)
1. Provide a valid Mutelist in `YAML` format. More details about Mutelist [here](/user-guide/cli/tutorials/mutelist)
1. Navigate to the Mutelist page from the left navigation menu
2. Select the "Advanced" tab
3. Provide a valid Mutelist configuration in `YAML` format
<Note>
The YAML format follows the same specification as Prowler CLI. See [CLI Mutelist documentation](/user-guide/cli/tutorials/mutelist) for detailed syntax reference.
</Note>
![Valid YAML configuration](/images/mutelist-ui-5.png)
If the YAML configuration is invalid, an error message will be displayed
![Wrong YAML configuration](/images/mutelist-ui-7.png)
![Wrong YAML configuration 2](/images/mutelist-ui-8.png)
## Step 3: Review the Mutelist
## Step 3: Review and Update the Configuration
1. Once added, the configuration can be removed or updated
1. Once added, the configuration can be updated or removed from the Advanced tab
![Remove or update configuration](/images/mutelist-ui-6.png)
## Step 4: Check muted findings in the scan results
## Step 4: Verify Muted Findings in Scan Results
1. Run a new scan
2. Check the muted findings in the scan results
![Check muted fidings](/images/mutelist-ui-9.png)
2. Navigate to the Findings page to verify muted findings
![Check muted findings](/images/mutelist-ui-9.png)
<Note>
The Mutelist configuration takes effect on the next scans.
The Advanced Mutelist configuration takes effect on subsequent scans. Existing findings are not retroactively muted.
</Note>
## Mutelist Ready To Use Examples
## YAML Configuration Examples
Below are examples for different cloud providers supported by Prowler App. Check how the mutelist works [here](/user-guide/cli/tutorials/mutelist#how-the-mutelist-works).
Below are ready-to-use examples for different cloud providers. For detailed syntax and logic explanation, see [CLI Mutelist documentation](/user-guide/cli/tutorials/mutelist#how-the-mutelist-works).
### AWS Provider
@@ -0,0 +1,180 @@
---
title: "Simple Mutelist"
---
import { VersionBadge } from "/snippets/version-badge.mdx";
<VersionBadge version="5.16.0" />
Prowler App provides Simple Mutelist, an intuitive way to mute findings directly from the Findings page without writing YAML configuration. This feature streamlines the muting workflow by allowing individual or bulk muting with just a few clicks.
## What Is Simple Mutelist?
Simple Mutelist enables users to:
- **Mute findings directly from the Findings table** using checkbox selection
- **Perform bulk muting** of multiple findings at once
- **Manage mute rules** through a dedicated interface
- **Toggle mute rules on and off** without deleting them
- **Edit mute rule justifications** after creation
<Note>
Simple Mutelist creates rules based on the finding's unique identifier (UID). For complex muting patterns based on checks, regions, tags, or regular expressions, use [Advanced Mutelist](/user-guide/tutorials/prowler-app-mute-findings) with YAML configuration.
</Note>
## Accessing the Mutelist Page
To access the Mutelist page:
1. Click "Mutelist" in the left navigation menu
The Mutelist page contains two tabs:
- **Simple:** Displays a table of mute rules created through Simple Mutelist
- **Advanced:** Provides YAML-based configuration for complex muting patterns
## Muting Findings from the Findings Page
### Muting Individual Findings
To mute a single finding:
1. Navigate to the Findings page
2. Locate the finding to mute
3. Click the actions menu (three dots) on the finding row
4. Select "Mute"
5. Enter a justification for muting this finding
6. Click "Confirm" to create the mute rule
### Muting Multiple Findings (Bulk Muting)
To mute multiple findings at once:
1. Navigate to the Findings page
2. Select findings using the checkboxes in the leftmost column
3. Click the floating "Mute" button that appears at the bottom of the screen
4. Enter a justification that applies to all selected findings
5. Click "Confirm" to create mute rules for all selected findings
<Note>
Findings that are already muted display a muted icon instead of a checkbox. These findings cannot be selected for bulk operations.
</Note>
## Managing Mute Rules
### Viewing Mute Rules
To view all mute rules:
1. Navigate to the Mutelist page
2. Select the "Simple" tab
3. The table displays all mute rules with the following information:
- **Finding UID:** The unique identifier of the muted finding
- **Justification:** The reason provided for muting
- **Enabled:** Whether the rule is currently active
- **Created:** When the rule was created
### Enabling and Disabling Mute Rules
To toggle a mute rule without deleting it:
1. Navigate to the Mutelist page
2. Select the "Simple" tab
3. Locate the mute rule
4. Use the toggle switch in the "Enabled" column to enable or disable the rule
<Note>
Disabled mute rules remain in the system but do not affect findings. Findings associated with disabled rules will appear as unmuted in subsequent scans.
</Note>
### Editing Mute Rules
To edit a mute rule's justification:
1. Navigate to the Mutelist page
2. Select the "Simple" tab
3. Click the actions menu (three dots) on the mute rule row
4. Select "Edit"
5. Update the justification
6. Click "Save" to apply changes
### Deleting Mute Rules
To permanently remove a mute rule:
1. Navigate to the Mutelist page
2. Select the "Simple" tab
3. Click the actions menu (three dots) on the mute rule row
4. Select "Delete"
5. Confirm the deletion
<Warning>
Deleting a mute rule is permanent. The finding will appear as unmuted in subsequent scans. To temporarily unmute a finding without losing the rule, disable the rule instead of deleting it.
</Warning>
## How Simple Mutelist Works
Simple Mutelist creates mute rules based on a finding's unique identifier (UID). When a mute rule is created:
- **Existing findings** matching the UID are immediately marked as muted
- **Historical findings** with the same UID are also muted
- **Future findings** from subsequent scans are automatically muted if they match the UID
### Uniqueness Constraint
Each finding UID can only have one mute rule. Attempting to create a duplicate mute rule for the same finding displays an error message indicating the rule already exists.
## Simple Mutelist vs. Advanced Mutelist
| Feature | Simple Mutelist | Advanced Mutelist |
| ------------------------ | ----------------------------------------- | ------------------------------------------------------ |
| **Configuration method** | Point-and-click interface | YAML configuration file |
| **Muting scope** | Individual finding UIDs | Patterns based on checks, regions, resources, and tags |
| **Regular expressions** | Not supported | Fully supported |
| **Bulk operations** | Checkbox selection in Findings table | YAML wildcards and patterns |
| **Best for** | Quick, ad-hoc muting of specific findings | Complex, policy-driven muting rules |
### When to Use Simple Mutelist
- Muting specific findings identified during review
- Quick suppression of known false positives
- Ad-hoc muting without YAML knowledge
### When to Use Advanced Mutelist
- Muting all findings for a specific check across regions
- Pattern-based muting using regular expressions
- Tag-based muting for environment-specific resources
- Complex rules with exceptions
## Best Practices
1. **Provide meaningful justifications:** Document why each finding is muted for audit trails and team communication
2. **Review muted findings regularly:** Periodically audit mute rules to ensure they remain valid
3. **Use disable instead of delete:** When temporarily unmuting findings, disable rules rather than deleting them
4. **Combine with Advanced Mutelist:** Use Simple Mutelist for specific findings and Advanced Mutelist for broad patterns
5. **Limit bulk muting:** Review findings individually when possible to ensure appropriate justification for each
## Troubleshooting
### Duplicate Rule Error
If an error indicates a mute rule already exists for a finding:
1. Navigate to the Mutelist page
2. Search for the existing rule in the Simple tab
3. Edit the existing rule's justification if needed, or
4. Delete the existing rule and create a new one
### Finding Still Appears Unmuted
If a muted finding still appears unmuted:
1. Verify the mute rule exists in the Mutelist page
2. Ensure the mute rule is enabled (toggle is on)
3. Check that the finding UID matches the mute rule
4. Wait for the next scan to see updated muting status on historical findings
+2 -2
@@ -5,8 +5,8 @@ This package provides MCP tools for accessing:
- Prowler Hub: All security artifacts (detections, remediations and frameworks) supported by Prowler
"""
__version__ = "0.1.0"
__version__ = "0.3.0"
__author__ = "Prowler Team"
__email__ = "engineering@prowler.com"
__all__ = ["__version__", "prowler_mcp_server"]
__all__ = ["__version__", "__author__", "__email__"]
@@ -6,14 +6,13 @@ across all providers.
from typing import Any
from pydantic import Field
from prowler_mcp_server.prowler_app.models.resources import (
DetailedResource,
ResourcesListResponse,
ResourcesMetadataResponse,
)
from prowler_mcp_server.prowler_app.tools.base import BaseTool
from pydantic import Field
class ResourcesTools(BaseTool):
@@ -188,7 +187,7 @@ class ResourcesTools(BaseTool):
1. Configuration Details:
- metadata: Provider-specific configuration (tags, policies, encryption settings, network rules)
- partition: Provider-specific partition/region grouping (e.g., aws, aws-cn, aws-us-gov for AWS)
- partition: Provider-specific partition/region grouping (e.g., aws, aws-cn, aws-eusc, aws-us-gov for AWS)
2. Temporal Tracking:
- inserted_at: When Prowler first discovered this resource
-1
@@ -14,7 +14,6 @@ requires-python = ">=3.12"
version = "0.3.0"
[project.scripts]
generate-prowler-app-mcp-server = "prowler_mcp_server.prowler_app.utils.server_generator:generate_server_file"
prowler-mcp = "prowler_mcp_server.main:main"
[tool.uv]
+27 -2
@@ -2,15 +2,39 @@
All notable changes to the **Prowler SDK** are documented in this file.
## [5.17.0] (Prowler UNRELEASED)
### Added
- Add Prowler ThreatScore for the Alibaba Cloud provider [(#9511)](https://github.com/prowler-cloud/prowler/pull/9511)
- `compute_instance_group_multiple_zones` check for GCP provider [(#9566)](https://github.com/prowler-cloud/prowler/pull/9566)
- Support AWS European Sovereign Cloud [(#9649)](https://github.com/prowler-cloud/prowler/pull/9649)
- `compute_instance_disk_auto_delete_disabled` check for GCP provider [(#9604)](https://github.com/prowler-cloud/prowler/pull/9604)
- Bedrock service pagination [(#9606)](https://github.com/prowler-cloud/prowler/pull/9606)
### Changed
- Update AWS Step Functions service metadata to new format [(#9432)](https://github.com/prowler-cloud/prowler/pull/9432)
- Update AWS Route 53 service metadata to new format [(#9406)](https://github.com/prowler-cloud/prowler/pull/9406)
- Update AWS SQS service metadata to new format [(#9429)](https://github.com/prowler-cloud/prowler/pull/9429)
- Update AWS Shield service metadata to new format [(#9427)](https://github.com/prowler-cloud/prowler/pull/9427)
- Update AWS Secrets Manager service metadata to new format [(#9408)](https://github.com/prowler-cloud/prowler/pull/9408)
- Improve SageMaker service tag retrieval with parallel execution [(#9609)](https://github.com/prowler-cloud/prowler/pull/9609)
---
## [5.16.1] (Prowler v5.16.1)
### Fixed
- ZeroDivision error from Prowler ThreatScore [(#9653)](https://github.com/prowler-cloud/prowler/pull/9653)
---
## [5.16.0] (Prowler v5.16.0)
### Added
- `privilege-escalation` and `ec2-imdsv1` categories for AWS checks [(#9537)](https://github.com/prowler-cloud/prowler/pull/9537)
- Supported IaC formats and scanner documentation for the IaC provider [(#9553)](https://github.com/prowler-cloud/prowler/pull/9553)
### Changed
- Update AWS Glue service metadata to new format [(#9258)](https://github.com/prowler-cloud/prowler/pull/9258)
- Update AWS Kafka service metadata to new format [(#9261)](https://github.com/prowler-cloud/prowler/pull/9261)
- Update AWS KMS service metadata to new format [(#9263)](https://github.com/prowler-cloud/prowler/pull/9263)
@@ -46,6 +70,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
- `compute_instance_preemptible_vm_disabled` check for GCP provider [(#9342)](https://github.com/prowler-cloud/prowler/pull/9342)
- `compute_instance_automatic_restart_enabled` check for GCP provider [(#9271)](https://github.com/prowler-cloud/prowler/pull/9271)
- `compute_instance_deletion_protection_enabled` check for GCP provider [(#9358)](https://github.com/prowler-cloud/prowler/pull/9358)
- Add needed changes to AlibabaCloud provider from the API [(#9485)](https://github.com/prowler-cloud/prowler/pull/9485)
- Update SOC2 - Azure with Processing Integrity requirements [(#9463)](https://github.com/prowler-cloud/prowler/pull/9463)
- Update SOC2 - GCP with Processing Integrity requirements [(#9464)](https://github.com/prowler-cloud/prowler/pull/9464)
- Update SOC2 - AWS with Processing Integrity requirements [(#9462)](https://github.com/prowler-cloud/prowler/pull/9462)
+15
@@ -83,6 +83,9 @@ from prowler.lib.outputs.compliance.mitre_attack.mitre_attack_azure import (
AzureMitreAttack,
)
from prowler.lib.outputs.compliance.mitre_attack.mitre_attack_gcp import GCPMitreAttack
from prowler.lib.outputs.compliance.prowler_threatscore.prowler_threatscore_alibaba import (
ProwlerThreatScoreAlibaba,
)
from prowler.lib.outputs.compliance.prowler_threatscore.prowler_threatscore_aws import (
ProwlerThreatScoreAWS,
)
@@ -1039,6 +1042,18 @@ def prowler():
)
generated_outputs["compliance"].append(cis)
cis.batch_write_data_to_file()
elif compliance_name == "prowler_threatscore_alibabacloud":
filename = (
f"{output_options.output_directory}/compliance/"
f"{output_options.output_filename}_{compliance_name}.csv"
)
prowler_threatscore = ProwlerThreatScoreAlibaba(
findings=finding_outputs,
compliance=bulk_compliance_frameworks[compliance_name],
file_path=filename,
)
generated_outputs["compliance"].append(prowler_threatscore)
prowler_threatscore.batch_write_data_to_file()
else:
filename = (
f"{output_options.output_directory}/compliance/"
File diff suppressed because it is too large
+1 -1
@@ -38,7 +38,7 @@ class _MutableTimestamp:
timestamp = _MutableTimestamp(datetime.today())
timestamp_utc = _MutableTimestamp(datetime.now(timezone.utc))
prowler_version = "5.16.0"
prowler_version = "5.17.0"
html_logo_url = "https://github.com/prowler-cloud/prowler/"
square_logo_img = "https://raw.githubusercontent.com/prowler-cloud/prowler/dc7d2d5aeb92fdf12e8604f42ef6472cd3e8e889/docs/img/prowler-logo-black.png"
aws_logo = "https://user-images.githubusercontent.com/38561120/235953920-3e3fba08-0795-41dc-b480-9bea57db9f2e.png"
+3
@@ -507,6 +507,9 @@ gcp:
# GCP Compute Configuration
# gcp.compute_public_address_shodan
shodan_api_key: null
# gcp.compute_instance_group_multiple_zones
# Minimum number of zones a MIG should span for high availability
mig_min_zones: 2
# GCP Service Account and user-managed keys unused configuration
# gcp.iam_service_account_unused
# gcp.iam_sa_user_managed_key_unused
@@ -146,3 +146,29 @@ class ProwlerThreatScoreKubernetesModel(BaseModel):
Muted: bool
Framework: str
Name: str
class ProwlerThreatScoreAlibabaModel(BaseModel):
"""
ProwlerThreatScoreAlibabaModel generates a finding's output in Alibaba Cloud Prowler ThreatScore Compliance format.
"""
Provider: str
Description: str
AccountId: str
Region: str
AssessmentDate: str
Requirements_Id: str
Requirements_Description: str
Requirements_Attributes_Title: str
Requirements_Attributes_Section: str
Requirements_Attributes_SubSection: Optional[str] = None
Requirements_Attributes_AttributeDescription: str
Requirements_Attributes_AdditionalInformation: str
Requirements_Attributes_LevelOfRisk: int
Requirements_Attributes_Weight: int
Status: str
StatusExtended: str
ResourceId: str
ResourceName: str
CheckId: str
@@ -103,8 +103,16 @@ def get_prowler_threatscore_table(
for pillar in pillars:
pillar_table["Provider"].append(compliance.Provider)
pillar_table["Pillar"].append(pillar)
if max_score_per_pillar[pillar] == 0:
pillar_score = 100.0
score_color = Fore.GREEN
else:
pillar_score = (
score_per_pillar[pillar] / max_score_per_pillar[pillar]
) * 100
score_color = Fore.RED
pillar_table["Score"].append(
f"{Style.BRIGHT}{Fore.RED}{(score_per_pillar[pillar] / max_score_per_pillar[pillar]) * 100:.2f}%{Style.RESET_ALL}"
f"{Style.BRIGHT}{score_color}{pillar_score:.2f}%{Style.RESET_ALL}"
)
if pillars[pillar]["FAIL"] > 0:
pillar_table["Status"].append(
@@ -148,9 +156,12 @@ def get_prowler_threatscore_table(
print(
f"\nFramework {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Results:"
)
print(
f"\nGeneric Threat Score: {generic_score / max_generic_score * 100:.2f}%"
)
# Handle division by zero when all findings are muted
if max_generic_score == 0:
generic_threat_score = 100.0
else:
generic_threat_score = generic_score / max_generic_score * 100
print(f"\nGeneric Threat Score: {generic_threat_score:.2f}%")
print(
tabulate(
pillar_table,
@@ -0,0 +1,98 @@
from prowler.config.config import timestamp
from prowler.lib.check.compliance_models import Compliance
from prowler.lib.outputs.compliance.compliance_output import ComplianceOutput
from prowler.lib.outputs.compliance.prowler_threatscore.models import (
ProwlerThreatScoreAlibabaModel,
)
from prowler.lib.outputs.finding import Finding
class ProwlerThreatScoreAlibaba(ComplianceOutput):
"""
This class represents the Alibaba Cloud Prowler ThreatScore compliance output.
Attributes:
- _data (list): A list to store transformed data from findings.
- _file_descriptor (TextIOWrapper): A file descriptor to write data to a file.
Methods:
- transform: Transforms findings into Alibaba Cloud Prowler ThreatScore compliance format.
"""
def transform(
self,
findings: list[Finding],
compliance: Compliance,
compliance_name: str,
) -> None:
"""
Transforms a list of findings into Alibaba Cloud Prowler ThreatScore compliance format.
Parameters:
- findings (list): A list of findings.
- compliance (Compliance): A compliance model.
- compliance_name (str): The name of the compliance model.
Returns:
- None
"""
for finding in findings:
# Get the compliance requirements for the finding
finding_requirements = finding.compliance.get(compliance_name, [])
for requirement in compliance.Requirements:
if requirement.Id in finding_requirements:
for attribute in requirement.Attributes:
compliance_row = ProwlerThreatScoreAlibabaModel(
Provider=finding.provider,
Description=compliance.Description,
AccountId=finding.account_uid,
Region=finding.region,
AssessmentDate=str(timestamp),
Requirements_Id=requirement.Id,
Requirements_Description=requirement.Description,
Requirements_Attributes_Title=attribute.Title,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_SubSection=attribute.SubSection,
Requirements_Attributes_AttributeDescription=attribute.AttributeDescription,
Requirements_Attributes_AdditionalInformation=attribute.AdditionalInformation,
Requirements_Attributes_LevelOfRisk=attribute.LevelOfRisk,
Requirements_Attributes_Weight=attribute.Weight,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_uid,
ResourceName=finding.resource_name,
CheckId=finding.check_id,
Muted=finding.muted,
Framework=compliance.Framework,
Name=compliance.Name,
)
self._data.append(compliance_row)
# Add manual requirements to the compliance output
for requirement in compliance.Requirements:
if not requirement.Checks:
for attribute in requirement.Attributes:
compliance_row = ProwlerThreatScoreAlibabaModel(
Provider=compliance.Provider.lower(),
Description=compliance.Description,
AccountId="",
Region="",
AssessmentDate=str(timestamp),
Requirements_Id=requirement.Id,
Requirements_Description=requirement.Description,
Requirements_Attributes_Title=attribute.Title,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_SubSection=attribute.SubSection,
Requirements_Attributes_AttributeDescription=attribute.AttributeDescription,
Requirements_Attributes_AdditionalInformation=attribute.AdditionalInformation,
Requirements_Attributes_LevelOfRisk=attribute.LevelOfRisk,
Requirements_Attributes_Weight=attribute.Weight,
Status="MANUAL",
StatusExtended="Manual check",
ResourceId="manual_check",
ResourceName="Manual check",
CheckId="manual",
Muted=False,
Framework=compliance.Framework,
Name=compliance.Name,
)
self._data.append(compliance_row)
@@ -75,6 +75,9 @@ class AlibabacloudProvider(Provider):
mutelist_path: str = None,
mutelist_content: dict = None,
fixer_config: dict = {},
access_key_id: str = None,
access_key_secret: str = None,
security_token: str = None,
):
"""
Initialize the AlibabaCloudProvider.
@@ -91,6 +94,9 @@ class AlibabacloudProvider(Provider):
mutelist_path: Path to the mutelist file
mutelist_content: Content of the mutelist file
fixer_config: Fixer configuration dictionary
access_key_id: Alibaba Cloud Access Key ID
access_key_secret: Alibaba Cloud Access Key Secret
security_token: STS Security Token (for temporary credentials)
Raises:
AlibabaCloudSetUpSessionError: If an error occurs during the setup process.
@@ -107,6 +113,7 @@ class AlibabacloudProvider(Provider):
- alibabacloud = AlibabacloudProvider(regions=["cn-hangzhou", "cn-shanghai"]) # Specific regions
- alibabacloud = AlibabacloudProvider(role_arn="acs:ram::...:role/ProwlerRole")
- alibabacloud = AlibabacloudProvider(ecs_ram_role="ECS-Prowler-Role")
- alibabacloud = AlibabacloudProvider(access_key_id="LTAI...", access_key_secret="...")
"""
logger.info("Initializing Alibaba Cloud Provider ...")
@@ -118,6 +125,9 @@ class AlibabacloudProvider(Provider):
ecs_ram_role=ecs_ram_role,
oidc_role_arn=oidc_role_arn,
credentials_uri=credentials_uri,
access_key_id=access_key_id,
access_key_secret=access_key_secret,
security_token=security_token,
)
logger.info("Alibaba Cloud session configured successfully")
@@ -234,6 +244,9 @@ class AlibabacloudProvider(Provider):
ecs_ram_role: str = None,
oidc_role_arn: str = None,
credentials_uri: str = None,
access_key_id: str = None,
access_key_secret: str = None,
security_token: str = None,
) -> AlibabaCloudSession:
"""
Set up the Alibaba Cloud session.
@@ -244,6 +257,9 @@ class AlibabacloudProvider(Provider):
ecs_ram_role: Name of the RAM role attached to an ECS instance
oidc_role_arn: ARN of the RAM role for OIDC authentication
credentials_uri: URI to retrieve credentials from an external service
access_key_id: Alibaba Cloud Access Key ID
access_key_secret: Alibaba Cloud Access Key Secret
security_token: STS Security Token (for temporary credentials)
Returns:
AlibabaCloudSession object
@@ -275,25 +291,22 @@ class AlibabacloudProvider(Provider):
if not ecs_ram_role and "ALIBABA_CLOUD_ECS_METADATA" in os.environ:
ecs_ram_role = os.environ["ALIBABA_CLOUD_ECS_METADATA"]
# Check for access key credentials from environment variables only
# Check for access key credentials from parameters first, then fall back to environment variables
# Support both ALIBABA_CLOUD_* and ALIYUN_* prefixes for compatibility
# Note: We intentionally do NOT support credentials via CLI arguments for security reasons
access_key_id = None
access_key_secret = None
security_token = None
if not access_key_id:
if "ALIBABA_CLOUD_ACCESS_KEY_ID" in os.environ:
access_key_id = os.environ["ALIBABA_CLOUD_ACCESS_KEY_ID"]
elif "ALIYUN_ACCESS_KEY_ID" in os.environ:
access_key_id = os.environ["ALIYUN_ACCESS_KEY_ID"]
if "ALIBABA_CLOUD_ACCESS_KEY_ID" in os.environ:
access_key_id = os.environ["ALIBABA_CLOUD_ACCESS_KEY_ID"]
elif "ALIYUN_ACCESS_KEY_ID" in os.environ:
access_key_id = os.environ["ALIYUN_ACCESS_KEY_ID"]
if "ALIBABA_CLOUD_ACCESS_KEY_SECRET" in os.environ:
access_key_secret = os.environ["ALIBABA_CLOUD_ACCESS_KEY_SECRET"]
elif "ALIYUN_ACCESS_KEY_SECRET" in os.environ:
access_key_secret = os.environ["ALIYUN_ACCESS_KEY_SECRET"]
if not access_key_secret:
if "ALIBABA_CLOUD_ACCESS_KEY_SECRET" in os.environ:
access_key_secret = os.environ["ALIBABA_CLOUD_ACCESS_KEY_SECRET"]
elif "ALIYUN_ACCESS_KEY_SECRET" in os.environ:
access_key_secret = os.environ["ALIYUN_ACCESS_KEY_SECRET"]
# Check for STS security token (for temporary credentials)
if "ALIBABA_CLOUD_SECURITY_TOKEN" in os.environ:
if not security_token and "ALIBABA_CLOUD_SECURITY_TOKEN" in os.environ:
security_token = os.environ["ALIBABA_CLOUD_SECURITY_TOKEN"]
# Check for RAM role assumption from CLI arguments or environment
@@ -695,6 +708,9 @@ class AlibabacloudProvider(Provider):
@staticmethod
def test_connection(
access_key_id: str = None,
access_key_secret: str = None,
security_token: str = None,
role_arn: str = None,
role_session_name: str = None,
ecs_ram_role: str = None,
@@ -707,6 +723,9 @@ class AlibabacloudProvider(Provider):
Test the connection to Alibaba Cloud with the provided credentials.
Args:
access_key_id: Alibaba Cloud Access Key ID (for static credentials)
access_key_secret: Alibaba Cloud Access Key Secret (for static credentials)
security_token: STS Security Token (for temporary credentials)
role_arn: ARN of the RAM role to assume
role_session_name: Session name when assuming the RAM role
ecs_ram_role: Name of the RAM role attached to an ECS instance
@@ -734,17 +753,24 @@ class AlibabacloudProvider(Provider):
raise_on_exception=False
)
Connection(is_connected=True, Error=None)
>>> AlibabacloudProvider.test_connection(
access_key_id="LTAI...",
access_key_secret="...",
raise_on_exception=False
)
Connection(is_connected=True, Error=None)
"""
try:
session = None
# Setup session
# Setup session - pass credentials directly instead of using env vars
session = AlibabacloudProvider.setup_session(
role_arn=role_arn,
role_session_name=role_session_name,
ecs_ram_role=ecs_ram_role,
oidc_role_arn=oidc_role_arn,
credentials_uri=credentials_uri,
access_key_id=access_key_id,
access_key_secret=access_key_secret,
security_token=security_token,
)
# Validate credentials
@@ -755,10 +781,6 @@ class AlibabacloudProvider(Provider):
# Validate provider_id if provided
if provider_id and caller_identity.account_id != provider_id:
from prowler.providers.alibabacloud.exceptions.exceptions import (
AlibabaCloudInvalidCredentialsError,
)
raise AlibabaCloudInvalidCredentialsError(
file=pathlib.Path(__file__).name,
message=f"Provider ID mismatch: expected '{provider_id}', got '{caller_identity.account_id}'",
+8 -5
@@ -984,6 +984,8 @@ class AwsProvider(Provider):
global_region = "us-east-1"
if self._identity.partition == "aws-cn":
global_region = "cn-north-1"
elif self._identity.partition == "aws-eusc":
global_region = "eusc-de-east-1"
elif self._identity.partition == "aws-us-gov":
global_region = "us-gov-east-1"
elif "aws-iso" in self._identity.partition:
@@ -1473,11 +1475,12 @@ class AwsProvider(Provider):
sts_client = create_sts_session(session, 'us-west-2')
"""
try:
sts_endpoint_url = (
f"https://sts.{aws_region}.amazonaws.com"
if not aws_region.startswith("cn-")
else f"https://sts.{aws_region}.amazonaws.com.cn"
)
if aws_region.startswith("cn-"):
sts_endpoint_url = f"https://sts.{aws_region}.amazonaws.com.cn"
elif aws_region.startswith("eusc-"):
sts_endpoint_url = f"https://sts.{aws_region}.amazonaws.eu"
else:
sts_endpoint_url = f"https://sts.{aws_region}.amazonaws.com"
return session.client("sts", aws_region, endpoint_url=sts_endpoint_url)
except Exception as error:
logger.critical(
File diff suppressed because it is too large
+1 -1
@@ -59,5 +59,5 @@ def parse_iam_credentials_arn(arn: str) -> ARN:
def is_valid_arn(arn: str) -> bool:
"""is_valid_arn returns True or False whether the given AWS ARN (Amazon Resource Name) is valid or not."""
regex = r"^arn:aws(-cn|-us-gov|-iso|-iso-b)?:[a-zA-Z0-9\-]+:([a-z]{2}-[a-z]+-\d{1})?:(\d{12})?:[a-zA-Z0-9\-_\/:\.\*]+(:\d+)?$"
regex = r"^arn:aws(-cn|-eusc|-us-gov|-iso|-iso-b)?:[a-zA-Z0-9\-]+:([a-z]{2}-[a-z]+-\d{1})?:(\d{12})?:[a-zA-Z0-9\-_\/:\.\*]+(:\d+)?$"
return re.match(regex, arn) is not None
@@ -55,7 +55,7 @@ class SecurityHubConnection(Connection):
Attributes:
enabled_regions (set): Set of regions where Security Hub is enabled.
disabled_regions (set): Set of regions where Security Hub is disabled.
partition (str): AWS partition (e.g., aws, aws-cn, aws-us-gov) where SecurityHub is deployed.
partition (str): AWS partition (e.g., aws, aws-cn, aws-eusc, aws-us-gov) where SecurityHub is deployed.
"""
enabled_regions: set = None
@@ -70,7 +70,7 @@ class SecurityHub:
Attributes:
_session (Session): AWS session object for authentication and communication with AWS services.
_aws_account_id (str): AWS account ID associated with the SecurityHub instance.
_aws_partition (str): AWS partition (e.g., aws, aws-cn, aws-us-gov) where SecurityHub is deployed.
_aws_partition (str): AWS partition (e.g., aws, aws-cn, aws-eusc, aws-us-gov) where SecurityHub is deployed.
_findings_per_region (dict): Dictionary containing findings per region.
_enabled_regions (dict): Dictionary containing enabled regions with SecurityHub clients.
@@ -115,7 +115,7 @@ class SecurityHub:
Args:
- aws_session (Session): AWS session object for authentication and communication with AWS services.
- aws_account_id (str): AWS account ID associated with the SecurityHub instance.
- aws_partition (str): AWS partition (e.g., aws, aws-cn, aws-us-gov) where SecurityHub is deployed.
- aws_partition (str): AWS partition (e.g., aws, aws-cn, aws-eusc, aws-us-gov) where SecurityHub is deployed.
- findings (list[AWSSecurityFindingFormat]): List of findings to filter and send to Security Hub.
- aws_security_hub_available_regions (list[str]): List of regions where Security Hub is available.
- send_only_fails (bool): Flag indicating whether to send only findings with status 'FAIL'.
@@ -477,7 +477,7 @@ class SecurityHub:
Args:
aws_account_id (str): AWS account ID to check for Prowler integration.
aws_partition (str): AWS partition (e.g., aws, aws-cn, aws-us-gov).
aws_partition (str): AWS partition (e.g., aws, aws-cn, aws-eusc, aws-us-gov).
regions (set): Set of regions to check for Security Hub integration.
raise_on_exception (bool): Whether to raise an exception if an error occurs.
profile (str): AWS profile name to use for authentication.
+2
@@ -90,6 +90,7 @@ class Partition(str, Enum):
Attributes:
aws (str): Represents the standard AWS commercial regions.
aws_cn (str): Represents the AWS China regions.
aws_eusc (str): Represents the AWS European Sovereign Cloud regions.
aws_us_gov (str): Represents the AWS GovCloud (US) Regions.
aws_iso (str): Represents the AWS ISO (US) Regions.
aws_iso_b (str): Represents the AWS ISOB (US) Regions.
@@ -99,6 +100,7 @@ class Partition(str, Enum):
aws = "aws"
aws_cn = "aws-cn"
aws_eusc = "aws-eusc"
aws_us_gov = "aws-us-gov"
aws_iso = "aws-iso"
aws_iso_b = "aws-iso-b"
@@ -55,16 +55,18 @@ class Bedrock(AWSService):
def _list_guardrails(self, regional_client):
logger.info("Bedrock - Listing Guardrails...")
try:
for guardrail in regional_client.list_guardrails().get("guardrails", []):
if not self.audit_resources or (
is_resource_filtered(guardrail["arn"], self.audit_resources)
):
self.guardrails[guardrail["arn"]] = Guardrail(
id=guardrail["id"],
name=guardrail["name"],
arn=guardrail["arn"],
region=regional_client.region,
)
paginator = regional_client.get_paginator("list_guardrails")
for page in paginator.paginate():
for guardrail in page.get("guardrails", []):
if not self.audit_resources or (
is_resource_filtered(guardrail["arn"], self.audit_resources)
):
self.guardrails[guardrail["arn"]] = Guardrail(
id=guardrail["id"],
name=guardrail["name"],
arn=guardrail["arn"],
region=regional_client.region,
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
@@ -130,20 +132,22 @@ class BedrockAgent(AWSService):
def _list_agents(self, regional_client):
logger.info("Bedrock Agent - Listing Agents...")
try:
for agent in regional_client.list_agents().get("agentSummaries", []):
agent_arn = f"arn:aws:bedrock:{regional_client.region}:{self.audited_account}:agent/{agent['agentId']}"
if not self.audit_resources or (
is_resource_filtered(agent_arn, self.audit_resources)
):
self.agents[agent_arn] = Agent(
id=agent["agentId"],
name=agent["agentName"],
arn=agent_arn,
guardrail_id=agent.get("guardrailConfiguration", {}).get(
"guardrailIdentifier"
),
region=regional_client.region,
)
paginator = regional_client.get_paginator("list_agents")
for page in paginator.paginate():
for agent in page.get("agentSummaries", []):
agent_arn = f"arn:aws:bedrock:{regional_client.region}:{self.audited_account}:agent/{agent['agentId']}"
if not self.audit_resources or (
is_resource_filtered(agent_arn, self.audit_resources)
):
self.agents[agent_arn] = Agent(
id=agent["agentId"],
name=agent["agentName"],
arn=agent_arn,
guardrail_id=agent.get("guardrailConfiguration", {}).get(
"guardrailIdentifier"
),
region=regional_client.region,
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
@@ -1,30 +1,39 @@
{
"Provider": "aws",
"CheckID": "route53_dangling_ip_subdomain_takeover",
"CheckTitle": "Check if Route53 Records contains dangling IPs.",
"CheckType": [],
"CheckTitle": "Route53 A record does not point to a dangling IP address",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices/Network Reachability",
"TTPs/Initial Access",
"Effects/Data Exposure"
],
"ServiceName": "route53",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "Other",
"Description": "Check if Route53 Records contains dangling IPs.",
"Risk": "When an ephemeral AWS resource such as an Elastic IP (EIP) is released into the Amazon's Elastic IP pool, an attacker may acquire the EIP resource and effectively control the domain/subdomain associated with that EIP in your Route 53 DNS records.",
"ResourceType": "AwsRoute53HostedZone",
"Description": "**Route 53 `A` records** (non-alias) that use literal IPs are evaluated for **public AWS addresses** not currently assigned to resources in the account. Entries that match AWS ranges yet lack ownership are identified as potential **dangling IP targets**.",
"Risk": "**Dangling DNS `A` records** pointing to released AWS IPs enable **subdomain takeover**. An attacker who later obtains that IP can:\n- Redirect or alter content (integrity)\n- Capture credentials/cookies (confidentiality)\n- Disrupt or impersonate services (availability)",
"RelatedUrl": "",
"AdditionalURLs": [
"https://support.icompaas.com/support/solutions/articles/62000233461-ensure-route53-records-contains-dangling-ips-",
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/Route53/dangling-dns-records.html",
"https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-deleting.html"
],
"Remediation": {
"Code": {
"CLI": "aws route53 change-resource-record-sets --hosted-zone-id <resource_id>",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/Route53/dangling-dns-records.html",
"Terraform": ""
"CLI": "aws route53 change-resource-record-sets --hosted-zone-id <example_resource_id> --change-batch '{\"Changes\":[{\"Action\":\"UPSERT\",\"ResourceRecordSet\":{\"Name\":\"<example_resource_name>\",\"Type\":\"A\",\"AliasTarget\":{\"HostedZoneId\":\"<ALIAS_TARGET_HOSTED_ZONE_ID>\",\"DNSName\":\"<ALIAS_TARGET_DNS_NAME>\",\"EvaluateTargetHealth\":false}}}]}'",
"NativeIaC": "```yaml\n# CloudFormation: convert A record to an Alias so it no longer points to a dangling IP\nResources:\n <example_resource_name>:\n Type: AWS::Route53::RecordSet\n Properties:\n HostedZoneId: <example_resource_id>\n Name: <example_resource_name>\n Type: A\n AliasTarget:\n HostedZoneId: <ALIAS_TARGET_HOSTED_ZONE_ID> # CRITICAL: use Alias to an AWS resource instead of an IP\n DNSName: <ALIAS_TARGET_DNS_NAME> # CRITICAL: target AWS resource DNS (e.g., ALB/CloudFront)\n EvaluateTargetHealth: false\n```",
"Other": "1. Open AWS Console > Route 53 > Hosted zones\n2. Select the hosted zone and locate the failing non-alias A record\n3. If not needed: click Delete and confirm\n4. If needed: select the record, click Edit, enable Alias, choose the correct AWS resource (e.g., ALB/CloudFront), then Save changes\n5. Wait for propagation (~60s) and re-run the check",
"Terraform": "```hcl\n# Terraform: convert A record to Alias to avoid dangling public IPs\nresource \"aws_route53_record\" \"<example_resource_name>\" {\n zone_id = \"<example_resource_id>\"\n name = \"<example_resource_name>\"\n type = \"A\"\n\n alias { # CRITICAL: Alias to AWS resource (no direct IP)\n name = \"<ALIAS_TARGET_DNS_NAME>\" # e.g., dualstack.<alb>.amazonaws.com\n zone_id = \"<ALIAS_TARGET_HOSTED_ZONE_ID>\"\n evaluate_target_health = false\n }\n}\n```"
},
"Recommendation": {
"Text": "Ensure that any dangling DNS records are deleted from your Amazon Route 53 public hosted zones in order to maintain the integrity and authenticity of your domains/subdomains and to protect against domain hijacking attacks.",
"Url": "https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-deleting.html"
"Text": "Remove or update any record that points to an unassigned IP. Avoid hard-coding AWS public IPs in `A` records; use **aliases/CNAMEs** to managed endpoints. Enforce **asset lifecycle** decommissioning, routine DNS-asset reconciliation, and **change control** with monitoring to prevent and detect drift.",
"Url": "https://hub.prowler.com/check/route53_dangling_ip_subdomain_takeover"
}
},
"Categories": [
"forensics-ready"
"internet-exposed"
],
"DependsOn": [],
"RelatedTo": [],
@@ -1,29 +1,40 @@
{
"Provider": "aws",
"CheckID": "route53_domains_privacy_protection_enabled",
"CheckTitle": "Enable Privacy Protection for for a Route53 Domain.",
"CheckType": [],
"CheckTitle": "Route 53 domain has admin contact privacy protection enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure",
"Sensitive Data Identifications/PII"
],
"ServiceName": "route53",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "Other",
"Description": "Enable Privacy Protection for for a Route53 Domain.",
"Risk": "Without privacy protection enabled, ones personal information is published to the public WHOIS database.",
"RelatedUrl": "https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-privacy-protection.html",
"Description": "**Route 53 domain** administrative contact has **privacy protection** enabled, so WHOIS queries return redacted or proxy details.\n\nEvaluates whether contact data is hidden instead of publicly listed.",
"Risk": "**Public WHOIS contact data** exposes names, emails, phones, and addresses, enabling:\n- **Phishing/social engineering** of the registrar\n- **SIM-swap** or account takeover\n- **Domain hijacking**, affecting DNS integrity/availability\nIt also increases spam and targeted harassment.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-privacy-protection.html",
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/Route53/privacy-protection.html",
"https://support.icompaas.com/support/solutions/articles/62000233459-enable-privacy-protection-for-for-a-route53-domain-"
],
"Remediation": {
"Code": {
"CLI": "aws route53domains update-domain-contact-privacy --domain-name domain.com --registrant-privacy",
"CLI": "aws route53domains update-domain-contact-privacy --domain-name <DOMAIN_NAME> --admin-privacy",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/Route53/privacy-protection.html",
"Terraform": ""
"Other": "1. Open the AWS Console and go to Route 53\n2. Click Registered domains and select <DOMAIN_NAME>\n3. Click Edit in Contact information\n4. Enable Privacy protection (ensures Admin contact privacy is on)\n5. Save changes",
"Terraform": "```hcl\nresource \"aws_route53domains_registered_domain\" \"<example_resource_name>\" {\n domain_name = \"<example_resource_name>\"\n admin_privacy = true # Critical: enables admin contact privacy to pass the check\n}\n```"
},
"Recommendation": {
"Text": "Ensure default Privacy is enabled.",
"Url": "https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-privacy-protection.html"
"Text": "Enable **WHOIS privacy** for all contacts (admin, registrant, tech) to minimize exposure. Apply **defense in depth**: use dedicated, monitored contact emails, enforce **transfer lock** and **MFA** on registrar access, and regularly review settings. *If a TLD lacks privacy*, provide minimal, role-based contact details.",
"Url": "https://hub.prowler.com/check/route53_domains_privacy_protection_enabled"
}
},
"Categories": [],
"Categories": [
"internet-exposed"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,29 +1,37 @@
{
"Provider": "aws",
"CheckID": "route53_domains_transferlock_enabled",
"CheckTitle": "Enable Transfer Lock for a Route53 Domain.",
"CheckType": [],
"CheckTitle": "Route 53 domain has Transfer Lock enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"TTPs/Initial Access/Unauthorized Access"
],
"ServiceName": "route53",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"Severity": "high",
"ResourceType": "Other",
"Description": "Enable Transfer Lock for a Route53 Domain.",
"Risk": "Without transfer lock enabled, a domain name could be incorrectly moved to a new registrar.",
"RelatedUrl": "https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-lock.html",
"Description": "**Route 53 registered domains** are assessed for a transfer-lock state, indicated by the `clientTransferProhibited` status on the domain.",
"Risk": "Without **transfer lock**, a domain can be illicitly moved to another registrar, enabling **domain hijacking**. Attackers could alter DNS, redirect traffic, harvest credentials, and disrupt email and apps-compromising **confidentiality**, **integrity**, and **availability**.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-lock.html"
],
"Remediation": {
"Code": {
"CLI": "aws route53domains enable-domain-transfer-lock --domain-name DOMAIN",
"CLI": "aws route53domains enable-domain-transfer-lock --domain-name <example_domain_name>",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"Other": "1. Open the AWS Management Console and go to Route 53\n2. In the left pane, select Registered domains\n3. Click the domain name <example_domain_name>\n4. In Actions, choose Turn on transfer lock\n5. Confirm to enable the lock",
"Terraform": "```hcl\nresource \"aws_route53domains_registered_domain\" \"<example_resource_name>\" {\n domain_name = \"<example_domain_name>\"\n transfer_lock = true # Enables transfer lock (sets clientTransferProhibited)\n}\n```"
},
"Recommendation": {
"Text": "Ensure transfer lock is enabled.",
"Url": "https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-lock.html"
"Text": "Enable **transfer lock** on domains to prevent unauthorized registrar moves. Enforce **least privilege** on domain management, require **MFA**, and monitor status changes. *For planned transfers*, remove the lock only under approved change control and re-enable immediately afterward.",
"Url": "https://hub.prowler.com/check/route53_domains_transferlock_enabled"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,30 +1,37 @@
{
"Provider": "aws",
"CheckID": "route53_public_hosted_zones_cloudwatch_logging_enabled",
"CheckTitle": "Check if Route53 public hosted zones are logging queries to CloudWatch Logs.",
"CheckType": [],
"CheckTitle": "Route53 public hosted zone has query logging enabled to a CloudWatch Logs log group",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices/Runtime Behavior Analysis",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices"
],
"ServiceName": "route53",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsRoute53HostedZone",
"Description": "Check if Route53 public hosted zones are logging queries to CloudWatch Logs.",
"Risk": "If logs are not enabled, monitoring of service use and threat analysis is not possible.",
"RelatedUrl": "https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/monitoring-hosted-zones-with-cloudwatch.html",
"Description": "**Route 53 public hosted zones** have **DNS query logging** enabled to **CloudWatch Logs**, recording resolver requests for the zone and writing events to an associated log group.",
"Risk": "Missing **DNS query logs** removes visibility into domain use, weakening detection of:\n- **Data exfiltration** via DNS\n- **Malware C2/DGA** patterns\n- **Hijacking or misconfigurations**\nThis degrades **incident response**, threatens data **confidentiality** and **integrity**, and slows **availability** troubleshooting.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/monitoring-hosted-zones-with-cloudwatch.html",
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/Route53/enable-query-logging.html"
],
"Remediation": {
"Code": {
"CLI": "aws route53 create-query-logging-config --hosted-zone-id <zone_id> --cloud-watch-logs-log-group-arn <log_group_arn>",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/Route53/enable-query-logging.html",
"Terraform": ""
"CLI": "aws route53 create-query-logging-config --hosted-zone-id <HOSTED_ZONE_ID> --cloud-watch-logs-log-group-arn <LOG_GROUP_ARN>",
"NativeIaC": "```yaml\n# CloudFormation: Enable query logging for a public hosted zone\nResources:\n <example_resource_name>:\n Type: AWS::Route53::HostedZone\n Properties:\n Name: <example_domain_name>\n QueryLoggingConfig:\n CloudWatchLogsLogGroupArn: <example_log_group_arn> # Critical: enables Route53 query logging to this CloudWatch Logs group\n```",
"Other": "1. Open the AWS Console and go to Route 53 > Hosted zones\n2. Select the public hosted zone\n3. Choose Query logging > Enable\n4. Select the target CloudWatch Logs log group and click Save\n5. If prompted, allow Route 53 to write to the log group (approve the CloudWatch Logs resource policy)",
"Terraform": "```hcl\n# Enable Route53 query logging for a public hosted zone\nresource \"aws_route53_query_log\" \"example\" {\n zone_id = \"<example_resource_id>\" # Critical: target hosted zone\n cloudwatch_log_group_arn = \"<example_log_group_arn>\" # Critical: delivers logs to this CloudWatch Logs group\n}\n```"
},
"Recommendation": {
"Text": "Enable CloudWatch logs and define metrics and uses cases for the events recorded.",
"Url": "https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/monitoring-hosted-zones-with-cloudwatch.html"
"Text": "Enable **Route 53 query logging** for public zones to a centralized **CloudWatch Logs** group. Apply **least privilege** to log delivery, set **retention** and **metric filters/alerts**, and stream to your **SIEM**. Use **defense in depth** by correlating DNS logs with network and endpoint telemetry and regularly review baselines.",
"Url": "https://hub.prowler.com/check/route53_public_hosted_zones_cloudwatch_logging_enabled"
}
},
"Categories": [
"forensics-ready"
"logging"
],
"DependsOn": [],
"RelatedTo": [],
@@ -22,7 +22,7 @@
},
"Recommendation": {
"Text": "Configure versioning using the Amazon console or API for buckets with sensitive information that is changing frequently, and backup may not be enough to capture all the changes.",
"Url": "https://docs.aws.amazon.com/AmazonS3/latest/dev-retired/Versioning.html"
"Url": "https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html"
}
},
"Categories": [],
@@ -16,10 +16,14 @@ class SageMaker(AWSService):
self.sagemaker_models = []
self.sagemaker_training_jobs = []
self.endpoint_configs = {}
# Retrieve resources concurrently
self.__threading_call__(self._list_notebook_instances)
self.__threading_call__(self._list_models)
self.__threading_call__(self._list_training_jobs)
self.__threading_call__(self._list_endpoint_configs)
# Describe resources concurrently
self.__threading_call__(self._describe_model, self.sagemaker_models)
self.__threading_call__(
self._describe_notebook_instance, self.sagemaker_notebook_instances
@@ -28,9 +32,21 @@ class SageMaker(AWSService):
self._describe_training_job, self.sagemaker_training_jobs
)
self.__threading_call__(
self._describe_endpoint_config, self.endpoint_configs.values()
self._describe_endpoint_config, list(self.endpoint_configs.values())
)
# List tags concurrently for each resource collection
# This replaces the previous sequential execution to improve performance
self.__threading_call__(self._list_tags_for_resource, self.sagemaker_models)
self.__threading_call__(
self._list_tags_for_resource, self.sagemaker_notebook_instances
)
self.__threading_call__(
self._list_tags_for_resource, self.sagemaker_training_jobs
)
self.__threading_call__(
self._list_tags_for_resource, list(self.endpoint_configs.values())
)
self._list_tags_for_resource()
def _list_notebook_instances(self, regional_client):
logger.info("SageMaker - listing notebook instances...")
@@ -187,40 +203,16 @@ class SageMaker(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def _list_tags_for_resource(self):
def _list_tags_for_resource(self, resource):
"""
Lists tags for a specific SageMaker resource.
This method is designed to be called in parallel threads for each resource.
"""
logger.info("SageMaker - List Tags...")
try:
for model in self.sagemaker_models:
regional_client = self.regional_clients[model.region]
response = regional_client.list_tags(ResourceArn=model.arn)["Tags"]
model.tags = response
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
try:
for instance in self.sagemaker_notebook_instances:
regional_client = self.regional_clients[instance.region]
response = regional_client.list_tags(ResourceArn=instance.arn)["Tags"]
instance.tags = response
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
try:
for job in self.sagemaker_training_jobs:
regional_client = self.regional_clients[job.region]
response = regional_client.list_tags(ResourceArn=job.arn)["Tags"]
job.tags = response
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
try:
for endpoint in self.endpoint_configs.values():
regional_client = self.regional_clients[endpoint.region]
response = regional_client.list_tags(ResourceArn=endpoint.arn)["Tags"]
endpoint.tags = response
regional_client = self.regional_clients[resource.region]
response = regional_client.list_tags(ResourceArn=resource.arn)["Tags"]
resource.tags = response
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
@@ -1,29 +1,39 @@
{
"Provider": "aws",
"CheckID": "secretsmanager_automatic_rotation_enabled",
"CheckTitle": "Check if Secrets Manager secret rotation is enabled.",
"CheckType": [],
"CheckTitle": "Secrets Manager secret has rotation enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/NIST 800-53 Controls (USA)",
"Software and Configuration Checks/Industry and Regulatory Standards/NIST CSF Controls (USA)"
],
"ServiceName": "secretsmanager",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:secretsmanager:region:account-id:secret:secret-name",
"Severity": "medium",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "AwsSecretsManagerSecret",
"Description": "Check if Secrets Manager secret rotation is enabled.",
"Risk": "Rotating secrets minimizes exposure to attacks using stolen secrets.",
"RelatedUrl": "https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets_strategies.html",
"Description": "**AWS Secrets Manager secrets** are evaluated for **automatic rotation**; the check determines if a rotation schedule is enabled for each secret",
"Risk": "Absent rotation, **long-lived secrets** widen the attack window:\n- Valid after leakage in code, images, or logs\n- Enable **unauthorized access** and **lateral movement**\n- Complicate incident response and recovery\nThis impacts **confidentiality** and **integrity**, and can threaten **availability** if revocation lags.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets_strategies.html"
],
"Remediation": {
"Code": {
"CLI": "aws secretsmanager rotate-secret --region <REGION> --secret-id <SECRET-ID> --rotation-lambda-arn <LAMBDA-ARN> --rotation-rules AutomaticallyAfterDays=30",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws secretsmanager rotate-secret --secret-id <example_resource_id> --rotation-lambda-arn <example_resource_id> --rotation-rules AutomaticallyAfterDays=30",
"NativeIaC": "```yaml\nResources:\n <example_resource_name>:\n Type: AWS::SecretsManager::RotationSchedule\n Properties:\n SecretId: <example_resource_id>\n RotationLambdaARN: <example_resource_id>\n RotationRules:\n AutomaticallyAfterDays: 30 # Critical: enables rotation on a 30-day schedule\n```",
"Other": "1. Open AWS Console > Secrets Manager\n2. Select the secret\n3. Click Rotation > Enable automatic rotation\n4. Choose the rotation Lambda function\n5. Set rotation interval to 30 days\n6. Save",
"Terraform": "```hcl\nresource \"aws_secretsmanager_secret_rotation\" \"<example_resource_name>\" {\n secret_id = \"<example_resource_id>\"\n rotation_lambda_arn = \"<example_resource_id>\"\n rotation_rules {\n automatically_after_days = 30 # Critical: enables rotation schedule\n }\n}\n```"
},
"Recommendation": {
"Text": "Implement automated detective control to scan accounts for passwords and secrets. Use secrets manager service to store and retrieve passwords and secrets.",
"Url": "https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets_strategies.html"
"Text": "Enable **automatic rotation** for secrets and set schedules based on sensitivity (e.g., `30-90 days`). Enforce **least privilege** for accessing and rotating secrets and apply **separation of duties**. Monitor rotation health. Avoid hardcoded credentials; retrieve secrets at runtime and support versioned updates.",
"Url": "https://hub.prowler.com/check/secretsmanager_automatic_rotation_enabled"
}
},
"Categories": [],
"Categories": [
"secrets"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Infrastructure Protection"
@@ -1,32 +1,40 @@
{
"Provider": "aws",
"CheckID": "secretsmanager_not_publicly_accessible",
"CheckTitle": "Ensure Secrets Manager secrets are not publicly accessible.",
"CheckTitle": "Secrets Manager secret resource policy does not allow public access",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"TTPs/Credential Access",
"Effects/Data Exposure"
],
"ServiceName": "secretsmanager",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:secretsmanager:region:account-id:secret:secret-name",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "AwsSecretsManagerSecret",
"Description": "This control checks whether Secrets Manager secrets are not publicly accessible via resource policies.",
"Risk": "Publicly accessible secrets can expose sensitive information and pose a security risk.",
"RelatedUrl": "https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_resource-policies.html",
"Description": "**AWS Secrets Manager secrets** are evaluated for **public exposure** through resource-based policies that grant broad access, such as `Principal: \"*\"`, which would allow any principal to perform actions on the secret.",
"Risk": "**Public access** to a secret enables uncontrolled retrieval of secret values, compromising **confidentiality**. If broad actions are allowed, attackers can modify or delete the secret, impacting **integrity** and **availability**, and use exposed credentials for unauthorized data access and **lateral movement**.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_resource-policies.html",
"https://docs.aws.amazon.com/secretsmanager/latest/userguide/determine-acccess_examine-iam-policies.html"
],
"Remediation": {
"Code": {
"CLI": "aws secretsmanager delete-resource-policy --secret-id <secret-id>",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws secretsmanager put-resource-policy --secret-id <secret-id> --resource-policy '{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"arn:aws:iam::<ACCOUNT_ID>:root\"},\"Action\":\"secretsmanager:GetSecretValue\",\"Resource\":\"*\"}]}' --block-public-policy",
"NativeIaC": "```yaml\n# CloudFormation: attach a non-public resource policy\nResources:\n <example_resource_name>:\n Type: AWS::SecretsManager::ResourcePolicy\n Properties:\n SecretId: \"<example_resource_id>\"\n BlockPublicPolicy: true # Critical: prevents policies that allow public access\n ResourcePolicy: # Critical: principal is restricted, not \"*\"\n Version: '2012-10-17'\n Statement:\n - Effect: Allow\n Principal:\n AWS: arn:aws:iam::<ACCOUNT_ID>:root\n Action: secretsmanager:GetSecretValue\n Resource: \"*\"\n```",
"Other": "1. Open AWS Console > Secrets Manager\n2. Select the secret > Overview tab > Resource permissions > Edit permissions\n3. Remove any statement with Principal set to \"*\" (or AWS: \"*\")\n4. Add an allow statement for only your account root principal: arn:aws:iam::<ACCOUNT_ID>:root\n5. Enable Block public access (if available) and click Save",
"Terraform": "```hcl\n# Restrict secret policy and block public access\nresource \"aws_secretsmanager_secret_policy\" \"<example_resource_name>\" {\n secret_arn = \"<example_resource_id>\"\n block_public_policy = true # Critical: blocks public policies\n policy = jsonencode({ # Critical: principal is not \"*\"\n Version = \"2012-10-17\"\n Statement = [{\n Effect = \"Allow\"\n Principal = { AWS = \"arn:aws:iam::<ACCOUNT_ID>:root\" }\n Action = \"secretsmanager:GetSecretValue\"\n Resource = \"*\"\n }]\n })\n}\n```"
},
"Recommendation": {
"Text": "Review and remove any public access from Secrets Manager policies to follow the Principle of Least Privilege.",
"Url": "https://docs.aws.amazon.com/secretsmanager/latest/userguide/determine-acccess_examine-iam-policies.html"
"Text": "Apply **least privilege** to resource policies:\n- Remove wildcards and limit access to specific principals\n- Add contextual conditions (e.g., VPC endpoints, source account/ARN)\n- Enable safeguards that block public policies\n- Prefer private access paths\n- Periodically review related identity and KMS policies",
"Url": "https://hub.prowler.com/check/secretsmanager_not_publicly_accessible"
}
},
"Categories": [
"internet-exposed"
"internet-exposed",
"secrets"
],
"DependsOn": [],
"RelatedTo": [],
@@ -1,26 +1,33 @@
{
"Provider": "aws",
"CheckID": "secretsmanager_secret_rotated_periodically",
"CheckTitle": "Secrets should be rotated periodically",
"CheckType": [],
"CheckTitle": "AWS Secrets Manager secret is rotated within the configured maximum number of days",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices"
],
"ServiceName": "secretsmanager",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:secretsmanager:region:account-id:secret:secret-name",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsSecretsManagerSecret",
"Description": "Secrets should be rotated periodically to reduce the risk of unauthorized access.",
"Risk": "Rotating secrets in your AWS account reduces the risk of unauthorized access, especially for credentials like passwords or API keys. Automatic rotation via AWS Secrets Manager replaces long-term secrets with short-term ones, lowering the chances of compromise.",
"RelatedUrl": "https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html",
"Description": "**AWS Secrets Manager secrets** are evaluated for **periodic rotation** within a configured window (default `90` days).\n\nSecrets with no recorded rotation, or with rotation older than the allowed window, are identified for review.",
"Risk": "**Long-lived or never-rotated secrets** widen the attack window. Leaked or brute-forced credentials stay valid, enabling unauthorized access to databases and APIs, **data exfiltration**, and unauthorized changes-compromising **confidentiality** and **integrity**.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_turn-on-for-other.html",
"https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html"
],
"Remediation": {
"Code": {
"CLI": "aws secretsmanager rotate-secret --secret-id <secret-name>",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/secretsmanager-controls.html#secretsmanager-4",
"Terraform": ""
"CLI": "aws secretsmanager rotate-secret --secret-id <secret-id>",
"NativeIaC": "```yaml\n# CloudFormation: enable rotation and rotate now\nResources:\n <example_resource_name>:\n Type: AWS::SecretsManager::RotationSchedule\n Properties:\n SecretId: <example_resource_id> # CRITICAL: target secret to rotate\n RotationLambdaARN: <example_resource_id> # CRITICAL: Lambda ARN used to perform rotation\n ScheduleExpression: rate(30 days) # CRITICAL: ensures rotation occurs within max allowed days\n RotateImmediatelyOnUpdate: true # CRITICAL: triggers an immediate rotation to pass the check\n```",
"Other": "1. Open the AWS Console > Secrets Manager\n2. Select the secret\n3. If Rotation status is Enabled: click Rotate secret immediately\n4. If Rotation is Disabled: click Edit rotation, turn on Automatic rotation, choose the rotation Lambda function, Save, then click Rotate secret immediately",
"Terraform": "```hcl\n# Enable rotation for the secret\nresource \"aws_secretsmanager_secret_rotation\" \"<example_resource_name>\" {\n secret_id = \"<example_resource_id>\" # CRITICAL: target secret\n rotation_lambda_arn = \"<example_resource_id>\" # CRITICAL: Lambda ARN used to rotate\n\n rotation_rules { \n automatically_after_days = 30 # CRITICAL: rotate within allowed days\n }\n}\n```"
},
"Recommendation": {
"Text": "Configure automatic rotation for your Secrets Manager secrets.",
"Url": "https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_lambda.html"
"Text": "Enable **automatic rotation** for all secrets with intervals aligned to sensitivity (**`90` days or more frequent). Ensure apps retrieve secrets at runtime. Apply **least privilege** to rotation roles and KMS keys, use **separation of duties**, and monitor rotation health with alerts. Avoid hard-coded credentials and retire unused secrets.",
"Url": "https://hub.prowler.com/check/secretsmanager_secret_rotated_periodically"
}
},
"Categories": [
@@ -1,26 +1,33 @@
{
"Provider": "aws",
"CheckID": "secretsmanager_secret_unused",
"CheckTitle": "Ensure secrets manager secrets are not unused",
"CheckType": [],
"CheckTitle": "Secrets Manager secret has been accessed within the last 90 days",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices"
],
"ServiceName": "secretsmanager",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:secretsmanager:region:account-id:secret:secret-name",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsSecretsManagerSecret",
"Description": "Checks whether Secrets Manager secrets are unused.",
"Risk": "Unused secrets can be abused by former users or leaked to unauthorized entities, increasing the risk of unauthorized access and data breaches.",
"RelatedUrl": "https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_delete-secret.html",
"Description": "**AWS Secrets Manager secrets** with no retrieval activity beyond a configured window (default `90` days) are identified as **unused** based on their most recent access timestamp",
"Risk": "Unused yet valid secrets jeopardize **confidentiality** and **integrity**:\n- Reuse by ex-users or leaked code enables unauthorized access\n- Limited rotation/revocation increases stealth persistence and data exfiltration\n- Secret sprawl adds operational risk and extra cost",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/securityhub/latest/userguide/secretsmanager-controls.html#secretsmanager-3",
"https://support.icompaas.com/support/solutions/articles/62000233606-ensure-secrets-manager-secrets-are-not-unused",
"https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_delete-secret.html"
],
"Remediation": {
"Code": {
"CLI": "aws secretsmanager delete-secret --secret-id <secret-arn>",
"CLI": "aws secretsmanager delete-secret --secret-id <example_resource_id>",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/secretsmanager-controls.html#secretsmanager-3",
"Other": "1. In the AWS Console, go to Secrets Manager\n2. Select the unused secret\n3. If the secret has replicas: in Replicate secret, select each replica and choose Actions > Delete replica\n4. Choose Actions > Delete secret\n5. Keep the default recovery window (or set one) and select Schedule deletion",
"Terraform": ""
},
"Recommendation": {
"Text": "Regularly review Secrets Manager secrets and delete those that are no longer in use.",
"Url": "https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_delete-secret.html"
"Text": "Apply a **lifecycle policy** for secrets:\n- Require ownership tags and periodic reviews\n- Rotate or disable, then retire secrets unused beyond policy\n- Enforce **least privilege** and monitor retrievals with alerts\n- Automate cleanup using recovery windows to prevent accidental loss",
"Url": "https://hub.prowler.com/check/secretsmanager_secret_unused"
}
},
"Categories": [
@@ -1,29 +1,38 @@
{
"Provider": "aws",
"CheckID": "shield_advanced_protection_in_associated_elastic_ips",
"CheckTitle": "Check if Elastic IP addresses with associations are protected by AWS Shield Advanced.",
"CheckType": [],
"CheckTitle": "Elastic IP address is protected by AWS Shield Advanced",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices/Network Reachability",
"Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Effects/Denial of Service"
],
"ServiceName": "shield",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsEc2Eip",
"Description": "Check if Elastic IP addresses with associations are protected by AWS Shield Advanced.",
"Risk": "AWS Shield Advanced provides expanded DDoS attack protection for your resources.",
"RelatedUrl": "https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html",
"Description": "**Elastic IP addresses** are assessed for **AWS Shield Advanced** coverage by verifying they are listed as protected resources.",
"Risk": "Without **Shield Advanced**, internet-facing EIPs are more susceptible to **DDoS**, threatening **availability** and driving **cost** spikes.\n\nVolumetric or protocol floods can saturate bandwidth or exhaust connection state, disrupting services behind the EIP and slowing incident response.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws shield create-protection --name <example_resource_name> --resource-arn arn:aws:ec2:<REGION>:<ACCOUNT_ID>:elastic-ip/eipalloc-<ALLOCATION_ID>",
"NativeIaC": "```yaml\n# CloudFormation: Add Shield Advanced protection to an Elastic IP\nResources:\n Protection:\n Type: AWS::Shield::Protection\n Properties:\n Name: <example_resource_name>\n ResourceArn: arn:aws:ec2:<REGION>:<ACCOUNT_ID>:elastic-ip/eipalloc-<ALLOCATION_ID> # Critical: ARN of the Elastic IP to protect\n```",
"Other": "1. Open the AWS WAF & Shield console\n2. Go to AWS Shield > Protected resources\n3. Click Add resources to protect\n4. Select the Region and resource type: EC2 Elastic IP, then Load resources\n5. Select the target Elastic IP\n6. Click Protect with Shield Advanced",
"Terraform": "```hcl\n# Terraform: Add Shield Advanced protection to an Elastic IP\nresource \"aws_shield_protection\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n resource_arn = \"arn:aws:ec2:<REGION>:<ACCOUNT_ID>:elastic-ip/eipalloc-<ALLOCATION_ID>\" # Critical: ARN of the Elastic IP to protect\n}\n```"
},
"Recommendation": {
"Text": "Add as a protected resource in AWS Shield Advanced.",
"Url": "https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html"
"Text": "Register critical EIPs as **Shield Advanced protected resources**.\n\nApply **defense in depth**: minimize public exposure, use application-layer controls (WAF, rate limiting), monitor telemetry, and review protections regularly, aligning network access with **least privilege**.",
"Url": "https://hub.prowler.com/check/shield_advanced_protection_in_associated_elastic_ips"
}
},
"Categories": [],
"Categories": [
"resilience"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,29 +1,37 @@
{
"Provider": "aws",
"CheckID": "shield_advanced_protection_in_classic_load_balancers",
"CheckTitle": "Check if Classic Load Balancers are protected by AWS Shield Advanced.",
"CheckType": [],
"CheckTitle": "Classic Load Balancer is protected by AWS Shield Advanced",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices/Network Reachability",
"Effects/Denial of Service"
],
"ServiceName": "shield",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsElbLoadBalancer",
"Description": "Check if Classic Load Balancers are protected by AWS Shield Advanced.",
"Risk": "AWS Shield Advanced provides expanded DDoS attack protection for your resources.",
"RelatedUrl": "https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html",
"Description": "**Classic Load Balancers** are evaluated for association with **AWS Shield Advanced** as a protected resource.\n\nIdentifies load balancers without an active Shield Advanced protection when the subscription is enabled.",
"Risk": "Unprotected ELB Classic endpoints are more exposed to large L3/L4 DDoS (e.g., SYN/UDP floods), risking **availability loss** from connection exhaustion and failed health checks, plus operational impact from autoscaling and data transfer surges.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws shield create-protection --name <PROTECTION_NAME> --resource-arn <ELB_ARN>",
"NativeIaC": "```yaml\n# CloudFormation: Add Shield Advanced protection to a Classic Load Balancer\nResources:\n <example_resource_name>:\n Type: AWS::Shield::Protection\n Properties:\n Name: <example_resource_name>\n ResourceArn: <example_resource_id> # Critical: ARN of the Classic Load Balancer to protect\n```",
"Other": "1. In the AWS Console, open AWS WAF & Shield\n2. Go to Shield > Protected resources and click Add resources to protect\n3. Select the Region and resource type Classic Load Balancer, then Load resources\n4. Select your Classic Load Balancer and click Protect with Shield Advanced\n5. Confirm to create the protection",
"Terraform": "```hcl\n# Add Shield Advanced protection to a Classic Load Balancer\nresource \"aws_shield_protection\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n resource_arn = \"<example_resource_id>\" # Critical: ARN of the Classic Load Balancer to protect\n}\n```"
},
"Recommendation": {
"Text": "Add as a protected resource in AWS Shield Advanced.",
"Url": "https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html"
"Text": "Add internet-facing **Classic Load Balancers** as protected resources in **Shield Advanced** to strengthen DDoS resilience and cost protection.\n\nApply defense-in-depth: minimize public exposure, enforce least-privilege network access, enable health-based detection, and use protection groups.",
"Url": "https://hub.prowler.com/check/shield_advanced_protection_in_classic_load_balancers"
}
},
"Categories": [],
"Categories": [
"resilience"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,29 +1,37 @@
{
"Provider": "aws",
"CheckID": "shield_advanced_protection_in_cloudfront_distributions",
"CheckTitle": "Check if Cloudfront distributions are protected by AWS Shield Advanced.",
"CheckType": [],
"CheckTitle": "CloudFront distribution is protected by AWS Shield Advanced",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices/Network Reachability",
"Effects/Denial of Service"
],
"ServiceName": "shield",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCloudFrontDistribution",
"Description": "Check if Cloudfront distributions are protected by AWS Shield Advanced.",
"Risk": "AWS Shield Advanced provides expanded DDoS attack protection for your resources.",
"RelatedUrl": "https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html",
"Description": "**CloudFront distributions** are associated with **AWS Shield Advanced** as protected resources.\n\nThe assessment identifies distributions that lack this protection mapping.",
"Risk": "Missing **Shield Advanced** leaves distributions exposed to large **DDoS** that degrade **availability** via L3/L4 floods and L7 request surges. Effects include edge saturation, latency, and outages, plus loss of **cost protection** and expert support, causing unexpected spend and longer recovery.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws shield create-protection --region us-east-1 --name <example_resource_name> --resource-arn <example_resource_arn>",
"NativeIaC": "```yaml\n# CloudFormation: Add Shield Advanced protection to a CloudFront distribution\nResources:\n ShieldProtection:\n Type: AWS::Shield::Protection\n Properties:\n Name: <example_resource_name>\n ResourceArn: <example_resource_arn> # Critical: associates Shield Advanced protection with the CloudFront distribution ARN\n```",
"Other": "1. In the AWS Console, open WAF & Shield\n2. Go to AWS Shield > Protected resources\n3. Click Add resources to protect\n4. Set Scope to Global and select CloudFront distributions, then Load resources\n5. Select the target distribution\n6. Click Protect with Shield Advanced",
"Terraform": "```hcl\n# Add Shield Advanced protection to a CloudFront distribution\nresource \"aws_shield_protection\" \"example\" {\n name = \"<example_resource_name>\"\n resource_arn = \"<example_resource_arn>\" # Critical: associates Shield Advanced protection with the CloudFront distribution ARN\n}\n```"
},
"Recommendation": {
"Text": "Add as a protected resource in AWS Shield Advanced.",
"Url": "https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html"
"Text": "Enroll critical CloudFront distributions in **AWS Shield Advanced** and keep them listed as protected resources.\n\nAdopt layered defense: **AWS WAF**, rate limiting, and continuous monitoring. Maintain DDoS runbooks and use DRT support. Apply **least privilege** to who can modify protections.",
"Url": "https://hub.prowler.com/check/shield_advanced_protection_in_cloudfront_distributions"
}
},
"Categories": [],
"Categories": [
"resilience"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,29 +1,38 @@
{
"Provider": "aws",
"CheckID": "shield_advanced_protection_in_global_accelerators",
"CheckTitle": "Check if Global Accelerators are protected by AWS Shield Advanced.",
"CheckType": [],
"CheckTitle": "Global Accelerator accelerator is protected by AWS Shield Advanced",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Denial of Service"
],
"ServiceName": "shield",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "Other",
"Description": "Check if Global Accelerators are protected by AWS Shield Advanced.",
"Risk": "AWS Shield Advanced provides expanded DDoS attack protection for your resources.",
"RelatedUrl": "https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html",
"Description": "**AWS Global Accelerator** accelerators are assessed for enrollment in **Shield Advanced** as `protected resources`, indicating whether enhanced DDoS coverage is configured for each accelerator.",
"Risk": "Without **Shield Advanced**, Global Accelerators are more vulnerable to volumetric and protocol **DDoS** that can exhaust capacity, causing **availability** loss, elevated **latency**, and disrupted failover. Limited visibility and no SRT support prolong incidents and can trigger unexpected **cost** spikes from malicious traffic.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html",
"https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws shield create-protection --name <example_resource_name> --resource-arn <example_resource_id>",
"NativeIaC": "```yaml\n# CloudFormation: Add Shield Advanced protection to a Global Accelerator accelerator\nResources:\n ShieldProtection:\n Type: AWS::Shield::Protection\n Properties:\n Name: <example_resource_name>\n ResourceArn: <example_resource_id> # Critical: ARN of the Global Accelerator accelerator to protect\n```",
"Other": "1. In the AWS Console, open AWS WAF & Shield\n2. Under AWS Shield, select Protected resources\n3. Click Add resources to protect\n4. Set Scope to Global and select the Global Accelerator resource type\n5. Select the target accelerator and click Protect with Shield Advanced",
"Terraform": "```hcl\n# Enable Shield Advanced protection for a Global Accelerator accelerator\nresource \"aws_shield_protection\" \"protection\" {\n name = \"<example_resource_name>\"\n resource_arn = \"<example_resource_id>\" # Critical: ARN of the Global Accelerator accelerator to protect\n}\n```"
},
"Recommendation": {
"Text": "Add as a protected resource in AWS Shield Advanced.",
"Url": "https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html"
"Text": "Add each Global Accelerator as a `protected resource` in **Shield Advanced**. Apply **defense in depth** with AWS WAF where applicable, enable proactive monitoring and alerting, and use **Firewall Manager** to enforce coverage across accounts. Follow **least privilege** to restrict who can modify protections.",
"Url": "https://hub.prowler.com/check/shield_advanced_protection_in_global_accelerators"
}
},
"Categories": [],
"Categories": [
"resilience"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,29 +1,39 @@
{
"Provider": "aws",
"CheckID": "shield_advanced_protection_in_internet_facing_load_balancers",
"CheckTitle": "Check if internet-facing Application Load Balancers are protected by AWS Shield Advanced.",
"CheckType": [],
"CheckTitle": "Internet-facing Application Load Balancer is protected by AWS Shield Advanced",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices/Network Reachability",
"Effects/Denial of Service"
],
"ServiceName": "shield",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsElbv2LoadBalancer",
"Description": "Check if internet-facing Application Load Balancers are protected by AWS Shield Advanced.",
"Risk": "AWS Shield Advanced provides expanded DDoS attack protection for your resources.",
"RelatedUrl": "https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html",
"Description": "**Application Load Balancers** that are **internet-facing** are evaluated for an associated **AWS Shield Advanced** protection. Scope includes ALBs of type application with external exposure.",
"Risk": "Without enhanced DDoS protection, internet-facing ALBs are exposed to volumetric L3/L4 floods and HTTP L7 floods, compromising **availability** via outages and latency spikes. Sudden scaling can raise **costs**, while reduced visibility and response support extend disruption across dependent services.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://aws.amazon.com/documentation-overview/shield/",
"https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws shield create-protection --name <ALB_NAME> --resource-arn <ALB_ARN>",
"NativeIaC": "```yaml\nResources:\n ShieldProtection:\n Type: AWS::Shield::Protection\n Properties:\n Name: \"<example_resource_name>\"\n ResourceArn: \"<example_resource_id>\" # CRITICAL: Set to the ALB ARN to enable Shield Advanced protection for it\n```",
"Other": "1. In the AWS Console, open AWS WAF & Shield\n2. Go to Shield > Protected resources\n3. Click Add resources to protect\n4. Select the Region and resource type Application Load Balancer\n5. Select your internet-facing ALB\n6. Click Protect with Shield Advanced",
"Terraform": "```hcl\nresource \"aws_shield_protection\" \"protect\" {\n name = \"<example_resource_name>\"\n resource_arn = \"<example_resource_id>\" # CRITICAL: ALB ARN; creating this enables Shield Advanced on the ALB\n}\n```"
},
"Recommendation": {
"Text": "Add as a protected resource in AWS Shield Advanced.",
"Url": "https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html"
"Text": "Register internet-facing ALBs as **Shield Advanced protected resources** to strengthen **availability**. Use defense-in-depth: pair with **AWS WAF** for L7 filtering and rate limits, group related assets, enable health-based detection and proactive engagement, and enforce least-privilege IAM with continuous monitoring.",
"Url": "https://hub.prowler.com/check/shield_advanced_protection_in_internet_facing_load_balancers"
}
},
"Categories": [],
"Categories": [
"internet-exposed",
"resilience"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,29 +1,38 @@
{
"Provider": "aws",
"CheckID": "shield_advanced_protection_in_route53_hosted_zones",
"CheckTitle": "Check if Route53 hosted zones are protected by AWS Shield Advanced.",
"CheckType": [],
"CheckTitle": "Route53 hosted zone is protected by AWS Shield Advanced",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices/Network Reachability",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Effects/Denial of Service"
],
"ServiceName": "shield",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsRoute53HostedZone",
"Description": "Check if Route53 hosted zones are protected by AWS Shield Advanced.",
"Risk": "AWS Shield Advanced provides expanded DDoS attack protection for your resources.",
"RelatedUrl": "https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html",
"Description": "**Route 53 hosted zones** have an active **AWS Shield Advanced** protection registered to the zone's `ARN`.",
"Risk": "Without **Shield Advanced**, authoritative DNS is vulnerable to:\n- **Volumetric/reflection** floods\n- **Query/application** layer attacks\n\nEffects: disrupted resolution and app outages (**availability**), latency spikes, and unexpected cost from attack traffic.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws shield create-protection --name <example_resource_name> --resource-arn arn:aws:route53:::hostedzone/<example_resource_id>",
"NativeIaC": "```yaml\n# CloudFormation: Add Shield Advanced protection to a Route53 hosted zone\nResources:\n <example_resource_name>:\n Type: AWS::Shield::Protection\n Properties:\n ResourceArn: arn:aws:route53:::hostedzone/<example_resource_id> # Critical: Protects the hosted zone with Shield Advanced\n```",
"Other": "1. Open the AWS WAF & Shield console\n2. Go to AWS Shield > Protected resources\n3. Click Add resources to protect\n4. Set Scope to Global and select resource type: Amazon Route 53 Hosted Zone\n5. Select the hosted zone and click Protect with Shield Advanced",
"Terraform": "```hcl\n# Add Shield Advanced protection to a Route53 hosted zone\nresource \"aws_shield_protection\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n resource_arn = \"arn:aws:route53:::hostedzone/<example_resource_id>\" # Critical: Protects the hosted zone with Shield Advanced\n}\n```"
},
"Recommendation": {
"Text": "Add as a protected resource in AWS Shield Advanced.",
"Url": "https://docs.aws.amazon.com/waf/latest/developerguide/configure-new-protection.html"
"Text": "Add critical **Route 53 hosted zones** as **Shield Advanced protected resources** to apply managed DDoS safeguards. Follow **defense in depth**: limit DNS exposure, enforce least-privilege for protection changes, monitor traffic baselines, and prepare incident runbooks with clear escalation to speed response.",
"Url": "https://hub.prowler.com/check/shield_advanced_protection_in_route53_hosted_zones"
}
},
"Categories": [],
"Categories": [
"resilience"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,26 +1,35 @@
{
"Provider": "aws",
"CheckID": "sqs_queues_not_publicly_accessible",
"CheckTitle": "Check if SQS queues have policy set as Public",
"CheckType": [],
"CheckTitle": "SQS queue policy does not allow public access",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices/Network Reachability",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"TTPs/Initial Access/Unauthorized Access",
"Effects/Data Exposure"
],
"ServiceName": "sqs",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:sqs:region:account-id:queue",
"ResourceIdTemplate": "",
"Severity": "critical",
"ResourceType": "AwsSqsQueue",
"Description": "Check if SQS queues have policy set as Public",
"Risk": "Sensitive information could be disclosed",
"RelatedUrl": "https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-basic-examples-of-sqs-policies.html",
"Description": "Amazon SQS queue policies are assessed for **public access**. The finding highlights queues with `Allow` statements using a wildcard `Principal` without restrictive conditions, compared to queues that only grant access to the owning account or explicitly trusted principals.",
"Risk": "**Public SQS access** can expose message data (**confidentiality**), enable unauthorized send/receive or tampering (**integrity**), and allow purge/delete operations that disrupt processing (**availability**). It may also trigger unbounded message ingestion, causing cost spikes and consumer overload.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/SQS/sqs-queue-exposed.html",
"https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-basic-examples-of-sqs-policies.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/SQS/sqs-queue-exposed.html",
"Terraform": "https://docs.prowler.com/checks/aws/general-policies/ensure-sqs-queue-policy-is-not-public-by-only-allowing-specific-services-or-principals-to-access-it#terraform"
"CLI": "aws sqs set-queue-attributes --queue-url <example_queue_url> --attributes Policy='{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"<example_account_id>\"},\"Action\":\"sqs:*\",\"Resource\":\"<example_queue_arn>\"}]}'",
"NativeIaC": "```yaml\n# CloudFormation: Restrict SQS policy to a specific principal (not public)\nResources:\n QueuePolicy:\n Type: AWS::SQS::QueuePolicy\n Properties:\n Queues:\n - \"<example_queue_url>\"\n PolicyDocument:\n Version: \"2012-10-17\"\n Statement:\n - Effect: Allow\n Principal:\n AWS: \"<example_account_id>\" # CRITICAL: restrict access to a specific account (removes public \"*\")\n Action: \"sqs:*\"\n Resource: \"<example_queue_arn>\"\n```",
"Other": "1. Open the Amazon SQS console and select the queue\n2. Go to Permissions (Access policy) and click Edit\n3. In the JSON policy, replace any \"Principal\": \"*\" with \"Principal\": { \"AWS\": \"<your_account_id>\" } or remove those public statements\n4. Save changes",
"Terraform": "```hcl\n# Restrict SQS policy to a specific principal (not public)\nresource \"aws_sqs_queue_policy\" \"<example_resource_name>\" {\n queue_url = \"<example_queue_url>\"\n policy = jsonencode({\n Version = \"2012-10-17\"\n Statement = [{\n Effect = \"Allow\"\n Principal = { AWS = \"<example_account_id>\" } # CRITICAL: restrict to a specific principal (removes public \"*\")\n Action = \"sqs:*\"\n Resource = \"<example_queue_arn>\"\n }]\n })\n}\n```"
},
"Recommendation": {
"Text": "Review service with overly permissive policies. Adhere to Principle of Least Privilege.",
"Url": "https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-basic-examples-of-sqs-policies.html"
"Text": "Apply **least privilege** on SQS resource policies:\n- Avoid `Principal: *`; grant access only to specific accounts, roles, or services\n- Add restrictive conditions to tightly scope access\n- Prefer private connectivity and defense-in-depth controls\n- Review policies and audit activity regularly to prevent drift",
"Url": "https://hub.prowler.com/check/sqs_queues_not_publicly_accessible"
}
},
"Categories": [
@@ -1,26 +1,35 @@
{
"Provider": "aws",
"CheckID": "sqs_queues_server_side_encryption_enabled",
"CheckTitle": "Check if SQS queues have Server Side Encryption enabled",
"CheckType": [],
"CheckTitle": "SQS queue has server-side encryption enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Effects/Data Exposure"
],
"ServiceName": "sqs",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:sqs:region:account-id:queue",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsSqsQueue",
"Description": "Check if SQS queues have Server Side Encryption enabled",
"Risk": "If not enabled sensitive information in transit is not protected.",
"RelatedUrl": "https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-sse-existing-queue.html",
"Description": "**Amazon SQS queues** are evaluated for **server-side encryption** configured with a **KMS key** (`SSE-KMS`) protecting message bodies at rest.\n\nQueues without an associated KMS key are identified.",
"Risk": "Without **KMS-backed SSE**, message bodies lack tenant-controlled keys and detailed audit. Secrets, tokens, or PII in messages become easier to access through **privilege misuse**, misconfiguration, or unintended integrations, reducing **confidentiality** and limiting containment since you cannot revoke access via key disable/rotation.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html",
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/SQS/queue-encrypted-with-kms-customer-master-keys.html",
"https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-sse-existing-queue.html"
],
"Remediation": {
"Code": {
"CLI": "aws sqs set-queue-attributes --queue-url <QUEUE_URL> --attributes KmsMasterKeyId=<KEY>",
"NativeIaC": "https://docs.prowler.com/checks/aws/general-policies/general_16-encrypt-sqs-queue#cloudformation",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/SQS/queue-encrypted-with-kms-customer-master-keys.html",
"Terraform": "https://docs.prowler.com/checks/aws/general-policies/general_16-encrypt-sqs-queue#terraform"
"CLI": "aws sqs set-queue-attributes --queue-url <QUEUE_URL> --attributes KmsMasterKeyId=<KMS_KEY_ID_OR_ALIAS>",
"NativeIaC": "```yaml\n# CloudFormation: Enable SSE-KMS for an SQS queue\nResources:\n <example_resource_name>:\n Type: AWS::SQS::Queue\n Properties:\n KmsMasterKeyId: alias/aws/sqs # Critical: sets a KMS key, enabling SSE-KMS so the queue reports a kms_key_id\n```",
"Other": "1. In the AWS Console, go to Amazon SQS > Queues\n2. Select the queue and click Edit\n3. Expand Encryption\n4. Set Server-side encryption to Enabled\n5. For AWS KMS key, select alias/aws/sqs (or choose a specific KMS key)\n6. Click Save",
"Terraform": "```hcl\n# Enable SSE-KMS for an SQS queue\nresource \"aws_sqs_queue\" \"<example_resource_name>\" {\n kms_master_key_id = \"alias/aws/sqs\" # Critical: sets a KMS key, enabling SSE-KMS so the queue reports a kms_key_id\n}\n```"
},
"Recommendation": {
"Text": "Enable Encryption. Use a CMK where possible. It will provide additional management and privacy benefits",
"Url": "https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-sse-existing-queue.html"
"Text": "Enable **SSE-KMS** on all queues using a **customer-managed KMS key**.\n- Apply **least privilege** to key and queue policies; restrict `Encrypt/Decrypt`\n- Enforce key rotation and separation of duties\n- Tune data key reuse for security vs. cost\n- Monitor key and queue access to support **defense in depth**",
"Url": "https://hub.prowler.com/check/sqs_queues_server_side_encryption_enabled"
}
},
"Categories": [
@@ -1,28 +1,33 @@
{
"Provider": "aws",
"CheckID": "stepfunctions_statemachine_logging_enabled",
"CheckTitle": "Step Functions state machines should have logging enabled",
"CheckTitle": "Step Functions state machine has logging enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices"
"Software and Configuration Checks/AWS Security Best Practices/Runtime Behavior Analysis"
],
"ServiceName": "stepfunctions",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:states:{region}:{account-id}:stateMachine/{stateMachine-id}",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsStepFunctionStateMachine",
"Description": "This control checks if AWS Step Functions state machines have logging enabled. The control fails if the state machine doesn't have the loggingConfiguration property defined.",
"Risk": "Without logging enabled, important operational data may be lost, making it difficult to troubleshoot issues, monitor performance, and ensure compliance with auditing requirements.",
"RelatedUrl": "https://docs.aws.amazon.com/step-functions/latest/dg/logging.html",
"Description": "**AWS Step Functions state machines** are configured to emit **execution logs** to CloudWatch Logs via a defined `loggingConfiguration` with a `level` set above `OFF`.",
"Risk": "Without **execution logs**, workflow failures and anomalies are **undetectable**, increasing MTTR and risking silent data loss. Missing audit trails weaken **integrity** oversight and complicate **forensics**, enabling misuse of invoked services to go unnoticed and creating **compliance** gaps.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/step-functions/latest/dg/logging.html",
"https://docs.aws.amazon.com/securityhub/latest/userguide/stepfunctions-controls.html#stepfunctions-1",
"https://support.icompaas.com/support/solutions/articles/62000233757-ensure-step-functions-state-machines-should-have-logging-enabled"
],
"Remediation": {
"Code": {
"CLI": "aws stepfunctions update-state-machine --state-machine-arn <state-machine-arn> --logging-configuration file://logging-config.json",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/stepfunctions-controls.html#stepfunctions-1",
"Terraform": "https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sfn_state_machine#logging_configuration"
"NativeIaC": "```yaml\nResources:\n <example_resource_name>:\n Type: AWS::StepFunctions::StateMachine\n Properties:\n RoleArn: arn:aws:iam::<account-id>:role/<example_role_name>\n DefinitionString: |\n {\"StartAt\":\"Pass\",\"States\":{\"Pass\":{\"Type\":\"Pass\",\"End\":true}}}\n LoggingConfiguration:\n Destinations:\n - CloudWatchLogsLogGroup:\n LogGroupArn: arn:aws:logs:<region>:<account-id>:log-group:<log-group-name>:* # Critical: target CloudWatch Logs group\n Level: ERROR # Critical: enables logging (not OFF)\n```",
"Other": "1. Open AWS Console > Step Functions > State machines\n2. Select the state machine and click Edit\n3. In Logging, enable logging\n4. Choose an existing CloudWatch Logs log group\n5. Set Level to Error (or All)\n6. Save changes",
"Terraform": "```hcl\nresource \"aws_sfn_state_machine\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n role_arn = \"arn:aws:iam::<account-id>:role/<example_role_name>\"\n definition = jsonencode({ StartAt = \"Pass\", States = { Pass = { Type = \"Pass\", End = true } } })\n\n logging_configuration {\n log_destination = \"arn:aws:logs:<region>:<account-id>:log-group:<log-group-name>:*\" # Critical: CloudWatch Logs destination\n level = \"ERROR\" # Critical: enables logging\n }\n}\n```"
},
"Recommendation": {
"Text": "Configure logging for your Step Functions state machines to ensure that operational data is captured and available for debugging, monitoring, and auditing purposes.",
"Url": "https://docs.aws.amazon.com/step-functions/latest/dg/logging.html"
"Text": "Enable CloudWatch logging on all state machines at an appropriate `level` (e.g., `ERROR` or `ALL`) and send logs to a protected log group. Apply **least privilege** to log write/read, set **retention**, and avoid sensitive data unless required using `includeExecutionData`. Use X-Ray tracing for **defense in depth**.",
"Url": "https://hub.prowler.com/check/stepfunctions_statemachine_logging_enabled"
}
},
"Categories": [

Some files were not shown because too many files have changed in this diff