Compare commits

...

52 Commits

Author SHA1 Message Date
pedrooot 989c270ad1 feat(threatscore): improve the way of scoring from the CLI 2025-08-21 11:43:22 +02:00
Chandrapal Badshah d54e3b25db fix: Refactor getting lighthouse config (#8546)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-08-21 11:14:21 +02:00
Pepe Fagoaga 6a8e8750bb chore(actions): conflict checker (#8547) 2025-08-21 14:28:18 +05:45
Hugo Pereira Brito ad3d4536fb fix(m365): only evaluate enabled users in entra_users_mfa_capable (#8544) 2025-08-20 16:45:00 +02:00
Andoni Alonso 46c24055ee docs: refactor Overview into several files (#8543) 2025-08-20 17:44:06 +05:45
Pepe Fagoaga 4c6a1592ac chore(actions): update docs comment with link (#8448) 2025-08-20 17:42:32 +05:45
Hugo Pereira Brito 89e657561c feat(github): add User Email and APP name/installations information (#8501)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-08-20 12:26:38 +02:00
Hugo Pereira Brito 55099abc86 fix(organization): list all accessible organizations (#8535)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-08-20 12:13:01 +02:00
Andoni Alonso 3c599a75cc feat(iam): add ECS privilege escalation patterns to IAM checks (#8541) 2025-08-20 09:23:30 +02:00
Chandrapal Badshah f77897f813 feat: gpt-5 and gpt-5-mini integration with lighthouse (#8527)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
2025-08-19 16:49:21 +02:00
Sergio Garcia 30518f2e0e feat(aws): new check eks_cluster_deletion_protection_enabled (#8536) 2025-08-19 10:25:24 +02:00
Chandrapal Badshah efdeb431ba feat: Add resource agent to supervisor (#8509)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-08-19 09:40:14 +02:00
Sergio Garcia bb07cf9147 fix(aws): exact match in resource-arn filtering (#8533) 2025-08-18 12:11:13 +02:00
Prowler Bot 9214b5c26f chore(regions_update): Changes in regions for AWS services (#8531)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-08-18 11:58:41 +02:00
dependabot[bot] d57df3cc28 chore(deps): bump actions/upload-artifact from 4.5.0 to 4.6.2 (#8154)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-18 11:43:41 +02:00
Andoni Alonso 2f5fce41dc feat(iam): remove standalone iam:PassRole from privesc detection and add missing patterns (#8530) 2025-08-18 11:35:14 +02:00
Chandrapal Badshah 6918a75449 fix: add business context to lighthouse chat (#8528)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-08-18 09:49:23 +02:00
Pablo Lara 3aeaa3d992 feat(filters): improve provider connection filter UX (#8520) 2025-08-18 09:10:16 +02:00
Sergio Garcia fd833eecf0 fix(github): solve Github APP auth method (#8529) 2025-08-18 08:35:19 +02:00
Andoni Alonso 39e4d20b24 feat(iam): add Bedrock AgentCore privilege escalation combo (#8526) 2025-08-15 13:25:15 +02:00
Sergio Garcia dfdd45e4d0 fix(github): list all accessible repositories (#8522) 2025-08-14 10:38:38 +02:00
Hugo Pereira Brito 81478dfed3 fix(compliance): GitHub CIS 1.0 (#8519) 2025-08-13 16:45:36 +02:00
Chandrapal Badshah 2854f8405c fix: simplify error handling to use only error.message (#8518)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-08-13 10:59:47 +02:00
Jaen-923 0e1578cfbc chore(aws): Refine kisa isms-p compliance mapping (#8479)
Co-authored-by: ghkim583 <203069125+ghkim583@users.noreply.github.com>
2025-08-13 09:08:37 +02:00
Hugo Pereira Brito f5b1532647 fix(kafka): false positives in kafka_cluster_is_public check (#8514) 2025-08-13 09:05:09 +02:00
Sergio Garcia d9f3a6b88e docs(github): add Github onboarding documentation (#8510) 2025-08-12 17:11:30 +02:00
Hugo Pereira Brito b0c386fc60 fix(app): fix false positives in app_http_logs_enabled (#8507)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-08-12 14:47:17 +02:00
Hugo Pereira Brito 72b06261df fix(storage): fall positives in storage_geo_redundant_enabled (#8504) 2025-08-12 12:30:43 +02:00
sumit-tft 1562b77581 fix(ui): redirection after deleting providers group and improve erro… (#8389)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-08-12 11:31:45 +02:00
Daniel Barranquero 10e38ca407 fix: missing resource_name in GCP and Azure Defender checks (#8352) 2025-08-11 16:16:08 +02:00
Rubén De la Torre Vico 5842f2df37 feat(azure/vm): add new check vm_jit_access_enabled (#8202)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-08-11 13:12:36 +02:00
Prowler Bot 8b3b9ffd99 chore(regions_update): Changes in regions for AWS services (#8499)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-08-11 12:00:02 +02:00
Rubén De la Torre Vico d238050065 feat(azure/vm): add new check vm_sufficient_daily_backup_retention_period (#8200)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-08-11 11:44:45 +02:00
sumit-tft 5572d476ad fix(ui): adjust table headers to be single-line and consistent (#8480) 2025-08-11 10:47:10 +02:00
sumit-tft 3c94d3a56f fix(ui): disable See Compliance button until scan completes (#8487)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-08-11 10:37:35 +02:00
Hugo Pereira Brito 85af4ff77c feat(m365): add certificate auth method to cli (#8404)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-08-11 09:47:56 +02:00
Daniel Barranquero dcee114ef3 fix: validation errors in azure and m365 (#8368) 2025-08-11 09:42:30 +02:00
Pedro Martín 760723874c fix(prowler-threatscore): order the requirements by id (#8495)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-08-11 08:20:10 +02:00
Pedro Martín c0a4898074 chore(changelog): update (#8496) 2025-08-11 07:48:23 +02:00
Alejandro Bailo 03c0533b58 feat(ui): overview charts display improved (#8491)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-08-08 10:59:15 +02:00
sumit-tft c8dcb0edb0 feat(ui): add GitHub submenu under High Risk Findings (#8488)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-08-08 10:36:36 +02:00
Pablo Lara 82171ee916 docs: update changelog (#8489) 2025-08-08 10:20:53 +02:00
Pablo Lara df4bf18b97 feat(ui): add Mutelist menu item under Configuration (#8444)
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-08-08 09:09:37 +02:00
Alejandro Bailo 94e60f7329 fix(ui): assume role fields shown (#8484) 2025-08-07 17:44:46 +02:00
Rubén De la Torre Vico f1ba5abbec chore(docs): update provider statistics in README.md (#8483)
Co-authored-by: Claude <noreply@anthropic.com>
2025-08-07 17:10:56 +02:00
Hugo Pereira Brito 6cc1a9a2cb fix(compliance): delete invalid requirements for GitHub CIS 1.0 (#8472)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-08-07 20:51:20 +07:00
Pablo Lara 31f98092bf feat(ui): add provider type filter to providers page (#8473) 2025-08-07 14:34:04 +02:00
Pepe Fagoaga 85197036ca chore(env): Update NEXT_PUBLIC_PROWLER_RELEASE_VERSION (#8476) 2025-08-07 17:50:18 +05:45
Pepe Fagoaga be43025f00 fix(actions): always get latest SDK reference (#8474) 2025-08-07 17:38:40 +05:45
César Arroba c6b34f0a85 chore(api): open PR with API prowler version (#8475) 2025-08-07 13:49:39 +02:00
Prowler Bot 675698a26a chore(release): Bump version to v5.11.0 (#8470)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-08-07 12:40:55 +02:00
Alejandro Bailo 8d9bf2384f docs: S3 tutorial documentation (#8414)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
2025-08-07 16:04:42 +05:45
194 changed files with 10043 additions and 5454 deletions
+1 -1
View File
@@ -133,7 +133,7 @@ SENTRY_ENVIRONMENT=local
SENTRY_RELEASE=local
#### Prowler release version ####
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.7.5
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.10.0
# Social login credentials
SOCIAL_GOOGLE_OAUTH_CALLBACK_URL="${AUTH_URL}/api/auth/callback/google"
+21 -1
View File
@@ -13,6 +13,7 @@ on:
- "master"
- "v5.*"
paths:
- ".github/workflows/api-pull-request.yml"
- "api/**"
env:
@@ -81,7 +82,9 @@ jobs:
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: api/**
files: |
api/**
.github/workflows/api-pull-request.yml
files_ignore: ${{ env.IGNORE_FILES }}
- name: Replace @master with current branch in pyproject.toml
@@ -105,6 +108,23 @@ jobs:
run: |
poetry lock
- name: Update SDK's poetry.lock resolved_reference to latest commit - Only for push events to `master`
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true' && github.event_name == 'push'
run: |
# Get the latest commit hash from the prowler-cloud/prowler repository
LATEST_COMMIT=$(curl -s "https://api.github.com/repos/prowler-cloud/prowler/commits/master" | jq -r '.sha')
echo "Latest commit hash: $LATEST_COMMIT"
# Update the resolved_reference specifically for prowler-cloud/prowler repository
sed -i '/url = "https:\/\/github\.com\/prowler-cloud\/prowler\.git"/,/resolved_reference = / {
s/resolved_reference = "[a-f0-9]\{40\}"/resolved_reference = "'"$LATEST_COMMIT"'"/
}' poetry.lock
# Verify the change was made
echo "Updated resolved_reference:"
grep -A2 -B2 "resolved_reference" poetry.lock
- name: Set up Python ${{ matrix.python-version }}
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
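The `sed` step above rewrites `resolved_reference` only inside the `prowler-cloud/prowler` source block of `poetry.lock`, leaving every other package's pin untouched. A rough Python equivalent of that targeted replacement (a hypothetical helper, not part of the workflow) is:

```python
import re

def update_resolved_reference(lock_text: str, new_sha: str) -> str:
    """Rewrite the 40-char resolved_reference that follows the
    prowler-cloud/prowler git URL, leaving other packages untouched."""
    pattern = re.compile(
        r'(url = "https://github\.com/prowler-cloud/prowler\.git"'
        r'.*?resolved_reference = ")[a-f0-9]{40}(")',
        re.DOTALL,  # let .*? cross line boundaries, like sed's address range
    )
    return pattern.sub(lambda m: m.group(1) + new_sha + m.group(2), lock_text)

lock = (
    'url = "https://github.com/prowler-cloud/prowler.git"\n'
    'resolved_reference = "' + "a" * 40 + '"\n'
)
updated = update_resolved_reference(lock, "b" * 40)
```

The non-greedy `.*?` plays the role of sed's two-address range: the substitution is scoped to the text between the matching URL and the next `resolved_reference`.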
@@ -7,6 +7,7 @@ on:
- 'v3'
paths:
- 'docs/**'
- '.github/workflows/build-documentation-on-pr.yml'
env:
PR_NUMBER: ${{ github.event.pull_request.number }}
@@ -16,9 +17,20 @@ jobs:
name: Documentation Link
runs-on: ubuntu-latest
steps:
- name: Leave PR comment with the Prowler Documentation URI
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
- name: Find existing documentation comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
id: find-comment
with:
issue-number: ${{ env.PR_NUMBER }}
comment-author: 'github-actions[bot]'
body-includes: '<!-- prowler-docs-link -->'
- name: Create or update PR comment with the Prowler Documentation URI
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ env.PR_NUMBER }}
body: |
<!-- prowler-docs-link -->
You can check the documentation for this PR here -> [Prowler Documentation](https://prowler-prowler-docs--${{ env.PR_NUMBER }}.com.readthedocs.build/projects/prowler-open-source/en/${{ env.PR_NUMBER }}/)
edit-mode: replace
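The find-then-create-or-update pattern in this workflow (locate a prior bot comment by the hidden `<!-- prowler-docs-link -->` marker, then edit it in place instead of posting a duplicate) can be sketched generically. Here a plain list of dicts stands in for the GitHub comments API:

```python
MARKER = "<!-- prowler-docs-link -->"

def upsert_comment(comments: list, body: str) -> None:
    """Replace the existing marker comment if present; otherwise append one."""
    for comment in comments:
        if MARKER in comment["body"]:
            comment["body"] = body        # edit-mode: replace
            return
    comments.append({"body": body})       # no match: create a new comment

thread = [{"body": "unrelated comment"}]
upsert_comment(thread, MARKER + "\nDocs preview ready")
upsert_comment(thread, MARKER + "\nDocs preview updated")
```

After both calls the thread still holds exactly one bot comment, which is the point of the pattern: repeated pushes update a single comment rather than spamming the PR.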
+1 -1
View File
@@ -1,4 +1,4 @@
name: Create Backport Label
name: Prowler - Create Backport Label
on:
release:
+166
View File
@@ -0,0 +1,166 @@
name: Prowler - PR Conflict Checker
on:
pull_request:
types:
- opened
- synchronize
- reopened
branches:
- "master"
- "v5.*"
pull_request_target:
types:
- opened
- synchronize
- reopened
branches:
- "master"
- "v5.*"
jobs:
conflict-checker:
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
**
- name: Check for conflict markers
id: conflict-check
run: |
echo "Checking for conflict markers in changed files..."
CONFLICT_FILES=""
HAS_CONFLICTS=false
# Check each changed file for conflict markers
for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
if [ -f "$file" ]; then
echo "Checking file: $file"
# Look for conflict markers
if grep -l "^<<<<<<<\|^=======\|^>>>>>>>" "$file" 2>/dev/null; then
echo "Conflict markers found in: $file"
CONFLICT_FILES="$CONFLICT_FILES$file "
HAS_CONFLICTS=true
fi
fi
done
if [ "$HAS_CONFLICTS" = true ]; then
echo "has_conflicts=true" >> $GITHUB_OUTPUT
echo "conflict_files=$CONFLICT_FILES" >> $GITHUB_OUTPUT
echo "Conflict markers detected in files: $CONFLICT_FILES"
else
echo "has_conflicts=false" >> $GITHUB_OUTPUT
echo "No conflict markers found in changed files"
fi
- name: Add conflict label
if: steps.conflict-check.outputs.has_conflicts == 'true'
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const { data: labels } = await github.rest.issues.listLabelsOnIssue({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
});
const hasConflictLabel = labels.some(label => label.name === 'has-conflicts');
if (!hasConflictLabel) {
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
labels: ['has-conflicts']
});
console.log('Added has-conflicts label');
} else {
console.log('has-conflicts label already exists');
}
- name: Remove conflict label
if: steps.conflict-check.outputs.has_conflicts == 'false'
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
with:
script: |
try {
await github.rest.issues.removeLabel({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
name: 'has-conflicts'
});
console.log('Removed has-conflicts label');
} catch (error) {
if (error.status === 404) {
console.log('has-conflicts label was not present');
} else {
throw error;
}
}
- name: Find existing conflict comment
if: steps.conflict-check.outputs.has_conflicts == 'true'
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
id: find-comment
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-regex: '(⚠️ \*\*Conflict Markers Detected\*\*|✅ \*\*Conflict Markers Resolved\*\*)'
- name: Create or update conflict comment
if: steps.conflict-check.outputs.has_conflicts == 'true'
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ github.event.pull_request.number }}
edit-mode: replace
body: |
⚠️ **Conflict Markers Detected**
This pull request contains unresolved conflict markers in the following files:
```
${{ steps.conflict-check.outputs.conflict_files }}
```
Please resolve these conflicts by:
1. Locating the conflict markers: `<<<<<<<`, `=======`, and `>>>>>>>`
2. Manually editing the files to resolve the conflicts
3. Removing all conflict markers
4. Committing and pushing the changes
- name: Find existing conflict comment when resolved
if: steps.conflict-check.outputs.has_conflicts == 'false'
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
id: find-resolved-comment
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-regex: '(⚠️ \*\*Conflict Markers Detected\*\*|✅ \*\*Conflict Markers Resolved\*\*)'
- name: Update comment when conflicts resolved
if: steps.conflict-check.outputs.has_conflicts == 'false' && steps.find-resolved-comment.outputs.comment-id != ''
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-resolved-comment.outputs.comment-id }}
issue-number: ${{ github.event.pull_request.number }}
edit-mode: replace
body: |
✅ **Conflict Markers Resolved**
All conflict markers have been successfully resolved in this pull request.
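The workflow's grep anchors each marker to the start of a line (`^<<<<<<<`, `^=======`, `^>>>>>>>`); note that a bare `=======` line also appears in legitimate content such as reStructuredText underlines, so the check can over-match. A minimal Python equivalent of the detection logic:

```python
# Line-anchored conflict markers, mirroring
# grep "^<<<<<<<\|^=======\|^>>>>>>>"
CONFLICT_PREFIXES = ("<<<<<<<", "=======", ">>>>>>>")

def has_conflict_markers(text: str) -> bool:
    """True if any line begins with a Git conflict marker."""
    return any(line.startswith(CONFLICT_PREFIXES) for line in text.splitlines())

clean = "def f():\n    return 1\n"
conflicted = "<<<<<<< HEAD\nours\n=======\ntheirs\n>>>>>>> feature-branch\n"
```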
@@ -1,4 +1,4 @@
name: Prowler Release Preparation
name: Prowler - Release Preparation
run-name: Prowler Release Preparation for ${{ inputs.prowler_version }}
@@ -144,27 +144,34 @@ jobs:
fi
echo "✓ api/src/backend/api/v1/views.py version: $CURRENT_API_VERSION"
- name: Create release branch for minor release
- name: Checkout existing release branch for minor release
if: ${{ env.PATCH_VERSION == '0' }}
run: |
echo "Minor release detected (patch = 0), creating new branch $BRANCH_NAME..."
if git show-ref --verify --quiet "refs/heads/$BRANCH_NAME" || git show-ref --verify --quiet "refs/remotes/origin/$BRANCH_NAME"; then
echo "ERROR: Branch $BRANCH_NAME already exists for minor release $PROWLER_VERSION"
echo "Minor release detected (patch = 0), checking out existing branch $BRANCH_NAME..."
if git show-ref --verify --quiet "refs/remotes/origin/$BRANCH_NAME"; then
echo "Branch $BRANCH_NAME exists remotely, checking out..."
git checkout -b "$BRANCH_NAME" "origin/$BRANCH_NAME"
else
echo "ERROR: Branch $BRANCH_NAME should exist for minor release $PROWLER_VERSION. Please create it manually first."
exit 1
fi
git checkout -b "$BRANCH_NAME"
# Push the new branch first so it exists remotely
git push origin "$BRANCH_NAME"
- name: Update prowler dependency in api/pyproject.toml
- name: Prepare prowler dependency update for minor release
if: ${{ env.PATCH_VERSION == '0' }}
run: |
CURRENT_PROWLER_REF=$(grep 'prowler @ git+https://github.com/prowler-cloud/prowler.git@' api/pyproject.toml | sed -E 's/.*@([^"]+)".*/\1/' | tr -d '[:space:]')
BRANCH_NAME_TRIMMED=$(echo "$BRANCH_NAME" | tr -d '[:space:]')
# Minor release: update the dependency to use the new branch
echo "Minor release detected - updating prowler dependency from '$CURRENT_PROWLER_REF' to '$BRANCH_NAME_TRIMMED'"
# Create a temporary branch for the PR
TEMP_BRANCH="update-api-dependency-$BRANCH_NAME_TRIMMED-$(date +%s)"
echo "TEMP_BRANCH=$TEMP_BRANCH" >> $GITHUB_ENV
# Switch back to master and create temp branch
git checkout master
git checkout -b "$TEMP_BRANCH"
# Minor release: update the dependency to use the release branch
echo "Updating prowler dependency from '$CURRENT_PROWLER_REF' to '$BRANCH_NAME_TRIMMED'"
sed -i "s|prowler @ git+https://github.com/prowler-cloud/prowler.git@[^\"]*\"|prowler @ git+https://github.com/prowler-cloud/prowler.git@$BRANCH_NAME_TRIMMED\"|" api/pyproject.toml
# Verify the change was made
@@ -180,12 +187,39 @@ jobs:
poetry lock
cd ..
# Commit and push the changes
# Commit and push the temporary branch
git add api/pyproject.toml api/poetry.lock
git commit -m "chore(api): update prowler dependency to $BRANCH_NAME_TRIMMED for release $PROWLER_VERSION"
git push origin "$BRANCH_NAME"
git push origin "$TEMP_BRANCH"
echo "✓ api/pyproject.toml prowler dependency updated to: $UPDATED_PROWLER_REF"
echo "✓ Prepared prowler dependency update to: $UPDATED_PROWLER_REF"
- name: Create Pull Request against release branch
if: ${{ env.PATCH_VERSION == '0' }}
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
branch: ${{ env.TEMP_BRANCH }}
base: ${{ env.BRANCH_NAME }}
title: "chore(api): Update prowler dependency to ${{ env.BRANCH_NAME }} for release ${{ env.PROWLER_VERSION }}"
body: |
### Description
Updates the API prowler dependency for release ${{ env.PROWLER_VERSION }}.
**Changes:**
- Updates `api/pyproject.toml` prowler dependency from `@master` to `@${{ env.BRANCH_NAME }}`
- Updates `api/poetry.lock` file with resolved dependencies
This PR should be merged into the `${{ env.BRANCH_NAME }}` release branch.
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
labels: |
component/api
no-changelog
- name: Extract changelog entries
run: |
@@ -1,4 +1,4 @@
name: Check Changelog
name: Prowler - Check Changelog
on:
pull_request:
+1 -1
View File
@@ -84,7 +84,7 @@ jobs:
working-directory: ./ui
run: npm run test:e2e
- name: Upload test reports
uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b # v4.5.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: failure()
with:
name: playwright-report
+4 -4
View File
@@ -86,12 +86,12 @@ prowler dashboard
| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) |
|---|---|---|---|---|
| AWS | 567 | 82 | 36 | 10 |
| AWS | 571 | 82 | 36 | 10 |
| GCP | 79 | 13 | 10 | 3 |
| Azure | 142 | 18 | 11 | 3 |
| Azure | 162 | 19 | 11 | 4 |
| Kubernetes | 83 | 7 | 5 | 7 |
| GitHub | 16 | 2 | 1 | 0 |
| M365 | 69 | 7 | 3 | 2 |
| GitHub | 17 | 2 | 1 | 0 |
| M365 | 70 | 7 | 3 | 2 |
| NHN (Unofficial) | 6 | 2 | 1 | 0 |
> [!Note]
+5
View File
@@ -2,6 +2,11 @@
All notable changes to the **Prowler API** are documented in this file.
## [1.12.0] (Prowler 5.11.0 - UNRELEASED)
### Added
- Lighthouse support for OpenAI GPT-5 [(#8527)](https://github.com/prowler-cloud/prowler/pull/8527)
## [1.11.0] (Prowler 5.10.0)
### Added
+4
View File
@@ -1752,6 +1752,10 @@ class LighthouseConfiguration(RowLevelSecurityProtectedModel):
GPT_4O = "gpt-4o", _("GPT-4o Default")
GPT_4O_MINI_2024_07_18 = "gpt-4o-mini-2024-07-18", _("GPT-4o Mini v2024-07-18")
GPT_4O_MINI = "gpt-4o-mini", _("GPT-4o Mini Default")
GPT_5_2025_08_07 = "gpt-5-2025-08-07", _("GPT-5 v2025-08-07")
GPT_5 = "gpt-5", _("GPT-5 Default")
GPT_5_MINI_2025_08_07 = "gpt-5-mini-2025-08-07", _("GPT-5 Mini v2025-08-07")
GPT_5_MINI = "gpt-5-mini", _("GPT-5 Mini Default")
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
+61
View File
@@ -0,0 +1,61 @@
## Access Prowler App
After [installation](../installation/prowler-app.md), navigate to [http://localhost:3000](http://localhost:3000) and sign up with email and password.
<img src="../../img/sign-up-button.png" alt="Sign Up Button" width="320"/>
<img src="../../img/sign-up.png" alt="Sign Up" width="285"/>
???+ note "User creation and default tenant behavior"
When creating a new user, the behavior depends on whether an invitation is provided:
- **Without an invitation**:
- A new tenant is automatically created.
- The new user is assigned to this tenant.
- A set of **RBAC admin permissions** is generated and assigned to the user for the newly-created tenant.
- **With an invitation**: The user is added to the specified tenant with the permissions defined in the invitation.
This mechanism ensures that the first user in a newly created tenant has administrative permissions within that tenant.
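The branching described above can be sketched as follows (hypothetical function and field names; the real API models differ):

```python
def sign_up(email, invitation=None):
    """Sketch of the documented flow: an invitation attaches the user to an
    existing tenant with the invited role; otherwise a fresh tenant is
    created and the user receives RBAC admin permissions on it."""
    if invitation:
        return {"email": email,
                "tenant": invitation["tenant"],
                "role": invitation["role"]}
    return {"email": email, "tenant": "new-tenant", "role": "admin"}

first = sign_up("a@example.com")                                  # no invitation
invited = sign_up("b@example.com",
                  {"tenant": first["tenant"], "role": "viewer"})  # invited user
```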
## Log In
Access Prowler App by logging in with **email and password**.
<img src="../../img/log-in.png" alt="Log In" width="285"/>
## Add Cloud Provider
Configure a cloud provider for scanning:
1. Navigate to `Settings > Cloud Providers` and click `Add Account`.
2. Select the cloud provider.
3. Enter the provider's identifier (Optional: Add an alias):
- **AWS**: Account ID
- **GCP**: Project ID
- **Azure**: Subscription ID
- **Kubernetes**: Cluster ID
- **M365**: Domain ID
4. Follow the guided instructions to add and authenticate your credentials.
## Start a Scan
Once credentials are successfully added and validated, Prowler initiates a scan of your cloud environment.
Click `Go to Scans` to monitor progress.
## View Results
Review findings during scan execution in the following sections:
- **Overview**: Provides a high-level summary of your scans.
<img src="../../img/overview.png" alt="Overview" width="700"/>
- **Compliance**: Displays compliance insights based on security frameworks.
<img src="../../img/compliance.png" alt="Compliance" width="700"/>
> For detailed usage instructions, refer to the [Prowler App Guide](../tutorials/prowler-app.md).
???+ note
Prowler will automatically scan all configured providers every **24 hours**, ensuring your cloud environment stays continuously monitored.
+257
View File
@@ -0,0 +1,257 @@
## Running Prowler
Running Prowler requires specifying the provider (e.g., `aws`, `gcp`, `azure`, `m365`, `github`, or `kubernetes`):
???+ note
If no provider is specified, AWS is used by default for backward compatibility with Prowler v2.
```console
prowler <provider>
```
![Prowler Execution](../img/short-display.png)
???+ note
Running the `prowler` command without options uses environment variable credentials. Refer to the [Requirements](../getting-started/requirements.md) section for credential configuration details.
## Verbose Output
If you prefer the former verbose output, use `--verbose`. By default, minimal output is displayed; enabling verbosity shows more information while Prowler is running.
## Report Generation
By default, Prowler generates CSV, JSON-OCSF, and HTML reports. To generate a JSON-ASFF report (used by AWS Security Hub), specify `-M` or `--output-modes`:
```console
prowler <provider> -M csv json-asff json-ocsf html
```
The HTML report is saved in the output directory, alongside other reports. It will look like this:
![Prowler Execution](../img/html-output.png)
## Listing Available Checks and Services
List all available checks or services within a provider using `-l`/`--list-checks` or `--list-services`.
```console
prowler <provider> --list-checks
prowler <provider> --list-services
```
## Running Specific Checks or Services
Execute specific checks or services using `-c`/`--checks` or `-s`/`--services`:
```console
prowler azure --checks storage_blob_public_access_level_is_disabled
prowler aws --services s3 ec2
prowler gcp --services iam compute
prowler kubernetes --services etcd apiserver
```
## Excluding Checks and Services
Checks and services can be excluded with `-e`/`--excluded-checks` or `--excluded-services`:
```console
prowler aws --excluded-checks s3_bucket_public_access
prowler azure --excluded-services defender iam
prowler gcp --excluded-services kms
prowler kubernetes --excluded-services controllermanager
```
## Additional Options
Explore more advanced time-saving execution methods in the [Miscellaneous](../tutorials/misc.md) section.
Access the help menu and view all available options with `-h`/`--help`:
```console
prowler --help
```
## AWS
Use a custom AWS profile with `-p`/`--profile` and/or specific AWS regions with `-f`/`--filter-region`:
```console
prowler aws --profile custom-profile -f us-east-1 eu-south-2
```
???+ note
By default, `prowler` will scan all AWS regions.
See more details about AWS Authentication in the [Requirements](../getting-started/requirements.md#aws) section.
## Azure
Azure requires specifying the auth method:
```console
# To use service principal authentication
prowler azure --sp-env-auth
# To use az cli authentication
prowler azure --az-cli-auth
# To use browser authentication
prowler azure --browser-auth --tenant-id "XXXXXXXX"
# To use managed identity auth
prowler azure --managed-identity-auth
```
See more details about Azure Authentication in the [Requirements](../getting-started/requirements.md#azure) section.
By default, Prowler scans all accessible subscriptions. Scan specific subscriptions using the following flag (using az cli auth as example):
```console
prowler azure --az-cli-auth --subscription-ids <subscription ID 1> <subscription ID 2> ... <subscription ID N>
```
## Google Cloud
- **User Account Credentials**
By default, Prowler uses **User Account credentials**. Configure accounts using:
- `gcloud init`: set up a new account.
- `gcloud config set account <account>`: switch to an existing account.
Once configured, obtain access credentials using: `gcloud auth application-default login`.
- **Service Account Authentication**
Alternatively, you can use Service Account credentials:
Generate and download Service Account keys in JSON format. Refer to [Google IAM documentation](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) for details.
Provide the key file location using this argument:
```console
prowler gcp --credentials-file path
```
- **Scanning Specific GCP Projects**
By default, Prowler scans all accessible GCP projects. Scan specific projects with the `--project-ids` flag:
```console
prowler gcp --project-ids <Project ID 1> <Project ID 2> ... <Project ID N>
```
- **GCP Retry Configuration**
Configure the maximum number of retry attempts for Google Cloud SDK API calls with the `--gcp-retries-max-attempts` flag:
```console
prowler gcp --gcp-retries-max-attempts 5
```
This is useful when experiencing quota exceeded errors (HTTP 429) to increase the number of automatic retry attempts.
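A generic retry loop of the kind `--gcp-retries-max-attempts` configures might look like the following (illustrative only; Prowler tunes the Google Cloud SDK's built-in retry rather than implementing its own):

```python
import time

class QuotaExceeded(Exception):
    """Stand-in for the SDK's HTTP 429 quota-exceeded error."""

def call_with_retries(fn, max_attempts=5, base_delay=0.01):
    """Retry fn() on quota errors, with exponential backoff between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except QuotaExceeded:
            if attempt == max_attempts:
                raise  # attempts exhausted: surface the error
            time.sleep(base_delay * 2 ** (attempt - 1))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise QuotaExceeded()  # fail the first two calls
    return "ok"
```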
## Kubernetes
Prowler enables security scanning of Kubernetes clusters, supporting both **in-cluster** and **external** execution.
- **External (Non-In-Cluster) Execution**
```console
prowler kubernetes --kubeconfig-file path
```
???+ note
If no `--kubeconfig-file` is provided, Prowler will use the default KubeConfig file location (`~/.kube/config`).
- **In-Cluster Execution**
To run Prowler inside the cluster, apply the provided YAML configuration to deploy a job in a new namespace:
```console
kubectl apply -f kubernetes/prowler-sa.yaml
kubectl apply -f kubernetes/job.yaml
kubectl apply -f kubernetes/prowler-role.yaml
kubectl apply -f kubernetes/prowler-rolebinding.yaml
kubectl get pods --namespace prowler-ns --> prowler-XXXXX
kubectl logs prowler-XXXXX --namespace prowler-ns
```
???+ note
By default, Prowler scans all namespaces in the active Kubernetes context. Use the `--context` flag to specify the context to be scanned and `--namespaces` to restrict scanning to specific namespaces.
## Microsoft 365
Microsoft 365 requires specifying the auth method:
```console
# To use service principal authentication for MSGraph and PowerShell modules
prowler m365 --sp-env-auth
# To use both service principal (for MSGraph) and user credentials (for PowerShell modules)
prowler m365 --env-auth
# To use az cli authentication
prowler m365 --az-cli-auth
# To use browser authentication
prowler m365 --browser-auth --tenant-id "XXXXXXXX"
```
See more details about M365 Authentication in the [Requirements](../getting-started/requirements.md#microsoft-365) section.
## GitHub
Prowler enables security scanning of your **GitHub account**, including **Repositories**, **Organizations** and **Applications**.
- **Supported Authentication Methods**
Authenticate using one of the following methods:
```console
# Personal Access Token (PAT):
prowler github --personal-access-token pat
# OAuth App Token:
prowler github --oauth-app-token oauth_token
# GitHub App Credentials:
prowler github --github-app-id app_id --github-app-key app_key
```
???+ note
If no login method is explicitly provided, Prowler will automatically attempt to authenticate using environment variables in the following order of precedence:
1. `GITHUB_PERSONAL_ACCESS_TOKEN`
2. `OAUTH_APP_TOKEN`
3. `GITHUB_APP_ID` and `GITHUB_APP_KEY`
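The precedence order above can be sketched as a first-match resolver (environment variable names are from the docs; the return shape is illustrative):

```python
import os

def resolve_github_auth(env=os.environ):
    """Return the first configured auth method, in documented precedence order."""
    if env.get("GITHUB_PERSONAL_ACCESS_TOKEN"):
        return ("pat", env["GITHUB_PERSONAL_ACCESS_TOKEN"])
    if env.get("OAUTH_APP_TOKEN"):
        return ("oauth", env["OAUTH_APP_TOKEN"])
    if env.get("GITHUB_APP_ID") and env.get("GITHUB_APP_KEY"):
        return ("app", (env["GITHUB_APP_ID"], env["GITHUB_APP_KEY"]))
    return None  # no credentials configured

# The PAT wins even when an OAuth token is also set
env = {"GITHUB_PERSONAL_ACCESS_TOKEN": "pat123", "OAUTH_APP_TOKEN": "oauth456"}
```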
## Infrastructure as Code (IaC)
Prowler's Infrastructure as Code (IaC) provider enables you to scan local or remote infrastructure code for security and compliance issues using [Checkov](https://www.checkov.io/). This provider supports a wide range of IaC frameworks, allowing you to assess your code before deployment.
```console
# Scan a directory for IaC files
prowler iac --scan-path ./my-iac-directory
# Scan a remote GitHub repository (public or private)
prowler iac --scan-repository-url https://github.com/user/repo.git
# Authenticate to a private repo with GitHub username and PAT
prowler iac --scan-repository-url https://github.com/user/repo.git \
--github-username <username> --personal-access-token <token>
# Authenticate to a private repo with OAuth App Token
prowler iac --scan-repository-url https://github.com/user/repo.git \
--oauth-app-token <oauth_token>
# Specify frameworks to scan (default: all)
prowler iac --scan-path ./my-iac-directory --frameworks terraform kubernetes
# Exclude specific paths
prowler iac --scan-path ./my-iac-directory --exclude-path ./my-iac-directory/test,./my-iac-directory/examples
```
???+ note
- `--scan-path` and `--scan-repository-url` are mutually exclusive; only one can be specified at a time.
- For remote repository scans, authentication can be provided via CLI flags or environment variables (`GITHUB_OAUTH_APP_TOKEN`, `GITHUB_USERNAME`, `GITHUB_PERSONAL_ACCESS_TOKEN`). CLI flags take precedence.
- The IaC provider does not require cloud authentication for local scans.
- It is ideal for CI/CD pipelines and local development environments.
- For more details on supported frameworks and rules, see the [Checkov documentation](https://www.checkov.io/1.Welcome/Quick%20Start.html)
See more details about IaC scanning in the [IaC Tutorial](../tutorials/iac/getting-started-iac.md) section.
# What is Prowler?
**Prowler** is the open source cloud security platform trusted by thousands to **automate security and compliance** in any cloud environment. With hundreds of ready-to-use checks and compliance frameworks, Prowler delivers real-time, customizable monitoring and seamless integrations, making cloud security simple, scalable, and cost-effective for organizations of any size.
The officially supported providers right now are:
- **AWS**
- **Azure**
- **GCP**
- **Kubernetes**
- **M365**
- **Github**
- **IaC**
Unofficially, Prowler also supports NHN Cloud.
Prowler supports **auditing, incident response, continuous monitoring, hardening, forensic readiness, and remediation**.
### Prowler Components
- **Prowler CLI** (Command Line Interface), also known as **Prowler Open Source**.
- **Prowler Cloud**: a managed service built on top of Prowler CLI.
More information: [Prowler Cloud](https://prowler.com)
## Prowler App
![Prowler App](img/overview.png)
Prowler App is a web application that simplifies running Prowler. It provides:
- A **user-friendly interface** for configuring and executing scans.
- A dashboard to **view results** and manage **security findings**.
### Installation Guide
Refer to the [Quick Start](#prowler-app-installation) section for installation steps.
## Prowler CLI
```console
prowler <provider>
```
![Prowler CLI Execution](img/short-display.png)
## Prowler Dashboard
```console
prowler dashboard
```
![Prowler Dashboard](img/dashboard.png)
Prowler includes hundreds of security controls aligned with widely recognized industry frameworks and standards, including:
- CIS Benchmarks (AWS, Azure, Microsoft 365, Kubernetes, GitHub)
- NIST SP 800-53 (rev. 4 and 5) and NIST SP 800-171
- NIST Cybersecurity Framework (CSF)
- CISA Guidelines
- FedRAMP Low & Moderate
- PCI DSS v3.2.1 and v4.0
- ISO/IEC 27001:2013 and 2022
- SOC 2
- GDPR (General Data Protection Regulation)
- HIPAA (Health Insurance Portability and Accountability Act)
- FFIEC (Federal Financial Institutions Examination Council)
- ENS RD2022 (Spanish National Security Framework)
- GxP 21 CFR Part 11 and EU Annex 11
- RBI Cybersecurity Framework (Reserve Bank of India)
- KISA ISMS-P (Korean Information Security Management System)
- MITRE ATT&CK
- AWS Well-Architected Framework (Security & Reliability Pillars)
- AWS Foundational Technical Review (FTR)
- Microsoft NIS2 Directive (EU)
- Custom threat scoring frameworks (prowler_threatscore)
- Custom security frameworks for enterprise needs
## Quick Start
### Prowler App Installation
Prowler App supports multiple installation methods based on your environment.
Refer to the [Prowler App Tutorial](tutorials/prowler-app.md) for detailed usage instructions.
=== "Docker Compose"
_Requirements_:
* `Docker Compose` installed: https://docs.docker.com/compose/install/.
_Commands_:
``` bash
curl -LO https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/master/docker-compose.yml
curl -LO https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/master/.env
docker compose up -d
```
> Containers are built for `linux/amd64`. If your workstation's architecture is different, please set `DOCKER_DEFAULT_PLATFORM=linux/amd64` in your environment or use the `--platform linux/amd64` flag in the docker command.
> Enjoy Prowler App at http://localhost:3000 by signing up with your email and password.
???+ note
You can change the environment variables in the `.env` file. Note that it is not recommended to use the default values in production environments.
???+ note
There is a development mode available, you can use the file https://github.com/prowler-cloud/prowler/blob/master/docker-compose-dev.yml to run the app in development mode.
???+ warning
Google and GitHub authentication is only available in [Prowler Cloud](https://prowler.com).
=== "GitHub"
_Requirements_:
* `git` installed.
* `poetry` installed: [poetry installation](https://python-poetry.org/docs/#installation).
* `npm` installed: [npm installation](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).
* `Docker Compose` installed: https://docs.docker.com/compose/install/.
???+ warning
Make sure to have `api/.env` and `ui/.env.local` files with the required environment variables. You can find the required environment variables in the [`api/.env.template`](https://github.com/prowler-cloud/prowler/blob/master/api/.env.example) and [`ui/.env.template`](https://github.com/prowler-cloud/prowler/blob/master/ui/.env.template) files.
_Commands to run the API_:
``` bash
git clone https://github.com/prowler-cloud/prowler
cd prowler/api
poetry install
eval $(poetry env activate)
set -a
source .env
docker compose up postgres valkey -d
cd src/backend
python manage.py migrate --database admin
gunicorn -c config/guniconf.py config.wsgi:application
```
???+ important
Starting from Poetry v2.0.0, `poetry shell` has been deprecated in favor of `poetry env activate`.
If your poetry version is below 2.0.0 you must keep using `poetry shell` to activate your environment.
In case you have any doubts, consult the Poetry environment activation guide: https://python-poetry.org/docs/managing-environments/#activating-the-environment
> Now, you can access the API documentation at http://localhost:8080/api/v1/docs.
_Commands to run the API Worker_:
``` bash
git clone https://github.com/prowler-cloud/prowler
cd prowler/api
poetry install
eval $(poetry env activate)
set -a
source .env
cd src/backend
python -m celery -A config.celery worker -l info -E
```
_Commands to run the API Scheduler_:
``` bash
git clone https://github.com/prowler-cloud/prowler
cd prowler/api
poetry install
eval $(poetry env activate)
set -a
source .env
cd src/backend
python -m celery -A config.celery beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
```
_Commands to run the UI_:
``` bash
git clone https://github.com/prowler-cloud/prowler
cd prowler/ui
npm install
npm run build
npm start
```
> Enjoy Prowler App at http://localhost:3000 by signing up with your email and password.
???+ warning
Google and GitHub authentication is only available in [Prowler Cloud](https://prowler.com).
### Prowler CLI Installation
Prowler is available as a project in [PyPI](https://pypi.org/project/prowler/). Consequently, it can be installed as a Python package with `Python >= 3.9, <= 3.12`:
=== "pipx"
[pipx](https://pipx.pypa.io/stable/) is a tool to install Python applications in isolated environments. It is recommended to use `pipx` for a global installation.
_Requirements_:
* `Python >= 3.9, <= 3.12`
* `pipx` installed: [pipx installation](https://pipx.pypa.io/stable/installation/).
* AWS, GCP, Azure and/or Kubernetes credentials
_Commands_:
``` bash
pipx install prowler
prowler -v
```
To upgrade Prowler to the latest version, run:
``` bash
pipx upgrade prowler
```
=== "pip"
???+ warning
This method is not recommended because it will modify the environment in which it is installed. Consider using [pipx](https://docs.prowler.com/projects/prowler-open-source/en/latest/#__tabbed_1_1) for a global installation.
_Requirements_:
* `Python >= 3.9, <= 3.12`
* `Python pip >= 21.0.0`
* AWS, GCP, Azure, M365 and/or Kubernetes credentials
_Commands_:
``` bash
pip install prowler
prowler -v
```
To upgrade Prowler to the latest version, run:
``` bash
pip install --upgrade prowler
```
=== "Docker"
_Requirements_:
* Have `docker` installed: https://docs.docker.com/get-docker/.
* In the command below, change `-v` to your local directory path in order to access the reports.
* AWS, GCP, Azure and/or Kubernetes credentials
> Containers are built for `linux/amd64`. If your workstation's architecture is different, please set `DOCKER_DEFAULT_PLATFORM=linux/amd64` in your environment or use the `--platform linux/amd64` flag in the docker command.
_Commands_:
``` bash
docker run -ti --rm -v /your/local/dir/prowler-output:/home/prowler/output \
--name prowler \
--env AWS_ACCESS_KEY_ID \
--env AWS_SECRET_ACCESS_KEY \
--env AWS_SESSION_TOKEN toniblyx/prowler:latest
```
=== "GitHub"
_Requirements for Developers_:
* `git`
* `poetry` installed: [poetry installation](https://python-poetry.org/docs/#installation).
* AWS, GCP, Azure and/or Kubernetes credentials
_Commands_:
```
git clone https://github.com/prowler-cloud/prowler
cd prowler
poetry install
poetry run python prowler-cli.py -v
```
???+ note
If you want to clone Prowler from Windows, use `git config core.longpaths true` to allow long file paths.
=== "Amazon Linux 2"
_Requirements_:
* `Python >= 3.9, <= 3.12`
* AWS, GCP, Azure and/or Kubernetes credentials
_Commands_:
```
python3 -m pip install --user pipx
python3 -m pipx ensurepath
pipx install prowler
prowler -v
```
=== "Ubuntu"
_Requirements_:
* `Ubuntu 23.04` or above, if you are using an older version of Ubuntu check [pipx installation](https://docs.prowler.com/projects/prowler-open-source/en/latest/#__tabbed_1_1) and ensure you have `Python >= 3.9, <= 3.12`.
* `Python >= 3.9, <= 3.12`
* AWS, GCP, Azure and/or Kubernetes credentials
_Commands_:
``` bash
sudo apt update
sudo apt install pipx
pipx ensurepath
pipx install prowler
prowler -v
```
=== "Brew"
_Requirements_:
* `Brew` installed in your Mac or Linux
* AWS, GCP, Azure and/or Kubernetes credentials
_Commands_:
``` bash
brew install prowler
prowler -v
```
=== "AWS CloudShell"
After the migration of AWS CloudShell from Amazon Linux 2 to Amazon Linux 2023 [[1]](https://aws.amazon.com/about-aws/whats-new/2023/12/aws-cloudshell-migrated-al2023/) [[2]](https://docs.aws.amazon.com/cloudshell/latest/userguide/cloudshell-AL2023-migration.html), there is no longer a need to manually compile Python 3.9 as it is already included in AL2023. Prowler can thus be easily installed following the generic method of installation via pip. Follow the steps below to successfully execute Prowler v4 in AWS CloudShell:
_Requirements_:
* Open AWS CloudShell `bash`.
_Commands_:
```bash
sudo bash
adduser prowler
su prowler
python3 -m pip install --user pipx
python3 -m pipx ensurepath
pipx install prowler
cd /tmp
prowler aws
```
???+ note
To download the results from AWS CloudShell, select Actions -> Download File and add the full path of each file. For the CSV file it will be something like `/tmp/output/prowler-output-123456789012-20221220191331.csv`
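The output filename encodes the account ID and a scan timestamp. As an illustration (this is plain shell parameter expansion, not a Prowler feature), both fields can be recovered from the example filename above:

```shell
# Parse the pattern prowler-output-<account id>-<timestamp>.csv
f="prowler-output-123456789012-20221220191331.csv"
base="${f%.csv}"            # drop the extension
timestamp="${base##*-}"     # last hyphen-separated field
account="${base%-*}"        # everything before the timestamp...
account="${account##*-}"    # ...then its last field
echo "$account $timestamp"  # → 123456789012 20221220191331
```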
=== "Azure CloudShell"
_Requirements_:
* Open Azure CloudShell `bash`.
_Commands_:
```bash
python3 -m pip install --user pipx
python3 -m pipx ensurepath
pipx install prowler
cd /tmp
prowler azure --az-cli-auth
```
### Prowler App Update
You have two options to upgrade your Prowler App installation:
#### Option 1: Change env file with the following values
Edit your `.env` file and change the version values:
```env
PROWLER_UI_VERSION="5.9.0"
PROWLER_API_VERSION="5.9.0"
```
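For scripted upgrades, both version pins can be bumped in one `sed` pass. This is a sketch using a stand-in `.env` file; the version numbers are illustrative:

```shell
# Demo: a minimal .env standing in for the real file.
cat > .env <<'EOF'
PROWLER_UI_VERSION="5.8.0"
PROWLER_API_VERSION="5.8.0"
EOF

NEW_VERSION="5.9.0"
# Rewrite both pins in place; the .bak copy makes rollback trivial.
sed -i.bak -E "s/^(PROWLER_(UI|API)_VERSION)=\"[^\"]*\"/\1=\"${NEW_VERSION}\"/" .env
cat .env
```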
#### Option 2: Run the following command
```bash
docker compose pull --policy always
```
The `--policy always` flag ensures that Docker pulls the latest images even if they already exist locally.
???+ note "What Gets Preserved During Upgrade"
Everything is preserved; nothing is deleted during the update.
#### Troubleshooting
If containers don't start, check logs for errors:
```bash
# Check logs for errors
docker compose logs
# Verify image versions
docker images | grep prowler
```
If you encounter issues, you can roll back to the previous version by reverting the `.env` file to your previous version and running:
```bash
docker compose pull
docker compose up -d
```
## Prowler container versions
The available versions of Prowler CLI are the following:
- `latest`: in sync with `master` branch (please note that it is not a stable version)
- `v4-latest`: in sync with `v4` branch (please note that it is not a stable version)
- `v3-latest`: in sync with `v3` branch (please note that it is not a stable version)
- `<x.y.z>` (release): you can find the releases [here](https://github.com/prowler-cloud/prowler/releases); those are stable releases.
- `stable`: this tag always points to the latest release.
- `v4-stable`: this tag always points to the latest v4 release.
- `v3-stable`: this tag always points to the latest v3 release.
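As a quick illustration of the tag scheme above, a small helper (not part of Prowler) can classify a tag string into the categories just listed:

```shell
# Illustrative tag classifier, mirroring the categories described above.
classify_tag() {
  case "$1" in
    latest|v[0-9]-latest)  echo "unstable: tracks a branch" ;;
    stable|v[0-9]-stable)  echo "stable: tracks the latest release" ;;
    [0-9]*.[0-9]*.[0-9]*)  echo "pinned release" ;;
    *)                     echo "unknown tag" ;;
  esac
}

classify_tag v4-latest  # → unstable: tracks a branch
classify_tag 5.9.0      # → pinned release
```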
The container images are available here:
- Prowler CLI:
- [DockerHub](https://hub.docker.com/r/toniblyx/prowler/tags)
- [AWS Public ECR](https://gallery.ecr.aws/prowler-cloud/prowler)
- Prowler App:
- [DockerHub - Prowler UI](https://hub.docker.com/r/prowlercloud/prowler-ui/tags)
- [DockerHub - Prowler API](https://hub.docker.com/r/prowlercloud/prowler-api/tags)
## High level architecture
You can run Prowler from your workstation, a Kubernetes Job, a Google Compute Engine instance, an Azure VM, an EC2 instance, Fargate or any other container, CloudShell and many more.
![Architecture](img/architecture.png)
### Prowler App
The **Prowler App** consists of three main components:
- **Prowler UI**: A user-friendly web interface for running Prowler and viewing results, powered by Next.js.
- **Prowler API**: The backend API that executes Prowler scans and stores the results, built with Django REST Framework.
- **Prowler SDK**: A Python SDK that integrates with Prowler CLI for advanced functionality.
The app leverages the following supporting infrastructure:
- **PostgreSQL**: Used for persistent storage of scan results.
- **Celery Workers**: Facilitate asynchronous execution of Prowler scans.
- **Valkey**: An in-memory database serving as a message broker for the Celery workers.
![Prowler App Architecture](img/prowler-app-architecture.png)
## Deprecations from v3
The following are the deprecations carried out from v3.
### General
- `Allowlist` is now called `Mutelist`.
- The `--quiet` option has been deprecated. From now on, use the `--status` flag to select the finding statuses you want to get: PASS, FAIL or MANUAL.
- All findings with `INFO` status are now `MANUAL`.
- The CSV output format is common for all providers.
Some output formats are now deprecated:
- The native JSON is replaced by the JSON [OCSF](https://schema.ocsf.io/) v1.1.0 format, common to all providers.
### AWS
- The AWS flag `--sts-endpoint-region` is deprecated since AWS STS regional tokens are now used.
- To send only FAILS to AWS Security Hub, now you must use either `--send-sh-only-fails` or `--security-hub --status FAIL`.
## Basic Usage
### Prowler App
#### **Access the App**
Go to [http://localhost:3000](http://localhost:3000) after installing the app (see [Quick Start](#prowler-app-installation)). Sign up with your email and password.
<img src="img/sign-up-button.png" alt="Sign Up Button" width="320"/>
<img src="img/sign-up.png" alt="Sign Up" width="285"/>
???+ note "User creation and default tenant behavior"
When creating a new user, the behavior depends on whether an invitation is provided:
- **Without an invitation**:
- A new tenant is automatically created.
- The new user is assigned to this tenant.
- A set of **RBAC admin permissions** is generated and assigned to the user for the newly-created tenant.
- **With an invitation**: The user is added to the specified tenant with the permissions defined in the invitation.
This mechanism ensures that the first user in a newly created tenant has administrative permissions within that tenant.
#### Log In
Log in using your **email and password** to access the Prowler App.
<img src="img/log-in.png" alt="Log In" width="285"/>
#### Add a Cloud Provider
To configure a cloud provider for scanning:
1. Navigate to `Settings > Cloud Providers` and click `Add Account`.
2. Select the cloud provider you wish to scan (**AWS, GCP, Azure, Kubernetes, M365**).
3. Enter the provider's identifier (Optional: Add an alias):
- **AWS**: Account ID
- **GCP**: Project ID
- **Azure**: Subscription ID
- **Kubernetes**: Cluster ID
- **M365**: Domain ID
4. Follow the guided instructions to add and authenticate your credentials.
#### Start a Scan
Once credentials are successfully added and validated, Prowler initiates a scan of your cloud environment.
Click `Go to Scans` to monitor progress.
#### View Results
While the scan is running, you can review findings in the following sections:
- **Overview**: provides a high-level summary of your scans.
<img src="img/overview.png" alt="Overview" width="700"/>
- **Compliance**: displays compliance insights based on security frameworks.
<img src="img/compliance.png" alt="Compliance" width="700"/>
> For detailed usage instructions, refer to the [Prowler App Guide](tutorials/prowler-app.md).
???+ note
Prowler will automatically scan all configured providers every **24 hours**, ensuring your cloud environment stays continuously monitored.
### Prowler CLI
#### Running Prowler
To run Prowler, you will need to specify the provider (e.g. `aws`, `gcp`, `azure`, `m365`, `github` or `kubernetes`):
???+ note
If no provider is specified, AWS is used by default for backward compatibility with Prowler v2.
```console
prowler <provider>
```
![Prowler Execution](img/short-display.png)
???+ note
Running the `prowler` command without options uses environment variable credentials. Refer to the [Requirements](./getting-started/requirements.md) section for credential configuration details.
#### Verbose Output
By default, minimal output is displayed. If you prefer the former verbose output, use `--verbose` to see more information while Prowler is running.
#### Report Generation
By default, Prowler generates CSV, JSON-OCSF, and HTML reports. To generate a JSON-ASFF report (used by AWS Security Hub), specify `-M` or `--output-modes`:
```console
prowler <provider> -M csv json-asff json-ocsf html
```
The HTML report is saved in the output directory, alongside other reports. It will look like this:
![Prowler Execution](img/html-output.png)
#### Listing Available Checks and Services
To view all available checks or services within a provider, use `-l`/`--list-checks` or `--list-services`:
```console
prowler <provider> --list-checks
prowler <provider> --list-services
```
#### Running Specific Checks or Services
Execute specific checks or services using `-c`/`--checks` or `-s`/`--services`:
```console
prowler azure --checks storage_blob_public_access_level_is_disabled
prowler aws --services s3 ec2
prowler gcp --services iam compute
prowler kubernetes --services etcd apiserver
```
#### Excluding Checks and Services
Checks and services can be excluded with `-e`/`--excluded-checks` or `--excluded-services`:
```console
prowler aws --excluded-checks s3_bucket_public_access
prowler azure --excluded-services defender iam
prowler gcp --excluded-services kms
prowler kubernetes --excluded-services controllermanager
```
#### Additional Options
Explore more advanced time-saving execution methods in the [Miscellaneous](tutorials/misc.md) section.
To access the help menu and view all available options, use `-h`/`--help`:
```console
prowler --help
```
#### AWS
Use a custom AWS profile with `-p`/`--profile` and/or the AWS regions you want to audit with `-f`/`--filter-region`:
```console
prowler aws --profile custom-profile -f us-east-1 eu-south-2
```
???+ note
By default, `prowler` will scan all AWS regions.
See more details about AWS Authentication in the [Requirements](getting-started/requirements.md#aws) section.
#### Azure
Azure requires specifying the auth method:
```console
# To use service principal authentication
prowler azure --sp-env-auth
# To use az cli authentication
prowler azure --az-cli-auth
# To use browser authentication
prowler azure --browser-auth --tenant-id "XXXXXXXX"
# To use managed identity auth
prowler azure --managed-identity-auth
```
See more details about Azure Authentication in [Requirements](getting-started/requirements.md#azure)
By default, Prowler scans all the subscriptions for which it has permissions. To scan one or more specific subscriptions, use the following flag (shown here with az cli auth):
```console
prowler azure --az-cli-auth --subscription-ids <subscription ID 1> <subscription ID 2> ... <subscription ID N>
```
#### Google Cloud
- **User Account Credentials**
By default, Prowler uses **User Account credentials**. You can configure your account using:
- `gcloud init`: set up a new account.
- `gcloud config set account <account>`: switch to an existing account.
Once configured, obtain access credentials using: `gcloud auth application-default login`.
- **Service Account Authentication**
Alternatively, you can use Service Account credentials:
Generate and download Service Account keys in JSON format. Refer to [Google IAM documentation](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) for details.
Provide the key file location using this argument:
```console
prowler gcp --credentials-file path
```
- **Scanning Specific GCP Projects**
By default, Prowler scans all accessible GCP projects. To scan specific projects, use the `--project-ids` flag:
```console
prowler gcp --project-ids <Project ID 1> <Project ID 2> ... <Project ID N>
```
- **GCP Retry Configuration**
To configure the maximum number of retry attempts for Google Cloud SDK API calls, use the `--gcp-retries-max-attempts` flag:
```console
prowler gcp --gcp-retries-max-attempts 5
```
This is useful when experiencing quota exceeded errors (HTTP 429) to increase the number of automatic retry attempts.
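Prowler handles these retries internally via the SDK. For intuition only, the exponential-backoff pattern behind such retry flags looks roughly like this generic shell sketch (not Prowler internals; `flaky` is a stand-in for a rate-limited call):

```shell
# Generic retry loop with exponential backoff (illustrative only).
retry() {
  local max="$1"; shift
  local attempt=1 delay=1
  while true; do
    "$@" && return 0                       # success: stop retrying
    [ "$attempt" -ge "$max" ] && return 1  # give up after max attempts
    sleep "$delay"
    delay=$((delay * 2))                   # double the wait each time
    attempt=$((attempt + 1))
  done
}

# Example: a command that only succeeds on its 3rd invocation.
tries=0
flaky() { tries=$((tries + 1)); [ "$tries" -ge 3 ]; }
retry 5 flaky && echo "succeeded after $tries attempts"
```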
#### Kubernetes
Prowler enables security scanning of Kubernetes clusters, supporting both **in-cluster** and **external** execution.
- **Non In-Cluster Execution**
```console
prowler kubernetes --kubeconfig-file path
```
???+ note
If no `--kubeconfig-file` is provided, Prowler will use the default KubeConfig file location (`~/.kube/config`).
- **In-Cluster Execution**
To run Prowler inside the cluster, apply the provided YAML configuration to deploy a job in a new namespace:
```console
kubectl apply -f kubernetes/prowler-sa.yaml
kubectl apply -f kubernetes/job.yaml
kubectl apply -f kubernetes/prowler-role.yaml
kubectl apply -f kubernetes/prowler-rolebinding.yaml
kubectl get pods --namespace prowler-ns --> prowler-XXXXX
kubectl logs prowler-XXXXX --namespace prowler-ns
```
???+ note
By default, Prowler scans all namespaces in the active Kubernetes context. Use the `--context` flag to specify the context to be scanned and `--namespaces` to restrict scanning to specific namespaces.
#### Microsoft 365
Microsoft 365 requires specifying the auth method:
```console
# To use service principal authentication for MSGraph and PowerShell modules
prowler m365 --sp-env-auth
# To use both service principal (for MSGraph) and user credentials (for PowerShell modules)
prowler m365 --env-auth
# To use az cli authentication
prowler m365 --az-cli-auth
# To use browser authentication
prowler m365 --browser-auth --tenant-id "XXXXXXXX"
```
See more details about M365 Authentication in the [Requirements](getting-started/requirements.md#microsoft-365) section.
#### GitHub
Prowler enables security scanning of your **GitHub account**, including **Repositories**, **Organizations** and **Applications**.
- **Supported Authentication Methods**
Authenticate using one of the following methods:
```console
# Personal Access Token (PAT):
prowler github --personal-access-token pat
# OAuth App Token:
prowler github --oauth-app-token oauth_token
# GitHub App Credentials:
prowler github --github-app-id app_id --github-app-key app_key
```
???+ note
If no login method is explicitly provided, Prowler will automatically attempt to authenticate using environment variables in the following order of precedence:
1. `GITHUB_PERSONAL_ACCESS_TOKEN`
2. `OAUTH_APP_TOKEN`
3. `GITHUB_APP_ID` and `GITHUB_APP_KEY`
#### Infrastructure as Code (IaC)
Prowler's Infrastructure as Code (IaC) provider enables you to scan local or remote infrastructure code for security and compliance issues using [Checkov](https://www.checkov.io/). This provider supports a wide range of IaC frameworks, allowing you to assess your code before deployment.
```console
# Scan a directory for IaC files
prowler iac --scan-path ./my-iac-directory
# Scan a remote GitHub repository (public or private)
prowler iac --scan-repository-url https://github.com/user/repo.git
# Authenticate to a private repo with GitHub username and PAT
prowler iac --scan-repository-url https://github.com/user/repo.git \
--github-username <username> --personal-access-token <token>
# Authenticate to a private repo with OAuth App Token
prowler iac --scan-repository-url https://github.com/user/repo.git \
--oauth-app-token <oauth_token>
# Specify frameworks to scan (default: all)
prowler iac --scan-path ./my-iac-directory --frameworks terraform kubernetes
# Exclude specific paths
prowler iac --scan-path ./my-iac-directory --exclude-path ./my-iac-directory/test,./my-iac-directory/examples
```
???+ note
- `--scan-path` and `--scan-repository-url` are mutually exclusive; only one can be specified at a time.
- For remote repository scans, authentication can be provided via CLI flags or environment variables (`GITHUB_OAUTH_APP_TOKEN`, `GITHUB_USERNAME`, `GITHUB_PERSONAL_ACCESS_TOKEN`). CLI flags take precedence.
- The IaC provider does not require cloud authentication for local scans.
- It is ideal for CI/CD pipelines and local development environments.
- For more details on supported frameworks and rules, see the [Checkov documentation](https://www.checkov.io/1.Welcome/Quick%20Start.html)
See more details about IaC scanning in the [IaC Tutorial](tutorials/iac/getting-started-iac.md) section.
## Prowler v2 Documentation
For **Prowler v2 Documentation**, refer to the [official repository](https://github.com/prowler-cloud/prowler/blob/8818f47333a0c1c1a457453c87af0ea5b89a385f/README.md).
- **Prowler CLI** (Command Line Interface)
- **Prowler App** (Web Application)
- [**Prowler Cloud**](https://cloud.prowler.com): a managed service built on top of Prowler App.
- [**Prowler Hub**](https://hub.prowler.com): a public library of versioned checks, cloud service artifacts, and compliance frameworks.
### Installation
Prowler App supports multiple installation methods based on your environment.
Refer to the [Prowler App Tutorial](../tutorials/prowler-app.md) for detailed usage instructions.
=== "Docker Compose"
_Requirements_:
* `Docker Compose` installed: https://docs.docker.com/compose/install/.
_Commands_:
``` bash
curl -LO https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/master/docker-compose.yml
curl -LO https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/master/.env
docker compose up -d
```
> Containers are built for `linux/amd64`. If your workstation's architecture is different, please set `DOCKER_DEFAULT_PLATFORM=linux/amd64` in your environment or use the `--platform linux/amd64` flag in the docker command.
> Enjoy Prowler App at http://localhost:3000 by signing up with your email and password.
???+ note
You can change the environment variables in the `.env` file. Note that it is not recommended to use the default values in production environments.
???+ note
There is a development mode available, you can use the file https://github.com/prowler-cloud/prowler/blob/master/docker-compose-dev.yml to run the app in development mode.
???+ warning
Google and GitHub authentication is only available in [Prowler Cloud](https://prowler.com).
=== "GitHub"
_Requirements_:
* `git` installed.
* `poetry` installed: [poetry installation](https://python-poetry.org/docs/#installation).
* `npm` installed: [npm installation](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).
* `Docker Compose` installed: https://docs.docker.com/compose/install/.
???+ warning
Make sure to have `api/.env` and `ui/.env.local` files with the required environment variables. You can find the required environment variables in the [`api/.env.template`](https://github.com/prowler-cloud/prowler/blob/master/api/.env.example) and [`ui/.env.template`](https://github.com/prowler-cloud/prowler/blob/master/ui/.env.template) files.
_Commands to run the API_:
``` bash
git clone https://github.com/prowler-cloud/prowler
cd prowler/api
poetry install
eval $(poetry env activate)
set -a
source .env
docker compose up postgres valkey -d
cd src/backend
python manage.py migrate --database admin
gunicorn -c config/guniconf.py config.wsgi:application
```
???+ important
Starting from Poetry v2.0.0, `poetry shell` has been deprecated in favor of `poetry env activate`.
If your poetry version is below 2.0.0 you must keep using `poetry shell` to activate your environment.
In case you have any doubts, consult the Poetry environment activation guide: https://python-poetry.org/docs/managing-environments/#activating-the-environment
> Now, you can access the API documentation at http://localhost:8080/api/v1/docs.
_Commands to run the API Worker_:
``` bash
git clone https://github.com/prowler-cloud/prowler
cd prowler/api
poetry install
eval $(poetry env activate)
set -a
source .env
cd src/backend
python -m celery -A config.celery worker -l info -E
```
_Commands to run the API Scheduler_:
``` bash
git clone https://github.com/prowler-cloud/prowler
cd prowler/api
poetry install
eval $(poetry env activate)
set -a
source .env
cd src/backend
python -m celery -A config.celery beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
```
_Commands to run the UI_:
``` bash
git clone https://github.com/prowler-cloud/prowler
cd prowler/ui
npm install
npm run build
npm start
```
> Enjoy Prowler App at http://localhost:3000 by signing up with your email and password.
???+ warning
Google and GitHub authentication is only available in [Prowler Cloud](https://prowler.com).
### Update Prowler App
Upgrade your Prowler App installation using one of two options:
#### Option 1: Update Environment File
Edit the `.env` file and change version values:
```env
PROWLER_UI_VERSION="5.9.0"
PROWLER_API_VERSION="5.9.0"
```
#### Option 2: Use Docker Compose Pull
```bash
docker compose pull --policy always
```
The `--policy always` flag ensures that Docker pulls the latest images even if they already exist locally.
???+ note "What Gets Preserved During Upgrade"
Everything is preserved; nothing is deleted during the update.
### Troubleshooting
If containers don't start, check logs for errors:
```bash
# Check logs for errors
docker compose logs
# Verify image versions
docker images | grep prowler
```
If you encounter issues, you can roll back to the previous version by reverting the `.env` file to your previous version and running:
```bash
docker compose pull
docker compose up -d
```
### Container versions
The available versions of Prowler CLI are the following:
- `latest`: in sync with `master` branch (please note that it is not a stable version)
- `v4-latest`: in sync with `v4` branch (please note that it is not a stable version)
- `v3-latest`: in sync with `v3` branch (please note that it is not a stable version)
- `<x.y.z>` (release): you can find the releases [here](https://github.com/prowler-cloud/prowler/releases); those are stable releases.
- `stable`: this tag always points to the latest release.
- `v4-stable`: this tag always points to the latest v4 release.
- `v3-stable`: this tag always points to the latest v3 release.
The container images are available here:
- Prowler App:
- [DockerHub - Prowler UI](https://hub.docker.com/r/prowlercloud/prowler-ui/tags)
- [DockerHub - Prowler API](https://hub.docker.com/r/prowlercloud/prowler-api/tags)
## Installation
Prowler is available as a project in [PyPI](https://pypi.org/project/prowler/). Install it as a Python package with `Python >= 3.9, <= 3.12`:
=== "pipx"
[pipx](https://pipx.pypa.io/stable/) installs Python applications in isolated environments. Use `pipx` for global installation.
_Requirements_:
* `Python >= 3.9, <= 3.12`
* `pipx` installed: [pipx installation](https://pipx.pypa.io/stable/installation/).
* AWS, GCP, Azure and/or Kubernetes credentials
_Commands_:
``` bash
pipx install prowler
prowler -v
```
Upgrade Prowler to the latest version:
``` bash
pipx upgrade prowler
```
=== "pip"
???+ warning
This method modifies the chosen installation environment. Consider using [pipx](https://docs.prowler.com/projects/prowler-open-source/en/latest/#__tabbed_1_1) for global installation.
_Requirements_:
* `Python >= 3.9, <= 3.12`
* `Python pip >= 21.0.0`
* AWS, GCP, Azure, M365 and/or Kubernetes credentials
_Commands_:
``` bash
pip install prowler
prowler -v
```
Upgrade Prowler to the latest version:
``` bash
pip install --upgrade prowler
```
=== "Docker"
_Requirements_:
* Have `docker` installed: https://docs.docker.com/get-docker/.
* In the command below, change `-v` to your local directory path in order to access the reports.
* AWS, GCP, Azure and/or Kubernetes credentials
> Containers are built for `linux/amd64`. If your workstation's architecture is different, please set `DOCKER_DEFAULT_PLATFORM=linux/amd64` in your environment or use the `--platform linux/amd64` flag in the docker command.
_Commands_:
``` bash
docker run -ti --rm \
  -v /your/local/dir/prowler-output:/home/prowler/output \
  --name prowler \
  --env AWS_ACCESS_KEY_ID \
  --env AWS_SECRET_ACCESS_KEY \
  --env AWS_SESSION_TOKEN \
  toniblyx/prowler:latest
```
=== "GitHub"
_Requirements for Developers_:
* `git`
* `poetry` installed: [poetry installation](https://python-poetry.org/docs/#installation).
* AWS, GCP, Azure and/or Kubernetes credentials
_Commands_:
```bash
git clone https://github.com/prowler-cloud/prowler
cd prowler
poetry install
poetry run python prowler-cli.py -v
```
???+ note
If you want to clone Prowler from Windows, use `git config core.longpaths true` to allow long file paths.
=== "Amazon Linux 2"
_Requirements_:
* `Python >= 3.9, <= 3.12`
* AWS, GCP, Azure and/or Kubernetes credentials
_Commands_:
```bash
python3 -m pip install --user pipx
python3 -m pipx ensurepath
pipx install prowler
prowler -v
```
=== "Ubuntu"
_Requirements_:
* `Ubuntu 23.04` or above; if you are using an older version of Ubuntu, check [pipx installation](https://docs.prowler.com/projects/prowler-open-source/en/latest/#__tabbed_1_1) and ensure you have `Python >= 3.9, <= 3.12`.
* `Python >= 3.9, <= 3.12`
* AWS, GCP, Azure and/or Kubernetes credentials
_Commands_:
``` bash
sudo apt update
sudo apt install pipx
pipx ensurepath
pipx install prowler
prowler -v
```
=== "Brew"
_Requirements_:
* `Brew` installed in your Mac or Linux
* AWS, GCP, Azure and/or Kubernetes credentials
_Commands_:
``` bash
brew install prowler
prowler -v
```
=== "AWS CloudShell"
After the migration of AWS CloudShell from Amazon Linux 2 to Amazon Linux 2023 [[1]](https://aws.amazon.com/about-aws/whats-new/2023/12/aws-cloudshell-migrated-al2023/) [[2]](https://docs.aws.amazon.com/cloudshell/latest/userguide/cloudshell-AL2023-migration.html), there is no longer a need to manually compile Python 3.9, as it is already included in AL2023. Prowler can therefore be installed with the generic pip method. Follow the steps below to run Prowler v4 in AWS CloudShell:
_Requirements_:
* Open AWS CloudShell `bash`.
_Commands_:
```bash
sudo bash
adduser prowler
su prowler
python3 -m pip install --user pipx
python3 -m pipx ensurepath
pipx install prowler
cd /tmp
prowler aws
```
???+ note
To download the results from AWS CloudShell, select Actions -> Download File and enter the full path of each file. For the CSV file this will be something like `/tmp/output/prowler-output-123456789012-20221220191331.csv`.
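Alternatively, the whole output directory can be bundled into a single archive first, so only one file needs to be downloaded. This is a sketch assuming the results were written under `/tmp/output` as in the steps above:

```shell
# Bundle all Prowler reports into one archive, then download
# /tmp/prowler-output.tar.gz via Actions -> Download File.
cd /tmp
tar -czf prowler-output.tar.gz output/
ls -lh prowler-output.tar.gz
```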
=== "Azure CloudShell"
_Requirements_:
* Open Azure CloudShell `bash`.
_Commands_:
```bash
python3 -m pip install --user pipx
python3 -m pipx ensurepath
pipx install prowler
cd /tmp
prowler azure --az-cli-auth
```
## Container versions
The available versions of Prowler CLI are the following:
- `latest`: in sync with the `master` branch (not a stable version)
- `v4-latest`: in sync with the `v4` branch (not a stable version)
- `v3-latest`: in sync with the `v3` branch (not a stable version)
- `<x.y.z>`: stable release versions, listed on the [releases page](https://github.com/prowler-cloud/prowler/releases)
- `stable`: always points to the latest release
- `v4-stable`: always points to the latest v4 release
- `v3-stable`: always points to the latest v3 release
The container images are available here:
- Prowler CLI:
- [DockerHub](https://hub.docker.com/r/toniblyx/prowler/tags)
- [AWS Public ECR](https://gallery.ecr.aws/prowler-cloud/prowler)
Prowler App is a web application that simplifies running Prowler. It provides:
- **User-friendly interface** for configuring and executing scans
- Dashboard to **view results** and manage **security findings**
![Prowler App](img/overview.png)
## Components
Prowler App consists of three main components:
- **Prowler UI**: User-friendly web interface for running Prowler and viewing results, powered by Next.js
- **Prowler API**: Backend API that executes Prowler scans and stores results, built with Django REST Framework
- **Prowler SDK**: Python SDK that integrates with Prowler CLI for advanced functionality
Supporting infrastructure includes:
- **PostgreSQL**: Persistent storage of scan results
- **Celery Workers**: Asynchronous execution of Prowler scans
- **Valkey**: In-memory database serving as message broker for Celery workers
![Prowler App Architecture](img/prowler-app-architecture.png)
Prowler CLI is a command-line interface for running Prowler scans from the terminal.
```console
prowler <provider>
```
![Prowler CLI Execution](../img/short-display.png)
## Prowler Dashboard
```console
prowler dashboard
```
![Prowler Dashboard](img/dashboard.png)
Prowler includes hundreds of security controls aligned with widely recognized industry frameworks and standards, including:
- CIS Benchmarks (AWS, Azure, Microsoft 365, Kubernetes, GitHub)
- NIST SP 800-53 (rev. 4 and 5) and NIST SP 800-171
- NIST Cybersecurity Framework (CSF)
- CISA Guidelines
- FedRAMP Low & Moderate
- PCI DSS v3.2.1 and v4.0
- ISO/IEC 27001:2013 and 2022
- SOC 2
- GDPR (General Data Protection Regulation)
- HIPAA (Health Insurance Portability and Accountability Act)
- FFIEC (Federal Financial Institutions Examination Council)
- ENS RD2022 (Spanish National Security Framework)
- GxP 21 CFR Part 11 and EU Annex 11
- RBI Cybersecurity Framework (Reserve Bank of India)
- KISA ISMS-P (Korean Information Security Management System)
- MITRE ATT&CK
- AWS Well-Architected Framework (Security & Reliability Pillars)
- AWS Foundational Technical Review (FTR)
- Microsoft NIS2 Directive (EU)
- Custom threat scoring frameworks (prowler_threatscore)
- Custom security frameworks for enterprise needs
The following list includes all the Azure checks with configurable variables:

| Check | Variable | Type |
|-------|----------|------|
| `app_ensure_python_version_is_latest` | `python_latest_version` | String |
| `app_ensure_java_version_is_latest` | `java_latest_version` | String |
| `sqlserver_recommended_minimal_tls_version` | `recommended_minimal_tls_versions` | List of Strings |
| `vm_sufficient_daily_backup_retention_period` | `vm_backup_min_daily_retention_days` | Integer |
| `vm_desired_sku_size` | `desired_vm_sku_sizes` | List of Strings |
| `defender_attack_path_notifications_properly_configured` | `defender_attack_path_minimal_risk_level` | String |
This guide explains how to set up authentication with GitHub for Prowler.
Personal Access Tokens provide the simplest GitHub authentication method and support individual user authentication or testing scenarios.
#### How to Create a Personal Access Token
???+ warning "Classic Tokens Deprecated"
GitHub has deprecated Personal Access Tokens (classic) in favor of fine-grained Personal Access Tokens. We recommend using fine-grained tokens as they provide better security through more granular permissions and resource-specific access control.
#### **Option 1: Create a Fine-Grained Personal Access Token (Recommended)**
1. **Navigate to GitHub Settings**
- Open [GitHub](https://github.com) and sign in
- Click the profile picture in the top right corner
- Select "Settings" from the dropdown menu
2. **Access Developer Settings**
- Scroll down the left sidebar
- Click "Developer settings"
3. **Generate Fine-Grained Token**
- Click "Personal access tokens"
- Select "Fine-grained tokens"
- Click "Generate new token"
4. **Configure Token Settings**
- **Token name**: Give your token a descriptive name (e.g., "Prowler Security Scanner")
- **Expiration**: Set an appropriate expiration date (recommended: 90 days or less)
- **Repository access**: Choose "All repositories" or "Only select repositories" based on your needs
???+ note "Public repositories"
Even if you select 'Only select repositories', the token will have access to the public repositories that you own or are a member of.
5. **Configure Token Permissions**
To enable Prowler functionality, configure the following permissions:
- **Repository permissions:**
- **Contents**: Read-only access
- **Metadata**: Read-only access
- **Pull requests**: Read-only access
- **Security advisories**: Read-only access
- **Statuses**: Read-only access
- **Organization permissions:**
- **Members**: Read-only access
- **Account permissions:**
- **Email addresses**: Read-only access
6. **Copy and Store the Token**
- Copy the generated token immediately (GitHub displays tokens only once)
- Store tokens securely using environment variables
![GitHub Personal Access Token Permissions](./img/github-pat-permissions.png)
#### **Option 2: Create a Classic Personal Access Token (Not Recommended)**
???+ warning "Security Risk"
Classic tokens provide broad permissions that may exceed what Prowler actually needs. Use fine-grained tokens instead for better security.
1. **Navigate to GitHub Settings**
- Open [GitHub](https://github.com) and sign in
- Click the profile picture in the top right corner
- Select "Settings" from the dropdown menu
2. **Access Developer Settings**
- Scroll down the left sidebar
- Click "Developer settings"
3. **Generate Classic Token**
- Click "Personal access tokens"
- Select "Tokens (classic)"
- Click "Generate new token"
4. **Configure Token Permissions**
To enable Prowler functionality, configure the following scopes:
- `repo`: Full control of private repositories (includes `repo:status` and `repo:contents`)
- `read:org`: Read organization and team membership
- `read:user`: Read user profile data
- `read:discussion`: Read discussions
- `security_events`: Access security events (secret scanning and Dependabot alerts)
- `read:enterprise`: Read enterprise data (if using GitHub Enterprise)
5. **Copy and Store the Token**
- Copy the generated token immediately (GitHub displays tokens only once)
# Amazon S3 Integration
**Prowler App** allows automatic export of scan results to Amazon S3 buckets, providing seamless integration with existing data workflows and storage infrastructure. This comprehensive guide demonstrates configuration and management of Amazon S3 integrations to streamline security finding management and reporting.
When enabled and configured, scan results are automatically stored in the configured bucket. Results are provided in `csv`, `html` and `json-ocsf` formats, offering flexibility for custom integrations:
<!-- TODO: remove the comment once the AWS Security Hub integration is completed -->
<!-- - json-asff -->
<!--
???+ note
The `json-asff` file will be only present in your configured Amazon S3 Bucket if you have the AWS Security Hub integration enabled. You can get more information about that integration here. -->
???+ note
Enabling this integration incurs costs in Amazon S3. Refer to [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/) for more information.
The Amazon S3 Integration provides the following capabilities:
- **Automate scan result exports** to designated S3 buckets after each scan
- **Configure separate bucket destinations** for different cloud providers or use cases
- **Customize export paths** within buckets for organized storage
- **Support multiple authentication methods** including IAM roles and static credentials
- **Verify connection reliability** through built-in connection testing
- **Manage integrations independently** with separate configuration and credential controls
## Required Permissions
Before configuring the Amazon S3 Integration, ensure that AWS credentials and optionally the IAM Role used for S3 access have the necessary permissions to write scan results to the designated S3 bucket. This requirement applies when using static credentials, session credentials, or an IAM role (either self-created or generated using [Prowler's permissions templates](#available-templates)).
### IAM Policy
The S3 integration requires the following permissions. Add these to the IAM role policy, or ensure AWS credentials have these permissions:
```json title="s3:DeleteObject"
{
"Version": "2012-10-17",
"Statement": [
{
"Condition": {
"StringEquals": {
"s3:ResourceAccount": "<BUCKET AWS ACCOUNT NUMBER>"
}
},
"Action": [
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::<BUCKET NAME>/*test-prowler-connection.txt"
],
"Effect": "Allow"
}
]
}
```
`s3:DeleteObject` permission is required for connection testing. When testing the S3 integration, Prowler creates a temporary beacon file, `test-prowler-connection.txt`, to verify write permissions, then deletes it to confirm the connection is working properly.
```json title="s3:PutObject"
{
"Version": "2012-10-17",
"Statement": [
{
"Condition": {
"StringEquals": {
"s3:ResourceAccount": "<BUCKET AWS ACCOUNT NUMBER>"
}
},
"Action": [
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::<BUCKET NAME>/*"
],
"Effect": "Allow"
}
]
}
```
```json title="s3:ListBucket"
{
"Version": "2012-10-17",
"Statement": [
{
"Condition": {
"StringEquals": {
"s3:ResourceAccount": "<BUCKET AWS ACCOUNT NUMBER>"
}
},
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::<BUCKET NAME>"
],
"Effect": "Allow"
}
]
}
```
???+ note
Replace `<BUCKET AWS ACCOUNT NUMBER>` with the AWS account ID that owns the destination S3 bucket, and `<BUCKET NAME>` with the actual bucket name.
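For convenience, the three statements above can also be combined into a single policy document. This is a sketch using the same placeholders as noted above; the `Sid` values are illustrative:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProwlerS3PutObject",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<BUCKET NAME>/*",
            "Condition": {
                "StringEquals": {
                    "s3:ResourceAccount": "<BUCKET AWS ACCOUNT NUMBER>"
                }
            }
        },
        {
            "Sid": "ProwlerS3DeleteBeacon",
            "Effect": "Allow",
            "Action": "s3:DeleteObject",
            "Resource": "arn:aws:s3:::<BUCKET NAME>/*test-prowler-connection.txt",
            "Condition": {
                "StringEquals": {
                    "s3:ResourceAccount": "<BUCKET AWS ACCOUNT NUMBER>"
                }
            }
        },
        {
            "Sid": "ProwlerS3ListBucket",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::<BUCKET NAME>",
            "Condition": {
                "StringEquals": {
                    "s3:ResourceAccount": "<BUCKET AWS ACCOUNT NUMBER>"
                }
            }
        }
    ]
}
```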
### Cross-Account S3 Bucket
If the S3 destination bucket is in a different AWS account than the one providing the credentials for S3 access, configure a bucket policy on the destination bucket to allow cross-account access.
The following diagrams illustrate the three common S3 integration scenarios:
##### Same Account Setup (No Bucket Policy Required)
When both the IAM credentials and destination S3 bucket are in the same AWS account, no additional bucket policy is required.
![](./img/s3/s3-same-account.png)
##### Cross-Account Setup (Bucket Policy Required)
When the S3 bucket is in a different AWS account, you must configure a bucket policy to allow cross-account access.
![](./img/s3/s3-cross-account.png)
##### Multi-Account Setup (Multiple Principals in Bucket Policy)
When multiple AWS accounts need to write to the same destination bucket, configure the bucket policy with multiple principals.
![](./img/s3/s3-multiple-accounts.png)
#### S3 Bucket Policy
Apply the following bucket policy to the destination S3 bucket:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<SOURCE ACCOUNT ID>:role/ProwlerScan"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::<BUCKET NAME>/*"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<SOURCE ACCOUNT ID>:role/ProwlerScan"
},
"Action": "s3:DeleteObject",
"Resource": "arn:aws:s3:::<BUCKET NAME>/*test-prowler-connection.txt"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<SOURCE ACCOUNT ID>:role/ProwlerScan"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::<BUCKET NAME>"
}
]
}
```
???+ note
Replace `<SOURCE ACCOUNT ID>` with the AWS account ID that contains the IAM role and `<BUCKET NAME>` with the destination bucket name. The role name `ProwlerScan` is the default name when using Prowler's permissions templates. If using a custom IAM role or different authentication method, replace `ProwlerScan` with the actual role name.
##### Multi-Account Configuration
For multiple AWS accounts, modify the `Principal` field to an array:
```json
"Principal": {
"AWS": [
"arn:aws:iam::<SOURCE ACCOUNT ID 1>:role/ProwlerScan",
"arn:aws:iam::<SOURCE ACCOUNT ID 2>:role/ProwlerScan"
]
}
```
### Available Templates
**Prowler App** provides Infrastructure as Code (IaC) templates to automate IAM role setup with S3 integration permissions.
???+ note
Templates are optional. Custom IAM roles or static credentials can be used instead.
Choose from the following deployment options:
- [CloudFormation](https://prowler-cloud-public.s3.eu-west-1.amazonaws.com/permissions/templates/aws/cloudformation/prowler-scan-role.yml)
- [Terraform](https://github.com/prowler-cloud/prowler/tree/master/permissions/templates/terraform)
#### CloudFormation
##### AWS CLI
When using Prowler's CloudFormation template, execute the following command to update the existing Prowler stack:
```bash
aws cloudformation update-stack \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
--stack-name "Prowler" \
--template-url "https://prowler-cloud-public.s3.eu-west-1.amazonaws.com/permissions/templates/aws/cloudformation/prowler-scan-role.yml" \
--parameters \
ParameterKey=EnableS3Integration,ParameterValue="true" \
ParameterKey=ExternalId,ParameterValue="your-external-id" \
ParameterKey=S3IntegrationBucketName,ParameterValue="your-bucket-name" \
ParameterKey=S3IntegrationBucketAccountId,ParameterValue="your-bucket-aws-account-id-owner"
```
Alternatively, if you don't have the `ProwlerScan` IAM Role, execute the following command to create the CloudFormation stack:
```bash
aws cloudformation create-stack \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
--stack-name "Prowler" \
--template-url "https://prowler-cloud-public.s3.eu-west-1.amazonaws.com/permissions/templates/aws/cloudformation/prowler-scan-role.yml" \
--parameters \
ParameterKey=EnableS3Integration,ParameterValue="true" \
ParameterKey=ExternalId,ParameterValue="your-external-id" \
ParameterKey=S3IntegrationBucketName,ParameterValue="your-bucket-name" \
ParameterKey=S3IntegrationBucketAccountId,ParameterValue="your-bucket-aws-account-id-owner"
```
A CloudFormation Quick Link is also available [here](https://us-east-1.console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/quickcreate?templateURL=https%3A%2F%2Fprowler-cloud-public.s3.eu-west-1.amazonaws.com%2Fpermissions%2Ftemplates%2Faws%2Fcloudformation%2Fprowler-scan-role.yml&stackName=Prowler&param_EnableS3Integration=true)
##### AWS Console
If using Prowler's CloudFormation template, follow these steps in the AWS Console to update the existing Prowler stack:
1. Navigate to CloudFormation service in the AWS region you are using
2. Select "ProwlerScan", click "Update" and then "Make a direct update"
3. Replace template, uploading the [CloudFormation template](https://prowler-cloud-public.s3.eu-west-1.amazonaws.com/permissions/templates/aws/cloudformation/prowler-scan-role.yml)
4. Configure parameters:
- `ExternalId`: Keep existing value
- `EnableS3Integration`: Select "true"
- `S3IntegrationBucketName`: Your bucket name
- `S3IntegrationBucketAccountId`: Bucket owner's AWS account ID
5. In the "Configure stack options" screen, again, leave everything as it is and click on "Next"
6. Finally, under "Review Prowler", at the bottom click on "Submit"
#### Terraform
1. Download the Terraform code:
```bash
# Clone or download from GitHub
git clone https://github.com/prowler-cloud/prowler.git
cd prowler/permissions/templates/terraform
```
2. Configure your variables:
```bash
cp terraform.tfvars.example terraform.tfvars
```
3. Edit `terraform.tfvars` with your specific values:
```hcl
# Required: External ID from Prowler App
external_id = "your-unique-external-id-here"
# S3 Integration Configuration
enable_s3_integration = true
s3_integration_bucket_name = "your-s3-bucket-name"
s3_integration_bucket_account_id = "123456789012" # Bucket owner's AWS Account ID
```
4. Deploy the infrastructure:
```bash
terraform init
terraform plan # Review the planned changes
terraform apply # Type 'yes' when prompted
```
5. After successful deployment, Terraform will display important values:
```
Outputs:
prowler_role_arn = "arn:aws:iam::123456789012:role/ProwlerScan"
prowler_role_name = "ProwlerScan"
s3_integration_enabled = "true"
```
6. Copy the `prowler_role_arn`, as it's required to complete the S3 integration credentials configuration.
For detailed information, refer to the [Terraform README](https://github.com/prowler-cloud/prowler/blob/master/permissions/templates/terraform/README.md).
---
## Configuration
Once the required permissions are set up, proceed to configure the S3 integration in **Prowler App**.
1. Navigate to "Integrations"
![Navigate to integrations](./img/s3/s3-integration-ui-1.png)
2. Locate the Amazon S3 Integration card and click on the "Configure" button
![Access S3 integration](./img/s3/s3-integration-ui-2.png)
3. Click the "Add Integration" button
![Add integration button](./img/s3/s3-integration-ui-3.png)
4. Complete the configuration form with the following details:
- **Cloud Providers:** Select the providers whose scan results should be exported to this S3 bucket
- **Bucket Name:** Enter the name of the target S3 bucket (e.g., `my-security-findings-bucket`)
- **Output Directory:** Specify the directory path within the bucket (e.g., `/prowler-findings/`, defaults to `output`)
![Configuration form](./img/s3/s3-integration-ui-4.png)
6. Click "Next" to configure credentials
6. Configure AWS authentication using one of the supported methods:
- **AWS SDK Default:** Use default AWS credentials from the environment. For Prowler Cloud users, this is the recommended option as the service has AWS credentials to assume IAM roles with ARNs matching `arn:aws:iam::*:role/Prowler*` or `arn:aws:iam::*:role/prowler*`
- **Access Keys:** Provide AWS access key ID and secret access key
- **IAM Role (optional):** Specify the IAM Role ARN, external ID, and optional session parameters
![Credentials configuration](./img/s3/s3-integration-ui-5.png)
7. For IAM role authentication (optional), complete the required fields:
- **Role ARN:** The Amazon Resource Name of the IAM role
- **External ID:** Unique identifier for additional security; mandatory and filled automatically (defaults to the Tenant/Organization ID)
- **Role Session Name:** Optional name for the assumed role session
- **Session Duration:** Optional session duration in seconds
9. Click "Create Integration" to verify the connection and complete the setup
???+ success
Once credentials are configured and the connection test passes, the S3 integration will be active. Scan results will automatically be exported to the specified bucket after each scan completes. Run a new scan and check the S3 bucket to verify the integration is working.
???+ note
Scan outputs are processed after scan completion. Depending on scan size and network conditions, exports may take a few minutes to appear in the S3 bucket.
---
### Integration Status
Once the integration is active, monitor its status and make adjustments as needed through the integrations management interface.
1. Review configured integrations in the management interface
2. Each integration displays:
- **Connection Status:** Connected or Disconnected indicator
- **Bucket Information:** Bucket name and output directory
- **Last Checked:** Timestamp of the most recent connection test
![Integration status view](./img/s3/s3-integration-ui-6.png)
#### Actions
![Action buttons](./img/s3/s3-integration-ui-7.png)
Each S3 integration provides several management actions accessible through dedicated buttons:
| Button | Purpose | Available Actions | Notes |
|--------|---------|------------------|-------|
| **Test** | Verify integration connectivity | • Test AWS credential validity<br/>• Check S3 bucket accessibility<br/>• Verify write permissions<br/>• Validate connection setup | Results displayed in notification message |
| **Config** | Modify integration settings | • Update selected cloud providers<br/>• Change bucket name<br/>• Modify output directory path | Click "Update Configuration" to save changes |
| **Credentials** | Update authentication settings | • Modify AWS access keys<br/>• Update IAM role configuration<br/>• Change authentication method | Click "Update Credentials" to save changes |
| **Enable/Disable** | Toggle integration status | • Enable integration to start exporting results<br/>• Disable integration to pause exports | Status change takes effect immediately |
| **Delete** | Remove integration permanently | • Permanently delete integration<br/>• Remove all configuration data | ⚠️ **Cannot be undone** - confirm before deleting |
???+ tip "Management Best Practices"
- Test the integration after any configuration changes
- Use the Enable/Disable toggle for temporary changes instead of deleting
---
## Understanding S3 Export Structure
When the S3 integration is enabled and a scan completes, Prowler creates a folder inside the specified bucket path (using `output` as the default folder name) with subfolders for each output format:
- Regular: `prowler-output-{provider-uid}-{timestamp}.{extension}`
- Compliance: `prowler-output-{provider-uid}-{timestamp}_{compliance_framework}.{extension}`
```
output/
├── compliance/
│ └── prowler-output-111122223333-20250805120000_cis_5.0_aws.csv
├── csv/
│ └── prowler-output-111122223333-20250805120000.csv
├── html/
│ └── prowler-output-111122223333-20250805120000.html
└── json-ocsf/
└── prowler-output-111122223333-20250805120000.ocsf.json
```
![](./img/s3/s3-output-folder.png)
For detailed information about Prowler's reporting formats, refer to the [Prowler reporting documentation](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/reporting/).
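The naming convention above is easy to consume programmatically. As an illustration, a small script can recover the provider UID and scan timestamp from an exported object key; this is a sketch based on the patterns listed above, and the regex assumes a provider UID without hyphens, such as an AWS account ID:

```python
import re
from datetime import datetime

# Matches both regular and compliance outputs, e.g.
#   output/csv/prowler-output-111122223333-20250805120000.csv
#   output/compliance/prowler-output-111122223333-20250805120000_cis_5.0_aws.csv
KEY_PATTERN = re.compile(
    r"prowler-output-(?P<provider_uid>[^-]+)-(?P<timestamp>\d{14})"
    r"(?:_(?P<compliance>.+?))?\.(?P<extension>csv|html|ocsf\.json)$"
)

def parse_output_key(key: str) -> dict:
    """Extract provider UID, timestamp, and optional compliance framework."""
    match = KEY_PATTERN.search(key)
    if not match:
        raise ValueError(f"Unrecognized Prowler output key: {key}")
    info = match.groupdict()
    # The timestamp component is YYYYMMDDHHMMSS.
    info["scanned_at"] = datetime.strptime(info["timestamp"], "%Y%m%d%H%M%S")
    return info

info = parse_output_key(
    "output/compliance/prowler-output-111122223333-20250805120000_cis_5.0_aws.csv"
)
print(info["provider_uid"], info["scanned_at"], info["compliance"])
```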
## Troubleshooting
**Connection test fails:**
- Check AWS credentials are valid
- If using IAM Role, check its permissions
- Verify bucket permissions and region
- Confirm network access to S3
**No scan results in bucket:**
- Ensure integration shows "Connected"
- Check that provider is associated with integration
- Verify bucket policies allow writes
For full setup instructions and requirements, check the Microsoft 365 provider requirements.
<img src="../../img/m365-credentials.png" alt="Prowler Cloud M365 Credentials" width="700"/>
### **Step 4.6: GitHub Credentials**
For GitHub, you must enter your Provider ID (username or organization name) and choose the authentication method you want to use:
- **Personal Access Token** (Recommended for individual users)
- **OAuth App Token** (For applications requiring user consent)
- **GitHub App** (Recommended for organizations and production use)
???+ note
For full setup instructions and requirements, check the [GitHub provider requirements](./github/getting-started-github.md).
<img src="../img/github-auth-methods.png" alt="GitHub Authentication Methods" width="700"/>
#### **Step 4.6.1: Personal Access Token**
Personal Access Tokens provide the simplest GitHub authentication method and support individual user authentication or testing scenarios.
- Select `Personal Access Token` and enter your `Personal Access Token`:
<img src="../img/github-pat-credentials.png" alt="GitHub Personal Access Token Credentials" width="700"/>
???+ note
For detailed instructions on creating a Personal Access Token and the exact permissions required, check the [GitHub Personal Access Token tutorial](./github/getting-started-github.md#1-personal-access-token-pat).
#### **Step 4.6.2: OAuth App Token**
OAuth Apps enable applications to act on behalf of users with explicit consent.
- Select `OAuth App Token` and enter your `OAuth App Token`:
<img src="../img/github-oauth-credentials.png" alt="GitHub OAuth App Credentials" width="700"/>
???+ note
To create an OAuth App, go to GitHub Settings → Developer settings → OAuth Apps → New OAuth App. You'll need to exchange an authorization code for an access token using the OAuth flow.
#### **Step 4.6.3: GitHub App**
GitHub Apps provide the recommended integration method for accessing multiple repositories or organizations.
- Select `GitHub App` and enter your `GitHub App ID` and `GitHub App Private Key`:
<img src="../img/github-app-credentials.png" alt="GitHub App Credentials" width="700"/>
???+ note
To create a GitHub App, go to GitHub Settings → Developer settings → GitHub Apps → New GitHub App. Configure the necessary permissions and generate a private key. Install the app to your account or organization and provide the App ID and private key content.
## **Step 5: Test Connection**
After adding the credentials for your cloud account, click the `Launch` button to verify that Prowler App can successfully connect to your provider:
repo_name: prowler-cloud/prowler
nav:
- Getting Started:
- Overview: index.md
- Overview:
- What is Prowler?: index.md
- Products:
- Prowler App: products/prowler-app.md
- Prowler CLI: products/prowler-cli.md
- Prowler Cloud: https://cloud.prowler.com
- Prowler Hub: https://hub.prowler.com
- Installation:
- Prowler App: installation/prowler-app.md
- Prowler CLI: installation/prowler-cli.md
- Basic Usage:
- Prowler App: basic-usage/prowler-app.md
- Prowler CLI: basic-usage/prowler-cli.md
- Requirements: getting-started/requirements.md
- Tutorials:
- Prowler App:
- Getting Started: tutorials/prowler-app.md
- Social Login: tutorials/prowler-app-social-login.md
- SSO with SAML: tutorials/prowler-app-sso.md
- Mute findings: tutorials/prowler-app-mute-findings.md
- Amazon S3 Integration: tutorials/prowler-app-s3-integration.md
- Lighthouse: tutorials/prowler-app-lighthouse.md
- CLI:
- Miscellaneous: tutorials/misc.md
+33 -16
@@ -1,12 +1,21 @@
## Deployment using Terraform
To deploy the Prowler Scan Role in order to allow scanning your AWS account from Prowler, please run the following commands in your terminal:
This Terraform configuration creates the necessary IAM role and policies to allow Prowler to scan your AWS account, with optional S3 integration for storing scan reports.
1. `terraform init`
2. `terraform plan`
3. `terraform apply`
### Quick Start
During the `terraform plan` and `terraform apply` steps you will be asked for an External ID to be configured in the `ProwlerScan` IAM role.
1. **Configure variables:**
```bash
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars with your values
```
2. **Deploy:**
```bash
terraform init
terraform plan
terraform apply
```
### Variables
@@ -15,7 +24,7 @@ During the `terraform plan` and `terraform apply` steps you will be asked for an
- `iam_principal` (optional): IAM principal pattern allowed to assume the role (defaults to Prowler Cloud: "role/prowler*")
- `enable_s3_integration` (optional): Enable S3 integration for storing scan reports (default: false)
- `s3_integration_bucket_name` (conditional): S3 bucket name for reports (required if `enable_s3_integration` is true)
- `s3_integration_bucket_account` (conditional): S3 bucket owner account ID (required if `enable_s3_integration` is true)
- `s3_integration_bucket_account_id` (conditional): S3 bucket owner account ID (required if `enable_s3_integration` is true)
### Usage Examples
@@ -30,18 +39,26 @@ terraform apply \
-var="external_id=your-external-id-here" \
-var="enable_s3_integration=true" \
-var="s3_integration_bucket_name=your-s3-bucket-name" \
-var="s3_integration_bucket_account=123456789012"
-var="s3_integration_bucket_account_id=123456789012"
```
#### Using terraform.tfvars file:
Create a `terraform.tfvars` file:
```hcl
external_id = "your-external-id-here"
enable_s3_integration = true
s3_integration_bucket_name = "your-s3-bucket-name"
s3_integration_bucket_account = "123456789012"
#### Using terraform.tfvars file (Recommended):
```bash
cp terraform.tfvars.example terraform.tfvars
# Edit the file with your values
terraform apply
```
Then run: `terraform apply`
#### Command line variables (Alternative):
```bash
terraform apply -var="external_id=your-external-id-here"
```
> Note that Terraform will use the AWS credentials of your default profile.
### Outputs
After successful deployment, you'll get:
- `prowler_role_arn`: The ARN of the created IAM role (use this in Prowler App)
- `prowler_role_name`: The name of the IAM role
- `s3_integration_enabled`: Whether S3 integration is enabled
> **Note:** Terraform will use the AWS credentials of your default profile or AWS_PROFILE environment variable.
+4 -4
@@ -3,15 +3,15 @@
locals {
s3_integration_validation = (
!var.enable_s3_integration ||
(var.enable_s3_integration && var.s3_integration_bucket_name != "" && var.s3_integration_bucket_account != "")
(var.enable_s3_integration && var.s3_integration_bucket_name != "" && var.s3_integration_bucket_account_id != "")
)
}
# Validation check using check block (Terraform 1.5+)
check "s3_integration_requirements" {
assert {
condition = !var.enable_s3_integration || (var.s3_integration_bucket_name != "" && var.s3_integration_bucket_account != "")
error_message = "When enable_s3_integration is true, both s3_integration_bucket_name and s3_integration_bucket_account must be provided and non-empty."
condition = !var.enable_s3_integration || (var.s3_integration_bucket_name != "" && var.s3_integration_bucket_account_id != "")
error_message = "When enable_s3_integration is true, both s3_integration_bucket_name and s3_integration_bucket_account_id must be provided and non-empty."
}
}
@@ -75,7 +75,7 @@ module "s3_integration" {
source = "./s3-integration"
s3_integration_bucket_name = var.s3_integration_bucket_name
s3_integration_bucket_account = var.s3_integration_bucket_account
s3_integration_bucket_account_id = var.s3_integration_bucket_account_id
prowler_role_name = aws_iam_role.prowler_scan.name
}
@@ -17,7 +17,7 @@ resource "aws_iam_role_policy" "prowler_s3_integration" {
]
Condition = {
StringEquals = {
"s3:ResourceAccount" = var.s3_integration_bucket_account
"s3:ResourceAccount" = var.s3_integration_bucket_account_id
}
}
},
@@ -31,7 +31,7 @@ resource "aws_iam_role_policy" "prowler_s3_integration" {
]
Condition = {
StringEquals = {
"s3:ResourceAccount" = var.s3_integration_bucket_account
"s3:ResourceAccount" = var.s3_integration_bucket_account_id
}
}
},
@@ -45,7 +45,7 @@ resource "aws_iam_role_policy" "prowler_s3_integration" {
]
Condition = {
StringEquals = {
"s3:ResourceAccount" = var.s3_integration_bucket_account
"s3:ResourceAccount" = var.s3_integration_bucket_account_id
}
}
}
@@ -8,13 +8,13 @@ variable "s3_integration_bucket_name" {
}
}
variable "s3_integration_bucket_account" {
variable "s3_integration_bucket_account_id" {
type = string
description = "The AWS Account ID owner of the S3 Bucket."
validation {
condition = length(var.s3_integration_bucket_account) == 12 && can(tonumber(var.s3_integration_bucket_account))
error_message = "s3_integration_bucket_account must be a valid 12-digit AWS Account ID."
condition = length(var.s3_integration_bucket_account_id) == 12 && can(tonumber(var.s3_integration_bucket_account_id))
error_message = "s3_integration_bucket_account_id must be a valid 12-digit AWS Account ID."
}
}
@@ -0,0 +1,30 @@
# =============================================================================
# Prowler Terraform Configuration
# =============================================================================
# REQUIRED: External ID from your Prowler App
external_id = "your-unique-external-id-here"
# =============================================================================
# Optional Variables (uncomment and modify as needed)
# =============================================================================
# Prowler Cloud Service Account (leave default unless using self-hosted)
# account_id = "232136659152"
# IAM Principal Pattern (leave default unless using self-hosted)
# iam_principal = "role/prowler*"
# =============================================================================
# S3 Integration Configuration
# =============================================================================
# Uncomment the following lines to enable S3 integration for storing scan reports
# Enable S3 integration
# enable_s3_integration = true
# S3 bucket name where reports will be stored
# s3_integration_bucket_name = "my-prowler-reports-bucket"
# AWS Account ID that owns the S3 bucket (usually your account)
# s3_integration_bucket_account_id = "123456789012"
@@ -1,13 +1,31 @@
# Required variable
# =============================================================================
# Prowler Terraform Configuration
# =============================================================================
# REQUIRED: External ID from your Prowler App setup
# This must match exactly what you configured in Prowler App
external_id = "your-unique-external-id-here"
# Optional Variables
# Prowler Cloud Account
# =============================================================================
# Optional Variables (uncomment and modify as needed)
# =============================================================================
# Prowler Cloud Service Account (leave default unless using self-hosted)
# account_id = "232136659152"
# Prowler Cloud Role
# IAM Principal Pattern (leave default unless using self-hosted)
# iam_principal = "role/prowler*"
# S3 Integration (optional)
# =============================================================================
# S3 Integration Configuration
# =============================================================================
# Uncomment the following lines to enable S3 integration for storing scan reports
# Enable S3 integration
# enable_s3_integration = true
# s3_bucket_name = "your-prowler-reports-bucket"
# s3_bucket_account = "123456789012"
# S3 bucket name where reports will be stored
# s3_integration_bucket_name = "my-prowler-reports-bucket"
# AWS Account ID that owns the S3 bucket (usually your account)
# s3_integration_bucket_account_id = "123456789012"
+3 -3
@@ -44,13 +44,13 @@ variable "s3_integration_bucket_name" {
}
}
variable "s3_integration_bucket_account" {
variable "s3_integration_bucket_account_id" {
type = string
description = "The AWS Account ID owner of the S3 Bucket. Required if enable_s3_integration is true."
default = ""
validation {
condition = var.s3_integration_bucket_account == "" || (length(var.s3_integration_bucket_account) == 12 && can(tonumber(var.s3_integration_bucket_account)))
error_message = "s3_integration_bucket_account must be a valid 12-digit AWS Account ID or empty."
condition = var.s3_integration_bucket_account_id == "" || (length(var.s3_integration_bucket_account_id) == 12 && can(tonumber(var.s3_integration_bucket_account_id)))
error_message = "s3_integration_bucket_account_id must be a valid 12-digit AWS Account ID or empty."
}
}
+51
@@ -2,6 +2,57 @@
All notable changes to the **Prowler SDK** are documented in this file.
## [v5.11.0] (Prowler UNRELEASED)
### Added
- Certificate authentication for M365 provider [(#8404)](https://github.com/prowler-cloud/prowler/pull/8404)
- `vm_sufficient_daily_backup_retention_period` check for Azure provider [(#8200)](https://github.com/prowler-cloud/prowler/pull/8200)
- `vm_jit_access_enabled` check for Azure provider [(#8202)](https://github.com/prowler-cloud/prowler/pull/8202)
- Bedrock AgentCore privilege escalation combination for AWS provider [(#8526)](https://github.com/prowler-cloud/prowler/pull/8526)
- Add User Email and APP name/installations information in GitHub provider [(#8501)](https://github.com/prowler-cloud/prowler/pull/8501)
- Remove standalone iam:PassRole from privesc detection and add missing patterns [(#8530)](https://github.com/prowler-cloud/prowler/pull/8530)
- `eks_cluster_deletion_protection_enabled` check for AWS provider [(#8536)](https://github.com/prowler-cloud/prowler/pull/8536)
- ECS privilege escalation patterns (StartTask and RunTask) for AWS provider [(#8541)](https://github.com/prowler-cloud/prowler/pull/8541)
### Changed
- Refine kisa isms-p compliance mapping [(#8479)](https://github.com/prowler-cloud/prowler/pull/8479)
### Fixed
---
## [v5.10.3] (Prowler UNRELEASED)
### Fixed
- AWS resource-arn filtering [(#8533)](https://github.com/prowler-cloud/prowler/pull/8533)
- GitHub App authentication for GitHub provider [(#8529)](https://github.com/prowler-cloud/prowler/pull/8529)
- List all accessible organizations in GitHub provider [(#8535)](https://github.com/prowler-cloud/prowler/pull/8535)
- Only evaluate enabled accounts in `entra_users_mfa_capable` check [(#8544)](https://github.com/prowler-cloud/prowler/pull/8544)
---
## [v5.10.2] (Prowler v5.10.2)
### Fixed
- Order requirements by ID in Prowler ThreatScore AWS compliance framework [(#8495)](https://github.com/prowler-cloud/prowler/pull/8495)
- Add explicit resource name to GCP and Azure Defender checks [(#8352)](https://github.com/prowler-cloud/prowler/pull/8352)
- Validation errors in Azure and M365 providers [(#8353)](https://github.com/prowler-cloud/prowler/pull/8353)
- Azure `app_http_logs_enabled` check false positives [(#8507)](https://github.com/prowler-cloud/prowler/pull/8507)
- Azure `storage_geo_redundant_enabled` check false positives [(#8504)](https://github.com/prowler-cloud/prowler/pull/8504)
- AWS `kafka_cluster_is_public` check false positives [(#8514)](https://github.com/prowler-cloud/prowler/pull/8514)
- List all accessible repositories in GitHub [(#8522)](https://github.com/prowler-cloud/prowler/pull/8522)
- GitHub CIS 1.0 Compliance Reports [(#8519)](https://github.com/prowler-cloud/prowler/pull/8519)
---
## [v5.10.1] (Prowler v5.10.1)
### Fixed
- Remove invalid requirements from CIS 1.0 for GitHub provider [(#8472)](https://github.com/prowler-cloud/prowler/pull/8472)
---
## [v5.10.0] (Prowler v5.10.0)
### Added
File diff suppressed because it is too large
+1 -1
@@ -12,7 +12,7 @@ from prowler.lib.logger import logger
timestamp = datetime.today()
timestamp_utc = datetime.now(timezone.utc).replace(tzinfo=timezone.utc)
prowler_version = "5.10.0"
prowler_version = "5.11.0"
html_logo_url = "https://github.com/prowler-cloud/prowler/"
square_logo_img = "https://prowler.com/wp-content/uploads/logo-html.png"
aws_logo = "https://user-images.githubusercontent.com/38561120/235953920-3e3fba08-0795-41dc-b480-9bea57db9f2e.png"
+3
@@ -459,6 +459,9 @@ azure:
"Standard_DS3_v2",
"Standard_D4s_v3",
]
# Azure VM Backup Configuration
# azure.vm_sufficient_daily_backup_retention_period
vm_backup_min_daily_retention_days: 7
# GCP Configuration
gcp:
+1 -3
@@ -550,9 +550,7 @@ class Check_Report_GCP(Check_Report):
or ""
)
self.resource_name = (
resource_name
or getattr(resource, "name", "")
or getattr(resource, "id", "")
resource_name or getattr(resource, "name", "") or "GCP Project"
)
self.project_id = project_id or getattr(resource, "project_id", "")
self.location = (
@@ -2,6 +2,7 @@ from colorama import Fore, Style
from tabulate import tabulate
from prowler.config.config import orange_color
from prowler.lib.check.compliance_models import Compliance
def get_prowler_threatscore_table(
@@ -25,7 +26,10 @@ def get_prowler_threatscore_table(
pillars = {}
score_per_pillar = {}
max_score_per_pillar = {}
counted_findings = []
counted_findings_per_pillar = {}
generic_score = 0
generic_max_score = 0
generic_counted_findings = []
for index, finding in enumerate(findings):
check = bulk_checks_metadata[finding.check_metadata.CheckID]
check_compliances = check.Compliance
@@ -33,18 +37,24 @@ def get_prowler_threatscore_table(
if compliance.Framework == "ProwlerThreatScore":
for requirement in compliance.Requirements:
for attribute in requirement.Attributes:
# Score per pillar logic
pillar = attribute.Section
if not any(
[
pillar in score_per_pillar.keys(),
pillar in max_score_per_pillar.keys(),
pillar in counted_findings_per_pillar.keys(),
]
):
score_per_pillar[pillar] = 0
max_score_per_pillar[pillar] = 0
counted_findings_per_pillar[pillar] = []
if index not in counted_findings:
if (
index not in counted_findings_per_pillar.get(pillar, [])
and not finding.muted
):
if finding.status == "PASS":
score_per_pillar[pillar] += (
attribute.LevelOfRisk * attribute.Weight
@@ -52,7 +62,7 @@ def get_prowler_threatscore_table(
max_score_per_pillar[pillar] += (
attribute.LevelOfRisk * attribute.Weight
)
counted_findings.append(index)
counted_findings_per_pillar[pillar].append(index)
if pillar not in pillars:
pillars[pillar] = {"FAIL": 0, "PASS": 0, "Muted": 0}
@@ -69,6 +79,17 @@ def get_prowler_threatscore_table(
pass_count.append(index)
pillars[pillar]["PASS"] += 1
# Generic score logic
if index not in generic_counted_findings and not finding.muted:
if finding.status == "PASS":
generic_score += (
attribute.LevelOfRisk * attribute.Weight
)
generic_max_score += (
attribute.LevelOfRisk * attribute.Weight
)
generic_counted_findings.append(index)
pillars = dict(sorted(pillars.items()))
for pillar in pillars:
pillar_table["Provider"].append(compliance.Provider)
@@ -88,6 +109,27 @@ def get_prowler_threatscore_table(
f"{orange_color}{pillars[pillar]['Muted']}{Style.RESET_ALL}"
)
# Add pillars with no findings to the table with Status: PASS and Score: 100%
provider_name = compliance_framework.split("_")[-1]
bulk_compliance_frameworks = Compliance.get_bulk(provider_name)
unique_sections = set()
for compliance_name, compliance in bulk_compliance_frameworks.items():
if compliance_name.startswith(f"prowler_threatscore_{provider_name}"):
for requirement in compliance.Requirements:
for attribute in requirement.Attributes:
unique_sections.add(attribute.Section)
for section in unique_sections:
if section not in pillars:
pillar_table["Provider"].append(provider_name.capitalize())
pillar_table["Pillar"].append(section)
pillar_table["Score"].append(
f"{Style.BRIGHT}{Fore.GREEN}100.00%{Style.RESET_ALL}"
)
pillar_table["Status"].append(f"{Fore.GREEN}PASS(0){Style.RESET_ALL}")
pillar_table["Muted"].append(f"{orange_color}0{Style.RESET_ALL}")
if (
len(fail_count) + len(pass_count) + len(muted_count) > 1
): # If there are no resources, don't print the compliance table
@@ -104,9 +146,12 @@ def get_prowler_threatscore_table(
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
if not compliance_overview:
print(
f"\n{Style.BRIGHT}Overall ThreatScore: {generic_score / generic_max_score * 100:.2f}%{Style.RESET_ALL}"
)
if len(fail_count) > 0 and len(pillar_table["Pillar"]) > 0:
print(
f"\nFramework {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Results:"
f"Framework {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Results:"
)
print(
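The scoring above weights each counted finding by `LevelOfRisk * Weight`, sums passed weights into the score and all weights into the maximum, then reports the ratio as a percentage; a toy version of that arithmetic (the helper and its tuple layout are illustrative, not Prowler's API):

```python
def threat_score(findings: list) -> float:
    """findings: (status, level_of_risk, weight) tuples, muted ones excluded."""
    score = sum(risk * weight for status, risk, weight in findings if status == "PASS")
    max_score = sum(risk * weight for _, risk, weight in findings)
    return score / max_score * 100

# Two PASS findings and one FAIL, equal weights:
print(f"{threat_score([('PASS', 3, 1), ('PASS', 2, 1), ('FAIL', 4, 1)]):.2f}%")
# → 55.56%
```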
+6 -4
@@ -19,6 +19,7 @@ from prowler.lib.outputs.compliance.compliance import get_check_compliance
from prowler.lib.outputs.utils import unroll_tags
from prowler.lib.utils.utils import dict_to_lowercase, get_nested_attribute
from prowler.providers.common.provider import Provider
from prowler.providers.github.models import GithubAppIdentityInfo, GithubIdentityInfo
class Finding(BaseModel):
@@ -250,15 +251,16 @@ class Finding(BaseModel):
output_data["resource_name"] = check_output.resource_name
output_data["resource_uid"] = check_output.resource_id
if hasattr(provider.identity, "account_name"):
if isinstance(provider.identity, GithubIdentityInfo):
# GithubIdentityInfo (Personal Access Token, OAuth)
output_data["account_name"] = provider.identity.account_name
output_data["account_uid"] = provider.identity.account_id
elif hasattr(provider.identity, "app_id"):
output_data["account_email"] = provider.identity.account_email
elif isinstance(provider.identity, GithubAppIdentityInfo):
# GithubAppIdentityInfo (GitHub App)
# TODO: Get Github App name
output_data["account_name"] = f"app-{provider.identity.app_id}"
output_data["account_name"] = provider.identity.app_name
output_data["account_uid"] = provider.identity.app_id
output_data["installations"] = provider.identity.installations
output_data["region"] = check_output.owner
+62 -13
@@ -41,7 +41,7 @@ class HTML(Output):
<td>{finding_status}</td>
<td>{finding.metadata.Severity.value}</td>
<td>{finding.metadata.ServiceName}</td>
<td>{":".join([finding.resource_metadata['file_path'], "-".join(map(str, finding.resource_metadata['file_line_range']))]) if finding.metadata.Provider == "iac" else finding.region.lower()}</td>
<td>{":".join([finding.resource_metadata["file_path"], "-".join(map(str, finding.resource_metadata["file_line_range"]))]) if finding.metadata.Provider == "iac" else finding.region.lower()}</td>
<td>{finding.metadata.CheckID.replace("_", "<wbr />_")}</td>
<td>{finding.metadata.CheckTitle}</td>
<td>{finding.resource_uid.replace("<", "&lt;").replace(">", "&gt;").replace("_", "<wbr />_")}</td>
@@ -558,10 +558,65 @@ class HTML(Output):
try:
if hasattr(provider.identity, "account_name"):
# GithubIdentityInfo (Personal Access Token, OAuth)
account_display = provider.identity.account_name
account_info_items = f"""
<li class="list-group-item">
<b>GitHub account:</b> {provider.identity.account_name}
</li>
"""
# Add email if available
if (
hasattr(provider.identity, "account_email")
and provider.identity.account_email
):
account_info_items += f"""
<li class="list-group-item">
<b>GitHub account email:</b> {provider.identity.account_email}
</li>"""
elif hasattr(provider.identity, "app_id"):
# GithubAppIdentityInfo (GitHub App)
account_display = f"app-{provider.identity.app_id}"
# Assessment items: App Name and Installations
account_info_items = f"""
<li class="list-group-item">
<b>GitHub App Name:</b> {provider.identity.app_name}
</li>"""
# Add installations if available
if (
hasattr(provider.identity, "installations")
and provider.identity.installations
):
installations_display = ", ".join(provider.identity.installations)
account_info_items += f"""
<li class="list-group-item">
<b>Installations:</b> {installations_display}
</li>"""
else:
account_info_items += """
<li class="list-group-item">
<b>Installations:</b> No installations found
</li>"""
# Credentials items: Authentication method and App ID
credentials_items = f"""
<li class="list-group-item">
<b>GitHub authentication method:</b> {provider.auth_method}
</li>
<li class="list-group-item">
<b>GitHub App ID:</b> {provider.identity.app_id}
</li>"""
else:
# Fallback for other identity types
account_info_items = ""
credentials_items = f"""
<li class="list-group-item">
<b>GitHub authentication method:</b> {provider.auth_method}
</li>"""
# For PAT/OAuth, use default credentials structure
if hasattr(provider.identity, "account_name"):
credentials_items = f"""
<li class="list-group-item">
<b>GitHub authentication method:</b> {provider.auth_method}
</li>"""
return f"""
<div class="col-md-2">
@@ -569,11 +624,8 @@ class HTML(Output):
<div class="card-header">
GitHub Assessment Summary
</div>
<ul class="list-group
list-group-flush">
<li class="list-group-item">
<b>GitHub account:</b> {account_display}
</li>
<ul class="list-group list-group-flush">
{account_info_items}
</ul>
</div>
</div>
@@ -582,11 +634,8 @@ class HTML(Output):
<div class="card-header">
GitHub Credentials
</div>
<ul class="list-group
list-group-flush">
<li class="list-group-item">
<b>GitHub authentication method:</b> {provider.auth_method}
</li>
<ul class="list-group list-group-flush">
{credentials_items}
</ul>
</div>
</div>"""
+1 -1
@@ -8,7 +8,7 @@ def is_resource_filtered(resource: str, audit_resources: list) -> bool:
Returns True if it is filtered and False if it does not match the input filters
"""
try:
if resource in str(audit_resources):
if resource in audit_resources:
return True
return False
except Exception as error:
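The one-line change above turns a substring test into exact list membership; a minimal illustration of why that matters (the ARN and resource names are made up):

```python
def filtered_substring(resource: str, audit_resources: list) -> bool:
    # Old behaviour: membership test against the *string* form of the list,
    # so any substring of any ARN matches.
    return resource in str(audit_resources)

def filtered_exact(resource: str, audit_resources: list) -> bool:
    # Fixed behaviour: exact membership in the list.
    return resource in audit_resources

arns = ["arn:aws:s3:::my-bucket-prod"]
# A partial name slips through the substring check but not the exact one:
assert filtered_substring("my-bucket", arns) is True
assert filtered_exact("my-bucket", arns) is False
```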
@@ -1430,7 +1430,9 @@
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
"aws-us-gov": [
"us-gov-west-1"
]
}
},
"bedrock-runtime": {
@@ -4269,15 +4271,18 @@
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"ca-west-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
@@ -5705,12 +5710,15 @@
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-5",
"ca-central-1",
"eu-central-1",
"eu-north-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
@@ -5916,9 +5924,11 @@
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-5",
"ca-central-1",
"eu-central-1",
"eu-north-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
@@ -6580,6 +6590,7 @@
"aws": [
"af-south-1",
"ap-east-1",
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
@@ -6673,6 +6684,7 @@
"aws": [
"af-south-1",
"ap-east-1",
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
@@ -8260,6 +8272,7 @@
"aws": [
"af-south-1",
"ap-east-1",
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
@@ -9914,6 +9927,7 @@
"aws": [
"af-south-1",
"ap-east-1",
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
@@ -0,0 +1,32 @@
{
"Provider": "aws",
"CheckID": "eks_cluster_deletion_protection_enabled",
"CheckTitle": "Ensure EKS clusters have deletion protection enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices/Resource Management"
],
"ServiceName": "eks",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"Severity": "high",
"ResourceType": "AwsEksCluster",
"Description": "Ensure that your Amazon EKS clusters have deletion protection enabled to prevent accidental deletion of critical Kubernetes clusters.",
"Risk": "Without deletion protection, EKS clusters can be accidentally deleted through Terraform automation, AWS CLI commands, or the AWS console, leading to data loss and service disruption.",
"RelatedUrl": "https://docs.aws.amazon.com/eks/latest/userguide/deletion-protection.html",
"Remediation": {
"Code": {
"CLI": "aws eks update-cluster-config --region <region_name> --name <cluster_name> --deletion-protection",
"NativeIaC": "",
"Other": "",
"Terraform": "resource \"aws_eks_cluster\" \"example\" {\n name = \"example-cluster\"\n role_arn = aws_iam_role.example.arn\n deletion_protection = true\n # ... other configuration\n}"
},
"Recommendation": {
"Text": "Enable deletion protection on all EKS clusters to prevent accidental deletion. This is especially important for production clusters and those managed through Infrastructure as Code (IaC) tools.",
"Url": "https://docs.aws.amazon.com/eks/latest/userguide/deletion-protection.html"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
}
@@ -0,0 +1,21 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.eks.eks_client import eks_client
class eks_cluster_deletion_protection_enabled(Check):
def execute(self):
findings = []
for cluster in eks_client.clusters:
report = Check_Report_AWS(metadata=self.metadata(), resource=cluster)
report.status = "PASS"
report.status_extended = (
f"EKS cluster {cluster.name} has deletion protection enabled."
)
if cluster.deletion_protection is False:
report.status = "FAIL"
report.status_extended = (
f"EKS cluster {cluster.name} has deletion protection disabled."
)
findings.append(report)
return findings
@@ -83,6 +83,10 @@ class EKS(AWSService):
]["publicAccessCidrs"]
if "encryptionConfig" in describe_cluster["cluster"]:
cluster.encryptionConfig = True
if "deletionProtection" in describe_cluster["cluster"]:
cluster.deletion_protection = describe_cluster["cluster"][
"deletionProtection"
]
cluster.tags = [describe_cluster["cluster"].get("tags")]
cluster.version = describe_cluster["cluster"].get("version", "")
@@ -108,4 +112,5 @@ class EKSCluster(BaseModel):
endpoint_private_access: bool = None
public_access_cidrs: list[str] = []
encryptionConfig: bool = None
deletion_protection: bool = None
tags: Optional[list] = []
@@ -24,7 +24,6 @@ privilege_escalation_policies_combination = {
"IAMPut": {"iam:Put*"},
"CreatePolicyVersion": {"iam:CreatePolicyVersion"},
"SetDefaultPolicyVersion": {"iam:SetDefaultPolicyVersion"},
"iam:PassRole": {"iam:PassRole"},
"PassRole+EC2": {
"iam:PassRole",
"ec2:RunInstances",
@@ -69,6 +68,21 @@ privilege_escalation_policies_combination = {
},
"GlueUpdateDevEndpoint": {"glue:UpdateDevEndpoint"},
"lambda:UpdateFunctionCode": {"lambda:UpdateFunctionCode"},
"lambda:UpdateFunctionConfiguration": {"lambda:UpdateFunctionConfiguration"},
"PassRole+CodeStar": {
"iam:PassRole",
"codestar:CreateProject",
},
"PassRole+CreateAutoScaling": {
"iam:PassRole",
"autoscaling:CreateAutoScalingGroup",
"autoscaling:CreateLaunchConfiguration",
},
"PassRole+UpdateAutoScaling": {
"iam:PassRole",
"autoscaling:UpdateAutoScalingGroup",
"autoscaling:CreateLaunchConfiguration",
},
"iam:CreateAccessKey": {"iam:CreateAccessKey"},
"iam:CreateLoginProfile": {"iam:CreateLoginProfile"},
"iam:UpdateLoginProfile": {"iam:UpdateLoginProfile"},
@@ -86,6 +100,24 @@ privilege_escalation_policies_combination = {
"sts:AssumeRole",
"iam:UpdateAssumeRolePolicy",
},
# AgentCore privilege escalation patterns
"PassRole+AgentCoreCreateInterpreter+InvokeInterpreter": {
"iam:PassRole",
"bedrock-agentcore:CreateCodeInterpreter",
"bedrock-agentcore:InvokeCodeInterpreter",
},
# ECS-based privilege escalation patterns
# Reference: https://labs.reversec.com/posts/2025/08/another-ecs-privilege-escalation-path
"PassRole+ECS+StartTask": {
"iam:PassRole",
"ecs:StartTask",
"ecs:RegisterContainerInstance",
"ecs:DeregisterContainerInstance",
},
"PassRole+ECS+RunTask": {
"iam:PassRole",
"ecs:RunTask",
},
# TO-DO: We have to handle AssumeRole just if the resource is * and without conditions
# "sts:AssumeRole": {"sts:AssumeRole"},
}
@@ -10,13 +10,13 @@ class kafka_cluster_is_public(Check):
report = Check_Report_AWS(metadata=self.metadata(), resource=cluster)
report.status = "FAIL"
report.status_extended = (
f"Kafka cluster '{cluster.name}' is publicly accessible."
f"Kafka cluster {cluster.name} is publicly accessible."
)
if cluster.public_access:
if not cluster.public_access:
report.status = "PASS"
report.status_extended = (
f"Kafka cluster '{cluster.name}' is not publicly accessible."
f"Kafka cluster {cluster.name} is not publicly accessible."
)
findings.append(report)
@@ -22,6 +22,10 @@ class app_http_logs_enabled(Check):
report.status = "PASS"
report.status_extended = f"App {app.name} has HTTP Logs enabled in diagnostic setting {diagnostic_setting.name} in subscription {subscription_name}"
break
elif log.category_group == "allLogs" and log.enabled:
report.status = "PASS"
report.status_extended = f"App {app.name} has allLogs category group which includes HTTP Logs enabled in diagnostic setting {diagnostic_setting.name} in subscription {subscription_name}"
break
findings.append(report)
return findings
@@ -14,6 +14,11 @@ class defender_additional_email_configured_with_a_security_contact(Check):
report = Check_Report_Azure(
metadata=self.metadata(), resource=contact_configuration
)
report.resource_name = (
contact_configuration.name
if contact_configuration.name
else "Security Contact"
)
report.subscription = subscription_name
if len(contact_configuration.emails) > 0:
@@ -31,6 +31,11 @@ class defender_attack_path_notifications_properly_configured(Check):
report = Check_Report_Azure(
metadata=self.metadata(), resource=contact_configuration
)
report.resource_name = (
contact_configuration.name
if contact_configuration.name
else "Security Contact"
)
report.subscription = subscription_name
actual_risk_level = getattr(
contact_configuration, "attack_path_minimal_risk_level", None
@@ -14,6 +14,11 @@ class defender_ensure_notify_alerts_severity_is_high(Check):
report = Check_Report_Azure(
metadata=self.metadata(), resource=contact_configuration
)
report.resource_name = (
contact_configuration.name
if contact_configuration.name
else "Security Contact"
)
report.subscription = subscription_name
report.status = "FAIL"
report.status_extended = f"Notifications are not enabled for alerts with a minimum severity of high or lower in subscription {subscription_name}."
@@ -12,7 +12,13 @@ class defender_ensure_notify_emails_to_owners(Check):
) in defender_client.security_contact_configurations.items():
for contact_configuration in security_contact_configurations.values():
report = Check_Report_Azure(
metadata=self.metadata(), resource=contact_configuration
metadata=self.metadata(),
resource=contact_configuration,
)
report.resource_name = (
contact_configuration.name
if contact_configuration.name
else "Security Contact"
)
report.subscription = subscription_name
if (
@@ -25,6 +25,7 @@ class Defender(AzureService):
).token
)
self.iot_security_solutions = self._get_iot_security_solutions()
self.jit_policies = self._get_jit_policies()
def _get_pricings(self):
logger.info("Defender - Getting pricings...")
@@ -246,6 +247,44 @@ class Defender(AzureService):
)
return iot_security_solutions
def _get_jit_policies(self) -> dict[str, dict]:
"""
Get all JIT policies for all subscriptions.
Returns:
A dictionary of JIT policies for each subscription. The format will be:
{
"subscription_name": {
"jit_policy_id": JITPolicy
}
}
"""
logger.info("Defender - Getting JIT policies...")
jit_policies = {}
for subscription_name, client in self.clients.items():
try:
jit_policies[subscription_name] = {}
policies = client.jit_network_access_policies.list()
for policy in policies:
vm_ids = set()
for vm in getattr(policy, "virtual_machines", []):
vm_ids.add(vm.id)
jit_policies[subscription_name].update(
{
policy.id: JITPolicy(
id=policy.id,
name=policy.name,
location=getattr(policy, "location", "Global"),
vm_ids=vm_ids,
),
}
)
except Exception as error:
logger.error(
f"Subscription name: {subscription_name} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return jit_policies
class Pricing(BaseModel):
resource_id: str
@@ -317,3 +356,10 @@ class IoTSecuritySolution(BaseModel):
resource_id: str
name: str
status: str
class JITPolicy(BaseModel):
id: str
name: str
location: str = ""
vm_ids: list[str] = []
@@ -11,20 +11,30 @@ from prowler.providers.azure.lib.service.service import AzureService
class BackupItem(BaseModel):
"""Minimal BackupItem: only essential identifying and descriptive fields."""
"""Model that represents a backup item."""
id: str
name: str
workload_type: Optional[DataSourceType]
backup_policy_id: Optional[str] = None
class BackupPolicy(BaseModel):
"""Model that represents a backup policy."""
id: str
name: str
retention_days: Optional[int] = None
class BackupVault(BaseModel):
"""Minimal BackupVault: only essential identifying fields and its backup items."""
"""Model that represents a backup vault."""
id: str
name: str
location: str
backup_protected_items: dict[str, BackupItem] = Field(default_factory=dict)
backup_policies: dict[str, BackupPolicy] = Field(default_factory=dict)
class Recovery(AzureService):
@@ -71,6 +81,9 @@ class RecoveryBackup(AzureService):
vault.backup_protected_items = self._get_backup_protected_items(
subscription_name=subscription_name, vault=vault
)
vault.backup_policies = self._get_backup_policies(
subscription_name=subscription_name, vault=vault
)
def _get_backup_protected_items(
self, subscription_name: str, vault: BackupVault
@@ -95,7 +108,58 @@ class RecoveryBackup(AzureService):
workload_type=(
item_properties.workload_type if item_properties else None
),
backup_policy_id=(
item_properties.policy_id if item_properties else None
),
)
except Exception as e:
logger.error(f"Recovery - Error getting backup protected items: {e}")
except Exception as error:
logger.error(
f"Subscription name: {subscription_name} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return backup_protected_items_dict
def _get_backup_policies(
self, subscription_name: str, vault: BackupVault
) -> dict[str, BackupPolicy]:
"""
Retrieve all backup policies for a given vault.
"""
logger.info("Recovery - Getting backup policies...")
backup_policies_dict: dict[str, BackupPolicy] = {}
unique_backup_policies: set[str] = set()
try:
for item in vault.backup_protected_items.values():
if item.backup_policy_id:
unique_backup_policies.add(item.backup_policy_id)
for policy_id in unique_backup_policies:
policy = self.clients[subscription_name].protection_policies.get(
vault_name=vault.name,
resource_group_name=vault.id.split("/")[4],
policy_name=policy_id.split("/")[-1],
)
backup_policies_dict[policy_id] = BackupPolicy(
id=policy.id,
name=policy.name,
retention_days=getattr(
getattr(
getattr(
getattr(
getattr(policy, "properties", None),
"retention_policy",
None,
),
"daily_schedule",
None,
),
"retention_duration",
None,
),
"count",
None,
),
)
except Exception as error:
logger.error(
f"Subscription name: {subscription_name} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return backup_policies_dict
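The nested `getattr` ladder used for `retention_days` above is hard to scan. A small helper (hypothetical, not part of Prowler) walks a dotted attribute path and stops at the first missing link, which is equivalent to the chained `getattr(..., None)` calls:

```python
from types import SimpleNamespace

def deep_getattr(obj, path, default=None):
    """Follow a dotted attribute path, returning `default` at the first miss."""
    for attr in path.split("."):
        obj = getattr(obj, attr, None)
        if obj is None:
            return default
    return obj

# Stand-in for an Azure ProtectionPolicyResource (shape assumed for illustration)
policy = SimpleNamespace(
    properties=SimpleNamespace(
        retention_policy=SimpleNamespace(
            daily_schedule=SimpleNamespace(
                retention_duration=SimpleNamespace(count=30)
            )
        )
    )
)
retention_days = deep_getattr(
    policy, "properties.retention_policy.daily_schedule.retention_duration.count"
)
```

With this helper the `retention_days=` argument collapses to a single call; a missing intermediate object simply yields `None`, matching the behavior of the nested `getattr` chain.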
@@ -1,6 +1,5 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.storage.storage_client import storage_client
from prowler.providers.azure.services.storage.storage_service import ReplicationSettings
class storage_geo_redundant_enabled(Check):
@@ -27,14 +26,16 @@ class storage_geo_redundant_enabled(Check):
report.subscription = subscription
if (
storage_account.replication_settings
== ReplicationSettings.STANDARD_GRS
storage_account.replication_settings == "Standard_GRS"
or storage_account.replication_settings == "Standard_GZRS"
or storage_account.replication_settings == "Standard_RAGRS"
or storage_account.replication_settings == "Standard_RAGZRS"
):
report.status = "PASS"
report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has Geo-redundant storage (GRS) enabled."
report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has Geo-redundant storage {storage_account.replication_settings} enabled."
else:
report.status = "FAIL"
report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} does not have Geo-redundant storage (GRS) enabled."
report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} does not have Geo-redundant storage enabled, it has {storage_account.replication_settings} instead."
findings.append(report)
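The chained `or` comparisons above can be collapsed into a membership test against the set of geo-redundant SKU names; a minimal sketch (helper names are assumptions, not part of the check):

```python
# Geo-redundant replication SKUs accepted by the check above
GEO_REDUNDANT_SKUS = frozenset(
    {"Standard_GRS", "Standard_GZRS", "Standard_RAGRS", "Standard_RAGZRS"}
)

def is_geo_redundant(replication_settings: str) -> bool:
    """True when the storage account SKU provides geo-redundancy."""
    return replication_settings in GEO_REDUNDANT_SKUS
```

Besides being shorter, a frozenset makes it a one-line change to add or remove an accepted SKU later.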
@@ -1,4 +1,3 @@
from enum import Enum
from typing import Optional
from azure.mgmt.storage import StorageManagementClient
@@ -35,7 +34,6 @@ class Storage(AzureService):
key_expiration_period_in_days = int(
storage_account.key_policy.key_expiration_period_in_days
)
replication_settings = ReplicationSettings(storage_account.sku.name)
storage_accounts[subscription].append(
Account(
id=storage_account.id,
@@ -84,7 +82,7 @@ class Storage(AzureService):
False,
)
),
replication_settings=replication_settings,
replication_settings=storage_account.sku.name,
allow_cross_tenant_replication=(
True
if getattr(
@@ -273,17 +271,6 @@ class PrivateEndpointConnection(BaseModel):
type: str
class ReplicationSettings(Enum):
STANDARD_LRS = "Standard_LRS"
STANDARD_GRS = "Standard_GRS"
STANDARD_RAGRS = "Standard_RAGRS"
STANDARD_ZRS = "Standard_ZRS"
PREMIUM_LRS = "Premium_LRS"
PREMIUM_ZRS = "Premium_ZRS"
STANDARD_GZRS = "Standard_GZRS"
STANDARD_RAGZRS = "Standard_RAGZRS"
class SMBProtocolSettings(BaseModel):
channel_encryption: list[str]
supported_versions: list[str]
@@ -310,7 +297,7 @@ class Account(BaseModel):
minimum_tls_version: str
private_endpoint_connections: list[PrivateEndpointConnection]
key_expiration_period_in_days: Optional[int] = None
replication_settings: ReplicationSettings = ReplicationSettings.STANDARD_LRS
replication_settings: str = "Standard_LRS"
allow_cross_tenant_replication: bool = True
allow_shared_key_access: bool = True
blob_properties: Optional[BlobProperties] = None
@@ -0,0 +1,30 @@
{
"Provider": "azure",
"CheckID": "vm_jit_access_enabled",
"CheckTitle": "Enable Just-In-Time Access for Virtual Machines",
"CheckType": [],
"ServiceName": "vm",
"SubServiceName": "",
"ResourceIdTemplate": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}",
"Severity": "high",
"ResourceType": "Microsoft.Compute/virtualMachines",
"Description": "Ensure that Microsoft Azure virtual machines are configured to use Just-in-Time (JIT) access.",
"Risk": "Without JIT access, management ports such as 22 (SSH) and 3389 (RDP) may be exposed, increasing the risk of brute-force and DDoS attacks.",
"RelatedUrl": "https://docs.microsoft.com/en-us/azure/security-center/security-center-just-in-time?tabs=jit-config-asc%2Cjit-request-asc",
"Remediation": {
"Code": {
"CLI": "az security jit-policy list --query '[*].virtualMachines[*].id | []'",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "Enable Just-in-Time (JIT) network access for your Microsoft Azure virtual machines using the Azure Portal under Security Center > Just-in-time VM access.",
"Url": "https://docs.microsoft.com/en-us/azure/security-center/security-center-just-in-time?tabs=jit-config-asc%2Cjit-request-asc"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "JIT access can only be enabled via the Azure Portal. Ensure Security Center standard pricing tier for servers is enabled."
}
@@ -0,0 +1,33 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.defender.defender_client import defender_client
from prowler.providers.azure.services.vm.vm_client import vm_client
class vm_jit_access_enabled(Check):
"""
Ensure that Microsoft Azure virtual machines are configured to use Just-in-Time (JIT) access.
This check evaluates whether JIT access is enabled for each VM to reduce the attack surface.
- PASS: VM has JIT access enabled.
- FAIL: VM does not have JIT access enabled.
"""
def execute(self):
findings = []
jit_enabled_vms = set()
for subscription_name, vms in vm_client.virtual_machines.items():
for jit_policy in defender_client.jit_policies[subscription_name].values():
jit_enabled_vms.update(jit_policy.vm_ids)
for vm in vms.values():
report = Check_Report_Azure(metadata=self.metadata(), resource=vm)
report.subscription = subscription_name
if vm.resource_id.lower() in {
vm_id.lower() for vm_id in jit_enabled_vms
}:
report.status = "PASS"
report.status_extended = f"VM {vm.resource_name} in subscription {subscription_name} has JIT (Just-in-Time) access enabled."
else:
report.status = "FAIL"
report.status_extended = f"VM {vm.resource_name} in subscription {subscription_name} does not have JIT (Just-in-Time) access enabled."
findings.append(report)
return findings
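The check above lowercases every collected VM ID on each comparison and keeps one set that grows across subscriptions. A hypothetical helper that flattens one subscription's `jit_policies` entry into a pre-normalized set makes the intent explicit (the policy objects below are stand-ins for `JITPolicy`):

```python
from types import SimpleNamespace

def jit_protected_vm_ids(policies_by_id):
    """Collect lowercased VM IDs covered by a subscription's JIT policies."""
    vm_ids = set()
    for policy in policies_by_id.values():
        vm_ids.update(vm_id.lower() for vm_id in policy.vm_ids)
    return vm_ids

# Stand-in JIT policies (JITPolicy objects in the real service)
policies = {
    "/sub/1/jit-default": SimpleNamespace(vm_ids={"/SUB/1/VM/Web-01"}),
    "/sub/1/jit-db": SimpleNamespace(vm_ids={"/sub/1/vm/db-01"}),
}
protected = jit_protected_vm_ids(policies)
```

Normalizing once per subscription also avoids rebuilding the lowercased comprehension set for every VM in the inner loop.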
@@ -0,0 +1,30 @@
{
"Provider": "azure",
"CheckID": "vm_sufficient_daily_backup_retention_period",
"CheckTitle": "Ensure there is a sufficient daily backup retention period configured for Azure virtual machines.",
"CheckType": [],
"ServiceName": "vm",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "Microsoft.Compute/virtualMachines",
"Description": "Ensure there is a sufficient daily backup retention period configured for Azure virtual machines.",
"Risk": "Having an optimal daily backup retention period for your Azure virtual machines will enforce your backup strategy to follow the best practices as specified in the compliance regulations promoted by your organization. Retaining VM backups for a longer period of time will allow you to handle more efficiently your data restoration process in the event of a failure.",
"RelatedUrl": "https://docs.microsoft.com/en-us/azure/backup/backup-azure-vms-introduction",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/VirtualMachines/sufficient-backup-retention-period.html",
"Terraform": ""
},
"Recommendation": {
"Text": "Set the daily backup retention period for each VM's backup policy to meet or exceed your organization's minimum requirement.",
"Url": "https://docs.microsoft.com/en-us/azure/backup/backup-azure-vms-introduction"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
}
@@ -0,0 +1,51 @@
from azure.mgmt.recoveryservicesbackup.activestamp.models import DataSourceType
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.recovery.recovery_client import recovery_client
from prowler.providers.azure.services.vm.vm_client import vm_client
class vm_sufficient_daily_backup_retention_period(Check):
"""
Ensure there is a sufficient daily backup retention period configured for Azure virtual machines.
- PASS: The VM has a backup policy with sufficient daily retention period.
- FAIL: The VM does not have a backup policy or the retention period is insufficient.
"""
def execute(self) -> list[Check_Report_Azure]:
findings = []
min_retention_days = getattr(vm_client, "audit_config", {}).get(
"vm_backup_min_daily_retention_days", 7
)
for subscription, vms in vm_client.virtual_machines.items():
vaults = recovery_client.vaults.get(subscription, {})
for vm in vms.values():
backup_found = False
retention_days = None
for vault in vaults.values():
for backup_item in vault.backup_protected_items.values():
if (
backup_item.workload_type == DataSourceType.VM
and backup_item.name.split(";")[-1] == vm.resource_name
):
backup_found = True
policy_id = backup_item.backup_policy_id
if policy_id and policy_id in vault.backup_policies:
retention_days = vault.backup_policies[
policy_id
].retention_days
break
if backup_found:
break
if backup_found and retention_days:
report = Check_Report_Azure(metadata=self.metadata(), resource=vm)
report.subscription = subscription
if retention_days >= min_retention_days:
report.status = "PASS"
report.status_extended = f"VM {vm.resource_name} in subscription {subscription} has a daily backup retention period of {retention_days} days (minimum required: {min_retention_days})."
else:
report.status = "FAIL"
report.status_extended = f"VM {vm.resource_name} in subscription {subscription} has insufficient daily backup retention period of {retention_days} days (minimum required: {min_retention_days})."
findings.append(report)
return findings
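The VM match above relies on the protected item's semicolon-delimited name, taking the last segment as the VM name. A tiny sketch of that parsing (the example name format is an assumption for illustration, not taken from the Azure SDK):

```python
def protected_vm_name(backup_item_name: str) -> str:
    """Return the last ';'-separated segment of a VM backup item name."""
    return backup_item_name.split(";")[-1]

# Assumed shape of an IaaS VM protected-item name
vm_name = protected_vm_name("VM;iaasvmcontainerv2;my-rg;web-01")
```

Names without a semicolon are returned unchanged, so the comparison degrades gracefully for unexpected item names.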

@@ -222,6 +222,8 @@ class Provider(ABC):
env_auth=arguments.env_auth,
az_cli_auth=arguments.az_cli_auth,
browser_auth=arguments.browser_auth,
certificate_auth=arguments.certificate_auth,
certificate_path=arguments.certificate_path,
tenant_id=arguments.tenant_id,
init_modules=arguments.init_modules,
fixer_config=fixer_config,
@@ -659,6 +659,9 @@ class GcpProvider(Provider):
if asset["resource"]["data"].get("name")
else project_id
)
# Handle empty or null project names
if not project_name or project_name.strip() == "":
project_name = "GCP Project"
gcp_project = GCPProject(
number=project_number,
id=project_id,
@@ -717,6 +720,9 @@ class GcpProvider(Provider):
if project.get("name")
else project_id
)
# Handle empty or null project names
if not project_name or project_name.strip() == "":
project_name = "GCP Project"
project_id = project["projectId"]
gcp_project = GCPProject(
number=project_number,
@@ -757,9 +763,15 @@ class GcpProvider(Provider):
# If no projects were able to be accessed via API, add them manually if provided by the user in arguments
if project_ids:
for input_project in project_ids:
# Handle empty or null project names
project_name = (
input_project
if input_project and input_project.strip() != ""
else "GCP Project"
)
projects[input_project] = GCPProject(
id=input_project,
name=input_project,
name=project_name,
number=0,
labels={},
lifecycle_state="ACTIVE",
@@ -768,9 +780,15 @@ class GcpProvider(Provider):
elif credentials_file:
with open(credentials_file, "r", encoding="utf-8") as file:
project_id = json.load(file)["project_id"]
# Handle empty or null project names
project_name = (
project_id
if project_id and project_id.strip() != ""
else "GCP Project"
)
projects[project_id] = GCPProject(
id=project_id,
name=project_id,
name=project_name,
number=0,
labels={},
lifecycle_state="ACTIVE",
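The same empty-or-blank-name fallback now appears in four places in `GcpProvider`; a hypothetical helper would centralize it:

```python
def normalize_project_name(name, fallback="GCP Project"):
    """Return `name` if it is a non-blank string, otherwise the fallback."""
    if name and name.strip():
        return name
    return fallback
```

Each call site then reduces to `project_name = normalize_project_name(raw_name)`, keeping the fallback label in one place.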
@@ -13,7 +13,7 @@ class iam_no_service_roles_at_project_level(Check):
metadata=self.metadata(),
resource=binding,
resource_id=binding.role,
resource_name=binding.role,
resource_name=binding.role if binding.role else "Service Role",
location=cloudresourcemanager_client.region,
)
if binding.role in [
@@ -31,7 +31,6 @@ class iam_no_service_roles_at_project_level(Check):
metadata=self.metadata(),
resource=cloudresourcemanager_client.projects[project],
project_id=project,
resource_name=project,
location=cloudresourcemanager_client.region,
)
report.status = "PASS"
@@ -20,6 +20,7 @@ class logging_log_metric_filter_and_alert_for_audit_configuration_changes_enable
metadata=self.metadata(),
resource=metric,
location=logging_client.region,
resource_name=metric.name if metric.name else "Log Metric Filter",
)
projects_with_metric.add(metric.project_id)
report.status = "FAIL"
@@ -18,6 +18,7 @@ class logging_log_metric_filter_and_alert_for_bucket_permission_changes_enabled(
metadata=self.metadata(),
resource=metric,
location=logging_client.region,
resource_name=metric.name if metric.name else "Log Metric Filter",
)
projects_with_metric.add(metric.project_id)
report.status = "FAIL"
@@ -18,6 +18,7 @@ class logging_log_metric_filter_and_alert_for_custom_role_changes_enabled(Check)
metadata=self.metadata(),
resource=metric,
location=logging_client.region,
resource_name=metric.name if metric.name else "Log Metric Filter",
)
projects_with_metric.add(metric.project_id)
report.status = "FAIL"
@@ -18,6 +18,7 @@ class logging_log_metric_filter_and_alert_for_project_ownership_changes_enabled(
metadata=self.metadata(),
resource=metric,
location=logging_client.region,
resource_name=metric.name if metric.name else "Log Metric Filter",
)
projects_with_metric.add(metric.project_id)
report.status = "FAIL"
@@ -17,6 +17,7 @@ class logging_log_metric_filter_and_alert_for_sql_instance_configuration_changes
metadata=self.metadata(),
resource=metric,
location=logging_client.region,
resource_name=metric.name if metric.name else "Log Metric Filter",
)
projects_with_metric.add(metric.project_id)
report.status = "FAIL"
@@ -18,6 +18,7 @@ class logging_log_metric_filter_and_alert_for_vpc_firewall_rule_changes_enabled(
metadata=self.metadata(),
resource=metric,
location=logging_client.region,
resource_name=metric.name if metric.name else "Log Metric Filter",
)
projects_with_metric.add(metric.project_id)
report.status = "FAIL"
@@ -18,6 +18,7 @@ class logging_log_metric_filter_and_alert_for_vpc_network_changes_enabled(Check)
metadata=self.metadata(),
resource=metric,
location=logging_client.region,
resource_name=metric.name if metric.name else "Log Metric Filter",
)
projects_with_metric.add(metric.project_id)
report.status = "FAIL"
@@ -18,6 +18,7 @@ class logging_log_metric_filter_and_alert_for_vpc_network_route_changes_enabled(
metadata=self.metadata(),
resource=metric,
location=logging_client.region,
resource_name=metric.name if metric.name else "Log Metric Filter",
)
projects_with_metric.add(metric.project_id)
report.status = "FAIL"
@@ -26,6 +26,11 @@ class logging_sink_created(Check):
metadata=self.metadata(),
resource=projects_with_logging_sink[project],
location=logging_client.region,
resource_name=(
projects_with_logging_sink[project].name
if projects_with_logging_sink[project].name
else "Logging Sink"
),
)
report.status = "PASS"
report.status_extended = f"Sink {projects_with_logging_sink[project].name} is enabled exporting copies of all the log entries in project {project}."
@@ -133,6 +133,12 @@ class GithubProvider(Provider):
"""
logger.info("Instantiating GitHub Provider...")
# Mute GitHub library logs to reduce noise since it is already handled by the Prowler logger
import logging
logging.getLogger("github").setLevel(logging.CRITICAL)
logging.getLogger("github.GithubRetry").setLevel(logging.CRITICAL)
# Set repositories and organizations for scoping
self._repositories = repositories or []
self._organizations = organizations or []
@@ -344,10 +350,12 @@ class GithubProvider(Provider):
auth = Auth.Token(session.token)
g = Github(auth=auth, retry=retry_config)
try:
user = g.get_user()
identity = GithubIdentityInfo(
account_id=g.get_user().id,
account_name=g.get_user().login,
account_url=g.get_user().url,
account_id=user.id,
account_name=user.login,
account_url=user.url,
account_email=user.get_emails()[0].email,
)
return identity
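Note that `user.get_emails()[0].email` raises `IndexError` when the account exposes no email addresses, and PyGithub raises when the token lacks the `user:email` scope. A defensive sketch (a hypothetical helper, not part of the provider) that prefers the address marked primary:

```python
from typing import Optional

def primary_email(user) -> Optional[str]:
    """Best-effort primary email; tolerates a missing scope or an empty list."""
    try:
        emails = list(user.get_emails())
    except Exception:
        # Token without user:email scope, or API error
        return None
    for entry in emails:
        if getattr(entry, "primary", False):
            return entry.email
    return emails[0].email if emails else None
```

Returning `None` instead of raising fits the model change below, where `account_email` is `Optional[str]`.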
@@ -359,8 +367,18 @@ class GithubProvider(Provider):
elif session.id != 0 and session.key:
auth = Auth.AppAuth(session.id, session.key)
gi = GithubIntegration(auth=auth, retry=retry_config)
installations = []
for installation in gi.get_installations():
installations.append(
installation.raw_data.get("account", {}).get("login")
)
try:
identity = GithubAppIdentityInfo(app_id=gi.get_app().id)
app = gi.get_app()
identity = GithubAppIdentityInfo(
app_id=app.id,
app_name=app.name,
installations=installations,
)
return identity
except Exception as error:
@@ -387,11 +405,13 @@ class GithubProvider(Provider):
report_lines = [
f"GitHub Account: {Fore.YELLOW}{self.identity.account_name}{Style.RESET_ALL}",
f"GitHub Account ID: {Fore.YELLOW}{self.identity.account_id}{Style.RESET_ALL}",
f"GitHub Account Email: {Fore.YELLOW}{self.identity.account_email}{Style.RESET_ALL}",
f"Authentication Method: {Fore.YELLOW}{self.auth_method}{Style.RESET_ALL}",
]
elif isinstance(self.identity, GithubAppIdentityInfo):
report_lines = [
f"GitHub App ID: {Fore.YELLOW}{self.identity.app_id}{Style.RESET_ALL}",
f"GitHub App Name: {Fore.YELLOW}{self.identity.app_name}{Style.RESET_ALL}",
f"Authentication Method: {Fore.YELLOW}{self.auth_method}{Style.RESET_ALL}",
]
report_title = (
@@ -1,3 +1,5 @@
from typing import Optional
from pydantic.v1 import BaseModel
from prowler.config.config import output_file_timestamp
@@ -14,10 +16,13 @@ class GithubIdentityInfo(BaseModel):
account_id: str
account_name: str
account_url: str
account_email: Optional[str] = None
class GithubAppIdentityInfo(BaseModel):
app_id: str
app_name: str
installations: list[str]
class GithubOutputOptions(ProviderOutputOptions):
@@ -5,6 +5,7 @@ from pydantic.v1 import BaseModel
from prowler.lib.logger import logger
from prowler.providers.github.lib.service.service import GithubService
from prowler.providers.github.models import GithubAppIdentityInfo, GithubIdentityInfo
class Organization(GithubService):
@@ -113,8 +114,15 @@ class Organization(GithubService):
elif not self.provider.repositories:
# Default behavior: get all organizations the user is a member of
# Only when no repositories are specified
for org in client.get_user().get_orgs():
self._process_organization(org, organizations)
if isinstance(self.provider.identity, GithubIdentityInfo):
orgs = client.get_user().get_orgs()
for org in orgs:
self._process_organization(org, organizations)
elif isinstance(self.provider.identity, GithubAppIdentityInfo):
orgs = client.get_organizations()
if orgs.totalCount > 0:
for org in orgs:
self._process_organization(org, organizations)
except github.RateLimitExceededException as error:
logger.error(f"GitHub API rate limit exceeded: {error}")
@@ -2,10 +2,12 @@ from datetime import datetime
from typing import Optional
import github
import requests
from pydantic.v1 import BaseModel
from prowler.lib.logger import logger
from prowler.providers.github.lib.service.service import GithubService
from prowler.providers.github.models import GithubAppIdentityInfo
class Repository(GithubService):
@@ -50,31 +52,74 @@ class Repository(GithubService):
return True
def _get_accessible_repos_graphql(self) -> list[str]:
"""
Use the GitHub GraphQL API to list all repositories that the authentication token has access to.
        This works with fine-grained personal access tokens (PATs).
"""
graphql_url = "https://api.github.com/graphql"
token = self.provider.session.token
headers = {
"Authorization": f"bearer {token}",
"Content-Type": "application/json",
}
query = """
{
viewer {
repositories(first: 100, affiliations: [OWNER, ORGANIZATION_MEMBER]) {
nodes {
nameWithOwner
}
}
}
}
"""
try:
response = requests.post(
graphql_url, json={"query": query}, headers=headers
)
response.raise_for_status()
data = response.json()
if "errors" in data:
logger.error(f"Error in GraphQL query: {data['errors']}")
return []
repo_nodes = (
data.get("data", {})
.get("viewer", {})
.get("repositories", {})
.get("nodes", [])
)
return [repo["nameWithOwner"] for repo in repo_nodes]
except requests.exceptions.RequestException as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return []
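The query above fetches only the first 100 repositories and reads no `pageInfo`, so accounts with more repositories are silently truncated. A cursor-paginated variant would look roughly like this (a sketch, not validated against the live API):

```python
GRAPHQL_URL = "https://api.github.com/graphql"
PAGED_QUERY = """
query($cursor: String) {
  viewer {
    repositories(first: 100, after: $cursor, affiliations: [OWNER, ORGANIZATION_MEMBER]) {
      pageInfo { hasNextPage endCursor }
      nodes { nameWithOwner }
    }
  }
}
"""

def list_all_accessible_repos(token):
    """Follow GraphQL cursors until hasNextPage is False."""
    import requests  # imported lazily so the sketch can be read without the dependency

    headers = {"Authorization": f"bearer {token}"}
    names, cursor = [], None
    while True:
        response = requests.post(
            GRAPHQL_URL,
            json={"query": PAGED_QUERY, "variables": {"cursor": cursor}},
            headers=headers,
            timeout=30,
        )
        response.raise_for_status()
        repos = response.json()["data"]["viewer"]["repositories"]
        names.extend(node["nameWithOwner"] for node in repos["nodes"])
        if not repos["pageInfo"]["hasNextPage"]:
            return names
        cursor = repos["pageInfo"]["endCursor"]
```

The `$cursor` variable starts as `null`, which GitHub treats as "from the beginning", so the first iteration behaves exactly like the unpaginated query.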
def _list_repositories(self):
"""
List repositories based on provider scoping configuration.
Scoping behavior:
- No scoping: Returns all accessible repositories for authenticated user
- Repository scoping: Returns only specified repositories
Example: --repository owner1/repo1 owner2/repo2
- Organization scoping: Returns all repositories from specified organizations
Example: --organization org1 org2
- Combined scoping: Returns specified repositories + all repos from organizations
Example: --repository owner1/repo1 --organization org2
Returns:
dict: Dictionary of repository ID to Repo objects
Raises:
github.GithubException: When GitHub API access fails
github.RateLimitExceededException: When API rate limits are exceeded
If the provider is a GitHub App, it will list repositories in the organizations that the GitHub App is installed in.
If the provider is a user, it will list repositories where the user is a member or owner.
If input repositories are provided, it will list repositories that match the input repositories.
If input organizations are provided, it will list repositories in the organizations that match the input organizations.
"""
logger.info("Repository - Listing Repositories...")
repos = {}
try:
for client in self.clients:
if self.provider.repositories or self.provider.organizations:
if (
self.provider.repositories
or self.provider.organizations
or (
isinstance(self.provider.identity, GithubAppIdentityInfo)
and self.provider.identity.installations
)
):
if self.provider.repositories:
logger.info(
f"Filtering for specific repositories: {self.provider.repositories}"
@@ -108,12 +153,57 @@ class Repository(GithubService):
self._handle_github_api_error(
error, "processing organization", org_name
)
if (
isinstance(self.provider.identity, GithubAppIdentityInfo)
and self.provider.identity.installations
):
logger.info(
f"Filtering for repositories in the organizations or accounts that the GitHub App is installed in: {', '.join(self.provider.identity.installations)}"
)
for org_name in self.provider.identity.installations:
try:
repos_list, _ = self._get_repositories_from_owner(
client, org_name
)
for repo in repos_list:
self._process_repository(repo, repos)
except Exception as error:
self._handle_github_api_error(
error, "processing organization", org_name
)
else:
for repo in client.get_user().get_repos():
self._process_repository(repo, repos)
logger.info(
"No repository or organization specified, discovering accessible repositories via GraphQL API..."
)
accessible_repo_names = self._get_accessible_repos_graphql()
if not accessible_repo_names:
logger.warning(
"Could not find any accessible repositories with the provided token."
)
for repo_name in accessible_repo_names:
try:
repo = client.get_repo(repo_name)
logger.info(
f"Processing repository found via GraphQL: {repo.full_name}"
)
self._process_repository(repo, repos)
except Exception as error:
if hasattr(self, "_handle_github_api_error"):
self._handle_github_api_error(
error,
"accessing repository discovered via GraphQL",
repo_name,
)
else:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except github.RateLimitExceededException as error:
logger.error(f"GitHub API rate limit exceeded: {error}")
raise # Re-raise rate limit errors as they need special handling
raise
except github.GithubException as error:
logger.error(f"GitHub API error while listing repositories: {error}")
except Exception as error:
@@ -124,158 +214,167 @@ class Repository(GithubService):
def _process_repository(self, repo, repos):
"""Process a single repository and extract all its information."""
default_branch = repo.default_branch
securitymd_exists = self._file_exists(repo, "SECURITY.md")
# CODEOWNERS file can be in .github/, root, or docs/
# https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners#codeowners-file-location
codeowners_paths = [
".github/CODEOWNERS",
"CODEOWNERS",
"docs/CODEOWNERS",
]
codeowners_files = [self._file_exists(repo, path) for path in codeowners_paths]
if True in codeowners_files:
codeowners_exists = True
elif all(file is None for file in codeowners_files):
codeowners_exists = None
else:
codeowners_exists = False
delete_branch_on_merge = (
repo.delete_branch_on_merge
if repo.delete_branch_on_merge is not None
else False
)
require_pr = False
approval_cnt = 0
branch_protection = False
required_linear_history = False
allow_force_pushes = True
branch_deletion = True
require_code_owner_reviews = False
require_signed_commits = False
status_checks = False
enforce_admins = False
conversation_resolution = False
try:
branch = repo.get_branch(default_branch)
if branch.protected:
protection = branch.get_protection()
if protection:
require_pr = protection.required_pull_request_reviews is not None
approval_cnt = (
protection.required_pull_request_reviews.required_approving_review_count
if require_pr
else 0
)
required_linear_history = protection.required_linear_history
allow_force_pushes = protection.allow_force_pushes
branch_deletion = protection.allow_deletions
status_checks = protection.required_status_checks is not None
enforce_admins = protection.enforce_admins
conversation_resolution = (
protection.required_conversation_resolution
)
branch_protection = True
require_code_owner_reviews = (
protection.required_pull_request_reviews.require_code_owner_reviews
if require_pr
else False
)
require_signed_commits = branch.get_required_signatures()
except Exception as error:
# If the branch is not found, it is not protected
if "404" in str(error):
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
# Any other error, we cannot know if the branch is protected or not
default_branch = repo.default_branch
securitymd_exists = self._file_exists(repo, "SECURITY.md")
# CODEOWNERS file can be in .github/, root, or docs/
# https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners#codeowners-file-location
codeowners_paths = [
".github/CODEOWNERS",
"CODEOWNERS",
"docs/CODEOWNERS",
]
codeowners_files = [
self._file_exists(repo, path) for path in codeowners_paths
]
if True in codeowners_files:
codeowners_exists = True
elif all(file is None for file in codeowners_files):
codeowners_exists = None
else:
require_pr = None
approval_cnt = None
branch_protection = None
required_linear_history = None
allow_force_pushes = None
branch_deletion = None
require_code_owner_reviews = None
require_signed_commits = None
status_checks = None
enforce_admins = None
conversation_resolution = None
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
codeowners_exists = False
delete_branch_on_merge = (
repo.delete_branch_on_merge
if repo.delete_branch_on_merge is not None
else False
)
secret_scanning_enabled = False
require_pr = False
approval_cnt = 0
branch_protection = False
required_linear_history = False
allow_force_pushes = True
branch_deletion = True
require_code_owner_reviews = False
require_signed_commits = False
status_checks = False
enforce_admins = False
conversation_resolution = False
try:
    branch = repo.get_branch(default_branch)
    if branch.protected:
        protection = branch.get_protection()
        if protection:
            require_pr = (
                protection.required_pull_request_reviews is not None
            )
            approval_cnt = (
                protection.required_pull_request_reviews.required_approving_review_count
                if require_pr
                else 0
            )
            required_linear_history = protection.required_linear_history
            allow_force_pushes = protection.allow_force_pushes
            branch_deletion = protection.allow_deletions
            status_checks = protection.required_status_checks is not None
            enforce_admins = protection.enforce_admins
            conversation_resolution = (
                protection.required_conversation_resolution
            )
            branch_protection = True
            require_code_owner_reviews = (
                protection.required_pull_request_reviews.require_code_owner_reviews
                if require_pr
                else False
            )
            require_signed_commits = branch.get_required_signatures()
except Exception as error:
    # If the branch is not found, it is not protected
    if "404" in str(error):
        logger.warning(
            f"{repo.full_name}: {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
        )
    # Any other error, we cannot know if the branch is protected or not
    else:
        require_pr = None
        approval_cnt = None
        branch_protection = None
        required_linear_history = None
        allow_force_pushes = None
        branch_deletion = None
        require_code_owner_reviews = None
        require_signed_commits = None
        status_checks = None
        enforce_admins = None
        conversation_resolution = None
        logger.error(
            f"{repo.full_name}: {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
        )
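The error handling above keeps a deliberate tri-state: a 404 means the default branch does not exist, so the "unprotected" defaults stand, while any other failure means the protection status cannot be determined and every flag is set to `None`. A minimal sketch of that decision as a standalone function (the function name is illustrative, not from the codebase):

```python
from typing import Optional


def branch_protection_after_error(error: Exception) -> Optional[bool]:
    """Decide what the branch_protection flag should become after
    get_branch()/get_protection() raises."""
    if "404" in str(error):
        # Branch not found: it is definitively not protected.
        return False
    # Any other failure (permissions, rate limit, network): unknown.
    return None
```

Keeping `None` distinct from `False` lets downstream checks report "could not determine" instead of a false negative.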
secret_scanning_enabled = False
dependabot_alerts_enabled = False
try:
    if (
        repo.security_and_analysis
        and repo.security_and_analysis.secret_scanning
    ):
        secret_scanning_enabled = (
            repo.security_and_analysis.secret_scanning.status == "enabled"
        )
    try:
        # Use get_dependabot_alerts to check if Dependabot alerts are enabled
        repo.get_dependabot_alerts().totalCount
        # If the call succeeds, Dependabot is enabled (even if no alerts)
        dependabot_alerts_enabled = True
    except Exception as error:
        error_str = str(error)
        if (
            "403" in error_str
            and "Dependabot alerts are disabled for this repository."
            in error_str
        ):
            dependabot_alerts_enabled = False
        else:
            logger.error(
                f"{repo.full_name}: {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )
            dependabot_alerts_enabled = None
except Exception as error:
    logger.error(
        f"{repo.full_name}: {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
    )
    secret_scanning_enabled = None
    dependabot_alerts_enabled = None
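The inner `except` distinguishes "Dependabot explicitly disabled" (a 403 carrying a specific message) from every other failure, where the state is simply unknown. That classification can be sketched as a pure function (the helper name and the module-level constant are illustrative, not from the codebase):

```python
from typing import Optional

# Exact message GitHub returns when alerts are turned off for a repository.
DISABLED_MSG = "Dependabot alerts are disabled for this repository."


def dependabot_status_from_error(error: Exception) -> Optional[bool]:
    """Map a get_dependabot_alerts() failure to the tri-state flag."""
    msg = str(error)
    if "403" in msg and DISABLED_MSG in msg:
        return False  # the API explicitly reported alerts as disabled
    return None  # permissions/network/other: state cannot be determined
```

Note that a bare 403 (e.g. missing token scope) still maps to `None`, since the token may simply lack permission to read alerts that are in fact enabled.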
repos[repo.id] = Repo(
    id=repo.id,
    name=repo.name,
    owner=repo.owner.login,
    full_name=repo.full_name,
    default_branch=Branch(
        name=default_branch,
        protected=branch_protection,
        default_branch=True,
        require_pull_request=require_pr,
        approval_count=approval_cnt,
        required_linear_history=required_linear_history,
        allow_force_pushes=allow_force_pushes,
        branch_deletion=branch_deletion,
        status_checks=status_checks,
        enforce_admins=enforce_admins,
        conversation_resolution=conversation_resolution,
        require_code_owner_reviews=require_code_owner_reviews,
        require_signed_commits=require_signed_commits,
    ),
    private=repo.private,
    archived=repo.archived,
    pushed_at=repo.pushed_at,
    securitymd=securitymd_exists,
    codeowners_exists=codeowners_exists,
    secret_scanning_enabled=secret_scanning_enabled,
    dependabot_alerts_enabled=dependabot_alerts_enabled,
    delete_branch_on_merge=delete_branch_on_merge,
)
class Branch(BaseModel):
    """Model for Github Branch"""

    name: str
    protected: Optional[bool]
    default_branch: bool
    require_pull_request: Optional[bool]
    approval_count: Optional[int]
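The `Optional` annotations give each protection flag three states: `True`, `False`, and `None` for "could not be determined". A short sketch of how such a model behaves, redeclaring a trimmed `Branch` so it runs standalone (the real model carries more fields):

```python
from typing import Optional

from pydantic import BaseModel


class Branch(BaseModel):
    """Trimmed-down Branch model for illustration."""

    name: str
    protected: Optional[bool]
    default_branch: bool
    require_pull_request: Optional[bool]
    approval_count: Optional[int]


# None records "unknown" (e.g. an unexpected API error during collection),
# which downstream checks can treat differently from an explicit False.
unknown = Branch(
    name="main",
    protected=None,
    default_branch=True,
    require_pull_request=None,
    approval_count=None,
)
```

Checks consuming the model should therefore test `is None` before treating a flag as a failed control.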
