Compare commits


51 Commits

Author SHA1 Message Date
Adrián Jesús Peña Rodríguez ae08623b75 docs(api): update README to emphasize environment variable requirements for production deployment 2025-09-26 11:16:50 +02:00
César Arroba ab727e6816 chore(gha): fix e2e workflow (#8769) 2025-09-25 22:13:53 +05:45
Rubén De la Torre Vico 23d882d7ab feat(mcp): add Prowler App MCP Server (#8744) 2025-09-25 15:21:34 +02:00
Alejandro Bailo 59435167ea fix(scans): update link disable condition for findings table (#8762) 2025-09-25 12:57:22 +02:00
Andoni Alonso 77cdd793f8 fix(aws): cover SNS ResourceID in Quick Inventory output (#8763) 2025-09-25 11:14:32 +02:00
Andoni Alonso d13f3f0e0c docs(gcp): refactor getting started and auth (#8758) 2025-09-25 10:19:01 +02:00
Víctor Fernández Poyatos 56821de2f4 feat(tasks): Move compliance tasks to compliance queue (#8755) 2025-09-24 14:00:17 +02:00
Daniel Barranquero 92190fa69f feat(docs): add renaming checks to developer guide (#8717)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-09-24 11:46:52 +02:00
Prowler Bot 85db7c5183 chore(regions_update): Changes in regions for AWS services (#8736)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-09-24 10:38:12 +02:00
Josema Camacho a55ac266bf chore(django): update django to 5.1.12 due to security problems (#8693) 2025-09-23 16:35:25 +05:45
Andoni Alonso 90622e0437 docs: update Entra SSO SAML video link (#8745) 2025-09-23 12:43:51 +02:00
Pepe Fagoaga 81596250dc fix(actions): lock poetry after changes (#8477) 2025-09-23 14:31:45 +05:45
Rubén De la Torre Vico 43db5fe527 feat(mcp): add basic logger (#8740) 2025-09-23 09:09:38 +02:00
Pepe Fagoaga dfb479fa80 chore(readme): remove deprecations and fix typo (#8739) 2025-09-22 20:31:42 +05:45
Pedro Martín aa88b453ff fix(compliance): change order in models and remove prints (#8738) 2025-09-22 15:45:09 +02:00
Pedro Martín fbda66c6d1 feat(compliance): add name for each compliance (#7920)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-09-22 14:53:27 +02:00
Adrián Jesús Peña Rodríguez 2200e65519 feat(auth): add safeguards to prevent self-role removal and enforce MANAGE_ACCOUNT role presence (#8729) 2025-09-22 14:04:39 +02:00
Josema Camacho b8537aa22d feat(config): add generation for JWT keys if missing (#8655) 2025-09-22 13:14:54 +02:00
Rubén De la Torre Vico cb4a5dec79 chore: set an appropiate User-Agent in requests (#8724) 2025-09-22 12:48:13 +02:00
Rubén De la Torre Vico 0286de7ce2 chore: add mcp_server component labeler configuration (#8737) 2025-09-22 15:40:23 +05:45
Pepe Fagoaga b00602f109 fix(users): only list roles and memberships with manage_account (#8281)
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-09-22 11:25:24 +02:00
Adrián Jesús Peña Rodríguez 1cfae546a0 chore(deps): add markdown package version 3.9 to dependencies (#8735) 2025-09-22 10:44:26 +02:00
Sergio Garcia 05dae4e8d1 fix(iac): handle empty results (#8733) 2025-09-16 14:20:15 +02:00
dependabot[bot] 52ddaca4c5 chore(deps-dev): bump moto from 5.0.28 to 5.1.11 (#7100)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-09-16 14:17:47 +02:00
Alejandro Bailo 940a1202b3 fix: handle 4XX and 204 properly (#8722)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-09-15 17:07:15 +02:00
Prowler Bot ec27451199 chore(regions_update): Changes in regions for AWS services (#8728)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-09-15 15:02:37 +02:00
Sergio Garcia 60e06dcc6e chore(html): support markdown in HTML (#8727) 2025-09-15 11:38:18 +02:00
Hugo Pereira Brito 7733aab088 feat: add additional_urls to finding details and markdown (#8704)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-09-15 11:33:27 +02:00
Pepe Fagoaga 5c6fadcfe7 chore(changelog): remove whitespace in links (#8712) 2025-09-12 17:09:19 +05:45
César Arroba 1bdb314e2c chore(gha): permissions missed for conflict checker action (#8714) 2025-09-12 12:37:12 +02:00
Rubén De la Torre Vico 5b0365947f feat: add first Prowler MCP server version (#8695) 2025-09-12 09:56:36 +02:00
Daniel Barranquero b512f6c421 fix(firehose): false positive in firehose_stream_encrypted_at_rest (#8599)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-09-11 09:55:16 -04:00
Alejandro Bailo c4a8771647 chore(dependencies): update package versions and track them (#8696) 2025-09-11 15:36:06 +02:00
Alejandro Bailo 6f967c6da7 fix(auth): validate email field (#8698) 2025-09-11 15:29:49 +02:00
Alejandro Bailo 82cd29d595 fix(auth): add method attribute to form for proper submission handling (#8699) 2025-09-11 15:02:36 +02:00
Daniel Barranquero 14c2334e1b fix(defender): change policies rules key (#8702) 2025-09-11 13:46:21 +02:00
Rubén De la Torre Vico 3598514cb4 chore(aws/config): adapt metadata to new standarized format (#8641)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-09-10 17:46:11 +02:00
Hugo Pereira Brito c4ba061f30 chore(outputs): adapt to new metadata specification (#8651) 2025-09-10 17:21:19 +02:00
Chandrapal Badshah f4530b21d2 fix(lighthouse): make Enter submit text (#8664)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-09-10 16:34:35 +02:00
Chandrapal Badshah 3949ab736d fix(lighthouse): allow scrolling during AI response streaming (#8669)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-09-10 16:34:24 +02:00
sumit-tft 9da5066b18 feat(ui): add copy link icon to finding detail page (#8685)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-09-10 16:30:16 +02:00
Rubén De la Torre Vico 941539616c chore(aws/neptune): adapt some metadata fields to new format (#8494)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2025-09-10 16:21:30 +02:00
sumit-tft 135fa044b7 feat(ui): Add Prowler Hub menu item with tooltip (#8692)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-09-10 16:09:09 +02:00
Andoni Alonso 48913c1886 docs(aws): refactor getting started and auth (#8683) 2025-09-10 13:45:36 +02:00
Pedro Martín ea20943f83 feat(actions): support dashboard changes in changelog (#8694) 2025-09-10 11:05:56 +02:00
Hugo Pereira Brito 2738cfd1bd feat(dashboard): add Description and markdown support (#8667) 2025-09-10 10:53:53 +02:00
Rubén De la Torre Vico 265c3d818e docs(developer-guide): enhance check metadata format (#8411)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-09-10 09:19:08 +02:00
Alejandro Bailo c0a9fdf8c8 docs(jira): add comprehensive guide for Jira integration in Prowler App (#8681)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
2025-09-09 17:01:12 +02:00
Rubén De la Torre Vico 8b3335f426 chore: add metadata-review label for .metadata.json files (#8689) 2025-09-09 20:32:04 +05:45
Daniel Barranquero 252033d113 fix(compliance): replace old check id with new one (#8682) 2025-09-09 14:25:56 +02:00
Prowler Bot 0bc00dbca4 chore(release): Bump version to v5.13.0 (#8679)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-09-09 16:36:22 +05:45
330 changed files with 12663 additions and 3772 deletions
+2 -37
@@ -85,44 +85,9 @@ DJANGO_CACHE_MAX_AGE=3600
DJANGO_STALE_WHILE_REVALIDATE=60
DJANGO_MANAGE_DB_PARTITIONS=True
# openssl genrsa -out private.pem 2048
DJANGO_TOKEN_SIGNING_KEY="-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDs4e+kt7SnUJek
6V5r9zMGzXCoU5qnChfPiqu+BgANyawz+MyVZPs6RCRfeo6tlCknPQtOziyXYM2I
7X+qckmuzsjqp8+u+o1mw3VvUuJew5k2SQLPYwsiTzuFNVJEOgRo3hywGiGwS2iv
/5nh2QAl7fq2qLqZEXQa5+/xJlQggS1CYxOJgggvLyra50QZlBvPve/AxKJ/EV/Q
irWTZU5lLNI8sH2iZR05vQeBsxZ0dCnGMT+vGl+cGkqrvzQzKsYbDmabMcfTYhYi
78fpv6A4uharJFHayypYBjE39PwhMyyeycrNXlpm1jpq+03HgmDuDMHydk1tNwuT
nEC7m7iNAgMBAAECggEAA2m48nJcJbn9SVi8bclMwKkWmbJErOnyEGEy2sTK3Of+
NWx9BB0FmqAPNxn0ss8K7cANKOhDD7ZLF9E2MO4/HgfoMKtUzHRbM7MWvtEepldi
nnvcUMEgULD8Dk4HnqiIVjt3BdmGiTv46OpBnRWrkSBV56pUL+7msZmMZTjUZvh2
ZWv0+I3gtDIjo2Zo/FiwDV7CfwRjJarRpYUj/0YyuSA4FuOUYl41WAX1I301FKMH
xo3jiAYi1s7IneJ16OtPpOA34Wg5F6ebm/UO0uNe+iD4kCXKaZmxYQPh5tfB0Qa3
qj1T7GNpFNyvtG7VVdauhkb8iu8X/wl6PCwbg0RCKQKBgQD9HfpnpH0lDlHMRw9K
X7Vby/1fSYy1BQtlXFEIPTN/btJ/asGxLmAVwJ2HAPXWlrfSjVAH7CtVmzN7v8oj
HeIHfeSgoWEu1syvnv2AMaYSo03UjFFlfc/GUxF7DUScRIhcJUPCP8jkAROz9nFv
DByNjUL17Q9r43DmDiRsy0IFqQKBgQDvlJ9Uhl+Sp7gRgKYwa/IG0+I4AduAM+Gz
Dxbm52QrMGMTjaJFLmLHBUZ/ot+pge7tZZGws8YR8ufpyMJbMqPjxhIvRRa/p1Tf
E3TQPW93FMsHUvxAgY3MV5MzXFPhlNAKb+akP/RcXUhetGAuZKLubtDCWa55ZQuL
wj2OS+niRQKBgE7K8zUqNi6/22S8xhy/2GPgB1qPObbsABUofK0U6CAGLo6te+gc
6Jo84IyzFtQbDNQFW2Fr+j1m18rw9AqkdcUhQndiZS9AfG07D+zFB86LeWHt4DS4
ymIRX8Kvaak/iDcu/n3Mf0vCrhB6aetImObTj4GgrwlFvtJOmrYnO8EpAoGAIXXP
Xt25gWD9OyyNiVu6HKwA/zN7NYeJcRmdaDhO7B1A6R0x2Zml4AfjlbXoqOLlvLAf
zd79vcoAC82nH1eOPiSOq51plPDI0LMF8IN0CtyTkn1Lj7LIXA6rF1RAvtOqzppc
SvpHpZK9pcRpXnFdtBE0BMDDtl6fYzCIqlP94UUCgYEAnhXbAQMF7LQifEm34Dx8
BizRMOKcqJGPvbO2+Iyt50O5X6onU2ITzSV1QHtOvAazu+B1aG9pEuBFDQ+ASxEu
L9ruJElkOkb/o45TSF6KCsHd55ReTZ8AqnRjf5R+lyzPqTZCXXb8KTcRvWT4zQa3
VxyT2PnaSqEcexWUy4+UXoQ=
-----END PRIVATE KEY-----"
DJANGO_TOKEN_SIGNING_KEY=""
# openssl rsa -in private.pem -pubout -out public.pem
DJANGO_TOKEN_VERIFYING_KEY="-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA7OHvpLe0p1CXpOlea/cz
Bs1wqFOapwoXz4qrvgYADcmsM/jMlWT7OkQkX3qOrZQpJz0LTs4sl2DNiO1/qnJJ
rs7I6qfPrvqNZsN1b1LiXsOZNkkCz2MLIk87hTVSRDoEaN4csBohsEtor/+Z4dkA
Je36tqi6mRF0Gufv8SZUIIEtQmMTiYIILy8q2udEGZQbz73vwMSifxFf0Iq1k2VO
ZSzSPLB9omUdOb0HgbMWdHQpxjE/rxpfnBpKq780MyrGGw5mmzHH02IWIu/H6b+g
OLoWqyRR2ssqWAYxN/T8ITMsnsnKzV5aZtY6avtNx4Jg7gzB8nZNbTcLk5xAu5u4
jQIDAQAB
-----END PUBLIC KEY-----"
DJANGO_TOKEN_VERIFYING_KEY=""
# openssl rand -base64 32
DJANGO_SECRETS_ENCRYPTION_KEY="oE/ltOhp/n1TdbHjVmzcjDPLcLA41CVI/4Rk+UB5ESc="
DJANGO_BROKER_VISIBILITY_TIMEOUT=86400
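The commented `openssl` hints left in the new `.env.example` correspond to a three-step sequence; a sketch of generating the values locally (file names are only illustrative — paste the PEM contents into the variables and keep the key files out of version control):

```shell
# Private RSA key: its PEM content becomes DJANGO_TOKEN_SIGNING_KEY
openssl genrsa -out private.pem 2048

# Matching public key: its PEM content becomes DJANGO_TOKEN_VERIFYING_KEY
openssl rsa -in private.pem -pubout -out public.pem

# Random 32-byte secret for DJANGO_SECRETS_ENCRYPTION_KEY
openssl rand -base64 32
```

Never commit the private key: the diff above removes exactly such a committed example key from the repository.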
+8
@@ -110,6 +110,10 @@ component/ui:
- changed-files:
- any-glob-to-any-file: "ui/**"
component/mcp-server:
- changed-files:
- any-glob-to-any-file: "mcp_server/**"
compliance:
- changed-files:
- any-glob-to-any-file: "prowler/compliance/**"
@@ -119,3 +123,7 @@ compliance:
review-django-migrations:
- changed-files:
- any-glob-to-any-file: "api/src/backend/api/migrations/**"
metadata-review:
- changed-files:
- any-glob-to-any-file: "**/*.metadata.json"
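The labeler rules above can be mimicked to predict which labels a PR would receive. A rough sketch (matching is simplified to prefix/suffix checks, not the real minimatch globbing that actions/labeler uses):

```python
# Simplified stand-ins for the globs in the labeler config above:
# "ui/**", "mcp_server/**", "prowler/compliance/**", "**/*.metadata.json".
RULES = {
    "component/ui": lambda p: p.startswith("ui/"),
    "component/mcp-server": lambda p: p.startswith("mcp_server/"),
    "compliance": lambda p: p.startswith("prowler/compliance/"),
    "metadata-review": lambda p: p.endswith(".metadata.json"),
}

def labels_for(changed_files):
    """Return the sorted set of labels any changed file triggers."""
    return sorted({label for label, match in RULES.items()
                   for path in changed_files if match(path)})

print(labels_for(["mcp_server/server.py",
                  "prowler/compliance/aws/example.metadata.json"]))
# → ['compliance', 'component/mcp-server', 'metadata-review']
```

Note that one file can trigger several labels: a `.metadata.json` under `prowler/compliance/` matches both the `compliance` and the new `metadata-review` rules.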
+6 -6
@@ -102,12 +102,6 @@ jobs:
python -m pip install --upgrade pip
pipx install poetry==2.1.1
- name: Update poetry.lock after the branch name change
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry lock
- name: Update SDK's poetry.lock resolved_reference to latest commit - Only for push events to `master`
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true' && github.event_name == 'push'
@@ -125,6 +119,12 @@ jobs:
echo "Updated resolved_reference:"
grep -A2 -B2 "resolved_reference" poetry.lock
- name: Update poetry.lock
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry lock
- name: Set up Python ${{ matrix.python-version }}
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
@@ -24,6 +24,7 @@ jobs:
permissions:
contents: read
pull-requests: write
issues: write
steps:
- name: Checkout repository
@@ -13,7 +13,7 @@ jobs:
contents: read
pull-requests: write
env:
MONITORED_FOLDERS: "api ui prowler"
MONITORED_FOLDERS: "api ui prowler dashboard"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
+8 -8
@@ -122,7 +122,7 @@ jobs:
files: |
./prowler/providers/aws/**
./tests/providers/aws/**
.poetry.lock
./poetry.lock
- name: AWS - Test
if: steps.aws-changed-files.outputs.any_changed == 'true'
@@ -137,7 +137,7 @@ jobs:
files: |
./prowler/providers/azure/**
./tests/providers/azure/**
.poetry.lock
./poetry.lock
- name: Azure - Test
if: steps.azure-changed-files.outputs.any_changed == 'true'
@@ -152,7 +152,7 @@ jobs:
files: |
./prowler/providers/gcp/**
./tests/providers/gcp/**
.poetry.lock
./poetry.lock
- name: GCP - Test
if: steps.gcp-changed-files.outputs.any_changed == 'true'
@@ -167,7 +167,7 @@ jobs:
files: |
./prowler/providers/kubernetes/**
./tests/providers/kubernetes/**
.poetry.lock
./poetry.lock
- name: Kubernetes - Test
if: steps.kubernetes-changed-files.outputs.any_changed == 'true'
@@ -182,7 +182,7 @@ jobs:
files: |
./prowler/providers/github/**
./tests/providers/github/**
.poetry.lock
./poetry.lock
- name: GitHub - Test
if: steps.github-changed-files.outputs.any_changed == 'true'
@@ -197,7 +197,7 @@ jobs:
files: |
./prowler/providers/nhn/**
./tests/providers/nhn/**
.poetry.lock
./poetry.lock
- name: NHN - Test
if: steps.nhn-changed-files.outputs.any_changed == 'true'
@@ -212,7 +212,7 @@ jobs:
files: |
./prowler/providers/m365/**
./tests/providers/m365/**
.poetry.lock
./poetry.lock
- name: M365 - Test
if: steps.m365-changed-files.outputs.any_changed == 'true'
@@ -227,7 +227,7 @@ jobs:
files: |
./prowler/providers/iac/**
./tests/providers/iac/**
.poetry.lock
./poetry.lock
- name: IaC - Test
if: steps.iac-changed-files.outputs.any_changed == 'true'
@@ -56,7 +56,7 @@ jobs:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
commit-message: "feat(regions_update): Update regions for AWS services"
branch: "aws-services-regions-updated-${{ github.sha }}"
labels: "status/waiting-for-revision, severity/low, provider/aws"
labels: "status/waiting-for-revision, severity/low, provider/aws, no-changelog"
title: "chore(regions_update): Changes in regions for AWS services"
body: |
### Description
+2
@@ -21,6 +21,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Fix API data directory permissions
run: docker run --rm -v $(pwd)/_data/api:/data alpine chown -R 1000:1000 /data
- name: Start API services
run: |
# Override docker-compose image tag to use latest instead of stable
+8
@@ -63,6 +63,7 @@ junit-reports/
# .env
ui/.env*
api/.env*
mcp_server/.env*
.env.local
# Coverage
@@ -78,3 +79,10 @@ _data/
# Claude
CLAUDE.md
# LLM's (Until we have a standard one)
AGENTS.md
# MCP Server
mcp_server/prowler_mcp_server/prowler_app/server.py
mcp_server/prowler_mcp_server/prowler_app/utils/schema.yaml
+1 -29
@@ -301,40 +301,12 @@ And many more environments.
![Architecture](docs/img/architecture.png)
# Deprecations from v3
## General
- `Allowlist` now is called `Mutelist`.
- The `--quiet` option has been deprecated. Use the `--status` flag to filter findings based on their status: PASS, FAIL, or MANUAL.
- All findings with an `INFO` status have been reclassified as `MANUAL`.
- The CSV output format is standardized across all providers.
**Deprecated Output Formats**
The following formats are now deprecated:
- Native JSON has been replaced with JSON in [OCSF](https://schema.ocsf.io/) v1.1.0 format, which is standardized across all providers.
## AWS
**AWS Flag Deprecation**
The flag `--sts-endpoint-region` has been deprecated due to the adoption of AWS STS regional tokens.
**Sending FAIL Results to AWS Security Hub**
- To send only FAILS to AWS Security Hub, use one of the following options: `--send-sh-only-fails` or `--security-hub --status FAIL`.
# 📖 Documentation
**Documentation Resources**
For installation instructions, usage details, tutorials, and the Developer Guide, visit https://docs.prowler.com/
# 📃 License
**Prowler License Information**
Prowler is licensed under the Apache License 2.0, as indicated in each file within the repository. Obtaining a Copy of the License
Prowler is licensed under the Apache License 2.0.
A copy of the License is available at <http://www.apache.org/licenses/LICENSE-2.0>
+24
@@ -2,6 +2,28 @@
All notable changes to the **Prowler API** are documented in this file.
## [1.14.0] (Prowler UNRELEASED)
### Added
- Default JWT keys are generated and stored if they are missing from configuration [(#8655)](https://github.com/prowler-cloud/prowler/pull/8655)
- `compliance_name` for each compliance [(#7920)](https://github.com/prowler-cloud/prowler/pull/7920)
### Changed
- Now the MANAGE_ACCOUNT permission is required to modify or read user permissions instead of MANAGE_USERS [(#8281)](https://github.com/prowler-cloud/prowler/pull/8281)
- Now at least one user with MANAGE_ACCOUNT permission is required in the tenant [(#8729)](https://github.com/prowler-cloud/prowler/pull/8729)
---
## [1.13.1] (Prowler 5.12.2)
### Changed
- Renamed compliance overview task queue to `compliance` [(#8755)](https://github.com/prowler-cloud/prowler/pull/8755)
### Security
- Django updated to the latest 5.1 security release, 5.1.12, due to [problems](https://www.djangoproject.com/weblog/2025/sep/03/security-releases/) with potential SQL injection in FilteredRelation column aliases [(#8693)](https://github.com/prowler-cloud/prowler/pull/8693)
---
## [1.13.0] (Prowler 5.12.0)
### Added
@@ -21,6 +43,8 @@ All notable changes to the **Prowler API** are documented in this file.
### Fixed
- GitHub provider always scans user instead of organization when using provider UID [(#8587)](https://github.com/prowler-cloud/prowler/pull/8587)
---
## [1.11.0] (Prowler 5.10.0)
### Added
+4
@@ -20,6 +20,10 @@ Valkey exposes a Redis 7.2 compliant API. Any service that exposes the Redis API
Under the root path of the project, you can find a file called `.env.example`. This file shows all the environment variables that the project uses. You *must* create a new file called `.env` and set the values for the variables.
If you don't set `DJANGO_TOKEN_SIGNING_KEY` or `DJANGO_TOKEN_VERIFYING_KEY`, the API will generate them at `~/.config/prowler-api/` with `0600` and `0644` permissions; back up these files to persist identity across redeploys.
**Important:** When using the production Docker Compose profile (`docker compose --profile prod`), you **must** set both `DJANGO_TOKEN_SIGNING_KEY` and `DJANGO_TOKEN_VERIFYING_KEY` in your `.env` file, as automatic key generation is not available in this deployment mode.
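The fallback order described here — explicit environment variable first, then the generated file under `~/.config/prowler-api/` — can be sketched as follows (the helper name is hypothetical; the real logic lives in the API's app config):

```python
import os
from pathlib import Path

KEYS_DIRECTORY = Path.home() / ".config" / "prowler-api"

def resolve_signing_key():
    """Illustrative resolution order for the JWT signing key."""
    # 1. Explicit configuration always wins (and is required for the
    #    production Docker Compose profile).
    key = os.environ.get("DJANGO_TOKEN_SIGNING_KEY", "").strip()
    if key:
        return key
    # 2. Fall back to a previously generated key file, if any.
    key_file = KEYS_DIRECTORY / "jwt_private.pem"
    if key_file.is_file():
        return key_file.read_text().strip()
    # 3. Nothing found: the app would generate a fresh pair at startup.
    return None
```

Because step 3 produces a new key pair, any tokens signed with the old key become invalid — which is why the docs recommend backing up the generated files or pinning the keys via environment variables.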
## Local deployment
Keep in mind that, for local deployment, the `.env` file must be exported within the context of the Poetry interpreter, not before; otherwise the variables will not be loaded properly.
+1 -1
@@ -32,7 +32,7 @@ start_prod_server() {
start_worker() {
echo "Starting the worker..."
poetry run python -m celery -A config.celery worker -l "${DJANGO_LOGGING_LEVEL:-info}" -Q celery,scans,scan-reports,deletion,backfill,overview,integrations -E --max-tasks-per-child 1
poetry run python -m celery -A config.celery worker -l "${DJANGO_LOGGING_LEVEL:-info}" -Q celery,scans,scan-reports,deletion,backfill,overview,integrations,compliance -E --max-tasks-per-child 1
}
start_worker_beat() {
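Adding `compliance` to the worker's `-Q` list only makes this worker consume the new queue; tasks still need to be routed to it. A minimal, self-contained sketch of name-based routing (task names and patterns are illustrative, not Prowler's actual ones):

```python
# Hypothetical router mirroring how Celery's task_routes setting resolves
# a queue from a task name: first matching prefix pattern wins, otherwise
# the task lands on the default "celery" queue.
TASK_ROUTES = {
    "tasks.compliance.*": {"queue": "compliance"},
    "tasks.scans.*": {"queue": "scans"},
}

def resolve_queue(task_name, default="celery"):
    for pattern, route in TASK_ROUTES.items():
        prefix = pattern.rstrip("*")
        if task_name.startswith(prefix):
            return route["queue"]
    return default

print(resolve_queue("tasks.compliance.generate_overview"))  # → compliance
```

If a queue is routed to but missing from some worker's `-Q` list, its tasks simply sit unconsumed — which is why the queue list in the entrypoint and the routing configuration have to change together.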
+26 -5
@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.1.3 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.1.4 and should not be changed by hand.
[[package]]
name = "about-time"
@@ -1511,14 +1511,14 @@ with-social = ["django-allauth[socialaccount] (>=64.0.0)"]
[[package]]
name = "django"
version = "5.1.10"
version = "5.1.12"
description = "A high-level Python web framework that encourages rapid development and clean, pragmatic design."
optional = false
python-versions = ">=3.10"
groups = ["main", "dev"]
files = [
{file = "django-5.1.10-py3-none-any.whl", hash = "sha256:19c9b771e9cf4de91101861aadd2daaa159bcf10698ca909c5755c88e70ccb84"},
{file = "django-5.1.10.tar.gz", hash = "sha256:73e5d191421d177803dbd5495d94bc7d06d156df9561f4eea9e11b4994c07137"},
{file = "django-5.1.12-py3-none-any.whl", hash = "sha256:9eb695636cea3601b65690f1596993c042206729afb320ca0960b55f8ed4477b"},
{file = "django-5.1.12.tar.gz", hash = "sha256:8a8991b1ec052ef6a44fefd1ef336ab8daa221287bcb91a4a17d5e1abec5bbcc"},
]
[package.dependencies]
@@ -2933,6 +2933,22 @@ html5 = ["html5lib"]
htmlsoup = ["BeautifulSoup4"]
source = ["Cython (>=3.0.11,<3.1.0)"]
[[package]]
name = "markdown"
version = "3.9"
description = "Python implementation of John Gruber's Markdown."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "markdown-3.9-py3-none-any.whl", hash = "sha256:9f4d91ed810864ea88a6f32c07ba8bee1346c0cc1f6b1f9f6c822f2a9667d280"},
{file = "markdown-3.9.tar.gz", hash = "sha256:d2900fe1782bd33bdbbd56859defef70c2e78fc46668f8eb9df3128138f2cb6a"},
]
[package.extras]
docs = ["mdx_gh_links (>=0.2)", "mkdocs (>=1.6)", "mkdocs-gen-files", "mkdocs-literate-nav", "mkdocs-nature (>=0.6)", "mkdocs-section-index", "mkdocstrings[python]"]
testing = ["coverage", "pyyaml"]
[[package]]
name = "markdown-it-py"
version = "4.0.0"
@@ -5223,6 +5239,7 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f66efbc1caa63c088dead1c4170d148eabc9b80d95fb75b6c92ac0aad2437d76"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:22353049ba4181685023b25b5b51a574bce33e7f51c759371a7422dcae5402a6"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:932205970b9f9991b34f55136be327501903f7c66830e9760a8ffb15b07f05cd"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:a52d48f4e7bf9005e8f0a89209bf9a73f7190ddf0489eee5eb51377385f59f2a"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-win32.whl", hash = "sha256:3eac5a91891ceb88138c113f9db04f3cebdae277f5d44eaa3651a4f573e6a5da"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-win_amd64.whl", hash = "sha256:ab007f2f5a87bd08ab1499bdf96f3d5c6ad4dcfa364884cb4549aa0154b13a28"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-macosx_13_0_arm64.whl", hash = "sha256:4a6679521a58256a90b0d89e03992c15144c5f3858f40d7c18886023d7943db6"},
@@ -5231,6 +5248,7 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:811ea1594b8a0fb466172c384267a4e5e367298af6b228931f273b111f17ef52"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:cf12567a7b565cbf65d438dec6cfbe2917d3c1bdddfce84a9930b7d35ea59642"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:7dd5adc8b930b12c8fc5b99e2d535a09889941aa0d0bd06f4749e9a9397c71d2"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1492a6051dab8d912fc2adeef0e8c72216b24d57bd896ea607cb90bb0c4981d3"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-win32.whl", hash = "sha256:bd0a08f0bab19093c54e18a14a10b4322e1eacc5217056f3c063bd2f59853ce4"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-win_amd64.whl", hash = "sha256:a274fb2cb086c7a3dea4322ec27f4cb5cc4b6298adb583ab0e211a4682f241eb"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:20b0f8dc160ba83b6dcc0e256846e1a02d044e13f7ea74a3d1d56ede4e48c632"},
@@ -5239,6 +5257,7 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:749c16fcc4a2b09f28843cda5a193e0283e47454b63ec4b81eaa2242f50e4ccd"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:bf165fef1f223beae7333275156ab2022cffe255dcc51c27f066b4370da81e31"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:32621c177bbf782ca5a18ba4d7af0f1082a3f6e517ac2a18b3974d4edf349680"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b82a7c94a498853aa0b272fd5bc67f29008da798d4f93a2f9f289feb8426a58d"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-win32.whl", hash = "sha256:e8c4ebfcfd57177b572e2040777b8abc537cdef58a2120e830124946aa9b42c5"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-win_amd64.whl", hash = "sha256:0467c5965282c62203273b838ae77c0d29d7638c8a4e3a1c8bdd3602c10904e4"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:4c8c5d82f50bb53986a5e02d1b3092b03622c02c2eb78e29bec33fd9593bae1a"},
@@ -5247,6 +5266,7 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96777d473c05ee3e5e3c3e999f5d23c6f4ec5b0c38c098b3a5229085f74236c6"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_1_i686.whl", hash = "sha256:3bc2a80e6420ca8b7d3590791e2dfc709c88ab9152c00eeb511c9875ce5778bf"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:e188d2699864c11c36cdfdada94d781fd5d6b0071cd9c427bceb08ad3d7c70e1"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:4f6f3eac23941b32afccc23081e1f50612bdbe4e982012ef4f5797986828cd01"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-win32.whl", hash = "sha256:6442cb36270b3afb1b4951f060eccca1ce49f3d087ca1ca4563a6eb479cb3de6"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-win_amd64.whl", hash = "sha256:e5b8daf27af0b90da7bb903a876477a9e6d7270be6146906b276605997c7e9a3"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:fc4b630cd3fa2cf7fce38afa91d7cfe844a9f75d7f0f36393fa98815e911d987"},
@@ -5255,6 +5275,7 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e2f1c3765db32be59d18ab3953f43ab62a761327aafc1594a2a1fbe038b8b8a7"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:d85252669dc32f98ebcd5d36768f5d4faeaeaa2d655ac0473be490ecdae3c285"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:e143ada795c341b56de9418c58d028989093ee611aa27ffb9b7f609c00d813ed"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:2c59aa6170b990d8d2719323e628aaf36f3bfbc1c26279c0eeeb24d05d2d11c7"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-win32.whl", hash = "sha256:beffaed67936fbbeffd10966a4eb53c402fafd3d6833770516bf7314bc6ffa12"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-win_amd64.whl", hash = "sha256:040ae85536960525ea62868b642bdb0c2cc6021c9f9d507810c0c604e66f5a7b"},
{file = "ruamel.yaml.clib-0.2.12.tar.gz", hash = "sha256:6c8fbb13ec503f99a91901ab46e0b07ae7941cd527393187039aec586fdfd36f"},
@@ -6160,4 +6181,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.11,<3.13"
content-hash = "b954196aba7e108cacb94fd15732be7130b27379add09140fabbb55f7335bb7b"
content-hash = "91058a14382b76136a82f45624a30aece7a6d77c8b36c290bb4c40ea60c8850b"
+4 -3
@@ -7,7 +7,7 @@ authors = [{name = "Prowler Engineering", email = "engineering@prowler.com"}]
dependencies = [
"celery[pytest] (>=5.4.0,<6.0.0)",
"dj-rest-auth[with_social,jwt] (==7.0.1)",
"django==5.1.10",
"django (==5.1.12)",
"django-allauth[saml] (>=65.8.0,<66.0.0)",
"django-celery-beat (>=2.7.0,<3.0.0)",
"django-celery-results (>=2.5.1,<3.0.0)",
@@ -31,7 +31,8 @@ dependencies = [
"uuid6==2024.7.10",
"openai (>=1.82.0,<2.0.0)",
"xmlsec==1.3.14",
"h2 (==4.3.0)"
"h2 (==4.3.0)",
"markdown (>=3.9,<4.0)"
]
description = "Prowler's API (Django/DRF)"
license = "Apache-2.0"
@@ -39,7 +40,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
version = "1.13.0"
version = "1.14.0"
[project.scripts]
celery = "src.backend.config.settings.celery"
+158
@@ -1,4 +1,28 @@
import logging
import os
from pathlib import Path
import sys
from django.apps import AppConfig
from django.conf import settings
from config.custom_logging import BackendLogger
from config.env import env
logger = logging.getLogger(BackendLogger.API)
SIGNING_KEY_ENV = "DJANGO_TOKEN_SIGNING_KEY"
VERIFYING_KEY_ENV = "DJANGO_TOKEN_VERIFYING_KEY"
PRIVATE_KEY_FILE = "jwt_private.pem"
PUBLIC_KEY_FILE = "jwt_public.pem"
KEYS_DIRECTORY = (
Path.home() / ".config" / "prowler-api"
) # `/home/prowler/.config/prowler-api` inside the container
_keys_initialized = False # Flag to prevent multiple executions within the same process
class ApiConfig(AppConfig):
@@ -9,4 +33,138 @@ class ApiConfig(AppConfig):
from api import signals # noqa: F401
from api.compliance import load_prowler_compliance
# Generate required cryptographic keys if not present, but only if:
# `"manage.py" not in sys.argv`: If an external server (e.g., Gunicorn) is running the app
# `os.environ.get("RUN_MAIN")`: If it's not a Django command or using `runserver`,
# only the main process will do it
if "manage.py" not in sys.argv or os.environ.get("RUN_MAIN"):
self._ensure_crypto_keys()
load_prowler_compliance()
def _ensure_crypto_keys(self):
"""
Orchestrator method that ensures all required cryptographic keys are present.
This method coordinates the generation of:
- RSA key pairs for JWT token signing and verification
Note: During development, Django spawns multiple processes (migrations, fixtures, etc.)
which will each generate their own keys. This is expected behavior and each process
will have consistent keys for its lifetime. In production, set the keys as environment
variables to avoid regeneration.
"""
global _keys_initialized
# Skip key generation if running tests
if hasattr(settings, "TESTING") and settings.TESTING:
return
# Skip if already initialized in this process
if _keys_initialized:
return
# Check if both JWT keys are set; if not, generate them
signing_key = env.str(SIGNING_KEY_ENV, default="").strip()
verifying_key = env.str(VERIFYING_KEY_ENV, default="").strip()
if not signing_key or not verifying_key:
logger.info(
f"Generating JWT RSA key pair. In production, set '{SIGNING_KEY_ENV}' and '{VERIFYING_KEY_ENV}' "
"environment variables."
)
self._ensure_jwt_keys()
# Mark as initialized to prevent future executions in this process
_keys_initialized = True
def _read_key_file(self, file_name):
"""
Utility method to read the contents of a file.
"""
file_path = KEYS_DIRECTORY / file_name
return file_path.read_text().strip() if file_path.is_file() else None
def _write_key_file(self, file_name, content, private=True):
"""
Utility method to write content to a file.
"""
try:
file_path = KEYS_DIRECTORY / file_name
file_path.parent.mkdir(parents=True, exist_ok=True)
file_path.write_text(content)
file_path.chmod(0o600 if private else 0o644)
except Exception as e:
logger.error(
f"Error writing key file '{file_name}': {e}. "
f"Please set '{SIGNING_KEY_ENV}' and '{VERIFYING_KEY_ENV}' manually."
)
raise e
def _ensure_jwt_keys(self):
"""
Generate RSA key pairs for JWT token signing and verification
if they are not already set in environment variables.
"""
# Read existing keys from files if they exist
signing_key = self._read_key_file(PRIVATE_KEY_FILE)
verifying_key = self._read_key_file(PUBLIC_KEY_FILE)
if not signing_key or not verifying_key:
# Generate and store the RSA key pair
signing_key, verifying_key = self._generate_jwt_keys()
self._write_key_file(PRIVATE_KEY_FILE, signing_key, private=True)
self._write_key_file(PUBLIC_KEY_FILE, verifying_key, private=False)
logger.info("JWT keys generated and stored successfully")
else:
logger.info("JWT keys already generated")
# Set environment variables and Django settings
os.environ[SIGNING_KEY_ENV] = signing_key
settings.SIMPLE_JWT["SIGNING_KEY"] = signing_key
os.environ[VERIFYING_KEY_ENV] = verifying_key
settings.SIMPLE_JWT["VERIFYING_KEY"] = verifying_key
def _generate_jwt_keys(self):
"""
Generate and set RSA key pairs for JWT token operations.
"""
try:
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa
# Generate RSA key pair
private_key = rsa.generate_private_key( # Future improvement: we could read the next values from env vars
public_exponent=65537,
key_size=2048,
)
# Serialize private key (for signing)
private_pem = private_key.private_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PrivateFormat.PKCS8,
encryption_algorithm=serialization.NoEncryption(),
).decode("utf-8")
# Serialize public key (for verification)
public_key = private_key.public_key()
public_pem = public_key.public_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PublicFormat.SubjectPublicKeyInfo,
).decode("utf-8")
logger.debug("JWT RSA key pair generated successfully.")
return private_pem, public_pem
except ImportError as e:
logger.warning(
"The 'cryptography' package is required for automatic JWT key generation."
)
raise
except Exception as e:
logger.error(
f"Error generating JWT keys: {e}. Please set '{SIGNING_KEY_ENV}' and '{VERIFYING_KEY_ENV}' manually."
)
raise
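The ensure/read/write flow above can be sketched end-to-end with the standard library only. All names below — the env var names, file names, and the `ensure_keys` helper — are hypothetical stand-ins for the constants defined in `api.apps`, and the generator callable replaces the real RSA generation:

```python
import os
import tempfile
from pathlib import Path

SIGNING_KEY_ENV = "JWT_SIGNING_KEY"      # hypothetical; real name lives in api.apps
VERIFYING_KEY_ENV = "JWT_VERIFYING_KEY"  # hypothetical; real name lives in api.apps


def ensure_keys(keys_dir: Path, generate):
    """Read PEM files if present, otherwise generate and persist them,
    then export both keys through environment variables."""
    priv_path = keys_dir / "jwt_private.pem"  # hypothetical file names
    pub_path = keys_dir / "jwt_public.pem"
    if priv_path.is_file() and pub_path.is_file():
        signing = priv_path.read_text().strip()
        verifying = pub_path.read_text().strip()
    else:
        signing, verifying = generate()
        keys_dir.mkdir(parents=True, exist_ok=True)
        priv_path.write_text(signing)
        priv_path.chmod(0o600)  # private key readable by owner only
        pub_path.write_text(verifying)
        pub_path.chmod(0o644)
    os.environ[SIGNING_KEY_ENV] = signing
    os.environ[VERIFYING_KEY_ENV] = verifying
    return signing, verifying


calls = []


def fake_generate():
    calls.append("generated")
    return "PRIV", "PUB"


with tempfile.TemporaryDirectory() as tmp:
    first = ensure_keys(Path(tmp), fake_generate)
    second = ensure_keys(Path(tmp), fake_generate)  # reuses files; no second generation
```

The second call reads the persisted files instead of regenerating, mirroring the idempotence that the tests further down verify for `_ensure_crypto_keys`.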
+1
@@ -225,6 +225,7 @@ def generate_compliance_overview_template(prowler_compliance: dict):
# Build compliance dictionary
compliance_dict = {
"framework": compliance_data.Framework,
"name": compliance_data.Name,
"version": compliance_data.Version,
"provider": provider_type,
"description": compliance_data.Description,
File diff suppressed because one or more lines are too long
+54 -49
@@ -1,7 +1,7 @@
openapi: 3.0.3
info:
title: Prowler API
version: 1.13.0
version: 1.14.0
description: |-
Prowler API specification.
@@ -182,6 +182,7 @@ paths:
type: string
enum:
- id
- compliance_name
- framework_description
- name
- framework
@@ -6363,8 +6364,10 @@ paths:
description: ''
patch:
operationId: roles_partial_update
description: Update certain fields of an existing role's information without
affecting other fields.
description: Update selected fields on an existing role. When changing the `users`
relationship of a role that grants MANAGE_ACCOUNT, the API blocks attempts
that would leave the tenant without any MANAGE_ACCOUNT assignees and prevents
callers from removing their own assignment to that role.
summary: Partially update a role
parameters:
- in: path
@@ -6399,7 +6402,8 @@ paths:
description: ''
delete:
operationId: roles_destroy
description: Remove a role from the system by their ID.
description: Delete the specified role. The API rejects deletion of the last
role in the tenant that grants MANAGE_ACCOUNT.
summary: Delete a role
parameters:
- in: path
@@ -8220,6 +8224,7 @@ paths:
type: string
enum:
- roles
- memberships
description: include query parameter to allow the client to customize which
related resources should be returned.
explode: false
@@ -8339,6 +8344,7 @@ paths:
type: string
enum:
- roles
- memberships
description: include query parameter to allow the client to customize which
related resources should be returned.
explode: false
@@ -8437,7 +8443,8 @@ paths:
patch:
operationId: users_relationships_roles_partial_update
description: Update the user-roles relationship information without affecting
other fields.
other fields. If the update would remove MANAGE_ACCOUNT from the last remaining
user in the tenant, the API rejects the request with a 400 response.
summary: Partially update a user-roles relationship
tags:
- User
@@ -8461,6 +8468,10 @@ paths:
delete:
operationId: users_relationships_roles_destroy
description: Remove the user-roles relationship from the system by its ID.
If removing MANAGE_ACCOUNT would take it away from the last remaining user
in the tenant, the API rejects the request with a 400 response. Users also
cannot delete their own role assignments; attempting to do so returns a 400
response.
summary: Delete a user-roles relationship
tags:
- User
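The MANAGE_ACCOUNT invariant described in these endpoint descriptions can be sketched as a pure check — a hypothetical helper for illustration, not the API's actual implementation:

```python
def removal_leaves_manage_account(assignments, target_user_id):
    """assignments: iterable of (user_id, grants_manage_account) pairs for the
    tenant's current role assignments. Returns True if removing all of
    target_user_id's assignments would still leave at least one
    MANAGE_ACCOUNT holder in the tenant."""
    return any(
        grants and user_id != target_user_id
        for user_id, grants in assignments
    )


# Two MANAGE_ACCOUNT holders: removing one is allowed.
ok = removal_leaves_manage_account([("u1", True), ("u2", True)], "u1")

# u1 is the last holder: the API would reject this request with a 400.
blocked = removal_leaves_manage_account([("u1", True), ("u2", False)], "u1")
```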
@@ -8652,6 +8663,7 @@ paths:
type: string
enum:
- roles
- memberships
description: include query parameter to allow the client to customize which
related resources should be returned.
explode: false
@@ -8728,6 +8740,8 @@ components:
properties:
id:
type: string
compliance_name:
type: string
framework_description:
type: string
name:
@@ -8741,6 +8755,7 @@ components:
attributes: {}
required:
- id
- compliance_name
- framework_description
- name
- framework
@@ -15553,59 +15568,49 @@ components:
type: object
properties:
data:
type: array
items:
type: object
properties:
id:
type: string
format: uuid
title: Resource Identifier
description: The identifier of the related object.
type:
type: string
enum:
- memberships
title: Resource Type Name
description: The [type](https://jsonapi.org/format/#document-resource-object-identification)
member is used to describe resource objects that share common
attributes and relationships.
required:
- id
- type
type: object
properties:
id:
type: string
type:
type: string
enum:
- memberships
title: Resource Type Name
description: The [type](https://jsonapi.org/format/#document-resource-object-identification)
member is used to describe resource objects that share common
attributes and relationships.
required:
- id
- type
required:
- data
description: A related resource object from type memberships
title: memberships
description: The identifier of the related object.
title: Resource Identifier
readOnly: true
roles:
type: object
properties:
data:
type: array
items:
type: object
properties:
id:
type: string
format: uuid
title: Resource Identifier
description: The identifier of the related object.
type:
type: string
enum:
- roles
title: Resource Type Name
description: The [type](https://jsonapi.org/format/#document-resource-object-identification)
member is used to describe resource objects that share common
attributes and relationships.
required:
- id
- type
type: object
properties:
id:
type: string
type:
type: string
enum:
- roles
title: Resource Type Name
description: The [type](https://jsonapi.org/format/#document-resource-object-identification)
member is used to describe resource objects that share common
attributes and relationships.
required:
- id
- type
required:
- data
description: A related resource object from type roles
title: roles
description: The identifier of the related object.
title: Resource Identifier
readOnly: true
UserCreate:
type: object
+152
@@ -0,0 +1,152 @@
import os
from pathlib import Path
from unittest.mock import MagicMock
import pytest
from django.conf import settings
import api.apps as api_apps_module
from api.apps import (
ApiConfig,
PRIVATE_KEY_FILE,
PUBLIC_KEY_FILE,
SIGNING_KEY_ENV,
VERIFYING_KEY_ENV,
)
@pytest.fixture(autouse=True)
def reset_keys_initialized(monkeypatch):
"""Ensure per-test clean state for the module-level guard flag."""
monkeypatch.setattr(api_apps_module, "_keys_initialized", False, raising=False)
def _stub_keys():
return (
"""-----BEGIN PRIVATE KEY-----\nPRIVATE\n-----END PRIVATE KEY-----\n""",
"""-----BEGIN PUBLIC KEY-----\nPUBLIC\n-----END PUBLIC KEY-----\n""",
)
def test_generate_jwt_keys_when_missing(monkeypatch, tmp_path):
# Arrange: isolate FS, env, and settings; force generation path
monkeypatch.setattr(
api_apps_module, "KEYS_DIRECTORY", Path(tmp_path), raising=False
)
monkeypatch.delenv(SIGNING_KEY_ENV, raising=False)
monkeypatch.delenv(VERIFYING_KEY_ENV, raising=False)
# Work on a copy of SIMPLE_JWT to avoid mutating the global settings dict for other tests
monkeypatch.setattr(
settings, "SIMPLE_JWT", settings.SIMPLE_JWT.copy(), raising=False
)
monkeypatch.setattr(settings, "TESTING", False, raising=False)
# Avoid dependency on the cryptography package
monkeypatch.setattr(ApiConfig, "_generate_jwt_keys", staticmethod(_stub_keys))
config = ApiConfig("api", api_apps_module)
# Act
config._ensure_crypto_keys()
# Assert: files created with expected content
priv_path = Path(tmp_path) / PRIVATE_KEY_FILE
pub_path = Path(tmp_path) / PUBLIC_KEY_FILE
assert priv_path.is_file()
assert pub_path.is_file()
assert priv_path.read_text() == _stub_keys()[0]
assert pub_path.read_text() == _stub_keys()[1]
# Env vars and Django settings updated
assert os.environ[SIGNING_KEY_ENV] == _stub_keys()[0]
assert os.environ[VERIFYING_KEY_ENV] == _stub_keys()[1]
assert settings.SIMPLE_JWT["SIGNING_KEY"] == _stub_keys()[0]
assert settings.SIMPLE_JWT["VERIFYING_KEY"] == _stub_keys()[1]
def test_ensure_crypto_keys_are_idempotent_within_process(monkeypatch, tmp_path):
# Arrange
monkeypatch.setattr(
api_apps_module, "KEYS_DIRECTORY", Path(tmp_path), raising=False
)
monkeypatch.delenv(SIGNING_KEY_ENV, raising=False)
monkeypatch.delenv(VERIFYING_KEY_ENV, raising=False)
monkeypatch.setattr(
settings, "SIMPLE_JWT", settings.SIMPLE_JWT.copy(), raising=False
)
monkeypatch.setattr(settings, "TESTING", False, raising=False)
mock_generate = MagicMock(side_effect=_stub_keys)
monkeypatch.setattr(ApiConfig, "_generate_jwt_keys", staticmethod(mock_generate))
config = ApiConfig("api", api_apps_module)
# Act: first call should generate, second should be a no-op (guard flag)
config._ensure_crypto_keys()
config._ensure_crypto_keys()
# Assert: generation occurred exactly once
assert mock_generate.call_count == 1
def test_ensure_jwt_keys_uses_existing_files(monkeypatch, tmp_path):
# Arrange: pre-create key files
monkeypatch.setattr(
api_apps_module, "KEYS_DIRECTORY", Path(tmp_path), raising=False
)
monkeypatch.setattr(
settings, "SIMPLE_JWT", settings.SIMPLE_JWT.copy(), raising=False
)
existing_private, existing_public = _stub_keys()
(Path(tmp_path) / PRIVATE_KEY_FILE).write_text(existing_private)
(Path(tmp_path) / PUBLIC_KEY_FILE).write_text(existing_public)
# If generation were called, fail the test
def _fail_generate():
raise AssertionError("_generate_jwt_keys should not be called when files exist")
monkeypatch.setattr(ApiConfig, "_generate_jwt_keys", staticmethod(_fail_generate))
config = ApiConfig("api", api_apps_module)
# Act: call the lower-level method directly to set env/settings from files
config._ensure_jwt_keys()
# Assert
# _read_key_file() strips trailing newlines; environment/settings should reflect stripped content
assert os.environ[SIGNING_KEY_ENV] == existing_private.strip()
assert os.environ[VERIFYING_KEY_ENV] == existing_public.strip()
assert settings.SIMPLE_JWT["SIGNING_KEY"] == existing_private.strip()
assert settings.SIMPLE_JWT["VERIFYING_KEY"] == existing_public.strip()
def test_ensure_crypto_keys_skips_when_env_vars(monkeypatch, tmp_path):
# Arrange: put values in env so the orchestrator doesn't generate
monkeypatch.setattr(
api_apps_module, "KEYS_DIRECTORY", Path(tmp_path), raising=False
)
monkeypatch.setenv(SIGNING_KEY_ENV, "ENV-PRIVATE")
monkeypatch.setenv(VERIFYING_KEY_ENV, "ENV-PUBLIC")
monkeypatch.setattr(
settings, "SIMPLE_JWT", settings.SIMPLE_JWT.copy(), raising=False
)
monkeypatch.setattr(settings, "TESTING", False, raising=False)
called = {"ensure": False}
def _track_call():
called["ensure"] = True
return _stub_keys()
monkeypatch.setattr(ApiConfig, "_generate_jwt_keys", staticmethod(_track_call))
config = ApiConfig("api", api_apps_module)
# Act
config._ensure_crypto_keys()
# Assert: orchestrator did not trigger generation when env present
assert called["ensure"] is False
@@ -239,6 +239,7 @@ class TestCompliance:
Framework="Framework 1",
Version="1.0",
Description="Description of compliance1",
Name="Compliance 1",
)
prowler_compliance = {"aws": {"compliance1": compliance1}}
@@ -248,6 +249,7 @@ class TestCompliance:
"aws": {
"compliance1": {
"framework": "Framework 1",
"name": "Compliance 1",
"version": "1.0",
"provider": "aws",
"description": "Description of compliance1",
+336
@@ -1,3 +1,4 @@
import json
from unittest.mock import ANY, Mock, patch
import pytest
@@ -151,6 +152,221 @@ class TestUserViewSet:
assert response.status_code == status.HTTP_200_OK
assert response.json()["data"]["attributes"]["email"] == "rbac_limited@rbac.com"
def test_me_shows_own_roles_and_memberships_without_manage_account(
self, authenticated_client_no_permissions_rbac
):
response = authenticated_client_no_permissions_rbac.get(reverse("user-me"))
assert response.status_code == status.HTTP_200_OK
rels = response.json()["data"]["relationships"]
# Self should see own roles and memberships even without manage_account
assert isinstance(rels["roles"]["data"], list)
assert rels["memberships"]["meta"]["count"] == 1
def test_me_shows_roles_and_memberships_with_manage_account(
self, authenticated_client_rbac
):
response = authenticated_client_rbac.get(reverse("user-me"))
assert response.status_code == status.HTTP_200_OK
rels = response.json()["data"]["relationships"]
# Roles should have data when manage_account is True
assert len(rels["roles"]["data"]) > 0
# Memberships should be present and count > 0
assert rels["memberships"]["meta"]["count"] > 0
def test_me_include_roles_and_memberships_included_block(
self, authenticated_client_rbac
):
# Request current user info including roles and memberships
response = authenticated_client_rbac.get(
reverse("user-me"), {"include": "roles,memberships"}
)
assert response.status_code == status.HTTP_200_OK
payload = response.json()
# Included must contain memberships corresponding to relationships data
rel_memberships = payload["data"]["relationships"]["memberships"]
ids_in_relationship = {item["id"] for item in rel_memberships["data"]}
included = payload["included"]
included_membership_ids = {
item["id"] for item in included if item["type"] == "memberships"
}
# If there are memberships in relationships, they must be present in included
if ids_in_relationship:
assert ids_in_relationship.issubset(included_membership_ids)
else:
# At minimum, included should contain the user's membership when requested
# (count should align with meta count)
assert rel_memberships["meta"]["count"] == len(included_membership_ids)
def test_list_users_with_manage_account_only_forbidden(
self, authenticated_client_rbac_manage_account
):
response = authenticated_client_rbac_manage_account.get(reverse("user-list"))
assert response.status_code == status.HTTP_403_FORBIDDEN
def test_retrieve_other_user_with_manage_account_only_forbidden(
self, authenticated_client_rbac_manage_account, create_test_user
):
response = authenticated_client_rbac_manage_account.get(
reverse("user-detail", kwargs={"pk": create_test_user.id})
)
assert response.status_code == status.HTTP_403_FORBIDDEN
def test_list_users_with_manage_users_only_hides_relationships(
self, authenticated_client_rbac_manage_users_only
):
# Ensure there is at least one other user in the same tenant
mu_user = authenticated_client_rbac_manage_users_only.user
mu_membership = Membership.objects.filter(user=mu_user).first()
tenant = mu_membership.tenant
other_user = User.objects.create_user(
name="other_in_tenant",
email="other_in_tenant@rbac.com",
password="Password123@",
)
Membership.objects.create(user=other_user, tenant=tenant)
response = authenticated_client_rbac_manage_users_only.get(reverse("user-list"))
assert response.status_code == status.HTTP_200_OK
data = response.json()["data"]
assert isinstance(data, list)
current_user_id = str(mu_user.id)
assert any(item["id"] == current_user_id for item in data)
for item in data:
rels = item["relationships"]
if item["id"] == current_user_id:
# Self should see own relationships
assert isinstance(rels["roles"]["data"], list)
assert rels["memberships"]["meta"].get("count", 0) >= 1
else:
# Others should be hidden without manage_account
assert rels["roles"]["data"] == []
assert rels["memberships"]["data"] == []
assert rels["memberships"]["meta"]["count"] == 0
def test_include_roles_hidden_without_manage_account(
self, authenticated_client_rbac_manage_users_only
):
# Arrange: ensure another user in the same tenant with its own role
mu_user = authenticated_client_rbac_manage_users_only.user
mu_membership = Membership.objects.filter(user=mu_user).first()
tenant = mu_membership.tenant
other_user = User.objects.create_user(
name="other_in_tenant_inc",
email="other_in_tenant_inc@rbac.com",
password="Password123@",
)
Membership.objects.create(user=other_user, tenant=tenant)
other_role = Role.objects.create(
name="other_inc_role",
tenant_id=tenant.id,
manage_users=False,
manage_account=False,
)
UserRoleRelationship.objects.create(
user=other_user, role=other_role, tenant_id=tenant.id
)
response = authenticated_client_rbac_manage_users_only.get(
reverse("user-list"), {"include": "roles"}
)
assert response.status_code == status.HTTP_200_OK
payload = response.json()
# Assert: included must not contain the other user's role
included = payload.get("included", [])
included_role_ids = {
item["id"] for item in included if item.get("type") == "roles"
}
assert str(other_role.id) not in included_role_ids
# Relationships for other user should be empty
for item in payload["data"]:
if item["id"] == str(other_user.id):
rels = item["relationships"]
assert rels["roles"]["data"] == []
def test_include_roles_visible_with_manage_account(
self, authenticated_client_rbac, tenants_fixture
):
# Arrange: another user in tenant[0] with its role
tenant = tenants_fixture[0]
other_user = User.objects.create_user(
name="other_with_role",
email="other_with_role@rbac.com",
password="Password123@",
)
Membership.objects.create(user=other_user, tenant=tenant)
other_role = Role.objects.create(
name="other_visible_role",
tenant_id=tenant.id,
manage_users=False,
manage_account=False,
)
UserRoleRelationship.objects.create(
user=other_user, role=other_role, tenant_id=tenant.id
)
response = authenticated_client_rbac.get(
reverse("user-list"), {"include": "roles"}
)
assert response.status_code == status.HTTP_200_OK
payload = response.json()
# Assert: included must contain the other user's role
included = payload.get("included", [])
included_role_ids = {
item["id"] for item in included if item.get("type") == "roles"
}
assert str(other_role.id) in included_role_ids
def test_retrieve_user_with_manage_users_only_hides_relationships(
self, authenticated_client_rbac_manage_users_only
):
# Create a target user in the same tenant to ensure visibility
mu_user = authenticated_client_rbac_manage_users_only.user
mu_membership = Membership.objects.filter(user=mu_user).first()
tenant = mu_membership.tenant
target_user = User.objects.create_user(
name="target_same_tenant",
email="target_same_tenant@rbac.com",
password="Password123@",
)
Membership.objects.create(user=target_user, tenant=tenant)
response = authenticated_client_rbac_manage_users_only.get(
reverse("user-detail", kwargs={"pk": target_user.id})
)
assert response.status_code == status.HTTP_200_OK
rels = response.json()["data"]["relationships"]
assert rels["roles"]["data"] == []
assert rels["memberships"]["data"] == []
assert rels["memberships"]["meta"]["count"] == 0
def test_list_users_with_all_permissions_shows_relationships(
self, authenticated_client_rbac
):
response = authenticated_client_rbac.get(reverse("user-list"))
assert response.status_code == status.HTTP_200_OK
data = response.json()["data"]
assert isinstance(data, list)
rels = data[0]["relationships"]
assert len(rels["roles"]["data"]) >= 0
assert rels["memberships"]["meta"]["count"] >= 0
@pytest.mark.django_db
class TestProviderViewSet:
@@ -494,3 +710,123 @@ class TestLimitedVisibility:
assert response.status_code == status.HTTP_200_OK
assert len(response.json()["data"]) == 0
@pytest.mark.django_db
class TestRolePermissions:
def test_role_create_with_manage_account_only_allowed(
self, authenticated_client_rbac_manage_account
):
data = {
"data": {
"type": "roles",
"attributes": {
"name": "Role Manage Account Only",
"manage_users": "false",
"manage_account": "true",
"manage_providers": "false",
"manage_scans": "false",
"unlimited_visibility": "false",
},
"relationships": {"provider_groups": {"data": []}},
}
}
response = authenticated_client_rbac_manage_account.post(
reverse("role-list"),
data=json.dumps(data),
content_type="application/vnd.api+json",
)
assert response.status_code == status.HTTP_201_CREATED
def test_role_create_with_manage_users_only_forbidden(
self, authenticated_client_rbac_manage_users_only
):
data = {
"data": {
"type": "roles",
"attributes": {
"name": "Role Manage Users Only",
"manage_users": "true",
"manage_account": "false",
"manage_providers": "false",
"manage_scans": "false",
"unlimited_visibility": "false",
},
"relationships": {"provider_groups": {"data": []}},
}
}
response = authenticated_client_rbac_manage_users_only.post(
reverse("role-list"),
data=json.dumps(data),
content_type="application/vnd.api+json",
)
assert response.status_code == status.HTTP_403_FORBIDDEN
@pytest.mark.django_db
class TestUserRoleLinkPermissions:
def test_link_user_roles_with_manage_account_only_allowed(
self, authenticated_client_rbac_manage_account
):
# Arrange: create a second user in the same tenant as the manage_account user
ma_user = authenticated_client_rbac_manage_account.user
ma_membership = Membership.objects.filter(user=ma_user).first()
tenant = ma_membership.tenant
user2 = User.objects.create_user(
name="target_user",
email="target_user_ma@rbac.com",
password="Password123@",
)
Membership.objects.create(user=user2, tenant=tenant)
# Create a role in the same tenant
role = Role.objects.create(
name="linkable_role",
tenant_id=tenant.id,
manage_users=False,
manage_account=False,
)
data = {"data": [{"type": "roles", "id": str(role.id)}]}
# Act
response = authenticated_client_rbac_manage_account.post(
reverse("user-roles-relationship", kwargs={"pk": user2.id}),
data=data,
content_type="application/vnd.api+json",
)
# Assert
assert response.status_code == status.HTTP_204_NO_CONTENT
def test_link_user_roles_with_manage_users_only_forbidden(
self, authenticated_client_rbac_manage_users_only
):
mu_user = authenticated_client_rbac_manage_users_only.user
mu_membership = Membership.objects.filter(user=mu_user).first()
tenant = mu_membership.tenant
user2 = User.objects.create_user(
name="target_user2",
email="target_user_mu@rbac.com",
password="Password123@",
)
Membership.objects.create(user=user2, tenant=tenant)
role = Role.objects.create(
name="linkable_role_mu",
tenant_id=tenant.id,
manage_users=False,
manage_account=False,
)
data = {"data": [{"type": "roles", "id": str(role.id)}]}
response = authenticated_client_rbac_manage_users_only.post(
reverse("user-roles-relationship", kwargs={"pk": user2.id}),
data=data,
content_type="application/vnd.api+json",
)
assert response.status_code == status.HTTP_403_FORBIDDEN
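For reference, the relationship bodies sent by these tests follow the JSON:API shape: a `data` list of resource identifier objects, posted with the JSON:API media type. A minimal sketch (the UUID is hypothetical):

```python
import json

# JSON:API relationship body: a list of resource identifier objects.
payload = {"data": [{"type": "roles", "id": "00000000-0000-0000-0000-000000000001"}]}
body = json.dumps(payload)

# The tests above send this body with content_type="application/vnd.api+json".
decoded = json.loads(body)
```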
+154 -4
@@ -51,6 +51,7 @@ from api.models import (
UserRoleRelationship,
)
from api.rls import Tenant
from api.v1.serializers import TokenSerializer
from api.v1.views import ComplianceOverviewViewSet, TenantFinishACSView
@@ -4720,6 +4721,36 @@ class TestRoleViewSet:
assert role.users.count() == 0
assert role.provider_groups.count() == 0
def test_cannot_remove_own_assignment_via_role_update(
self, authenticated_client, roles_fixture
):
role = roles_fixture[0]
# Ensure the authenticated user is assigned to this role
user = User.objects.get(email=TEST_USER)
if not UserRoleRelationship.objects.filter(user=user, role=role).exists():
UserRoleRelationship.objects.create(
user=user, role=role, tenant_id=role.tenant_id
)
# Attempt to update role users to exclude the current user
data = {
"data": {
"id": str(role.id),
"type": "roles",
"relationships": {"users": {"data": []}},
}
}
response = authenticated_client.patch(
reverse("role-detail", kwargs={"pk": role.id}),
data=json.dumps(data),
content_type="application/vnd.api+json",
)
assert response.status_code == status.HTTP_400_BAD_REQUEST
assert (
"cannot remove their own role"
in response.json()["errors"][0]["detail"].lower()
)
def test_role_create_with_invalid_user_relationship(
self, authenticated_client, provider_groups_fixture
):
@@ -4841,15 +4872,134 @@ class TestUserRoleRelationshipViewSet:
roles_fixture[2].id,
}
def test_destroy_relationship(
self, authenticated_client, roles_fixture, create_test_user
def test_destroy_relationship_other_user(
self, authenticated_client, roles_fixture, create_test_user, tenants_fixture
):
# Create another user in same tenant and assign a role
tenant = tenants_fixture[0]
other_user = User.objects.create_user(
name="other",
email="other_user@prowler.com",
password="TmpPass123@",
)
Membership.objects.create(user=other_user, tenant=tenant)
UserRoleRelationship.objects.create(
user=other_user, role=roles_fixture[0], tenant_id=tenant.id
)
# Delete roles for the other user (allowed)
response = authenticated_client.delete(
reverse("user-roles-relationship", kwargs={"pk": other_user.id}),
)
assert response.status_code == status.HTTP_204_NO_CONTENT
relationships = UserRoleRelationship.objects.filter(user=other_user.id)
assert relationships.count() == 0
def test_cannot_delete_own_roles(self, authenticated_client, create_test_user):
# Attempt to delete own roles should be forbidden
response = authenticated_client.delete(
reverse("user-roles-relationship", kwargs={"pk": create_test_user.id}),
)
assert response.status_code == status.HTTP_400_BAD_REQUEST
def test_prevent_removing_last_manage_account_on_patch(
self, authenticated_client, roles_fixture, create_test_user, tenants_fixture
):
# roles_fixture[1] has manage_account=False
limited_role = roles_fixture[1]
# Ensure there is no other user with MANAGE_ACCOUNT in the tenant
tenant = tenants_fixture[0]
# Create a secondary user without MANAGE_ACCOUNT
user2 = User.objects.create_user(
name="limited_user",
email="limited_user@prowler.com",
password="TmpPass123@",
)
Membership.objects.create(user=user2, tenant=tenant)
UserRoleRelationship.objects.create(
user=user2, role=limited_role, tenant_id=tenant.id
)
# Attempt to switch the only MANAGE_ACCOUNT user to a role without it
data = {"data": [{"type": "roles", "id": str(limited_role.id)}]}
response = authenticated_client.patch(
reverse("user-roles-relationship", kwargs={"pk": create_test_user.id}),
data=data,
content_type="application/vnd.api+json",
)
assert response.status_code == status.HTTP_400_BAD_REQUEST
assert "MANAGE_ACCOUNT" in response.json()["errors"][0]["detail"]
def test_allow_role_change_when_other_user_has_manage_account_on_patch(
self, authenticated_client, roles_fixture, create_test_user, tenants_fixture
):
# roles_fixture[1] has manage_account=False, roles_fixture[0] has manage_account=True
limited_role = roles_fixture[1]
ma_role = roles_fixture[0]
tenant = tenants_fixture[0]
# Create another user with MANAGE_ACCOUNT
user2 = User.objects.create_user(
name="ma_user",
email="ma_user@prowler.com",
password="TmpPass123@",
)
Membership.objects.create(user=user2, tenant=tenant)
UserRoleRelationship.objects.create(
user=user2, role=ma_role, tenant_id=tenant.id
)
# Now changing the first user's roles to a non-MA role should succeed
data = {"data": [{"type": "roles", "id": str(limited_role.id)}]}
response = authenticated_client.patch(
reverse("user-roles-relationship", kwargs={"pk": create_test_user.id}),
data=data,
content_type="application/vnd.api+json",
)
assert response.status_code == status.HTTP_204_NO_CONTENT
relationships = UserRoleRelationship.objects.filter(role=roles_fixture[0].id)
assert relationships.count() == 0
def test_role_destroy_only_manage_account_blocked(
self, authenticated_client, tenants_fixture
):
# Use a tenant without default admin role (tenant3)
tenant = tenants_fixture[2]
user = User.objects.get(email=TEST_USER)
# Add membership for this tenant
Membership.objects.create(user=user, tenant=tenant)
# Create a single MANAGE_ACCOUNT role in this tenant
only_role = Role.objects.create(
name="only_ma",
tenant=tenant,
manage_users=True,
manage_account=True,
manage_billing=False,
manage_providers=False,
manage_integrations=False,
manage_scans=False,
unlimited_visibility=False,
)
# Switch token to this tenant
serializer = TokenSerializer(
data={
"type": "tokens",
"email": TEST_USER,
"password": TEST_PASSWORD,
"tenant_id": str(tenant.id),
}
)
serializer.is_valid(raise_exception=True)
access_token = serializer.validated_data["access"]
authenticated_client.defaults["HTTP_AUTHORIZATION"] = f"Bearer {access_token}"
# Attempt to delete the only MANAGE_ACCOUNT role
response = authenticated_client.delete(
reverse("role-detail", kwargs={"pk": only_role.id})
)
assert response.status_code == status.HTTP_400_BAD_REQUEST
assert Role.objects.filter(id=only_role.id).exists()
def test_invalid_provider_group_id(self, authenticated_client, create_test_user):
invalid_id = "non-existent-id"
+157 -3
@@ -15,6 +15,7 @@ from rest_framework_simplejwt.exceptions import TokenError
from rest_framework_simplejwt.serializers import TokenObtainPairSerializer
from rest_framework_simplejwt.tokens import RefreshToken
from api.db_router import MainRouter
from api.exceptions import ConflictException
from api.models import (
Finding,
@@ -259,8 +260,15 @@ class UserSerializer(BaseSerializerV1):
Serializer for the User model.
"""
memberships = serializers.ResourceRelatedField(many=True, read_only=True)
roles = serializers.ResourceRelatedField(many=True, read_only=True)
# We use SerializerMethodResourceRelatedField so includes (e.g. ?include=roles)
# respect RBAC and do not leak relationships of other users when the requester
# lacks manage_account. The visibility logic lives in get_roles/get_memberships.
memberships = SerializerMethodResourceRelatedField(
many=True, read_only=True, source="memberships", method_name="get_memberships"
)
roles = SerializerMethodResourceRelatedField(
many=True, read_only=True, source="roles", method_name="get_roles"
)
class Meta:
model = User
@@ -278,9 +286,35 @@ class UserSerializer(BaseSerializerV1):
}
included_serializers = {
"roles": "api.v1.serializers.RoleSerializer",
"roles": "api.v1.serializers.RoleIncludeSerializer",
"memberships": "api.v1.serializers.MembershipIncludeSerializer",
}
def _can_view_relationships(self, instance) -> bool:
"""Allow self to view own relationships. Require manage_account to view others."""
role = self.context.get("role")
request = self.context.get("request")
is_self = bool(
request
and getattr(request, "user", None)
and getattr(instance, "id", None) == request.user.id
)
return is_self or (role and role.manage_account)
def get_roles(self, instance):
return (
instance.roles.all()
if self._can_view_relationships(instance)
else Role.objects.none()
)
def get_memberships(self, instance):
return (
instance.memberships.all()
if self._can_view_relationships(instance)
else Membership.objects.none()
)
class UserCreateSerializer(BaseWriteSerializer):
password = serializers.CharField(write_only=True)
@@ -388,6 +422,34 @@ class UserRoleRelationshipSerializer(RLSSerializer, BaseWriteSerializer):
roles = Role.objects.filter(id__in=role_ids)
tenant_id = self.context.get("tenant_id")
# Safeguard: A tenant must always have at least one user with MANAGE_ACCOUNT.
# If the target roles do NOT include MANAGE_ACCOUNT, and the current user is
# the only one in the tenant with MANAGE_ACCOUNT, block the update.
target_includes_manage_account = roles.filter(manage_account=True).exists()
if not target_includes_manage_account:
# Check if any other user has MANAGE_ACCOUNT
other_users_have_manage_account = (
UserRoleRelationship.objects.filter(
tenant_id=tenant_id, role__manage_account=True
)
.exclude(user_id=instance.id)
.exists()
)
# Check if the current user has MANAGE_ACCOUNT
instance_has_manage_account = instance.roles.filter(
tenant_id=tenant_id, manage_account=True
).exists()
# If the current user is the last holder of MANAGE_ACCOUNT, prevent removal
if instance_has_manage_account and not other_users_have_manage_account:
raise serializers.ValidationError(
{
"roles": "At least one user in the tenant must retain MANAGE_ACCOUNT. "
"Assign MANAGE_ACCOUNT to another user before removing it here."
}
)
instance.roles.clear()
new_relationships = [
UserRoleRelationship(user=instance, role=r, tenant_id=tenant_id)
@@ -502,6 +564,12 @@ class TenantSerializer(BaseSerializerV1):
fields = ["id", "name", "memberships"]
class TenantIncludeSerializer(BaseSerializerV1):
class Meta:
model = Tenant
fields = ["id", "name"]
# Memberships
@@ -523,6 +591,29 @@ class MembershipSerializer(serializers.ModelSerializer):
fields = ["id", "user", "tenant", "role", "date_joined"]
class MembershipIncludeSerializer(serializers.ModelSerializer):
"""
Include-oriented Membership serializer that enables including tenant objects with names
without altering the base MembershipSerializer behavior.
"""
role = MemberRoleEnumSerializerField()
user = serializers.ResourceRelatedField(read_only=True)
tenant = SerializerMethodResourceRelatedField(read_only=True, source="tenant")
class Meta:
model = Membership
fields = ["id", "user", "tenant", "role", "date_joined"]
included_serializers = {"tenant": "api.v1.serializers.TenantIncludeSerializer"}
def get_tenant(self, instance):
try:
return Tenant.objects.using(MainRouter.admin_db).get(id=instance.tenant_id)
except Tenant.DoesNotExist:
return None
# Provider Groups
class ProviderGroupSerializer(RLSSerializer, BaseWriteSerializer):
providers = serializers.ResourceRelatedField(
@@ -1678,6 +1769,37 @@ class RoleUpdateSerializer(RoleSerializer):
if "users" in validated_data:
users = validated_data.pop("users")
# Prevent a user from removing their own role assignment via Role update
request = self.context.get("request")
if request and getattr(request, "user", None):
request_user = request.user
is_currently_assigned = instance.users.filter(
id=request_user.id
).exists()
will_be_assigned = any(u.id == request_user.id for u in users)
if is_currently_assigned and not will_be_assigned:
raise serializers.ValidationError(
{"users": "Users cannot remove their own role."}
)
# Safeguard MANAGE_ACCOUNT coverage when updating users of this role
if instance.manage_account:
# Existing MANAGE_ACCOUNT assignments on other roles within the tenant
other_ma_exists = (
UserRoleRelationship.objects.filter(
tenant_id=tenant_id, role__manage_account=True
)
.exclude(role_id=instance.id)
.exists()
)
if not other_ma_exists and len(users) == 0:
raise serializers.ValidationError(
{
"users": "At least one user in the tenant must retain MANAGE_ACCOUNT. "
"Assign this MANAGE_ACCOUNT role to at least one user or ensure another user has it."
}
)
instance.users.clear()
through_model_instances = [
UserRoleRelationship(
@@ -1692,6 +1814,37 @@ class RoleUpdateSerializer(RoleSerializer):
return super().update(instance, validated_data)
class RoleIncludeSerializer(RLSSerializer):
permission_state = serializers.SerializerMethodField()
def get_permission_state(self, obj) -> str:
return obj.permission_state
class Meta:
model = Role
fields = [
"id",
"name",
"manage_users",
"manage_account",
# Disable for the first release
# "manage_billing",
# /Disable for the first release
"manage_integrations",
"manage_providers",
"manage_scans",
"permission_state",
"unlimited_visibility",
"inserted_at",
"updated_at",
]
extra_kwargs = {
"id": {"read_only": True},
"inserted_at": {"read_only": True},
"updated_at": {"read_only": True},
}
class ProviderGroupResourceIdentifierSerializer(serializers.Serializer):
resource_type = serializers.CharField(source="type")
id = serializers.UUIDField()
@@ -1806,6 +1959,7 @@ class ComplianceOverviewDetailSerializer(serializers.Serializer):
class ComplianceOverviewAttributesSerializer(serializers.Serializer):
id = serializers.CharField()
compliance_name = serializers.CharField()
framework_description = serializers.CharField()
name = serializers.CharField()
framework = serializers.CharField()
@@ -300,7 +300,7 @@ class SchemaView(SpectacularAPIView):
def get(self, request, *args, **kwargs):
spectacular_settings.TITLE = "Prowler API"
spectacular_settings.VERSION = "1.13.0"
spectacular_settings.VERSION = "1.14.0"
spectacular_settings.DESCRIPTION = (
"Prowler API specification.\n\nThis file is auto-generated."
)
@@ -768,11 +768,13 @@ class UserViewSet(BaseUserViewset):
# If called during schema generation, return an empty queryset
if getattr(self, "swagger_fake_view", False):
return User.objects.none()
queryset = (
User.objects.filter(membership__tenant__id=self.request.tenant_id)
if hasattr(self.request, "tenant_id")
else User.objects.all()
)
return queryset.prefetch_related("memberships", "roles")
def get_permissions(self):
@@ -790,6 +792,12 @@ class UserViewSet(BaseUserViewset):
else:
return UserSerializer
def get_serializer_context(self):
context = super().get_serializer_context()
if self.request.user.is_authenticated:
context["role"] = get_role(self.request.user)
return context
@action(detail=False, methods=["get"], url_name="me")
def me(self, request):
user = self.request.user
@@ -894,7 +902,11 @@ class UserViewSet(BaseUserViewset):
partial_update=extend_schema(
tags=["User"],
summary="Partially update a user-roles relationship",
description="Update the user-roles relationship information without affecting other fields.",
description=(
"Update the user-roles relationship information without affecting other fields. "
"If the update would remove MANAGE_ACCOUNT from the last remaining user in the "
"tenant, the API rejects the request with a 400 response."
),
responses={
204: OpenApiResponse(
response=None, description="Relationship updated successfully"
@@ -904,7 +916,12 @@ class UserViewSet(BaseUserViewset):
destroy=extend_schema(
tags=["User"],
summary="Delete a user-roles relationship",
description="Remove the user-roles relationship from the system by their ID.",
description=(
"Remove the user-roles relationship from the system by their ID. If removing "
"MANAGE_ACCOUNT would take it away from the last remaining user in the tenant, "
"the API rejects the request with a 400 response. Users also cannot delete their "
"own role assignments; attempting to do so returns a 400 response."
),
responses={
204: OpenApiResponse(
response=None, description="Relationship deleted successfully"
@@ -919,11 +936,48 @@ class UserRoleRelationshipView(RelationshipView, BaseRLSViewSet):
http_method_names = ["post", "patch", "delete"]
schema = RelationshipViewSchema()
# RBAC required permissions
required_permissions = [Permissions.MANAGE_USERS]
required_permissions = [Permissions.MANAGE_ACCOUNT]
def get_queryset(self):
return User.objects.filter(membership__tenant__id=self.request.tenant_id)
def destroy(self, request, *args, **kwargs):
"""
Prevent deleting role relationships if it would leave the tenant with no
users having MANAGE_ACCOUNT. Supports deleting specific roles via JSON:API
relationship payload or clearing all roles for the user when no payload.
"""
user = self.get_object()
# Disallow deleting own roles
if str(user.id) == str(request.user.id):
return Response(
data={
"detail": "Users cannot delete the relationship with their role."
},
status=status.HTTP_400_BAD_REQUEST,
)
tenant_id = self.request.tenant_id
payload = request.data if isinstance(request.data, dict) else None
# If a user has more than one role, we will delete the relationship with the roles in the payload
data = payload.get("data") if payload else None
if data:
try:
role_ids = [item["id"] for item in data]
except KeyError:
role_ids = []
roles_to_remove = Role.objects.filter(id__in=role_ids, tenant_id=tenant_id)
else:
roles_to_remove = user.roles.filter(tenant_id=tenant_id)
UserRoleRelationship.objects.filter(
user=user,
tenant_id=tenant_id,
role_id__in=roles_to_remove.values_list("id", flat=True),
).delete()
return Response(status=status.HTTP_204_NO_CONTENT)
def create(self, request, *args, **kwargs):
user = self.get_object()
@@ -962,12 +1016,6 @@ class UserRoleRelationshipView(RelationshipView, BaseRLSViewSet):
serializer.save()
return Response(status=status.HTTP_204_NO_CONTENT)
def destroy(self, request, *args, **kwargs):
user = self.get_object()
user.roles.clear()
return Response(status=status.HTTP_204_NO_CONTENT)
@extend_schema_view(
list=extend_schema(
@@ -2872,13 +2920,11 @@ class InvitationAcceptViewSet(BaseRLSViewSet):
partial_update=extend_schema(
tags=["Role"],
summary="Partially update a role",
description="Update certain fields of an existing role's information without affecting other fields.",
responses={200: RoleSerializer},
),
destroy=extend_schema(
tags=["Role"],
summary="Delete a role",
description="Remove a role from the system by their ID.",
),
)
class RoleViewSet(BaseRLSViewSet):
@@ -2900,6 +2946,14 @@ class RoleViewSet(BaseRLSViewSet):
return RoleUpdateSerializer
return super().get_serializer_class()
@extend_schema(
description=(
"Update selected fields on an existing role. When changing the `users` "
"relationship of a role that grants MANAGE_ACCOUNT, the API blocks attempts "
"that would leave the tenant without any MANAGE_ACCOUNT assignees and prevents "
"callers from removing their own assignment to that role."
)
)
def partial_update(self, request, *args, **kwargs):
user_role = get_role(request.user)
# If the user is the owner of the role, the manage_account field is not editable
@@ -2907,6 +2961,12 @@ class RoleViewSet(BaseRLSViewSet):
request.data["manage_account"] = str(user_role.manage_account).lower()
return super().partial_update(request, *args, **kwargs)
@extend_schema(
description=(
"Delete the specified role. The API rejects deletion of the last role "
"in the tenant that grants MANAGE_ACCOUNT."
)
)
def destroy(self, request, *args, **kwargs):
instance = self.get_object()
if (
@@ -2914,6 +2974,21 @@ class RoleViewSet(BaseRLSViewSet):
): # TODO: Move to a constant/enum (in case other roles are created by default)
raise ValidationError(detail="The admin role cannot be deleted.")
# Prevent deleting the last MANAGE_ACCOUNT role in the tenant
if instance.manage_account:
has_other_ma = (
Role.objects.filter(tenant_id=instance.tenant_id, manage_account=True)
.exclude(id=instance.id)
.exists()
)
if not has_other_ma:
return Response(
data={
"detail": "Cannot delete the only role with MANAGE_ACCOUNT in the tenant."
},
status=status.HTTP_400_BAD_REQUEST,
)
return super().destroy(request, *args, **kwargs)
@@ -3470,6 +3545,7 @@ class ComplianceOverviewViewSet(BaseRLSViewSet, TaskManagementMixin):
),
"name": requirement.get("name", ""),
"framework": compliance_framework.get("framework", ""),
"compliance_name": compliance_framework.get("name", ""),
"version": compliance_framework.get("version", ""),
"description": requirement.get("description", ""),
"attributes": base_attributes,
@@ -191,6 +191,108 @@ def create_test_user_rbac_limited(django_db_setup, django_db_blocker):
return user
@pytest.fixture(scope="function")
def create_test_user_rbac_manage_account(django_db_setup, django_db_blocker):
"""User with only manage_account permission (no manage_users)."""
with django_db_blocker.unblock():
user = User.objects.create_user(
name="testing_manage_account",
email="rbac_manage_account@rbac.com",
password=TEST_PASSWORD,
)
tenant = Tenant.objects.create(
name="Tenant Test Manage Account",
)
Membership.objects.create(
user=user,
tenant=tenant,
role=Membership.RoleChoices.OWNER,
)
role = Role.objects.create(
name="manage_account",
tenant_id=tenant.id,
manage_users=False,
manage_account=True,
manage_billing=False,
manage_providers=False,
manage_integrations=False,
manage_scans=False,
unlimited_visibility=False,
)
UserRoleRelationship.objects.create(
user=user,
role=role,
tenant_id=tenant.id,
)
return user
@pytest.fixture
def authenticated_client_rbac_manage_account(
create_test_user_rbac_manage_account, tenants_fixture, client
):
client.user = create_test_user_rbac_manage_account
serializer = TokenSerializer(
data={
"type": "tokens",
"email": "rbac_manage_account@rbac.com",
"password": TEST_PASSWORD,
}
)
serializer.is_valid()
access_token = serializer.validated_data["access"]
client.defaults["HTTP_AUTHORIZATION"] = f"Bearer {access_token}"
return client
@pytest.fixture(scope="function")
def create_test_user_rbac_manage_users_only(django_db_setup, django_db_blocker):
"""User with only manage_users permission (no manage_account)."""
with django_db_blocker.unblock():
user = User.objects.create_user(
name="testing_manage_users_only",
email="rbac_manage_users_only@rbac.com",
password=TEST_PASSWORD,
)
tenant = Tenant.objects.create(name="Tenant Test Manage Users Only")
Membership.objects.create(
user=user,
tenant=tenant,
role=Membership.RoleChoices.OWNER,
)
role = Role.objects.create(
name="manage_users_only",
tenant_id=tenant.id,
manage_users=True,
manage_account=False,
manage_billing=False,
manage_providers=False,
manage_integrations=False,
manage_scans=False,
unlimited_visibility=False,
)
UserRoleRelationship.objects.create(user=user, role=role, tenant_id=tenant.id)
return user
@pytest.fixture
def authenticated_client_rbac_manage_users_only(
create_test_user_rbac_manage_users_only, client
):
client.user = create_test_user_rbac_manage_users_only
serializer = TokenSerializer(
data={
"type": "tokens",
"email": "rbac_manage_users_only@rbac.com",
"password": TEST_PASSWORD,
}
)
serializer.is_valid()
access_token = serializer.validated_data["access"]
client.defaults["HTTP_AUTHORIZATION"] = f"Bearer {access_token}"
return client
@pytest.fixture
def authenticated_client_rbac(create_test_user_rbac, tenants_fixture, client):
client.user = create_test_user_rbac
@@ -461,7 +461,7 @@ def backfill_scan_resource_summaries_task(tenant_id: str, scan_id: str):
return backfill_resource_scan_summaries(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(base=RLSTask, name="scan-compliance-overviews", queue="overview")
@shared_task(base=RLSTask, name="scan-compliance-overviews", queue="compliance")
def create_compliance_requirements_task(tenant_id: str, scan_id: str):
"""
Creates detailed compliance requirement records for a scan.
@@ -0,0 +1,34 @@
/* Override Tailwind CSS reset for markdown content */
.markdown-content ul {
list-style: disc !important;
margin-left: 20px !important;
padding-left: 10px !important;
margin-bottom: 8px !important;
}
.markdown-content ol {
list-style: decimal !important;
margin-left: 20px !important;
padding-left: 10px !important;
margin-bottom: 8px !important;
}
.markdown-content li {
margin-bottom: 4px !important;
display: list-item !important;
}
.markdown-content p {
margin-bottom: 8px !important;
}
/* Ensure nested lists work properly */
.markdown-content ul ul {
margin-top: 4px !important;
margin-bottom: 4px !important;
}
.markdown-content ol ol {
margin-top: 4px !important;
margin-bottom: 4px !important;
}
@@ -1654,6 +1654,39 @@ def generate_table(data, index, color_mapping_severity, color_mapping_status):
[
html.Div(
[
# Description as first details item
html.Div(
[
html.P(
html.Strong(
"Description: ",
style={
"margin-bottom": "8px"
},
)
),
html.Div(
dcc.Markdown(
str(
data.get(
"DESCRIPTION",
"",
)
),
dangerously_allow_html=True,
style={
"margin-left": "0px",
"padding-left": "10px",
},
),
className="markdown-content",
style={
"margin-left": "0px",
"padding-left": "10px",
},
),
],
),
html.Div(
[
html.P(
@@ -1793,19 +1826,27 @@ def generate_table(data, index, color_mapping_severity, color_mapping_status):
html.P(
html.Strong(
"Risk: ",
style={
"margin-right": "5px"
},
style={},
)
),
html.P(
str(data.get("RISK", "")),
html.Div(
dcc.Markdown(
str(
data.get("RISK", "")
),
dangerously_allow_html=True,
style={
"margin-left": "0px",
"padding-left": "10px",
},
),
className="markdown-content",
style={
"margin-left": "5px"
"margin-left": "0px",
"padding-left": "10px",
},
),
],
style={"display": "flex"},
),
html.Div(
[
@@ -1847,23 +1888,32 @@ def generate_table(data, index, color_mapping_severity, color_mapping_status):
html.Strong(
"Recommendation: ",
style={
"margin-right": "5px"
"margin-bottom": "8px"
},
)
),
html.P(
str(
data.get(
"REMEDIATION_RECOMMENDATION_TEXT",
"",
)
html.Div(
dcc.Markdown(
str(
data.get(
"REMEDIATION_RECOMMENDATION_TEXT",
"",
)
),
dangerously_allow_html=True,
style={
"margin-left": "0px",
"padding-left": "10px",
},
),
className="markdown-content",
style={
"margin-left": "5px"
"margin-left": "0px",
"padding-left": "10px",
},
),
],
style={"display": "flex"},
style={"margin-bottom": "15px"},
),
html.Div(
[
@@ -14,9 +14,11 @@ services:
ports:
- "${DJANGO_PORT:-8080}:${DJANGO_PORT:-8080}"
volumes:
- "./api/src/backend:/home/prowler/backend"
- "./api/pyproject.toml:/home/prowler/pyproject.toml"
- "outputs:/tmp/prowler_api_output"
- ./api/src/backend:/home/prowler/backend
- ./api/pyproject.toml:/home/prowler/pyproject.toml
- ./api/docker-entrypoint.sh:/home/prowler/docker-entrypoint.sh
- ./_data/api:/home/prowler/.config/prowler-api
- outputs:/tmp/prowler_api_output
depends_on:
postgres:
condition: service_healthy
@@ -64,7 +66,7 @@ services:
image: valkey/valkey:7-alpine3.19
hostname: "valkey"
volumes:
- ./api/_data/valkey:/data
- ./_data/valkey:/data
env_file:
- path: .env
required: false
@@ -8,7 +8,8 @@ services:
ports:
- "${DJANGO_PORT:-8080}:${DJANGO_PORT:-8080}"
volumes:
- "output:/tmp/prowler_api_output"
- ./_data/api:/home/prowler/.config/prowler-api
- output:/tmp/prowler_api_output
depends_on:
postgres:
condition: service_healthy
@@ -279,4 +279,4 @@ You can filter scans to specific organizations or projects:
prowler mongodbatlas --atlas-project-id <project_id>
```
See more details about MongoDB Atlas Authentication in [Requirements](../getting-started/requirements.md#mongodb-atlas)
See more details about MongoDB Atlas Authentication in [MongoDB Atlas Authentication](../tutorials/mongodbatlas/authentication.md)
@@ -0,0 +1,213 @@
# Check Metadata Guidelines
## Introduction
This guide provides comprehensive guidelines for creating check metadata in Prowler. For basic information on check metadata structure, refer to the [check metadata](./checks.md#metadata-structure-for-prowler-checks) section.
## Check Title Guidelines
### Writing Guidelines
1. **Determine Resource Finding Scope (Singular vs. Plural)**:
When determining whether to use singular or plural in the check title, analyze the detection code. If the code contains a loop that generates an individual report for each resource, use the singular form; if it produces a single report covering all resources collectively, use the plural form. For organization- or account-wide checks, select the scope that best matches the breadth of the evaluation. The `status_extended` messages in the code also often indicate whether the check is scoped to individual resources or to groups of resources:
- **Singular**: Use when the check creates one report per resource (e.g., "EC2 instance has IMDSv2 enforced", "S3 bucket does not allow public write access").
- **Plural**: Use when the check creates one report for all resources together (e.g., "All EC2 instances have IMDSv2 enforced", "S3 buckets do not allow public write access").
2. **Describe the Compliant (*PASS*) State**:
Always write the title to describe the **desired, compliant state** of the resources. The title should reflect what it looks like when the audited resource is following the check's requirements.
3. **Be Specific and Factual**:
Include the exact secure configuration being verified. Avoid vague or generic terms like "properly configured".
4. **Avoid Redundant or Action Words**:
Do not include verbs like "Check", "Verify", "Ensure", or "Monitor". The title is a declarative statement of the secure condition.
5. **Length Limit**:
Keep the title under 150 characters.
### Common Mistakes to Avoid
- Starting with verbs like "Check", "Verify", "Ensure", "Make sure". Always start with the affected resource instead.
- Being too vague or generic (e.g., "Ensure security groups are properly configured"; "properly configured" does not describe a concrete compliant state).
- Focusing on the non-compliant state instead of the compliant state.
- Using unclear scope and resource identification.
## Check Type Guidelines (AWS Only)
### AWS Security Hub Type Format
AWS Security Hub uses a three-part type taxonomy:
- **Namespace**: The top-level security domain.
- **Category**: The security control family or area.
- **Classifier**: The specific security concern (optional).
A partial path may be defined (e.g., `TTPs` or `TTPs/Defense Evasion` are valid).
### Selection Guidelines
1. **Be Specific**: Use the most specific classifier that accurately describes the check.
2. **Standard Compliance**: Consider if the check relates to specific compliance standards.
3. **Multiple Types**: You can specify multiple types if the check addresses multiple concerns.
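As an illustrative sketch (the values below are taken from the AWS Security Hub type taxonomy; the combination shown is a hypothetical choice for a fictional check), a `CheckType` entry using this three-part format could look like:

```json
{
  "CheckType": [
    "Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark",
    "TTPs/Defense Evasion"
  ]
}
```

The first entry uses the full Namespace/Category/Classifier path; the second shows a valid partial path with only Namespace and Category.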
## Description Guidelines
### Writing Guidelines
1. **Focus on the Finding**: All fields should address how the finding affects the security posture, rather than the control itself.
2. **Use Natural Language**: Write in simple, clear paragraphs with complete, grammatically correct sentences.
3. **Use Markdown Formatting**: Enhance readability with:
- Use **bold** for emphasis on key security concepts.
- Use *italic* for secondary emphasis, such as clarifications, conditions, or optional notes; use it sparingly.
- Use `code` formatting for specific configuration values or technical details, but not for service names or common technical terms.
- Use one or two line breaks (`\n` or `\n\n`) to separate distinct ideas.
- Use bullet points (`-`) for listing multiple concepts or actions.
- Use numbers for listing steps or sequential actions.
4. **Be Concise**: Maximum 400 characters (spaces count). Every word should add value.
5. **Explain What the Finding Means**: Focus on what the security control evaluates and what it means when it passes or fails, but without explicitly stating the pass or fail state.
6. **Be Technical but Clear**: Use appropriate technical terminology while remaining understandable.
7. **Avoid Risk Descriptions**: Do not describe potential risks, threats, or consequences.
8. **CheckTitle and Description can be the same**: If the check is very simple and the title is already clear, you can use the same text for the description.
### Common Mistakes to Avoid
- **Technical Implementation Details**: "The control loops through all instances and calls the describe_instances API...".
- **Vague Descriptions**: "This control verifies proper configuration of resources"; "proper configuration" does not describe a concrete compliant state.
- **Risk Descriptions**: "This could lead to data breaches" or "This poses a security threat".
- **Starting with Verbs**: "Check if...", "Verify...", "Ensure...". Always start with the affected resource instead.
- **References to Pass/Fail States**: Avoid using words like "pass" or "fail".
## Risk Guidelines
### Writing Guidelines
1. **Explain the Cybersecurity Impact**: Focus on how the finding affects confidentiality, integrity, or availability (CIA triad). If the CIA triad does not apply, explain the risk in terms of the organization's business objectives.
2. **Be Specific About Threats**: Clearly state what could happen if this security control is not in place. What attacks or incidents become possible?
3. **Focus on Risk Context**: Explain the specific security implications of the finding, not just generic security risks.
4. **Use Markdown Formatting**: Enhance readability with markdown formatting:
- Use **bold** for emphasis on key security concepts.
- Use *italic* for secondary emphasis, such as clarifications, conditions, or optional notes; use it sparingly.
- Use `code` formatting for specific configuration values or technical details, but not for service names or common technical terms.
- Use one or two line breaks (`\n` or `\n\n`) to separate distinct ideas.
- Use bullet points (`-`) for listing multiple concepts or actions.
- Use numbers for listing steps or sequential actions.
5. **Be Concise**: Maximum 400 characters. Make every word count.
### Common Mistakes to Avoid
- **Generic Risks**: "This could lead to security issues" or "Regulatory compliance violations".
- **Technical Implementation Focus**: "The API call might fail and return incorrect results...".
- **Overly Broad Statements**: "This is a serious security risk that could impact everything".
- **Vague Threats**: "This could be exploited by threat actors" without explaining how.
## Recommendation Guidelines
### Writing Guidelines
1. **Provide Actionable Best Practice Guidance**: Explain what should be done to maintain security posture. Focus on preventive measures and proactive security practices.
2. **Be Principle-Based**: Reference established security principles (least privilege, defense in depth, zero trust, separation of duties) where applicable.
3. **Focus on Prevention**: Explain best practices that prevent the security issue from occurring, not just detection or remediation.
4. **Use Markdown Formatting**: Enhance readability with markdown formatting:
- Use **bold** for emphasis on key security concepts.
- Use *italic* for secondary emphasis, such as clarifications, conditions, or optional notes; use it sparingly.
- Use `code` formatting for specific configuration values or technical details, but not for service names or common technical terms.
- Use one or two line breaks (`\n` or `\n\n`) to separate distinct ideas.
- Use bullet points (`-`) for listing multiple concepts or actions.
- Use numbers for listing steps or sequential actions.
5. **Be Concise**: Maximum 400 characters.
### Common Mistakes to Avoid
- **Specific Remediation Steps**: "1. Go to the console\n2. Click on settings..." - Focus on principles, not click-by-click instructions.
- **Implementation Details**: "Configure the JSON policy with the following IAM actions..." - Explain what to achieve, not how.
- **Vague Guidance**: "Follow security best practices..." without explaining what those practices are.
- **Resource-Specific Recommendations**: "Enable MFA on user john.doe@example.com" - Keep it general.
- **Missing Context**: Not explaining why the best practice is important for security.
### Good Examples
- *"Avoid exposing sensitive resources directly to the Internet; configure access controls to limit exposure."*
- *"Apply the principle of least privilege when assigning permissions to users and services."*
- *"Regularly review and update your security configurations to align with current best practices."*
## Remediation Code Guidelines
### Critical Requirement
The **fundamental principle** is to focus on the **specific change** that converts the finding from non-compliant to compliant.
It is also important to keep all code examples as short as possible, including only the code essential to fix the issue. Remove any extra configuration, optional parameters, or nice-to-have settings, and add comments to explain the code where possible.
### Common Guidelines for All Code Fields
1. **Be Minimal**: Keep code blocks as short as possible - only include what is absolutely necessary.
2. **Focus on the Fix**: Remove any extra configuration, optional parameters, or nice-to-have settings.
3. **Be Accurate**: Ensure all commands and code are syntactically correct.
4. **Use Markdown Formatting**: Format code properly using code blocks and appropriate syntax highlighting.
5. **Follow Best Practices**: Use the most secure and recommended approaches for each platform.
### CLI Guidelines
- Only provide a single command that directly changes the finding from fail to pass.
- The command must be executable as-is and resolve the security issue completely.
- Use proper command syntax for the provider (AWS CLI, Azure CLI, gcloud, kubectl, etc.).
- Do not use markdown formatting or code blocks - just the raw command.
- Do not include multiple commands, comments, or explanations.
- If the issue cannot be resolved with a single command, leave this field empty.
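For example, continuing the IMDSv2 check from the title guidelines, a single AWS CLI command that flips the finding from fail to pass could be (the instance ID is a placeholder; in the metadata field the command goes in raw, without the code fence shown here for readability):

```shell
aws ec2 modify-instance-metadata-options --instance-id i-1234567890abcdef0 --http-tokens required
```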
### Native IaC Guidelines
- **Keep It Minimal**: Only include the specific resource/configuration that fixes the security issue.
- Format as markdown code blocks with proper syntax highlighting.
- Include only the required properties to fix the issue.
- Add comments indicating the critical line(s) that remediate the check.
- Use `example_resource` as the generic name for all resources and IDs.
### Terraform Guidelines
- **Keep It Minimal**: Only include the specific resource/configuration that fixes the security issue.
- Provide valid HCL (HashiCorp Configuration Language) code with an example of a compliant configuration.
- Use the latest Terraform syntax and provider versions.
- Include only the required arguments to fix the issue - skip optional parameters.
- Format as markdown code blocks with `hcl` syntax highlighting.
- Add comments indicating the critical line(s) that remediate the check.
- Use `example_resource` as the generic name for all resources and IDs.
- Skip provider requirements unless critical for the fix.
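A minimal sketch following these rules, again using the hypothetical IMDSv2 example (the AMI ID and instance type are placeholder values, not recommendations):

```hcl
resource "aws_instance" "example_resource" {
  ami           = "ami-12345678"
  instance_type = "t3.micro"

  metadata_options {
    http_tokens = "required" # Critical line: enforces IMDSv2
  }
}
```

Note how everything except the arguments required for a valid resource and the remediating `metadata_options` block is omitted.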
### Other (Manual Steps) Guidelines
- **Keep It Minimal**: Only include the exact steps needed to fix the security issue.
- Provide step-by-step instructions for manual remediation through web interfaces.
- Use numbered lists for sequential steps.
- Be specific about menu locations, button names, and settings.
- Skip optional configurations or nice-to-have settings.
- Format using markdown for better readability.
## Categories Guidelines
### Selection Guidelines
1. **Be Specific**: Only select categories that directly relate to what the automated control evaluates.
2. **Primary Focus**: Consider the primary security concern the automated control addresses.
3. **Avoid Over-Categorization**: Do not select categories just because they are tangentially related.
### Available Categories
| Category | Definition |
|-------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| encryption | Ensures data is encrypted in transit and/or at rest, including key management practices |
| internet-exposed | Checks that limit or flag public access to services, APIs, or assets from the Internet |
| logging | Ensures appropriate logging of events, activities, and system interactions for traceability |
| secrets | Manages and protects credentials, API keys, tokens, and other sensitive information |
| resilience | Ensures systems can maintain availability and recover from disruptions, failures, or degradation. Includes redundancy, fault-tolerance, auto-scaling, backup, disaster recovery, and failover strategies |
| threat-detection | Identifies suspicious activity or behaviors using IDS, malware scanning, or anomaly detection |
| trust-boundaries | Enforces isolation or segmentation between different trust levels (e.g., VPCs, tenants, network zones) |
| vulnerabilities | Detects or remediates known software, infrastructure, or config vulnerabilities (e.g., CVEs) |
| cluster-security | Secures Kubernetes cluster components such as API server, etcd, and role-based access |
| container-security | Ensures container images and runtimes follow security best practices |
| node-security | Secures nodes running containers or services |
| gen-ai | Checks related to safe and secure use of generative AI services or models |
| ci-cd | Ensures secure configurations in CI/CD pipelines |
| identity-access | Governs user and service identities, including least privilege, MFA, and permission boundaries |
| email-security | Ensures detection and protection against phishing, spam, spoofing, etc. |
| forensics-ready | Ensures systems are instrumented to support post-incident investigations: any digital trace or evidence (logs, volume snapshots, memory dumps, network captures, etc.) is preserved immutably with integrity guarantees so it can be used in forensic analysis |
| software-supply-chain | Detects or prevents tampering, unauthorized packages, or third-party risks in software supply chain |
| e3 | M365-specific controls enabled by or dependent on an E3 license (e.g., baseline security policies, conditional access) |
| e5 | M365-specific controls enabled by or dependent on an E5 license (e.g., advanced threat protection, audit, DLP, and eDiscovery) |
@@ -40,7 +40,7 @@ Each check in Prowler follows a straightforward structure. Within the newly crea
- `__init__.py` (empty file) Ensures Python treats the check folder as a package.
- `<check_name>.py` (code file) Contains the check logic, following the prescribed format. Please refer to the [prowler's check code structure](./checks.md#prowlers-check-code-structure) for more information.
- `<check_name>.metadata.json` (metadata file) Defines the check's metadata for contextual information. Please refer to the [check metadata](./checks.md#) for more information.
- `<check_name>.metadata.json` (metadata file) Defines the check's metadata for contextual information. Please refer to the [check metadata](./checks.md#metadata-structure-for-prowler-checks) for more information.
## Prowler's Check Code Structure
@@ -226,68 +226,148 @@ Below is a generic example of a check metadata file. **Do not include comments i
```json
{
"Provider": "aws",
"CheckID": "example_check_id",
"CheckTitle": "Example Check Title",
"CheckType": ["Infrastructure Security"],
"ServiceName": "ec2",
"SubServiceName": "ami",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"Severity": "critical",
"CheckID": "service_resource_security_setting",
"CheckTitle": "Service resource has security setting enabled",
"CheckType": [],
"ServiceName": "service",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "Other",
"Description": "Example description of the check.",
"Risk": "Example risk if the check fails.",
"RelatedUrl": "https://example.com",
"Description": "This check verifies that the service resource has the required **security setting** enabled to protect against potential vulnerabilities.\n\nIt ensures that the resource follows security best practices and maintains proper access controls. The check evaluates whether the security configuration is properly implemented and active.",
"Risk": "Without proper security settings, the resource may be vulnerable to:\n\n- **Unauthorized access** - Malicious actors could gain entry\n- **Data breaches** - Sensitive information could be compromised\n- **Security threats** - Various attack vectors could be exploited\n\nThis could result in compliance violations and potential financial or reputational damage.",
"RelatedUrl": "",
"AdditionalURLs": ["https://example.com/security-documentation", "https://example.com/best-practices"],
"Remediation": {
"Code": {
"CLI": "example CLI command",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "provider-cli service enable-security-setting --resource-id resource-123",
"NativeIaC": "```yaml\nType: Provider::Service::Resource\nProperties:\n SecuritySetting: enabled\n ResourceId: resource-123\n```",
"Other": "1. Open the provider management console\n2. Navigate to the service section\n3. Select the resource\n4. Enable the security setting\n5. Save the configuration",
"Terraform": "```hcl\nresource \"provider_service_resource\" \"example\" {\n resource_id = \"resource-123\"\n security_setting = true\n}\n```"
},
"Recommendation": {
"Text": "Example recommendation text.",
"Url": "https://example.com/remediation"
"Text": "Enable security settings on all service resources to ensure proper protection. Regularly review and update security configurations to align with current best practices.",
"Url": "https://hub.prowler.com/check/service_resource_security_setting"
}
},
"Categories": ["example-category"],
"Categories": ["internet-exposed", "secrets"],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
"RelatedTo": ["service_resource_security_setting", "service_resource_security_setting_2"],
"Notes": "This is a generic example check that should be customized for specific provider and service requirements."
}
```
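A metadata file like the example above can be sanity-checked with a small script. This is a hypothetical helper, not part of Prowler's own tooling; the field names and constraints are taken from the conventions described in this guide:

```python
# Hypothetical validator for check metadata dicts, based on the
# conventions in this guide (not part of Prowler itself).
ALLOWED_SEVERITIES = {"critical", "high", "medium", "low", "informational"}
REQUIRED_FIELDS = [
    "Provider", "CheckID", "CheckTitle", "ServiceName", "Severity",
    "ResourceType", "Description", "Risk", "Remediation", "Categories",
]


def validate_metadata(metadata: dict) -> list[str]:
    """Return a list of problems found in a metadata dict (empty if none)."""
    problems = []
    for field in REQUIRED_FIELDS:
        if field not in metadata:
            problems.append(f"missing field: {field}")
    # Provider and ServiceName must be lowercase per the guidelines.
    if metadata.get("Provider", "") != metadata.get("Provider", "").lower():
        problems.append("Provider must be lowercase")
    if metadata.get("Severity") not in ALLOWED_SEVERITIES:
        problems.append(
            "Severity must be one of " + ", ".join(sorted(ALLOWED_SEVERITIES))
        )
    # Description is limited to 400 characters.
    if len(metadata.get("Description", "")) > 400:
        problems.append("Description exceeds 400 characters")
    return problems
```

Running this against a metadata dict before opening a pull request catches the most common mistakes (missing fields, uppercase provider names, invalid severities) early.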
### Metadata Fields and Their Purpose
- **Provider** — The Prowler provider related to the check. The name **must** be lowercase and match the provider folder name. For supported providers refer to [Prowler Hub](https://hub.prowler.com/check) or directly to [Prowler Code](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers).
- **CheckID** — The unique identifier for the check inside the provider. This field **must** match the check's folder, Python file, and JSON metadata file name. For more information about the naming refer to the [Naming Format for Checks](#naming-format-for-checks) section.
- **CheckTitle** — A concise, descriptive title for the check.
- **CheckType** — *For now this field is only standardized for the AWS provider*.
    - For AWS this field must follow the [AWS Security Hub Types](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-required-attributes.html#Types) format, so the common pattern to follow is `namespace/category/classifier`; refer to the linked documentation for the valid values of these fields.
- **ServiceName** — The name of the provider service being audited. This field **must** be in lowercase and match with the service folder name. For supported services refer to [Prowler Hub](https://hub.prowler.com/check) or directly to [Prowler Code](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers).
- **SubServiceName** — The subservice or resource within the service, if applicable. For more information refer to the [Naming Format for Checks](#naming-format-for-checks) section.
- **ResourceIdTemplate** — A template for the unique resource identifier. For more information refer to the [Resource Identification in Prowler](#resource-identification-in-prowler) section.
- **Severity** — The severity of the finding if the check fails. Must be one of: `critical`, `high`, `medium`, `low`, or `informational`; this field **must** be in lowercase. To get more information about the severity levels refer to the [Prowler's Check Severity Levels](#prowlers-check-severity-levels) section.
- **ResourceType** — The type of resource being audited. *For now this field is only standardized for the AWS provider*.
- For AWS use the [Security Hub resource types](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-resources.html) or, if not available, the PascalCase version of the [CloudFormation type](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html) (e.g., `AwsEc2Instance`). Use "Other" if no match exists.
- **Description** — A short description of what the check does.
- **Risk** — The risk or impact if the check fails, explaining why the finding matters.
- **RelatedUrl** — A URL to official documentation or further reading about the check's purpose. If no official documentation is available, use the risk and recommendation text from trusted third-party sources.
- **Remediation** — Guidance for fixing a failed check, including:
- **Code** — Remediation commands or code snippets for CLI, Terraform, native IaC, or other tools like the Web Console.
- **Recommendation** — A textual human readable recommendation. Here it is not necessary to include actual steps, but rather a general recommendation about what to do to fix the check.
- **Categories** — One or more categories for grouping checks in execution (e.g., `internet-exposed`). For the current list of categories, refer to the [Prowler Hub](https://hub.prowler.com/check).
- **DependsOn** — Currently not used.
- **RelatedTo** — Currently not used.
- **Notes** — Any additional information not covered by other fields.
#### Provider
### Remediation Code Guidelines
The Prowler provider related to the check. The name **must** be lowercase and match the provider folder name. For supported providers refer to [Prowler Hub](https://hub.prowler.com/check) or directly to [Prowler Code](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers).
When providing remediation steps, reference the following sources:
#### CheckID
- Official provider documentation.
- [Prowler Checks Remediation Index](https://docs.prowler.com/checks/checks-index)
- [TrendMicro Cloud One Conformity](https://www.trendmicro.com/cloudoneconformity)
- [CloudMatos Remediation Repository](https://github.com/cloudmatos/matos/tree/master/remediations)
The unique identifier for the check inside the provider. This field **must** match the check's folder, Python file, and JSON metadata file name. For more information about naming, refer to the [Naming Format for Checks](#naming-format-for-checks) section.
#### CheckTitle
The `CheckTitle` field must be plain text and must clearly and succinctly define **the best practice being evaluated and which resource(s) each finding applies to**. The title should be specific, concise (no more than 150 characters), and reference the relevant resource(s) involved.
**Always write the `CheckTitle` to describe the *PASS* case**, the desired secure or compliant state of the resource(s). This helps ensure that findings are easy to interpret and that the title always reflects the best practice being met.
For detailed guidelines on writing effective check titles, including how to determine singular vs. plural scope and common mistakes to avoid, see [CheckTitle Guidelines](./check-metadata-guidelines.md#check-title-guidelines).
#### CheckType
???+ warning
This field is only applicable to the AWS provider.
It follows the [AWS Security Hub Types](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-required-attributes.html#Types) format using the pattern `namespace/category/classifier`.
For the complete AWS Security Hub selection guidelines, see [CheckType Guidelines](./check-metadata-guidelines.md#check-type-guidelines-aws-only).
#### ServiceName
The name of the provider service being audited. Must be lowercase and match the service folder name. For supported services refer to [Prowler Hub](https://hub.prowler.com/check) or the [Prowler Code](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers).
#### SubServiceName
This field is in the process of being deprecated and should be **left empty**.
#### ResourceIdTemplate
This field is in the process of being deprecated and should be **left empty**.
#### Severity
Severity level if the check fails. Must be one of: `critical`, `high`, `medium`, `low`, or `informational`, and written in lowercase. See [Prowler's Check Severity Levels](#prowlers-check-severity-levels) for details.
#### ResourceType
The type of resource being audited. This field helps categorize and organize findings by resource type for better analysis and reporting. For each provider:
- **AWS**: Use [Security Hub resource types](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-resources.html) or PascalCase CloudFormation types removing the `::` separator used in CloudFormation templates (e.g., in CloudFormation template the type of an EC2 instance is `AWS::EC2::Instance` but in the check it should be `AwsEc2Instance`). Use `Other` if none apply.
- **Azure**: Use types from [Azure Resource Graph](https://learn.microsoft.com/en-us/azure/governance/resource-graph/reference/supported-tables-resources), for example: `Microsoft.Storage/storageAccounts`.
- **Google Cloud**: Use [Cloud Asset Inventory asset types](https://cloud.google.com/asset-inventory/docs/asset-types), for example: `compute.googleapis.com/Instance`.
- **Kubernetes**: Use types shown under `KIND` from `kubectl api-resources`.
- **M365 / GitHub**: Leave empty due to lack of standardized types.
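The AWS CloudFormation-to-PascalCase rule described above can be sketched as a helper. This is a heuristic, hypothetical function: it handles simple types like `AWS::EC2::Instance`, but segments that are themselves multi-word (e.g., load balancer types) may still need manual adjustment against the Security Hub list:

```python
def cloudformation_to_resource_type(cfn_type: str) -> str:
    """Convert a CloudFormation type like 'AWS::EC2::Instance' to the
    PascalCase form used in check metadata ('AwsEc2Instance').

    Heuristic: capitalize the first letter of each '::' segment,
    lowercase the rest, and join the segments without separators.
    """
    return "".join(part.capitalize() for part in cfn_type.split("::"))
```

For example, `cloudformation_to_resource_type("AWS::EC2::Instance")` yields `AwsEc2Instance`, matching the example in the text above.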
#### Description
A concise, natural language explanation that **clearly describes what the finding means**, focusing on clarity and context rather than technical implementation details. Use simple paragraphs with line breaks if needed, but avoid sections, code blocks, or complex formatting. This field is limited to a maximum of 400 characters.
For detailed writing guidelines and common mistakes to avoid, see [Description Guidelines](./check-metadata-guidelines.md#description-guidelines).
#### Risk
A clear, natural language explanation of **why this finding poses a cybersecurity risk**. Focus on how it may impact confidentiality, integrity, or availability. If those do not apply, describe any relevant operational or financial risks. Use simple paragraphs with line breaks if needed, but avoid sections, code blocks, or complex formatting. Limit your explanation to 400 characters.
For detailed writing guidelines and common mistakes to avoid, see [Risk Guidelines](./check-metadata-guidelines.md#risk-guidelines).
#### RelatedUrl
*Deprecated*. Use `AdditionalURLs` to add URL references instead.
#### AdditionalURLs
???+ warning
URLs must be valid and not repeated.
A list of official documentation URLs for further reading. These should be authoritative sources that provide additional context, best practices, or detailed information about the security control being checked. Prefer official provider documentation, security standards, or well-established security resources. Avoid third-party blogs or unofficial sources unless they are highly reputable and directly relevant.
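Both constraints from the warning above (valid, non-repeated URLs) can be enforced with a short script. This is a hypothetical helper; "valid" here means a well-formed http(s) URL, not that the page is reachable:

```python
from urllib.parse import urlparse


def check_additional_urls(urls: list[str]) -> list[str]:
    """Return problems with an AdditionalURLs list: entries that are
    not well-formed http(s) URLs, or entries that are repeated.
    Illustrative sketch, not part of Prowler's own tooling."""
    problems = []
    seen = set()
    for url in urls:
        parsed = urlparse(url)
        # A well-formed URL needs a scheme and a network location.
        if parsed.scheme not in ("http", "https") or not parsed.netloc:
            problems.append(f"not a valid URL: {url}")
        if url in seen:
            problems.append(f"repeated URL: {url}")
        seen.add(url)
    return problems
```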
#### Remediation
Provides both code examples and best practice recommendations for addressing the security issue.
- **Code**: Contains remediation examples in different formats:
- **CLI**: Command-line interface commands to make the finding compliant in runtime.
- **NativeIaC**: Native Infrastructure as Code templates with an example of a compliant configuration. For now it applies to:
- **AWS**: CloudFormation YAML formatted code (do not use JSON format).
- **Azure**: Bicep formatted code (do not use ARM templates).
- **Terraform**: HashiCorp Configuration Language (HCL) code with an example of a compliant configuration.
- **Other**: Manual steps through web interfaces or other tools to make the finding compliant.
For detailed guidelines on writing remediation code, see [Remediation Code Guidelines](./check-metadata-guidelines.md#remediation-code-guidelines).
- **Recommendation**
- **Text**: Generic best practice guidance in natural language using Markdown format (maximum 400 characters). For writing guidelines, see [Recommendation Guidelines](./check-metadata-guidelines.md#recommendation-guidelines).
- **Url**: [Prowler Hub URL](https://hub.prowler.com/) of the check. This URL is always composed by `https://hub.prowler.com/check/<check_id>`.
#### Categories
One or more functional groupings used for execution filtering (e.g., `internet-exposed`). New categories can be defined simply by adding them to this field.
For the complete list of available categories, see [Categories Guidelines](./check-metadata-guidelines.md#categories-guidelines).
#### DependsOn
List of check IDs that, when compliant, make this check compliant as well or prevent it from producing any findings.
#### RelatedTo
List of check IDs that are conceptually related to this check, even if they do not share a technical dependency.
#### Notes
Any additional information not covered in the above fields.
### Python Model Reference
@@ -101,6 +101,7 @@ Prowler supports multiple output formats, allowing users to tailor findings pres
finding_dict["DESCRIPTION"] = finding.metadata.Description
finding_dict["RISK"] = finding.metadata.Risk
finding_dict["RELATED_URL"] = finding.metadata.RelatedUrl
finding_dict["ADDITIONAL_URLS"] = unroll_list(finding.metadata.AdditionalURLs)
finding_dict["REMEDIATION_RECOMMENDATION_TEXT"] = (
finding.metadata.Remediation.Recommendation.Text
)
@@ -0,0 +1,210 @@
# Renaming Checks in Prowler
To rename a check in Prowler, follow these steps when aligning with Check ID structure, fixing typos, or updating check logic that requires a new name.
When changing a Check ID, update the following files:
## Update Check Folder Structure
First, rename the check folder with the new check name.
**Path:** `prowler/providers/<provider>/services/<service>/<check_name>`
**Example:**
```
# Before
prowler/providers/aws/services/inspector2/inspector2_findings_exist/
# After
prowler/providers/aws/services/inspector2/inspector2_active_findings_exist/
```
Next, rename the file that contains the check logic. Inside that file, also rename the class name to match the new check name.
**Path:** `prowler/providers/<provider>/services/<service>/<check_name>/<check_name>.py`
**Example:**
```python
# Before
class inspector2_findings_exist(Check):
def execute(self):
findings = []
# ... check logic ...
# After
class inspector2_active_findings_exist(Check):
def execute(self):
findings = []
# ... check logic ...
```
Then, rename the file that contains the check metadata. Inside that file, add the old check name as an alias in the `CheckAliases` field and modify the `CheckID` to the new check name.
**Path:** `prowler/providers/<provider>/services/<service>/<check_name>/<check_name>.metadata.json`
**Example:**
```json
{
"Provider": "aws",
"CheckID": "inspector2_active_findings_exist",
"CheckTitle": "Check if Inspector2 active findings exist",
"CheckAliases": [
"inspector2_findings_exist"
],
"CheckType": [],
"ServiceName": "inspector2",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:inspector2:region:account-id/detector-id",
"Severity": "medium",
"ResourceType": "Other",
"Description": "This check determines if there are any active findings in your AWS account that have been detected by AWS Inspector2.",
"Risk": "Without using AWS Inspector, you may not be aware of all the security vulnerabilities in your AWS resources.",
"RelatedUrl": "https://docs.aws.amazon.com/inspector/latest/user/findings-understanding.html",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/Inspector/amazon-inspector-findings.html",
"Terraform": ""
},
"Recommendation": {
"Text": "Review the active findings from Inspector2",
"Url": "https://docs.aws.amazon.com/inspector/latest/user/what-is-inspector.html"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
}
```
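The `CheckAliases` field shown above lets the old check ID keep resolving to the renamed check. A resolver along these lines (hypothetical sketch; Prowler's actual loader may differ) illustrates the idea:

```python
def build_alias_map(metadata_list: list[dict]) -> dict[str, str]:
    """Map every alias (old check ID) to the current CheckID.
    Hypothetical sketch of how CheckAliases could be resolved."""
    alias_map = {}
    for metadata in metadata_list:
        for alias in metadata.get("CheckAliases", []):
            alias_map[alias] = metadata["CheckID"]
    return alias_map


def resolve_check_id(requested: str, alias_map: dict[str, str]) -> str:
    # Fall back to the requested ID when it is not a known alias.
    return alias_map.get(requested, requested)
```

With this in place, a user running the old `inspector2_findings_exist` ID would transparently execute the renamed check.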
## Update Test Files
Second, rename the tests folder with the new check name.
**Path:** `tests/providers/<provider>/services/<service>/<check_name>`
**Example:**
```
# Before
tests/providers/aws/services/inspector2/inspector2_findings_exist/
# After
tests/providers/aws/services/inspector2/inspector2_active_findings_exist/
```
Next, rename the test file that contains all the unit tests. Inside that file, replace all occurrences of the old check name with the new check name.
**Path:** `tests/providers/<provider>/services/<service>/<check_name>/<check_name>_test.py`
**Example:**
```python
# Before
from prowler.providers.aws.services.inspector2.inspector2_findings_exist.inspector2_findings_exist import (
inspector2_findings_exist,
)
class Test_inspector2_findings_exist:
def test_inspector2_no_findings(self):
# ... test logic ...
def test_inspector2_with_findings(self):
# ... test logic ...
# After
from prowler.providers.aws.services.inspector2.inspector2_active_findings_exist.inspector2_active_findings_exist import (
inspector2_active_findings_exist,
)
class Test_inspector2_active_findings_exist:
def test_inspector2_no_findings(self):
# ... test logic ...
def test_inspector2_with_findings(self):
# ... test logic ...
```
**Important:** Update all references to the old check name in the test file, including:
- Import statements at the top of the file
- Class name in the test class
- Any function calls to the check
- Any string references to the check name
- Mock patches that reference the check
**Complete example of all changes needed in test files:**
```python
# Before
from prowler.providers.aws.services.inspector2.inspector2_findings_exist.inspector2_findings_exist import (
inspector2_findings_exist,
)
class Test_inspector2_findings_exist:
def test_inspector2_no_findings(self):
# Mock setup
with mock.patch(
"prowler.providers.aws.services.inspector2.inspector2_findings_exist.inspector2_findings_exist.inspector2_client",
inspector2_client,
):
check = inspector2_findings_exist()
result = check.execute()
assert len(result) == 1
assert result[0].status == "PASS"
assert "No active findings found" in result[0].status_extended
# After
from prowler.providers.aws.services.inspector2.inspector2_active_findings_exist.inspector2_active_findings_exist import (
inspector2_active_findings_exist,
)
class Test_inspector2_active_findings_exist:
def test_inspector2_no_findings(self):
# Mock setup
with mock.patch(
"prowler.providers.aws.services.inspector2.inspector2_active_findings_exist.inspector2_active_findings_exist.inspector2_client",
inspector2_client,
):
check = inspector2_active_findings_exist()
result = check.execute()
assert len(result) == 1
assert result[0].status == "PASS"
assert "No active findings found" in result[0].status_extended
```
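After updating the test files, a quick scan can confirm that no stale references to the old name remain. This is an illustrative helper, not part of Prowler's tooling:

```python
from pathlib import Path


def find_stale_references(root: str, old_name: str) -> list[str]:
    """Return paths of Python files under `root` that still mention
    the old check name. Useful as a final verification after a rename."""
    stale = []
    for path in Path(root).rglob("*.py"):
        if old_name in path.read_text(encoding="utf-8"):
            stale.append(str(path))
    return sorted(stale)
```

Running it over both `prowler/` and `tests/` with the old check name should return an empty list once the rename is complete.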
## Update Compliance Mappings
Finally, replace all occurrences of the old check name with the new check name inside any compliance framework where the check is mapped.
- `prowler/compliance/<service>/<compliance_where_the_check_is_mapped>.json`
**Example:**
```json
{
"Framework": "CIS",
"Version": "2.0",
"Provider": "AWS",
"Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance for configuring security options for a subset of Amazon Web Services.",
"Requirements": [
{
"Id": "4.1",
"Description": "Ensure a log metric filter and alarm exist for unauthorized API calls",
"Checks": [
"inspector2_active_findings_exist"
],
"Attributes": [
{
"Section": "4 Logging and Monitoring",
"Profile": "Level 1",
"AssessmentStatus": "Automated",
"Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms."
}
]
}
]
}
```
The development compliance file may contain examples of the check being renamed. If so, modify this file as well:
- `api/src/backend/api/fixtures/dev/7_dev_compliance.json`
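The replacement across compliance files can be sketched as a script. This is an illustrative helper based on the `Requirements`/`Checks` structure shown above; review the resulting diffs before committing:

```python
import json
from pathlib import Path


def rename_check_in_compliance(compliance_dir: str, old_id: str, new_id: str) -> int:
    """Replace old_id with new_id in the 'Checks' lists of every
    compliance JSON under compliance_dir. Returns the number of
    files updated."""
    updated = 0
    for path in Path(compliance_dir).rglob("*.json"):
        framework = json.loads(path.read_text(encoding="utf-8"))
        changed = False
        for requirement in framework.get("Requirements", []):
            checks = requirement.get("Checks", [])
            if old_id in checks:
                requirement["Checks"] = [new_id if c == old_id else c for c in checks]
                changed = True
        if changed:
            path.write_text(json.dumps(framework, indent=2) + "\n", encoding="utf-8")
            updated += 1
    return updated
```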
@@ -2,7 +2,11 @@
Prowler requires AWS credentials to function properly. Authentication is available through the following methods:
- Static Credentials
- Assumed Role
## Required Permissions
To ensure full functionality, attach the following AWS managed policies to the designated user or role:
- `arn:aws:iam::aws:policy/SecurityAudit`
@@ -13,37 +17,114 @@ To ensure full functionality, attach the following AWS managed policies to the d
For certain checks, additional read-only permissions are required. Attach the following custom policy to your role: [prowler-additions-policy.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-additions-policy.json)
## Configure AWS Credentials
## Assume Role (Recommended)
Use one of the following methods to authenticate:
This method grants permanent access and is the recommended setup for production environments.
```console
aws configure
```
=== "CloudFormation"
or
1. Download the [Prowler Scan Role Template](https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/master/permissions/templates/cloudformation/prowler-scan-role.yml)
```console
export AWS_ACCESS_KEY_ID="ASXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
export AWS_SESSION_TOKEN="XXXXXXXXX"
```
![Prowler Scan Role Template](./img/prowler-scan-role-template.png)
These credentials must be associated with a user or role with the necessary permissions to perform security checks.
![Download Role Template](./img/download-role-template.png)
2. Open the [AWS Console](https://console.aws.amazon.com), search for **CloudFormation**
![CloudFormation Search](./img/cloudformation-nav.png)
## AWS Profiles
3. Go to **Stacks** and click "Create stack" > "With new resources (standard)"
Specify a custom AWS profile using the following command:
![Create Stack](./img/create-stack.png)
```console
prowler aws -p/--profile <profile_name>
```
4. In **Specify Template**, choose "Upload a template file" and select the downloaded file
## Multi-Factor Authentication (MFA)
![Upload a template file](./img/upload-template-file.png)
![Upload file from downloads](./img/upload-template-from-downloads.png)
For IAM entities requiring Multi-Factor Authentication (MFA), use the `--mfa` flag. Prowler prompts for the following values to initiate a new session:
5. Click "Next", provide a stack name and the **External ID** shown in the Prowler Cloud setup screen
- **ARN of your MFA device**
- **TOTP (Time-Based One-Time Password)**
![External ID](./img/prowler-cloud-external-id.png)
![Stack Data](./img/fill-stack-data.png)
!!! info
An **External ID** is required when assuming the *ProwlerScan* role to comply with AWS [confused deputy prevention](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).
6. Acknowledge the IAM resource creation warning and proceed
![Stack Creation Second Step](./img/stack-creation-second-step.png)
7. Click "Submit" to deploy the stack
![Click on submit](./img/submit-third-page.png)
=== "Terraform"
To provision the scan role using Terraform:
1. Run the following commands:
```bash
terraform init
terraform plan
terraform apply
```
2. During `plan` and `apply`, provide the **External ID** when prompted, which is available in the Prowler Cloud or Prowler App UI:
![Get External ID](./img/get-external-id-prowler-cloud.png)
> 💡 Note: Terraform will use the AWS credentials of the default profile.
---
## Credentials
=== "Long term credentials"
1. Go to the [AWS Console](https://console.aws.amazon.com), open **CloudShell**
![AWS CloudShell](./img/aws-cloudshell.png)
2. Run:
```bash
aws iam create-access-key
```
3. Copy the output containing:
- `AccessKeyId`
- `SecretAccessKey`
![CloudShell Output](./img/cloudshell-output.png)
=== "Short term credentials (Recommended)"
Use the [AWS Access Portal](https://docs.aws.amazon.com/singlesignon/latest/userguide/howtogetcredentials.html) or the CLI:
1. Retrieve short-term credentials for the IAM identity using this command:
```bash
aws sts get-session-token --duration-seconds 900
```
???+ note
Check the AWS documentation [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/sts_example_sts_GetSessionToken_section.html)
2. Copy the output containing:
- `AccessKeyId`
- `SecretAccessKey`
- `SessionToken`
> Sample output:
```json
{
"Credentials": {
"AccessKeyId": "ASIAIOSFODNN7EXAMPLE",
"SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYzEXAMPLEKEY",
"SessionToken": "AQoEXAMPLEH4aoAH0gNCAPyJxz4BlCFFxWNE1OPTgk5TthT+FvwqnKwRcOIfrRh3c/LTo6UDdyJwOOvEVPvLXCrrrUtdnniCEXAMPLE/IvU1dYUg2RVAJBanLiHb4IgRmpRV3zrkuWJOgQs8IZZaIv2BXIa2R4OlgkBN9bkUDNCJiBeb/AXlzBBko7b15fjrBs2+cTQtpZ3CYWFXG8C5zqx37wnOE49mRl/+OtkIKGO7fAE",
"Expiration": "2020-05-19T18:06:10+00:00"
}
}
```
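The copy step can be automated with a small parser for the JSON output shown above. This is an illustrative helper (treat real credential values with care, and never commit them):

```python
import json


def credentials_to_exports(sts_output: str) -> str:
    """Turn `aws sts get-session-token` JSON output into shell export
    lines for the three AWS credential environment variables."""
    creds = json.loads(sts_output)["Credentials"]
    key_id = creds["AccessKeyId"]
    secret = creds["SecretAccessKey"]
    token = creds["SessionToken"]
    return (
        f'export AWS_ACCESS_KEY_ID="{key_id}"\n'
        f'export AWS_SECRET_ACCESS_KEY="{secret}"\n'
        f'export AWS_SESSION_TOKEN="{token}"'
    )
```

Piping the STS output through this (or an equivalent `jq` one-liner) avoids manual copy-and-paste mistakes with the three values.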
@@ -1,39 +1,31 @@
# Getting Started with AWS on Prowler Cloud/App
# Getting Started With AWS on Prowler
## Prowler App
<iframe width="560" height="380" src="https://www.youtube-nocookie.com/embed/RPgIWOCERzY" title="Prowler Cloud Onboarding AWS" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="1"></iframe>
Set up your AWS account to enable security scanning using Prowler Cloud/App.
> Walkthrough video of onboarding an AWS account using an assumed role.
## Requirements
To configure your AWS account, you'll need:
1. Access to Prowler Cloud/App
2. Properly configured AWS credentials (either static or via an assumed IAM role)
---
## Step 1: Get Your AWS Account ID
### Step 1: Get Your AWS Account ID
1. Log in to the [AWS Console](https://console.aws.amazon.com)
2. Locate your AWS account ID in the top-right dropdown menu
![Account ID detail](./img/aws-account-id.png)
---
## Step 2: Access Prowler Cloud/App
### Step 2: Access Prowler Cloud or Prowler App
1. Navigate to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](../prowler-app.md)
2. Go to `Configuration` > `Cloud Providers`
2. Go to "Configuration" > "Cloud Providers"
![Cloud Providers Page](../img/cloud-providers-page.png)
3. Click `Add Cloud Provider`
3. Click "Add Cloud Provider"
![Add a Cloud Provider](../img/add-cloud-provider.png)
4. Select `Amazon Web Services`
4. Select "Amazon Web Services"
![Select AWS Provider](./img/select-aws.png)
@@ -41,96 +33,39 @@ To configure your AWS account, you'll need:
![Add account ID](./img/add-account-id.png)
6. Choose your preferred authentication method (next step)
6. Choose the preferred authentication method (next step)
![Select auth method](./img/select-auth-method.png)
---
## Step 3: Set Up AWS Authentication
### Step 3: Set Up AWS Authentication
Before proceeding, choose your preferred authentication mode:
Before proceeding, choose the preferred authentication mode:
Credentials
**Credentials**
* Quick scan as current user
* No extra setup
* Credentials time out
* Quick scan as current user
* No extra setup
* Credentials time out
Assumed Role
**Assumed Role**
* Preferred Setup
* Permanent Credentials
* Requires access to create role
* Preferred Setup ✅
* Permanent Credentials ✅
* Requires access to create role ❌
---
### 🔐 Assume Role (Recommended)
![Assume Role Overview](./img/assume-role-overview.png)
#### Assume Role (Recommended)
This method grants permanent access and is the recommended setup for production environments.
=== "CloudFormation"
![Assume Role Overview](img/assume-role-overview.png)
1. Download the [Prowler Scan Role Template](https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/master/permissions/templates/cloudformation/prowler-scan-role.yml)
For detailed instructions on how to create the role, see [Authentication > Assume Role](./authentication.md#assume-role-recommended).
![Prowler Scan Role Template](./img/prowler-scan-role-template.png)
![Download Role Template](./img/download-role-template.png)
2. Open the [AWS Console](https://console.aws.amazon.com), search for **CloudFormation**
![CloudFormation Search](./img/cloudformation-nav.png)
3. Go to **Stacks** and click `Create stack` > `With new resources (standard)`
![Create Stack](./img/create-stack.png)
4. In **Specify Template**, choose `Upload a template file` and select the downloaded file
![Upload a template file](./img/upload-template-file.png)
![Upload file from downloads](./img/upload-template-from-downloads.png)
5. Click `Next`, provide a stack name and the **External ID** shown in the Prowler Cloud setup screen
![External ID](./img/prowler-cloud-external-id.png)
![Stack Data](./img/fill-stack-data.png)
!!! info
An **External ID** is required when assuming the *ProwlerScan* role to comply with AWS [confused deputy prevention](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).
6. Acknowledge the IAM resource creation warning and proceed
![Stack Creation Second Step](./img/stack-creation-second-step.png)
7. Click `Submit` to deploy the stack
![Click on submit](./img/submit-third-page.png)
=== "Terraform"
To provision the scan role using Terraform:
1. Run the following commands:
```bash
terraform init
terraform plan
terraform apply
```
2. During `plan` and `apply`, you will be prompted for the **External ID**, which is available in the Prowler Cloud/App UI:
![Get External ID](./img/get-external-id-prowler-cloud.png)
> 💡 Note: Terraform will use the AWS credentials of your default profile.
---
### Finish Setup with Assume Role
8. Once the role is created, go to the **IAM Console**, click on the `ProwlerScan` role to open its details:
8. Once the role is created, go to the **IAM Console**, click on the "ProwlerScan" role to open its details:
![ProwlerScan role info](./img/prowler-scan-pre-info.png)
@@ -138,80 +73,69 @@ This method grants permanent access and is the recommended setup for production
![New Role Info](./img/get-role-arn.png)
10. Paste the ARN into the corresponding field in Prowler Cloud/App
10. Paste the ARN into the corresponding field in Prowler Cloud or Prowler App
![Input the Role ARN](./img/paste-role-arn-prowler.png)
11. Click `Next`, then `Launch Scan`
11. Click "Next", then "Launch Scan"
![Next button in Prowler Cloud](./img/next-button-prowler-cloud.png)
![Launch Scan](./img/launch-scan-button-prowler-cloud.png)
---
### 🔑 Credentials (Static Access Keys)
#### Credentials (Static Access Keys)
You can also configure your AWS account using static credentials (not recommended for long-term use):
AWS accounts can also be configured using static credentials (not recommended for long-term use):
![Connect via credentials](./img/connect-via-credentials.png)
For detailed instructions on how to create the credentials, see [Authentication > Credentials](./authentication.md#credentials).
1. Complete the form in Prowler Cloud or Prowler App and click "Next"
![Filled credentials page](./img/prowler-cloud-credentials-next.png)
2. Click "Launch Scan"
![Launch Scan](./img/launch-scan-button-prowler-cloud.png)
---
## Prowler CLI
### Configure AWS Credentials
To authenticate with AWS, use one of the following methods:
```console
aws configure
```
or
```console
export AWS_ACCESS_KEY_ID="ASXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
export AWS_SESSION_TOKEN="XXXXXXXXX"
```
These credentials must be associated with a user or role with the necessary permissions to perform security checks.
More details on Assume Role settings from the CLI are available on the [Assume Role](./role-assumption.md) page.
### AWS Profiles
To use a custom AWS profile, specify it with the following command:
```console
prowler aws -p/--profile <profile_name>
```
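Profiles are read from the standard AWS shared config files, so an assume-role profile can carry the role ARN and External ID for you. A minimal sketch of `~/.aws/config` (the profile name, account ID, and region below are placeholders):

```ini
# Hypothetical profile for scanning via the ProwlerScan role.
[profile prowler-audit]
region = eu-west-1
role_arn = arn:aws:iam::123456789012:role/ProwlerScan
source_profile = default
external_id = <external-id>
```

With a profile like this in place, the scan would be launched as `prowler aws --profile prowler-audit`.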
### Multi-Factor Authentication (MFA)
For IAM entities requiring Multi-Factor Authentication (MFA), use the `--mfa` flag. Prowler prompts for the following values to initiate a new session:
- **ARN of your MFA device**
- **TOTP (time-based one-time password)**
# AWS Assume Role in Prowler (CLI)
## Authentication Overview
# GCP Authentication in Prowler
Prowler for Google Cloud supports multiple authentication methods. To use a specific method, configure the appropriate credentials during execution:
- [**User Credentials** (Application Default Credentials)](#application-default-credentials-user-credentials)
- [**Service Account Key File**](#service-account-key-file)
- [**Access Token**](#access-token)
- [**Service Account Impersonation**](#service-account-impersonation)
## Required Permissions
Prowler for Google Cloud requires the following permissions:
???+ note
`prowler` will scan the GCP project associated with the credentials.
## Credentials lookup order
Prowler follows the same credential search process as [Google authentication libraries](https://cloud.google.com/docs/authentication/application-default-credentials#search_order), checking credentials in this order:
1. [`GOOGLE_APPLICATION_CREDENTIALS` environment variable](https://cloud.google.com/docs/authentication/application-default-credentials#GAC)
2. [`CLOUDSDK_AUTH_ACCESS_TOKEN` + optional `GOOGLE_CLOUD_PROJECT`](https://cloud.google.com/sdk/gcloud/reference/auth/print-access-token)
3. [User credentials set up by using the Google Cloud CLI](https://cloud.google.com/docs/authentication/application-default-credentials#personal)
4. [Attached service account (e.g., Cloud Run, GCE, Cloud Functions)](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)
???+ note
The credentials must belong to a user or service account with the necessary permissions.
To ensure full access, assign the `roles/reader` IAM role to the identity being used.
???+ note
Prowler will use the enabled Google Cloud APIs to get the information needed to perform the checks.
## Application Default Credentials (User Credentials)
This method uses the Google Cloud CLI to authenticate and is suitable for development and testing environments.
### Setup Application Default Credentials
1. In the [GCP Console](https://console.cloud.google.com/), click on "Activate Cloud Shell"
![Activate Cloud Shell](./img/access-console.png)
2. Click "Authorize Cloud Shell"
![Authorize Cloud Shell](./img/authorize-cloud-shell.png)
3. Run the following command:
```bash
gcloud auth application-default login
```
- Type `Y` when prompted
![Run Gcloud Auth](./img/run-gcloud-auth.png)
4. Open the authentication URL provided in a browser and select your Google account
![Choose the account](./img/take-account-email.png)
5. Follow the steps to obtain the authentication code
![Copy auth code](./img/copy-auth-code.png)
6. Paste the authentication code back in Cloud Shell
![Enter Auth Code](./img/enter-auth-code.png)
7. Use `cat <file_name>` to view the temporary credentials file
![Get the FileName](./img/get-temp-file-credentials.png)
8. Extract the following values for Prowler Cloud/App:
- `client_id`
- `client_secret`
- `refresh_token`
![Get the values](./img/get-needed-values-auth.png)
### Using with Prowler CLI
Once application default credentials are set up, run Prowler directly:
```console
prowler gcp --project-ids <project-id>
```
## Service Account Key File
This method uses a service account with a downloaded key file for authentication.
### Create Service Account and Key
1. Go to the [Service Accounts page](https://console.cloud.google.com/iam-admin/serviceaccounts) in the GCP Console
2. Click "Create Service Account"
3. Fill in the service account details and click "Create and Continue"
4. Grant the service account the "Reader" role
5. Click "Done"
6. Find your service account in the list and click on it
7. Go to the "Keys" tab
8. Click "Add Key" > "Create new key"
9. Select "JSON" and click "Create"
10. Save the downloaded key file securely
### Using with Prowler CLI
Set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable:
```console
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
prowler gcp --project-ids <project-id>
```
## Access Token
For existing access tokens (e.g., generated with `gcloud auth print-access-token`), run Prowler with:
```console
export CLOUDSDK_AUTH_ACCESS_TOKEN=<access-token>
export GOOGLE_CLOUD_PROJECT=<project-id>
prowler gcp --project-ids <project-id>
```
## Service Account Impersonation
To impersonate a GCP service account, use the `--impersonate-service-account` argument followed by the service account email:
```console
prowler gcp --impersonate-service-account <service-account-email>
```
This command leverages the default credentials to impersonate the specified service account.
### Prerequisites for Impersonation
The identity running Prowler must have the following permission on the target service account:
- `roles/iam.serviceAccountTokenCreator`
Or the more specific permission:
- `iam.serviceAccounts.generateAccessToken`
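If the role still needs to be granted, a sketch with the gcloud CLI (the service account email and member below are placeholders):

```shell
# Grant the caller the Token Creator role on the target service account only,
# rather than project-wide. Both identifiers are placeholders.
gcloud iam service-accounts add-iam-policy-binding \
    prowler-scan@<project-id>.iam.gserviceaccount.com \
    --member="user:<your-email>" \
    --role="roles/iam.serviceAccountTokenCreator"
```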
# Getting Started With GCP on Prowler
Set up your GCP project to enable security scanning using Prowler Cloud/App.
## Prowler App
---
### Step 1: Get the GCP Project ID
1. Go to the [GCP Console](https://console.cloud.google.com/)
2. Locate the Project ID on the welcome screen
![Get the Project ID](./img/project-id-console.png)
---
### Step 2: Access Prowler Cloud or Prowler App
1. Navigate to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](../prowler-app.md)
2. Go to "Configuration" > "Cloud Providers"
![Cloud Providers Page](../img/cloud-providers-page.png)
3. Click "Add Cloud Provider"
![Add a Cloud Provider](../img/add-cloud-provider.png)
4. Select "Google Cloud Platform"
![Select GCP](./img/select-gcp.png)
5. Add the Project ID and optionally provide a provider alias, then click "Next"
![Add Project ID](./img/add-project-id.png)
---
### Step 3: Set Up GCP Authentication
Choose the preferred authentication mode before proceeding:
**User Credentials (Application Default Credentials)**
* Quick scan as current user
* Uses Google Cloud CLI authentication
* Credentials may time out
**Service Account Key File**
* Authenticates as a service identity
* Stable and auditable
* Recommended for production
For detailed instructions on how to set up authentication, see [Authentication](./authentication.md).
6. Once credentials are configured, return to Prowler App and enter the required values:
For "Service Account Key":
- `Service Account Key JSON`
For "Application Default Credentials":
- `client_id`
- `client_secret`
- `refresh_token`
![Enter the Credentials](./img/enter-credentials-prowler-cloud.png)
7. Click "Next", then "Launch Scan"
![Launch Scan GCP](./img/launch-scan.png)
---
## Prowler CLI
### Credentials Lookup Order
Prowler follows the same credential search process as [Google authentication libraries](https://cloud.google.com/docs/authentication/application-default-credentials#search_order), checking credentials in this order:
1. [`GOOGLE_APPLICATION_CREDENTIALS` environment variable](https://cloud.google.com/docs/authentication/application-default-credentials#GAC)
2. [`CLOUDSDK_AUTH_ACCESS_TOKEN` + optional `GOOGLE_CLOUD_PROJECT`](https://cloud.google.com/sdk/gcloud/reference/auth/print-access-token)
3. [User credentials set up by using the Google Cloud CLI](https://cloud.google.com/docs/authentication/application-default-credentials#personal)
4. [Attached service account (e.g., Cloud Run, GCE, Cloud Functions)](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)
???+ note
The credentials must belong to a user or service account with the necessary permissions.
For detailed instructions on how to set the permissions, see [Authentication > Required Permissions](./authentication.md#required-permissions).
???+ note
Prowler will use the enabled Google Cloud APIs to get the information needed to perform the checks.
### Configure GCP Credentials
To authenticate with GCP, use one of the following methods:
```console
gcloud auth application-default login
```
or set the credentials file path:
```console
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json"
```
These credentials must belong to a user or service account with the necessary permissions to perform security checks.
For more authentication details, see the [Authentication](./authentication.md) page.
### Project Specification
To scan specific projects, specify them with the following command:
```console
prowler gcp --project-ids <project-id-1> <project-id-2>
```
### Service Account Impersonation
For service account impersonation, use the `--impersonate-service-account` flag:
```console
prowler gcp --impersonate-service-account <service-account-email>
```
More details on authentication methods are available on the [Authentication](./authentication.md) page.
- Note the public key and private key
- Store credentials securely
For more details about MongoDB Atlas, see the [MongoDB Atlas Tutorial](./getting-started-mongodbatlas.md).
# Jira Integration
Prowler App can export security findings to Jira, Atlassian's work item tracking and project management platform. This guide shows how to configure and manage Jira integrations to streamline security incident management and team collaboration across security workflows.
Integrating Prowler App with Jira provides:
* **Streamlined management:** Convert security findings directly into actionable Jira work items
* **Enhanced team collaboration:** Leverage existing project management workflows for security remediation
* **Automated ticket creation:** Reduce manual effort in tracking and assigning security work items
## How It Works
When enabled and configured:
1. Security findings can be manually sent to Jira from the Findings table.
2. Each finding creates a Jira work item with all the check's metadata, including guidance on how to remediate it.
## Configuration
To configure Jira integration in Prowler App:
1. Navigate to **Integrations** in the Prowler App interface
2. Locate the **Jira** card and click **Manage**, then select **Add integration**
![Integrations tab](./img/jira/integrations-tab.png)
3. Complete the integration settings:
* **Jira domain:** Enter the Jira domain (e.g., from `https://your-domain.atlassian.net` -> `your-domain`)
* **Email:** Your Jira account email
* **API Token:** API token with the following scopes: `read:jira-user`, `read:jira-work`, `write:jira-work`
![Connection settings](./img/jira/connection-settings.png)
!!! note "Generate Jira API Token"
To generate a Jira API token, visit: https://id.atlassian.com/manage-profile/security/api-tokens
Once configured successfully, the integration is ready to send findings to Jira.
## Sending Findings to Jira
### Manual Export
To manually send individual findings to Jira:
1. Navigate to the **Findings** section in Prowler App
2. Select one finding you want to export
3. Click the action button on the table row and select **Send to Jira**
4. Select the Jira integration and project
5. Click **Send to Jira**
![Send to Jira modal](./img/jira/send-to-jira-modal.png)
## Integration Status
Monitor and manage your Jira integrations through the management interface:
1. Review configured integrations in the integrations dashboard
2. Each integration displays:
- **Connection Status:** Connected or Disconnected indicator
- **Instance Information:** Jira domain and last checked timestamp
### Actions
Each Jira integration provides management actions through dedicated buttons:
| Button | Purpose | Available Actions | Notes |
|--------|---------|------------------|-------|
| **Test** | Verify integration connectivity | • Test Jira API access<br/>• Validate credentials<br/>• Check project permissions<br/>• Verify work item creation capability | Results displayed in notification message |
| **Credentials** | Update authentication settings | • Change API token<br/>• Update email<br/>• Update Jira domain | Click "Update Credentials" to save changes |
| **Enable/Disable** | Toggle integration status | • Enable or disable integration | Status change takes effect immediately |
| **Delete** | Remove integration permanently | • Permanently delete integration<br/>• Remove all configuration data | ⚠️ **Cannot be undone** - confirm before deleting |
## Troubleshooting
### Connection test fails
- Verify Jira instance domain is correct and accessible
- Confirm API token or credentials are valid
- Ensure API access is enabled in Jira settings and the needed scopes are granted
### Check task status (API)
If the Jira issue does not appear in your Jira project, follow these steps to verify the export task status via the API.
!!! note
Replace `http://localhost:8080` with the base URL where your Prowler API is accessible (for example, `https://api.yourdomain.com`).
1) Get an access token (replace email and password):
```
curl --location 'http://localhost:8080/api/v1/tokens' \
--header 'Content-Type: application/vnd.api+json' \
--header 'Accept: application/vnd.api+json' \
--data-raw '{
"data": {
"type": "tokens",
"attributes": {
"email": "YOUR_USER_EMAIL",
"password": "YOUR_USER_PASSWORD"
}
}
}'
```
2) List tasks filtered by the Jira task (`integration-jira`) using the access token:
```
curl --location --globoff 'http://localhost:8080/api/v1/tasks?filter[name]=integration-jira' \
--header 'Accept: application/vnd.api+json' \
--header 'Authorization: Bearer ACCESS_TOKEN' | jq
```
!!! note
If you don't have `jq` installed, run the command without `| jq`.
3) Share the output so we can help. A typical result will look like:
```
{
"links": {
"first": "https://api.dev.prowler.com/api/v1/tasks?page%5Bnumber%5D=1",
"last": "https://api.dev.prowler.com/api/v1/tasks?page%5Bnumber%5D=122",
"next": "https://api.dev.prowler.com/api/v1/tasks?page%5Bnumber%5D=2",
"prev": null
},
"data": [
{
"type": "tasks",
"id": "9a79ab21-39ae-4161-9f6e-2844eb0da0fb",
"attributes": {
"inserted_at": "2025-09-09T08:11:38.643620Z",
"completed_at": "2025-09-09T08:11:41.264285Z",
"name": "integration-jira",
"state": "completed",
"result": {
"created_count": 0,
"failed_count": 1
},
"task_args": {
"integration_id": "a476c2c0-0a00-4720-bfb9-286e9eb5c7bd",
"project_key": "PRWLR",
"issue_type": "Task",
"finding_ids": [
"01992d53-3af7-7759-be48-68fc405391e6"
]
},
"metadata": {}
}
},
{
"type": "tasks",
"id": "5f525135-9d37-4b01-9ac8-afeaf8793eac",
"attributes": {
"inserted_at": "2025-09-09T08:07:22.184164Z",
"completed_at": "2025-09-09T08:07:24.909185Z",
"name": "integration-jira",
"state": "completed",
"result": {
"created_count": 1,
"failed_count": 0
},
"task_args": {
"integration_id": "a476c2c0-0a00-4720-bfb9-286e9eb5c7bd",
"project_key": "JIRA",
"issue_type": "Task",
"finding_ids": [
"0198f018-8b7b-7154-a509-1a2b1ffba02d"
]
},
"metadata": {}
}
}
],
"meta": {
"pagination": {
"page": 1,
"pages": 122,
"count": 1214
},
"version": "v1"
}
}
```
How to read it:
- "created_count": number of Jira issues successfully created.
- "failed_count": number of Jira issues that could not be created. If `failed_count > 0` or the issue does not appear in Jira, please contact us so we can assist, since detailed logs are not yet available through the UI.
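As a quick offline check (the file name is an example), the result counters can be pulled out of a saved response with standard shell tools, without `jq`:

```shell
# tasks.json: the response from the tasks call above, saved locally.
# This tiny sample mirrors only the "result" fields shown in the example.
cat > tasks.json <<'EOF'
{"result": {"created_count": 0, "failed_count": 1}}
{"result": {"created_count": 1, "failed_count": 0}}
EOF

# List every failed_count; any non-zero value means an issue was not created.
grep -o '"failed_count": [0-9]*' tasks.json
```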
+1 -1
View File
@@ -207,7 +207,7 @@ Follow these steps to remove a role of your account:
Assign administrative permissions by selecting from the following options:
**Invite and Manage Users:** Invite new users and manage existing ones.<br>
**Manage Account:** Adjust account settings and delete users.<br>
**Manage Account:** Adjust account settings, delete users and read/manage users permissions.<br>
**Manage Scans:** Run and review scans.<br>
**Manage Cloud Providers:** Add or modify connected cloud providers.<br>
**Manage Integrations:** Add or modify the Prowler Integrations.
+1 -1
View File
@@ -2,7 +2,7 @@
This page provides instructions for creating and configuring a Microsoft Entra ID (formerly Azure AD) application to use SAML SSO with Prowler App.
You can find a walkthrough video [here](https://youtu.be/UtcjDh5cAjI).
You can find a walkthrough video [here](https://www.youtube.com/watch?v=zegqm55oJVk).
## Creating and Configuring the Enterprise Application
+1 -1
View File
@@ -1,6 +1,6 @@
# Prowler App
**Prowler App** is a user-friendly interface for Prowler CLI, providing a visual dashboard to monitor your cloud security posture. This tutorial will guide you through setting up and using Prowler App.
**Prowler App** is a web application that simplifies running Prowler. This tutorial will guide you through setting up and using it.
## Accessing Prowler App and API Documentation
+3
View File
@@ -106,6 +106,7 @@ The CSV format follows a standardized structure across all providers. The follow
- RELATED\_TO
- NOTES
- PROWLER\_VERSION
- ADDITIONAL\_URLS
#### CSV Headers Mapping
@@ -163,6 +164,7 @@ The JSON-OCSF output format implements the [Detection Finding](https://schema.oc
"depends_on": [],
"related_to": [],
"notes": "",
"additional_urls": [],
"compliance": {
"MITRE-ATTACK": [
"T1552"
@@ -398,6 +400,7 @@ The following is the mapping between the native JSON and the Detection Finding f
| Categories| unmapped.categories
| DependsOn| unmapped.depends\_on
| RelatedTo| unmapped.related\_to
| AdditionalURLs| unmapped.additional\_urls
| Notes| unmapped.notes
| Profile| _Not mapped yet_
| AccountId| cloud.account.uid
+5 -5
View File
@@ -1,5 +1,5 @@
AUTH_METHOD;TIMESTAMP;ACCOUNT_UID;ACCOUNT_NAME;ACCOUNT_EMAIL;ACCOUNT_ORGANIZATION_UID;ACCOUNT_ORGANIZATION_NAME;ACCOUNT_TAGS;FINDING_UID;PROVIDER;CHECK_ID;CHECK_TITLE;CHECK_TYPE;STATUS;STATUS_EXTENDED;MUTED;SERVICE_NAME;SUBSERVICE_NAME;SEVERITY;RESOURCE_TYPE;RESOURCE_UID;RESOURCE_NAME;RESOURCE_DETAILS;RESOURCE_TAGS;PARTITION;REGION;DESCRIPTION;RISK;RELATED_URL;REMEDIATION_RECOMMENDATION_TEXT;REMEDIATION_RECOMMENDATION_URL;REMEDIATION_CODE_NATIVEIAC;REMEDIATION_CODE_TERRAFORM;REMEDIATION_CODE_CLI;REMEDIATION_CODE_OTHER;COMPLIANCE;CATEGORIES;DEPENDS_ON;RELATED_TO;NOTES;PROWLER_VERSION
<auth_method>;2025-02-14 14:27:03.913874;<account_uid>;;;;;;<finding_uid>;aws;accessanalyzer_enabled;Check if IAM Access Analyzer is enabled;IAM;FAIL;IAM Access Analyzer in account <account_uid> is not enabled.;False;accessanalyzer;;low;Other;<resource_uid>;<resource_name>;;;aws;<region>;Check if IAM Access Analyzer is enabled;AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. This lets you identify unintended access to your resources and data, which is a security risk. IAM Access Analyzer uses a form of mathematical analysis called automated reasoning, which applies logic and mathematical inference to determine all possible access paths allowed by a resource policy.;https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html;Enable IAM Access Analyzer for all accounts, create analyzer and take action over it is recommendations (IAM Access Analyzer is available at no additional cost).;https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html;;;aws accessanalyzer create-analyzer --analyzer-name <NAME> --type <ACCOUNT|ORGANIZATION>;;CIS-1.4: 1.20 | CIS-1.5: 1.20 | KISA-ISMS-P-2023: 2.5.6, 2.6.4, 2.8.1, 2.8.2 | CIS-2.0: 1.20 | KISA-ISMS-P-2023-korean: 2.5.6, 2.6.4, 2.8.1, 2.8.2 | AWS-Account-Security-Onboarding: Enabled security services, Create analyzers in each active regions, Verify that events are present in SecurityHub aggregated view | CIS-3.0: 1.20;;;;;<prowler_version>
<auth_method>;2025-02-14 14:27:03.913874;<account_uid>;;;;;;<finding_uid>;aws;account_maintain_current_contact_details;Maintain current contact details.;IAM;MANUAL;Login to the AWS Console. Choose your account name on the top right of the window -> My Account -> Contact Information.;False;account;;medium;Other;<resource_uid>;<account_uid>;;;aws;<region>;Maintain current contact details.;Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization. An AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy. If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior needs urgent mitigation, proactive measures may be taken, including throttling of traffic between the account exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will result in impaired service to and from the account in question.;;Using the Billing and Cost Management console complete contact details.;https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact.html;;;No command available.;https://docs.prowler.com/checks/aws/iam-policies/iam_18-maintain-contact-details#aws-console;CIS-1.4: 1.1 | CIS-1.5: 1.1 | KISA-ISMS-P-2023: 2.1.3 | CIS-2.0: 1.1 | KISA-ISMS-P-2023-korean: 2.1.3 | AWS-Well-Architected-Framework-Security-Pillar: SEC03-BP03, SEC10-BP01 | AWS-Account-Security-Onboarding: Billing, emergency, security contacts | CIS-3.0: 1.1 | ENS-RD2022: op.ext.7.aws.am.1;;;;;<prowler_version>
<auth_method>;2025-02-14 14:27:03.913874;<account_uid>;;;;;;<finding_uid>;aws;account_maintain_different_contact_details_to_security_billing_and_operations;Maintain different contact details to security, billing and operations.;IAM;FAIL;SECURITY, BILLING and OPERATIONS contacts not found or they are not different between each other and between ROOT contact.;False;account;;medium;Other;<resource_uid>;<account_uid>;;;aws;<region>;Maintain different contact details to security, billing and operations.;Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization. An AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy. If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior needs urgent mitigation, proactive measures may be taken, including throttling of traffic between the account exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will result in impaired service to and from the account in question.;https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact.html;Using the Billing and Cost Management console complete contact details.;https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact.html;;;;https://docs.prowler.com/checks/aws/iam-policies/iam_18-maintain-contact-details#aws-console;KISA-ISMS-P-2023: 2.1.3 | KISA-ISMS-P-2023-korean: 2.1.3;;;;;<prowler_version>
<auth_method>;2025-02-14 14:27:03.913874;<account_uid>;;;;;;<finding_uid>;aws;account_security_contact_information_is_registered;Ensure security contact information is registered.;IAM;MANUAL;Login to the AWS Console. Choose your account name on the top right of the window -> My Account -> Alternate Contacts -> Security Section.;False;account;;medium;Other;<resource_uid>:root;<account_uid>;;;aws;<region>;Ensure security contact information is registered.;AWS provides customers with the option of specifying the contact information for accounts security team. It is recommended that this information be provided. Specifying security-specific contact information will help ensure that security advisories sent by AWS reach the team in your organization that is best equipped to respond to them.;;Go to the My Account section and complete alternate contacts.;https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact.html;;;No command available.;https://docs.prowler.com/checks/aws/iam-policies/iam_19#aws-console;CIS-1.4: 1.2 | CIS-1.5: 1.2 | AWS-Foundational-Security-Best-Practices: account, acm | KISA-ISMS-P-2023: 2.1.3, 2.2.1 | CIS-2.0: 1.2 | KISA-ISMS-P-2023-korean: 2.1.3, 2.2.1 | AWS-Well-Architected-Framework-Security-Pillar: SEC03-BP03, SEC10-BP01 | AWS-Account-Security-Onboarding: Billing, emergency, security contacts | CIS-3.0: 1.2 | ENS-RD2022: op.ext.7.aws.am.1;;;;;<prowler_version>
AUTH_METHOD;TIMESTAMP;ACCOUNT_UID;ACCOUNT_NAME;ACCOUNT_EMAIL;ACCOUNT_ORGANIZATION_UID;ACCOUNT_ORGANIZATION_NAME;ACCOUNT_TAGS;FINDING_UID;PROVIDER;CHECK_ID;CHECK_TITLE;CHECK_TYPE;STATUS;STATUS_EXTENDED;MUTED;SERVICE_NAME;SUBSERVICE_NAME;SEVERITY;RESOURCE_TYPE;RESOURCE_UID;RESOURCE_NAME;RESOURCE_DETAILS;RESOURCE_TAGS;PARTITION;REGION;DESCRIPTION;RISK;RELATED_URL;REMEDIATION_RECOMMENDATION_TEXT;REMEDIATION_RECOMMENDATION_URL;REMEDIATION_CODE_NATIVEIAC;REMEDIATION_CODE_TERRAFORM;REMEDIATION_CODE_CLI;REMEDIATION_CODE_OTHER;COMPLIANCE;CATEGORIES;DEPENDS_ON;RELATED_TO;NOTES;PROWLER_VERSION;ADDITIONAL_URLS
<auth_method>;2025-02-14 14:27:03.913874;<account_uid>;;;;;;<finding_uid>;aws;accessanalyzer_enabled;Check if IAM Access Analyzer is enabled;IAM;FAIL;IAM Access Analyzer in account <account_uid> is not enabled.;False;accessanalyzer;;low;Other;<resource_uid>;<resource_name>;;;aws;<region>;Check if IAM Access Analyzer is enabled;AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. This lets you identify unintended access to your resources and data, which is a security risk. IAM Access Analyzer uses a form of mathematical analysis called automated reasoning, which applies logic and mathematical inference to determine all possible access paths allowed by a resource policy.;https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html;Enable IAM Access Analyzer for all accounts, create analyzer and take action over it is recommendations (IAM Access Analyzer is available at no additional cost).;https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html;;;aws accessanalyzer create-analyzer --analyzer-name <NAME> --type <ACCOUNT|ORGANIZATION>;;CIS-1.4: 1.20 | CIS-1.5: 1.20 | KISA-ISMS-P-2023: 2.5.6, 2.6.4, 2.8.1, 2.8.2 | CIS-2.0: 1.20 | KISA-ISMS-P-2023-korean: 2.5.6, 2.6.4, 2.8.1, 2.8.2 | AWS-Account-Security-Onboarding: Enabled security services, Create analyzers in each active regions, Verify that events are present in SecurityHub aggregated view | CIS-3.0: 1.20;;;;;<prowler_version>;https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html | https://aws.amazon.com/iam/features/analyze-access/
<auth_method>;2025-02-14 14:27:03.913874;<account_uid>;;;;;;<finding_uid>;aws;account_maintain_current_contact_details;Maintain current contact details.;IAM;MANUAL;Login to the AWS Console. Choose your account name on the top right of the window -> My Account -> Contact Information.;False;account;;medium;Other;<resource_uid>;<account_uid>;;;aws;<region>;Maintain current contact details.;Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization. An AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy. If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior needs urgent mitigation, proactive measures may be taken, including throttling of traffic between the account exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will result in impaired service to and from the account in question.;;Using the Billing and Cost Management console complete contact details.;https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact.html;;;No command available.;https://docs.prowler.com/checks/aws/iam-policies/iam_18-maintain-contact-details#aws-console;CIS-1.4: 1.1 | CIS-1.5: 1.1 | KISA-ISMS-P-2023: 2.1.3 | CIS-2.0: 1.1 | KISA-ISMS-P-2023-korean: 2.1.3 | AWS-Well-Architected-Framework-Security-Pillar: SEC03-BP03, SEC10-BP01 | AWS-Account-Security-Onboarding: Billing, emergency, security contacts | CIS-3.0: 1.1 | ENS-RD2022: op.ext.7.aws.am.1;;;;;<prowler_version>;https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html | https://aws.amazon.com/iam/features/analyze-access/
<auth_method>;2025-02-14 14:27:03.913874;<account_uid>;;;;;;<finding_uid>;aws;account_maintain_different_contact_details_to_security_billing_and_operations;Maintain different contact details to security, billing and operations.;IAM;FAIL;SECURITY, BILLING and OPERATIONS contacts not found or they are not different between each other and between ROOT contact.;False;account;;medium;Other;<resource_uid>;<account_uid>;;;aws;<region>;Maintain different contact details to security, billing and operations.;Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization. An AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy. If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior needs urgent mitigation, proactive measures may be taken, including throttling of traffic between the account exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will result in impaired service to and from the account in question.;https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact.html;Using the Billing and Cost Management console complete contact details.;https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact.html;;;;https://docs.prowler.com/checks/aws/iam-policies/iam_18-maintain-contact-details#aws-console;KISA-ISMS-P-2023: 2.1.3 | KISA-ISMS-P-2023-korean: 2.1.3;;;;;<prowler_version>;https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html | https://aws.amazon.com/iam/features/analyze-access/
<auth_method>;2025-02-14 14:27:03.913874;<account_uid>;;;;;;<finding_uid>;aws;account_security_contact_information_is_registered;Ensure security contact information is registered.;IAM;MANUAL;Login to the AWS Console. Choose your account name on the top right of the window -> My Account -> Alternate Contacts -> Security Section.;False;account;;medium;Other;<resource_uid>:root;<account_uid>;;;aws;<region>;Ensure security contact information is registered.;AWS provides customers with the option of specifying the contact information for accounts security team. It is recommended that this information be provided. Specifying security-specific contact information will help ensure that security advisories sent by AWS reach the team in your organization that is best equipped to respond to them.;;Go to the My Account section and complete alternate contacts.;https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact.html;;;No command available.;https://docs.prowler.com/checks/aws/iam-policies/iam_19#aws-console;CIS-1.4: 1.2 | CIS-1.5: 1.2 | AWS-Foundational-Security-Best-Practices: account, acm | KISA-ISMS-P-2023: 2.1.3, 2.2.1 | CIS-2.0: 1.2 | KISA-ISMS-P-2023-korean: 2.1.3, 2.2.1 | AWS-Well-Architected-Framework-Security-Pillar: SEC03-BP03, SEC10-BP01 | AWS-Account-Security-Onboarding: Billing, emergency, security contacts | CIS-3.0: 1.2 | ENS-RD2022: op.ext.7.aws.am.1;;;;;<prowler_version>;https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html | https://aws.amazon.com/iam/features/analyze-access/
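The rows above are semicolon-delimited, with the new trailing `ADDITIONAL_URLS` column holding pipe-separated links. A minimal consumer sketch (not Prowler's own tooling; the sample rows are trimmed to four of the 42 columns for brevity):

```python
import csv
import io

# Trimmed sample in the same shape as the fixture above: ';' as the field
# delimiter, ' | ' separating multiple URLs inside ADDITIONAL_URLS.
SAMPLE = """CHECK_ID;STATUS;SEVERITY;ADDITIONAL_URLS
accessanalyzer_enabled;FAIL;low;https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html | https://aws.amazon.com/iam/features/analyze-access/
account_maintain_current_contact_details;MANUAL;medium;
"""

def failed_checks(text: str) -> list[dict]:
    """Return FAIL rows, with ADDITIONAL_URLS split into a list."""
    rows = []
    for row in csv.DictReader(io.StringIO(text), delimiter=";"):
        if row["STATUS"] == "FAIL":
            urls = [u for u in (row.get("ADDITIONAL_URLS") or "").split(" | ") if u]
            rows.append({"check_id": row["CHECK_ID"], "urls": urls})
    return rows

print(failed_checks(SAMPLE))
```

`csv.DictReader` with `delimiter=";"` keys each field by the header row, so the parser keeps working if further columns are appended later.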
@@ -27,6 +27,7 @@
"categories": [],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "",
"compliance": {
"CIS-1.4": [
@@ -158,6 +159,7 @@
"categories": [],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "",
"compliance": {
"CIS-1.4": [
@@ -286,6 +288,7 @@
"categories": [],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "",
"compliance": {
"KISA-ISMS-P-2023": [
@@ -391,6 +394,7 @@
"categories": [],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "",
"compliance": {
"CIS-1.4": [
@@ -525,6 +529,7 @@
"categories": [],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "",
"compliance": {
"CIS-1.4": [
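The JSON hunks above each add an `"additional_urls"` array alongside the existing `categories` / `depends_on` / `related_to` metadata keys. A hedged sketch of how a consumer might read the field while staying compatible with pre-change fixtures that lack it (the finding below is a hypothetical, trimmed example, not a real fixture):

```python
import json

# Trimmed, hypothetical finding mirroring the keys visible in the hunks above.
finding_json = """
{
  "categories": [],
  "depends_on": [],
  "related_to": [],
  "additional_urls": ["https://example.com/doc"],
  "notes": "",
  "compliance": {"CIS-1.4": ["1.20"]}
}
"""

finding = json.loads(finding_json)
# .get() with a default keeps this working against older output missing the key.
urls = finding.get("additional_urls", [])
print(urls)
```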
@@ -1,5 +1,5 @@
AUTH_METHOD;TIMESTAMP;ACCOUNT_UID;ACCOUNT_NAME;ACCOUNT_EMAIL;ACCOUNT_ORGANIZATION_UID;ACCOUNT_ORGANIZATION_NAME;ACCOUNT_TAGS;FINDING_UID;PROVIDER;CHECK_ID;CHECK_TITLE;CHECK_TYPE;STATUS;STATUS_EXTENDED;MUTED;SERVICE_NAME;SUBSERVICE_NAME;SEVERITY;RESOURCE_TYPE;RESOURCE_UID;RESOURCE_NAME;RESOURCE_DETAILS;RESOURCE_TAGS;PARTITION;REGION;DESCRIPTION;RISK;RELATED_URL;REMEDIATION_RECOMMENDATION_TEXT;REMEDIATION_RECOMMENDATION_URL;REMEDIATION_CODE_NATIVEIAC;REMEDIATION_CODE_TERRAFORM;REMEDIATION_CODE_CLI;REMEDIATION_CODE_OTHER;COMPLIANCE;CATEGORIES;DEPENDS_ON;RELATED_TO;NOTES;PROWLER_VERSION
<auth_method>;2025-02-14 14:27:30.710664;<account_uid>;<account_name>;;<account_organization_uid>;ProwlerPro.onmicrosoft.com;;<finding_uid>;azure;aks_cluster_rbac_enabled;Ensure AKS RBAC is enabled;;PASS;RBAC is enabled for cluster '<resource_name>' in subscription '<account_name>'.;False;aks;;medium;Microsoft.ContainerService/ManagedClusters;/subscriptions/<account_uid>/resourcegroups/<resource_name>_group/providers/Microsoft.ContainerService/managedClusters/<resource_name>;<resource_name>;;;<partition>;<region>;Azure Kubernetes Service (AKS) can be configured to use Azure Active Directory (AD) for user authentication. In this configuration, you sign in to an AKS cluster using an Azure AD authentication token. You can also configure Kubernetes role-based access control (Kubernetes RBAC) to limit access to cluster resources based a user's identity or group membership.;Kubernetes RBAC and AKS help you secure your cluster access and provide only the minimum required permissions to developers and operators.;https://learn.microsoft.com/en-us/azure/aks/azure-ad-rbac?tabs=portal;;https://learn.microsoft.com/en-us/security/benchmark/azure/security-controls-v2-privileged-access#pa-7-follow-just-enough-administration-least-privilege-principle;;https://docs.prowler.com/checks/azure/azure-kubernetes-policies/bc_azr_kubernetes_2#terraform;;https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/AKS/enable-role-based-access-control-for-kubernetes-service.html#;ENS-RD2022: op.acc.2.az.r1.eid.1;;;;;<prowler_version>
<auth_method>;2025-02-14 14:27:30.710664;<account_uid>;<account_name>;;<account_organization_uid>;ProwlerPro.onmicrosoft.com;;<finding_uid>;azure;aks_clusters_created_with_private_nodes;Ensure clusters are created with Private Nodes;;PASS;Cluster '<resource_name>' was created with private nodes in subscription '<account_name>';False;aks;;high;Microsoft.ContainerService/ManagedClusters;/subscriptions/<account_uid>/resourcegroups/<resource_name>_group/providers/Microsoft.ContainerService/managedClusters/<resource_name>;<resource_name>;;;<partition>;<region>;Disable public IP addresses for cluster nodes, so that they only have private IP addresses. Private Nodes are nodes with no public IP addresses.;Disabling public IP addresses on cluster nodes restricts access to only internal networks, forcing attackers to obtain local network access before attempting to compromise the underlying Kubernetes hosts.;https://learn.microsoft.com/en-us/azure/aks/private-clusters;;https://learn.microsoft.com/en-us/azure/aks/access-private-cluster;;;;;ENS-RD2022: mp.com.4.r2.az.aks.1 | MITRE-ATTACK: T1190, T1530;;;;;<prowler_version>
<auth_method>;2025-02-14 14:27:30.710664;<account_uid>;<account_name>;;<account_organization_uid>;ProwlerPro.onmicrosoft.com;;<finding_uid>;azure;aks_clusters_public_access_disabled;Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled;;FAIL;Public access to nodes is enabled for cluster '<resource_name>' in subscription '<account_name>';False;aks;;high;Microsoft.ContainerService/ManagedClusters;/subscriptions/<account_uid>/resourcegroups/<resource_name>_group/providers/Microsoft.ContainerService/managedClusters/<resource_name>;<resource_name>;;;<partition>;<region>;Disable access to the Kubernetes API from outside the node network if it is not required.;In a private cluster, the master node has two endpoints, a private and public endpoint. The private endpoint is the internal IP address of the master, behind an internal load balancer in the master's wirtual network. Nodes communicate with the master using the private endpoint. The public endpoint enables the Kubernetes API to be accessed from outside the master's virtual network. Although Kubernetes API requires an authorized token to perform sensitive actions, a vulnerability could potentially expose the Kubernetes publically with unrestricted access. Additionally, an attacker may be able to identify the current cluster and Kubernetes API version and determine whether it is vulnerable to an attack. 
Unless required, disabling public endpoint will help prevent such threats, and require the attacker to be on the master's virtual network to perform any attack on the Kubernetes API.;https://learn.microsoft.com/en-us/azure/aks/private-clusters?tabs=azure-portal;To use a private endpoint, create a new private endpoint in your virtual network then create a link between your virtual network and a new private DNS zone;https://learn.microsoft.com/en-us/azure/aks/access-private-cluster?tabs=azure-cli;;;az aks update -n <cluster_name> -g <resource_group> --disable-public-fqdn;;ENS-RD2022: mp.com.4.az.aks.2 | MITRE-ATTACK: T1190, T1530;;;;;<prowler_version>
<auth_method>;2025-02-14 14:27:30.710664;<account_uid>;<account_name>;;<account_organization_uid>;ProwlerPro.onmicrosoft.com;;<finding_uid>;azure;aks_network_policy_enabled;Ensure Network Policy is Enabled and set as appropriate;;PASS;Network policy is enabled for cluster '<resource_name>' in subscription '<account_name>'.;False;aks;;medium;Microsoft.ContainerService/managedClusters;/subscriptions/<account_uid>/resourcegroups/<resource_name>_group/providers/Microsoft.ContainerService/managedClusters/<resource_name>;<resource_name>;;;<partition>;<region>;When you run modern, microservices-based applications in Kubernetes, you often want to control which components can communicate with each other. The principle of least privilege should be applied to how traffic can flow between pods in an Azure Kubernetes Service (AKS) cluster. Let's say you likely want to block traffic directly to back-end applications. The Network Policy feature in Kubernetes lets you define rules for ingress and egress traffic between pods in a cluster.;All pods in an AKS cluster can send and receive traffic without limitations, by default. To improve security, you can define rules that control the flow of traffic. Back-end applications are often only exposed to required front-end services, for example. Or, database components are only accessible to the application tiers that connect to them. Network Policy is a Kubernetes specification that defines access policies for communication between Pods. Using Network Policies, you define an ordered set of rules to send and receive traffic and apply them to a collection of pods that match one or more label selectors. These network policy rules are defined as YAML manifests. 
Network policies can be included as part of a wider manifest that also creates a deployment or service.;https://learn.microsoft.com/en-us/security/benchmark/azure/security-controls-v2-network-security#ns-2-connect-private-networks-together;;https://learn.microsoft.com/en-us/azure/aks/use-network-policies;;https://docs.prowler.com/checks/azure/azure-kubernetes-policies/bc_azr_kubernetes_4#terraform;;;ENS-RD2022: mp.com.4.r2.az.aks.1;;;;Network Policy requires the Network Policy add-on. This add-on is included automatically when a cluster with Network Policy is created, but for an existing cluster, needs to be added prior to enabling Network Policy. Enabling/Disabling Network Policy causes a rolling update of all cluster nodes, similar to performing a cluster upgrade. This operation is long-running and will block other operations on the cluster (including delete) until it has run to completion. If Network Policy is used, a cluster must have at least 2 nodes of type n1-standard-1 or higher. The recommended minimum size cluster to run Network Policy enforcement is 3 n1-standard-1 instances. Enabling Network Policy enforcement consumes additional resources in nodes. Specifically, it increases the memory footprint of the kube-system process by approximately 128MB, and requires approximately 300 millicores of CPU.;<prowler_version>
AUTH_METHOD;TIMESTAMP;ACCOUNT_UID;ACCOUNT_NAME;ACCOUNT_EMAIL;ACCOUNT_ORGANIZATION_UID;ACCOUNT_ORGANIZATION_NAME;ACCOUNT_TAGS;FINDING_UID;PROVIDER;CHECK_ID;CHECK_TITLE;CHECK_TYPE;STATUS;STATUS_EXTENDED;MUTED;SERVICE_NAME;SUBSERVICE_NAME;SEVERITY;RESOURCE_TYPE;RESOURCE_UID;RESOURCE_NAME;RESOURCE_DETAILS;RESOURCE_TAGS;PARTITION;REGION;DESCRIPTION;RISK;RELATED_URL;REMEDIATION_RECOMMENDATION_TEXT;REMEDIATION_RECOMMENDATION_URL;REMEDIATION_CODE_NATIVEIAC;REMEDIATION_CODE_TERRAFORM;REMEDIATION_CODE_CLI;REMEDIATION_CODE_OTHER;COMPLIANCE;CATEGORIES;DEPENDS_ON;RELATED_TO;NOTES;PROWLER_VERSION;ADDITIONAL_URLS
<auth_method>;2025-02-14 14:27:30.710664;<account_uid>;<account_name>;;<account_organization_uid>;ProwlerPro.onmicrosoft.com;;<finding_uid>;azure;aks_cluster_rbac_enabled;Ensure AKS RBAC is enabled;;PASS;RBAC is enabled for cluster '<resource_name>' in subscription '<account_name>'.;False;aks;;medium;Microsoft.ContainerService/ManagedClusters;/subscriptions/<account_uid>/resourcegroups/<resource_name>_group/providers/Microsoft.ContainerService/managedClusters/<resource_name>;<resource_name>;;;<partition>;<region>;Azure Kubernetes Service (AKS) can be configured to use Azure Active Directory (AD) for user authentication. In this configuration, you sign in to an AKS cluster using an Azure AD authentication token. You can also configure Kubernetes role-based access control (Kubernetes RBAC) to limit access to cluster resources based a user's identity or group membership.;Kubernetes RBAC and AKS help you secure your cluster access and provide only the minimum required permissions to developers and operators.;https://learn.microsoft.com/en-us/azure/aks/azure-ad-rbac?tabs=portal;;https://learn.microsoft.com/en-us/security/benchmark/azure/security-controls-v2-privileged-access#pa-7-follow-just-enough-administration-least-privilege-principle;;https://docs.prowler.com/checks/azure/azure-kubernetes-policies/bc_azr_kubernetes_2#terraform;;https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/AKS/enable-role-based-access-control-for-kubernetes-service.html#;ENS-RD2022: op.acc.2.az.r1.eid.1;;;;;<prowler_version>;https://learn.microsoft.com/azure/aks/azure-ad-rbac | https://learn.microsoft.com/azure/aks/concepts-identity
<auth_method>;2025-02-14 14:27:30.710664;<account_uid>;<account_name>;;<account_organization_uid>;ProwlerPro.onmicrosoft.com;;<finding_uid>;azure;aks_clusters_created_with_private_nodes;Ensure clusters are created with Private Nodes;;PASS;Cluster '<resource_name>' was created with private nodes in subscription '<account_name>';False;aks;;high;Microsoft.ContainerService/ManagedClusters;/subscriptions/<account_uid>/resourcegroups/<resource_name>_group/providers/Microsoft.ContainerService/managedClusters/<resource_name>;<resource_name>;;;<partition>;<region>;Disable public IP addresses for cluster nodes, so that they only have private IP addresses. Private Nodes are nodes with no public IP addresses.;Disabling public IP addresses on cluster nodes restricts access to only internal networks, forcing attackers to obtain local network access before attempting to compromise the underlying Kubernetes hosts.;https://learn.microsoft.com/en-us/azure/aks/private-clusters;;https://learn.microsoft.com/en-us/azure/aks/access-private-cluster;;;;;ENS-RD2022: mp.com.4.r2.az.aks.1 | MITRE-ATTACK: T1190, T1530;;;;;<prowler_version>;https://learn.microsoft.com/azure/aks/azure-ad-rbac | https://learn.microsoft.com/azure/aks/concepts-identity
<auth_method>;2025-02-14 14:27:30.710664;<account_uid>;<account_name>;;<account_organization_uid>;ProwlerPro.onmicrosoft.com;;<finding_uid>;azure;aks_clusters_public_access_disabled;Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled;;FAIL;Public access to nodes is enabled for cluster '<resource_name>' in subscription '<account_name>';False;aks;;high;Microsoft.ContainerService/ManagedClusters;/subscriptions/<account_uid>/resourcegroups/<resource_name>_group/providers/Microsoft.ContainerService/managedClusters/<resource_name>;<resource_name>;;;<partition>;<region>;Disable access to the Kubernetes API from outside the node network if it is not required.;In a private cluster, the master node has two endpoints, a private and public endpoint. The private endpoint is the internal IP address of the master, behind an internal load balancer in the master's wirtual network. Nodes communicate with the master using the private endpoint. The public endpoint enables the Kubernetes API to be accessed from outside the master's virtual network. Although Kubernetes API requires an authorized token to perform sensitive actions, a vulnerability could potentially expose the Kubernetes publically with unrestricted access. Additionally, an attacker may be able to identify the current cluster and Kubernetes API version and determine whether it is vulnerable to an attack. 
Unless required, disabling public endpoint will help prevent such threats, and require the attacker to be on the master's virtual network to perform any attack on the Kubernetes API.;https://learn.microsoft.com/en-us/azure/aks/private-clusters?tabs=azure-portal;To use a private endpoint, create a new private endpoint in your virtual network then create a link between your virtual network and a new private DNS zone;https://learn.microsoft.com/en-us/azure/aks/access-private-cluster?tabs=azure-cli;;;az aks update -n <cluster_name> -g <resource_group> --disable-public-fqdn;;ENS-RD2022: mp.com.4.az.aks.2 | MITRE-ATTACK: T1190, T1530;;;;;<prowler_version>;https://learn.microsoft.com/azure/aks/azure-ad-rbac | https://learn.microsoft.com/azure/aks/concepts-identity
<auth_method>;2025-02-14 14:27:30.710664;<account_uid>;<account_name>;;<account_organization_uid>;ProwlerPro.onmicrosoft.com;;<finding_uid>;azure;aks_network_policy_enabled;Ensure Network Policy is Enabled and set as appropriate;;PASS;Network policy is enabled for cluster '<resource_name>' in subscription '<account_name>'.;False;aks;;medium;Microsoft.ContainerService/managedClusters;/subscriptions/<account_uid>/resourcegroups/<resource_name>_group/providers/Microsoft.ContainerService/managedClusters/<resource_name>;<resource_name>;;;<partition>;<region>;When you run modern, microservices-based applications in Kubernetes, you often want to control which components can communicate with each other. The principle of least privilege should be applied to how traffic can flow between pods in an Azure Kubernetes Service (AKS) cluster. Let's say you likely want to block traffic directly to back-end applications. The Network Policy feature in Kubernetes lets you define rules for ingress and egress traffic between pods in a cluster.;All pods in an AKS cluster can send and receive traffic without limitations, by default. To improve security, you can define rules that control the flow of traffic. Back-end applications are often only exposed to required front-end services, for example. Or, database components are only accessible to the application tiers that connect to them. Network Policy is a Kubernetes specification that defines access policies for communication between Pods. Using Network Policies, you define an ordered set of rules to send and receive traffic and apply them to a collection of pods that match one or more label selectors. These network policy rules are defined as YAML manifests. 
Network policies can be included as part of a wider manifest that also creates a deployment or service.;https://learn.microsoft.com/en-us/security/benchmark/azure/security-controls-v2-network-security#ns-2-connect-private-networks-together;;https://learn.microsoft.com/en-us/azure/aks/use-network-policies;;https://docs.prowler.com/checks/azure/azure-kubernetes-policies/bc_azr_kubernetes_4#terraform;;;ENS-RD2022: mp.com.4.r2.az.aks.1;;;;Network Policy requires the Network Policy add-on. This add-on is included automatically when a cluster with Network Policy is created, but for an existing cluster, needs to be added prior to enabling Network Policy. Enabling/Disabling Network Policy causes a rolling update of all cluster nodes, similar to performing a cluster upgrade. This operation is long-running and will block other operations on the cluster (including delete) until it has run to completion. If Network Policy is used, a cluster must have at least 2 nodes of type n1-standard-1 or higher. The recommended minimum size cluster to run Network Policy enforcement is 3 n1-standard-1 instances. Enabling Network Policy enforcement consumes additional resources in nodes. Specifically, it increases the memory footprint of the kube-system process by approximately 128MB, and requires approximately 300 millicores of CPU.;<prowler_version>;https://learn.microsoft.com/azure/aks/azure-ad-rbac | https://learn.microsoft.com/azure/aks/concepts-identity
1 AUTH_METHOD TIMESTAMP ACCOUNT_UID ACCOUNT_NAME ACCOUNT_EMAIL ACCOUNT_ORGANIZATION_UID ACCOUNT_ORGANIZATION_NAME ACCOUNT_TAGS FINDING_UID PROVIDER CHECK_ID CHECK_TITLE CHECK_TYPE STATUS STATUS_EXTENDED MUTED SERVICE_NAME SUBSERVICE_NAME SEVERITY RESOURCE_TYPE RESOURCE_UID RESOURCE_NAME RESOURCE_DETAILS RESOURCE_TAGS PARTITION REGION DESCRIPTION RISK RELATED_URL REMEDIATION_RECOMMENDATION_TEXT REMEDIATION_RECOMMENDATION_URL REMEDIATION_CODE_NATIVEIAC REMEDIATION_CODE_TERRAFORM REMEDIATION_CODE_CLI REMEDIATION_CODE_OTHER COMPLIANCE CATEGORIES DEPENDS_ON RELATED_TO NOTES PROWLER_VERSION ADDITIONAL_URLS
2 <auth_method> 2025-02-14 14:27:30.710664 <account_uid> <account_name> <account_organization_uid> ProwlerPro.onmicrosoft.com <finding_uid> azure aks_cluster_rbac_enabled Ensure AKS RBAC is enabled PASS RBAC is enabled for cluster '<resource_name>' in subscription '<account_name>'. False aks medium Microsoft.ContainerService/ManagedClusters /subscriptions/<account_uid>/resourcegroups/<resource_name>_group/providers/Microsoft.ContainerService/managedClusters/<resource_name> <resource_name> <partition> <region> Azure Kubernetes Service (AKS) can be configured to use Azure Active Directory (AD) for user authentication. In this configuration, you sign in to an AKS cluster using an Azure AD authentication token. You can also configure Kubernetes role-based access control (Kubernetes RBAC) to limit access to cluster resources based a user's identity or group membership. Kubernetes RBAC and AKS help you secure your cluster access and provide only the minimum required permissions to developers and operators. https://learn.microsoft.com/en-us/azure/aks/azure-ad-rbac?tabs=portal https://learn.microsoft.com/en-us/security/benchmark/azure/security-controls-v2-privileged-access#pa-7-follow-just-enough-administration-least-privilege-principle https://docs.prowler.com/checks/azure/azure-kubernetes-policies/bc_azr_kubernetes_2#terraform https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/AKS/enable-role-based-access-control-for-kubernetes-service.html# ENS-RD2022: op.acc.2.az.r1.eid.1 <prowler_version> https://learn.microsoft.com/azure/aks/azure-ad-rbac | https://learn.microsoft.com/azure/aks/concepts-identity
3 <auth_method> 2025-02-14 14:27:30.710664 <account_uid> <account_name> <account_organization_uid> ProwlerPro.onmicrosoft.com <finding_uid> azure aks_clusters_created_with_private_nodes Ensure clusters are created with Private Nodes PASS Cluster '<resource_name>' was created with private nodes in subscription '<account_name>' False aks high Microsoft.ContainerService/ManagedClusters /subscriptions/<account_uid>/resourcegroups/<resource_name>_group/providers/Microsoft.ContainerService/managedClusters/<resource_name> <resource_name> <partition> <region> Disable public IP addresses for cluster nodes, so that they only have private IP addresses. Private Nodes are nodes with no public IP addresses. Disabling public IP addresses on cluster nodes restricts access to only internal networks, forcing attackers to obtain local network access before attempting to compromise the underlying Kubernetes hosts. https://learn.microsoft.com/en-us/azure/aks/private-clusters https://learn.microsoft.com/en-us/azure/aks/access-private-cluster ENS-RD2022: mp.com.4.r2.az.aks.1 | MITRE-ATTACK: T1190, T1530 <prowler_version> https://learn.microsoft.com/azure/aks/azure-ad-rbac | https://learn.microsoft.com/azure/aks/concepts-identity
4 <auth_method> 2025-02-14 14:27:30.710664 <account_uid> <account_name> <account_organization_uid> ProwlerPro.onmicrosoft.com <finding_uid> azure aks_clusters_public_access_disabled Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled FAIL Public access to nodes is enabled for cluster '<resource_name>' in subscription '<account_name>' False aks high Microsoft.ContainerService/ManagedClusters /subscriptions/<account_uid>/resourcegroups/<resource_name>_group/providers/Microsoft.ContainerService/managedClusters/<resource_name> <resource_name> <partition> <region> Disable access to the Kubernetes API from outside the node network if it is not required. In a private cluster, the master node has two endpoints, a private and public endpoint. The private endpoint is the internal IP address of the master, behind an internal load balancer in the master's virtual network. Nodes communicate with the master using the private endpoint. The public endpoint enables the Kubernetes API to be accessed from outside the master's virtual network. Although the Kubernetes API requires an authorized token to perform sensitive actions, a vulnerability could potentially expose the Kubernetes API publicly with unrestricted access. Additionally, an attacker may be able to identify the current cluster and Kubernetes API version and determine whether it is vulnerable to an attack. Unless required, disabling the public endpoint will help prevent such threats, and require the attacker to be on the master's virtual network to perform any attack on the Kubernetes API. 
https://learn.microsoft.com/en-us/azure/aks/private-clusters?tabs=azure-portal To use a private endpoint, create a new private endpoint in your virtual network then create a link between your virtual network and a new private DNS zone https://learn.microsoft.com/en-us/azure/aks/access-private-cluster?tabs=azure-cli az aks update -n <cluster_name> -g <resource_group> --disable-public-fqdn ENS-RD2022: mp.com.4.az.aks.2 | MITRE-ATTACK: T1190, T1530 <prowler_version> https://learn.microsoft.com/azure/aks/azure-ad-rbac | https://learn.microsoft.com/azure/aks/concepts-identity
5 <auth_method> 2025-02-14 14:27:30.710664 <account_uid> <account_name> <account_organization_uid> ProwlerPro.onmicrosoft.com <finding_uid> azure aks_network_policy_enabled Ensure Network Policy is Enabled and set as appropriate PASS Network policy is enabled for cluster '<resource_name>' in subscription '<account_name>'. False aks medium Microsoft.ContainerService/managedClusters /subscriptions/<account_uid>/resourcegroups/<resource_name>_group/providers/Microsoft.ContainerService/managedClusters/<resource_name> <resource_name> <partition> <region> When you run modern, microservices-based applications in Kubernetes, you often want to control which components can communicate with each other. The principle of least privilege should be applied to how traffic can flow between pods in an Azure Kubernetes Service (AKS) cluster. For example, you likely want to block traffic directly to back-end applications. The Network Policy feature in Kubernetes lets you define rules for ingress and egress traffic between pods in a cluster. All pods in an AKS cluster can send and receive traffic without limitations, by default. To improve security, you can define rules that control the flow of traffic. Back-end applications are often only exposed to required front-end services, for example. Or, database components are only accessible to the application tiers that connect to them. Network Policy is a Kubernetes specification that defines access policies for communication between Pods. Using Network Policies, you define an ordered set of rules to send and receive traffic and apply them to a collection of pods that match one or more label selectors. These network policy rules are defined as YAML manifests. Network policies can be included as part of a wider manifest that also creates a deployment or service. 
https://learn.microsoft.com/en-us/security/benchmark/azure/security-controls-v2-network-security#ns-2-connect-private-networks-together https://learn.microsoft.com/en-us/azure/aks/use-network-policies https://docs.prowler.com/checks/azure/azure-kubernetes-policies/bc_azr_kubernetes_4#terraform ENS-RD2022: mp.com.4.r2.az.aks.1 Network Policy requires the Network Policy add-on. This add-on is included automatically when a cluster with Network Policy is created, but for an existing cluster, it needs to be added prior to enabling Network Policy. Enabling/Disabling Network Policy causes a rolling update of all cluster nodes, similar to performing a cluster upgrade. This operation is long-running and will block other operations on the cluster (including delete) until it has run to completion. If Network Policy is used, a cluster must have at least 2 nodes of type n1-standard-1 or higher. The recommended minimum size cluster to run Network Policy enforcement is 3 n1-standard-1 instances. Enabling Network Policy enforcement consumes additional resources in nodes. Specifically, it increases the memory footprint of the kube-system process by approximately 128MB, and requires approximately 300 millicores of CPU. <prowler_version> https://learn.microsoft.com/azure/aks/azure-ad-rbac | https://learn.microsoft.com/azure/aks/concepts-identity
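The rendered rows above carry the new ADDITIONAL_URLS column, which joins multiple URLs into a single field with ` | `. A minimal sketch of splitting that field back into a list (the helper name is illustrative, not part of this change):

```python
def split_additional_urls(field: str) -> list[str]:
    """Split a Prowler CSV ADDITIONAL_URLS field into individual URLs."""
    # Multiple URLs are joined with " | "; an empty field means no URLs.
    return [url.strip() for url in field.split("|") if url.strip()]


urls = split_additional_urls(
    "https://learn.microsoft.com/azure/aks/azure-ad-rbac"
    " | https://learn.microsoft.com/azure/aks/concepts-identity"
)
# urls now holds the two Microsoft Learn links from the rows above.
```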
@@ -27,6 +27,7 @@
"categories": [],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "Because Application Insights relies on a Log Analytics Workspace, an organization will incur additional expenses when using this service.",
"compliance": {
"CIS-2.1": [
@@ -131,6 +132,7 @@
"categories": [],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "",
"compliance": {
"CIS-2.1": [
@@ -247,6 +249,7 @@
"categories": [],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "",
"compliance": {
"CIS-2.1": [
@@ -360,6 +363,7 @@
"categories": [],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "When using an Azure container registry, you might occasionally encounter problems. For example, you might not be able to pull a container image because of an issue with Docker in your local environment. Or, a network issue might prevent you from connecting to the registry.",
"compliance": {
"MITRE-ATTACK": [
@@ -460,6 +464,7 @@
"categories": [],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "",
"compliance": {
"CIS-2.1": [
@@ -1,5 +1,5 @@
AUTH_METHOD;TIMESTAMP;ACCOUNT_UID;ACCOUNT_NAME;ACCOUNT_EMAIL;ACCOUNT_ORGANIZATION_UID;ACCOUNT_ORGANIZATION_NAME;ACCOUNT_TAGS;FINDING_UID;PROVIDER;CHECK_ID;CHECK_TITLE;CHECK_TYPE;STATUS;STATUS_EXTENDED;MUTED;SERVICE_NAME;SUBSERVICE_NAME;SEVERITY;RESOURCE_TYPE;RESOURCE_UID;RESOURCE_NAME;RESOURCE_DETAILS;RESOURCE_TAGS;PARTITION;REGION;DESCRIPTION;RISK;RELATED_URL;REMEDIATION_RECOMMENDATION_TEXT;REMEDIATION_RECOMMENDATION_URL;REMEDIATION_CODE_NATIVEIAC;REMEDIATION_CODE_TERRAFORM;REMEDIATION_CODE_CLI;REMEDIATION_CODE_OTHER;COMPLIANCE;CATEGORIES;DEPENDS_ON;RELATED_TO;NOTES;PROWLER_VERSION
<auth_method>;2025-02-14 14:27:20.697446;<account_uid>;<account_name>;;<account_organization_uid>;<account_organization_name>;<account_tags>;<finding_uid>;gcp;apikeys_key_exists;Ensure API Keys Only Exist for Active Services;;PASS;Project <account_uid> does not have active API Keys.;False;apikeys;;medium;API Key;<account_uid>;<account_name>;;;;<region>;API Keys should only be used for services in cases where other authentication methods are unavailable. Unused keys with their permissions intact may still exist within a project. Keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to use standard authentication flow instead.;Security risks involved in using API-Keys appear below: API keys are simple encrypted strings, API keys do not identify the user or the application making the API request, API keys are typically accessible to clients, making it easy to discover and steal an API key.;;To avoid the security risk in using API keys, it is recommended to use standard authentication flow instead.;https://cloud.google.com/docs/authentication/api-keys;;;gcloud alpha services api-keys delete;;MITRE-ATTACK: T1098 | CIS-2.0: 1.12 | ENS-RD2022: op.acc.2.gcp.rbak.1 | CIS-3.0: 1.12;;;;;<prowler_version>
<auth_method>;2025-02-14 14:27:20.697446;<account_uid>;<account_name>;;<account_organization_uid>;<account_organization_name>;<account_tags>;<finding_uid>;gcp;artifacts_container_analysis_enabled;Ensure Image Vulnerability Analysis using AR Container Analysis or a third-party provider;Security | Configuration;FAIL;AR Container Analysis is not enabled in project <account_uid>.;False;artifacts;Container Analysis;medium;Service;<resource_uid>;<resource_name>;;;;<region>;Scan images stored in Google Container Registry (GCR) for vulnerabilities using AR Container Analysis or a third-party provider. This helps identify and mitigate security risks associated with known vulnerabilities in container images.;Without image vulnerability scanning, container images stored in Artifact Registry may contain known vulnerabilities, increasing the risk of exploitation by malicious actors.;https://cloud.google.com/artifact-analysis/docs;Enable vulnerability scanning for images stored in Artifact Registry using AR Container Analysis or a third-party provider.;https://cloud.google.com/artifact-analysis/docs/container-scanning-overview;;;gcloud services enable containeranalysis.googleapis.com;;MITRE-ATTACK: T1525 | ENS-RD2022: op.exp.4.r4.gcp.log.1, op.mon.3.gcp.scc.1;;;;By default, AR Container Analysis is disabled.;<prowler_version>
<auth_method>;2025-02-14 14:27:20.697446;<account_uid>;<account_name>;;<account_organization_uid>;<account_organization_name>;<account_tags>;<finding_uid>;gcp;compute_firewall_rdp_access_from_the_internet_allowed;Ensure That RDP Access Is Restricted From the Internet;;PASS;Firewall <resource_name> does not expose port 3389 (RDP) to the internet.;False;networking;;critical;FirewallRule;<resource_uid>;<resource_name>;;;;<region>;GCP `Firewall Rules` are specific to a `VPC Network`. Each rule either `allows` or `denies` traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances. Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, an `IPv4` address or `IPv4 block in CIDR` notation can be used. Generic `(0.0.0.0/0)` incoming traffic from the Internet to a VPC or VM instance using `RDP` on `Port 3389` can be avoided.;Allowing unrestricted Remote Desktop Protocol (RDP) access can increase opportunities for malicious activities such as hacking, Man-In-The-Middle attacks (MITM) and Pass-The-Hash (PTH) attacks.;;Ensure that Google Cloud Virtual Private Cloud (VPC) firewall rules do not allow unrestricted access (i.e. 0.0.0.0/0) on TCP port 3389 in order to restrict Remote Desktop Protocol (RDP) traffic to trusted IP addresses or IP ranges only and reduce the attack surface. 
TCP port 3389 is used for secure remote GUI login to Windows VM instances by connecting a RDP client application with an RDP server.;https://cloud.google.com/vpc/docs/using-firewalls;;https://docs.<account_organization_name>/checks/gcp/google-cloud-networking-policies/bc_gcp_networking_2#terraform;https://docs.<account_organization_name>/checks/gcp/google-cloud-networking-policies/bc_gcp_networking_2#cli-command;https://www.trendmicro.com/cloudoneconformity/knowledge-base/gcp/CloudVPC/unrestricted-rdp-access.html;MITRE-ATTACK: T1190, T1199, T1048, T1498, T1046 | CIS-2.0: 3.7 | ENS-RD2022: mp.com.1.gcp.fw.1 | CIS-3.0: 3.7;internet-exposed;;;;<prowler_version>
<auth_method>;2025-02-14 14:27:20.697446;<account_uid>;<account_name>;;<account_organization_uid>;<account_organization_name>;<account_tags>;<finding_uid>;gcp;compute_firewall_rdp_access_from_the_internet_allowed;Ensure That RDP Access Is Restricted From the Internet;;PASS;Firewall <resource_name> does not expose port 3389 (RDP) to the internet.;False;networking;;critical;FirewallRule;<resource_uid>;<resource_name>;;;;<region>;GCP `Firewall Rules` are specific to a `VPC Network`. Each rule either `allows` or `denies` traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances. Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, an `IPv4` address or `IPv4 block in CIDR` notation can be used. Generic `(0.0.0.0/0)` incoming traffic from the Internet to a VPC or VM instance using `RDP` on `Port 3389` can be avoided.;Allowing unrestricted Remote Desktop Protocol (RDP) access can increase opportunities for malicious activities such as hacking, Man-In-The-Middle attacks (MITM) and Pass-The-Hash (PTH) attacks.;;Ensure that Google Cloud Virtual Private Cloud (VPC) firewall rules do not allow unrestricted access (i.e. 0.0.0.0/0) on TCP port 3389 in order to restrict Remote Desktop Protocol (RDP) traffic to trusted IP addresses or IP ranges only and reduce the attack surface. 
TCP port 3389 is used for secure remote GUI login to Windows VM instances by connecting a RDP client application with an RDP server.;https://cloud.google.com/vpc/docs/using-firewalls;;https://docs.<account_organization_name>/checks/gcp/google-cloud-networking-policies/bc_gcp_networking_2#terraform;https://docs.<account_organization_name>/checks/gcp/google-cloud-networking-policies/bc_gcp_networking_2#cli-command;https://www.trendmicro.com/cloudoneconformity/knowledge-base/gcp/CloudVPC/unrestricted-rdp-access.html;MITRE-ATTACK: T1190, T1199, T1048, T1498, T1046 | CIS-2.0: 3.7 | ENS-RD2022: mp.com.1.gcp.fw.1 | CIS-3.0: 3.7;internet-exposed;;;;<prowler_version>
AUTH_METHOD;TIMESTAMP;ACCOUNT_UID;ACCOUNT_NAME;ACCOUNT_EMAIL;ACCOUNT_ORGANIZATION_UID;ACCOUNT_ORGANIZATION_NAME;ACCOUNT_TAGS;FINDING_UID;PROVIDER;CHECK_ID;CHECK_TITLE;CHECK_TYPE;STATUS;STATUS_EXTENDED;MUTED;SERVICE_NAME;SUBSERVICE_NAME;SEVERITY;RESOURCE_TYPE;RESOURCE_UID;RESOURCE_NAME;RESOURCE_DETAILS;RESOURCE_TAGS;PARTITION;REGION;DESCRIPTION;RISK;RELATED_URL;REMEDIATION_RECOMMENDATION_TEXT;REMEDIATION_RECOMMENDATION_URL;REMEDIATION_CODE_NATIVEIAC;REMEDIATION_CODE_TERRAFORM;REMEDIATION_CODE_CLI;REMEDIATION_CODE_OTHER;COMPLIANCE;CATEGORIES;DEPENDS_ON;RELATED_TO;NOTES;PROWLER_VERSION;ADDITIONAL_URLS
<auth_method>;2025-02-14 14:27:20.697446;<account_uid>;<account_name>;;<account_organization_uid>;<account_organization_name>;<account_tags>;<finding_uid>;gcp;apikeys_key_exists;Ensure API Keys Only Exist for Active Services;;PASS;Project <account_uid> does not have active API Keys.;False;apikeys;;medium;API Key;<account_uid>;<account_name>;;;;<region>;API Keys should only be used for services in cases where other authentication methods are unavailable. Unused keys with their permissions intact may still exist within a project. Keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to use standard authentication flow instead.;Security risks involved in using API-Keys appear below: API keys are simple encrypted strings, API keys do not identify the user or the application making the API request, API keys are typically accessible to clients, making it easy to discover and steal an API key.;;To avoid the security risk in using API keys, it is recommended to use standard authentication flow instead.;https://cloud.google.com/docs/authentication/api-keys;;;gcloud alpha services api-keys delete;;MITRE-ATTACK: T1098 | CIS-2.0: 1.12 | ENS-RD2022: op.acc.2.gcp.rbak.1 | CIS-3.0: 1.12;;;;;<prowler_version>;https://cloud.google.com/api-keys/docs/best-practices | https://cloud.google.com/docs/authentication
<auth_method>;2025-02-14 14:27:20.697446;<account_uid>;<account_name>;;<account_organization_uid>;<account_organization_name>;<account_tags>;<finding_uid>;gcp;artifacts_container_analysis_enabled;Ensure Image Vulnerability Analysis using AR Container Analysis or a third-party provider;Security | Configuration;FAIL;AR Container Analysis is not enabled in project <account_uid>.;False;artifacts;Container Analysis;medium;Service;<resource_uid>;<resource_name>;;;;<region>;Scan images stored in Google Container Registry (GCR) for vulnerabilities using AR Container Analysis or a third-party provider. This helps identify and mitigate security risks associated with known vulnerabilities in container images.;Without image vulnerability scanning, container images stored in Artifact Registry may contain known vulnerabilities, increasing the risk of exploitation by malicious actors.;https://cloud.google.com/artifact-analysis/docs;Enable vulnerability scanning for images stored in Artifact Registry using AR Container Analysis or a third-party provider.;https://cloud.google.com/artifact-analysis/docs/container-scanning-overview;;;gcloud services enable containeranalysis.googleapis.com;;MITRE-ATTACK: T1525 | ENS-RD2022: op.exp.4.r4.gcp.log.1, op.mon.3.gcp.scc.1;;;;By default, AR Container Analysis is disabled.;<prowler_version>;https://cloud.google.com/api-keys/docs/best-practices | https://cloud.google.com/docs/authentication
<auth_method>;2025-02-14 14:27:20.697446;<account_uid>;<account_name>;;<account_organization_uid>;<account_organization_name>;<account_tags>;<finding_uid>;gcp;compute_firewall_rdp_access_from_the_internet_allowed;Ensure That RDP Access Is Restricted From the Internet;;PASS;Firewall <resource_name> does not expose port 3389 (RDP) to the internet.;False;networking;;critical;FirewallRule;<resource_uid>;<resource_name>;;;;<region>;GCP `Firewall Rules` are specific to a `VPC Network`. Each rule either `allows` or `denies` traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances. Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, an `IPv4` address or `IPv4 block in CIDR` notation can be used. Generic `(0.0.0.0/0)` incoming traffic from the Internet to a VPC or VM instance using `RDP` on `Port 3389` can be avoided.;Allowing unrestricted Remote Desktop Protocol (RDP) access can increase opportunities for malicious activities such as hacking, Man-In-The-Middle attacks (MITM) and Pass-The-Hash (PTH) attacks.;;Ensure that Google Cloud Virtual Private Cloud (VPC) firewall rules do not allow unrestricted access (i.e. 0.0.0.0/0) on TCP port 3389 in order to restrict Remote Desktop Protocol (RDP) traffic to trusted IP addresses or IP ranges only and reduce the attack surface. 
TCP port 3389 is used for secure remote GUI login to Windows VM instances by connecting a RDP client application with an RDP server.;https://cloud.google.com/vpc/docs/using-firewalls;;https://docs.<account_organization_name>/checks/gcp/google-cloud-networking-policies/bc_gcp_networking_2#terraform;https://docs.<account_organization_name>/checks/gcp/google-cloud-networking-policies/bc_gcp_networking_2#cli-command;https://www.trendmicro.com/cloudoneconformity/knowledge-base/gcp/CloudVPC/unrestricted-rdp-access.html;MITRE-ATTACK: T1190, T1199, T1048, T1498, T1046 | CIS-2.0: 3.7 | ENS-RD2022: mp.com.1.gcp.fw.1 | CIS-3.0: 3.7;internet-exposed;;;;<prowler_version>;https://cloud.google.com/api-keys/docs | https://cloud.google.com/docs/authentication
<auth_method>;2025-02-14 14:27:20.697446;<account_uid>;<account_name>;;<account_organization_uid>;<account_organization_name>;<account_tags>;<finding_uid>;gcp;compute_firewall_rdp_access_from_the_internet_allowed;Ensure That RDP Access Is Restricted From the Internet;;PASS;Firewall <resource_name> does not expose port 3389 (RDP) to the internet.;False;networking;;critical;FirewallRule;<resource_uid>;<resource_name>;;;;<region>;GCP `Firewall Rules` are specific to a `VPC Network`. Each rule either `allows` or `denies` traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances. Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, an `IPv4` address or `IPv4 block in CIDR` notation can be used. Generic `(0.0.0.0/0)` incoming traffic from the Internet to a VPC or VM instance using `RDP` on `Port 3389` can be avoided.;Allowing unrestricted Remote Desktop Protocol (RDP) access can increase opportunities for malicious activities such as hacking, Man-In-The-Middle attacks (MITM) and Pass-The-Hash (PTH) attacks.;;Ensure that Google Cloud Virtual Private Cloud (VPC) firewall rules do not allow unrestricted access (i.e. 0.0.0.0/0) on TCP port 3389 in order to restrict Remote Desktop Protocol (RDP) traffic to trusted IP addresses or IP ranges only and reduce the attack surface. 
TCP port 3389 is used for secure remote GUI login to Windows VM instances by connecting a RDP client application with an RDP server.;https://cloud.google.com/vpc/docs/using-firewalls;;https://docs.<account_organization_name>/checks/gcp/google-cloud-networking-policies/bc_gcp_networking_2#terraform;https://docs.<account_organization_name>/checks/gcp/google-cloud-networking-policies/bc_gcp_networking_2#cli-command;https://www.trendmicro.com/cloudoneconformity/knowledge-base/gcp/CloudVPC/unrestricted-rdp-access.html;MITRE-ATTACK: T1190, T1199, T1048, T1498, T1046 | CIS-2.0: 3.7 | ENS-RD2022: mp.com.1.gcp.fw.1 | CIS-3.0: 3.7;internet-exposed;;;;<prowler_version>;https://cloud.google.com/api-keys/docs | https://cloud.google.com/docs/authentication
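The semicolon-delimited rows above can be read with Python's standard `csv` module. A minimal sketch, trimmed to a handful of the roughly forty real columns shown in the header; the values are placeholders for illustration, not actual findings:

```python
import csv
import io

# Trimmed sample echoing the format above: ";" as delimiter, with the new
# ADDITIONAL_URLS column (URLs joined with " | ") added by this change.
sample = (
    "CHECK_ID;STATUS;SEVERITY;COMPLIANCE;ADDITIONAL_URLS\n"
    "apikeys_key_exists;PASS;medium;CIS-2.0: 1.12;"
    "https://cloud.google.com/api-keys/docs/best-practices | "
    "https://cloud.google.com/docs/authentication\n"
)

# DictReader maps each row to {column_name: value} using the header line.
reader = csv.DictReader(io.StringIO(sample), delimiter=";")
findings = list(reader)
```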
1 AUTH_METHOD TIMESTAMP ACCOUNT_UID ACCOUNT_NAME ACCOUNT_EMAIL ACCOUNT_ORGANIZATION_UID ACCOUNT_ORGANIZATION_NAME ACCOUNT_TAGS FINDING_UID PROVIDER CHECK_ID CHECK_TITLE CHECK_TYPE STATUS STATUS_EXTENDED MUTED SERVICE_NAME SUBSERVICE_NAME SEVERITY RESOURCE_TYPE RESOURCE_UID RESOURCE_NAME RESOURCE_DETAILS RESOURCE_TAGS PARTITION REGION DESCRIPTION RISK RELATED_URL REMEDIATION_RECOMMENDATION_TEXT REMEDIATION_RECOMMENDATION_URL REMEDIATION_CODE_NATIVEIAC REMEDIATION_CODE_TERRAFORM REMEDIATION_CODE_CLI REMEDIATION_CODE_OTHER COMPLIANCE CATEGORIES DEPENDS_ON RELATED_TO NOTES PROWLER_VERSION ADDITIONAL_URLS
2 <auth_method> 2025-02-14 14:27:20.697446 <account_uid> <account_name> <account_organization_uid> <account_organization_name> <account_tags> <finding_uid> gcp apikeys_key_exists Ensure API Keys Only Exist for Active Services PASS Project <account_uid> does not have active API Keys. False apikeys medium API Key <account_uid> <account_name> <region> API Keys should only be used for services in cases where other authentication methods are unavailable. Unused keys with their permissions intact may still exist within a project. Keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to use standard authentication flow instead. Security risks involved in using API-Keys appear below: API keys are simple encrypted strings, API keys do not identify the user or the application making the API request, API keys are typically accessible to clients, making it easy to discover and steal an API key. To avoid the security risk in using API keys, it is recommended to use standard authentication flow instead. https://cloud.google.com/docs/authentication/api-keys gcloud alpha services api-keys delete MITRE-ATTACK: T1098 | CIS-2.0: 1.12 | ENS-RD2022: op.acc.2.gcp.rbak.1 | CIS-3.0: 1.12 <prowler_version> https://cloud.google.com/api-keys/docs/best-practices | https://cloud.google.com/docs/authentication
3 <auth_method> 2025-02-14 14:27:20.697446 <account_uid> <account_name> <account_organization_uid> <account_organization_name> <account_tags> <finding_uid> gcp artifacts_container_analysis_enabled Ensure Image Vulnerability Analysis using AR Container Analysis or a third-party provider Security | Configuration FAIL AR Container Analysis is not enabled in project <account_uid>. False artifacts Container Analysis medium Service <resource_uid> <resource_name> <region> Scan images stored in Google Container Registry (GCR) for vulnerabilities using AR Container Analysis or a third-party provider. This helps identify and mitigate security risks associated with known vulnerabilities in container images. Without image vulnerability scanning, container images stored in Artifact Registry may contain known vulnerabilities, increasing the risk of exploitation by malicious actors. https://cloud.google.com/artifact-analysis/docs Enable vulnerability scanning for images stored in Artifact Registry using AR Container Analysis or a third-party provider. https://cloud.google.com/artifact-analysis/docs/container-scanning-overview gcloud services enable containeranalysis.googleapis.com MITRE-ATTACK: T1525 | ENS-RD2022: op.exp.4.r4.gcp.log.1, op.mon.3.gcp.scc.1 By default, AR Container Analysis is disabled. <prowler_version> https://cloud.google.com/api-keys/docs/best-practices | https://cloud.google.com/docs/authentication
4 <auth_method> 2025-02-14 14:27:20.697446 <account_uid> <account_name> <account_organization_uid> <account_organization_name> <account_tags> <finding_uid> gcp compute_firewall_rdp_access_from_the_internet_allowed Ensure That RDP Access Is Restricted From the Internet PASS Firewall <resource_name> does not expose port 3389 (RDP) to the internet. False networking critical FirewallRule <resource_uid> <resource_name> <region> GCP `Firewall Rules` are specific to a `VPC Network`. Each rule either `allows` or `denies` traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances. Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, an `IPv4` address or `IPv4 block in CIDR` notation can be used. Generic `(0.0.0.0/0)` incoming traffic from the Internet to a VPC or VM instance using `RDP` on `Port 3389` can be avoided. Allowing unrestricted Remote Desktop Protocol (RDP) access can increase opportunities for malicious activities such as hacking, Man-In-The-Middle attacks (MITM) and Pass-The-Hash (PTH) attacks. Ensure that Google Cloud Virtual Private Cloud (VPC) firewall rules do not allow unrestricted access (i.e. 0.0.0.0/0) on TCP port 3389 in order to restrict Remote Desktop Protocol (RDP) traffic to trusted IP addresses or IP ranges only and reduce the attack surface. TCP port 3389 is used for secure remote GUI login to Windows VM instances by connecting a RDP client application with an RDP server. 
https://cloud.google.com/vpc/docs/using-firewalls https://docs.<account_organization_name>/checks/gcp/google-cloud-networking-policies/bc_gcp_networking_2#terraform https://docs.<account_organization_name>/checks/gcp/google-cloud-networking-policies/bc_gcp_networking_2#cli-command https://www.trendmicro.com/cloudoneconformity/knowledge-base/gcp/CloudVPC/unrestricted-rdp-access.html MITRE-ATTACK: T1190, T1199, T1048, T1498, T1046 | CIS-2.0: 3.7 | ENS-RD2022: mp.com.1.gcp.fw.1 | CIS-3.0: 3.7 internet-exposed <prowler_version> https://cloud.google.com/api-keys/docs | https://cloud.google.com/docs/authentication
5 <auth_method> 2025-02-14 14:27:20.697446 <account_uid> <account_name> <account_organization_uid> <account_organization_name> <account_tags> <finding_uid> gcp compute_firewall_rdp_access_from_the_internet_allowed Ensure That RDP Access Is Restricted From the Internet PASS Firewall <resource_name> does not expose port 3389 (RDP) to the internet. False networking critical FirewallRule <resource_uid> <resource_name> <region> GCP `Firewall Rules` are specific to a `VPC Network`. Each rule either `allows` or `denies` traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances. Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, an `IPv4` address or `IPv4 block in CIDR` notation can be used. Generic `(0.0.0.0/0)` incoming traffic from the Internet to a VPC or VM instance using `RDP` on `Port 3389` can be avoided. Allowing unrestricted Remote Desktop Protocol (RDP) access can increase opportunities for malicious activities such as hacking, Man-In-The-Middle attacks (MITM) and Pass-The-Hash (PTH) attacks. Ensure that Google Cloud Virtual Private Cloud (VPC) firewall rules do not allow unrestricted access (i.e. 0.0.0.0/0) on TCP port 3389 in order to restrict Remote Desktop Protocol (RDP) traffic to trusted IP addresses or IP ranges only and reduce the attack surface. TCP port 3389 is used for secure remote GUI login to Windows VM instances by connecting a RDP client application with an RDP server. 
https://cloud.google.com/vpc/docs/using-firewalls https://docs.<account_organization_name>/checks/gcp/google-cloud-networking-policies/bc_gcp_networking_2#terraform https://docs.<account_organization_name>/checks/gcp/google-cloud-networking-policies/bc_gcp_networking_2#cli-command https://www.trendmicro.com/cloudoneconformity/knowledge-base/gcp/CloudVPC/unrestricted-rdp-access.html MITRE-ATTACK: T1190, T1199, T1048, T1498, T1046 | CIS-2.0: 3.7 | ENS-RD2022: mp.com.1.gcp.fw.1 | CIS-3.0: 3.7 internet-exposed <prowler_version> https://cloud.google.com/api-keys/docs | https://cloud.google.com/docs/authentication
@@ -27,6 +27,7 @@
"categories": [],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "",
"compliance": {
"MITRE-ATTACK": [
@@ -147,6 +148,7 @@
"categories": [],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "By default, AR Container Analysis is disabled.",
"compliance": {
"MITRE-ATTACK": [
@@ -267,6 +269,7 @@
],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "",
"compliance": {
"MITRE-ATTACK": [
@@ -394,6 +397,7 @@
],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "",
"compliance": {
"MITRE-ATTACK": [
@@ -533,6 +537,7 @@
],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "",
"compliance": {
"MITRE-ATTACK": [
@@ -1,5 +1,5 @@
AUTH_METHOD;TIMESTAMP;ACCOUNT_UID;ACCOUNT_NAME;ACCOUNT_EMAIL;ACCOUNT_ORGANIZATION_UID;ACCOUNT_ORGANIZATION_NAME;ACCOUNT_TAGS;FINDING_UID;PROVIDER;CHECK_ID;CHECK_TITLE;CHECK_TYPE;STATUS;STATUS_EXTENDED;MUTED;SERVICE_NAME;SUBSERVICE_NAME;SEVERITY;RESOURCE_TYPE;RESOURCE_UID;RESOURCE_NAME;RESOURCE_DETAILS;RESOURCE_TAGS;PARTITION;REGION;DESCRIPTION;RISK;RELATED_URL;REMEDIATION_RECOMMENDATION_TEXT;REMEDIATION_RECOMMENDATION_URL;REMEDIATION_CODE_NATIVEIAC;REMEDIATION_CODE_TERRAFORM;REMEDIATION_CODE_CLI;REMEDIATION_CODE_OTHER;COMPLIANCE;CATEGORIES;DEPENDS_ON;RELATED_TO;NOTES;PROWLER_VERSION
<auth_method>;2025-02-14 14:27:38.533897;<account_uid>;context: <context>;;;;;<finding_uid>;kubernetes;apiserver_always_pull_images_plugin;Ensure that the admission control plugin AlwaysPullImages is set;;FAIL;AlwaysPullImages admission control plugin is not set in pod <resource_uid>;False;apiserver;;medium;KubernetesAPIServer;<resource_id>;<resource_name>;;;;namespace: kube-system;This check verifies that the AlwaysPullImages admission control plugin is enabled in the Kubernetes API server. This plugin ensures that every new pod always pulls the required images, enforcing image access control and preventing the use of possibly outdated or altered images.;Without AlwaysPullImages, once an image is pulled to a node, any pod can use it without any authorization check, potentially leading to security risks.;https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages;Configure the API server to use the AlwaysPullImages admission control plugin to ensure image security and integrity.;https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers;https://docs.prowler.com/checks/kubernetes/kubernetes-policy-index/ensure-that-the-admission-control-plugin-alwayspullimages-is-set#kubernetes;;--enable-admission-plugins=...,AlwaysPullImages,...;;CIS-1.10: 1.2.11 | CIS-1.8: 1.2.11;cluster-security;;;Enabling AlwaysPullImages can increase network and registry load and decrease container startup speed. It may not be suitable for all environments.;<prowler_version>
<auth_method>;2025-02-14 14:27:38.533897;<account_uid>;context: <context>;;;;;<finding_uid>;kubernetes;apiserver_anonymous_requests;Ensure that the --anonymous-auth argument is set to false;;PASS;API Server does not have anonymous-auth enabled in pod <resource_uid>;False;apiserver;;high;KubernetesAPIServer;<resource_id>;<resource_name>;;;;namespace: kube-system;Disable anonymous requests to the API server. When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests, which are then served by the API server. Disallowing anonymous requests strengthens security by ensuring all access is authenticated.;Enabling anonymous access to the API server can expose the cluster to unauthorized access and potential security vulnerabilities.;https://kubernetes.io/docs/admin/authentication/#anonymous-requests;Ensure the --anonymous-auth argument in the API server is set to false. This will reject all anonymous requests, enforcing authenticated access to the server.;https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/;https://docs.prowler.com/checks/kubernetes/kubernetes-policy-index/ensure-that-the-anonymous-auth-argument-is-set-to-false-1#kubernetes;;--anonymous-auth=false;;CIS-1.10: 1.2.1 | CIS-1.8: 1.2.1;trustboundaries;;;While anonymous access can be useful for health checks and discovery, consider the security implications for your specific environment.;<prowler_version>
<auth_method>;2025-02-14 14:27:38.533897;<account_uid>;context: <context>;;;;;<finding_uid>;kubernetes;apiserver_audit_log_maxage_set;Ensure that the --audit-log-maxage argument is set to 30 or as appropriate;;FAIL;Audit log max age is not set to 30 or as appropriate in pod <resource_uid>;False;apiserver;;medium;KubernetesAPIServer;<resource_id>;<resource_name>;;;;namespace: kube-system;This check ensures that the Kubernetes API server is configured with an appropriate audit log retention period. Setting --audit-log-maxage to 30 or as per business requirements helps in maintaining logs for sufficient time to investigate past events.;Without an adequate log retention period, there may be insufficient audit history to investigate and analyze past events or security incidents.;https://kubernetes.io/docs/concepts/cluster-administration/audit/;Configure the API server audit log retention period to retain logs for at least 30 days or as per your organization's requirements.;https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/;https://docs.prowler.com/checks/kubernetes/kubernetes-policy-index/ensure-that-the-audit-log-maxage-argument-is-set-to-30-or-as-appropriate#kubernetes;;--audit-log-maxage=30;;CIS-1.10: 1.2.17 | CIS-1.8: 1.2.18;logging;;;Ensure the audit log retention period is set appropriately to balance between storage constraints and the need for historical data.;<prowler_version>
<auth_method>;2025-02-14 14:27:38.533897;<account_uid>;context: <context>;;;;;<finding_uid>;kubernetes;apiserver_audit_log_maxbackup_set;Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate;;FAIL;Audit log max backup is not set to 10 or as appropriate in pod <resource_uid>;False;apiserver;;medium;KubernetesAPIServer;<resource_id>;<resource_name>;;;;namespace: kube-system;This check ensures that the Kubernetes API server is configured with an appropriate number of audit log backups. Setting --audit-log-maxbackup to 10 or as per business requirements helps maintain a sufficient log backup for investigations or analysis.;Without an adequate number of audit log backups, there may be insufficient log history to investigate past events or security incidents.;https://kubernetes.io/docs/concepts/cluster-administration/audit/;Configure the API server audit log backup retention to 10 or as per your organization's requirements.;https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/;https://docs.prowler.com/checks/kubernetes/kubernetes-policy-index/ensure-that-the-audit-log-maxbackup-argument-is-set-to-10-or-as-appropriate#kubernetes;;--audit-log-maxbackup=10;;CIS-1.10: 1.2.18 | CIS-1.8: 1.2.19;logging;;;Ensure the audit log backup retention period is set appropriately to balance between storage constraints and the need for historical data.;<prowler_version>
AUTH_METHOD;TIMESTAMP;ACCOUNT_UID;ACCOUNT_NAME;ACCOUNT_EMAIL;ACCOUNT_ORGANIZATION_UID;ACCOUNT_ORGANIZATION_NAME;ACCOUNT_TAGS;FINDING_UID;PROVIDER;CHECK_ID;CHECK_TITLE;CHECK_TYPE;STATUS;STATUS_EXTENDED;MUTED;SERVICE_NAME;SUBSERVICE_NAME;SEVERITY;RESOURCE_TYPE;RESOURCE_UID;RESOURCE_NAME;RESOURCE_DETAILS;RESOURCE_TAGS;PARTITION;REGION;DESCRIPTION;RISK;RELATED_URL;REMEDIATION_RECOMMENDATION_TEXT;REMEDIATION_RECOMMENDATION_URL;REMEDIATION_CODE_NATIVEIAC;REMEDIATION_CODE_TERRAFORM;REMEDIATION_CODE_CLI;REMEDIATION_CODE_OTHER;COMPLIANCE;CATEGORIES;DEPENDS_ON;RELATED_TO;NOTES;PROWLER_VERSION;ADDITIONAL_URLS
<auth_method>;2025-02-14 14:27:38.533897;<account_uid>;context: <context>;;;;;<finding_uid>;kubernetes;apiserver_always_pull_images_plugin;Ensure that the admission control plugin AlwaysPullImages is set;;FAIL;AlwaysPullImages admission control plugin is not set in pod <resource_uid>;False;apiserver;;medium;KubernetesAPIServer;<resource_id>;<resource_name>;;;;namespace: kube-system;This check verifies that the AlwaysPullImages admission control plugin is enabled in the Kubernetes API server. This plugin ensures that every new pod always pulls the required images, enforcing image access control and preventing the use of possibly outdated or altered images.;Without AlwaysPullImages, once an image is pulled to a node, any pod can use it without any authorization check, potentially leading to security risks.;https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages;Configure the API server to use the AlwaysPullImages admission control plugin to ensure image security and integrity.;https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers;https://docs.prowler.com/checks/kubernetes/kubernetes-policy-index/ensure-that-the-admission-control-plugin-alwayspullimages-is-set#kubernetes;;--enable-admission-plugins=...,AlwaysPullImages,...;;CIS-1.10: 1.2.11 | CIS-1.8: 1.2.11;cluster-security;;;Enabling AlwaysPullImages can increase network and registry load and decrease container startup speed. It may not be suitable for all environments.;<prowler_version>;https://kubernetes.io/docs/concepts/containers/images/ | https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
<auth_method>;2025-02-14 14:27:38.533897;<account_uid>;context: <context>;;;;;<finding_uid>;kubernetes;apiserver_anonymous_requests;Ensure that the --anonymous-auth argument is set to false;;PASS;API Server does not have anonymous-auth enabled in pod <resource_uid>;False;apiserver;;high;KubernetesAPIServer;<resource_id>;<resource_name>;;;;namespace: kube-system;Disable anonymous requests to the API server. When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests, which are then served by the API server. Disallowing anonymous requests strengthens security by ensuring all access is authenticated.;Enabling anonymous access to the API server can expose the cluster to unauthorized access and potential security vulnerabilities.;https://kubernetes.io/docs/admin/authentication/#anonymous-requests;Ensure the --anonymous-auth argument in the API server is set to false. This will reject all anonymous requests, enforcing authenticated access to the server.;https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/;https://docs.prowler.com/checks/kubernetes/kubernetes-policy-index/ensure-that-the-anonymous-auth-argument-is-set-to-false-1#kubernetes;;--anonymous-auth=false;;CIS-1.10: 1.2.1 | CIS-1.8: 1.2.1;trustboundaries;;;While anonymous access can be useful for health checks and discovery, consider the security implications for your specific environment.;<prowler_version>;https://kubernetes.io/docs/concepts/containers/images/ | https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
<auth_method>;2025-02-14 14:27:38.533897;<account_uid>;context: <context>;;;;;<finding_uid>;kubernetes;apiserver_audit_log_maxage_set;Ensure that the --audit-log-maxage argument is set to 30 or as appropriate;;FAIL;Audit log max age is not set to 30 or as appropriate in pod <resource_uid>;False;apiserver;;medium;KubernetesAPIServer;<resource_id>;<resource_name>;;;;namespace: kube-system;This check ensures that the Kubernetes API server is configured with an appropriate audit log retention period. Setting --audit-log-maxage to 30 or as per business requirements helps in maintaining logs for sufficient time to investigate past events.;Without an adequate log retention period, there may be insufficient audit history to investigate and analyze past events or security incidents.;https://kubernetes.io/docs/concepts/cluster-administration/audit/;Configure the API server audit log retention period to retain logs for at least 30 days or as per your organization's requirements.;https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/;https://docs.prowler.com/checks/kubernetes/kubernetes-policy-index/ensure-that-the-audit-log-maxage-argument-is-set-to-30-or-as-appropriate#kubernetes;;--audit-log-maxage=30;;CIS-1.10: 1.2.17 | CIS-1.8: 1.2.18;logging;;;Ensure the audit log retention period is set appropriately to balance between storage constraints and the need for historical data.;<prowler_version>;https://kubernetes.io/docs/concepts/containers/images/ | https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
<auth_method>;2025-02-14 14:27:38.533897;<account_uid>;context: <context>;;;;;<finding_uid>;kubernetes;apiserver_audit_log_maxbackup_set;Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate;;FAIL;Audit log max backup is not set to 10 or as appropriate in pod <resource_uid>;False;apiserver;;medium;KubernetesAPIServer;<resource_id>;<resource_name>;;;;namespace: kube-system;This check ensures that the Kubernetes API server is configured with an appropriate number of audit log backups. Setting --audit-log-maxbackup to 10 or as per business requirements helps maintain a sufficient log backup for investigations or analysis.;Without an adequate number of audit log backups, there may be insufficient log history to investigate past events or security incidents.;https://kubernetes.io/docs/concepts/cluster-administration/audit/;Configure the API server audit log backup retention to 10 or as per your organization's requirements.;https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/;https://docs.prowler.com/checks/kubernetes/kubernetes-policy-index/ensure-that-the-audit-log-maxbackup-argument-is-set-to-10-or-as-appropriate#kubernetes;;--audit-log-maxbackup=10;;CIS-1.10: 1.2.18 | CIS-1.8: 1.2.19;logging;;;Ensure the audit log backup retention period is set appropriately to balance between storage constraints and the need for historical data.;<prowler_version>;https://kubernetes.io/docs/concepts/containers/images/ | https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
@@ -28,6 +28,7 @@
],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "Enabling AlwaysPullImages can increase network and registry load and decrease container startup speed. It may not be suitable for all environments.",
"compliance": {
"CIS-1.10": [
@@ -161,6 +162,7 @@
],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "While anonymous access can be useful for health checks and discovery, consider the security implications for your specific environment.",
"compliance": {
"CIS-1.10": [
@@ -294,6 +296,7 @@
],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "Ensure the audit log retention period is set appropriately to balance between storage constraints and the need for historical data.",
"compliance": {
"CIS-1.10": [
@@ -427,6 +430,7 @@
],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "Ensure the audit log backup retention period is set appropriately to balance between storage constraints and the need for historical data.",
"compliance": {
"CIS-1.10": [
@@ -560,6 +564,7 @@
],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "Adjust the audit log file size limit based on your organization's storage capabilities and logging requirements.",
"compliance": {
"CIS-1.10": [
@@ -693,6 +698,7 @@
],
"depends_on": [],
"related_to": [],
"additional_urls": [],
"notes": "Audit logs are not enabled by default in Kubernetes. Configuring them is essential for security monitoring and forensic analysis.",
"compliance": {
"CIS-1.10": [
@@ -0,0 +1,4 @@
PROWLER_APP_EMAIL="your_registered@email.com"
PROWLER_APP_PASSWORD="your_user_pass"
PROWLER_APP_TENANT_ID="optional_tenant_to_login"
PROWLER_API_BASE_URL=https://api.prowler.com
@@ -0,0 +1,155 @@
# Prowler MCP Server
Access the entire Prowler ecosystem through the Model Context Protocol (MCP). This server provides two main capabilities:
- **Prowler Cloud and Prowler App (Self-Managed)**: Full access to Prowler Cloud and Prowler App (Self-Managed) for managing providers, running scans, and analyzing security findings
- **Prowler Hub**: Access to Prowler's security checks, fixers, and compliance frameworks catalog
## Requirements
- Python 3.12+
- Network access to `https://hub.prowler.com` (for Prowler Hub)
- Network access to the Prowler Cloud API or your self-hosted Prowler App API (for Prowler Cloud and Prowler App (Self-Managed) features)
- Prowler Cloud account credentials (for Prowler Cloud and Prowler App (Self-Managed) features)
## Installation
### From Sources
You need [uv](https://docs.astral.sh/uv/) installed.
```bash
git clone https://github.com/prowler-cloud/prowler.git
```
## Running
After installation, start the MCP server via the console script:
```bash
cd prowler/mcp_server
uv run prowler-mcp
```
Alternatively, you can run it from anywhere with the `uvx` command:
```bash
uvx /path/to/prowler/mcp_server/
```
## Available Tools
### Prowler Hub
All tools are exposed under the `prowler_hub` prefix.
- `prowler_hub_get_check_filters`: Return available filter values for checks (providers, services, severities, categories, compliances). Call this before `prowler_hub_get_checks` to build valid queries.
- `prowler_hub_get_checks`: List checks with optional advanced filtering.
- `prowler_hub_search_checks`: Full-text search across check metadata.
- `prowler_hub_get_compliance_frameworks`: List/filter compliance frameworks.
- `prowler_hub_search_compliance_frameworks`: Full-text search across frameworks.
- `prowler_hub_list_providers`: List Prowler official providers and their services.
- `prowler_hub_get_artifacts_count`: Return total artifact count (checks + frameworks).
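The intended flow is to fetch the available filter values first and only then query for checks. A minimal sketch of that pattern; the filter shape and helper below are illustrative, not the Hub's exact schema:

```python
def build_check_query(requested: dict, available_filters: dict) -> dict:
    """Keep only filter values the Hub actually advertises (hypothetical helper)."""
    query = {}
    for key, value in requested.items():
        if value not in available_filters.get(key, []):
            raise ValueError(f"{value!r} is not a valid {key}")
        query[key] = value
    return query

# Pretend this came back from prowler_hub_get_check_filters
filters = {
    "provider": ["aws", "gcp", "azure", "kubernetes"],
    "severity": ["low", "medium", "high", "critical"],
}

# A valid query to pass on to prowler_hub_get_checks
print(build_check_query({"provider": "gcp", "severity": "critical"}, filters))
```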
### Prowler Cloud and Prowler App (Self-Managed)
All tools are exposed under the `prowler_app` prefix.
#### Findings Management
- `prowler_app_list_findings`: List security findings from Prowler scans with advanced filtering
- `prowler_app_get_finding`: Get detailed information about a specific security finding
- `prowler_app_get_latest_findings`: Retrieve latest findings from the latest scans for each provider
- `prowler_app_get_findings_metadata`: Fetch unique metadata values from filtered findings
- `prowler_app_get_latest_findings_metadata`: Fetch metadata from latest findings across all providers
#### Provider Management
- `prowler_app_list_providers`: List all providers with filtering options
- `prowler_app_create_provider`: Create a new provider in the current tenant
- `prowler_app_get_provider`: Get detailed information about a specific provider
- `prowler_app_update_provider`: Update provider details (alias, etc.)
- `prowler_app_delete_provider`: Delete a specific provider
- `prowler_app_test_provider_connection`: Test provider connection status
#### Provider Secrets Management
- `prowler_app_list_provider_secrets`: List all provider secrets with filtering
- `prowler_app_add_provider_secret`: Add or update credentials for a provider
- `prowler_app_get_provider_secret`: Get detailed information about a provider secret
- `prowler_app_update_provider_secret`: Update provider secret details
- `prowler_app_delete_provider_secret`: Delete a provider secret
#### Scan Management
- `prowler_app_list_scans`: List all scans with filtering options
- `prowler_app_create_scan`: Trigger a manual scan for a specific provider
- `prowler_app_get_scan`: Get detailed information about a specific scan
- `prowler_app_update_scan`: Update scan details
- `prowler_app_get_scan_compliance_report`: Download compliance report as CSV
- `prowler_app_get_scan_report`: Download ZIP file containing scan report
#### Schedule Management
- `prowler_app_schedules_daily_scan`: Create a daily scheduled scan for a provider
#### Processor Management
- `prowler_app_processors_list`: List all processors with filtering
- `prowler_app_processors_create`: Create a new processor. For now, only mute lists are supported.
- `prowler_app_processors_retrieve`: Get processor details by ID
- `prowler_app_processors_partial_update`: Update processor configuration
- `prowler_app_processors_destroy`: Delete a processor
## Configuration
### Environment Variables
For Prowler Cloud and Prowler App (Self-Managed) features, you need to set the following environment variables:
```bash
# Required for Prowler Cloud and Prowler App (Self-Managed) authentication
export PROWLER_APP_EMAIL="your-email@example.com"
export PROWLER_APP_PASSWORD="your-password"
# Optional - if not provided, the first membership added to the user is used. You can find this as `Organization ID` in your User Profile in Prowler App
export PROWLER_APP_TENANT_ID="your-tenant-id"
# Optional - custom API endpoint; if not provided, the Prowler Cloud API is used
export PROWLER_API_BASE_URL="https://api.prowler.com"
```
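As a sketch of how the server consumes these variables (mirroring the authentication manager added in this PR: email and password are required, tenant is optional, and the base URL defaults to Prowler Cloud with any trailing slash stripped):

```python
import os

def load_settings() -> dict:
    """Read the PROWLER_* variables the MCP server expects (sketch)."""
    email = os.getenv("PROWLER_APP_EMAIL")
    password = os.getenv("PROWLER_APP_PASSWORD")
    if not email or not password:
        raise ValueError("PROWLER_APP_EMAIL and PROWLER_APP_PASSWORD are required")
    return {
        "email": email,
        "password": password,
        # Optional: falls back to the user's first membership if unset
        "tenant_id": os.getenv("PROWLER_APP_TENANT_ID"),
        # Optional: defaults to Prowler Cloud; trailing slash stripped
        "base_url": os.getenv("PROWLER_API_BASE_URL", "https://api.prowler.com").rstrip("/"),
    }

os.environ.update({
    "PROWLER_APP_EMAIL": "me@example.com",
    "PROWLER_APP_PASSWORD": "pw",
    "PROWLER_API_BASE_URL": "https://api.example.internal/",
})
print(load_settings()["base_url"])  # https://api.example.internal
```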
### MCP Client Configuration
Configure your MCP client, such as Claude Desktop or Cursor, to launch the server with the `uvx` command. Below is a generic snippet; consult your client's documentation for exact locations.
```json
{
"mcpServers": {
"prowler": {
"command": "uvx",
"args": ["/path/to/prowler/mcp_server/"],
"env": {
"PROWLER_APP_EMAIL": "your-email@example.com",
"PROWLER_APP_PASSWORD": "your-password",
"PROWLER_APP_TENANT_ID": "your-tenant-id", // Optional, this can be found as `Organization ID` in your User Profile in Prowler App
"PROWLER_API_BASE_URL": "https://api.prowler.com" // Optional
}
}
}
}
```
### Claude Desktop (macOS/Windows)
Add the example server to Claude Desktop's config file, then restart the app.
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%AppData%\Claude\claude_desktop_config.json` (e.g. `C:\\Users\\<you>\\AppData\\Roaming\\Claude\\claude_desktop_config.json`)
### Cursor (macOS/Linux)
If you want to have it globally available, add the example server to Cursor's config file, then restart the app.
- macOS/Linux: `~/.cursor/mcp.json`
If you want to have it only for the current project, add the example server to a new `.cursor/mcp.json` file in the project's root.
## License
This project follows the repository's main license. See the [LICENSE](../LICENSE) file at the repository root.
@@ -0,0 +1,12 @@
"""
Prowler MCP - Model Context Protocol server for Prowler ecosystem
This package provides MCP tools for accessing:
- Prowler Hub: All security artifacts (detections, remediations and frameworks) supported by Prowler
"""
__version__ = "0.1.0"
__author__ = "Prowler Team"
__email__ = "engineering@prowler.com"
__all__ = ["__version__", "prowler_mcp_server"]
@@ -0,0 +1,3 @@
from prowler_mcp_server.lib.logger import logger
__all__ = ["logger"]
@@ -0,0 +1,4 @@
from fastmcp.utilities.logging import get_logger
# Create and export logger
logger = get_logger("prowler-mcp-server")
@@ -0,0 +1,22 @@
import asyncio
import sys
from prowler_mcp_server.lib.logger import logger
from prowler_mcp_server.server import prowler_mcp_server, setup_main_server
def main():
"""Main entry point for the MCP server."""
try:
asyncio.run(setup_main_server())
prowler_mcp_server.run()
except KeyboardInterrupt:
logger.info("Shutting down Prowler MCP server...")
sys.exit(0)
except Exception as e:
logger.error(f"Error starting server: {e}")
sys.exit(1)
if __name__ == "__main__":
main()
@@ -0,0 +1,200 @@
"""Authentication manager for Prowler App API."""
import base64
import json
import os
from datetime import datetime
from typing import Dict, Optional
import httpx
from prowler_mcp_server import __version__
from prowler_mcp_server.lib.logger import logger
class ProwlerAppAuth:
"""Handles authentication and token management for Prowler App API."""
def __init__(self):
self.base_url = os.getenv(
"PROWLER_API_BASE_URL", "https://api.prowler.com"
).rstrip("/")
self.email = os.getenv("PROWLER_APP_EMAIL")
self.password = os.getenv("PROWLER_APP_PASSWORD")
self.tenant_id = os.getenv("PROWLER_APP_TENANT_ID", None)
self.access_token: Optional[str] = None
self.refresh_token: Optional[str] = None
self._validate_credentials()
def _validate_credentials(self):
"""Validate that all required credentials are present."""
if not self.email:
raise ValueError("PROWLER_APP_EMAIL environment variable is required")
if not self.password:
raise ValueError("PROWLER_APP_PASSWORD environment variable is required")
def _parse_jwt(self, token: str) -> Optional[Dict]:
"""Parse JWT token and return payload, similar to JS parseJwt function."""
if not token:
return None
try:
parts = token.split(".")
if len(parts) != 3:
return None
# Decode base64url
base64_payload = parts[1]
# Replace base64url characters
base64_payload = base64_payload.replace("-", "+").replace("_", "/")
# Add padding if necessary
while len(base64_payload) % 4:
base64_payload += "="
# Decode and parse JSON
decoded = base64.b64decode(base64_payload).decode("utf-8")
return json.loads(decoded)
except Exception as e:
logger.warning(f"Failed to parse JWT token: {e}")
return None
async def authenticate(self) -> str:
"""Authenticate with Prowler App API and return access token."""
logger.info("Starting authentication with Prowler App API")
async with httpx.AsyncClient() as client:
try:
# Prepare JSON:API formatted request body
auth_attributes = {"email": self.email, "password": self.password}
if self.tenant_id:
auth_attributes["tenant_id"] = self.tenant_id
request_body = {
"data": {
"type": "tokens",
"attributes": auth_attributes,
}
}
response = await client.post(
f"{self.base_url}/api/v1/tokens",
json=request_body,
headers={
"Content-Type": "application/vnd.api+json",
"Accept": "application/vnd.api+json",
},
)
response.raise_for_status()
data = response.json()
# Extract token from JSON:API response format
self.access_token = (
data.get("data", {}).get("attributes", {}).get("access")
)
self.refresh_token = (
data.get("data", {}).get("attributes", {}).get("refresh")
)
logger.debug(f"Access token: {self.access_token}")
if not self.access_token:
raise ValueError("Token not found in response")
logger.info("Authentication successful")
return self.access_token
except httpx.HTTPStatusError as e:
logger.error(
f"Authentication failed with HTTP status {e.response.status_code}: {e.response.text}"
)
raise ValueError(f"Authentication failed: {e.response.text}")
except Exception as e:
logger.error(f"Authentication failed with error: {e}")
raise ValueError(f"Authentication failed: {e}")
async def refresh_access_token(self) -> str:
"""Refresh the access token using the refresh token."""
if not self.refresh_token:
logger.info("No refresh token available, performing full authentication")
return await self.authenticate()
logger.info("Refreshing access token")
async with httpx.AsyncClient() as client:
try:
# Prepare JSON:API formatted request body for refresh
request_body = {
"data": {
"type": "tokens",
"attributes": {"refresh": self.refresh_token},
}
}
response = await client.post(
f"{self.base_url}/api/v1/tokens/refresh",
json=request_body,
headers={
"Content-Type": "application/vnd.api+json",
"Accept": "application/vnd.api+json",
},
)
response.raise_for_status()
data = response.json()
# Extract new access token from JSON:API response
self.access_token = (
data.get("data", {}).get("attributes", {}).get("access")
)
logger.info("Token refresh successful")
return self.access_token
except httpx.HTTPStatusError as e:
logger.warning(
f"Token refresh failed, attempting re-authentication: {e}"
)
# If refresh fails, re-authenticate
return await self.authenticate()
async def get_valid_token(self) -> str:
"""Get a valid access token, checking JWT expiry."""
current_token = self.access_token
need_new_token = True
if current_token:
payload = self._parse_jwt(current_token)
if payload:
now = int(datetime.now().timestamp())
time_left = payload.get("exp", 0) - now
if time_left > 120: # 2 minutes margin
need_new_token = False
if need_new_token:
token = await self.authenticate()
# Verify the new token
payload = self._parse_jwt(token)
return token
else:
return current_token
def get_headers(self, token: str) -> Dict[str, str]:
"""Get headers for API requests with authentication."""
headers = {
"Authorization": f"Bearer {token}",
"Content-Type": "application/vnd.api+json",
"Accept": "application/vnd.api+json",
"User-Agent": f"prowler-mcp-server/{__version__}",
}
# Add tenant ID header if available
if self.tenant_id:
headers["X-Tenant-Id"] = self.tenant_id
return headers
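The JWT handling above can be exercised in isolation. A stdlib-only sketch of the same base64url decode and two-minute expiry margin (the token below is hand-built and unsigned, purely for illustration; this is not the server's code):

```python
import base64
import json
import time


def parse_jwt_payload(token):
    """Decode the payload segment of a JWT without verifying the signature."""
    parts = token.split(".")
    if len(parts) != 3:
        return None
    payload = parts[1].replace("-", "+").replace("_", "/")
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.b64decode(payload))


def token_is_fresh(token, margin=120):
    """True when the token's exp claim is more than `margin` seconds away."""
    payload = parse_jwt_payload(token)
    if not payload:
        return False
    return payload.get("exp", 0) - int(time.time()) > margin


# Build a throwaway token whose exp claim is one hour ahead (empty signature).
claims = {"exp": int(time.time()) + 3600, "sub": "demo"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
demo_token = f"header.{body}.sig"

print(token_is_fresh(demo_token))  # → True
```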
@@ -0,0 +1,732 @@
{
"endpoints": {
"* /api/v1/providers*": {
"parameters": {
"id": {
"name": "provider_id",
"description": "The UUID of the provider. This UUID is generated by Prowler and it is not related with the UID of the provider (that is the one that is set by the provider).\n\tThe format is UUIDv4: \"4d0e2614-6385-4fa7-bf0b-c2e2f75c6877\""
}
}
},
"GET /api/v1/providers": {
"name": "list_providers",
"description": "List all providers with options for filtering by various criteria.",
"parameters": {
"fields[providers]": {
"name": "fields",
"description": "The tool will return only the specified fields, if not set all are returned (comma-separated, e.g. \"uid,delta,status\")"
},
"filter[alias]": {
"name": "filter_alias",
"description": "Filter by exact alias name"
},
"filter[alias__icontains]": {
"name": "filter_alias_contains",
"description": "Filter by partial alias match"
},
"filter[alias__in]": {
"name": "filter_alias_in",
"description": "Filter by multiple aliases (comma-separated, e.g. \"aws_alias_1,azure_alias_2\"). Useful when searching for multiple providers at once."
},
"filter[connected]": {
"name": "filter_connected",
"description": "Filter by connected status (True for connected, False for connection failed, if not set all both are returned).\n\tIf the connection haven't been attempted yet, the status will be None and does not apply for this filter."
},
"filter[id]": {
"name": "filter_id",
"description": "Filter by exact ID of the provider (UUID)"
},
"filter[id__in]": {
"name": "filter_id_in",
"description": "Filter by multiple IDs of the providers (comma-separated UUIDs, e.g. \"a1b2c3d4-5678-90ab-cdef-1234567890ab,deadbeef-1234-5678-9abc-def012345678,0f1e2d3c-4b5a-6978-8c9d-0e1f2a3b4c5d\"). Useful when searching for multiple providers at once."
},
"filter[inserted_at]": {
"name": "filter_inserted_at",
"description": "Filter by exact date (format: YYYY-MM-DD). This is the date when the provider was inserted into the database."
},
"filter[inserted_at__gte]": {
"name": "filter_inserted_at_gte",
"description": "Filter providers inserted on or after this date (format: YYYY-MM-DD)"
},
"filter[inserted_at__lte]": {
"name": "filter_inserted_at_lte",
"description": "Filter providers inserted on or before this date (format: YYYY-MM-DD)"
},
"filter[provider]": {
"name": "filter_provider",
"description": "Filter by single provider type"
},
"filter[provider__in]": {
"name": "filter_provider_in",
"description": "Filter by multiple provider types (comma-separated, e.g. \"aws,azure,gcp\")"
},
"filter[search]": {
"name": "filter_search",
"description": "A search term accross \"provider\", \"alias\" and \"uid\""
},
"filter[uid]": {
"name": "filter_uid",
"description": "Filter by exact finding UID"
},
"filter[uid__icontains]": {
"name": "filter_uid_contains",
"description": "Filter by partial finding UID match"
},
"filter[uid__in]": {
"name": "filter_uid_in",
"description": "Filter by multiple UIDs (comma-separated UUIDs)"
},
"filter[updated_at]": {
"name": "filter_updated_at",
"description": "Filter by exact date (format: YYYY-MM-DD). This is the date when the provider was updated in the database."
},
"filter[updated_at__gte]": {
"name": "filter_updated_at_gte",
"description": "Filter providers updated on or after this date (format: YYYY-MM-DD)"
},
"filter[updated_at__lte]": {
"name": "filter_updated_at_lte",
"description": "Filter providers updated on or before this date (format: YYYY-MM-DD)"
},
"include": {
"name": "include",
"description": "Include related resources in the response, for now only \"provider_groups\" is supported"
},
"page[number]": {
"name": "page_number",
"description": "Page number to retrieve (default: 1)"
},
"page[size]": {
"name": "page_size",
"description": "Number of results per page (default: 100)"
},
"sort": {
"name": "sort",
"description": "Sort the results by the specified fields. Use '-' prefix for descending order. (e.g. \"-provider,inserted_at\", this first sorts by provider alphabetically and then inside of each category by inserted_at date)"
}
}
},
"POST /api/v1/providers": {
"name": "create_provider",
"description": "Create a new provider in the current Prowler Tenant.\n\tThis is just for creating a new provider, not for adding/configuring credentials. To add credentials to an existing provider, use tool add_provider_secret from Prowler MCP server",
"parameters": {
"alias": {
"description": "Pseudonym name to identify the provider"
},
"provider": {
"description": "Type of provider to create"
},
"uid": {
"description": "UID for the provider. This UID is dependent on the provider type: \n\tAWS: AWS account ID\n\tAzure: Azure subscription ID\n\tGCP: GCP project ID\n\tKubernetes: Kubernetes namespace\n\tM365: M365 domain ID\n\tGitHub: GitHub username or organization name"
}
}
},
"GET /api/v1/providers/{id}": {
"name": "get_provider",
"description": "Get detailed information about a specific provider",
"parameters": {
"fields[providers]": {
"name": "fields",
"description": "The tool will return only the specified fields, if not set all are returned (comma-separated, e.g. \"uid,alias,connection\")."
},
"include": {
"description": "Include related resources in the response, for now only \"provider_groups\" is supported"
}
}
},
"PATCH /api/v1/providers/{id}": {
"name": "update_provider",
"description": "Update the details of a specific provider",
"parameters": {
"alias": {
"description": "Pseudonym name to identify the provider, if not set, the alias will not be updated"
}
}
},
"DELETE /api/v1/providers/{id}": {
"name": "delete_provider",
"description": "Delete a specific provider"
},
"POST /api/v1/providers/{id}/connection": {
"name": "test_provider_connection",
"description": "Test the connection status of a specific provider with the credentials set in the provider secret. Needed to be done before running a scan."
},
"GET /api/v1/providers/secrets": {
"name": "list_provider_secrets",
"description": "List all provider secrets with options for filtering by various criteria",
"parameters": {
"fields[provider-secrets]": {
"name": "fields",
"description": "The tool will return only the specified fields, if not set all are returned (comma-separated, e.g. \"name,secret_type,provider\")"
},
"filter[inserted_at]": {
"name": "filter_inserted_at",
"description": "Filter by exact date when the secret was inserted (format: YYYY-MM-DD)"
},
"filter[name]": {
"name": "filter_name",
"description": "Filter by exact secret name"
},
"filter[name__icontains]": {
"name": "filter_name_contains",
"description": "Filter by partial secret name match"
},
"filter[provider]": {
"name": "filter_provider",
"description": "Filter by prowler provider UUID (UUIDv4)"
},
"filter[search]": {
"name": "filter_search",
"description": "Search term in name attribute"
},
"filter[updated_at]": {
"name": "filter_updated_at",
"description": "Filter by exact update date (format: YYYY-MM-DD)"
},
"page[number]": {
"name": "page_number",
"description": "Page number to retrieve (default: 1)"
},
"page[size]": {
"name": "page_size",
"description": "Number of results per page"
},
"sort": {
"name": "sort",
"description": "Sort the results by the specified fields. You can specify multiple fields separated by commas; the results will be sorted by the first field, then by the second within each group of the first, and so on. Use '-' as a prefix to a field name for descending order (e.g. \"-name,inserted_at\" sorts by name descending, then by inserted_at ascending within each name). If not set, the default sort order will be applied"
}
}
},
"* /api/v1/providers/secrets*": {
"parameters": {
"secret": {
"name": "credentials",
"description": "Provider-specific credentials dictionary. Supported formats:\n - AWS Static: {\"aws_access_key_id\": \"...\", \"aws_secret_access_key\": \"...\", \"aws_session_token\": \"...\"}\n - AWS Assume Role: {\"role_arn\": \"...\", \"external_id\": \"...\", \"session_duration\": 3600, \"role_session_name\": \"...\"}\n - Azure: {\"tenant_id\": \"...\", \"client_id\": \"...\", \"client_secret\": \"...\"}\n - M365: {\"tenant_id\": \"...\", \"client_id\": \"...\", \"client_secret\": \"...\", \"user\": \"...\", \"password\": \"...\"}\n - GCP Static: {\"client_id\": \"...\", \"client_secret\": \"...\", \"refresh_token\": \"...\"}\n - GCP Service Account: {\"service_account_key\": {...}}\n - Kubernetes: {\"kubeconfig_content\": \"...\"}\n - GitHub PAT: {\"personal_access_token\": \"...\"}\n - GitHub OAuth: {\"oauth_app_token\": \"...\"}\n - GitHub App: {\"github_app_id\": 123, \"github_app_key\": \"path/to/key\"}"
},
"secret_type": {
"description": "Type of secret:\n\tstatic: Static credentials\n\trole: Assume role credentials (for now only AWS is supported)\n\tservice_account: Service account credentials (for now only GCP is supported)"
}
}
},
"POST /api/v1/providers/secrets": {
"name": "add_provider_secret",
"description": "Add or update complete credentials for an existing provider",
"parameters": {
"provider_id": {
"description": "The UUID of the provider. This UUID is generated by Prowler and it is not related with the UID of the provider, the format is UUIDv4: \"4d0e2614-6385-4fa7-bf0b-c2e2f75c6877\""
},
"name": {
"name": "secret_name",
"description": "Name for the credential secret. This must be between 3 and 100 characters long"
}
}
},
"GET /api/v1/providers/secrets/{id}": {
"name": "get_provider_secret",
"description": "Get detailed information about a specific provider secret",
"parameters": {
"id": {
"name": "provider_secret_id",
"description": "The UUID of the provider secret"
},
"fields[provider-secrets]": {
"name": "fields",
"description": "The tool will return only the specified fields, if not set all are returned (comma-separated, e.g. \"name,secret_type,provider\")"
}
}
},
"PATCH /api/v1/providers/secrets/{id}": {
"name": "update_provider_secret",
"description": "Update the details of a specific provider secret",
"parameters": {
"id": {
"name": "provider_secret_id",
"description": "The UUID of the provider secret."
},
"name": {
"name": "secret_name",
"description": "Name for the credential secret. This must be between 3 and 100 characters long"
}
}
},
"DELETE /api/v1/providers/secrets/{id}": {
"name": "delete_provider_secret",
"description": "Delete a specific provider secret",
"parameters": {
"id": {
"name": "provider_secret_id",
"description": "The UUID of the provider secret."
}
}
},
"GET /api/v1/findings*": {
"parameters": {
"fields[findings]": {
"name": "fields",
"description": "The tool will return only the specified fields, if not set all are returned (comma-separated, e.g. \"uid,delta,status,status_extended,severity,check_id,scan\")"
},
"filter[check_id]": {
"name": "filter_check_id",
"description": "Filter by exact check ID (e.g. ec2_launch_template_imdsv2_required). To get the list of available checks for a provider, use tool get_checks from Prowler Hub MCP server"
},
"filter[check_id__icontains]": {
"name": "filter_check_id_contains",
"description": "Filter by partial check ID match (e.g. \"iam\" matches all IAM-related checks for all providers)"
},
"filter[check_id__in]": {
"name": "filter_check_id_in",
"description": "Filter by multiple check IDs (comma-separated, e.g. \"ec2_launch_template_imdsv2_required,bedrock_guardrail_prompt_attack_filter_enabled,vpc_endpoint_multi_az_enabled\")"
},
"filter[delta]": {
"name": "filter_delta",
"description": "Filter by finding delta status"
},
"filter[id]": {
"name": "filter_id",
"description": "Filter by exact finding ID (main key in the database, it is a UUIDv7). It is not the same as the finding UID."
},
"filter[id__in]": {
"name": "filter_id_in",
"description": "Filter by multiple finding IDs (comma-separated UUIDs)"
},
"filter[inserted_at]": {
"name": "filter_inserted_at",
"description": "Filter by exact date (format: YYYY-MM-DD)."
},
"filter[inserted_at__date]": {
"name": "filter_inserted_at_date",
"description": "Filter by exact date (format: YYYY-MM-DD). Same as filter_inserted_at parameter."
},
"filter[inserted_at__gte]": {
"name": "filter_inserted_at_gte",
"description": "Filter findings inserted on or after this date (format: YYYY-MM-DD)"
},
"filter[inserted_at__lte]": {
"name": "filter_inserted_at_lte",
"description": "Filter findings inserted on or before this date (format: YYYY-MM-DD)"
},
"filter[muted]": {
"name": "filter_muted",
"description": "Filter by muted status (True for muted, False for non-muted, if not set all both are returned). A muted finding is a finding that has been muted by the user to ignore it."
},
"filter[provider]": {
"name": "filter_provider",
"description": "Filter by exact provider UUID (UUIDv4). This UUID is generated by Prowler and it is not related with the UID of the provider (that is the one that is set by the provider). The format is UUIDv4: \"4d0e2614-6385-4fa7-bf0b-c2e2f75c6877\""
},
"filter[provider__in]": {
"name": "filter_provider_in",
"description": "Filter by multiple provider UUIDs (comma-separated UUIDs, e.g. \"4d0e2614-6385-4fa7-bf0b-c2e2f75c6877,deadbeef-1234-5678-9abc-def012345678,0f1e2d3c-4b5a-6978-8c9d-0e1f2a3b4c5d\"). Useful when searching for multiple providers at once."
},
"filter[provider_alias]": {
"name": "filter_provider_alias",
"description": "Filter by exact provider alias name"
},
"filter[provider_alias__icontains]": {
"name": "filter_provider_alias_contains",
"description": "Filter by partial provider alias match"
},
"filter[provider_alias__in]": {
"name": "filter_provider_alias_in",
"description": "Filter by multiple provider aliases (comma-separated)"
},
"filter[provider_id]": {
"name": "filter_provider_id",
"description": "Filter by exact provider ID (UUID)"
},
"filter[provider_id__in]": {
"name": "filter_provider_id_in",
"description": "Filter by multiple provider IDs (comma-separated UUIDs)"
},
"filter[provider_type]": {
"name": "filter_provider_type",
"description": "Filter by single provider type"
},
"filter[provider_type__in]": {
"name": "filter_provider_type_in",
"description": "Filter by multiple provider types (comma-separated, e.g. \"aws,azure,gcp\"). Allowed values are: aws, azure, gcp, kubernetes, m365, github"
},
"filter[provider_uid]": {
"name": "filter_provider_uid",
"description": "Filter by exact provider UID. This UID is dependent on the provider type: \n\tAWS: AWS account ID\n\tAzure: Azure subscription ID\n\tGCP: GCP project ID\n\tKubernetes: Kubernetes namespace\n\tM365: M365 domain ID\n\tGitHub: GitHub username or organization name"
},
"filter[provider_uid__icontains]": {
"name": "filter_provider_uid_contains",
"description": "Filter by partial provider UID match"
},
"filter[provider_uid__in]": {
"name": "filter_provider_uid_in",
"description": "Filter by multiple provider UIDs (comma-separated UUIDs)"
},
"filter[region]": {
"name": "filter_region",
"description": "Filter by exact region name (e.g. us-east-1, eu-west-1, etc.). To get a list of available regions in a subset of findings, use tool get_findings_metadata from Prowler MCP server"
},
"filter[region__icontains]": {
"name": "filter_region_contains",
"description": "Filter by partial region match (e.g. \"us-\" matches all US regions)"
},
"filter[region__in]": {
"name": "filter_region_in",
"description": "Filter by multiple regions (comma-separated, e.g. \"us-east-1,us-west-2,eu-west-1\")"
},
"filter[resource_name]": {
"name": "filter_resource_name",
"description": "Filter by exact resource name that finding is associated with"
},
"filter[resource_name__icontains]": {
"name": "filter_resource_name_contains",
"description": "Filter by partial resource name match that finding is associated with"
},
"filter[resource_name__in]": {
"name": "filter_resource_name_in",
"description": "Filter by multiple resource names (comma-separated) that finding is associated with"
},
"filter[resource_type]": {
"name": "filter_resource_type",
"description": "Filter by exact resource type that finding is associated with"
},
"filter[resource_type__icontains]": {
"name": "filter_resource_type_contains",
"description": "Filter by partial resource type match that finding is associated with"
},
"filter[resource_type__in]": {
"name": "filter_resource_type_in",
"description": "Filter by multiple resource types (comma-separated) that finding is associated with"
},
"filter[resource_uid]": {
"name": "filter_resource_uid",
"description": "Filter by exact resource UID that finding is associated with"
},
"filter[resource_uid__icontains]": {
"name": "filter_resource_uid_contains",
"description": "Filter by partial resource UID match that finding is associated with"
},
"filter[resource_uid__in]": {
"name": "filter_resource_uid_in",
"description": "Filter by multiple resource UIDss (comma-separated) that finding is associated with"
},
"filter[resources]": {
"name": "filter_resources",
"description": "Filter by multiple resources (comma-separated) that finding is associated with. The accepted vaules are internal Prowler generated resource UUIDs"
},
"filter[scan]": {
"name": "filter_scan",
"description": "Filter by scan UUID"
},
"filter[scan__in]": {
"name": "filter_scan_in",
"description": "Filter by multiple scan UUIDs (comma-separated UUIDs)"
},
"filter[service]": {
"name": "filter_service",
"description": "Filter by exact service name (e.g. s3, rds, ec2, keyvault, etc.). To get the list of available services, use tool list_providers from Prowler Hub MCP server"
},
"filter[service__icontains]": {
"name": "filter_service_contains",
"description": "Filter by partial service name match (e.g. \"storage\" matches all storage-related services)"
},
"filter[service__in]": {
"name": "filter_service_in",
"description": "Filter by multiple service names (comma-separated, e.g. \"s3,ec2,iam\")"
},
"filter[severity]": {
"name": "filter_severity",
"description": "Filter by single severity (critical, high, medium, low, informational)"
},
"filter[severity__in]": {
"name": "filter_severity_in",
"description": "Filter by multiple severities (comma-separated, e.g. \"critical,high\")"
},
"filter[status]": {
"name": "filter_status",
"description": "Filter by single status"
},
"filter[status__in]": {
"name": "filter_status_in",
"description": "Filter by multiple statuses (comma-separated, e.g. \"FAIL,MANUAL\"). Allowed values are: PASS, FAIL, MANUAL"
},
"filter[uid]": {
"name": "filter_uid",
"description": "Filter by exact finding UID assigned by Prowler"
},
"filter[uid__in]": {
"name": "filter_uid_in",
"description": "Filter by multiple finding UIDs (comma-separated UUIDs)"
},
"filter[updated_at]": {
"name": "filter_updated_at",
"description": "Filter by exact update date (format: YYYY-MM-DD)"
},
"filter[updated_at__gte]": {
"name": "filter_updated_at_gte",
"description": "Filter by update date on or after this date (format: YYYY-MM-DD)"
},
"filter[updated_at__lte]": {
"name": "filter_updated_at_lte",
"description": "Filter by update date on or before this date (format: YYYY-MM-DD)"
},
"include": {
"name": "include",
"description": "Include related resources in the response, supported values are: \"resources\" and \"scan\""
},
"page[number]": {
"name": "page_number",
"description": "Page number to retrieve (default: 1)"
},
"page[size]": {
"name": "page_size",
"description": "Number of results per page (default: 100)"
},
"sort": {
"name": "sort",
"description": "Sort the results by the specified fields. You can specify multiple fields separated by commas; the results will be sorted by the first field, then by the second within each group of the first, and so on. Use '-' as a prefix to a field name for descending order (e.g. \"status,-severity\" sorts by status ascending alphabetically and then by severity descending within each status alphabetically)"
}
}
},
"GET /api/v1/findings": {
"name": "list_findings",
"description": "List security findings from Prowler scans with advanced filtering.\n\tAt least one of the variations of the filter[inserted_at] is required. If not provided, defaults to findings from the last day."
},
"GET /api/v1/findings/{id}": {
"name": "get_finding",
"description": "Get detailed information about a specific security finding",
"parameters": {
"id": {
"name": "finding_id",
"description": "The UUID of the finding"
}
}
},
"GET /api/v1/findings/latest": {
"name": "get_latest_findings",
"description": "Retrieve a list of the latest findings from the latest scans for each provider with advanced filtering options"
},
"GET /api/v1/findings/metadata": {
"name": "get_findings_metadata",
"description": "Fetch unique metadata values from a filtered set of findings. This is useful for dynamic filtering",
"parameters": {
"fields[findings-metadata]": {
"name": "metadata_fields",
"description": "Specific metadata fields to return (comma-separated, e.g. 'regions,services,check_ids')"
}
}
},
"GET /api/v1/findings/metadata/latest": {
"name": "get_latest_findings_metadata",
"description": "Fetch unique metadata values from the latest findings across all providers"
},
"* /api/v1/scans*": {
"parameters": {
"id": {
"name": "scan_id",
"description": "The UUID of the scan. The format is UUIDv4: \"4d0e2614-6385-4fa7-bf0b-c2e2f75c6877\""
}
}
},
"GET /api/v1/scans": {
"name": "list_scans",
"description": "List all scans with options for filtering by various criteria.",
"parameters": {
"fields[scans]": {
"name": "fields",
"description": "The tool will return only the specified fields, if not set all are returned (comma-separated, e.g. \"name,state,progress,duration\")"
},
"filter[completed_at]": {
"name": "filter_completed_at",
"description": "Filter by exact completion date (format: YYYY-MM-DD)"
},
"filter[inserted_at]": {
"name": "filter_inserted_at",
"description": "Filter by exact insertion date (format: YYYY-MM-DD)"
},
"filter[name]": {
"name": "filter_name",
"description": "Filter by exact scan name"
},
"filter[name__icontains]": {
"name": "filter_name_contains",
"description": "Filter by partial scan name match"
},
"filter[next_scan_at]": {
"name": "filter_next_scan_at",
"description": "Filter by exact next scan date (format: YYYY-MM-DD)"
},
"filter[next_scan_at__gte]": {
"name": "filter_next_scan_at_gte",
"description": "Filter scans scheduled on or after this date (format: YYYY-MM-DD)"
},
"filter[next_scan_at__lte]": {
"name": "filter_next_scan_at_lte",
"description": "Filter scans scheduled on or before this date (format: YYYY-MM-DD)"
},
"filter[provider]": {
"name": "filter_provider",
"description": "Filter by exact provider UUID (UUIDv4). This UUID is generated by Prowler and it is not related with the UID of the provider (that is the one that is set by the provider). The format is UUIDv4: \"4d0e2614-6385-4fa7-bf0b-c2e2f75c6877\""
},
"filter[provider__in]": {
"name": "filter_provider_in",
"description": "Filter by multiple provider UUIDs (comma-separated UUIDs, e.g. \"4d0e2614-6385-4fa7-bf0b-c2e2f75c6877,deadbeef-1234-5678-9abc-def012345678,0f1e2d3c-4b5a-6978-8c9d-0e1f2a3b4c5d\"). Useful when searching for multiple providers at once."
},
"filter[provider_alias]": {
"name": "filter_provider_alias",
"description": "Filter by exact provider alias name"
},
"filter[provider_alias__icontains]": {
"name": "filter_provider_alias_contains",
"description": "Filter by partial provider alias match"
},
"filter[provider_alias__in]": {
"name": "filter_provider_alias_in",
"description": "Filter by multiple provider aliases (comma-separated)"
},
"filter[provider_type]": {
"name": "filter_provider_type",
"description": "Filter by single provider type (aws, azure, gcp, github, kubernetes, m365)"
},
"filter[provider_type__in]": {
"name": "filter_provider_type_in",
"description": "Filter by multiple provider types (comma-separated, e.g. \"aws,azure,gcp\"). Allowed values are: aws, azure, gcp, kubernetes, m365, github"
},
"filter[provider_uid]": {
"name": "filter_provider_uid",
"description": "Filter by exact provider UID. This UID is dependent on the provider type: \n\tAWS: AWS account ID\n\tAzure: Azure subscription ID\n\tGCP: GCP project ID\n\tKubernetes: Kubernetes namespace\n\tM365: M365 domain ID\n\tGitHub: GitHub username or organization name"
},
"filter[provider_uid__icontains]": {
"name": "filter_provider_uid_contains",
"description": "Filter by partial provider UID match"
},
"filter[provider_uid__in]": {
"name": "filter_provider_uid_in",
"description": "Filter by multiple provider UIDs (comma-separated)"
},
"filter[scheduled_at]": {
"name": "filter_scheduled_at",
"description": "Filter by exact scheduled date (format: YYYY-MM-DD)"
},
"filter[scheduled_at__gte]": {
"name": "filter_scheduled_at_gte",
"description": "Filter scans scheduled on or after this date (format: YYYY-MM-DD)"
},
"filter[scheduled_at__lte]": {
"name": "filter_scheduled_at_lte",
"description": "Filter scans scheduled on or before this date (format: YYYY-MM-DD)"
},
"filter[search]": {
"name": "filter_search",
"description": "Search term across multiple scan attributes including: name (scan name), trigger (Manual/Scheduled), state (Available, Executing, Completed, Failed, etc.), unique_resource_count (number of resources found), progress (scan progress percentage), duration (scan duration), scheduled_at (when scan is scheduled), started_at (when scan started), completed_at (when scan completed), and next_scan_at (next scheduled scan time)"
},
"filter[started_at]": {
"name": "filter_started_at",
"description": "Filter by exact start date (format: YYYY-MM-DD)"
},
"filter[started_at__gte]": {
"name": "filter_started_at_gte",
"description": "Filter scans started on or after this date (format: YYYY-MM-DD)"
},
"filter[started_at__lte]": {
"name": "filter_started_at_lte",
"description": "Filter scans started on or before this date (format: YYYY-MM-DD)"
},
"filter[state]": {
"name": "filter_state",
"description": "Filter by exact scan state"
},
"filter[state__in]": {
"name": "filter_state_in",
"description": "Filter by multiple scan states (comma-separated)"
},
"filter[trigger]": {
"name": "filter_trigger",
"description": "Filter by scan trigger type"
},
"filter[trigger__in]": {
"name": "filter_trigger_in",
"description": "Filter by multiple trigger types (comma-separated)"
},
"include": {
"name": "include",
"description": "Include related resources in the response, supported value is \"provider\""
},
"page[number]": {
"name": "page_number",
"description": "Page number to retrieve (default: 1)"
},
"page[size]": {
"name": "page_size",
"description": "Number of results per page (default: 100)"
},
"sort": {
"name": "sort",
"description": "Sort the results by the specified fields. Use '-' prefix for descending order. (e.g. \"-started_at,name\")"
}
}
},
"POST /api/v1/scans": {
"name": "create_scan",
"description": "Trigger a manual scan for a specific provider",
"parameters": {
"provider_id": {
"name": "provider_id",
"description": "Prowler generated UUID of the provider to scan. The format is UUIDv4: \"4d0e2614-6385-4fa7-bf0b-c2e2f75c6877\""
},
"name": {
"description": "Optional name for the scan"
}
}
},
"GET /api/v1/scans/{id}": {
"name": "get_scan",
"description": "Get detailed information about a specific scan",
"parameters": {
"fields[scans]": {
"name": "fields",
"description": "The tool will return only the specified fields, if not set all are returned (comma-separated, e.g. \"name,state,progress,duration\")"
},
"include": {
"description": "Include related resources in the response, supported value is \"provider\""
}
}
},
"PATCH /api/v1/scans/{id}": {
"name": "update_scan",
"description": "Update the details of a specific scan",
"parameters": {
"name": {
"description": "Name for the scan to be updated"
}
}
},
"GET /api/v1/scans/{id}/compliance/{name}": {
"name": "get_scan_compliance_report",
"description": "Download a specific compliance report (e.g., 'cis_1.4_aws') as a CSV file",
"parameters": {
"name": {
"name": "compliance_name"
},
"fields[scan-reports]": {
"name": "fields",
"description": "The tool will return only the specified fields, if not set all are returned (comma-separated, e.g. \"id,name\")"
}
}
},
"GET /api/v1/scans/{id}/report": {
"name": "get_scan_report",
"description": "Download a ZIP file containing the scan report",
"parameters": {
"fields[scan-reports]": {
"name": "fields",
"description": "Not use this parameter for now"
}
}
},
"POST /api/v1/schedules/daily": {
"name": "schedules_daily_scan",
"parameters": {
"provider_id": {
"name": "provider_id",
"description": "Prowler generated UUID of the provider to scan. The format is UUIDv4: \"4d0e2614-6385-4fa7-bf0b-c2e2f75c6877\""
}
}
}
}
}
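The mapping above is keyed by "METHOD /path" strings, with "*" wildcard entries sharing parameter renames (e.g. `id` → `provider_id`) across every matching endpoint. A minimal sketch of how a generator might resolve a tool-facing parameter name from such a config (the `resolve_parameter_name` helper and the trimmed-down config are illustrative, not the actual generator code):

```python
import fnmatch

# Hypothetical trimmed-down config in the same shape as the file above.
CONFIG = {
    "endpoints": {
        "* /api/v1/providers*": {
            "parameters": {"id": {"name": "provider_id"}},
        },
        "GET /api/v1/providers": {
            "name": "list_providers",
            "parameters": {"filter[alias]": {"name": "filter_alias"}},
        },
    }
}


def resolve_parameter_name(method, path, api_param):
    """Map an API parameter to its tool-facing name.

    Exact "METHOD /path" entries take precedence; "*"-style wildcard
    entries act as shared defaults for every matching endpoint.
    """
    key = f"{method} {path}"
    exact = CONFIG["endpoints"].get(key, {}).get("parameters", {})
    if api_param in exact:
        return exact[api_param].get("name", api_param)
    for pattern, entry in CONFIG["endpoints"].items():
        if "*" in pattern and fnmatch.fnmatchcase(key, pattern):
            params = entry.get("parameters", {})
            if api_param in params:
                return params[api_param].get("name", api_param)
    return api_param  # no override configured: keep the API's own name


print(resolve_parameter_name("GET", "/api/v1/providers", "filter[alias]"))  # → filter_alias
print(resolve_parameter_name("GET", "/api/v1/providers/{id}", "id"))  # → provider_id
```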
@@ -0,0 +1,942 @@
#!/usr/bin/env python3
"""
Generate FastMCP server code from OpenAPI specification.
This script parses an OpenAPI specification file and generates FastMCP tool functions
with proper type hints, parameters, and docstrings.
"""
import json
import os
import re
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Optional
import requests
import yaml
from prowler_mcp_server.lib.logger import logger
class OpenAPIToMCPGenerator:
def __init__(
self,
spec_file: str,
custom_auth_module: Optional[str] = None,
exclude_patterns: Optional[List[str]] = None,
exclude_operations: Optional[List[str]] = None,
exclude_tags: Optional[List[str]] = None,
include_only_tags: Optional[List[str]] = None,
config_file: Optional[str] = None,
):
"""
Initialize the generator with an OpenAPI spec file.
Args:
spec_file: Path to OpenAPI specification file
custom_auth_module: Module path for custom authentication
exclude_patterns: List of regex patterns to exclude endpoints (matches against path)
exclude_operations: List of operation IDs to exclude
exclude_tags: List of tags to exclude
include_only_tags: If specified, only include endpoints with these tags
config_file: Path to JSON configuration file for custom mappings
"""
self.spec_file = spec_file
self.custom_auth_module = custom_auth_module
self.exclude_patterns = exclude_patterns or []
self.exclude_operations = exclude_operations or []
self.exclude_tags = exclude_tags or []
self.include_only_tags = include_only_tags
self.config_file = config_file
self.config = self._load_config() if config_file else {}
self.spec = self._load_spec()
self.generated_tools = []
self.imports = set()
self.type_mapping = {
"string": "str",
"integer": "int",
"number": "float",
"boolean": "bool",
"array": "str",
"object": "Dict[str, Any]",
}
def _load_config(self) -> Dict:
"""Load configuration from JSON file."""
try:
with open(self.config_file, "r") as f:
return json.load(f)
except FileNotFoundError:
logger.warning(f"Config file {self.config_file} not found. Using defaults.")
return {}
except json.JSONDecodeError as e:
logger.warning(f"Error parsing config file: {e}. Using defaults.")
return {}
def _load_spec(self) -> Dict:
"""Load OpenAPI specification from file."""
with open(self.spec_file, "r") as f:
if self.spec_file.endswith(".yaml") or self.spec_file.endswith(".yml"):
return yaml.safe_load(f)
else:
return json.load(f)
def _get_endpoint_config(self, path: str, method: str) -> Dict:
"""Get endpoint configuration from config file with pattern matching and inheritance.
Configuration resolution order (most to least specific):
1. Exact endpoint match (e.g., "GET /api/v1/findings/metadata")
2. Pattern matches, sorted by specificity:
- Patterns without wildcards are more specific
- Longer patterns are more specific
- Example: "GET /api/v1/findings/*" matches all findings endpoints
When multiple configurations match, they are merged with more specific
configurations overriding less specific ones.
"""
if not self.config:
return {}
endpoint_key = f"{method.upper()} {path}"
merged_config = {}
# Get endpoints configuration (now supports both exact and pattern matches)
endpoints = self.config.get("endpoints", {})
# Separate exact matches from patterns
exact_match = None
pattern_matches = []
for config_key, config_value in endpoints.items():
if "*" in config_key or "?" in config_key:
# This is a pattern - convert wildcards to regex
regex_pattern = config_key.replace("*", ".*").replace("?", ".")
if re.match(f"^{regex_pattern}$", endpoint_key):
pattern_matches.append((config_key, config_value))
elif config_key == endpoint_key:
# Exact match
exact_match = (config_key, config_value)
# Also check for patterns in endpoint_patterns for backward compatibility
endpoint_patterns = self.config.get("endpoint_patterns", {})
for pattern, pattern_config in endpoint_patterns.items():
regex_pattern = pattern.replace("*", ".*").replace("?", ".")
if re.match(f"^{regex_pattern}$", endpoint_key):
pattern_matches.append((pattern, pattern_config))
# Sort pattern matches by specificity
# More specific patterns should be applied last to override less specific ones
pattern_matches.sort(
key=lambda x: (
x[0].count("*") + x[0].count("?"), # Fewer wildcards = more specific
-len(
x[0]
), # Longer patterns = more specific (negative for reverse sort)
),
reverse=True,
) # Reverse so least specific comes first
# Apply configurations from least to most specific
# First apply pattern matches (from least to most specific)
for pattern, pattern_config in pattern_matches:
merged_config = self._merge_configs(merged_config, pattern_config)
# Finally apply exact match (most specific)
if exact_match:
merged_config = self._merge_configs(merged_config, exact_match[1])
# Fallback to old endpoint_mappings for backward compatibility
if not merged_config:
endpoint_mappings = self.config.get("endpoint_mappings", {})
if endpoint_key in endpoint_mappings:
merged_config = {"name": endpoint_mappings[endpoint_key]}
return merged_config
def _merge_configs(self, base_config: Dict, override_config: Dict) -> Dict:
"""Merge two configurations, with override_config taking precedence.
Special handling for parameters: merges parameter configurations deeply.
"""
import copy
result = copy.deepcopy(base_config)
for key, value in override_config.items():
if key == "parameters" and key in result:
# Deep merge parameters
if not isinstance(result[key], dict):
result[key] = {}
if isinstance(value, dict):
for param_name, param_config in value.items():
if param_name in result[key] and isinstance(
result[key][param_name], dict
):
# Merge parameter configurations
result[key][param_name] = {
**result[key][param_name],
**param_config,
}
else:
result[key][param_name] = param_config
else:
# For other keys, override completely
result[key] = value
return result
def _sanitize_function_name(self, operation_id: str) -> str:
"""Convert operation ID to valid Python function name."""
# Replace non-alphanumeric characters with underscores
name = re.sub(r"[^a-zA-Z0-9_]", "_", operation_id)
# Ensure it doesn't start with a number
if name and name[0].isdigit():
name = f"op_{name}"
return name.lower()
def _get_python_type(self, schema: Dict) -> str:
"""Convert OpenAPI schema to Python type hint."""
if not schema:
return "Any"
# Handle oneOf/anyOf/allOf schemas - these are typically objects
if "oneOf" in schema or "anyOf" in schema or "allOf" in schema:
# These are complex schemas, typically representing different object variants
return "Dict[str, Any]"
schema_type = schema.get("type", "string")
# Handle enums
if "enum" in schema:
enum_values = schema["enum"]
if all(isinstance(v, str) for v in enum_values):
# Create Literal type for string enums
self.imports.add("from typing import Literal")
enum_str = ", ".join(f'"{v}"' for v in enum_values)
return f"Literal[{enum_str}]"
else:
return self.type_mapping.get(schema_type, "Any")
# Handle arrays
if schema_type == "array":
return "str"
# Handle format specifications
if schema_type == "string":
format_type = schema.get("format", "")
if format_type in ["date", "date-time"]:
return "str" # Keep as string for API calls
elif format_type == "uuid":
return "str"
elif format_type == "email":
return "str"
return self.type_mapping.get(schema_type, "Any")
def _resolve_ref(self, ref: str) -> Dict:
"""Resolve a $ref reference in the OpenAPI spec."""
if not ref.startswith("#/"):
return {}
# Split the reference path
ref_parts = ref[2:].split("/") # Remove '#/' and split
# Navigate through the spec to find the referenced schema
resolved = self.spec
for part in ref_parts:
resolved = resolved.get(part, {})
return resolved
def _extract_parameters(
self, operation: Dict, endpoint_config: Optional[Dict] = None
) -> List[Dict]:
"""Extract and process parameters from an operation."""
parameters = []
for param in operation.get("parameters", []):
# Sanitize parameter name for Python
python_name = (
param.get("name", "")
.replace("[", "_")
.replace("]", "")
.replace(".", "_")
.replace("-", "_")
) # Also replace hyphens
param_info = {
"name": param.get("name", ""),
"python_name": python_name,
"in": param.get("in", "query"),
"required": param.get("required", False),
"description": param.get("description", ""),
"type": self._get_python_type(param.get("schema", {})),
"original_schema": param.get("schema", {}),
}
# Apply custom parameter configuration from endpoint config
if endpoint_config and "parameters" in endpoint_config:
param_config = endpoint_config["parameters"]
if param_info["name"] in param_config:
custom_param = param_config[param_info["name"]]
if "name" in custom_param:
param_info["python_name"] = custom_param["name"]
if "description" in custom_param:
param_info["description"] = custom_param["description"]
parameters.append(param_info)
# Handle request body if present - extract as individual parameters
if "requestBody" in operation:
body = operation["requestBody"]
content = body.get("content", {})
# Check for different content types
schema = None
if "application/vnd.api+json" in content:
schema = content["application/vnd.api+json"].get("schema", {})
elif "application/json" in content:
schema = content["application/json"].get("schema", {})
if schema:
# Resolve $ref if present
if "$ref" in schema:
schema = self._resolve_ref(schema["$ref"])
# Try to extract individual fields from the schema
body_params = self._extract_body_parameters(
schema, body.get("required", False)
)
# Apply custom parameter config to body parameters
if endpoint_config and "parameters" in endpoint_config:
param_config = endpoint_config["parameters"]
for param in body_params:
if param["name"] in param_config:
custom_param = param_config[param["name"]]
if "name" in custom_param:
param["python_name"] = custom_param["name"]
if "description" in custom_param:
param["description"] = custom_param["description"]
parameters.extend(body_params)
return parameters
def _extract_body_parameters(self, schema: Dict, is_required: bool) -> List[Dict]:
"""Extract individual parameters from request body schema."""
parameters = []
# Handle JSON:API format with data.attributes structure
if "properties" in schema:
data = schema["properties"].get("data", {})
if "properties" in data:
# Extract attributes
attributes = data["properties"].get("attributes", {})
if "properties" in attributes:
# Get required fields from attributes
required_attrs = attributes.get("required", [])
for prop_name, prop_schema in attributes["properties"].items():
# Skip read-only fields for POST/PUT/PATCH operations
if prop_schema.get("readOnly", False):
continue
python_name = prop_name.replace("-", "_")
# Check if this field is required
is_field_required = prop_name in required_attrs
param_info = {
"name": prop_name, # Keep original name for API
"python_name": python_name,
"in": "body",
"required": is_field_required,
"description": prop_schema.get(
"description",
prop_schema.get("title", f"{prop_name} parameter"),
),
"type": self._get_python_type(prop_schema),
"original_schema": prop_schema,
"resource_type": (
data["properties"]
.get("type", {})
.get("enum", ["resource"])[0]
if "type" in data["properties"]
else "resource"
),
}
parameters.append(param_info)
# Also check for relationships (like provider_id)
relationships = data["properties"].get("relationships", {})
if "properties" in relationships:
required_rels = relationships.get("required", [])
for rel_name, rel_schema in relationships["properties"].items():
# Extract ID from relationship
python_name = f"{rel_name}_id"
is_rel_required = rel_name in required_rels
param_info = {
"name": f"{rel_name}_id",
"python_name": python_name,
"in": "body",
"required": is_rel_required,
"description": f"ID of the related {rel_name}",
"type": "str",
"original_schema": rel_schema,
}
parameters.append(param_info)
# If no structured params found, fall back to generic body parameter
if not parameters and schema:
parameters.append(
{
"name": "body",
"python_name": "body",
"in": "body",
"required": is_required,
"description": "Request body data",
"type": "Dict[str, Any]",
"original_schema": schema,
}
)
return parameters
def _generate_docstring(
self,
operation: Dict,
parameters: List[Dict],
path: str,
method: str,
endpoint_config: Optional[Dict] = None,
) -> str:
"""Generate a comprehensive docstring for the tool function."""
lines = []
# Main description - use custom or default
endpoint_config = endpoint_config or {}
# Use custom description if provided, otherwise fall back to OpenAPI
if "description" in endpoint_config:
lines.append(f' """{endpoint_config["description"]}')
else:
summary = operation.get("summary", "")
description = operation.get("description", "")
if summary:
lines.append(f' """{summary}')
else:
lines.append(f' """Execute {method.upper()} {path}')
if "description" not in endpoint_config:
# Only add OpenAPI description if no custom description was provided
description = operation.get("description", "")
if description and description != summary:
lines.append("")
# Clean up description - remove extra whitespace
clean_desc = " ".join(description.split())
lines.append(f" {clean_desc}")
# Add endpoint info
lines.append("")
lines.append(f" Endpoint: {method.upper()} {path}")
# Parameters section
if parameters:
lines.append("")
lines.append(" Args:")
for param in parameters:
# Use custom description if available
param_desc = param["description"] or "No description provided"
# Handle multi-line descriptions properly
required_text = "(required)" if param["required"] else "(optional)"
if "\n" in param_desc:
# Split on actual newlines (not escaped)
desc_lines = param_desc.split("\n")
first_line = desc_lines[0].strip()
lines.append(
f" {param['python_name']} {required_text}: {first_line}"
)
# Add subsequent lines with proper indentation (12 spaces for continuation)
for desc_line in desc_lines[1:]:
desc_line = desc_line.strip()
if desc_line:
lines.append(f" {desc_line}")
else:
# Clean up parameter description for single line
param_desc = " ".join(param_desc.split())
lines.append(
f" {param['python_name']} {required_text}: {param_desc}"
)
# Add enum values if present
if "enum" in param.get("original_schema", {}):
enum_values = param["original_schema"]["enum"]
lines.append(
f" Allowed values: {', '.join(str(v) for v in enum_values)}"
)
# Returns section
lines.append("")
lines.append(" Returns:")
lines.append(" Dict containing the API response")
lines.append(' """')
return "\n".join(lines)
def _generate_function_signature(
self, func_name: str, parameters: List[Dict]
) -> str:
"""Generate the function signature with proper type hints."""
# Sort parameters: required first, then optional
sorted_params = sorted(
parameters, key=lambda x: (not x["required"], x["python_name"])
)
param_strings = []
for param in sorted_params:
if param["required"]:
param_strings.append(f" {param['python_name']}: {param['type']}")
else:
param_strings.append(
f" {param['python_name']}: Optional[{param['type']}] = None"
)
if param_strings:
params_str = ",\n".join(param_strings)
return f"async def {func_name}(\n{params_str}\n) -> Dict[str, Any]:"
else:
return f"async def {func_name}() -> Dict[str, Any]:"
def _generate_function_body(
self, path: str, method: str, parameters: List[Dict], operation_id: str
) -> str:
"""Generate the function body for making API calls."""
lines = []
# Add try block
lines.append(" try:")
# Get authentication token if custom auth module is provided
if self.custom_auth_module:
lines.append(" token = await auth_manager.get_valid_token()")
lines.append("")
# Build parameters
query_params = [p for p in parameters if p["in"] == "query"]
path_params = [p for p in parameters if p["in"] == "path"]
body_params = [p for p in parameters if p["in"] == "body"]
# Build query parameters
if query_params:
lines.append(" params = {}")
for param in query_params:
if param["required"]:
lines.append(
f" params['{param['name']}'] = {param['python_name']}"
)
else:
lines.append(f" if {param['python_name']} is not None:")
lines.append(
f" params['{param['name']}'] = {param['python_name']}"
)
lines.append("")
# Build path with path parameters
final_path = path
for param in path_params:
lines.append(
f" path = '{path}'.replace('{{{param['name']}}}', str({param['python_name']}))"
)
final_path = "path"
# Build request body if there are body parameters
if body_params:
# Check if we have individual params or a single body param
if len(body_params) == 1 and body_params[0]["python_name"] == "body":
# Single body parameter - use it directly
lines.append(" request_body = body")
else:
# Get resource type from first body param (they should all have the same)
resource_type = (
body_params[0].get("resource_type", "resource")
if body_params
else "resource"
)
# Build JSON:API structure from individual parameters
lines.append(" # Build request body")
lines.append(" request_body = {")
lines.append(' "data": {')
lines.append(f' "type": "{resource_type}"')
# Separate attributes from relationships
# Note: Check if param was originally from attributes section, not just by name
attribute_params = []
relationship_params = []
for p in body_params:
# If this param came from the attributes section (has resource_type), it's an attribute
# even if its name ends with _id
if "resource_type" in p:
attribute_params.append(p)
elif p["python_name"].endswith("_id") and "resource_type" not in p:
relationship_params.append(p)
else:
attribute_params.append(p)
if attribute_params:
lines.append(",")
lines.append(' "attributes": {}')
lines.append(" }")
lines.append(" }")
if attribute_params:
lines.append("")
lines.append(" # Add attributes")
for param in attribute_params:
if param["required"]:
lines.append(
f' request_body["data"]["attributes"]["{param["name"]}"] = {param["python_name"]}'
)
else:
lines.append(
f" if {param['python_name']} is not None:"
)
lines.append(
f' request_body["data"]["attributes"]["{param["name"]}"] = {param["python_name"]}'
)
if relationship_params:
lines.append("")
lines.append(" # Add relationships")
lines.append(' request_body["data"]["relationships"] = {}')
for param in relationship_params:
rel_name = param["python_name"].replace("_id", "")
if param["required"]:
lines.append(
f' request_body["data"]["relationships"]["{rel_name}"] = {{'
)
lines.append(' "data": {')
lines.append(f' "type": "{rel_name}s",')
lines.append(
f' "id": {param["python_name"]}'
)
lines.append(" }")
lines.append(" }")
else:
lines.append(
f" if {param['python_name']} is not None:"
)
lines.append(
f' request_body["data"]["relationships"]["{rel_name}"] = {{'
)
lines.append(' "data": {')
lines.append(f' "type": "{rel_name}s",')
lines.append(
f' "id": {param["python_name"]}'
)
lines.append(" }")
lines.append(" }")
lines.append("")
# Prepare HTTP client call
lines.append(" async with httpx.AsyncClient() as client:")
# Build the request
request_params = [
(
f'f"{{auth_manager.base_url}}{{{final_path}}}"'
if final_path == "path"
else f'f"{{auth_manager.base_url}}{path}"'
)
]
if self.custom_auth_module:
request_params.append("headers=auth_manager.get_headers(token)")
if query_params:
request_params.append("params=params")
if body_params:
request_params.append("json=request_body")
request_params.append("timeout=30.0")
params_str = ",\n ".join(request_params)
lines.append(f" response = await client.{method}(")
lines.append(f" {params_str}")
lines.append(" )")
lines.append(" response.raise_for_status()")
lines.append("")
# Parse response
lines.append(" data = response.json()")
lines.append("")
lines.append(" return {")
lines.append(' "success": True,')
lines.append(' "data": data.get("data", data),')
lines.append(' "meta": data.get("meta", {})')
lines.append(" }")
lines.append("")
# Exception handling
lines.append(" except Exception as e:")
lines.append(" return {")
lines.append(' "success": False,')
lines.append(
f' "error": f"Failed to execute {operation_id}: {{str(e)}}"'
)
lines.append(" }")
return "\n".join(lines)
def _should_exclude_endpoint(self, path: str, operation: Dict) -> bool:
"""
Determine if an endpoint should be excluded from generation.
Args:
path: The API endpoint path
operation: The operation dictionary from OpenAPI spec
Returns:
True if endpoint should be excluded, False otherwise
"""
# Check if operation is marked as deprecated
if operation.get("deprecated", False):
return True
# Check operation ID exclusion
operation_id = operation.get("operationId", "")
if operation_id in self.exclude_operations:
return True
# Check path pattern exclusion
for pattern in self.exclude_patterns:
if re.search(pattern, path):
return True
# Check tags
tags = operation.get("tags", [])
# If include_only_tags is specified, exclude if no matching tag
if self.include_only_tags:
if not any(tag in self.include_only_tags for tag in tags):
return True
# Check excluded tags
if any(tag in self.exclude_tags for tag in tags):
logger.debug(f"Excluding endpoint {path} due to tag {tags}")
return True
return False
def generate_tools(self) -> str:
"""Generate all FastMCP tools from the OpenAPI spec."""
output_lines = []
# Generate header
output_lines.append('"""')
output_lines.append("Auto-generated FastMCP server from OpenAPI specification")
output_lines.append(f"Generated on: {datetime.now().isoformat()}")
output_lines.append(
f"Source: {self.spec_file} (version: {self.spec.get('info', {}).get('version', 'unknown')})"
)
output_lines.append('"""')
output_lines.append("")
# Add imports
self.imports.add("from typing import Dict, Any, Optional")
self.imports.add("import httpx")
self.imports.add("from fastmcp import FastMCP")
if self.custom_auth_module:
self.imports.add(f"from {self.custom_auth_module} import ProwlerAppAuth")
# Process all paths and operations
paths = self.spec.get("paths", {})
tools_by_tag = {} # Group tools by tag for better organization
excluded_count = 0
for path, path_item in paths.items():
for method in ["get", "post", "put", "patch", "delete"]:
if method in path_item:
operation = path_item[method]
# Check if this endpoint should be excluded
if self._should_exclude_endpoint(path, operation):
excluded_count += 1
continue
operation_id = operation.get("operationId", f"{method}_{path}")
tags = operation.get("tags", ["default"])
# Get endpoint configuration
endpoint_config = self._get_endpoint_config(path, method)
# Use custom function name if provided
if "name" in endpoint_config:
func_name = endpoint_config["name"]
else:
func_name = self._sanitize_function_name(operation_id)
parameters = self._extract_parameters(operation, endpoint_config)
tool_code = []
# Add @app_mcp_server.tool() decorator
tool_code.append("@app_mcp_server.tool()")
# Generate function signature
tool_code.append(
self._generate_function_signature(func_name, parameters)
)
# Generate docstring with custom description if provided
tool_code.append(
self._generate_docstring(
operation, parameters, path, method, endpoint_config
)
)
# Generate function body
tool_code.append(
self._generate_function_body(
path, method, parameters, operation_id
)
)
# Group by tag
for tag in tags:
if tag not in tools_by_tag:
tools_by_tag[tag] = []
tools_by_tag[tag].append("\n".join(tool_code))
# Write imports (consolidate typing imports)
typing_imports = set()
other_imports = []
for imp in sorted(self.imports):
if imp.startswith("from typing import"):
# Extract the imported items
items = imp.replace("from typing import", "").strip()
typing_imports.update([item.strip() for item in items.split(",")])
else:
other_imports.append(imp)
# Add consolidated typing import if needed
if typing_imports:
output_lines.append(
f"from typing import {', '.join(sorted(typing_imports))}"
)
# Add other imports
for imp in other_imports:
output_lines.append(imp)
output_lines.append("")
output_lines.append("# Initialize MCP server")
output_lines.append('app_mcp_server = FastMCP("prowler-app")')
output_lines.append("")
if self.custom_auth_module:
output_lines.append("# Initialize authentication manager")
output_lines.append("auth_manager = ProwlerAppAuth()")
output_lines.append("")
# Write tools grouped by tag
for tag, tools in tools_by_tag.items():
output_lines.append("")
output_lines.append("# " + "=" * 76)
output_lines.append(f"# {tag.upper()} ENDPOINTS")
output_lines.append("# " + "=" * 76)
output_lines.append("")
for tool in tools:
output_lines.append("")
output_lines.append(tool)
return "\n".join(output_lines)
def save_to_file(self, output_file: str):
"""Save the generated code to a file."""
generated_code = self.generate_tools()
Path(output_file).write_text(generated_code)
# print(f"Generated FastMCP server saved to: {output_file}")
# # Report statistics
# paths = self.spec.get("paths", {})
# total_endpoints = sum(
# len(
# [m for m in ["get", "post", "put", "patch", "delete"] if m in path_item]
# )
# for path_item in paths.values()
# )
# # Count excluded endpoints by reason
# excluded_count = 0
# deprecated_count = 0
# for path, path_item in paths.items():
# for method in ["get", "post", "put", "patch", "delete"]:
# if method in path_item:
# operation = path_item[method]
# if operation.get("deprecated", False):
# deprecated_count += 1
# if self._should_exclude_endpoint(path, operation):
# excluded_count += 1
# generated_count = total_endpoints - excluded_count
# print(f"Total endpoints in spec: {total_endpoints}")
# print(f"Endpoints excluded: {excluded_count}")
# if deprecated_count > 0:
# print(f" - Deprecated: {deprecated_count}")
# print(f"Endpoints generated: {generated_count}")
# Show exclusion rules if any
# if self.exclude_patterns:
#     print(f"Excluded patterns: {self.exclude_patterns}")
# if self.exclude_operations:
#     print(f"Excluded operations: {self.exclude_operations}")
# if self.exclude_tags:
#     print(f"Excluded tags: {self.exclude_tags}")
# if self.include_only_tags:
#     print(f"Including only tags: {self.include_only_tags}")
def generate_server_file():
"""Download the Prowler API OpenAPI schema and generate the MCP server module."""
# Get the spec file from the API directly (https://api.prowler.com/api/v1/schema)
api_base_url = os.getenv("PROWLER_API_BASE_URL", "https://api.prowler.com")
spec_file = f"{api_base_url}/api/v1/schema"
# Download the spec yaml file
response = requests.get(spec_file)
response.raise_for_status()
spec_data = response.text
# Save the spec data to a file
with open(str(Path(__file__).parent / "schema.yaml"), "w") as f:
f.write(spec_data)
# Configure the generator
generator = OpenAPIToMCPGenerator(
spec_file=str(Path(__file__).parent / "schema.yaml"),
custom_auth_module="prowler_mcp_server.prowler_app.utils.auth",
include_only_tags=[
"Provider",
"Scan",
"Schedule",
"Finding",
"Processor",
],
config_file=str(
Path(__file__).parent / "mcp_config.json"
), # Use custom naming config
)
# Generate and save the MCP server
generator.save_to_file(str(Path(__file__).parent.parent / "server.py"))
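For reference, the JSON:API request bodies that the generated tools assemble (attributes plus a relationship keyed by the related resource's ID) have roughly this shape. This is a hedged standalone sketch of a create-scan payload; the helper name and the `"scans"`/`"providers"` type strings are illustrative:

```python
from typing import Optional


def build_scan_body(provider_id: str, name: Optional[str] = None) -> dict:
    """Sketch of the JSON:API body a generated create_scan tool would send."""
    body: dict = {"data": {"type": "scans", "attributes": {}}}
    # Optional attributes are only added when a value was supplied
    if name is not None:
        body["data"]["attributes"]["name"] = name
    # Relationships carry the related resource type and its UUID
    body["data"]["relationships"] = {
        "provider": {"data": {"type": "providers", "id": provider_id}}
    }
    return body
```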
@@ -0,0 +1,3 @@
"""Prowler Hub module for MCP server."""
__all__ = ["prowler_hub_mcp"]
@@ -0,0 +1,486 @@
"""
Prowler Hub MCP module
Provides access to Prowler Hub API for security checks and compliance frameworks.
"""
from typing import Any, Optional
import httpx
from fastmcp import FastMCP
from prowler_mcp_server import __version__
# Initialize FastMCP for Prowler Hub
hub_mcp_server = FastMCP("prowler-hub")
# API base URL
BASE_URL = "https://hub.prowler.com/api"
# HTTP client configuration
prowler_hub_client = httpx.Client(
base_url=BASE_URL,
timeout=30.0,
headers={
"Accept": "application/json",
"User-Agent": f"prowler-mcp-server/{__version__}",
},
)
# GitHub raw content base URL for Prowler checks
GITHUB_RAW_BASE = (
"https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/master/"
"prowler/providers"
)
# Separate HTTP client for GitHub raw content
github_raw_client = httpx.Client(
timeout=30.0,
headers={
"Accept": "*/*",
"User-Agent": f"prowler-mcp-server/{__version__}",
},
)
def github_check_path(provider_id: str, check_id: str, suffix: str) -> str:
"""Build the GitHub raw URL for a given check artifact suffix using provider
and check_id.
Suffix examples: ".metadata.json", ".py", "_fixer.py"
"""
# str.split always returns at least one element, so no IndexError can occur
service_id = check_id.split("_", 1)[0]
return f"{GITHUB_RAW_BASE}/{provider_id}/services/{service_id}/{check_id}/{check_id}{suffix}"
@hub_mcp_server.tool()
async def get_check_filters() -> dict[str, Any]:
"""
Get the available filter values for the `get_checks` tool. It is recommended to call this tool before `get_checks` to discover valid filter values.
Returns:
Available filter options including providers, types, services, severities,
categories, and compliance frameworks with their respective counts
"""
try:
response = prowler_hub_client.get("/check/filters")
response.raise_for_status()
filters = response.json()
return {"filters": filters}
except httpx.HTTPStatusError as e:
return {
"error": f"HTTP error {e.response.status_code}: {e.response.text}",
}
except Exception as e:
return {"error": str(e)}
# Security Check Tools
@hub_mcp_server.tool()
async def get_checks(
providers: Optional[str] = None,
types: Optional[str] = None,
services: Optional[str] = None,
severities: Optional[str] = None,
categories: Optional[str] = None,
compliances: Optional[str] = None,
ids: Optional[str] = None,
fields: Optional[str] = "id,service,severity,title,description,risk",
) -> dict[str, Any]:
"""
List Prowler security checks. The list can be filtered by the parameters defined for this tool.
It is recommended to use the `get_check_filters` tool first to discover the available filter values.
An unfiltered request returns more than 1,000 checks, so using filters is strongly recommended.
Args:
providers: Filter by Prowler provider IDs. Example: "aws,azure". Use the tool `list_providers` to get the available providers IDs.
types: Filter by check types.
services: Filter by provider services IDs. Example: "s3,keyvault". Use the tool `list_providers` to get the available services IDs in a provider.
severities: Filter by severity levels. Example: "medium,high". Available values are "low", "medium", "high", "critical".
categories: Filter by categories. Example: "cluster-security,encryption".
compliances: Filter by compliance framework IDs. Example: "cis_4.0_aws,ens_rd2022_azure".
ids: Filter by specific check IDs. Example: "s3_bucket_level_public_access_block".
fields: Specify which fields from the check metadata to return (id is always included). Example: "id,title,description,risk".
Available values are "id", "title", "description", "provider", "type", "service", "subservice", "severity", "risk", "reference", "remediation", "services_required", "aws_arn_template", "notes", "categories", "default_value", "resource_type", "related_url", "depends_on", "related_to", "fixer".
The default is "id,service,severity,title,description,risk".
If null, all fields will be returned.
Returns:
List of security checks matching the filters. The structure is as follows:
{
"count": N,
"checks": [
{"id": "check_id_1", "title": "check_title_1", "description": "check_description_1", ...},
{"id": "check_id_2", "title": "check_title_2", "description": "check_description_2", ...},
{"id": "check_id_3", "title": "check_title_3", "description": "check_description_3", ...},
...
]
}
"""
params: dict[str, str] = {}
if providers:
params["providers"] = providers
if types:
params["types"] = types
if services:
params["services"] = services
if severities:
params["severities"] = severities
if categories:
params["categories"] = categories
if compliances:
params["compliances"] = compliances
if ids:
params["ids"] = ids
if fields:
params["fields"] = fields
try:
response = prowler_hub_client.get("/check", params=params)
response.raise_for_status()
checks = response.json()
checks_dict = {}
# Guard against fields being None (all fields requested)
requested_fields = fields.split(",") if fields else None
for check in checks:
check_data = {}
# Always include the id field as it's mandatory for the response structure
if "id" in check:
check_data["id"] = check["id"]
# Include the other requested fields (all fields when none were specified)
for field in requested_fields if requested_fields is not None else check.keys():
if field != "id" and field in check:  # Skip id since it's already added
check_data[field] = check[field]
checks_dict[check["id"]] = check_data
return {"count": len(checks), "checks": checks_dict}
except httpx.HTTPStatusError as e:
return {
"error": f"HTTP error {e.response.status_code}: {e.response.text}",
}
except Exception as e:
return {"error": str(e)}
@hub_mcp_server.tool()
async def get_check_raw_metadata(
provider_id: str,
check_id: str,
) -> dict[str, Any]:
"""
    Fetch the raw check metadata JSON. This is a low-level version of the tool `get_checks`;
    it is recommended to use `get_checks` filtering by the `ids` parameter instead of this tool.
Args:
provider_id: Prowler provider ID (e.g., "aws", "azure").
check_id: Prowler check ID (folder and base filename).
Returns:
Raw metadata JSON as stored in Prowler.
"""
if provider_id and check_id:
url = github_check_path(provider_id, check_id, ".metadata.json")
try:
resp = github_raw_client.get(url)
resp.raise_for_status()
return resp.json()
except httpx.HTTPStatusError as e:
if e.response.status_code == 404:
return {
"error": f"Check {check_id} not found in Prowler",
}
else:
return {
"error": f"HTTP error {e.response.status_code}: {e.response.text}",
}
except Exception as e:
return {
"error": f"Error fetching check {check_id} from Prowler: {str(e)}",
}
else:
return {
"error": "Provider ID and check ID are required",
}
@hub_mcp_server.tool()
async def get_check_code(
provider_id: str,
check_id: str,
) -> dict[str, Any]:
"""
Fetch the check implementation Python code from Prowler.
Args:
provider_id: Prowler provider ID (e.g., "aws", "azure").
check_id: Prowler check ID (e.g., "opensearch_service_domains_not_publicly_accessible").
Returns:
Dict with the code content as text.
"""
if provider_id and check_id:
url = github_check_path(provider_id, check_id, ".py")
try:
resp = github_raw_client.get(url)
resp.raise_for_status()
return {
"content": resp.text,
}
except httpx.HTTPStatusError as e:
if e.response.status_code == 404:
return {
"error": f"Check {check_id} not found in Prowler",
}
else:
return {
"error": f"HTTP error {e.response.status_code}: {e.response.text}",
}
except Exception as e:
return {
"error": str(e),
}
else:
return {
"error": "Provider ID and check ID are required",
}
@hub_mcp_server.tool()
async def get_check_fixer(
provider_id: str,
check_id: str,
) -> dict[str, Any]:
"""
Fetch the check fixer Python code from Prowler, if it exists.
Args:
provider_id: Prowler provider ID (e.g., "aws", "azure").
check_id: Prowler check ID (e.g., "opensearch_service_domains_not_publicly_accessible").
Returns:
        Dict with the fixer content as text if it exists, or an error otherwise.
"""
if provider_id and check_id:
url = github_check_path(provider_id, check_id, "_fixer.py")
try:
            resp = github_raw_client.get(url)
            if resp.status_code == 404:
                return {
                    "error": f"Fixer not found for check {check_id}",
                }
            resp.raise_for_status()
            return {
                "content": resp.text,
            }
        except httpx.HTTPStatusError as e:
            return {
                "error": f"HTTP error {e.response.status_code}: {e.response.text}",
            }
except Exception as e:
return {
"error": str(e),
}
else:
return {
"error": "Provider ID and check ID are required",
}
@hub_mcp_server.tool()
async def search_checks(term: str) -> dict[str, Any]:
"""
    Search for a term across all text properties of check metadata.
Args:
term: Search term to find in check titles, descriptions, and other text fields
Returns:
List of checks matching the search term
"""
try:
response = prowler_hub_client.get("/check/search", params={"term": term})
response.raise_for_status()
checks = response.json()
return {
"count": len(checks),
"checks": checks,
}
except httpx.HTTPStatusError as e:
return {
"error": f"HTTP error {e.response.status_code}: {e.response.text}",
}
except Exception as e:
return {"error": str(e)}
# Compliance Framework Tools
@hub_mcp_server.tool()
async def get_compliance_frameworks(
provider: Optional[str] = None,
fields: Optional[
str
] = "id,framework,provider,description,total_checks,total_requirements",
) -> dict[str, Any]:
"""
    List compliance frameworks, optionally filtered by the parameters defined below.
Args:
provider: Filter by one Prowler provider ID. Example: "aws". Use the tool `list_providers` to get the available providers IDs.
fields: Specify which fields to return (id is always included). Example: "id,provider,description,version".
        It is recommended to keep the default fields because the full response can be very large.
Available values are "id", "framework", "provider", "description", "total_checks", "total_requirements", "created_at", "updated_at".
The default parameters are "id,framework,provider,description,total_checks,total_requirements".
If null, all fields will be returned.
Returns:
List of compliance frameworks. The structure is as follows:
{
"count": N,
"frameworks": {
"framework_id": {
"id": "framework_id",
"provider": "provider_id",
"description": "framework_description",
"version": "framework_version"
}
}
}
"""
params = {}
if provider:
params["provider"] = provider
if fields:
params["fields"] = fields
try:
response = prowler_hub_client.get("/compliance", params=params)
response.raise_for_status()
frameworks = response.json()
frameworks_dict = {}
        for framework in frameworks:
            framework_data = {}
            # Always include the id field as it's mandatory for the response structure
            if "id" in framework:
                framework_data["id"] = framework["id"]
            # Include the requested fields; if fields is None, return every field
            requested = fields.split(",") if fields else list(framework.keys())
            for field in requested:
                field = field.strip()
                if (
                    field != "id" and field in framework
                ):  # Skip id since it's already added
                    framework_data[field] = framework[field]
            frameworks_dict[framework["id"]] = framework_data
return {"count": len(frameworks), "frameworks": frameworks_dict}
except httpx.HTTPStatusError as e:
return {
"error": f"HTTP error {e.response.status_code}: {e.response.text}",
}
except Exception as e:
return {"error": str(e)}
@hub_mcp_server.tool()
async def search_compliance_frameworks(term: str) -> dict[str, Any]:
"""
Search compliance frameworks by term.
Args:
term: Search term to find in framework names and descriptions
Returns:
List of compliance frameworks matching the search term
"""
try:
response = prowler_hub_client.get("/compliance/search", params={"term": term})
response.raise_for_status()
frameworks = response.json()
return {
"count": len(frameworks),
"search_term": term,
"frameworks": frameworks,
}
except httpx.HTTPStatusError as e:
return {
"error": f"HTTP error {e.response.status_code}: {e.response.text}",
}
except Exception as e:
return {"error": str(e)}
# Provider Tools
@hub_mcp_server.tool()
async def list_providers() -> dict[str, Any]:
"""
Get all available Prowler providers and their associated services.
Returns:
List of Prowler providers with their associated services. The structure is as follows:
{
"count": N,
"providers": {
"provider_id": {
"name": "provider_name",
"services": ["service_id_1", "service_id_2", "service_id_3", ...]
}
}
}
"""
try:
response = prowler_hub_client.get("/providers")
response.raise_for_status()
providers = response.json()
providers_dict = {}
for provider in providers:
providers_dict[provider["id"]] = {
"name": provider.get("name", ""),
"services": provider.get("services", []),
}
return {"count": len(providers), "providers": providers_dict}
except httpx.HTTPStatusError as e:
return {
"error": f"HTTP error {e.response.status_code}: {e.response.text}",
}
except Exception as e:
return {"error": str(e)}
# Analytics Tools
@hub_mcp_server.tool()
async def get_artifacts_count() -> dict[str, Any]:
"""
Get total count of security artifacts (checks + compliance frameworks).
Returns:
Total number of artifacts in the Prowler Hub.
"""
try:
response = prowler_hub_client.get("/n_artifacts")
response.raise_for_status()
data = response.json()
return {
"total_artifacts": data.get("n", 0),
"details": "Total count includes both security checks and compliance frameworks",
}
except httpx.HTTPStatusError as e:
return {
"error": f"HTTP error {e.response.status_code}: {e.response.text}",
}
except Exception as e:
return {"error": str(e)}
+41
@@ -0,0 +1,41 @@
import os
from fastmcp import FastMCP
from prowler_mcp_server.lib.logger import logger
# Initialize main Prowler MCP server
prowler_mcp_server = FastMCP("prowler-mcp-server")
async def setup_main_server():
"""Set up the main Prowler MCP server with all available integrations."""
# Import Prowler Hub tools with prowler_hub_ prefix
try:
logger.info("Importing Prowler Hub server...")
from prowler_mcp_server.prowler_hub.server import hub_mcp_server
await prowler_mcp_server.import_server(hub_mcp_server, prefix="prowler_hub")
logger.info("Successfully imported Prowler Hub server")
except Exception as e:
logger.error(f"Failed to import Prowler Hub server: {e}")
try:
logger.info("Importing Prowler App server...")
if not os.path.exists(
os.path.join(os.path.dirname(__file__), "prowler_app", "server.py")
):
from prowler_mcp_server.prowler_app.utils.server_generator import (
generate_server_file,
)
logger.info("Prowler App server not found, generating...")
generate_server_file()
from prowler_mcp_server.prowler_app.server import app_mcp_server
await prowler_mcp_server.import_server(app_mcp_server, prefix="prowler_app")
logger.info("Successfully imported Prowler App server")
except Exception as e:
logger.error(f"Failed to import Prowler App server: {e}")
+21
@@ -0,0 +1,21 @@
[build-system]
build-backend = "setuptools.build_meta"
requires = ["setuptools>=61.0", "wheel"]
[project]
dependencies = [
"fastmcp>=2.11.3",
"httpx>=0.27.0"
]
description = "MCP server for Prowler ecosystem"
name = "prowler-mcp"
readme = "README.md"
requires-python = ">=3.12"
version = "0.1.0"
[project.scripts]
generate-prowler-app-mcp-server = "prowler_mcp_server.prowler_app.utils.server_generator:generate_server_file"
prowler-mcp = "prowler_mcp_server.main:main"
[tool.uv]
package = true
+1052
File diff suppressed because it is too large
+4 -1
@@ -70,6 +70,7 @@ nav:
- Integrations:
- Amazon S3: tutorials/prowler-app-s3-integration.md
- AWS Security Hub: tutorials/prowler-app-security-hub-integration.md
- Jira: tutorials/prowler-app-jira-integration.md
- Lighthouse AI: tutorials/prowler-app-lighthouse.md
- Tutorials:
- SSO with Entra: tutorials/prowler-app-sso-entra.md
@@ -99,7 +100,7 @@ nav:
- AWS:
- Getting Started: tutorials/aws/getting-started-aws.md
- Authentication: tutorials/aws/authentication.md
- Assume Role: tutorials/aws/role-assumption.md
- Assume Role (CLI): tutorials/aws/role-assumption.md
- AWS Organizations: tutorials/aws/organizations.md
- AWS Regions and Partitions: tutorials/aws/regions-and-partitions.md
- Tag-based Scan: tutorials/aws/tag-based-scan.md
@@ -161,6 +162,8 @@ nav:
- Integration Tests: developer-guide/integration-testing.md
- Debugging: developer-guide/debugging.md
- Configurable Checks: developer-guide/configurable-checks.md
- Renaming Checks: developer-guide/renaming-checks.md
- Check Metadata Writing Guidelines: developer-guide/check-metadata-guidelines.md
- Security: security.md
- Contact Us: contact.md
- Troubleshooting: troubleshooting.md
Generated
+16 -16
@@ -2530,14 +2530,14 @@ files = [
[[package]]
name = "markdown"
version = "3.8.2"
version = "3.9"
description = "Python implementation of John Gruber's Markdown."
optional = false
python-versions = ">=3.9"
groups = ["docs"]
groups = ["main", "docs"]
files = [
{file = "markdown-3.8.2-py3-none-any.whl", hash = "sha256:5c83764dbd4e00bdd94d85a19b8d55ccca20fe35b2e678a1422b380324dd5f24"},
{file = "markdown-3.8.2.tar.gz", hash = "sha256:247b9a70dd12e27f67431ce62523e675b866d254f900c4fe75ce3dda62237c45"},
{file = "markdown-3.9-py3-none-any.whl", hash = "sha256:9f4d91ed810864ea88a6f32c07ba8bee1346c0cc1f6b1f9f6c822f2a9667d280"},
{file = "markdown-3.9.tar.gz", hash = "sha256:d2900fe1782bd33bdbbd56859defef70c2e78fc46668f8eb9df3128138f2cb6a"},
]
[package.dependencies]
@@ -2942,28 +2942,28 @@ test = ["pytest", "pytest-cov"]
[[package]]
name = "moto"
version = "5.0.28"
version = "5.1.11"
description = "A library that allows you to easily mock out tests based on AWS infrastructure"
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "moto-5.0.28-py3-none-any.whl", hash = "sha256:2dfbea1afe3b593e13192059a1a7fc4b3cf7fdf92e432070c22346efa45aa0f0"},
{file = "moto-5.0.28.tar.gz", hash = "sha256:4d3437693411ec943c13c77de5b0b520c4b0a9ac850fead4ba2a54709e086e8b"},
{file = "moto-5.1.11-py3-none-any.whl", hash = "sha256:d09429ed5f67f8568637700cd525997d6abe7f91439a6f900b4f98a9fe4ecac9"},
{file = "moto-5.1.11.tar.gz", hash = "sha256:1330b6d9b91088e971469dfb67f297595541914b364e0b49047bb82622975ec7"},
]
[package.dependencies]
antlr4-python3-runtime = {version = "*", optional = true, markers = "extra == \"all\""}
aws-xray-sdk = {version = ">=0.93,<0.96 || >0.96", optional = true, markers = "extra == \"all\""}
boto3 = ">=1.9.201"
botocore = ">=1.14.0,<1.35.45 || >1.35.45,<1.35.46 || >1.35.46"
botocore = ">=1.20.88,<1.35.45 || >1.35.45,<1.35.46 || >1.35.46"
cfn-lint = {version = ">=0.40.0", optional = true, markers = "extra == \"all\""}
cryptography = ">=35.0.0"
docker = {version = ">=3.0.0", optional = true, markers = "extra == \"all\""}
graphql-core = {version = "*", optional = true, markers = "extra == \"all\""}
Jinja2 = ">=2.10.1"
joserfc = {version = ">=0.9.0", optional = true, markers = "extra == \"all\""}
jsonpath-ng = {version = "*", optional = true, markers = "extra == \"all\""}
jsonpath_ng = {version = "*", optional = true, markers = "extra == \"all\""}
jsonschema = {version = "*", optional = true, markers = "extra == \"all\""}
multipart = {version = "*", optional = true, markers = "extra == \"all\""}
openapi-spec-validator = {version = ">=0.5.0", optional = true, markers = "extra == \"all\""}
@@ -2978,7 +2978,7 @@ werkzeug = ">=0.5,<2.2.0 || >2.2.0,<2.2.1 || >2.2.1"
xmltodict = "*"
[package.extras]
all = ["PyYAML (>=5.1)", "antlr4-python3-runtime", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "graphql-core", "joserfc (>=0.9.0)", "jsonpath-ng", "jsonschema", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.6.1)", "pyparsing (>=3.0.7)", "setuptools"]
all = ["PyYAML (>=5.1)", "antlr4-python3-runtime", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "graphql-core", "joserfc (>=0.9.0)", "jsonpath_ng", "jsonschema", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.6.1)", "pyparsing (>=3.0.7)", "setuptools"]
apigateway = ["PyYAML (>=5.1)", "joserfc (>=0.9.0)", "openapi-spec-validator (>=0.5.0)"]
apigatewayv2 = ["PyYAML (>=5.1)", "openapi-spec-validator (>=0.5.0)"]
appsync = ["graphql-core"]
@@ -2988,16 +2988,16 @@ cloudformation = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>
cognitoidp = ["joserfc (>=0.9.0)"]
dynamodb = ["docker (>=3.0.0)", "py-partiql-parser (==0.6.1)"]
dynamodbstreams = ["docker (>=3.0.0)", "py-partiql-parser (==0.6.1)"]
events = ["jsonpath-ng"]
events = ["jsonpath_ng"]
glue = ["pyparsing (>=3.0.7)"]
proxy = ["PyYAML (>=5.1)", "antlr4-python3-runtime", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=2.5.1)", "graphql-core", "joserfc (>=0.9.0)", "jsonpath-ng", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.6.1)", "pyparsing (>=3.0.7)", "setuptools"]
proxy = ["PyYAML (>=5.1)", "antlr4-python3-runtime", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=2.5.1)", "graphql-core", "joserfc (>=0.9.0)", "jsonpath_ng", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.6.1)", "pyparsing (>=3.0.7)", "setuptools"]
quicksight = ["jsonschema"]
resourcegroupstaggingapi = ["PyYAML (>=5.1)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "graphql-core", "joserfc (>=0.9.0)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.6.1)", "pyparsing (>=3.0.7)"]
s3 = ["PyYAML (>=5.1)", "py-partiql-parser (==0.6.1)"]
s3crc32c = ["PyYAML (>=5.1)", "crc32c", "py-partiql-parser (==0.6.1)"]
server = ["PyYAML (>=5.1)", "antlr4-python3-runtime", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "flask (!=2.2.0,!=2.2.1)", "flask-cors", "graphql-core", "joserfc (>=0.9.0)", "jsonpath-ng", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.6.1)", "pyparsing (>=3.0.7)", "setuptools"]
server = ["PyYAML (>=5.1)", "antlr4-python3-runtime", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "flask (!=2.2.0,!=2.2.1)", "flask-cors", "graphql-core", "joserfc (>=0.9.0)", "jsonpath_ng", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.6.1)", "pyparsing (>=3.0.7)", "setuptools"]
ssm = ["PyYAML (>=5.1)"]
stepfunctions = ["antlr4-python3-runtime", "jsonpath-ng"]
stepfunctions = ["antlr4-python3-runtime", "jsonpath_ng"]
xray = ["aws-xray-sdk (>=0.93,!=0.96)", "setuptools"]
[[package]]
@@ -5891,4 +5891,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">3.9.1,<3.13"
content-hash = "aea38b0311bfabac00d4bf9ee5d2fa0a7f3e32dd2ee5c5d27eb54c69a80b35e9"
content-hash = "890d165dc90871b6c2f34a31c61f5857ade538cc62fe33f024a2f57e1c5ac1b1"
+25
@@ -1,6 +1,31 @@
# Prowler SDK Changelog
All notable changes to the **Prowler SDK** are documented in this file.
## [v5.13.0] (Prowler UNRELEASED)
### Added
- Support for AdditionalURLs in outputs [(#8651)](https://github.com/prowler-cloud/prowler/pull/8651)
- Support for markdown metadata fields in Dashboard [(#8667)](https://github.com/prowler-cloud/prowler/pull/8667)
- Documentation for renaming checks [(#8717)](https://github.com/prowler-cloud/prowler/pull/8717)
- Add explicit "name" field for each compliance framework and include "FRAMEWORK" and "NAME" in CSV output [(#7920)](https://github.com/prowler-cloud/prowler/pull/7920)
### Changed
- Update AWS Neptune service metadata to new format [(#8494)](https://github.com/prowler-cloud/prowler/pull/8494)
- Update AWS Config service metadata to new format [(#8641)](https://github.com/prowler-cloud/prowler/pull/8641)
- HTML output now properly renders markdown syntax in Risk and Recommendation fields [(#8727)](https://github.com/prowler-cloud/prowler/pull/8727)
- Update `moto` dependency from 5.0.28 to 5.1.11 [(#7100)](https://github.com/prowler-cloud/prowler/pull/7100)
### Fixed
- Fix SNS topics showing empty AWS_ResourceID in Quick Inventory output [(#8762)](https://github.com/prowler-cloud/prowler/issues/8762)
## [v5.12.1] (Prowler v5.12.1)
### Fixed
- Replaced old check id with new ones for compliance files [(#8682)](https://github.com/prowler-cloud/prowler/pull/8682)
- `firehose_stream_encrypted_at_rest` check false positives and new api call in kafka service [(#8599)](https://github.com/prowler-cloud/prowler/pull/8599)
- Replace defender rules policies key to use old name [(#8702)](https://github.com/prowler-cloud/prowler/pull/8702)
## [v5.12.0] (Prowler v5.12.0)
### Added
@@ -1,5 +1,6 @@
{
"Framework": "AWS-Account-Security-Onboarding",
"Name": "AWS Account Security Onboarding",
"Version": "",
"Provider": "AWS",
"Description": "Checklist when onboarding new AWS Accounts to existing AWS Organization.",
@@ -1,5 +1,6 @@
{
"Framework": "AWS-Audit-Manager-Control-Tower-Guardrails",
"Name": "AWS Audit Manager Control Tower Guardrails",
"Version": "",
"Provider": "AWS",
"Description": "AWS Control Tower is a management and governance service that you can use to navigate through the setup process and governance requirements that are involved in creating a multi-account AWS environment.",
@@ -1,5 +1,6 @@
{
"Framework": "AWS-Foundational-Security-Best-Practices",
"Name": "AWS Foundational Security Best Practices",
"Version": "",
"Provider": "AWS",
"Description": "The AWS Foundational Security Best Practices standard is a set of controls that detect when your deployed accounts and resources deviate from security best practices.",
@@ -4730,4 +4731,4 @@
]
}
]
}
}
@@ -1,5 +1,6 @@
{
"Framework": "AWS-Foundational-Technical-Review",
"Name": "AWS Foundational Technical Review",
"Version": "",
"Provider": "AWS",
"Description": "The AWS Foundational Technical Review (FTR) assesses an AWS Partner's solution against a specific set of Amazon Web Services (AWS) best practices around security, performance, and operational processes that are most critical for customer success. Passing the FTR is required to qualify AWS Software Partners for AWS Partner Network (APN) programs such as AWS Competency and AWS Service Ready but any AWS Partner who offers a technology solution may request a FTR review through AWS Partner Central.",
@@ -364,8 +365,8 @@
"ec2_ami_public",
"ec2_instance_public_ip",
"ec2_securitygroup_allow_ingress_from_internet_to_all_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_ftp_port_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_ftp_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_cassandra_7199_9160_8888",
@@ -1,5 +1,6 @@
{
"Framework": "AWS-Well-Architected-Framework-Reliability-Pillar",
"Name": "AWS Well-Architected Framework Reliability Pillar",
"Version": "",
"Provider": "AWS",
"Description": "Best Practices for the AWS Well-Architected Framework Reliability Pillar encompasses the ability of a workload to perform its intended function correctly and consistently when its expected to. This includes the ability to operate and test the workload through its total lifecycle.",
@@ -1,5 +1,6 @@
{
"Framework": "AWS-Well-Architected-Framework-Security-Pillar",
"Name": "AWS Well-Architected Framework Security Pillar",
"Version": "",
"Provider": "AWS",
"Description": "Best Practices for AWS Well-Architected Framework Security Pillar. The focus of this framework is the security pillar of the AWS Well-Architected Framework. It provides guidance to help you apply best practices, current recommendations in the design, delivery, and maintenance of secure AWS workloads.",
@@ -721,8 +722,8 @@
"ec2_networkacl_allow_ingress_tcp_port_22",
"ec2_networkacl_allow_ingress_tcp_port_3389",
"ec2_securitygroup_allow_ingress_from_internet_to_all_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_ftp_port_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_ftp_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_cassandra_7199_9160_8888",
+1
@@ -1,5 +1,6 @@
{
"Framework": "CIS",
"Name": "CIS Amazon Web Services Foundations Benchmark v1.4.0",
"Version": "1.4",
"Provider": "AWS",
"Description": "The CIS Benchmark for CIS Amazon Web Services Foundations Benchmark, v1.4.0, Level 1 and 2 provides prescriptive guidance for configuring security options for a subset of Amazon Web Services. It has an emphasis on foundational, testable, and architecture agnostic settings",
+1
@@ -1,5 +1,6 @@
{
"Framework": "CIS",
"Name": "CIS Amazon Web Services Foundations Benchmark v1.5.0",
"Version": "1.5",
"Provider": "AWS",
"Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance for configuring security options for a subset of Amazon Web Services with an emphasis on foundational, testable, and architecture agnostic settings.",
+1
@@ -1,5 +1,6 @@
{
"Framework": "CIS",
"Name": "CIS Amazon Web Services Foundations Benchmark v2.0.0",
"Version": "2.0",
"Provider": "AWS",
"Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance for configuring security options for a subset of Amazon Web Services with an emphasis on foundational, testable, and architecture agnostic settings.",
+1
@@ -1,5 +1,6 @@
{
"Framework": "CIS",
"Name": "CIS Amazon Web Services Foundations Benchmark v3.0.0",
"Version": "3.0",
"Provider": "AWS",
"Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance for configuring security options for a subset of Amazon Web Services with an emphasis on foundational, testable, and architecture agnostic settings.",
+1
@@ -1,5 +1,6 @@
{
"Framework": "CIS",
"Name": "CIS Amazon Web Services Foundations Benchmark v4.0.1",
"Version": "4.0.1",
"Provider": "AWS",
"Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance for configuring security options for a subset of Amazon Web Services with an emphasis on foundational, testable, and architecture agnostic settings.",
+1
@@ -1,5 +1,6 @@
{
"Framework": "CIS",
"Name": "CIS Amazon Web Services Foundations Benchmark v5.0.0",
"Version": "5.0",
"Provider": "AWS",
"Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance for configuring security options for a subset of Amazon Web Services with an emphasis on foundational, testable, and architecture agnostic settings.",
+1
@@ -1,5 +1,6 @@
{
"Framework": "CISA",
"Name": "CISA Cyber Essentials framework",
"Version": "",
"Provider": "AWS",
"Description": "Cybersecurity & Infrastructure Security Agency's (CISA) Cyber Essentials is a guide for leaders of small businesses as well as leaders of small and local government agencies to develop an actionable understanding of where to start implementing organizational cybersecurity practices.",
@@ -1,5 +1,6 @@
{
"Framework": "ENS",
"Name": "ENS RD 311/2022",
"Version": "RD2022",
"Provider": "AWS",
"Description": "The accreditation scheme of the ENS (National Security Scheme) has been developed by the Ministry of Finance and Public Administrations and the CCN (National Cryptological Center). This includes the basic principles and minimum requirements necessary for the adequate protection of information.",
@@ -1,5 +1,6 @@
{
"Framework": "FedRAMP-Low-Revision-4",
"Name": "FedRAMP Low Revision 4",
"Version": "",
"Provider": "AWS",
"Description": "The Federal Risk and Authorization Management Program (FedRAMP) was established in 2011. It provides a cost-effective, risk-based approach for the adoption and use of cloud services by the U.S. federal government. FedRAMP empowers federal agencies to use modern cloud technologies, with an emphasis on the security and protection of federal information.",
@@ -1,5 +1,6 @@
{
"Framework": "FedRamp-Moderate-Revision-4",
"Name": "FedRAMP Moderate Revision 4",
"Version": "",
"Provider": "AWS",
"Description": "The Federal Risk and Authorization Management Program (FedRAMP) was established in 2011. It provides a cost-effective, risk-based approach for the adoption and use of cloud services by the U.S. federal government. FedRAMP empowers federal agencies to use modern cloud technologies, with an emphasis on the security and protection of federal information.",
+1
@@ -1,5 +1,6 @@
{
"Framework": "FFIEC",
"Name": "FFIEC Cybersecurity Assessment Tool framework",
"Version": "",
"Provider": "AWS",
"Description": "In light of the increasing volume and sophistication of cyber threats, the Federal Financial Institutions Examination Council (FFIEC) developed the Cybersecurity Assessment Tool (Assessment), on behalf of its members, to help institutions identify their risks and determine their cybersecurity maturity.",
+1
@@ -1,5 +1,6 @@
{
"Framework": "GDPR",
"Name": "GDPR compliance framework",
"Version": "",
"Provider": "AWS",
"Description": "The General Data Protection Regulation (GDPR) is a new European privacy law that became enforceable on May 25, 2018. The GDPR replaces the EU Data Protection Directive, also known as Directive 95/46/EC. It's intended to harmonize data protection laws throughout the European Union (EU). It does this by applying a single data protection law that's binding throughout each EU member state.",
@@ -1,5 +1,6 @@
{
"Framework": "GxP-21-CFR-Part-11",
"Name": "GxP (Good Practices) 21 CFR Part 11",
"Version": "",
"Provider": "AWS",
"Description": "GxP refers to the regulations and guidelines that are applicable to life sciences organizations that make food and medical products. Medical products that fall under this include medicines, medical devices, and medical software applications. The overall intent of GxP requirements is to ensure that food and medical products are safe for consumers. It's also to ensure the integrity of data that's used to make product-related safety decisions.",
@@ -1,5 +1,6 @@
{
"Framework": "GxP-EU-Annex-11",
"Name": "GxP (Good Practices) EU Annex 11",
"Version": "",
"Provider": "AWS",
"Description": "The GxP EU Annex 11 framework is the European equivalent to the FDA 21 CFR part 11 framework in the United States. This annex applies to all forms of computerized systems that are used as part of Good Manufacturing Practices (GMP) regulated activities. A computerized system is a set of software and hardware components that together fulfill certain functionalities. The application should be validated and IT infrastructure should be qualified. Where a computerized system replaces a manual operation, there should be no resultant decrease in product quality, process control, or quality assurance. There should be no increase in the overall risk of the process.",
+1
@@ -1,5 +1,6 @@
{
"Framework": "HIPAA",
"Name": "HIPAA compliance framework",
"Version": "",
"Provider": "AWS",
"Description": "The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is legislation that helps US workers to retain health insurance coverage when they change or lose jobs. The legislation also seeks to encourage electronic health records to improve the efficiency and quality of the US healthcare system through improved information sharing.",
@@ -1,5 +1,6 @@
{
"Framework": "ISO27001",
"Name": "ISO/IEC 27001 Information Security Management Standard 2013",
"Version": "2013",
"Provider": "AWS",
"Description": "ISO (the International Organization for Standardization) and IEC (the International Electrotechnical Commission) form the specialized system for worldwide standardization. National bodies that are members of ISO or IEC participate in the development of International Standards through technical committees established by the respective organization to deal with particular fields of technical activity. ISO and IEC technical committees collaborate in fields of mutual interest. Other international organizations, governmental and non-governmental, in liaison with ISO and IEC, also take part in the work.",
@@ -1,5 +1,6 @@
{
"Framework": "ISO27001",
"Name": "ISO/IEC 27001 Information Security Management Standard 2022",
"Version": "2022",
"Provider": "AWS",
"Description": "ISO (the International Organization for Standardization) and IEC (the International Electrotechnical Commission) form the specialized system for worldwide standardization. National bodies that are members of ISO or IEC participate in the development of International Standards through technical committees established by the respective organization to deal with particular fields of technical activity. ISO and IEC technical committees collaborate in fields of mutual interest. Other international organizations, governmental and non-governmental, in liaison with ISO and IEC, also take part in the work.",
@@ -1510,8 +1511,8 @@
"ec2_securitygroup_allow_ingress_from_internet_to_all_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_high_risk_tcp_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_ftp_port_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_ftp_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_cassandra_7199_9160_8888",
@@ -1604,8 +1605,8 @@
"ec2_securitygroup_allow_ingress_from_internet_to_all_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_high_risk_tcp_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_ftp_port_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_ftp_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_cassandra_7199_9160_8888",
@@ -1698,8 +1699,8 @@
"ec2_securitygroup_allow_ingress_from_internet_to_all_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_high_risk_tcp_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_ftp_port_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_ftp_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_cassandra_7199_9160_8888",
@@ -1,5 +1,6 @@
{
"Framework": "KISA-ISMS-P",
"Name": "KISA ISMS compliance framework 2023",
"Version": "2023",
"Provider": "AWS",
"Description": "The ISMS-P certification, established by KISA (Korea Internet & Security Agency), is a system where an independent certification body evaluates whether a company or organization's information security and privacy protection measures comply with certification standards, and grants certification. This helps organizations improve public trust in their services and respond effectively to increasingly complex cyber threats. The ISMS-P framework also provides comprehensive guidelines for systematically establishing, implementing, and managing information security and privacy protection.",
@@ -1558,8 +1559,8 @@
"ec2_securitygroup_allow_ingress_from_internet_to_all_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_high_risk_tcp_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_ftp_port_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_ftp_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_cassandra_7199_9160_8888",
@@ -1682,7 +1683,7 @@
"ec2_securitygroup_allow_ingress_from_internet_to_all_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_high_risk_tcp_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_ftp_port_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_ftp_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_elasticsearch_kibana_9200_9300_5601",
@@ -1814,7 +1815,7 @@
"ec2_securitygroup_allow_ingress_from_internet_to_all_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_high_risk_tcp_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_cassandra_7199_9160_8888",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_memcached_11211",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_mysql_3306",
@@ -1917,7 +1918,7 @@
"ec2_securitygroup_allow_ingress_from_internet_to_all_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_high_risk_tcp_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_ftp_port_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_ftp_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_telnet_23",
@@ -3024,8 +3025,8 @@
"ec2_securitygroup_allow_ingress_from_internet_to_all_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_high_risk_tcp_ports",
"ec2_securitygroup_allow_ingress_from_internet_to_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_ftp_port_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_ftp_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_cassandra_7199_9160_8888",
@@ -4588,4 +4589,4 @@
]
}
]
}
}
