Compare commits

...

89 Commits

Author SHA1 Message Date
Daniel Barranquero 6bc1eafd66 Merge branch 'PRWLR-5093-design-the-fixer-class' into update-fixers-docs 2025-07-15 12:34:36 +02:00
Daniel Barranquero e45e4ae0fe feat(docs): add snapshots 2025-07-15 12:32:58 +02:00
Daniel Barranquero ef4718d16c Merge branch 'master' into update-fixers-docs 2025-07-15 12:20:55 +02:00
Daniel Barranquero 933ba4c3be Merge branch 'master' into PRWLR-5093-design-the-fixer-class 2025-07-15 12:17:55 +02:00
Sergio Garcia 7da6d7b5dd chore(github): add test_connection function (#8248) 2025-07-15 17:01:40 +08:00
Víctor Fernández Poyatos db6a27d1f5 feat(resources): latest and metadata endpoints and performance (#8112) 2025-07-14 18:02:06 +02:00
Alejandro Bailo e07c833cab feat: SAML toast error (#8267)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-07-14 17:04:23 +02:00
Adrián Jesús Peña Rodríguez 728fc9d6ff fix(saml): remove user in case of error (#8260) 2025-07-14 14:07:27 +02:00
Daniel Barranquero 877471783e feat(docs): add new version of fixer docs 2025-07-14 13:57:45 +02:00
Prowler Bot cf9ff78605 chore(regions_update): Changes in regions for AWS services (#8263)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-07-14 19:45:38 +08:00
Daniel Barranquero 55e9695915 Merge branch 'master' into update-fixers-docs 2025-07-14 09:41:53 +02:00
Adrián Jesús Peña Rodríguez a2faf548af chore: update changelog (#8255) 2025-07-11 12:06:03 +02:00
Adrián Jesús Peña Rodríguez 8bcec4926b fix: set lxml version (#8253) 2025-07-11 11:43:31 +02:00
Hugo Pereira Brito a4e96f809b fix(docs): GitHub provider mkdocs and -h (#8246) 2025-07-11 16:32:15 +08:00
Adrián Jesús Peña Rodríguez fa27255dd7 chore(saml): redirect to login page on fail (#8247) 2025-07-11 09:22:38 +02:00
Pepe Fagoaga 05360e469f chore(bump): add no-changelog label (#8240) 2025-07-10 19:14:37 +08:00
Hugo Pereira Brito 9d405ddcbd fix: changelog entries with new specification (#8232) 2025-07-10 14:40:33 +05:45
Víctor Fernández Poyatos 430f831543 feat(exceptions): add custom error for provider connection during scans (#8234) 2025-07-10 14:13:19 +05:45
Pepe Fagoaga da9d7199b7 chore(changelog): add missing entry from the password policy (#8236) 2025-07-10 09:07:04 +02:00
Pepe Fagoaga d63a383ec6 feat(security): password strength (#8225)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-07-10 11:50:22 +05:45
Víctor Fernández Poyatos 55c226029e feat(resources): optimize include parameters for resources view (#8229) 2025-07-09 16:16:56 +02:00
Alejandro Bailo 8d2f6aa30c feat: Include/exclude muted findings (#8228) 2025-07-09 16:06:05 +02:00
Rubén De la Torre Vico a319f80701 feat(storage): add new check storage_smb_protocol_version_is_latest (#8128)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-07-09 17:28:00 +08:00
Adrián Jesús Peña Rodríguez 15a8671f0d feat(saml): prevent duplicate SAML entityID configuration (#8224) 2025-07-09 09:50:22 +02:00
Rubén De la Torre Vico d34e709d91 fix(azure/storage): use BaseModel for all Storage models (#8222) 2025-07-09 15:49:17 +08:00
Hugo Pereira Brito ddc53c3c6d fix(firehose): list all streams and fix firehose_stream_encrypted_at_rest logic (#8213) 2025-07-09 15:38:54 +08:00
Alejandro Bailo a3aef18cfe feat: Mutelist implementation (#8190)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
Co-authored-by: Drew Kerrigan <drew@prowler.com>
2025-07-09 08:15:23 +02:00
Alejandro Bailo 49ca3ca325 fix: SAML 403 message (#8221) 2025-07-09 08:10:14 +02:00
Drew Kerrigan 89c67079a3 feat: Processors API endpoint, implement MuteList (#7993)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-07-08 21:33:28 +05:45
Pepe Fagoaga 2de8075d87 fix(overview): use findings latest to get new (#8219) 2025-07-08 15:48:19 +02:00
Adrián Jesús Peña Rodríguez e124275dbf fix(saml): ensure SocialApp and SAMLDomainIndex are deleted with SAMLConfiguration (#8210) 2025-07-08 13:57:23 +02:00
Rubén De la Torre Vico 760d28e752 chore(deps): update dash libs (#8215) 2025-07-08 19:55:50 +08:00
Víctor Fernández Poyatos 3fb0733887 feat(tasks): create overview queue for summaries and overviews (#8214) 2025-07-08 13:53:23 +02:00
Pablo Lara 7de9a37edb fix(api): make invitation email comparison case-insensitive (#8206)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-07-08 16:39:27 +05:45
Pepe Fagoaga fe00b788cc fix: Remove type validation while updating provider credentials (#8197)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-07-08 15:27:02 +05:45
Rubén De la Torre Vico 4c50f4d811 feat(azure/vm): add new check vm_backup_enabled (#8182)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-07-08 17:01:22 +08:00
Rubén De la Torre Vico c0c736bffe chore: ignore some files from AI editors (#8209) 2025-07-08 10:43:38 +02:00
dependabot[bot] a3aa7d0a63 chore(deps): bump python from 3.12.10-slim-bookworm to 3.12.11-slim-bookworm (#8157)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-08 16:43:13 +08:00
Rubén De la Torre Vico 3ceb86c4d9 feat(azure/vm): add new check vm_scaleset_associated_load_balancer (#8181) 2025-07-08 16:40:43 +08:00
Rubén De la Torre Vico 3628e7b3e8 feat(azure/vm): add new check vm_ensure_using_approved_images (#8168) 2025-07-08 16:40:33 +08:00
Chandrapal Badshah f29c2ac9f0 docs(lighthouse): Add Lighthouse Docs (#8196)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-07-08 11:56:23 +05:45
Pablo Lara b4927c3ad1 chore: Update CHANGELOG UI (#8204) 2025-07-07 17:54:44 +02:00
Adrián Jesús Peña Rodríguez 19f3c1d310 chore(saml): restore SAML button (#8203) 2025-07-07 17:34:05 +02:00
Adrián Jesús Peña Rodríguez cd97e57521 fix(saml): restore SAML, deactivate urls, enable idp-initiate (#8175) 2025-07-07 16:42:11 +02:00
Hugo Pereira Brito b38207507a chore(docs): enhance M365 auth documentation (#8199)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-07-07 22:01:41 +08:00
Rubén De la Torre Vico ab96e0aac0 feat(azure/vm): add new check vm_linux_enforce_ssh_authentication (#8149) 2025-07-07 22:01:11 +08:00
Prowler Bot 4477cecc59 chore(regions_update): Changes in regions for AWS services (#8198)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-07-07 18:04:49 +08:00
Pablo Lara 641d671312 chore: upgrade to Next.js 14.2.30 and lock TypeScript to 5.5.4 for ES… (#8189) 2025-07-04 13:20:30 +02:00
Víctor Fernández Poyatos e7c2fa0699 fix(findings): avoid backfill on empty scans (#8183) 2025-07-04 12:24:49 +02:00
Pedro Martín 7eb08b0f14 fix(ec2): allow empty values for http_endpoint in templates (#8184) 2025-07-04 18:03:51 +08:00
Rubén De la Torre Vico 6f3112f754 feat(storage): add new check storage_smb_channel_encryption_with_secure_algorithm (#8123)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-07-04 15:26:33 +08:00
Kay Agahd f5ecae6da1 fix(iam): detect wildcarded ARNs in sts:AssumeRole policy resources (#8164)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-07-03 23:09:48 +08:00
Prowler Bot 1c75f6b804 chore(release): Bump version to v5.9.0 (#8178)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-07-03 23:08:37 +08:00
Daniel Barranquero 91b64d8572 chore(docs): update m365 docs for app auth in cloud (#8147)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-07-03 23:08:15 +08:00
Pablo Lara 233ae74560 fix: disable dynamic filters for now (#8177) 2025-07-03 14:17:02 +02:00
Alejandro Bailo fac97f9785 fix: remove duplicated calls during promise all resolving (#8176) 2025-07-03 14:02:57 +02:00
Pablo Lara e81c7a3893 fix: bug when updating credentials for m365 (#8173) 2025-07-03 11:31:40 +02:00
Daniel Barranquero 82ab20deec feat(compute): add tests for gcp fixer 2025-06-24 10:01:07 +02:00
Daniel Barranquero d7e3b1c760 feat(gcp): working version of gcp fixer 2025-06-23 13:14:25 +02:00
Daniel Barranquero 166e07939d feat(gcp): add first version of gcp tests 2025-06-23 12:33:51 +02:00
Daniel Barranquero c5cf1c4bfb merge branch 'master' into PRWLR-5093-design-the-fixer-class 2025-06-23 10:20:06 +02:00
Daniel Barranquero 09b33d05a3 fix vulture 2025-06-11 12:52:36 +02:00
Daniel Barranquero 6a7cfd175c chore(tests): improve tests 2025-06-11 12:47:42 +02:00
Daniel Barranquero 82543c0d63 fix(azure): change azure fixer tests 2025-06-11 09:45:32 +02:00
Daniel Barranquero 7360395263 feat(tests): add tests for new fixers 2025-06-10 19:15:16 +02:00
Daniel Barranquero 4ae790ee73 fix: tests with function apps 2025-06-10 11:17:54 +02:00
Daniel Barranquero 7a2d3db082 chore(app): fix app service tests 2025-06-09 16:35:11 +02:00
Daniel Barranquero 40934d34b2 fix: flake8 2025-06-09 13:38:16 +02:00
Daniel Barranquero 5c93372210 chore(tests): add tests for azure and m365 fixers 2025-06-09 13:32:47 +02:00
Daniel Barranquero ffcc516f00 chore(kms): modify fixer test 2025-06-09 11:30:44 +02:00
Daniel Barranquero 9d4094e19e fix: remove unnecessary changes 2025-06-04 13:21:25 +02:00
Daniel Barranquero 00e491415f chore(app): new version of the fixer 2025-06-04 12:51:39 +02:00
Daniel Barranquero e17cbed4b3 Merge branch 'PRWLR-7353-fix-app-function-ftps-deployment-disabled-check' into PRWLR-5093-design-the-fixer-class 2025-06-04 12:34:01 +02:00
Daniel Barranquero d1e41f16ef fix: solve comments 2025-06-04 12:32:32 +02:00
Daniel Barranquero a17c3f94fc chore(azure): add permissions to azure fixer info 2025-06-04 10:48:55 +02:00
Daniel Barranquero 70f8232747 Merge branch 'PRWLR-7353-fix-app-function-ftps-deployment-disabled-check' into PRWLR-5093-design-the-fixer-class 2025-06-04 09:47:06 +02:00
Daniel Barranquero 31189f0d11 chore(app): mantain none by default 2025-06-04 09:43:17 +02:00
Daniel Barranquero 5aaf6e4858 feat(app): add changelog 2025-06-04 09:29:14 +02:00
Daniel Barranquero e05cc4cfab fix(app): change api call for app function ftps check 2025-06-03 17:57:52 +02:00
Daniel Barranquero 18a6f29593 feat(gcp): add first version of gcp fixers 2025-06-03 17:40:05 +02:00
Daniel Barranquero fc826da50c chore(azure): add changes to azure fixers 2025-06-03 17:38:09 +02:00
Daniel Barranquero b30ee077da merge branch 'master' into PRWLR-5093-design-the-fixer-class 2025-06-02 10:38:00 +02:00
Daniel Barranquero efdd967763 feat(fixers): add first version of azure fixers 2025-05-23 11:07:38 +02:00
Daniel Barranquero ee146cd43e feat(m365): add first fixer for m365 2025-05-21 14:00:34 +02:00
Daniel Barranquero f40aea757e feat(fixers): add first version of M365 fixers 2025-05-21 10:24:59 +02:00
Daniel Barranquero 7db24f8cb7 Merge branch 'master' into PRWLR-5093-design-the-fixer-class 2025-05-20 13:14:59 +02:00
Daniel Barranquero f78e5c9e33 feat(fixers): change classes structure 2025-05-20 09:41:14 +02:00
Daniel Barranquero d91bbe1ef4 feat(fixer): add fixing and modify errors from the v1 2025-05-15 16:40:09 +02:00
Daniel Barranquero c0d211492e feat(fixer): add poc for Fixer class 2025-05-15 13:57:29 +02:00
220 changed files with 17130 additions and 6799 deletions
-6
@@ -136,12 +136,6 @@ jobs:
       run: |
         poetry check --lock
-      - name: Prevents known compatibility error between lxml and libxml2/libxmlsec versions - https://github.com/xmlsec/python-xmlsec/issues/320
-        working-directory: ./api
-        if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
-        run: |
-          poetry run pip install --force-reinstall --no-binary lxml lxml
       - name: Lint with ruff
         working-directory: ./api
         if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
+2
@@ -97,6 +97,7 @@ jobs:
           commit-message: "chore(release): Bump version to v${{ env.BUMP_VERSION_TO }}"
           branch: "version-bump-to-v${{ env.BUMP_VERSION_TO }}"
           title: "chore(release): Bump version to v${{ env.BUMP_VERSION_TO }}"
+          labels: no-changelog
           body: |
             ### Description
@@ -135,6 +136,7 @@ jobs:
           commit-message: "chore(release): Bump version to v${{ env.PATCH_VERSION_TO }}"
           branch: "version-bump-to-v${{ env.PATCH_VERSION_TO }}"
           title: "chore(release): Bump version to v${{ env.PATCH_VERSION_TO }}"
+          labels: no-changelog
           body: |
             ### Description
+10
@@ -44,6 +44,16 @@ junit-reports/
 # Cursor files
 .cursorignore
 .cursor/
+
+# RooCode files
+.roo/
+.rooignore
+.roomodes
+
+# Cline files
+.cline/
+.clineignore
+
 # Terraform
 .terraform*
+1 -1
@@ -1,4 +1,4 @@
-FROM python:3.12.10-slim-bookworm AS build
+FROM python:3.12.11-slim-bookworm AS build
 LABEL maintainer="https://github.com/prowler-cloud/prowler"
 LABEL org.opencontainers.image.source="https://github.com/prowler-cloud/prowler"
+33
@@ -4,6 +4,39 @@ All notable changes to the **Prowler API** are documented in this file.
+## [v1.10.0] (Prowler UNRELEASED)
+### Added
+- SSO with SAML support [(#8175)](https://github.com/prowler-cloud/prowler/pull/8175)
+- `GET /resources/metadata`, `GET /resources/metadata/latest` and `GET /resources/latest` to expose resource metadata and latest scan results [(#8112)](https://github.com/prowler-cloud/prowler/pull/8112)
+
+### Changed
+- `/processors` endpoints to post-process findings. Currently, only the Mutelist processor is supported to allow muting findings.
+- Optimized the underlying queries for resources endpoints [(#8112)](https://github.com/prowler-cloud/prowler/pull/8112)
+- Optimized include parameters for resources view [(#8229)](https://github.com/prowler-cloud/prowler/pull/8229)
+
+### Fixed
+- Search filter for findings and resources [(#8112)](https://github.com/prowler-cloud/prowler/pull/8112)
+
+### Security
+- Enhanced password validation to enforce 12+ character passwords with special characters, uppercase, lowercase, and numbers [(#8225)](https://github.com/prowler-cloud/prowler/pull/8225)
+
+---
+## [v1.9.1] (Prowler v5.8.1)
+### Added
+- Custom exception for provider connection errors during scans [(#8234)](https://github.com/prowler-cloud/prowler/pull/8234)
+
+### Changed
+- Summary and overview tasks now use a dedicated queue and no longer propagate errors to compliance tasks [(#8214)](https://github.com/prowler-cloud/prowler/pull/8214)
+
+### Fixed
+- Scan with no resources will not trigger legacy code for findings metadata [(#8183)](https://github.com/prowler-cloud/prowler/pull/8183)
+- Invitation email comparison case-insensitive [(#8206)](https://github.com/prowler-cloud/prowler/pull/8206)
+
+### Removed
+- Validation of the provider's secret type during updates [(#8197)](https://github.com/prowler-cloud/prowler/pull/8197)
+
+---
+
 ## [v1.9.0] (Prowler v5.8.0)
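
The `/processors` entry in the hunk above is only described, not shown. Below is a minimal usage sketch with `requests`; the base URL, token, attribute names (`processor_type`, `configuration`), and the example check name are assumptions for illustration, not the documented API schema:

```python
# Hypothetical sketch of enabling the Mutelist processor through the new
# /processors endpoint (#7993, #8190). Payload field names are assumptions.
import requests

BASE_URL = "https://api.prowler.example.com/api/v1"  # placeholder
HEADERS = {
    "Authorization": "Bearer <access-token>",        # placeholder
    "Content-Type": "application/vnd.api+json",
}

payload = {
    "data": {
        "type": "processors",
        "attributes": {
            "processor_type": "mutelist",  # the only processor supported so far
            "configuration": {
                # Illustrative mutelist rule: mute one check in all accounts,
                # regions and resources
                "Mutelist": {
                    "Accounts": {
                        "*": {
                            "Checks": {
                                "iam_root_mfa_enabled": {
                                    "Regions": ["*"],
                                    "Resources": ["*"],
                                }
                            }
                        }
                    }
                }
            },
        },
    }
}

response = requests.post(f"{BASE_URL}/processors", json=payload, headers=HEADERS)
response.raise_for_status()
print(response.json())
```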
-4
@@ -57,10 +57,6 @@ RUN poetry install --no-root && \
 RUN poetry run python "$(poetry env info --path)/src/prowler/prowler/providers/m365/lib/powershell/m365_powershell.py"
-# Prevents known compatibility error between lxml and libxml2/libxmlsec versions.
-# See: https://github.com/xmlsec/python-xmlsec/issues/320
-RUN poetry run pip install --force-reinstall --no-binary lxml lxml
 COPY src/backend/ ./backend/
 COPY docker-entrypoint.sh ./docker-entrypoint.sh
+1 -1
@@ -257,7 +257,7 @@ cd src/backend
 python manage.py loaddata api/fixtures/0_dev_users.json --database admin
 ```
-> The default credentials are `dev@prowler.com:thisisapassword123` or `dev2@prowler.com:thisisapassword123`
+> The default credentials are `dev@prowler.com:Thisisapassword123@` or `dev2@prowler.com:Thisisapassword123@`
 ## Run tests
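
The updated default credentials reflect the stricter password policy noted in the CHANGELOG above (#8225): at least 12 characters with uppercase, lowercase, numbers, and special characters. A standalone sketch of such a check, not the API's actual validator:

```python
# Standalone sketch of the 12+ character password policy from #8225;
# not the actual Prowler API validator.
import string

def is_strong_password(pw: str) -> bool:
    return (
        len(pw) >= 12
        and any(c.islower() for c in pw)
        and any(c.isupper() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(c in string.punctuation for c in pw)
    )

assert is_strong_password("Thisisapassword123@")     # new dev credential passes
assert not is_strong_password("thisisapassword123")  # old one fails: no upper/special
```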
+1 -1
@@ -32,7 +32,7 @@ start_prod_server() {
 start_worker() {
   echo "Starting the worker..."
-  poetry run python -m celery -A config.celery worker -l "${DJANGO_LOGGING_LEVEL:-info}" -Q celery,scans,scan-reports,deletion,backfill -E --max-tasks-per-child 1
+  poetry run python -m celery -A config.celery worker -l "${DJANGO_LOGGING_LEVEL:-info}" -Q celery,scans,scan-reports,deletion,backfill,overview -E --max-tasks-per-child 1
 }
 start_worker_beat() {
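
The worker above now also consumes the `overview` queue introduced in #8214. A minimal sketch of how tasks might be routed there with Celery's `task_routes`; the task names are hypothetical, not Prowler's:

```python
# Sketch of routing summary/overview work to the dedicated "overview"
# queue (#8214). Task names here are hypothetical.
from celery import Celery

app = Celery("config")
app.conf.task_routes = {
    "tasks.overview.*": {"queue": "overview"},
}

@app.task(name="tasks.overview.aggregate")
def aggregate_overview(scan_id: str) -> None:
    ...  # build per-scan summaries without blocking the scan queues

# Or pick the queue per call:
# aggregate_overview.apply_async(args=["scan-id"], queue="overview")
```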
+199 -163
@@ -2684,144 +2684,150 @@ adal = ["adal (>=1.0.2)"]
[[package]]
name = "lxml"
version = "5.4.0"
version = "5.3.2"
description = "Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API."
optional = false
python-versions = ">=3.6"
groups = ["main"]
files = [
{file = "lxml-5.4.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e7bc6df34d42322c5289e37e9971d6ed114e3776b45fa879f734bded9d1fea9c"},
{file = "lxml-5.4.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:6854f8bd8a1536f8a1d9a3655e6354faa6406621cf857dc27b681b69860645c7"},
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:696ea9e87442467819ac22394ca36cb3d01848dad1be6fac3fb612d3bd5a12cf"},
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6ef80aeac414f33c24b3815ecd560cee272786c3adfa5f31316d8b349bfade28"},
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b9c2754cef6963f3408ab381ea55f47dabc6f78f4b8ebb0f0b25cf1ac1f7609"},
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7a62cc23d754bb449d63ff35334acc9f5c02e6dae830d78dab4dd12b78a524f4"},
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f82125bc7203c5ae8633a7d5d20bcfdff0ba33e436e4ab0abc026a53a8960b7"},
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:b67319b4aef1a6c56576ff544b67a2a6fbd7eaee485b241cabf53115e8908b8f"},
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_28_ppc64le.whl", hash = "sha256:a8ef956fce64c8551221f395ba21d0724fed6b9b6242ca4f2f7beb4ce2f41997"},
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_28_s390x.whl", hash = "sha256:0a01ce7d8479dce84fc03324e3b0c9c90b1ece9a9bb6a1b6c9025e7e4520e78c"},
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:91505d3ddebf268bb1588eb0f63821f738d20e1e7f05d3c647a5ca900288760b"},
{file = "lxml-5.4.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:a3bcdde35d82ff385f4ede021df801b5c4a5bcdfb61ea87caabcebfc4945dc1b"},
{file = "lxml-5.4.0-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:aea7c06667b987787c7d1f5e1dfcd70419b711cdb47d6b4bb4ad4b76777a0563"},
{file = "lxml-5.4.0-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:a7fb111eef4d05909b82152721a59c1b14d0f365e2be4c742a473c5d7372f4f5"},
{file = "lxml-5.4.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:43d549b876ce64aa18b2328faff70f5877f8c6dede415f80a2f799d31644d776"},
{file = "lxml-5.4.0-cp310-cp310-win32.whl", hash = "sha256:75133890e40d229d6c5837b0312abbe5bac1c342452cf0e12523477cd3aa21e7"},
{file = "lxml-5.4.0-cp310-cp310-win_amd64.whl", hash = "sha256:de5b4e1088523e2b6f730d0509a9a813355b7f5659d70eb4f319c76beea2e250"},
{file = "lxml-5.4.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:98a3912194c079ef37e716ed228ae0dcb960992100461b704aea4e93af6b0bb9"},
{file = "lxml-5.4.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0ea0252b51d296a75f6118ed0d8696888e7403408ad42345d7dfd0d1e93309a7"},
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b92b69441d1bd39f4940f9eadfa417a25862242ca2c396b406f9272ef09cdcaa"},
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:20e16c08254b9b6466526bc1828d9370ee6c0d60a4b64836bc3ac2917d1e16df"},
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7605c1c32c3d6e8c990dd28a0970a3cbbf1429d5b92279e37fda05fb0c92190e"},
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ecf4c4b83f1ab3d5a7ace10bafcb6f11df6156857a3c418244cef41ca9fa3e44"},
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0cef4feae82709eed352cd7e97ae062ef6ae9c7b5dbe3663f104cd2c0e8d94ba"},
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:df53330a3bff250f10472ce96a9af28628ff1f4efc51ccba351a8820bca2a8ba"},
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_28_ppc64le.whl", hash = "sha256:aefe1a7cb852fa61150fcb21a8c8fcea7b58c4cb11fbe59c97a0a4b31cae3c8c"},
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_28_s390x.whl", hash = "sha256:ef5a7178fcc73b7d8c07229e89f8eb45b2908a9238eb90dcfc46571ccf0383b8"},
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:d2ed1b3cb9ff1c10e6e8b00941bb2e5bb568b307bfc6b17dffbbe8be5eecba86"},
{file = "lxml-5.4.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:72ac9762a9f8ce74c9eed4a4e74306f2f18613a6b71fa065495a67ac227b3056"},
{file = "lxml-5.4.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:f5cb182f6396706dc6cc1896dd02b1c889d644c081b0cdec38747573db88a7d7"},
{file = "lxml-5.4.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:3a3178b4873df8ef9457a4875703488eb1622632a9cee6d76464b60e90adbfcd"},
{file = "lxml-5.4.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:e094ec83694b59d263802ed03a8384594fcce477ce484b0cbcd0008a211ca751"},
{file = "lxml-5.4.0-cp311-cp311-win32.whl", hash = "sha256:4329422de653cdb2b72afa39b0aa04252fca9071550044904b2e7036d9d97fe4"},
{file = "lxml-5.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:fd3be6481ef54b8cfd0e1e953323b7aa9d9789b94842d0e5b142ef4bb7999539"},
{file = "lxml-5.4.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:b5aff6f3e818e6bdbbb38e5967520f174b18f539c2b9de867b1e7fde6f8d95a4"},
{file = "lxml-5.4.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:942a5d73f739ad7c452bf739a62a0f83e2578afd6b8e5406308731f4ce78b16d"},
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:460508a4b07364d6abf53acaa0a90b6d370fafde5693ef37602566613a9b0779"},
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:529024ab3a505fed78fe3cc5ddc079464e709f6c892733e3f5842007cec8ac6e"},
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7ca56ebc2c474e8f3d5761debfd9283b8b18c76c4fc0967b74aeafba1f5647f9"},
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a81e1196f0a5b4167a8dafe3a66aa67c4addac1b22dc47947abd5d5c7a3f24b5"},
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:00b8686694423ddae324cf614e1b9659c2edb754de617703c3d29ff568448df5"},
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:c5681160758d3f6ac5b4fea370495c48aac0989d6a0f01bb9a72ad8ef5ab75c4"},
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_28_ppc64le.whl", hash = "sha256:2dc191e60425ad70e75a68c9fd90ab284df64d9cd410ba8d2b641c0c45bc006e"},
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_28_s390x.whl", hash = "sha256:67f779374c6b9753ae0a0195a892a1c234ce8416e4448fe1e9f34746482070a7"},
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:79d5bfa9c1b455336f52343130b2067164040604e41f6dc4d8313867ed540079"},
{file = "lxml-5.4.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3d3c30ba1c9b48c68489dc1829a6eede9873f52edca1dda900066542528d6b20"},
{file = "lxml-5.4.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:1af80c6316ae68aded77e91cd9d80648f7dd40406cef73df841aa3c36f6907c8"},
{file = "lxml-5.4.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:4d885698f5019abe0de3d352caf9466d5de2baded00a06ef3f1216c1a58ae78f"},
{file = "lxml-5.4.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:aea53d51859b6c64e7c51d522c03cc2c48b9b5d6172126854cc7f01aa11f52bc"},
{file = "lxml-5.4.0-cp312-cp312-win32.whl", hash = "sha256:d90b729fd2732df28130c064aac9bb8aff14ba20baa4aee7bd0795ff1187545f"},
{file = "lxml-5.4.0-cp312-cp312-win_amd64.whl", hash = "sha256:1dc4ca99e89c335a7ed47d38964abcb36c5910790f9bd106f2a8fa2ee0b909d2"},
{file = "lxml-5.4.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:773e27b62920199c6197130632c18fb7ead3257fce1ffb7d286912e56ddb79e0"},
{file = "lxml-5.4.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ce9c671845de9699904b1e9df95acfe8dfc183f2310f163cdaa91a3535af95de"},
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9454b8d8200ec99a224df8854786262b1bd6461f4280064c807303c642c05e76"},
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cccd007d5c95279e529c146d095f1d39ac05139de26c098166c4beb9374b0f4d"},
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0fce1294a0497edb034cb416ad3e77ecc89b313cff7adbee5334e4dc0d11f422"},
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:24974f774f3a78ac12b95e3a20ef0931795ff04dbb16db81a90c37f589819551"},
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:497cab4d8254c2a90bf988f162ace2ddbfdd806fce3bda3f581b9d24c852e03c"},
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_28_aarch64.whl", hash = "sha256:e794f698ae4c5084414efea0f5cc9f4ac562ec02d66e1484ff822ef97c2cadff"},
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_28_ppc64le.whl", hash = "sha256:2c62891b1ea3094bb12097822b3d44b93fc6c325f2043c4d2736a8ff09e65f60"},
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_28_s390x.whl", hash = "sha256:142accb3e4d1edae4b392bd165a9abdee8a3c432a2cca193df995bc3886249c8"},
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:1a42b3a19346e5601d1b8296ff6ef3d76038058f311902edd574461e9c036982"},
{file = "lxml-5.4.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:4291d3c409a17febf817259cb37bc62cb7eb398bcc95c1356947e2871911ae61"},
{file = "lxml-5.4.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:4f5322cf38fe0e21c2d73901abf68e6329dc02a4994e483adbcf92b568a09a54"},
{file = "lxml-5.4.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:0be91891bdb06ebe65122aa6bf3fc94489960cf7e03033c6f83a90863b23c58b"},
{file = "lxml-5.4.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:15a665ad90054a3d4f397bc40f73948d48e36e4c09f9bcffc7d90c87410e478a"},
{file = "lxml-5.4.0-cp313-cp313-win32.whl", hash = "sha256:d5663bc1b471c79f5c833cffbc9b87d7bf13f87e055a5c86c363ccd2348d7e82"},
{file = "lxml-5.4.0-cp313-cp313-win_amd64.whl", hash = "sha256:bcb7a1096b4b6b24ce1ac24d4942ad98f983cd3810f9711bcd0293f43a9d8b9f"},
{file = "lxml-5.4.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:7be701c24e7f843e6788353c055d806e8bd8466b52907bafe5d13ec6a6dbaecd"},
{file = "lxml-5.4.0-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fb54f7c6bafaa808f27166569b1511fc42701a7713858dddc08afdde9746849e"},
{file = "lxml-5.4.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:97dac543661e84a284502e0cf8a67b5c711b0ad5fb661d1bd505c02f8cf716d7"},
{file = "lxml-5.4.0-cp36-cp36m-manylinux_2_28_x86_64.whl", hash = "sha256:c70e93fba207106cb16bf852e421c37bbded92acd5964390aad07cb50d60f5cf"},
{file = "lxml-5.4.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:9c886b481aefdf818ad44846145f6eaf373a20d200b5ce1a5c8e1bc2d8745410"},
{file = "lxml-5.4.0-cp36-cp36m-musllinux_1_2_x86_64.whl", hash = "sha256:fa0e294046de09acd6146be0ed6727d1f42ded4ce3ea1e9a19c11b6774eea27c"},
{file = "lxml-5.4.0-cp36-cp36m-win32.whl", hash = "sha256:61c7bbf432f09ee44b1ccaa24896d21075e533cd01477966a5ff5a71d88b2f56"},
{file = "lxml-5.4.0-cp36-cp36m-win_amd64.whl", hash = "sha256:7ce1a171ec325192c6a636b64c94418e71a1964f56d002cc28122fceff0b6121"},
{file = "lxml-5.4.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:795f61bcaf8770e1b37eec24edf9771b307df3af74d1d6f27d812e15a9ff3872"},
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:29f451a4b614a7b5b6c2e043d7b64a15bd8304d7e767055e8ab68387a8cacf4e"},
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:891f7f991a68d20c75cb13c5c9142b2a3f9eb161f1f12a9489c82172d1f133c0"},
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4aa412a82e460571fad592d0f93ce9935a20090029ba08eca05c614f99b0cc92"},
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:ac7ba71f9561cd7d7b55e1ea5511543c0282e2b6450f122672a2694621d63b7e"},
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:c5d32f5284012deaccd37da1e2cd42f081feaa76981f0eaa474351b68df813c5"},
{file = "lxml-5.4.0-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:ce31158630a6ac85bddd6b830cffd46085ff90498b397bd0a259f59d27a12188"},
{file = "lxml-5.4.0-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:31e63621e073e04697c1b2d23fcb89991790eef370ec37ce4d5d469f40924ed6"},
{file = "lxml-5.4.0-cp37-cp37m-win32.whl", hash = "sha256:be2ba4c3c5b7900246a8f866580700ef0d538f2ca32535e991027bdaba944063"},
{file = "lxml-5.4.0-cp37-cp37m-win_amd64.whl", hash = "sha256:09846782b1ef650b321484ad429217f5154da4d6e786636c38e434fa32e94e49"},
{file = "lxml-5.4.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:eaf24066ad0b30917186420d51e2e3edf4b0e2ea68d8cd885b14dc8afdcf6556"},
{file = "lxml-5.4.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2b31a3a77501d86d8ade128abb01082724c0dfd9524f542f2f07d693c9f1175f"},
{file = "lxml-5.4.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0e108352e203c7afd0eb91d782582f00a0b16a948d204d4dec8565024fafeea5"},
{file = "lxml-5.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a11a96c3b3f7551c8a8109aa65e8594e551d5a84c76bf950da33d0fb6dfafab7"},
{file = "lxml-5.4.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:ca755eebf0d9e62d6cb013f1261e510317a41bf4650f22963474a663fdfe02aa"},
{file = "lxml-5.4.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:4cd915c0fb1bed47b5e6d6edd424ac25856252f09120e3e8ba5154b6b921860e"},
{file = "lxml-5.4.0-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:226046e386556a45ebc787871d6d2467b32c37ce76c2680f5c608e25823ffc84"},
{file = "lxml-5.4.0-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:b108134b9667bcd71236c5a02aad5ddd073e372fb5d48ea74853e009fe38acb6"},
{file = "lxml-5.4.0-cp38-cp38-win32.whl", hash = "sha256:1320091caa89805df7dcb9e908add28166113dcd062590668514dbd510798c88"},
{file = "lxml-5.4.0-cp38-cp38-win_amd64.whl", hash = "sha256:073eb6dcdf1f587d9b88c8c93528b57eccda40209cf9be549d469b942b41d70b"},
{file = "lxml-5.4.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:bda3ea44c39eb74e2488297bb39d47186ed01342f0022c8ff407c250ac3f498e"},
{file = "lxml-5.4.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9ceaf423b50ecfc23ca00b7f50b64baba85fb3fb91c53e2c9d00bc86150c7e40"},
{file = "lxml-5.4.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:664cdc733bc87449fe781dbb1f309090966c11cc0c0cd7b84af956a02a8a4729"},
{file = "lxml-5.4.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:67ed8a40665b84d161bae3181aa2763beea3747f748bca5874b4af4d75998f87"},
{file = "lxml-5.4.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9b4a3bd174cc9cdaa1afbc4620c049038b441d6ba07629d89a83b408e54c35cd"},
{file = "lxml-5.4.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:b0989737a3ba6cf2a16efb857fb0dfa20bc5c542737fddb6d893fde48be45433"},
{file = "lxml-5.4.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:dc0af80267edc68adf85f2a5d9be1cdf062f973db6790c1d065e45025fa26140"},
{file = "lxml-5.4.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:639978bccb04c42677db43c79bdaa23785dc7f9b83bfd87570da8207872f1ce5"},
{file = "lxml-5.4.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:5a99d86351f9c15e4a901fc56404b485b1462039db59288b203f8c629260a142"},
{file = "lxml-5.4.0-cp39-cp39-win32.whl", hash = "sha256:3e6d5557989cdc3ebb5302bbdc42b439733a841891762ded9514e74f60319ad6"},
{file = "lxml-5.4.0-cp39-cp39-win_amd64.whl", hash = "sha256:a8c9b7f16b63e65bbba889acb436a1034a82d34fa09752d754f88d708eca80e1"},
{file = "lxml-5.4.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:1b717b00a71b901b4667226bba282dd462c42ccf618ade12f9ba3674e1fabc55"},
{file = "lxml-5.4.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:27a9ded0f0b52098ff89dd4c418325b987feed2ea5cc86e8860b0f844285d740"},
{file = "lxml-5.4.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b7ce10634113651d6f383aa712a194179dcd496bd8c41e191cec2099fa09de5"},
{file = "lxml-5.4.0-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:53370c26500d22b45182f98847243efb518d268374a9570409d2e2276232fd37"},
{file = "lxml-5.4.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c6364038c519dffdbe07e3cf42e6a7f8b90c275d4d1617a69bb59734c1a2d571"},
{file = "lxml-5.4.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:b12cb6527599808ada9eb2cd6e0e7d3d8f13fe7bbb01c6311255a15ded4c7ab4"},
{file = "lxml-5.4.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:5f11a1526ebd0dee85e7b1e39e39a0cc0d9d03fb527f56d8457f6df48a10dc0c"},
{file = "lxml-5.4.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:48b4afaf38bf79109bb060d9016fad014a9a48fb244e11b94f74ae366a64d252"},
{file = "lxml-5.4.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de6f6bb8a7840c7bf216fb83eec4e2f79f7325eca8858167b68708b929ab2172"},
{file = "lxml-5.4.0-pp37-pypy37_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:5cca36a194a4eb4e2ed6be36923d3cffd03dcdf477515dea687185506583d4c9"},
{file = "lxml-5.4.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:b7c86884ad23d61b025989d99bfdd92a7351de956e01c61307cb87035960bcb1"},
{file = "lxml-5.4.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:53d9469ab5460402c19553b56c3648746774ecd0681b1b27ea74d5d8a3ef5590"},
{file = "lxml-5.4.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:56dbdbab0551532bb26c19c914848d7251d73edb507c3079d6805fa8bba5b706"},
{file = "lxml-5.4.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:14479c2ad1cb08b62bb941ba8e0e05938524ee3c3114644df905d2331c76cd57"},
{file = "lxml-5.4.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:32697d2ea994e0db19c1df9e40275ffe84973e4232b5c274f47e7c1ec9763cdd"},
{file = "lxml-5.4.0-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:24f6df5f24fc3385f622c0c9d63fe34604893bc1a5bdbb2dbf5870f85f9a404a"},
{file = "lxml-5.4.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:151d6c40bc9db11e960619d2bf2ec5829f0aaffb10b41dcf6ad2ce0f3c0b2325"},
{file = "lxml-5.4.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:4025bf2884ac4370a3243c5aa8d66d3cb9e15d3ddd0af2d796eccc5f0244390e"},
{file = "lxml-5.4.0-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:9459e6892f59ecea2e2584ee1058f5d8f629446eab52ba2305ae13a32a059530"},
{file = "lxml-5.4.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:47fb24cc0f052f0576ea382872b3fc7e1f7e3028e53299ea751839418ade92a6"},
{file = "lxml-5.4.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:50441c9de951a153c698b9b99992e806b71c1f36d14b154592580ff4a9d0d877"},
{file = "lxml-5.4.0-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:ab339536aa798b1e17750733663d272038bf28069761d5be57cb4a9b0137b4f8"},
{file = "lxml-5.4.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:9776af1aad5a4b4a1317242ee2bea51da54b2a7b7b48674be736d463c999f37d"},
{file = "lxml-5.4.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:63e7968ff83da2eb6fdda967483a7a023aa497d85ad8f05c3ad9b1f2e8c84987"},
{file = "lxml-5.4.0.tar.gz", hash = "sha256:d12832e1dbea4be280b22fd0ea7c9b87f0d8fc51ba06e92dc62d52f804f78ebd"},
{file = "lxml-5.3.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:c4b84d6b580a9625dfa47269bf1fd7fbba7ad69e08b16366a46acb005959c395"},
{file = "lxml-5.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b4c08ecb26e4270a62f81f81899dfff91623d349e433b126931c9c4577169666"},
{file = "lxml-5.3.2-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ef926e9f11e307b5a7c97b17c5c609a93fb59ffa8337afac8f89e6fe54eb0b37"},
{file = "lxml-5.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:017ceeabe739100379fe6ed38b033cd244ce2da4e7f6f07903421f57da3a19a2"},
{file = "lxml-5.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:dae97d9435dc90590f119d056d233c33006b2fd235dd990d5564992261ee7ae8"},
{file = "lxml-5.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:910f39425c6798ce63c93976ae5af5fff6949e2cb446acbd44d6d892103eaea8"},
{file = "lxml-5.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9780de781a0d62a7c3680d07963db3048b919fc9e3726d9cfd97296a65ffce1"},
{file = "lxml-5.3.2-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:1a06b0c6ba2e3ca45a009a78a4eb4d6b63831830c0a83dcdc495c13b9ca97d3e"},
{file = "lxml-5.3.2-cp310-cp310-manylinux_2_28_ppc64le.whl", hash = "sha256:4c62d0a34d1110769a1bbaf77871a4b711a6f59c4846064ccb78bc9735978644"},
{file = "lxml-5.3.2-cp310-cp310-manylinux_2_28_s390x.whl", hash = "sha256:8f961a4e82f411b14538fe5efc3e6b953e17f5e809c463f0756a0d0e8039b700"},
{file = "lxml-5.3.2-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:3dfc78f5f9251b6b8ad37c47d4d0bfe63ceb073a916e5b50a3bf5fd67a703335"},
{file = "lxml-5.3.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:10e690bc03214d3537270c88e492b8612d5e41b884f232df2b069b25b09e6711"},
{file = "lxml-5.3.2-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:aa837e6ee9534de8d63bc4c1249e83882a7ac22bd24523f83fad68e6ffdf41ae"},
{file = "lxml-5.3.2-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:da4c9223319400b97a2acdfb10926b807e51b69eb7eb80aad4942c0516934858"},
{file = "lxml-5.3.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:dc0e9bdb3aa4d1de703a437576007d366b54f52c9897cae1a3716bb44fc1fc85"},
{file = "lxml-5.3.2-cp310-cp310-win32.win32.whl", hash = "sha256:dd755a0a78dd0b2c43f972e7b51a43be518ebc130c9f1a7c4480cf08b4385486"},
{file = "lxml-5.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:d64ea1686474074b38da13ae218d9fde0d1dc6525266976808f41ac98d9d7980"},
{file = "lxml-5.3.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9d61a7d0d208ace43986a92b111e035881c4ed45b1f5b7a270070acae8b0bfb4"},
{file = "lxml-5.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:856dfd7eda0b75c29ac80a31a6411ca12209183e866c33faf46e77ace3ce8a79"},
{file = "lxml-5.3.2-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7a01679e4aad0727bedd4c9407d4d65978e920f0200107ceeffd4b019bd48529"},
{file = "lxml-5.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b6b37b4c3acb8472d191816d4582379f64d81cecbdce1a668601745c963ca5cc"},
{file = "lxml-5.3.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3df5a54e7b7c31755383f126d3a84e12a4e0333db4679462ef1165d702517477"},
{file = "lxml-5.3.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c09a40f28dcded933dc16217d6a092be0cc49ae25811d3b8e937c8060647c353"},
{file = "lxml-5.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a1ef20f1851ccfbe6c5a04c67ec1ce49da16ba993fdbabdce87a92926e505412"},
{file = "lxml-5.3.2-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:f79a63289dbaba964eb29ed3c103b7911f2dce28c36fe87c36a114e6bd21d7ad"},
{file = "lxml-5.3.2-cp311-cp311-manylinux_2_28_ppc64le.whl", hash = "sha256:75a72697d95f27ae00e75086aed629f117e816387b74a2f2da6ef382b460b710"},
{file = "lxml-5.3.2-cp311-cp311-manylinux_2_28_s390x.whl", hash = "sha256:b9b00c9ee1cc3a76f1f16e94a23c344e0b6e5c10bec7f94cf2d820ce303b8c01"},
{file = "lxml-5.3.2-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:77cbcab50cbe8c857c6ba5f37f9a3976499c60eada1bf6d38f88311373d7b4bc"},
{file = "lxml-5.3.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:29424058f072a24622a0a15357bca63d796954758248a72da6d512f9bd9a4493"},
{file = "lxml-5.3.2-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:7d82737a8afe69a7c80ef31d7626075cc7d6e2267f16bf68af2c764b45ed68ab"},
{file = "lxml-5.3.2-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:95473d1d50a5d9fcdb9321fdc0ca6e1edc164dce4c7da13616247d27f3d21e31"},
{file = "lxml-5.3.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:2162068f6da83613f8b2a32ca105e37a564afd0d7009b0b25834d47693ce3538"},
{file = "lxml-5.3.2-cp311-cp311-win32.whl", hash = "sha256:f8695752cf5d639b4e981afe6c99e060621362c416058effd5c704bede9cb5d1"},
{file = "lxml-5.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:d1a94cbb4ee64af3ab386c2d63d6d9e9cf2e256ac0fd30f33ef0a3c88f575174"},
{file = "lxml-5.3.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:16b3897691ec0316a1aa3c6585f61c8b7978475587c5b16fc1d2c28d283dc1b0"},
{file = "lxml-5.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:a8d4b34a0eeaf6e73169dcfd653c8d47f25f09d806c010daf074fba2db5e2d3f"},
{file = "lxml-5.3.2-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9cd7a959396da425022e1e4214895b5cfe7de7035a043bcc2d11303792b67554"},
{file = "lxml-5.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cac5eaeec3549c5df7f8f97a5a6db6963b91639389cdd735d5a806370847732b"},
{file = "lxml-5.3.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:29b5f7d77334877c2146e7bb8b94e4df980325fab0a8af4d524e5d43cd6f789d"},
{file = "lxml-5.3.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:13f3495cfec24e3d63fffd342cc8141355d1d26ee766ad388775f5c8c5ec3932"},
{file = "lxml-5.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e70ad4c9658beeff99856926fd3ee5fde8b519b92c693f856007177c36eb2e30"},
{file = "lxml-5.3.2-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:507085365783abd7879fa0a6fa55eddf4bdd06591b17a2418403bb3aff8a267d"},
{file = "lxml-5.3.2-cp312-cp312-manylinux_2_28_ppc64le.whl", hash = "sha256:5bb304f67cbf5dfa07edad904732782cbf693286b9cd85af27059c5779131050"},
{file = "lxml-5.3.2-cp312-cp312-manylinux_2_28_s390x.whl", hash = "sha256:3d84f5c093645c21c29a4e972b84cb7cf682f707f8706484a5a0c7ff13d7a988"},
{file = "lxml-5.3.2-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:bdc13911db524bd63f37b0103af014b7161427ada41f1b0b3c9b5b5a9c1ca927"},
{file = "lxml-5.3.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:1ec944539543f66ebc060ae180d47e86aca0188bda9cbfadff47d86b0dc057dc"},
{file = "lxml-5.3.2-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:59d437cc8a7f838282df5a199cf26f97ef08f1c0fbec6e84bd6f5cc2b7913f6e"},
{file = "lxml-5.3.2-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:0e275961adbd32e15672e14e0cc976a982075208224ce06d149c92cb43db5b93"},
{file = "lxml-5.3.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:038aeb6937aa404480c2966b7f26f1440a14005cb0702078c173c028eca72c31"},
{file = "lxml-5.3.2-cp312-cp312-win32.whl", hash = "sha256:3c2c8d0fa3277147bff180e3590be67597e17d365ce94beb2efa3138a2131f71"},
{file = "lxml-5.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:77809fcd97dfda3f399102db1794f7280737b69830cd5c961ac87b3c5c05662d"},
{file = "lxml-5.3.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:77626571fb5270ceb36134765f25b665b896243529eefe840974269b083e090d"},
{file = "lxml-5.3.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:78a533375dc7aa16d0da44af3cf6e96035e484c8c6b2b2445541a5d4d3d289ee"},
{file = "lxml-5.3.2-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a6f62b2404b3f3f0744bbcabb0381c5fe186fa2a9a67ecca3603480f4846c585"},
{file = "lxml-5.3.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2ea918da00091194526d40c30c4996971f09dacab032607581f8d8872db34fbf"},
{file = "lxml-5.3.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c35326f94702a7264aa0eea826a79547d3396a41ae87a70511b9f6e9667ad31c"},
{file = "lxml-5.3.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e3bef90af21d31c4544bc917f51e04f94ae11b43156356aff243cdd84802cbf2"},
{file = "lxml-5.3.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:52fa7ba11a495b7cbce51573c73f638f1dcff7b3ee23697467dc063f75352a69"},
{file = "lxml-5.3.2-cp313-cp313-manylinux_2_28_aarch64.whl", hash = "sha256:ad131e2c4d2c3803e736bb69063382334e03648de2a6b8f56a878d700d4b557d"},
{file = "lxml-5.3.2-cp313-cp313-manylinux_2_28_ppc64le.whl", hash = "sha256:00a4463ca409ceacd20490a893a7e08deec7870840eff33dc3093067b559ce3e"},
{file = "lxml-5.3.2-cp313-cp313-manylinux_2_28_s390x.whl", hash = "sha256:87e8d78205331cace2b73ac8249294c24ae3cba98220687b5b8ec5971a2267f1"},
{file = "lxml-5.3.2-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:bf6389133bb255e530a4f2f553f41c4dd795b1fbb6f797aea1eff308f1e11606"},
{file = "lxml-5.3.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:b3709fc752b42fb6b6ffa2ba0a5b9871646d97d011d8f08f4d5b3ee61c7f3b2b"},
{file = "lxml-5.3.2-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:abc795703d0de5d83943a4badd770fbe3d1ca16ee4ff3783d7caffc252f309ae"},
{file = "lxml-5.3.2-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:98050830bb6510159f65d9ad1b8aca27f07c01bb3884ba95f17319ccedc4bcf9"},
{file = "lxml-5.3.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6ba465a91acc419c5682f8b06bcc84a424a7aa5c91c220241c6fd31de2a72bc6"},
{file = "lxml-5.3.2-cp313-cp313-win32.whl", hash = "sha256:56a1d56d60ea1ec940f949d7a309e0bff05243f9bd337f585721605670abb1c1"},
{file = "lxml-5.3.2-cp313-cp313-win_amd64.whl", hash = "sha256:1a580dc232c33d2ad87d02c8a3069d47abbcdce974b9c9cc82a79ff603065dbe"},
{file = "lxml-5.3.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:1a59f7fe888d0ec1916d0ad69364c5400cfa2f885ae0576d909f342e94d26bc9"},
{file = "lxml-5.3.2-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d67b50abc2df68502a26ed2ccea60c1a7054c289fb7fc31c12e5e55e4eec66bd"},
{file = "lxml-5.3.2-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2cb08d2cb047c98d6fbbb2e77d6edd132ad6e3fa5aa826ffa9ea0c9b1bc74a84"},
{file = "lxml-5.3.2-cp36-cp36m-manylinux_2_28_x86_64.whl", hash = "sha256:495ddb7e10911fb4d673d8aa8edd98d1eadafb3b56e8c1b5f427fd33cadc455b"},
{file = "lxml-5.3.2-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:884d9308ac7d581b705a3371185282e1b8eebefd68ccf288e00a2d47f077cc51"},
{file = "lxml-5.3.2-cp36-cp36m-musllinux_1_2_x86_64.whl", hash = "sha256:37f3d7cf7f2dd2520df6cc8a13df4c3e3f913c8e0a1f9a875e44f9e5f98d7fee"},
{file = "lxml-5.3.2-cp36-cp36m-win32.whl", hash = "sha256:e885a1bf98a76dff0a0648850c3083b99d9358ef91ba8fa307c681e8e0732503"},
{file = "lxml-5.3.2-cp36-cp36m-win_amd64.whl", hash = "sha256:b45f505d0d85f4cdd440cd7500689b8e95110371eaa09da0c0b1103e9a05030f"},
{file = "lxml-5.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b53cd668facd60b4f0dfcf092e01bbfefd88271b5b4e7b08eca3184dd006cb30"},
{file = "lxml-5.3.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e5dea998c891f082fe204dec6565dbc2f9304478f2fc97bd4d7a940fec16c873"},
{file = "lxml-5.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d46bc3e58b01e4f38d75e0d7f745a46875b7a282df145aca9d1479c65ff11561"},
{file = "lxml-5.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:661feadde89159fd5f7d7639a81ccae36eec46974c4a4d5ccce533e2488949c8"},
{file = "lxml-5.3.2-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:43af2a69af2cacc2039024da08a90174e85f3af53483e6b2e3485ced1bf37151"},
{file = "lxml-5.3.2-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:1539f962d82436f3d386eb9f29b2a29bb42b80199c74a695dff51b367a61ec0a"},
{file = "lxml-5.3.2-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:6673920bf976421b5fac4f29b937702eef4555ee42329546a5fc68bae6178a48"},
{file = "lxml-5.3.2-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:9fa722a9cd8845594593cce399a49aa6bfc13b6c83a7ee05e2ab346d9253d52f"},
{file = "lxml-5.3.2-cp37-cp37m-win32.whl", hash = "sha256:2eadd4efa487f4710755415aed3d6ae9ac8b4327ea45226ffccb239766c8c610"},
{file = "lxml-5.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:83d8707b1b08cd02c04d3056230ec3b771b18c566ec35e723e60cdf037064e08"},
{file = "lxml-5.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bc6e8678bfa5ccba370103976ccfcf776c85c83da9220ead41ea6fd15d2277b4"},
{file = "lxml-5.3.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0bed509662f67f719119ad56006cd4a38efa68cfa74383060612044915e5f7ad"},
{file = "lxml-5.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e3925975fadd6fd72a6d80541a6ec75dfbad54044a03aa37282dafcb80fbdfa"},
{file = "lxml-5.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:83c0462dedc5213ac586164c6d7227da9d4d578cf45dd7fbab2ac49b63a008eb"},
{file = "lxml-5.3.2-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:53e3f9ca72858834688afa17278649d62aa768a4b2018344be00c399c4d29e95"},
{file = "lxml-5.3.2-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:32ba634ef3f1b20f781019a91d78599224dc45745dd572f951adbf1c0c9b0d75"},
{file = "lxml-5.3.2-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:1b16504c53f41da5fcf04868a80ac40a39d3eec5329caf761114caec6e844ad1"},
{file = "lxml-5.3.2-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:1f9682786138549da44ca4c49b20e7144d063b75f2b2ba611f4cff9b83db1062"},
{file = "lxml-5.3.2-cp38-cp38-win32.whl", hash = "sha256:d8f74ef8aacdf6ee5c07566a597634bb8535f6b53dc89790db43412498cf6026"},
{file = "lxml-5.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:49f1cee0fa27e1ee02589c696a9bdf4027e7427f184fa98e6bef0c6613f6f0fa"},
{file = "lxml-5.3.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:741c126bcf9aa939e950e64e5e0a89c8e01eda7a5f5ffdfc67073f2ed849caea"},
{file = "lxml-5.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ab6e9e6aca1fd7d725ffa132286e70dee5b9a4561c5ed291e836440b82888f89"},
{file = "lxml-5.3.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58e8c9b9ed3c15c2d96943c14efc324b69be6352fe5585733a7db2bf94d97841"},
{file = "lxml-5.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7811828ddfb8c23f4f1fbf35e7a7b2edec2f2e4c793dee7c52014f28c4b35238"},
{file = "lxml-5.3.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:72968623efb1e12e950cbdcd1d0f28eb14c8535bf4be153f1bfffa818b1cf189"},
{file = "lxml-5.3.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ebfceaa2ea588b54efb6160e3520983663d45aed8a3895bb2031ada080fb5f04"},
{file = "lxml-5.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d685d458505b2bfd2e28c812749fe9194a2b0ce285a83537e4309a187ffa270b"},
{file = "lxml-5.3.2-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:334e0e414dab1f5366ead8ca34ec3148415f236d5660e175f1d640b11d645847"},
{file = "lxml-5.3.2-cp39-cp39-manylinux_2_28_ppc64le.whl", hash = "sha256:02e56f7de72fa82561eae69628a7d6febd7891d72248c7ff7d3e7814d4031017"},
{file = "lxml-5.3.2-cp39-cp39-manylinux_2_28_s390x.whl", hash = "sha256:638d06b4e1d34d1a074fa87deed5fb55c18485fa0dab97abc5604aad84c12031"},
{file = "lxml-5.3.2-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:354dab7206d22d7a796fa27c4c5bffddd2393da2ad61835355a4759d435beb47"},
{file = "lxml-5.3.2-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:d9d9f82ff2c3bf9bb777cb355149f7f3a98ec58f16b7428369dc27ea89556a4c"},
{file = "lxml-5.3.2-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:95ad58340e3b7d2b828efc370d1791856613c5cb62ae267158d96e47b3c978c9"},
{file = "lxml-5.3.2-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:30fe05f4b7f6e9eb32862745512e7cbd021070ad0f289a7f48d14a0d3fc1d8a9"},
{file = "lxml-5.3.2-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:34c688fef86f73dbca0798e0a61bada114677006afa524a8ce97d9e5fabf42e6"},
{file = "lxml-5.3.2-cp39-cp39-win32.whl", hash = "sha256:4d6d3d1436d57f41984920667ec5ef04bcb158f80df89ac4d0d3f775a2ac0c87"},
{file = "lxml-5.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:2996e1116bbb3ae2a1fbb2ba4da8f92742290b4011e7e5bce2bd33bbc9d9485a"},
{file = "lxml-5.3.2-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:521ab9c80b98c30b2d987001c3ede2e647e92eeb2ca02e8cb66ef5122d792b24"},
{file = "lxml-5.3.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6f1231b0f9810289d41df1eacc4ebb859c63e4ceee29908a0217403cddce38d0"},
{file = "lxml-5.3.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:271f1a4d5d2b383c36ad8b9b489da5ea9c04eca795a215bae61ed6a57cf083cd"},
{file = "lxml-5.3.2-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:6fca8a5a13906ba2677a5252752832beb0f483a22f6c86c71a2bb320fba04f61"},
{file = "lxml-5.3.2-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:ea0c3b7922209160faef194a5b6995bfe7fa05ff7dda6c423ba17646b7b9de10"},
{file = "lxml-5.3.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:0a006390834603e5952a2ff74b9a31a6007c7cc74282a087aa6467afb4eea987"},
{file = "lxml-5.3.2-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:eae4136a3b8c4cf76f69461fc8f9410d55d34ea48e1185338848a888d71b9675"},
{file = "lxml-5.3.2-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d48e06be8d8c58e7feaedd8a37897a6122637efb1637d7ce00ddf5f11f9a92ad"},
{file = "lxml-5.3.2-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d4b83aed409134093d90e114007034d2c1ebcd92e501b71fd9ec70e612c8b2eb"},
{file = "lxml-5.3.2-pp37-pypy37_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:7a0e77edfe26d3703f954d46bed52c3ec55f58586f18f4b7f581fc56954f1d84"},
{file = "lxml-5.3.2-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:19f6fcfd15b82036b4d235749d78785eb9c991c7812012dc084e0d8853b4c1c0"},
{file = "lxml-5.3.2-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:d49919c95d31ee06eefd43d8c6f69a3cc9bdf0a9b979cc234c4071f0eb5cb173"},
{file = "lxml-5.3.2-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:2d0a60841410123c533990f392819804a8448853f06daf412c0f383443925e89"},
{file = "lxml-5.3.2-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4b7f729e03090eb4e3981f10efaee35e6004b548636b1a062b8b9a525e752abc"},
{file = "lxml-5.3.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:579df6e20d8acce3bcbc9fb8389e6ae00c19562e929753f534ba4c29cfe0be4b"},
{file = "lxml-5.3.2-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:2abcf3f3b8367d6400b908d00d4cd279fc0b8efa287e9043820525762d383699"},
{file = "lxml-5.3.2-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:348c06cb2e3176ce98bee8c397ecc89181681afd13d85870df46167f140a305f"},
{file = "lxml-5.3.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:617ecaccd565cbf1ac82ffcaa410e7da5bd3a4b892bb3543fb2fe19bd1c4467d"},
{file = "lxml-5.3.2-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:c3eb4278dcdb9d86265ed2c20b9ecac45f2d6072e3904542e591e382c87a9c00"},
{file = "lxml-5.3.2-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:258b6b53458c5cbd2a88795557ff7e0db99f73a96601b70bc039114cd4ee9e02"},
{file = "lxml-5.3.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0a9d8d25ed2f2183e8471c97d512a31153e123ac5807f61396158ef2793cb6e"},
{file = "lxml-5.3.2-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:73bcb635a848c18a3e422ea0ab0092f2e4ef3b02d8ebe87ab49748ebc8ec03d8"},
{file = "lxml-5.3.2-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:1545de0a69a16ced5767bae8cca1801b842e6e49e96f5e4a8a5acbef023d970b"},
{file = "lxml-5.3.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:165fcdc2f40fc0fe88a3c3c06c9c2a097388a90bda6a16e6f7c9199c903c9b8e"},
{file = "lxml-5.3.2.tar.gz", hash = "sha256:773947d0ed809ddad824b7b14467e1a481b8976e87278ac4a730c2f7c7fcddc1"},
]
[package.extras]
@@ -4982,7 +4988,6 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f66efbc1caa63c088dead1c4170d148eabc9b80d95fb75b6c92ac0aad2437d76"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:22353049ba4181685023b25b5b51a574bce33e7f51c759371a7422dcae5402a6"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:932205970b9f9991b34f55136be327501903f7c66830e9760a8ffb15b07f05cd"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:a52d48f4e7bf9005e8f0a89209bf9a73f7190ddf0489eee5eb51377385f59f2a"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-win32.whl", hash = "sha256:3eac5a91891ceb88138c113f9db04f3cebdae277f5d44eaa3651a4f573e6a5da"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-win_amd64.whl", hash = "sha256:ab007f2f5a87bd08ab1499bdf96f3d5c6ad4dcfa364884cb4549aa0154b13a28"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-macosx_13_0_arm64.whl", hash = "sha256:4a6679521a58256a90b0d89e03992c15144c5f3858f40d7c18886023d7943db6"},
@@ -4991,7 +4996,6 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:811ea1594b8a0fb466172c384267a4e5e367298af6b228931f273b111f17ef52"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:cf12567a7b565cbf65d438dec6cfbe2917d3c1bdddfce84a9930b7d35ea59642"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:7dd5adc8b930b12c8fc5b99e2d535a09889941aa0d0bd06f4749e9a9397c71d2"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1492a6051dab8d912fc2adeef0e8c72216b24d57bd896ea607cb90bb0c4981d3"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-win32.whl", hash = "sha256:bd0a08f0bab19093c54e18a14a10b4322e1eacc5217056f3c063bd2f59853ce4"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-win_amd64.whl", hash = "sha256:a274fb2cb086c7a3dea4322ec27f4cb5cc4b6298adb583ab0e211a4682f241eb"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:20b0f8dc160ba83b6dcc0e256846e1a02d044e13f7ea74a3d1d56ede4e48c632"},
@@ -5000,7 +5004,6 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:749c16fcc4a2b09f28843cda5a193e0283e47454b63ec4b81eaa2242f50e4ccd"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:bf165fef1f223beae7333275156ab2022cffe255dcc51c27f066b4370da81e31"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:32621c177bbf782ca5a18ba4d7af0f1082a3f6e517ac2a18b3974d4edf349680"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b82a7c94a498853aa0b272fd5bc67f29008da798d4f93a2f9f289feb8426a58d"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-win32.whl", hash = "sha256:e8c4ebfcfd57177b572e2040777b8abc537cdef58a2120e830124946aa9b42c5"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-win_amd64.whl", hash = "sha256:0467c5965282c62203273b838ae77c0d29d7638c8a4e3a1c8bdd3602c10904e4"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:4c8c5d82f50bb53986a5e02d1b3092b03622c02c2eb78e29bec33fd9593bae1a"},
@@ -5009,7 +5012,6 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96777d473c05ee3e5e3c3e999f5d23c6f4ec5b0c38c098b3a5229085f74236c6"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_1_i686.whl", hash = "sha256:3bc2a80e6420ca8b7d3590791e2dfc709c88ab9152c00eeb511c9875ce5778bf"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:e188d2699864c11c36cdfdada94d781fd5d6b0071cd9c427bceb08ad3d7c70e1"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:4f6f3eac23941b32afccc23081e1f50612bdbe4e982012ef4f5797986828cd01"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-win32.whl", hash = "sha256:6442cb36270b3afb1b4951f060eccca1ce49f3d087ca1ca4563a6eb479cb3de6"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-win_amd64.whl", hash = "sha256:e5b8daf27af0b90da7bb903a876477a9e6d7270be6146906b276605997c7e9a3"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:fc4b630cd3fa2cf7fce38afa91d7cfe844a9f75d7f0f36393fa98815e911d987"},
@@ -5018,7 +5020,6 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e2f1c3765db32be59d18ab3953f43ab62a761327aafc1594a2a1fbe038b8b8a7"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:d85252669dc32f98ebcd5d36768f5d4faeaeaa2d655ac0473be490ecdae3c285"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:e143ada795c341b56de9418c58d028989093ee611aa27ffb9b7f609c00d813ed"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:2c59aa6170b990d8d2719323e628aaf36f3bfbc1c26279c0eeeb24d05d2d11c7"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-win32.whl", hash = "sha256:beffaed67936fbbeffd10966a4eb53c402fafd3d6833770516bf7314bc6ffa12"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-win_amd64.whl", hash = "sha256:040ae85536960525ea62868b642bdb0c2cc6021c9f9d507810c0c604e66f5a7b"},
{file = "ruamel.yaml.clib-0.2.12.tar.gz", hash = "sha256:6c8fbb13ec503f99a91901ab46e0b07ae7941cd527393187039aec586fdfd36f"},
@@ -5693,35 +5694,70 @@ files = [
[[package]]
name = "xmlsec"
version = "1.3.15"
version = "1.3.14"
description = "Python bindings for the XML Security Library"
optional = false
python-versions = ">=3.5"
groups = ["main"]
files = [
{file = "xmlsec-1.3.15-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:60209f82a254a1d6083397c4eeae131e7ac2f64bfddb97f2b0b240369f03c4df"},
{file = "xmlsec-1.3.15-cp310-cp310-win32.whl", hash = "sha256:a62be0f8964bbec1efd2ca39b025c40da620a2ef9cb5440ff4ffa7e0c6906f70"},
{file = "xmlsec-1.3.15-cp310-cp310-win_amd64.whl", hash = "sha256:685b92860bbf048e3b725bd5e9310bd4d3515f7eafcb2c284dda62078a1ce90c"},
{file = "xmlsec-1.3.15-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c760230d4f77b7828857d076434e0810850eb2603775dc92fa9f760a98c2f694"},
{file = "xmlsec-1.3.15-cp311-cp311-win32.whl", hash = "sha256:901458034b7476e1fd0881a85814e184d00eec2b5df33b1ceeb312681e8cb9e8"},
{file = "xmlsec-1.3.15-cp311-cp311-win_amd64.whl", hash = "sha256:2ecbb65eea79a25769fbaa56c9e8bc4553aea63a9704795e962dfe06679b0191"},
{file = "xmlsec-1.3.15-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0edff08e0442cdcc82bebf353ba4bcfd5a022f4b2751052ee1564afc5c78bef4"},
{file = "xmlsec-1.3.15-cp312-cp312-win32.whl", hash = "sha256:e5c402e5633fd39f75fe124219d66d383a040ba04d0de54e024afeb7fe7d3e3a"},
{file = "xmlsec-1.3.15-cp312-cp312-win_amd64.whl", hash = "sha256:0c47f2347e8dcc0a48648b9702af53179618c204414a8e36926a9f61214ebf0b"},
{file = "xmlsec-1.3.15-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:6ac2154311d32a6571e22f224ed16356029e59bd5ca76edeb3922a809adfe89c"},
{file = "xmlsec-1.3.15-cp313-cp313-win32.whl", hash = "sha256:5ed218129f89b0592926ad2be42c017bece469db9b7380dc41bc09b01ca26d5d"},
{file = "xmlsec-1.3.15-cp313-cp313-win_amd64.whl", hash = "sha256:5fc29e69b064323317b3862751a3a8107670e0a17510ca4517bbdc1939a90b1a"},
{file = "xmlsec-1.3.15-cp36-cp36m-win32.whl", hash = "sha256:d0404dd76097b1f6dcbeff404c46cf045442a8cf9500f60c46a26ae03130ab9c"},
{file = "xmlsec-1.3.15-cp36-cp36m-win_amd64.whl", hash = "sha256:672bb43a12d6b8e2e4a392ef495ea731ded5acc1585f9358174295a6fb5df262"},
{file = "xmlsec-1.3.15-cp37-cp37m-win32.whl", hash = "sha256:96e24b22e862f0c50840a5af23cb7df186e7a1547b311a67ebca5b1e43ea0d86"},
{file = "xmlsec-1.3.15-cp37-cp37m-win_amd64.whl", hash = "sha256:bec066ce81a82a5a2b994b1e7be2af11715fd716a55754c645668acf9c5a64c0"},
{file = "xmlsec-1.3.15-cp38-cp38-win32.whl", hash = "sha256:95e80981b2e0ea74a7040cbf66b40072f4424298d7b50c3e587a026a7dab34ad"},
{file = "xmlsec-1.3.15-cp38-cp38-win_amd64.whl", hash = "sha256:c2a40f8549769ba5fdc223f0ae564d3b4d4ca52b6461d46bc508d3321267b2ad"},
{file = "xmlsec-1.3.15-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a2d5692a683054dec769f4a1d6e8fade88ddcfc2cef89b20d0ecc1c75deb0dd6"},
{file = "xmlsec-1.3.15-cp39-cp39-macosx_13_0_arm64.whl", hash = "sha256:f0115d3b4f156df2cfee8424d75dcb7f5ca2cb4870af18b713098830493d3cb0"},
{file = "xmlsec-1.3.15-cp39-cp39-win32.whl", hash = "sha256:ffb32d3c5af289c8598d4f9215c9f8f6c208f1551e78f0180f525bc08c8a67d2"},
{file = "xmlsec-1.3.15-cp39-cp39-win_amd64.whl", hash = "sha256:3211da05c11c7a0d2b913a7834bff59e649150f41127949b3322442bc3986b56"},
{file = "xmlsec-1.3.15.tar.gz", hash = "sha256:baa856b83d0012e278e6f6cbec96ac8128de667ca9fa9a2eeb02c752e816f6d8"},
{file = "xmlsec-1.3.14-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:4dea6df3ffcb65d0b215678c3a0fe7bbc66785d6eae81291296e372498bad43a"},
{file = "xmlsec-1.3.14-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:1fa1311f7489d050dde9028f5a2b5849c2927bb09c9a93491cb2f28fdc563912"},
{file = "xmlsec-1.3.14-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:28cd9f513cf01dc0c5b9d9f0728714ecde2e7f46b3b6f63de91f4ae32f3008b3"},
{file = "xmlsec-1.3.14-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:77749b338503fb6e151052c664064b34264f4168e2cb0cca1de78b7e5312a783"},
{file = "xmlsec-1.3.14-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4af81ce8044862ec865782efd353d22abdcd95b92364eef3c934de57ae6d5852"},
{file = "xmlsec-1.3.14-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:cf35a25be3eb6263b2e0544ba26294651113fab79064f994d347a2ca5973e8e2"},
{file = "xmlsec-1.3.14-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:004e8a82e26728bf8a60f8ece1ef3ffafdac30ef538139dfe28870e8503ca64a"},
{file = "xmlsec-1.3.14-cp310-cp310-win32.whl", hash = "sha256:e6cbc914d77678db0c8bc39e723d994174633d18f9d6be4665ec29cce978a96d"},
{file = "xmlsec-1.3.14-cp310-cp310-win_amd64.whl", hash = "sha256:4922afa9234d1c5763950b26c328a5320019e55eb6000272a79dfe54fee8e704"},
{file = "xmlsec-1.3.14-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:7799a9ff3593f9dd43464e18b1a621640bffc40456c47c23383727f937dca7fc"},
{file = "xmlsec-1.3.14-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1fe23c2dd5f5dbcb24f40e2c1061e2672a32aabee7cf8ac5337036a485607d72"},
{file = "xmlsec-1.3.14-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0be3b7a28e54a03b87faf07fb3c6dc3e50a2c79b686718c3ad08300b8bf6bb67"},
{file = "xmlsec-1.3.14-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:48e894ad3e7de373f56efc09d6a56f7eae73a8dd4cec8943313134849e9c6607"},
{file = "xmlsec-1.3.14-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:204d3c586b8bd6f02a5d4c59850a8157205569d40c32567f49576fa5795d897d"},
{file = "xmlsec-1.3.14-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:6679cec780386d848e7351d4b0de92c4483289ea4f0a2187e216159f939a4c6b"},
{file = "xmlsec-1.3.14-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c4d41c83c8a2b8d8030204391ebeb6174fbdb044f0331653c4b5a4ce4150bcc0"},
{file = "xmlsec-1.3.14-cp311-cp311-win32.whl", hash = "sha256:df4aa0782a53032fd35e18dcd6d328d6126324bfcfdef0cb5c2856f25b4b6f94"},
{file = "xmlsec-1.3.14-cp311-cp311-win_amd64.whl", hash = "sha256:1072878301cb9243a54679e0520e6a5be2266c07a28b0ecef9e029d05a90ffcd"},
{file = "xmlsec-1.3.14-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:1eb3dcf244a52f796377112d8f238dbb522eb87facffb498425dc8582a84a6bf"},
{file = "xmlsec-1.3.14-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:330147ce59fbe56a9be5b2085d739c55a569f112576b3f1b33681f87416eaf33"},
{file = "xmlsec-1.3.14-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ed4034939d8566ccdcd3b4e4f23c63fd807fb8763ae5668d59a19e11640a8242"},
{file = "xmlsec-1.3.14-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a98eadfcb0c3b23ccceb7a2f245811f8d784bd287640dcfe696a26b9db1e2fc0"},
{file = "xmlsec-1.3.14-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:86ff7b2711557c1087b72b0a1a88d82eafbf2a6d38b97309a6f7101d4a7041c3"},
{file = "xmlsec-1.3.14-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:774d5d1e45f07f953c1cc14fd055c1063f0725f7248b6b0e681f59fd8638934d"},
{file = "xmlsec-1.3.14-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:bd10ca3201f164482775a7ce61bf7ee9aade2e7d032046044dd0f6f52c91d79d"},
{file = "xmlsec-1.3.14-cp312-cp312-win32.whl", hash = "sha256:19c86bab1498e4c2e56d8e2c878f461ccb6e56b67fd7522b0c8fda46d8910781"},
{file = "xmlsec-1.3.14-cp312-cp312-win_amd64.whl", hash = "sha256:d0762f4232bce2c7f6c0af329db8b821b4460bbe123a2528fb5677d03db7a4b5"},
{file = "xmlsec-1.3.14-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:03ccba7dacf197850de954666af0221c740a5de631a80136362a1559223fab75"},
{file = "xmlsec-1.3.14-cp36-cp36m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c12900e1903e289deb84eb893dca88591d6884d3e3cda4fb711b8812118416e8"},
{file = "xmlsec-1.3.14-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6566434e2e5c58e472362a6187f208601f1627a148683a6f92bd16479f1d9e20"},
{file = "xmlsec-1.3.14-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:2401e162aaab7d9416c3405bac7a270e5f370988a0f1f46f0f29b735edba87e1"},
{file = "xmlsec-1.3.14-cp36-cp36m-win32.whl", hash = "sha256:ba3b39c493e3b04354615068a3218f30897fcc2f42c6d8986d0c1d63aca87782"},
{file = "xmlsec-1.3.14-cp36-cp36m-win_amd64.whl", hash = "sha256:4edd8db4df04bbac9c4a5ab4af855b74fe2bf2c248d07cac2e6d92a485f1a685"},
{file = "xmlsec-1.3.14-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b6dd86f440fec9242515c64f0be93fec8b4289287db1f6de2651eee9995aaecb"},
{file = "xmlsec-1.3.14-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ad1634cabe0915fe2a12e142db0ed2daf5be80cbe3891a2cecbba0750195cc6b"},
{file = "xmlsec-1.3.14-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dba457ff87c39cbae3c5020475a728d24bbd9d00376df9af9724cd3bb59ff07a"},
{file = "xmlsec-1.3.14-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:12d90059308bb0c1b94bde065784e6852999d08b91bcb2048c17e62b954acb07"},
{file = "xmlsec-1.3.14-cp37-cp37m-win32.whl", hash = "sha256:ce4e165a1436697e5e39587c4fba24db4545a5c9801e0d749f1afd09ad3ab901"},
{file = "xmlsec-1.3.14-cp37-cp37m-win_amd64.whl", hash = "sha256:7e8e0171916026cbe8e2022c959558d02086655fd3c3466f2bc0451b09cf9ee8"},
{file = "xmlsec-1.3.14-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c42735cc68fdb4c6065cf0a0701dfff3a12a1734c63a36376349af9a5481f27b"},
{file = "xmlsec-1.3.14-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:38e035bf48300b7dbde2dd01d3b8569f8584fc9c73809be13886e6b6c77b74fb"},
{file = "xmlsec-1.3.14-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:73eabf5ef58189d81655058cf328c1dfa9893d89f1bff5fc941481f08533f338"},
{file = "xmlsec-1.3.14-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bddd2a2328b4e08c8a112e06cf2cd2b4d281f4ad94df15b4cef18f06cdc49d78"},
{file = "xmlsec-1.3.14-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:57fed3bc7943681c9ed4d2221600ab440f060d8d1a8f92f346f2b41effe175b8"},
{file = "xmlsec-1.3.14-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:147934bd39dfd840663fb6b920ea9201455fa886427975713f1b42d9f20b9b29"},
{file = "xmlsec-1.3.14-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:e732a75fcb6b84872b168f972fbbf3749baf76308635f14015d1d35ed0c5719c"},
{file = "xmlsec-1.3.14-cp38-cp38-win32.whl", hash = "sha256:b109cdf717257fd4daa77c1d3ec8a3fb2a81318a6d06a36c55a8a53ae381ae5e"},
{file = "xmlsec-1.3.14-cp38-cp38-win_amd64.whl", hash = "sha256:b7ba2ea38e3d9efa520b14f3c0b7d99a7c055244ae5ba8bc9f4ca73b18f3a215"},
{file = "xmlsec-1.3.14-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1b9b5de6bc69fdec23147e5f712cb05dc86df105462f254f140d743cc680cc7b"},
{file = "xmlsec-1.3.14-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:82ac81deb7d7bf5cc8a748148948e5df5386597ff43fb92ec651cc5c7addb0e7"},
{file = "xmlsec-1.3.14-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bae37b2920115cf00759ee9fb7841cbdebcef3a8a92734ab93ae8fa41ac581d"},
{file = "xmlsec-1.3.14-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4fac2a787ae3b9fb761f9aec6b9f10f2d1c1b87abb574ebd8ff68435bdc97e3d"},
{file = "xmlsec-1.3.14-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:34c61ec0c0e70fda710290ae74b9efe1928d9242ed82c4eecf97aa696cff68e6"},
{file = "xmlsec-1.3.14-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:995e87acecc263a2f6f2aa3cc204268f651cac8f4d7a2047f11b2cd49979cc38"},
{file = "xmlsec-1.3.14-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:2f84a1c509c52773365645a87949081ee9ea9c535cd452048cc8ca4ad3b45666"},
{file = "xmlsec-1.3.14-cp39-cp39-win32.whl", hash = "sha256:7882963e9cb9c0bd0e8c2715a29159a366417ff4a30d8baf42b05bc5cf249446"},
{file = "xmlsec-1.3.14-cp39-cp39-win_amd64.whl", hash = "sha256:a487c3d144f791c32f5e560aa27a705fba23171728b8a8511f36de053ff6bc93"},
{file = "xmlsec-1.3.14.tar.gz", hash = "sha256:934f804f2f895bcdb86f1eaee236b661013560ee69ec108d29cdd6e5f292a2d9"},
]
[package.dependencies]
@@ -5869,4 +5905,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.11,<3.13"
content-hash = "0750d4d8d4c0b020c87a5c6e3c459f1f5f445e6f1395f7e492adea9a901e2056"
content-hash = "6802b33984c2f8438c9dc02dac0a0c14d5a78af60251bd0c80ca59bc2182c48e"
+4 -2
@@ -23,12 +23,14 @@ dependencies = [
"drf-spectacular==0.27.2",
"drf-spectacular-jsonapi==0.5.1",
"gunicorn==23.0.0",
"lxml==5.3.2",
"prowler @ git+https://github.com/prowler-cloud/prowler.git@master",
"psycopg2-binary==2.9.9",
"pytest-celery[redis] (>=1.0.1,<2.0.0)",
"sentry-sdk[django] (>=2.20.0,<3.0.0)",
"uuid6==2024.7.10",
"openai (>=1.82.0,<2.0.0)"
"openai (>=1.82.0,<2.0.0)",
"xmlsec==1.3.14"
]
description = "Prowler's API (Django/DRF)"
license = "Apache-2.0"
@@ -36,7 +38,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
version = "1.9.0"
version = "1.10.0"
[project.scripts]
celery = "src.backend.config.settings.celery"
+33 -90
@@ -17,8 +17,8 @@ class ProwlerSocialAccountAdapter(DefaultSocialAccountAdapter):
def pre_social_login(self, request, sociallogin):
# Link existing accounts with the same email address
email = sociallogin.account.extra_data.get("email")
# if sociallogin.account.provider == "saml":
# email = sociallogin.user.email
if sociallogin.provider.id == "saml":
email = sociallogin.user.email
if email:
existing_user = self.get_user_by_email(email)
if existing_user:
@@ -31,98 +31,41 @@ class ProwlerSocialAccountAdapter(DefaultSocialAccountAdapter):
"""
with transaction.atomic(using=MainRouter.admin_db):
user = super().save_user(request, sociallogin, form)
provider = sociallogin.provider.id
extra = sociallogin.account.extra_data
# if provider == "saml":
# # Handle SAML-specific logic
# user.first_name = (
# extra.get("firstName", [""])[0] if extra.get("firstName") else ""
# )
# user.last_name = (
# extra.get("lastName", [""])[0] if extra.get("lastName") else ""
# )
# user.company_name = (
# extra.get("organization", [""])[0]
# if extra.get("organization")
# else ""
# )
# user.name = f"{user.first_name} {user.last_name}".strip()
# if user.name == "":
# user.name = "N/A"
# user.save(using=MainRouter.admin_db)
# email_domain = user.email.split("@")[-1]
# tenant = (
# SAMLConfiguration.objects.using(MainRouter.admin_db)
# .get(email_domain=email_domain)
# .tenant
# )
# with rls_transaction(str(tenant.id)):
# role_name = (
# extra.get("userType", ["saml_default_role"])[0].strip()
# if extra.get("userType")
# else "saml_default_role"
# )
# try:
# role = Role.objects.using(MainRouter.admin_db).get(
# name=role_name, tenant_id=tenant.id
# )
# except Role.DoesNotExist:
# role = Role.objects.using(MainRouter.admin_db).create(
# name=role_name,
# tenant_id=tenant.id,
# manage_users=False,
# manage_account=False,
# manage_billing=False,
# manage_providers=False,
# manage_integrations=False,
# manage_scans=False,
# unlimited_visibility=False,
# )
# Membership.objects.using(MainRouter.admin_db).create(
# user=user,
# tenant=tenant,
# role=Membership.RoleChoices.MEMBER,
# )
# UserRoleRelationship.objects.using(MainRouter.admin_db).create(
# user=user,
# role=role,
# tenant_id=tenant.id,
# )
# Handle other providers (e.g., GitHub, Google)
user.save(using=MainRouter.admin_db)
social_account_name = extra.get("name")
if social_account_name:
user.name = social_account_name
if provider != "saml":
# Handle other providers (e.g., GitHub, Google)
user.save(using=MainRouter.admin_db)
social_account_name = extra.get("name")
if social_account_name:
user.name = social_account_name
user.save(using=MainRouter.admin_db)
tenant = Tenant.objects.using(MainRouter.admin_db).create(
name=f"{user.email.split('@')[0]} default tenant"
)
with rls_transaction(str(tenant.id)):
Membership.objects.using(MainRouter.admin_db).create(
user=user, tenant=tenant, role=Membership.RoleChoices.OWNER
)
role = Role.objects.using(MainRouter.admin_db).create(
name="admin",
tenant_id=tenant.id,
manage_users=True,
manage_account=True,
manage_billing=True,
manage_providers=True,
manage_integrations=True,
manage_scans=True,
unlimited_visibility=True,
)
UserRoleRelationship.objects.using(MainRouter.admin_db).create(
user=user,
role=role,
tenant_id=tenant.id,
tenant = Tenant.objects.using(MainRouter.admin_db).create(
name=f"{user.email.split('@')[0]} default tenant"
)
with rls_transaction(str(tenant.id)):
Membership.objects.using(MainRouter.admin_db).create(
user=user, tenant=tenant, role=Membership.RoleChoices.OWNER
)
role = Role.objects.using(MainRouter.admin_db).create(
name="admin",
tenant_id=tenant.id,
manage_users=True,
manage_account=True,
manage_billing=True,
manage_providers=True,
manage_integrations=True,
manage_scans=True,
unlimited_visibility=True,
)
UserRoleRelationship.objects.using(MainRouter.admin_db).create(
user=user,
role=role,
tenant_id=tenant.id,
)
else:
request.session["saml_user_created"] = str(user.id)
return user
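For SAML logins the adapter now skips default-tenant provisioning and instead records the freshly created user id in the session. A hypothetical consumer of that flag (names below are illustrative, not from this changeset) could delete the half-provisioned user when the rest of the SAML flow fails, in the spirit of the "remove user in case of error" fix:
# Hypothetical helper - a sketch only, not part of this PR
from api.db_router import MainRouter
from api.models import User

def cleanup_saml_user_on_error(request):
    user_id = request.session.pop("saml_user_created", None)
    if user_id:
        # Drop the partially provisioned user so the next attempt starts clean
        User.objects.using(MainRouter.admin_db).filter(id=user_id).delete()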
+12
@@ -529,3 +529,15 @@ class IntegrationTypeEnum(EnumType):
class IntegrationTypeEnumField(PostgresEnumField):
def __init__(self, *args, **kwargs):
super().__init__("integration_type", *args, **kwargs)
# Postgres enum definition for Processor type
class ProcessorTypeEnum(EnumType):
enum_type_name = "processor_type"
class ProcessorTypeEnumField(PostgresEnumField):
def __init__(self, *args, **kwargs):
super().__init__("processor_type", *args, **kwargs)
+5
@@ -57,6 +57,11 @@ class TaskInProgressException(TaskManagementError):
super().__init__()
# Provider connection errors
class ProviderConnectionError(Exception):
"""Base exception for provider connection errors."""
def custom_exception_handler(exc, context):
if isinstance(exc, django_validation_error):
if hasattr(exc, "error_dict"):
+89
@@ -1,5 +1,6 @@
from datetime import date, datetime, timedelta, timezone
from dateutil.parser import parse
from django.conf import settings
from django.db.models import Q
from django_filters.rest_framework import (
@@ -28,6 +29,7 @@ from api.models import (
Invitation,
Membership,
PermissionChoices,
Processor,
Provider,
ProviderGroup,
ProviderSecret,
@@ -338,6 +340,8 @@ class ResourceFilter(ProviderRelationshipFilterSet):
tags = CharFilter(method="filter_tag")
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
scan = UUIDFilter(field_name="provider__scan", lookup_expr="exact")
scan__in = UUIDInFilter(field_name="provider__scan", lookup_expr="in")
class Meta:
model = Resource
@@ -352,6 +356,82 @@ class ResourceFilter(ProviderRelationshipFilterSet):
"updated_at": ["gte", "lte"],
}
def filter_queryset(self, queryset):
if not (self.data.get("scan") or self.data.get("scan__in")) and not (
self.data.get("updated_at")
or self.data.get("updated_at__date")
or self.data.get("updated_at__gte")
or self.data.get("updated_at__lte")
):
raise ValidationError(
[
{
"detail": "At least one date filter is required: filter[updated_at], filter[updated_at.gte], "
"or filter[updated_at.lte].",
"status": 400,
"source": {"pointer": "/data/attributes/updated_at"},
"code": "required",
}
]
)
gte_date = (
parse(self.data.get("updated_at__gte")).date()
if self.data.get("updated_at__gte")
else datetime.now(timezone.utc).date()
)
lte_date = (
parse(self.data.get("updated_at__lte")).date()
if self.data.get("updated_at__lte")
else datetime.now(timezone.utc).date()
)
if abs(lte_date - gte_date) > timedelta(
days=settings.FINDINGS_MAX_DAYS_IN_RANGE
):
raise ValidationError(
[
{
"detail": f"The date range cannot exceed {settings.FINDINGS_MAX_DAYS_IN_RANGE} days.",
"status": 400,
"source": {"pointer": "/data/attributes/updated_at"},
"code": "invalid",
}
]
)
return super().filter_queryset(queryset)
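The guard above makes date scoping mandatory for the resources list unless the request is already scoped to a scan, and caps the window at settings.FINDINGS_MAX_DAYS_IN_RANGE days. Illustrative JSON:API query strings (the /api/v1/resources path is an assumption):
# Accepted: bounded updated_at window
#   GET /api/v1/resources?filter[updated_at.gte]=2025-07-01&filter[updated_at.lte]=2025-07-07
# Accepted: scoped to a scan, so no date filter is required
#   GET /api/v1/resources?filter[scan]=<scan-uuid>
# Rejected with a 400 "required" error: neither a scan nor any updated_at filter
#   GET /api/v1/resources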
def filter_tag_key(self, queryset, name, value):
return queryset.filter(Q(tags__key=value) | Q(tags__key__icontains=value))
def filter_tag_value(self, queryset, name, value):
return queryset.filter(Q(tags__value=value) | Q(tags__value__icontains=value))
def filter_tag(self, queryset, name, value):
# We won't know what the user wants to filter on just based on the value,
# and we don't want to build special filtering logic for every possible
# provider tag spec, so we'll just do a full text search
return queryset.filter(tags__text_search=value)
class LatestResourceFilter(ProviderRelationshipFilterSet):
tag_key = CharFilter(method="filter_tag_key")
tag_value = CharFilter(method="filter_tag_value")
tag = CharFilter(method="filter_tag")
tags = CharFilter(method="filter_tag")
class Meta:
model = Resource
fields = {
"provider": ["exact", "in"],
"uid": ["exact", "icontains"],
"name": ["exact", "icontains"],
"region": ["exact", "icontains", "in"],
"service": ["exact", "icontains", "in"],
"type": ["exact", "icontains", "in"],
}
def filter_tag_key(self, queryset, name, value):
return queryset.filter(Q(tags__key=value) | Q(tags__key__icontains=value))
@@ -704,3 +784,12 @@ class IntegrationFilter(FilterSet):
fields = {
"inserted_at": ["date", "gte", "lte"],
}
class ProcessorFilter(FilterSet):
processor_type = ChoiceFilter(choices=Processor.ProcessorChoices.choices)
processor_type__in = ChoiceInFilter(
choices=Processor.ProcessorChoices.choices,
field_name="processor_type",
lookup_expr="in",
)
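ProcessorFilter follows the same ChoiceFilter/ChoiceInFilter pattern as the other filter sets in this module; assuming the processors endpoint is mounted under the same API root, a filtered request would look like:
#   GET /api/v1/processors?filter[processor_type]=mutelist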
@@ -3,7 +3,7 @@
"model": "api.user",
"pk": "8b38e2eb-6689-4f1e-a4ba-95b275130200",
"fields": {
"password": "pbkdf2_sha256$720000$vA62S78kog2c2ytycVQdke$Fp35GVLLMyy5fUq3krSL9I02A+ocQ+RVa4S22LIAO5s=",
"password": "pbkdf2_sha256$870000$Z63pGJ7nre48hfcGbk5S0O$rQpKczAmijs96xa+gPVJifpT3Fetb8DOusl5Eq6gxac=",
"last_login": null,
"name": "Devie Prowlerson",
"email": "dev@prowler.com",
@@ -16,7 +16,7 @@
"model": "api.user",
"pk": "b6493a3a-c997-489b-8b99-278bf74de9f6",
"fields": {
"password": "pbkdf2_sha256$720000$vA62S78kog2c2ytycVQdke$Fp35GVLLMyy5fUq3krSL9I02A+ocQ+RVa4S22LIAO5s=",
"password": "pbkdf2_sha256$870000$Z63pGJ7nre48hfcGbk5S0O$rQpKczAmijs96xa+gPVJifpT3Fetb8DOusl5Eq6gxac=",
"last_login": null,
"name": "Devietoo Prowlerson",
"email": "dev2@prowler.com",
+150
@@ -0,0 +1,150 @@
# Generated by Django 5.1.10 on 2025-07-02 15:47
import uuid
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models
import api.db_utils
import api.rls
class Migration(migrations.Migration):
dependencies = [
("api", "0031_scan_disable_on_cascade_periodic_tasks"),
]
operations = [
migrations.AlterField(
model_name="integration",
name="integration_type",
field=api.db_utils.IntegrationTypeEnumField(
choices=[
("amazon_s3", "Amazon S3"),
("aws_security_hub", "AWS Security Hub"),
("jira", "JIRA"),
("slack", "Slack"),
]
),
),
migrations.CreateModel(
name="SAMLToken",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
("updated_at", models.DateTimeField(auto_now=True)),
("expires_at", models.DateTimeField(editable=False)),
("token", models.JSONField(unique=True)),
(
"user",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
to=settings.AUTH_USER_MODEL,
),
),
],
options={
"db_table": "saml_tokens",
},
),
migrations.CreateModel(
name="SAMLConfiguration",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
(
"email_domain",
models.CharField(
help_text="Email domain used to identify the tenant, e.g. prowlerdemo.com",
max_length=254,
unique=True,
),
),
(
"metadata_xml",
models.TextField(
help_text="Raw IdP metadata XML to configure SingleSignOnService, certificates, etc."
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
("updated_at", models.DateTimeField(auto_now=True)),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "saml_configurations",
},
),
migrations.AddConstraint(
model_name="samlconfiguration",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_samlconfiguration",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.AddConstraint(
model_name="samlconfiguration",
constraint=models.UniqueConstraint(
fields=("tenant",), name="unique_samlconfig_per_tenant"
),
),
migrations.CreateModel(
name="SAMLDomainIndex",
fields=[
(
"id",
models.BigAutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("email_domain", models.CharField(max_length=254, unique=True)),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "saml_domain_index",
},
),
migrations.AddConstraint(
model_name="samldomainindex",
constraint=models.UniqueConstraint(
fields=("email_domain", "tenant"),
name="unique_resources_by_email_domain",
),
),
migrations.AddConstraint(
model_name="samldomainindex",
constraint=api.rls.BaseSecurityConstraint(
name="statements_on_samldomainindex",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
]
@@ -0,0 +1,34 @@
# Generated by Django 5.1.5 on 2025-03-03 15:46
from functools import partial
from django.db import migrations
from api.db_utils import PostgresEnumMigration, ProcessorTypeEnum, register_enum
from api.models import Processor
ProcessorTypeEnumMigration = PostgresEnumMigration(
enum_name="processor_type",
enum_values=tuple(
processor_type[0] for processor_type in Processor.ProcessorChoices.choices
),
)
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0032_saml"),
]
operations = [
migrations.RunPython(
ProcessorTypeEnumMigration.create_enum_type,
reverse_code=ProcessorTypeEnumMigration.drop_enum_type,
),
migrations.RunPython(
partial(register_enum, enum_class=ProcessorTypeEnum),
reverse_code=migrations.RunPython.noop,
),
]
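PostgresEnumMigration wraps plain DDL in RunPython hooks; with Processor.ProcessorChoices currently holding the single mutelist value, the pair above amounts to roughly this (a sketch):
# Forward:  CREATE TYPE processor_type AS ENUM ('mutelist');
# Reverse:  DROP TYPE processor_type;
# register_enum then makes the new type known to the database adapter, so
# later migrations and ORM queries can bind processor_type values directly.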
@@ -0,0 +1,88 @@
# Generated by Django 5.1.5 on 2025-03-26 13:04
import uuid
import django.db.models.deletion
from django.db import migrations, models
import api.db_utils
import api.rls
from api.rls import RowLevelSecurityConstraint
class Migration(migrations.Migration):
dependencies = [
("api", "0033_processors_enum"),
]
operations = [
migrations.CreateModel(
name="Processor",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
("updated_at", models.DateTimeField(auto_now=True)),
(
"processor_type",
api.db_utils.ProcessorTypeEnumField(
choices=[("mutelist", "Mutelist")]
),
),
("configuration", models.JSONField(default=dict)),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "processors",
"abstract": False,
"indexes": [
models.Index(
fields=["tenant_id", "id"], name="processor_tenant_id_idx"
),
models.Index(
fields=["tenant_id", "processor_type"],
name="processor_tenant_type_idx",
),
],
},
),
migrations.AddConstraint(
model_name="processor",
constraint=models.UniqueConstraint(
fields=("tenant_id", "processor_type"),
name="unique_processor_types_tenant",
),
),
migrations.AddConstraint(
model_name="processor",
constraint=RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_processor",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.AddField(
model_name="scan",
name="processor",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="scans",
related_query_name="scan",
to="api.processor",
),
),
]
@@ -0,0 +1,22 @@
import django.core.validators
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0034_processors"),
]
operations = [
migrations.AddField(
model_name="finding",
name="muted_reason",
field=models.TextField(
blank=True,
max_length=500,
null=True,
validators=[django.core.validators.MinLengthValidator(3)],
),
),
]
@@ -0,0 +1,30 @@
from functools import partial
from django.db import migrations
from api.db_utils import create_index_on_partitions, drop_index_on_partitions
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0035_finding_muted_reason"),
]
operations = [
migrations.RunPython(
partial(
create_index_on_partitions,
parent_table="resource_finding_mappings",
index_name="rfm_tenant_finding_idx",
columns="tenant_id, finding_id",
method="BTREE",
),
reverse_code=partial(
drop_index_on_partitions,
parent_table="resource_finding_mappings",
index_name="rfm_tenant_finding_idx",
),
),
]
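create_index_on_partitions fans the index out over every partition of resource_finding_mappings; per partition the DDL is approximately the following (a sketch — the per-partition index naming is an assumption):
#   CREATE INDEX <partition>_rfm_tenant_finding_idx
#       ON <partition> USING BTREE (tenant_id, finding_id);
# atomic = False lets each partition's statement commit on its own, so a
# failure midway does not roll back indexes already built.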
@@ -0,0 +1,17 @@
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0036_rfm_tenant_finding_index_partitions"),
]
operations = [
migrations.AddIndex(
model_name="resourcefindingmapping",
index=models.Index(
fields=["tenant_id", "finding_id"],
name="rfm_tenant_finding_idx",
),
),
]
@@ -0,0 +1,15 @@
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0037_rfm_tenant_finding_index_parent"),
]
operations = [
migrations.AddField(
model_name="resource",
name="failed_findings_count",
field=models.IntegerField(default=0),
)
]
@@ -0,0 +1,20 @@
from django.contrib.postgres.operations import AddIndexConcurrently
from django.db import migrations, models
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0038_resource_failed_findings_count"),
]
operations = [
AddIndexConcurrently(
model_name="resource",
index=models.Index(
fields=["tenant_id", "-failed_findings_count", "id"],
name="resources_failed_findings_idx",
),
),
]
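AddIndexConcurrently is why this migration sets atomic = False: CREATE INDEX CONCURRENTLY cannot run inside a transaction block. The generated statement is approximately:
#   CREATE INDEX CONCURRENTLY resources_failed_findings_idx
#       ON resources (tenant_id, failed_findings_count DESC, id);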
+316 -214
@@ -1,15 +1,21 @@
import json
import logging
import re
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone
from uuid import UUID, uuid4
from allauth.socialaccount.models import SocialApp
from config.custom_logging import BackendLogger
from config.settings.social_login import SOCIALACCOUNT_PROVIDERS
from cryptography.fernet import Fernet, InvalidToken
from django.conf import settings
from django.contrib.auth.models import AbstractBaseUser
from django.contrib.postgres.fields import ArrayField
from django.contrib.postgres.indexes import GinIndex
from django.contrib.postgres.search import SearchVector, SearchVectorField
from django.contrib.sites.models import Site
from django.core.exceptions import ValidationError
from django.core.validators import MinLengthValidator
from django.db import models
from django.db.models import Q
@@ -21,12 +27,14 @@ from psqlextra.models import PostgresPartitionedModel
from psqlextra.types import PostgresPartitioningMethod
from uuid6 import uuid7
from api.db_router import MainRouter
from api.db_utils import (
CustomUserManager,
FindingDeltaEnumField,
IntegrationTypeEnumField,
InvitationStateEnumField,
MemberRoleEnumField,
ProcessorTypeEnumField,
ProviderEnumField,
ProviderSecretTypeEnumField,
ScanTriggerEnumField,
@@ -402,20 +410,6 @@ class Scan(RowLevelSecurityProtectedModel):
name = models.CharField(
blank=True, null=True, max_length=100, validators=[MinLengthValidator(3)]
)
provider = models.ForeignKey(
Provider,
on_delete=models.CASCADE,
related_name="scans",
related_query_name="scan",
)
task = models.ForeignKey(
Task,
on_delete=models.CASCADE,
related_name="scans",
related_query_name="scan",
null=True,
blank=True,
)
trigger = ScanTriggerEnumField(
choices=TriggerChoices.choices,
)
@@ -434,8 +428,28 @@ class Scan(RowLevelSecurityProtectedModel):
PeriodicTask, on_delete=models.SET_NULL, null=True, blank=True
)
output_location = models.CharField(blank=True, null=True, max_length=200)
# TODO: mutelist foreign key
provider = models.ForeignKey(
Provider,
on_delete=models.CASCADE,
related_name="scans",
related_query_name="scan",
)
task = models.ForeignKey(
Task,
on_delete=models.CASCADE,
related_name="scans",
related_query_name="scan",
null=True,
blank=True,
)
processor = models.ForeignKey(
"Processor",
on_delete=models.SET_NULL,
related_name="scans",
related_query_name="scan",
null=True,
blank=True,
)
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "scans"
@@ -547,6 +561,8 @@ class Resource(RowLevelSecurityProtectedModel):
details = models.TextField(blank=True, null=True)
partition = models.TextField(blank=True, null=True)
failed_findings_count = models.IntegerField(default=0)
# Relationships
tags = models.ManyToManyField(
ResourceTag,
@@ -593,6 +609,10 @@ class Resource(RowLevelSecurityProtectedModel):
fields=["tenant_id", "provider_id"],
name="resources_tenant_provider_idx",
),
models.Index(
fields=["tenant_id", "-failed_findings_count", "id"],
name="resources_failed_findings_idx",
),
]
constraints = [
@@ -691,6 +711,9 @@ class Finding(PostgresPartitionedModel, RowLevelSecurityProtectedModel):
check_id = models.CharField(max_length=100, blank=False, null=False)
check_metadata = models.JSONField(default=dict, null=False)
muted = models.BooleanField(default=False, null=False)
muted_reason = models.TextField(
blank=True, null=True, validators=[MinLengthValidator(3)], max_length=500
)
compliance = models.JSONField(default=dict, null=True, blank=True)
# Denormalize resource data for performance
@@ -832,6 +855,12 @@ class ResourceFindingMapping(PostgresPartitionedModel, RowLevelSecurityProtected
# - tenant_id
# - id
indexes = [
models.Index(
fields=["tenant_id", "finding_id"],
name="rfm_tenant_finding_idx",
),
]
constraints = [
models.UniqueConstraint(
fields=("tenant_id", "resource_id", "finding_id"),
@@ -936,6 +965,11 @@ class Invitation(RowLevelSecurityProtectedModel):
null=True,
)
def save(self, *args, **kwargs):
if self.email:
self.email = self.email.strip().lower()
super().save(*args, **kwargs)
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "invitations"
@@ -1364,242 +1398,271 @@ class IntegrationProviderRelationship(RowLevelSecurityProtectedModel):
]
# class SAMLToken(models.Model):
# id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
# inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
# updated_at = models.DateTimeField(auto_now=True, editable=False)
# expires_at = models.DateTimeField(editable=False)
# token = models.JSONField(unique=True)
# user = models.ForeignKey(User, on_delete=models.CASCADE)
class SAMLToken(models.Model):
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
updated_at = models.DateTimeField(auto_now=True, editable=False)
expires_at = models.DateTimeField(editable=False)
token = models.JSONField(unique=True)
user = models.ForeignKey(User, on_delete=models.CASCADE)
# class Meta:
# db_table = "saml_tokens"
class Meta:
db_table = "saml_tokens"
# def save(self, *args, **kwargs):
# if not self.expires_at:
# self.expires_at = datetime.now(timezone.utc) + timedelta(seconds=15)
# super().save(*args, **kwargs)
def save(self, *args, **kwargs):
if not self.expires_at:
self.expires_at = datetime.now(timezone.utc) + timedelta(seconds=15)
super().save(*args, **kwargs)
# def is_expired(self) -> bool:
# return datetime.now(timezone.utc) >= self.expires_at
def is_expired(self) -> bool:
return datetime.now(timezone.utc) >= self.expires_at
# class SAMLDomainIndex(models.Model):
# """
# Public index of SAML domains. No RLS. Used for fast lookup in SAML login flow.
# """
class SAMLDomainIndex(models.Model):
"""
Public index of SAML domains. No RLS. Used for fast lookup in SAML login flow.
"""
# email_domain = models.CharField(max_length=254, unique=True)
# tenant = models.ForeignKey("Tenant", on_delete=models.CASCADE)
email_domain = models.CharField(max_length=254, unique=True)
tenant = models.ForeignKey("Tenant", on_delete=models.CASCADE)
# class Meta:
# db_table = "saml_domain_index"
class Meta:
db_table = "saml_domain_index"
# constraints = [
# models.UniqueConstraint(
# fields=("email_domain", "tenant"),
# name="unique_resources_by_email_domain",
# ),
# BaseSecurityConstraint(
# name="statements_on_%(class)s",
# statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
# ),
# ]
constraints = [
models.UniqueConstraint(
fields=("email_domain", "tenant"),
name="unique_resources_by_email_domain",
),
BaseSecurityConstraint(
name="statements_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
]
# class SAMLConfiguration(RowLevelSecurityProtectedModel):
# """
# Stores per-tenant SAML settings, including email domain and IdP metadata.
# Automatically syncs to a SocialApp instance on save.
class SAMLConfiguration(RowLevelSecurityProtectedModel):
"""
Stores per-tenant SAML settings, including email domain and IdP metadata.
Automatically syncs to a SocialApp instance on save.
# Note:
# This model exists to provide a tenant-aware abstraction over SAML configuration.
# It supports row-level security, custom validation, and metadata parsing, enabling
# Prowler to expose a clean API and admin interface for managing SAML integrations.
Note:
This model exists to provide a tenant-aware abstraction over SAML configuration.
It supports row-level security, custom validation, and metadata parsing, enabling
Prowler to expose a clean API and admin interface for managing SAML integrations.
# Although Django Allauth uses the SocialApp model to store provider configuration,
# it is not designed for multi-tenant use. SocialApp lacks support for tenant scoping,
# email domain mapping, and structured metadata handling.
Although Django Allauth uses the SocialApp model to store provider configuration,
it is not designed for multi-tenant use. SocialApp lacks support for tenant scoping,
email domain mapping, and structured metadata handling.
# By managing SAMLConfiguration separately, we ensure:
# - Strong isolation between tenants via RLS.
# - Ownership of raw IdP metadata and its validation.
# - An explicit link between SAML config and business-level identifiers (e.g. email domain).
# - Programmatic transformation into the SocialApp format used by Allauth.
By managing SAMLConfiguration separately, we ensure:
- Strong isolation between tenants via RLS.
- Ownership of raw IdP metadata and its validation.
- An explicit link between SAML config and business-level identifiers (e.g. email domain).
- Programmatic transformation into the SocialApp format used by Allauth.
# In short, this model acts as a secure and user-friendly layer over Allauth's lower-level primitives.
# """
In short, this model acts as a secure and user-friendly layer over Allauth's lower-level primitives.
"""
# id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
# email_domain = models.CharField(
# max_length=254,
# unique=True,
# help_text="Email domain used to identify the tenant, e.g. prowlerdemo.com",
# )
# metadata_xml = models.TextField(
# help_text="Raw IdP metadata XML to configure SingleSignOnService, certificates, etc."
# )
# created_at = models.DateTimeField(auto_now_add=True)
# updated_at = models.DateTimeField(auto_now=True)
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
email_domain = models.CharField(
max_length=254,
unique=True,
help_text="Email domain used to identify the tenant, e.g. prowlerdemo.com",
)
metadata_xml = models.TextField(
help_text="Raw IdP metadata XML to configure SingleSignOnService, certificates, etc."
)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
# class JSONAPIMeta:
# resource_name = "saml-configurations"
class JSONAPIMeta:
resource_name = "saml-configurations"
# class Meta:
# db_table = "saml_configurations"
class Meta:
db_table = "saml_configurations"
# constraints = [
# RowLevelSecurityConstraint(
# field="tenant_id",
# name="rls_on_%(class)s",
# statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
# ),
# # 1 config per tenant
# models.UniqueConstraint(
# fields=["tenant"],
# name="unique_samlconfig_per_tenant",
# ),
# ]
constraints = [
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
# 1 config per tenant
models.UniqueConstraint(
fields=["tenant"],
name="unique_samlconfig_per_tenant",
),
]
# def clean(self, old_email_domain=None):
# # Domain must not contain @
# if "@" in self.email_domain:
# raise ValidationError({"email_domain": "Domain must not contain @"})
def clean(self, old_email_domain=None, is_create=False):
# Domain must not contain @
if "@" in self.email_domain:
raise ValidationError({"email_domain": "Domain must not contain @"})
# # Enforce at most one config per tenant
# qs = SAMLConfiguration.objects.filter(tenant=self.tenant)
# # Exclude ourselves in case of update
# if self.pk:
# qs = qs.exclude(pk=self.pk)
# if qs.exists():
# raise ValidationError(
# {"tenant": "A SAML configuration already exists for this tenant."}
# )
# Enforce at most one config per tenant
qs = SAMLConfiguration.objects.filter(tenant=self.tenant)
# Exclude ourselves in case of update
if self.pk:
qs = qs.exclude(pk=self.pk)
if qs.exists():
raise ValidationError(
{"tenant": "A SAML configuration already exists for this tenant."}
)
# # The email domain must be unique in the entire system
# qs = SAMLConfiguration.objects.using(MainRouter.admin_db).filter(
# email_domain__iexact=self.email_domain
# )
# if qs.exists() and old_email_domain != self.email_domain:
# raise ValidationError(
# {"tenant": "There is a problem with your email domain."}
# )
# The email domain must be unique in the entire system
qs = SAMLConfiguration.objects.using(MainRouter.admin_db).filter(
email_domain__iexact=self.email_domain
)
if qs.exists() and old_email_domain != self.email_domain:
raise ValidationError(
{"tenant": "There is a problem with your email domain."}
)
# def save(self, *args, **kwargs):
# self.email_domain = self.email_domain.strip().lower()
# is_create = not SAMLConfiguration.objects.filter(pk=self.pk).exists()
# The entityID must be unique in the system
idp_settings = self._parsed_metadata
entity_id = idp_settings.get("entity_id")
# if not is_create:
# old = SAMLConfiguration.objects.get(pk=self.pk)
# old_email_domain = old.email_domain
# old_metadata_xml = old.metadata_xml
# else:
# old_email_domain = None
# old_metadata_xml = None
if entity_id:
# Find any SocialApp with this entityID
q = SocialApp.objects.filter(provider="saml", provider_id=entity_id)
# self.clean(old_email_domain)
# super().save(*args, **kwargs)
# If updating, exclude our own SocialApp from the check
if not is_create:
q = q.exclude(client_id=old_email_domain)
else:
q = q.exclude(client_id=self.email_domain)
# if is_create or (
# old_email_domain != self.email_domain
# or old_metadata_xml != self.metadata_xml
# ):
# self._sync_social_app(old_email_domain)
if q.exists():
raise ValidationError(
{"metadata_xml": "There is a problem with your metadata."}
)
# # Sync the public index
# if not is_create and old_email_domain and old_email_domain != self.email_domain:
# SAMLDomainIndex.objects.filter(email_domain=old_email_domain).delete()
def save(self, *args, **kwargs):
self.email_domain = self.email_domain.strip().lower()
is_create = not SAMLConfiguration.objects.filter(pk=self.pk).exists()
# # Create/update the new domain index
# SAMLDomainIndex.objects.update_or_create(
# email_domain=self.email_domain, defaults={"tenant": self.tenant}
# )
if not is_create:
old = SAMLConfiguration.objects.get(pk=self.pk)
old_email_domain = old.email_domain
old_metadata_xml = old.metadata_xml
else:
old_email_domain = None
old_metadata_xml = None
# def _parse_metadata(self):
# """
# Parse the raw IdP metadata XML and extract:
# - entity_id
# - sso_url
# - slo_url (may be None)
# - x509cert (required)
# """
# ns = {
# "md": "urn:oasis:names:tc:SAML:2.0:metadata",
# "ds": "http://www.w3.org/2000/09/xmldsig#",
# }
# try:
# root = ET.fromstring(self.metadata_xml)
# except ET.ParseError as e:
# raise ValidationError({"metadata_xml": f"Invalid XML: {e}"})
self._parsed_metadata = self._parse_metadata()
self.clean(old_email_domain, is_create)
super().save(*args, **kwargs)
# # Entity ID
# entity_id = root.attrib.get("entityID")
if is_create or (
old_email_domain != self.email_domain
or old_metadata_xml != self.metadata_xml
):
self._sync_social_app(old_email_domain)
# # SSO endpoint (must exist)
# sso = root.find(".//md:IDPSSODescriptor/md:SingleSignOnService", ns)
# if sso is None or "Location" not in sso.attrib:
# raise ValidationError(
# {"metadata_xml": "Missing SingleSignOnService in metadata."}
# )
# sso_url = sso.attrib["Location"]
# Sync the public index
if not is_create and old_email_domain and old_email_domain != self.email_domain:
SAMLDomainIndex.objects.filter(email_domain=old_email_domain).delete()
# # SLO endpoint (optional)
# slo = root.find(".//md:IDPSSODescriptor/md:SingleLogoutService", ns)
# slo_url = slo.attrib.get("Location") if slo is not None else None
# Create/update the new domain index
SAMLDomainIndex.objects.update_or_create(
email_domain=self.email_domain, defaults={"tenant": self.tenant}
)
# # X.509 certificate (required)
# cert = root.find(
# './/md:KeyDescriptor[@use="signing"]/ds:KeyInfo/ds:X509Data/ds:X509Certificate',
# ns,
# )
# if cert is None or not cert.text or not cert.text.strip():
# raise ValidationError(
# {
# "metadata_xml": 'Metadata must include a <ds:X509Certificate> under <KeyDescriptor use="signing">.'
# }
# )
# x509cert = cert.text.strip()
def delete(self, *args, **kwargs):
super().delete(*args, **kwargs)
# return {
# "entity_id": entity_id,
# "sso_url": sso_url,
# "slo_url": slo_url,
# "x509cert": x509cert,
# }
SocialApp.objects.filter(provider="saml", client_id=self.email_domain).delete()
SAMLDomainIndex.objects.filter(email_domain=self.email_domain).delete()
# def _sync_social_app(self, previous_email_domain=None):
# """
# Create or update the corresponding SocialApp based on email_domain.
# If the domain changed, update the matching SocialApp.
# """
# idp_settings = self._parse_metadata()
# settings_dict = SOCIALACCOUNT_PROVIDERS["saml"].copy()
# settings_dict["idp"] = idp_settings
def _parse_metadata(self):
"""
Parse the raw IdP metadata XML and extract:
- entity_id
- sso_url
- slo_url (may be None)
- x509cert (required)
"""
ns = {
"md": "urn:oasis:names:tc:SAML:2.0:metadata",
"ds": "http://www.w3.org/2000/09/xmldsig#",
}
try:
root = ET.fromstring(self.metadata_xml)
except ET.ParseError as e:
raise ValidationError({"metadata_xml": f"Invalid XML: {e}"})
# current_site = Site.objects.get(id=settings.SITE_ID)
# Entity ID
entity_id = root.attrib.get("entityID")
if not entity_id:
raise ValidationError({"metadata_xml": "Missing entityID in metadata."})
# social_app_qs = SocialApp.objects.filter(
# provider="saml", client_id=previous_email_domain or self.email_domain
# )
# SSO endpoint (must exist)
sso = root.find(".//md:IDPSSODescriptor/md:SingleSignOnService", ns)
if sso is None or "Location" not in sso.attrib:
raise ValidationError(
{"metadata_xml": "Missing SingleSignOnService in metadata."}
)
sso_url = sso.attrib["Location"]
# client_id = self.email_domain[:191]
# name = f"SAML-{self.email_domain}"[:40]
# SLO endpoint (optional)
slo = root.find(".//md:IDPSSODescriptor/md:SingleLogoutService", ns)
slo_url = slo.attrib.get("Location") if slo is not None else None
# if social_app_qs.exists():
# social_app = social_app_qs.first()
# social_app.client_id = client_id
# social_app.name = name
# social_app.settings = settings_dict
# social_app.save()
# social_app.sites.set([current_site])
# else:
# social_app = SocialApp.objects.create(
# provider="saml",
# client_id=client_id,
# name=name,
# settings=settings_dict,
# )
# social_app.sites.set([current_site])
# X.509 certificate (required)
cert = root.find(
'.//md:KeyDescriptor[@use="signing"]/ds:KeyInfo/ds:X509Data/ds:X509Certificate',
ns,
)
if cert is None or not cert.text or not cert.text.strip():
raise ValidationError(
{
"metadata_xml": 'Metadata must include a <ds:X509Certificate> under <KeyDescriptor use="signing">.'
}
)
x509cert = cert.text.strip()
return {
"entity_id": entity_id,
"sso_url": sso_url,
"slo_url": slo_url,
"x509cert": x509cert,
}
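_parse_metadata accepts any metadata document that carries an entityID, one SingleSignOnService, and a signing certificate in the expected namespaces. A minimal document that would pass this validation (values are placeholders, not taken from this PR):
MINIMAL_IDP_METADATA = """<md:EntityDescriptor
    xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
    xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
    entityID="https://idp.example.com/metadata">
  <md:IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <md:KeyDescriptor use="signing">
      <ds:KeyInfo>
        <ds:X509Data>
          <ds:X509Certificate>MIIC...base64-cert...</ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </md:KeyDescriptor>
    <md:SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://idp.example.com/sso"/>
  </md:IDPSSODescriptor>
</md:EntityDescriptor>"""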
def _sync_social_app(self, previous_email_domain=None):
"""
Create or update the corresponding SocialApp based on email_domain.
If the domain changed, update the matching SocialApp.
"""
settings_dict = SOCIALACCOUNT_PROVIDERS["saml"].copy()
settings_dict["idp"] = self._parsed_metadata
current_site = Site.objects.get(id=settings.SITE_ID)
social_app_qs = SocialApp.objects.filter(
provider="saml", client_id=previous_email_domain or self.email_domain
)
client_id = self.email_domain[:191]
name = f"SAML-{self.email_domain}"[:40]
if social_app_qs.exists():
social_app = social_app_qs.first()
social_app.client_id = client_id
social_app.name = name
social_app.settings = settings_dict
social_app.provider_id = self._parsed_metadata["entity_id"]
social_app.save()
social_app.sites.set([current_site])
else:
social_app = SocialApp.objects.create(
provider="saml",
client_id=client_id,
name=name,
settings=settings_dict,
provider_id=self._parsed_metadata["entity_id"],
)
social_app.sites.set([current_site])
class ResourceScanSummary(RowLevelSecurityProtectedModel):
@@ -1776,3 +1839,42 @@ class LighthouseConfiguration(RowLevelSecurityProtectedModel):
class JSONAPIMeta:
resource_name = "lighthouse-configurations"
class Processor(RowLevelSecurityProtectedModel):
class ProcessorChoices(models.TextChoices):
MUTELIST = "mutelist", _("Mutelist")
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
updated_at = models.DateTimeField(auto_now=True, editable=False)
processor_type = ProcessorTypeEnumField(choices=ProcessorChoices.choices)
configuration = models.JSONField(default=dict)
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "processors"
constraints = [
models.UniqueConstraint(
fields=("tenant_id", "processor_type"),
name="unique_processor_types_tenant",
),
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
]
indexes = [
models.Index(
fields=["tenant_id", "id"],
name="processor_tenant_id_idx",
),
models.Index(
fields=["tenant_id", "processor_type"],
name="processor_tenant_type_idx",
),
]
class JSONAPIMeta:
resource_name = "processors"
File diff suppressed because it is too large
@@ -11,7 +11,7 @@ def test_basic_authentication():
client = APIClient()
test_user = "test_email@prowler.com"
test_password = "test_password"
test_password = "Test_password@1"
# Check that a 401 is returned when no basic authentication is provided
no_auth_response = client.get(reverse("provider-list"))
@@ -108,7 +108,7 @@ def test_user_me_when_inviting_users(create_test_user, tenants_fixture, roles_fi
user1_email = "user1@testing.com"
user2_email = "user2@testing.com"
password = "thisisapassword123"
password = "Thisisapassword123@"
user1_response = client.post(
reverse("user-list"),
@@ -187,7 +187,7 @@ class TestTokenSwitchTenant:
client = APIClient()
test_user = "test_email@prowler.com"
test_password = "test_password"
test_password = "Test_password1@"
# Check that we can create a new user without any kind of authentication
user_creation_response = client.post(
@@ -17,7 +17,7 @@ def test_delete_provider_without_executing_task(
client = APIClient()
test_user = "test_email@prowler.com"
test_password = "test_password"
test_password = "Test_password1@"
prowler_task = tasks_fixture[0]
task_mock = Mock()
+36 -47
@@ -1,4 +1,4 @@
from unittest.mock import MagicMock
from unittest.mock import MagicMock, patch
import pytest
from allauth.socialaccount.models import SocialLogin
@@ -20,31 +20,34 @@ class TestProwlerSocialAccountAdapter:
adapter = ProwlerSocialAccountAdapter()
assert adapter.get_user_by_email("notfound@example.com") is None
# def test_pre_social_login_links_existing_user(self, create_test_user, rf):
# adapter = ProwlerSocialAccountAdapter()
def test_pre_social_login_links_existing_user(self, create_test_user, rf):
adapter = ProwlerSocialAccountAdapter()
# sociallogin = MagicMock(spec=SocialLogin)
# sociallogin.account = MagicMock()
# sociallogin.account.provider = "saml"
# sociallogin.account.extra_data = {}
# sociallogin.user = create_test_user
# sociallogin.connect = MagicMock()
sociallogin = MagicMock(spec=SocialLogin)
sociallogin.account = MagicMock()
sociallogin.provider = MagicMock()
sociallogin.provider.id = "saml"
sociallogin.account.extra_data = {}
sociallogin.user = create_test_user
sociallogin.connect = MagicMock()
# adapter.pre_social_login(rf.get("/"), sociallogin)
adapter.pre_social_login(rf.get("/"), sociallogin)
# call_args = sociallogin.connect.call_args
# assert call_args is not None
call_args = sociallogin.connect.call_args
assert call_args is not None
# called_request, called_user = call_args[0]
# assert called_request.path == "/"
# assert called_user.email == create_test_user.email
called_request, called_user = call_args[0]
assert called_request.path == "/"
assert called_user.email == create_test_user.email
def test_pre_social_login_no_link_if_email_missing(self, rf):
adapter = ProwlerSocialAccountAdapter()
sociallogin = MagicMock(spec=SocialLogin)
sociallogin.account = MagicMock()
sociallogin.account.provider = "github"
sociallogin.provider = MagicMock()
sociallogin.user = MagicMock()
sociallogin.provider.id = "saml"
sociallogin.account.extra_data = {}
sociallogin.connect = MagicMock()
@@ -52,37 +55,23 @@ class TestProwlerSocialAccountAdapter:
sociallogin.connect.assert_not_called()
# def test_save_user_saml_flow(
# self,
# rf,
# saml_setup,
# saml_sociallogin,
# ):
# adapter = ProwlerSocialAccountAdapter()
# request = rf.get("/")
# saml_sociallogin.user.email = saml_setup["email"]
# saml_sociallogin.account.extra_data = {
# "firstName": [],
# "lastName": [],
# "organization": [],
# "userType": [],
# }
def test_save_user_saml_sets_session_flag(self, rf):
adapter = ProwlerSocialAccountAdapter()
request = rf.get("/")
request.session = {}
# tenant = Tenant.objects.using(MainRouter.admin_db).get(
# id=saml_setup["tenant_id"]
# )
# saml_config = SAMLConfiguration.objects.using(MainRouter.admin_db).get(
# tenant=tenant
# )
# assert saml_config.email_domain == saml_setup["domain"]
sociallogin = MagicMock(spec=SocialLogin)
sociallogin.provider = MagicMock()
sociallogin.provider.id = "saml"
sociallogin.account = MagicMock()
sociallogin.account.extra_data = {}
# user = adapter.save_user(request, saml_sociallogin)
mock_user = MagicMock()
mock_user.id = 123
# assert user.name == "N/A"
# assert user.company_name == ""
# assert user.email == saml_setup["email"]
# assert (
# Membership.objects.using(MainRouter.admin_db)
# .filter(user=user, tenant=tenant)
# .exists()
# )
with patch("api.adapters.super") as mock_super:
with patch("api.adapters.transaction"):
with patch("api.adapters.MainRouter"):
mock_super.return_value.save_user.return_value = mock_user
adapter.save_user(request, sociallogin)
assert request.session["saml_user_created"] == "123"
+179 -121
@@ -1,6 +1,9 @@
import pytest
from allauth.socialaccount.models import SocialApp
from django.core.exceptions import ValidationError
from api.models import Resource, ResourceTag
from api.db_router import MainRouter
from api.models import Resource, ResourceTag, SAMLConfiguration, SAMLDomainIndex
@pytest.mark.django_db
@@ -122,147 +125,202 @@ class TestResourceModel:
# assert Finding.objects.filter(uid=long_uid).exists()
# @pytest.mark.django_db
# class TestSAMLConfigurationModel:
# VALID_METADATA = """<?xml version='1.0' encoding='UTF-8'?>
# <md:EntityDescriptor entityID='TEST' xmlns:md='urn:oasis:names:tc:SAML:2.0:metadata'>
# <md:IDPSSODescriptor WantAuthnRequestsSigned='false' protocolSupportEnumeration='urn:oasis:names:tc:SAML:2.0:protocol'>
# <md:KeyDescriptor use='signing'>
# <ds:KeyInfo xmlns:ds='http://www.w3.org/2000/09/xmldsig#'>
# <ds:X509Data>
# <ds:X509Certificate>FAKECERTDATA</ds:X509Certificate>
# </ds:X509Data>
# </ds:KeyInfo>
# </md:KeyDescriptor>
# <md:SingleSignOnService Binding='urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST' Location='https://idp.test/sso'/>
# </md:IDPSSODescriptor>
# </md:EntityDescriptor>
# """
@pytest.mark.django_db
class TestSAMLConfigurationModel:
VALID_METADATA = """<?xml version='1.0' encoding='UTF-8'?>
<md:EntityDescriptor entityID='TEST' xmlns:md='urn:oasis:names:tc:SAML:2.0:metadata'>
<md:IDPSSODescriptor WantAuthnRequestsSigned='false' protocolSupportEnumeration='urn:oasis:names:tc:SAML:2.0:protocol'>
<md:KeyDescriptor use='signing'>
<ds:KeyInfo xmlns:ds='http://www.w3.org/2000/09/xmldsig#'>
<ds:X509Data>
<ds:X509Certificate>FAKECERTDATA</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</md:KeyDescriptor>
<md:SingleSignOnService Binding='urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST' Location='https://idp.test/sso'/>
</md:IDPSSODescriptor>
</md:EntityDescriptor>
"""
# def test_creates_valid_configuration(self):
# tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant A")
# config = SAMLConfiguration.objects.using(MainRouter.admin_db).create(
# email_domain="ssoexample.com",
# metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
# tenant=tenant,
# )
def test_creates_valid_configuration(self, tenants_fixture):
tenant = tenants_fixture[0]
config = SAMLConfiguration.objects.using(MainRouter.admin_db).create(
email_domain="ssoexample.com",
metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
tenant=tenant,
)
# assert config.email_domain == "ssoexample.com"
# assert SocialApp.objects.filter(client_id="ssoexample.com").exists()
assert config.email_domain == "ssoexample.com"
assert SocialApp.objects.filter(client_id="ssoexample.com").exists()
# def test_email_domain_with_at_symbol_fails(self):
# tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant B")
# config = SAMLConfiguration(
# email_domain="invalid@domain.com",
# metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
# tenant=tenant,
# )
def test_email_domain_with_at_symbol_fails(self, tenants_fixture):
tenant = tenants_fixture[0]
config = SAMLConfiguration(
email_domain="invalid@domain.com",
metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
tenant=tenant,
)
# with pytest.raises(ValidationError) as exc_info:
# config.clean()
with pytest.raises(ValidationError) as exc_info:
config.clean()
# errors = exc_info.value.message_dict
# assert "email_domain" in errors
# assert "Domain must not contain @" in errors["email_domain"][0]
errors = exc_info.value.message_dict
assert "email_domain" in errors
assert "Domain must not contain @" in errors["email_domain"][0]
# def test_duplicate_email_domain_fails(self):
# tenant1 = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant C1")
# tenant2 = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant C2")
def test_duplicate_email_domain_fails(self, tenants_fixture):
tenant1, tenant2, *_ = tenants_fixture
# SAMLConfiguration.objects.using(MainRouter.admin_db).create(
# email_domain="duplicate.com",
# metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
# tenant=tenant1,
# )
SAMLConfiguration.objects.using(MainRouter.admin_db).create(
email_domain="duplicate.com",
metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
tenant=tenant1,
)
# config = SAMLConfiguration(
# email_domain="duplicate.com",
# metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
# tenant=tenant2,
# )
config = SAMLConfiguration(
email_domain="duplicate.com",
metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
tenant=tenant2,
)
# with pytest.raises(ValidationError) as exc_info:
# config.clean()
with pytest.raises(ValidationError) as exc_info:
config.clean()
# errors = exc_info.value.message_dict
# assert "tenant" in errors
# assert "There is a problem with your email domain." in errors["tenant"][0]
errors = exc_info.value.message_dict
assert "tenant" in errors
assert "There is a problem with your email domain." in errors["tenant"][0]
# def test_duplicate_tenant_config_fails(self):
# tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant D")
def test_duplicate_tenant_config_fails(self, tenants_fixture):
tenant = tenants_fixture[0]
# SAMLConfiguration.objects.using(MainRouter.admin_db).create(
# email_domain="unique1.com",
# metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
# tenant=tenant,
# )
SAMLConfiguration.objects.using(MainRouter.admin_db).create(
email_domain="unique1.com",
metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
tenant=tenant,
)
# config = SAMLConfiguration(
# email_domain="unique2.com",
# metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
# tenant=tenant,
# )
config = SAMLConfiguration(
email_domain="unique2.com",
metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
tenant=tenant,
)
# with pytest.raises(ValidationError) as exc_info:
# config.clean()
with pytest.raises(ValidationError) as exc_info:
config.clean()
# errors = exc_info.value.message_dict
# assert "tenant" in errors
# assert (
# "A SAML configuration already exists for this tenant."
# in errors["tenant"][0]
# )
errors = exc_info.value.message_dict
assert "tenant" in errors
assert (
"A SAML configuration already exists for this tenant."
in errors["tenant"][0]
)
# def test_invalid_metadata_xml_fails(self):
# tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant E")
# config = SAMLConfiguration(
# email_domain="brokenxml.com",
# metadata_xml="<bad<xml>",
# tenant=tenant,
# )
def test_invalid_metadata_xml_fails(self, tenants_fixture):
tenant = tenants_fixture[0]
config = SAMLConfiguration(
email_domain="brokenxml.com",
metadata_xml="<bad<xml>",
tenant=tenant,
)
# with pytest.raises(ValidationError) as exc_info:
# config._parse_metadata()
with pytest.raises(ValidationError) as exc_info:
config._parse_metadata()
# errors = exc_info.value.message_dict
# assert "metadata_xml" in errors
# assert "Invalid XML" in errors["metadata_xml"][0]
# assert "not well-formed" in errors["metadata_xml"][0]
errors = exc_info.value.message_dict
assert "metadata_xml" in errors
assert "Invalid XML" in errors["metadata_xml"][0]
assert "not well-formed" in errors["metadata_xml"][0]
# def test_metadata_missing_sso_fails(self):
# tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant F")
# xml = """<md:EntityDescriptor entityID="x" xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata">
# <md:IDPSSODescriptor></md:IDPSSODescriptor>
# </md:EntityDescriptor>"""
# config = SAMLConfiguration(
# email_domain="nosso.com",
# metadata_xml=xml,
# tenant=tenant,
# )
def test_metadata_missing_sso_fails(self, tenants_fixture):
tenant = tenants_fixture[0]
xml = """<md:EntityDescriptor entityID="x" xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata">
<md:IDPSSODescriptor></md:IDPSSODescriptor>
</md:EntityDescriptor>"""
config = SAMLConfiguration(
email_domain="nosso.com",
metadata_xml=xml,
tenant=tenant,
)
# with pytest.raises(ValidationError) as exc_info:
# config._parse_metadata()
with pytest.raises(ValidationError) as exc_info:
config._parse_metadata()
# errors = exc_info.value.message_dict
# assert "metadata_xml" in errors
# assert "Missing SingleSignOnService" in errors["metadata_xml"][0]
errors = exc_info.value.message_dict
assert "metadata_xml" in errors
assert "Missing SingleSignOnService" in errors["metadata_xml"][0]
# def test_metadata_missing_certificate_fails(self):
# tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant G")
# xml = """<md:EntityDescriptor entityID="x" xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata">
# <md:IDPSSODescriptor>
# <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://example.com/sso"/>
# </md:IDPSSODescriptor>
# </md:EntityDescriptor>"""
# config = SAMLConfiguration(
# email_domain="nocert.com",
# metadata_xml=xml,
# tenant=tenant,
# )
def test_metadata_missing_certificate_fails(self, tenants_fixture):
tenant = tenants_fixture[0]
xml = """<md:EntityDescriptor entityID="x" xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata">
<md:IDPSSODescriptor>
<md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://example.com/sso"/>
</md:IDPSSODescriptor>
</md:EntityDescriptor>"""
config = SAMLConfiguration(
email_domain="nocert.com",
metadata_xml=xml,
tenant=tenant,
)
# with pytest.raises(ValidationError) as exc_info:
# config._parse_metadata()
with pytest.raises(ValidationError) as exc_info:
config._parse_metadata()
# errors = exc_info.value.message_dict
# assert "metadata_xml" in errors
# assert "X509Certificate" in errors["metadata_xml"][0]
errors = exc_info.value.message_dict
assert "metadata_xml" in errors
assert "X509Certificate" in errors["metadata_xml"][0]
def test_deletes_saml_configuration_and_related_objects(self, tenants_fixture):
tenant = tenants_fixture[0]
email_domain = "deleteme.com"
# Create the configuration
config = SAMLConfiguration.objects.using(MainRouter.admin_db).create(
email_domain=email_domain,
metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
tenant=tenant,
)
# Verify that the SocialApp and SAMLDomainIndex exist
assert SocialApp.objects.filter(client_id=email_domain).exists()
assert (
SAMLDomainIndex.objects.using(MainRouter.admin_db)
.filter(email_domain=email_domain)
.exists()
)
# Delete the configuration
config.delete()
# Verify that the configuration and its related objects are deleted
assert (
not SAMLConfiguration.objects.using(MainRouter.admin_db)
.filter(pk=config.pk)
.exists()
)
assert not SocialApp.objects.filter(client_id=email_domain).exists()
assert (
not SAMLDomainIndex.objects.using(MainRouter.admin_db)
.filter(email_domain=email_domain)
.exists()
)
def test_duplicate_entity_id_fails_on_creation(self, tenants_fixture):
tenant1, tenant2, *_ = tenants_fixture
SAMLConfiguration.objects.using(MainRouter.admin_db).create(
email_domain="first.com",
metadata_xml=self.VALID_METADATA,
tenant=tenant1,
)
config = SAMLConfiguration(
email_domain="second.com",
metadata_xml=self.VALID_METADATA,
tenant=tenant2,
)
with pytest.raises(ValidationError) as exc_info:
config.save()
errors = exc_info.value.message_dict
assert "metadata_xml" in errors
assert "There is a problem with your metadata." in errors["metadata_xml"][0]
+3 -3
@@ -60,7 +60,7 @@ class TestUserViewSet:
def test_create_user_with_all_permissions(self, authenticated_client_rbac):
valid_user_payload = {
"name": "test",
"password": "newpassword123",
"password": "Newpassword123@",
"email": "new_user@test.com",
}
response = authenticated_client_rbac.post(
@@ -74,7 +74,7 @@ class TestUserViewSet:
):
valid_user_payload = {
"name": "test",
"password": "newpassword123",
"password": "Newpassword123@",
"email": "new_user@test.com",
}
response = authenticated_client_no_permissions_rbac.post(
@@ -321,7 +321,7 @@ class TestProviderViewSet:
@pytest.mark.django_db
class TestLimitedVisibility:
TEST_EMAIL = "rbac@rbac.com"
TEST_PASSWORD = "thisisapassword123"
TEST_PASSWORD = "Thisisapassword123@"
@pytest.fixture
def limited_admin_user(
+60 -4
@@ -131,6 +131,21 @@ class TestInitializeProwlerProvider:
initialize_prowler_provider(provider)
mock_return_prowler_provider.return_value.assert_called_once_with(key="value")
@patch("api.utils.return_prowler_provider")
def test_initialize_prowler_provider_with_mutelist(
self, mock_return_prowler_provider
):
provider = MagicMock()
provider.secret.secret = {"key": "value"}
mutelist_processor = MagicMock()
mutelist_processor.configuration = {"Mutelist": {"key": "value"}}
mock_return_prowler_provider.return_value = MagicMock()
initialize_prowler_provider(provider, mutelist_processor)
mock_return_prowler_provider.return_value.assert_called_once_with(
key="value", mutelist_content={"key": "value"}
)
class TestProwlerProviderConnectionTest:
@patch("api.utils.return_prowler_provider")
@@ -200,6 +215,25 @@ class TestGetProwlerProviderKwargs:
expected_result = {**secret_dict, **expected_extra_kwargs}
assert result == expected_result
def test_get_prowler_provider_kwargs_with_mutelist(self):
provider_uid = "provider_uid"
secret_dict = {"key": "value"}
secret_mock = MagicMock()
secret_mock.secret = secret_dict
mutelist_processor = MagicMock()
mutelist_processor.configuration = {"Mutelist": {"key": "value"}}
provider = MagicMock()
provider.provider = Provider.ProviderChoices.AWS.value
provider.secret = secret_mock
provider.uid = provider_uid
result = get_prowler_provider_kwargs(provider, mutelist_processor)
expected_result = {**secret_dict, "mutelist_content": {"key": "value"}}
assert result == expected_result
def test_get_prowler_provider_kwargs_unsupported_provider(self):
# Setup
provider_uid = "provider_uid"
@@ -254,7 +288,7 @@ class TestValidateInvitation:
assert result == invitation
mock_db.get.assert_called_once_with(
token="VALID_TOKEN", email="user@example.com"
token="VALID_TOKEN", email__iexact="user@example.com"
)
def test_invitation_not_found_raises_validation_error(self):
@@ -269,7 +303,7 @@ class TestValidateInvitation:
"invitation_token": "Invalid invitation code."
}
mock_db.get.assert_called_once_with(
token="INVALID_TOKEN", email="user@example.com"
token="INVALID_TOKEN", email__iexact="user@example.com"
)
def test_invitation_not_found_raises_not_found(self):
@@ -284,7 +318,7 @@ class TestValidateInvitation:
assert exc_info.value.detail == "Invitation is not valid."
mock_db.get.assert_called_once_with(
token="INVALID_TOKEN", email="user@example.com"
token="INVALID_TOKEN", email__iexact="user@example.com"
)
def test_invitation_expired(self, invitation):
@@ -332,5 +366,27 @@ class TestValidateInvitation:
"invitation_token": "Invalid invitation code."
}
mock_db.get.assert_called_once_with(
token="VALID_TOKEN", email="different@example.com"
token="VALID_TOKEN", email__iexact="different@example.com"
)
def test_valid_invitation_uppercase_email(self):
"""Test that validate_invitation works with case-insensitive email lookup."""
uppercase_email = "USER@example.com"
invitation = MagicMock(spec=Invitation)
invitation.token = "VALID_TOKEN"
invitation.email = uppercase_email
invitation.expires_at = datetime.now(timezone.utc) + timedelta(days=1)
invitation.state = Invitation.State.PENDING
invitation.tenant = MagicMock()
with patch("api.utils.Invitation.objects.using") as mock_using:
mock_db = mock_using.return_value
mock_db.get.return_value = invitation
result = validate_invitation("VALID_TOKEN", "user@example.com")
assert result == invitation
mock_db.get.assert_called_once_with(
token="VALID_TOKEN", email__iexact="user@example.com"
)
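These assertions pin down the behavioral change: invitation lookups moved from an exact `email=` match to `email__iexact=`. A short sketch of the difference in ORM terms (model usage is illustrative):

```python
# Illustrative queries; any Django model with an "email" field behaves the same.
from api.models import Invitation

# Exact match: misses an invitation stored as "USER@example.com".
Invitation.objects.filter(token="VALID_TOKEN", email="user@example.com")

# Case-insensitive match: on PostgreSQL this compiles to
# UPPER("email") = UPPER(%s), so any casing of the address matches.
Invitation.objects.filter(token="VALID_TOKEN", email__iexact="user@example.com")
```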
File diff suppressed because it is too large.
+15 -4
@@ -7,7 +7,7 @@ from rest_framework.exceptions import NotFound, ValidationError
from api.db_router import MainRouter
from api.exceptions import InvitationTokenExpiredException
from api.models import Invitation, Provider, Resource
from api.models import Invitation, Processor, Provider, Resource
from api.v1.serializers import FindingMetadataSerializer
from prowler.providers.aws.aws_provider import AwsProvider
from prowler.providers.azure.azure_provider import AzureProvider
@@ -83,11 +83,14 @@ def return_prowler_provider(
return prowler_provider
def get_prowler_provider_kwargs(provider: Provider) -> dict:
def get_prowler_provider_kwargs(
provider: Provider, mutelist_processor: Processor | None = None
) -> dict:
"""Get the Prowler provider kwargs based on the given provider type.
Args:
provider (Provider): The provider object containing the provider type and associated secret.
mutelist_processor (Processor): The mutelist processor object containing the mutelist configuration.
Returns:
dict: The provider kwargs for the corresponding provider class.
@@ -105,16 +108,24 @@ def get_prowler_provider_kwargs(provider: Provider) -> dict:
}
elif provider.provider == Provider.ProviderChoices.KUBERNETES.value:
prowler_provider_kwargs = {**prowler_provider_kwargs, "context": provider.uid}
if mutelist_processor:
mutelist_content = mutelist_processor.configuration.get("Mutelist", {})
if mutelist_content:
prowler_provider_kwargs["mutelist_content"] = mutelist_content
return prowler_provider_kwargs
def initialize_prowler_provider(
provider: Provider,
mutelist_processor: Processor | None = None,
) -> AwsProvider | AzureProvider | GcpProvider | KubernetesProvider | M365Provider:
"""Initialize a Prowler provider instance based on the given provider type.
Args:
provider (Provider): The provider object containing the provider type and associated secrets.
mutelist_processor (Processor): The mutelist processor object containing the mutelist configuration.
Returns:
AwsProvider | AzureProvider | GcpProvider | KubernetesProvider | M365Provider: An instance of the corresponding provider class
@@ -122,7 +133,7 @@ def initialize_prowler_provider(
provider's secrets.
"""
prowler_provider = return_prowler_provider(provider)
prowler_provider_kwargs = get_prowler_provider_kwargs(provider)
prowler_provider_kwargs = get_prowler_provider_kwargs(provider, mutelist_processor)
return prowler_provider(**prowler_provider_kwargs)
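End to end, the mutelist travels from the `Processor` row into the provider constructor as `mutelist_content`; when no processor exists, or its `Mutelist` key is empty, the kwarg is omitted and the provider initializes exactly as before. A hedged sketch of the call path (object retrieval is illustrative):

```python
# Illustrative setup; in the scan task both objects come from RLS-scoped queries.
from api.models import Processor, Provider
from api.utils import initialize_prowler_provider

provider = Provider.objects.first()  # assumed to have a secret attached
mutelist_processor = Processor.objects.filter(
    processor_type=Processor.ProcessorChoices.MUTELIST
).first()  # may be None

# With a processor: the provider receives mutelist_content=configuration["Mutelist"].
# With None (or an empty "Mutelist" key): no extra kwarg is passed.
prowler_provider = initialize_prowler_provider(provider, mutelist_processor)
```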
@@ -187,7 +198,7 @@ def validate_invitation(
# Admin DB connector is used to bypass RLS protection since the invitation belongs to a tenant the user
# is not a member of yet
invitation = Invitation.objects.using(MainRouter.admin_db).get(
token=invitation_token, email=email
token=invitation_token, email__iexact=email
)
except Invitation.DoesNotExist:
if raise_not_found:
+14 -2
@@ -24,20 +24,32 @@ class PaginateByPkMixin:
request, # noqa: F841
base_queryset,
manager,
select_related: list[str] | None = None,
prefetch_related: list[str] | None = None,
select_related: list | None = None,
prefetch_related: list | None = None,
) -> Response:
"""
Paginate a queryset by primary key.
This method is useful when you want to paginate a queryset that has been
filtered or annotated in a way that would be lost if you used the default
pagination method.
"""
pk_list = base_queryset.values_list("id", flat=True)
page = self.paginate_queryset(pk_list)
if page is None:
return Response(self.get_serializer(base_queryset, many=True).data)
queryset = manager.filter(id__in=page)
if select_related:
queryset = queryset.select_related(*select_related)
if prefetch_related:
queryset = queryset.prefetch_related(*prefetch_related)
# Optimize tags loading, if applicable
if hasattr(self, "_optimize_tags_loading"):
queryset = self._optimize_tags_loading(queryset)
queryset = sorted(queryset, key=lambda obj: page.index(obj.id))
serialized = self.get_serializer(queryset, many=True).data
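The two-step shape matters for performance: pagination runs against a cheap id-only queryset, and only the current page is re-fetched with the expensive `select_related`/`prefetch_related` joins. The hunk starts mid-signature, so the method name is not visible here; a sketch of the intended call pattern with `paginate_by_pk` as a placeholder name:

```python
# Hypothetical ViewSet; "paginate_by_pk" stands in for the mixin method whose
# name is cut off by this hunk, and the relation names are illustrative.
from rest_framework import viewsets

from api.models import Resource


class ResourceListViewSet(PaginateByPkMixin, viewsets.ReadOnlyModelViewSet):
    def list(self, request, *args, **kwargs):
        base_queryset = self.filter_queryset(self.get_queryset())
        return self.paginate_by_pk(
            request,
            base_queryset,
            manager=Resource.objects,
            select_related=["provider"],
            prefetch_related=["tags"],
        )
```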
@@ -0,0 +1,23 @@
import yaml
from rest_framework_json_api import serializers
from rest_framework_json_api.serializers import ValidationError
class BaseValidateSerializer(serializers.Serializer):
def validate(self, data):
if hasattr(self, "initial_data"):
initial_data = set(self.initial_data.keys()) - {"id", "type"}
unknown_keys = initial_data - set(self.fields.keys())
if unknown_keys:
raise ValidationError(f"Invalid fields: {unknown_keys}")
return data
class YamlOrJsonField(serializers.JSONField):
def to_internal_value(self, data):
if isinstance(data, str):
try:
data = yaml.safe_load(data)
except yaml.YAMLError as exc:
raise serializers.ValidationError("Invalid YAML format") from exc
return super().to_internal_value(data)
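`YamlOrJsonField` lets clients submit the same configuration either as a native JSON object or as a YAML string; strings are parsed with `yaml.safe_load` before the usual `JSONField` handling runs (and since JSON is a YAML subset, a JSON-formatted string also parses). A small runnable check of both accepted shapes:

```python
# Both inputs normalize to the same dict; assumes DRF and PyYAML are installed.
from api.v1.serializer_utils.base import YamlOrJsonField

field = YamlOrJsonField()

as_dict = field.to_internal_value({"Mutelist": {"Accounts": {}}})
as_yaml = field.to_internal_value("Mutelist:\n  Accounts: {}\n")

assert as_dict == as_yaml == {"Mutelist": {"Accounts": {}}}
```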
@@ -1,19 +1,7 @@
from drf_spectacular.utils import extend_schema_field
from rest_framework_json_api import serializers
from rest_framework_json_api.serializers import ValidationError
class BaseValidateSerializer(serializers.Serializer):
def validate(self, data):
if hasattr(self, "initial_data"):
initial_data = set(self.initial_data.keys()) - {"id", "type"}
unknown_keys = initial_data - set(self.fields.keys())
if unknown_keys:
raise ValidationError(f"Invalid fields: {unknown_keys}")
return data
# Integrations
from api.v1.serializer_utils.base import BaseValidateSerializer
class S3ConfigSerializer(BaseValidateSerializer):
@@ -0,0 +1,21 @@
from drf_spectacular.utils import extend_schema_field
from api.v1.serializer_utils.base import YamlOrJsonField
from prowler.lib.mutelist.mutelist import mutelist_schema
@extend_schema_field(
{
"oneOf": [
{
"type": "object",
"title": "Mutelist",
"properties": {"Mutelist": mutelist_schema},
"additionalProperties": False,
},
]
}
)
class ProcessorConfigField(YamlOrJsonField):
pass
+208 -18
@@ -7,7 +7,9 @@ from django.contrib.auth.models import update_last_login
from django.contrib.auth.password_validation import validate_password
from drf_spectacular.utils import extend_schema_field
from jwt.exceptions import InvalidKeyError
from rest_framework.validators import UniqueTogetherValidator
from rest_framework_json_api import serializers
from rest_framework_json_api.relations import SerializerMethodResourceRelatedField
from rest_framework_json_api.serializers import ValidationError
from rest_framework_simplejwt.exceptions import TokenError
from rest_framework_simplejwt.serializers import TokenObtainPairSerializer
@@ -21,6 +23,7 @@ from api.models import (
InvitationRoleRelationship,
LighthouseConfiguration,
Membership,
Processor,
Provider,
ProviderGroup,
ProviderGroupMembership,
@@ -29,6 +32,7 @@ from api.models import (
ResourceTag,
Role,
RoleProviderGroupRelationship,
SAMLConfiguration,
Scan,
StateChoices,
StatusChoices,
@@ -43,7 +47,9 @@ from api.v1.serializer_utils.integrations import (
IntegrationCredentialField,
S3ConfigSerializer,
)
from api.v1.serializer_utils.processors import ProcessorConfigField
from api.v1.serializer_utils.providers import ProviderSecretField
from prowler.lib.mutelist.mutelist import Mutelist
# Tokens
@@ -129,6 +135,12 @@ class TokenSerializer(BaseTokenSerializer):
class TokenSocialLoginSerializer(BaseTokenSerializer):
email = serializers.EmailField(write_only=True)
tenant_id = serializers.UUIDField(
write_only=True,
required=False,
help_text="If not provided, the tenant ID of the first membership that was added"
" to the user will be used.",
)
# Output tokens
refresh = serializers.CharField(read_only=True)
@@ -850,6 +862,7 @@ class ScanSerializer(RLSSerializer):
"completed_at",
"scheduled_at",
"next_scan_at",
"processor",
"url",
]
@@ -987,8 +1000,12 @@ class ResourceSerializer(RLSSerializer):
tags = serializers.SerializerMethodField()
type_ = serializers.CharField(read_only=True)
failed_findings_count = serializers.IntegerField(read_only=True)
findings = serializers.ResourceRelatedField(many=True, read_only=True)
findings = SerializerMethodResourceRelatedField(
many=True,
read_only=True,
)
class Meta:
model = Resource
@@ -1004,6 +1021,7 @@ class ResourceSerializer(RLSSerializer):
"tags",
"provider",
"findings",
"failed_findings_count",
"url",
]
extra_kwargs = {
@@ -1013,8 +1031,8 @@ class ResourceSerializer(RLSSerializer):
}
included_serializers = {
"findings": "api.v1.serializers.FindingSerializer",
"provider": "api.v1.serializers.ProviderSerializer",
"findings": "api.v1.serializers.FindingIncludeSerializer",
"provider": "api.v1.serializers.ProviderIncludeSerializer",
}
@extend_schema_field(
@@ -1025,6 +1043,10 @@ class ResourceSerializer(RLSSerializer):
}
)
def get_tags(self, obj):
# Use prefetched tags if available to avoid N+1 queries
if hasattr(obj, "prefetched_tags"):
return {tag.key: tag.value for tag in obj.prefetched_tags}
# Fallback to the original method if prefetch is not available
return obj.get_tags(self.context.get("tenant_id"))
def get_fields(self):
@@ -1034,10 +1056,17 @@ class ResourceSerializer(RLSSerializer):
fields["type"] = type_
return fields
def get_findings(self, obj):
return (
obj.latest_findings
if hasattr(obj, "latest_findings")
else obj.findings.all()
)
class ResourceIncludeSerializer(RLSSerializer):
"""
Serializer for the Resource model.
Serializer for the included Resource model.
"""
tags = serializers.SerializerMethodField()
@@ -1070,6 +1099,10 @@ class ResourceIncludeSerializer(RLSSerializer):
}
)
def get_tags(self, obj):
# Use prefetched tags if available to avoid N+1 queries
if hasattr(obj, "prefetched_tags"):
return {tag.key: tag.value for tag in obj.prefetched_tags}
# Fallback to the original method if prefetch is not available
return obj.get_tags(self.context.get("tenant_id"))
def get_fields(self):
@@ -1080,6 +1113,17 @@ class ResourceIncludeSerializer(RLSSerializer):
return fields
class ResourceMetadataSerializer(serializers.Serializer):
services = serializers.ListField(child=serializers.CharField(), allow_empty=True)
regions = serializers.ListField(child=serializers.CharField(), allow_empty=True)
types = serializers.ListField(child=serializers.CharField(), allow_empty=True)
# Temporarily disabled until we implement tag filtering in the UI
# tags = serializers.JSONField(help_text="Tags are described as key-value pairs.")
class Meta:
resource_name = "resources-metadata"
class FindingSerializer(RLSSerializer):
"""
Serializer for the Finding model.
@@ -1103,6 +1147,7 @@ class FindingSerializer(RLSSerializer):
"updated_at",
"first_seen_at",
"muted",
"muted_reason",
"url",
# Relationships
"scan",
@@ -1115,6 +1160,28 @@ class FindingSerializer(RLSSerializer):
}
class FindingIncludeSerializer(RLSSerializer):
"""
Serializer for the included Finding model.
"""
class Meta:
model = Finding
fields = [
"id",
"uid",
"status",
"severity",
"check_id",
"check_metadata",
"inserted_at",
"updated_at",
"first_seen_at",
"muted",
"muted_reason",
]
# To be removed when the related endpoint is removed as well
class FindingDynamicFilterSerializer(serializers.Serializer):
services = serializers.ListField(child=serializers.CharField(), allow_empty=True)
@@ -1308,12 +1375,13 @@ class ProviderSecretUpdateSerializer(BaseWriteProviderSecretSerializer):
"inserted_at": {"read_only": True},
"updated_at": {"read_only": True},
"provider": {"read_only": True},
"secret_type": {"read_only": True},
"secret_type": {"required": False},
}
def validate(self, attrs):
provider = self.instance.provider
secret_type = self.instance.secret_type
# To allow updating a secret with the same type without making the `secret_type` mandatory
secret_type = attrs.get("secret_type") or self.instance.secret_type
secret = attrs.get("secret")
validated_attrs = super().validate(attrs)
@@ -2064,26 +2132,148 @@ class IntegrationUpdateSerializer(BaseWriteIntegrationSerializer):
return super().update(instance, validated_data)
# Processors
class ProcessorSerializer(RLSSerializer):
"""
Serializer for the Processor model.
"""
configuration = ProcessorConfigField()
class Meta:
model = Processor
fields = [
"id",
"inserted_at",
"updated_at",
"processor_type",
"configuration",
"url",
]
class ProcessorCreateSerializer(RLSSerializer, BaseWriteSerializer):
configuration = ProcessorConfigField(required=True)
class Meta:
model = Processor
fields = [
"inserted_at",
"updated_at",
"processor_type",
"configuration",
]
extra_kwargs = {
"inserted_at": {"read_only": True},
"updated_at": {"read_only": True},
}
validators = [
UniqueTogetherValidator(
queryset=Processor.objects.all(),
fields=["processor_type"],
message="A processor with the same type already exists.",
)
]
def validate(self, attrs):
validated_attrs = super().validate(attrs)
self.validate_processor_data(attrs)
return validated_attrs
def validate_processor_data(self, attrs):
processor_type = attrs.get("processor_type")
configuration = attrs.get("configuration")
if processor_type == "mutelist":
self.validate_mutelist_configuration(configuration)
def validate_mutelist_configuration(self, configuration):
if not isinstance(configuration, dict):
raise serializers.ValidationError("Invalid Mutelist configuration.")
mutelist_configuration = configuration.get("Mutelist", {})
if not mutelist_configuration:
raise serializers.ValidationError(
"Invalid Mutelist configuration: 'Mutelist' is a required property."
)
try:
Mutelist.validate_mutelist(mutelist_configuration, raise_on_exception=True)
return
except Exception as error:
raise serializers.ValidationError(
f"Invalid Mutelist configuration: {error}"
)
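Validation is therefore layered: the field parses YAML/JSON, the serializer checks for a dict with a non-empty top-level `Mutelist` key, and Prowler's own `Mutelist.validate_mutelist` enforces the schema. A hedged example of a payload that passes the structural checks (whether it passes the schema depends on `mutelist_schema`; tenant/RLS context and database access are assumed):

```python
# Illustrative payload mirroring the mutelist shape used in the fixtures below.
payload = {
    "processor_type": "mutelist",
    "configuration": {
        "Mutelist": {
            "Accounts": {
                "*": {
                    "Checks": {
                        "iam_user_hardware_mfa_enabled": {
                            "Regions": ["*"],
                            "Resources": ["*"],
                        }
                    }
                }
            }
        }
    },
}

serializer = ProcessorCreateSerializer(data=payload)
serializer.is_valid(raise_exception=True)  # runs validate_mutelist_configuration
```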
class ProcessorUpdateSerializer(BaseWriteSerializer):
configuration = ProcessorConfigField(required=True)
class Meta:
model = Processor
fields = [
"inserted_at",
"updated_at",
"configuration",
]
extra_kwargs = {
"inserted_at": {"read_only": True},
"updated_at": {"read_only": True},
}
def validate(self, attrs):
validated_attrs = super().validate(attrs)
self.validate_processor_data(attrs)
return validated_attrs
def validate_processor_data(self, attrs):
processor_type = self.instance.processor_type
configuration = attrs.get("configuration")
if processor_type == "mutelist":
self.validate_mutelist_configuration(configuration)
def validate_mutelist_configuration(self, configuration):
if not isinstance(configuration, dict):
raise serializers.ValidationError("Invalid Mutelist configuration.")
mutelist_configuration = configuration.get("Mutelist", {})
if not mutelist_configuration:
raise serializers.ValidationError(
"Invalid Mutelist configuration: 'Mutelist' is a required property."
)
try:
Mutelist.validate_mutelist(mutelist_configuration, raise_on_exception=True)
return
except Exception as error:
raise serializers.ValidationError(
f"Invalid Mutelist configuration: {error}"
)
# SSO
# class SamlInitiateSerializer(serializers.Serializer):
# email_domain = serializers.CharField()
class SamlInitiateSerializer(serializers.Serializer):
email_domain = serializers.CharField()
# class JSONAPIMeta:
# resource_name = "saml-initiate"
class JSONAPIMeta:
resource_name = "saml-initiate"
# class SamlMetadataSerializer(serializers.Serializer):
# class JSONAPIMeta:
# resource_name = "saml-meta"
class SamlMetadataSerializer(serializers.Serializer):
class JSONAPIMeta:
resource_name = "saml-meta"
# class SAMLConfigurationSerializer(RLSSerializer):
# class Meta:
# model = SAMLConfiguration
# fields = ["id", "email_domain", "metadata_xml", "created_at", "updated_at"]
# read_only_fields = ["id", "created_at", "updated_at"]
class SAMLConfigurationSerializer(RLSSerializer):
class Meta:
model = SAMLConfiguration
fields = ["id", "email_domain", "metadata_xml", "created_at", "updated_at"]
read_only_fields = ["id", "created_at", "updated_at"]
class LighthouseConfigSerializer(RLSSerializer):
+39 -19
@@ -1,9 +1,11 @@
from allauth.socialaccount.providers.saml.views import ACSView, MetadataView, SLSView
from django.urls import include, path
from drf_spectacular.views import SpectacularRedocView
from rest_framework_nested import routers
from api.v1.views import (
ComplianceOverviewViewSet,
CustomSAMLLoginView,
CustomTokenObtainView,
CustomTokenRefreshView,
CustomTokenSwitchTenantView,
@@ -16,6 +18,7 @@ from api.v1.views import (
LighthouseConfigViewSet,
MembershipViewSet,
OverviewViewSet,
ProcessorViewSet,
ProviderGroupProvidersRelationshipView,
ProviderGroupViewSet,
ProviderSecretViewSet,
@@ -23,10 +26,14 @@ from api.v1.views import (
ResourceViewSet,
RoleProviderGroupRelationshipView,
RoleViewSet,
SAMLConfigurationViewSet,
SAMLInitiateAPIView,
SAMLTokenValidateView,
ScanViewSet,
ScheduleViewSet,
SchemaView,
TaskViewSet,
TenantFinishACSView,
TenantMembersViewSet,
TenantViewSet,
UserRoleRelationshipView,
@@ -50,7 +57,8 @@ router.register(
router.register(r"overviews", OverviewViewSet, basename="overview")
router.register(r"schedules", ScheduleViewSet, basename="schedule")
router.register(r"integrations", IntegrationViewSet, basename="integration")
# router.register(r"saml-config", SAMLConfigurationViewSet, basename="saml-config")
router.register(r"processors", ProcessorViewSet, basename="processor")
router.register(r"saml-config", SAMLConfigurationViewSet, basename="saml-config")
router.register(
r"lighthouse-configurations",
LighthouseConfigViewSet,
@@ -119,24 +127,36 @@ urlpatterns = [
),
name="provider_group-providers-relationship",
),
# API endpoint to start SAML SSO flow (WIP)
# path(
# "auth/saml/initiate/", SAMLInitiateAPIView.as_view(), name="api_saml_initiate"
# ),
# # Custom SAML endpoints (must come before allauth.urls) (WIP)
# path(
# "accounts/saml/<organization_slug>/login/",
# CustomSAMLLoginView.as_view(),
# name="saml_login",
# ),
# path(
# "accounts/saml/<organization_slug>/acs/finish/",
# TenantFinishACSView.as_view(),
# name="saml_finish_acs",
# ),
# Allauth SAML endpoints for tenants (WIP)
# path("accounts/", include("allauth.urls")),
# path("tokens/saml", SAMLTokenValidateView.as_view(), name="token-saml"),
# API endpoint to start SAML SSO flow
path(
"auth/saml/initiate/", SAMLInitiateAPIView.as_view(), name="api_saml_initiate"
),
path(
"accounts/saml/<organization_slug>/login/",
CustomSAMLLoginView.as_view(),
name="saml_login",
),
path(
"accounts/saml/<organization_slug>/acs/",
ACSView.as_view(),
name="saml_acs",
),
path(
"accounts/saml/<organization_slug>/acs/finish/",
TenantFinishACSView.as_view(),
name="saml_finish_acs",
),
path(
"accounts/saml/<organization_slug>/sls/",
SLSView.as_view(),
name="saml_sls",
),
path(
"accounts/saml/<organization_slug>/metadata/",
MetadataView.as_view(),
name="saml_metadata",
),
path("tokens/saml", SAMLTokenValidateView.as_view(), name="token-saml"),
path("tokens/google", GoogleSocialLoginView.as_view(), name="token-google"),
path("tokens/github", GithubSocialLoginView.as_view(), name="token-github"),
path("", include(router.urls)),
File diff suppressed because it is too large.
+88
@@ -1,3 +1,5 @@
import string
from django.core.exceptions import ValidationError
from django.utils.translation import gettext as _
@@ -20,3 +22,89 @@ class MaximumLengthValidator:
return _(
f"Your password must contain no more than {self.max_length} characters."
)
class SpecialCharactersValidator:
def __init__(self, special_characters=None, min_special_characters=1):
# Use string.punctuation if no custom characters provided
self.special_characters = special_characters or string.punctuation
self.min_special_characters = min_special_characters
def validate(self, password, user=None):
if (
sum(1 for char in password if char in self.special_characters)
< self.min_special_characters
):
raise ValidationError(
_("This password must contain at least one special character."),
code="password_no_special_characters",
params={
"special_characters": self.special_characters,
"min_special_characters": self.min_special_characters,
},
)
def get_help_text(self):
return _(
f"Your password must contain at least one special character from: {self.special_characters}"
)
class UppercaseValidator:
def __init__(self, min_uppercase=1):
self.min_uppercase = min_uppercase
def validate(self, password, user=None):
if sum(1 for char in password if char.isupper()) < self.min_uppercase:
raise ValidationError(
_(
"This password must contain at least %(min_uppercase)d uppercase letter."
),
code="password_no_uppercase_letters",
params={"min_uppercase": self.min_uppercase},
)
def get_help_text(self):
return _(
f"Your password must contain at least {self.min_uppercase} uppercase letter."
)
class LowercaseValidator:
def __init__(self, min_lowercase=1):
self.min_lowercase = min_lowercase
def validate(self, password, user=None):
if sum(1 for char in password if char.islower()) < self.min_lowercase:
raise ValidationError(
_(
"This password must contain at least %(min_lowercase)d lowercase letter."
),
code="password_no_lowercase_letters",
params={"min_lowercase": self.min_lowercase},
)
def get_help_text(self):
return _(
f"Your password must contain at least {self.min_lowercase} lowercase letter."
)
class NumericValidator:
def __init__(self, min_numeric=1):
self.min_numeric = min_numeric
def validate(self, password, user=None):
if sum(1 for char in password if char.isdigit()) < self.min_numeric:
raise ValidationError(
_(
"This password must contain at least %(min_numeric)d numeric character."
),
code="password_no_numeric_characters",
params={"min_numeric": self.min_numeric},
)
def get_help_text(self):
return _(
f"Your password must contain at least {self.min_numeric} numeric character."
)
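Each validator counts matching characters and raises a coded `ValidationError` when the count falls below its minimum, so Django can aggregate every failure into one report. A runnable sketch of the combined behavior, bypassing the settings wiring (assumes a configured Django environment, since the messages go through `gettext`):

```python
# Standalone check mirroring the validators defined above.
from django.core.exceptions import ValidationError

from api.validators import (
    LowercaseValidator,
    NumericValidator,
    SpecialCharactersValidator,
    UppercaseValidator,
)

validators = [
    SpecialCharactersValidator(min_special_characters=1),
    UppercaseValidator(min_uppercase=1),
    LowercaseValidator(min_lowercase=1),
    NumericValidator(min_numeric=1),
]

codes = []
for validator in validators:
    try:
        validator.validate("weakpassword")  # lowercase only
    except ValidationError as error:
        codes.append(error.code)

# Lowercase passes; the other three fail with their specific codes.
assert codes == [
    "password_no_special_characters",
    "password_no_uppercase_letters",
    "password_no_numeric_characters",
]
```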
+29
@@ -11,6 +11,7 @@ SECRET_KEY = env("SECRET_KEY", default="secret")
DEBUG = env.bool("DJANGO_DEBUG", default=False)
ALLOWED_HOSTS = ["localhost", "127.0.0.1"]
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
USE_X_FORWARDED_HOST = True
# Application definition
@@ -158,6 +159,30 @@ AUTH_PASSWORD_VALIDATORS = [
{
"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
},
{
"NAME": "api.validators.SpecialCharactersValidator",
"OPTIONS": {
"min_special_characters": 1,
},
},
{
"NAME": "api.validators.UppercaseValidator",
"OPTIONS": {
"min_uppercase": 1,
},
},
{
"NAME": "api.validators.LowercaseValidator",
"OPTIONS": {
"min_lowercase": 1,
},
},
{
"NAME": "api.validators.NumericValidator",
"OPTIONS": {
"min_numeric": 1,
},
},
]
SIMPLE_JWT = {
@@ -248,3 +273,7 @@ X_FRAME_OPTIONS = "DENY"
SECURE_REFERRER_POLICY = "strict-origin-when-cross-origin"
DJANGO_DELETION_BATCH_SIZE = env.int("DJANGO_DELETION_BATCH_SIZE", 5000)
# SAML requirement
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
+6 -3
@@ -4,6 +4,7 @@ from config.env import env
IGNORED_EXCEPTIONS = [
# Provider is not connected due to credentials errors
"is not connected",
"ProviderConnectionError",
# Authentication Errors from AWS
"InvalidToken",
"AccessDeniedException",
@@ -16,7 +17,7 @@ IGNORED_EXCEPTIONS = [
"InternalServerErrorException",
"AccessDenied",
"No Shodan API Key", # Shodan Check
"RequestLimitExceeded", # For now we don't want to log the RequestLimitExceeded errors
"RequestLimitExceeded", # For now, we don't want to log the RequestLimitExceeded errors
"ThrottlingException",
"Rate exceeded",
"SubscriptionRequiredException",
@@ -42,7 +43,9 @@ IGNORED_EXCEPTIONS = [
"AWSAccessKeyIDInvalidError",
"AWSSessionTokenExpiredError",
"EndpointConnectionError", # AWS Service is not available in a region
"Pool is closed", # The following comes from urllib3: eu-west-1 -- HTTPClientError[126]: An HTTP Client raised an unhandled exception: AWSHTTPSConnectionPool(host='hostname.s3.eu-west-1.amazonaws.com', port=443): Pool is closed.
# The following comes from urllib3: eu-west-1 -- HTTPClientError[126]: An HTTP Client raised an
# unhandled exception: AWSHTTPSConnectionPool(host='hostname.s3.eu-west-1.amazonaws.com', port=443): Pool is closed.
"Pool is closed",
# Authentication Errors from GCP
"ClientAuthenticationError",
"AuthorizationFailed",
@@ -71,7 +74,7 @@ IGNORED_EXCEPTIONS = [
def before_send(event, hint):
"""
before_send handles the Sentry events in order to sent them or not
before_send handles the Sentry events in order to send them or not
"""
# Ignore logs with the ignored_exceptions
# https://docs.python.org/3/library/logging.html#logrecord-objects
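`before_send` implements Sentry's standard filtering contract: return the event to forward it, or `None` to drop it. The hunk ends before the matching logic, so the exact field inspected is not visible here; a sketch of the pattern under that assumption:

```python
# Sketch of the drop-or-forward contract; which event field Prowler's
# before_send actually inspects is not shown in this hunk.
def before_send_sketch(event, hint):
    message = (event.get("logentry") or {}).get("message", "")
    if any(ignored in message for ignored in IGNORED_EXCEPTIONS):
        return None  # Sentry discards the event
    return event  # Sentry sends the event
```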
@@ -25,9 +25,18 @@ SOCIALACCOUNT_EMAIL_AUTHENTICATION = True
SOCIALACCOUNT_EMAIL_AUTHENTICATION_AUTO_CONNECT = True
SOCIALACCOUNT_ADAPTER = "api.adapters.ProwlerSocialAccountAdapter"
# SAML keys (TODO: Validate certificates)
# SAML_PUBLIC_CERT = env("SAML_PUBLIC_CERT", default="")
# SAML_PRIVATE_KEY = env("SAML_PRIVATE_KEY", default="")
# def inline(pem: str) -> str:
# return "".join(
# line.strip()
# for line in pem.splitlines()
# if "CERTIFICATE" not in line and "KEY" not in line
# )
# # SAML keys (TODO: Validate certificates)
# SAML_PUBLIC_CERT = inline(env("SAML_PUBLIC_CERT", default=""))
# SAML_PRIVATE_KEY = inline(env("SAML_PRIVATE_KEY", default=""))
SOCIALACCOUNT_PROVIDERS = {
"google": {
@@ -60,17 +69,14 @@ SOCIALACCOUNT_PROVIDERS = {
"entity_id": "urn:prowler.com:sp",
},
"advanced": {
# TODO: Validate certificates
# "x509cert": SAML_PUBLIC_CERT,
# "private_key": SAML_PRIVATE_KEY,
# "authn_request_signed": True,
# "want_assertion_signed": True,
# "want_message_signed": True,
# "want_assertion_signed": True,
"reject_idp_initiated_sso": False,
"name_id_format": "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress",
"authn_request_signed": False,
"logout_request_signed": False,
"logout_response_signed": False,
"want_assertion_encrypted": False,
"want_name_id_encrypted": False,
},
},
}
+132 -49
@@ -1,8 +1,9 @@
import logging
from datetime import datetime, timedelta, timezone
from unittest.mock import patch
from unittest.mock import MagicMock, patch
import pytest
from allauth.socialaccount.models import SocialLogin
from django.conf import settings
from django.db import connection as django_connection
from django.db import connections as django_connections
@@ -22,12 +23,16 @@ from api.models import (
Invitation,
LighthouseConfiguration,
Membership,
Processor,
Provider,
ProviderGroup,
ProviderSecret,
Resource,
ResourceTag,
ResourceTagMapping,
Role,
SAMLConfiguration,
SAMLDomainIndex,
Scan,
ScanSummary,
StateChoices,
@@ -388,6 +393,19 @@ def providers_fixture(tenants_fixture):
return provider1, provider2, provider3, provider4, provider5, provider6
@pytest.fixture
def processor_fixture(tenants_fixture):
tenant, *_ = tenants_fixture
processor = Processor.objects.create(
tenant_id=tenant.id,
processor_type="mutelist",
configuration="Mutelist:\n Accounts:\n *:\n Checks:\n iam_user_hardware_mfa_enabled:\n "
" Regions:\n - *\n Resources:\n - *",
)
return processor
@pytest.fixture
def provider_groups_fixture(tenants_fixture):
tenant, *_ = tenants_fixture
@@ -637,6 +655,7 @@ def findings_fixture(scans_fixture, resources_fixture):
check_metadata={
"CheckId": "test_check_id",
"Description": "test description apple sauce",
"servicename": "ec2",
},
first_seen_at="2024-01-02T00:00:00Z",
)
@@ -663,6 +682,7 @@ def findings_fixture(scans_fixture, resources_fixture):
check_metadata={
"CheckId": "test_check_id",
"Description": "test description orange juice",
"servicename": "s3",
},
first_seen_at="2024-01-02T00:00:00Z",
muted=True,
@@ -1118,62 +1138,125 @@ def latest_scan_finding(authenticated_client, providers_fixture, resources_fixtu
return finding
# @pytest.fixture
# def saml_setup(tenants_fixture):
# tenant_id = tenants_fixture[0].id
# domain = "example.com"
@pytest.fixture(scope="function")
def latest_scan_resource(authenticated_client, providers_fixture):
provider = providers_fixture[0]
tenant_id = str(providers_fixture[0].tenant_id)
scan = Scan.objects.create(
name="latest completed scan for resource",
provider=provider,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant_id=tenant_id,
)
resource = Resource.objects.create(
tenant_id=tenant_id,
provider=provider,
uid="latest_resource_uid",
name="Latest Resource",
region="us-east-1",
service="ec2",
type="instance",
metadata='{"test": "metadata"}',
details='{"test": "details"}',
)
# SAMLDomainIndex.objects.create(email_domain=domain, tenant_id=tenant_id)
resource_tag = ResourceTag.objects.create(
tenant_id=tenant_id,
key="environment",
value="test",
)
ResourceTagMapping.objects.create(
tenant_id=tenant_id,
resource=resource,
tag=resource_tag,
)
# metadata_xml = """<?xml version='1.0' encoding='UTF-8'?>
# <md:EntityDescriptor entityID='TEST' xmlns:md='urn:oasis:names:tc:SAML:2.0:metadata'>
# <md:IDPSSODescriptor WantAuthnRequestsSigned='false' protocolSupportEnumeration='urn:oasis:names:tc:SAML:2.0:protocol'>
# <md:KeyDescriptor use='signing'>
# <ds:KeyInfo xmlns:ds='http://www.w3.org/2000/09/xmldsig#'>
# <ds:X509Data>
# <ds:X509Certificate>TEST</ds:X509Certificate>
# </ds:X509Data>
# </ds:KeyInfo>
# </md:KeyDescriptor>
# <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
# <md:SingleSignOnService Binding='urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST' Location='https://TEST/sso/saml'/>
# <md:SingleSignOnService Binding='urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect' Location='https://TEST/sso/saml'/>
# </md:IDPSSODescriptor>
# </md:EntityDescriptor>
# """
# SAMLConfiguration.objects.create(
# tenant_id=str(tenant_id),
# email_domain=domain,
# metadata_xml=metadata_xml,
# )
finding = Finding.objects.create(
tenant_id=tenant_id,
uid="test_finding_uid_latest",
scan=scan,
delta="new",
status=Status.FAIL,
status_extended="test status extended ",
impact=Severity.critical,
impact_extended="test impact extended",
severity=Severity.critical,
raw_result={
"status": Status.FAIL,
"impact": Severity.critical,
"severity": Severity.critical,
},
tags={"test": "latest"},
check_id="test_check_id_latest",
check_metadata={
"CheckId": "test_check_id_latest",
"Description": "test description latest",
},
first_seen_at="2024-01-02T00:00:00Z",
)
finding.add_resources([resource])
# return {
# "email": f"user@{domain}",
# "domain": domain,
# "tenant_id": tenant_id,
# }
backfill_resource_scan_summaries(tenant_id, str(scan.id))
return resource
# @pytest.fixture
# def saml_sociallogin(users_fixture):
# user = users_fixture[0]
# user.email = "samlsso@acme.com"
# extra_data = {
# "firstName": ["Test"],
# "lastName": ["User"],
# "organization": ["Prowler"],
# "userType": ["member"],
# }
@pytest.fixture
def saml_setup(tenants_fixture):
tenant_id = tenants_fixture[0].id
domain = "prowler.com"
# account = MagicMock()
# account.provider = "saml"
# account.extra_data = extra_data
SAMLDomainIndex.objects.create(email_domain=domain, tenant_id=tenant_id)
# sociallogin = MagicMock(spec=SocialLogin)
# sociallogin.account = account
# sociallogin.user = user
metadata_xml = """<?xml version='1.0' encoding='UTF-8'?>
<md:EntityDescriptor entityID='TEST' xmlns:md='urn:oasis:names:tc:SAML:2.0:metadata'>
<md:IDPSSODescriptor WantAuthnRequestsSigned='false' protocolSupportEnumeration='urn:oasis:names:tc:SAML:2.0:protocol'>
<md:KeyDescriptor use='signing'>
<ds:KeyInfo xmlns:ds='http://www.w3.org/2000/09/xmldsig#'>
<ds:X509Data>
<ds:X509Certificate>TEST</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</md:KeyDescriptor>
<md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
<md:SingleSignOnService Binding='urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST' Location='https://TEST/sso/saml'/>
<md:SingleSignOnService Binding='urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect' Location='https://TEST/sso/saml'/>
</md:IDPSSODescriptor>
</md:EntityDescriptor>
"""
SAMLConfiguration.objects.create(
tenant_id=str(tenant_id),
email_domain=domain,
metadata_xml=metadata_xml,
)
# return sociallogin
return {
"email": f"user@{domain}",
"domain": domain,
"tenant_id": tenant_id,
}
@pytest.fixture
def saml_sociallogin(users_fixture):
user = users_fixture[0]
user.email = "samlsso@acme.com"
extra_data = {
"firstName": ["Test"],
"lastName": ["User"],
"organization": ["Prowler"],
"userType": ["member"],
}
account = MagicMock()
account.provider = "saml"
account.extra_data = extra_data
sociallogin = MagicMock(spec=SocialLogin)
sociallogin.account = account
sociallogin.user = user
return sociallogin
def get_authorization_header(access_token: str) -> dict:
+76 -3
@@ -6,7 +6,7 @@ from datetime import datetime, timezone
from celery.utils.log import get_task_logger
from config.settings.celery import CELERY_DEADLOCK_ATTEMPTS
from django.db import IntegrityError, OperationalError
from django.db.models import Case, Count, IntegerField, Sum, When
from django.db.models import Case, Count, IntegerField, OuterRef, Subquery, Sum, When
from tasks.utils import CustomEncoder
from api.compliance import (
@@ -14,9 +14,11 @@ from api.compliance import (
generate_scan_compliance,
)
from api.db_utils import create_objects_in_batches, rls_transaction
from api.exceptions import ProviderConnectionError
from api.models import (
ComplianceRequirementOverview,
Finding,
Processor,
Provider,
Resource,
ResourceScanSummary,
@@ -132,14 +134,28 @@ def perform_prowler_scan(
scan_instance.started_at = datetime.now(tz=timezone.utc)
scan_instance.save()
# Find the mutelist processor if it exists
with rls_transaction(tenant_id):
try:
mutelist_processor = Processor.objects.get(
tenant_id=tenant_id, processor_type=Processor.ProcessorChoices.MUTELIST
)
except Processor.DoesNotExist:
mutelist_processor = None
except Exception as e:
logger.error(f"Error processing mutelist rules: {e}")
mutelist_processor = None
try:
with rls_transaction(tenant_id):
try:
prowler_provider = initialize_prowler_provider(provider_instance)
prowler_provider = initialize_prowler_provider(
provider_instance, mutelist_processor
)
provider_instance.connected = True
except Exception as e:
provider_instance.connected = False
exc = ValueError(
exc = ProviderConnectionError(
f"Provider {provider_instance.provider} is not connected: {e}"
)
finally:
@@ -274,6 +290,9 @@ def perform_prowler_scan(
if not last_first_seen_at:
last_first_seen_at = datetime.now(tz=timezone.utc)
# If the finding is muted at this point, the reason must be the configured Mutelist
muted_reason = "Muted by mutelist" if finding.muted else None
# Create the finding
finding_instance = Finding.objects.create(
tenant_id=tenant_id,
@@ -289,6 +308,7 @@ def perform_prowler_scan(
scan=scan_instance,
first_seen_at=last_first_seen_at,
muted=finding.muted,
muted_reason=muted_reason,
compliance=finding.compliance,
)
finding_instance.add_resources([resource_instance])
@@ -356,12 +376,16 @@ def perform_prowler_scan(
def aggregate_findings(tenant_id: str, scan_id: str):
"""
Aggregates findings for a given scan and stores the results in the ScanSummary table.
Also updates the failed_findings_count for each resource based on the latest findings.
This function retrieves all findings associated with a given `scan_id` and calculates various
metrics such as counts of failed, passed, and muted findings, as well as their deltas (new,
changed, unchanged). The results are grouped by `check_id`, `service`, `severity`, and `region`.
These aggregated metrics are then stored in the `ScanSummary` table.
Additionally, it updates the failed_findings_count field for each resource based on the most
recent findings for each finding.uid.
Args:
tenant_id (str): The ID of the tenant to which the scan belongs.
scan_id (str): The ID of the scan for which findings need to be aggregated.
@@ -381,6 +405,8 @@ def aggregate_findings(tenant_id: str, scan_id: str):
- muted_new: Muted findings with a delta of 'new'.
- muted_changed: Muted findings with a delta of 'changed'.
"""
_update_resource_failed_findings_count(tenant_id, scan_id)
with rls_transaction(tenant_id):
findings = Finding.objects.filter(tenant_id=tenant_id, scan_id=scan_id)
@@ -505,6 +531,53 @@ def aggregate_findings(tenant_id: str, scan_id: str):
ScanSummary.objects.bulk_create(scan_aggregations, batch_size=3000)
def _update_resource_failed_findings_count(tenant_id: str, scan_id: str):
"""
Update the failed_findings_count field for resources based on the latest findings.
This function calculates the number of failed findings for each resource by:
1. Getting the latest finding for each finding.uid
2. Counting failed findings per resource
3. Updating the failed_findings_count field for each resource
Args:
tenant_id (str): The ID of the tenant to which the scan belongs.
scan_id (str): The ID of the scan for which to update resource counts.
"""
with rls_transaction(tenant_id):
scan = Scan.objects.get(pk=scan_id)
provider_id = scan.provider_id
resources = list(
Resource.all_objects.filter(tenant_id=tenant_id, provider_id=provider_id)
)
# For each resource, calculate failed findings count based on latest findings
for resource in resources:
with rls_transaction(tenant_id):
# Get the latest finding for each finding.uid that affects this resource
latest_findings_subquery = (
Finding.all_objects.filter(
tenant_id=tenant_id, uid=OuterRef("uid"), resources=resource
)
.order_by("-inserted_at")
.values("id")[:1]
)
# Count failed findings from the latest findings
failed_count = Finding.all_objects.filter(
tenant_id=tenant_id,
resources=resource,
id__in=Subquery(latest_findings_subquery),
status=FindingStatus.FAIL,
muted=False,
).count()
resource.failed_findings_count = failed_count
resource.save(update_fields=["failed_findings_count"])
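The loop body is a standard "latest row per group" ORM pattern, worth isolating: `OuterRef("uid")` correlates the subquery with each row of the outer filter, so a finding survives the `id__in` filter only if it is the newest version of its own uid. The same pattern with the correlation spelled out (import paths are assumed):

```python
# Mirrors the loop body above; Finding/FindingStatus import locations assumed.
from django.db.models import OuterRef, Subquery

# Correlated subquery: for each OUTER finding row, pick the id of the newest
# finding that shares its uid and touches the same resource.
latest_per_uid = (
    Finding.all_objects.filter(uid=OuterRef("uid"), resources=resource)
    .order_by("-inserted_at")
    .values("id")[:1]
)

# Roughly: SELECT COUNT(*) FROM findings f
#          WHERE f.id = (latest id for f.uid) AND f.status = 'FAIL' AND NOT f.muted
failed_count = Finding.all_objects.filter(
    resources=resource,
    id__in=Subquery(latest_per_uid),
    status=FindingStatus.FAIL,
    muted=False,
).count()
```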
def create_compliance_requirements(tenant_id: str, scan_id: str):
"""
Create detailed compliance requirement overview records for a scan.
+25 -19
@@ -37,6 +37,26 @@ from prowler.lib.outputs.finding import Finding as FindingOutput
logger = get_task_logger(__name__)
def _perform_scan_complete_tasks(tenant_id: str, scan_id: str, provider_id: str):
"""
Helper function to perform tasks after a scan is completed.
Args:
tenant_id (str): The tenant ID under which the scan was performed.
scan_id (str): The ID of the scan that was performed.
provider_id (str): The primary key of the Provider instance that was scanned.
"""
create_compliance_requirements_task.apply_async(
kwargs={"tenant_id": tenant_id, "scan_id": scan_id}
)
chain(
perform_scan_summary_task.si(tenant_id=tenant_id, scan_id=scan_id),
generate_outputs_task.si(
scan_id=scan_id, provider_id=provider_id, tenant_id=tenant_id
),
).apply_async()
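The helper relies on `.si()` building immutable signatures: each task in the chain gets only its explicit kwargs, and the previous task's return value is discarded rather than injected as the first argument. A minimal runnable illustration with hypothetical tasks (run eagerly just to show the semantics):

```python
# Hypothetical tasks; not part of the diff. Eager mode runs them in-process.
from celery import Celery, chain

app = Celery("example")
app.conf.task_always_eager = True


@app.task
def add(x, y):
    return x + y


@app.task
def report(total=None, label=""):
    return f"{label}:{total}"


# Mutable .s(): add's result (5) is injected as report's first argument.
assert chain(add.s(2, 3), report.s(label="sum")).apply_async().get() == "sum:5"

# Immutable .si(): report keeps only its own kwargs, as in
# _perform_scan_complete_tasks above.
assert chain(add.s(2, 3), report.si(label="sum")).apply_async().get() == "sum:None"
```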
@shared_task(base=RLSTask, name="provider-connection-check")
@set_tenant
def check_provider_connection_task(provider_id: str):
@@ -103,13 +123,7 @@ def perform_scan_task(
checks_to_execute=checks_to_execute,
)
chain(
perform_scan_summary_task.si(tenant_id, scan_id),
create_compliance_requirements_task.si(tenant_id=tenant_id, scan_id=scan_id),
generate_outputs.si(
scan_id=scan_id, provider_id=provider_id, tenant_id=tenant_id
),
).apply_async()
_perform_scan_complete_tasks(tenant_id, scan_id, provider_id)
return result
@@ -214,20 +228,12 @@ def perform_scheduled_scan_task(self, tenant_id: str, provider_id: str):
scheduler_task_id=periodic_task_instance.id,
)
chain(
perform_scan_summary_task.si(tenant_id, scan_instance.id),
create_compliance_requirements_task.si(
tenant_id=tenant_id, scan_id=str(scan_instance.id)
),
generate_outputs.si(
scan_id=str(scan_instance.id), provider_id=provider_id, tenant_id=tenant_id
),
).apply_async()
_perform_scan_complete_tasks(tenant_id, str(scan_instance.id), provider_id)
return result
@shared_task(name="scan-summary")
@shared_task(name="scan-summary", queue="overview")
def perform_scan_summary_task(tenant_id: str, scan_id: str):
return aggregate_findings(tenant_id=tenant_id, scan_id=scan_id)
@@ -243,7 +249,7 @@ def delete_tenant_task(tenant_id: str):
queue="scan-reports",
)
@set_tenant(keep_tenant=True)
def generate_outputs(scan_id: str, provider_id: str, tenant_id: str):
def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
"""
Process findings in batches and generate output files in multiple formats.
@@ -381,7 +387,7 @@ def backfill_scan_resource_summaries_task(tenant_id: str, scan_id: str):
return backfill_resource_scan_summaries(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(base=RLSTask, name="scan-compliance-overviews")
@shared_task(base=RLSTask, name="scan-compliance-overviews", queue="overview")
def create_compliance_requirements_task(tenant_id: str, scan_id: str):
"""
Creates detailed compliance requirement records for a scan.
+86 -1
View File
@@ -7,11 +7,13 @@ import pytest
from tasks.jobs.scan import (
_create_finding_delta,
_store_resources,
_update_resource_failed_findings_count,
create_compliance_requirements,
perform_prowler_scan,
)
from tasks.utils import CustomEncoder
from api.exceptions import ProviderConnectionError
from api.models import (
ComplianceRequirementOverview,
Finding,
@@ -158,6 +160,7 @@ class TestPerformScan:
assert scan_finding.raw_result == finding.raw
assert scan_finding.muted
assert scan_finding.compliance == finding.compliance
assert scan_finding.muted_reason == "Muted by mutelist"
assert scan_resource.tenant == tenant
assert scan_resource.uid == finding.resource_uid
@@ -203,7 +206,7 @@ class TestPerformScan:
provider_id = str(provider.id)
checks_to_execute = ["check1", "check2"]
with pytest.raises(ValueError):
with pytest.raises(ProviderConnectionError):
perform_prowler_scan(tenant_id, scan_id, provider_id, checks_to_execute)
scan.refresh_from_db()
@@ -1164,3 +1167,85 @@ class TestCreateComplianceRequirements:
assert len(req_2_objects) == 2
assert all(obj.requirement_status == "PASS" for obj in req_1_objects)
assert all(obj.requirement_status == "FAIL" for obj in req_2_objects)
@pytest.mark.django_db
class TestUpdateResourceFailedFindingsCount:
@patch("api.models.Resource.all_objects.filter")
@patch("api.models.Finding.all_objects.filter")
def test_failed_findings_count_update(
self,
mock_finding_filter,
mock_resource_filter,
tenants_fixture,
scans_fixture,
providers_fixture,
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
scan.provider = provider
scan.save()
tenant_id = str(tenant.id)
scan_id = str(scan.id)
resource1 = MagicMock()
resource1.uid = "res-1"
resource1.failed_findings_count = None
resource1.save = MagicMock()
resource2 = MagicMock()
resource2.uid = "res-2"
resource2.failed_findings_count = None
resource2.save = MagicMock()
mock_resource_filter.return_value = [resource1, resource2]
fake_subquery_qs = MagicMock()
fake_subquery_qs.order_by.return_value = fake_subquery_qs
fake_subquery_qs.values.return_value = fake_subquery_qs
fake_subquery_qs.__getitem__.return_value = fake_subquery_qs
def finding_filter_side_effect(*args, **kwargs):
if "status" in kwargs:
qs_count = MagicMock()
if kwargs.get("resources") == resource1:
qs_count.count.return_value = 3
else:
qs_count.count.return_value = 0
return qs_count
return fake_subquery_qs
mock_finding_filter.side_effect = finding_filter_side_effect
_update_resource_failed_findings_count(tenant_id, scan_id)
# resource1 should have been updated to 3
assert resource1.failed_findings_count == 3
resource1.save.assert_called_once_with(update_fields=["failed_findings_count"])
# resource2 should have been updated to 0
assert resource2.failed_findings_count == 0
resource2.save.assert_called_once_with(update_fields=["failed_findings_count"])
@patch("api.models.Resource.all_objects.filter", return_value=[])
@patch("api.models.Finding.all_objects.filter")
def test_no_resources_no_error(
self,
mock_finding_filter,
mock_resource_filter,
tenants_fixture,
scans_fixture,
providers_fixture,
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
scan.provider = provider
scan.save()
_update_resource_failed_findings_count(str(tenant.id), str(scan.id))
mock_finding_filter.assert_not_called()
+31 -8
View File
@@ -3,9 +3,10 @@ from pathlib import Path
from unittest.mock import MagicMock, patch
import pytest
from tasks.tasks import generate_outputs
from tasks.tasks import _perform_scan_complete_tasks, generate_outputs_task
# TODO Move this to outputs/reports jobs
@pytest.mark.django_db
class TestGenerateOutputs:
def setup_method(self):
@@ -17,7 +18,7 @@ class TestGenerateOutputs:
with patch("tasks.tasks.ScanSummary.objects.filter") as mock_filter:
mock_filter.return_value.exists.return_value = False
result = generate_outputs(
result = generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
@@ -99,7 +100,7 @@ class TestGenerateOutputs:
mock_compress.return_value = "/tmp/zipped.zip"
mock_upload.return_value = "s3://bucket/zipped.zip"
result = generate_outputs(
result = generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
@@ -150,7 +151,7 @@ class TestGenerateOutputs:
True,
]
result = generate_outputs(
result = generate_outputs_task(
scan_id="scan",
provider_id="provider",
tenant_id=self.tenant_id,
@@ -208,7 +209,7 @@ class TestGenerateOutputs:
{"aws": [(lambda x: True, MagicMock())]},
),
):
generate_outputs(
generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
@@ -276,7 +277,7 @@ class TestGenerateOutputs:
}
},
):
result = generate_outputs(
result = generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
@@ -346,7 +347,7 @@ class TestGenerateOutputs:
):
mock_summary.return_value.exists.return_value = True
result = generate_outputs(
result = generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
@@ -407,9 +408,31 @@ class TestGenerateOutputs:
),
):
with caplog.at_level("ERROR"):
generate_outputs(
generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
assert "Error deleting output files" in caplog.text
class TestScanCompleteTasks:
@patch("tasks.tasks.create_compliance_requirements_task.apply_async")
@patch("tasks.tasks.perform_scan_summary_task.si")
@patch("tasks.tasks.generate_outputs_task.si")
def test_scan_complete_tasks(
self, mock_outputs_task, mock_scan_summary_task, mock_compliance_tasks
):
_perform_scan_complete_tasks("tenant-id", "scan-id", "provider-id")
mock_compliance_tasks.assert_called_once_with(
kwargs={"tenant_id": "tenant-id", "scan_id": "scan-id"},
)
mock_scan_summary_task.assert_called_once_with(
scan_id="scan-id",
tenant_id="tenant-id",
)
mock_outputs_task.assert_called_once_with(
scan_id="scan-id",
provider_id="provider-id",
tenant_id="tenant-id",
)
+134
View File
@@ -0,0 +1,134 @@
# Extending Prowler Lighthouse
This guide helps developers customize and extend Prowler Lighthouse by adding or modifying AI agents.
## Understanding AI Agents
AI agents combine Large Language Models (LLMs) with specialized tools that provide environmental context. These tools can include API calls, system command execution, or any function-wrapped capability.
### Types of AI Agents
AI agents fall into two main categories:
- **Autonomous Agents**: Freely choose from available tools to complete tasks, adapting their approach based on context. They decide which tools to use and when.
- **Workflow Agents**: Follow structured paths with predefined logic. They execute specific tool sequences and can include conditional logic.
Prowler Lighthouse is an autonomous agent, selecting the right tool(s) based on the user's query.
???+ note
To learn more about AI agents, read [Anthropic's blog post on building effective agents](https://www.anthropic.com/engineering/building-effective-agents).
### LLM Dependency
The autonomous nature of agents depends on the underlying LLM. Autonomous agents using identical system prompts and tools but powered by different LLM providers might approach user queries differently. An agent with one LLM might solve a problem efficiently, while with another it might take a different route or fail entirely.
After evaluating multiple LLM providers (OpenAI, Gemini, Claude, Llama) based on tool calling features and response accuracy, we recommend using the `gpt-4o` model.
## Prowler Lighthouse Architecture
Prowler Lighthouse uses a multi-agent architecture orchestrated by the [Langgraph-Supervisor](https://www.npmjs.com/package/@langchain/langgraph-supervisor) library.
### Architecture Components
<img src="../../tutorials/img/lighthouse-architecture.png" alt="Prowler Lighthouse architecture">
Prowler Lighthouse integrates with the NextJS application:
- The [Langgraph-Supervisor](https://www.npmjs.com/package/@langchain/langgraph-supervisor) library integrates directly with NextJS
- The system uses the authenticated user session to interact with the Prowler API server
- Agents only access data the current user is authorized to view
- Session management operates automatically, ensuring Role-Based Access Control (RBAC) is maintained
## Available Prowler AI Agents
The following specialized AI agents are available in Prowler:
### Agent Overview
- **provider_agent**: Fetches information about cloud providers connected to Prowler
- **user_info_agent**: Retrieves information about Prowler users
- **scans_agent**: Fetches information about Prowler scans
- **compliance_agent**: Retrieves compliance overviews across scans
- **findings_agent**: Fetches information about individual findings across scans
- **overview_agent**: Retrieves overview information (providers, findings by status and severity, etc.)
## How to Add New Capabilities
### Updating the Supervisor Prompt
The supervisor agent controls system behavior, tone, and capabilities. You can find the supervisor prompt at: [https://github.com/prowler-cloud/prowler/blob/master/ui/lib/lighthouse/prompts.ts](https://github.com/prowler-cloud/prowler/blob/master/ui/lib/lighthouse/prompts.ts)
#### Supervisor Prompt Modifications
Modifying the supervisor prompt allows you to:
- Change personality or response style
- Add new high-level capabilities
- Modify task delegation to specialized agents
- Set up guardrails (query types to answer or decline)
???+ note
The supervisor agent should not have its own tools. This design keeps the system modular and maintainable.
### How to Create New Specialized Agents
The supervisor agent and all specialized agents are defined in the `route.ts` file. The supervisor agent uses [langgraph-supervisor](https://www.npmjs.com/package/@langchain/langgraph-supervisor), while other agents use the prebuilt [create-react-agent](https://langchain-ai.github.io/langgraphjs/how-tos/create-react-agent/).
To add new capabilities or allow Lighthouse to interact with other APIs, create additional specialized agents:
1. First determine what the new agent should do. Create a detailed prompt defining the agent's purpose and capabilities. You can see an example [here](https://github.com/prowler-cloud/prowler/blob/master/ui/lib/lighthouse/prompts.ts#L359-L385).
???+ note
Ensure that the new agent's capabilities don't collide with existing agents. For example, if there's already a *findings_agent* that talks to findings APIs, don't create a new agent to do the same.
2. Create the necessary tools for the agents to access specific data or perform actions. A tool is a specialized function that extends the capabilities of the LLM by allowing it to access external data or APIs. A tool is triggered by the LLM based on the tool's description and the user's query.
For example, the description of `getScanTool` is "Fetches detailed information about a specific scan by its ID." If the description doesn't convey what the tool is capable of doing, the LLM will not invoke the function. If the description of `getScanTool` were set to something random or not set at all, the LLM would not be able to answer queries like "Give me the critical issues from the scan ID xxxxxxxxxxxxxxx".
???+ note
Ensure that one tool is added to one agent only. Adding tools is optional. There can be agents with no tools at all.
3. Use the `createReactAgent` function to define a new agent. For example, the following agent is named "roles_agent" and can call the *getRolesTool* and *getRoleTool* tools:
```js
const rolesAgent = createReactAgent({
  llm: llm,
  tools: [getRolesTool, getRoleTool],
  name: "roles_agent",
  prompt: rolesAgentPrompt,
});
```
4. Create a detailed prompt defining the agent's purpose and capabilities.
5. Add the new agent to the available agents list:
```js
const agents = [
  userInfoAgent,
  providerAgent,
  overviewAgent,
  scansAgent,
  complianceAgent,
  findingsAgent,
  rolesAgent, // New agent added here
];

// Create supervisor workflow
const workflow = createSupervisor({
  agents: agents,
  llm: supervisorllm,
  prompt: supervisorPrompt,
  outputMode: "last_message",
});
```
6. Update the supervisor's system prompt to summarize the new agent's capabilities.
### Best Practices for Agent Development
When developing new agents or capabilities:
- **Clear Responsibility Boundaries**: Each agent should have a defined purpose with minimal overlap. No two agents should share the same tools, and no two tools should access the same Prowler APIs.
- **Minimal Data Access**: Agents should only request the data they need, keeping requests specific to minimize context window usage, cost, and response time.
- **Thorough Prompting:** Ensure agent prompts include clear instructions about:
- The agent's purpose and limitations
- How to use its tools
- How to format responses for the supervisor
- Error handling procedures (Optional)
- **Security Considerations:** Agents should never modify data or access sensitive information like secrets or credentials.
- **Testing:** Thoroughly test new agents with various queries before deploying to production.
+45 -18
View File
@@ -156,7 +156,7 @@ Follow the instructions in the [Create Prowler Service Principal](../tutorials/m
If you don't add the external API permissions described in the mentioned section above you will only be able to run the checks that work through MS Graph. This means that you won't run all the provider.
If you want to scan all the checks from M365 you will need to use the recommended authentication method or add the external API permissions.
If you want to scan all the checks from M365 you will need to add the required permissions to the service principal application. Refer to the [Needed permissions](/docs/tutorials/microsoft365/getting-started-m365.md#needed-permissions) section for more information.
### Service Principal and User Credentials authentication
@@ -172,9 +172,10 @@ export M365_USER="your_email@example.com"
export M365_PASSWORD="examplepassword"
```
These two new environment variables are **required** to execute the PowerShell modules needed to retrieve information from M365 services. Prowler uses Service Principal authentication to access Microsoft Graph and user credentials to authenticate to Microsoft PowerShell modules.
These two new environment variables are **required** in this authentication method to execute the PowerShell modules needed to retrieve information from M365 services. Prowler uses Service Principal authentication to access Microsoft Graph and user credentials to authenticate to Microsoft PowerShell modules.
- `M365_USER` should be your Microsoft account email using the **assigned domain in the tenant**. This means it must look like `example@YourCompany.onmicrosoft.com` or `example@YourCompany.com`, but it must be the exact domain assigned to that user in the tenant.
???+ warning
If the user is newly created, you need to sign in with that account first, as Microsoft will prompt you to change the password. If you don't complete this step, user authentication will fail because Microsoft marks the initial password as expired.
@@ -207,30 +208,56 @@ Since this is a delegated permission authentication method, necessary permission
### Needed permissions
Prowler for M365 requires two types of permission scopes to be set (if you want to run the full provider including PowerShell checks). Both must be configured using Microsoft Entra ID:
Prowler for M365 requires different permission scopes depending on the authentication method you choose. The permissions must be configured using Microsoft Entra ID:
- **Service Principal Application Permissions**: These are set at the **application** level and are used to retrieve data from the identity being assessed:
- `AuditLog.Read.All`: Required for Entra service.
- `Directory.Read.All`: Required for all services.
- `Policy.Read.All`: Required for all services.
- `SharePointTenantSettings.Read.All`: Required for SharePoint service.
- `User.Read` (IMPORTANT: this must be set as **delegated**): Required for the sign-in.
- `Exchange.ManageAsApp` from external API `Office 365 Exchange Online`: Required for Exchange PowerShell module app authentication. You also need to assign the `Exchange Administrator` role to the app.
- `application_access` from external API `Skype and Teams Tenant Admin API`: Required for Teams PowerShell module app authentication.
#### For Service Principal Authentication (`--sp-env-auth`) - Recommended
When using service principal authentication, you need to add the following **Application Permissions** to your app registration:
**Microsoft Graph API Permissions:**
- `AuditLog.Read.All`: Required for Entra service.
- `Directory.Read.All`: Required for all services.
- `Policy.Read.All`: Required for all services.
- `SharePointTenantSettings.Read.All`: Required for SharePoint service.
- `User.Read` (IMPORTANT: this must be set as **delegated**): Required for the sign-in.
**External API Permissions:**
- `Exchange.ManageAsApp` from external API `Office 365 Exchange Online`: Required for Exchange PowerShell module app authentication. You also need to assign the `Exchange Administrator` role to the app.
- `application_access` from external API `Skype and Teams Tenant Admin API`: Required for Teams PowerShell module app authentication.
???+ note
You can replace `Directory.Read.All` with `Domain.Read.All`, which is a more restrictive permission, but then you won't be able to run the Entra checks related to DirectoryRoles and GetUsers.
> If you do this, you will also need to add the `Organization.Read.All` permission to the service principal application in order to authenticate.
- **Powershell Modules Permissions** (if using user credentials): These are set at the `M365_USER` level, so the user used to run Prowler must have one of the following roles:
- `Global Reader` (recommended): this allows you to read all roles needed.
- `Exchange Administrator` and `Teams Administrator`: user needs both roles but with this [roles](https://learn.microsoft.com/en-us/exchange/permissions-exo/permissions-exo#microsoft-365-permissions-in-exchange-online) you can access to the same information as a Global Reader (since only read access is needed, Global Reader is recommended).
???+ warning
With service principal only authentication, you can only run checks that work through MS Graph API. Some checks that require PowerShell modules will not be executed.
In order to know how to assign those permissions and roles follow the instructions in the Microsoft Entra ID [permissions](../tutorials/microsoft365/getting-started-m365.md#grant-required-api-permissions) and [roles](../tutorials/microsoft365/getting-started-m365.md#assign-required-roles-to-your-user) section.
#### For Service Principal + User Credentials Authentication (`--env-auth`)
When using service principal with user credentials authentication, you need **both** sets of permissions:
**1. Service Principal Application Permissions**:
- You **will need** all the Microsoft Graph API permissions listed above.
- You **won't need** the External API permissions listed above.
**2. User-Level Permissions**: These are set at the `M365_USER` level, so the user used to run Prowler must have one of the following roles:
- `Global Reader` (recommended): this allows you to read all roles needed.
- `Exchange Administrator` and `Teams Administrator`: the user needs both roles, but with these [roles](https://learn.microsoft.com/en-us/exchange/permissions-exo/permissions-exo#microsoft-365-permissions-in-exchange-online) you can access the same information as a Global Reader (since only read access is needed, Global Reader is recommended).
???+ note
This is the **recommended authentication method** because it allows you to run the full M365 provider including PowerShell checks, providing complete coverage of all available security checks.
#### For Browser Authentication (`--browser-auth`)
When using browser authentication, permissions are delegated to the user, so the user must have the appropriate permissions rather than the application.
???+ warning
With browser authentication, you will only be able to run checks that work through MS Graph API. PowerShell module checks will not be executed.
---
**To assign these permissions and roles**, follow the instructions in the Microsoft Entra ID [permissions](../tutorials/microsoft365/getting-started-m365.md#grant-required-api-permissions) and [roles](../tutorials/microsoft365/getting-started-m365.md#assign-required-roles-to-your-user) section.
### Supported PowerShell versions
Binary file not shown.

Before

Width:  |  Height:  |  Size: 119 KiB

After

Width:  |  Height:  |  Size: 433 KiB

+207 -132
View File
@@ -1,152 +1,227 @@
# Prowler Fixers (remediations)
Prowler supports automated remediation ("fixers") for certain findings. This system is extensible and provider-agnostic, allowing you to implement fixers for AWS, Azure, GCP, and M365 using a unified interface.
<img src="../img/fixer.png">
---
## Overview
- **Fixers** are Python classes that encapsulate the logic to remediate a failed check.
- Each provider has its own base fixer class, inheriting from a common abstract base (`Fixer`).
- Fixers are automatically discovered and invoked by Prowler when the `--fixer` flag is used.
Right now, fixers are only available through the CLI.
---
## How to Use Fixers
To run fixers for failed findings:
```sh
prowler <provider> -c <check_id_1> <check_id_2> ... --fixer
```
<img src="../img/fixer-info.png">
<img src="../img/fixer-no-needed.png">
To list all available fixers for a provider:
```sh
prowler <provider> --list-fixers
```
> **Note:** Some fixers may incur additional costs (e.g., enabling certain cloud services like `Access Analyzer`, `GuardDuty`, and `SecurityHub` in AWS).
---
## Fixer Class Structure
### Base Class
All fixers inherit from the abstract `Fixer` class (`prowler/lib/fix/fixer.py`). This class defines the required interface and common logic.
**Key methods and properties:**
- `__init__(description, cost_impact=False, cost_description=None)`: Sets metadata for the fixer.
- `_get_fixer_info()`: Returns a dictionary with fixer metadata.
- `fix(finding=None, **kwargs)`: Abstract method. Must be implemented by each fixer to perform the remediation.
- `get_fixer_for_finding(finding)`: Factory method to dynamically load the correct fixer for a finding.
- `run_fixer(findings)`: Runs the fixer(s) for one or more findings.
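For orientation, here is a minimal sketch of that interface, assuming the signatures listed above (the actual implementation in `prowler/lib/fix/fixer.py` has more logic, and `get_fixer_for_finding`/`run_fixer` are omitted for brevity):

```python
from abc import ABC, abstractmethod


class Fixer(ABC):
    # Minimal sketch of the abstract base; see prowler/lib/fix/fixer.py
    def __init__(self, description, cost_impact=False, cost_description=None):
        # Metadata shared by every fixer
        self.description = description
        self.cost_impact = cost_impact
        self.cost_description = cost_description

    def _get_fixer_info(self):
        # Dictionary consumed by --fixer-info and reporting
        return {
            "description": self.description,
            "cost_impact": self.cost_impact,
            "cost_description": self.cost_description,
        }

    @abstractmethod
    def fix(self, finding=None, **kwargs):
        # Implemented by each concrete fixer; True means remediation succeeded
        ...
```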
### Provider-Specific Base Classes
Each provider extends the base class to add provider-specific logic and metadata:
- **AWS:** `AWSFixer` (`prowler/providers/aws/lib/fix/fixer.py`)
- **Azure:** `AzureFixer` (`prowler/providers/azure/lib/fix/fixer.py`)
- **GCP:** `GCPFixer` (`prowler/providers/gcp/lib/fix/fixer.py`)
- **M365:** `M365Fixer` (`prowler/providers/m365/lib/fix/fixer.py`)
These classes may add fields such as required permissions, IAM policies, or provider-specific client handling.
---
## Writing a Fixer
### 1. **Location and Naming**
- Place your fixer in the checks directory, named `<check_id>_fixer.py`.
- The fixer class should be named in PascalCase, matching the check ID, ending with `Fixer`.
Example: For `ec2_ebs_default_encryption`, use `Ec2EbsDefaultEncryptionFixer`.
### 2. **Class Definition**
- Inherit from the provider's base fixer class.
- Implement the `fix()` method. This method receives a finding and/or keyword arguments and must return `True` if the remediation was successful, `False` otherwise.
**Example (AWS):**
```python
from prowler.providers.aws.lib.fix.fixer import AWSFixer


class Ec2EbsDefaultEncryptionFixer(AWSFixer):
    def __init__(self):
        super().__init__(
            description="Enable EBS encryption by default in a region.",
            service="ec2",
            iam_policy_required={
                "Action": ["ec2:EnableEbsEncryptionByDefault"],
                "Resource": "*"
            }
        )

    def fix(self, finding=None, **kwargs):
        # Remediation logic here
        return True
```
**Example (Azure):**
```python
from prowler.providers.azure.lib.fix.fixer import AzureFixer


class AppFunctionFtpsDeploymentDisabledFixer(AzureFixer):
    def __init__(self):
        super().__init__(
            description="Disable FTP/FTPS deployments for Azure Functions.",
            service="app",
            permissions_required={
                "actions": [
                    "Microsoft.Web/sites/write",
                    "Microsoft.Web/sites/config/write"
                ]
            }
        )

    def fix(self, finding=None, **kwargs):
        # Remediation logic here
        return True
```
**Example (GCP):**
```python
from prowler.providers.gcp.lib.fix.fixer import GCPFixer


class ComputeInstancePublicIPFixer(GCPFixer):
    def __init__(self):
        super().__init__(
            description="Remove public IP from Compute Engine instance.",
            service="compute",
            iam_policy_required={
                "roles": ["roles/compute.instanceAdmin.v1"]
            }
        )

    def fix(self, finding=None, **kwargs):
        # Remediation logic here
        return True
```
**Example (M365):**
```python
from prowler.providers.m365.lib.fix.fixer import M365Fixer


class ExampleM365Fixer(M365Fixer):
    # Hypothetical M365 fixer; class name, service, and permissions are illustrative
    def __init__(self):
        super().__init__(
            description="Example remediation for an M365 check.",
            service="exchange",
            permissions_required={
                "roles": ["Exchange Administrator"]
            }
        )

    def fix(self, finding=None, **kwargs):
        # Remediation logic here
        return True
```
---
## Fixer info
Each fixer should provide:
- **description:** What the fixer does.
- **cost_impact:** Whether the remediation may incur costs.
- **cost_description:** Details about potential costs (if any).
For some providers, there will be additional information that needs to be added to the fixer info, like:
- **service:** The cloud service affected.
- **permissions/IAM policy required:** The minimum permissions needed for the fixer to work.
To display this information, use the `--fixer-info` flag, which prints the fixer metadata in a readable format.
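For illustration, the metadata for the `Ec2EbsDefaultEncryptionFixer` example above might look like this (the keys follow the constructor arguments shown earlier; treat the shape as a sketch rather than the literal output):

```python
{
    "description": "Enable EBS encryption by default in a region.",
    "cost_impact": False,
    "cost_description": None,
    "service": "ec2",
    "iam_policy_required": {
        "Action": ["ec2:EnableEbsEncryptionByDefault"],
        "Resource": "*",
    },
}
```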
---
## Fixer Config File
Some fixers support configurable parameters.
You can use the default config file at `prowler/config/fixer_config.yaml` or provide your own with `--fixer-config`.
**Example YAML:**
```yaml
aws:
  ec2_ebs_default_encryption: {}
  iam_password_policy:
    MinimumPasswordLength: 14
    RequireSymbols: True
    # ...
azure:
  app_function_ftps_deployment_disabled:
    ftps_state: "Disabled"
```
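As a rough illustration of the structure, the file parses into nested dictionaries keyed by provider and check ID. A minimal sketch using PyYAML (not Prowler's own loader):

```python
import yaml

# Load the fixer configuration (default path mentioned above)
with open("prowler/config/fixer_config.yaml") as f:
    fixer_config = yaml.safe_load(f)

# Parameters for a specific check, e.g. the IAM password policy fixers
policy = fixer_config.get("aws", {}).get("iam_password_policy", {})
print(policy.get("MinimumPasswordLength"))  # -> 14 with the example above
```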
---
## Best Practices
- Always document the permissions required for your fixer.
- Handle exceptions gracefully and log errors.
- Return `True` only if the remediation was actually successful.
- Use the provider's client libraries and follow their best practices for API calls.
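For instance, the exception-handling pattern recommended above might look like this inside a `fix()` method; the `regional_client` attribute is an assumption for this sketch, while the EC2 API call and logging style mirror existing Prowler fixers:

```python
from prowler.lib.logger import logger
from prowler.providers.aws.lib.fix.fixer import AWSFixer


class ExampleFixer(AWSFixer):
    # Hypothetical fixer illustrating the recommended error-handling pattern
    def __init__(self):
        super().__init__(
            description="Enable EBS encryption by default in a region.",
            service="ec2",
        )

    def fix(self, finding=None, **kwargs):
        try:
            # Assumed client wiring; enable_ebs_encryption_by_default is a real EC2 API
            response = self.regional_client.enable_ebs_encryption_by_default()
            # Return True only when the API confirms the new state
            return response["EbsEncryptionByDefault"]
        except Exception as error:
            logger.error(
                f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )
            return False
```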
---
## Troubleshooting
- If a fixer is not available for a check, Prowler will print a warning.
- If a fixer fails due to missing permissions, check the required IAM roles or permissions and update your execution identity accordingly.
- Use the `--list-fixers` flag to see all available fixers for your provider.
---
## Extending to New Providers
To add support for a new provider:
1. Implement a new base fixer class inheriting from `Fixer`.
2. Place it in the appropriate provider directory.
3. Follow the same structure for check-specific fixers.
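For example, a base class for a hypothetical new provider might look like the sketch below; the class and attribute names are illustrative, mirroring the AWS and Azure base classes described earlier:

```python
from prowler.lib.fix.fixer import Fixer


class NewProviderFixer(Fixer):
    # Hypothetical provider base class; names are illustrative
    def __init__(self, description, service, permissions_required=None, **kwargs):
        super().__init__(description=description, **kwargs)
        # Provider-specific metadata, mirroring AWSFixer/AzureFixer
        self.service = service
        self.permissions_required = permissions_required or {}
```

Check-specific fixers for the new provider then inherit from this class and implement `fix()` as shown in the examples above.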
---
**For more details, see the code in `prowler/lib/fix/fixer.py` and the provider-specific fixer base classes.**
+4 -4
View File
@@ -27,18 +27,18 @@ prowler github --oauth-app-token oauth_token
```
### GitHub App Credentials
Use GitHub App credentials by specifying the App ID and the private key.
Use GitHub App credentials by specifying the App ID and the private key path.
```console
prowler github --github-app-id app_id --github-app-key app_key
prowler github --github-app-id app_id --github-app-key-path app_key_path
```
### Automatic Login Method Detection
If no login method is explicitly provided, Prowler will automatically attempt to authenticate using environment variables in the following order of precedence:
1. `GITHUB_PERSONAL_ACCESS_TOKEN`
2. `OAUTH_APP_TOKEN`
3. `GITHUB_APP_ID` and `GITHUB_APP_KEY`
2. `GITHUB_OAUTH_APP_TOKEN`
3. `GITHUB_APP_ID` and `GITHUB_APP_KEY` (where the key is the content of the private key file)
???+ note
If you don't plan to specify the login method explicitly, ensure the corresponding environment variables are set before running Prowler so the method can be detected automatically.
Binary file not shown.

After

Width:  |  Height:  |  Size: 134 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 102 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 178 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 197 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 204 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 241 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 268 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 404 KiB

@@ -12,7 +12,7 @@ This allows Prowler to authenticate against Microsoft 365 using the following me
To launch the tool first you need to specify which method is used through the following flags:
```console
# To use service principal (app) authentication and Microsoft user credentials (to use PowerShell)
# To use service principal (app) authentication and Microsoft user credentials
prowler m365 --env-auth
# To use service principal authentication
@@ -25,4 +25,4 @@ prowler m365 --az-cli-auth
prowler m365 --browser-auth --tenant-id "XXXXXXXX"
```
To use Prowler you need to set up also the permissions required to access your resources in your Microsoft 365 account, to more details refer to [Requirements](../../getting-started/requirements.md#microsoft-365)
To use Prowler you also need to set up the permissions required to access your resources in your Microsoft 365 account; for more details refer to [Requirements](../../getting-started/requirements.md#needed-permissions-2)
@@ -30,7 +30,7 @@ Go to the Entra ID portal, then you can search for `Domain` or go to Identity >
![Custom Domain Names](./img/custom-domain-names.png)
Once you are there just select the domain you want to use.
Once you are there just select the domain you want to use as unique identifier for your M365 account in Prowler Cloud/App.
---
@@ -139,10 +139,7 @@ The permissions you need to grant depends on whether you are using user credenti
Make sure you add the correct set of permissions for the authentication method you are using.
#### If using application(service principal) authentication
???+ warning "Warning"
Currently Prowler Cloud only supports user authentication.
#### If using application (service principal) authentication (Recommended)
To grant the permissions for the PowerShell modules via application authentication, you need to add the necessary APIs to your app registration.
@@ -191,12 +188,15 @@ To grant the permissions for the PowerShell modules via application authenticati
![Final Permission Assignment](./img/final-permissions.png)
---
#### If using user authentication
This method is not recommended because it requires a user without MFA enabled, and Microsoft will not allow MFA-capable users to authenticate programmatically after September 1st, 2025. See [Microsoft documentation](https://learn.microsoft.com/en-us/entra/identity/authentication/concept-mandatory-multifactor-authentication?tabs=dotnet) for more information.
???+ warning
Remember that if the user is newly created, you need to sign in with that account first, as Microsoft will prompt you to change the password. If you don't complete this step, user authentication will fail because Microsoft marks the initial password as expired.
---
#### If using user authentication (Currently Prowler Cloud only supports this method)
1. Search and select:
@@ -253,6 +253,8 @@ To grant the permissions for the PowerShell modules via application authenticati
- `Client ID`
- `Tenant ID`
- `AZURE_CLIENT_SECRET` from earlier
If you are using user authentication, also add:
- `M365_USER` the user using the correct assigned domain, more info [here](../../getting-started/requirements.md#service-principal-and-user-credentials-authentication-recommended)
- `M365_PASSWORD` the password of the user
Binary file not shown.

Before

Width:  |  Height:  |  Size: 119 KiB

After

Width:  |  Height:  |  Size: 433 KiB

+204
View File
@@ -0,0 +1,204 @@
# Prowler Lighthouse
Prowler Lighthouse is an AI Cloud Security Analyst chatbot that helps you understand, prioritize, and remediate security findings in your cloud environments. It's designed to provide security expertise for teams without dedicated resources, acting as your 24/7 virtual cloud security analyst.
<img src="../img/lighthouse-intro.png" alt="Prowler Lighthouse">
## How It Works
Prowler Lighthouse uses OpenAI's language models and integrates with your Prowler security findings data.
Here's what's happening behind the scenes:
- The system uses a multi-agent architecture built with [LanggraphJS](https://github.com/langchain-ai/langgraphjs) for the LLM logic and [Vercel AI SDK UI](https://sdk.vercel.ai/docs/ai-sdk-ui/overview) for the frontend chatbot.
- It uses a ["supervisor" architecture](https://langchain-ai.lang.chat/langgraphjs/tutorials/multi_agent/agent_supervisor/) that interacts with different agents for specialized tasks. For example, `findings_agent` can analyze detected security findings, while `overview_agent` provides a summary of connected cloud accounts.
- The system connects to OpenAI models to understand the user's query, fetch the right data, and respond.
???+ note
Lighthouse is tested against `gpt-4o` and `gpt-4o-mini` OpenAI models.
- The supervisor agent is the main contact point. It is what users interact with directly from the chat interface. It coordinates with other agents to answer users' questions comprehensively.
<img src="../img/lighthouse-architecture.png" alt="Lighthouse Architecture">
???+ note
All agents can only read relevant security data. They cannot modify your data or access sensitive information like configured secrets or tenant details.
## Set up
Getting started with Prowler Lighthouse is easy:
1. Go to the configuration page in your Prowler dashboard.
2. Enter your OpenAI API key.
3. Select your preferred model. The recommended one for best results is `gpt-4o`.
4. (Optional) Add business context to improve response quality and prioritization.
<img src="../img/lighthouse-config.png" alt="Lighthouse Configuration">
### Adding Business Context
The optional business context field lets you provide additional information to help Lighthouse understand your environment and priorities, including:
- Your organization's cloud security goals
- Information about account owners or responsible teams
- Compliance requirements for your organization
- Current security initiatives or focus areas
Better context leads to more relevant responses and prioritization that aligns with your needs.
## Capabilities
Prowler Lighthouse is designed to be your AI security team member, with capabilities including:
### Natural Language Querying
Ask questions in plain English about your security findings. Examples:
- "What are my highest risk findings?"
- "Show me all S3 buckets with public access."
- "What security issues were found in my production accounts?"
<img src="../img/lighthouse-feature1.png" alt="Natural language querying">
### Detailed Remediation Guidance
Get tailored step-by-step instructions for fixing security issues:
- Clear explanations of the problem and its impact
- Commands or console steps to implement fixes
- Alternative approaches with different solutions
<img src="../img/lighthouse-feature2.png" alt="Detailed Remediation">
### Enhanced Context and Analysis
Lighthouse can provide additional context to help you understand the findings:
- Explain security concepts related to findings in simple terms
- Provide risk assessments based on your environment and context
- Connect related findings to show broader security patterns
<img src="../img/lighthouse-config.png" alt="Business Context">
<img src="../img/lighthouse-feature3.png" alt="Contextual Responses">
## Important Notes
Prowler Lighthouse is powerful, but there are limitations:
- **Continuous improvement**: Please report any issues, as the feature may make mistakes or encounter errors, despite extensive testing.
- **Access limitations**: Lighthouse can only access data the logged-in user can view. If you can't see certain information, Lighthouse can't see it either.
- **NextJS session dependence**: If your Prowler application session expires or you log out, Lighthouse will error out. Refresh and log back in to continue.
- **Response quality**: The response quality depends on the selected OpenAI model. For best results, use gpt-4o.
### Getting Help
If you encounter issues with Prowler Lighthouse or have suggestions for improvements, please [reach out through our Slack channel](https://goto.prowler.com/slack).
### What Data Is Shared to OpenAI?
The following API endpoints are accessible to Prowler Lighthouse; data from them could be shared with OpenAI depending on the scope of the user's query:
#### Accessible API Endpoints
**User Management:**
- List all users - `/api/v1/users`
- Retrieve the current user's information - `/api/v1/users/me`
**Provider Management:**
- List all providers - `/api/v1/providers`
- Retrieve data from a provider - `/api/v1/providers/{id}`
**Scan Management:**
- List all scans - `/api/v1/scans`
- Retrieve data from a specific scan - `/api/v1/scans/{id}`
**Resource Management:**
- List all resources - `/api/v1/resources`
- Retrieve data for a resource - `/api/v1/resources/{id}`
**Findings Management:**
- List all findings - `/api/v1/findings`
- Retrieve data from a specific finding - `/api/v1/findings/{id}`
- Retrieve metadata values from findings - `/api/v1/findings/metadata`
**Overview Data:**
- Get aggregated findings data - `/api/v1/overviews/findings`
- Get findings data by severity - `/api/v1/overviews/findings_severity`
- Get aggregated provider data - `/api/v1/overviews/providers`
- Get findings data by service - `/api/v1/overviews/services`
**Compliance Management:**
- List compliance overviews for a scan - `/api/v1/compliance-overviews`
- Retrieve data from a specific compliance overview - `/api/v1/compliance-overviews/{id}`
#### Excluded API Endpoints
Not all Prowler API endpoints are integrated with Lighthouse. Endpoints are intentionally excluded for the following reasons:
- OpenAI and other LLM providers shouldn't have access to sensitive data (like provider secrets and other sensitive configuration)
- Users' queries don't need responses from those API endpoints (e.g., tasks, tenant details, downloading zip files)
**Excluded Endpoints:**
**User Management:**
- List specific users information - `/api/v1/users/{id}`
- List user memberships - `/api/v1/users/{user_pk}/memberships`
- Retrieve membership data from the user - `/api/v1/users/{user_pk}/memberships/{id}`
**Tenant Management:**
- List all tenants - `/api/v1/tenants`
- Retrieve data from a tenant - `/api/v1/tenants/{id}`
- List tenant memberships - `/api/v1/tenants/{tenant_pk}/memberships`
- List all invitations - `/api/v1/tenants/invitations`
- Retrieve data from tenant invitation - `/api/v1/tenants/invitations/{id}`
**Security and Configuration:**
- List all secrets - `/api/v1/providers/secrets`
- Retrieve data from a secret - `/api/v1/providers/secrets/{id}`
- List all provider groups - `/api/v1/provider-groups`
- Retrieve data from a provider group - `/api/v1/provider-groups/{id}`
**Reports and Tasks:**
- Download zip report - `/api/v1/scans/{v1}/report`
- List all tasks - `/api/v1/tasks`
- Retrieve data from a specific task - `/api/v1/tasks/{id}`
**Lighthouse Configuration:**
- List OpenAI configuration - `/api/v1/lighthouse-config`
- Retrieve OpenAI key and configuration - `/api/v1/lighthouse-config/{id}`
???+ note
Agents can only call GET endpoints; they don't have access to other HTTP methods.
## FAQs
**1. Why only OpenAI models?**
During feature development, we evaluated other LLM models.
- **Claude AI** - Claude models have [tier-based rate limits](https://docs.anthropic.com/en/api/rate-limits#requirements-to-advance-tier). For Lighthouse to answer even slightly complex questions, it makes a handful of API calls to the LLM provider within a few seconds. With Claude's tiering system, users must purchase $400 in credits or convert their subscription to monthly invoicing after talking to the sales team. This pricing may not suit all Prowler users.
- **Gemini Models** - Gemini lacks a tool-calling feature as solid as OpenAI's. It calls functions recursively until exceeding limits. Gemini-2.5-Pro-Experimental is better than previous models at tool calling and responding, but it's still experimental.
- **Deepseek V3** - Doesn't support system prompt messages.
**2. Why a multi-agent supervisor model?**
Context windows are limited. While demo data fits inside the context window, querying real-world data often exceeds it. A multi-agent architecture is used so different agents fetch different sizes of data and respond with the minimum required data to the supervisor. This spreads the context window usage across agents.
**3. Is my security data shared with OpenAI?**
Minimal data is shared to generate useful responses. Agents can access security findings and remediation details when needed. Provider secrets are protected by design and cannot be read. The Lighthouse key is only accessible to our NextJS server and is never sent to LLMs. Resource metadata (names, tags, account/project IDs, etc.) may be shared with OpenAI based on your query requirements.
**4. Can the Lighthouse change my cloud environment?**
No. The agent doesn't have the tools to make the changes, even if the configured cloud provider API keys contain permissions to modify resources.
+10 -51
View File
@@ -1,6 +1,6 @@
# Configuring SAML Single Sign-On (SSO) in Prowler
This guide explains how to enable and test SAML SSO integration in Prowler. It includes environment setup, certificate configuration, API endpoints, and how to configure Okta as your Identity Provider (IdP).
This guide explains how to enable and test SAML SSO integration in Prowler. It includes environment setup, API endpoints, and how to configure Okta as your Identity Provider (IdP).
---
@@ -20,26 +20,6 @@ Update this variable to specify which domains Django should accept incoming requ
DJANGO_ALLOWED_HOSTS=localhost,127.0.0.1,prowler-api,mycompany.prowler
```
# SAML Certificates
To enable SAML support, you must provide a public certificate and private key to allow Prowler to sign SAML requests and validate responses.
### Why is this necessary?
SAML relies on digital signatures to verify trust between the Identity Provider (IdP) and the Service Provider (SP). Prowler acts as the SP and must use a certificate to sign outbound authentication requests.
### Add to your .env file:
```env
SAML_PUBLIC_CERT="-----BEGIN CERTIFICATE-----
...your certificate here...
-----END CERTIFICATE-----"
SAML_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----
...your private key here...
-----END PRIVATE KEY-----"
```
# SAML Configuration API
You can manage SAML settings via the API. Prowler provides full CRUD support for tenant-specific SAML configuration.
@@ -60,7 +40,7 @@ You can manage SAML settings via the API. Prowler provides full CRUD support for
### Description
This endpoint receives an email and checks if there is an active SAML configuration for the associated domain (i.e., the part after the @). If a configuration exists and the required certificates are present, it responds with an HTTP 302 redirect to the appropriate saml_login endpoint for the organization.
This endpoint receives an email and checks if there is an active SAML configuration for the associated domain (i.e., the part after the @). If a configuration exists it responds with an HTTP 302 redirect to the appropriate saml_login endpoint for the organization.
- POST /api/v1/accounts/saml/initiate/
@@ -78,7 +58,7 @@ This endpoint receives an email and checks if there is an active SAML configurat
• 302 FOUND: Redirects to the SAML login URL associated with the organization.
• 403 FORBIDDEN: The domain is not authorized or SAML certificates are missing from the configuration.
• 403 FORBIDDEN: The domain is not authorized.
### Validation logic
@@ -86,8 +66,6 @@ This endpoint receives an email and checks if there is an active SAML configurat
• Retrieves the related SAMLConfiguration object via tenant_id.
• Verifies that SAML_PUBLIC_CERT and SAML_PRIVATE_KEY environment variables are set.
# SAML Integration: Testing Guide
@@ -95,26 +73,7 @@ This document outlines the process for testing the SAML integration functionalit
---
## 1. Generate Self-Signed Certificate and Private Key
First, generate a self-signed certificate and corresponding private key using OpenSSL:
```bash
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
-keyout saml_private_key.pem \
-out saml_public_cert.pem \
-subj "/C=US/ST=Test/L=Test/O=Test/OU=Test/CN=localhost"
```
## 2. Add Certificate Values to .env
Paste the generated values into your .env file:
```
SAML_PUBLIC_CERT=<paste certificate content here>
SAML_PRIVATE_KEY=<paste private key content here>
```
## 3. Start Ngrok and Update ALLOWED_HOSTS
## 1. Start Ngrok and Update ALLOWED_HOSTS
Start ngrok on port 8080:
```
@@ -127,7 +86,7 @@ Then, copy the generated ngrok URL and include it in the ALLOWED_HOSTS setting.
ALLOWED_HOSTS = env.list("DJANGO_ALLOWED_HOSTS", default=["*"])
```
## 4. Configure the Identity Provider (IdP)
## 2. Configure the Identity Provider (IdP)
Start your environment and configure your IdP. You will need to download the IdP's metadata XML file.
@@ -137,7 +96,7 @@ Your Assertion Consumer Service (ACS) URL must follow this format:
https://<PROXY_URL>/api/v1/accounts/saml/<CONFIGURED_DOMAIN>/acs/
```
## 5. IdP Attribute Mapping
## 3. IdP Attribute Mapping
The following fields are expected from the IdP:
@@ -151,7 +110,7 @@ The following fields are expected from the IdP:
These values are dynamic. If the values change in the IdP, they will be updated on the next login.
## 6. SAML Configuration API (POST)
## 4. SAML Configuration API (POST)
SAML configuration is managed via a CRUD API. Use the following POST request to create a new configuration:
@@ -171,7 +130,7 @@ curl --location 'http://localhost:8080/api/v1/saml-config' \
}'
```
## 7. SAML SSO Callback Configuration
## 5. SAML SSO Callback Configuration
### Environment Variable Configuration
@@ -201,7 +160,7 @@ AUTH_URL="<WEB_UI_URL>"
- Both environment variables are required for proper SAML SSO functionality
- Verify that the `NEXT_PUBLIC_API_BASE_URL` environment variable is properly configured to reference the correct API server base URL corresponding to your target deployment environment. This ensures proper routing of SAML callback requests to the appropriate backend services.
## 8. Start SAML Login Flow
## 6. Start SAML Login Flow
Once everything is configured, start the SAML login process by visiting the following URL:
@@ -211,6 +170,6 @@ https://<PROXY_IP>/api/v1/accounts/saml/<CONFIGURED_DOMAIN>/login/?email=<USER_E
At the end you will get a valid access and refresh token
## 9. Notes on the initiate Endpoint
## 7. Notes on the initiate Endpoint
The initiate endpoint is not strictly required. It was created to allow extra checks or behavior modifications (like enumeration mitigation). It also simplifies UI integration with SAML, but again, it's optional.
+9 -1
View File
@@ -170,7 +170,15 @@ By default, the `kubeconfig` file is located at `~/.kube/config`.
---
### **Step 4.5: M365 Credentials**
For M365, Prowler App uses a service principal application with user and password to authenticate, for more information about the requirements needed for this provider check this [section](../getting-started/requirements.md#microsoft-365). Also, the detailed steps of how to add this provider to Prowler Cloud and start using it are [here](./microsoft365/getting-started-m365.md).
For M365, you must enter your Domain ID and choose the authentication method you want to use:
- Service Principal Authentication (Recommended)
- User Authentication (only works if the user does not have MFA enabled)
???+ note
User authentication with M365_USER and M365_PASSWORD is optional and will only work if the account does not enforce MFA.
For full setup instructions and requirements, check the [Microsoft 365 provider requirements](./microsoft365/getting-started-m365.md).
<img src="../../img/m365-credentials.png" alt="Prowler Cloud M365 Credentials" width="700"/>
+4
View File
@@ -54,6 +54,7 @@ nav:
- Role-Based Access Control: tutorials/prowler-app-rbac.md
- Social Login: tutorials/prowler-app-social-login.md
- SSO with SAML: tutorials/prowler-app-sso.md
- Lighthouse: tutorials/prowler-app-lighthouse.md
- CLI:
- Miscellaneous: tutorials/misc.md
- Reporting: tutorials/reporting.md
@@ -106,6 +107,8 @@ nav:
- Getting Started: tutorials/microsoft365/getting-started-m365.md
- Authentication: tutorials/microsoft365/authentication.md
- Use of PowerShell: tutorials/microsoft365/use-of-powershell.md
- GitHub:
- Authentication: tutorials/github/authentication.md
- IaC:
- Getting Started: tutorials/iac/getting-started-iac.md
- Developer Guide:
@@ -117,6 +120,7 @@ nav:
- Outputs: developer-guide/outputs.md
- Integrations: developer-guide/integrations.md
- Compliance: developer-guide/security-compliance-framework.md
- Lighthouse: developer-guide/lighthouse.md
- Provider Specific Details:
- AWS: developer-guide/aws-details.md
- Azure: developer-guide/azure-details.md
Generated
+67 -66
View File
@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.1.3 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.1.2 and should not be changed by hand.
[[package]]
name = "about-time"
@@ -666,6 +666,42 @@ azure-common = ">=1.1,<2.0"
azure-mgmt-core = ">=1.3.0,<2.0.0"
msrest = ">=0.6.21"
[[package]]
name = "azure-mgmt-recoveryservices"
version = "3.1.0"
description = "Microsoft Azure Recovery Services Client Library for Python"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "azure_mgmt_recoveryservices-3.1.0-py3-none-any.whl", hash = "sha256:21c58afdf4ae66806783e95f8cd17e3bec31be7178c48784db21f0b05de7fa66"},
{file = "azure_mgmt_recoveryservices-3.1.0.tar.gz", hash = "sha256:7f2db98401708cf145322f50bc491caf7967bec4af3bf7b0984b9f07d3092687"},
]
[package.dependencies]
azure-common = ">=1.1"
azure-mgmt-core = ">=1.5.0"
isodate = ">=0.6.1"
typing-extensions = ">=4.6.0"
[[package]]
name = "azure-mgmt-recoveryservicesbackup"
version = "9.2.0"
description = "Microsoft Azure Recovery Services Backup Management Client Library for Python"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "azure_mgmt_recoveryservicesbackup-9.2.0-py3-none-any.whl", hash = "sha256:c0002858d0166b6a10189a1fd580a49c83dc31b111e98010a5b2ea0f767dfff1"},
{file = "azure_mgmt_recoveryservicesbackup-9.2.0.tar.gz", hash = "sha256:c402b3e22a6c3879df56bc37e0063142c3352c5102599ff102d19824f1b32b29"},
]
[package.dependencies]
azure-common = ">=1.1"
azure-mgmt-core = ">=1.3.2"
isodate = ">=0.6.1"
typing-extensions = ">=4.6.0"
[[package]]
name = "azure-mgmt-resource"
version = "23.3.0"
@@ -1675,21 +1711,18 @@ xml-validation = ["lxml (>=4,<6)"]
[[package]]
name = "dash"
version = "2.18.2"
version = "3.1.1"
description = "A Python framework for building reactive web-apps. Developed by Plotly."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "dash-2.18.2-py3-none-any.whl", hash = "sha256:0ce0479d1bc958e934630e2de7023b8a4558f23ce1f9f5a4b34b65eb3903a869"},
{file = "dash-2.18.2.tar.gz", hash = "sha256:20e8404f73d0fe88ce2eae33c25bbc513cbe52f30d23a401fa5f24dbb44296c8"},
{file = "dash-3.1.1-py3-none-any.whl", hash = "sha256:66fff37e79c6aa114cd55aea13683d1e9afe0e3f96b35388baca95ff6cfdad23"},
{file = "dash-3.1.1.tar.gz", hash = "sha256:916b31cec46da0a3339da0e9df9f446126aa7f293c0544e07adf9fe4ba060b18"},
]
[package.dependencies]
dash-core-components = "2.0.0"
dash-html-components = "2.0.0"
dash-table = "5.0.0"
Flask = ">=1.0.4,<3.1"
Flask = ">=1.0.4,<3.2"
importlib-metadata = "*"
nest-asyncio = "*"
plotly = ">=5.0.0"
@@ -1697,11 +1730,12 @@ requests = "*"
retrying = "*"
setuptools = "*"
typing-extensions = ">=4.1.1"
Werkzeug = "<3.1"
Werkzeug = "<3.2"
[package.extras]
celery = ["celery[redis] (>=5.1.2)", "redis (>=3.5.3)"]
ci = ["black (==22.3.0)", "dash-dangerously-set-inner-html", "dash-flow-example (==0.0.5)", "flake8 (==7.0.0)", "flaky (==3.8.1)", "flask-talisman (==1.0.0)", "jupyterlab (<4.0.0)", "mimesis (<=11.1.0)", "mock (==4.0.3)", "numpy (<=1.26.3)", "openpyxl", "orjson (==3.10.3)", "pandas (>=1.4.0)", "pyarrow", "pylint (==3.0.3)", "pytest-mock", "pytest-rerunfailures", "pytest-sugar (==0.9.6)", "pyzmq (==25.1.2)", "xlrd (>=2.0.1)"]
async = ["flask[async]"]
celery = ["celery[redis] (>=5.1.2,<5.4.0)", "kombu (<5.4.0)", "redis (>=3.5.3,<=5.0.4)"]
ci = ["black (==22.3.0)", "flake8 (==7.0.0)", "flaky (==3.8.1)", "flask-talisman (==1.0.0)", "ipython (<9.0.0)", "jupyterlab (<4.0.0)", "mimesis (<=11.1.0)", "mock (==4.0.3)", "mypy (==1.15.0) ; python_version >= \"3.12\"", "numpy (<=1.26.3)", "openpyxl", "orjson (==3.10.3)", "pandas (>=1.4.0)", "pyarrow", "pylint (==3.0.3)", "pyright (==1.1.398) ; python_version >= \"3.7\"", "pytest-mock", "pytest-rerunfailures", "pytest-sugar (==0.9.6)", "pyzmq (==25.1.2)", "xlrd (>=2.0.1)"]
compress = ["flask-compress"]
dev = ["PyYAML (>=5.4.1)", "coloredlogs (>=15.0.1)", "fire (>=0.4.0)"]
diskcache = ["diskcache (>=5.2.1)", "multiprocess (>=0.70.12)", "psutil (>=5.8.0)"]
@@ -1709,57 +1743,21 @@ testing = ["beautifulsoup4 (>=4.8.2)", "cryptography", "dash-testing-stub (>=0.0
[[package]]
name = "dash-bootstrap-components"
version = "1.6.0"
version = "2.0.3"
description = "Bootstrap themed components for use in Plotly Dash"
optional = false
python-versions = "<4,>=3.8"
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "dash_bootstrap_components-1.6.0-py3-none-any.whl", hash = "sha256:97f0f47b38363f18863e1b247462229266ce12e1e171cfb34d3c9898e6e5cd1e"},
{file = "dash_bootstrap_components-1.6.0.tar.gz", hash = "sha256:960a1ec9397574792f49a8241024fa3cecde0f5930c971a3fc81f016cbeb1095"},
{file = "dash_bootstrap_components-2.0.3-py3-none-any.whl", hash = "sha256:82754d3d001ad5482b8a82b496c7bf98a1c68d2669d607a89dda7ec627304af5"},
{file = "dash_bootstrap_components-2.0.3.tar.gz", hash = "sha256:5c161b04a6e7ed19a7d54e42f070c29fd6c385d5a7797e7a82999aa2fc15b1de"},
]
[package.dependencies]
dash = ">=2.0.0"
dash = ">=3.0.4"
[package.extras]
pandas = ["numpy", "pandas"]
[[package]]
name = "dash-core-components"
version = "2.0.0"
description = "Core component suite for Dash"
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "dash_core_components-2.0.0-py3-none-any.whl", hash = "sha256:52b8e8cce13b18d0802ee3acbc5e888cb1248a04968f962d63d070400af2e346"},
{file = "dash_core_components-2.0.0.tar.gz", hash = "sha256:c6733874af975e552f95a1398a16c2ee7df14ce43fa60bb3718a3c6e0b63ffee"},
]
[[package]]
name = "dash-html-components"
version = "2.0.0"
description = "Vanilla HTML components for Dash"
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "dash_html_components-2.0.0-py3-none-any.whl", hash = "sha256:b42cc903713c9706af03b3f2548bda4be7307a7cf89b7d6eae3da872717d1b63"},
{file = "dash_html_components-2.0.0.tar.gz", hash = "sha256:8703a601080f02619a6390998e0b3da4a5daabe97a1fd7a9cebc09d015f26e50"},
]
[[package]]
name = "dash-table"
version = "5.0.0"
description = "Dash table"
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "dash_table-5.0.0-py3-none-any.whl", hash = "sha256:19036fa352bb1c11baf38068ec62d172f0515f73ca3276c79dee49b95ddc16c9"},
{file = "dash_table-5.0.0.tar.gz", hash = "sha256:18624d693d4c8ef2ddec99a6f167593437a7ea0bf153aa20f318c170c5bc7308"},
]
pandas = ["numpy (>=2.0.2)", "pandas (>=2.2.3)"]
[[package]]
name = "decorator"
@@ -2039,23 +2037,24 @@ pyflakes = ">=3.2.0,<3.3.0"
[[package]]
name = "flask"
version = "3.0.3"
version = "3.1.1"
description = "A simple framework for building complex web applications."
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "flask-3.0.3-py3-none-any.whl", hash = "sha256:34e815dfaa43340d1d15a5c3a02b8476004037eb4840b34910c6e21679d288f3"},
{file = "flask-3.0.3.tar.gz", hash = "sha256:ceb27b0af3823ea2737928a4d99d125a06175b8512c445cbd9a9ce200ef76842"},
{file = "flask-3.1.1-py3-none-any.whl", hash = "sha256:07aae2bb5eaf77993ef57e357491839f5fd9f4dc281593a81a9e4d79a24f295c"},
{file = "flask-3.1.1.tar.gz", hash = "sha256:284c7b8f2f58cb737f0cf1c30fd7eaf0ccfcde196099d24ecede3fc2005aa59e"},
]
[package.dependencies]
blinker = ">=1.6.2"
blinker = ">=1.9.0"
click = ">=8.1.3"
importlib-metadata = {version = ">=3.6.0", markers = "python_version < \"3.10\""}
itsdangerous = ">=2.1.2"
Jinja2 = ">=3.1.2"
Werkzeug = ">=3.0.0"
itsdangerous = ">=2.2.0"
jinja2 = ">=3.1.2"
markupsafe = ">=2.1.1"
werkzeug = ">=3.1.0"
[package.extras]
async = ["asgiref (>=3.2)"]
@@ -2678,6 +2677,8 @@ python-versions = "*"
groups = ["dev"]
files = [
{file = "jsonpath-ng-1.7.0.tar.gz", hash = "sha256:f6f5f7fd4e5ff79c785f1573b394043b39849fb2bb47bcead935d12b00beab3c"},
{file = "jsonpath_ng-1.7.0-py2-none-any.whl", hash = "sha256:898c93fc173f0c336784a3fa63d7434297544b7198124a68f9a3ef9597b0ae6e"},
{file = "jsonpath_ng-1.7.0-py3-none-any.whl", hash = "sha256:f3d7f9e848cba1b6da28c55b1c26ff915dc9e0b1ba7e752a53d6da8d5cbd00b6"},
]
[package.dependencies]
@@ -6350,14 +6351,14 @@ test = ["websockets"]
[[package]]
name = "werkzeug"
version = "3.0.6"
version = "3.1.3"
description = "The comprehensive WSGI web application library."
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
{file = "werkzeug-3.0.6-py3-none-any.whl", hash = "sha256:1bc0c2310d2fbb07b1dd1105eba2f7af72f322e1e455f2f93c993bee8c8a5f17"},
{file = "werkzeug-3.0.6.tar.gz", hash = "sha256:a8dd59d4de28ca70471a34cba79bed5f7ef2e036a76b3ab0835474246eb41f8d"},
{file = "werkzeug-3.1.3-py3-none-any.whl", hash = "sha256:54b78bf3716d19a65be4fceccc0d1d7b89e608834989dfae50ea87564639213e"},
{file = "werkzeug-3.1.3.tar.gz", hash = "sha256:60723ce945c19328679790e3282cc758aa4a6040e4bb330f53d30fa546d44746"},
]
[package.dependencies]
@@ -6622,4 +6623,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">3.9.1,<3.13"
content-hash = "c442552635c8e904d1c7a50f4787c8e90ec90787960ee1867f2235a7aa2205f0"
content-hash = "4b0eee5566caf8e9d1e2e6fe8ac37733b29dd4275c2d65ac5291fa3acd514d9e"
+21
View File
@@ -5,10 +5,28 @@ All notable changes to the **Prowler SDK** are documented in this file.
## [v5.9.0] (Prowler UNRELEASED)
### Added
- `storage_smb_channel_encryption_with_secure_algorithm` check for Azure provider [(#8123)](https://github.com/prowler-cloud/prowler/pull/8123)
- `storage_smb_protocol_version_is_latest` check for Azure provider [(#8128)](https://github.com/prowler-cloud/prowler/pull/8128)
- `vm_backup_enabled` check for Azure provider [(#8182)](https://github.com/prowler-cloud/prowler/pull/8182)
- `vm_linux_enforce_ssh_authentication` check for Azure provider [(#8149)](https://github.com/prowler-cloud/prowler/pull/8149)
- `vm_ensure_using_approved_images` check for Azure provider [(#8168)](https://github.com/prowler-cloud/prowler/pull/8168)
- `vm_scaleset_associated_load_balancer` check for Azure provider [(#8181)](https://github.com/prowler-cloud/prowler/pull/8181)
- Add `test_connection` method to GitHub provider [(#8248)](https://github.com/prowler-cloud/prowler/pull/8248)
### Changed
### Fixed
- Add GitHub provider to the documentation side panel and fix the `-h` environment variable output [(#8246)](https://github.com/prowler-cloud/prowler/pull/8246)
---
## [v5.8.1] (Prowler 5.8.1)
### Fixed
- Detect wildcarded ARNs in sts:AssumeRole policy resources [(#8164)](https://github.com/prowler-cloud/prowler/pull/8164)
- List all Firehose streams and fix `firehose_stream_encrypted_at_rest` logic [(#8213)](https://github.com/prowler-cloud/prowler/pull/8213)
- Allow empty values for http_endpoint in templates [(#8184)](https://github.com/prowler-cloud/prowler/pull/8184)
- Convert all Azure Storage models to Pydantic models to avoid serialization issues [(#8222)](https://github.com/prowler-cloud/prowler/pull/8222)
---
@@ -72,6 +90,9 @@ All notable changes to the **Prowler SDK** are documented in this file.
### Removed
- OCSF version number references so that they always point to the latest version [(#8064)](https://github.com/prowler-cloud/prowler/pull/8064)
### Fixed
- Update SDK Azure call for `ftps_state` in the App Service [(#7923)](https://github.com/prowler-cloud/prowler/pull/7923)
---
## [v5.7.5] (Prowler 5.7.5)
+4 -3
View File
@@ -31,7 +31,6 @@ from prowler.lib.check.check import (
print_fixers,
print_services,
remove_custom_checks_module,
run_fixer,
)
from prowler.lib.check.checks_loader import load_checks_to_execute
from prowler.lib.check.compliance import update_checks_metadata_with_compliance
@@ -42,6 +41,7 @@ from prowler.lib.check.custom_checks_metadata import (
)
from prowler.lib.check.models import CheckMetadata
from prowler.lib.cli.parser import ProwlerArgumentParser
from prowler.lib.fix.fixer import Fixer
from prowler.lib.logger import logger, set_logging_config
from prowler.lib.outputs.asff.asff import ASFF
from prowler.lib.outputs.compliance.aws_well_architected.aws_well_architected import (
@@ -300,6 +300,7 @@ def prowler():
output_options = M365OutputOptions(
args, bulk_checks_metadata, global_provider.identity
)
global_provider.set_output_options(output_options)
elif provider == "nhn":
output_options = NHNOutputOptions(
args, bulk_checks_metadata, global_provider.identity
@@ -332,11 +333,11 @@ def prowler():
)
# Prowler Fixer
if output_options.fixer:
if args.fixer:
print(f"{Style.BRIGHT}\nRunning Prowler Fixer, please wait...{Style.RESET_ALL}")
# Check if there are any FAIL findings
if any("FAIL" in finding.status for finding in findings):
fixed_findings = run_fixer(findings)
fixed_findings = Fixer.run_fixer(findings)
if not fixed_findings:
print(
f"{Style.BRIGHT}{Fore.RED}\nThere were findings to fix, but the fixer failed or it is not implemented for those findings yet. {Style.RESET_ALL}\n"
+1 -1
View File
@@ -12,7 +12,7 @@ from prowler.lib.logger import logger
timestamp = datetime.today()
timestamp_utc = datetime.now(timezone.utc).replace(tzinfo=timezone.utc)
prowler_version = "5.8.0"
prowler_version = "5.9.0"
html_logo_url = "https://github.com/prowler-cloud/prowler/"
square_logo_img = "https://prowler.com/wp-content/uploads/logo-html.png"
aws_logo = "https://user-images.githubusercontent.com/38561120/235953920-3e3fba08-0795-41dc-b480-9bea57db9f2e.png"
-85
View File
@@ -298,91 +298,6 @@ def import_check(check_path: str) -> ModuleType:
return lib
def run_fixer(check_findings: list) -> int:
"""
Run the fixer for the check if it exists and there are any FAIL findings
Args:
check_findings (list): list of findings
Returns:
int: number of fixed findings
"""
try:
# Map findings to each check
findings_dict = {}
fixed_findings = 0
for finding in check_findings:
if finding.check_metadata.CheckID not in findings_dict:
findings_dict[finding.check_metadata.CheckID] = []
findings_dict[finding.check_metadata.CheckID].append(finding)
for check, findings in findings_dict.items():
# Check if there are any FAIL findings for the check
if any("FAIL" in finding.status for finding in findings):
try:
check_module_path = f"prowler.providers.{findings[0].check_metadata.Provider}.services.{findings[0].check_metadata.ServiceName}.{check}.{check}_fixer"
lib = import_check(check_module_path)
fixer = getattr(lib, "fixer")
except ModuleNotFoundError:
logger.error(f"Fixer method not implemented for check {check}")
else:
print(
f"\nFixing fails for check {Fore.YELLOW}{check}{Style.RESET_ALL}..."
)
for finding in findings:
if finding.status == "FAIL":
# Check what type of fixer is:
# - If it is a fixer for a specific resource and region
# - If it is a fixer for a specific region
# - If it is a fixer for a specific resource
if (
"region" in fixer.__code__.co_varnames
and "resource_id" in fixer.__code__.co_varnames
):
print(
f"\t{orange_color}FIXING{Style.RESET_ALL} {finding.resource_id} in {finding.region}... "
)
if fixer(
resource_id=finding.resource_id,
region=finding.region,
):
fixed_findings += 1
print(f"\t{Fore.GREEN}DONE{Style.RESET_ALL}")
else:
print(f"\t{Fore.RED}ERROR{Style.RESET_ALL}")
elif "region" in fixer.__code__.co_varnames:
print(
f"\t{orange_color}FIXING{Style.RESET_ALL} {finding.region}... "
)
if fixer(region=finding.region):
fixed_findings += 1
print(f"\t{Fore.GREEN}DONE{Style.RESET_ALL}")
else:
print(f"\t{Fore.RED}ERROR{Style.RESET_ALL}")
elif "resource_arn" in fixer.__code__.co_varnames:
print(
f"\t{orange_color}FIXING{Style.RESET_ALL} Resource {finding.resource_arn}... "
)
if fixer(resource_arn=finding.resource_arn):
fixed_findings += 1
print(f"\t{Fore.GREEN}DONE{Style.RESET_ALL}")
else:
print(f"\t{Fore.RED}ERROR{Style.RESET_ALL}")
else:
print(
f"\t{orange_color}FIXING{Style.RESET_ALL} Resource {finding.resource_id}... "
)
if fixer(resource_id=finding.resource_id):
fixed_findings += 1
print(f"\t\t{Fore.GREEN}DONE{Style.RESET_ALL}")
else:
print(f"\t\t{Fore.RED}ERROR{Style.RESET_ALL}")
return fixed_findings
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def execute_checks(
checks_to_execute: list,
global_provider: Any,
+10
View File
@@ -72,6 +72,7 @@ Detailed documentation at https://docs.prowler.com
self.__init_config_parser__()
self.__init_custom_checks_metadata_parser__()
self.__init_third_party_integrations_parser__()
self.__init_fixer_parser__()
# Init Providers Arguments
init_providers_parser(self)
@@ -393,3 +394,12 @@ Detailed documentation at https://docs.prowler.com
action="store_true",
help="Send a summary of the execution with a Slack APP in your channel. Environment variables SLACK_API_TOKEN and SLACK_CHANNEL_NAME are required (see more in https://docs.prowler.cloud/en/latest/tutorials/integrations/#slack).",
)
def __init_fixer_parser__(self):
"""Initialize the fixer parser with its arguments"""
fixer_parser = self.common_providers_parser.add_argument_group("Fixer")
fixer_parser.add_argument(
"--fixer",
action="store_true",
help="Fix the failed findings that can be fixed by Prowler",
)
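Registering `--fixer` in the common providers parser means every provider subcommand now exposes the flag (for example `prowler aws --fixer` or `prowler azure --fixer`), which is why the AWS-specific registration of the same argument is removed further down in this diff.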
+219
View File
@@ -0,0 +1,219 @@
from abc import ABC, abstractmethod
from typing import Dict, List, Optional, Union
from colorama import Fore, Style
from prowler.lib.check.models import Check_Report
from prowler.lib.logger import logger
class Fixer(ABC):
"""Base class for all fixers"""
def __init__(
self,
description: str,
cost_impact: bool = False,
cost_description: Optional[str] = None,
):
"""
Initialize base fixer class.
Args:
description (str): Description of the fixer
cost_impact (bool): Whether the fixer has a cost impact
cost_description (Optional[str]): Description of the cost impact
"""
self._client = None
self.logger = logger
self.description = description
self.cost_impact = cost_impact
self.cost_description = cost_description
def _get_fixer_info(self) -> Dict:
"""Get fixer metadata"""
return {
"description": self.description,
"cost_impact": self.cost_impact,
"cost_description": self.cost_description,
}
@abstractmethod
def fix(self, finding: Optional[Check_Report] = None, **kwargs) -> bool:
"""
Main method that all fixers must implement.
Args:
finding (Optional[Check_Report]): Finding to fix
**kwargs: Additional arguments specific to each fixer
Returns:
bool: True if fix was successful, False otherwise
"""
@property
def client(self):
"""Lazy load of the client"""
return self._client
@classmethod
def get_fixer_for_finding(
cls,
finding: Check_Report,
) -> Optional["Fixer"]:
"""
Factory method to get the appropriate fixer for a finding.
Args:
finding (Check_Report): The finding to fix
Returns:
Optional[Fixer]: An instance of the appropriate fixer or None if no fixer is found
"""
try:
# Extract check name from finding
check_name = finding.check_metadata.CheckID
if not check_name:
logger.error("Finding does not contain a check ID")
return None
# Convert check name to fixer class name
# Example: rds_instance_no_public_access -> RdsInstanceNoPublicAccessFixer
fixer_name = (
"".join(word.capitalize() for word in check_name.split("_")) + "Fixer"
)
# Get provider from finding
provider = finding.check_metadata.Provider
if not provider:
logger.error("Finding does not contain a provider")
return None
# Get service name from finding
service_name = finding.check_metadata.ServiceName
# Import the fixer class dynamically
try:
# Build the module path using the service name and check name
module_path = f"prowler.providers.{provider.lower()}.services.{service_name}.{check_name}.{check_name}_fixer"
module = __import__(module_path, fromlist=[fixer_name])
fixer_class = getattr(module, fixer_name)
return fixer_class()
except (ImportError, AttributeError):
print(
f"\n{Fore.YELLOW}No fixer available for check {check_name}{Style.RESET_ALL}"
)
return None
except Exception as e:
logger.error(f"Error getting fixer for finding: {str(e)}")
return None
@classmethod
def run_fixer(
cls,
findings: Union[Check_Report, List[Check_Report]],
) -> int:
"""
Method to execute the fixer on one or multiple findings.
Args:
findings (Union[Check_Report, List[Check_Report]]): A single finding or list of findings to fix
Returns:
int: Number of findings successfully fixed
"""
try:
# Handle single finding case
if isinstance(findings, Check_Report):
if findings.status != "FAIL":
return 0
check_id = findings.check_metadata.CheckID
if not check_id:
return 0
return cls.run_individual_fixer(check_id, [findings])
# Handle multiple findings case
fixed_findings = 0
findings_by_check = {}
# Group findings by check
for finding in findings:
if finding.status != "FAIL":
continue
check_id = finding.check_metadata.CheckID
if not check_id:
continue
if check_id not in findings_by_check:
findings_by_check[check_id] = []
findings_by_check[check_id].append(finding)
# Process each check
for check_id, check_findings in findings_by_check.items():
fixed_findings += cls.run_individual_fixer(check_id, check_findings)
return fixed_findings
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return 0
@classmethod
def run_individual_fixer(cls, check_id: str, findings: List[Check_Report]) -> int:
"""
Run the fixer for a specific check ID.
Args:
check_id (str): The check ID to fix
findings (List[Check_Report]): List of findings to process
Returns:
int: Number of findings successfully fixed
"""
try:
# Filter findings for this check_id and status FAIL
check_findings = [
finding
for finding in findings
if finding.check_metadata.CheckID == check_id
and finding.status == "FAIL"
]
if not check_findings:
return 0
# Get the fixer for this check
fixer = cls.get_fixer_for_finding(check_findings[0])
if not fixer:
return 0
# Print fixer information
print(f"\n{Fore.CYAN}Fixer Information for {check_id}:{Style.RESET_ALL}")
print(f"{Fore.CYAN}================================={Style.RESET_ALL}")
for key, value in fixer._get_fixer_info().items():
print(f"{Fore.CYAN}{key}: {Style.RESET_ALL}{value}")
print(f"{Fore.CYAN}================================={Style.RESET_ALL}\n")
print(
f"\nFixing fails for check {Fore.YELLOW}{check_id}{Style.RESET_ALL}..."
)
fixed_findings = 0
for finding in check_findings:
if fixer.fix(finding=finding):
fixed_findings += 1
print(f"\t{Fore.GREEN}DONE{Style.RESET_ALL}")
else:
print(f"\t{Fore.RED}ERROR{Style.RESET_ALL}")
return fixed_findings
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return 0
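To make the contract above concrete, here is a minimal sketch of a subclass following the naming convention `get_fixer_for_finding` resolves (the check ID in CamelCase plus a `Fixer` suffix); the check ID and the no-op fix body are hypothetical:

```python
from typing import Optional

from prowler.lib.check.models import Check_Report
from prowler.lib.fix.fixer import Fixer


class S3BucketPublicAccessFixer(Fixer):  # hypothetical check: s3_bucket_public_access
    def __init__(self):
        super().__init__(
            description="Block public access on the offending bucket",
            cost_impact=False,
        )

    def fix(self, finding: Optional[Check_Report] = None, **kwargs) -> bool:
        # A real fixer would call the provider API here; this sketch only
        # demonstrates the contract: return True on success, False on failure.
        return finding is not None
```

With that in place, `Fixer.run_fixer(findings)` groups the FAIL findings by `CheckID`, imports the matching `<check_id>_fixer` module from the provider's service package, and returns how many findings were fixed.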
@@ -1985,6 +1985,7 @@
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
@@ -4023,6 +4024,7 @@
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
@@ -4760,6 +4762,7 @@
"aws": [
"af-south-1",
"ap-east-1",
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
@@ -4806,6 +4809,7 @@
"aws": [
"af-south-1",
"ap-east-1",
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
@@ -4852,6 +4856,7 @@
"aws": [
"af-south-1",
"ap-east-1",
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
@@ -4898,6 +4903,7 @@
"aws": [
"af-south-1",
"ap-east-1",
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
@@ -4943,6 +4949,7 @@
"aws": [
"af-south-1",
"ap-east-1",
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
@@ -5479,25 +5486,37 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ap-southeast-5",
"ap-southeast-7",
"ca-central-1",
"ca-west-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"mx-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2"
],
"aws-cn": [],
"aws-cn": [
"cn-north-1",
"cn-northwest-1"
],
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
@@ -5513,18 +5532,27 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ap-southeast-5",
"ap-southeast-7",
"ca-central-1",
"ca-west-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"mx-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -6123,6 +6151,7 @@
"aws": [
"af-south-1",
"ap-east-1",
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
@@ -7946,6 +7975,7 @@
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2"
],
"aws-cn": [],
@@ -8222,6 +8252,7 @@
"eu-central-1",
"eu-north-1",
"eu-west-1",
"eu-west-2",
"us-east-1",
"us-east-2",
"us-west-2"
@@ -9660,6 +9691,7 @@
"aws": [
"af-south-1",
"ap-east-1",
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
@@ -9829,6 +9861,13 @@
]
}
},
"sagemakerautopilot": {
"regions": {
"aws": [],
"aws-cn": [],
"aws-us-gov": []
}
},
"savingsplans": {
"regions": {
"aws": [
@@ -9879,6 +9918,7 @@
"aws": [
"af-south-1",
"ap-east-1",
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
@@ -9888,6 +9928,8 @@
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ap-southeast-5",
"ap-southeast-7",
"ca-central-1",
"ca-west-1",
"eu-central-1",
@@ -9901,6 +9943,7 @@
"il-central-1",
"me-central-1",
"me-south-1",
"mx-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -12095,6 +12138,15 @@
]
}
},
"workspaces-instances": {
"regions": {
"aws": [
"ap-northeast-2"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"workspaces-web": {
"regions": {
"aws": [
@@ -159,14 +159,6 @@ def init_parser(self):
help="Scan unused services",
)
# Prowler Fixer
prowler_fixer_subparser = aws_parser.add_argument_group("Prowler Fixer")
prowler_fixer_subparser.add_argument(
"--fixer",
action="store_true",
help="Fix the failed findings that can be fixed by Prowler",
)
def validate_session_duration(session_duration: int) -> int:
"""validate_session_duration validates that the input session_duration is valid"""
+101
View File
@@ -0,0 +1,101 @@
from typing import Dict, Optional
from colorama import Style
from prowler.config.config import orange_color
from prowler.lib.check.models import Check_Report_AWS
from prowler.lib.fix.fixer import Fixer
from prowler.lib.logger import logger
class AWSFixer(Fixer):
"""AWS specific fixer implementation"""
def __init__(
self,
description: str,
cost_impact: bool = False,
cost_description: Optional[str] = None,
service: str = "",
iam_policy_required: Optional[Dict] = None,
):
"""
Initialize AWS fixer with metadata.
Args:
description (str): Description of the fixer
cost_impact (bool): Whether the fixer has a cost impact
cost_description (Optional[str]): Description of the cost impact
service (str): AWS service name
iam_policy_required (Optional[Dict]): Required IAM policy for the fixer
"""
super().__init__(description, cost_impact, cost_description)
self.service = service
self.iam_policy_required = iam_policy_required or {}
def _get_fixer_info(self):
"""Each fixer must define its metadata"""
fixer_info = super()._get_fixer_info()
fixer_info["service"] = self.service
fixer_info["iam_policy_required"] = self.iam_policy_required
return fixer_info
def fix(self, finding: Optional[Check_Report_AWS] = None, **kwargs) -> bool:
"""
AWS specific method to execute the fixer.
This method handles the printing of fixing status messages.
Args:
finding (Optional[Check_Report_AWS]): Finding to fix
**kwargs: Additional AWS-specific arguments (region, resource_id, resource_arn)
Returns:
bool: True if fixing was successful, False otherwise
"""
try:
# Get values either from finding or kwargs
region = None
resource_id = None
resource_arn = None
if finding:
region = finding.region if hasattr(finding, "region") else None
resource_id = (
finding.resource_id if hasattr(finding, "resource_id") else None
)
resource_arn = (
finding.resource_arn if hasattr(finding, "resource_arn") else None
)
else:
region = kwargs.get("region")
resource_id = kwargs.get("resource_id")
resource_arn = kwargs.get("resource_arn")
# Print the appropriate message based on available information
if region and resource_id:
print(
f"\t{orange_color}FIXING {resource_id} in {region}...{Style.RESET_ALL}"
)
elif region:
print(f"\t{orange_color}FIXING {region}...{Style.RESET_ALL}")
elif resource_arn:
print(
f"\t{orange_color}FIXING Resource {resource_arn}...{Style.RESET_ALL}"
)
elif resource_id:
print(
f"\t{orange_color}FIXING Resource {resource_id}...{Style.RESET_ALL}"
)
else:
logger.error(
"Either finding or required kwargs (region, resource_id, resource_arn) must be provided"
)
return False
return True
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return False
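A sketch of how a concrete AWS fixer is expected to use this base `fix()` as a status printer before doing its own API call; the KMS fixer further down in this diff follows exactly this pattern, and the class and service name here are illustrative:

```python
from typing import Optional

from prowler.lib.check.models import Check_Report_AWS
from prowler.providers.aws.lib.fix.fixer import AWSFixer


class ExampleFixer(AWSFixer):  # hypothetical fixer, for illustration only
    def __init__(self):
        super().__init__(description="Illustrative no-op fixer", service="example")

    def fix(self, finding: Optional[Check_Report_AWS] = None, **kwargs) -> bool:
        region = finding.region if finding else kwargs.get("region")
        resource_id = finding.resource_id if finding else kwargs.get("resource_id")
        # Print the "FIXING <resource> in <region>..." line via the base class.
        super().fix(region=region, resource_id=resource_id)
        # ... the provider API call would go here ...
        return True
```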
@@ -18,7 +18,10 @@ class ec2_launch_template_imdsv2_required(Check):
and version.template_data.http_tokens == "required"
):
versions_with_imdsv2_required.append(str(version.version_number))
elif version.template_data.http_endpoint == "disabled":
elif (
version.template_data.http_endpoint == "disabled"
or not version.template_data.http_endpoint
):
versions_with_metadata_disabled.append(str(version.version_number))
else:
versions_with_no_imdsv2.append(str(version.version_number))
@@ -25,18 +25,47 @@ class Firehose(AWSService):
def _list_delivery_streams(self, regional_client):
logger.info("Firehose - Listing delivery streams...")
try:
for stream_name in regional_client.list_delivery_streams()[
"DeliveryStreamNames"
]:
stream_arn = f"arn:{self.audited_partition}:firehose:{regional_client.region}:{self.audited_account}:deliverystream/{stream_name}"
if not self.audit_resources or (
is_resource_filtered(stream_arn, self.audit_resources)
):
self.delivery_streams[stream_arn] = DeliveryStream(
arn=stream_arn,
name=stream_name,
region=regional_client.region,
# Manual pagination using ExclusiveStartDeliveryStreamName
# This ensures we get all streams alphabetically without duplicates
exclusive_start_delivery_stream_name = None
processed_streams = set()
while True:
kwargs = {}
if exclusive_start_delivery_stream_name:
kwargs["ExclusiveStartDeliveryStreamName"] = (
exclusive_start_delivery_stream_name
)
response = regional_client.list_delivery_streams(**kwargs)
stream_names = response.get("DeliveryStreamNames", [])
for stream_name in stream_names:
if stream_name in processed_streams:
continue
processed_streams.add(stream_name)
stream_arn = f"arn:{self.audited_partition}:firehose:{regional_client.region}:{self.audited_account}:deliverystream/{stream_name}"
if not self.audit_resources or (
is_resource_filtered(stream_arn, self.audit_resources)
):
self.delivery_streams[stream_arn] = DeliveryStream(
arn=stream_arn,
name=stream_name,
region=regional_client.region,
)
if not response.get("HasMoreDeliveryStreams", False):
break
# Set the starting point for the next page (last stream name from current batch)
# ExclusiveStartDeliveryStreamName will start after this stream alphabetically
if stream_names:
exclusive_start_delivery_stream_name = stream_names[-1]
else:
break
except ClientError as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
@@ -61,13 +90,45 @@ class Firehose(AWSService):
describe_stream = self.regional_clients[
stream.region
].describe_delivery_stream(DeliveryStreamName=stream.name)
encryption_config = describe_stream.get(
"DeliveryStreamDescription", {}
).get("DeliveryStreamEncryptionConfiguration", {})
stream.kms_encryption = EncryptionStatus(
encryption_config.get("Status", "DISABLED")
)
stream.kms_key_arn = encryption_config.get("KeyARN", "")
stream.delivery_stream_type = describe_stream.get(
"DeliveryStreamDescription", {}
).get("DeliveryStreamType", "")
source_config = describe_stream.get("DeliveryStreamDescription", {}).get(
"Source", {}
)
stream.source = Source(
direct_put=DirectPutSourceDescription(
troughput_hint_in_mb_per_sec=source_config.get(
"DirectPutSourceDescription", {}
).get("TroughputHintInMBPerSec", 0)
),
kinesis_stream=KinesisStreamSourceDescription(
kinesis_stream_arn=source_config.get(
"KinesisStreamSourceDescription", {}
).get("KinesisStreamARN", "")
),
msk=MSKSourceDescription(
msk_cluster_arn=source_config.get("MSKSourceDescription", {}).get(
"MSKClusterARN", ""
)
),
database=DatabaseSourceDescription(
endpoint=source_config.get("DatabaseSourceDescription", {}).get(
"Endpoint", ""
)
),
)
except ClientError as error:
logger.error(
f"{stream.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
@@ -85,6 +146,39 @@ class EncryptionStatus(Enum):
DISABLING_FAILED = "DISABLING_FAILED"
class DirectPutSourceDescription(BaseModel):
"""Model for the DirectPut source of a Firehose stream"""
troughput_hint_in_mb_per_sec: int = Field(default_factory=int)
class KinesisStreamSourceDescription(BaseModel):
"""Model for the KinesisStream source of a Firehose stream"""
kinesis_stream_arn: str = Field(default_factory=str)
class MSKSourceDescription(BaseModel):
"""Model for the MSK source of a Firehose stream"""
msk_cluster_arn: str = Field(default_factory=str)
class DatabaseSourceDescription(BaseModel):
"""Model for the Database source of a Firehose stream"""
endpoint: str = Field(default_factory=str)
class Source(BaseModel):
"""Model for the source of a Firehose stream"""
direct_put: Optional[DirectPutSourceDescription]
kinesis_stream: Optional[KinesisStreamSourceDescription]
msk: Optional[MSKSourceDescription]
database: Optional[DatabaseSourceDescription]
class DeliveryStream(BaseModel):
"""Model for a Firehose Delivery Stream"""
@@ -94,3 +188,5 @@ class DeliveryStream(BaseModel):
kms_key_arn: Optional[str] = Field(default_factory=str)
kms_encryption: Optional[str] = Field(default_factory=str)
tags: Optional[List[Dict[str, str]]] = Field(default_factory=list)
delivery_stream_type: Optional[str] = Field(default_factory=str)
source: Source = Field(default_factory=Source)
@@ -3,6 +3,8 @@ from typing import List
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.firehose.firehose_client import firehose_client
from prowler.providers.aws.services.firehose.firehose_service import EncryptionStatus
from prowler.providers.aws.services.kinesis.kinesis_client import kinesis_client
from prowler.providers.aws.services.kinesis.kinesis_service import EncryptionType
class firehose_stream_encrypted_at_rest(Check):
@@ -22,14 +24,22 @@ class firehose_stream_encrypted_at_rest(Check):
findings = []
for stream in firehose_client.delivery_streams.values():
report = Check_Report_AWS(metadata=self.metadata(), resource=stream)
report.status = "PASS"
report.status_extended = (
f"Firehose Stream {stream.name} does have at rest encryption enabled."
)
report.status = "FAIL"
report.status_extended = f"Firehose Stream {stream.name} does not have at rest encryption enabled or the source stream is not encrypted."
if stream.kms_encryption != EncryptionStatus.ENABLED:
report.status = "FAIL"
report.status_extended = f"Firehose Stream {stream.name} does not have at rest encryption enabled."
# Encrypted Kinesis Stream source
if stream.delivery_stream_type == "KinesisStreamAsSource":
source_stream = kinesis_client.streams.get(
stream.source.kinesis_stream.kinesis_stream_arn
)
if source_stream and source_stream.encrypted_at_rest != EncryptionType.NONE:
report.status = "PASS"
report.status_extended = f"Firehose Stream {stream.name} does not have at rest encryption enabled but the source stream {source_stream.name} has at rest encryption enabled."
# Check if the stream has encryption enabled directly
elif stream.kms_encryption == EncryptionStatus.ENABLED:
report.status = "PASS"
report.status_extended = f"Firehose Stream {stream.name} does have at rest encryption enabled."
findings.append(report)
@@ -5,6 +5,14 @@ from prowler.providers.aws.services.iam.iam_client import iam_client
class iam_no_custom_policy_permissive_role_assumption(Check):
def execute(self) -> Check_Report_AWS:
findings = []
def resource_has_wildcard(resource):
if isinstance(resource, str):
return "*" in resource
if isinstance(resource, list):
return any("*" in r for r in resource)
return False
for policy in iam_client.policies:
# Check only custom policies
if policy.type == "Custom":
@@ -12,6 +20,7 @@ class iam_no_custom_policy_permissive_role_assumption(Check):
report.region = iam_client.region
report.status = "PASS"
report.status_extended = f"Custom Policy {policy.name} does not allow permissive STS Role assumption."
if policy.document:
if not isinstance(policy.document["Statement"], list):
policy_statements = [policy.document["Statement"]]
@@ -19,30 +28,23 @@ class iam_no_custom_policy_permissive_role_assumption(Check):
policy_statements = policy.document["Statement"]
for statement in policy_statements:
if (
statement["Effect"] == "Allow"
statement.get("Effect") == "Allow"
and "Action" in statement
and "Resource" in statement
and "*" in statement["Resource"]
and resource_has_wildcard(statement["Resource"])
):
if isinstance(statement["Action"], list):
for action in statement["Action"]:
if (
action == "sts:AssumeRole"
or action == "sts:*"
or action == "*"
):
report.status = "FAIL"
report.status_extended = f"Custom Policy {policy.name} allows permissive STS Role assumption."
break
else:
if (
statement["Action"] == "sts:AssumeRole"
or statement["Action"] == "sts:*"
or statement["Action"] == "*"
):
actions = (
statement["Action"]
if isinstance(statement["Action"], list)
else [statement["Action"]]
)
for action in actions:
if action in ["sts:AssumeRole", "sts:*", "*"]:
report.status = "FAIL"
report.status_extended = f"Custom Policy {policy.name} allows permissive STS Role assumption."
break
break
if report.status == "FAIL":
break
findings.append(report)
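For clarity, the new `resource_has_wildcard` helper (nested inside `execute` above, repeated here standalone) on a few illustrative inputs:

```python
def resource_has_wildcard(resource):
    if isinstance(resource, str):
        return "*" in resource
    if isinstance(resource, list):
        return any("*" in r for r in resource)
    return False

print(resource_has_wildcard("arn:aws:iam::123456789012:role/*"))             # True: wildcarded ARN
print(resource_has_wildcard(["arn:aws:iam::123456789012:role/Admin", "*"]))  # True: list has a wildcard
print(resource_has_wildcard("arn:aws:iam::123456789012:role/Admin"))         # False
print(resource_has_wildcard({"not": "supported"}))                           # False: other types rejected
```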
@@ -1,36 +1,74 @@
from typing import Optional
from prowler.lib.check.models import Check_Report_AWS
from prowler.lib.logger import logger
from prowler.providers.aws.lib.fix.fixer import AWSFixer
from prowler.providers.aws.services.kms.kms_client import kms_client
def fixer(resource_id: str, region: str) -> bool:
class KmsCmkNotDeletedUnintentionallyFixer(AWSFixer):
"""
Cancel the scheduled deletion of a KMS key.
Specifically, this fixer calls the 'cancel_key_deletion' method to restore the KMS key's availability if it is marked for deletion.
Requires the kms:CancelKeyDeletion permission.
Permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "kms:CancelKeyDeletion",
"Resource": "*"
}
]
}
Args:
resource_id (str): The ID of the KMS key to cancel the deletion for.
region (str): AWS region where the KMS key exists.
Returns:
bool: True if the operation is successful (deletion cancellation is completed), False otherwise.
Fixer for KMS keys marked for deletion.
This fixer cancels the scheduled deletion of KMS keys.
"""
try:
regional_client = kms_client.regional_clients[region]
regional_client.cancel_key_deletion(KeyId=resource_id)
except Exception as error:
logger.error(
f"{region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
def __init__(self):
"""
Initialize KMS fixer.
"""
super().__init__(
description="Cancel the scheduled deletion of a KMS key",
cost_impact=False,
cost_description=None,
service="kms",
iam_policy_required={
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "kms:CancelKeyDeletion",
"Resource": "*",
}
],
},
)
return False
else:
return True
def fix(self, finding: Optional[Check_Report_AWS] = None, **kwargs) -> bool:
"""
Cancel the scheduled deletion of a KMS key.
This fixer calls the 'cancel_key_deletion' method to restore the KMS key's availability
if it is marked for deletion.
Args:
finding (Optional[Check_Report_AWS]): Finding to fix
**kwargs: Additional arguments (region and resource_id are required if finding is not provided)
Returns:
bool: True if the operation is successful (deletion cancellation is completed), False otherwise
"""
try:
# Get region and resource_id either from finding or kwargs
if finding:
region = finding.region
resource_id = finding.resource_id
else:
region = kwargs.get("region")
resource_id = kwargs.get("resource_id")
if not region or not resource_id:
raise ValueError("Region and resource_id are required")
# Show the fixing message
super().fix(region=region, resource_id=resource_id)
# Get the client for this region
regional_client = kms_client.regional_clients[region]
# Cancel key deletion
regional_client.cancel_key_deletion(KeyId=resource_id)
return True
except Exception as error:
logger.error(
f"{region if 'region' in locals() else 'unknown'} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return False
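A hedged usage sketch of the refactored fixer outside the CLI flow (it assumes an initialized AWS provider/session, since `kms_client` is built at import time); the import path follows the `<check_id>_fixer` module convention used by `get_fixer_for_finding`, and the region and key ID are placeholders:

```python
from prowler.providers.aws.services.kms.kms_cmk_not_deleted_unintentionally.kms_cmk_not_deleted_unintentionally_fixer import (
    KmsCmkNotDeletedUnintentionallyFixer,
)

fixer = KmsCmkNotDeletedUnintentionallyFixer()
# Pass a Check_Report_AWS finding, or fall back to explicit kwargs:
ok = fixer.fix(region="us-east-1", resource_id="11111111-2222-3333-4444-555555555555")
```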
+97
View File
@@ -0,0 +1,97 @@
from typing import Dict, Optional
from colorama import Style
from prowler.config.config import orange_color
from prowler.lib.check.models import Check_Report_Azure
from prowler.lib.fix.fixer import Fixer
from prowler.lib.logger import logger
class AzureFixer(Fixer):
"""Azure specific fixer implementation"""
def __init__(
self,
description: str,
cost_impact: bool = False,
cost_description: Optional[str] = None,
service: str = "",
permissions_required: Optional[Dict] = None,
):
super().__init__(description, cost_impact, cost_description)
self.service = service
self.permissions_required = permissions_required or {}
def _get_fixer_info(self):
"""Each fixer must define its metadata"""
fixer_info = super()._get_fixer_info()
fixer_info["service"] = self.service
fixer_info["permissions_required"] = self.permissions_required
return fixer_info
def fix(self, finding: Optional[Check_Report_Azure] = None, **kwargs) -> bool:
"""
Azure specific method to execute the fixer.
This method handles the printing of fixing status messages.
Args:
finding (Optional[Check_Report_Azure]): Finding to fix
**kwargs: Additional Azure-specific arguments (subscription_id, resource_id, resource_group)
Returns:
bool: True if fixing was successful, False otherwise
"""
try:
# Get values either from finding or kwargs
subscription_id = None
resource_id = None
resource_group = None
if finding:
subscription_id = (
finding.subscription if hasattr(finding, "subscription") else None
)
resource_id = (
finding.resource_id if hasattr(finding, "resource_id") else None
)
resource_group = (
finding.resource.get("resource_group_name")
if hasattr(finding.resource, "resource_group_name")
else None
)
else:
subscription_id = kwargs.get("subscription_id")
resource_id = kwargs.get("resource_id")
resource_group = kwargs.get("resource_group")
# Print the appropriate message based on available information
if subscription_id and resource_id and resource_group:
print(
f"\t{orange_color}FIXING Resource {resource_id} in Resource Group {resource_group} (Subscription: {subscription_id})...{Style.RESET_ALL}"
)
elif subscription_id and resource_id:
print(
f"\t{orange_color}FIXING Resource {resource_id} (Subscription: {subscription_id})...{Style.RESET_ALL}"
)
elif subscription_id:
print(
f"\t{orange_color}FIXING Subscription {subscription_id}...{Style.RESET_ALL}"
)
elif resource_id:
print(
f"\t{orange_color}FIXING Resource {resource_id}...{Style.RESET_ALL}"
)
else:
logger.error(
"Either finding or required kwargs (subscription_id, resource_id, resource_group) must be provided"
)
return False
return True
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return False
@@ -0,0 +1,75 @@
from typing import Optional
from azure.mgmt.web.models import SiteConfigResource
from prowler.lib.check.models import Check_Report_Azure
from prowler.providers.azure.lib.fix.fixer import AzureFixer
from prowler.providers.azure.services.app.app_client import app_client
class AppFunctionFtpsDeploymentDisabledFixer(AzureFixer):
"""
This class handles the remediation of the app_function_ftps_deployment_disabled check.
It disables FTP/FTPS deployments for Azure Functions to prevent unauthorized access.
"""
def __init__(self):
super().__init__(
description="Disable FTP/FTPS deployments for Azure Functions",
service="app",
cost_impact=False,
cost_description=None,
permissions_required={
"Microsoft.Web/sites/config/write": "Write access to the site configuration",
},
)
def fix(self, finding: Optional[Check_Report_Azure] = None, **kwargs) -> bool:
"""
Fix the failed check by disabling FTP/FTPS deployments for the Azure Function.
Args:
finding (Check_Report_Azure): Finding to fix
**kwargs: Additional Azure-specific arguments (subscription_id, resource_id, resource_group)
Returns:
bool: True if FTP/FTPS is disabled, False otherwise
"""
try:
if finding:
resource_group = finding.resource.get("resource_group_name")
resource_id = finding.resource_name
subscription_id = finding.subscription
else:
resource_group = kwargs.get("resource_group")
resource_id = kwargs.get("resource_id")
subscription_id = kwargs.get("subscription_id")
if not resource_group or not resource_id or not subscription_id:
raise ValueError(
"Resource group, app name and subscription ID are required"
)
super().fix(
resource_group=resource_group,
resource_id=resource_id,
subscription_id=subscription_id,
)
client = app_client.clients[subscription_id]
site_config = SiteConfigResource(ftps_state="Disabled")
client.web_apps.update_configuration(
resource_group_name=resource_group,
name=resource_id,
site_config=site_config,
)
return True
except Exception as error:
self.logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return False
@@ -170,6 +170,7 @@ class App(AzureService):
ftps_state=getattr(
function_config, "ftps_state", None
),
resource_group_name=function.resource_group,
)
}
)
@@ -293,3 +294,4 @@ class FunctionApp:
public_access: bool
vnet_subnet_id: str
ftps_state: Optional[str]
resource_group_name: str
@@ -0,0 +1,4 @@
from prowler.providers.azure.services.recovery.recovery_service import Recovery
from prowler.providers.common.provider import Provider
recovery_client = Recovery(Provider.get_global_provider())
@@ -0,0 +1,101 @@
from typing import Optional
from azure.mgmt.recoveryservices import RecoveryServicesClient
from azure.mgmt.recoveryservicesbackup import RecoveryServicesBackupClient
from azure.mgmt.recoveryservicesbackup.activestamp.models import DataSourceType
from pydantic import BaseModel, Field
from prowler.lib.logger import logger
from prowler.providers.azure.azure_provider import AzureProvider
from prowler.providers.azure.lib.service.service import AzureService
class BackupItem(BaseModel):
"""Minimal BackupItem: only essential identifying and descriptive fields."""
id: str
name: str
workload_type: Optional[DataSourceType]
class BackupVault(BaseModel):
"""Minimal BackupVault: only essential identifying fields and its backup items."""
id: str
name: str
location: str
backup_protected_items: dict[str, BackupItem] = Field(default_factory=dict)
class Recovery(AzureService):
def __init__(self, provider: AzureProvider):
super().__init__(RecoveryServicesClient, provider)
self.vaults: dict[str, dict[str, BackupVault]] = self._get_vaults()
RecoveryBackup(provider, self.vaults)
def _get_vaults(self) -> dict[str, dict[str, BackupVault]]:
"""
Retrieve all Recovery Services vaults for each subscription.
Returns:
Nested dictionary of vaults by subscription.
"""
logger.info("Recovery - Getting Recovery Services vaults...")
vaults_dict: dict[str, dict[str, BackupVault]] = {}
try:
for subscription_name, client in self.clients.items():
vaults = client.vaults.list_by_subscription_id()
vaults_dict[subscription_name] = {}
for vault in vaults:
vault_obj = BackupVault(
id=vault.id,
name=vault.name,
location=vault.location,
)
vaults_dict[subscription_name][vault_obj.id] = vault_obj
except Exception as error:
logger.error(
f"Subscription name: {subscription_name} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return vaults_dict
class RecoveryBackup(AzureService):
def __init__(
self, provider: AzureProvider, vaults: dict[str, dict[str, BackupVault]]
):
super().__init__(RecoveryServicesBackupClient, provider)
for subscription_name, subscription_vaults in vaults.items():
for vault in subscription_vaults.values():
vault.backup_protected_items = self._get_backup_protected_items(
subscription_name=subscription_name, vault=vault
)
def _get_backup_protected_items(
self, subscription_name: str, vault: BackupVault
) -> dict[str, BackupItem]:
"""
Retrieve all backup protected items for a given vault.
"""
logger.info("Recovery - Getting backup protected items...")
backup_protected_items_dict: dict[str, BackupItem] = {}
try:
backup_protected_items = self.clients[
subscription_name
].backup_protected_items.list(
vault_name=vault.name,
resource_group_name=vault.id.split("/")[4],
)
for item in backup_protected_items:
item_properties = getattr(item, "properties", None)
backup_protected_items_dict[item.id] = BackupItem(
id=item.id,
name=item.name,
workload_type=(
item_properties.workload_type if item_properties else None
),
)
except Exception as e:
logger.error(f"Recovery - Error getting backup protected items: {e}")
return backup_protected_items_dict
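A small sketch of consuming the nested structure the service builds, via the `recovery_client` singleton defined earlier in this diff (it assumes an initialized Azure provider; the loop only reads fields):

```python
from prowler.providers.azure.services.recovery.recovery_client import recovery_client

# vaults: dict[subscription_name, dict[vault_id, BackupVault]]
for subscription, vaults in recovery_client.vaults.items():
    for vault in vaults.values():
        print(subscription, vault.name, len(vault.backup_protected_items))
```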
@@ -0,0 +1,3 @@
"""Constants for the storage service."""
LATEST_SMB_VERSION = "SMB3.1.1"
@@ -1,6 +1,5 @@
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional
from typing import Optional
from azure.mgmt.storage import StorageManagementClient
from pydantic import BaseModel
@@ -33,7 +32,7 @@ class Storage(AzureService):
resouce_group_name = None
key_expiration_period_in_days = None
if storage_account.key_policy:
key_expiration_period_in_days = (
key_expiration_period_in_days = int(
storage_account.key_policy.key_expiration_period_in_days
)
replication_settings = ReplicationSettings(storage_account.sku.name)
@@ -158,6 +157,35 @@ class Storage(AzureService):
"share_delete_retention_policy",
None,
)
smb_channel_encryption_raw = getattr(
getattr(
getattr(
file_service_properties,
"protocol_settings",
None,
),
"smb",
None,
),
"channel_encryption",
None,
)
smb_supported_versions_raw = getattr(
getattr(
getattr(
file_service_properties,
"protocol_settings",
None,
),
"smb",
None,
),
"versions",
None,
)
account.file_service_properties = FileServiceProperties(
id=file_service_properties.id,
name=file_service_properties.name,
@@ -174,6 +202,18 @@ class Storage(AzureService):
0,
),
),
smb_protocol_settings=SMBProtocolSettings(
channel_encryption=(
smb_channel_encryption_raw.rstrip(";").split(";")
if smb_channel_encryption_raw
else []
),
supported_versions=(
smb_supported_versions_raw.rstrip(";").split(";")
if smb_supported_versions_raw
else []
),
),
)
except Exception as error:
logger.error(
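The raw SMB protocol settings come back as semicolon-delimited strings with a trailing separator, hence the `rstrip(";").split(";")` normalization below; for example (illustrative value):

```python
raw = "SMB3.0;SMB3.1.1;"
versions = raw.rstrip(";").split(";")
assert versions == ["SMB3.0", "SMB3.1.1"]
```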
@@ -181,30 +221,26 @@ class Storage(AzureService):
)
@dataclass
class DeleteRetentionPolicy:
class DeleteRetentionPolicy(BaseModel):
enabled: bool
days: int
@dataclass
class BlobProperties:
class BlobProperties(BaseModel):
id: str
name: str
type: str
default_service_version: str
container_delete_retention_policy: DeleteRetentionPolicy
versioning_enabled: bool = False
default_service_version: Optional[str] = None
versioning_enabled: Optional[bool] = None
@dataclass
class NetworkRuleSet:
class NetworkRuleSet(BaseModel):
bypass: str
default_action: str
@dataclass
class PrivateEndpointConnection:
class PrivateEndpointConnection(BaseModel):
id: str
name: str
type: str
@@ -221,27 +257,32 @@ class ReplicationSettings(Enum):
STANDARD_RAGZRS = "Standard_RAGZRS"
class SMBProtocolSettings(BaseModel):
channel_encryption: list[str]
supported_versions: list[str]
class FileServiceProperties(BaseModel):
id: str
name: str
type: str
share_delete_retention_policy: DeleteRetentionPolicy
smb_protocol_settings: SMBProtocolSettings
@dataclass
class Account:
class Account(BaseModel):
id: str
name: str
location: str
resouce_group_name: str
enable_https_traffic_only: bool
infrastructure_encryption: bool
infrastructure_encryption: Optional[bool] = None
allow_blob_public_access: bool
network_rule_set: NetworkRuleSet
encryption_type: str
minimum_tls_version: str
private_endpoint_connections: List[PrivateEndpointConnection]
key_expiration_period_in_days: str
location: str
private_endpoint_connections: list[PrivateEndpointConnection]
key_expiration_period_in_days: Optional[int] = None
replication_settings: ReplicationSettings = ReplicationSettings.STANDARD_LRS
allow_cross_tenant_replication: bool = True
allow_shared_key_access: bool = True
@@ -0,0 +1,30 @@
{
"Provider": "azure",
"CheckID": "storage_smb_channel_encryption_with_secure_algorithm",
"CheckTitle": "Ensure SMB channel encryption uses a secure algorithm for SMB file shares",
"CheckType": [],
"ServiceName": "storage",
"SubServiceName": "",
"ResourceIdTemplate": "/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.Storage/storageAccounts/{storageAccountName}/fileServices/default",
"Severity": "medium",
"ResourceType": "AzureStorageAccount",
"Description": "Implement SMB channel encryption with a secure algorithm for SMB file shares to ensure data confidentiality and integrity in transit.",
"Risk": "Not using the recommended SMB channel encryption may expose data transmitted over SMB channels to unauthorized interception and tampering.",
"RelatedUrl": "https://learn.microsoft.com/en-us/azure/well-architected/service-guides/azure-files#recommendations-for-smb-file-shares",
"Remediation": {
"Code": {
"CLI": "az storage account file-service-properties update --resource-group <resource-group> --account-name <storage-account> --channel-encryption AES-256-GCM",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "Use the portal, CLI or PowerShell to set the SMB channel encryption to a secure algorithm.",
"Url": "https://learn.microsoft.com/en-us/azure/storage/files/files-smb-protocol?tabs=azure-portal#smb-security-settings"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "This check passes if SMB channel encryption is set to a secure algorithm."
}
@@ -0,0 +1,51 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.storage.storage_client import storage_client
SECURE_ENCRYPTION_ALGORITHMS = ["AES-256-GCM"]
class storage_smb_channel_encryption_with_secure_algorithm(Check):
"""
Ensure SMB channel encryption for file shares is set to the recommended algorithm (AES-256-GCM or higher).
This check evaluates whether SMB file shares are configured to use only the recommended SMB channel encryption algorithms.
- PASS: Storage account has the recommended SMB channel encryption (AES-256-GCM or higher) enabled for file shares.
- FAIL: Storage account does not have the recommended SMB channel encryption enabled for file shares or uses an unsupported algorithm.
"""
def execute(self) -> list[Check_Report_Azure]:
findings = []
for subscription, storage_accounts in storage_client.storage_accounts.items():
for account in storage_accounts:
if account.file_service_properties:
pretty_current_algorithms = (
", ".join(
account.file_service_properties.smb_protocol_settings.channel_encryption
)
if account.file_service_properties.smb_protocol_settings.channel_encryption
else "none"
)
report = Check_Report_Azure(
metadata=self.metadata(),
resource=account.file_service_properties,
)
report.subscription = subscription
report.resource_name = account.name
if (
not account.file_service_properties.smb_protocol_settings.channel_encryption
):
report.status = "FAIL"
report.status_extended = f"Storage account {account.name} from subscription {subscription} does not have SMB channel encryption enabled for file shares."
elif any(
algorithm in SECURE_ENCRYPTION_ALGORITHMS
for algorithm in account.file_service_properties.smb_protocol_settings.channel_encryption
):
report.status = "PASS"
report.status_extended = f"Storage account {account.name} from subscription {subscription} has a secure algorithm for SMB channel encryption ({', '.join(SECURE_ENCRYPTION_ALGORITHMS)}) enabled for file shares since it supports {pretty_current_algorithms}."
else:
report.status = "FAIL"
report.status_extended = f"Storage account {account.name} from subscription {subscription} does not have SMB channel encryption with a secure algorithm for file shares since it supports {pretty_current_algorithms}."
findings.append(report)
return findings
@@ -0,0 +1,30 @@
{
"Provider": "azure",
"CheckID": "storage_smb_protocol_version_is_latest",
"CheckTitle": "Ensure SMB protocol version for file shares is set to the latest version.",
"CheckType": [],
"ServiceName": "storage",
"SubServiceName": "",
"ResourceIdTemplate": "/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.Storage/storageAccounts/{storageAccountName}/fileServices/default",
"Severity": "medium",
"ResourceType": "AzureStorageAccount",
"Description": "Ensure that SMB file shares are configured to use only the latest SMB protocol version.",
"Risk": "Allowing older SMB protocol versions may expose file shares to known vulnerabilities and security risks.",
"RelatedUrl": "https://learn.microsoft.com/en-us/azure/storage/files/files-smb-protocol#smb-security-settings",
"Remediation": {
"Code": {
"CLI": "az storage account file-service-properties update --resource-group <resource-group> --account-name <storage-account> --versions <latest-version>",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "Configure your Azure Storage Account file shares to allow only the latest SMB protocol version.",
"Url": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/StorageAccounts/latest-smb-protocol-version.html"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
}

Some files were not shown because too many files have changed in this diff.