Compare commits

..

128 Commits

Author SHA1 Message Date
Andoni A. 3066d82863 docs: workshop as docs PoC 2025-10-28 08:13:44 +01:00
Andoni Alonso 969ca8863a chore(github): use gh instead of github-script to lable community (#9035) 2025-10-27 17:47:16 +01:00
Rubén De la Torre Vico 03c6f98db4 chore(aws): enhance metadata for directconnect service (#8855)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-27 16:51:13 +01:00
Chandrapal Badshah 8ebefb8aa1 feat: add lighthouse support for multiple providers (#8772)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-10-27 16:23:54 +01:00
Andoni Alonso c3694fdc5b chore(github): add label to community contributed PRs (#9009) 2025-10-27 14:48:27 +01:00
Prowler Bot df10bc0c4c chore(regions_update): Changes in regions for AWS services (#9022)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-10-27 09:35:35 -04:00
Pedro Martín e694b0f634 fix(gcp): set unknown for resource name under metric resources (#9023) 2025-10-27 14:19:15 +01:00
Rubén De la Torre Vico 81e3f87003 chore: add AGENTS.md for Prowler SDK (#9017)
Co-authored-by: Pedro Martín <pedromarting3@gmail.com>
2025-10-27 13:47:14 +01:00
César Arroba 7ffe2aeec9 chore(github): improve ui codeql action and config (#9026) 2025-10-27 13:23:54 +01:00
César Arroba 672aa6eb2f chore(github): improve sdk codeql action and config (#9025) 2025-10-27 13:23:18 +01:00
StylusFrost 2e999f55f9 test(ui): add Playwright E2E testing guidelines and folder structure (#8899)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-10-27 13:21:49 +01:00
StylusFrost 18998b8867 test(ui): E2E Test - New user sign-up/registration (#8895)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-10-27 11:25:34 +01:00
Alex K ff4a186df6 feat(github): add organization base repository permission strict check (CIS GitHub 1.3.8) (#8785)
Co-authored-by: akorshak-afg <alex.korshak@afg.org>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2025-10-27 09:45:50 +01:00
Pepe Fagoaga b8dab5e0ed docs: add version label in pages (#9020) 2025-10-27 09:20:37 +01:00
César Arroba 0b3142f7a8 chore(mcp): MCP pull request action (#8990) 2025-10-24 12:44:57 +02:00
César Arroba f5dc0c9ee0 chore(github): fix prepare release action (#8998) 2025-10-24 12:44:32 +02:00
Prowler Bot a230809095 chore(release): Bump version to v5.14.0 (#9015)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-10-24 16:16:35 +05:45
Andoni Alonso e6d1b5639b chore(github): include roadmap in features request template (#9000) 2025-10-23 15:06:34 +02:00
Alan Buscaglia b1856e42f0 chore: update changelog for release v5.13.0 (#8996) 2025-10-23 13:54:30 +02:00
Víctor Fernández Poyatos ba8dbb0d28 fix(s3): file uploading for threatscore (#8993) 2025-10-23 16:07:06 +05:45
Daniel Barranquero b436cc1cac chore(sdk): update changelog to released (#8994) 2025-10-23 15:55:50 +05:45
Josema Camacho 51baa88644 chore(api): Update changelog for API's version 1.14.0 to Prowler 5.13.0 (#8992) 2025-10-23 12:03:07 +02:00
Rubén De la Torre Vico 5098b12e97 chore(mcp): update changelog to released (#8991) 2025-10-23 11:47:58 +02:00
Daniel Barranquero 3d1e7015a6 fix(entra): value errors due tu enums (#8919) 2025-10-23 11:36:51 +02:00
Alejandro Bailo 0b7f02f7e4 feat: Check Findings component (#8976)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
2025-10-23 10:38:25 +02:00
Daniel Barranquero c0396e97bf feat(docs): add new provider e2e guide (#8430)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-10-23 10:09:15 +02:00
Andoni Alonso 8d4fa46038 chore: script to generate AWS accounts list from AWS Org for bulk provisioning (#8903) 2025-10-22 16:23:14 -04:00
Daniel Barranquero 4b160257b9 chore(sdk): update changelog for v5.13.0 (#8989) 2025-10-22 12:26:58 -04:00
César Arroba 6184de52d9 chore(github): fix pr merged action (#8988) 2025-10-22 18:05:31 +02:00
César Arroba fdf45ea777 chore(github): improve pr merged action (#8987) 2025-10-22 17:52:00 +02:00
César Arroba b7ce9ae5f3 chore(github): improve mcp container action (#8986) 2025-10-22 17:35:38 +02:00
César Arroba 2039a5005c chore(github): rename prepare release action (#8985) 2025-10-22 17:29:22 +02:00
César Arroba 52ed92ac6a chore(github): improve check changelog action (#8983) 2025-10-22 17:17:22 +02:00
César Arroba f5cccecac6 chore(github): improve prepare release action (#8981) 2025-10-22 17:02:51 +02:00
César Arroba a47f6444f8 chore(github): improve conflicts checker action (#8980) 2025-10-22 16:45:38 +02:00
lydiavilchez f8c8dee2b3 feat(gcp): add cloudstorage_bucket_lifecycle_management_enabled check (#8936)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-22 16:45:26 +02:00
Andoni Alonso 6656629391 docs: include docker platform warning in App installation too (#8979) 2025-10-22 16:07:28 +02:00
Pedro Martín 9f372902ad feat(threatscore): support compliance pdf reporting (#8867)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-10-22 15:59:56 +02:00
Alan Buscaglia b4ff1dcc75 refactor(graphs): graph components kebab case (#8966)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-10-22 15:51:43 +02:00
César Arroba f596907223 chore(github): improve labeler action (#8978) 2025-10-22 12:50:19 +02:00
César Arroba fe768c0a3e chore(github): improve trufflehog action (#8977) 2025-10-22 12:39:39 +02:00
César Arroba 18f3bc098c chore(github): trigger only if repository is prowler (#8974) 2025-10-22 09:27:33 +02:00
César Arroba 67b1983d85 chore(github): fix action (#8973) 2025-10-22 09:10:47 +02:00
César Arroba a3db23af7d chore(github): improve conventional commits action (#8969) 2025-10-21 17:57:29 +02:00
César Arroba 3eaa21f06f chore(github): improve backport label action (#8970) 2025-10-21 17:57:04 +02:00
Rubén De la Torre Vico 5d5c109067 chore(aws): enhance metadata for dlm service (#8860)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-21 17:40:19 +02:00
César Arroba c6cb4e4814 chore(github): improve backport action (#8968) 2025-10-21 17:14:40 +02:00
César Arroba ab06a09173 chore(api): improve pull request action (#8963) 2025-10-21 17:10:48 +02:00
Rubén De la Torre Vico 9c6c007f73 fix(mcp): add missing argument to health check (#8967) 2025-10-21 16:45:05 +02:00
Rubén De la Torre Vico 206f23b5a5 chore(aws): enhance metadata for dms service (#8861)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-21 16:31:18 +02:00
Andoni Alonso 5c9e9bc86a docs: fix security heading (#8965) 2025-10-21 16:13:55 +02:00
Rubén De la Torre Vico 34554d6123 feat(mcp): add support for production deployment with uvicorn (#8958) 2025-10-21 16:03:24 +02:00
Pepe Fagoaga 000cb93157 chore: remove security template as it's already there (#8964) 2025-10-21 19:34:42 +05:45
Adrián Jesús Peña Rodríguez 524209bdf2 feat(api): add provider_id__in filter for ScanSummary queries (#8951) 2025-10-21 15:24:09 +02:00
César Arroba c4a0da8204 chore(github): review and update issue templates (#8961) 2025-10-21 13:40:25 +02:00
César Arroba f0cba0321c chore(codeql): improve API CodeQL action and settings (#8962) 2025-10-21 13:40:07 +02:00
dependabot[bot] 79888c9312 chore(deps): bump playwright and @playwright/test in /ui (#8956)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-21 13:22:21 +02:00
Rubén De la Torre Vico a79910a694 chore(aws): enhance metadata for cloudtrail service (#8831)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-10-21 12:45:31 +02:00
César Arroba 4cadee7bb1 chore(github): update codeowners file (#8960) 2025-10-21 11:48:21 +02:00
Pedro Martín 756d436a2f feat(compliance): improve CCC catalogs (#8944) 2025-10-21 03:16:05 +02:00
Alejandro Bailo 5e85ef5835 feat(ui): new card components and derivates for overview (#8921)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
2025-10-20 16:49:09 +02:00
Prowler Bot 0fa9e2da6c chore(regions_update): Changes in regions for AWS services (#8946)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-10-20 09:20:29 -04:00
Andoni Alonso ce7510db28 docs: remove anchors from redirects (#8953) 2025-10-20 14:58:53 +02:00
Pepe Fagoaga 8e3d50c807 fix(docs): redirect user-guide-tutorials (#8945) 2025-10-20 14:51:15 +02:00
Pepe Fagoaga d8908d2ccc docs(fix): space in providers table (#8938) 2025-10-20 14:39:03 +02:00
Alejandro Bailo 0b9969a723 feat: update M365 credentials form (#8929)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-10-20 13:51:11 +02:00
StylusFrost 985d73f44f test(ui): enhance Playwright test setups for user authentication (#8881)
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
2025-10-20 13:45:20 +02:00
Pedro Martín 1d705e22da feat(util): add from_yaml_to_json.py (#8943) 2025-10-20 12:29:29 +02:00
Rubén De la Torre Vico ca55d4ce86 chore(aws): enhance metadata for directoryservice service (#8859)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-20 12:20:16 +02:00
Hugo Pereira Brito 0201073fcb fix(docs): small enhancement in warning (#8950) 2025-10-20 12:19:49 +02:00
Alejandro Bailo 928c556721 fix: Mutelist view blinks at opening (#8932) 2025-10-17 19:26:57 +02:00
Rubén De la Torre Vico a653ad7852 chore(deps): remove docs group dependency (#8937) 2025-10-17 16:37:32 +02:00
Sergio Garcia a3c811f801 docs(github): clarify GitHub App configuration requirements (#8930) 2025-10-17 09:30:54 -04:00
Hugo Pereira Brito c85d3e9188 feat(docs): add M365 certificate and azure cli authentication methods (#8939) 2025-10-17 13:42:48 +02:00
Rubén De la Torre Vico 6f394cf9de docs(mcp): add comprehensive MCP Server documentation (#8931) 2025-10-17 11:48:48 +02:00
Rubén De la Torre Vico ba765fa07d chore(aws): enhance metadata for efs service (#8889)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-16 17:05:26 +02:00
Daniel Barranquero d928ee442f fix(gcp): no resource_name errors (#8928) 2025-10-16 14:58:45 +02:00
Alejandro Bailo 30ab5f52b9 feat(ui): add comprehensive agentic files (#8885)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
2025-10-16 11:37:58 +02:00
Sergio Garcia c424707e32 feat(oci): Add Oracle Cloud Infrastructure provider with CIS 3.0 (#8893)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-15 13:05:51 -04:00
Pedro Martín 92efbe3926 chore(readme): update compliance numbers (#8926) 2025-10-15 18:17:15 +02:00
Pedro Martín 4a61578dd8 feat(compliance): add CCC catalogs for AWS, Azure and GCP (#8000)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
2025-10-15 21:48:20 +05:45
Rubén De la Torre Vico ec75b5d0a3 feat(mcp): migrate documentation search from ReadTheDocs to Mintlify API (#8916) 2025-10-15 17:40:18 +02:00
Pepe Fagoaga db5bab51ae chore: delete mkdocs.yml (#8924) 2025-10-15 11:13:39 -04:00
Pepe Fagoaga be476b732a chore: delete readthedocs preview environment (#8923) 2025-10-15 20:54:40 +05:45
Andoni Alonso 434b37f758 docs: add prowler old root path redirect (#8922) 2025-10-15 20:41:46 +05:45
Andoni Alonso c08c27e5c6 docs: migrate to Mintlify (#8894) 2025-10-15 16:38:56 +02:00
Hugo Pereira Brito 8773751779 chore(api): enhance m365 user auth deprecation (#8913)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-10-15 15:41:40 +02:00
Víctor Fernández Poyatos f70a959a49 docs: API keys support (#8918) 2025-10-15 12:37:34 +02:00
Rubén De la Torre Vico 20314cad8c chore(mcp): add changelog with first version (#8884) 2025-10-15 12:04:48 +02:00
Pedro Martín 564ad56d2f feat(compliance): add C5 Germany for aws (#8830)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
2025-10-15 11:47:23 +02:00
César Arroba b2d91c97d8 chore(mcp): modify MCP container action (#8902) 2025-10-14 18:18:27 +02:00
César Arroba c232195df4 chore(mcp): check for MCP changes on release preparation action (#8904) 2025-10-14 18:06:15 +02:00
Alan Buscaglia b4b9d800a8 style(ui): Migrate from Work Sans to Inter font (#8914) 2025-10-14 17:33:26 +02:00
dependabot[bot] fc1d3d4a47 chore(deps-dev): bump authlib from 1.6.4 to 1.6.5 in /api (#8910)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-14 09:49:52 -04:00
Pedro Martín d4be0f4d7a fix(compliance): add missing attributes for Mitre-Attack (#8907)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-10-14 15:48:02 +02:00
dependabot[bot] 305339ffb4 chore(deps-dev): bump authlib from 1.6.4 to 1.6.5 (#8900)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-14 09:31:42 -04:00
Daniel Barranquero 272e4547b2 fix(gcp): keyerrors in services cloudsql and monitoring (#8909) 2025-10-14 09:30:00 -04:00
Prowler Bot 8c3e1b96f9 chore(regions_update): Changes in regions for AWS services (#8901)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-10-14 09:27:32 -04:00
Rubén De la Torre Vico d496f5a58e fix(mcp): change int and float types to str (#8896) 2025-10-14 13:41:02 +02:00
Víctor Fernández Poyatos 5789e87f4f fix(api-keys): update created field to never update (#8908) 2025-10-14 13:30:41 +02:00
Alan Buscaglia 1994750151 fix(ui): Api Key Implementation Retouches (#8906) 2025-10-14 12:27:59 +02:00
Rubén De la Torre Vico 27304a8007 feat(mcp): add health check endpoint (#8905) 2025-10-14 12:16:51 +02:00
Rubén De la Torre Vico 9761651f8d chore(aws): enhance metadata for cloudfront service (#8829)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-14 09:26:33 +02:00
Rubén De la Torre Vico 406aace585 chore(aws): enhance metadata for autoscaling service (#8824)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-10-13 16:52:29 +02:00
Rubén De la Torre Vico ebd5814112 chore(aws): enhance metadata for backup service (#8826)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-10-13 14:22:49 +02:00
Alan Buscaglia 42e816081e feat: reusable graph components (#8873)
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
2025-10-13 13:53:28 +02:00
Alan Buscaglia 741217ce80 feat(ui): API keys implementation (#8874) 2025-10-13 13:48:00 +02:00
Rubén De la Torre Vico 5f9ab68bd9 feat(mcp): add GitHub Action to publish MCP Server container to DockerHub (#8875)
Co-authored-by: César Arroba <19954079+cesararroba@users.noreply.github.com>
2025-10-13 10:31:02 +02:00
Alejandro Bailo fba2854f65 fix(ui): minor bugs (#8898) 2025-10-10 14:56:34 +02:00
Víctor Fernández Poyatos 8794515318 fix(api-keys): make name required and unique (#8891) 2025-10-10 12:35:27 +02:00
Víctor Fernández Poyatos 335db928dc feat(database): add db read replica support (#8869) 2025-10-10 12:27:43 +02:00
Alejandro Bailo 046baa8eb9 feat(ui): refreshToken implementation (#8864) 2025-10-10 11:02:10 +02:00
Alan Buscaglia ef60ea99c3 fix(api): throw errors for all non-ok responses (#8880) 2025-10-10 10:47:04 +02:00
Hugo Pereira Brito 1483efa18e feat(m365): add M365 certificate auth to API (#8538) 2025-10-10 10:43:11 +02:00
Hugo Pereira Brito b74744b135 feat(m365): add M365 certificate auth to API (#8538) 2025-10-09 16:50:28 +02:00
Pepe Fagoaga e80eed6baf chore(ui): remove .env.template (#8887) 2025-10-09 19:06:12 +05:45
Adrián Jesús Peña Rodríguez 1ba22f6f45 feat(api): update role mapping logic in TenantFinishACSView to handle single/manage account users (#8882) 2025-10-09 14:30:26 +02:00
Hugo Pereira Brito da6b7b89cb fix(tests): jira test double lines (#8886) 2025-10-09 13:44:01 +02:00
Hugo Pereira Brito cc9aa7f7ee feat(jira): support of ADF for MarkDown metadata fields (#8878) 2025-10-09 12:31:31 +02:00
Hugo Pereira Brito ecf749fce8 chore(m365): deprecate user auth (#8865) 2025-10-09 12:24:24 +02:00
Pedro Martín 1a7f52fc9c fix(threatscore): improve the way ThreatScore is calculated (#8582)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-10-09 11:50:10 +02:00
Víctor Fernández Poyatos b630234cdf fix(api-key): use admin connector to validate authentication (#8883) 2025-10-09 11:26:21 +02:00
Víctor Fernández Poyatos d6685eec1f feat(api-keys): support include parameter for entity details (#8876) 2025-10-09 11:14:13 +02:00
Pepe Fagoaga 86cff92d1f fix: conventional commit checker (#8879) 2025-10-08 13:19:43 -05:00
Rubén De la Torre Vico 28e81783ef feat(mcp): add API key support for STDIO mode and enhance HTTP mode authentication (#8823) 2025-10-08 15:52:26 +02:00
Rubén De la Torre Vico 13266b8743 feat(mcp): add Prowler Documentation MCP server (#8795) 2025-10-08 12:22:42 +02:00
Rubén De la Torre Vico 4e143cf013 feat(mcp): add HTTP transport support (#8784) 2025-10-08 11:32:39 +02:00
Rubén De la Torre Vico 5cfe140b7b fix(mcp): accept string type for all parameter types in MCP server (#8866) 2025-10-08 10:31:57 +02:00
1623 changed files with 94640 additions and 8213 deletions
+6
@@ -29,6 +29,12 @@ POSTGRES_ADMIN_PASSWORD=postgres
POSTGRES_USER=prowler
POSTGRES_PASSWORD=postgres
POSTGRES_DB=prowler_db
# Read replica settings (optional)
# POSTGRES_REPLICA_HOST=postgres-db
# POSTGRES_REPLICA_PORT=5432
# POSTGRES_REPLICA_USER=prowler
# POSTGRES_REPLICA_PASSWORD=postgres
# POSTGRES_REPLICA_DB=prowler_db
# Celery-Prowler task settings
TASK_RETRY_DELAY_SECONDS=0.1
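The replica variables above ship commented out and are marked optional, so the API presumably keeps using the primary connection until they are set. Before enabling them it can be worth confirming the replica is reachable; a minimal connectivity check, using only the sample defaults from this file (replace host and credentials with the real replica endpoint):

```bash
# One-off connectivity check against the optional read replica
# (values below are just the sample defaults from the .env template).
PGPASSWORD=postgres psql -h postgres-db -p 5432 -U prowler -d prowler_db -c 'SELECT 1;'
```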
+27 -5
@@ -1,6 +1,28 @@
# SDK
/* @prowler-cloud/sdk
/.github/ @prowler-cloud/sdk
prowler @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
tests @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
api @prowler-cloud/api
ui @prowler-cloud/ui
/prowler/ @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
/tests/ @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
/dashboard/ @prowler-cloud/sdk
/docs/ @prowler-cloud/sdk
/examples/ @prowler-cloud/sdk
/util/ @prowler-cloud/sdk
/contrib/ @prowler-cloud/sdk
/permissions/ @prowler-cloud/sdk
/codecov.yml @prowler-cloud/sdk @prowler-cloud/api
# API
/api/ @prowler-cloud/api
# UI
/ui/ @prowler-cloud/ui
# AI
/mcp_server/ @prowler-cloud/ai
# Platform
/.github/ @prowler-cloud/platform
/Makefile @prowler-cloud/platform
/kubernetes/ @prowler-cloud/platform
**/Dockerfile* @prowler-cloud/platform
**/docker-compose*.yml @prowler-cloud/platform
**/docker-compose*.yaml @prowler-cloud/platform
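Since the CODEOWNERS rules were rewritten from bare path prefixes into anchored, per-team directory patterns, it may be worth double-checking that GitHub can resolve every owner referenced here. GitHub exposes a validation endpoint for exactly this; a minimal check with the gh CLI (repository slug assumed to be prowler-cloud/prowler):

```bash
# Report any CODEOWNERS syntax errors or unknown owners on the default branch;
# an empty "errors" array means the file is valid.
gh api repos/prowler-cloud/prowler/codeowners/errors --jq '.errors'
```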
+44
@@ -3,6 +3,41 @@ description: Create a report to help us improve
labels: ["bug", "status/needs-triage"]
body:
- type: checkboxes
id: search
attributes:
label: Issue search
options:
- label: I have searched the existing issues and this bug has not been reported yet
required: true
- type: dropdown
id: component
attributes:
label: Which component is affected?
multiple: true
options:
- Prowler CLI/SDK
- Prowler API
- Prowler UI
- Prowler Dashboard
- Prowler MCP Server
- Documentation
- Other
validations:
required: true
- type: dropdown
id: provider
attributes:
label: Cloud Provider (if applicable)
multiple: true
options:
- AWS
- Azure
- GCP
- Kubernetes
- GitHub
- Microsoft 365
- Not applicable
- type: textarea
id: reproduce
attributes:
@@ -78,6 +113,15 @@ body:
prowler --version
validations:
required: true
- type: input
id: python-version
attributes:
label: Python version
description: Which Python version are you using?
placeholder: |-
python --version
validations:
required: true
- type: input
id: pip-version
attributes:
+10
@@ -1 +1,11 @@
blank_issues_enabled: false
contact_links:
- name: 📖 Documentation
url: https://docs.prowler.com
about: Check our comprehensive documentation for guides and tutorials
- name: 💬 GitHub Discussions
url: https://github.com/prowler-cloud/prowler/discussions
about: Ask questions and discuss with the community
- name: 🌟 Prowler Community
url: https://goto.prowler.com/slack
about: Join our community for support and updates
@@ -3,6 +3,42 @@ description: Suggest an idea for this project
labels: ["feature-request", "status/needs-triage"]
body:
- type: checkboxes
id: search
attributes:
label: Feature search
options:
- label: I have searched the existing issues and this feature has not been requested yet or is already in our [Public Roadmap](https://roadmap.prowler.com/roadmap)
required: true
- type: dropdown
id: component
attributes:
label: Which component would this feature affect?
multiple: true
options:
- Prowler CLI/SDK
- Prowler API
- Prowler UI
- Prowler Dashboard
- Prowler MCP Server
- Documentation
- New component/Integration
validations:
required: true
- type: dropdown
id: provider
attributes:
label: Related to specific cloud provider?
multiple: true
options:
- AWS
- Azure
- GCP
- Kubernetes
- GitHub
- Microsoft 365
- All providers
- Not provider-specific
- type: textarea
id: Problem
attributes:
@@ -19,6 +55,14 @@ body:
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: use-case
attributes:
label: Use case and benefits
description: Who would benefit from this feature and how?
placeholder: This would help security teams by...
validations:
required: true
- type: textarea
id: Alternatives
attributes:
@@ -0,0 +1,71 @@
name: 'Setup Python with Poetry'
description: 'Setup Python environment with Poetry and install dependencies'
author: 'Prowler'
inputs:
python-version:
description: 'Python version to use'
required: true
working-directory:
description: 'Working directory for Poetry'
required: false
default: '.'
poetry-version:
description: 'Poetry version to install'
required: false
default: '2.1.1'
install-dependencies:
description: 'Install Python dependencies with Poetry'
required: false
default: 'true'
runs:
using: 'composite'
steps:
- name: Replace @master with current branch in pyproject.toml
if: github.event_name == 'pull_request' && github.base_ref == 'master'
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
BRANCH_NAME="${GITHUB_HEAD_REF:-${GITHUB_REF_NAME}}"
echo "Using branch: $BRANCH_NAME"
sed -i "s|@master|@$BRANCH_NAME|g" pyproject.toml
- name: Install poetry
shell: bash
run: |
python -m pip install --upgrade pip
pipx install poetry==${{ inputs.poetry-version }}
- name: Update SDK resolved_reference to latest commit
if: github.event_name == 'push' && github.ref == 'refs/heads/master'
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
LATEST_COMMIT=$(curl -s "https://api.github.com/repos/prowler-cloud/prowler/commits/master" | jq -r '.sha')
echo "Latest commit hash: $LATEST_COMMIT"
sed -i '/url = "https:\/\/github\.com\/prowler-cloud\/prowler\.git"/,/resolved_reference = / {
s/resolved_reference = "[a-f0-9]\{40\}"/resolved_reference = "'"$LATEST_COMMIT"'"/
}' poetry.lock
echo "Updated resolved_reference:"
grep -A2 -B2 "resolved_reference" poetry.lock
- name: Update poetry.lock
shell: bash
working-directory: ${{ inputs.working-directory }}
run: poetry lock
- name: Set up Python ${{ inputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ inputs.python-version }}
cache: 'poetry'
cache-dependency-path: ${{ inputs.working-directory }}/poetry.lock
- name: Install Python dependencies
if: inputs.install-dependencies == 'true'
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --no-root
poetry run pip list
+152
@@ -0,0 +1,152 @@
name: 'Container Security Scan with Trivy'
description: 'Scans container images for vulnerabilities using Trivy and reports results'
author: 'Prowler'
inputs:
image-name:
description: 'Container image name to scan'
required: true
image-tag:
description: 'Container image tag to scan'
required: true
default: ${{ github.sha }}
severity:
description: 'Severities to scan for (comma-separated)'
required: false
default: 'CRITICAL,HIGH,MEDIUM,LOW'
fail-on-critical:
description: 'Fail the build if critical vulnerabilities are found'
required: false
default: 'false'
upload-sarif:
description: 'Upload results to GitHub Security tab'
required: false
default: 'true'
create-pr-comment:
description: 'Create a comment on the PR with scan results'
required: false
default: 'true'
artifact-retention-days:
description: 'Days to retain the Trivy report artifact'
required: false
default: '2'
outputs:
critical-count:
description: 'Number of critical vulnerabilities found'
value: ${{ steps.security-check.outputs.critical }}
high-count:
description: 'Number of high vulnerabilities found'
value: ${{ steps.security-check.outputs.high }}
total-count:
description: 'Total number of vulnerabilities found'
value: ${{ steps.security-check.outputs.total }}
runs:
using: 'composite'
steps:
- name: Run Trivy vulnerability scan (SARIF)
if: inputs.upload-sarif == 'true'
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # v0.33.1
with:
image-ref: ${{ inputs.image-name }}:${{ inputs.image-tag }}
format: 'sarif'
output: 'trivy-results.sarif'
severity: 'CRITICAL,HIGH'
exit-code: '0'
- name: Upload Trivy results to GitHub Security tab
if: inputs.upload-sarif == 'true'
uses: github/codeql-action/upload-sarif@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
sarif_file: 'trivy-results.sarif'
category: 'trivy-container'
- name: Run Trivy vulnerability scan (JSON)
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # v0.33.1
with:
image-ref: ${{ inputs.image-name }}:${{ inputs.image-tag }}
format: 'json'
output: 'trivy-report.json'
severity: ${{ inputs.severity }}
exit-code: '0'
- name: Upload Trivy report artifact
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: always()
with:
name: trivy-scan-report-${{ inputs.image-name }}
path: trivy-report.json
retention-days: ${{ inputs.artifact-retention-days }}
- name: Generate security summary
id: security-check
shell: bash
run: |
CRITICAL=$(jq '[.Results[]?.Vulnerabilities[]? | select(.Severity=="CRITICAL")] | length' trivy-report.json)
HIGH=$(jq '[.Results[]?.Vulnerabilities[]? | select(.Severity=="HIGH")] | length' trivy-report.json)
TOTAL=$(jq '[.Results[]?.Vulnerabilities[]?] | length' trivy-report.json)
echo "critical=$CRITICAL" >> $GITHUB_OUTPUT
echo "high=$HIGH" >> $GITHUB_OUTPUT
echo "total=$TOTAL" >> $GITHUB_OUTPUT
echo "### 🔒 Container Security Scan" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Image:** \`${{ inputs.image-name }}:${{ inputs.image-tag }}\`" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "- 🔴 Critical: $CRITICAL" >> $GITHUB_STEP_SUMMARY
echo "- 🟠 High: $HIGH" >> $GITHUB_STEP_SUMMARY
echo "- **Total**: $TOTAL" >> $GITHUB_STEP_SUMMARY
- name: Comment scan results on PR
if: inputs.create-pr-comment == 'true' && github.event_name == 'pull_request'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
env:
IMAGE_NAME: ${{ inputs.image-name }}
GITHUB_SHA: ${{ inputs.image-tag }}
SEVERITY: ${{ inputs.severity }}
with:
script: |
const comment = require('./.github/scripts/trivy-pr-comment.js');
// Unique identifier to find our comment
const marker = '<!-- trivy-scan-comment:${{ inputs.image-name }} -->';
const body = marker + '\n' + comment;
// Find existing comment
const { data: comments } = await github.rest.issues.listComments({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
});
const existingComment = comments.find(c => c.body?.includes(marker));
if (existingComment) {
// Update existing comment
await github.rest.issues.updateComment({
owner: context.repo.owner,
repo: context.repo.repo,
comment_id: existingComment.id,
body: body
});
console.log('✅ Updated existing Trivy scan comment');
} else {
// Create new comment
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: body
});
console.log('✅ Created new Trivy scan comment');
}
- name: Check for critical vulnerabilities
if: inputs.fail-on-critical == 'true' && steps.security-check.outputs.critical != '0'
shell: bash
run: |
echo "::error::Found ${{ steps.security-check.outputs.critical }} critical vulnerabilities"
echo "::warning::Please update packages or use a different base image"
exit 1
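The "Generate security summary" step above derives its counters from the JSON report with jq, so the same numbers can be reproduced outside CI when tuning the severity or fail-on-critical inputs. A local sketch, assuming Trivy is installed and prowler-api:latest is the image under test:

```bash
# Scan an image locally and recompute the action's counters from the report.
trivy image --format json --output trivy-report.json prowler-api:latest
jq '[.Results[]?.Vulnerabilities[]? | select(.Severity=="CRITICAL")] | length' trivy-report.json
jq '[.Results[]?.Vulnerabilities[]? | select(.Severity=="HIGH")] | length' trivy-report.json
jq '[.Results[]?.Vulnerabilities[]?] | length' trivy-report.json
```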
+11 -2
@@ -1,3 +1,12 @@
name: "API - CodeQL Config"
name: 'API: CodeQL Config'
paths:
- "api/"
- 'api/'
paths-ignore:
- 'api/tests/**'
- 'api/**/__pycache__/**'
- 'api/**/migrations/**'
- 'api/**/*.md'
queries:
- uses: security-and-quality
+17 -3
@@ -1,4 +1,18 @@
name: "SDK - CodeQL Config"
name: 'SDK: CodeQL Config'
paths:
- 'prowler/'
paths-ignore:
- "api/"
- "ui/"
- 'api/'
- 'ui/'
- 'dashboard/'
- 'mcp_server/'
- 'tests/**'
- 'util/**'
- 'contrib/**'
- 'examples/**'
- 'prowler/**/__pycache__/**'
- 'prowler/**/*.md'
queries:
- uses: security-and-quality
+16 -2
@@ -1,3 +1,17 @@
name: "UI - CodeQL Config"
name: 'UI: CodeQL Config'
paths:
- "ui/"
- 'ui/'
paths-ignore:
- 'ui/node_modules/**'
- 'ui/.next/**'
- 'ui/out/**'
- 'ui/tests/**'
- 'ui/**/*.test.ts'
- 'ui/**/*.test.tsx'
- 'ui/**/*.spec.ts'
- 'ui/**/*.spec.tsx'
- 'ui/**/*.md'
queries:
- uses: security-and-quality
+5
@@ -42,6 +42,11 @@ provider/mongodbatlas:
- any-glob-to-any-file: "prowler/providers/mongodbatlas/**"
- any-glob-to-any-file: "tests/providers/mongodbatlas/**"
provider/oci:
- changed-files:
- any-glob-to-any-file: "prowler/providers/oraclecloud/**"
- any-glob-to-any-file: "tests/providers/oraclecloud/**"
github_actions:
- changed-files:
- any-glob-to-any-file: ".github/workflows/*"
+102
@@ -0,0 +1,102 @@
const fs = require('fs');
// Configuration from environment variables
const REPORT_FILE = process.env.TRIVY_REPORT_FILE || 'trivy-report.json';
const IMAGE_NAME = process.env.IMAGE_NAME || 'container-image';
const GITHUB_SHA = process.env.GITHUB_SHA || 'unknown';
const GITHUB_REPOSITORY = process.env.GITHUB_REPOSITORY || '';
const GITHUB_RUN_ID = process.env.GITHUB_RUN_ID || '';
const SEVERITY = process.env.SEVERITY || 'CRITICAL,HIGH,MEDIUM,LOW';
// Parse severities to scan
const scannedSeverities = SEVERITY.split(',').map(s => s.trim());
// Read and parse the Trivy report
const report = JSON.parse(fs.readFileSync(REPORT_FILE, 'utf-8'));
let vulnCount = 0;
let vulnsByType = { CRITICAL: 0, HIGH: 0, MEDIUM: 0, LOW: 0 };
let affectedPackages = new Set();
if (report.Results && Array.isArray(report.Results)) {
for (const result of report.Results) {
if (result.Vulnerabilities && Array.isArray(result.Vulnerabilities)) {
for (const vuln of result.Vulnerabilities) {
vulnCount++;
if (vulnsByType[vuln.Severity] !== undefined) {
vulnsByType[vuln.Severity]++;
}
if (vuln.PkgName) {
affectedPackages.add(vuln.PkgName);
}
}
}
}
}
const shortSha = GITHUB_SHA.substring(0, 7);
const timestamp = new Date().toISOString().replace('T', ' ').substring(0, 19) + ' UTC';
// Severity icons and labels
const severityConfig = {
CRITICAL: { icon: '🔴', label: 'Critical' },
HIGH: { icon: '🟠', label: 'High' },
MEDIUM: { icon: '🟡', label: 'Medium' },
LOW: { icon: '🔵', label: 'Low' }
};
let comment = '## 🔒 Container Security Scan\n\n';
comment += `**Image:** \`${IMAGE_NAME}:${shortSha}\`\n`;
comment += `**Last scan:** ${timestamp}\n\n`;
if (vulnCount === 0) {
comment += '### ✅ No Vulnerabilities Detected\n\n';
comment += 'The container image passed all security checks. No known CVEs were found.\n';
} else {
comment += '### 📊 Vulnerability Summary\n\n';
comment += '| Severity | Count |\n';
comment += '|----------|-------|\n';
// Only show severities that were scanned
for (const severity of scannedSeverities) {
const config = severityConfig[severity];
const count = vulnsByType[severity] || 0;
const isBold = (severity === 'CRITICAL' || severity === 'HIGH') && count > 0;
const countDisplay = isBold ? `**${count}**` : count;
comment += `| ${config.icon} ${config.label} | ${countDisplay} |\n`;
}
comment += `| **Total** | **${vulnCount}** |\n\n`;
if (affectedPackages.size > 0) {
comment += `**${affectedPackages.size}** package(s) affected\n\n`;
}
if (vulnsByType.CRITICAL > 0) {
comment += '### ⚠️ Action Required\n\n';
comment += '**Critical severity vulnerabilities detected.** These should be addressed before merging:\n';
comment += '- Review the detailed scan results\n';
comment += '- Update affected packages to patched versions\n';
comment += '- Consider using a different base image if updates are unavailable\n\n';
} else if (vulnsByType.HIGH > 0) {
comment += '### ⚠️ Attention Needed\n\n';
comment += '**High severity vulnerabilities found.** Please review and plan remediation:\n';
comment += '- Assess the risk and exploitability\n';
comment += '- Prioritize updates in the next maintenance cycle\n\n';
} else {
comment += '### ℹ️ Review Recommended\n\n';
comment += 'Medium/Low severity vulnerabilities found. Consider addressing during regular maintenance.\n\n';
}
}
comment += '---\n';
comment += '📋 **Resources:**\n';
if (GITHUB_REPOSITORY && GITHUB_RUN_ID) {
comment += `- [Download full report](https://github.com/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}) (see artifacts)\n`;
}
comment += '- [View in Security tab](https://github.com/' + (GITHUB_REPOSITORY || 'repository') + '/security/code-scanning)\n';
comment += '- Scanned with [Trivy](https://github.com/aquasecurity/trivy)\n';
module.exports = comment;
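Because the script only exports the finished Markdown string, the PR comment can be previewed locally before it ever reaches a pull request. A small sketch, assuming an existing trivy-report.json in the current directory and a placeholder image name:

```bash
# Render the Markdown comment from an existing Trivy JSON report.
TRIVY_REPORT_FILE=trivy-report.json IMAGE_NAME=prowler-api \
  node -e "console.log(require('./.github/scripts/trivy-pr-comment.js'))"
```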
+30 -33
@@ -1,36 +1,34 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: API - CodeQL
name: 'API: CodeQL'
on:
push:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
paths:
- "api/**"
- 'api/**'
- '.github/workflows/api-codeql.yml'
- '.github/codeql/api-codeql-config.yml'
pull_request:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
paths:
- "api/**"
- 'api/**'
- '.github/workflows/api-codeql.yml'
- '.github/codeql/api-codeql-config.yml'
schedule:
- cron: '00 12 * * *'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
analyze:
name: Analyze
name: CodeQL Security Analysis
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
actions: read
contents: read
@@ -39,21 +37,20 @@ jobs:
strategy:
fail-fast: false
matrix:
language: [ 'python' ]
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
language:
- 'python'
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/api-codeql-config.yml
- name: Initialize CodeQL
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/api-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
category: "/language:${{matrix.language}}"
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
category: '/language:${{ matrix.language }}'
+148 -153
@@ -1,20 +1,30 @@
name: API - Pull Request
name: 'API: Pull Request'
on:
push:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
paths:
- ".github/workflows/api-pull-request.yml"
- "api/**"
- '.github/workflows/api-pull-request.yml'
- 'api/**'
- '!api/docs/**'
- '!api/README.md'
- '!api/CHANGELOG.md'
pull_request:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
paths:
- ".github/workflows/api-pull-request.yml"
- "api/**"
- '.github/workflows/api-pull-request.yml'
- 'api/**'
- '!api/docs/**'
- '!api/README.md'
- '!api/CHANGELOG.md'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
POSTGRES_HOST: localhost
@@ -29,21 +39,94 @@ env:
VALKEY_DB: 0
API_WORKING_DIR: ./api
IMAGE_NAME: prowler-api
IGNORE_FILES: |
api/docs/**
api/README.md
api/CHANGELOG.md
jobs:
test:
code-quality:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
strategy:
matrix:
python-version: ["3.12"]
python-version:
- '3.12'
defaults:
run:
working-directory: ./api
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup Python with Poetry
uses: ./.github/actions/setup-python-poetry
with:
python-version: ${{ matrix.python-version }}
working-directory: ./api
- name: Poetry check
run: poetry check --lock
- name: Ruff lint
run: poetry run ruff check . --exclude contrib
- name: Ruff format
run: poetry run ruff format --check . --exclude contrib
- name: Pylint
run: poetry run pylint --disable=W,C,R,E -j 0 -rn -sn src/
security-scans:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
strategy:
matrix:
python-version:
- '3.12'
defaults:
run:
working-directory: ./api
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup Python with Poetry
uses: ./.github/actions/setup-python-poetry
with:
python-version: ${{ matrix.python-version }}
working-directory: ./api
- name: Bandit
run: poetry run bandit -q -lll -x '*_test.py,./contrib/' -r .
- name: Safety
# 76352, 76353, 77323 come from SDK, but they cannot upgrade it yet. It does not affect API
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
run: poetry run safety check --ignore 70612,66963,74429,76352,76353,77323,77744,77745
- name: Vulture
run: poetry run vulture --exclude "contrib,tests,conftest.py" --min-confidence 100 .
tests:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
strategy:
matrix:
python-version:
- '3.12'
defaults:
run:
working-directory: ./api
# Service containers to run with `test`
services:
# Label used to access the service container
postgres:
image: postgres
env:
@@ -52,7 +135,6 @@ jobs:
POSTGRES_USER: ${{ env.POSTGRES_USER }}
POSTGRES_PASSWORD: ${{ env.POSTGRES_PASSWORD }}
POSTGRES_DB: ${{ env.POSTGRES_DB }}
# Set health checks to wait until postgres has started
ports:
- 5432:5432
options: >-
@@ -66,7 +148,6 @@ jobs:
VALKEY_HOST: ${{ env.VALKEY_HOST }}
VALKEY_PORT: ${{ env.VALKEY_PORT }}
VALKEY_DB: ${{ env.VALKEY_DB }}
# Set health checks to wait until postgres has started
ports:
- 6379:6379
options: >-
@@ -76,158 +157,72 @@ jobs:
--health-retries 5
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
api/**
.github/workflows/api-pull-request.yml
files_ignore: ${{ env.IGNORE_FILES }}
- name: Replace @master with current branch in pyproject.toml - Only for pull requests to `master`
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true' && github.event_name == 'pull_request' && github.base_ref == 'master'
run: |
BRANCH_NAME="${GITHUB_HEAD_REF:-${GITHUB_REF_NAME}}"
echo "Using branch: $BRANCH_NAME"
sed -i "s|@master|@$BRANCH_NAME|g" pyproject.toml
- name: Install poetry
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
python -m pip install --upgrade pip
pipx install poetry==2.1.1
- name: Update SDK's poetry.lock resolved_reference to latest commit - Only for push events to `master`
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true' && github.event_name == 'push' && github.ref == 'refs/heads/master'
run: |
# Get the latest commit hash from the prowler-cloud/prowler repository
LATEST_COMMIT=$(curl -s "https://api.github.com/repos/prowler-cloud/prowler/commits/master" | jq -r '.sha')
echo "Latest commit hash: $LATEST_COMMIT"
# Update the resolved_reference specifically for prowler-cloud/prowler repository
sed -i '/url = "https:\/\/github\.com\/prowler-cloud\/prowler\.git"/,/resolved_reference = / {
s/resolved_reference = "[a-f0-9]\{40\}"/resolved_reference = "'"$LATEST_COMMIT"'"/
}' poetry.lock
# Verify the change was made
echo "Updated resolved_reference:"
grep -A2 -B2 "resolved_reference" poetry.lock
- name: Update poetry.lock
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry lock
- name: Set up Python ${{ matrix.python-version }}
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
- name: Setup Python with Poetry
uses: ./.github/actions/setup-python-poetry
with:
python-version: ${{ matrix.python-version }}
cache: "poetry"
working-directory: ./api
- name: Install dependencies
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry install --no-root
poetry run pip list
VERSION=$(curl --silent "https://api.github.com/repos/hadolint/hadolint/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"v([^"]+)".*/\1/' \
) && curl -L -o /tmp/hadolint "https://github.com/hadolint/hadolint/releases/download/v${VERSION}/hadolint-Linux-x86_64" \
&& chmod +x /tmp/hadolint
- name: Poetry check
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry check --lock
- name: Lint with ruff
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run ruff check . --exclude contrib
- name: Check Format with ruff
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run ruff format --check . --exclude contrib
- name: Lint with pylint
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run pylint --disable=W,C,R,E -j 0 -rn -sn src/
- name: Bandit
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run bandit -q -lll -x '*_test.py,./contrib/' -r .
- name: Safety
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
# 76352, 76353, 77323 come from SDK, but they cannot upgrade it yet. It does not affect API
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
run: |
poetry run safety check --ignore 70612,66963,74429,76352,76353,77323,77744,77745
- name: Vulture
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run vulture --exclude "contrib,tests,conftest.py" --min-confidence 100 .
- name: Hadolint
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
/tmp/hadolint Dockerfile --ignore=DL3013
- name: Test with pytest
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run pytest --cov=./src/backend --cov-report=xml src/backend
- name: Run tests with pytest
run: poetry run pytest --cov=./src/backend --cov-report=xml src/backend
- name: Upload coverage reports to Codecov
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
with:
flags: api
test-container-build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
dockerfile-lint:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Lint Dockerfile with Hadolint
uses: hadolint/hadolint-action@2332a7b74a6de0dda2e2221d575162eba76ba5e5 # v3.3.0
with:
files: api/**
files_ignore: ${{ env.IGNORE_FILES }}
dockerfile: api/Dockerfile
ignore: DL3013
container-build-and-scan:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
security-events: write
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Docker Buildx
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build Container
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
- name: Build container
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.API_WORKING_DIR }}
push: false
tags: ${{ env.IMAGE_NAME }}:latest
outputs: type=docker
load: true
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Scan container with Trivy
uses: ./.github/actions/trivy-scan
with:
image-name: ${{ env.IMAGE_NAME }}
image-tag: ${{ github.sha }}
fail-on-critical: 'false'
severity: 'CRITICAL'
+22 -15
@@ -1,28 +1,35 @@
name: Prowler - Automatic Backport
name: 'Tools: Backport'
on:
pull_request_target:
branches: ['master']
types: ['labeled', 'closed']
branches:
- 'master'
types:
- 'labeled'
- 'closed'
paths:
- '.github/workflows/backport.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: false
env:
# The prefix of the label that triggers the backport must not contain the branch name
# so, for example, if the branch is 'master', the label should be 'backport-to-<branch>'
BACKPORT_LABEL_PREFIX: backport-to-
BACKPORT_LABEL_IGNORE: was-backported
jobs:
backport:
name: Backport PR
if: github.event.pull_request.merged == true && !(contains(github.event.pull_request.labels.*.name, 'backport')) && !(contains(github.event.pull_request.labels.*.name, 'was-backported'))
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
id-token: write
pull-requests: write
contents: write
pull-requests: write
steps:
- name: Check labels
id: preview_label_check
id: label_check
uses: agilepathway/label-checker@c3d16ad512e7cea5961df85ff2486bb774caf3c5 # v1.6.65
with:
allow_failure: true
@@ -31,17 +38,17 @@ jobs:
none_of: ${{ env.BACKPORT_LABEL_IGNORE }}
repo_token: ${{ secrets.GITHUB_TOKEN }}
- name: Backport Action
if: steps.preview_label_check.outputs.label_check == 'success'
- name: Backport PR
if: steps.label_check.outputs.label_check == 'success'
uses: sorenlouv/backport-github-action@ad888e978060bc1b2798690dd9d03c4036560947 # v9.5.1
with:
github_token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
auto_backport_label_prefix: ${{ env.BACKPORT_LABEL_PREFIX }}
- name: Info log
if: ${{ success() && steps.preview_label_check.outputs.label_check == 'success' }}
- name: Display backport info log
if: success() && steps.label_check.outputs.label_check == 'success'
run: cat ~/.backport/backport.info.log
- name: Debug log
if: ${{ failure() && steps.preview_label_check.outputs.label_check == 'success' }}
- name: Display backport debug log
if: failure() && steps.label_check.outputs.label_check == 'success'
run: cat ~/.backport/backport.debug.log
@@ -1,36 +0,0 @@
name: Prowler - Pull Request Documentation Link
on:
pull_request:
branches:
- 'master'
- 'v3'
paths:
- 'docs/**'
- '.github/workflows/build-documentation-on-pr.yml'
env:
PR_NUMBER: ${{ github.event.pull_request.number }}
jobs:
documentation-link:
name: Documentation Link
runs-on: ubuntu-latest
steps:
- name: Find existing documentation comment
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad # v4.0.0
id: find-comment
with:
issue-number: ${{ env.PR_NUMBER }}
comment-author: 'github-actions[bot]'
body-includes: '<!-- prowler-docs-link -->'
- name: Create or update PR comment with the Prowler Documentation URI
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ env.PR_NUMBER }}
body: |
<!-- prowler-docs-link -->
You can check the documentation for this PR here -> [Prowler Documentation](https://prowler-prowler-docs--${{ env.PR_NUMBER }}.com.readthedocs.build/projects/prowler-open-source/en/${{ env.PR_NUMBER }}/)
edit-mode: replace
+20 -12
@@ -1,23 +1,31 @@
name: Prowler - Conventional Commit
name: 'Tools: Conventional Commit'
on:
pull_request:
types:
- "opened"
- "edited"
- "synchronize"
branches:
- "master"
- "v3"
- "v4.*"
- "v5.*"
- 'master'
- 'v3'
- 'v4.*'
- 'v5.*'
types:
- 'opened'
- 'edited'
- 'synchronize'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
conventional-commit-check:
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: read
steps:
- name: conventional-commit-check
id: conventional-commit-check
- name: Check PR title format
uses: agenthunt/conventional-commit-checker-action@9e552d650d0e205553ec7792d447929fc78e012b # v2.0.0
with:
pr-title-regex: '^([^\s(]+)(?:\(([^)]+)\))?: (.+)'
pr-title-regex: '^(feat|fix|docs|style|refactor|perf|test|chore|build|ci|revert)(\([^)]+\))?!?: .+'
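The tightened pattern restricts the type to the standard Conventional Commits keywords and allows an optional scope and breaking-change marker. Candidate PR titles can be checked against it with grep -P, for example:

```bash
# Titles accepted and rejected by the new pattern.
PATTERN='^(feat|fix|docs|style|refactor|perf|test|chore|build|ci|revert)(\([^)]+\))?!?: .+'
echo 'feat(api): add provider_id__in filter' | grep -Pq "$PATTERN" && echo accepted
echo 'Update readme' | grep -Pq "$PATTERN" || echo rejected
```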
+49 -46
@@ -1,67 +1,70 @@
name: Prowler - Create Backport Label
name: 'Tools: Backport Label'
on:
release:
types: [published]
types:
- 'published'
concurrency:
group: ${{ github.workflow }}-${{ github.event.release.tag_name }}
cancel-in-progress: false
env:
BACKPORT_LABEL_PREFIX: backport-to-
BACKPORT_LABEL_COLOR: B60205
jobs:
create_label:
create-label:
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: write
contents: read
issues: write
steps:
- name: Create backport label
- name: Create backport label for minor releases
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
RELEASE_TAG: ${{ github.event.release.tag_name }}
OWNER_REPO: ${{ github.repository }}
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
VERSION_ONLY=${RELEASE_TAG#v} # Remove 'v' prefix if present (e.g., v3.2.0 -> 3.2.0)
RELEASE_TAG="${{ github.event.release.tag_name }}"
if [ -z "$RELEASE_TAG" ]; then
echo "Error: No release tag provided"
exit 1
fi
echo "Processing release tag: $RELEASE_TAG"
# Remove 'v' prefix if present (e.g., v3.2.0 -> 3.2.0)
VERSION_ONLY="${RELEASE_TAG#v}"
# Check if it's a minor version (X.Y.0)
if [[ "$VERSION_ONLY" =~ ^[0-9]+\.[0-9]+\.0$ ]]; then
echo "Release ${RELEASE_TAG} (version ${VERSION_ONLY}) is a minor version. Proceeding to create backport label."
if [[ "$VERSION_ONLY" =~ ^([0-9]+)\.([0-9]+)\.0$ ]]; then
echo "Release $RELEASE_TAG (version $VERSION_ONLY) is a minor version. Proceeding to create backport label."
TWO_DIGIT_VERSION=${VERSION_ONLY%.0} # Extract X.Y from X.Y.0 (e.g., 5.6 from 5.6.0)
# Extract X.Y from X.Y.0 (e.g., 5.6 from 5.6.0)
MAJOR="${BASH_REMATCH[1]}"
MINOR="${BASH_REMATCH[2]}"
TWO_DIGIT_VERSION="${MAJOR}.${MINOR}"
FINAL_LABEL_NAME="backport-to-v${TWO_DIGIT_VERSION}"
FINAL_DESCRIPTION="Backport PR to the v${TWO_DIGIT_VERSION} branch"
LABEL_NAME="${BACKPORT_LABEL_PREFIX}v${TWO_DIGIT_VERSION}"
LABEL_DESC="Backport PR to the v${TWO_DIGIT_VERSION} branch"
LABEL_COLOR="$BACKPORT_LABEL_COLOR"
echo "Effective label name will be: ${FINAL_LABEL_NAME}"
echo "Effective description will be: ${FINAL_DESCRIPTION}"
echo "Label name: $LABEL_NAME"
echo "Label description: $LABEL_DESC"
# Check if the label already exists
STATUS_CODE=$(curl -s -o /dev/null -w "%{http_code}" -H "Authorization: token ${GITHUB_TOKEN}" "https://api.github.com/repos/${OWNER_REPO}/labels/${FINAL_LABEL_NAME}")
if [ "${STATUS_CODE}" -eq 200 ]; then
echo "Label '${FINAL_LABEL_NAME}' already exists."
elif [ "${STATUS_CODE}" -eq 404 ]; then
echo "Label '${FINAL_LABEL_NAME}' does not exist. Creating it..."
# Prepare JSON data payload
JSON_DATA=$(printf '{"name":"%s","description":"%s","color":"B60205"}' "${FINAL_LABEL_NAME}" "${FINAL_DESCRIPTION}")
CREATE_STATUS_CODE=$(curl -s -o /tmp/curl_create_response.json -w "%{http_code}" -X POST \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token ${GITHUB_TOKEN}" \
--data "${JSON_DATA}" \
"https://api.github.com/repos/${OWNER_REPO}/labels")
CREATE_RESPONSE_BODY=$(cat /tmp/curl_create_response.json)
rm -f /tmp/curl_create_response.json
if [ "$CREATE_STATUS_CODE" -eq 201 ]; then
echo "Label '${FINAL_LABEL_NAME}' created successfully."
else
echo "Error creating label '${FINAL_LABEL_NAME}'. Status: $CREATE_STATUS_CODE"
echo "Response: $CREATE_RESPONSE_BODY"
exit 1
fi
# Check if label already exists
if gh label list --repo ${{ github.repository }} --limit 1000 | grep -q "^${LABEL_NAME}[[:space:]]"; then
echo "Label '$LABEL_NAME' already exists."
else
echo "Error checking for label '${FINAL_LABEL_NAME}'. HTTP Status: ${STATUS_CODE}"
exit 1
echo "Label '$LABEL_NAME' does not exist. Creating it..."
gh label create "$LABEL_NAME" \
--description "$LABEL_DESC" \
--color "$LABEL_COLOR" \
--repo ${{ github.repository }}
echo "Label '$LABEL_NAME' created successfully."
fi
else
echo "Release ${RELEASE_TAG} (version ${VERSION_ONLY}) is not a minor version. Skipping backport label creation."
exit 0
echo "Release $RELEASE_TAG (version $VERSION_ONLY) is not a minor version. Skipping backport label creation."
fi
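The minor-release detection now relies on bash's BASH_REMATCH array instead of string trimming; it can be exercised in isolation before a release is published, for example:

```bash
# Only X.Y.0 tags should yield a backport-to-vX.Y label; patch releases are skipped.
for TAG in v5.14.0 v5.14.1; do
  VERSION_ONLY="${TAG#v}"
  if [[ "$VERSION_ONLY" =~ ^([0-9]+)\.([0-9]+)\.0$ ]]; then
    echo "$TAG -> backport-to-v${BASH_REMATCH[1]}.${BASH_REMATCH[2]}"
  else
    echo "$TAG -> no label"
  fi
done
```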
+24 -10
@@ -1,19 +1,33 @@
name: Prowler - Find secrets
name: 'Tools: TruffleHog'
on: pull_request
on:
push:
branches:
- 'master'
- 'v5.*'
pull_request:
branches:
- 'master'
- 'v5.*'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
trufflehog:
scan-secrets:
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
steps:
- name: Checkout
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
- name: TruffleHog OSS
uses: trufflesecurity/trufflehog@466da5b0bb161144f6afca9afe5d57975828c410 # v3.90.8
- name: Scan for secrets with TruffleHog
uses: trufflesecurity/trufflehog@ad6fc8fb446b8fafbf7ea8193d2d6bfd42f45690 # v3.90.11
with:
path: ./
base: ${{ github.event.repository.default_branch }}
head: HEAD
extra_args: --only-verified
extra_args: '--results=verified,unknown'
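The extra_args change widens the report from verified-only findings to verified plus unknown-verification results. The same filter can be reproduced locally against a working copy, assuming the trufflehog binary is installed:

```bash
# Scan the current checkout with the same result filter used in CI.
trufflehog git file://. --results=verified,unknown
```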
+34
@@ -0,0 +1,34 @@
name: Label Community Contributors PRs
on:
pull_request:
types: [opened]
jobs:
add-community-label:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- name: Label community contributors
env:
GH_TOKEN: ${{ github.token }}
run: |
# Fetch fresh PR data to get current author_association
ASSOCIATION=$(gh api /repos/${{ github.repository }}/pulls/${{ github.event.number }} --jq '.author_association')
AUTHOR=$(gh api /repos/${{ github.repository }}/pulls/${{ github.event.number }} --jq '.user.login')
echo "Author: $AUTHOR, Association: $ASSOCIATION"
# Members have associations like: OWNER, MEMBER, COLLABORATOR
# Non-members have: CONTRIBUTOR, FIRST_TIME_CONTRIBUTOR, FIRST_TIMER, NONE
if [[ "$ASSOCIATION" != "OWNER" && "$ASSOCIATION" != "MEMBER" && "$ASSOCIATION" != "COLLABORATOR" ]]; then
gh api /repos/${{ github.repository }}/issues/${{ github.event.number }}/labels \
-X POST \
-f labels[]='community'
echo "Added 'community' label for $ASSOCIATION contributor"
else
echo "Skipped labeling for $ASSOCIATION"
fi
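The whole classification hinges on the author_association field of the pull request; the value the workflow will see can be inspected for any PR with the same API call, here using PR #9035 from the commit list purely as an example:

```bash
# Check how GitHub classifies the author of a given pull request.
gh api /repos/prowler-cloud/prowler/pulls/9035 \
  --jq '{user: .user.login, association: .author_association}'
```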
+20 -8
@@ -1,17 +1,29 @@
name: Prowler - PR Labeler
name: 'Tools: PR Labeler'
on:
pull_request_target:
branches:
- "master"
- "v3"
- "v4.*"
pull_request_target:
branches:
- 'master'
- 'v5.*'
types:
- 'opened'
- 'reopened'
- 'synchronize'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
labeler:
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@634933edcd8ababfe52f92936142cc22ac488b1b # v6.0.1
- name: Apply labels to PR
uses: actions/labeler@634933edcd8ababfe52f92936142cc22ac488b1b # v6.0.1
with:
sync-labels: true
@@ -0,0 +1,110 @@
name: 'MCP: Container Build and Push'
on:
push:
branches:
- 'master'
paths:
- 'mcp_server/**'
- '.github/workflows/mcp-container-build-push.yml'
release:
types:
- 'published'
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
# Tags
LATEST_TAG: latest
RELEASE_TAG: ${{ github.event.release.tag_name }}
STABLE_TAG: stable
WORKING_DIRECTORY: ./mcp_server
# Container registries
PROWLERCLOUD_DOCKERHUB_REPOSITORY: prowlercloud
PROWLERCLOUD_DOCKERHUB_IMAGE: prowler-mcp
jobs:
setup:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 5
outputs:
short-sha: ${{ steps.set-short-sha.outputs.short-sha }}
steps:
- name: Calculate short SHA
id: set-short-sha
run: echo "short-sha=${GITHUB_SHA::7}" >> $GITHUB_OUTPUT
container-build-push:
needs: setup
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Login to DockerHub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build and push MCP container (latest)
if: github.event_name == 'push'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
push: true
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.LATEST_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}
labels: |
org.opencontainers.image.title=Prowler MCP Server
org.opencontainers.image.description=Model Context Protocol server for Prowler
org.opencontainers.image.vendor=ProwlerPro, Inc.
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.created=${{ github.event.head_commit.timestamp }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Build and push MCP container (release)
if: github.event_name == 'release'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
push: true
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.RELEASE_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.STABLE_TAG }}
labels: |
org.opencontainers.image.title=Prowler MCP Server
org.opencontainers.image.description=Model Context Protocol server for Prowler
org.opencontainers.image.vendor=ProwlerPro, Inc.
org.opencontainers.image.version=${{ env.RELEASE_TAG }}
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.created=${{ github.event.release.published_at }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Trigger MCP deployment
if: github.event_name == 'push'
uses: peter-evans/repository-dispatch@5fc4efd1a4797ddb68ffd0714a238564e4cc0e6f # v4.0.0
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
event-type: mcp-prowler-deployment
client-payload: '{"sha": "${{ github.sha }}", "short_sha": "${{ needs.setup.outputs.short-sha }}"}'
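For local experimentation, the push-to-master build step corresponds roughly to a plain `docker build` of the MCP server context; the image name and tag below simply mirror the environment variables above, and nothing is pushed:

```bash
# Build the MCP server image locally with a latest-style tag (no push, no cache export)
docker build -t prowlercloud/prowler-mcp:latest ./mcp_server
```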
+80
@@ -0,0 +1,80 @@
name: 'MCP: Pull Request'
on:
push:
branches:
- 'master'
- 'v5.*'
paths:
- '.github/workflows/mcp-pull-request.yml'
- 'mcp_server/**'
- '!mcp_server/README.md'
- '!mcp_server/CHANGELOG.md'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- '.github/workflows/mcp-pull-request.yml'
- 'mcp_server/**'
- '!mcp_server/README.md'
- '!mcp_server/CHANGELOG.md'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
MCP_WORKING_DIR: ./mcp_server
IMAGE_NAME: prowler-mcp
jobs:
dockerfile-lint:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Lint Dockerfile with Hadolint
uses: hadolint/hadolint-action@2332a7b74a6de0dda2e2221d575162eba76ba5e5 # v3.3.0
with:
dockerfile: mcp_server/Dockerfile
container-build-and-scan:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
security-events: write
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build MCP container
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.MCP_WORKING_DIR }}
push: false
load: true
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Scan MCP container with Trivy
uses: ./.github/actions/trivy-scan
with:
image-name: ${{ env.IMAGE_NAME }}
image-tag: ${{ github.sha }}
fail-on-critical: 'false'
severity: 'CRITICAL'
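The Dockerfile lint job can be reproduced locally with the hadolint CLI, assuming it is installed:

```bash
# Lint the MCP server Dockerfile the same way the workflow's hadolint job does
hadolint mcp_server/Dockerfile
```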
+103
@@ -0,0 +1,103 @@
name: 'Tools: Check Changelog'
on:
pull_request:
types:
- 'opened'
- 'synchronize'
- 'reopened'
- 'labeled'
- 'unlabeled'
branches:
- 'master'
- 'v5.*'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
check-changelog:
if: contains(github.event.pull_request.labels.*.name, 'no-changelog') == false
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
env:
MONITORED_FOLDERS: 'api ui prowler mcp_server'
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
api/**
ui/**
prowler/**
mcp_server/**
- name: Check for folder changes and changelog presence
id: check-folders
run: |
missing_changelogs=""
# Check api folder
if [[ "${{ steps.changed-files.outputs.any_changed }}" == "true" ]]; then
for folder in $MONITORED_FOLDERS; do
# Get files changed in this folder
changed_in_folder=$(echo "${{ steps.changed-files.outputs.all_changed_files }}" | tr ' ' '\n' | grep "^${folder}/" || true)
if [ -n "$changed_in_folder" ]; then
echo "Detected changes in ${folder}/"
# Check if CHANGELOG.md was updated
if ! echo "$changed_in_folder" | grep -q "^${folder}/CHANGELOG.md$"; then
echo "No changelog update found for ${folder}/"
missing_changelogs="${missing_changelogs}- \`${folder}\`"$'\n'
fi
fi
done
fi
{
echo "missing_changelogs<<EOF"
echo -e "${missing_changelogs}"
echo "EOF"
} >> $GITHUB_OUTPUT
- name: Find existing changelog comment
if: github.event.pull_request.head.repo.full_name == github.repository
id: find-comment
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad # v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-includes: '<!-- changelog-check -->'
- name: Update PR comment with changelog status
if: github.event.pull_request.head.repo.full_name == github.repository
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-id: ${{ steps.find-comment.outputs.comment-id }}
edit-mode: replace
body: |
<!-- changelog-check -->
${{ steps.check-folders.outputs.missing_changelogs != '' && format('⚠️ **Changes detected in the following folders without a corresponding update to the `CHANGELOG.md`:**
{0}
Please add an entry to the corresponding `CHANGELOG.md` file to maintain a clear history of changes.', steps.check-folders.outputs.missing_changelogs) || '✅ All necessary `CHANGELOG.md` files have been updated.' }}
- name: Fail if changelog is missing
if: steps.check-folders.outputs.missing_changelogs != ''
run: |
echo "::error::Missing changelog updates in some folders"
exit 1
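Because `missing_changelogs` can span several lines, the check step writes it to `GITHUB_OUTPUT` using the heredoc-style delimiter syntax. A minimal sketch of that pattern with illustrative content:

```bash
# Multiline step outputs must be wrapped in a name<<DELIMITER ... DELIMITER block
{
    echo "missing_changelogs<<EOF"
    printf '%s\n' '- `api`' '- `ui`'   # illustrative multiline value
    echo "EOF"
} >> "$GITHUB_OUTPUT"
```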
+52 -104
@@ -1,42 +1,40 @@
name: Prowler - PR Conflict Checker
name: 'Tools: PR Conflict Checker'
on:
pull_request:
pull_request_target:
types:
- opened
- synchronize
- reopened
- 'opened'
- 'synchronize'
- 'reopened'
branches:
- "master"
- "v5.*"
# Leaving this commented until we find a way to run it for forks but in Prowler's context
# pull_request_target:
# types:
# - opened
# - synchronize
# - reopened
# branches:
# - "master"
# - "v5.*"
- 'master'
- 'v5.*'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
conflict-checker:
check-conflicts:
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
issues: write
steps:
- name: Checkout repository
- name: Checkout PR head
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
**
files: '**'
- name: Check for conflict markers
id: conflict-check
@@ -51,10 +49,10 @@ jobs:
if [ -f "$file" ]; then
echo "Checking file: $file"
# Look for conflict markers
if grep -l "^<<<<<<<\|^=======\|^>>>>>>>" "$file" 2>/dev/null; then
# Look for conflict markers (more precise regex)
if grep -qE '^(<<<<<<<|=======|>>>>>>>)' "$file" 2>/dev/null; then
echo "Conflict markers found in: $file"
CONFLICT_FILES="$CONFLICT_FILES$file "
CONFLICT_FILES="${CONFLICT_FILES}- \`${file}\`"$'\n'
HAS_CONFLICTS=true
fi
fi
@@ -62,114 +60,64 @@ jobs:
if [ "$HAS_CONFLICTS" = true ]; then
echo "has_conflicts=true" >> $GITHUB_OUTPUT
echo "conflict_files=$CONFLICT_FILES" >> $GITHUB_OUTPUT
echo "Conflict markers detected in files: $CONFLICT_FILES"
{
echo "conflict_files<<EOF"
echo "$CONFLICT_FILES"
echo "EOF"
} >> $GITHUB_OUTPUT
echo "Conflict markers detected"
else
echo "has_conflicts=false" >> $GITHUB_OUTPUT
echo "No conflict markers found in changed files"
fi
- name: Add conflict label
if: steps.conflict-check.outputs.has_conflicts == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
github-token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
script: |
const { data: labels } = await github.rest.issues.listLabelsOnIssue({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
});
- name: Manage conflict label
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ github.event.pull_request.number }}
HAS_CONFLICTS: ${{ steps.conflict-check.outputs.has_conflicts }}
run: |
LABEL_NAME="has-conflicts"
const hasConflictLabel = labels.some(label => label.name === 'has-conflicts');
# Add or remove label based on conflict status
if [ "$HAS_CONFLICTS" = "true" ]; then
echo "Adding conflict label to PR #${PR_NUMBER}..."
gh pr edit "$PR_NUMBER" --add-label "$LABEL_NAME" --repo ${{ github.repository }} || true
else
echo "Removing conflict label from PR #${PR_NUMBER}..."
gh pr edit "$PR_NUMBER" --remove-label "$LABEL_NAME" --repo ${{ github.repository }} || true
fi
if (!hasConflictLabel) {
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
labels: ['has-conflicts']
});
console.log('Added has-conflicts label');
} else {
console.log('has-conflicts label already exists');
}
- name: Remove conflict label
if: steps.conflict-check.outputs.has_conflicts == 'false'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
github-token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
script: |
try {
await github.rest.issues.removeLabel({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
name: 'has-conflicts'
});
console.log('Removed has-conflicts label');
} catch (error) {
if (error.status === 404) {
console.log('has-conflicts label was not present');
} else {
throw error;
}
}
- name: Find existing conflict comment
if: steps.conflict-check.outputs.has_conflicts == 'true'
- name: Find existing comment
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad # v4.0.0
id: find-comment
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-regex: '(⚠️ \*\*Conflict Markers Detected\*\*|✅ \*\*Conflict Markers Resolved\*\*)'
body-includes: '<!-- conflict-checker-comment -->'
- name: Create or update conflict comment
if: steps.conflict-check.outputs.has_conflicts == 'true'
- name: Create or update comment
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ github.event.pull_request.number }}
edit-mode: replace
body: |
⚠️ **Conflict Markers Detected**
<!-- conflict-checker-comment -->
${{ steps.conflict-check.outputs.has_conflicts == 'true' && '⚠️ **Conflict Markers Detected**' || '✅ **Conflict Markers Resolved**' }}
This pull request contains unresolved conflict markers in the following files:
```
${{ steps.conflict-check.outputs.conflict_files }}
```
${{ steps.conflict-check.outputs.has_conflicts == 'true' && format('This pull request contains unresolved conflict markers in the following files:
{0}
Please resolve these conflicts by:
1. Locating the conflict markers: `<<<<<<<`, `=======`, and `>>>>>>>`
2. Manually editing the files to resolve the conflicts
3. Removing all conflict markers
4. Committing and pushing the changes
- name: Find existing conflict comment when resolved
if: steps.conflict-check.outputs.has_conflicts == 'false'
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad # v4.0.0
id: find-resolved-comment
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-regex: '(⚠️ \*\*Conflict Markers Detected\*\*|✅ \*\*Conflict Markers Resolved\*\*)'
- name: Update comment when conflicts resolved
if: steps.conflict-check.outputs.has_conflicts == 'false' && steps.find-resolved-comment.outputs.comment-id != ''
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-resolved-comment.outputs.comment-id }}
issue-number: ${{ github.event.pull_request.number }}
edit-mode: replace
body: |
✅ **Conflict Markers Resolved**
All conflict markers have been successfully resolved in this pull request.
4. Committing and pushing the changes', steps.conflict-check.outputs.conflict_files) || 'All conflict markers have been successfully resolved in this pull request.' }}
- name: Fail workflow if conflicts detected
if: steps.conflict-check.outputs.has_conflicts == 'true'
run: |
echo "::error::Workflow failed due to conflict markers in files: ${{ steps.conflict-check.outputs.conflict_files }}"
echo "::error::Workflow failed due to conflict markers detected in the PR"
exit 1
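The conflict detection itself reduces to one `grep -qE` per changed file. A local approximation of the check, using the working tree instead of the workflow's `changed-files` output:

```bash
# Flag any modified tracked file that still contains merge conflict markers
for file in $(git diff --name-only HEAD); do
    if [ -f "$file" ] && grep -qE '^(<<<<<<<|=======|>>>>>>>)' "$file"; then
        echo "Conflict markers found in: $file"
    fi
done
```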
@@ -1,27 +1,31 @@
name: Prowler - Merged Pull Request
name: 'Tools: PR Merged'
on:
pull_request_target:
branches: ['master']
types: ['closed']
branches:
- 'master'
types:
- 'closed'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: false
jobs:
trigger-cloud-pull-request:
name: Trigger Cloud Pull Request
if: github.event.pull_request.merged == true && github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 10
permissions:
contents: read
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
ref: ${{ github.event.pull_request.merge_commit_sha }}
- name: Set short git commit SHA
- name: Calculate short commit SHA
id: vars
run: |
shortSha=$(git rev-parse --short ${{ github.event.pull_request.merge_commit_sha }})
echo "SHORT_SHA=${shortSha}" >> $GITHUB_ENV
SHORT_SHA="${{ github.event.pull_request.merge_commit_sha }}"
echo "SHORT_SHA=${SHORT_SHA::7}" >> $GITHUB_ENV
- name: Trigger pull request
- name: Trigger Cloud repository pull request
uses: peter-evans/repository-dispatch@5fc4efd1a4797ddb68ffd0714a238564e4cc0e6f # v4.0.0
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
@@ -31,8 +35,12 @@ jobs:
{
"PROWLER_COMMIT_SHA": "${{ github.event.pull_request.merge_commit_sha }}",
"PROWLER_COMMIT_SHORT_SHA": "${{ env.SHORT_SHA }}",
"PROWLER_PR_NUMBER": "${{ github.event.pull_request.number }}",
"PROWLER_PR_TITLE": ${{ toJson(github.event.pull_request.title) }},
"PROWLER_PR_LABELS": ${{ toJson(github.event.pull_request.labels.*.name) }},
"PROWLER_PR_BODY": ${{ toJson(github.event.pull_request.body) }},
"PROWLER_PR_URL": ${{ toJson(github.event.pull_request.html_url) }}
"PROWLER_PR_URL": ${{ toJson(github.event.pull_request.html_url) }},
"PROWLER_PR_MERGED_BY": "${{ github.event.pull_request.merged_by.login }}",
"PROWLER_PR_BASE_BRANCH": "${{ github.event.pull_request.base.ref }}",
"PROWLER_PR_HEAD_BRANCH": "${{ github.event.pull_request.head.ref }}"
}
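The short SHA is now derived with Bash substring expansion instead of `git rev-parse --short`; the trick in isolation, with a placeholder SHA:

```bash
MERGE_COMMIT_SHA="0123456789abcdef0123456789abcdef01234567"  # placeholder value
echo "SHORT_SHA=${MERGE_COMMIT_SHA::7}"                      # prints SHORT_SHA=0123456
```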
@@ -1,6 +1,6 @@
name: Prowler - Release Preparation
name: 'Tools: Prepare Release'
run-name: Prowler Release Preparation for ${{ inputs.prowler_version }}
run-name: 'Prepare Release for Prowler ${{ inputs.prowler_version }}'
on:
workflow_dispatch:
@@ -10,18 +10,23 @@ on:
required: true
type: string
concurrency:
group: ${{ github.workflow }}-${{ inputs.prowler_version }}
cancel-in-progress: false
env:
PROWLER_VERSION: ${{ github.event.inputs.prowler_version }}
PROWLER_VERSION: ${{ inputs.prowler_version }}
jobs:
prepare-release:
if: github.repository == 'prowler-cloud/prowler'
if: github.event_name == 'workflow_dispatch' && github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: write
pull-requests: write
steps:
- name: Checkout code
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
@@ -34,15 +39,15 @@ jobs:
- name: Install Poetry
run: |
python3 -m pip install --user poetry
python3 -m pip install --user poetry==2.1.1
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Configure Git
run: |
git config --global user.name "prowler-bot"
git config --global user.email "179230569+prowler-bot@users.noreply.github.com"
git config --global user.name 'prowler-bot'
git config --global user.email '179230569+prowler-bot@users.noreply.github.com'
- name: Parse version and determine branch
- name: Parse version and read changelogs
run: |
# Validate version format (reusing pattern from sdk-bump-version.yml)
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
@@ -76,10 +81,12 @@ jobs:
UI_VERSION=$(extract_latest_version "ui/CHANGELOG.md")
API_VERSION=$(extract_latest_version "api/CHANGELOG.md")
SDK_VERSION=$(extract_latest_version "prowler/CHANGELOG.md")
MCP_VERSION=$(extract_latest_version "mcp_server/CHANGELOG.md")
echo "UI_VERSION=${UI_VERSION}" >> "${GITHUB_ENV}"
echo "API_VERSION=${API_VERSION}" >> "${GITHUB_ENV}"
echo "SDK_VERSION=${SDK_VERSION}" >> "${GITHUB_ENV}"
echo "MCP_VERSION=${MCP_VERSION}" >> "${GITHUB_ENV}"
if [ -n "$UI_VERSION" ]; then
echo "Read UI version from changelog: $UI_VERSION"
@@ -99,18 +106,25 @@ jobs:
echo "Warning: No SDK version found in prowler/CHANGELOG.md"
fi
if [ -n "$MCP_VERSION" ]; then
echo "Read MCP version from changelog: $MCP_VERSION"
else
echo "Warning: No MCP version found in mcp_server/CHANGELOG.md"
fi
echo "Prowler version: $PROWLER_VERSION"
echo "Branch name: $BRANCH_NAME"
echo "UI version: $UI_VERSION"
echo "API version: $API_VERSION"
echo "SDK version: $SDK_VERSION"
echo "MCP version: $MCP_VERSION"
echo "Is minor release: $([ $PATCH_VERSION -eq 0 ] && echo 'true' || echo 'false')"
else
echo "Invalid version syntax: '$PROWLER_VERSION' (must be N.N.N)" >&2
exit 1
fi
- name: Extract changelog entries
- name: Extract and combine changelog entries
run: |
set -e
@@ -136,8 +150,8 @@ jobs:
# Remove --- separators
sed -i '/^---$/d' "$output_file"
# Remove trailing empty lines
sed -i '/^$/d' "$output_file"
# Remove only trailing empty lines (not all empty lines)
sed -i -e :a -e '/^\s*$/d;N;ba' "$output_file"
}
# Calculate expected versions for this release
@@ -183,7 +197,26 @@ jobs:
touch "prowler_changelog.md"
fi
# Combine changelogs in order: UI, API, SDK
# MCP has changes if the changelog references this Prowler version
# Check if the changelog contains "(Prowler X.Y.Z)" or "(Prowler UNRELEASED)"
if [ -f "mcp_server/CHANGELOG.md" ]; then
MCP_PROWLER_REF=$(grep -m 1 "^## \[.*\] (Prowler" mcp_server/CHANGELOG.md | sed -E 's/.*\(Prowler ([^)]+)\).*/\1/' | tr -d '[:space:]')
if [ "$MCP_PROWLER_REF" = "$PROWLER_VERSION" ] || [ "$MCP_PROWLER_REF" = "UNRELEASED" ]; then
echo "HAS_MCP_CHANGES=true" >> $GITHUB_ENV
echo "✓ MCP changes detected - Prowler reference: $MCP_PROWLER_REF (version: $MCP_VERSION)"
extract_changelog "mcp_server/CHANGELOG.md" "$MCP_VERSION" "mcp_changelog.md"
else
echo "HAS_MCP_CHANGES=false" >> $GITHUB_ENV
echo " No MCP changes for this release (Prowler reference: $MCP_PROWLER_REF, input: $PROWLER_VERSION)"
touch "mcp_changelog.md"
fi
else
echo "HAS_MCP_CHANGES=false" >> $GITHUB_ENV
echo " No MCP changelog found"
touch "mcp_changelog.md"
fi
# Combine changelogs in order: UI, API, SDK, MCP
> combined_changelog.md
if [ "$HAS_UI_CHANGES" = "true" ] && [ -s "ui_changelog.md" ]; then
@@ -207,10 +240,22 @@ jobs:
echo "" >> combined_changelog.md
fi
if [ "$HAS_MCP_CHANGES" = "true" ] && [ -s "mcp_changelog.md" ]; then
echo "## MCP" >> combined_changelog.md
echo "" >> combined_changelog.md
cat mcp_changelog.md >> combined_changelog.md
echo "" >> combined_changelog.md
fi
# Add fallback message if no changelogs were added
if [ ! -s combined_changelog.md ]; then
echo "No component changes detected for this release." >> combined_changelog.md
fi
echo "Combined changelog preview:"
cat combined_changelog.md
- name: Checkout existing branch for patch release
- name: Checkout release branch for patch release
if: ${{ env.PATCH_VERSION != '0' }}
run: |
echo "Patch release detected, checking out existing branch $BRANCH_NAME..."
@@ -225,7 +270,7 @@ jobs:
exit 1
fi
- name: Verify version in pyproject.toml
- name: Verify SDK version in pyproject.toml
run: |
CURRENT_VERSION=$(grep '^version = ' pyproject.toml | sed -E 's/version = "([^"]+)"/\1/' | tr -d '[:space:]')
PROWLER_VERSION_TRIMMED=$(echo "$PROWLER_VERSION" | tr -d '[:space:]')
@@ -235,7 +280,7 @@ jobs:
fi
echo "✓ pyproject.toml version: $CURRENT_VERSION"
- name: Verify version in prowler/config/config.py
- name: Verify SDK version in prowler/config/config.py
run: |
CURRENT_VERSION=$(grep '^prowler_version = ' prowler/config/config.py | sed -E 's/prowler_version = "([^"]+)"/\1/' | tr -d '[:space:]')
PROWLER_VERSION_TRIMMED=$(echo "$PROWLER_VERSION" | tr -d '[:space:]')
@@ -245,7 +290,7 @@ jobs:
fi
echo "✓ prowler/config/config.py version: $CURRENT_VERSION"
- name: Verify version in api/pyproject.toml
- name: Verify API version in api/pyproject.toml
if: ${{ env.HAS_API_CHANGES == 'true' }}
run: |
CURRENT_API_VERSION=$(grep '^version = ' api/pyproject.toml | sed -E 's/version = "([^"]+)"/\1/' | tr -d '[:space:]')
@@ -256,7 +301,7 @@ jobs:
fi
echo "✓ api/pyproject.toml version: $CURRENT_API_VERSION"
- name: Verify prowler dependency in api/pyproject.toml
- name: Verify API prowler dependency in api/pyproject.toml
if: ${{ env.PATCH_VERSION != '0' && env.HAS_API_CHANGES == 'true' }}
run: |
CURRENT_PROWLER_REF=$(grep 'prowler @ git+https://github.com/prowler-cloud/prowler.git@' api/pyproject.toml | sed -E 's/.*@([^"]+)".*/\1/' | tr -d '[:space:]')
@@ -267,7 +312,7 @@ jobs:
fi
echo "✓ api/pyproject.toml prowler dependency: $CURRENT_PROWLER_REF"
- name: Verify version in api/src/backend/api/v1/views.py
- name: Verify API version in api/src/backend/api/v1/views.py
if: ${{ env.HAS_API_CHANGES == 'true' }}
run: |
CURRENT_API_VERSION=$(grep 'spectacular_settings.VERSION = ' api/src/backend/api/v1/views.py | sed -E 's/.*spectacular_settings.VERSION = "([^"]+)".*/\1/' | tr -d '[:space:]')
@@ -278,7 +323,7 @@ jobs:
fi
echo "✓ api/src/backend/api/v1/views.py version: $CURRENT_API_VERSION"
- name: Checkout existing release branch for minor release
- name: Checkout release branch for minor release
if: ${{ env.PATCH_VERSION == '0' }}
run: |
echo "Minor release detected (patch = 0), checking out existing branch $BRANCH_NAME..."
@@ -290,19 +335,12 @@ jobs:
exit 1
fi
- name: Prepare prowler dependency update for minor release
- name: Update API prowler dependency for minor release
if: ${{ env.PATCH_VERSION == '0' }}
run: |
CURRENT_PROWLER_REF=$(grep 'prowler @ git+https://github.com/prowler-cloud/prowler.git@' api/pyproject.toml | sed -E 's/.*@([^"]+)".*/\1/' | tr -d '[:space:]')
BRANCH_NAME_TRIMMED=$(echo "$BRANCH_NAME" | tr -d '[:space:]')
# Create a temporary branch for the PR from the minor version branch
TEMP_BRANCH="update-api-dependency-$BRANCH_NAME_TRIMMED-$(date +%s)"
echo "TEMP_BRANCH=$TEMP_BRANCH" >> $GITHUB_ENV
# Create temp branch from the current minor version branch
git checkout -b "$TEMP_BRANCH"
# Minor release: update the dependency to use the release branch
echo "Updating prowler dependency from '$CURRENT_PROWLER_REF' to '$BRANCH_NAME_TRIMMED'"
sed -i "s|prowler @ git+https://github.com/prowler-cloud/prowler.git@[^\"]*\"|prowler @ git+https://github.com/prowler-cloud/prowler.git@$BRANCH_NAME_TRIMMED\"|" api/pyproject.toml
@@ -320,20 +358,19 @@ jobs:
poetry lock
cd ..
# Commit and push the temporary branch
git add api/pyproject.toml api/poetry.lock
git commit -m "chore(api): update prowler dependency to $BRANCH_NAME_TRIMMED for release $PROWLER_VERSION"
git push origin "$TEMP_BRANCH"
echo "✓ Prepared prowler dependency update to: $UPDATED_PROWLER_REF"
- name: Create Pull Request against release branch
- name: Create PR for API dependency update
if: ${{ env.PATCH_VERSION == '0' }}
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
branch: ${{ env.TEMP_BRANCH }}
commit-message: 'chore(api): update prowler dependency to ${{ env.BRANCH_NAME }} for release ${{ env.PROWLER_VERSION }}'
branch: update-api-dependency-${{ env.BRANCH_NAME }}-${{ github.run_number }}
base: ${{ env.BRANCH_NAME }}
add-paths: |
api/pyproject.toml
api/poetry.lock
title: "chore(api): Update prowler dependency to ${{ env.BRANCH_NAME }} for release ${{ env.PROWLER_VERSION }}"
body: |
### Description
@@ -366,5 +403,6 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Clean up temporary files
if: always()
run: |
rm -f prowler_changelog.md api_changelog.md ui_changelog.md combined_changelog.md
rm -f prowler_changelog.md api_changelog.md ui_changelog.md mcp_changelog.md combined_changelog.md
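The MCP gating earlier in this workflow hinges on pulling the Prowler reference out of the newest MCP changelog heading. The same `sed` expression in isolation, run against a sample heading rather than the real file:

```bash
# Sample heading format: ## [1.2.0] (Prowler 5.14.0)
echo '## [1.2.0] (Prowler 5.14.0)' | sed -E 's/.*\(Prowler ([^)]+)\).*/\1/' | tr -d '[:space:]'
# -> 5.14.0
```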
@@ -1,77 +0,0 @@
name: Prowler - Check Changelog
on:
pull_request:
types: [opened, synchronize, reopened, labeled, unlabeled]
jobs:
check-changelog:
if: contains(github.event.pull_request.labels.*.name, 'no-changelog') == false
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read
pull-requests: write
env:
MONITORED_FOLDERS: "api ui prowler dashboard"
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
- name: Get list of changed files
id: changed_files
run: |
git fetch origin ${{ github.base_ref }}
git diff --name-only origin/${{ github.base_ref }}...HEAD > changed_files.txt
cat changed_files.txt
- name: Check for folder changes and changelog presence
id: check_folders
run: |
missing_changelogs=""
for folder in $MONITORED_FOLDERS; do
if grep -q "^${folder}/" changed_files.txt; then
echo "Detected changes in ${folder}/"
if ! grep -q "^${folder}/CHANGELOG.md$" changed_files.txt; then
echo "No changelog update found for ${folder}/"
missing_changelogs="${missing_changelogs}- \`${folder}\`\n"
fi
fi
done
echo "missing_changelogs<<EOF" >> $GITHUB_OUTPUT
echo -e "${missing_changelogs}" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
- name: Find existing changelog comment
if: github.event.pull_request.head.repo.full_name == github.repository
id: find_comment
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad #v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-includes: '<!-- changelog-check -->'
- name: Update PR comment with changelog status
if: github.event.pull_request.head.repo.full_name == github.repository
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-id: ${{ steps.find_comment.outputs.comment-id }}
edit-mode: replace
body: |
<!-- changelog-check -->
${{ steps.check_folders.outputs.missing_changelogs != '' && format('⚠️ **Changes detected in the following folders without a corresponding update to the `CHANGELOG.md`:**
{0}
Please add an entry to the corresponding `CHANGELOG.md` file to maintain a clear history of changes.', steps.check_folders.outputs.missing_changelogs) || '✅ All necessary `CHANGELOG.md` files have been updated. Great job! 🎉' }}
- name: Fail if changelog is missing
if: steps.check_folders.outputs.missing_changelogs != ''
run: |
echo "ERROR: Missing changelog updates in some folders."
exit 1
+38 -43
@@ -1,44 +1,40 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: SDK - CodeQL
name: 'SDK: CodeQL'
on:
push:
branches:
- "master"
- "v3"
- "v4.*"
- "v5.*"
paths-ignore:
- 'ui/**'
- 'api/**'
- '.github/**'
- 'master'
- 'v5.*'
paths:
- 'prowler/**'
- 'tests/**'
- 'pyproject.toml'
- '.github/workflows/sdk-codeql.yml'
- '.github/codeql/sdk-codeql-config.yml'
- '!prowler/CHANGELOG.md'
pull_request:
branches:
- "master"
- "v3"
- "v4.*"
- "v5.*"
paths-ignore:
- 'ui/**'
- 'api/**'
- '.github/**'
- 'master'
- 'v5.*'
paths:
- 'prowler/**'
- 'tests/**'
- 'pyproject.toml'
- '.github/workflows/sdk-codeql.yml'
- '.github/codeql/sdk-codeql-config.yml'
- '!prowler/CHANGELOG.md'
schedule:
- cron: '00 12 * * *'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
analyze:
name: Analyze
name: CodeQL Security Analysis
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
actions: read
contents: read
@@ -47,21 +43,20 @@ jobs:
strategy:
fail-fast: false
matrix:
language: [ 'python' ]
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
language:
- 'python'
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/sdk-codeql-config.yml
- name: Initialize CodeQL
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/sdk-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
category: "/language:${{matrix.language}}"
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
category: '/language:${{ matrix.language }}'
+16 -1
@@ -249,6 +249,21 @@ jobs:
run: |
poetry run pytest -n auto --cov=./prowler/providers/mongodbatlas --cov-report=xml:mongodb_atlas_coverage.xml tests/providers/mongodbatlas
# Test OCI
- name: OCI - Check if any file has changed
id: oci-changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
./prowler/providers/oraclecloud/**
./tests/providers/oraclecloud/**
./poetry.lock
- name: OCI - Test
if: steps.oci-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/oraclecloud --cov-report=xml:oci_coverage.xml tests/providers/oraclecloud
# Common Tests
- name: Lib - Test
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
@@ -268,4 +283,4 @@ jobs:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
with:
flags: prowler
files: ./aws_coverage.xml,./azure_coverage.xml,./gcp_coverage.xml,./kubernetes_coverage.xml,./github_coverage.xml,./nhn_coverage.xml,./m365_coverage.xml,./lib_coverage.xml,./config_coverage.xml
files: ./aws_coverage.xml,./azure_coverage.xml,./gcp_coverage.xml,./kubernetes_coverage.xml,./github_coverage.xml,./nhn_coverage.xml,./m365_coverage.xml,./oci_coverage.xml,./lib_coverage.xml,./config_coverage.xml
+23 -24
@@ -1,36 +1,36 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: UI - CodeQL
name: 'UI: CodeQL'
on:
push:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
paths:
- "ui/**"
- 'ui/**'
- '.github/workflows/ui-codeql.yml'
- '.github/codeql/ui-codeql-config.yml'
- '!ui/CHANGELOG.md'
pull_request:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
paths:
- "ui/**"
- 'ui/**'
- '.github/workflows/ui-codeql.yml'
- '.github/codeql/ui-codeql-config.yml'
- '!ui/CHANGELOG.md'
schedule:
- cron: "00 12 * * *"
- cron: '00 12 * * *'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
analyze:
name: Analyze
name: CodeQL Security Analysis
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
actions: read
contents: read
@@ -39,14 +39,13 @@ jobs:
strategy:
fail-fast: false
matrix:
language: ["javascript"]
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
language:
- 'javascript-typescript'
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
@@ -56,4 +55,4 @@ jobs:
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
category: "/language:${{matrix.language}}"
category: '/language:${{ matrix.language }}'
+1
@@ -18,6 +18,7 @@ jobs:
AUTH_TRUST_HOST: true
NEXTAUTH_URL: 'http://localhost:3000'
NEXT_PUBLIC_API_BASE_URL: 'http://localhost:8080/api/v1'
E2E_NEW_PASSWORD: ${{ secrets.E2E_NEW_PASSWORD }}
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+9
@@ -39,6 +39,12 @@ secrets-*/
# JUnit Reports
junit-reports/
# Test and coverage artifacts
*_coverage.xml
pytest_*.xml
.coverage
htmlcov/
# VSCode files
.vscode/
@@ -83,3 +89,6 @@ CLAUDE.md
# MCP Server
mcp_server/prowler_mcp_server/prowler_app/server.py
mcp_server/prowler_mcp_server/prowler_app/utils/schema.yaml
# Compliance report
*.pdf
+9 -1
@@ -46,6 +46,14 @@ help: ## Show this help.
@echo "Prowler Makefile"
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_-]+:.*?##/ { printf " \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
##@ Build no cache
build-no-cache-dev:
docker compose -f docker-compose-dev.yml build --no-cache api-dev worker-dev worker-beat
##@ Development Environment
run-api-dev: ## Start development environment with API, PostgreSQL, Valkey, and workers
docker compose -f docker-compose-dev.yml up api-dev postgres valkey worker-dev worker-beat --build
docker compose -f docker-compose-dev.yml up api-dev postgres valkey worker-dev worker-beat
##@ Development Environment
build-and-run-api-dev: build-no-cache-dev run-api-dev
+4 -3
@@ -82,12 +82,13 @@ prowler dashboard
| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) | Support | Stage | Interface |
|---|---|---|---|---|---|---|---|
| AWS | 576 | 82 | 36 | 10 | Official | Stable | UI, API, CLI |
| GCP | 79 | 13 | 10 | 3 | Official | Stable | UI, API, CLI |
| Azure | 162 | 19 | 11 | 4 | Official | Stable | UI, API, CLI |
| AWS | 576 | 82 | 38 | 10 | Official | Stable | UI, API, CLI |
| GCP | 79 | 13 | 11 | 3 | Official | Stable | UI, API, CLI |
| Azure | 162 | 19 | 12 | 4 | Official | Stable | UI, API, CLI |
| Kubernetes | 83 | 7 | 5 | 7 | Official | Stable | UI, API, CLI |
| GitHub | 17 | 2 | 1 | 0 | Official | Stable | UI, API, CLI |
| M365 | 70 | 7 | 3 | 2 | Official | Stable | UI, API, CLI |
| OCI | 51 | 13 | 1 | 10 | Official | Stable | CLI |
| IaC | [See `trivy` docs.](https://trivy.dev/latest/docs/coverage/iac/) | N/A | N/A | N/A | Official | Beta | CLI |
| MongoDB Atlas | 10 | 3 | 0 | 0 | Official | Beta | CLI |
| LLM | [See `promptfoo` docs.](https://www.promptfoo.dev/docs/red-team/plugins/) | N/A | N/A | N/A | Official | Beta | CLI |
+14 -1
@@ -2,12 +2,25 @@
All notable changes to the **Prowler API** are documented in this file.
## [1.14.0] (Prowler UNRELEASED)
## [1.15.0] (Prowler UNRELEASED)
### Added
- Support for configuring multiple LLM providers [(#8772)](https://github.com/prowler-cloud/prowler/pull/8772)
## [1.14.0] (Prowler 5.13.0)
### Added
- Default JWT keys are generated and stored if they are missing from configuration [(#8655)](https://github.com/prowler-cloud/prowler/pull/8655)
- `compliance_name` for each compliance [(#7920)](https://github.com/prowler-cloud/prowler/pull/7920)
- Support C5 compliance framework for the AWS provider [(#8830)](https://github.com/prowler-cloud/prowler/pull/8830)
- Support for M365 Certificate authentication [(#8538)](https://github.com/prowler-cloud/prowler/pull/8538)
- API Key support [(#8805)](https://github.com/prowler-cloud/prowler/pull/8805)
- SAML role mapping protection for single-admin tenants to prevent accidental lockout [(#8882)](https://github.com/prowler-cloud/prowler/pull/8882)
- Support for `passed_findings` and `total_findings` fields in compliance requirement overview for accurate Prowler ThreatScore calculation [(#8582)](https://github.com/prowler-cloud/prowler/pull/8582)
- PDF reporting for Prowler ThreatScore [(#8867)](https://github.com/prowler-cloud/prowler/pull/8867)
- Database read replica support [(#8869)](https://github.com/prowler-cloud/prowler/pull/8869)
- Support Common Cloud Controls for AWS, Azure and GCP [(#8000)](https://github.com/prowler-cloud/prowler/pull/8000)
- Add `provider_id__in` filter support to findings and findings severity overview endpoints [(#8951)](https://github.com/prowler-cloud/prowler/pull/8951)
### Changed
- Now the MANAGE_ACCOUNT permission is required to modify or read user permissions instead of MANAGE_USERS [(#8281)](https://github.com/prowler-cloud/prowler/pull/8281)
+532 -5
@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.1.4 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.2.0 and should not be changed by hand.
[[package]]
name = "about-time"
@@ -273,14 +273,14 @@ tests-mypy = ["mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" a
[[package]]
name = "authlib"
version = "1.6.4"
version = "1.6.5"
description = "The ultimate Python library in building OAuth and OpenID Connect servers and clients."
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "authlib-1.6.4-py2.py3-none-any.whl", hash = "sha256:39313d2a2caac3ecf6d8f95fbebdfd30ae6ea6ae6a6db794d976405fdd9aa796"},
{file = "authlib-1.6.4.tar.gz", hash = "sha256:104b0442a43061dc8bc23b133d1d06a2b0a9c2e3e33f34c4338929e816287649"},
{file = "authlib-1.6.5-py2.py3-none-any.whl", hash = "sha256:3e0e0507807f842b02175507bdee8957a1d5707fd4afb17c32fb43fee90b6e3a"},
{file = "authlib-1.6.5.tar.gz", hash = "sha256:6aaf9c79b7cc96c900f0b284061691c5d4e61221640a948fe690b556a6d6d10b"},
]
[package.dependencies]
@@ -1256,6 +1256,98 @@ files = [
{file = "contextlib2-21.6.0.tar.gz", hash = "sha256:ab1e2bfe1d01d968e1b7e8d9023bc51ef3509bba217bb730cee3827e1ee82869"},
]
[[package]]
name = "contourpy"
version = "1.3.3"
description = "Python library for calculating contours of 2D quadrilateral grids"
optional = false
python-versions = ">=3.11"
groups = ["main"]
files = [
{file = "contourpy-1.3.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:709a48ef9a690e1343202916450bc48b9e51c049b089c7f79a267b46cffcdaa1"},
{file = "contourpy-1.3.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:23416f38bfd74d5d28ab8429cc4d63fa67d5068bd711a85edb1c3fb0c3e2f381"},
{file = "contourpy-1.3.3-cp311-cp311-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:929ddf8c4c7f348e4c0a5a3a714b5c8542ffaa8c22954862a46ca1813b667ee7"},
{file = "contourpy-1.3.3-cp311-cp311-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:9e999574eddae35f1312c2b4b717b7885d4edd6cb46700e04f7f02db454e67c1"},
{file = "contourpy-1.3.3-cp311-cp311-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:0bf67e0e3f482cb69779dd3061b534eb35ac9b17f163d851e2a547d56dba0a3a"},
{file = "contourpy-1.3.3-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:51e79c1f7470158e838808d4a996fa9bac72c498e93d8ebe5119bc1e6becb0db"},
{file = "contourpy-1.3.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:598c3aaece21c503615fd59c92a3598b428b2f01bfb4b8ca9c4edeecc2438620"},
{file = "contourpy-1.3.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:322ab1c99b008dad206d406bb61d014cf0174df491ae9d9d0fac6a6fda4f977f"},
{file = "contourpy-1.3.3-cp311-cp311-win32.whl", hash = "sha256:fd907ae12cd483cd83e414b12941c632a969171bf90fc937d0c9f268a31cafff"},
{file = "contourpy-1.3.3-cp311-cp311-win_amd64.whl", hash = "sha256:3519428f6be58431c56581f1694ba8e50626f2dd550af225f82fb5f5814d2a42"},
{file = "contourpy-1.3.3-cp311-cp311-win_arm64.whl", hash = "sha256:15ff10bfada4bf92ec8b31c62bf7c1834c244019b4a33095a68000d7075df470"},
{file = "contourpy-1.3.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:b08a32ea2f8e42cf1d4be3169a98dd4be32bafe4f22b6c4cb4ba810fa9e5d2cb"},
{file = "contourpy-1.3.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:556dba8fb6f5d8742f2923fe9457dbdd51e1049c4a43fd3986a0b14a1d815fc6"},
{file = "contourpy-1.3.3-cp312-cp312-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:92d9abc807cf7d0e047b95ca5d957cf4792fcd04e920ca70d48add15c1a90ea7"},
{file = "contourpy-1.3.3-cp312-cp312-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:b2e8faa0ed68cb29af51edd8e24798bb661eac3bd9f65420c1887b6ca89987c8"},
{file = "contourpy-1.3.3-cp312-cp312-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:626d60935cf668e70a5ce6ff184fd713e9683fb458898e4249b63be9e28286ea"},
{file = "contourpy-1.3.3-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4d00e655fcef08aba35ec9610536bfe90267d7ab5ba944f7032549c55a146da1"},
{file = "contourpy-1.3.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:451e71b5a7d597379ef572de31eeb909a87246974d960049a9848c3bc6c41bf7"},
{file = "contourpy-1.3.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:459c1f020cd59fcfe6650180678a9993932d80d44ccde1fa1868977438f0b411"},
{file = "contourpy-1.3.3-cp312-cp312-win32.whl", hash = "sha256:023b44101dfe49d7d53932be418477dba359649246075c996866106da069af69"},
{file = "contourpy-1.3.3-cp312-cp312-win_amd64.whl", hash = "sha256:8153b8bfc11e1e4d75bcb0bff1db232f9e10b274e0929de9d608027e0d34ff8b"},
{file = "contourpy-1.3.3-cp312-cp312-win_arm64.whl", hash = "sha256:07ce5ed73ecdc4a03ffe3e1b3e3c1166db35ae7584be76f65dbbe28a7791b0cc"},
{file = "contourpy-1.3.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:177fb367556747a686509d6fef71d221a4b198a3905fe824430e5ea0fda54eb5"},
{file = "contourpy-1.3.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:d002b6f00d73d69333dac9d0b8d5e84d9724ff9ef044fd63c5986e62b7c9e1b1"},
{file = "contourpy-1.3.3-cp313-cp313-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:348ac1f5d4f1d66d3322420f01d42e43122f43616e0f194fc1c9f5d830c5b286"},
{file = "contourpy-1.3.3-cp313-cp313-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:655456777ff65c2c548b7c454af9c6f33f16c8884f11083244b5819cc214f1b5"},
{file = "contourpy-1.3.3-cp313-cp313-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:644a6853d15b2512d67881586bd03f462c7ab755db95f16f14d7e238f2852c67"},
{file = "contourpy-1.3.3-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4debd64f124ca62069f313a9cb86656ff087786016d76927ae2cf37846b006c9"},
{file = "contourpy-1.3.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a15459b0f4615b00bbd1e91f1b9e19b7e63aea7483d03d804186f278c0af2659"},
{file = "contourpy-1.3.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:ca0fdcd73925568ca027e0b17ab07aad764be4706d0a925b89227e447d9737b7"},
{file = "contourpy-1.3.3-cp313-cp313-win32.whl", hash = "sha256:b20c7c9a3bf701366556e1b1984ed2d0cedf999903c51311417cf5f591d8c78d"},
{file = "contourpy-1.3.3-cp313-cp313-win_amd64.whl", hash = "sha256:1cadd8b8969f060ba45ed7c1b714fe69185812ab43bd6b86a9123fe8f99c3263"},
{file = "contourpy-1.3.3-cp313-cp313-win_arm64.whl", hash = "sha256:fd914713266421b7536de2bfa8181aa8c699432b6763a0ea64195ebe28bff6a9"},
{file = "contourpy-1.3.3-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:88df9880d507169449d434c293467418b9f6cbe82edd19284aa0409e7fdb933d"},
{file = "contourpy-1.3.3-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:d06bb1f751ba5d417047db62bca3c8fde202b8c11fb50742ab3ab962c81e8216"},
{file = "contourpy-1.3.3-cp313-cp313t-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e4e6b05a45525357e382909a4c1600444e2a45b4795163d3b22669285591c1ae"},
{file = "contourpy-1.3.3-cp313-cp313t-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:ab3074b48c4e2cf1a960e6bbeb7f04566bf36b1861d5c9d4d8ac04b82e38ba20"},
{file = "contourpy-1.3.3-cp313-cp313t-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:6c3d53c796f8647d6deb1abe867daeb66dcc8a97e8455efa729516b997b8ed99"},
{file = "contourpy-1.3.3-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:50ed930df7289ff2a8d7afeb9603f8289e5704755c7e5c3bbd929c90c817164b"},
{file = "contourpy-1.3.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:4feffb6537d64b84877da813a5c30f1422ea5739566abf0bd18065ac040e120a"},
{file = "contourpy-1.3.3-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:2b7e9480ffe2b0cd2e787e4df64270e3a0440d9db8dc823312e2c940c167df7e"},
{file = "contourpy-1.3.3-cp313-cp313t-win32.whl", hash = "sha256:283edd842a01e3dcd435b1c5116798d661378d83d36d337b8dde1d16a5fc9ba3"},
{file = "contourpy-1.3.3-cp313-cp313t-win_amd64.whl", hash = "sha256:87acf5963fc2b34825e5b6b048f40e3635dd547f590b04d2ab317c2619ef7ae8"},
{file = "contourpy-1.3.3-cp313-cp313t-win_arm64.whl", hash = "sha256:3c30273eb2a55024ff31ba7d052dde990d7d8e5450f4bbb6e913558b3d6c2301"},
{file = "contourpy-1.3.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:fde6c716d51c04b1c25d0b90364d0be954624a0ee9d60e23e850e8d48353d07a"},
{file = "contourpy-1.3.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:cbedb772ed74ff5be440fa8eee9bd49f64f6e3fc09436d9c7d8f1c287b121d77"},
{file = "contourpy-1.3.3-cp314-cp314-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:22e9b1bd7a9b1d652cd77388465dc358dafcd2e217d35552424aa4f996f524f5"},
{file = "contourpy-1.3.3-cp314-cp314-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:a22738912262aa3e254e4f3cb079a95a67132fc5a063890e224393596902f5a4"},
{file = "contourpy-1.3.3-cp314-cp314-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:afe5a512f31ee6bd7d0dda52ec9864c984ca3d66664444f2d72e0dc4eb832e36"},
{file = "contourpy-1.3.3-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:f64836de09927cba6f79dcd00fdd7d5329f3fccc633468507079c829ca4db4e3"},
{file = "contourpy-1.3.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:1fd43c3be4c8e5fd6e4f2baeae35ae18176cf2e5cced681cca908addf1cdd53b"},
{file = "contourpy-1.3.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:6afc576f7b33cf00996e5c1102dc2a8f7cc89e39c0b55df93a0b78c1bd992b36"},
{file = "contourpy-1.3.3-cp314-cp314-win32.whl", hash = "sha256:66c8a43a4f7b8df8b71ee1840e4211a3c8d93b214b213f590e18a1beca458f7d"},
{file = "contourpy-1.3.3-cp314-cp314-win_amd64.whl", hash = "sha256:cf9022ef053f2694e31d630feaacb21ea24224be1c3ad0520b13d844274614fd"},
{file = "contourpy-1.3.3-cp314-cp314-win_arm64.whl", hash = "sha256:95b181891b4c71de4bb404c6621e7e2390745f887f2a026b2d99e92c17892339"},
{file = "contourpy-1.3.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:33c82d0138c0a062380332c861387650c82e4cf1747aaa6938b9b6516762e772"},
{file = "contourpy-1.3.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:ea37e7b45949df430fe649e5de8351c423430046a2af20b1c1961cae3afcda77"},
{file = "contourpy-1.3.3-cp314-cp314t-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d304906ecc71672e9c89e87c4675dc5c2645e1f4269a5063b99b0bb29f232d13"},
{file = "contourpy-1.3.3-cp314-cp314t-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:ca658cd1a680a5c9ea96dc61cdbae1e85c8f25849843aa799dfd3cb370ad4fbe"},
{file = "contourpy-1.3.3-cp314-cp314t-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:ab2fd90904c503739a75b7c8c5c01160130ba67944a7b77bbf36ef8054576e7f"},
{file = "contourpy-1.3.3-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b7301b89040075c30e5768810bc96a8e8d78085b47d8be6e4c3f5a0b4ed478a0"},
{file = "contourpy-1.3.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:2a2a8b627d5cc6b7c41a4beff6c5ad5eb848c88255fda4a8745f7e901b32d8e4"},
{file = "contourpy-1.3.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:fd6ec6be509c787f1caf6b247f0b1ca598bef13f4ddeaa126b7658215529ba0f"},
{file = "contourpy-1.3.3-cp314-cp314t-win32.whl", hash = "sha256:e74a9a0f5e3fff48fb5a7f2fd2b9b70a3fe014a67522f79b7cca4c0c7e43c9ae"},
{file = "contourpy-1.3.3-cp314-cp314t-win_amd64.whl", hash = "sha256:13b68d6a62db8eafaebb8039218921399baf6e47bf85006fd8529f2a08ef33fc"},
{file = "contourpy-1.3.3-cp314-cp314t-win_arm64.whl", hash = "sha256:b7448cb5a725bb1e35ce88771b86fba35ef418952474492cf7c764059933ff8b"},
{file = "contourpy-1.3.3-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:cd5dfcaeb10f7b7f9dc8941717c6c2ade08f587be2226222c12b25f0483ed497"},
{file = "contourpy-1.3.3-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:0c1fc238306b35f246d61a1d416a627348b5cf0648648a031e14bb8705fcdfe8"},
{file = "contourpy-1.3.3-pp311-pypy311_pp73-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:70f9aad7de812d6541d29d2bbf8feb22ff7e1c299523db288004e3157ff4674e"},
{file = "contourpy-1.3.3-pp311-pypy311_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5ed3657edf08512fc3fe81b510e35c2012fbd3081d2e26160f27ca28affec989"},
{file = "contourpy-1.3.3-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:3d1a3799d62d45c18bafd41c5fa05120b96a28079f2393af559b843d1a966a77"},
{file = "contourpy-1.3.3.tar.gz", hash = "sha256:083e12155b210502d0bca491432bb04d56dc3432f95a979b429f2848c3dbe880"},
]
[package.dependencies]
numpy = ">=1.25"
[package.extras]
bokeh = ["bokeh", "selenium"]
docs = ["furo", "sphinx (>=7.2)", "sphinx-copybutton"]
mypy = ["bokeh", "contourpy[bokeh,docs]", "docutils-stubs", "mypy (==1.17.0)", "types-Pillow"]
test = ["Pillow", "contourpy[test-no-images]", "matplotlib"]
test-no-images = ["pytest", "pytest-cov", "pytest-rerunfailures", "pytest-xdist", "wurlitzer"]
[[package]]
name = "coverage"
version = "7.5.4"
@@ -1390,6 +1482,22 @@ ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==44.0.1)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]
[[package]]
name = "cycler"
version = "0.12.1"
description = "Composable style cycles"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "cycler-0.12.1-py3-none-any.whl", hash = "sha256:85cef7cff222d8644161529808465972e51340599459b8ac3ccbac5a854e0d30"},
{file = "cycler-0.12.1.tar.gz", hash = "sha256:88bb128f02ba341da8ef447245a9e138fae777f6a23943da4540077d3601eb1c"},
]
[package.extras]
docs = ["ipython", "matplotlib", "numpydoc", "sphinx"]
tests = ["pytest", "pytest-cov", "pytest-xdist"]
[[package]]
name = "dash"
version = "3.1.1"
@@ -2120,6 +2228,87 @@ werkzeug = ">=3.1.0"
async = ["asgiref (>=3.2)"]
dotenv = ["python-dotenv"]
[[package]]
name = "fonttools"
version = "4.60.1"
description = "Tools to manipulate font files"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "fonttools-4.60.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:9a52f254ce051e196b8fe2af4634c2d2f02c981756c6464dc192f1b6050b4e28"},
{file = "fonttools-4.60.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c7420a2696a44650120cdd269a5d2e56a477e2bfa9d95e86229059beb1c19e15"},
{file = "fonttools-4.60.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee0c0b3b35b34f782afc673d503167157094a16f442ace7c6c5e0ca80b08f50c"},
{file = "fonttools-4.60.1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:282dafa55f9659e8999110bd8ed422ebe1c8aecd0dc396550b038e6c9a08b8ea"},
{file = "fonttools-4.60.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:4ba4bd646e86de16160f0fb72e31c3b9b7d0721c3e5b26b9fa2fc931dfdb2652"},
{file = "fonttools-4.60.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:0b0835ed15dd5b40d726bb61c846a688f5b4ce2208ec68779bc81860adb5851a"},
{file = "fonttools-4.60.1-cp310-cp310-win32.whl", hash = "sha256:1525796c3ffe27bb6268ed2a1bb0dcf214d561dfaf04728abf01489eb5339dce"},
{file = "fonttools-4.60.1-cp310-cp310-win_amd64.whl", hash = "sha256:268ecda8ca6cb5c4f044b1fb9b3b376e8cd1b361cef275082429dc4174907038"},
{file = "fonttools-4.60.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:7b4c32e232a71f63a5d00259ca3d88345ce2a43295bb049d21061f338124246f"},
{file = "fonttools-4.60.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3630e86c484263eaac71d117085d509cbcf7b18f677906824e4bace598fb70d2"},
{file = "fonttools-4.60.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5c1015318e4fec75dd4943ad5f6a206d9727adf97410d58b7e32ab644a807914"},
{file = "fonttools-4.60.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e6c58beb17380f7c2ea181ea11e7db8c0ceb474c9dd45f48e71e2cb577d146a1"},
{file = "fonttools-4.60.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:ec3681a0cb34c255d76dd9d865a55f260164adb9fa02628415cdc2d43ee2c05d"},
{file = "fonttools-4.60.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f4b5c37a5f40e4d733d3bbaaef082149bee5a5ea3156a785ff64d949bd1353fa"},
{file = "fonttools-4.60.1-cp311-cp311-win32.whl", hash = "sha256:398447f3d8c0c786cbf1209711e79080a40761eb44b27cdafffb48f52bcec258"},
{file = "fonttools-4.60.1-cp311-cp311-win_amd64.whl", hash = "sha256:d066ea419f719ed87bc2c99a4a4bfd77c2e5949cb724588b9dd58f3fd90b92bf"},
{file = "fonttools-4.60.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:7b0c6d57ab00dae9529f3faf187f2254ea0aa1e04215cf2f1a8ec277c96661bc"},
{file = "fonttools-4.60.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:839565cbf14645952d933853e8ade66a463684ed6ed6c9345d0faf1f0e868877"},
{file = "fonttools-4.60.1-cp312-cp312-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8177ec9676ea6e1793c8a084a90b65a9f778771998eb919d05db6d4b1c0b114c"},
{file = "fonttools-4.60.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:996a4d1834524adbb423385d5a629b868ef9d774670856c63c9a0408a3063401"},
{file = "fonttools-4.60.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a46b2f450bc79e06ef3b6394f0c68660529ed51692606ad7f953fc2e448bc903"},
{file = "fonttools-4.60.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:6ec722ee589e89a89f5b7574f5c45604030aa6ae24cb2c751e2707193b466fed"},
{file = "fonttools-4.60.1-cp312-cp312-win32.whl", hash = "sha256:b2cf105cee600d2de04ca3cfa1f74f1127f8455b71dbad02b9da6ec266e116d6"},
{file = "fonttools-4.60.1-cp312-cp312-win_amd64.whl", hash = "sha256:992775c9fbe2cf794786fa0ffca7f09f564ba3499b8fe9f2f80bd7197db60383"},
{file = "fonttools-4.60.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:6f68576bb4bbf6060c7ab047b1574a1ebe5c50a17de62830079967b211059ebb"},
{file = "fonttools-4.60.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:eedacb5c5d22b7097482fa834bda0dafa3d914a4e829ec83cdea2a01f8c813c4"},
{file = "fonttools-4.60.1-cp313-cp313-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:b33a7884fabd72bdf5f910d0cf46be50dce86a0362a65cfc746a4168c67eb96c"},
{file = "fonttools-4.60.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2409d5fb7b55fd70f715e6d34e7a6e4f7511b8ad29a49d6df225ee76da76dd77"},
{file = "fonttools-4.60.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:c8651e0d4b3bdeda6602b85fdc2abbefc1b41e573ecb37b6779c4ca50753a199"},
{file = "fonttools-4.60.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:145daa14bf24824b677b9357c5e44fd8895c2a8f53596e1b9ea3496081dc692c"},
{file = "fonttools-4.60.1-cp313-cp313-win32.whl", hash = "sha256:2299df884c11162617a66b7c316957d74a18e3758c0274762d2cc87df7bc0272"},
{file = "fonttools-4.60.1-cp313-cp313-win_amd64.whl", hash = "sha256:a3db56f153bd4c5c2b619ab02c5db5192e222150ce5a1bc10f16164714bc39ac"},
{file = "fonttools-4.60.1-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:a884aef09d45ba1206712c7dbda5829562d3fea7726935d3289d343232ecb0d3"},
{file = "fonttools-4.60.1-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8a44788d9d91df72d1a5eac49b31aeb887a5f4aab761b4cffc4196c74907ea85"},
{file = "fonttools-4.60.1-cp314-cp314-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:e852d9dda9f93ad3651ae1e3bb770eac544ec93c3807888798eccddf84596537"},
{file = "fonttools-4.60.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:154cb6ee417e417bf5f7c42fe25858c9140c26f647c7347c06f0cc2d47eff003"},
{file = "fonttools-4.60.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:5664fd1a9ea7f244487ac8f10340c4e37664675e8667d6fee420766e0fb3cf08"},
{file = "fonttools-4.60.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:583b7f8e3c49486e4d489ad1deacfb8d5be54a8ef34d6df824f6a171f8511d99"},
{file = "fonttools-4.60.1-cp314-cp314-win32.whl", hash = "sha256:66929e2ea2810c6533a5184f938502cfdaea4bc3efb7130d8cc02e1c1b4108d6"},
{file = "fonttools-4.60.1-cp314-cp314-win_amd64.whl", hash = "sha256:f3d5be054c461d6a2268831f04091dc82753176f6ea06dc6047a5e168265a987"},
{file = "fonttools-4.60.1-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:b6379e7546ba4ae4b18f8ae2b9bc5960936007a1c0e30b342f662577e8bc3299"},
{file = "fonttools-4.60.1-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9d0ced62b59e0430b3690dbc5373df1c2aa7585e9a8ce38eff87f0fd993c5b01"},
{file = "fonttools-4.60.1-cp314-cp314t-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:875cb7764708b3132637f6c5fb385b16eeba0f7ac9fa45a69d35e09b47045801"},
{file = "fonttools-4.60.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a184b2ea57b13680ab6d5fbde99ccef152c95c06746cb7718c583abd8f945ccc"},
{file = "fonttools-4.60.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:026290e4ec76583881763fac284aca67365e0be9f13a7fb137257096114cb3bc"},
{file = "fonttools-4.60.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:f0e8817c7d1a0c2eedebf57ef9a9896f3ea23324769a9a2061a80fe8852705ed"},
{file = "fonttools-4.60.1-cp314-cp314t-win32.whl", hash = "sha256:1410155d0e764a4615774e5c2c6fc516259fe3eca5882f034eb9bfdbee056259"},
{file = "fonttools-4.60.1-cp314-cp314t-win_amd64.whl", hash = "sha256:022beaea4b73a70295b688f817ddc24ed3e3418b5036ffcd5658141184ef0d0c"},
{file = "fonttools-4.60.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:122e1a8ada290423c493491d002f622b1992b1ab0b488c68e31c413390dc7eb2"},
{file = "fonttools-4.60.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a140761c4ff63d0cb9256ac752f230460ee225ccef4ad8f68affc723c88e2036"},
{file = "fonttools-4.60.1-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0eae96373e4b7c9e45d099d7a523444e3554360927225c1cdae221a58a45b856"},
{file = "fonttools-4.60.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:596ecaca36367027d525b3b426d8a8208169d09edcf8c7506aceb3a38bfb55c7"},
{file = "fonttools-4.60.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:2ee06fc57512144d8b0445194c2da9f190f61ad51e230f14836286470c99f854"},
{file = "fonttools-4.60.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:b42d86938e8dda1cd9a1a87a6d82f1818eaf933348429653559a458d027446da"},
{file = "fonttools-4.60.1-cp39-cp39-win32.whl", hash = "sha256:8b4eb332f9501cb1cd3d4d099374a1e1306783ff95489a1026bde9eb02ccc34a"},
{file = "fonttools-4.60.1-cp39-cp39-win_amd64.whl", hash = "sha256:7473a8ed9ed09aeaa191301244a5a9dbe46fe0bf54f9d6cd21d83044c3321217"},
{file = "fonttools-4.60.1-py3-none-any.whl", hash = "sha256:906306ac7afe2156fcf0042173d6ebbb05416af70f6b370967b47f8f00103bbb"},
{file = "fonttools-4.60.1.tar.gz", hash = "sha256:ef00af0439ebfee806b25f24c8f92109157ff3fac5731dc7867957812e87b8d9"},
]
[package.extras]
all = ["brotli (>=1.0.1) ; platform_python_implementation == \"CPython\"", "brotlicffi (>=0.8.0) ; platform_python_implementation != \"CPython\"", "lxml (>=4.0)", "lz4 (>=1.7.4.2)", "matplotlib", "munkres ; platform_python_implementation == \"PyPy\"", "pycairo", "scipy ; platform_python_implementation != \"PyPy\"", "skia-pathops (>=0.5.0)", "sympy", "uharfbuzz (>=0.23.0)", "unicodedata2 (>=15.1.0) ; python_version <= \"3.12\"", "xattr ; sys_platform == \"darwin\"", "zopfli (>=0.1.4)"]
graphite = ["lz4 (>=1.7.4.2)"]
interpolatable = ["munkres ; platform_python_implementation == \"PyPy\"", "pycairo", "scipy ; platform_python_implementation != \"PyPy\""]
lxml = ["lxml (>=4.0)"]
pathops = ["skia-pathops (>=0.5.0)"]
plot = ["matplotlib"]
repacker = ["uharfbuzz (>=0.23.0)"]
symfont = ["sympy"]
type1 = ["xattr ; sys_platform == \"darwin\""]
unicode = ["unicodedata2 (>=15.1.0) ; python_version <= \"3.12\""]
woff = ["brotli (>=1.0.1) ; platform_python_implementation == \"CPython\"", "brotlicffi (>=0.8.0) ; platform_python_implementation != \"CPython\"", "zopfli (>=0.1.4)"]
[[package]]
name = "freezegun"
version = "1.5.1"
@@ -2787,6 +2976,117 @@ files = [
[package.dependencies]
referencing = ">=0.31.0"
[[package]]
name = "kiwisolver"
version = "1.4.9"
description = "A fast implementation of the Cassowary constraint solver"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "kiwisolver-1.4.9-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:b4b4d74bda2b8ebf4da5bd42af11d02d04428b2c32846e4c2c93219df8a7987b"},
{file = "kiwisolver-1.4.9-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:fb3b8132019ea572f4611d770991000d7f58127560c4889729248eb5852a102f"},
{file = "kiwisolver-1.4.9-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:84fd60810829c27ae375114cd379da1fa65e6918e1da405f356a775d49a62bcf"},
{file = "kiwisolver-1.4.9-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b78efa4c6e804ecdf727e580dbb9cba85624d2e1c6b5cb059c66290063bd99a9"},
{file = "kiwisolver-1.4.9-cp310-cp310-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d4efec7bcf21671db6a3294ff301d2fc861c31faa3c8740d1a94689234d1b415"},
{file = "kiwisolver-1.4.9-cp310-cp310-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:90f47e70293fc3688b71271100a1a5453aa9944a81d27ff779c108372cf5567b"},
{file = "kiwisolver-1.4.9-cp310-cp310-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:8fdca1def57a2e88ef339de1737a1449d6dbf5fab184c54a1fca01d541317154"},
{file = "kiwisolver-1.4.9-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:9cf554f21be770f5111a1690d42313e140355e687e05cf82cb23d0a721a64a48"},
{file = "kiwisolver-1.4.9-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:fc1795ac5cd0510207482c3d1d3ed781143383b8cfd36f5c645f3897ce066220"},
{file = "kiwisolver-1.4.9-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:ccd09f20ccdbbd341b21a67ab50a119b64a403b09288c27481575105283c1586"},
{file = "kiwisolver-1.4.9-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:540c7c72324d864406a009d72f5d6856f49693db95d1fbb46cf86febef873634"},
{file = "kiwisolver-1.4.9-cp310-cp310-win_amd64.whl", hash = "sha256:ede8c6d533bc6601a47ad4046080d36b8fc99f81e6f1c17b0ac3c2dc91ac7611"},
{file = "kiwisolver-1.4.9-cp310-cp310-win_arm64.whl", hash = "sha256:7b4da0d01ac866a57dd61ac258c5607b4cd677f63abaec7b148354d2b2cdd536"},
{file = "kiwisolver-1.4.9-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:eb14a5da6dc7642b0f3a18f13654847cd8b7a2550e2645a5bda677862b03ba16"},
{file = "kiwisolver-1.4.9-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:39a219e1c81ae3b103643d2aedb90f1ef22650deb266ff12a19e7773f3e5f089"},
{file = "kiwisolver-1.4.9-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2405a7d98604b87f3fc28b1716783534b1b4b8510d8142adca34ee0bc3c87543"},
{file = "kiwisolver-1.4.9-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:dc1ae486f9abcef254b5618dfb4113dd49f94c68e3e027d03cf0143f3f772b61"},
{file = "kiwisolver-1.4.9-cp311-cp311-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8a1f570ce4d62d718dce3f179ee78dac3b545ac16c0c04bb363b7607a949c0d1"},
{file = "kiwisolver-1.4.9-cp311-cp311-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:cb27e7b78d716c591e88e0a09a2139c6577865d7f2e152488c2cc6257f460872"},
{file = "kiwisolver-1.4.9-cp311-cp311-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:15163165efc2f627eb9687ea5f3a28137217d217ac4024893d753f46bce9de26"},
{file = "kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:bdee92c56a71d2b24c33a7d4c2856bd6419d017e08caa7802d2963870e315028"},
{file = "kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:412f287c55a6f54b0650bd9b6dce5aceddb95864a1a90c87af16979d37c89771"},
{file = "kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:2c93f00dcba2eea70af2be5f11a830a742fe6b579a1d4e00f47760ef13be247a"},
{file = "kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f117e1a089d9411663a3207ba874f31be9ac8eaa5b533787024dc07aeb74f464"},
{file = "kiwisolver-1.4.9-cp311-cp311-win_amd64.whl", hash = "sha256:be6a04e6c79819c9a8c2373317d19a96048e5a3f90bec587787e86a1153883c2"},
{file = "kiwisolver-1.4.9-cp311-cp311-win_arm64.whl", hash = "sha256:0ae37737256ba2de764ddc12aed4956460277f00c4996d51a197e72f62f5eec7"},
{file = "kiwisolver-1.4.9-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:ac5a486ac389dddcc5bef4f365b6ae3ffff2c433324fb38dd35e3fab7c957999"},
{file = "kiwisolver-1.4.9-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f2ba92255faa7309d06fe44c3a4a97efe1c8d640c2a79a5ef728b685762a6fd2"},
{file = "kiwisolver-1.4.9-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:4a2899935e724dd1074cb568ce7ac0dce28b2cd6ab539c8e001a8578eb106d14"},
{file = "kiwisolver-1.4.9-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f6008a4919fdbc0b0097089f67a1eb55d950ed7e90ce2cc3e640abadd2757a04"},
{file = "kiwisolver-1.4.9-cp312-cp312-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:67bb8b474b4181770f926f7b7d2f8c0248cbcb78b660fdd41a47054b28d2a752"},
{file = "kiwisolver-1.4.9-cp312-cp312-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:2327a4a30d3ee07d2fbe2e7933e8a37c591663b96ce42a00bc67461a87d7df77"},
{file = "kiwisolver-1.4.9-cp312-cp312-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:7a08b491ec91b1d5053ac177afe5290adacf1f0f6307d771ccac5de30592d198"},
{file = "kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d8fc5c867c22b828001b6a38d2eaeb88160bf5783c6cb4a5e440efc981ce286d"},
{file = "kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:3b3115b2581ea35bb6d1f24a4c90af37e5d9b49dcff267eeed14c3893c5b86ab"},
{file = "kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:858e4c22fb075920b96a291928cb7dea5644e94c0ee4fcd5af7e865655e4ccf2"},
{file = "kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:ed0fecd28cc62c54b262e3736f8bb2512d8dcfdc2bcf08be5f47f96bf405b145"},
{file = "kiwisolver-1.4.9-cp312-cp312-win_amd64.whl", hash = "sha256:f68208a520c3d86ea51acf688a3e3002615a7f0238002cccc17affecc86a8a54"},
{file = "kiwisolver-1.4.9-cp312-cp312-win_arm64.whl", hash = "sha256:2c1a4f57df73965f3f14df20b80ee29e6a7930a57d2d9e8491a25f676e197c60"},
{file = "kiwisolver-1.4.9-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:a5d0432ccf1c7ab14f9949eec60c5d1f924f17c037e9f8b33352fa05799359b8"},
{file = "kiwisolver-1.4.9-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:efb3a45b35622bb6c16dbfab491a8f5a391fe0e9d45ef32f4df85658232ca0e2"},
{file = "kiwisolver-1.4.9-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:1a12cf6398e8a0a001a059747a1cbf24705e18fe413bc22de7b3d15c67cffe3f"},
{file = "kiwisolver-1.4.9-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:b67e6efbf68e077dd71d1a6b37e43e1a99d0bff1a3d51867d45ee8908b931098"},
{file = "kiwisolver-1.4.9-cp313-cp313-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5656aa670507437af0207645273ccdfee4f14bacd7f7c67a4306d0dcaeaf6eed"},
{file = "kiwisolver-1.4.9-cp313-cp313-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:bfc08add558155345129c7803b3671cf195e6a56e7a12f3dde7c57d9b417f525"},
{file = "kiwisolver-1.4.9-cp313-cp313-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:40092754720b174e6ccf9e845d0d8c7d8e12c3d71e7fc35f55f3813e96376f78"},
{file = "kiwisolver-1.4.9-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:497d05f29a1300d14e02e6441cf0f5ee81c1ff5a304b0d9fb77423974684e08b"},
{file = "kiwisolver-1.4.9-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:bdd1a81a1860476eb41ac4bc1e07b3f07259e6d55bbf739b79c8aaedcf512799"},
{file = "kiwisolver-1.4.9-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:e6b93f13371d341afee3be9f7c5964e3fe61d5fa30f6a30eb49856935dfe4fc3"},
{file = "kiwisolver-1.4.9-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:d75aa530ccfaa593da12834b86a0724f58bff12706659baa9227c2ccaa06264c"},
{file = "kiwisolver-1.4.9-cp313-cp313-win_amd64.whl", hash = "sha256:dd0a578400839256df88c16abddf9ba14813ec5f21362e1fe65022e00c883d4d"},
{file = "kiwisolver-1.4.9-cp313-cp313-win_arm64.whl", hash = "sha256:d4188e73af84ca82468f09cadc5ac4db578109e52acb4518d8154698d3a87ca2"},
{file = "kiwisolver-1.4.9-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:5a0f2724dfd4e3b3ac5a82436a8e6fd16baa7d507117e4279b660fe8ca38a3a1"},
{file = "kiwisolver-1.4.9-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:1b11d6a633e4ed84fc0ddafd4ebfd8ea49b3f25082c04ad12b8315c11d504dc1"},
{file = "kiwisolver-1.4.9-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:61874cdb0a36016354853593cffc38e56fc9ca5aa97d2c05d3dcf6922cd55a11"},
{file = "kiwisolver-1.4.9-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:60c439763a969a6af93b4881db0eed8fadf93ee98e18cbc35bc8da868d0c4f0c"},
{file = "kiwisolver-1.4.9-cp313-cp313t-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:92a2f997387a1b79a75e7803aa7ded2cfbe2823852ccf1ba3bcf613b62ae3197"},
{file = "kiwisolver-1.4.9-cp313-cp313t-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:a31d512c812daea6d8b3be3b2bfcbeb091dbb09177706569bcfc6240dcf8b41c"},
{file = "kiwisolver-1.4.9-cp313-cp313t-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:52a15b0f35dad39862d376df10c5230155243a2c1a436e39eb55623ccbd68185"},
{file = "kiwisolver-1.4.9-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:a30fd6fdef1430fd9e1ba7b3398b5ee4e2887783917a687d86ba69985fb08748"},
{file = "kiwisolver-1.4.9-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:cc9617b46837c6468197b5945e196ee9ca43057bb7d9d1ae688101e4e1dddf64"},
{file = "kiwisolver-1.4.9-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:0ab74e19f6a2b027ea4f845a78827969af45ce790e6cb3e1ebab71bdf9f215ff"},
{file = "kiwisolver-1.4.9-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:dba5ee5d3981160c28d5490f0d1b7ed730c22470ff7f6cc26cfcfaacb9896a07"},
{file = "kiwisolver-1.4.9-cp313-cp313t-win_arm64.whl", hash = "sha256:0749fd8f4218ad2e851e11cc4dc05c7cbc0cbc4267bdfdb31782e65aace4ee9c"},
{file = "kiwisolver-1.4.9-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:9928fe1eb816d11ae170885a74d074f57af3a0d65777ca47e9aeb854a1fba386"},
{file = "kiwisolver-1.4.9-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:d0005b053977e7b43388ddec89fa567f43d4f6d5c2c0affe57de5ebf290dc552"},
{file = "kiwisolver-1.4.9-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:2635d352d67458b66fd0667c14cb1d4145e9560d503219034a18a87e971ce4f3"},
{file = "kiwisolver-1.4.9-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:767c23ad1c58c9e827b649a9ab7809fd5fd9db266a9cf02b0e926ddc2c680d58"},
{file = "kiwisolver-1.4.9-cp314-cp314-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:72d0eb9fba308b8311685c2268cf7d0a0639a6cd027d8128659f72bdd8a024b4"},
{file = "kiwisolver-1.4.9-cp314-cp314-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f68e4f3eeca8fb22cc3d731f9715a13b652795ef657a13df1ad0c7dc0e9731df"},
{file = "kiwisolver-1.4.9-cp314-cp314-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d84cd4061ae292d8ac367b2c3fa3aad11cb8625a95d135fe93f286f914f3f5a6"},
{file = "kiwisolver-1.4.9-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:a60ea74330b91bd22a29638940d115df9dc00af5035a9a2a6ad9399ffb4ceca5"},
{file = "kiwisolver-1.4.9-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:ce6a3a4e106cf35c2d9c4fa17c05ce0b180db622736845d4315519397a77beaf"},
{file = "kiwisolver-1.4.9-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:77937e5e2a38a7b48eef0585114fe7930346993a88060d0bf886086d2aa49ef5"},
{file = "kiwisolver-1.4.9-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:24c175051354f4a28c5d6a31c93906dc653e2bf234e8a4bbfb964892078898ce"},
{file = "kiwisolver-1.4.9-cp314-cp314-win_amd64.whl", hash = "sha256:0763515d4df10edf6d06a3c19734e2566368980d21ebec439f33f9eb936c07b7"},
{file = "kiwisolver-1.4.9-cp314-cp314-win_arm64.whl", hash = "sha256:0e4e2bf29574a6a7b7f6cb5fa69293b9f96c928949ac4a53ba3f525dffb87f9c"},
{file = "kiwisolver-1.4.9-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:d976bbb382b202f71c67f77b0ac11244021cfa3f7dfd9e562eefcea2df711548"},
{file = "kiwisolver-1.4.9-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:2489e4e5d7ef9a1c300a5e0196e43d9c739f066ef23270607d45aba368b91f2d"},
{file = "kiwisolver-1.4.9-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:e2ea9f7ab7fbf18fffb1b5434ce7c69a07582f7acc7717720f1d69f3e806f90c"},
{file = "kiwisolver-1.4.9-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:b34e51affded8faee0dfdb705416153819d8ea9250bbbf7ea1b249bdeb5f1122"},
{file = "kiwisolver-1.4.9-cp314-cp314t-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d8aacd3d4b33b772542b2e01beb50187536967b514b00003bdda7589722d2a64"},
{file = "kiwisolver-1.4.9-cp314-cp314t-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:7cf974dd4e35fa315563ac99d6287a1024e4dc2077b8a7d7cd3d2fb65d283134"},
{file = "kiwisolver-1.4.9-cp314-cp314t-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:85bd218b5ecfbee8c8a82e121802dcb519a86044c9c3b2e4aef02fa05c6da370"},
{file = "kiwisolver-1.4.9-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:0856e241c2d3df4efef7c04a1e46b1936b6120c9bcf36dd216e3acd84bc4fb21"},
{file = "kiwisolver-1.4.9-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = "sha256:9af39d6551f97d31a4deebeac6f45b156f9755ddc59c07b402c148f5dbb6482a"},
{file = "kiwisolver-1.4.9-cp314-cp314t-musllinux_1_2_s390x.whl", hash = "sha256:bb4ae2b57fc1d8cbd1cf7b1d9913803681ffa903e7488012be5b76dedf49297f"},
{file = "kiwisolver-1.4.9-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:aedff62918805fb62d43a4aa2ecd4482c380dc76cd31bd7c8878588a61bd0369"},
{file = "kiwisolver-1.4.9-cp314-cp314t-win_amd64.whl", hash = "sha256:1fa333e8b2ce4d9660f2cda9c0e1b6bafcfb2457a9d259faa82289e73ec24891"},
{file = "kiwisolver-1.4.9-cp314-cp314t-win_arm64.whl", hash = "sha256:4a48a2ce79d65d363597ef7b567ce3d14d68783d2b2263d98db3d9477805ba32"},
{file = "kiwisolver-1.4.9-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:4d1d9e582ad4d63062d34077a9a1e9f3c34088a2ec5135b1f7190c07cf366527"},
{file = "kiwisolver-1.4.9-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:deed0c7258ceb4c44ad5ec7d9918f9f14fd05b2be86378d86cf50e63d1e7b771"},
{file = "kiwisolver-1.4.9-pp310-pypy310_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:0a590506f303f512dff6b7f75fd2fd18e16943efee932008fe7140e5fa91d80e"},
{file = "kiwisolver-1.4.9-pp310-pypy310_pp73-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e09c2279a4d01f099f52d5c4b3d9e208e91edcbd1a175c9662a8b16e000fece9"},
{file = "kiwisolver-1.4.9-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:c9e7cdf45d594ee04d5be1b24dd9d49f3d1590959b2271fb30b5ca2b262c00fb"},
{file = "kiwisolver-1.4.9-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:720e05574713db64c356e86732c0f3c5252818d05f9df320f0ad8380641acea5"},
{file = "kiwisolver-1.4.9-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:17680d737d5335b552994a2008fab4c851bcd7de33094a82067ef3a576ff02fa"},
{file = "kiwisolver-1.4.9-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:85b5352f94e490c028926ea567fc569c52ec79ce131dadb968d3853e809518c2"},
{file = "kiwisolver-1.4.9-pp311-pypy311_pp73-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:464415881e4801295659462c49461a24fb107c140de781d55518c4b80cb6790f"},
{file = "kiwisolver-1.4.9-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:fb940820c63a9590d31d88b815e7a3aa5915cad3ce735ab45f0c730b39547de1"},
{file = "kiwisolver-1.4.9.tar.gz", hash = "sha256:c3b22c26c6fd6811b0ae8363b95ca8ce4ea3c202d3d0975b2914310ceb1bcc4d"},
]
[[package]]
name = "kombu"
version = "5.5.4"
@@ -3137,6 +3437,85 @@ dev = ["marshmallow[tests]", "pre-commit (>=3.5,<5.0)", "tox"]
docs = ["autodocsumm (==0.2.14)", "furo (==2024.8.6)", "sphinx (==8.1.3)", "sphinx-copybutton (==0.5.2)", "sphinx-issues (==5.0.0)", "sphinxext-opengraph (==0.9.1)"]
tests = ["pytest", "simplejson"]
[[package]]
name = "matplotlib"
version = "3.10.6"
description = "Python plotting package"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "matplotlib-3.10.6-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:bc7316c306d97463a9866b89d5cc217824e799fa0de346c8f68f4f3d27c8693d"},
{file = "matplotlib-3.10.6-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d00932b0d160ef03f59f9c0e16d1e3ac89646f7785165ce6ad40c842db16cc2e"},
{file = "matplotlib-3.10.6-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:8fa4c43d6bfdbfec09c733bca8667de11bfa4970e8324c471f3a3632a0301c15"},
{file = "matplotlib-3.10.6-cp310-cp310-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ea117a9c1627acaa04dbf36265691921b999cbf515a015298e54e1a12c3af837"},
{file = "matplotlib-3.10.6-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:08fc803293b4e1694ee325896030de97f74c141ccff0be886bb5915269247676"},
{file = "matplotlib-3.10.6-cp310-cp310-win_amd64.whl", hash = "sha256:2adf92d9b7527fbfb8818e050260f0ebaa460f79d61546374ce73506c9421d09"},
{file = "matplotlib-3.10.6-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:905b60d1cb0ee604ce65b297b61cf8be9f4e6cfecf95a3fe1c388b5266bc8f4f"},
{file = "matplotlib-3.10.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7bac38d816637343e53d7185d0c66677ff30ffb131044a81898b5792c956ba76"},
{file = "matplotlib-3.10.6-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:942a8de2b5bfff1de31d95722f702e2966b8a7e31f4e68f7cd963c7cd8861cf6"},
{file = "matplotlib-3.10.6-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a3276c85370bc0dfca051ec65c5817d1e0f8f5ce1b7787528ec8ed2d524bbc2f"},
{file = "matplotlib-3.10.6-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9df5851b219225731f564e4b9e7f2ac1e13c9e6481f941b5631a0f8e2d9387ce"},
{file = "matplotlib-3.10.6-cp311-cp311-win_amd64.whl", hash = "sha256:abb5d9478625dd9c9eb51a06d39aae71eda749ae9b3138afb23eb38824026c7e"},
{file = "matplotlib-3.10.6-cp311-cp311-win_arm64.whl", hash = "sha256:886f989ccfae63659183173bb3fced7fd65e9eb793c3cc21c273add368536951"},
{file = "matplotlib-3.10.6-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:31ca662df6a80bd426f871105fdd69db7543e28e73a9f2afe80de7e531eb2347"},
{file = "matplotlib-3.10.6-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1678bb61d897bb4ac4757b5ecfb02bfb3fddf7f808000fb81e09c510712fda75"},
{file = "matplotlib-3.10.6-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:56cd2d20842f58c03d2d6e6c1f1cf5548ad6f66b91e1e48f814e4fb5abd1cb95"},
{file = "matplotlib-3.10.6-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:662df55604a2f9a45435566d6e2660e41efe83cd94f4288dfbf1e6d1eae4b0bb"},
{file = "matplotlib-3.10.6-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:08f141d55148cd1fc870c3387d70ca4df16dee10e909b3b038782bd4bda6ea07"},
{file = "matplotlib-3.10.6-cp312-cp312-win_amd64.whl", hash = "sha256:590f5925c2d650b5c9d813c5b3b5fc53f2929c3f8ef463e4ecfa7e052044fb2b"},
{file = "matplotlib-3.10.6-cp312-cp312-win_arm64.whl", hash = "sha256:f44c8d264a71609c79a78d50349e724f5d5fc3684ead7c2a473665ee63d868aa"},
{file = "matplotlib-3.10.6-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:819e409653c1106c8deaf62e6de6b8611449c2cd9939acb0d7d4e57a3d95cc7a"},
{file = "matplotlib-3.10.6-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:59c8ac8382fefb9cb71308dde16a7c487432f5255d8f1fd32473523abecfecdf"},
{file = "matplotlib-3.10.6-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:84e82d9e0fd70c70bc55739defbd8055c54300750cbacf4740c9673a24d6933a"},
{file = "matplotlib-3.10.6-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:25f7a3eb42d6c1c56e89eacd495661fc815ffc08d9da750bca766771c0fd9110"},
{file = "matplotlib-3.10.6-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:f9c862d91ec0b7842920a4cfdaaec29662195301914ea54c33e01f1a28d014b2"},
{file = "matplotlib-3.10.6-cp313-cp313-win_amd64.whl", hash = "sha256:1b53bd6337eba483e2e7d29c5ab10eee644bc3a2491ec67cc55f7b44583ffb18"},
{file = "matplotlib-3.10.6-cp313-cp313-win_arm64.whl", hash = "sha256:cbd5eb50b7058b2892ce45c2f4e92557f395c9991f5c886d1bb74a1582e70fd6"},
{file = "matplotlib-3.10.6-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:acc86dd6e0e695c095001a7fccff158c49e45e0758fdf5dcdbb0103318b59c9f"},
{file = "matplotlib-3.10.6-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:e228cd2ffb8f88b7d0b29e37f68ca9aaf83e33821f24a5ccc4f082dd8396bc27"},
{file = "matplotlib-3.10.6-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:658bc91894adeab669cf4bb4a186d049948262987e80f0857216387d7435d833"},
{file = "matplotlib-3.10.6-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8913b7474f6dd83ac444c9459c91f7f0f2859e839f41d642691b104e0af056aa"},
{file = "matplotlib-3.10.6-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:091cea22e059b89f6d7d1a18e2c33a7376c26eee60e401d92a4d6726c4e12706"},
{file = "matplotlib-3.10.6-cp313-cp313t-win_amd64.whl", hash = "sha256:491e25e02a23d7207629d942c666924a6b61e007a48177fdd231a0097b7f507e"},
{file = "matplotlib-3.10.6-cp313-cp313t-win_arm64.whl", hash = "sha256:3d80d60d4e54cda462e2cd9a086d85cd9f20943ead92f575ce86885a43a565d5"},
{file = "matplotlib-3.10.6-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:70aaf890ce1d0efd482df969b28a5b30ea0b891224bb315810a3940f67182899"},
{file = "matplotlib-3.10.6-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1565aae810ab79cb72e402b22facfa6501365e73ebab70a0fdfb98488d2c3c0c"},
{file = "matplotlib-3.10.6-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f3b23315a01981689aa4e1a179dbf6ef9fbd17143c3eea77548c2ecfb0499438"},
{file = "matplotlib-3.10.6-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:30fdd37edf41a4e6785f9b37969de57aea770696cb637d9946eb37470c94a453"},
{file = "matplotlib-3.10.6-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:bc31e693da1c08012c764b053e702c1855378e04102238e6a5ee6a7117c53a47"},
{file = "matplotlib-3.10.6-cp314-cp314-win_amd64.whl", hash = "sha256:05be9bdaa8b242bc6ff96330d18c52f1fc59c6fb3a4dd411d953d67e7e1baf98"},
{file = "matplotlib-3.10.6-cp314-cp314-win_arm64.whl", hash = "sha256:f56a0d1ab05d34c628592435781d185cd99630bdfd76822cd686fb5a0aecd43a"},
{file = "matplotlib-3.10.6-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:94f0b4cacb23763b64b5dace50d5b7bfe98710fed5f0cef5c08135a03399d98b"},
{file = "matplotlib-3.10.6-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:cc332891306b9fb39462673d8225d1b824c89783fee82840a709f96714f17a5c"},
{file = "matplotlib-3.10.6-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee1d607b3fb1590deb04b69f02ea1d53ed0b0bf75b2b1a5745f269afcbd3cdd3"},
{file = "matplotlib-3.10.6-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:376a624a218116461696b27b2bbf7a8945053e6d799f6502fc03226d077807bf"},
{file = "matplotlib-3.10.6-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:83847b47f6524c34b4f2d3ce726bb0541c48c8e7692729865c3df75bfa0f495a"},
{file = "matplotlib-3.10.6-cp314-cp314t-win_amd64.whl", hash = "sha256:c7e0518e0d223683532a07f4b512e2e0729b62674f1b3a1a69869f98e6b1c7e3"},
{file = "matplotlib-3.10.6-cp314-cp314t-win_arm64.whl", hash = "sha256:4dd83e029f5b4801eeb87c64efd80e732452781c16a9cf7415b7b63ec8f374d7"},
{file = "matplotlib-3.10.6-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:13fcd07ccf17e354398358e0307a1f53f5325dca22982556ddb9c52837b5af41"},
{file = "matplotlib-3.10.6-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:470fc846d59d1406e34fa4c32ba371039cd12c2fe86801159a965956f2575bd1"},
{file = "matplotlib-3.10.6-pp310-pypy310_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f7173f8551b88f4ef810a94adae3128c2530e0d07529f7141be7f8d8c365f051"},
{file = "matplotlib-3.10.6-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:f2d684c3204fa62421bbf770ddfebc6b50130f9cad65531eeba19236d73bb488"},
{file = "matplotlib-3.10.6-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:6f4a69196e663a41d12a728fab8751177215357906436804217d6d9cf0d4d6cf"},
{file = "matplotlib-3.10.6-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:4d6ca6ef03dfd269f4ead566ec6f3fb9becf8dab146fb999022ed85ee9f6b3eb"},
{file = "matplotlib-3.10.6.tar.gz", hash = "sha256:ec01b645840dd1996df21ee37f208cd8ba57644779fa20464010638013d3203c"},
]
[package.dependencies]
contourpy = ">=1.0.1"
cycler = ">=0.10"
fonttools = ">=4.22.0"
kiwisolver = ">=1.3.1"
numpy = ">=1.23"
packaging = ">=20.0"
pillow = ">=8"
pyparsing = ">=2.3.1"
python-dateutil = ">=2.7"
[package.extras]
dev = ["meson-python (>=0.13.1,<0.17.0)", "pybind11 (>=2.13.2,!=2.13.3)", "setuptools (>=64)", "setuptools_scm (>=7)"]
[[package]]
name = "mccabe"
version = "0.7.0"
@@ -3857,6 +4236,131 @@ files = [
[package.dependencies]
setuptools = "*"
[[package]]
name = "pillow"
version = "11.3.0"
description = "Python Imaging Library (Fork)"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pillow-11.3.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1b9c17fd4ace828b3003dfd1e30bff24863e0eb59b535e8f80194d9cc7ecf860"},
{file = "pillow-11.3.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:65dc69160114cdd0ca0f35cb434633c75e8e7fad4cf855177a05bf38678f73ad"},
{file = "pillow-11.3.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7107195ddc914f656c7fc8e4a5e1c25f32e9236ea3ea860f257b0436011fddd0"},
{file = "pillow-11.3.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:cc3e831b563b3114baac7ec2ee86819eb03caa1a2cef0b481a5675b59c4fe23b"},
{file = "pillow-11.3.0-cp310-cp310-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f1f182ebd2303acf8c380a54f615ec883322593320a9b00438eb842c1f37ae50"},
{file = "pillow-11.3.0-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4445fa62e15936a028672fd48c4c11a66d641d2c05726c7ec1f8ba6a572036ae"},
{file = "pillow-11.3.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:71f511f6b3b91dd543282477be45a033e4845a40278fa8dcdbfdb07109bf18f9"},
{file = "pillow-11.3.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:040a5b691b0713e1f6cbe222e0f4f74cd233421e105850ae3b3c0ceda520f42e"},
{file = "pillow-11.3.0-cp310-cp310-win32.whl", hash = "sha256:89bd777bc6624fe4115e9fac3352c79ed60f3bb18651420635f26e643e3dd1f6"},
{file = "pillow-11.3.0-cp310-cp310-win_amd64.whl", hash = "sha256:19d2ff547c75b8e3ff46f4d9ef969a06c30ab2d4263a9e287733aa8b2429ce8f"},
{file = "pillow-11.3.0-cp310-cp310-win_arm64.whl", hash = "sha256:819931d25e57b513242859ce1876c58c59dc31587847bf74cfe06b2e0cb22d2f"},
{file = "pillow-11.3.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:1cd110edf822773368b396281a2293aeb91c90a2db00d78ea43e7e861631b722"},
{file = "pillow-11.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9c412fddd1b77a75aa904615ebaa6001f169b26fd467b4be93aded278266b288"},
{file = "pillow-11.3.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7d1aa4de119a0ecac0a34a9c8bde33f34022e2e8f99104e47a3ca392fd60e37d"},
{file = "pillow-11.3.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:91da1d88226663594e3f6b4b8c3c8d85bd504117d043740a8e0ec449087cc494"},
{file = "pillow-11.3.0-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:643f189248837533073c405ec2f0bb250ba54598cf80e8c1e043381a60632f58"},
{file = "pillow-11.3.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:106064daa23a745510dabce1d84f29137a37224831d88eb4ce94bb187b1d7e5f"},
{file = "pillow-11.3.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:cd8ff254faf15591e724dc7c4ddb6bf4793efcbe13802a4ae3e863cd300b493e"},
{file = "pillow-11.3.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:932c754c2d51ad2b2271fd01c3d121daaa35e27efae2a616f77bf164bc0b3e94"},
{file = "pillow-11.3.0-cp311-cp311-win32.whl", hash = "sha256:b4b8f3efc8d530a1544e5962bd6b403d5f7fe8b9e08227c6b255f98ad82b4ba0"},
{file = "pillow-11.3.0-cp311-cp311-win_amd64.whl", hash = "sha256:1a992e86b0dd7aeb1f053cd506508c0999d710a8f07b4c791c63843fc6a807ac"},
{file = "pillow-11.3.0-cp311-cp311-win_arm64.whl", hash = "sha256:30807c931ff7c095620fe04448e2c2fc673fcbb1ffe2a7da3fb39613489b1ddd"},
{file = "pillow-11.3.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:fdae223722da47b024b867c1ea0be64e0df702c5e0a60e27daad39bf960dd1e4"},
{file = "pillow-11.3.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:921bd305b10e82b4d1f5e802b6850677f965d8394203d182f078873851dada69"},
{file = "pillow-11.3.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:eb76541cba2f958032d79d143b98a3a6b3ea87f0959bbe256c0b5e416599fd5d"},
{file = "pillow-11.3.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:67172f2944ebba3d4a7b54f2e95c786a3a50c21b88456329314caaa28cda70f6"},
{file = "pillow-11.3.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:97f07ed9f56a3b9b5f49d3661dc9607484e85c67e27f3e8be2c7d28ca032fec7"},
{file = "pillow-11.3.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:676b2815362456b5b3216b4fd5bd89d362100dc6f4945154ff172e206a22c024"},
{file = "pillow-11.3.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3e184b2f26ff146363dd07bde8b711833d7b0202e27d13540bfe2e35a323a809"},
{file = "pillow-11.3.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:6be31e3fc9a621e071bc17bb7de63b85cbe0bfae91bb0363c893cbe67247780d"},
{file = "pillow-11.3.0-cp312-cp312-win32.whl", hash = "sha256:7b161756381f0918e05e7cb8a371fff367e807770f8fe92ecb20d905d0e1c149"},
{file = "pillow-11.3.0-cp312-cp312-win_amd64.whl", hash = "sha256:a6444696fce635783440b7f7a9fc24b3ad10a9ea3f0ab66c5905be1c19ccf17d"},
{file = "pillow-11.3.0-cp312-cp312-win_arm64.whl", hash = "sha256:2aceea54f957dd4448264f9bf40875da0415c83eb85f55069d89c0ed436e3542"},
{file = "pillow-11.3.0-cp313-cp313-ios_13_0_arm64_iphoneos.whl", hash = "sha256:1c627742b539bba4309df89171356fcb3cc5a9178355b2727d1b74a6cf155fbd"},
{file = "pillow-11.3.0-cp313-cp313-ios_13_0_arm64_iphonesimulator.whl", hash = "sha256:30b7c02f3899d10f13d7a48163c8969e4e653f8b43416d23d13d1bbfdc93b9f8"},
{file = "pillow-11.3.0-cp313-cp313-ios_13_0_x86_64_iphonesimulator.whl", hash = "sha256:7859a4cc7c9295f5838015d8cc0a9c215b77e43d07a25e460f35cf516df8626f"},
{file = "pillow-11.3.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ec1ee50470b0d050984394423d96325b744d55c701a439d2bd66089bff963d3c"},
{file = "pillow-11.3.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:7db51d222548ccfd274e4572fdbf3e810a5e66b00608862f947b163e613b67dd"},
{file = "pillow-11.3.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:2d6fcc902a24ac74495df63faad1884282239265c6839a0a6416d33faedfae7e"},
{file = "pillow-11.3.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f0f5d8f4a08090c6d6d578351a2b91acf519a54986c055af27e7a93feae6d3f1"},
{file = "pillow-11.3.0-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c37d8ba9411d6003bba9e518db0db0c58a680ab9fe5179f040b0463644bc9805"},
{file = "pillow-11.3.0-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:13f87d581e71d9189ab21fe0efb5a23e9f28552d5be6979e84001d3b8505abe8"},
{file = "pillow-11.3.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:023f6d2d11784a465f09fd09a34b150ea4672e85fb3d05931d89f373ab14abb2"},
{file = "pillow-11.3.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:45dfc51ac5975b938e9809451c51734124e73b04d0f0ac621649821a63852e7b"},
{file = "pillow-11.3.0-cp313-cp313-win32.whl", hash = "sha256:a4d336baed65d50d37b88ca5b60c0fa9d81e3a87d4a7930d3880d1624d5b31f3"},
{file = "pillow-11.3.0-cp313-cp313-win_amd64.whl", hash = "sha256:0bce5c4fd0921f99d2e858dc4d4d64193407e1b99478bc5cacecba2311abde51"},
{file = "pillow-11.3.0-cp313-cp313-win_arm64.whl", hash = "sha256:1904e1264881f682f02b7f8167935cce37bc97db457f8e7849dc3a6a52b99580"},
{file = "pillow-11.3.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:4c834a3921375c48ee6b9624061076bc0a32a60b5532b322cc0ea64e639dd50e"},
{file = "pillow-11.3.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:5e05688ccef30ea69b9317a9ead994b93975104a677a36a8ed8106be9260aa6d"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1019b04af07fc0163e2810167918cb5add8d74674b6267616021ab558dc98ced"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f944255db153ebb2b19c51fe85dd99ef0ce494123f21b9db4877ffdfc5590c7c"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1f85acb69adf2aaee8b7da124efebbdb959a104db34d3a2cb0f3793dbae422a8"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:05f6ecbeff5005399bb48d198f098a9b4b6bdf27b8487c7f38ca16eeb070cd59"},
{file = "pillow-11.3.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:a7bc6e6fd0395bc052f16b1a8670859964dbd7003bd0af2ff08342eb6e442cfe"},
{file = "pillow-11.3.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:83e1b0161c9d148125083a35c1c5a89db5b7054834fd4387499e06552035236c"},
{file = "pillow-11.3.0-cp313-cp313t-win32.whl", hash = "sha256:2a3117c06b8fb646639dce83694f2f9eac405472713fcb1ae887469c0d4f6788"},
{file = "pillow-11.3.0-cp313-cp313t-win_amd64.whl", hash = "sha256:857844335c95bea93fb39e0fa2726b4d9d758850b34075a7e3ff4f4fa3aa3b31"},
{file = "pillow-11.3.0-cp313-cp313t-win_arm64.whl", hash = "sha256:8797edc41f3e8536ae4b10897ee2f637235c94f27404cac7297f7b607dd0716e"},
{file = "pillow-11.3.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:d9da3df5f9ea2a89b81bb6087177fb1f4d1c7146d583a3fe5c672c0d94e55e12"},
{file = "pillow-11.3.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:0b275ff9b04df7b640c59ec5a3cb113eefd3795a8df80bac69646ef699c6981a"},
{file = "pillow-11.3.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0743841cabd3dba6a83f38a92672cccbd69af56e3e91777b0ee7f4dba4385632"},
{file = "pillow-11.3.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:2465a69cf967b8b49ee1b96d76718cd98c4e925414ead59fdf75cf0fd07df673"},
{file = "pillow-11.3.0-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:41742638139424703b4d01665b807c6468e23e699e8e90cffefe291c5832b027"},
{file = "pillow-11.3.0-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:93efb0b4de7e340d99057415c749175e24c8864302369e05914682ba642e5d77"},
{file = "pillow-11.3.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7966e38dcd0fa11ca390aed7c6f20454443581d758242023cf36fcb319b1a874"},
{file = "pillow-11.3.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:98a9afa7b9007c67ed84c57c9e0ad86a6000da96eaa638e4f8abe5b65ff83f0a"},
{file = "pillow-11.3.0-cp314-cp314-win32.whl", hash = "sha256:02a723e6bf909e7cea0dac1b0e0310be9d7650cd66222a5f1c571455c0a45214"},
{file = "pillow-11.3.0-cp314-cp314-win_amd64.whl", hash = "sha256:a418486160228f64dd9e9efcd132679b7a02a5f22c982c78b6fc7dab3fefb635"},
{file = "pillow-11.3.0-cp314-cp314-win_arm64.whl", hash = "sha256:155658efb5e044669c08896c0c44231c5e9abcaadbc5cd3648df2f7c0b96b9a6"},
{file = "pillow-11.3.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:59a03cdf019efbfeeed910bf79c7c93255c3d54bc45898ac2a4140071b02b4ae"},
{file = "pillow-11.3.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:f8a5827f84d973d8636e9dc5764af4f0cf2318d26744b3d902931701b0d46653"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ee92f2fd10f4adc4b43d07ec5e779932b4eb3dbfbc34790ada5a6669bc095aa6"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c96d333dcf42d01f47b37e0979b6bd73ec91eae18614864622d9b87bbd5bbf36"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4c96f993ab8c98460cd0c001447bff6194403e8b1d7e149ade5f00594918128b"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:41342b64afeba938edb034d122b2dda5db2139b9a4af999729ba8818e0056477"},
{file = "pillow-11.3.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:068d9c39a2d1b358eb9f245ce7ab1b5c3246c7c8c7d9ba58cfa5b43146c06e50"},
{file = "pillow-11.3.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:a1bc6ba083b145187f648b667e05a2534ecc4b9f2784c2cbe3089e44868f2b9b"},
{file = "pillow-11.3.0-cp314-cp314t-win32.whl", hash = "sha256:118ca10c0d60b06d006be10a501fd6bbdfef559251ed31b794668ed569c87e12"},
{file = "pillow-11.3.0-cp314-cp314t-win_amd64.whl", hash = "sha256:8924748b688aa210d79883357d102cd64690e56b923a186f35a82cbc10f997db"},
{file = "pillow-11.3.0-cp314-cp314t-win_arm64.whl", hash = "sha256:79ea0d14d3ebad43ec77ad5272e6ff9bba5b679ef73375ea760261207fa8e0aa"},
{file = "pillow-11.3.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:48d254f8a4c776de343051023eb61ffe818299eeac478da55227d96e241de53f"},
{file = "pillow-11.3.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7aee118e30a4cf54fdd873bd3a29de51e29105ab11f9aad8c32123f58c8f8081"},
{file = "pillow-11.3.0-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:23cff760a9049c502721bdb743a7cb3e03365fafcdfc2ef9784610714166e5a4"},
{file = "pillow-11.3.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:6359a3bc43f57d5b375d1ad54a0074318a0844d11b76abccf478c37c986d3cfc"},
{file = "pillow-11.3.0-cp39-cp39-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:092c80c76635f5ecb10f3f83d76716165c96f5229addbd1ec2bdbbda7d496e06"},
{file = "pillow-11.3.0-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:cadc9e0ea0a2431124cde7e1697106471fc4c1da01530e679b2391c37d3fbb3a"},
{file = "pillow-11.3.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:6a418691000f2a418c9135a7cf0d797c1bb7d9a485e61fe8e7722845b95ef978"},
{file = "pillow-11.3.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:97afb3a00b65cc0804d1c7abddbf090a81eaac02768af58cbdcaaa0a931e0b6d"},
{file = "pillow-11.3.0-cp39-cp39-win32.whl", hash = "sha256:ea944117a7974ae78059fcc1800e5d3295172bb97035c0c1d9345fca1419da71"},
{file = "pillow-11.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:e5c5858ad8ec655450a7c7df532e9842cf8df7cc349df7225c60d5d348c8aada"},
{file = "pillow-11.3.0-cp39-cp39-win_arm64.whl", hash = "sha256:6abdbfd3aea42be05702a8dd98832329c167ee84400a1d1f61ab11437f1717eb"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:3cee80663f29e3843b68199b9d6f4f54bd1d4a6b59bdd91bceefc51238bcb967"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:b5f56c3f344f2ccaf0dd875d3e180f631dc60a51b314295a3e681fe8cf851fbe"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:e67d793d180c9df62f1f40aee3accca4829d3794c95098887edc18af4b8b780c"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d000f46e2917c705e9fb93a3606ee4a819d1e3aa7a9b442f6444f07e77cf5e25"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:527b37216b6ac3a12d7838dc3bd75208ec57c1c6d11ef01902266a5a0c14fc27"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:be5463ac478b623b9dd3937afd7fb7ab3d79dd290a28e2b6df292dc75063eb8a"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:8dc70ca24c110503e16918a658b869019126ecfe03109b754c402daff12b3d9f"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7c8ec7a017ad1bd562f93dbd8505763e688d388cde6e4a010ae1486916e713e6"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:9ab6ae226de48019caa8074894544af5b53a117ccb9d3b3dcb2871464c829438"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:fe27fb049cdcca11f11a7bfda64043c37b30e6b91f10cb5bab275806c32f6ab3"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:465b9e8844e3c3519a983d58b80be3f668e2a7a5db97f2784e7079fbc9f9822c"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5418b53c0d59b3824d05e029669efa023bbef0f3e92e75ec8428f3799487f361"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:504b6f59505f08ae014f724b6207ff6222662aab5cc9542577fb084ed0676ac7"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:c84d689db21a1c397d001aa08241044aa2069e7587b398c8cc63020390b1c1b8"},
{file = "pillow-11.3.0.tar.gz", hash = "sha256:3828ee7586cd0b2091b6209e5ad53e20d0649bbe87164a459d0676e035e8f523"},
]
[package.extras]
docs = ["furo", "olefile", "sphinx (>=8.2)", "sphinx-autobuild", "sphinx-copybutton", "sphinx-inline-tabs", "sphinxext-opengraph"]
fpx = ["olefile"]
mic = ["olefile"]
test-arrow = ["pyarrow"]
tests = ["check-manifest", "coverage (>=7.4.2)", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout", "pytest-xdist", "trove-classifiers (>=2024.10.12)"]
typing = ["typing-extensions ; python_version < \"3.10\""]
xmp = ["defusedxml"]
[[package]]
name = "platformdirs"
version = "4.3.8"
@@ -5016,6 +5520,29 @@ attrs = ">=22.2.0"
rpds-py = ">=0.7.0"
typing-extensions = {version = ">=4.4.0", markers = "python_version < \"3.13\""}
[[package]]
name = "reportlab"
version = "4.4.4"
description = "The Reportlab Toolkit"
optional = false
python-versions = "<4,>=3.9"
groups = ["main"]
files = [
{file = "reportlab-4.4.4-py3-none-any.whl", hash = "sha256:299b3b0534e7202bb94ed2ddcd7179b818dcda7de9d8518a57c85a58a1ebaadb"},
{file = "reportlab-4.4.4.tar.gz", hash = "sha256:cb2f658b7f4a15be2cc68f7203aa67faef67213edd4f2d4bdd3eb20dab75a80d"},
]
[package.dependencies]
charset-normalizer = "*"
pillow = ">=9.0.0"
[package.extras]
accel = ["rl_accel (>=0.9.0,<1.1)"]
bidi = ["rlbidi"]
pycairo = ["freetype-py (>=2.3.0,<2.4)", "rlPyCairo (>=0.2.0,<1)"]
renderpm = ["rl_renderPM (>=4.0.3,<4.1)"]
shaping = ["uharfbuzz"]
[[package]]
name = "requests"
version = "2.32.5"
@@ -6259,4 +6786,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.11,<3.13"
content-hash = "03442fd4673006c5a74374f90f53621fd1c9d117279fe6cc0355ef833eb7f9bb"
content-hash = "3c9164d668d37d6373eb5200bbe768232ead934d9312b9c68046b1df922789f3"
+4 -2
@@ -33,7 +33,9 @@ dependencies = [
"xmlsec==1.3.14",
"h2 (==4.3.0)",
"markdown (>=3.9,<4.0)",
"drf-simple-apikey (==2.2.1)"
"drf-simple-apikey (==2.2.1)",
"matplotlib (>=3.10.6,<4.0.0)",
"reportlab (>=4.4.4,<5.0.0)"
]
description = "Prowler's API (Django/DRF)"
license = "Apache-2.0"
@@ -41,7 +43,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
version = "1.14.0"
version = "1.15.0"
[project.scripts]
celery = "src.backend.config.settings.celery"
+21 -2
@@ -10,6 +10,7 @@ from rest_framework.exceptions import AuthenticationFailed
from rest_framework.request import Request
from rest_framework_simplejwt.authentication import JWTAuthentication
from api.db_router import MainRouter
from api.models import TenantAPIKey, TenantAPIKeyManager
@@ -19,6 +20,22 @@ class TenantAPIKeyAuthentication(BaseAPIKeyAuth):
def __init__(self):
self.key_crypto = get_crypto()
def _authenticate_credentials(self, request, key):
"""
Override to use admin connection, bypassing RLS during authentication.
Delegates to parent after temporarily routing model queries to admin DB.
"""
# Temporarily point the model's manager to admin database
original_objects = self.model.objects
self.model.objects = self.model.objects.using(MainRouter.admin_db)
try:
# Call parent method which will now use admin database
return super()._authenticate_credentials(request, key)
finally:
# Restore original manager
self.model.objects = original_objects
def authenticate(self, request: Request):
prefixed_key = self.get_key(request)
@@ -43,13 +60,15 @@ class TenantAPIKeyAuthentication(BaseAPIKeyAuth):
api_key_pk = UUID(api_key_pk)
try:
api_key_instance = TenantAPIKey.objects.get(id=api_key_pk, prefix=prefix)
api_key_instance = TenantAPIKey.objects.using(MainRouter.admin_db).get(
id=api_key_pk, prefix=prefix
)
except TenantAPIKey.DoesNotExist:
raise AuthenticationFailed("Invalid API Key.")
# Update last_used_at
api_key_instance.last_used_at = timezone.now()
api_key_instance.save(update_fields=["last_used_at"])
api_key_instance.save(update_fields=["last_used_at"], using=MainRouter.admin_db)
return entity, {
"tenant_id": str(api_key_instance.tenant_id),
+83 -18
@@ -1,13 +1,15 @@
from django.conf import settings
from django.core.exceptions import ObjectDoesNotExist
from django.db import transaction
from rest_framework import permissions
from rest_framework.exceptions import NotAuthenticated
from rest_framework.filters import SearchFilter
from rest_framework.permissions import SAFE_METHODS
from rest_framework_json_api import filters
from rest_framework_json_api.views import ModelViewSet
from api.authentication import CombinedJWTOrAPIKeyAuthentication
from api.db_router import MainRouter
from api.db_router import MainRouter, reset_read_db_alias, set_read_db_alias
from api.db_utils import POSTGRES_USER_VAR, rls_transaction
from api.filters import CustomDjangoFilterBackend
from api.models import Role, Tenant
@@ -31,6 +33,20 @@ class BaseViewSet(ModelViewSet):
ordering_fields = "__all__"
ordering = ["id"]
def _get_request_db_alias(self, request):
if request is None:
return MainRouter.default_db
read_alias = (
MainRouter.replica_db
if request.method in SAFE_METHODS
and MainRouter.replica_db in settings.DATABASES
else None
)
if read_alias:
return read_alias
return MainRouter.default_db
def initial(self, request, *args, **kwargs):
"""
Sets required_permissions before permissions are checked.
@@ -48,8 +64,21 @@ class BaseViewSet(ModelViewSet):
class BaseRLSViewSet(BaseViewSet):
def dispatch(self, request, *args, **kwargs):
with transaction.atomic():
return super().dispatch(request, *args, **kwargs)
self.db_alias = self._get_request_db_alias(request)
alias_token = None
try:
if self.db_alias != MainRouter.default_db:
alias_token = set_read_db_alias(self.db_alias)
if request is not None:
request.db_alias = self.db_alias
with transaction.atomic(using=self.db_alias):
return super().dispatch(request, *args, **kwargs)
finally:
if alias_token is not None:
reset_read_db_alias(alias_token)
self.db_alias = MainRouter.default_db
def initial(self, request, *args, **kwargs):
# Ideally, this logic would be in the `.setup()` method but DRF view sets don't call it
@@ -61,7 +90,9 @@ class BaseRLSViewSet(BaseViewSet):
if tenant_id is None:
raise NotAuthenticated("Tenant ID is not present in token")
with rls_transaction(tenant_id):
with rls_transaction(
tenant_id, using=getattr(self, "db_alias", MainRouter.default_db)
):
self.request.tenant_id = tenant_id
return super().initial(request, *args, **kwargs)
@@ -73,18 +104,33 @@ class BaseRLSViewSet(BaseViewSet):
class BaseTenantViewset(BaseViewSet):
def dispatch(self, request, *args, **kwargs):
with transaction.atomic():
tenant = super().dispatch(request, *args, **kwargs)
self.db_alias = self._get_request_db_alias(request)
alias_token = None
try:
# If the request is a POST, create the admin role
if request.method == "POST":
isinstance(tenant, dict) and self._create_admin_role(tenant.data["id"])
except Exception as e:
self._handle_creation_error(e, tenant)
raise
if self.db_alias != MainRouter.default_db:
alias_token = set_read_db_alias(self.db_alias)
return tenant
if request is not None:
request.db_alias = self.db_alias
with transaction.atomic(using=self.db_alias):
tenant = super().dispatch(request, *args, **kwargs)
try:
# If the request is a POST, create the admin role
if request.method == "POST":
isinstance(tenant, dict) and self._create_admin_role(
tenant.data["id"]
)
except Exception as e:
self._handle_creation_error(e, tenant)
raise
return tenant
finally:
if alias_token is not None:
reset_read_db_alias(alias_token)
self.db_alias = MainRouter.default_db
def _create_admin_role(self, tenant_id):
Role.objects.using(MainRouter.admin_db).create(
@@ -117,14 +163,31 @@ class BaseTenantViewset(BaseViewSet):
raise NotAuthenticated("Tenant ID is not present in token")
user_id = str(request.user.id)
with rls_transaction(value=user_id, parameter=POSTGRES_USER_VAR):
with rls_transaction(
value=user_id,
parameter=POSTGRES_USER_VAR,
using=getattr(self, "db_alias", MainRouter.default_db),
):
return super().initial(request, *args, **kwargs)
class BaseUserViewset(BaseViewSet):
def dispatch(self, request, *args, **kwargs):
with transaction.atomic():
return super().dispatch(request, *args, **kwargs)
self.db_alias = self._get_request_db_alias(request)
alias_token = None
try:
if self.db_alias != MainRouter.default_db:
alias_token = set_read_db_alias(self.db_alias)
if request is not None:
request.db_alias = self.db_alias
with transaction.atomic(using=self.db_alias):
return super().dispatch(request, *args, **kwargs)
finally:
if alias_token is not None:
reset_read_db_alias(alias_token)
self.db_alias = MainRouter.default_db
def initial(self, request, *args, **kwargs):
# TODO refactor after improving RLS on users
@@ -137,6 +200,8 @@ class BaseUserViewset(BaseViewSet):
if tenant_id is None:
raise NotAuthenticated("Tenant ID is not present in token")
with rls_transaction(tenant_id):
with rls_transaction(
tenant_id, using=getattr(self, "db_alias", MainRouter.default_db)
):
self.request.tenant_id = tenant_id
return super().initial(request, *args, **kwargs)
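In practice the alias selection in these viewsets reduces to a small rule: read-only requests go to the read replica when one is configured, everything else stays on the default connection. A minimal, self-contained sketch of that rule (hypothetical helper; it only assumes DRF's SAFE_METHODS and a "replica" entry in settings.DATABASES):
from rest_framework.permissions import SAFE_METHODS
def pick_alias(method: str, databases: dict) -> str:
    # Read-only HTTP methods (GET/HEAD/OPTIONS) use the replica when present;
    # all other methods keep using the default connection.
    if method in SAFE_METHODS and "replica" in databases:
        return "replica"
    return "default"
assert pick_alias("GET", {"default": {}, "replica": {}}) == "replica"
assert pick_alias("POST", {"default": {}, "replica": {}}) == "default"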
+10 -6
View File
@@ -150,12 +150,16 @@ def generate_scan_compliance(
requirement["checks"][check_id] = status
requirement["checks_status"][status.lower()] += 1
if requirement["status"] != "FAIL" and any(
value == "FAIL" for value in requirement["checks"].values()
):
requirement["status"] = "FAIL"
compliance_overview[compliance_id]["requirements_status"]["passed"] -= 1
compliance_overview[compliance_id]["requirements_status"]["failed"] += 1
if requirement["status"] != "FAIL" and any(
value == "FAIL" for value in requirement["checks"].values()
):
requirement["status"] = "FAIL"
compliance_overview[compliance_id]["requirements_status"][
"passed"
] -= 1
compliance_overview[compliance_id]["requirements_status"][
"failed"
] += 1
def generate_compliance_overview_template(prowler_compliance: dict):
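The re-indented block above keeps the same semantics: a requirement flips to FAIL the first time any of its checks fails, and the passed/failed requirement counters move with it. A stand-alone sketch of that flip (illustrative data only):
requirement = {"status": "PASS", "checks": {"check_a": "PASS", "check_b": "FAIL"}}
counters = {"passed": 1, "failed": 0}
if requirement["status"] != "FAIL" and any(
    value == "FAIL" for value in requirement["checks"].values()
):
    requirement["status"] = "FAIL"
    counters["passed"] -= 1
    counters["failed"] += 1
assert requirement["status"] == "FAIL"
assert counters == {"passed": 0, "failed": 1}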
+30
View File
@@ -1,9 +1,31 @@
from contextvars import ContextVar
from django.conf import settings
ALLOWED_APPS = ("django", "socialaccount", "account", "authtoken", "silk")
_read_db_alias = ContextVar("read_db_alias", default=None)
def set_read_db_alias(alias: str | None):
if not alias:
return None
return _read_db_alias.set(alias)
def get_read_db_alias() -> str | None:
return _read_db_alias.get()
def reset_read_db_alias(token) -> None:
if token is not None:
_read_db_alias.reset(token)
class MainRouter:
default_db = "default"
admin_db = "admin"
replica_db = "replica"
def db_for_read(self, model, **hints): # noqa: F841
model_table_name = model._meta.db_table
@@ -11,6 +33,9 @@ class MainRouter:
model_table_name.startswith(f"{app}_") for app in ALLOWED_APPS
):
return self.admin_db
read_alias = get_read_db_alias()
if read_alias:
return read_alias
return None
def db_for_write(self, model, **hints): # noqa: F841
@@ -27,3 +52,8 @@ class MainRouter:
if {obj1._state.db, obj2._state.db} <= {self.default_db, self.admin_db}:
return True
return None
READ_REPLICA_ALIAS = (
MainRouter.replica_db if MainRouter.replica_db in settings.DATABASES else None
)
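The router relies on a ContextVar token so the read alias set for one request never leaks into another request sharing the same thread or async context. A minimal sketch of that set/reset pattern in plain Python:
from contextvars import ContextVar
_alias = ContextVar("read_db_alias", default=None)
token = _alias.set("replica")       # set_read_db_alias returns this token
assert _alias.get() == "replica"    # get_read_db_alias sees the alias
_alias.reset(token)                 # reset_read_db_alias restores the previous value
assert _alias.get() is None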
+33 -11
View File
@@ -6,12 +6,14 @@ from datetime import datetime, timedelta, timezone
from django.conf import settings
from django.contrib.auth.models import BaseUserManager
from django.db import connection, models, transaction
from django.db import DEFAULT_DB_ALIAS, connection, connections, models, transaction
from django_celery_beat.models import PeriodicTask
from psycopg2 import connect as psycopg2_connect
from psycopg2.extensions import AsIs, new_type, register_adapter, register_type
from rest_framework_json_api.serializers import ValidationError
from api.db_router import get_read_db_alias, reset_read_db_alias, set_read_db_alias
DB_USER = settings.DATABASES["default"]["USER"] if not settings.TESTING else "test"
DB_PASSWORD = (
settings.DATABASES["default"]["PASSWORD"] if not settings.TESTING else "test"
@@ -49,7 +51,11 @@ def psycopg_connection(database_alias: str):
@contextmanager
def rls_transaction(value: str, parameter: str = POSTGRES_TENANT_VAR):
def rls_transaction(
value: str,
parameter: str = POSTGRES_TENANT_VAR,
using: str | None = None,
):
"""
Creates a new database transaction setting the given configuration value for Postgres RLS. It validates that
the value is a valid UUID.
@@ -57,16 +63,32 @@ def rls_transaction(value: str, parameter: str = POSTGRES_TENANT_VAR):
Args:
value (str): Database configuration parameter value.
parameter (str): Database configuration parameter name, by default is 'api.tenant_id'.
using (str | None): Optional database alias to run the transaction against. Defaults to the
active read alias (if any) or Django's default connection.
"""
with transaction.atomic():
with connection.cursor() as cursor:
try:
# just in case the value is a UUID object
uuid.UUID(str(value))
except ValueError:
raise ValidationError("Must be a valid UUID")
cursor.execute(SET_CONFIG_QUERY, [parameter, value])
yield cursor
requested_alias = using or get_read_db_alias()
db_alias = requested_alias or DEFAULT_DB_ALIAS
if db_alias not in connections:
db_alias = DEFAULT_DB_ALIAS
router_token = None
try:
if db_alias != DEFAULT_DB_ALIAS:
router_token = set_read_db_alias(db_alias)
with transaction.atomic(using=db_alias):
conn = connections[db_alias]
with conn.cursor() as cursor:
try:
# just in case the value is a UUID object
uuid.UUID(str(value))
except ValueError:
raise ValidationError("Must be a valid UUID")
cursor.execute(SET_CONFIG_QUERY, [parameter, value])
yield cursor
finally:
if router_token is not None:
reset_read_db_alias(router_token)
class CustomUserManager(BaseUserManager):
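With the new "using" argument, callers can pin the RLS transaction to a specific connection instead of always using the default one. A hypothetical call site (a sketch only; it assumes the "replica" alias exists and a configured Django environment):
from api.db_utils import rls_transaction
def read_tenant_data(tenant_id: str):
    # The tenant id is applied as a Postgres setting on the replica connection,
    # so RLS-protected reads inside this block run against the replica.
    with rls_transaction(tenant_id, using="replica"):
        ...  # ORM queries here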
+45
View File
@@ -27,6 +27,8 @@ from api.models import (
Finding,
Integration,
Invitation,
LighthouseProviderConfiguration,
LighthouseProviderModels,
Membership,
OverviewStatusChoices,
PermissionChoices,
@@ -765,6 +767,7 @@ class ComplianceOverviewFilter(FilterSet):
class ScanSummaryFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
provider_id = UUIDFilter(field_name="scan__provider__id", lookup_expr="exact")
provider_id__in = UUIDInFilter(field_name="scan__provider__id", lookup_expr="in")
provider_type = ChoiceFilter(
field_name="scan__provider__provider", choices=Provider.ProviderChoices.choices
)
@@ -927,3 +930,45 @@ class TenantApiKeyFilter(FilterSet):
"revoked": ["exact"],
"name": ["exact", "icontains"],
}
class LighthouseProviderConfigFilter(FilterSet):
provider_type = ChoiceFilter(
choices=LighthouseProviderConfiguration.LLMProviderChoices.choices
)
provider_type__in = ChoiceInFilter(
choices=LighthouseProviderConfiguration.LLMProviderChoices.choices,
field_name="provider_type",
lookup_expr="in",
)
is_active = BooleanFilter()
class Meta:
model = LighthouseProviderConfiguration
fields = {
"provider_type": ["exact", "in"],
"is_active": ["exact"],
}
class LighthouseProviderModelsFilter(FilterSet):
provider_type = ChoiceFilter(
choices=LighthouseProviderConfiguration.LLMProviderChoices.choices,
field_name="provider_configuration__provider_type",
)
provider_type__in = ChoiceInFilter(
choices=LighthouseProviderConfiguration.LLMProviderChoices.choices,
field_name="provider_configuration__provider_type",
lookup_expr="in",
)
# Allow filtering by model id
model_id = CharFilter(field_name="model_id", lookup_expr="exact")
model_id__icontains = CharFilter(field_name="model_id", lookup_expr="icontains")
model_id__in = CharInFilter(field_name="model_id", lookup_expr="in")
class Meta:
model = LighthouseProviderModels
fields = {
"model_id": ["exact", "icontains", "in"],
}
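These FilterSets surface as JSON:API style query parameters on the Lighthouse endpoints. A hypothetical request against the models listing (the URL path is an assumption; the filter names come from the FilterSet above):
from rest_framework.test import APIClient
client = APIClient()
response = client.get(
    "/api/v1/lighthouse-models",  # hypothetical path for illustration
    {"filter[provider_type]": "openai", "filter[model_id__icontains]": "gpt"},
)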
+14 -3
View File
@@ -2,6 +2,7 @@
import uuid
import django.core.validators
import django.db.models.deletion
import drf_simple_apikey.models
from django.conf import settings
@@ -20,7 +21,13 @@ class Migration(migrations.Migration):
migrations.CreateModel(
name="TenantAPIKey",
fields=[
("name", models.CharField(blank=True, max_length=255, null=True)),
(
"name",
models.CharField(
max_length=100,
validators=[django.core.validators.MinLengthValidator(3)],
),
),
(
"expiry_date",
models.DateTimeField(
@@ -37,7 +44,7 @@ class Migration(migrations.Migration):
help_text="If the API key is revoked, entities cannot use it anymore. (This cannot be undone.)",
),
),
("created", models.DateTimeField(auto_now=True)),
("created", models.DateTimeField(auto_now_add=True, editable=False)),
(
"whitelisted_ips",
models.JSONField(
@@ -110,7 +117,11 @@ class Migration(migrations.Migration):
"constraints": [
models.UniqueConstraint(
fields=("tenant_id", "prefix"), name="unique_api_key_prefixes"
)
),
models.UniqueConstraint(
fields=("tenant_id", "name"),
name="unique_api_key_name_per_tenant",
),
],
},
),
@@ -0,0 +1,21 @@
# Generated by Django 5.1.12 on 2025-10-07 10:41
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0048_api_key"),
]
operations = [
migrations.AddField(
model_name="compliancerequirementoverview",
name="passed_findings",
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name="compliancerequirementoverview",
name="total_findings",
field=models.IntegerField(default=0),
),
]
@@ -0,0 +1,266 @@
# Generated by Django 5.1.12 on 2025-10-09 07:50
import json
import logging
import uuid
import django.db.models.deletion
from config.custom_logging import BackendLogger
from cryptography.fernet import Fernet
from django.conf import settings
from django.db import migrations, models
import api.rls
from api.db_router import MainRouter
logger = logging.getLogger(BackendLogger.API)
def migrate_lighthouse_configs_forward(apps, schema_editor):
"""
Migrate data from old LighthouseConfiguration to new multi-provider models.
Old system: one LighthouseConfiguration per tenant (always OpenAI).
"""
LighthouseConfiguration = apps.get_model("api", "LighthouseConfiguration")
LighthouseProviderConfiguration = apps.get_model(
"api", "LighthouseProviderConfiguration"
)
LighthouseTenantConfiguration = apps.get_model(
"api", "LighthouseTenantConfiguration"
)
LighthouseProviderModels = apps.get_model("api", "LighthouseProviderModels")
fernet = Fernet(settings.SECRETS_ENCRYPTION_KEY.encode())
# Migrate only tenants that actually have a LighthouseConfiguration
for old_config in (
LighthouseConfiguration.objects.using(MainRouter.admin_db)
.select_related("tenant")
.all()
):
tenant = old_config.tenant
tenant_id = str(tenant.id)
try:
# Create OpenAI provider configuration for this tenant
api_key_decrypted = fernet.decrypt(bytes(old_config.api_key)).decode()
credentials_encrypted = fernet.encrypt(
json.dumps({"api_key": api_key_decrypted}).encode()
)
provider_config = LighthouseProviderConfiguration.objects.using(
MainRouter.admin_db
).create(
tenant=tenant,
provider_type="openai",
credentials=credentials_encrypted,
is_active=old_config.is_active,
)
# Create tenant configuration from old values
LighthouseTenantConfiguration.objects.using(MainRouter.admin_db).create(
tenant=tenant,
business_context=old_config.business_context or "",
default_provider="openai",
default_models={"openai": old_config.model},
)
# Create initial provider model record
LighthouseProviderModels.objects.using(MainRouter.admin_db).create(
tenant=tenant,
provider_configuration=provider_config,
model_id=old_config.model,
model_name=old_config.model,
default_parameters={},
)
except Exception:
logger.exception(
"Failed to migrate lighthouse config for tenant %s", tenant_id
)
continue
class Migration(migrations.Migration):
dependencies = [
("api", "0049_compliancerequirementoverview_passed_failed_findings"),
]
operations = [
migrations.CreateModel(
name="LighthouseProviderConfiguration",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
("updated_at", models.DateTimeField(auto_now=True)),
(
"provider_type",
models.CharField(
choices=[("openai", "OpenAI")],
help_text="LLM provider name",
max_length=50,
),
),
("base_url", models.URLField(blank=True, null=True)),
(
"credentials",
models.BinaryField(
help_text="Encrypted JSON credentials for the provider"
),
),
("is_active", models.BooleanField(default=True)),
],
options={
"db_table": "lighthouse_provider_configurations",
"abstract": False,
},
),
migrations.CreateModel(
name="LighthouseProviderModels",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
("updated_at", models.DateTimeField(auto_now=True)),
("model_id", models.CharField(max_length=100)),
("model_name", models.CharField(max_length=100)),
("default_parameters", models.JSONField(blank=True, default=dict)),
],
options={
"db_table": "lighthouse_provider_models",
"abstract": False,
},
),
migrations.CreateModel(
name="LighthouseTenantConfiguration",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
("updated_at", models.DateTimeField(auto_now=True)),
("business_context", models.TextField(blank=True, default="")),
("default_provider", models.CharField(blank=True, max_length=50)),
("default_models", models.JSONField(blank=True, default=dict)),
],
options={
"db_table": "lighthouse_tenant_config",
"abstract": False,
},
),
migrations.AddField(
model_name="lighthouseproviderconfiguration",
name="tenant",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
migrations.AddField(
model_name="lighthouseprovidermodels",
name="provider_configuration",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="available_models",
to="api.lighthouseproviderconfiguration",
),
),
migrations.AddField(
model_name="lighthouseprovidermodels",
name="tenant",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
migrations.AddField(
model_name="lighthousetenantconfiguration",
name="tenant",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
migrations.AddIndex(
model_name="lighthouseproviderconfiguration",
index=models.Index(
fields=["tenant_id", "provider_type"], name="lh_pc_tenant_type_idx"
),
),
migrations.AddConstraint(
model_name="lighthouseproviderconfiguration",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_lighthouseproviderconfiguration",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.AddConstraint(
model_name="lighthouseproviderconfiguration",
constraint=models.UniqueConstraint(
fields=("tenant_id", "provider_type"),
name="unique_provider_config_per_tenant",
),
),
migrations.AddIndex(
model_name="lighthouseprovidermodels",
index=models.Index(
fields=["tenant_id", "provider_configuration"],
name="lh_prov_models_cfg_idx",
),
),
migrations.AddConstraint(
model_name="lighthouseprovidermodels",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_lighthouseprovidermodels",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.AddConstraint(
model_name="lighthouseprovidermodels",
constraint=models.UniqueConstraint(
fields=("tenant_id", "provider_configuration", "model_id"),
name="unique_provider_model_per_configuration",
),
),
migrations.AddConstraint(
model_name="lighthousetenantconfiguration",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_lighthousetenantconfiguration",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.AddConstraint(
model_name="lighthousetenantconfiguration",
constraint=models.UniqueConstraint(
fields=("tenant_id",), name="unique_tenant_lighthouse_config"
),
),
# Migrate data from old LighthouseConfiguration to new tables
# This runs after all tables, indexes, and constraints are created
# The old Lighthouse configuration table is not removed, so reverse_code is noop
# During rollbacks, the old Lighthouse configuration remains intact while the new tables are removed
migrations.RunPython(
migrate_lighthouse_configs_forward,
reverse_code=migrations.RunPython.noop,
),
]
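The core of the data migration is a credential format change: the old single encrypted api_key becomes an encrypted JSON document holding the provider credentials. A self-contained sketch of that conversion (a throwaway Fernet key stands in for settings.SECRETS_ENCRYPTION_KEY):
import json
from cryptography.fernet import Fernet
fernet = Fernet(Fernet.generate_key())  # stand-in for the real encryption key
old_api_key_blob = fernet.encrypt(b"sk-example")              # old storage format
api_key = fernet.decrypt(old_api_key_blob).decode()           # decrypt old value
credentials = fernet.encrypt(json.dumps({"api_key": api_key}).encode())  # new format
assert json.loads(fernet.decrypt(credentials))["api_key"] == "sk-example"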
+189 -25
View File
@@ -220,6 +220,8 @@ class Membership(models.Model):
class TenantAPIKey(AbstractAPIKey, RowLevelSecurityProtectedModel):
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
name = models.CharField(max_length=100, validators=[MinLengthValidator(3)])
created = models.DateTimeField(auto_now_add=True, editable=False)
prefix = models.CharField(
max_length=11,
unique=True,
@@ -255,6 +257,10 @@ class TenantAPIKey(AbstractAPIKey, RowLevelSecurityProtectedModel):
fields=("tenant_id", "prefix"),
name="unique_api_key_prefixes",
),
models.UniqueConstraint(
fields=("tenant_id", "name"),
name="unique_api_key_name_per_tenant",
),
]
indexes = [
@@ -1293,6 +1299,8 @@ class ComplianceRequirementOverview(RowLevelSecurityProtectedModel):
passed_checks = models.IntegerField(default=0)
failed_checks = models.IntegerField(default=0)
total_checks = models.IntegerField(default=0)
passed_findings = models.IntegerField(default=0)
total_findings = models.IntegerField(default=0)
scan = models.ForeignKey(
Scan,
@@ -1865,22 +1873,6 @@ class LighthouseConfiguration(RowLevelSecurityProtectedModel):
def clean(self):
super().clean()
# Validate temperature
if not 0 <= self.temperature <= 1:
raise ModelValidationError(
detail="Temperature must be between 0 and 1",
code="invalid_temperature",
pointer="/data/attributes/temperature",
)
# Validate max_tokens
if not 500 <= self.max_tokens <= 5000:
raise ModelValidationError(
detail="Max tokens must be between 500 and 5000",
code="invalid_max_tokens",
pointer="/data/attributes/max_tokens",
)
@property
def api_key_decoded(self):
"""Return the decrypted API key, or None if unavailable or invalid."""
@@ -1905,15 +1897,6 @@ class LighthouseConfiguration(RowLevelSecurityProtectedModel):
code="invalid_api_key",
pointer="/data/attributes/api_key",
)
# Validate OpenAI API key format
openai_key_pattern = r"^sk-[\w-]+T3BlbkFJ[\w-]+$"
if not re.match(openai_key_pattern, value):
raise ModelValidationError(
detail="Invalid OpenAI API key format.",
code="invalid_api_key",
pointer="/data/attributes/api_key",
)
self.api_key = fernet.encrypt(value.encode())
def save(self, *args, **kwargs):
@@ -1976,3 +1959,184 @@ class Processor(RowLevelSecurityProtectedModel):
class JSONAPIMeta:
resource_name = "processors"
class LighthouseProviderConfiguration(RowLevelSecurityProtectedModel):
"""
Per-tenant configuration for an LLM provider (credentials, base URL, activation).
One configuration per provider type per tenant.
"""
class LLMProviderChoices(models.TextChoices):
OPENAI = "openai", _("OpenAI")
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
updated_at = models.DateTimeField(auto_now=True, editable=False)
provider_type = models.CharField(
max_length=50,
choices=LLMProviderChoices.choices,
help_text="LLM provider name",
)
# For OpenAI-compatible providers
base_url = models.URLField(blank=True, null=True)
# Encrypted JSON for provider-specific auth
credentials = models.BinaryField(
blank=False, null=False, help_text="Encrypted JSON credentials for the provider"
)
is_active = models.BooleanField(default=True)
def __str__(self):
return f"{self.get_provider_type_display()} ({self.tenant_id})"
def clean(self):
super().clean()
@property
def credentials_decoded(self):
if not self.credentials:
return None
try:
decrypted_data = fernet.decrypt(bytes(self.credentials))
return json.loads(decrypted_data.decode())
except (InvalidToken, json.JSONDecodeError) as e:
logger.warning("Failed to decrypt provider credentials: %s", e)
return None
except Exception as e:
logger.exception(
"Unexpected error while decrypting provider credentials: %s", e
)
return None
@credentials_decoded.setter
def credentials_decoded(self, value):
"""
Set and encrypt credentials (assumes serializer performed validation).
"""
if not value:
raise ModelValidationError(
detail="Credentials are required",
code="invalid_credentials",
pointer="/data/attributes/credentials",
)
self.credentials = fernet.encrypt(json.dumps(value).encode())
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "lighthouse_provider_configurations"
constraints = [
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
models.UniqueConstraint(
fields=["tenant_id", "provider_type"],
name="unique_provider_config_per_tenant",
),
]
indexes = [
models.Index(
fields=["tenant_id", "provider_type"],
name="lh_pc_tenant_type_idx",
),
]
class JSONAPIMeta:
resource_name = "lighthouse-providers"
class LighthouseTenantConfiguration(RowLevelSecurityProtectedModel):
"""
Tenant-level Lighthouse settings (business context and defaults).
One record per tenant.
"""
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
updated_at = models.DateTimeField(auto_now=True, editable=False)
business_context = models.TextField(blank=True, default="")
# Preferred provider key (e.g., "openai", "bedrock", "openai_compatible")
default_provider = models.CharField(max_length=50, blank=True)
# Mapping of provider -> model id, e.g., {"openai": "gpt-4o", "bedrock": "anthropic.claude-v2"}
default_models = models.JSONField(default=dict, blank=True)
def __str__(self):
return f"Lighthouse Tenant Config for {self.tenant_id}"
def clean(self):
super().clean()
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "lighthouse_tenant_config"
constraints = [
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
models.UniqueConstraint(
fields=["tenant_id"], name="unique_tenant_lighthouse_config"
),
]
class JSONAPIMeta:
resource_name = "lighthouse-config"
class LighthouseProviderModels(RowLevelSecurityProtectedModel):
"""
Per-tenant, per-provider configuration list of available LLM models.
RLS-protected; populated via provider API using tenant-scoped credentials.
"""
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
updated_at = models.DateTimeField(auto_now=True, editable=False)
# Scope to a specific provider configuration within a tenant
provider_configuration = models.ForeignKey(
LighthouseProviderConfiguration,
on_delete=models.CASCADE,
related_name="available_models",
)
model_id = models.CharField(max_length=100)
# Human-friendly model name
model_name = models.CharField(max_length=100)
# Model-specific default parameters (e.g., temperature, max_tokens)
default_parameters = models.JSONField(default=dict, blank=True)
def __str__(self):
return f"{self.provider_configuration.provider_type}:{self.model_id} ({self.tenant_id})"
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "lighthouse_provider_models"
constraints = [
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
models.UniqueConstraint(
fields=["tenant_id", "provider_configuration", "model_id"],
name="unique_provider_model_per_configuration",
),
]
indexes = [
models.Index(
fields=["tenant_id", "provider_configuration"],
name="lh_prov_models_cfg_idx",
),
]
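The credentials_decoded property pair hides the encryption: setting it encrypts a JSON payload, reading it decrypts and parses it back. A hypothetical round trip on an unsaved instance (requires a configured Django environment, e.g. the API test suite, since the model-level Fernet key comes from settings):
from api.models import LighthouseProviderConfiguration
config = LighthouseProviderConfiguration(provider_type="openai")
config.credentials_decoded = {"api_key": "sk-example"}            # encrypts on set
assert config.credentials_decoded == {"api_key": "sk-example"}    # decrypts on get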
+2 -1
View File
@@ -11,11 +11,12 @@ class APIJSONRenderer(JSONRenderer):
def render(self, data, accepted_media_type=None, renderer_context=None):
request = renderer_context.get("request")
tenant_id = getattr(request, "tenant_id", None) if request else None
db_alias = getattr(request, "db_alias", None) if request else None
include_param_present = "include" in request.query_params if request else False
# Use rls_transaction if needed for included resources, otherwise do nothing
context_manager = (
rls_transaction(tenant_id)
rls_transaction(tenant_id, using=db_alias)
if tenant_id and include_param_present
else nullcontext()
)
+38 -1
View File
@@ -6,7 +6,14 @@ from django.dispatch import receiver
from django_celery_results.backends.database import DatabaseBackend
from api.db_utils import delete_related_daily_task
from api.models import Membership, Provider, TenantAPIKey, User
from api.models import (
LighthouseProviderConfiguration,
LighthouseTenantConfiguration,
Membership,
Provider,
TenantAPIKey,
User,
)
def create_task_result_on_publish(sender=None, headers=None, **kwargs): # noqa: F841
@@ -56,3 +63,33 @@ def revoke_membership_api_keys(sender, instance, **kwargs): # noqa: F841
TenantAPIKey.objects.filter(
entity=instance.user, tenant_id=instance.tenant.id
).update(revoked=True)
@receiver(pre_delete, sender=LighthouseProviderConfiguration)
def cleanup_lighthouse_defaults_before_delete(sender, instance, **kwargs): # noqa: F841
"""
Ensure tenant Lighthouse defaults do not reference a soon-to-be-deleted provider.
This runs for both per-instance deletes and queryset (bulk) deletes.
"""
try:
tenant_cfg = LighthouseTenantConfiguration.objects.get(
tenant_id=instance.tenant_id
)
except LighthouseTenantConfiguration.DoesNotExist:
return
updated = False
defaults = tenant_cfg.default_models or {}
if instance.provider_type in defaults:
defaults.pop(instance.provider_type, None)
tenant_cfg.default_models = defaults
updated = True
if tenant_cfg.default_provider == instance.provider_type:
tenant_cfg.default_provider = ""
updated = True
if updated:
tenant_cfg.save()
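The cleanup rule applied by this signal is simple: drop the deleted provider from default_models and clear default_provider if it pointed at that provider. A stand-alone sketch of the same rule, outside Django:
def cleanup_defaults(default_provider: str, default_models: dict, deleted: str):
    # Mirror of the signal handler's bookkeeping on plain values.
    default_models = dict(default_models)
    default_models.pop(deleted, None)
    if default_provider == deleted:
        default_provider = ""
    return default_provider, default_models
assert cleanup_defaults("openai", {"openai": "gpt-4o"}, "openai") == ("", {})
assert cleanup_defaults("openai", {"openai": "gpt-4o"}, "bedrock") == ("openai", {"openai": "gpt-4o"})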
File diff suppressed because it is too large
@@ -1003,6 +1003,504 @@ class TestCombinedAuthentication:
assert api_key.last_used_at is not None
@pytest.mark.django_db
class TestAPIKeyRLSBypass:
"""Test RLS bypass fix for API key authentication.
These tests verify that API key authentication works correctly even when
RLS context is not set, which is critical since we don't know the tenant_id
until we look up the API key (which itself is protected by RLS).
The fix ensures all database operations during authentication use the admin
database, bypassing RLS constraints.
"""
def test_api_key_authentication_without_rls_context(
self, create_test_user, tenants_fixture, api_keys_fixture
):
"""Verify API key authentication works without pre-existing RLS context.
This is the core fix: authentication must succeed even when prowler.tenant_id
is not set, since we need to look up the API key to discover the tenant.
"""
client = APIClient()
api_key = api_keys_fixture[0]
api_key_headers = get_api_key_header(api_key._raw_key)
response = client.get(reverse("provider-list"), headers=api_key_headers)
assert response.status_code == 200
assert "data" in response.json()
def test_api_key_lookup_uses_admin_database(
self, create_test_user, tenants_fixture
):
"""Verify API key lookup uses admin database during authentication.
The TenantAPIKey model is RLS-protected, so queries against it would
normally fail without prowler.tenant_id set. The fix routes lookups
to the admin database which bypasses RLS.
"""
client = APIClient()
tenant = tenants_fixture[0]
role = Role.objects.create(
tenant_id=tenant.id,
name="Admin DB Test Role",
unlimited_visibility=True,
manage_account=True,
)
UserRoleRelationship.objects.create(
user=create_test_user,
role=role,
tenant_id=tenant.id,
)
api_key, raw_key = TenantAPIKey.objects.create_api_key(
name="Admin DB Test Key",
tenant_id=tenant.id,
entity=create_test_user,
)
api_key_headers = get_api_key_header(raw_key)
response = client.get(reverse("provider-list"), headers=api_key_headers)
assert response.status_code == 200
api_key.refresh_from_db()
assert api_key.last_used_at is not None
def test_tenant_context_established_after_authentication(
self, create_test_user, tenants_fixture, api_keys_fixture
):
"""Verify correct tenant context is established after API key auth.
After authentication, the tenant_id from the API key should be used
to set up the proper RLS context for subsequent queries.
"""
client = APIClient()
api_key = api_keys_fixture[0]
api_key_headers = get_api_key_header(api_key._raw_key)
# Use tenant-list endpoint to get actual tenant IDs
tenant_response = client.get(reverse("tenant-list"), headers=api_key_headers)
assert tenant_response.status_code == 200
tenant_data = tenant_response.json()["data"]
tenant_ids = [t["id"] for t in tenant_data]
# Verify the API key's tenant is in the list of accessible tenants
assert str(api_key.tenant_id) in tenant_ids
def test_concurrent_authentication_different_tenants(self, tenants_fixture):
"""Verify multiple API keys from different tenants can authenticate simultaneously.
This tests that the admin database routing works correctly in concurrent
scenarios and doesn't cause tenant isolation issues.
"""
client = APIClient()
user1 = User.objects.create_user(
name="concurrent_user1",
email="concurrent1@test.com",
password=TEST_PASSWORD,
)
user2 = User.objects.create_user(
name="concurrent_user2",
email="concurrent2@test.com",
password=TEST_PASSWORD,
)
tenant1 = tenants_fixture[0]
tenant2 = tenants_fixture[1]
Membership.objects.create(user=user1, tenant=tenant1)
Membership.objects.create(user=user2, tenant=tenant2)
role1 = Role.objects.create(
tenant_id=tenant1.id,
name="Concurrent Role 1",
unlimited_visibility=True,
manage_account=True,
)
role2 = Role.objects.create(
tenant_id=tenant2.id,
name="Concurrent Role 2",
unlimited_visibility=True,
manage_account=True,
)
UserRoleRelationship.objects.create(
user=user1,
role=role1,
tenant_id=tenant1.id,
)
UserRoleRelationship.objects.create(
user=user2,
role=role2,
tenant_id=tenant2.id,
)
api_key1, raw_key1 = TenantAPIKey.objects.create_api_key(
name="Concurrent Key 1",
tenant_id=tenant1.id,
entity=user1,
)
api_key2, raw_key2 = TenantAPIKey.objects.create_api_key(
name="Concurrent Key 2",
tenant_id=tenant2.id,
entity=user2,
)
headers1 = get_api_key_header(raw_key1)
headers2 = get_api_key_header(raw_key2)
response1 = client.get(reverse("provider-list"), headers=headers1)
response2 = client.get(reverse("provider-list"), headers=headers2)
assert response1.status_code == 200
assert response2.status_code == 200
api_key1.refresh_from_db()
api_key2.refresh_from_db()
assert api_key1.last_used_at is not None
assert api_key2.last_used_at is not None
assert api_key1.tenant_id == tenant1.id
assert api_key2.tenant_id == tenant2.id
def test_api_key_update_last_used_uses_admin_db(
self, create_test_user, tenants_fixture, api_keys_fixture
):
"""Verify last_used_at update uses admin database.
The update to last_used_at during authentication must also use the
admin database since it occurs before RLS context is established.
"""
client = APIClient()
api_key = api_keys_fixture[0]
assert api_key.last_used_at is None
api_key_headers = get_api_key_header(api_key._raw_key)
first_response = client.get(reverse("provider-list"), headers=api_key_headers)
assert first_response.status_code == 200
api_key.refresh_from_db()
first_timestamp = api_key.last_used_at
assert first_timestamp is not None
time.sleep(0.1)
second_response = client.get(reverse("provider-list"), headers=api_key_headers)
assert second_response.status_code == 200
api_key.refresh_from_db()
second_timestamp = api_key.last_used_at
assert second_timestamp > first_timestamp
def test_api_key_prefix_lookup_bypasses_rls(
self, create_test_user, tenants_fixture
):
"""Verify prefix-based API key lookup works without RLS context.
The authentication process splits the key into prefix and encrypted parts,
then looks up by prefix. This lookup must work via admin database.
"""
client = APIClient()
tenant = tenants_fixture[0]
role = Role.objects.create(
tenant_id=tenant.id,
name="Prefix Test Role",
unlimited_visibility=True,
manage_account=True,
)
UserRoleRelationship.objects.create(
user=create_test_user,
role=role,
tenant_id=tenant.id,
)
api_key, raw_key = TenantAPIKey.objects.create_api_key(
name="Prefix Test Key",
tenant_id=tenant.id,
entity=create_test_user,
)
prefix = raw_key.split(".")[0]
assert prefix == api_key.prefix
api_key_headers = get_api_key_header(raw_key)
response = client.get(reverse("provider-list"), headers=api_key_headers)
assert response.status_code == 200
def test_expired_api_key_check_uses_admin_db(
self, create_test_user, tenants_fixture
):
"""Verify expired API key validation works via admin database.
Checking if a key is expired requires reading from TenantAPIKey,
which must use admin database during authentication.
"""
client = APIClient()
tenant = tenants_fixture[0]
expired_key, raw_key = TenantAPIKey.objects.create_api_key(
name="Expired Test Key",
tenant_id=tenant.id,
entity=create_test_user,
expiry_date=datetime.now(timezone.utc) - timedelta(days=1),
)
api_key_headers = get_api_key_header(raw_key)
response = client.get(reverse("provider-list"), headers=api_key_headers)
assert response.status_code == 401
assert "expired" in response.json()["errors"][0]["detail"].lower()
def test_revoked_api_key_check_uses_admin_db(
self, create_test_user, tenants_fixture
):
"""Verify revoked API key validation works via admin database.
Checking if a key is revoked requires reading from TenantAPIKey,
which must use admin database during authentication.
"""
client = APIClient()
tenant = tenants_fixture[0]
role = Role.objects.create(
tenant_id=tenant.id,
name="Revoked Test Role",
unlimited_visibility=True,
manage_account=True,
)
UserRoleRelationship.objects.create(
user=create_test_user,
role=role,
tenant_id=tenant.id,
)
api_key, raw_key = TenantAPIKey.objects.create_api_key(
name="Revoked Test Key",
tenant_id=tenant.id,
entity=create_test_user,
)
api_key.revoked = True
api_key.save()
api_key_headers = get_api_key_header(raw_key)
response = client.get(reverse("provider-list"), headers=api_key_headers)
assert response.status_code == 401
assert "revoked" in response.json()["errors"][0]["detail"].lower()
@pytest.mark.django_db
class TestAPIKeyMultiTenantWorkflows:
"""Test complete multi-tenant workflows using API keys.
These integration tests verify end-to-end scenarios where API keys
are used across different tenants and ensure proper isolation.
"""
def test_user_with_multiple_tenant_memberships_api_keys(self, tenants_fixture):
"""User with memberships in multiple tenants can use different API keys.
Tests that a user can have separate API keys for different tenants
and each key only accesses resources in its tenant.
"""
client = APIClient()
user = User.objects.create_user(
name="multi_tenant_user",
email="multitenant@test.com",
password=TEST_PASSWORD,
)
tenant1 = tenants_fixture[0]
tenant2 = tenants_fixture[1]
Membership.objects.create(user=user, tenant=tenant1)
Membership.objects.create(user=user, tenant=tenant2)
role1 = Role.objects.create(
tenant_id=tenant1.id,
name="Multi Tenant Role 1",
unlimited_visibility=True,
manage_account=True,
)
role2 = Role.objects.create(
tenant_id=tenant2.id,
name="Multi Tenant Role 2",
unlimited_visibility=True,
manage_account=True,
)
UserRoleRelationship.objects.create(
user=user,
role=role1,
tenant_id=tenant1.id,
)
UserRoleRelationship.objects.create(
user=user,
role=role2,
tenant_id=tenant2.id,
)
key1, raw_key1 = TenantAPIKey.objects.create_api_key(
name="Tenant 1 Key",
tenant_id=tenant1.id,
entity=user,
)
key2, raw_key2 = TenantAPIKey.objects.create_api_key(
name="Tenant 2 Key",
tenant_id=tenant2.id,
entity=user,
)
headers1 = get_api_key_header(raw_key1)
headers2 = get_api_key_header(raw_key2)
response1 = client.get(reverse("provider-list"), headers=headers1)
response2 = client.get(reverse("provider-list"), headers=headers2)
assert response1.status_code == 200
assert response2.status_code == 200
me_response1 = client.get(reverse("user-me"), headers=headers1)
me_response2 = client.get(reverse("user-me"), headers=headers2)
assert me_response1.status_code == 200
assert me_response2.status_code == 200
assert me_response1.json()["data"]["id"] == str(user.id)
assert me_response2.json()["data"]["id"] == str(user.id)
def test_api_key_cannot_access_different_tenant_resources(
self, tenants_fixture, providers_fixture
):
"""API key from one tenant cannot access resources from another tenant.
Verifies RLS enforcement after authentication ensures tenant isolation.
"""
client = APIClient()
user1 = User.objects.create_user(
name="tenant1_user",
email="tenant1user@test.com",
password=TEST_PASSWORD,
)
user2 = User.objects.create_user(
name="tenant2_user",
email="tenant2user@test.com",
password=TEST_PASSWORD,
)
tenant1 = tenants_fixture[0]
tenant2 = tenants_fixture[1]
Membership.objects.create(user=user1, tenant=tenant1)
Membership.objects.create(user=user2, tenant=tenant2)
role1 = Role.objects.create(
tenant_id=tenant1.id,
name="Isolation Test Role 1",
unlimited_visibility=True,
manage_account=True,
)
role2 = Role.objects.create(
tenant_id=tenant2.id,
name="Isolation Test Role 2",
unlimited_visibility=True,
manage_account=True,
)
UserRoleRelationship.objects.create(
user=user1,
role=role1,
tenant_id=tenant1.id,
)
UserRoleRelationship.objects.create(
user=user2,
role=role2,
tenant_id=tenant2.id,
)
key1, raw_key1 = TenantAPIKey.objects.create_api_key(
name="Isolation Key 1",
tenant_id=tenant1.id,
entity=user1,
)
headers1 = get_api_key_header(raw_key1)
provider_response = client.get(reverse("provider-list"), headers=headers1)
assert provider_response.status_code == 200
providers_data = provider_response.json()["data"]
if providers_data:
for provider in providers_data:
provider_tenant_id = str(tenants_fixture[0].id)
assert str(tenant2.id) != provider_tenant_id
def test_api_key_workflow_create_authenticate_revoke(
self, create_test_user_rbac, tenants_fixture
):
"""Complete workflow: create API key via JWT, use it, then revoke via JWT.
Tests the full lifecycle using both JWT and API key authentication.
"""
client = APIClient()
tenants_fixture[0]
jwt_access_token, _ = get_api_tokens(
client, create_test_user_rbac.email, TEST_PASSWORD
)
jwt_headers = get_authorization_header(jwt_access_token)
create_response = client.post(
reverse("api-key-list"),
data={
"data": {
"type": "api-keys",
"attributes": {
"name": "Workflow Test Key",
},
}
},
format="vnd.api+json",
headers=jwt_headers,
)
assert create_response.status_code == 201
api_key_data = create_response.json()["data"]
api_key_id = api_key_data["id"]
raw_api_key = api_key_data["attributes"]["api_key"]
api_key_headers = get_api_key_header(raw_api_key)
auth_response = client.get(reverse("provider-list"), headers=api_key_headers)
assert auth_response.status_code == 200
revoke_response = client.delete(
reverse("api-key-revoke", kwargs={"pk": api_key_id}),
headers=jwt_headers,
)
assert revoke_response.status_code == 200
revoked_auth_response = client.get(
reverse("provider-list"), headers=api_key_headers
)
assert revoked_auth_response.status_code == 401
assert "revoked" in revoked_auth_response.json()["errors"][0]["detail"].lower()
def get_api_key_header(api_key: str) -> dict:
"""Helper to create API key authorization header."""
return {"Authorization": f"Api-Key {api_key}"}
@@ -0,0 +1,384 @@
import time
from datetime import datetime, timedelta, timezone
from unittest.mock import patch
from uuid import uuid4
import pytest
from django.test import RequestFactory
from rest_framework.exceptions import AuthenticationFailed
from api.authentication import TenantAPIKeyAuthentication
from api.db_router import MainRouter
from api.models import TenantAPIKey
@pytest.mark.django_db
class TestTenantAPIKeyAuthentication:
@pytest.fixture
def auth_backend(self):
"""Create an instance of TenantAPIKeyAuthentication."""
return TenantAPIKeyAuthentication()
@pytest.fixture
def request_factory(self):
"""Create a Django request factory."""
return RequestFactory()
def test_authenticate_credentials_uses_admin_database(
self, auth_backend, api_keys_fixture, request_factory
):
"""Test that _authenticate_credentials routes queries to admin database."""
api_key = api_keys_fixture[0]
raw_key = api_key._raw_key
# Extract the encrypted key part (after the prefix and separator)
_, encrypted_key = raw_key.split(TenantAPIKey.objects.separator, 1)
# Create a mock request
request = request_factory.get("/")
# Call the method
entity, auth_dict = auth_backend._authenticate_credentials(
request, encrypted_key
)
# Verify that the entity is the user associated with the API key
assert entity == api_key.entity
assert entity.id == api_key.entity.id
def test_authenticate_credentials_restores_manager_on_success(
self, auth_backend, api_keys_fixture, request_factory
):
"""Test that the manager is restored after successful authentication."""
api_key = api_keys_fixture[0]
raw_key = api_key._raw_key
_, encrypted_key = raw_key.split(TenantAPIKey.objects.separator, 1)
# Store the original manager
original_manager = TenantAPIKey.objects
request = request_factory.get("/")
# Call the method
auth_backend._authenticate_credentials(request, encrypted_key)
# Verify the manager was restored
assert TenantAPIKey.objects == original_manager
def test_authenticate_credentials_restores_manager_on_exception(
self, auth_backend, request_factory
):
"""Test that the manager is restored even when an exception occurs."""
# Store the original manager
original_manager = TenantAPIKey.objects
request = request_factory.get("/")
# Try to authenticate with an invalid key that will raise an exception
with pytest.raises(Exception):
auth_backend._authenticate_credentials(request, "invalid_encrypted_key")
# Verify the manager was restored despite the exception
assert TenantAPIKey.objects == original_manager
def test_authenticate_valid_api_key(
self, auth_backend, api_keys_fixture, request_factory
):
"""Test successful authentication with a valid API key."""
api_key = api_keys_fixture[0]
raw_key = api_key._raw_key
# Create a request with the API key in the Authorization header
request = request_factory.get("/")
request.META["HTTP_AUTHORIZATION"] = f"Api-Key {raw_key}"
# Authenticate
entity, auth_dict = auth_backend.authenticate(request)
# Verify the entity and auth dict
assert entity == api_key.entity
assert auth_dict["tenant_id"] == str(api_key.tenant_id)
assert auth_dict["sub"] == str(api_key.entity.id)
assert auth_dict["api_key_prefix"] == api_key.prefix
# Verify that last_used_at was updated
api_key.refresh_from_db()
assert api_key.last_used_at is not None
assert (datetime.now(timezone.utc) - api_key.last_used_at).seconds < 5
def test_authenticate_valid_api_key_uses_admin_database(
self, auth_backend, api_keys_fixture, request_factory
):
"""Test that authenticate uses admin database for API key lookup."""
api_key = api_keys_fixture[0]
raw_key = api_key._raw_key
request = request_factory.get("/")
request.META["HTTP_AUTHORIZATION"] = f"Api-Key {raw_key}"
# Mock the manager's using method to verify it's called with admin_db
with patch.object(
TenantAPIKey.objects, "using", wraps=TenantAPIKey.objects.using
) as mock_using:
auth_backend.authenticate(request)
# Verify that .using('admin') was called
mock_using.assert_called_with(MainRouter.admin_db)
def test_authenticate_invalid_key_format_missing_separator(
self, auth_backend, request_factory
):
"""Test authentication fails with invalid API key format (no separator)."""
request = request_factory.get("/")
request.META["HTTP_AUTHORIZATION"] = "Api-Key invalid_key_no_separator"
with pytest.raises(AuthenticationFailed) as exc_info:
auth_backend.authenticate(request)
assert str(exc_info.value.detail) == "Invalid API Key."
def test_authenticate_invalid_key_format_empty_prefix(
self, auth_backend, request_factory
):
"""Test authentication fails with empty prefix."""
request = request_factory.get("/")
request.META["HTTP_AUTHORIZATION"] = "Api-Key .encrypted_part"
with pytest.raises(AuthenticationFailed) as exc_info:
auth_backend.authenticate(request)
assert str(exc_info.value.detail) == "Invalid API Key."
def test_authenticate_invalid_encrypted_key(
self, auth_backend, api_keys_fixture, request_factory
):
"""Test authentication fails with invalid encrypted key."""
api_key = api_keys_fixture[0]
# Create an invalid key with valid prefix but invalid encryption
invalid_key = f"{api_key.prefix}.invalid_encrypted_data"
request = request_factory.get("/")
request.META["HTTP_AUTHORIZATION"] = f"Api-Key {invalid_key}"
with pytest.raises(AuthenticationFailed) as exc_info:
auth_backend.authenticate(request)
assert str(exc_info.value.detail) == "Invalid API Key."
def test_authenticate_revoked_api_key(
self, auth_backend, api_keys_fixture, request_factory
):
"""Test authentication fails with a revoked API key."""
# Use the revoked API key (index 2 from fixture)
api_key = api_keys_fixture[2]
raw_key = api_key._raw_key
request = request_factory.get("/")
request.META["HTTP_AUTHORIZATION"] = f"Api-Key {raw_key}"
# The revoked key should fail during credential validation
with pytest.raises(AuthenticationFailed) as exc_info:
auth_backend.authenticate(request)
assert str(exc_info.value.detail) == "This API Key has been revoked."
def test_authenticate_expired_api_key(
self, auth_backend, create_test_user, tenants_fixture, request_factory
):
"""Test authentication fails with an expired API key."""
tenant = tenants_fixture[0]
user = create_test_user
# Create an expired API key
api_key, raw_key = TenantAPIKey.objects.create_api_key(
name="Expired API Key",
tenant_id=tenant.id,
entity=user,
expiry_date=datetime.now(timezone.utc) - timedelta(days=1),
)
request = request_factory.get("/")
request.META["HTTP_AUTHORIZATION"] = f"Api-Key {raw_key}"
with pytest.raises(AuthenticationFailed) as exc_info:
auth_backend.authenticate(request)
assert str(exc_info.value.detail) == "API Key has already expired."
def test_authenticate_nonexistent_api_key(
self, auth_backend, api_keys_fixture, request_factory
):
"""Test authentication fails when API key doesn't exist in database."""
# Create a valid-looking encrypted key with a non-existent UUID
api_key = api_keys_fixture[0]
non_existent_uuid = str(uuid4())
# Manually create an encrypted key with a non-existent ID
payload = {
"_pk": non_existent_uuid,
"_exp": (datetime.now(timezone.utc) + timedelta(days=30)).timestamp(),
}
encrypted_key = auth_backend.key_crypto.generate(payload)
fake_key = f"{api_key.prefix}.{encrypted_key}"
request = request_factory.get("/")
request.META["HTTP_AUTHORIZATION"] = f"Api-Key {fake_key}"
with pytest.raises(AuthenticationFailed) as exc_info:
auth_backend.authenticate(request)
assert str(exc_info.value.detail) == "No entity matching this api key."
def test_authenticate_updates_last_used_at(
self, auth_backend, api_keys_fixture, request_factory
):
"""Test that last_used_at is updated on successful authentication."""
api_key = api_keys_fixture[0]
raw_key = api_key._raw_key
# Store the original last_used_at
original_last_used = api_key.last_used_at
request = request_factory.get("/")
request.META["HTTP_AUTHORIZATION"] = f"Api-Key {raw_key}"
# Authenticate
auth_backend.authenticate(request)
# Refresh from database
api_key.refresh_from_db()
# Verify last_used_at was updated
assert api_key.last_used_at is not None
if original_last_used:
assert api_key.last_used_at > original_last_used
def test_authenticate_saves_to_admin_database(
self, auth_backend, api_keys_fixture, request_factory
):
"""Test that the API key save operation uses admin database."""
api_key = api_keys_fixture[0]
raw_key = api_key._raw_key
request = request_factory.get("/")
request.META["HTTP_AUTHORIZATION"] = f"Api-Key {raw_key}"
# Mock the save method to verify it's called with using='admin'
with patch.object(TenantAPIKey, "save") as mock_save:
auth_backend.authenticate(request)
# Verify save was called with using=admin_db
mock_save.assert_called_once_with(
update_fields=["last_used_at"], using=MainRouter.admin_db
)
def test_authenticate_returns_correct_auth_dict(
self, auth_backend, api_keys_fixture, request_factory
):
"""Test that the auth dict contains all required fields."""
api_key = api_keys_fixture[0]
raw_key = api_key._raw_key
request = request_factory.get("/")
request.META["HTTP_AUTHORIZATION"] = f"Api-Key {raw_key}"
entity, auth_dict = auth_backend.authenticate(request)
# Verify all required fields are present
assert "tenant_id" in auth_dict
assert "sub" in auth_dict
assert "api_key_prefix" in auth_dict
# Verify values are correct
assert auth_dict["tenant_id"] == str(api_key.tenant_id)
assert auth_dict["sub"] == str(api_key.entity.id)
assert auth_dict["api_key_prefix"] == api_key.prefix
def test_authenticate_with_multiple_api_keys_same_tenant(
self, auth_backend, api_keys_fixture, request_factory
):
"""Test that authentication works correctly with multiple API keys for the same tenant."""
# Test with first API key
api_key1 = api_keys_fixture[0]
raw_key1 = api_key1._raw_key
request1 = request_factory.get("/")
request1.META["HTTP_AUTHORIZATION"] = f"Api-Key {raw_key1}"
entity1, auth_dict1 = auth_backend.authenticate(request1)
assert entity1 == api_key1.entity
assert auth_dict1["api_key_prefix"] == api_key1.prefix
# Test with second API key
api_key2 = api_keys_fixture[1]
raw_key2 = api_key2._raw_key
request2 = request_factory.get("/")
request2.META["HTTP_AUTHORIZATION"] = f"Api-Key {raw_key2}"
entity2, auth_dict2 = auth_backend.authenticate(request2)
assert entity2 == api_key2.entity
assert auth_dict2["api_key_prefix"] == api_key2.prefix
# Verify they're different keys but same tenant
assert auth_dict1["api_key_prefix"] != auth_dict2["api_key_prefix"]
assert auth_dict1["tenant_id"] == auth_dict2["tenant_id"]
def test_authenticate_with_wrong_prefix_in_db(
self, auth_backend, api_keys_fixture, request_factory
):
"""Test authentication fails when prefix doesn't match database."""
api_key = api_keys_fixture[0]
raw_key = api_key._raw_key
# Extract the encrypted part and combine with wrong prefix
_, encrypted_part = raw_key.split(TenantAPIKey.objects.separator, 1)
wrong_key = f"pk_wrong123.{encrypted_part}"
request = request_factory.get("/")
request.META["HTTP_AUTHORIZATION"] = f"Api-Key {wrong_key}"
with pytest.raises(AuthenticationFailed) as exc_info:
auth_backend.authenticate(request)
assert str(exc_info.value.detail) == "Invalid API Key."
def test_authenticate_credentials_exception_handling(
self, auth_backend, request_factory
):
"""Test that exceptions in _authenticate_credentials are properly handled."""
request = request_factory.get("/")
# Test with completely invalid data that will cause InvalidToken
with pytest.raises(Exception):
auth_backend._authenticate_credentials(request, "completely_invalid")
def test_authenticate_with_expired_timestamp(
self, auth_backend, create_test_user, tenants_fixture, request_factory
):
"""Test that expired timestamp in encrypted key causes authentication failure."""
tenant = tenants_fixture[0]
user = create_test_user
# Create an API key with a very short expiry
api_key, raw_key = TenantAPIKey.objects.create_api_key(
name="Short-lived API Key",
tenant_id=tenant.id,
entity=user,
expiry_date=datetime.now(timezone.utc) + timedelta(seconds=1),
)
# Wait for the key to expire
time.sleep(2)
request = request_factory.get("/")
request.META["HTTP_AUTHORIZATION"] = f"Api-Key {raw_key}"
# Should fail with expired key
with pytest.raises(AuthenticationFailed) as exc_info:
auth_backend.authenticate(request)
assert str(exc_info.value.detail) == "API Key has already expired."
File diff suppressed because it is too large
@@ -0,0 +1,13 @@
import re
from rest_framework_json_api import serializers
class OpenAICredentialsSerializer(serializers.Serializer):
api_key = serializers.CharField()
def validate_api_key(self, value: str) -> str:
pattern = r"^sk-[\w-]+$"
if not re.match(pattern, value or ""):
raise serializers.ValidationError("Invalid OpenAI API key format.")
return value
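Compared to the pattern removed from the model, this serializer relaxes the check to any sk--prefixed token. A hypothetical usage inside the API's test environment:
from api.v1.serializer_utils.lighthouse import OpenAICredentialsSerializer
assert OpenAICredentialsSerializer(data={"api_key": "sk-abc123"}).is_valid()
assert not OpenAICredentialsSerializer(data={"api_key": "not-a-key"}).is_valid()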
@@ -105,23 +105,25 @@ from rest_framework_json_api import serializers
"type": "string",
"description": "The Azure application (client) ID for authentication in Azure AD.",
},
"client_secret": {
"type": "string",
"description": "The client secret associated with the application (client) ID, providing "
"secure access.",
},
"tenant_id": {
"type": "string",
"description": "The Azure tenant ID, representing the directory where the application is "
"registered.",
},
"client_secret": {
"type": "string",
"description": "The client secret associated with the application (client) ID, providing "
"secure access.",
},
"user": {
"type": "email",
"description": "User microsoft email address.",
"deprecated": True,
},
"password": {
"type": "string",
"description": "User password.",
"deprecated": True,
},
},
"required": [
@@ -132,6 +134,30 @@ from rest_framework_json_api import serializers
"password",
],
},
{
"type": "object",
"title": "M365 Certificate Credentials",
"properties": {
"client_id": {
"type": "string",
"description": "The Azure application (client) ID for authentication in Azure AD.",
},
"tenant_id": {
"type": "string",
"description": "The Azure tenant ID, representing the directory where the application is "
"registered.",
},
"certificate_content": {
"type": "string",
"description": "The certificate content in base64 format for certificate-based authentication.",
},
},
"required": [
"client_id",
"tenant_id",
"certificate_content",
],
},
{
"type": "object",
"title": "GCP Static Credentials",
+468 -1
View File
@@ -1,3 +1,4 @@
import base64
import json
from datetime import datetime, timedelta, timezone
@@ -5,8 +6,10 @@ from django.conf import settings
from django.contrib.auth import authenticate
from django.contrib.auth.models import update_last_login
from django.contrib.auth.password_validation import validate_password
from django.db import IntegrityError
from drf_spectacular.utils import extend_schema_field
from jwt.exceptions import InvalidKeyError
from rest_framework.reverse import reverse
from rest_framework.validators import UniqueTogetherValidator
from rest_framework_json_api import serializers
from rest_framework_json_api.relations import SerializerMethodResourceRelatedField
@@ -24,6 +27,9 @@ from api.models import (
Invitation,
InvitationRoleRelationship,
LighthouseConfiguration,
LighthouseProviderConfiguration,
LighthouseProviderModels,
LighthouseTenantConfiguration,
Membership,
Processor,
Provider,
@@ -53,6 +59,7 @@ from api.v1.serializer_utils.integrations import (
S3ConfigSerializer,
SecurityHubConfigSerializer,
)
from api.v1.serializer_utils.lighthouse import OpenAICredentialsSerializer
from api.v1.serializer_utils.processors import ProcessorConfigField
from api.v1.serializer_utils.providers import ProviderSecretField
from prowler.lib.mutelist.mutelist import Mutelist
@@ -317,6 +324,23 @@ class UserSerializer(BaseSerializerV1):
)
class UserIncludeSerializer(UserSerializer):
class Meta:
model = User
fields = [
"id",
"name",
"email",
"company_name",
"date_joined",
"roles",
]
included_serializers = {
"roles": "api.v1.serializers.RoleIncludeSerializer",
}
class UserCreateSerializer(BaseWriteSerializer):
password = serializers.CharField(write_only=True)
company_name = serializers.CharField(required=False)
@@ -1378,10 +1402,38 @@ class AzureProviderSecret(serializers.Serializer):
class M365ProviderSecret(serializers.Serializer):
client_id = serializers.CharField()
client_secret = serializers.CharField()
client_secret = serializers.CharField(required=False)
tenant_id = serializers.CharField()
user = serializers.EmailField(required=False)
password = serializers.CharField(required=False)
certificate_content = serializers.CharField(required=False)
def validate(self, attrs):
if attrs.get("client_secret") and attrs.get("certificate_content"):
raise serializers.ValidationError(
"You cannot provide both client_secret and certificate_content."
)
if not attrs.get("client_secret") and not attrs.get("certificate_content"):
raise serializers.ValidationError(
"You must provide either client_secret or certificate_content."
)
return super().validate(attrs)
def validate_certificate_content(self, certificate_content):
"""Validate that M365 certificate content is valid base64 encoded data."""
if certificate_content:
try:
base64.b64decode(certificate_content, validate=True)
except Exception as e:
raise ValidationError(
{
"certificate_content": [
f"The provided certificate content is not valid base64 encoded data: {str(e)}"
]
},
code="m365-certificate-content",
)
return certificate_content
class Meta:
resource_name = "provider-secrets"
@@ -1974,6 +2026,17 @@ class ComplianceOverviewDetailSerializer(serializers.Serializer):
resource_name = "compliance-requirements-details"
class ComplianceOverviewDetailThreatscoreSerializer(ComplianceOverviewDetailSerializer):
"""
Serializer for detailed compliance requirement information for Threatscore.
Includes additional fields specific to the Threatscore framework.
"""
passed_findings = serializers.IntegerField()
total_findings = serializers.IntegerField()
class ComplianceOverviewAttributesSerializer(serializers.Serializer):
id = serializers.CharField()
compliance_name = serializers.CharField()
@@ -2693,6 +2756,16 @@ class LighthouseConfigCreateSerializer(RLSSerializer, BaseWriteSerializer):
"updated_at": {"read_only": True},
}
def validate_temperature(self, value):
if not 0 <= value <= 1:
raise ValidationError("Temperature must be between 0 and 1.")
return value
def validate_max_tokens(self, value):
if not 500 <= value <= 5000:
raise ValidationError("Max tokens must be between 500 and 5000.")
return value
def validate(self, attrs):
tenant_id = self.context.get("request").tenant_id
if LighthouseConfiguration.objects.filter(tenant_id=tenant_id).exists():
@@ -2701,6 +2774,11 @@ class LighthouseConfigCreateSerializer(RLSSerializer, BaseWriteSerializer):
"tenant_id": "Lighthouse configuration already exists for this tenant."
}
)
api_key = attrs.get("api_key")
if api_key is not None:
OpenAICredentialsSerializer(data={"api_key": api_key}).is_valid(
raise_exception=True
)
return super().validate(attrs)
def create(self, validated_data):
@@ -2745,6 +2823,24 @@ class LighthouseConfigUpdateSerializer(BaseWriteSerializer):
"max_tokens": {"required": False},
}
def validate_temperature(self, value):
if not 0 <= value <= 1:
raise ValidationError("Temperature must be between 0 and 1.")
return value
def validate_max_tokens(self, value):
if not 500 <= value <= 5000:
raise ValidationError("Max tokens must be between 500 and 5000.")
return value
def validate(self, attrs):
api_key = attrs.get("api_key", None)
if api_key is not None:
OpenAICredentialsSerializer(data={"api_key": api_key}).is_valid(
raise_exception=True
)
return super().validate(attrs)
def update(self, instance, validated_data):
api_key = validated_data.pop("api_key", None)
instance = super().update(instance, validated_data)
@@ -2779,6 +2875,10 @@ class TenantApiKeySerializer(RLSSerializer):
"entity",
]
included_serializers = {
"entity": "api.v1.serializers.UserIncludeSerializer",
}
class TenantApiKeyCreateSerializer(RLSSerializer, BaseWriteSerializer):
"""Serializer for creating new API keys."""
@@ -2811,6 +2911,13 @@ class TenantApiKeyCreateSerializer(RLSSerializer, BaseWriteSerializer):
"api_key": {"read_only": True},
}
def validate_name(self, value):
"""Validate that the name is unique within the tenant."""
tenant_id = self.context.get("tenant_id")
if TenantAPIKey.objects.filter(tenant_id=tenant_id, name=value).exists():
raise ValidationError("An API key with this name already exists.")
return value
def get_api_key(self, obj):
"""Return the raw API key if it was stored during creation."""
return getattr(obj, "_raw_api_key", None)
@@ -2852,3 +2959,363 @@ class TenantApiKeyUpdateSerializer(RLSSerializer, BaseWriteSerializer):
"inserted_at": {"read_only": True},
"last_used_at": {"read_only": True},
}
def validate_name(self, value):
"""Validate that the name is unique within the tenant, excluding current instance."""
tenant_id = self.context.get("tenant_id")
if (
TenantAPIKey.objects.filter(tenant_id=tenant_id, name=value)
.exclude(id=self.instance.id)
.exists()
):
raise ValidationError("An API key with this name already exists.")
return value
# Lighthouse: Provider configurations
class LighthouseProviderConfigSerializer(RLSSerializer):
"""
Read serializer for LighthouseProviderConfiguration.
"""
# Decrypted credentials are only returned in to_representation when requested
credentials = serializers.JSONField(required=False, read_only=True)
class Meta:
model = LighthouseProviderConfiguration
fields = [
"id",
"inserted_at",
"updated_at",
"provider_type",
"base_url",
"is_active",
"credentials",
"url",
]
extra_kwargs = {
"id": {"read_only": True},
"inserted_at": {"read_only": True},
"updated_at": {"read_only": True},
"is_active": {"read_only": True},
"url": {"read_only": True, "view_name": "lighthouse-providers-detail"},
}
class JSONAPIMeta:
resource_name = "lighthouse-providers"
def to_representation(self, instance):
data = super().to_representation(instance)
# Support JSON:API fields filter: fields[lighthouse-providers]=credentials,base_url
fields_param = self.context.get("request", None) and self.context[
"request"
].query_params.get("fields[lighthouse-providers]", "")
creds = instance.credentials_decoded
requested_fields = (
[f.strip() for f in fields_param.split(",")] if fields_param else []
)
if "credentials" in requested_fields:
# Return full decrypted credentials JSON
data["credentials"] = creds
else:
# Return masked credentials by default
def mask_value(value):
if isinstance(value, str):
return "*" * len(value)
if isinstance(value, dict):
return {k: mask_value(v) for k, v in value.items()}
if isinstance(value, list):
return [mask_value(v) for v in value]
return value
# Always return masked credentials, even if creds is None
if creds is not None:
data["credentials"] = mask_value(creds)
else:
# If credentials_decoded returns None, return None for credentials field
data["credentials"] = None
return data
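# Sketch of the default masking applied by to_representation above for the
# flat-string case (the credential value is illustrative):
#   creds = {"api_key": "sk-test-1234"}
#   masked = {key: "*" * len(value) for key, value in creds.items()}  # -> {"api_key": "************"}
# The decrypted credentials are only included when the request carries
# ?fields[lighthouse-providers]=credentials (JSON:API sparse fieldsets).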
class LighthouseProviderConfigCreateSerializer(RLSSerializer, BaseWriteSerializer):
"""
Create serializer for LighthouseProviderConfiguration.
Accepts credentials as JSON; stored encrypted via credentials_decoded.
"""
credentials = serializers.JSONField(write_only=True, required=True)
class Meta:
model = LighthouseProviderConfiguration
fields = [
"provider_type",
"base_url",
"credentials",
"is_active",
]
extra_kwargs = {
"is_active": {"required": False},
"base_url": {"required": False, "allow_null": True},
}
def create(self, validated_data):
credentials = validated_data.pop("credentials")
instance = LighthouseProviderConfiguration(**validated_data)
instance.tenant_id = self.context.get("tenant_id")
instance.credentials_decoded = credentials
try:
instance.save()
return instance
except IntegrityError:
raise ValidationError(
{
"provider_type": "Configuration for this provider already exists for the tenant."
}
)
def validate(self, attrs):
provider_type = attrs.get("provider_type")
credentials = attrs.get("credentials") or {}
if provider_type == LighthouseProviderConfiguration.LLMProviderChoices.OPENAI:
try:
OpenAICredentialsSerializer(data=credentials).is_valid(
raise_exception=True
)
except ValidationError as e:
details = e.detail.copy()
for key, value in details.items():
e.detail[f"credentials/{key}"] = value
del e.detail[key]
raise e
return super().validate(attrs)
class LighthouseProviderConfigUpdateSerializer(BaseWriteSerializer):
"""
Update serializer for LighthouseProviderConfiguration.
"""
credentials = serializers.JSONField(write_only=True, required=False)
class Meta:
model = LighthouseProviderConfiguration
fields = [
"id",
"provider_type",
"base_url",
"credentials",
"is_active",
]
extra_kwargs = {
"id": {"read_only": True},
"provider_type": {"read_only": True},
"base_url": {"required": False, "allow_null": True},
"is_active": {"required": False},
}
def update(self, instance, validated_data):
credentials = validated_data.pop("credentials", None)
for attr, value in validated_data.items():
setattr(instance, attr, value)
if credentials is not None:
instance.credentials_decoded = credentials
instance.save()
return instance
def validate(self, attrs):
provider_type = getattr(self.instance, "provider_type", None)
credentials = attrs.get("credentials", None)
if (
credentials is not None
and provider_type
== LighthouseProviderConfiguration.LLMProviderChoices.OPENAI
):
try:
OpenAICredentialsSerializer(data=credentials).is_valid(
raise_exception=True
)
except ValidationError as e:
details = e.detail.copy()
for key, value in details.items():
e.detail[f"credentials/{key}"] = value
del e.detail[key]
raise e
return super().validate(attrs)
# Lighthouse: Tenant configuration
class LighthouseTenantConfigSerializer(RLSSerializer):
"""
Read serializer for LighthouseTenantConfiguration.
"""
# Build singleton URL without pk
url = serializers.SerializerMethodField()
def get_url(self, obj):
request = self.context.get("request")
return reverse("lighthouse-config", request=request)
class Meta:
model = LighthouseTenantConfiguration
fields = [
"id",
"inserted_at",
"updated_at",
"business_context",
"default_provider",
"default_models",
"url",
]
extra_kwargs = {
"id": {"read_only": True},
"inserted_at": {"read_only": True},
"updated_at": {"read_only": True},
"url": {"read_only": True},
}
class LighthouseTenantConfigUpdateSerializer(BaseWriteSerializer):
class Meta:
model = LighthouseTenantConfiguration
fields = [
"id",
"business_context",
"default_provider",
"default_models",
]
extra_kwargs = {
"id": {"read_only": True},
}
def validate(self, attrs):
request = self.context.get("request")
tenant_id = self.context.get("tenant_id") or (
getattr(request, "tenant_id", None) if request else None
)
default_provider = attrs.get(
"default_provider", getattr(self.instance, "default_provider", "")
)
default_models = attrs.get(
"default_models", getattr(self.instance, "default_models", {})
)
if default_provider:
supported = set(LighthouseProviderConfiguration.LLMProviderChoices.values)
if default_provider not in supported:
raise ValidationError(
{"default_provider": f"Unsupported provider '{default_provider}'."}
)
if not LighthouseProviderConfiguration.objects.filter(
tenant_id=tenant_id, provider_type=default_provider, is_active=True
).exists():
raise ValidationError(
{
"default_provider": f"No active configuration found for '{default_provider}'."
}
)
if default_models is not None and not isinstance(default_models, dict):
raise ValidationError(
{"default_models": "Must be an object mapping provider -> model_id."}
)
for provider_type, model_id in (default_models or {}).items():
provider_cfg = LighthouseProviderConfiguration.objects.filter(
tenant_id=tenant_id, provider_type=provider_type, is_active=True
).first()
if not provider_cfg:
raise ValidationError(
{
"default_models": f"No active configuration for provider '{provider_type}'."
}
)
if not LighthouseProviderModels.objects.filter(
tenant_id=tenant_id,
provider_configuration=provider_cfg,
model_id=model_id,
).exists():
raise ValidationError(
{
"default_models": f"Invalid model '{model_id}' for provider '{provider_type}'."
}
)
return super().validate(attrs)
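# A hypothetical attrs payload the validator above accepts, assuming an active
# OpenAI provider configuration and a catalogued model (both names are placeholders):
#   {"default_provider": "openai", "default_models": {"openai": "gpt-4o-mini"}}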
# Lighthouse: Provider models
class LighthouseProviderModelsSerializer(RLSSerializer):
"""
Read serializer for LighthouseProviderModels.
"""
provider_configuration = serializers.ResourceRelatedField(read_only=True)
class Meta:
model = LighthouseProviderModels
fields = [
"id",
"inserted_at",
"updated_at",
"provider_configuration",
"model_id",
"model_name",
"default_parameters",
"url",
]
extra_kwargs = {
"id": {"read_only": True},
"inserted_at": {"read_only": True},
"updated_at": {"read_only": True},
"url": {"read_only": True, "view_name": "lighthouse-models-detail"},
}
class LighthouseProviderModelsCreateSerializer(RLSSerializer, BaseWriteSerializer):
provider_configuration = serializers.ResourceRelatedField(
queryset=LighthouseProviderConfiguration.objects.all()
)
class Meta:
model = LighthouseProviderModels
fields = [
"provider_configuration",
"model_id",
"default_parameters",
]
extra_kwargs = {
"default_parameters": {"required": False},
}
class LighthouseProviderModelsUpdateSerializer(BaseWriteSerializer):
class Meta:
model = LighthouseProviderModels
fields = [
"id",
"default_parameters",
]
extra_kwargs = {
"id": {"read_only": True},
}
+22 -1

@@ -17,6 +17,9 @@ from api.v1.views import (
InvitationAcceptViewSet,
InvitationViewSet,
LighthouseConfigViewSet,
LighthouseProviderConfigViewSet,
LighthouseProviderModelsViewSet,
LighthouseTenantConfigViewSet,
MembershipViewSet,
OverviewViewSet,
ProcessorViewSet,
@@ -34,12 +37,12 @@ from api.v1.views import (
ScheduleViewSet,
SchemaView,
TaskViewSet,
TenantApiKeyViewSet,
TenantFinishACSView,
TenantMembersViewSet,
TenantViewSet,
UserRoleRelationshipView,
UserViewSet,
TenantApiKeyViewSet,
)
router = routers.DefaultRouter(trailing_slash=False)
@@ -67,6 +70,16 @@ router.register(
basename="lighthouseconfiguration",
)
router.register(r"api-keys", TenantApiKeyViewSet, basename="api-key")
router.register(
r"lighthouse/providers",
LighthouseProviderConfigViewSet,
basename="lighthouse-providers",
)
router.register(
r"lighthouse/models",
LighthouseProviderModelsViewSet,
basename="lighthouse-models",
)
tenants_router = routers.NestedSimpleRouter(router, r"tenants", lookup="tenant")
tenants_router.register(
@@ -137,6 +150,14 @@ urlpatterns = [
),
name="provider_group-providers-relationship",
),
# Lighthouse tenant config as singleton endpoint
path(
"lighthouse/configuration",
LighthouseTenantConfigViewSet.as_view(
{"get": "list", "patch": "partial_update"}
),
name="lighthouse-config",
),
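    # Taken together with the router registrations above, the Lighthouse routes
    # resolve roughly as follows (URL prefix omitted; paths come from the
    # DefaultRouter with trailing_slash=False):
    #   GET, POST            lighthouse/providers
    #   GET, PATCH, DELETE   lighthouse/providers/<id>
    #   POST                 lighthouse/providers/<id>/connection
    #   POST                 lighthouse/providers/<id>/refresh-models
    #   GET                  lighthouse/models
    #   GET, PATCH           lighthouse/configuration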
# API endpoint to start SAML SSO flow
path(
"auth/saml/initiate/", SAMLInitiateAPIView.as_view(), name="api_saml_initiate"
+423 -39
@@ -1,4 +1,6 @@
import fnmatch
import glob
import json
import logging
import os
from datetime import datetime, timedelta, timezone
@@ -59,11 +61,13 @@ from tasks.tasks import (
backfill_scan_resource_summaries_task,
check_integration_connection_task,
check_lighthouse_connection_task,
check_lighthouse_provider_connection_task,
check_provider_connection_task,
delete_provider_task,
delete_tenant_task,
jira_integration_task,
perform_scan_task,
refresh_lighthouse_provider_models_task,
)
from api.base_views import BaseRLSViewSet, BaseTenantViewset, BaseUserViewset
@@ -83,6 +87,8 @@ from api.filters import (
InvitationFilter,
LatestFindingFilter,
LatestResourceFilter,
LighthouseProviderConfigFilter,
LighthouseProviderModelsFilter,
MembershipFilter,
ProcessorFilter,
ProviderFilter,
@@ -105,6 +111,9 @@ from api.models import (
Integration,
Invitation,
LighthouseConfiguration,
LighthouseProviderConfiguration,
LighthouseProviderModels,
LighthouseTenantConfiguration,
Membership,
Processor,
Provider,
@@ -142,6 +151,7 @@ from api.v1.mixins import PaginateByPkMixin, TaskManagementMixin
from api.v1.serializers import (
ComplianceOverviewAttributesSerializer,
ComplianceOverviewDetailSerializer,
ComplianceOverviewDetailThreatscoreSerializer,
ComplianceOverviewMetadataSerializer,
ComplianceOverviewSerializer,
FindingDynamicFilterSerializer,
@@ -158,6 +168,12 @@ from api.v1.serializers import (
LighthouseConfigCreateSerializer,
LighthouseConfigSerializer,
LighthouseConfigUpdateSerializer,
LighthouseProviderConfigCreateSerializer,
LighthouseProviderConfigSerializer,
LighthouseProviderConfigUpdateSerializer,
LighthouseProviderModelsSerializer,
LighthouseTenantConfigSerializer,
LighthouseTenantConfigUpdateSerializer,
MembershipSerializer,
OverviewFindingSerializer,
OverviewProviderSerializer,
@@ -305,7 +321,7 @@ class SchemaView(SpectacularAPIView):
def get(self, request, *args, **kwargs):
spectacular_settings.TITLE = "Prowler API"
spectacular_settings.VERSION = "1.14.0"
spectacular_settings.VERSION = "1.15.0"
spectacular_settings.DESCRIPTION = (
"Prowler API specification.\n\nThis file is auto-generated."
)
@@ -665,36 +681,48 @@ class TenantFinishACSView(FinishACSView):
.get(email_domain=email_domain)
.tenant
)
role_name = (
extra.get("userType", ["no_permissions"])[0].strip()
if extra.get("userType")
else "no_permissions"
# Check if tenant has only one user with MANAGE_ACCOUNT role
users_with_manage_account = (
UserRoleRelationship.objects.using(MainRouter.admin_db)
.filter(role__manage_account=True, tenant_id=tenant.id)
.values("user")
.distinct()
.count()
)
try:
role = Role.objects.using(MainRouter.admin_db).get(
name=role_name, tenant=tenant
# Only apply role mapping from userType if tenant does NOT have exactly one user with MANAGE_ACCOUNT
if users_with_manage_account != 1:
role_name = (
extra.get("userType", ["no_permissions"])[0].strip()
if extra.get("userType")
else "no_permissions"
)
except Role.DoesNotExist:
role = Role.objects.using(MainRouter.admin_db).create(
name=role_name,
tenant=tenant,
manage_users=False,
manage_account=False,
manage_billing=False,
manage_providers=False,
manage_integrations=False,
manage_scans=False,
unlimited_visibility=False,
try:
role = Role.objects.using(MainRouter.admin_db).get(
name=role_name, tenant=tenant
)
except Role.DoesNotExist:
role = Role.objects.using(MainRouter.admin_db).create(
name=role_name,
tenant=tenant,
manage_users=False,
manage_account=False,
manage_billing=False,
manage_providers=False,
manage_integrations=False,
manage_scans=False,
unlimited_visibility=False,
)
UserRoleRelationship.objects.using(MainRouter.admin_db).filter(
user=user,
tenant_id=tenant.id,
).delete()
UserRoleRelationship.objects.using(MainRouter.admin_db).create(
user=user,
role=role,
tenant_id=tenant.id,
)
UserRoleRelationship.objects.using(MainRouter.admin_db).filter(
user=user,
tenant_id=tenant.id,
).delete()
UserRoleRelationship.objects.using(MainRouter.admin_db).create(
user=user,
role=role,
tenant_id=tenant.id,
)
membership, _ = Membership.objects.using(MainRouter.admin_db).get_or_create(
user=user,
tenant=tenant,
@@ -1580,6 +1608,25 @@ class ProviderViewSet(BaseRLSViewSet):
},
request=None,
),
threatscore=extend_schema(
tags=["Scan"],
summary="Retrieve threatscore report",
description="Download a specific threatscore report (e.g., 'prowler_threatscore_aws') as a PDF file.",
request=None,
responses={
200: OpenApiResponse(
description="PDF file containing the threatscore report"
),
202: OpenApiResponse(description="The task is in progress"),
401: OpenApiResponse(
description="API key missing or user not Authenticated"
),
403: OpenApiResponse(description="There is a problem with credentials"),
404: OpenApiResponse(
description="The scan has no threatscore reports, or the threatscore report generation task has not started yet"
),
},
),
)
@method_decorator(CACHE_DECORATOR, name="list")
@method_decorator(CACHE_DECORATOR, name="retrieve")
@@ -1636,6 +1683,9 @@ class ScanViewSet(BaseRLSViewSet):
if hasattr(self, "response_serializer_class"):
return self.response_serializer_class
return ScanComplianceReportSerializer
elif self.action == "threatscore":
if hasattr(self, "response_serializer_class"):
return self.response_serializer_class
return super().get_serializer_class()
def partial_update(self, request, *args, **kwargs):
@@ -1740,7 +1790,18 @@ class ScanViewSet(BaseRLSViewSet):
status=status.HTTP_502_BAD_GATEWAY,
)
contents = resp.get("Contents", [])
keys = [obj["Key"] for obj in contents if obj["Key"].endswith(suffix)]
keys = []
for obj in contents:
key = obj["Key"]
key_basename = os.path.basename(key)
if any(ch in suffix for ch in ("*", "?", "[")):
if fnmatch.fnmatch(key_basename, suffix):
keys.append(key)
elif key_basename == suffix:
keys.append(key)
elif key.endswith(suffix):
# Backward compatibility if suffix already includes directories
keys.append(key)
if not keys:
return Response(
{
@@ -1867,6 +1928,45 @@ class ScanViewSet(BaseRLSViewSet):
content, filename = loader
return self._serve_file(content, filename, "text/csv")
@action(
detail=True,
methods=["get"],
url_name="threatscore",
)
def threatscore(self, request, pk=None):
scan = self.get_object()
running_resp = self._get_task_status(scan)
if running_resp:
return running_resp
if not scan.output_location:
return Response(
{
"detail": "The scan has no reports, or the threatscore report generation task has not started yet."
},
status=status.HTTP_404_NOT_FOUND,
)
if scan.output_location.startswith("s3://"):
bucket = env.str("DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET", "")
key_prefix = scan.output_location.removeprefix(f"s3://{bucket}/")
prefix = os.path.join(
os.path.dirname(key_prefix),
"threatscore",
"*_threatscore_report.pdf",
)
loader = self._load_file(prefix, s3=True, bucket=bucket, list_objects=True)
else:
base = os.path.dirname(scan.output_location)
pattern = os.path.join(base, "threatscore", "*_threatscore_report.pdf")
loader = self._load_file(pattern, s3=False)
if isinstance(loader, Response):
return loader
content, filename = loader
return self._serve_file(content, filename, "application/pdf")
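    # Rough shape of the new report download flow (URL prefix is an assumption;
    # the glob below is the same pattern the key-filtering loop further above uses):
    #   GET /scans/<scan_id>/threatscore
    #       202 -> report generation task still running
    #       404 -> no output_location, or no threatscore report generated yet
    #       200 -> application/pdf attachment
    #   fnmatch.fnmatch("prowler-output-aws_threatscore_report.pdf", "*_threatscore_report.pdf")  # True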
def create(self, request, *args, **kwargs):
input_serializer = self.get_serializer(data=request.data)
input_serializer.is_valid(raise_exception=True)
@@ -3436,15 +3536,16 @@ class ComplianceOverviewViewSet(BaseRLSViewSet, TaskManagementMixin):
)
filtered_queryset = self.filter_queryset(self.get_queryset())
all_requirements = (
filtered_queryset.values(
"requirement_id", "framework", "version", "description"
)
.distinct()
.annotate(
total_instances=Count("id"),
manual_count=Count("id", filter=Q(requirement_status="MANUAL")),
)
all_requirements = filtered_queryset.values(
"requirement_id",
"framework",
"version",
"description",
).annotate(
total_instances=Count("id"),
manual_count=Count("id", filter=Q(requirement_status="MANUAL")),
passed_findings_sum=Sum("passed_findings"),
total_findings_sum=Sum("total_findings"),
)
passed_instances = (
@@ -3463,6 +3564,8 @@ class ComplianceOverviewViewSet(BaseRLSViewSet, TaskManagementMixin):
total_instances = requirement["total_instances"]
passed_count = passed_counts.get(requirement_id, 0)
is_manual = requirement["manual_count"] == total_instances
passed_findings = requirement["passed_findings_sum"] or 0
total_findings = requirement["total_findings_sum"] or 0
if is_manual:
requirement_status = "MANUAL"
elif passed_count == total_instances:
@@ -3477,10 +3580,19 @@ class ComplianceOverviewViewSet(BaseRLSViewSet, TaskManagementMixin):
"version": requirement["version"],
"description": requirement["description"],
"status": requirement_status,
"passed_findings": passed_findings,
"total_findings": total_findings,
}
)
serializer = self.get_serializer(requirements_summary, many=True)
# Use different serializer for threatscore framework
if "threatscore" not in compliance_id:
serializer = self.get_serializer(requirements_summary, many=True)
else:
serializer = ComplianceOverviewDetailThreatscoreSerializer(
requirements_summary, many=True
)
return Response(serializer.data, status=status.HTTP_200_OK)
@action(detail=False, methods=["get"], url_name="attributes")
@@ -4079,21 +4191,25 @@ class IntegrationJiraViewSet(BaseRLSViewSet):
tags=["Lighthouse AI"],
summary="List all Lighthouse AI configurations",
description="Retrieve a list of all Lighthouse AI configurations.",
deprecated=True,
),
create=extend_schema(
tags=["Lighthouse AI"],
summary="Create a new Lighthouse AI configuration",
description="Create a new Lighthouse AI configuration with the specified details.",
deprecated=True,
),
partial_update=extend_schema(
tags=["Lighthouse AI"],
summary="Partially update a Lighthouse AI configuration",
description="Update certain fields of an existing Lighthouse AI configuration.",
deprecated=True,
),
destroy=extend_schema(
tags=["Lighthouse AI"],
summary="Delete a Lighthouse AI configuration",
description="Remove a Lighthouse AI configuration by its ID.",
deprecated=True,
),
connection=extend_schema(
tags=["Lighthouse AI"],
@@ -4101,6 +4217,7 @@ class IntegrationJiraViewSet(BaseRLSViewSet):
description="Verify the connection to the OpenAI API for a specific Lighthouse AI configuration.",
request=None,
responses={202: OpenApiResponse(response=TaskSerializer)},
deprecated=True,
),
)
class LighthouseConfigViewSet(BaseRLSViewSet):
@@ -4151,6 +4268,273 @@ class LighthouseConfigViewSet(BaseRLSViewSet):
)
@extend_schema_view(
list=extend_schema(
tags=["Lighthouse AI"],
summary="List all LLM provider configs",
description="Retrieve all LLM provider configurations for the current tenant",
),
retrieve=extend_schema(
tags=["Lighthouse AI"],
summary="Retrieve LLM provider config",
description="Get details for a specific provider configuration in the current tenant.",
),
create=extend_schema(
tags=["Lighthouse AI"],
summary="Create LLM provider config",
description="Create a per-tenant configuration for an LLM provider. Only one configuration per provider type is allowed per tenant.",
),
partial_update=extend_schema(
tags=["Lighthouse AI"],
summary="Update LLM provider config",
description="Partially update a provider configuration (e.g., base_url, is_active).",
),
destroy=extend_schema(
tags=["Lighthouse AI"],
summary="Delete LLM provider config",
description="Delete a provider configuration. Any tenant defaults that reference this provider are cleared during deletion.",
),
)
class LighthouseProviderConfigViewSet(BaseRLSViewSet):
queryset = LighthouseProviderConfiguration.objects.all()
serializer_class = LighthouseProviderConfigSerializer
http_method_names = ["get", "post", "patch", "delete"]
filterset_class = LighthouseProviderConfigFilter
def get_queryset(self):
if getattr(self, "swagger_fake_view", False):
return LighthouseProviderConfiguration.objects.none()
return LighthouseProviderConfiguration.objects.filter(
tenant_id=self.request.tenant_id
)
def get_serializer_class(self):
if self.action == "create":
return LighthouseProviderConfigCreateSerializer
elif self.action == "partial_update":
return LighthouseProviderConfigUpdateSerializer
elif self.action in ["connection", "refresh_models"]:
return TaskSerializer
return super().get_serializer_class()
def create(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
instance = serializer.save()
read_serializer = LighthouseProviderConfigSerializer(
instance, context=self.get_serializer_context()
)
headers = self.get_success_headers(read_serializer.data)
return Response(
data=read_serializer.data,
status=status.HTTP_201_CREATED,
headers=headers,
)
def partial_update(self, request, *args, **kwargs):
instance = self.get_object()
serializer = self.get_serializer(
instance,
data=request.data,
partial=True,
context=self.get_serializer_context(),
)
serializer.is_valid(raise_exception=True)
serializer.save()
read_serializer = LighthouseProviderConfigSerializer(
instance, context=self.get_serializer_context()
)
return Response(data=read_serializer.data, status=status.HTTP_200_OK)
@extend_schema(
tags=["Lighthouse AI"],
summary="Check LLM provider connection",
description="Validate provider credentials asynchronously and toggle is_active.",
request=None,
responses={202: OpenApiResponse(response=TaskSerializer)},
)
@action(detail=True, methods=["post"], url_name="connection")
def connection(self, request, pk=None):
instance = self.get_object()
if (
instance.provider_type
!= LighthouseProviderConfiguration.LLMProviderChoices.OPENAI
):
return Response(
data={
"errors": [{"detail": "Only 'openai' provider supported in MVP"}]
},
status=status.HTTP_400_BAD_REQUEST,
)
with transaction.atomic():
task = check_lighthouse_provider_connection_task.delay(
provider_config_id=str(instance.id), tenant_id=self.request.tenant_id
)
prowler_task = Task.objects.get(id=task.id)
serializer = TaskSerializer(prowler_task)
return Response(
data=serializer.data,
status=status.HTTP_202_ACCEPTED,
headers={
"Content-Location": reverse(
"task-detail", kwargs={"pk": prowler_task.id}
)
},
)
@extend_schema(
tags=["Lighthouse AI"],
summary="Refresh LLM models catalog",
description="Fetch available models for this provider configuration and upsert into catalog.",
request=None,
responses={202: OpenApiResponse(response=TaskSerializer)},
)
@action(
detail=True,
methods=["post"],
url_path="refresh-models",
url_name="refresh-models",
)
def refresh_models(self, request, pk=None):
instance = self.get_object()
if (
instance.provider_type
!= LighthouseProviderConfiguration.LLMProviderChoices.OPENAI
):
return Response(
data={
"errors": [{"detail": "Only 'openai' provider supported in MVP"}]
},
status=status.HTTP_400_BAD_REQUEST,
)
with transaction.atomic():
task = refresh_lighthouse_provider_models_task.delay(
provider_config_id=str(instance.id), tenant_id=self.request.tenant_id
)
prowler_task = Task.objects.get(id=task.id)
serializer = TaskSerializer(prowler_task)
return Response(
data=serializer.data,
status=status.HTTP_202_ACCEPTED,
headers={
"Content-Location": reverse(
"task-detail", kwargs={"pk": prowler_task.id}
)
},
)
@extend_schema_view(
list=extend_schema(
tags=["Lighthouse AI"],
summary="Get Lighthouse AI Tenant config",
description="Retrieve current tenant-level Lighthouse AI settings. Returns a single configuration object.",
),
partial_update=extend_schema(
tags=["Lighthouse AI"],
summary="Update Lighthouse AI Tenant config",
description="Update tenant-level settings. Validates that the default provider is configured and active and that default model IDs exist for the chosen providers. Auto-creates configuration if it doesn't exist.",
),
)
class LighthouseTenantConfigViewSet(BaseRLSViewSet):
"""
Singleton endpoint for tenant-level Lighthouse AI configuration.
This viewset implements a true singleton pattern:
- GET returns the single configuration object (or 404 if not found)
- PATCH updates/creates the configuration (upsert semantics)
- No ID is required in the URL
"""
queryset = LighthouseTenantConfiguration.objects.all()
serializer_class = LighthouseTenantConfigSerializer
http_method_names = ["get", "patch"]
def get_queryset(self):
if getattr(self, "swagger_fake_view", False):
return LighthouseTenantConfiguration.objects.none()
return LighthouseTenantConfiguration.objects.filter(
tenant_id=self.request.tenant_id
)
def get_serializer_class(self):
if self.action == "partial_update":
return LighthouseTenantConfigUpdateSerializer
return super().get_serializer_class()
def get_object(self):
"""Retrieve the singleton instance for the current tenant."""
obj = LighthouseTenantConfiguration.objects.filter(
tenant_id=self.request.tenant_id
).first()
if obj is None:
raise NotFound("Tenant Lighthouse configuration not found")
self.check_object_permissions(self.request, obj)
return obj
def list(self, request, *args, **kwargs):
"""GET endpoint for singleton - returns single object, not an array."""
instance = self.get_object()
serializer = self.get_serializer(instance)
return Response(serializer.data)
def partial_update(self, request, *args, **kwargs):
"""PATCH endpoint for singleton - no pk required. Auto-creates if not exists."""
# Auto-create tenant config if it doesn't exist (upsert semantics)
instance, created = LighthouseTenantConfiguration.objects.get_or_create(
tenant_id=self.request.tenant_id,
defaults={},
)
# Extract attributes from JSON:API payload
try:
payload = json.loads(request.body)
attributes = payload.get("data", {}).get("attributes", {})
except (json.JSONDecodeError, AttributeError):
raise ValidationError("Invalid JSON:API payload")
serializer = self.get_serializer(instance, data=attributes, partial=True)
serializer.is_valid(raise_exception=True)
serializer.save()
read_serializer = LighthouseTenantConfigSerializer(
instance, context=self.get_serializer_context()
)
return Response(read_serializer.data, status=status.HTTP_200_OK)
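A hypothetical PATCH body for the singleton endpoint above; only data.attributes is read, so the JSON:API "type" value shown is an assumption:

payload = {
    "data": {
        "type": "lighthouse-tenant-configurations",
        "attributes": {
            "default_provider": "openai",
            "default_models": {"openai": "gpt-4o-mini"},
        },
    }
}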
@extend_schema_view(
list=extend_schema(
tags=["Lighthouse AI"],
summary="List all LLM models",
description="List available LLM models per configured provider for the current tenant.",
),
retrieve=extend_schema(
tags=["Lighthouse AI"],
summary="Retrieve LLM model details",
description="Get details for a specific LLM model.",
),
)
class LighthouseProviderModelsViewSet(BaseRLSViewSet):
queryset = LighthouseProviderModels.objects.all()
serializer_class = LighthouseProviderModelsSerializer
filterset_class = LighthouseProviderModelsFilter
# Expose as read-only catalog collection
http_method_names = ["get"]
def get_queryset(self):
if getattr(self, "swagger_fake_view", False):
return LighthouseProviderModels.objects.none()
return LighthouseProviderModels.objects.filter(tenant_id=self.request.tenant_id)
def get_serializer_class(self):
return super().get_serializer_class()
@extend_schema_view(
list=extend_schema(
tags=["Processor"],
+23 -8
@@ -5,24 +5,39 @@ DEBUG = env.bool("DJANGO_DEBUG", default=True)
ALLOWED_HOSTS = env.list("DJANGO_ALLOWED_HOSTS", default=["*"])
# Database
default_db_name = env("POSTGRES_DB", default="prowler_db")
default_db_user = env("POSTGRES_USER", default="prowler_user")
default_db_password = env("POSTGRES_PASSWORD", default="prowler")
default_db_host = env("POSTGRES_HOST", default="postgres-db")
default_db_port = env("POSTGRES_PORT", default="5432")
DATABASES = {
"prowler_user": {
"ENGINE": "psqlextra.backend",
"NAME": env("POSTGRES_DB", default="prowler_db"),
"USER": env("POSTGRES_USER", default="prowler_user"),
"PASSWORD": env("POSTGRES_PASSWORD", default="prowler"),
"HOST": env("POSTGRES_HOST", default="postgres-db"),
"PORT": env("POSTGRES_PORT", default="5432"),
"NAME": default_db_name,
"USER": default_db_user,
"PASSWORD": default_db_password,
"HOST": default_db_host,
"PORT": default_db_port,
},
"admin": {
"ENGINE": "psqlextra.backend",
"NAME": env("POSTGRES_DB", default="prowler_db"),
"NAME": default_db_name,
"USER": env("POSTGRES_ADMIN_USER", default="prowler"),
"PASSWORD": env("POSTGRES_ADMIN_PASSWORD", default="S3cret"),
"HOST": env("POSTGRES_HOST", default="postgres-db"),
"PORT": env("POSTGRES_PORT", default="5432"),
"HOST": default_db_host,
"PORT": default_db_port,
},
"replica": {
"ENGINE": "psqlextra.backend",
"NAME": env("POSTGRES_REPLICA_DB", default=default_db_name),
"USER": env("POSTGRES_REPLICA_USER", default=default_db_user),
"PASSWORD": env("POSTGRES_REPLICA_PASSWORD", default=default_db_password),
"HOST": env("POSTGRES_REPLICA_HOST", default=default_db_host),
"PORT": env("POSTGRES_REPLICA_PORT", default=default_db_port),
},
}
DATABASES["default"] = DATABASES["prowler_user"]
REST_FRAMEWORK["DEFAULT_RENDERER_CLASSES"] = tuple( # noqa: F405
+24 -9
@@ -6,22 +6,37 @@ ALLOWED_HOSTS = env.list("DJANGO_ALLOWED_HOSTS", default=["localhost", "127.0.0.
# Database
# TODO Use Django database routers https://docs.djangoproject.com/en/5.0/topics/db/multi-db/#automatic-database-routing
default_db_name = env("POSTGRES_DB")
default_db_user = env("POSTGRES_USER")
default_db_password = env("POSTGRES_PASSWORD")
default_db_host = env("POSTGRES_HOST")
default_db_port = env("POSTGRES_PORT")
DATABASES = {
"prowler_user": {
"ENGINE": "django.db.backends.postgresql",
"NAME": env("POSTGRES_DB"),
"USER": env("POSTGRES_USER"),
"PASSWORD": env("POSTGRES_PASSWORD"),
"HOST": env("POSTGRES_HOST"),
"PORT": env("POSTGRES_PORT"),
"ENGINE": "psqlextra.backend",
"NAME": default_db_name,
"USER": default_db_user,
"PASSWORD": default_db_password,
"HOST": default_db_host,
"PORT": default_db_port,
},
"admin": {
"ENGINE": "psqlextra.backend",
"NAME": env("POSTGRES_DB"),
"NAME": default_db_name,
"USER": env("POSTGRES_ADMIN_USER"),
"PASSWORD": env("POSTGRES_ADMIN_PASSWORD"),
"HOST": env("POSTGRES_HOST"),
"PORT": env("POSTGRES_PORT"),
"HOST": default_db_host,
"PORT": default_db_port,
},
"replica": {
"ENGINE": "psqlextra.backend",
"NAME": env("POSTGRES_REPLICA_DB", default=default_db_name),
"USER": env("POSTGRES_REPLICA_USER", default=default_db_user),
"PASSWORD": env("POSTGRES_REPLICA_PASSWORD", default=default_db_password),
"HOST": env("POSTGRES_REPLICA_HOST", default=default_db_host),
"PORT": env("POSTGRES_REPLICA_PORT", default=default_db_port),
},
}
DATABASES["default"] = DATABASES["prowler_user"]
Binary file not shown (new image, 24 KiB)
+38 -28
@@ -20,6 +20,10 @@ from prowler.lib.outputs.asff.asff import ASFF
from prowler.lib.outputs.compliance.aws_well_architected.aws_well_architected import (
AWSWellArchitected,
)
from prowler.lib.outputs.compliance.c5.c5_aws import AWSC5
from prowler.lib.outputs.compliance.ccc.ccc_aws import CCC_AWS
from prowler.lib.outputs.compliance.ccc.ccc_azure import CCC_Azure
from prowler.lib.outputs.compliance.ccc.ccc_gcp import CCC_GCP
from prowler.lib.outputs.compliance.cis.cis_aws import AWSCIS
from prowler.lib.outputs.compliance.cis.cis_azure import AzureCIS
from prowler.lib.outputs.compliance.cis.cis_gcp import GCPCIS
@@ -73,12 +77,15 @@ COMPLIANCE_CLASS_MAP = {
(lambda name: name.startswith("iso27001_"), AWSISO27001),
(lambda name: name.startswith("kisa"), AWSKISAISMSP),
(lambda name: name == "prowler_threatscore_aws", ProwlerThreatScoreAWS),
(lambda name: name == "ccc_aws", CCC_AWS),
(lambda name: name.startswith("c5_"), AWSC5),
],
"azure": [
(lambda name: name.startswith("cis_"), AzureCIS),
(lambda name: name == "mitre_attack_azure", AzureMitreAttack),
(lambda name: name.startswith("ens_"), AzureENS),
(lambda name: name.startswith("iso27001_"), AzureISO27001),
(lambda name: name == "ccc_azure", CCC_Azure),
(lambda name: name == "prowler_threatscore_azure", ProwlerThreatScoreAzure),
],
"gcp": [
@@ -87,6 +94,7 @@ COMPLIANCE_CLASS_MAP = {
(lambda name: name.startswith("ens_"), GCPENS),
(lambda name: name.startswith("iso27001_"), GCPISO27001),
(lambda name: name == "prowler_threatscore_gcp", ProwlerThreatScoreGCP),
(lambda name: name == "ccc_gcp", CCC_GCP),
],
"kubernetes": [
(lambda name: name.startswith("cis_"), KubernetesCIS),
@@ -175,18 +183,21 @@ def get_s3_client():
return s3_client
def _upload_to_s3(tenant_id: str, zip_path: str, scan_id: str) -> str | None:
def _upload_to_s3(
tenant_id: str, scan_id: str, local_path: str, relative_key: str
) -> str | None:
"""
Upload the specified ZIP file to an S3 bucket.
If the S3 bucket environment variables are not configured,
the function returns None without performing an upload.
Upload a local artifact to an S3 bucket under the tenant/scan prefix.
Args:
tenant_id (str): The tenant identifier, used as part of the S3 key prefix.
zip_path (str): The local file system path to the ZIP file to be uploaded.
scan_id (str): The scan identifier, used as part of the S3 key prefix.
tenant_id (str): The tenant identifier used as the first segment of the S3 key.
scan_id (str): The scan identifier used as the second segment of the S3 key.
local_path (str): Filesystem path to the artifact to upload.
relative_key (str): Object key relative to `<tenant_id>/<scan_id>/`.
Returns:
str: The S3 URI of the uploaded file (e.g., "s3://<bucket>/<key>") if successful.
None: If the required environment variables for the S3 bucket are not set.
str | None: S3 URI of the uploaded artifact, or None if the upload is skipped.
Raises:
botocore.exceptions.ClientError: If the upload attempt to S3 fails for any reason.
"""
@@ -194,34 +205,26 @@ def _upload_to_s3(tenant_id: str, zip_path: str, scan_id: str) -> str | None:
if not bucket:
return
if not relative_key:
return
if not os.path.isfile(local_path):
return
try:
s3 = get_s3_client()
# Upload the ZIP file (outputs) to the S3 bucket
zip_key = f"{tenant_id}/{scan_id}/{os.path.basename(zip_path)}"
s3.upload_file(
Filename=zip_path,
Bucket=bucket,
Key=zip_key,
)
s3_key = f"{tenant_id}/{scan_id}/{relative_key}"
s3.upload_file(Filename=local_path, Bucket=bucket, Key=s3_key)
# Upload the compliance directory to the S3 bucket
compliance_dir = os.path.join(os.path.dirname(zip_path), "compliance")
for filename in os.listdir(compliance_dir):
local_path = os.path.join(compliance_dir, filename)
if not os.path.isfile(local_path):
continue
file_key = f"{tenant_id}/{scan_id}/compliance/{filename}"
s3.upload_file(Filename=local_path, Bucket=bucket, Key=file_key)
return f"s3://{base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET}/{zip_key}"
return f"s3://{base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET}/{s3_key}"
except (ClientError, NoCredentialsError, ParamValidationError, ValueError) as e:
logger.error(f"S3 upload failed: {str(e)}")
def _generate_output_directory(
output_directory, prowler_provider: object, tenant_id: str, scan_id: str
) -> tuple[str, str]:
) -> tuple[str, str, str]:
"""
Generate a file system path for the output directory of a prowler scan.
@@ -248,6 +251,7 @@ def _generate_output_directory(
>>> _generate_output_directory("/tmp", "aws", "tenant-1234", "scan-5678")
'/tmp/tenant-1234/aws/scan-5678/prowler-output-2023-02-15T12:34:56',
'/tmp/tenant-1234/aws/scan-5678/compliance/prowler-output-2023-02-15T12:34:56',
'/tmp/tenant-1234/aws/scan-5678/threatscore/prowler-output-2023-02-15T12:34:56'
"""
# Sanitize the prowler provider name to ensure it is a valid directory name
prowler_provider_sanitized = re.sub(r"[^\w\-]", "-", prowler_provider)
@@ -268,4 +272,10 @@ def _generate_output_directory(
)
os.makedirs("/".join(compliance_path.split("/")[:-1]), exist_ok=True)
return path, compliance_path
threatscore_path = (
f"{output_directory}/{tenant_id}/{scan_id}/threatscore/prowler-output-"
f"{prowler_provider_sanitized}-{timestamp}"
)
os.makedirs("/".join(threatscore_path.split("/")[:-1]), exist_ok=True)
return path, compliance_path, threatscore_path
+2 -1
@@ -5,6 +5,7 @@ from celery.utils.log import get_task_logger
from config.django.base import DJANGO_FINDINGS_BATCH_SIZE
from tasks.utils import batched
from api.db_router import READ_REPLICA_ALIAS
from api.db_utils import rls_transaction
from api.models import Finding, Integration, Provider
from api.utils import initialize_prowler_integration, initialize_prowler_provider
@@ -289,7 +290,7 @@ def upload_security_hub_integration(
has_findings = False
batch_number = 0
with rls_transaction(tenant_id):
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
qs = (
Finding.all_objects.filter(tenant_id=tenant_id, scan_id=scan_id)
.order_by("uid")
@@ -0,0 +1,163 @@
from typing import Dict, Set
import openai
from celery.utils.log import get_task_logger
from api.models import LighthouseProviderConfiguration, LighthouseProviderModels
logger = get_task_logger(__name__)
def _extract_openai_api_key(
provider_cfg: LighthouseProviderConfiguration,
) -> str | None:
"""
Safely extract the OpenAI API key from a provider configuration.
Args:
provider_cfg (LighthouseProviderConfiguration): The provider configuration instance
containing the credentials.
Returns:
str | None: The API key string if present and valid, otherwise None.
"""
creds = provider_cfg.credentials_decoded
if not isinstance(creds, dict):
return None
api_key = creds.get("api_key")
if not isinstance(api_key, str) or not api_key:
return None
return api_key
def check_lighthouse_provider_connection(provider_config_id: str) -> Dict:
"""
Validate a Lighthouse provider configuration by calling the provider API and
toggle its active state accordingly.
Currently supports the OpenAI provider by invoking `models.list` to verify that
the provided credentials are valid.
Args:
provider_config_id (str): The primary key of the `LighthouseProviderConfiguration`
to validate.
Returns:
dict: A result dictionary with the following keys:
- "connected" (bool): Whether the provider credentials are valid.
- "error" (str | None): The error message when not connected, otherwise None.
Side Effects:
- Updates and persists `is_active` on the `LighthouseProviderConfiguration`.
Raises:
LighthouseProviderConfiguration.DoesNotExist: If no configuration exists with the given ID.
"""
provider_cfg = LighthouseProviderConfiguration.objects.get(pk=provider_config_id)
# TODO: Add support for other providers
if (
provider_cfg.provider_type
!= LighthouseProviderConfiguration.LLMProviderChoices.OPENAI
):
return {"connected": False, "error": "Unsupported provider type"}
api_key = _extract_openai_api_key(provider_cfg)
if not api_key:
provider_cfg.is_active = False
provider_cfg.save()
return {"connected": False, "error": "API key is invalid or missing"}
try:
client = openai.OpenAI(api_key=api_key)
_ = client.models.list()
provider_cfg.is_active = True
provider_cfg.save()
return {"connected": True, "error": None}
except Exception as e:
logger.warning("OpenAI connection check failed: %s", str(e))
provider_cfg.is_active = False
provider_cfg.save()
return {"connected": False, "error": str(e)}
def refresh_lighthouse_provider_models(provider_config_id: str) -> Dict:
"""
Refresh the catalog of models for a Lighthouse provider configuration.
For the OpenAI provider, this fetches the current list of models, upserts entries
into `LighthouseProviderModels`, and deletes stale entries no longer returned by
the provider.
Args:
provider_config_id (str): The primary key of the `LighthouseProviderConfiguration`
whose models should be refreshed.
Returns:
dict: A result dictionary with the following keys on success:
- "created" (int): Number of new model rows created.
- "updated" (int): Number of existing model rows updated.
- "deleted" (int): Number of stale model rows removed.
If an error occurs, the dictionary will contain an "error" (str) field instead.
Raises:
LighthouseProviderConfiguration.DoesNotExist: If no configuration exists with the given ID.
"""
provider_cfg = LighthouseProviderConfiguration.objects.get(pk=provider_config_id)
if (
provider_cfg.provider_type
!= LighthouseProviderConfiguration.LLMProviderChoices.OPENAI
):
return {
"created": 0,
"updated": 0,
"deleted": 0,
"error": "Unsupported provider type",
}
api_key = _extract_openai_api_key(provider_cfg)
if not api_key:
return {
"created": 0,
"updated": 0,
"deleted": 0,
"error": "API key is invalid or missing",
}
try:
client = openai.OpenAI(api_key=api_key)
models = client.models.list()
fetched_ids: Set[str] = {m.id for m in getattr(models, "data", [])}
except Exception as e: # noqa: BLE001
logger.warning("OpenAI models refresh failed: %s", str(e))
return {"created": 0, "updated": 0, "deleted": 0, "error": str(e)}
created = 0
updated = 0
for model_id in fetched_ids:
obj, was_created = LighthouseProviderModels.objects.update_or_create(
tenant_id=provider_cfg.tenant_id,
provider_configuration=provider_cfg,
model_id=model_id,
defaults={
"model_name": model_id, # OpenAI doesn't return a separate display name
"default_parameters": {},
},
)
if was_created:
created += 1
else:
updated += 1
# Delete stale models not present anymore
deleted, _ = (
LighthouseProviderModels.objects.filter(
tenant_id=provider_cfg.tenant_id, provider_configuration=provider_cfg
)
.exclude(model_id__in=fetched_ids)
.delete()
)
return {"created": created, "updated": updated, "deleted": deleted}
File diff suppressed because it is too large
+238 -39
@@ -1,8 +1,12 @@
import csv
import io
import json
import time
import uuid
from collections import defaultdict
from copy import deepcopy
from datetime import datetime, timezone
from typing import Any
from celery.utils.log import get_task_logger
from config.settings.celery import CELERY_DEADLOCK_ATTEMPTS
@@ -14,8 +18,11 @@ from api.compliance import (
PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE,
generate_scan_compliance,
)
from api.db_router import READ_REPLICA_ALIAS, MainRouter
from api.db_utils import (
create_objects_in_batches,
POSTGRES_TENANT_VAR,
SET_CONFIG_QUERY,
psycopg_connection,
rls_transaction,
update_objects_in_batches,
)
@@ -40,6 +47,28 @@ from prowler.lib.scan.scan import Scan as ProwlerScan
logger = get_task_logger(__name__)
# Column order must match `ComplianceRequirementOverview` schema in
# `api/models.py`. Keep this list minimal but sufficient to populate all
# non-nullable fields plus the counters we care about.
COMPLIANCE_REQUIREMENT_COPY_COLUMNS = (
"id",
"tenant_id",
"inserted_at",
"compliance_id",
"framework",
"version",
"description",
"region",
"requirement_id",
"requirement_status",
"passed_checks",
"failed_checks",
"total_checks",
"passed_findings",
"total_findings",
"scan_id",
)
def _create_finding_delta(
last_status: FindingStatus | None | str, new_status: FindingStatus | None
@@ -107,6 +136,124 @@ def _store_resources(
return resource_instance, (resource_instance.uid, resource_instance.region)
def _copy_compliance_requirement_rows(
tenant_id: str, rows: list[dict[str, Any]]
) -> None:
"""Stream compliance requirement rows into Postgres using COPY.
We leverage the admin connection (when available) to bypass the COPY + RLS
restriction, writing only the fields required by
``ComplianceRequirementOverview``.
Args:
tenant_id: Target tenant UUID.
rows: List of row dictionaries prepared by
:func:`create_compliance_requirements`.
"""
csv_buffer = io.StringIO()
writer = csv.writer(csv_buffer)
datetime_now = datetime.now(tz=timezone.utc)
for row in rows:
writer.writerow(
[
str(row.get("id")),
str(row.get("tenant_id")),
(row.get("inserted_at") or datetime_now).isoformat(),
row.get("compliance_id") or "",
row.get("framework") or "",
row.get("version") or "",
row.get("description") or "",
row.get("region") or "",
row.get("requirement_id") or "",
row.get("requirement_status") or "",
row.get("passed_checks", 0),
row.get("failed_checks", 0),
row.get("total_checks", 0),
row.get("passed_findings", 0),
row.get("total_findings", 0),
str(row.get("scan_id")),
]
)
csv_buffer.seek(0)
copy_sql = (
"COPY compliance_requirements_overviews ("
+ ", ".join(COMPLIANCE_REQUIREMENT_COPY_COLUMNS)
+ ") FROM STDIN WITH (FORMAT CSV, DELIMITER ',', QUOTE '\"', ESCAPE '\"', NULL '\\N')"
)
try:
with psycopg_connection(MainRouter.admin_db) as connection:
connection.autocommit = False
try:
with connection.cursor() as cursor:
cursor.execute(SET_CONFIG_QUERY, [POSTGRES_TENANT_VAR, tenant_id])
cursor.copy_expert(copy_sql, csv_buffer)
connection.commit()
except Exception:
connection.rollback()
raise
finally:
csv_buffer.close()
def _persist_compliance_requirement_rows(
tenant_id: str, rows: list[dict[str, Any]]
) -> None:
"""Persist compliance requirement rows using COPY with ORM fallback.
Args:
tenant_id: Target tenant UUID.
rows: Precomputed row dictionaries that reflect the compliance
overview state for a scan.
"""
if not rows:
return
try:
_copy_compliance_requirement_rows(tenant_id, rows)
except Exception as error:
logger.exception(
"COPY bulk insert for compliance requirements failed; falling back to ORM bulk_create",
exc_info=error,
)
fallback_objects = [
ComplianceRequirementOverview(
id=row["id"],
tenant_id=row["tenant_id"],
inserted_at=row["inserted_at"],
compliance_id=row["compliance_id"],
framework=row["framework"],
version=row["version"],
description=row["description"],
region=row["region"],
requirement_id=row["requirement_id"],
requirement_status=row["requirement_status"],
passed_checks=row["passed_checks"],
failed_checks=row["failed_checks"],
total_checks=row["total_checks"],
passed_findings=row.get("passed_findings", 0),
total_findings=row.get("total_findings", 0),
scan_id=row["scan_id"],
)
for row in rows
]
with rls_transaction(tenant_id):
ComplianceRequirementOverview.objects.bulk_create(
fallback_objects, batch_size=500
)
def _normalized_compliance_key(framework: str | None, version: str | None) -> str:
"""Return normalized identifier used to group compliance totals."""
normalized_framework = (framework or "").lower().replace("-", "").replace("_", "")
normalized_version = (version or "").lower().replace("-", "").replace("_", "")
return f"{normalized_framework}{normalized_version}"
def perform_prowler_scan(
tenant_id: str,
scan_id: str,
@@ -143,7 +290,7 @@ def perform_prowler_scan(
scan_instance.save()
# Find the mutelist processor if it exists
with rls_transaction(tenant_id):
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
try:
mutelist_processor = Processor.objects.get(
tenant_id=tenant_id, processor_type=Processor.ProcessorChoices.MUTELIST
@@ -272,7 +419,7 @@ def perform_prowler_scan(
unique_resources.add((resource_instance.uid, resource_instance.region))
# Process finding
with rls_transaction(tenant_id):
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
finding_uid = finding.uid
last_first_seen_at = None
if finding_uid not in last_status_cache:
@@ -305,6 +452,12 @@ def perform_prowler_scan(
# If the finding is muted at this time the reason must be the configured Mutelist
muted_reason = "Muted by mutelist" if finding.muted else None
# Increment failed_findings_count cache if the finding status is FAIL and not muted
if status == FindingStatus.FAIL and not finding.muted:
resource_uid = finding.resource_uid
resource_failed_findings_cache[resource_uid] += 1
with rls_transaction(tenant_id):
# Create the finding
finding_instance = Finding.objects.create(
tenant_id=tenant_id,
@@ -325,11 +478,6 @@ def perform_prowler_scan(
)
finding_instance.add_resources([resource_instance])
# Increment failed_findings_count cache if the finding status is FAIL and not muted
if status == FindingStatus.FAIL and not finding.muted:
resource_uid = finding.resource_uid
resource_failed_findings_cache[resource_uid] += 1
# Update scan resource summaries
scan_resource_cache.add(
(
@@ -439,7 +587,7 @@ def aggregate_findings(tenant_id: str, scan_id: str):
- muted_new: Muted findings with a delta of 'new'.
- muted_changed: Muted findings with a delta of 'changed'.
"""
with rls_transaction(tenant_id):
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
findings = Finding.objects.filter(tenant_id=tenant_id, scan_id=scan_id)
aggregation = findings.values(
@@ -582,15 +730,32 @@ def create_compliance_requirements(tenant_id: str, scan_id: str):
ValidationError: If tenant_id is not a valid UUID.
"""
try:
with rls_transaction(tenant_id):
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
scan_instance = Scan.objects.get(pk=scan_id)
provider_instance = scan_instance.provider
prowler_provider = return_prowler_provider(provider_instance)
compliance_template = PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE[
provider_instance.provider
]
modeled_threatscore_compliance_id = "ProwlerThreatScore-1.0"
threatscore_requirements_by_check: dict[str, set[str]] = {}
threatscore_framework = compliance_template.get(
modeled_threatscore_compliance_id
)
if threatscore_framework:
for requirement_id, requirement in threatscore_framework[
"requirements"
].items():
for check_id in requirement["checks"]:
threatscore_requirements_by_check.setdefault(check_id, set()).add(
requirement_id
)
# Get check status data by region from findings
findings = (
Finding.all_objects.filter(scan_id=scan_id, muted=False)
.only("id", "check_id", "status")
.only("id", "check_id", "status", "compliance")
.prefetch_related(
Prefetch(
"resources",
@@ -601,14 +766,36 @@ def create_compliance_requirements(tenant_id: str, scan_id: str):
.iterator(chunk_size=1000)
)
findings_count_by_compliance = {}
check_status_by_region = {}
with rls_transaction(tenant_id):
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
for finding in findings:
for resource in finding.small_resources:
region = resource.region
current_status = check_status_by_region.setdefault(region, {})
if current_status.get(finding.check_id) != "FAIL":
current_status[finding.check_id] = finding.status
if modeled_threatscore_compliance_id in finding.compliance:
for requirement_id in finding.compliance[
modeled_threatscore_compliance_id
]:
compliance_key = findings_count_by_compliance.setdefault(
region, {}
).setdefault(
modeled_threatscore_compliance_id.lower().replace(
"-", ""
),
{},
)
if requirement_id not in compliance_key:
compliance_key[requirement_id] = {
"total": 0,
"pass": 0,
}
compliance_key[requirement_id]["total"] += 1
if finding.status == "PASS":
compliance_key[requirement_id]["pass"] += 1
try:
# Try to get regions from provider
@@ -617,11 +804,6 @@ def create_compliance_requirements(tenant_id: str, scan_id: str):
# If not available, use regions from findings
regions = set(check_status_by_region.keys())
# Get compliance template for the provider
compliance_template = PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE[
provider_instance.provider
]
# Create compliance data by region
compliance_overview_by_region = {
region: deepcopy(compliance_template) for region in regions
@@ -640,36 +822,53 @@ def create_compliance_requirements(tenant_id: str, scan_id: str):
status,
)
# Prepare compliance requirement objects
compliance_requirement_objects = []
# Prepare compliance requirement rows
compliance_requirement_rows: list[dict[str, Any]] = []
utc_datetime_now = datetime.now(tz=timezone.utc)
for region, compliance_data in compliance_overview_by_region.items():
for compliance_id, compliance in compliance_data.items():
modeled_compliance_id = _normalized_compliance_key(
compliance["framework"], compliance["version"]
)
# Create an overview record for each requirement within each compliance framework
for requirement_id, requirement in compliance["requirements"].items():
compliance_requirement_objects.append(
ComplianceRequirementOverview(
tenant_id=tenant_id,
scan=scan_instance,
region=region,
compliance_id=compliance_id,
framework=compliance["framework"],
version=compliance["version"],
requirement_id=requirement_id,
description=requirement["description"],
passed_checks=requirement["checks_status"]["pass"],
failed_checks=requirement["checks_status"]["fail"],
total_checks=requirement["checks_status"]["total"],
requirement_status=requirement["status"],
)
checks_status = requirement["checks_status"]
compliance_requirement_rows.append(
{
"id": uuid.uuid4(),
"tenant_id": tenant_id,
"inserted_at": utc_datetime_now,
"compliance_id": compliance_id,
"framework": compliance["framework"],
"version": compliance["version"] or "",
"description": requirement.get("description") or "",
"region": region,
"requirement_id": requirement_id,
"requirement_status": requirement["status"],
"passed_checks": checks_status["pass"],
"failed_checks": checks_status["fail"],
"total_checks": checks_status["total"],
"scan_id": scan_instance.id,
"passed_findings": findings_count_by_compliance.get(
region, {}
)
.get(modeled_compliance_id, {})
.get(requirement_id, {})
.get("pass", 0),
"total_findings": findings_count_by_compliance.get(
region, {}
)
.get(modeled_compliance_id, {})
.get(requirement_id, {})
.get("total", 0),
}
)
# Bulk create requirement records
create_objects_in_batches(
tenant_id, ComplianceRequirementOverview, compliance_requirement_objects
)
# Bulk create requirement records using PostgreSQL COPY
_persist_compliance_requirement_rows(tenant_id, compliance_requirement_rows)
return {
"requirements_created": len(compliance_requirement_objects),
"requirements_created": len(compliance_requirement_rows),
"regions_processed": list(regions),
"compliance_frameworks": (
list(compliance_overview_by_region.get(list(regions)[0], {}).keys())
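The passed_findings/total_findings lookups above read from the nested counter structure built earlier in the function; a sketch of its shape, with the region, requirement ID, and counts as illustrative values:

findings_count_by_compliance = {
    "eu-west-1": {
        "prowlerthreatscore1.0": {          # normalized compliance key
            "1.1.1": {"total": 14, "pass": 11},
        },
    },
}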
+126 -58
@@ -1,3 +1,4 @@
import os
from datetime import datetime, timedelta, timezone
from pathlib import Path
from shutil import rmtree
@@ -26,6 +27,11 @@ from tasks.jobs.integrations import (
upload_s3_integration,
upload_security_hub_integration,
)
from tasks.jobs.lighthouse_providers import (
check_lighthouse_provider_connection,
refresh_lighthouse_provider_models,
)
from tasks.jobs.report import generate_threatscore_report_job
from tasks.jobs.scan import (
aggregate_findings,
create_compliance_requirements,
@@ -34,6 +40,7 @@ from tasks.jobs.scan import (
from tasks.utils import batched, get_next_execution_datetime
from api.compliance import get_compliance_frameworks
from api.db_router import READ_REPLICA_ALIAS
from api.db_utils import rls_transaction
from api.decorators import set_tenant
from api.models import Finding, Integration, Provider, Scan, ScanSummary, StateChoices
@@ -63,10 +70,15 @@ def _perform_scan_complete_tasks(tenant_id: str, scan_id: str, provider_id: str)
generate_outputs_task.si(
scan_id=scan_id, provider_id=provider_id, tenant_id=tenant_id
),
check_integrations_task.si(
tenant_id=tenant_id,
provider_id=provider_id,
scan_id=scan_id,
group(
generate_threatscore_report_task.si(
tenant_id=tenant_id, scan_id=scan_id, provider_id=provider_id
),
check_integrations_task.si(
tenant_id=tenant_id,
provider_id=provider_id,
scan_id=scan_id,
),
),
).apply_async()
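The hunk above replaces a single check_integrations_task link with a Celery group, so the ThreatScore report and the integration checks run in parallel once outputs are generated. A hedged sketch of the full composition; the surrounding chain(...) call is assumed, since this hunk only shows the group and its immutable signatures.

from celery import chain, group

# scan_id, provider_id and tenant_id are assumed to be in scope, as in the helper above.
chain(
    generate_outputs_task.si(
        scan_id=scan_id, provider_id=provider_id, tenant_id=tenant_id
    ),
    group(
        generate_threatscore_report_task.si(
            tenant_id=tenant_id, scan_id=scan_id, provider_id=provider_id
        ),
        check_integrations_task.si(
            tenant_id=tenant_id, provider_id=provider_id, scan_id=scan_id
        ),
    ),
).apply_async()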
@@ -303,7 +315,7 @@ def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
frameworks_bulk = Compliance.get_bulk(provider_type)
frameworks_avail = get_compliance_frameworks(provider_type)
out_dir, comp_dir = _generate_output_directory(
out_dir, comp_dir, _ = _generate_output_directory(
DJANGO_TMP_OUTPUT_DIRECTORY, provider_uid, tenant_id, scan_id
)
@@ -343,70 +355,90 @@ def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
.order_by("uid")
.iterator()
)
for batch, is_last in batched(qs, DJANGO_FINDINGS_BATCH_SIZE):
fos = [FindingOutput.transform_api_finding(f, prowler_provider) for f in batch]
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
for batch, is_last in batched(qs, DJANGO_FINDINGS_BATCH_SIZE):
fos = [
FindingOutput.transform_api_finding(f, prowler_provider) for f in batch
]
# Outputs
for mode, cfg in OUTPUT_FORMATS_MAPPING.items():
# Skip ASFF generation if not needed
if mode == "json-asff" and not generate_asff:
continue
# Outputs
for mode, cfg in OUTPUT_FORMATS_MAPPING.items():
# Skip ASFF generation if not needed
if mode == "json-asff" and not generate_asff:
continue
cls = cfg["class"]
suffix = cfg["suffix"]
extra = cfg.get("kwargs", {}).copy()
if mode == "html":
extra.update(provider=prowler_provider, stats=scan_summary)
cls = cfg["class"]
suffix = cfg["suffix"]
extra = cfg.get("kwargs", {}).copy()
if mode == "html":
extra.update(provider=prowler_provider, stats=scan_summary)
writer, initialization = get_writer(
output_writers,
cls,
lambda cls=cls, fos=fos, suffix=suffix: cls(
findings=fos,
file_path=out_dir,
file_extension=suffix,
from_cli=False,
),
is_last,
)
if not initialization:
writer.transform(fos)
writer.batch_write_data_to_file(**extra)
writer._data.clear()
writer, initialization = get_writer(
output_writers,
cls,
lambda cls=cls, fos=fos, suffix=suffix: cls(
findings=fos,
file_path=out_dir,
file_extension=suffix,
from_cli=False,
),
is_last,
)
if not initialization:
writer.transform(fos)
writer.batch_write_data_to_file(**extra)
writer._data.clear()
# Compliance CSVs
for name in frameworks_avail:
compliance_obj = frameworks_bulk[name]
# Compliance CSVs
for name in frameworks_avail:
compliance_obj = frameworks_bulk[name]
klass = GenericCompliance
for condition, cls in COMPLIANCE_CLASS_MAP.get(provider_type, []):
if condition(name):
klass = cls
break
klass = GenericCompliance
for condition, cls in COMPLIANCE_CLASS_MAP.get(provider_type, []):
if condition(name):
klass = cls
break
filename = f"{comp_dir}_{name}.csv"
filename = f"{comp_dir}_{name}.csv"
writer, initialization = get_writer(
compliance_writers,
name,
lambda klass=klass, fos=fos: klass(
findings=fos,
compliance=compliance_obj,
file_path=filename,
from_cli=False,
),
is_last,
)
if not initialization:
writer.transform(fos, compliance_obj, name)
writer.batch_write_data_to_file()
writer._data.clear()
writer, initialization = get_writer(
compliance_writers,
name,
lambda klass=klass, fos=fos: klass(
findings=fos,
compliance=compliance_obj,
file_path=filename,
from_cli=False,
),
is_last,
)
if not initialization:
writer.transform(fos, compliance_obj, name)
writer.batch_write_data_to_file()
writer._data.clear()
compressed = _compress_output_files(out_dir)
upload_uri = _upload_to_s3(tenant_id, compressed, scan_id)
upload_uri = _upload_to_s3(
tenant_id,
scan_id,
compressed,
os.path.basename(compressed),
)
compliance_dir_path = Path(comp_dir).parent
if compliance_dir_path.exists():
for artifact_path in sorted(compliance_dir_path.iterdir()):
if artifact_path.is_file():
_upload_to_s3(
tenant_id,
scan_id,
str(artifact_path),
f"compliance/{artifact_path.name}",
)
# S3 integrations (need output_directory)
with rls_transaction(tenant_id):
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
s3_integrations = Integration.objects.filter(
integrationproviderrelationship__provider_id=provider_id,
integration_type=Integration.IntegrationChoices.AMAZON_S3,
@@ -496,6 +528,24 @@ def check_lighthouse_connection_task(lighthouse_config_id: str, tenant_id: str =
return check_lighthouse_connection(lighthouse_config_id=lighthouse_config_id)
@shared_task(base=RLSTask, name="lighthouse-provider-connection-check")
@set_tenant
def check_lighthouse_provider_connection_task(
provider_config_id: str, tenant_id: str | None = None
) -> dict:
"""Task wrapper to validate provider credentials and set is_active."""
return check_lighthouse_provider_connection(provider_config_id=provider_config_id)
@shared_task(base=RLSTask, name="lighthouse-provider-models-refresh")
@set_tenant
def refresh_lighthouse_provider_models_task(
provider_config_id: str, tenant_id: str | None = None
) -> dict:
"""Task wrapper to refresh provider models catalog for the given configuration."""
return refresh_lighthouse_provider_models(provider_config_id=provider_config_id)
@shared_task(name="integration-check")
def check_integrations_task(tenant_id: str, provider_id: str, scan_id: str = None):
"""
@@ -613,3 +663,21 @@ def jira_integration_task(
return send_findings_to_jira(
tenant_id, integration_id, project_key, issue_type, finding_ids
)
@shared_task(
base=RLSTask,
name="scan-threatscore-report",
queue="scan-reports",
)
def generate_threatscore_report_task(tenant_id: str, scan_id: str, provider_id: str):
"""
Task to generate a threatscore report for a given scan.
Args:
tenant_id (str): The tenant identifier.
scan_id (str): The scan identifier.
provider_id (str): The provider identifier.
"""
return generate_threatscore_report_job(
tenant_id=tenant_id, scan_id=scan_id, provider_id=provider_id
)
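The reworked generate_outputs_task above streams findings with "for batch, is_last in batched(qs, DJANGO_FINDINGS_BATCH_SIZE)" inside a read-replica RLS transaction, and the writers use is_last to finalize their files. A sketch of that (batch, is_last) contract; this is an assumed reimplementation of tasks.utils.batched, not necessarily the real helper.

from itertools import islice
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")


def batched(iterable: Iterable[T], batch_size: int) -> Iterator[tuple[list[T], bool]]:
    iterator = iter(iterable)
    batch = list(islice(iterator, batch_size))
    while batch:
        next_batch = list(islice(iterator, batch_size))
        # is_last is True only for the final batch, so callers can flush/close writers.
        yield batch, not next_batch
        batch = next_batch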
+32 -10
@@ -72,17 +72,26 @@ class TestOutputs:
client_mock = MagicMock()
mock_get_client.return_value = client_mock
result = _upload_to_s3("tenant-id", str(zip_path), "scan-id")
result = _upload_to_s3(
"tenant-id",
"scan-id",
str(zip_path),
"outputs.zip",
)
expected_uri = "s3://test-bucket/tenant-id/scan-id/outputs.zip"
assert result == expected_uri
assert client_mock.upload_file.call_count == 2
client_mock.upload_file.assert_called_once_with(
Filename=str(zip_path),
Bucket="test-bucket",
Key="tenant-id/scan-id/outputs.zip",
)
@patch("tasks.jobs.export.get_s3_client")
@patch("tasks.jobs.export.base")
def test_upload_to_s3_missing_bucket(self, mock_base, mock_get_client):
mock_base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET = ""
result = _upload_to_s3("tenant", "/tmp/fake.zip", "scan")
result = _upload_to_s3("tenant", "scan", "/tmp/fake.zip", "fake.zip")
assert result is None
@patch("tasks.jobs.export.get_s3_client")
@@ -101,11 +110,15 @@ class TestOutputs:
client_mock = MagicMock()
mock_get_client.return_value = client_mock
result = _upload_to_s3("tenant", str(zip_path), "scan")
result = _upload_to_s3(
"tenant",
"scan",
str(compliance_dir / "subdir"),
"compliance/subdir",
)
expected_uri = "s3://test-bucket/tenant/scan/results.zip"
assert result == expected_uri
client_mock.upload_file.assert_called_once()
assert result is None
client_mock.upload_file.assert_not_called()
@patch(
"tasks.jobs.export.get_s3_client",
@@ -126,7 +139,12 @@ class TestOutputs:
compliance_dir.mkdir()
(compliance_dir / "report.csv").write_text("csv")
_upload_to_s3("tenant", str(zip_path), "scan")
_upload_to_s3(
"tenant",
"scan",
str(zip_path),
"zipfile.zip",
)
mock_logger.assert_called()
@patch("tasks.jobs.export.rls_transaction")
@@ -150,15 +168,17 @@ class TestOutputs:
provider = "aws"
expected_timestamp = "20230615103045"
path, compliance = _generate_output_directory(
path, compliance, threatscore = _generate_output_directory(
base_dir, provider, tenant_id, scan_id
)
assert os.path.isdir(os.path.dirname(path))
assert os.path.isdir(os.path.dirname(compliance))
assert os.path.isdir(os.path.dirname(threatscore))
assert path.endswith(f"{provider}-{expected_timestamp}")
assert compliance.endswith(f"{provider}-{expected_timestamp}")
assert threatscore.endswith(f"{provider}-{expected_timestamp}")
@patch("tasks.jobs.export.rls_transaction")
@patch("tasks.jobs.export.Scan")
@@ -181,12 +201,14 @@ class TestOutputs:
provider = "aws/test@check"
expected_timestamp = "20230615103045"
path, compliance = _generate_output_directory(
path, compliance, threatscore = _generate_output_directory(
base_dir, provider, tenant_id, scan_id
)
assert os.path.isdir(os.path.dirname(path))
assert os.path.isdir(os.path.dirname(compliance))
assert os.path.isdir(os.path.dirname(threatscore))
assert path.endswith(f"aws-test-check-{expected_timestamp}")
assert compliance.endswith(f"aws-test-check-{expected_timestamp}")
assert threatscore.endswith(f"aws-test-check-{expected_timestamp}")
+963
@@ -0,0 +1,963 @@
import uuid
from pathlib import Path
from unittest.mock import MagicMock, patch
import matplotlib
import pytest
from tasks.jobs.report import (
_aggregate_requirement_statistics_from_database,
_calculate_requirements_data_from_statistics,
_load_findings_for_requirement_checks,
generate_threatscore_report,
generate_threatscore_report_job,
)
from tasks.tasks import generate_threatscore_report_task
from api.models import Finding, StatusChoices
from prowler.lib.check.models import Severity
matplotlib.use("Agg") # Use non-interactive backend for tests
@pytest.mark.django_db
class TestGenerateThreatscoreReport:
def setup_method(self):
self.scan_id = str(uuid.uuid4())
self.provider_id = str(uuid.uuid4())
self.tenant_id = str(uuid.uuid4())
def test_no_findings_returns_early(self):
with patch("tasks.jobs.report.ScanSummary.objects.filter") as mock_filter:
mock_filter.return_value.exists.return_value = False
result = generate_threatscore_report_job(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
assert result == {"upload": False}
mock_filter.assert_called_once_with(scan_id=self.scan_id)
@patch("tasks.jobs.report.rmtree")
@patch("tasks.jobs.report._upload_to_s3")
@patch("tasks.jobs.report.generate_threatscore_report")
@patch("tasks.jobs.report._generate_output_directory")
@patch("tasks.jobs.report.Provider.objects.get")
@patch("tasks.jobs.report.ScanSummary.objects.filter")
def test_generate_threatscore_report_happy_path(
self,
mock_scan_summary_filter,
mock_provider_get,
mock_generate_output_directory,
mock_generate_report,
mock_upload,
mock_rmtree,
):
mock_scan_summary_filter.return_value.exists.return_value = True
mock_provider = MagicMock()
mock_provider.uid = "provider-uid"
mock_provider.provider = "aws"
mock_provider_get.return_value = mock_provider
mock_generate_output_directory.return_value = (
"/tmp/output",
"/tmp/compressed",
"/tmp/threatscore_path",
)
mock_upload.return_value = "s3://bucket/threatscore_report.pdf"
result = generate_threatscore_report_job(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
assert result == {"upload": True}
mock_generate_report.assert_called_once_with(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
compliance_id="prowler_threatscore_aws",
output_path="/tmp/threatscore_path_threatscore_report.pdf",
provider_id=self.provider_id,
only_failed=True,
min_risk_level=4,
)
mock_upload.assert_called_once_with(
self.tenant_id,
self.scan_id,
"/tmp/threatscore_path_threatscore_report.pdf",
"threatscore/threatscore_path_threatscore_report.pdf",
)
mock_rmtree.assert_called_once_with(
Path("/tmp/threatscore_path_threatscore_report.pdf").parent,
ignore_errors=True,
)
def test_generate_threatscore_report_fails_upload(self):
with (
patch("tasks.jobs.report.ScanSummary.objects.filter") as mock_filter,
patch("tasks.jobs.report.Provider.objects.get") as mock_provider_get,
patch("tasks.jobs.report._generate_output_directory") as mock_gen_dir,
patch("tasks.jobs.report.generate_threatscore_report"),
patch("tasks.jobs.report._upload_to_s3", return_value=None),
):
mock_filter.return_value.exists.return_value = True
# Mock provider
mock_provider = MagicMock()
mock_provider.uid = "aws-provider-uid"
mock_provider.provider = "aws"
mock_provider_get.return_value = mock_provider
mock_gen_dir.return_value = (
"/tmp/output",
"/tmp/compressed",
"/tmp/threatscore_path",
)
result = generate_threatscore_report_job(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
assert result == {"upload": False}
def test_generate_threatscore_report_logs_rmtree_exception(self, caplog):
with (
patch("tasks.jobs.report.ScanSummary.objects.filter") as mock_filter,
patch("tasks.jobs.report.Provider.objects.get") as mock_provider_get,
patch("tasks.jobs.report._generate_output_directory") as mock_gen_dir,
patch("tasks.jobs.report.generate_threatscore_report"),
patch(
"tasks.jobs.report._upload_to_s3", return_value="s3://bucket/report.pdf"
),
patch(
"tasks.jobs.report.rmtree", side_effect=Exception("Test deletion error")
),
):
mock_filter.return_value.exists.return_value = True
# Mock provider
mock_provider = MagicMock()
mock_provider.uid = "aws-provider-uid"
mock_provider.provider = "aws"
mock_provider_get.return_value = mock_provider
mock_gen_dir.return_value = (
"/tmp/output",
"/tmp/compressed",
"/tmp/threatscore_path",
)
with caplog.at_level("ERROR"):
generate_threatscore_report_job(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
assert "Error deleting output files" in caplog.text
def test_generate_threatscore_report_azure_provider(self):
with (
patch("tasks.jobs.report.ScanSummary.objects.filter") as mock_filter,
patch("tasks.jobs.report.Provider.objects.get") as mock_provider_get,
patch("tasks.jobs.report._generate_output_directory") as mock_gen_dir,
patch("tasks.jobs.report.generate_threatscore_report") as mock_generate,
patch(
"tasks.jobs.report._upload_to_s3", return_value="s3://bucket/report.pdf"
),
patch("tasks.jobs.report.rmtree"),
):
mock_filter.return_value.exists.return_value = True
mock_provider = MagicMock()
mock_provider.uid = "azure-provider-uid"
mock_provider.provider = "azure"
mock_provider_get.return_value = mock_provider
mock_gen_dir.return_value = (
"/tmp/output",
"/tmp/compressed",
"/tmp/threatscore_path",
)
generate_threatscore_report_job(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
mock_generate.assert_called_once_with(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
compliance_id="prowler_threatscore_azure",
output_path="/tmp/threatscore_path_threatscore_report.pdf",
provider_id=self.provider_id,
only_failed=True,
min_risk_level=4,
)
@pytest.mark.django_db
class TestAggregateRequirementStatistics:
"""Test suite for _aggregate_requirement_statistics_from_database function."""
def test_aggregates_findings_correctly(self, tenants_fixture, scans_fixture):
"""Verify correct pass/total counts per check are aggregated from database."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
# Create findings with different check_ids and statuses
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-1",
check_id="check_1",
status=StatusChoices.PASS,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
raw_result={},
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-2",
check_id="check_1",
status=StatusChoices.FAIL,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
raw_result={},
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-3",
check_id="check_2",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
result = _aggregate_requirement_statistics_from_database(
str(tenant.id), str(scan.id)
)
assert result == {
"check_1": {"passed": 1, "total": 2},
"check_2": {"passed": 1, "total": 1},
}
def test_handles_empty_scan(self, tenants_fixture, scans_fixture):
"""Return empty dict when no findings exist for the scan."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
result = _aggregate_requirement_statistics_from_database(
str(tenant.id), str(scan.id)
)
assert result == {}
def test_multiple_findings_same_check(self, tenants_fixture, scans_fixture):
"""Aggregate multiple findings for same check_id correctly."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
# Create 5 findings for same check, 3 passed
for i in range(3):
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid=f"finding-pass-{i}",
check_id="check_same",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
for i in range(2):
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid=f"finding-fail-{i}",
check_id="check_same",
status=StatusChoices.FAIL,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
result = _aggregate_requirement_statistics_from_database(
str(tenant.id), str(scan.id)
)
assert result == {"check_same": {"passed": 3, "total": 5}}
def test_only_failed_findings(self, tenants_fixture, scans_fixture):
"""Correctly count when all findings are FAIL status."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-fail-1",
check_id="check_fail",
status=StatusChoices.FAIL,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-fail-2",
check_id="check_fail",
status=StatusChoices.FAIL,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
result = _aggregate_requirement_statistics_from_database(
str(tenant.id), str(scan.id)
)
assert result == {"check_fail": {"passed": 0, "total": 2}}
def test_mixed_statuses(self, tenants_fixture, scans_fixture):
"""Test with PASS, FAIL, and MANUAL statuses mixed."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-pass",
check_id="check_mixed",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-fail",
check_id="check_mixed",
status=StatusChoices.FAIL,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-manual",
check_id="check_mixed",
status=StatusChoices.MANUAL,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
result = _aggregate_requirement_statistics_from_database(
str(tenant.id), str(scan.id)
)
# Only PASS status is counted as passed
assert result == {"check_mixed": {"passed": 1, "total": 3}}
@pytest.mark.django_db
class TestLoadFindingsForChecks:
"""Test suite for _load_findings_for_requirement_checks function."""
def test_loads_only_requested_checks(
self, tenants_fixture, scans_fixture, providers_fixture
):
"""Verify only findings for specified check_ids are loaded."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
providers_fixture[0]
# Create findings with different check_ids
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-1",
check_id="check_requested",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-2",
check_id="check_not_requested",
status=StatusChoices.FAIL,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
mock_provider = MagicMock()
with patch(
"tasks.jobs.report.FindingOutput.transform_api_finding"
) as mock_transform:
mock_finding_output = MagicMock()
mock_finding_output.check_id = "check_requested"
mock_transform.return_value = mock_finding_output
result = _load_findings_for_requirement_checks(
str(tenant.id), str(scan.id), ["check_requested"], mock_provider
)
# Only one finding should be loaded
assert "check_requested" in result
assert "check_not_requested" not in result
assert len(result["check_requested"]) == 1
assert mock_transform.call_count == 1
def test_empty_check_ids_returns_empty(
self, tenants_fixture, scans_fixture, providers_fixture
):
"""Return empty dict when check_ids list is empty."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
mock_provider = MagicMock()
result = _load_findings_for_requirement_checks(
str(tenant.id), str(scan.id), [], mock_provider
)
assert result == {}
def test_groups_by_check_id(
self, tenants_fixture, scans_fixture, providers_fixture
):
"""Multiple findings for same check are grouped correctly."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
# Create multiple findings for same check
for i in range(3):
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid=f"finding-{i}",
check_id="check_group",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
mock_provider = MagicMock()
with patch(
"tasks.jobs.report.FindingOutput.transform_api_finding"
) as mock_transform:
mock_finding_output = MagicMock()
mock_finding_output.check_id = "check_group"
mock_transform.return_value = mock_finding_output
result = _load_findings_for_requirement_checks(
str(tenant.id), str(scan.id), ["check_group"], mock_provider
)
assert len(result["check_group"]) == 3
def test_transforms_to_finding_output(
self, tenants_fixture, scans_fixture, providers_fixture
):
"""Findings are transformed using FindingOutput.transform_api_finding."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-transform",
check_id="check_transform",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
mock_provider = MagicMock()
with patch(
"tasks.jobs.report.FindingOutput.transform_api_finding"
) as mock_transform:
mock_finding_output = MagicMock()
mock_finding_output.check_id = "check_transform"
mock_transform.return_value = mock_finding_output
result = _load_findings_for_requirement_checks(
str(tenant.id), str(scan.id), ["check_transform"], mock_provider
)
# Verify transform was called
mock_transform.assert_called_once()
# Verify the transformed output is in the result
assert result["check_transform"][0] == mock_finding_output
def test_batched_iteration(self, tenants_fixture, scans_fixture, providers_fixture):
"""Works correctly with multiple batches of findings."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
# Create enough findings to ensure batching (assuming batch size > 1)
for i in range(10):
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid=f"finding-batch-{i}",
check_id="check_batch",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
mock_provider = MagicMock()
with patch(
"tasks.jobs.report.FindingOutput.transform_api_finding"
) as mock_transform:
mock_finding_output = MagicMock()
mock_finding_output.check_id = "check_batch"
mock_transform.return_value = mock_finding_output
result = _load_findings_for_requirement_checks(
str(tenant.id), str(scan.id), ["check_batch"], mock_provider
)
# All 10 findings should be loaded regardless of batching
assert len(result["check_batch"]) == 10
assert mock_transform.call_count == 10
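These tests describe _load_findings_for_requirement_checks as an on-demand loader: filter to the requested check_ids, stream the matching findings in batches, transform each one with FindingOutput.transform_api_finding, and group the outputs by check_id. A minimal sketch under those assumptions; FindingOutput, batched and DJANGO_FINDINGS_BATCH_SIZE are taken to be module-level names, as in the tasks diff earlier in this compare view.

from collections import defaultdict

from api.models import Finding


def _load_findings_for_requirement_checks(tenant_id, scan_id, check_ids, prowler_provider):
    if not check_ids:
        return {}
    findings_by_check: dict[str, list] = defaultdict(list)
    queryset = Finding.all_objects.filter(
        tenant_id=tenant_id, scan_id=scan_id, check_id__in=check_ids
    ).iterator()
    for batch, _ in batched(queryset, DJANGO_FINDINGS_BATCH_SIZE):
        for finding in batch:
            finding_output = FindingOutput.transform_api_finding(finding, prowler_provider)
            findings_by_check[finding_output.check_id].append(finding_output)
    return dict(findings_by_check)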
@pytest.mark.django_db
class TestCalculateRequirementsData:
"""Test suite for _calculate_requirements_data_from_statistics function."""
def test_requirement_status_all_pass(self):
"""Status is PASS when all findings for requirement checks pass."""
mock_compliance = MagicMock()
mock_compliance.Framework = "TestFramework"
mock_compliance.Version = "1.0"
mock_requirement = MagicMock()
mock_requirement.Id = "req_1"
mock_requirement.Description = "Test requirement"
mock_requirement.Checks = ["check_1", "check_2"]
mock_requirement.Attributes = [MagicMock()]
mock_compliance.Requirements = [mock_requirement]
requirement_statistics = {
"check_1": {"passed": 5, "total": 5},
"check_2": {"passed": 3, "total": 3},
}
attributes_by_id, requirements_list = (
_calculate_requirements_data_from_statistics(
mock_compliance, requirement_statistics
)
)
assert len(requirements_list) == 1
assert requirements_list[0]["attributes"]["status"] == StatusChoices.PASS
assert requirements_list[0]["attributes"]["passed_findings"] == 8
assert requirements_list[0]["attributes"]["total_findings"] == 8
def test_requirement_status_some_fail(self):
"""Status is FAIL when some findings fail."""
mock_compliance = MagicMock()
mock_compliance.Framework = "TestFramework"
mock_compliance.Version = "1.0"
mock_requirement = MagicMock()
mock_requirement.Id = "req_2"
mock_requirement.Description = "Test requirement with failures"
mock_requirement.Checks = ["check_3"]
mock_requirement.Attributes = [MagicMock()]
mock_compliance.Requirements = [mock_requirement]
requirement_statistics = {
"check_3": {"passed": 2, "total": 5},
}
attributes_by_id, requirements_list = (
_calculate_requirements_data_from_statistics(
mock_compliance, requirement_statistics
)
)
assert len(requirements_list) == 1
assert requirements_list[0]["attributes"]["status"] == StatusChoices.FAIL
assert requirements_list[0]["attributes"]["passed_findings"] == 2
assert requirements_list[0]["attributes"]["total_findings"] == 5
def test_requirement_status_no_findings(self):
"""Status is MANUAL when no findings exist for requirement."""
mock_compliance = MagicMock()
mock_compliance.Framework = "TestFramework"
mock_compliance.Version = "1.0"
mock_requirement = MagicMock()
mock_requirement.Id = "req_3"
mock_requirement.Description = "Manual requirement"
mock_requirement.Checks = ["check_nonexistent"]
mock_requirement.Attributes = [MagicMock()]
mock_compliance.Requirements = [mock_requirement]
requirement_statistics = {}
attributes_by_id, requirements_list = (
_calculate_requirements_data_from_statistics(
mock_compliance, requirement_statistics
)
)
assert len(requirements_list) == 1
assert requirements_list[0]["attributes"]["status"] == StatusChoices.MANUAL
assert requirements_list[0]["attributes"]["passed_findings"] == 0
assert requirements_list[0]["attributes"]["total_findings"] == 0
def test_aggregates_multiple_checks(self):
"""Correctly sum stats across multiple checks in requirement."""
mock_compliance = MagicMock()
mock_compliance.Framework = "TestFramework"
mock_compliance.Version = "1.0"
mock_requirement = MagicMock()
mock_requirement.Id = "req_4"
mock_requirement.Description = "Multi-check requirement"
mock_requirement.Checks = ["check_a", "check_b", "check_c"]
mock_requirement.Attributes = [MagicMock()]
mock_compliance.Requirements = [mock_requirement]
requirement_statistics = {
"check_a": {"passed": 10, "total": 15},
"check_b": {"passed": 5, "total": 10},
"check_c": {"passed": 0, "total": 5},
}
attributes_by_id, requirements_list = (
_calculate_requirements_data_from_statistics(
mock_compliance, requirement_statistics
)
)
assert len(requirements_list) == 1
# 10 + 5 + 0 = 15 passed
assert requirements_list[0]["attributes"]["passed_findings"] == 15
# 15 + 10 + 5 = 30 total
assert requirements_list[0]["attributes"]["total_findings"] == 30
# Not all passed, so should be FAIL
assert requirements_list[0]["attributes"]["status"] == StatusChoices.FAIL
def test_returns_correct_structure(self):
"""Verify tuple structure and dict keys are correct."""
mock_compliance = MagicMock()
mock_compliance.Framework = "TestFramework"
mock_compliance.Version = "1.0"
mock_attribute = MagicMock()
mock_requirement = MagicMock()
mock_requirement.Id = "req_5"
mock_requirement.Description = "Structure test"
mock_requirement.Checks = ["check_struct"]
mock_requirement.Attributes = [mock_attribute]
mock_compliance.Requirements = [mock_requirement]
requirement_statistics = {"check_struct": {"passed": 1, "total": 1}}
attributes_by_id, requirements_list = (
_calculate_requirements_data_from_statistics(
mock_compliance, requirement_statistics
)
)
# Verify attributes_by_id structure
assert "req_5" in attributes_by_id
assert "attributes" in attributes_by_id["req_5"]
assert "description" in attributes_by_id["req_5"]
assert "req_attributes" in attributes_by_id["req_5"]["attributes"]
assert "checks" in attributes_by_id["req_5"]["attributes"]
# Verify requirements_list structure
assert len(requirements_list) == 1
req = requirements_list[0]
assert "id" in req
assert "attributes" in req
assert "framework" in req["attributes"]
assert "version" in req["attributes"]
assert "status" in req["attributes"]
assert "description" in req["attributes"]
assert "passed_findings" in req["attributes"]
assert "total_findings" in req["attributes"]
@pytest.mark.django_db
class TestGenerateThreatscoreReportFunction:
def setup_method(self):
self.scan_id = str(uuid.uuid4())
self.provider_id = str(uuid.uuid4())
self.tenant_id = str(uuid.uuid4())
self.compliance_id = "prowler_threatscore_aws"
self.output_path = "/tmp/test_threatscore_report.pdf"
@patch("tasks.jobs.report.initialize_prowler_provider")
@patch("tasks.jobs.report.Provider.objects.get")
@patch("tasks.jobs.report.Compliance.get_bulk")
@patch("tasks.jobs.report._aggregate_requirement_statistics_from_database")
@patch("tasks.jobs.report._calculate_requirements_data_from_statistics")
@patch("tasks.jobs.report._load_findings_for_requirement_checks")
@patch("tasks.jobs.report.SimpleDocTemplate")
@patch("tasks.jobs.report.Image")
@patch("tasks.jobs.report.Spacer")
@patch("tasks.jobs.report.Paragraph")
@patch("tasks.jobs.report.PageBreak")
@patch("tasks.jobs.report.Table")
@patch("tasks.jobs.report.TableStyle")
@patch("tasks.jobs.report.plt.subplots")
@patch("tasks.jobs.report.plt.savefig")
@patch("tasks.jobs.report.io.BytesIO")
def test_generate_threatscore_report_success(
self,
mock_bytesio,
mock_savefig,
mock_subplots,
mock_table_style,
mock_table,
mock_page_break,
mock_paragraph,
mock_spacer,
mock_image,
mock_doc_template,
mock_load_findings,
mock_calculate_requirements,
mock_aggregate_statistics,
mock_compliance_get_bulk,
mock_provider_get,
mock_initialize_provider,
):
"""Test the updated generate_threatscore_report using new memory-efficient architecture."""
mock_provider = MagicMock()
mock_provider.provider = "aws"
mock_provider_get.return_value = mock_provider
prowler_provider = MagicMock()
mock_initialize_provider.return_value = prowler_provider
# Mock compliance object with requirements
mock_compliance_obj = MagicMock()
mock_compliance_obj.Framework = "ProwlerThreatScore"
mock_compliance_obj.Version = "1.0"
mock_compliance_obj.Description = "Test Description"
# Configure requirement with properly set numeric attributes for chart generation
mock_requirement = MagicMock()
mock_requirement.Id = "req_1"
mock_requirement.Description = "Test requirement"
mock_requirement.Checks = ["check_1"]
# Create a properly configured attribute mock with numeric values
mock_requirement_attr = MagicMock()
mock_requirement_attr.Section = "1. IAM"
mock_requirement_attr.SubSection = "1.1 Identity"
mock_requirement_attr.Title = "Test Requirement Title"
mock_requirement_attr.LevelOfRisk = 3
mock_requirement_attr.Weight = 100
mock_requirement_attr.AttributeDescription = "Test requirement description"
mock_requirement_attr.AdditionalInformation = "Additional test information"
mock_requirement.Attributes = [mock_requirement_attr]
mock_compliance_obj.Requirements = [mock_requirement]
mock_compliance_get_bulk.return_value = {
self.compliance_id: mock_compliance_obj
}
# Mock the aggregated statistics from database
mock_aggregate_statistics.return_value = {"check_1": {"passed": 5, "total": 10}}
# Mock the calculated requirements data with properly configured attributes
mock_attributes_by_id = {
"req_1": {
"attributes": {
"req_attributes": [mock_requirement_attr],
"checks": ["check_1"],
},
"description": "Test requirement",
}
}
mock_requirements_list = [
{
"id": "req_1",
"attributes": {
"framework": "ProwlerThreatScore",
"version": "1.0",
"status": StatusChoices.FAIL,
"description": "Test requirement",
"passed_findings": 5,
"total_findings": 10,
},
}
]
mock_calculate_requirements.return_value = (
mock_attributes_by_id,
mock_requirements_list,
)
# Mock the on-demand loaded findings
mock_finding_output = MagicMock()
mock_finding_output.check_id = "check_1"
mock_finding_output.status = "FAIL"
mock_finding_output.metadata = MagicMock()
mock_finding_output.metadata.CheckTitle = "Test Check"
mock_finding_output.metadata.Severity = "HIGH"
mock_finding_output.resource_name = "test-resource"
mock_finding_output.region = "us-east-1"
mock_load_findings.return_value = {"check_1": [mock_finding_output]}
# Mock PDF generation components
mock_doc = MagicMock()
mock_doc_template.return_value = mock_doc
mock_fig, mock_ax = MagicMock(), MagicMock()
mock_subplots.return_value = (mock_fig, mock_ax)
mock_buffer = MagicMock()
mock_bytesio.return_value = mock_buffer
mock_image.return_value = MagicMock()
mock_spacer.return_value = MagicMock()
mock_paragraph.return_value = MagicMock()
mock_page_break.return_value = MagicMock()
mock_table.return_value = MagicMock()
mock_table_style.return_value = MagicMock()
# Execute the function
generate_threatscore_report(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
compliance_id=self.compliance_id,
output_path=self.output_path,
provider_id=self.provider_id,
only_failed=True,
min_risk_level=4,
)
# Verify the new workflow was followed
mock_provider_get.assert_called_once_with(id=self.provider_id)
mock_initialize_provider.assert_called_once_with(mock_provider)
mock_compliance_get_bulk.assert_called_once_with("aws")
# Verify the new functions were called in correct order with correct parameters
mock_aggregate_statistics.assert_called_once_with(self.tenant_id, self.scan_id)
mock_calculate_requirements.assert_called_once_with(
mock_compliance_obj, {"check_1": {"passed": 5, "total": 10}}
)
mock_load_findings.assert_called_once_with(
self.tenant_id, self.scan_id, ["check_1"], prowler_provider
)
# Verify PDF was built
mock_doc_template.assert_called_once()
mock_doc.build.assert_called_once()
@patch("tasks.jobs.report.initialize_prowler_provider")
@patch("tasks.jobs.report.Provider.objects.get")
@patch("tasks.jobs.report.Compliance.get_bulk")
@patch("tasks.jobs.report.Finding.all_objects.filter")
def test_generate_threatscore_report_exception_handling(
self,
mock_finding_filter,
mock_compliance_get_bulk,
mock_provider_get,
mock_initialize_provider,
):
mock_provider_get.side_effect = Exception("Provider not found")
with pytest.raises(Exception, match="Provider not found"):
generate_threatscore_report(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
compliance_id=self.compliance_id,
output_path=self.output_path,
provider_id=self.provider_id,
only_failed=True,
min_risk_level=4,
)
@pytest.mark.django_db
class TestGenerateThreatscoreReportTask:
def setup_method(self):
self.scan_id = str(uuid.uuid4())
self.provider_id = str(uuid.uuid4())
self.tenant_id = str(uuid.uuid4())
@patch("tasks.tasks.generate_threatscore_report_job")
def test_generate_threatscore_report_task_calls_job(self, mock_generate_job):
mock_generate_job.return_value = {"upload": True}
result = generate_threatscore_report_task(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
assert result == {"upload": True}
mock_generate_job.assert_called_once_with(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
@patch("tasks.tasks.generate_threatscore_report_job")
def test_generate_threatscore_report_task_handles_job_exception(
self, mock_generate_job
):
mock_generate_job.side_effect = Exception("Job failed")
with pytest.raises(Exception, match="Job failed"):
generate_threatscore_report_task(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
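Taken together, the tests in this file outline the generate_threatscore_report_job flow: return early when the scan has no summaries, build the threatscore output path, render the PDF for the prowler_threatscore_<provider> framework, upload it under a threatscore/ key, and clean up the temporary directory. A hedged sketch of that flow; the provider lookup arguments and anything the tests do not assert are assumptions (ScanSummary, Provider, DJANGO_TMP_OUTPUT_DIRECTORY, _generate_output_directory, generate_threatscore_report, _upload_to_s3 and logger are taken as module-level names).

import os
from pathlib import Path
from shutil import rmtree


def generate_threatscore_report_job(tenant_id: str, scan_id: str, provider_id: str) -> dict:
    # No findings were aggregated for this scan, so there is nothing to report.
    if not ScanSummary.objects.filter(scan_id=scan_id).exists():
        return {"upload": False}
    provider_instance = Provider.objects.get(id=provider_id)
    _, _, threatscore_path = _generate_output_directory(
        DJANGO_TMP_OUTPUT_DIRECTORY, provider_instance.uid, tenant_id, scan_id
    )
    output_path = f"{threatscore_path}_threatscore_report.pdf"
    generate_threatscore_report(
        tenant_id=tenant_id,
        scan_id=scan_id,
        compliance_id=f"prowler_threatscore_{provider_instance.provider}",
        output_path=output_path,
        provider_id=provider_id,
        only_failed=True,
        min_risk_level=4,
    )
    upload_uri = _upload_to_s3(
        tenant_id, scan_id, output_path, f"threatscore/{os.path.basename(output_path)}"
    )
    if not upload_uri:
        return {"upload": False}
    try:
        rmtree(Path(output_path).parent, ignore_errors=True)
    except Exception as error:
        logger.error(f"Error deleting output files: {error}")
    return {"upload": True}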
+776 -1
@@ -1,17 +1,22 @@
import csv
import json
import uuid
from datetime import datetime
from datetime import datetime, timezone
from io import StringIO
from unittest.mock import MagicMock, patch
import pytest
from tasks.jobs.scan import (
_copy_compliance_requirement_rows,
_create_finding_delta,
_persist_compliance_requirement_rows,
_store_resources,
create_compliance_requirements,
perform_prowler_scan,
)
from tasks.utils import CustomEncoder
from api.db_router import MainRouter
from api.exceptions import ProviderConnectionError
from api.models import Finding, Provider, Resource, Scan, StateChoices, StatusChoices
from prowler.lib.check.models import Severity
@@ -1045,3 +1050,773 @@ class TestCreateComplianceRequirements:
assert "requirements_created" in result
assert result["requirements_created"] >= 0
class TestComplianceRequirementCopy:
@patch("tasks.jobs.scan.psycopg_connection")
def test_copy_compliance_requirement_rows_streams_csv(
self, mock_psycopg_connection, settings
):
settings.DATABASES.setdefault("admin", settings.DATABASES["default"])
connection = MagicMock()
cursor = MagicMock()
cursor_context = MagicMock()
cursor_context.__enter__.return_value = cursor
cursor_context.__exit__.return_value = False
connection.cursor.return_value = cursor_context
connection.__enter__.return_value = connection
connection.__exit__.return_value = False
context_manager = MagicMock()
context_manager.__enter__.return_value = connection
context_manager.__exit__.return_value = False
mock_psycopg_connection.return_value = context_manager
captured = {}
def copy_side_effect(sql, file_obj):
captured["sql"] = sql
captured["data"] = file_obj.read()
cursor.copy_expert.side_effect = copy_side_effect
row = {
"id": uuid.uuid4(),
"tenant_id": str(uuid.uuid4()),
"compliance_id": "cisa_aws",
"framework": "CISA",
"version": None,
"description": "desc",
"region": "us-east-1",
"requirement_id": "req-1",
"requirement_status": "PASS",
"passed_checks": 1,
"failed_checks": 0,
"total_checks": 1,
"scan_id": uuid.uuid4(),
}
with patch.object(MainRouter, "admin_db", "admin"):
_copy_compliance_requirement_rows(str(row["tenant_id"]), [row])
mock_psycopg_connection.assert_called_once_with("admin")
connection.cursor.assert_called_once()
cursor.execute.assert_called_once()
cursor.copy_expert.assert_called_once()
csv_rows = list(csv.reader(StringIO(captured["data"])))
assert csv_rows[0][0] == str(row["id"])
assert csv_rows[0][5] == ""
assert csv_rows[0][-1] == str(row["scan_id"])
@patch("tasks.jobs.scan.ComplianceRequirementOverview.objects.bulk_create")
@patch("tasks.jobs.scan.rls_transaction")
@patch(
"tasks.jobs.scan._copy_compliance_requirement_rows",
side_effect=Exception("copy failed"),
)
def test_persist_compliance_requirement_rows_fallback(
self, mock_copy, mock_rls_transaction, mock_bulk_create
):
inserted_at = datetime.now(timezone.utc)
row = {
"id": uuid.uuid4(),
"tenant_id": str(uuid.uuid4()),
"inserted_at": inserted_at,
"compliance_id": "cisa_aws",
"framework": "CISA",
"version": "1.0",
"description": "desc",
"region": "us-east-1",
"requirement_id": "req-1",
"requirement_status": "PASS",
"passed_checks": 1,
"failed_checks": 0,
"total_checks": 1,
"scan_id": uuid.uuid4(),
}
tenant_id = row["tenant_id"]
ctx = MagicMock()
ctx.__enter__.return_value = None
ctx.__exit__.return_value = False
mock_rls_transaction.return_value = ctx
_persist_compliance_requirement_rows(tenant_id, [row])
mock_copy.assert_called_once_with(tenant_id, [row])
mock_rls_transaction.assert_called_once_with(tenant_id)
mock_bulk_create.assert_called_once()
args, kwargs = mock_bulk_create.call_args
objects = args[0]
assert len(objects) == 1
fallback = objects[0]
assert fallback.version == row["version"]
assert fallback.compliance_id == row["compliance_id"]
@patch("tasks.jobs.scan.ComplianceRequirementOverview.objects.bulk_create")
@patch("tasks.jobs.scan.rls_transaction")
@patch("tasks.jobs.scan._copy_compliance_requirement_rows")
def test_persist_compliance_requirement_rows_no_rows(
self, mock_copy, mock_rls_transaction, mock_bulk_create
):
_persist_compliance_requirement_rows(str(uuid.uuid4()), [])
mock_copy.assert_not_called()
mock_rls_transaction.assert_not_called()
mock_bulk_create.assert_not_called()
@patch("tasks.jobs.scan.psycopg_connection")
def test_copy_compliance_requirement_rows_multiple_rows(
self, mock_psycopg_connection, settings
):
"""Test COPY with multiple rows to ensure batch processing works correctly."""
settings.DATABASES.setdefault("admin", settings.DATABASES["default"])
connection = MagicMock()
cursor = MagicMock()
cursor_context = MagicMock()
cursor_context.__enter__.return_value = cursor
cursor_context.__exit__.return_value = False
connection.cursor.return_value = cursor_context
connection.__enter__.return_value = connection
connection.__exit__.return_value = False
context_manager = MagicMock()
context_manager.__enter__.return_value = connection
context_manager.__exit__.return_value = False
mock_psycopg_connection.return_value = context_manager
captured = {}
def copy_side_effect(sql, file_obj):
captured["sql"] = sql
captured["data"] = file_obj.read()
cursor.copy_expert.side_effect = copy_side_effect
tenant_id = str(uuid.uuid4())
scan_id = uuid.uuid4()
inserted_at = datetime.now(timezone.utc)
rows = [
{
"id": uuid.uuid4(),
"tenant_id": tenant_id,
"inserted_at": inserted_at,
"compliance_id": "cisa_aws",
"framework": "CISA",
"version": "1.0",
"description": "First requirement",
"region": "us-east-1",
"requirement_id": "req-1",
"requirement_status": "PASS",
"passed_checks": 5,
"failed_checks": 0,
"total_checks": 5,
"scan_id": scan_id,
},
{
"id": uuid.uuid4(),
"tenant_id": tenant_id,
"inserted_at": inserted_at,
"compliance_id": "cisa_aws",
"framework": "CISA",
"version": "1.0",
"description": "Second requirement",
"region": "us-west-2",
"requirement_id": "req-2",
"requirement_status": "FAIL",
"passed_checks": 3,
"failed_checks": 2,
"total_checks": 5,
"scan_id": scan_id,
},
{
"id": uuid.uuid4(),
"tenant_id": tenant_id,
"inserted_at": inserted_at,
"compliance_id": "aws_foundational_security_aws",
"framework": "AWS-Foundational-Security-Best-Practices",
"version": "2.0",
"description": "Third requirement",
"region": "eu-west-1",
"requirement_id": "req-3",
"requirement_status": "MANUAL",
"passed_checks": 0,
"failed_checks": 0,
"total_checks": 3,
"scan_id": scan_id,
},
]
with patch.object(MainRouter, "admin_db", "admin"):
_copy_compliance_requirement_rows(tenant_id, rows)
mock_psycopg_connection.assert_called_once_with("admin")
connection.cursor.assert_called_once()
cursor.execute.assert_called_once()
cursor.copy_expert.assert_called_once()
csv_rows = list(csv.reader(StringIO(captured["data"])))
assert len(csv_rows) == 3
# Validate first row
assert csv_rows[0][0] == str(rows[0]["id"])
assert csv_rows[0][1] == tenant_id
assert csv_rows[0][3] == "cisa_aws"
assert csv_rows[0][4] == "CISA"
assert csv_rows[0][6] == "First requirement"
assert csv_rows[0][7] == "us-east-1"
assert csv_rows[0][10] == "5"
assert csv_rows[0][11] == "0"
assert csv_rows[0][12] == "5"
# Validate second row
assert csv_rows[1][0] == str(rows[1]["id"])
assert csv_rows[1][7] == "us-west-2"
assert csv_rows[1][9] == "FAIL"
assert csv_rows[1][10] == "3"
assert csv_rows[1][11] == "2"
# Validate third row
assert csv_rows[2][0] == str(rows[2]["id"])
assert csv_rows[2][3] == "aws_foundational_security_aws"
assert csv_rows[2][5] == "2.0"
assert csv_rows[2][9] == "MANUAL"
@patch("tasks.jobs.scan.psycopg_connection")
def test_copy_compliance_requirement_rows_null_values(
self, mock_psycopg_connection, settings
):
"""Test COPY handles NULL/None values correctly in nullable fields."""
settings.DATABASES.setdefault("admin", settings.DATABASES["default"])
connection = MagicMock()
cursor = MagicMock()
cursor_context = MagicMock()
cursor_context.__enter__.return_value = cursor
cursor_context.__exit__.return_value = False
connection.cursor.return_value = cursor_context
connection.__enter__.return_value = connection
connection.__exit__.return_value = False
context_manager = MagicMock()
context_manager.__enter__.return_value = connection
context_manager.__exit__.return_value = False
mock_psycopg_connection.return_value = context_manager
captured = {}
def copy_side_effect(sql, file_obj):
captured["sql"] = sql
captured["data"] = file_obj.read()
cursor.copy_expert.side_effect = copy_side_effect
# Row with all nullable fields set to None/empty
row = {
"id": uuid.uuid4(),
"tenant_id": str(uuid.uuid4()),
"compliance_id": "test_framework",
"framework": "Test",
"version": None, # nullable
"description": None, # nullable
"region": "",
"requirement_id": "req-1",
"requirement_status": "PASS",
"passed_checks": 0,
"failed_checks": 0,
"total_checks": 0,
"scan_id": uuid.uuid4(),
}
with patch.object(MainRouter, "admin_db", "admin"):
_copy_compliance_requirement_rows(str(row["tenant_id"]), [row])
csv_rows = list(csv.reader(StringIO(captured["data"])))
assert len(csv_rows) == 1
# Validate that None values are converted to empty strings in CSV
assert csv_rows[0][5] == "" # version
assert csv_rows[0][6] == "" # description
@patch("tasks.jobs.scan.psycopg_connection")
def test_copy_compliance_requirement_rows_special_characters(
self, mock_psycopg_connection, settings
):
"""Test COPY correctly escapes special characters in CSV."""
settings.DATABASES.setdefault("admin", settings.DATABASES["default"])
connection = MagicMock()
cursor = MagicMock()
cursor_context = MagicMock()
cursor_context.__enter__.return_value = cursor
cursor_context.__exit__.return_value = False
connection.cursor.return_value = cursor_context
connection.__enter__.return_value = connection
connection.__exit__.return_value = False
context_manager = MagicMock()
context_manager.__enter__.return_value = connection
context_manager.__exit__.return_value = False
mock_psycopg_connection.return_value = context_manager
captured = {}
def copy_side_effect(sql, file_obj):
captured["sql"] = sql
captured["data"] = file_obj.read()
cursor.copy_expert.side_effect = copy_side_effect
# Row with special characters that need escaping
row = {
"id": uuid.uuid4(),
"tenant_id": str(uuid.uuid4()),
"compliance_id": 'framework"with"quotes',
"framework": "Framework,with,commas",
"version": "1.0",
"description": 'Description with "quotes", commas, and\nnewlines',
"region": "us-east-1",
"requirement_id": "req-1",
"requirement_status": "PASS",
"passed_checks": 1,
"failed_checks": 0,
"total_checks": 1,
"scan_id": uuid.uuid4(),
}
with patch.object(MainRouter, "admin_db", "admin"):
_copy_compliance_requirement_rows(str(row["tenant_id"]), [row])
# Verify CSV was generated (csv module handles escaping automatically)
csv_rows = list(csv.reader(StringIO(captured["data"])))
assert len(csv_rows) == 1
# Verify special characters are preserved after CSV parsing
assert csv_rows[0][3] == 'framework"with"quotes'
assert csv_rows[0][4] == "Framework,with,commas"
assert "quotes" in csv_rows[0][6]
assert "commas" in csv_rows[0][6]
@patch("tasks.jobs.scan.psycopg_connection")
def test_copy_compliance_requirement_rows_missing_inserted_at(
self, mock_psycopg_connection, settings
):
"""Test COPY uses current datetime when inserted_at is missing."""
settings.DATABASES.setdefault("admin", settings.DATABASES["default"])
connection = MagicMock()
cursor = MagicMock()
cursor_context = MagicMock()
cursor_context.__enter__.return_value = cursor
cursor_context.__exit__.return_value = False
connection.cursor.return_value = cursor_context
connection.__enter__.return_value = connection
connection.__exit__.return_value = False
context_manager = MagicMock()
context_manager.__enter__.return_value = connection
context_manager.__exit__.return_value = False
mock_psycopg_connection.return_value = context_manager
captured = {}
def copy_side_effect(sql, file_obj):
captured["sql"] = sql
captured["data"] = file_obj.read()
cursor.copy_expert.side_effect = copy_side_effect
# Row without inserted_at field
row = {
"id": uuid.uuid4(),
"tenant_id": str(uuid.uuid4()),
"compliance_id": "test_framework",
"framework": "Test",
"version": "1.0",
"description": "desc",
"region": "us-east-1",
"requirement_id": "req-1",
"requirement_status": "PASS",
"passed_checks": 1,
"failed_checks": 0,
"total_checks": 1,
"scan_id": uuid.uuid4(),
# Note: inserted_at is intentionally missing
}
before_call = datetime.now(timezone.utc)
with patch.object(MainRouter, "admin_db", "admin"):
_copy_compliance_requirement_rows(str(row["tenant_id"]), [row])
after_call = datetime.now(timezone.utc)
csv_rows = list(csv.reader(StringIO(captured["data"])))
assert len(csv_rows) == 1
# Verify inserted_at was auto-generated and is a valid ISO datetime
inserted_at_str = csv_rows[0][2]
inserted_at = datetime.fromisoformat(inserted_at_str)
assert before_call <= inserted_at <= after_call
@patch("tasks.jobs.scan.psycopg_connection")
def test_copy_compliance_requirement_rows_transaction_rollback_on_copy_error(
self, mock_psycopg_connection, settings
):
"""Test transaction is rolled back when copy_expert fails."""
settings.DATABASES.setdefault("admin", settings.DATABASES["default"])
connection = MagicMock()
cursor = MagicMock()
cursor_context = MagicMock()
cursor_context.__enter__.return_value = cursor
cursor_context.__exit__.return_value = False
connection.cursor.return_value = cursor_context
connection.__enter__.return_value = connection
connection.__exit__.return_value = False
context_manager = MagicMock()
context_manager.__enter__.return_value = connection
context_manager.__exit__.return_value = False
mock_psycopg_connection.return_value = context_manager
# Simulate copy_expert failure
cursor.copy_expert.side_effect = Exception("COPY command failed")
row = {
"id": uuid.uuid4(),
"tenant_id": str(uuid.uuid4()),
"compliance_id": "test",
"framework": "Test",
"version": "1.0",
"description": "desc",
"region": "us-east-1",
"requirement_id": "req-1",
"requirement_status": "PASS",
"passed_checks": 1,
"failed_checks": 0,
"total_checks": 1,
"scan_id": uuid.uuid4(),
}
with patch.object(MainRouter, "admin_db", "admin"):
with pytest.raises(Exception, match="COPY command failed"):
_copy_compliance_requirement_rows(str(row["tenant_id"]), [row])
# Verify rollback was called
connection.rollback.assert_called_once()
connection.commit.assert_not_called()
@patch("tasks.jobs.scan.psycopg_connection")
def test_copy_compliance_requirement_rows_transaction_rollback_on_set_config_error(
self, mock_psycopg_connection, settings
):
"""Test transaction is rolled back when SET_CONFIG fails."""
settings.DATABASES.setdefault("admin", settings.DATABASES["default"])
connection = MagicMock()
cursor = MagicMock()
cursor_context = MagicMock()
cursor_context.__enter__.return_value = cursor
cursor_context.__exit__.return_value = False
connection.cursor.return_value = cursor_context
connection.__enter__.return_value = connection
connection.__exit__.return_value = False
context_manager = MagicMock()
context_manager.__enter__.return_value = connection
context_manager.__exit__.return_value = False
mock_psycopg_connection.return_value = context_manager
# Simulate cursor.execute failure
cursor.execute.side_effect = Exception("SET prowler.tenant_id failed")
row = {
"id": uuid.uuid4(),
"tenant_id": str(uuid.uuid4()),
"compliance_id": "test",
"framework": "Test",
"version": "1.0",
"description": "desc",
"region": "us-east-1",
"requirement_id": "req-1",
"requirement_status": "PASS",
"passed_checks": 1,
"failed_checks": 0,
"total_checks": 1,
"scan_id": uuid.uuid4(),
}
with patch.object(MainRouter, "admin_db", "admin"):
with pytest.raises(Exception, match="SET prowler.tenant_id failed"):
_copy_compliance_requirement_rows(str(row["tenant_id"]), [row])
# Verify rollback was called
connection.rollback.assert_called_once()
connection.commit.assert_not_called()
@patch("tasks.jobs.scan.psycopg_connection")
def test_copy_compliance_requirement_rows_commit_on_success(
self, mock_psycopg_connection, settings
):
"""Test transaction is committed on successful COPY."""
settings.DATABASES.setdefault("admin", settings.DATABASES["default"])
connection = MagicMock()
cursor = MagicMock()
cursor_context = MagicMock()
cursor_context.__enter__.return_value = cursor
cursor_context.__exit__.return_value = False
connection.cursor.return_value = cursor_context
connection.__enter__.return_value = connection
connection.__exit__.return_value = False
context_manager = MagicMock()
context_manager.__enter__.return_value = connection
context_manager.__exit__.return_value = False
mock_psycopg_connection.return_value = context_manager
cursor.copy_expert.return_value = None # Success
row = {
"id": uuid.uuid4(),
"tenant_id": str(uuid.uuid4()),
"compliance_id": "test",
"framework": "Test",
"version": "1.0",
"description": "desc",
"region": "us-east-1",
"requirement_id": "req-1",
"requirement_status": "PASS",
"passed_checks": 1,
"failed_checks": 0,
"total_checks": 1,
"scan_id": uuid.uuid4(),
}
with patch.object(MainRouter, "admin_db", "admin"):
_copy_compliance_requirement_rows(str(row["tenant_id"]), [row])
# Verify commit was called and rollback was not
connection.commit.assert_called_once()
connection.rollback.assert_not_called()
# Verify autocommit was disabled
assert connection.autocommit is False
@patch("tasks.jobs.scan._copy_compliance_requirement_rows")
def test_persist_compliance_requirement_rows_success(self, mock_copy):
"""Test successful COPY path without fallback to ORM."""
mock_copy.return_value = None # Success, no exception
tenant_id = str(uuid.uuid4())
rows = [
{
"id": uuid.uuid4(),
"tenant_id": tenant_id,
"inserted_at": datetime.now(timezone.utc),
"compliance_id": "test",
"framework": "Test",
"version": "1.0",
"description": "desc",
"region": "us-east-1",
"requirement_id": "req-1",
"requirement_status": "PASS",
"passed_checks": 1,
"failed_checks": 0,
"total_checks": 1,
"scan_id": uuid.uuid4(),
}
]
_persist_compliance_requirement_rows(tenant_id, rows)
# Verify COPY was called
mock_copy.assert_called_once_with(tenant_id, rows)
@patch("tasks.jobs.scan.logger")
@patch("tasks.jobs.scan.ComplianceRequirementOverview.objects.bulk_create")
@patch("tasks.jobs.scan.rls_transaction")
@patch(
"tasks.jobs.scan._copy_compliance_requirement_rows",
side_effect=Exception("COPY failed"),
)
def test_persist_compliance_requirement_rows_fallback_logging(
self, mock_copy, mock_rls_transaction, mock_bulk_create, mock_logger
):
"""Test logger.exception is called when COPY fails and fallback occurs."""
tenant_id = str(uuid.uuid4())
row = {
"id": uuid.uuid4(),
"tenant_id": tenant_id,
"inserted_at": datetime.now(timezone.utc),
"compliance_id": "test",
"framework": "Test",
"version": "1.0",
"description": "desc",
"region": "us-east-1",
"requirement_id": "req-1",
"requirement_status": "PASS",
"passed_checks": 1,
"failed_checks": 0,
"total_checks": 1,
"scan_id": uuid.uuid4(),
}
ctx = MagicMock()
ctx.__enter__.return_value = None
ctx.__exit__.return_value = False
mock_rls_transaction.return_value = ctx
_persist_compliance_requirement_rows(tenant_id, [row])
# Verify logger.exception was called
mock_logger.exception.assert_called_once()
args, kwargs = mock_logger.exception.call_args
assert "COPY bulk insert" in args[0]
assert "falling back to ORM" in args[0]
assert kwargs.get("exc_info") is not None
@patch("tasks.jobs.scan.ComplianceRequirementOverview.objects.bulk_create")
@patch("tasks.jobs.scan.rls_transaction")
@patch(
"tasks.jobs.scan._copy_compliance_requirement_rows",
side_effect=Exception("copy failed"),
)
def test_persist_compliance_requirement_rows_fallback_multiple_rows(
self, mock_copy, mock_rls_transaction, mock_bulk_create
):
"""Test ORM fallback with multiple rows."""
tenant_id = str(uuid.uuid4())
scan_id = uuid.uuid4()
inserted_at = datetime.now(timezone.utc)
rows = [
{
"id": uuid.uuid4(),
"tenant_id": tenant_id,
"inserted_at": inserted_at,
"compliance_id": "cisa_aws",
"framework": "CISA",
"version": "1.0",
"description": "First requirement",
"region": "us-east-1",
"requirement_id": "req-1",
"requirement_status": "PASS",
"passed_checks": 5,
"failed_checks": 0,
"total_checks": 5,
"scan_id": scan_id,
},
{
"id": uuid.uuid4(),
"tenant_id": tenant_id,
"inserted_at": inserted_at,
"compliance_id": "cisa_aws",
"framework": "CISA",
"version": "1.0",
"description": "Second requirement",
"region": "us-west-2",
"requirement_id": "req-2",
"requirement_status": "FAIL",
"passed_checks": 2,
"failed_checks": 3,
"total_checks": 5,
"scan_id": scan_id,
},
]
ctx = MagicMock()
ctx.__enter__.return_value = None
ctx.__exit__.return_value = False
mock_rls_transaction.return_value = ctx
_persist_compliance_requirement_rows(tenant_id, rows)
mock_copy.assert_called_once_with(tenant_id, rows)
mock_rls_transaction.assert_called_once_with(tenant_id)
mock_bulk_create.assert_called_once()
args, kwargs = mock_bulk_create.call_args
objects = args[0]
assert len(objects) == 2
assert kwargs["batch_size"] == 500
# Validate first object
assert objects[0].id == rows[0]["id"]
assert objects[0].tenant_id == rows[0]["tenant_id"]
assert objects[0].compliance_id == rows[0]["compliance_id"]
assert objects[0].framework == rows[0]["framework"]
assert objects[0].region == rows[0]["region"]
assert objects[0].passed_checks == 5
assert objects[0].failed_checks == 0
# Validate second object
assert objects[1].id == rows[1]["id"]
assert objects[1].requirement_id == rows[1]["requirement_id"]
assert objects[1].requirement_status == rows[1]["requirement_status"]
assert objects[1].passed_checks == 2
assert objects[1].failed_checks == 3
@patch("tasks.jobs.scan.ComplianceRequirementOverview.objects.bulk_create")
@patch("tasks.jobs.scan.rls_transaction")
@patch(
"tasks.jobs.scan._copy_compliance_requirement_rows",
side_effect=Exception("copy failed"),
)
def test_persist_compliance_requirement_rows_fallback_all_fields(
self, mock_copy, mock_rls_transaction, mock_bulk_create
):
"""Test ORM fallback correctly maps all fields from row dict to model."""
tenant_id = str(uuid.uuid4())
row_id = uuid.uuid4()
scan_id = uuid.uuid4()
inserted_at = datetime.now(timezone.utc)
row = {
"id": row_id,
"tenant_id": tenant_id,
"inserted_at": inserted_at,
"compliance_id": "aws_foundational_security_aws",
"framework": "AWS-Foundational-Security-Best-Practices",
"version": "2.0",
"description": "Ensure MFA is enabled",
"region": "eu-west-1",
"requirement_id": "iam.1",
"requirement_status": "FAIL",
"passed_checks": 10,
"failed_checks": 5,
"total_checks": 15,
"scan_id": scan_id,
}
ctx = MagicMock()
ctx.__enter__.return_value = None
ctx.__exit__.return_value = False
mock_rls_transaction.return_value = ctx
_persist_compliance_requirement_rows(tenant_id, [row])
args, kwargs = mock_bulk_create.call_args
objects = args[0]
assert len(objects) == 1
obj = objects[0]
# Validate ALL fields are correctly mapped
assert obj.id == row_id
assert obj.tenant_id == tenant_id
assert obj.inserted_at == inserted_at
assert obj.compliance_id == "aws_foundational_security_aws"
assert obj.framework == "AWS-Foundational-Security-Best-Practices"
assert obj.version == "2.0"
assert obj.description == "Ensure MFA is enabled"
assert obj.region == "eu-west-1"
assert obj.requirement_id == "iam.1"
assert obj.requirement_status == "FAIL"
assert obj.passed_checks == 10
assert obj.failed_checks == 5
assert obj.total_checks == 15
assert obj.scan_id == scan_id
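The fallback behavior these tests exercise can be read as a short sketch. The following is inferred only from the patched targets and assertions above (the helper names, the RLS-scoped transaction, the batch size of 500, and the log message all come from the tests); it is not the actual implementation in `tasks.jobs.scan`:

```python
from tasks.jobs.scan import (  # names assumed from the patched targets above
    ComplianceRequirementOverview,
    _copy_compliance_requirement_rows,
    logger,
    rls_transaction,
)


def _persist_compliance_requirement_rows_sketch(tenant_id, rows):
    """Hypothetical sketch: try the fast COPY path, fall back to the ORM."""
    try:
        # Fast path: COPY-based bulk insert (patched in the tests above).
        _copy_compliance_requirement_rows(tenant_id, rows)
    except Exception as error:
        # The tests assert this message and a non-empty exc_info kwarg.
        logger.exception(
            "COPY bulk insert failed, falling back to ORM", exc_info=error
        )
        # ORM fallback inside an RLS-scoped transaction, batched at 500 rows.
        with rls_transaction(tenant_id):
            ComplianceRequirementOverview.objects.bulk_create(
                [ComplianceRequirementOverview(**row) for row in rows],
                batch_size=500,
            )
```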
+127 -59
@@ -98,7 +98,11 @@ class TestGenerateOutputs:
),
patch(
"tasks.tasks._generate_output_directory",
return_value=("out-dir", "comp-dir"),
return_value=(
"/tmp/test/out-dir",
"/tmp/test/comp-dir",
"/tmp/test/threat-dir",
),
),
patch("tasks.tasks.Scan.all_objects.filter") as mock_scan_update,
patch("tasks.tasks.rmtree"),
@@ -126,7 +130,8 @@ class TestGenerateOutputs:
patch("tasks.tasks.get_compliance_frameworks"),
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory", return_value=("out", "comp")
"tasks.tasks._generate_output_directory",
return_value=("/tmp/test/out", "/tmp/test/comp", "/tmp/test/threat"),
),
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
patch("tasks.tasks.FindingOutput.transform_api_finding"),
@@ -168,15 +173,35 @@ class TestGenerateOutputs:
mock_finding_output = MagicMock()
mock_finding_output.compliance = {"cis": ["requirement-1", "requirement-2"]}
html_writer_mock = MagicMock()
html_writer_mock._data = []
html_writer_mock.close_file = False
html_writer_mock.transform = MagicMock()
html_writer_mock.batch_write_data_to_file = MagicMock()
compliance_writer_mock = MagicMock()
compliance_writer_mock._data = []
compliance_writer_mock.close_file = False
compliance_writer_mock.transform = MagicMock()
compliance_writer_mock.batch_write_data_to_file = MagicMock()
# Create a mock class that returns our mock instance when called
mock_compliance_class = MagicMock(return_value=compliance_writer_mock)
mock_provider = MagicMock()
mock_provider.provider = "aws"
mock_provider.uid = "test-provider-uid"
with (
patch("tasks.tasks.ScanSummary.objects.filter") as mock_filter,
patch("tasks.tasks.Provider.objects.get"),
patch("tasks.tasks.Provider.objects.get", return_value=mock_provider),
patch("tasks.tasks.initialize_prowler_provider"),
patch("tasks.tasks.Compliance.get_bulk", return_value={"cis": MagicMock()}),
patch("tasks.tasks.get_compliance_frameworks", return_value=["cis"]),
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory", return_value=("out", "comp")
"tasks.tasks._generate_output_directory",
return_value=("/tmp/test/out", "/tmp/test/comp", "/tmp/test/threat"),
),
patch(
"tasks.tasks.FindingOutput._transform_findings_stats",
@@ -190,6 +215,20 @@ class TestGenerateOutputs:
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/f.zip"),
patch("tasks.tasks.Scan.all_objects.filter"),
patch("tasks.tasks.rmtree"),
patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"html": {
"class": lambda *args, **kwargs: html_writer_mock,
"suffix": ".html",
"kwargs": {},
}
},
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, mock_compliance_class)]},
),
):
mock_filter.return_value.exists.return_value = True
mock_findings.return_value.order_by.return_value.iterator.return_value = [
@@ -197,29 +236,12 @@ class TestGenerateOutputs:
True,
]
html_writer_mock = MagicMock()
with (
patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"html": {
"class": lambda *args, **kwargs: html_writer_mock,
"suffix": ".html",
"kwargs": {},
}
},
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, MagicMock())]},
),
):
generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
html_writer_mock.batch_write_data_to_file.assert_called_once()
generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
html_writer_mock.batch_write_data_to_file.assert_called_once()
def test_transform_called_only_on_second_batch(self):
raw1 = MagicMock()
@@ -256,7 +278,11 @@ class TestGenerateOutputs:
),
patch(
"tasks.tasks._generate_output_directory",
return_value=("outdir", "compdir"),
return_value=(
"/tmp/test/outdir",
"/tmp/test/compdir",
"/tmp/test/threatdir",
),
),
patch("tasks.tasks._compress_output_files", return_value="outdir.zip"),
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/outdir.zip"),
@@ -303,12 +329,14 @@ class TestGenerateOutputs:
def __init__(self, *args, **kwargs):
self.transform_calls = []
self._data = []
self.close_file = False
writer_instances.append(self)
def transform(self, fos, comp_obj, name):
self.transform_calls.append((fos, comp_obj, name))
def batch_write_data_to_file(self):
# Mock implementation - do nothing
pass
two_batches = [
@@ -329,7 +357,11 @@ class TestGenerateOutputs:
patch("tasks.tasks.get_compliance_frameworks", return_value=["cis"]),
patch(
"tasks.tasks._generate_output_directory",
return_value=("outdir", "compdir"),
return_value=(
"/tmp/test/outdir",
"/tmp/test/compdir",
"/tmp/test/threatdir",
),
),
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
patch(
@@ -368,15 +400,35 @@ class TestGenerateOutputs:
mock_finding_output = MagicMock()
mock_finding_output.compliance = {"cis": ["requirement-1", "requirement-2"]}
json_writer_mock = MagicMock()
json_writer_mock._data = []
json_writer_mock.close_file = False
json_writer_mock.transform = MagicMock()
json_writer_mock.batch_write_data_to_file = MagicMock()
compliance_writer_mock = MagicMock()
compliance_writer_mock._data = []
compliance_writer_mock.close_file = False
compliance_writer_mock.transform = MagicMock()
compliance_writer_mock.batch_write_data_to_file = MagicMock()
# Create a mock class that returns our mock instance when called
mock_compliance_class = MagicMock(return_value=compliance_writer_mock)
mock_provider = MagicMock()
mock_provider.provider = "aws"
mock_provider.uid = "test-provider-uid"
with (
patch("tasks.tasks.ScanSummary.objects.filter") as mock_filter,
patch("tasks.tasks.Provider.objects.get"),
patch("tasks.tasks.Provider.objects.get", return_value=mock_provider),
patch("tasks.tasks.initialize_prowler_provider"),
patch("tasks.tasks.Compliance.get_bulk", return_value={"cis": MagicMock()}),
patch("tasks.tasks.get_compliance_frameworks", return_value=["cis"]),
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory", return_value=("out", "comp")
"tasks.tasks._generate_output_directory",
return_value=("/tmp/test/out", "/tmp/test/comp", "/tmp/test/threat"),
),
patch(
"tasks.tasks.FindingOutput._transform_findings_stats",
@@ -390,6 +442,20 @@ class TestGenerateOutputs:
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/file.zip"),
patch("tasks.tasks.Scan.all_objects.filter"),
patch("tasks.tasks.rmtree", side_effect=Exception("Test deletion error")),
patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"json": {
"class": lambda *args, **kwargs: json_writer_mock,
"suffix": ".json",
"kwargs": {},
}
},
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, mock_compliance_class)]},
),
):
mock_filter.return_value.exists.return_value = True
mock_findings.return_value.order_by.return_value.iterator.return_value = [
@@ -397,29 +463,13 @@ class TestGenerateOutputs:
True,
]
with (
patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"json": {
"class": lambda *args, **kwargs: MagicMock(),
"suffix": ".json",
"kwargs": {},
}
},
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, MagicMock())]},
),
):
with caplog.at_level("ERROR"):
generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
assert "Error deleting output files" in caplog.text
with caplog.at_level("ERROR"):
generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
assert "Error deleting output files" in caplog.text
@patch("tasks.tasks.rls_transaction")
@patch("tasks.tasks.Integration.objects.filter")
@@ -435,7 +485,8 @@ class TestGenerateOutputs:
patch("tasks.tasks.get_compliance_frameworks", return_value=[]),
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory", return_value=("out", "comp")
"tasks.tasks._generate_output_directory",
return_value=("/tmp/test/out", "/tmp/test/comp", "/tmp/test/threat"),
),
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
patch("tasks.tasks.FindingOutput.transform_api_finding"),
@@ -476,8 +527,15 @@ class TestScanCompleteTasks:
@patch("tasks.tasks.create_compliance_requirements_task.apply_async")
@patch("tasks.tasks.perform_scan_summary_task.si")
@patch("tasks.tasks.generate_outputs_task.si")
@patch("tasks.tasks.generate_threatscore_report_task.si")
@patch("tasks.tasks.check_integrations_task.si")
def test_scan_complete_tasks(
self, mock_outputs_task, mock_scan_summary_task, mock_compliance_tasks
self,
mock_check_integrations_task,
mock_threatscore_task,
mock_outputs_task,
mock_scan_summary_task,
mock_compliance_tasks,
):
_perform_scan_complete_tasks("tenant-id", "scan-id", "provider-id")
mock_compliance_tasks.assert_called_once_with(
@@ -492,6 +550,16 @@ class TestScanCompleteTasks:
provider_id="provider-id",
tenant_id="tenant-id",
)
mock_threatscore_task.assert_called_once_with(
tenant_id="tenant-id",
scan_id="scan-id",
provider_id="provider-id",
)
mock_check_integrations_task.assert_called_once_with(
tenant_id="tenant-id",
provider_id="provider-id",
scan_id="scan-id",
)
@pytest.mark.django_db
@@ -662,7 +730,7 @@ class TestCheckIntegrationsTask:
mock_initialize_provider.return_value = MagicMock()
mock_compliance_bulk.return_value = {}
mock_get_frameworks.return_value = []
mock_generate_dir.return_value = ("out-dir", "comp-dir")
mock_generate_dir.return_value = ("out-dir", "comp-dir", "threat-dir")
mock_transform_stats.return_value = {"stats": "data"}
# Mock findings
@@ -787,7 +855,7 @@ class TestCheckIntegrationsTask:
mock_initialize_provider.return_value = MagicMock()
mock_compliance_bulk.return_value = {}
mock_get_frameworks.return_value = []
mock_generate_dir.return_value = ("out-dir", "comp-dir")
mock_generate_dir.return_value = ("out-dir", "comp-dir", "threat-dir")
mock_transform_stats.return_value = {"stats": "data"}
# Mock findings
@@ -903,7 +971,7 @@ class TestCheckIntegrationsTask:
mock_initialize_provider.return_value = MagicMock()
mock_compliance_bulk.return_value = {}
mock_get_frameworks.return_value = []
mock_generate_dir.return_value = ("out-dir", "comp-dir")
mock_generate_dir.return_value = ("out-dir", "comp-dir", "threat-dir")
mock_transform_stats.return_value = {"stats": "data"}
# Mock findings
+43
@@ -0,0 +1,43 @@
import warnings
from dashboard.common_methods import get_section_containers_3_levels
warnings.filterwarnings("ignore")
def get_table(data):
data["REQUIREMENTS_DESCRIPTION"] = (
data["REQUIREMENTS_ID"] + " - " + data["REQUIREMENTS_DESCRIPTION"]
)
data["REQUIREMENTS_DESCRIPTION"] = data["REQUIREMENTS_DESCRIPTION"].apply(
lambda x: x[:150] + "..." if len(str(x)) > 150 else x
)
data["REQUIREMENTS_ATTRIBUTES_SECTION"] = data[
"REQUIREMENTS_ATTRIBUTES_SECTION"
].apply(lambda x: x[:80] + "..." if len(str(x)) > 80 else x)
data["REQUIREMENTS_ATTRIBUTES_SUBSECTION"] = data[
"REQUIREMENTS_ATTRIBUTES_SUBSECTION"
].apply(lambda x: x[:150] + "..." if len(str(x)) > 150 else x)
aux = data[
[
"REQUIREMENTS_DESCRIPTION",
"REQUIREMENTS_ATTRIBUTES_SECTION",
"REQUIREMENTS_ATTRIBUTES_SUBSECTION",
"CHECKID",
"STATUS",
"REGION",
"ACCOUNTID",
"RESOURCEID",
]
]
return get_section_containers_3_levels(
aux,
"REQUIREMENTS_ATTRIBUTES_SECTION",
"REQUIREMENTS_ATTRIBUTES_SUBSECTION",
"REQUIREMENTS_DESCRIPTION",
)
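For orientation, the helper above expects a findings DataFrame carrying the columns it selects before building the three-level section containers. A minimal, hypothetical usage sketch follows; only the column names are taken from the code above, the values are illustrative:

```python
import pandas as pd

# Illustrative single-row findings frame with the columns get_table() reads.
sample = pd.DataFrame(
    [
        {
            "REQUIREMENTS_ID": "1.1",
            "REQUIREMENTS_DESCRIPTION": "Ensure MFA is enabled for all users",
            "REQUIREMENTS_ATTRIBUTES_SECTION": "Identity and Access Management",
            "REQUIREMENTS_ATTRIBUTES_SUBSECTION": "User Accounts",
            "CHECKID": "iam_user_mfa_enabled_console_access",
            "STATUS": "PASS",
            "REGION": "us-east-1",
            "ACCOUNTID": "123456789012",
            "RESOURCEID": "arn:aws:iam::123456789012:user/example",
        }
    ]
)

# Builds the nested section containers rendered by the compliance dashboard.
containers = get_table(sample)
```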
+36
@@ -0,0 +1,36 @@
import warnings
from dashboard.common_methods import get_section_containers_3_levels
warnings.filterwarnings("ignore")
def get_table(data):
data["REQUIREMENTS_ID"] = (
data["REQUIREMENTS_ID"] + " - " + data["REQUIREMENTS_DESCRIPTION"]
)
data["REQUIREMENTS_ID"] = data["REQUIREMENTS_ID"].apply(
lambda x: x[:150] + "..." if len(str(x)) > 150 else x
)
aux = data[
[
"REQUIREMENTS_ID",
"REQUIREMENTS_ATTRIBUTES_SECTION",
"REQUIREMENTS_ATTRIBUTES_SUBSECTION",
"CHECKID",
"STATUS",
"REGION",
"ACCOUNTID",
"RESOURCEID",
]
]
return get_section_containers_3_levels(
aux,
"REQUIREMENTS_ATTRIBUTES_SECTION",
"REQUIREMENTS_ATTRIBUTES_SUBSECTION",
"REQUIREMENTS_ID",
)
+36
@@ -0,0 +1,36 @@
import warnings
from dashboard.common_methods import get_section_containers_3_levels
warnings.filterwarnings("ignore")
def get_table(data):
data["REQUIREMENTS_ID"] = (
data["REQUIREMENTS_ID"] + " - " + data["REQUIREMENTS_DESCRIPTION"]
)
data["REQUIREMENTS_ID"] = data["REQUIREMENTS_ID"].apply(
lambda x: x[:150] + "..." if len(str(x)) > 150 else x
)
aux = data[
[
"REQUIREMENTS_ID",
"REQUIREMENTS_ATTRIBUTES_SECTION",
"REQUIREMENTS_ATTRIBUTES_SUBSECTION",
"CHECKID",
"STATUS",
"REGION",
"ACCOUNTID",
"RESOURCEID",
]
]
return get_section_containers_3_levels(
aux,
"REQUIREMENTS_ATTRIBUTES_SECTION",
"REQUIREMENTS_ATTRIBUTES_SUBSECTION",
"REQUIREMENTS_ID",
)
+36
@@ -0,0 +1,36 @@
import warnings
from dashboard.common_methods import get_section_containers_3_levels
warnings.filterwarnings("ignore")
def get_table(data):
data["REQUIREMENTS_ID"] = (
data["REQUIREMENTS_ID"] + " - " + data["REQUIREMENTS_DESCRIPTION"]
)
data["REQUIREMENTS_ID"] = data["REQUIREMENTS_ID"].apply(
lambda x: x[:150] + "..." if len(str(x)) > 150 else x
)
aux = data[
[
"REQUIREMENTS_ID",
"REQUIREMENTS_ATTRIBUTES_SECTION",
"REQUIREMENTS_ATTRIBUTES_SUBSECTION",
"CHECKID",
"STATUS",
"REGION",
"ACCOUNTID",
"RESOURCEID",
]
]
return get_section_containers_3_levels(
aux,
"REQUIREMENTS_ATTRIBUTES_SECTION",
"REQUIREMENTS_ATTRIBUTES_SUBSECTION",
"REQUIREMENTS_ID",
)
+41
@@ -0,0 +1,41 @@
import warnings
from dashboard.common_methods import get_section_containers_cis
warnings.filterwarnings("ignore")
def get_table(data):
"""
Generate CIS OCI Foundations Benchmark v3.0 compliance table.
Args:
data: DataFrame containing compliance check results with columns:
- REQUIREMENTS_ID: CIS requirement ID (e.g., "1.1", "2.1")
- REQUIREMENTS_DESCRIPTION: Description of the requirement
- REQUIREMENTS_ATTRIBUTES_SECTION: CIS section name
- CHECKID: Prowler check identifier
- STATUS: Check status (PASS/FAIL)
- REGION: OCI region
- TENANCYID: OCI tenancy OCID
- RESOURCEID: Resource OCID or identifier
Returns:
Section containers organized by CIS sections for dashboard display
"""
aux = data[
[
"REQUIREMENTS_ID",
"REQUIREMENTS_DESCRIPTION",
"REQUIREMENTS_ATTRIBUTES_SECTION",
"CHECKID",
"STATUS",
"REGION",
"TENANCYID",
"RESOURCEID",
]
].copy()
return get_section_containers_cis(
aux, "REQUIREMENTS_ID", "REQUIREMENTS_ATTRIBUTES_SECTION"
)
+4
@@ -0,0 +1,4 @@
.idea/
.git/
.claude/
AGENTS.md
+45
@@ -0,0 +1,45 @@
# Prowler Documentation
This repository contains the Prowler Open Source documentation powered by [Mintlify](https://mintlify.com).
## Documentation Structure
- **Getting Started**: Overview, installation, and basic usage guides
- **User Guide**: Comprehensive guides for Prowler App, CLI, providers, and compliance
- **Developer Guide**: Technical documentation for developers contributing to Prowler
## Local Development
Install the [Mintlify CLI](https://www.npmjs.com/package/mint) to preview documentation changes locally:
```bash
npm i -g mint
```
Run the following command at the root of your documentation (where `mint.json` is located):
```bash
mint dev
```
View your local preview at `http://localhost:3000`.
## Publishing Changes
Changes pushed to the main branch are automatically deployed to production through Mintlify's GitHub integration.
## Documentation Guidelines
When contributing to the documentation, please follow the Prowler documentation style guide located in the `.claude` directory.
## Troubleshooting
- If your dev environment isn't running: Run `mint update` to ensure you have the most recent version of the CLI.
- If a page loads as a 404: Make sure you are running in a folder with a valid `mint.json` file and that the page path is correctly listed in the navigation.
## Resources
- [Prowler GitHub Repository](https://github.com/prowler-cloud/prowler)
- [Prowler Documentation](https://docs.prowler.com/)
- [Mintlify Documentation](https://mintlify.com/docs)
- [Mintlify Community](https://mintlify.com/community)
-61
@@ -1,61 +0,0 @@
## Access Prowler App
After [installation](../installation/prowler-app.md), navigate to [http://localhost:3000](http://localhost:3000) and sign up with email and password.
<img src="../../img/sign-up-button.png" alt="Sign Up Button" width="320"/>
<img src="../../img/sign-up.png" alt="Sign Up" width="285"/>
???+ note "User creation and default tenant behavior"
When creating a new user, the behavior depends on whether an invitation is provided:
- **Without an invitation**:
- A new tenant is automatically created.
- The new user is assigned to this tenant.
- A set of **RBAC admin permissions** is generated and assigned to the user for the newly-created tenant.
- **With an invitation**: The user is added to the specified tenant with the permissions defined in the invitation.
This mechanism ensures that the first user in a newly created tenant has administrative permissions within that tenant.
## Log In
Access Prowler App by logging in with **email and password**.
<img src="../../img/log-in.png" alt="Log In" width="285"/>
## Add Cloud Provider
Configure a cloud provider for scanning:
1. Navigate to `Settings > Cloud Providers` and click `Add Account`.
2. Select the cloud provider.
3. Enter the provider's identifier (Optional: Add an alias):
- **AWS**: Account ID
- **GCP**: Project ID
- **Azure**: Subscription ID
- **Kubernetes**: Cluster ID
- **M365**: Domain ID
4. Follow the guided instructions to add and authenticate your credentials.
## Start a Scan
Once credentials are successfully added and validated, Prowler initiates a scan of your cloud environment.
Click `Go to Scans` to monitor progress.
## View Results
Review findings during scan execution in the following sections:
- **Overview** Provides a high-level summary of your scans.
<img src="../../products/img/overview.png" alt="Overview" width="700"/>
- **Compliance** Displays compliance insights based on security frameworks.
<img src="../../img/compliance.png" alt="Compliance" width="700"/>
> For detailed usage instructions, refer to the [Prowler App Guide](../tutorials/prowler-app.md).
???+ note
Prowler will automatically scan all configured providers every **24 hours**, ensuring your cloud environment stays continuously monitored.
+3 -1
@@ -1,4 +1,6 @@
# Contact Us
---
title: 'Contact Us'
---
For technical support or any other inquiries, you are very welcome to:
@@ -1,12 +1,14 @@
# AWS Provider
---
title: 'AWS Provider'
---
On this page you can find all the details about the [Amazon Web Services (AWS)](https://aws.amazon.com/) provider implementation in Prowler.
By default, Prowler will audit just one account and organization settings per scan. To configure it, follow the [AWS getting started guide](../tutorials/aws/getting-started-aws.md).
By default, Prowler will audit just one account and organization settings per scan. To configure it, follow the [AWS getting started guide](/user-guide/providers/aws/getting-started-aws).
## AWS Provider Classes Architecture
The AWS provider implementation follows the general [Provider structure](./provider.md). This section focuses on the AWS-specific implementation, highlighting how the generic provider concepts are realized for AWS in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see [Provider documentation](./provider.md). In next subsection you can find a list of the main classes of the AWS provider.
The AWS provider implementation follows the general [Provider structure](/developer-guide/provider). This section focuses on the AWS-specific implementation, highlighting how the generic provider concepts are realized for AWS in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see [Provider documentation](/developer-guide/provider). In the next subsection you can find a list of the main classes of the AWS provider.
### `AwsProvider` (Main Class)
@@ -33,7 +35,7 @@ The AWS provider implementation follows the general [Provider structure](./provi
### `AWSService` (Service Base Class)
- **Location:** [`prowler/providers/aws/lib/service/service.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/lib/service/service.py)
- **Purpose:** Abstract base class that all AWS service-specific classes inherit from. This implements the generic service pattern (described in [service page](./services.md#service-base-class)) specifically for AWS.
- **Purpose:** Abstract base class that all AWS service-specific classes inherit from. This implements the generic service pattern (described in [service page](/developer-guide/services#service-base-class)) specifically for AWS.
- **Key AWS Responsibilities:**
- Receives an `AwsProvider` instance to access session, identity, and configuration.
- Manages clients for all services by regions.
@@ -52,12 +54,12 @@ The AWS provider implementation follows the general [Provider structure](./provi
## Specific Patterns in AWS Services
The generic service pattern is described in [service page](./services.md#service-structure-and-initialisation). You can find all the right now implemented services in the following locations:
The generic service pattern is described in [service page](/developer-guide/services#service-structure-and-initialisation). You can find all the right now implemented services in the following locations:
- Directly in the code, in location [`prowler/providers/aws/services/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/aws/services)
- In the [Prowler Hub](https://hub.prowler.com/). For a more human-readable view.
The best reference to understand how to implement a new service is following the [service implementation documentation](./services.md#adding-a-new-service) and taking other services already implemented as reference. In next subsection you can find a list of common patterns that are used accross all AWS services.
The best reference to understand how to implement a new service is following the [service implementation documentation](/developer-guide/services#adding-a-new-service) and taking other services already implemented as reference. In the next subsection you can find a list of common patterns that are used across all AWS services.
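To make the service pattern concrete, a simplified, hypothetical skeleton is sketched below. It mirrors the shape of existing AWS services (inherit from `AWSService`, let the base class handle the session and regions, then collect resources per regional client); the service name and methods are illustrative, and the exact base-class signature should be confirmed against services already in the repository:

```python
from prowler.providers.aws.lib.service.service import AWSService


class ExampleService(AWSService):
    """Illustrative skeleton only; not an actual Prowler service."""

    def __init__(self, provider):
        # The base class stores the session, identity, and audited regions.
        super().__init__(__class__.__name__, provider)
        self.resources = []
        # Existing services fan out one call per regional client.
        self.__threading_call__(self._describe_resources)

    def _describe_resources(self, regional_client):
        # Populate self.resources from the AWS API for a single region.
        ...
```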
### AWS Service Common Patterns
@@ -74,12 +76,12 @@ The best reference to understand how to implement a new service is following the
## Specific Patterns in AWS Checks
The AWS checks pattern is described in [checks page](./checks.md). You can find all the right now implemented checks:
The AWS checks pattern is described in [checks page](/developer-guide/checks). You can find all the right now implemented checks:
- Directly in the code, within each service folder, each check has its own folder named after the name of the check. (e.g. [`prowler/providers/aws/services/s3/s3_bucket_acl_prohibited/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/aws/services/s3/s3_bucket_acl_prohibited))
- In the [Prowler Hub](https://hub.prowler.com/). For a more human-readable view.
The best reference to understand how to implement a new check is following the [check creation documentation](./checks.md#creating-a-check) and taking other similar checks as reference.
The best reference to understand how to implement a new check is following the [check creation documentation](/developer-guide/checks#creating-a-check) and taking other similar checks as reference.
### Check Report Class
@@ -1,12 +1,14 @@
# Azure Provider
---
title: 'Azure Provider'
---
On this page you can find all the details about the [Microsoft Azure](https://azure.microsoft.com/) provider implementation in Prowler.
By default, Prowler will audit all the subscriptions that it is able to list in the Microsoft Entra tenant, and tenant Entra ID service. To configure it, follow the [Azure getting started guide](../tutorials/azure/getting-started-azure.md).
By default, Prowler will audit all the subscriptions that it is able to list in the Microsoft Entra tenant, as well as the tenant's Entra ID service. To configure it, follow the [Azure getting started guide](/user-guide/providers/azure/getting-started-azure).
## Azure Provider Classes Architecture
The Azure provider implementation follows the general [Provider structure](./provider.md). This section focuses on the Azure-specific implementation, highlighting how the generic provider concepts are realized for Azure in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see [Provider documentation](./provider.md). In next subsection you can find a list of the main classes of the Azure provider.
The Azure provider implementation follows the general [Provider structure](/developer-guide/provider). This section focuses on the Azure-specific implementation, highlighting how the generic provider concepts are realized for Azure in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see [Provider documentation](/developer-guide/provider). In the next subsection you can find a list of the main classes of the Azure provider.
### `AzureProvider` (Main Class)
@@ -32,7 +34,7 @@ The Azure provider implementation follows the general [Provider structure](./pro
### `AzureService` (Service Base Class)
- **Location:** [`prowler/providers/azure/lib/service/service.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/azure/lib/service/service.py)
- **Purpose:** Abstract base class that all Azure service-specific classes inherit from. This implements the generic service pattern (described in [service page](./services.md#service-base-class)) specifically for Azure.
- **Purpose:** Abstract base class that all Azure service-specific classes inherit from. This implements the generic service pattern (described in [service page](/developer-guide/services#service-base-class)) specifically for Azure.
- **Key Azure Responsibilities:**
- Receives an `AzureProvider` instance to access session, identity, and configuration.
- Manages clients for all services by subscription.
@@ -50,12 +52,12 @@ The Azure provider implementation follows the general [Provider structure](./pro
## Specific Patterns in Azure Services
The generic service pattern is described in [service page](./services.md#service-structure-and-initialisation). You can find all the currently implemented services in the following locations:
The generic service pattern is described in [service page](/developer-guide/services#service-structure-and-initialisation). You can find all the currently implemented services in the following locations:
- Directly in the code, in location [`prowler/providers/azure/services/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/azure/services)
- In the [Prowler Hub](https://hub.prowler.com/) for a more human-readable view.
The best reference to understand how to implement a new service is following the [service implementation documentation](./services.md#adding-a-new-service) and taking other services already implemented as reference. In next subsection you can find a list of common patterns that are used accross all Azure services.
The best reference to understand how to implement a new service is following the [service implementation documentation](/developer-guide/services#adding-a-new-service) and taking other services already implemented as reference. In the next subsection you can find a list of common patterns that are used across all Azure services.
### Azure Service Common Patterns
@@ -68,12 +70,12 @@ The best reference to understand how to implement a new service is following the
## Specific Patterns in Azure Checks
The Azure checks pattern is described in [checks page](./checks.md). You can find all the currently implemented checks:
The Azure checks pattern is described in [checks page](/developer-guide/checks). You can find all the currently implemented checks:
- Directly in the code, within each service folder, each check has its own folder named after the name of the check. (e.g. [`prowler/providers/azure/services/storage/storage_blob_public_access_level_is_disabled/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/azure/services/storage/storage_blob_public_access_level_is_disabled))
- In the [Prowler Hub](https://hub.prowler.com/) for a more human-readable view.
The best reference to understand how to implement a new check is the [Azure check implementation documentation](./checks.md#creating-a-check) and taking other similar checks as reference.
The best reference to understand how to implement a new check is the [Azure check implementation documentation](/developer-guide/checks#creating-a-check) and taking other similar checks as reference.
### Check Report Class
@@ -1,8 +1,10 @@
# Check Metadata Guidelines
---
title: 'Check Metadata Guidelines'
---
## Introduction
This guide provides comprehensive guidelines for creating check metadata in Prowler. For basic information on check metadata structure, refer to the [check metadata](./checks.md#metadata-structure-for-prowler-checks) section.
This guide provides comprehensive guidelines for creating check metadata in Prowler. For basic information on check metadata structure, refer to the [check metadata](/developer-guide/checks#metadata-structure-for-prowler-checks) section.
## Check Title Guidelines
@@ -1,4 +1,6 @@
# Prowler Checks
---
title: 'Prowler Checks'
---
This guide explains how to create new checks in Prowler.
@@ -12,7 +14,7 @@ The most common high level steps to create a new check are:
1. Prerequisites:
- Verify the check does not already exist by searching [Prowler Hub](https://hub.prowler.com) or checking `prowler/providers/<provider>/services/<service>/<check_name_want_to_implement>/`.
- Ensure required provider and service exist. If not, follow the [Provider](./provider.md) and [Service](./services.md) documentation to create them.
- Ensure required provider and service exist. If not, follow the [Provider](/developer-guide/provider) and [Service](/developer-guide/services) documentation to create them.
- Confirm the service has implemented all required methods and attributes for the check (in most cases, you will need to add or modify some methods in the service to get the data you need for the check).
2. Navigate to the service directory. The path should be as follows: `prowler/providers/<provider>/services/<service>`.
3. Create a check-specific folder. The path should follow this pattern: `prowler/providers/<provider>/services/<service>/<check_name_want_to_implement>`. Adhere to the [Naming Format for Checks](#naming-format-for-checks).
@@ -20,7 +22,7 @@ The most common high level steps to create a new check are:
5. Run the check locally to ensure it works as expected. You can use the CLI as follows:
- To ensure the check has been detected by Prowler: `poetry run python prowler-cli.py <provider> --list-checks | grep <check_name>`.
- To run the check, to find possible issues: `poetry run python prowler-cli.py <provider> --log-level ERROR --verbose --check <check_name>`.
6. Create comprehensive tests for the check that cover multiple scenarios including both PASS (compliant) and FAIL (non-compliant) cases. For detailed information about test structure and implementation guidelines, refer to the [Testing](./unit-testing.md) documentation.
6. Create comprehensive tests for the check that cover multiple scenarios including both PASS (compliant) and FAIL (non-compliant) cases. For detailed information about test structure and implementation guidelines, refer to the [Testing](/developer-guide/unit-testing) documentation.
7. If the check and its corresponding tests are working as expected, you can submit a PR to Prowler.
### Naming Format for Checks
@@ -39,8 +41,8 @@ The name components are:
Each check in Prowler follows a straightforward structure. Within the newly created folder, three files must be added to implement the check logic:
- `__init__.py` (empty file) Ensures Python treats the check folder as a package.
- `<check_name>.py` (code file) Contains the check logic, following the prescribed format. Please refer to the [prowler's check code structure](./checks.md#prowlers-check-code-structure) for more information.
- `<check_name>.metadata.json` (metadata file) Defines the check's metadata for contextual information. Please refer to the [check metadata](./checks.md#metadata-structure-for-prowler-checks) for more information.
- `<check_name>.py` (code file) Contains the check logic, following the prescribed format. Please refer to the [prowler's check code structure](/developer-guide/checks#prowlers-check-code-structure) for more information.
- `<check_name>.metadata.json` (metadata file) Defines the check's metadata for contextual information. Please refer to the [check metadata](/developer-guide/checks#metadata-structure-for-prowler-checks) for more information.
## Prowler's Check Code Structure
@@ -50,9 +52,10 @@ Below the code for a generic check is presented. It is strongly recommended to c
Report fields are the most dependent on the provider, consult the `CheckReport<Provider>` class for more information on what can be included in the report [here](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py).
???+ note
Legacy providers (AWS, Azure, GCP, Kubernetes) follow the `Check_Report_<Provider>` naming convention. This is not recommended for current instances. Newer providers adopt the `CheckReport<Provider>` naming convention. Learn more at [Prowler Code](https://github.com/prowler-cloud/prowler/tree/master/prowler/lib/check/models.py).
<Note>
Legacy providers (AWS, Azure, GCP, Kubernetes) follow the `Check_Report_<Provider>` naming convention. This is not recommended for current instances. Newer providers adopt the `CheckReport<Provider>` naming convention. Learn more at [Prowler Code](https://github.com/prowler-cloud/prowler/tree/master/prowler/lib/check/models.py).
</Note>
```python title="Generic Check Class"
# Required Imports
# Import the base Check class and the provider-specific CheckReport class
@@ -213,7 +216,7 @@ Each check **must** populate the report with an unique identifier for the audite
### Configurable Checks in Prowler
See [Configurable Checks](./configurable-checks.md) for detailed information on making checks configurable using the `audit_config` object and configuration file.
See [Configurable Checks](/developer-guide/configurable-checks) for detailed information on making checks configurable using the `audit_config` object and configuration file.
## Metadata Structure for Prowler Checks
@@ -273,16 +276,17 @@ The `CheckTitle` field must be plain text, clearly and succinctly define **the b
**Always write the `CheckTitle` to describe the *PASS* case**, the desired secure or compliant state of the resource(s). This helps ensure that findings are easy to interpret and that the title always reflects the best practice being met.
For detailed guidelines on writing effective check titles, including how to determine singular vs. plural scope and common mistakes to avoid, see [Check Title Guidelines](./check-metadata-guidelines.md#check-title-guidelines).
For detailed guidelines on writing effective check titles, including how to determine singular vs. plural scope and common mistakes to avoid, see [Check Title Guidelines](/developer-guide/check-metadata-guidelines#check-title-guidelines).
#### CheckType
???+ warning
This field is only applicable to the AWS provider.
<Warning>
This field is only applicable to the AWS provider.
</Warning>
It follows the [AWS Security Hub Types](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-required-attributes.html#Types) format using the pattern `namespace/category/classifier`.
For the complete AWS Security Hub selection guidelines, see [Check Type Guidelines](./check-metadata-guidelines.md#check-type-guidelines-aws-only).
For the complete AWS Security Hub selection guidelines, see [Check Type Guidelines](/developer-guide/check-metadata-guidelines#check-type-guidelines-aws-only).
#### ServiceName
@@ -314,13 +318,13 @@ The type of resource being audited. This field helps categorize and organize fin
A concise, natural language explanation that **clearly describes what the finding means**, focusing on clarity and context rather than technical implementation details. Use simple paragraphs with line breaks if needed, but avoid sections, code blocks, or complex formatting. This field is limited to a maximum of 400 characters.
For detailed writing guidelines and common mistakes to avoid, see [Description Guidelines](./check-metadata-guidelines.md#description-guidelines).
For detailed writing guidelines and common mistakes to avoid, see [Description Guidelines](/developer-guide/check-metadata-guidelines#description-guidelines).
#### Risk
A clear, natural language explanation of **why this finding poses a cybersecurity risk**. Focus on how it may impact confidentiality, integrity, or availability. If those do not apply, describe any relevant operational or financial risks. Use simple paragraphs with line breaks if needed, but avoid sections, code blocks, or complex formatting. Limit your explanation to 400 characters.
For detailed writing guidelines and common mistakes to avoid, see [Risk Guidelines](./check-metadata-guidelines.md#risk-guidelines).
For detailed writing guidelines and common mistakes to avoid, see [Risk Guidelines](/developer-guide/check-metadata-guidelines#risk-guidelines).
#### RelatedUrl
@@ -328,9 +332,10 @@ For detailed writing guidelines and common mistakes to avoid, see [Risk Guidelin
#### AdditionalURLs
???+ warning
URLs must be valid and not repeated.
<Warning>
URLs must be valid and not repeated.
</Warning>
A list of official documentation URLs for further reading. These should be authoritative sources that provide additional context, best practices, or detailed information about the security control being checked. Prefer official provider documentation, security standards, or well-established security resources. Avoid third-party blogs or unofficial sources unless they are highly reputable and directly relevant.
#### Remediation
@@ -345,17 +350,17 @@ Provides both code examples and best practice recommendations for addressing the
- **Terraform**: HashiCorp Configuration Language (HCL) code with an example of a compliant configuration.
- **Other**: Manual steps through web interfaces or other tools to make the finding compliant.
For detailed guidelines on writing remediation code, see [Remediation Code Guidelines](./check-metadata-guidelines.md#remediation-code-guidelines).
For detailed guidelines on writing remediation code, see [Remediation Code Guidelines](/developer-guide/check-metadata-guidelines#remediation-code-guidelines).
- **Recommendation**
- **Text**: Generic best practice guidance in natural language using Markdown format (maximum 400 characters). For writing guidelines, see [Recommendation Guidelines](./check-metadata-guidelines.md#recommendation-guidelines).
- **Text**: Generic best practice guidance in natural language using Markdown format (maximum 400 characters). For writing guidelines, see [Recommendation Guidelines](/developer-guide/check-metadata-guidelines#recommendation-guidelines).
- **Url**: [Prowler Hub URL](https://hub.prowler.com/) of the check. This URL is always composed by `https://hub.prowler.com/check/<check_id>`.
#### Categories
One or more functional groupings used for execution filtering (e.g., `internet-exposed`). You can define new categories just by adding to this field.
For the complete list of available categories, see [Categories Guidelines](./check-metadata-guidelines.md#categories-guidelines).
For the complete list of available categories, see [Categories Guidelines](/developer-guide/check-metadata-guidelines#categories-guidelines).
#### DependsOn
@@ -1,4 +1,6 @@
# Configurable Checks in Prowler
---
title: 'Configurable Checks in Prowler'
---
Prowler empowers users to extend and adapt cloud security coverage by making checks configurable through the use of the `audit_config` object. This approach enables customization of checks to meet specific requirements through a configuration file.
@@ -41,6 +43,6 @@ When adding a new configurable check to Prowler, update the following files:
- **Test Fixtures:** If tests depend on this configuration, add the variable to `tests/config/fixtures/config.yaml`.
- **Documentation:** Document the new variable in the list of configurable checks in `docs/tutorials/configuration_file.md`.
For a complete list of checks that already support configuration, see the [Configuration File Tutorial](../tutorials/configuration_file.md).
For a complete list of checks that already support configuration, see the [Configuration File Tutorial](/user-guide/cli/tutorials/configuration_file).
This approach ensures that checks are easily configurable, making Prowler highly adaptable to different environments and requirements.
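As a minimal illustration of the pattern this page describes, the sketch below shows a check-style read of a configurable value with a safe default. The client class, setting name, and values are hypothetical stand-ins, not actual Prowler configuration keys:

```python
class ExampleServiceClient:
    """Hypothetical stand-in for a service client exposing audit_config."""

    def __init__(self, audit_config: dict):
        # In Prowler, audit_config is loaded from the provider configuration file.
        self.audit_config = audit_config


example_client = ExampleServiceClient(audit_config={"example_max_unused_days": 30})

# Inside a check, the configurable value is read with a default so the check
# still behaves sensibly when the key is absent from the configuration file.
max_unused_days = example_client.audit_config.get("example_max_unused_days", 45)
print(max_unused_days)  # 30 when set in the config file, 45 otherwise
```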
@@ -1,4 +1,6 @@
# Debugging in Prowler
---
title: 'Debugging in Prowler'
---
Debugging in Prowler simplifies the development process, allowing developers to efficiently inspect and resolve unexpected issues during execution.
-28
@@ -1,28 +0,0 @@
## Contributing to Documentation
Prowler documentation is built using `mkdocs`, allowing contributors to easily add or enhance documentation.
### Installation and Setup
Install all necessary dependencies using: `poetry install --with docs`.
1. Install `mkdocs` using your preferred package manager.
2. Running the Documentation Locally
Navigate to the `prowler` repository folder.
Start the local documentation server by running: `mkdocs serve`.
Open `http://localhost:8000` in your browser to view live updates.
3. Making Documentation Changes
Make all needed changes to docs or add new documents. Edit existing Markdown (.md) files inside `prowler/docs`.
To add new sections or files, update the `mkdocs.yaml` file located in the root directory of Prowlers repository.
4. Submitting Changes
Once documentation updates are complete:
Submit a pull request for review.
The Prowler team will assess and merge contributions.
Your efforts help improve Prowler documentation—thank you for contributing!
+42
@@ -0,0 +1,42 @@
---
title: 'Contributing to Documentation'
---
Prowler documentation is built using [Mintlify](https://www.mintlify.com/docs), allowing contributors to easily add or enhance documentation.
## Installation and Setup
<Steps>
<Step title="Install Mintlify CLI">
```bash
npm i -g mint
```
For detailed instructions, check the [Mintlify documentation](https://www.mintlify.com/docs/installation).
</Step>
<Step title="Preview Documentation Locally">
Start the local development server to preview changes in real-time.
```bash
mint dev
```
A local preview of your documentation will be available at http://localhost:3000
</Step>
<Step title="Make Documentation Changes">
Edit existing Markdown (.mdx) files inside the `docs` directory or add new documents.
For reference about formatting, check the [Mintlify documentation](https://www.mintlify.com/docs/create/text).
To add new sections or files, update the [`docs/docs.json`](https://github.com/prowler-cloud/prowler/blob/master/docs/docs.json) file to include them in the navigation.
</Step>
<Step title="Submit Changes">
Once documentation updates are complete, submit a pull request for review.
The Prowler team will assess and merge contributions.
</Step>
</Steps>
Your efforts help improve Prowler documentation—thank you for contributing!
@@ -1,12 +1,14 @@
# Google Cloud Provider
---
title: 'Google Cloud Provider'
---
This page details the [Google Cloud Platform (GCP)](https://cloud.google.com/) provider implementation in Prowler.
By default, Prowler will audit all the GCP projects that the authenticated identity can access. To configure it, follow the [GCP getting started guide](../tutorials/gcp/getting-started-gcp.md).
By default, Prowler will audit all the GCP projects that the authenticated identity can access. To configure it, follow the [GCP getting started guide](/user-guide/providers/gcp/getting-started-gcp).
## GCP Provider Classes Architecture
The GCP provider implementation follows the general [Provider structure](./provider.md). This section focuses on the GCP-specific implementation, highlighting how the generic provider concepts are realized for GCP in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see [Provider documentation](./provider.md).
The GCP provider implementation follows the general [Provider structure](/developer-guide/provider). This section focuses on the GCP-specific implementation, highlighting how the generic provider concepts are realized for GCP in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see [Provider documentation](/developer-guide/provider).
### Main Class
@@ -32,7 +34,7 @@ The GCP provider implementation follows the general [Provider structure](./provi
### `GCPService` (Service Base Class)
- **Location:** [`prowler/providers/gcp/lib/service/service.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/gcp/lib/service/service.py)
- **Purpose:** Abstract base class that all GCP service-specific classes inherit from. This implements the generic service pattern (described in [service page](./services.md#service-base-class)) specifically for GCP.
- **Purpose:** Abstract base class that all GCP service-specific classes inherit from. This implements the generic service pattern (described in [service page](/developer-guide/services#service-base-class)) specifically for GCP.
- **Key GCP Responsibilities:**
- Receives a `GcpProvider` instance to access session, identity, and configuration.
- Manages clients for all services by project.
@@ -95,12 +97,12 @@ def _get_instances(self):
## Specific Patterns in GCP Services
The generic service pattern is described in [service page](./services.md#service-structure-and-initialisation). You can find all the currently implemented services in the following locations:
The generic service pattern is described in [service page](/developer-guide/services#service-structure-and-initialisation). You can find all the currently implemented services in the following locations:
- Directly in the code, in location [`prowler/providers/gcp/services/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/gcp/services)
- In the [Prowler Hub](https://hub.prowler.com/) for a more human-readable view.
The best reference to understand how to implement a new service is following the [service implementation documentation](./services.md#adding-a-new-service) and taking other services already implemented as reference. In next subsection you can find a list of common patterns that are used accross all GCP services.
The best reference to understand how to implement a new service is following the [service implementation documentation](/developer-guide/services#adding-a-new-service) and taking other services already implemented as reference. In the next subsection you can find a list of common patterns that are used across all GCP services.
### GCP Service Common Patterns
@@ -117,12 +119,12 @@ The best reference to understand how to implement a new service is following the
## Specific Patterns in GCP Checks
The GCP checks pattern is described in [checks page](./checks.md). You can find all the currently implemented checks:
The GCP checks pattern is described in [checks page](/developer-guide/checks). You can find all the currently implemented checks:
- Directly in the code, within each service folder, each check has its own folder named after the name of the check. (e.g. [`prowler/providers/gcp/services/iam/iam_sa_user_managed_key_unused/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/gcp/services/iam/iam_sa_user_managed_key_unused))
- In the [Prowler Hub](https://hub.prowler.com/) for a more human-readable view.
The best reference to understand how to implement a new check is following the [GCP check implementation documentation](./checks.md#creating-a-check) and taking other similar checks as reference.
The best reference to understand how to implement a new check is following the [GCP check implementation documentation](/developer-guide/checks#creating-a-check) and taking other similar checks as reference.
### Check Report Class
@@ -1,12 +1,14 @@
# GitHub Provider
---
title: 'GitHub Provider'
---
This page details the [GitHub](https://github.com/) provider implementation in Prowler.
By default, Prowler will audit the GitHub account - scanning all repositories, organizations, and applications that your configured credentials can access. To configure it, follow the [GitHub getting started guide](../tutorials/github/getting-started-github.md).
By default, Prowler will audit the GitHub account - scanning all repositories, organizations, and applications that your configured credentials can access. To configure it, follow the [GitHub getting started guide](/user-guide/providers/github/getting-started-github).
## GitHub Provider Classes Architecture
The GitHub provider implementation follows the general [Provider structure](./provider.md). This section focuses on the GitHub-specific implementation, highlighting how the generic provider concepts are realized for GitHub in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see [Provider documentation](./provider.md).
The GitHub provider implementation follows the general [Provider structure](/developer-guide/provider). This section focuses on the GitHub-specific implementation, highlighting how the generic provider concepts are realized for GitHub in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see [Provider documentation](/developer-guide/provider).
### `GithubProvider` (Main Class)
@@ -48,12 +50,12 @@ The GitHub provider implementation follows the general [Provider structure](./pr
## Specific Patterns in GitHub Services
The generic service pattern is described in [service page](./services.md#service-structure-and-initialisation). You can find all the currently implemented services in the following locations:
The generic service pattern is described in [service page](/developer-guide/services#service-structure-and-initialisation). You can find all the currently implemented services in the following locations:
- Directly in the code, in location [`prowler/providers/github/services/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/github/services)
- In the [Prowler Hub](https://hub.prowler.com/) for a more human-readable view.
The best reference to understand how to implement a new service is following the [service implementation documentation](./services.md#adding-a-new-service) and by taking other already implemented services as reference.
The best reference to understand how to implement a new service is following the [service implementation documentation](/developer-guide/services#adding-a-new-service) and by taking other already implemented services as reference.
### GitHub Service Common Patterns
@@ -66,12 +68,12 @@ The best reference to understand how to implement a new service is following the
## Specific Patterns in GitHub Checks
The GitHub checks pattern is described in [checks page](./checks.md). You can find all the currently implemented checks in:
The GitHub checks pattern is described in [checks page](/developer-guide/checks). You can find all the currently implemented checks in:
- Directly in the code, within each service folder, each check has its own folder named after the name of the check. (e.g. [`prowler/providers/github/services/repository/repository_secret_scanning_enabled/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/github/services/repository/repository_secret_scanning_enabled))
- In the [Prowler Hub](https://hub.prowler.com/) for a more human-readable view.
The best reference to understand how to implement a new check is the [GitHub check implementation documentation](./checks.md#creating-a-check) and by taking other checks as reference.
The best reference to understand how to implement a new check is the [GitHub check implementation documentation](/developer-guide/checks#creating-a-check) and by taking other checks as reference.
### Check Report Class
@@ -1,3 +0,0 @@
# Integration Tests
Coming soon ...
@@ -0,0 +1,5 @@
---
title: 'Integration Tests'
---
Coming soon ...
@@ -1,4 +1,6 @@
# Creating a New Integration
---
title: 'Creating a New Integration'
---
## Introduction
@@ -151,7 +153,7 @@ Refer to the [Prowler Developer Guide](https://docs.prowler.com/projects/prowler
# More properties and methods
```
* Test Connection Method:
* Validating Credentials or Tokens
@@ -1,4 +1,6 @@
# Introduction to developing in Prowler
---
title: 'Introduction to developing in Prowler'
---
Extending Prowler
@@ -48,11 +50,12 @@ poetry install --with dev
eval $(poetry env activate)
```
???+ important
Starting from Poetry v2.0.0, `poetry shell` has been deprecated in favor of `poetry env activate`.
If your poetry version is below 2.0.0 you must keep using `poetry shell` to activate your environment.
In case you have any doubts, consult the [Poetry environment activation guide](https://python-poetry.org/docs/managing-environments/#activating-the-environment).
<Warning>
Starting from Poetry v2.0.0, `poetry shell` has been deprecated in favor of `poetry env activate`.
If your poetry version is below 2.0.0 you must keep using `poetry shell` to activate your environment.
In case you have any doubts, consult the [Poetry environment activation guide](https://python-poetry.org/docs/managing-environments/#activating-the-environment).
</Warning>
## Contributing to Prowler
### Ways to Contribute
@@ -64,24 +67,24 @@ Here are some ideas for collaborating with Prowler:
2. **Expand Prowler's Capabilities**: Prowler is constantly evolving, and you can be a part of its growth. Whether you are adding checks, supporting new services, or introducing integrations, your contributions help improve the tool for everyone. Here is how you can get involved:
- **Adding New Checks**
Want to improve Prowler's detection capabilities for your favorite cloud provider? You can contribute by writing new checks. To get started, follow the [create a new check guide](./checks.md).
Want to improve Prowler's detection capabilities for your favorite cloud provider? You can contribute by writing new checks. To get started, follow the [create a new check guide](/developer-guide/checks).
- **Adding New Services**
One key service for your favorite cloud provider is missing? Add it to Prowler! To add a new service, check out the [create a new service guide](./services.md). Do not forget to include relevant checks to validate functionality.
Is a key service for your favorite cloud provider missing? Add it to Prowler! To add a new service, check out the [create a new service guide](/developer-guide/services). Do not forget to include relevant checks to validate functionality.
- **Adding New Providers**
If you would like to extend Prowler to work with a new cloud provider, follow the [create a new provider guide](./provider.md). This typically involves setting up new services and checks to ensure compatibility.
If you would like to extend Prowler to work with a new cloud provider, follow the [create a new provider guide](/developer-guide/provider). This typically involves setting up new services and checks to ensure compatibility.
- **Adding New Output Formats**
Want to tailor how results are displayed or exported? You can add custom output formats by following the [create a new output format guide](./outputs.md).
Want to tailor how results are displayed or exported? You can add custom output formats by following the [create a new output format guide](/developer-guide/outputs).
- **Adding New Integrations**
Prowler can work with other tools and platforms through integrations. If you would like to add one, see the [create a new integration guide](./integrations.md).
Prowler can work with other tools and platforms through integrations. If you would like to add one, see the [create a new integration guide](/developer-guide/integrations).
- **Proposing or Implementing Features**
Got an idea to make Prowler better? Whether it is a brand-new feature or an enhancement to an existing one, you are welcome to propose it or help implement community-requested improvements.
3. **Improve Documentation**: Help make Prowler more accessible by enhancing our documentation, fixing typos, or adding examples/tutorials. See the tutorial of how we write our documentation [here](./documentation.md).
3. **Improve Documentation**: Help make Prowler more accessible by enhancing our documentation, fixing typos, or adding examples/tutorials. See the tutorial on how we write our documentation [here](/developer-guide/documentation).
4. **Bug Fixes**: If you find any issues or bugs, you can report them on the [GitHub Issues](https://github.com/prowler-cloud/prowler/issues) page, and you are also welcome to fix them yourself.
@@ -105,9 +108,10 @@ pre-commit installed at .git/hooks/pre-commit
Before pull requests are merged, several automated checks and utilities ensure code security and up-to-date dependencies:
???+ note
These should have been already installed if `poetry install --with dev` was already run.
<Note>
These should already be installed if `poetry install --with dev` was run.
</Note>
- [`bandit`](https://pypi.org/project/bandit/) for code security review.
- [`safety`](https://pypi.org/project/safety/) and [`dependabot`](https://github.com/features/security) for dependencies.
- [`hadolint`](https://github.com/hadolint/hadolint) and [`dockle`](https://github.com/goodwithtech/dockle) for container security.
@@ -123,9 +127,10 @@ All dependencies are listed in the `pyproject.toml` file.
For proper code documentation, follow the practices described in the [Google Python Style Guide - Comments and Docstrings](https://github.com/google/styleguide/blob/gh-pages/pyguide.md#38-comments-and-docstrings).
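For instance, a function documented in that style looks roughly like this (the function itself is a made-up example):

```python
def get_public_repositories(organization: str, include_archived: bool = False) -> list[str]:
    """Return the names of an organization's public repositories.

    Args:
        organization: Name of the GitHub organization to inspect.
        include_archived: Whether archived repositories should be included.

    Returns:
        A list with the names of the matching public repositories.

    Raises:
        ValueError: If the organization name is empty.
    """
    if not organization:
        raise ValueError("The organization name must not be empty.")
    return []  # Placeholder body; only the docstring format matters here.
```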
???+ note
If you encounter issues when committing to the Prowler repository, use the `--no-verify` flag with the `git commit` command.
<Note>
If you encounter issues when committing to the Prowler repository, use the `--no-verify` flag with the `git commit` command.
</Note>
### Repository Folder Structure
Understanding the layout of the Prowler codebase will help you quickly find where to add new features, checks, or integrations. The following is a high-level overview from the root of the repository:
@@ -173,4 +178,4 @@ To test Prowler from a specific branch (for example, to try out changes from a p
pipx install "git+https://github.com/prowler-cloud/prowler.git@branch-name"
```
Replace `branch-name` with the name of the branch you want to test. This will install Prowler in an isolated environment, allowing you to try out the changes safely.
Replace `branch-name` with the name of the branch you want to test. This will install Prowler in an isolated environment, allowing you to try out the changes safely.
@@ -1,12 +1,14 @@
# Kubernetes Provider
---
title: 'Kubernetes Provider'
---
This page details the [Kubernetes](https://kubernetes.io/) provider implementation in Prowler.
By default, Prowler will audit all namespaces in the Kubernetes cluster accessible by the configured context. To configure it, see the [In-Cluster Execution](../tutorials/kubernetes/in-cluster.md) or [Non In-Cluster Execution](../tutorials/kubernetes/outside-cluster.md) guides.
By default, Prowler will audit all namespaces in the Kubernetes cluster accessible by the configured context. To configure it, see the [In-Cluster Execution](/user-guide/providers/kubernetes/in-cluster) or [Non In-Cluster Execution](/user-guide/providers/kubernetes/outside-cluster) guides.
## Kubernetes Provider Classes Architecture
The Kubernetes provider implementation follows the general [Provider structure](./provider.md). This section focuses on the Kubernetes-specific implementation, highlighting how the generic provider concepts are realized for Kubernetes in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see [Provider documentation](./provider.md).
The Kubernetes provider implementation follows the general [Provider structure](/developer-guide/provider). This section focuses on the Kubernetes-specific implementation, highlighting how the generic provider concepts are realized for Kubernetes in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see [Provider documentation](/developer-guide/provider).
### `KubernetesProvider` (Main Class)
@@ -31,7 +33,7 @@ The Kubernetes provider implementation follows the general [Provider structure](
### `KubernetesService` (Service Base Class)
- **Location:** [`prowler/providers/kubernetes/lib/service/service.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/kubernetes/lib/service/service.py)
- **Purpose:** Abstract base class that all Kubernetes service-specific classes inherit from. This implements the generic service pattern (described in [service page](./services.md#service-base-class)) specifically for Kubernetes.
- **Purpose:** Abstract base class that all Kubernetes service-specific classes inherit from. This implements the generic service pattern (described in the [service page](/developer-guide/services#service-base-class)) specifically for Kubernetes.
- **Key Kubernetes Responsibilities:**
- Receives a `KubernetesProvider` instance to access session, identity, and configuration.
- Manages the Kubernetes API client and context.
@@ -50,12 +52,12 @@ The Kubernetes provider implementation follows the general [Provider structure](
## Specific Patterns in Kubernetes Services
The generic service pattern is described in [service page](./services.md#service-structure-and-initialisation). You can find all the currently implemented services in the following locations:
The generic service pattern is described in the [service page](/developer-guide/services#service-structure-and-initialisation). You can find all the currently implemented services in the following locations:
- Directly in the code, in location [`prowler/providers/kubernetes/services/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/kubernetes/services)
- In the [Prowler Hub](https://hub.prowler.com/) for a more human-readable view.
The best reference to understand how to implement a new service is following the [service implementation documentation](./services.md#adding-a-new-service) and taking other already implemented services as reference.
The best way to understand how to implement a new service is to follow the [service implementation documentation](/developer-guide/services#adding-a-new-service) and to take other already implemented services as a reference.
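As a very condensed, hypothetical sketch of that pattern (the API client wiring and attribute names below, such as `self.api_client`, are assumptions for illustration), a Kubernetes service class receives the provider, builds its API client, and collects the resources that its checks will later iterate over:

```python
# Hypothetical sketch only; see the linked documentation and existing services for the real pattern.
from kubernetes import client
from pydantic import BaseModel

from prowler.providers.kubernetes.lib.service.service import KubernetesService


class Namespace(BaseModel):
    name: str
    labels: dict


class Example(KubernetesService):
    def __init__(self, provider):
        # The base class stores the session, context, and audit configuration
        super().__init__(provider)
        self.client = client.CoreV1Api(self.api_client)  # assumed attribute exposed by the base class
        self.namespaces = {}
        self._list_namespaces()

    def _list_namespaces(self):
        # Collect the resources that checks will consume through the service client
        for ns in self.client.list_namespace().items:
            self.namespaces[ns.metadata.uid] = Namespace(
                name=ns.metadata.name,
                labels=ns.metadata.labels or {},
            )
```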
### Kubernetes Service Common Patterns
@@ -69,12 +71,12 @@ The best reference to understand how to implement a new service is following the
## Specific Patterns in Kubernetes Checks
The Kubernetes checks pattern is described in [checks page](./checks.md). You can find all the currently implemented checks in:
The Kubernetes checks pattern is described in the [checks page](/developer-guide/checks). You can find all the currently implemented checks in:
- Directly in the code, within each service folder, each check has its own folder named after the check. (e.g. [`prowler/providers/kubernetes/services/rbac/rbac_minimize_wildcard_use_roles/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/kubernetes/services/rbac/rbac_minimize_wildcard_use_roles))
- In the [Prowler Hub](https://hub.prowler.com/) for a more human-readable view.
The best reference to understand how to implement a new check is following the [Kubernetes check implementation documentation](./checks.md#creating-a-check) and taking other checks as reference.
The best way to understand how to implement a new check is to follow the [Kubernetes check implementation documentation](/developer-guide/checks#creating-a-check) and to take other checks as a reference.
### Check Report Class
@@ -1,4 +1,6 @@
# Extending Prowler Lighthouse AI
---
title: 'Extending Prowler Lighthouse AI'
---
This guide helps developers customize and extend Prowler Lighthouse AI by adding or modifying AI agents.
@@ -15,9 +17,10 @@ AI agents fall into two main categories:
Prowler Lighthouse AI is an autonomous agent that selects the right tool(s) based on the user's query.
???+ note
To learn more about AI agents, read [Anthropic's blog post on building effective agents](https://www.anthropic.com/engineering/building-effective-agents).
<Note>
To learn more about AI agents, read [Anthropic's blog post on building effective agents](https://www.anthropic.com/engineering/building-effective-agents).
</Note>
### LLM Dependency
The autonomous nature of agents depends on the underlying LLM. Autonomous agents using identical system prompts and tools but powered by different LLM providers might approach user queries differently. An agent with one LLM might solve a problem efficiently, while with another it might take a different route or fail entirely.
@@ -30,7 +33,7 @@ Prowler Lighthouse AI uses a multi-agent architecture orchestrated by the [Langg
### Architecture Components
<img src="../../tutorials/img/lighthouse-architecture.png" alt="Prowler Lighthouse architecture">
<img src="/images/prowler-app/lighthouse-architecture.png" alt="Prowler Lighthouse architecture" />
Prowler Lighthouse AI integrates with the NextJS application:
@@ -67,9 +70,10 @@ Modifying the supervisor prompt allows you to:
- Modify task delegation to specialized agents
- Set up guardrails (query types to answer or decline)
???+ note
The supervisor agent should not have its own tools. This design keeps the system modular and maintainable.
<Note>
The supervisor agent should not have its own tools. This design keeps the system modular and maintainable.
</Note>
### How to Create New Specialized Agents
The supervisor agent and all specialized agents are defined in the `route.ts` file. The supervisor agent uses [langgraph-supervisor](https://www.npmjs.com/package/@langchain/langgraph-supervisor), while other agents use the prebuilt [create-react-agent](https://langchain-ai.github.io/langgraphjs/how-tos/create-react-agent/).
@@ -77,14 +81,16 @@ The supervisor agent and all specialized agents are defined in the `route.ts` fi
To add new capabilities or allow Lighthouse AI to interact with other APIs, create additional specialized agents:
1. First determine what the new agent would do. Create a detailed prompt defining the agent's purpose and capabilities. You can see an example [here](https://github.com/prowler-cloud/prowler/blob/master/ui/lib/lighthouse/prompts.ts#L359-L385).
???+ note
Ensure that the new agent's capabilities don't collide with existing agents. For example, if there's already a *findings_agent* that talks to findings APIs don't create a new agent to do the same.
<Note>
Ensure that the new agent's capabilities don't collide with existing agents. For example, if there's already a *findings_agent* that talks to findings APIs, don't create a new agent to do the same.
</Note>
2. Create the necessary tools for the agents to access specific data or perform actions. A tool is a specialized function that extends the capabilities of the LLM by allowing it to access external data or APIs. A tool is invoked by the LLM based on the tool's description and the user's query.
For example, the description of `getScanTool` is "Fetches detailed information about a specific scan by its ID." If the description doesn't convey what the tool is capable of doing, the LLM will not invoke the function. If the description of `getScanTool` were set to something random, or not set at all, the LLM would not answer queries like "Give me the critical issues from the scan ID xxxxxxxxxxxxxxx".
???+ note
Ensure that one tool is added to one agent only. Adding tools is optional. There can be agents with no tools at all.
<Note>
Ensure that one tool is added to one agent only. Adding tools is optional. There can be agents with no tools at all.
</Note>
3. Use the `createReactAgent` function to define a new agent. For example, the `rolesAgent` below is named "roles_agent" and has access to the tools *getRolesTool* and *getRoleTool*:
```js
const rolesAgent = createReactAgent({
@@ -1,12 +1,14 @@
# LLM Provider
---
title: 'LLM Provider'
---
This page details the [Large Language Model (LLM)](https://en.wikipedia.org/wiki/Large_language_model) provider implementation in Prowler.
The LLM provider enables security testing of language models using red team techniques. By default, Prowler uses the built-in LLM configuration that targets OpenAI models with comprehensive security test suites. To configure it, follow the [LLM getting started guide](../tutorials/llm/getting-started-llm.md).
The LLM provider enables security testing of language models using red team techniques. By default, Prowler uses the built-in LLM configuration that targets OpenAI models with comprehensive security test suites. To configure it, follow the [LLM getting started guide](/user-guide/providers/llm/getting-started-llm).
## LLM Provider Classes Architecture
The LLM provider implementation follows the general [Provider structure](./provider.md). This section focuses on the LLM-specific implementation, highlighting how the generic provider concepts are realized for LLM security testing in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see [Provider documentation](./provider.md).
The LLM provider implementation follows the general [Provider structure](/developer-guide/provider). This section focuses on the LLM-specific implementation, highlighting how the generic provider concepts are realized for LLM security testing in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see [Provider documentation](/developer-guide/provider).
### Main Class
Some files were not shown because too many files have changed in this diff