Compare commits

534 Commits

Author SHA1 Message Date
Rubén De la Torre Vico 809fd3dcf3 Merge branch 'master' into api-add-missing-map 2025-06-23 10:48:00 +02:00
Daniel Barranquero eb5dbab86e feat(docs): update Azure and M365 docs with needed permissions (#8075) 2025-06-23 10:12:11 +02:00
Víctor Fernández Poyatos 223aab8ece chore(API): skip safety vulnerabilities related to asteval (#8076) 2025-06-20 14:28:23 +02:00
César Arroba 3ec57340a0 chore(gha): check changelog when label is added or deleted (#8071) 2025-06-20 16:35:19 +05:45
Pablo Lara 80d73cc05b feat: integrate Google Tag Manager manually to avoid ORB blocking (#8070) 2025-06-20 12:47:17 +02:00
Rubén De la Torre Vico a470b7c9d8 docs(changelog): change fix from version 2025-06-20 10:00:39 +02:00
Rubén De la Torre Vico eba5f8e621 docs(changelog): update PR link for new fix 2025-06-20 09:58:23 +02:00
César Arroba 94f02df11e chore(gha): check changelog changes on pull request (#7991)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-06-19 14:51:59 +05:45
Pepe Fagoaga c454ceb296 fix(changelog): Add missing entries (#8066) 2025-06-19 14:12:39 +05:45
Pepe Fagoaga 76ec13a1d6 chore(ocsf): remove version number and point to the latest (#8064) 2025-06-19 13:33:28 +05:45
Pepe Fagoaga 783b6ea982 chore(api): clean up old files (#8051) 2025-06-19 11:57:48 +05:45
Adrián Jesús Peña Rodríguez 56443518d6 fix(export): add missing m365 iso27001 mapping 2025-06-18 18:20:25 +02:00
Alejandro Bailo 6b7b700a98 feat: filters relationships in findings and scans page (#8046)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-06-18 17:19:41 +02:00
César Arroba b3f2a1c532 chore(ui): add NEXT_PUBLIC_GOOGLE_TAG_MANAGER_ID variable on Dockerfile (#8061) 2025-06-18 16:31:55 +02:00
Sergio Garcia c4e1bd3ed2 fix: add missing changelog compliance timestamps (#8060) 2025-06-18 16:28:48 +02:00
Sergio Garcia d0d4e0d483 fix(compliance): use unified timestamp for all requirements (#8052) 2025-06-18 22:00:51 +08:00
Pablo Lara 14a9f0e765 feat: add Google Tag Manager integration (#8058) 2025-06-18 15:47:48 +02:00
Rubén De la Torre Vico b572575c8d feat(azure): add new check iam_role_user_access_admin_restricted (#8040)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-06-18 21:24:23 +08:00
Rubén De la Torre Vico a626e41162 docs: add provider-specific developer guide sections (#7996)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2025-06-18 21:20:33 +08:00
Hugo Pereira Brito 22343faa1e feat(storage): add new check storage_default_to_entra_authorization_enabled (#7981) 2025-06-18 21:16:07 +08:00
Hugo Pereira Brito c5b37887ef chore: add pr to changelog (#8054) 2025-06-18 14:32:21 +02:00
Rubén De la Torre Vico f9aed36d0b feat(azure): add new check databricks_workspace_cmk_encryption_enabled (#8017) 2025-06-18 18:36:37 +08:00
Hugo Pereira Brito facc0627d7 feat(azure): add new check storage_geo_redundant_enabled (#7980)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-06-18 18:10:02 +08:00
Rubén De la Torre Vico 76f0d890e9 feat(azure): add Databricks service and check for workspace VNet injection (#8008)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-06-18 17:38:09 +08:00
Hugo Pereira Brito 7de7122c3b fix(m365): avoid user requests in setup_identity app context and user auth log enhancement (#8043) 2025-06-18 11:27:11 +02:00
Hugo Pereira Brito 1b73ab2fe4 feat(storage): add new check storage_cross_tenant_replication_disabled (#7977)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-06-18 15:54:13 +08:00
Rubén De la Torre Vico cc8f6131e6 feat(azure): add new check storage_blob_versioning_is_enabled (#7927)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-06-18 15:46:38 +08:00
Andoni Alonso dfd5c9aee7 feat(aws): add check to ensure CodeBuild GitHub projects only use allowed GitHub orgs (#7595)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-06-18 00:17:18 +08:00
dependabot[bot] 3986bf3f42 chore(deps): bump asteval from 1.0.5 to 1.0.6 (#8049)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-18 00:11:18 +08:00
Sergio Garcia c45ef1e286 chore(deps): update requests dependency (#8048) 2025-06-18 00:04:09 +08:00
dependabot[bot] 8d8f498dc2 chore(deps): bump asteval from 1.0.5 to 1.0.6 (#8047)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-17 23:32:13 +08:00
Sergio Garcia c4bd9122d4 feat(IaC): PoC for IaC Security Scanner (#7852)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2025-06-17 23:23:25 +08:00
dependabot[bot] 644cdc81b9 chore(deps): bump requests from 2.32.3 to 2.32.4 in /api (#7986)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-17 16:46:29 +02:00
Pablo Lara e5584f21b3 feat: make user and password fields optional but mutually required fo… (#8044) 2025-06-17 14:46:00 +02:00
Rubén De la Torre Vico b868d39bef chore(deps): add pre-commit as a dev dependency (#8042) 2025-06-17 18:54:32 +08:00
Alejandro Bailo ef9809f61f fix: correct parenthesis around the render condition (#8041) 2025-06-17 12:22:17 +02:00
Alejandro Bailo 9a04ca3611 feat: touching up compliances views (#8022)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-06-17 11:23:14 +02:00
Pedro Martín 1c9b3a1394 feat(m365): add ISO 27001 2022 compliance framework (#7985)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-06-17 17:04:36 +08:00
dependabot[bot] 5ee7bd6459 chore(deps): bump protobuf from 6.30.2 to 6.31.1 (#8037)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-17 16:31:04 +08:00
Chandrapal Badshah 05d2b86ba8 feat(lighthouse): update NextJS logic to work with latest APIs (#8033)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-06-17 10:25:37 +02:00
Andoni Alonso 84c30af6f8 chore(sentry): handle exceptions ignores not based in ClassNames (#8034) 2025-06-17 09:42:24 +02:00
dcanotrad e8a829b75e docs(dev-guide): improve quality redrive (#7718)
Co-authored-by: Rubén De la Torre Vico <ruben@prowler.com>
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
Co-authored-by: Rubén De la Torre Vico <rubendltv22@gmail.com>
2025-06-17 09:28:22 +02:00
Sergio Garcia a0d169470d chore(metadata): add validator for ResourceType (#8035) 2025-06-17 00:06:32 +08:00
Rubén De la Torre Vico 1fd6046511 chore: add missing init file to check repository_secret_scanning_enabled (#8029) 2025-06-16 21:31:18 +08:00
Sergio Garcia 524455b0f3 fix(metadata): add missing ResourceType values (#8028)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-06-16 21:30:55 +08:00
Víctor Fernández Poyatos e6e1e37c1e fix(findings): exclude blank resource types from metadata endpoints (#8027) 2025-06-16 18:19:21 +05:45
Prowler Bot 2914510735 chore(regions_update): Changes in regions for AWS services (#8026)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-06-16 19:00:06 +08:00
Rubén De la Torre Vico 7e43c7797f fix(eks): add EKS to services without subservices (#7959)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-06-16 16:46:48 +08:00
Rubén De la Torre Vico 6954ef880e fix(azure): add new way to auth against App Insight (#7763) 2025-06-16 16:46:36 +08:00
Chandrapal Badshah 5f5e7015a9 feat(lighthouse): Add django endpoints to store config (#7848)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
Co-authored-by: Víctor Fernández Poyatos <vicferpoy@gmail.com>
2025-06-16 10:11:57 +02:00
Andoni Alonso bfafa518b1 feat(aws): avoid bypassing IAM check using wildcards (#7708) 2025-06-16 07:42:01 +02:00
Hugo Pereira Brito e34e59ff2d fix(network): allow 0 as compliant value (#7926)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-06-13 19:50:19 +08:00
Daniel Barranquero 7f80d2db46 fix(app): change api call for ftps_state (#7923) 2025-06-13 19:28:55 +08:00
sumit-tft 4a2a3921da feat(UI): Add Provider detail component in Findings, Scan details (#7968)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-06-13 12:17:18 +02:00
Pedro Martín e26b2e6527 feat(api): handle MitreAttack compliance requirements (#7987)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-06-13 10:26:34 +02:00
Mitchell @ Securemetrics 954814c1d7 feat(contrib): add PowerBI integration (#7826)
Co-authored-by: Pedro Martín <pedromarting3@gmail.com>
2025-06-13 09:55:07 +02:00
Andoni Alonso 113224cbd9 chore: update CHANGELOG (#8015) 2025-06-13 15:38:56 +08:00
Andoni Alonso f5f1fce779 fix(iam): check always if root credentials are present (#7967) 2025-06-12 17:48:09 +02:00
Pepe Fagoaga 0ba9383202 chore(changelog): make all consistent (#8010)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-06-12 20:09:01 +05:45
Adrián Jesús Peña Rodríguez 8e9a9797c7 fix(export): add name sanitization (#8007) 2025-06-12 20:02:18 +05:45
Pablo Lara 2b4e6bffae chore: update package-lock after lighthouse was merged (#8011) 2025-06-12 15:32:58 +02:00
Chandrapal Badshah 74f7a86c2b feat(lighthouse): Add chat interface (#7878)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-06-12 15:19:41 +02:00
Pablo Lara e218435b2f fix: improve error handling in UpdateViaCredentialsForm with early re… (#7988) 2025-06-12 11:39:49 +02:00
Prowler Bot 5ec34ad5e7 chore(regions_update): Changes in regions for AWS services (#7973)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-06-12 17:24:15 +08:00
Pedro Martín c4b0859efd fix(dashboard): handle account uids with 0 at start and end (#7955)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-06-12 17:21:52 +08:00
Pedro Martín 1241a490f9 fix(kubernetes): change object type to set for apiserver check (#7952) 2025-06-12 17:02:48 +08:00
Pedro Martín 4ec498a612 fix(k8s): remove typo for PCI 4.0 compliance framework (#7971) 2025-06-12 16:41:58 +08:00
Pedro Martín 119c5e80a9 feat(gcp): add NIS 2 compliance framework (#7912)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-06-12 16:40:33 +08:00
sumit-tft d393bc48a2 fix(PRWLR-7380): button nesting hydration error (#7998) 2025-06-12 10:02:20 +02:00
Daniel Barranquero e09e3855b1 fix(gcp): remove azure video from gcp docs (#8001) 2025-06-12 09:54:25 +02:00
Alejandro Bailo 8751615faa feat: MitreAttack compliance detailed view (#8002) 2025-06-12 09:27:47 +02:00
Prowler Bot e7c17ab0b3 chore(regions_update): Changes in regions for AWS services (#7898)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-06-12 15:14:28 +08:00
dependabot[bot] f05d3eb334 chore(deps): bump trufflesecurity/trufflehog from 3.88.26 to 3.88.35 (#7896)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-12 15:14:02 +08:00
dependabot[bot] cf449d4607 chore(deps): bump aws-actions/configure-aws-credentials from 4.1.0 to 4.2.1 (#7895)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-12 15:13:35 +08:00
dependabot[bot] b338ac9add chore(deps): bump codecov/codecov-action from 5.4.2 to 5.4.3 (#7894)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-12 15:13:12 +08:00
dependabot[bot] 366d2b392a chore(deps): bump docker/build-push-action from 6.16.0 to 6.18.0 (#7893)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-12 15:12:52 +08:00
dependabot[bot] 41fc536b44 chore(deps): bump github/codeql-action from 3.28.16 to 3.28.18 (#7892)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-12 15:12:28 +08:00
Adrián Jesús Peña Rodríguez e042445ecf fix(migration): create site stuff before socialaccount (#7999) 2025-06-11 13:34:21 +02:00
Víctor Fernández Poyatos c17129afe3 revert: RLS transactions handling and DB custom backend (#7994) 2025-06-11 14:47:10 +05:45
Alejandro Bailo 4876d8435c feat: generic compliance detailed view (#7990) 2025-06-11 09:40:53 +02:00
Pedro Martín 1bd0d774e5 feat(mutelist): make validate_mutelist method static (#7811) 2025-06-11 11:33:49 +05:45
Alejandro Bailo c119cece89 feat: ThreatScore compliance detailed view (#7979) 2025-06-10 10:43:27 +02:00
Adrián Jesús Peña Rodríguez e24b211d22 feat(sso): add sso with saml to API (#7822) 2025-06-10 10:17:54 +02:00
Hugo Pereira Brito c589c95727 feat(storage): add new check storage_account_key_access_disabled (#7974)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2025-06-10 08:23:09 +02:00
Hugo Pereira Brito 7e4f1a73bf feat(storage): add new check storage_ensure_file_shares_soft_delete_is_enabled (#7966) 2025-06-10 08:09:11 +02:00
Pepe Fagoaga 4d00aece45 chore(changelog): move entry for their version (#7969) 2025-06-09 21:50:13 +05:45
Hugo Pereira Brito 49aaf011aa fix(parser): add GitHub provider to prowler -h usage section (#7906) 2025-06-09 17:47:29 +02:00
Adrián Jesús Peña Rodríguez 898934c7f8 chore: update django version (#7984) 2025-06-09 17:33:16 +02:00
Pepe Fagoaga 81c4b5a9c1 chore(api): Delete old docker compose file (#7982) 2025-06-09 21:01:52 +05:45
Pepe Fagoaga fe31656ffe fix(k8s): return a session if using kubeconfig_content (#7953) 2025-06-09 19:11:59 +05:45
Hugo Pereira Brito 359059dee6 fix(docs): add Organization.Read.All to M365 provider requirements (#7961) 2025-06-09 12:11:14 +02:00
Alejandro Bailo 2eaa37921d feat: KISA detailed view (#7965) 2025-06-09 09:29:34 +02:00
Pablo Lara 3a99909b75 chore: align Next.js version to 14.2.29 across Prowler and Cloud (#7962) 2025-06-06 13:54:42 +02:00
Pablo Lara 2ecd9ad2c5 docs: update changelog (#7960) 2025-06-06 13:17:38 +02:00
Alejandro Bailo 50dc396aa3 feat: scan id filter dropdown (#7949)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-06-06 12:38:14 +02:00
Andoni Alonso acf333493a chore(api): reorder docker layers to speed up build times (#7957) 2025-06-06 10:42:14 +02:00
Pedro Martín bd6272f5a7 feat(docs): add information about tenants and read-only roles (#7956) 2025-06-06 10:14:33 +02:00
Pepe Fagoaga 8c95e1efaf chore: update API changelog for v5.7.3 (#7948) 2025-06-05 15:54:36 +02:00
Hugo Pereira Brito 845a0aa0d5 fix(changelog): add entries for password encryption in v5.7.3 (#7939)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-06-05 14:23:12 +02:00
Hugo Pereira Brito 75a11be9e6 fix(docs): add final permission assignments example (#7943) 2025-06-05 18:07:43 +05:45
Hugo Pereira Brito a778d005b6 fix(docs): add mfa warning for users (#7924)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-06-05 17:55:27 +05:45
Pedro Martín 1281f4ec5e chore(changelog): update following the correct format (#7908) 2025-06-05 17:52:36 +05:45
Víctor Fernández Poyatos 6332427e5e fix(compliance): add manual status to requirements (#7938) 2025-06-05 10:54:51 +02:00
Alejandro Bailo d89df83904 fix: Improve the performance by removing regions heatmap (#7934) 2025-06-05 08:13:47 +02:00
Víctor Fernández Poyatos be420afebc feat(database): handle already closed connections (#7935) 2025-06-04 16:09:36 +02:00
Adrián Jesús Peña Rodríguez fb914a2c90 revert: remove get_with_retry (#7932) 2025-06-04 15:01:47 +02:00
Pablo Lara 4ac3cfc33d docs: update changelog (#7931) 2025-06-04 13:54:25 +02:00
Alejandro Bailo c74360ab63 fix: clear filters sync (#7928) 2025-06-04 13:32:52 +02:00
Alejandro Bailo 4dc4d82d42 feat: aws-well-architected compliance detailed view (#7925) 2025-06-04 12:26:27 +02:00
Víctor Fernández Poyatos 6e7a32cb51 revert(views): calling order to initial view method (#7921) 2025-06-03 16:38:00 +02:00
Alejandro Bailo 49e501c4be feat: CIS compliance detail view (#7913)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-06-03 15:47:46 +02:00
Víctor Fernández Poyatos 9ee78fe65f fix(views): calling order to initial view method (#7918) 2025-06-03 13:34:44 +02:00
Víctor Fernández Poyatos 7a0549d39c fix(rls): Apply persistent RLS transactions (#7916) 2025-06-03 13:10:41 +02:00
Alejandro Bailo 3e8c86d880 feat: ISO compliance detail view (#7897)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-06-03 09:20:52 +02:00
Pablo Lara e34c18757d fix: Fix named export for addCredentialsServiceAccountFormSchema (#7909) 2025-06-03 08:33:24 +02:00
Alejandro Bailo 5c1a47d108 feat: compliance detail view + ENS (#7853)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-06-02 18:20:22 +02:00
Víctor Fernández Poyatos 59c51d5a4a feat(compliance): Rework compliance overviews (#7877) 2025-06-02 17:06:24 +02:00
Pedro Martín 66aa67f636 feat(changelog): update version with fixes (#7904)
Co-authored-by: Rubén De la Torre Vico <ruben@prowler.com>
2025-06-02 12:32:45 +02:00
Pablo Lara bdda377482 docs: update the changelog (#7901) 2025-06-02 11:49:04 +02:00
Hugo Pereira Brito aa11ed70bd chore(docs): replace old permission images (#7900) 2025-06-02 11:47:11 +02:00
Adrián Jesús Peña Rodríguez 0580dca6cf fix: set user_id for tenant operations (#7890)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-06-02 11:06:49 +02:00
Pablo Lara 678ef0ab5a feat(providers): setup workflow to support new GCP credential method (#7872) 2025-06-02 10:23:39 +02:00
César Arroba 4888c27713 chore: fix commit sha when a pr is merged (#7889) 2025-05-30 17:40:57 +05:45
Hugo Pereira Brito b256c10622 chore: replace Directory.Read.All permission to Domain.Read.All for Azure (#7888) 2025-05-30 10:24:49 +02:00
Adrián Jesús Peña Rodríguez 878e4e0bbc fix: add new get method to avoid race conditions when creating async tasks (#7876)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-05-30 10:07:32 +02:00
Hugo Pereira Brito 6c3653c483 fix(docs): remove warning of encrypted password for cloud (#7886) 2025-05-30 12:01:32 +04:00
Daniel Barranquero 71ac703e6f fix(api): connection correctly reflected (#7831)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-05-29 20:23:15 +05:45
Sergio Garcia a89e3598f2 fix(gcp): test connection by verifying token (#7882) 2025-05-29 13:20:53 +02:00
Alison Vilela 5d043cc929 fix(awslambda): aws service awslambda not working (#7869) 2025-05-29 12:50:23 +05:45
Pepe Fagoaga 921f94ebbf fix(k8s): UID validation for valid context names (#7871) 2025-05-29 12:32:57 +05:45
sumit-tft 48c9ed8a79 fix(ui): increase limit to retrieve more than 10 scan list (#7865) 2025-05-29 07:52:36 +02:00
Hugo Pereira Brito 12987ec9f9 fix(admincenter): service and group visibility (#7870) 2025-05-28 16:48:49 +02:00
Hugo Pereira Brito 40b90ed063 fix(tests): typo in m365 domain test (#7866) 2025-05-28 16:43:58 +02:00
Alejandro Bailo 60314e781f feat: enhance CustomDropdownFilter (#7868) 2025-05-28 16:30:28 +02:00
Harsh Kumar bc56d48595 feat(dashboard): add client-side search functionality to findings table (#7804)
Co-authored-by: Harsh Kumar <harsh.k@cybersecurist.com>
2025-05-28 11:44:01 +02:00
Pedro Martín 2d71cef3d5 feat(azure): add NIS 2 compliance framework (#7857) 2025-05-28 11:35:40 +02:00
Daniel Barranquero 41f6637497 fix(defender): update defender_ensure_notify_alerts_severity_is_high logic (#7862)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-28 10:32:44 +02:00
Pedro Martín c2e54bbbcc fix(threatscore): remove compliance name in tests to remove dummy files (#7859)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-05-28 10:03:20 +02:00
sumit-tft df8aacd09d fix(ui): Added missing icons (kisa, prowlerThreat) on compliance page (#7860) 2025-05-28 09:51:28 +02:00
Matt Keeler 2dd6be59b9 fix(m365): add compliantDevice grant control support (#7844) 2025-05-28 09:05:00 +02:00
Hugo Pereira Brito 9e8e3eb0e6 fix(m365): update documentation (#7823)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-05-28 08:52:03 +02:00
Sergio Garcia 3728430f8c chore: update README (#7842)
Co-authored-by: Pedro Martín <pedromarting3@gmail.com>
2025-05-27 14:25:37 +02:00
sumit-tft ea97de7f43 fix(ui): updated to use the correct message when download report clicked (#7758)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-05-27 10:51:08 +02:00
Rubén De la Torre Vico f254a4bc0d feat(app): split SDK App service calls (#7778) 2025-05-27 09:52:50 +02:00
Pedro Martín 66acfd8691 feat(aws): add NIS2 compliance framework (#7839) 2025-05-27 09:35:57 +02:00
Matt Keeler 02ca82004f fix(typo): minor language updates (#7843) 2025-05-27 09:26:51 +02:00
Rubén De la Torre Vico 60b5a79b27 fix(vpc): change the ServiceName from EC2 to VPC (#7840) 2025-05-26 17:52:59 +02:00
Sergio Garcia be1e3e942b feat(api): support GCP Service Account key (#7824)
Co-authored-by: Sergio Garcia <38561120+garcitm@users.noreply.github.com>
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-05-26 15:42:39 +02:00
Sergio Garcia 3658e85cfc chore(github): add Branch class (#7838) 2025-05-26 14:34:44 +02:00
Adrián Jesús Peña Rodríguez 15e4d1acce refactor(reports): change API response message when tasks are running (#7837) 2025-05-26 12:20:05 +02:00
Andoni Alonso 44afd9ed31 fix: repository repository_dependency_scanning_enabled check logic (#7834) 2025-05-26 10:44:19 +02:00
Andoni Alonso 4f099c5663 refactor(github): use owner instead of repository in findings attributes (#7833) 2025-05-26 10:40:41 +02:00
Andoni Alonso eaec683eb9 feat(repository): add new check repository_inactive_not_archived (#7786)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-05-26 10:39:09 +02:00
Adrián Jesús Peña Rodríguez 50bcd828e9 fix(reports): change invalid search term for tasks (#7830) 2025-05-26 10:24:11 +02:00
Alejandro Bailo 91545e409e feat: change tenant name in /profile page (#7829)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-05-23 14:45:28 +02:00
Alejandro Bailo 33031d2c96 feat: implement provider UID extraction and mapping in scans pages (#7820)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-05-23 13:42:35 +02:00
Hugo Pereira Brito 1b42dda817 fix(formSchemas): encrypted password typo (#7828) 2025-05-23 12:52:17 +02:00
Hugo Pereira Brito f726d964a8 fix(m365): remove last encrypted password appearances (#7825) 2025-05-23 12:27:57 +02:00
Hugo Pereira Brito 36aaec8a55 chore(m365powershell): manage encryption from plaintext password (#7784)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-05-22 17:36:58 +02:00
Andoni Alonso 99164ce93e feat(repository): add new check repository_default_branch_requires_signed_commits (#7777)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-05-22 12:45:13 +02:00
Andoni Alonso 7ebc5d3c31 feat(repository): add new check repository_dependency_scanning_enabled (#7771)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-05-22 12:22:59 +02:00
Andoni Alonso 06ff3db8af feat(repository): add new check repository_secret_scanning_enabled (#7759)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-05-22 11:23:42 +02:00
Alejandro Bailo c44ea3943e feat: resources in finding tables (#7813) 2025-05-22 08:58:25 +02:00
Andoni Alonso d036e0054b feat(repository): add new check repository_default_branch_requires_codeowners_review (#7753)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-05-21 16:18:55 +02:00
Pedro Martín f72eb7e212 fix(files): remove empty files (#7819) 2025-05-21 16:15:04 +02:00
Andoni Alonso 62dcbc2961 feat(repository): add new check repository_has_codeowners_file (#7752) 2025-05-21 15:28:30 +02:00
Hugo Pereira Brito dddec4c688 fix(m365): add powershell.close() to msgraph services (#7816) 2025-05-21 15:13:03 +02:00
Sergio Garcia 6d00554082 chore(readme): add Prowler Hub link (#7814)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-05-21 17:46:54 +05:45
Pedro Martín 65d3fcee4c feat(prowler-threatscore): add Weight field inside req (#7795)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-05-21 12:57:10 +02:00
Pedro Martín 16cd0e4661 feat(prowler_threatscore): add a level for accordion in dashboard (#7739)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-05-21 12:46:47 +02:00
Hugo Pereira Brito 6e184dae93 fix(admincenter): admincenter_users_admins_reduced_license_footprint logic (#7779)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-05-21 12:46:35 +02:00
Pablo Lara 118f3d163d docs: update changelog UI (#7808) 2025-05-21 12:39:48 +02:00
Pedro Martín 7d84d67935 feat(gcp): add CIS 4.0 compliance framework (#7785) 2025-05-21 12:38:34 +02:00
Víctor Fernández Poyatos 1c1c58c975 feat(findings): Add new index for finding UID lookup (#7800) 2025-05-21 11:56:54 +02:00
Andoni Alonso 31ea672c61 fix: move changes to release 5.8 (#7801) 2025-05-21 11:45:54 +02:00
Toni de la Fuente 7016779b8e chore(README): update README.md (#7799) 2025-05-21 11:31:23 +02:00
Pedro Martín 4e958fdf39 feat(kubernetes): add CIS 1.11 compliance framework (#7790)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-21 11:09:47 +02:00
Pedro Martín c6259b6c75 fix(dashboard): remove typo from subscribe cards (#7792) 2025-05-21 11:08:52 +02:00
Sergio Garcia 021e243ada feat(kubernetes): support HTTPS_PROXY and K8S_SKIP_TLS_VERIFY (#7720) 2025-05-21 10:49:18 +02:00
Alejandro Bailo acdf420941 feat: profile page (#7780)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-05-21 10:47:32 +02:00
Hugo Pereira Brito 4e84507130 feat(entra): add new check entra_users_mfa_capable (#7734)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2025-05-21 10:31:56 +02:00
Prowler Bot 2a61610fec chore(regions_update): Changes in regions for AWS services (#7774)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-05-21 10:29:08 +02:00
Daniel Barranquero 9b127eba93 feat(admincenter): add new check admincenter_external_calendar_sharing_disabled (#7733)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2025-05-21 09:14:45 +02:00
Hugo Pereira Brito 1a89d65516 fix(m365powershell): add sanitize to test_credentials (#7761)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2025-05-21 08:49:04 +02:00
Daniel Barranquero 84749df708 feat(admincenter): add new check admincenter_organization_customer_lockbox_enabled (#7732)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2025-05-21 08:48:36 +02:00
Pepe Fagoaga 6f7cd85a18 chore(backport): create label on minor release (#7791) 2025-05-21 12:14:30 +05:45
Alejandro Bailo ad39061e1a fix: retrieve more than 10 providers (#7793) 2025-05-21 08:07:43 +02:00
Pablo Lara 615bacccaf chore: tweak some wording for consistency (#7794) 2025-05-21 07:59:53 +02:00
Prowler Bot b3a2479fab chore(release): Bump version to v5.8.0 (#7788)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-05-20 22:27:21 +05:45
sumit-tft 871c877a33 fix: AWS IAM role validation when field is empty (#7787)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-05-20 11:25:40 +02:00
Pedro Martín 7fd58de3bf feat(export): support m365 - prowler threatscore (#7783) 2025-05-19 15:59:42 +02:00
Víctor Fernández Poyatos 40f24b4d70 fix(providers): Fix m365 UID validation (#7781) 2025-05-19 13:34:46 +02:00
Adrián Jesús Peña Rodríguez d8f80699d4 chore: update api changelog (#7775) 2025-05-19 14:52:32 +05:45
Pablo Lara f24d0efc77 docs: update changelog (#7773) 2025-05-19 14:34:28 +05:45
Hugo Pereira Brito a18dd76a5a chore(m365): accept all tenant domains in authentication (#7746) 2025-05-19 13:53:54 +05:45
Pedro Martín a2362b4bbc fix(cis): rename and add sections and subsections (#7738) 2025-05-19 09:42:04 +02:00
Pedro Martín e5f1c2b19c feat(aws): add CIS 5.0 compliance framework (#7766) 2025-05-19 09:41:56 +02:00
Pedro Martín 0490ab6944 docs(checks): improve docs related with checks (#7768) 2025-05-19 09:17:14 +02:00
Sergio Garcia 97baa8a1e6 chore(ec2): improve severity logic in SG all ports open check (#7764) 2025-05-16 15:09:48 +02:00
Hugo Pereira Brito 637ebdc3db feat(repository): add new check repository_branch_delete_on_merge_enabled (#6209)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-16 15:03:37 +02:00
Hugo Pereira Brito 451b36093f feat(repository): add new check repository_default_branch_requires_conversation_resolution (#6208)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-16 14:57:15 +02:00
Víctor Fernández Poyatos beb0457aff fix(findings): Fix latest metadata backfill condition and optimization (#7765) 2025-05-16 14:50:40 +02:00
Víctor Fernández Poyatos 0335ea4e0b fix(findings): Fix latest metadata backfill condition (#7762) 2025-05-16 12:41:12 +02:00
sumit-tft 355abca5a3 fix(ui): Removed the alias if not available in findings detail page (#7751) 2025-05-16 09:02:47 +02:00
sumit-tft 7d69cc4cd9 fix: Updated the high risk section provider icons to make it consistent (#7706) 2025-05-16 08:53:34 +02:00
Hugo Pereira Brito cdc4b362a4 feat(repository): add new check repository_default_branch_protection_applies_to_admins (#6205)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-16 08:29:45 +02:00
Pablo Lara 6417e6bbba feat: use getFindingsLatest when no scan or date filters are applied (#7756) 2025-05-16 08:18:12 +02:00
Víctor Fernández Poyatos b810d45d34 feat(findings): Add /findings/latest and /findings/metadata/latest endpoints (#7743) 2025-05-15 16:08:09 +02:00
Ogonna Iwunze f5a2695c3b fix(check): Add support for condition with restriction on SNS endpoint (#7750)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-15 16:00:00 +02:00
Hugo Pereira Brito 977c788fff feat(repository): add new check repository_default_branch_status_checks_required (#6204)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-15 15:33:49 +02:00
Hugo Pereira Brito 21f8b5dbad fix(check): add missing __init__.py files (#7748) 2025-05-15 11:22:58 +02:00
Hugo Pereira Brito 1c874d1283 feat(repository): add new check repository_default_branch_deletion_disabled (#6200)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-05-15 08:33:36 +02:00
Hugo Pereira Brito 8f9bdae2b7 feat(repository): add new check repository_default_branch_disallows_force_push (#6197)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-14 16:48:47 +02:00
Pablo Lara 600813fb99 fix: force z-index component select provider (#7744)
Co-authored-by: StylusFrost <pm.diaz.pena@gmail.com>
2025-05-14 15:19:41 +02:00
Hugo Pereira Brito 5a9ccd60a0 feat(repository): add new check repository_default_branch_requires_linear_history (#6162)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-14 14:37:27 +02:00
Hugo Pereira Brito beb7a53efe feat(repository): add new check repository_default_branch_protection_enabled (#6161)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-05-14 13:42:59 +02:00
Hugo Pereira Brito 8431ce42a1 feat(organization): add new check organization_members_mfa_required (#6304)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-14 13:29:08 +02:00
Pablo Lara c5a9b63970 fix: UID Filter Improvement (#7741)
Co-authored-by: sumit_chaturvedi <chaturvedi.sumit@tftus.com>
2025-05-14 11:36:27 +02:00
Hugo Pereira Brito a765c1543e feat: add GitHub provider documentation and CIS v1.0.0 compliance (#6116)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-05-14 10:47:33 +02:00
Hugo Pereira Brito 484a773f5b feat(github): add new service Organization (#6300)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-14 10:40:26 +02:00
Hugo Pereira Brito 9ecf570790 feat(github): add new check repository_code_changes_multi_approval_requirement (#6160)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-14 10:06:52 +02:00
Adrián Jesús Peña Rodríguez f8c840f283 fix: ensure proper folder creation (#7729) 2025-05-14 10:02:41 +02:00
Pepe Fagoaga deec9efa97 feat(ui): Add AWS CloudFormation Quick Link to deploy the IAM Role (#7735) 2025-05-14 09:30:01 +02:00
César Arroba 2ee62cca8e chore: add ref on checkout step (#7740) 2025-05-14 12:24:49 +05:45
Hugo Pereira Brito 413b948ca0 feat(github): add GitHub provider (#5787)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-13 15:28:01 +02:00
Pablo Lara d548e869fa docs: update changelog (#7731) 2025-05-13 13:41:41 +02:00
Sergio Garcia 5c8919372c fix(deps): solve h11 package vulnerability (#7728) 2025-05-13 13:29:22 +02:00
Sergio Garcia 9baac9fd89 fix(deps): solve h11 package vulnerability (#7696) 2025-05-13 13:10:06 +02:00
sumit-tft 252b664e49 fix: Added filter to get connected providers only for banner to show (#7723) 2025-05-13 12:58:23 +02:00
Víctor Fernández Poyatos 496e0f1e0a fix(overviews): Split in n queries to use database indexes for providers (#7725) 2025-05-13 12:34:14 +02:00
dependabot[bot] 80342d612f chore(deps): bump h11 from 0.14.0 to 0.16.0 in /api (#7610)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-13 12:15:14 +02:00
Pablo Lara 02d7eaf268 chore: bump tailwind-merge from 2.5.3 to 3.2.0 (#7722) 2025-05-13 09:27:27 +02:00
Hugo Pereira Brito 1a8df3bf18 fix(defender): enhance policies checks logic (#7666)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-12 17:02:30 +02:00
Pablo Lara 16f2209d3f chore: add M365 to scan page filters (#7704) 2025-05-12 16:20:07 +02:00
Pablo Lara 70e22af550 chore(deps): upgrade recharts from 2.13.0-alpha.4 to 2.15.2 (#7717) 2025-05-12 16:09:54 +02:00
Sergio Garcia 44f26bc0d5 chore(docs): quality redrive to README.md (#7616)
Co-authored-by: dcanotrad <168282715+dcanotrad@users.noreply.github.com>
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2025-05-12 15:23:14 +02:00
Alejandro Bailo a19f5d9a9a feat: scan label validation (#7693) 2025-05-12 15:07:44 +02:00
Hugo Pereira Brito b78f53a722 chore(findings): enhance m365 authentication method information (#7681) 2025-05-12 18:31:32 +05:45
Víctor Fernández Poyatos c20f07ced4 feat(findings): Improve performance on /findings/metadata, /overviews and filters (#7690) 2025-05-12 14:34:37 +02:00
Hugo Pereira Brito 7c3a53908b chore(compliance): update CIS 4.0 for M365 (#7699)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-12 12:59:50 +02:00
Pepe Fagoaga ea3c71e22c fix(bump-version): bump for fix also in minors (#7712) 2025-05-12 12:45:17 +02:00
Pedro Martín 40eaa79777 docs(compliance): update compliance page with latest changes (#7694)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-12 12:37:43 +02:00
Prowler Bot aa8119970e chore(regions_update): Changes in regions for AWS services (#7709)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-05-12 12:37:21 +02:00
Pepe Fagoaga 55fc8cb55b chore(api): Set tab name for API reference (#7713) 2025-05-12 16:16:29 +05:45
Andoni Alonso abf51eceee fix(typo): rename generate_compliance_json_from_csv_threatscore (#7698) 2025-05-12 12:29:30 +02:00
Pedro Martín 458c51dda3 feat(m365): add Prowler Threatscore (#7692)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-05-12 12:28:50 +02:00
Sergio Garcia c8d2a44ab0 feat(kubernetes): allow setting cluster name in in-cluster mode (#7695) 2025-05-12 12:28:04 +02:00
César Arroba 0a71628298 chore: add pass PR url (#7711) 2025-05-12 11:55:00 +02:00
Pablo Lara 60e0040577 fix: move ProviderType to shared types and update usages (#7710) 2025-05-12 11:54:42 +02:00
Alejandro Bailo 5c375d63c5 feat: Horizontal bar chart (#7680) 2025-05-12 11:14:10 +02:00
Adrián Jesús Peña Rodríguez 4d84529ba2 docs: update the download export documentation (#7682) 2025-05-12 14:45:53 +05:45
Prowler Bot 0737d9e8bb chore(release): Bump version to v5.7.0 (#7697)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-05-12 14:41:28 +05:45
Alejandro Bailo 50c5294bc0 feat: accordion component (#7700) 2025-05-12 14:17:40 +05:45
Hugo Pereira Brito f63e9e5e77 fix(m365): invalid user credentials exception (#7677) 2025-05-12 13:22:13 +05:45
Hugo Pereira Brito 3cab52772c feat(m365): add categories for tenant type e3 and e5 (#7691) 2025-05-09 08:11:44 +02:00
Pepe Fagoaga 81aa035451 chore(changelog): prepare for v5.6.0 (#7688) 2025-05-08 16:49:56 +05:45
Pedro Martín 899f31f1ee fix(prowler_threatscore): fine-tune LevelOfRisk (#7667) 2025-05-08 15:23:31 +05:45
Pedro Martín e142a9e0f4 fix(dashboard): drop duplicates for rows (#7686) 2025-05-08 14:20:19 +05:45
Sergio Garcia ed26c2c42c fix(mutelist): properly handle wildcards and regex (#7685) 2025-05-08 12:10:55 +05:45
Pedro Martín 1017510a67 fix(dashboard): remove muted findings on compliance page (#7683) 2025-05-07 13:52:14 -04:00
Adrián Jesús Peña Rodríguez bfa16607b0 feat: add compliance to API report files and its endpoint (#7653)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-05-07 20:44:58 +05:45
Hugo Pereira Brito 4c874b68f5 fix(metadata): typo in defender_chat_report_policy_configured (#7678) 2025-05-07 09:30:49 -04:00
Sergio Garcia 9458e2bbc4 fix(inspector2): handle error when getting active findings (#7670)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-05-07 14:39:34 +02:00
Alejandro Bailo 2da7b926ed feat: add DeltaIndicator in new findings (#7676) 2025-05-07 17:59:56 +05:45
Daniel Barranquero 8d4f0ab90a feat(docs): add snapshots to M365 docs (#7673) 2025-05-07 12:19:10 +02:00
Hugo Pereira Brito 83aefc42c1 fix(powershell): remove platform-specific execution (#7675) 2025-05-07 11:44:13 +02:00
Alejandro Bailo a6489f39fd refactor(finding-detail): remove "Next Scan" field (#7674) 2025-05-07 14:39:35 +05:45
Pablo Lara 15c34952cf docs: update changelog (#7672) 2025-05-07 09:43:17 +02:00
Alejandro Bailo d002f2f719 feat: diff between providers actions depending on their secrets (#7669) 2025-05-07 09:35:53 +02:00
Sergio Garcia 8530676419 chore(actions): run tests in dependabot updates (#7671) 2025-05-07 11:43:01 +05:45
Pedro Martín fe5a78e4d4 feat(aws): add static credentials for S3 and SH (#7322) 2025-05-06 17:55:53 +02:00
Pablo Lara d823b2b9de chore: tweaks for m365 provider (#7668) 2025-05-06 17:06:44 +02:00
Alejandro Bailo 3b17eb024c feat: add delta attribute in findings detail view and finding id to the url (#7654) 2025-05-06 16:52:15 +02:00
Pablo Lara 87951a8371 feat(compliance): add a button to download the report in compliance card (#7665) 2025-05-06 14:44:02 +02:00
Andoni Alonso e5ca51d1e7 feat(teams): add new checks teams_security_reporting_enabled and defender_chat_report_policy_configured (#7614)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2025-05-06 11:30:00 +02:00
Daniel Barranquero e2fd3fe36e feat(defender): add new check defender_malware_policy_comprehensive_attachments_filter_applied (#7661) 2025-05-06 10:29:36 +02:00
Daniel Barranquero 6b0d73d7f9 feat(exchange): make exchange_user_mailbox_auditing_enabled check configurable (#7662)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-05 15:16:41 -04:00
Hugo Pereira Brito 7eec60f4d9 feat(m365): ensure all forms of mail forwarding are blocked or disabled (#7658)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-05-05 11:21:14 -04:00
Daniel Barranquero 9d788af932 docs(m365): add documentation for m365 (#7622)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-05-05 16:46:32 +02:00
Pedro Martín bbc0388d4d chore(changelog): update with latest PR (#7628)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-05-05 10:40:59 -04:00
Pedro Martín 887db29d96 feat(dashboard): support m365 provider (#7633)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-05-05 10:38:06 -04:00
dependabot[bot] ae74cab70a chore(deps): bump docker/build-push-action from 6.15.0 to 6.16.0 (#7650)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 09:58:38 -04:00
Prowler Bot e6d48c1fa4 chore(regions_update): Changes in regions for AWS services (#7657)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-05-05 09:56:16 -04:00
dependabot[bot] d5ab72a97c chore(deps): bump github/codeql-action from 3.28.15 to 3.28.16 (#7649)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 09:54:34 -04:00
dependabot[bot] 473631f83b chore(deps): bump trufflesecurity/trufflehog from 3.88.23 to 3.88.26 (#7648)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 09:54:16 -04:00
drewadwade a580b1ee04 fix(azure): CIS v2.0 4.4.1 Uses Wrong Check (#7656)
Co-authored-by: pedrooot <pedromarting3@gmail.com>
2025-05-05 15:53:55 +02:00
dependabot[bot] 844dd5ba95 chore(deps): bump actions/setup-python from 5.5.0 to 5.6.0 (#7647)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 09:53:40 -04:00
sumit-tft 44f8e4c488 feat(ui): Page size for datatables (#7634) 2025-05-05 15:42:06 +02:00
Alejandro Bailo 180eb61fee fix: error about page number persistence when filters change (#7655) 2025-05-05 12:23:04 +02:00
Andoni Alonso 9828824b73 chore(sentry): attach stacktrace to logging events (#7598)
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
2025-05-05 10:38:57 +02:00
Daniel Barranquero c938a25693 feat(exchange): add new check exchange_organization_modern_authentication_enabled (#7636)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-05-02 12:44:39 +02:00
Daniel Barranquero cccd69f27c feat(exchange): add new check exchange_roles_assignment_policy_addins_disabled (#7644)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-05-02 11:58:56 +02:00
Daniel Barranquero 3949806b5d feat(exchange): add new check exchange_mailbox_properties_auditing_e3_enabled (#7642)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-05-02 10:48:30 +02:00
Daniel Barranquero e7d249784d feat(exchange): add new check exchange_transport_config_smtp_auth_disabled (#7640)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-05-02 09:05:53 +02:00
Daniel Barranquero 25b1efe532 feat(exchange): add new check exchange_organization_mailtips_enabled (#7637)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-05-02 08:46:14 +02:00
Adrián Jesús Peña Rodríguez c289ddacf2 feat: add m365 to API (#7563)
Co-authored-by: Andoni A <14891798+andoniaf@users.noreply.github.com>
2025-04-30 17:09:47 +02:00
Hugo Pereira Brito 3fd9c51086 feat(m365): automate PowerShell modules installation (#7618)
Co-authored-by: Andoni A <14891798+andoniaf@users.noreply.github.com>
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
2025-04-30 16:41:59 +02:00
Pedro Martín de01087246 fix(s3): add ContentType in upload_file (#7635)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-04-30 19:48:23 +05:45
Pablo Lara fe42bb47f7 fix: set correct default value for session duration (#7639) 2025-04-30 13:00:45 +02:00
Víctor Fernández Poyatos c56bd519bb test(performance): Add base framework for API performance tests (#7632) 2025-04-30 12:36:25 +02:00
Daniel Barranquero 79b29d9437 feat(exchange): add new check exchange_mailbox_policy_additional_storage_restricted (#7638)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-04-30 12:05:41 +02:00
Pedro Martín 82eecec277 feat(sharepoint): add new check related with OneDrive Sync (#7589)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-04-30 11:43:41 +02:00
Pedro Martín ceacd077d2 fix(typos): remove unneeded files (#7627) 2025-04-29 13:24:24 +05:45
Pepe Fagoaga 5a0fb13ece fix(run-sh): Use poetry's env (#7621) 2025-04-29 13:01:12 +05:45
Erlend Ekern 78439b4c0c chore(dockerfile): add image source as docker label (#7617) 2025-04-29 13:00:47 +05:45
Pedro Martín 06f94f884f feat(compliance): add new Prowler Threat Score Compliance Framework (#7603)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-28 09:57:52 +02:00
dependabot[bot] b8836c6404 chore(deps): bump @babel/runtime from 7.24.7 to 7.27.0 in /ui (#7502)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-28 08:49:33 +02:00
Andoni Alonso ac79b86810 feat(teams): add new check teams_meeting_presenters_restricted (#7613)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-04-25 14:34:05 -04:00
Andoni Alonso 793c2ae947 feat(teams): add new check teams_meeting_recording_disabled (#7607)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-04-25 12:35:54 -04:00
Andoni Alonso cdcc5c6e35 feat(teams): add new check teams_meeting_external_chat_disabled (#7605)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-04-25 11:30:38 -04:00
Andoni Alonso 51db81aa5c feat(teams): add new check teams_meeting_external_control_disabled (#7604)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-04-25 10:59:36 -04:00
Hugo Pereira Brito a51a185f49 fix(powershell): handle m365 provider execution and logging (#7602)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-25 10:44:25 -04:00
Hugo Pereira Brito 90453fd07e feat(teams): add new check teams_meeting_chat_anonymous_users_disabled (#7579)
Co-authored-by: Andoni A <14891798+andoniaf@users.noreply.github.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-25 09:29:24 -04:00
Pablo Lara d740bf84c3 feat: add new M365 to the provider overview table (#7615) 2025-04-25 15:24:47 +02:00
Pedro Martín d13d2677ea fix(compliance): improve compliance and dashboard (#7596)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-04-24 13:28:18 -04:00
dependabot[bot] b076c98ba1 chore(deps): bump h11 from 0.14.0 to 0.16.0 (#7609)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-24 13:19:11 -04:00
Hugo Pereira Brito d071dea7f7 feat(teams): add new check teams_meeting_dial_in_lobby_bypass_disabled (#7571)
Co-authored-by: Andoni A <14891798+andoniaf@users.noreply.github.com>
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-04-24 13:05:52 -04:00
Hugo Pereira Brito d9782c7b8a feat(teams): add new check teams_meeting_external_lobby_bypass_disabled (#7568)
Co-authored-by: Andoni A <14891798+andoniaf@users.noreply.github.com>
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-04-24 12:13:42 -04:00
Pedro Martín f85450d0b5 fix(html): remove first empty line (#7606)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-04-24 11:23:24 -04:00
Pepe Fagoaga b129326ed6 chore(actions): Bump Prowler version on release (#7560) 2025-04-24 10:25:36 -04:00
Hugo Pereira Brito eaf0d06b63 chore(m365): add test_connection function (#7541)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-24 10:20:58 -04:00
Pedro Martín 87f3e0a138 fix(nhn): remove unneeded parameter (#7600) 2025-04-24 13:21:52 +02:00
Daniel Barranquero 8e3c856a14 feat(exchange): add new check exchange_external_email_tagging_enabled (#7580)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-23 14:11:39 -04:00
Daniel Barranquero 12c2439196 feat(exchange): add new check exchange_transport_rules_whitelist_disabled (#7569)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-04-23 13:47:51 -04:00
Daniel Barranquero deb1e0ff34 feat(defender): Add new check defender_antispam_policy_inbound_no_allowed_domains (#7500)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-23 13:29:24 -04:00
Hugo Pereira Brito 808e8297b0 feat(teams): add new check teams_meeting_anonymous_user_start_disabled (#7567) 2025-04-23 10:31:17 -04:00
Hugo Pereira Brito 738ce56955 fix(docs): overview m365 auth (#7588) 2025-04-23 09:58:32 -04:00
Sergio Garcia 190fd0b93c fix(scan): handle cloud provider errors and ignore expected sentry noise (#7582) 2025-04-23 09:58:04 -04:00
Pablo Lara ca6df26918 chore: remove deprecated launch scan page from old 4-step workflow (#7592) 2025-04-23 15:13:05 +02:00
Pablo Lara bcfeb97e4a feat(m365): add the new provider m365 - UI part (#7591) 2025-04-23 14:23:33 +02:00
Hugo Pereira Brito 0234957907 feat(teams): add new check teams_meeting_anonymous_user_join_disabled (#7565)
Co-authored-by: Andoni A <14891798+andoniaf@users.noreply.github.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-22 16:02:16 -04:00
Hugo Pereira Brito 8713b74204 feat(teams): add new check teams_external_users_cannot_start_conversations (#7562)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-22 14:36:54 -04:00
Hugo Pereira Brito cbaddad358 feat(teams): add new check teams_unmanaged_communication_disabled (#7561)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-22 13:25:30 -04:00
Hugo Pereira Brito 2379544425 feat(teams): add new check teams_external_domains_restricted (#7557)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-04-22 13:04:51 -04:00
Hugo Pereira Brito 29fefba62e fix(teams): teams_email_sending_to_channel_disabled docstrings (#7559) 2025-04-22 12:57:18 -04:00
Daniel Barranquero 098382117e feat(defender): add new check defender_antispam_connection_filter_policy_safe_list_off (#7494)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-22 12:52:34 -04:00
Daniel Barranquero d816d73174 feat(defender): add new check defender_antispam_connection_filter_policy_empty_ip_allowlist (#7492)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-22 12:28:18 -04:00
Matt Keeler 30eb78c293 fix(aws): use correct ports in ec2_instance_port_cifs_exposed_to_internet recommendation (#7574)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-22 12:24:12 -04:00
Daniel Barranquero a671b092ee feat(defender): add new check defender_domain_dkim_enabled (#7485)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-22 11:15:33 -04:00
Pepe Fagoaga 0edf199282 fix(actions): Include files within providers for SDK tests (#7577) 2025-04-22 10:28:43 -04:00
Andoni Alonso 2478555f0e fix(aws): update bucket naming validation to accept dots (#7545)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-22 10:06:14 -04:00
Daniel Barranquero b07080245d feat(defender): add new check defender_antispam_outbound_policy_configured (#7480)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-22 09:58:07 -04:00
Pepe Fagoaga 2ebf217bb0 fix(k8s): Remove command as it is not needed (#7570) 2025-04-22 09:33:40 -04:00
Prowler Bot bb527024d9 chore(regions_update): Changes in regions for AWS services (#7550)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-04-22 09:32:22 -04:00
Sergio Garcia e897978c3e fix(azure): handle new FlowLog properties (#7546) 2025-04-22 09:21:17 -04:00
Pepe Fagoaga 00f1c02532 chore(tests): Split by provider in the SDK (#7564) 2025-04-22 16:46:15 +05:45
César Arroba 348d1a2fda chore: pass labels on PR merge trigger (#7558) 2025-04-21 16:43:40 +02:00
César Arroba f1df8ba458 chore: revert pass labels (#7556) 2025-04-21 12:46:42 +02:00
César Arroba b5ea418933 chore: pass labels as json is required (#7555) 2025-04-21 12:10:18 +02:00
César Arroba 734fa5a4e6 chore: fix merged PR action, incorrect order on payload (#7554) 2025-04-21 12:03:14 +02:00
César Arroba 08f6d4b69b chore: pass labels (#7553) 2025-04-21 11:57:50 +02:00
César Arroba 29d3bb9f9a chore: fix json body (#7552) 2025-04-21 15:01:03 +05:45
César Arroba 4d217e642b chore: fix trigger (#7551) 2025-04-21 14:56:17 +05:45
César Arroba bd56e03991 chore(gha): trigger cloud pull-request when a PR is merged (#7212) 2025-04-21 14:54:22 +05:45
Felix Dreissig 0b6aa0ddcd fix(aws): remove SHA-1 from ACM insecure key algorithms (#7547) 2025-04-18 16:25:44 -04:00
Daniel Barranquero 4f3496194d feat(defender): add new check defender_antiphishing_policy_configured (#7453) 2025-04-18 12:42:19 -04:00
Daniel Barranquero d09a680aaa feat(defender): add new check defender_malware_policy_notifications_internal_users_malware_enabled (#7435)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-04-18 11:08:05 -04:00
Daniel Barranquero 56d7431d56 feat(defender): add service and new check defender_malware_policy_common_attachments_filter_enabled (#7425)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-04-17 13:33:43 -04:00
Daniel Barranquero abae5f1626 feat(exchange): add new check exchange_mailbox_audit_bypass_disabled (#7418)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-16 14:06:32 -04:00
Daniel Barranquero 7d0e94eecb feat(exchange): add service and new check exchange_organization_mailbox_auditing_enabled (#7408)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-16 12:19:06 -04:00
Hugo Pereira Brito 23b65c7728 feat(teams): add new check teams_email_sending_to_channel_disabled (#7533)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-04-16 11:13:55 -04:00
Sergio Garcia aa3182ebc5 feat(gcp): support CLOUDSDK_AUTH_ACCESS_TOKEN (#7495) 2025-04-16 10:35:04 -04:00
Sergio Garcia 32d27df0ba chore(regions): change interval to weekly (#7539) 2025-04-16 09:35:30 -04:00
Prowler Bot 6439f0a5f3 chore(regions_update): Changes in regions for AWS services (#7538)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-04-16 09:25:29 -04:00
Sergio Garcia 19476632ff chore(dependabot): change settings (#7536) 2025-04-16 11:26:57 +05:45
Pedro Martín d4c12e4632 fix(iam): change some logger.info values (#7526)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-04-15 13:25:37 -04:00
Hugo Pereira Brito 52bd48168f feat: adapt Microsoft365 provider to use PowerShell (#7331)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-15 13:24:09 -04:00
Bogdan A c0d935e232 docs(gcp): update required permissions for GCP (#7488) 2025-04-15 10:23:45 -04:00
Pepe Fagoaga 24dfd47329 fix(pypi): package name location in pyproject.toml while replicating for prowler-cloud (#7531) 2025-04-15 20:01:27 +05:45
dependabot[bot] fbae338689 chore(deps): bump python from 3.12.9-alpine3.20 to 3.12.10-alpine3.20 (#7520)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-15 09:26:04 -04:00
dependabot[bot] 186fd88f8c chore(deps): bump codecov/codecov-action from 5.4.0 to 5.4.2 (#7522)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-15 09:25:44 -04:00
dependabot[bot] 14ff34c00a chore(deps): bump actions/setup-node from 4.3.0 to 4.4.0 (#7521)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-15 09:25:23 -04:00
Prowler Bot a66fa394d3 chore(regions_update): Changes in regions for AWS services (#7527)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-04-15 09:20:20 -04:00
Pepe Fagoaga 931766fe08 chore(action): Remove cache in PyPI release (#7532) 2025-04-15 18:58:26 +05:45
Pepe Fagoaga c134914896 revert: fix(findings): increase uid max length to 600 (#7528) 2025-04-15 15:54:32 +05:45
Pepe Fagoaga 25dac080a5 chore(changelog): prepare for 5.5.1 (#7523) 2025-04-15 11:46:20 +05:45
Sergio Garcia 910d39eee4 chore(sdk): update changelog (#7512) 2025-04-15 11:19:50 +05:45
Pepe Fagoaga d604ae5569 fix(pyproject): Restore packages location (#7510) 2025-04-14 16:50:50 -04:00
Bogdan A 42f46b0fb1 feat(gcp): add check for unused Service Accounts (#7419)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-14 11:53:54 -04:00
Pepe Fagoaga abb5864224 chore(release): bump for 5.6.0 (#7503) 2025-04-14 11:50:46 -04:00
Prowler Bot 2e2a2bd89a chore(regions_update): Changes in regions for AWS services (#7491)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-04-14 10:29:19 -04:00
Sergio Garcia f8ee841921 fix(gcp): handle projects without ID (#7496) 2025-04-14 10:25:54 -04:00
Pedro Martín ceda8c76d2 feat(azure): add SOC2 compliance framework (#7489) 2025-04-14 10:16:20 -04:00
Pedro Martín afe0b7443f fix(defender): add default name to contacts (#7483) 2025-04-14 10:16:07 -04:00
Prowler Bot 9b773897d2 chore(regions_update): Changes in regions for AWS services (#7487)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-04-14 09:53:40 -04:00
Pedro Martín d6ec4c2c96 feat(sdk): add changelog file (#7499) 2025-04-14 09:22:50 -04:00
Prowler Bot 14ef169e99 chore(regions_update): Changes in regions for AWS services (#7497)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-04-14 09:22:21 -04:00
Pepe Fagoaga 22141f9706 fix(findings): increase uid max length to 600 (#7498)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-04-14 17:46:13 +05:45
Pablo Lara a5c6fee5b4 fix: update redirect URL for SSO (#7493) 2025-04-11 18:25:28 +05:45
Pablo Lara d3a5a5c0a1 fix: resolve social login issue in AuthForm on sign-up page (#7490) 2025-04-11 09:59:10 +02:00
dependabot[bot] 5d81869de4 chore(deps): bump tj-actions/changed-files from 46.0.4 to 46.0.5 (#7486)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-09 22:31:33 -04:00
Pepe Fagoaga 73ebf95d89 chore(changelog): Prepare for v5.5.0 (#7484) 2025-04-09 20:50:56 +05:45
Sergio Garcia 9f4574f4ff fix: handle errors in AWS and Azure (#7482) 2025-04-09 20:19:38 +05:45
Pedro Martín cb239b20ab fix(aws): add default session_duration (#7479) 2025-04-09 19:19:17 +05:45
eeche 3ef79588b4 feat(NHN): add NHN cloud provider with 6 checks (#6870)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-04-09 09:13:24 -04:00
Prowler Bot 61000e386b chore(regions_update): Changes in regions for AWS services (#7478)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-04-09 09:11:29 -04:00
Pablo Lara 53cb57901f fix: fix TS type for session duration (#7481) 2025-04-09 13:44:53 +02:00
Pedro Martín 993ff4d78e feat(gcp): add SOC2 compliance framework (#7476) 2025-04-08 15:04:08 -04:00
Drew Kerrigan 8fb10fbbf7 fix(ui): Remove UTC from timestamps in app (#7474) 2025-04-08 17:43:44 +02:00
Pablo Lara 11e834f639 feat: update the NextJS version to the latest (#7473) 2025-04-08 17:40:39 +02:00
Prowler Bot 62bf2fbb9c chore(regions_update): Changes in regions for AWS services (#7467)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-04-08 10:21:42 -04:00
dependabot[bot] e57930d6c2 chore(deps): bump github/codeql-action from 3.28.13 to 3.28.15 (#7463)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-08 09:38:18 -04:00
Pepe Fagoaga e0c417a466 fix(action): Use poetry > v2 (#7472) 2025-04-08 18:34:24 +05:45
Sergio Garcia b55f8efed1 fix: handle errors in AWS, Azure, and GCP (#7456) 2025-04-08 18:05:43 +05:45
Pablo Lara 7cbc60d977 feat: add link with the service status using static icon (#7468) 2025-04-08 12:06:21 +02:00
Adrián Jesús Peña Rodríguez 5b7912b558 fix(provider): disable periodic task on views before deleting (#7466)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-04-08 15:35:22 +05:45
Pedro Martín 57fca3e54d fix(soc2_aws): update compliance and remove some requirements (#7452) 2025-04-07 15:47:19 -04:00
Pedro Martín e31c27b123 fix(gcp): handle logic for empty project names (#7436) 2025-04-07 11:51:15 -04:00
Sergio Garcia 74f1da818e fix(gcp): ignore redirect balancers and add regional ones (#7442) 2025-04-07 11:47:02 -04:00
Pedro Martín 910cfa601b fix(aws): add resource arn for transit gateways (#7447) 2025-04-07 11:46:53 -04:00
dependabot[bot] fe321c3f8a chore(deps): bump tj-actions/changed-files from 46.0.3 to 46.0.4 (#7443)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-07 09:11:54 -04:00
Prowler Bot 43de0d405f chore(regions_update): Changes in regions for AWS services (#7446)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-04-07 09:11:23 -04:00
dependabot[bot] ac6ed31c8e chore(deps): bump trufflesecurity/trufflehog from 3.88.22 to 3.88.23 (#7444)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-07 09:11:07 -04:00
Prowler Bot 9d47437de4 chore(regions_update): Changes in regions for AWS services (#7445)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-04-07 09:10:49 -04:00
Pablo Lara eb7a62ff77 refactor: extract common auth headers into reusable helper (#7439) 2025-04-07 08:16:55 +02:00
Pedro Martín 67bc16b46d fix(defender): add default resource name in contacts (#7438) 2025-04-04 09:35:11 -04:00
Sergio Garcia 8552a578a0 fix(aws): solve multiple errors (#7431) 2025-04-04 09:34:58 -04:00
Sergio Garcia a5d277e045 fix(docs): solve broken links (#7432) 2025-04-04 09:15:48 -04:00
Adrián Jesús Peña Rodríguez 6dbf2ac606 feat: add missing SDK fields to API findings and resources (#7318) 2025-04-04 14:57:49 +02:00
Prowler Bot b1569ac2f3 chore(regions_update): Changes in regions for AWS services (#7434)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-04-04 08:36:23 -04:00
dependabot[bot] 3d0145b522 chore(deps): bump trufflesecurity/trufflehog from 3.88.20 to 3.88.22 (#7433)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-04 08:34:51 -04:00
Pedro Martín 44174526d6 docs: add onboarding information step by step for each provider (#7362) 2025-04-04 13:00:43 +02:00
Pablo Lara 0fd395ea83 fix: correct fetch variable name from invitations to roles (#7437) 2025-04-04 12:08:57 +02:00
dependabot[bot] 5e9d4a80a1 chore(deps): bump msgraph-sdk from 1.18.0 to 1.23.0 (#7128)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Rubén De la Torre Vico <ruben@prowler.com>
2025-04-04 11:27:39 +02:00
Pedro Martín e4d234fe03 fix(azure): remove resource_name inside the Check_Report (#7420) 2025-04-03 11:35:02 -04:00
Prowler Bot 3202184718 chore(regions_update): Changes in regions for AWS services (#7424)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-04-03 09:39:00 -04:00
Sergio Garcia 41e576f4f1 fix(gcp): make logging sink check at project level (#7421) 2025-04-03 09:37:46 -04:00
Pepe Fagoaga d8dce07019 chore(deletion): Add environment variable for batch size (#7423)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-04-03 15:31:13 +05:45
Prowler Bot 2b0a3144c7 chore(regions_update): Changes in regions for AWS services (#7417)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-04-02 09:59:08 -04:00
dependabot[bot] 62fbce0b5e chore(deps): bump azure-identity from 1.19.0 to 1.21.0 (#7192)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Rubén De la Torre Vico <ruben@prowler.com>
2025-04-02 11:16:47 +02:00
Pedro Martín 5a59bb335c fix(resources): add the correct id and names for resources (#7410) 2025-04-01 20:30:37 +02:00
Sergio Garcia 2719991630 fix(report): log as error when Resource ID or Name do not exist (#7411) 2025-04-01 20:24:18 +02:00
Daniel Barranquero 6a3b8c4674 feat(entra): add new check entra_admin_users_cloud_only (#7286) 2025-04-01 19:14:15 +02:00
dependabot[bot] 191fbf0177 chore(deps): bump azure-mgmt-applicationinsights from 4.0.0 to 4.1.0 (#7161)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Rubén De la Torre Vico <ruben@prowler.com>
2025-04-01 14:55:37 +02:00
Víctor Fernández Poyatos 228dd2952a fix(scans): Handle duplicated scan tasks (#7401) 2025-04-01 11:55:14 +02:00
dependabot[bot] 97db38aa25 chore(deps): bump azure-mgmt-containerregistry from 10.3.0 to 12.0.0 (#7025)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Rubén De la Torre Vico <ruben@prowler.com>
2025-04-01 10:29:31 +02:00
Pedro Martín dc953a6e22 docs(python): add annotations about Python version (#7402) 2025-03-31 18:14:59 +02:00
Bogdan A 51e796a48d feat(gcp): add check for dormant (unused) SA keys (#7348)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2025-03-31 18:14:21 +02:00
Hugo Pereira Brito 024f1425df feat(entra): add new check entra_legacy_authentication_blocked (#7240) 2025-03-31 18:12:26 +02:00
Hugo Pereira Brito a7ed610da9 feat(entra): add new check entra_users_mfa_enabled (#7228) 2025-03-31 17:54:52 +02:00
Hugo Pereira Brito 7ba99f22cd feat(entra): add new check entra_admin_users_phishing_resistant_mfa_enabled (#7211)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-03-31 17:52:28 +02:00
Hugo Pereira Brito b8ce09ec34 fix(entra): check name and logic of entra_admin_users_have_mfa_enabled (#7230) 2025-03-31 17:50:51 +02:00
Daniel Barranquero c243110a49 feat(entra): add new check entra_policy_guest_invite_only_for_admin_roles (#7241)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-03-31 14:53:50 +02:00
Daniel Barranquero ee27636f32 fix(redshift): validation error for Cluster.multi_az (#7381) 2025-03-31 13:55:48 +02:00
dependabot[bot] f2f41c9c44 chore(deps): bump azure-mgmt-resource from 23.2.0 to 23.3.0 (#7054)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Rubén De la Torre Vico <ruben@prowler.com>
2025-03-31 13:29:49 +02:00
Daniel Barranquero 9312890e6a feat(entra): add new check entra_policy_guest_users_access_restrictions (#7234) 2025-03-31 12:45:26 +02:00
Daniel Barranquero 9578281b4f feat(entra): add new check entra_policy_restricts_user_consent_for_apps (#7225) 2025-03-31 12:32:51 +02:00
Víctor Fernández Poyatos 08690068fc feat(findings): Handle muted findings in API and UI (#7378)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-03-31 12:25:58 +02:00
Hugo Pereira Brito e06a33de84 feat(entra): add new check entra_managed_device_required_for_mfa_registration (#7203)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-03-31 12:24:47 +02:00
Prowler Bot 6a3db10fda chore(regions_update): Changes in regions for AWS services (#7395)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-03-31 10:18:53 +02:00
Andoni Alonso bbed445efa chore(sentry): ignore exception when aws service not available in a region (#7352) 2025-03-31 10:13:19 +02:00
dependabot[bot] 9d65fb0bf2 chore(deps): bump trufflesecurity/trufflehog from 3.88.18 to 3.88.20 (#7394)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-31 10:12:55 +02:00
Prowler Bot 34f03ca110 chore(regions_update): Changes in regions for AWS services (#7391)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-03-27 11:10:07 +01:00
Daniel Barranquero 87c038f0c2 fix(rds): handle Certificate rds-ca-2019 not found (#7383) 2025-03-27 11:09:33 +01:00
dependabot[bot] b3014f03b1 chore(deps): bump actions/setup-python from 5.4.0 to 5.5.0 (#7390)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-27 09:13:50 +01:00
Daniel Barranquero d39598c9fc fix(stepfunctions): NoneType object has no attribute level (#7386) 2025-03-26 19:39:27 +01:00
Daniel Barranquero 5ea9106259 fix(fms): resource metadata could not be converted to dict (#7379) 2025-03-26 19:25:00 +01:00
Prowler Bot bcc0b59de1 chore(regions_update): Changes in regions for AWS services (#7382)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-03-26 12:52:35 +01:00
Daniel Barranquero 5d6ed640f0 fix(vm): handle NoneType is not iterable for extensions (#7360)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-03-25 12:25:15 +01:00
Sergio Garcia dd1cc2d025 fix(s3): handle None S3 account public access block (#7350) 2025-03-25 11:39:19 +01:00
Andoni Alonso 52e5cc23e4 fix(storagegateway): describe smb/nfs share per region (#7374) 2025-03-25 10:35:37 +01:00
Pablo Lara 76a8e2be1f chore: tweak for button see findings (#7369) 2025-03-25 09:52:36 +01:00
Andoni Alonso d989425490 fix(vm): handle NoneType accessing security_profile (#7221) 2025-03-25 09:33:00 +01:00
Hugo Pereira Brito 1e324b7ed2 fix(network): handle NoneType is not iterable for security groups (#7208) 2025-03-25 09:28:37 +01:00
Sergio Garcia e68aa62f94 fix(iam): handle none SAML Providers (#7359) 2025-03-25 09:24:32 +01:00
Daniel Barranquero 332b98a1ab fix(iam): handle UnboundLocalError cannot access local variable 'report' (#7361) 2025-03-25 09:22:35 +01:00
Pablo Lara dd05ef7974 chore(scans): properly enable link to findings when scan is completed (#7368) 2025-03-25 08:45:37 +01:00
dependabot[bot] d6862766d3 chore(deps): bump github/codeql-action from 3.28.12 to 3.28.13 (#7367)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-25 12:43:02 +05:45
dependabot[bot] f52d005e2d chore(deps): bump tj-actions/changed-files from 46.0.1 to 46.0.3 (#7363)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-25 12:42:50 +05:45
Víctor Fernández Poyatos bf475234a5 build(api): Force django-allauth==65.4.1 (#7358) 2025-03-24 17:39:47 +01:00
Pablo Lara cd5985c056 docs: update readme (#7357) 2025-03-24 15:41:35 +01:00
Pablo Lara ce33dbf823 chore(findings): apply default filter to show failed findings (#7356) 2025-03-24 15:38:09 +01:00
Pablo Lara 0a9d0688a7 docs(changelog): document addition of download column in scans table … (#7354) 2025-03-24 15:28:13 +01:00
Pablo Lara 24784f2ce5 feat(scans): add download button column for completed scans in table (#7353) 2025-03-24 15:22:36 +01:00
Víctor Fernández Poyatos 7a1e611b88 ref(providers): Refactor provider deletion functions (#7349) 2025-03-24 14:39:14 +01:00
Pepe Fagoaga 3073150008 chore(next): Remove x-powered-by header (#7346) 2025-03-24 16:17:18 +05:45
Jonny 9923def4cb chore(awslambda): update obsolete lambda runtimes (#7330)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-03-24 11:21:01 +01:00
Víctor Fernández Poyatos a7f612303f feat(compliance): Add endpoint to retrieve compliance overviews metadata (#7333) 2025-03-24 10:34:43 +01:00
Pablo Lara 64c2a2217a docs: update changelog with Next.js security patch (#7339) (#7341) 2025-03-24 09:59:59 +01:00
Pablo Lara 4689d7a952 chore: upgrade Next.js to 14.2.25 to fix auth middleware vulnerability (#7339) 2025-03-24 09:48:41 +01:00
Prowler Bot 87cd143967 chore(regions_update): Changes in regions for AWS services (#7219)
Co-authored-by: MrCloudSec <38561120+MrCloudSec@users.noreply.github.com>
2025-03-24 09:46:57 +01:00
Prowler Bot e37fd05d58 chore(regions_update): Changes in regions for AWS services (#7246)
Co-authored-by: MrCloudSec <38561120+MrCloudSec@users.noreply.github.com>
2025-03-24 09:46:26 +01:00
Prowler Bot acc708bda5 chore(regions_update): Changes in regions for AWS services (#7250)
Co-authored-by: MrCloudSec <38561120+MrCloudSec@users.noreply.github.com>
2025-03-24 09:46:08 +01:00
Prowler Bot c7460bb69c chore(regions_update): Changes in regions for AWS services (#7334)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-03-24 09:35:47 +01:00
Pepe Fagoaga 84b273dab9 fix(action): Use Poetry v2 (#7329) 2025-03-20 18:49:32 +01:00
Prowler Bot bb7ce2157e chore(regions_update): Changes in regions for AWS services (#7323)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-03-20 18:10:28 +05:45
Pepe Fagoaga 07b9e1d3a4 chore(api): Update CHANGELOG (#7325) 2025-03-20 15:22:00 +05:45
Pepe Fagoaga 96a879d761 fix(scan_id): Read the ID from the Scan object (#7324) 2025-03-20 15:18:31 +05:45
Pepe Fagoaga 283127c3f4 chore(aws-regions): remove backport to v3 (#7319) 2025-03-19 22:14:41 +05:45
dependabot[bot] beeee80a0b chore(deps): bump github/codeql-action from 3.28.11 to 3.28.12 (#7321)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-19 22:14:23 +05:45
Pepe Fagoaga 06b62826b4 chore(dependabot): disable for v3 (#7316) 2025-03-19 21:56:52 +05:45
Pedro Martín d0736af209 fix(gcp): make provider id mandatory in test_connection (#7296) 2025-03-19 18:33:49 +05:45
Pablo Lara 716c8c1a5f docs: add social login images and update documentation (#7314)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-03-19 17:16:37 +05:45
Pepe Fagoaga e6cdda1bd9 chore(dependabot): Disable for API and UI (#7300) 2025-03-19 14:46:11 +05:45
Pedro Martín 2747a633bc fix(k8s): remove typos from PCI 4.0 (#7294) 2025-03-19 09:31:40 +01:00
Pepe Fagoaga 74118f5cfe chore(social-login): improve copy when not enabled (#7295) 2025-03-19 13:36:22 +05:45
dependabot[bot] 598bdf28bb chore(deps): bump trufflesecurity/trufflehog from 3.88.17 to 3.88.18 (#7297)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-19 12:31:52 +05:45
Pepe Fagoaga d75f681c87 chore(security): Configure HTTP Security Headers (#7220)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-03-18 17:49:12 +01:00
Pepe Fagoaga c7956ede6a chore(security): Add HTTP Security Headers (#7289) 2025-03-18 17:44:57 +01:00
Pablo Lara 64f5a69e84 fix: prevent SSR mismatch in OAuth URL generation (#7288) 2025-03-18 17:22:29 +01:00
dependabot[bot] bfb15c34b8 chore(deps): bump azure-mgmt-containerservice from 34.0.0 to 34.1.0 (#6989)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Rubén De la Torre Vico <ruben@prowler.com>
2025-03-18 17:14:25 +01:00
Pablo Lara 638b3ac0cd chore(providers): change wording when adding a new provider (#7280) 2025-03-18 21:50:56 +05:45
Daniel Barranquero 9d6147a037 fix(route53): solve false positive in route53_public_hosted_zones_cloudwatch_logging_enabled (#7201) 2025-03-18 16:54:49 +01:00
Pepe Fagoaga 802c786ac2 fix(test-connection): Handle provider without secret (#7283) 2025-03-18 21:34:36 +05:45
Pepe Fagoaga c8be8dbd9a fix(aws-regions): Use @prowler-bot as author (#7285) 2025-03-18 20:27:19 +05:45
Pablo Lara 7053b2bb37 chore: add env vars for social login (#7257)
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
2025-03-18 13:43:46 +01:00
Prowler Bot 447bf832cd chore(regions_update): Changes in regions for AWS services (#7281)
Co-authored-by: MrCloudSec <38561120+MrCloudSec@users.noreply.github.com>
2025-03-18 17:35:44 +05:45
Pablo Lara 7c4571b55e feat(providers): add component to render a link to the documentation (#7282) 2025-03-18 12:05:38 +01:00
dependabot[bot] eb7c16aba5 chore(deps): bump azure-mgmt-storage from 21.2.1 to 22.1.1 (#7098)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Rubén De la Torre Vico <ruben@prowler.com>
2025-03-18 11:06:46 +01:00
Adrián Jesús Peña Rodríguez b09e83b171 chore: add api reference to download report section (#7243) 2025-03-18 14:54:13 +05:45
Hugo Pereira Brito bb149a30a7 fix(microsoft365): typo Microsoft365NotTenantIdButClientIdAndClienSecretError (#7244) 2025-03-17 21:16:47 +05:45
Pablo Lara d5be35af49 chore: Rename keyServer and extract to helper (#7256) 2025-03-17 21:11:27 +05:45
Pedro Martín f6aa56d92b fix(.env): remove spaces (#7255) 2025-03-17 20:48:55 +05:45
Pedro Martín 6a4df15c47 fix(prowler): change from prowler.py to prowler-cli.py (#7253) 2025-03-17 15:44:15 +01:00
Pablo Lara 72de5fdb1b chore: update git ignore file (#7254) 2025-03-17 14:53:58 +01:00
Pedro Martín a7f55d06af feat(jira): add basic auth method (#7233) 2025-03-17 14:31:35 +01:00
Pepe Fagoaga 97da78d4e7 fix(backport): Use container tagged version (#7252) 2025-03-17 18:19:43 +05:45
Pepe Fagoaga c4f6161c73 chore(security): Pin actions to the Full-Length Commit SHA (#7249) 2025-03-17 17:11:28 +05:45
Pablo Lara db7ffea24d chore: add env var for social login (#7251) 2025-03-17 10:23:01 +01:00
Prowler Bot 489b5abf82 chore(regions_update): Changes in regions for AWS services (#7237)
Co-authored-by: MrCloudSec <38561120+MrCloudSec@users.noreply.github.com>
2025-03-17 13:47:56 +05:45
Prowler Bot 3a55c2ee07 chore(regions_update): Changes in regions for AWS services (#7245)
Co-authored-by: MrCloudSec <38561120+MrCloudSec@users.noreply.github.com>
2025-03-17 12:34:44 +05:45
Pedro Martín 64d866271c fix(scan): add compliance info inside finding (#5649) 2025-03-17 12:18:00 +05:45
Pablo Lara 1ab2a80eab chore: improve UX when social login is not enabled (#7242) 2025-03-15 12:12:30 +01:00
Pablo Lara 89d4c521ba chore(social-login): disable social login buttons when env vars are not set (#7238) 2025-03-14 11:32:22 +01:00
Pablo Lara f2e19d377a chore(social-login): rename env.vars for social login (#7232) 2025-03-13 17:07:17 +01:00
Pablo Lara 2b7b887b87 chore: social auth is also in sign-up page (#7231) 2025-03-13 14:20:09 +01:00
Pablo Lara 44c70b5d01 chore: remove unused regions (#7229) 2025-03-13 13:57:16 +01:00
Pablo Lara 7514484c42 chore: change wording for launching a single scan (#7226) 2025-03-13 13:48:01 +01:00
Adrián Jesús Peña Rodríguez 9594c4c99f fix: add a handled response in case local files are missing (#7183) 2025-03-13 13:47:00 +01:00
Pablo Lara 56445c9753 chore: update changelog (#7223) 2025-03-13 13:39:26 +01:00
Adrián Jesús Peña Rodríguez 07419fd5e1 fix(exports): change the way to remove the local export files after s3 upload (#7172) 2025-03-13 13:37:17 +01:00
Pablo Lara 2e4dd12b41 feat(social-login): social login with Google is working (#7218)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-03-13 12:52:30 +01:00
Víctor Fernández Poyatos fed2046c49 fix(migrations): add through parameter to integration.providers (#7222) 2025-03-13 12:47:34 +01:00
Pepe Fagoaga db79db4786 fix(pyproject): Rename prowler.py (#7217) 2025-03-13 16:53:38 +05:45
1515 changed files with 141793 additions and 19370 deletions
+30 -4
@@ -4,13 +4,16 @@
#### Prowler UI Configuration ####
PROWLER_UI_VERSION="stable"
SITE_URL=http://localhost:3000
AUTH_URL=http://localhost:3000
API_BASE_URL=http://prowler-api:8080/api/v1
NEXT_PUBLIC_API_DOCS_URL=http://prowler-api:8080/api/v1/docs
AUTH_TRUST_HOST=true
UI_PORT=3000
# openssl rand -base64 32
AUTH_SECRET="N/c6mnaS5+SWq81+819OrzQZlmx1Vxtp/orjttJSmw8="
# Google Tag Manager ID
NEXT_PUBLIC_GOOGLE_TAG_MANAGER_ID=""
#### Prowler API Configuration ####
PROWLER_API_VERSION="stable"
@@ -24,6 +27,10 @@ POSTGRES_USER=prowler
POSTGRES_PASSWORD=postgres
POSTGRES_DB=prowler_db
# Celery-Prowler task settings
TASK_RETRY_DELAY_SECONDS=0.1
TASK_RETRY_ATTEMPTS=5
# Valkey settings
# If running Valkey and celery on host, use localhost, else use 'valkey'
VALKEY_HOST=valkey
@@ -33,10 +40,10 @@ VALKEY_DB=0
# API scan settings
# The path to the directory where scan output should be stored
DJANGO_TMP_OUTPUT_DIRECTORY = "/tmp/prowler_api_output"
DJANGO_TMP_OUTPUT_DIRECTORY="/tmp/prowler_api_output"
# The maximum number of findings to process in a single batch
DJANGO_FINDINGS_BATCH_SIZE = 1000
DJANGO_FINDINGS_BATCH_SIZE=1000
# The AWS access key to be used when uploading scan output to an S3 bucket
# If left empty, default AWS credentials resolution behavior will be used
@@ -123,4 +130,23 @@ SENTRY_ENVIRONMENT=local
SENTRY_RELEASE=local
#### Prowler release version ####
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.5.0
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.6.0
# Social login credentials
SOCIAL_GOOGLE_OAUTH_CALLBACK_URL="${AUTH_URL}/api/auth/callback/google"
SOCIAL_GOOGLE_OAUTH_CLIENT_ID=""
SOCIAL_GOOGLE_OAUTH_CLIENT_SECRET=""
SOCIAL_GITHUB_OAUTH_CALLBACK_URL="${AUTH_URL}/api/auth/callback/github"
SOCIAL_GITHUB_OAUTH_CLIENT_ID=""
SOCIAL_GITHUB_OAUTH_CLIENT_SECRET=""
# Single Sign-On (SSO)
SAML_PUBLIC_CERT=""
SAML_PRIVATE_KEY=""
# Lighthouse tracing
LANGSMITH_TRACING=false
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY=""
LANGCHAIN_PROJECT=""
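For operators updating an existing deployment, here is a minimal sketch of how to verify that the keys introduced in this release are present in a local .env (the key names come straight from the diff above; the check script itself is illustrative):

#!/usr/bin/env bash
# Sketch: warn about keys added in this release that are missing from .env
required_keys=(
  NEXT_PUBLIC_GOOGLE_TAG_MANAGER_ID
  TASK_RETRY_DELAY_SECONDS
  TASK_RETRY_ATTEMPTS
  SAML_PUBLIC_CERT
  SAML_PRIVATE_KEY
  LANGSMITH_TRACING
)
for key in "${required_keys[@]}"; do
  grep -q "^${key}=" .env || echo "missing: ${key}"
done
# A fresh AUTH_SECRET can be generated as the inline comment suggests:
openssl rand -base64 32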
+80 -76
@@ -9,108 +9,112 @@ updates:
- package-ecosystem: "pip"
directory: "/"
schedule:
interval: "daily"
open-pull-requests-limit: 10
interval: "monthly"
open-pull-requests-limit: 25
target-branch: master
labels:
- "dependencies"
- "pip"
- package-ecosystem: "pip"
directory: "/api"
schedule:
interval: "daily"
open-pull-requests-limit: 10
target-branch: master
labels:
- "dependencies"
- "pip"
- "component/api"
# Dependabot Updates are temporarily disabled - 2025/03/19
# - package-ecosystem: "pip"
# directory: "/api"
# schedule:
# interval: "daily"
# open-pull-requests-limit: 10
# target-branch: master
# labels:
# - "dependencies"
# - "pip"
# - "component/api"
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"
open-pull-requests-limit: 10
interval: "monthly"
open-pull-requests-limit: 25
target-branch: master
labels:
- "dependencies"
- "github_actions"
- package-ecosystem: "npm"
directory: "/ui"
schedule:
interval: "daily"
open-pull-requests-limit: 10
target-branch: master
labels:
- "dependencies"
- "npm"
- "component/ui"
# Dependabot Updates are temporarily disabled - 2025/03/19
# - package-ecosystem: "npm"
# directory: "/ui"
# schedule:
# interval: "daily"
# open-pull-requests-limit: 10
# target-branch: master
# labels:
# - "dependencies"
# - "npm"
# - "component/ui"
- package-ecosystem: "docker"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 10
interval: "monthly"
open-pull-requests-limit: 25
target-branch: master
labels:
- "dependencies"
- "docker"
# Dependabot Updates are temporarily disabled - 2025/04/15
# v4.6
- package-ecosystem: "pip"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 10
target-branch: v4.6
labels:
- "dependencies"
- "pip"
- "v4"
# - package-ecosystem: "pip"
# directory: "/"
# schedule:
# interval: "weekly"
# open-pull-requests-limit: 10
# target-branch: v4.6
# labels:
# - "dependencies"
# - "pip"
# - "v4"
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 10
target-branch: v4.6
labels:
- "dependencies"
- "github_actions"
- "v4"
# - package-ecosystem: "github-actions"
# directory: "/"
# schedule:
# interval: "weekly"
# open-pull-requests-limit: 10
# target-branch: v4.6
# labels:
# - "dependencies"
# - "github_actions"
# - "v4"
- package-ecosystem: "docker"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 10
target-branch: v4.6
labels:
- "dependencies"
- "docker"
- "v4"
# - package-ecosystem: "docker"
# directory: "/"
# schedule:
# interval: "weekly"
# open-pull-requests-limit: 10
# target-branch: v4.6
# labels:
# - "dependencies"
# - "docker"
# - "v4"
# Dependabot Updates are temporarily disabled - 2025/03/19
# v3
- package-ecosystem: "pip"
directory: "/"
schedule:
interval: "monthly"
open-pull-requests-limit: 10
target-branch: v3
labels:
- "dependencies"
- "pip"
- "v3"
# - package-ecosystem: "pip"
# directory: "/"
# schedule:
# interval: "monthly"
# open-pull-requests-limit: 10
# target-branch: v3
# labels:
# - "dependencies"
# - "pip"
# - "v3"
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "monthly"
open-pull-requests-limit: 10
target-branch: v3
labels:
- "dependencies"
- "github_actions"
- "v3"
# - package-ecosystem: "github-actions"
# directory: "/"
# schedule:
# interval: "monthly"
# open-pull-requests-limit: 10
# target-branch: v3
# labels:
# - "dependencies"
# - "github_actions"
# - "v3"
+5
@@ -27,6 +27,11 @@ provider/github:
- any-glob-to-any-file: "prowler/providers/github/**"
- any-glob-to-any-file: "tests/providers/github/**"
provider/iac:
- changed-files:
- any-glob-to-any-file: "prowler/providers/iac/**"
- any-glob-to-any-file: "tests/providers/iac/**"
github_actions:
- changed-files:
- any-glob-to-any-file: ".github/workflows/*"
+1
@@ -16,6 +16,7 @@ Please include a summary of the change and which issue is fixed. List any depend
- [ ] Review if code is being documented following this specification https://github.com/google/styleguide/blob/gh-pages/pyguide.md#38-comments-and-docstrings
- [ ] Review if backport is needed.
- [ ] Review if it is needed to change the [Readme.md](https://github.com/prowler-cloud/prowler/blob/master/README.md)
- [ ] Ensure new entries are added to [CHANGELOG.md](https://github.com/prowler-cloud/prowler/blob/master/prowler/CHANGELOG.md), if applicable.
#### API
- [ ] Verify if API specs need to be regenerated.
@@ -61,7 +61,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Set short git commit SHA
id: vars
@@ -70,18 +70,18 @@ jobs:
echo "SHORT_SHA=${shortSha}" >> $GITHUB_ENV
- name: Login to DockerHub
uses: docker/login-action@v3
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Build and push container image (latest)
# Comment the following line for testing
if: github.event_name == 'push'
uses: docker/build-push-action@v6
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
# Set push: false for testing
@@ -94,7 +94,7 @@ jobs:
- name: Build and push container image (release)
if: github.event_name == 'release'
uses: docker/build-push-action@v6
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
push: true
@@ -106,7 +106,7 @@ jobs:
- name: Trigger deployment
if: github.event_name == 'push'
uses: peter-evans/repository-dispatch@v3
uses: peter-evans/repository-dispatch@ff45666b9427631e3450c54a1bcbee4d9ff4d7c0 # v3.0.0
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
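Every action reference in these hunks moves from a floating tag to a full-length commit SHA, keeping the tag as a trailing comment. A sketch of how such a pin can be derived for any tag (git ls-remote is standard; actions/checkout v4.2.2 is the example taken from the diff):

# Resolve the commit behind a release tag before pinning it
git ls-remote --tags https://github.com/actions/checkout v4.2.2
# Annotated tags print two lines; the ^{} entry is the commit SHA to pin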
+3 -3
@@ -44,16 +44,16 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
uses: github/codeql-action/init@ff0a06e83cb2de871e5a09832bc6a81e7276941f # v3.28.18
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/api-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
uses: github/codeql-action/analyze@ff0a06e83cb2de871e5a09832bc6a81e7276941f # v3.28.18
with:
category: "/language:${{matrix.language}}"
+49 -14
@@ -28,6 +28,10 @@ env:
VALKEY_DB: 0
API_WORKING_DIR: ./api
IMAGE_NAME: prowler-api
IGNORE_FILES: |
api/docs/**
api/README.md
api/CHANGELOG.md
jobs:
test:
@@ -71,19 +75,22 @@ jobs:
--health-retries 5
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Test if changes are in non-ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@v45
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: api/**
files_ignore: |
api/.github/**
api/docs/**
api/permissions/**
api/README.md
api/mkdocs.yml
files_ignore: ${{ env.IGNORE_FILES }}
- name: Replace @master with current branch in pyproject.toml
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
BRANCH_NAME="${GITHUB_HEAD_REF:-${GITHUB_REF_NAME}}"
echo "Using branch: $BRANCH_NAME"
sed -i "s|@master|@$BRANCH_NAME|g" pyproject.toml
- name: Install poetry
working-directory: ./api
@@ -92,13 +99,25 @@ jobs:
python -m pip install --upgrade pip
pipx install poetry==2.1.1
- name: Update poetry.lock after the branch name change
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry lock
- name: Set up Python ${{ matrix.python-version }}
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: actions/setup-python@v5
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: ${{ matrix.python-version }}
cache: "poetry"
- name: Install system dependencies for xmlsec
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
sudo apt-get update
sudo apt-get install -y libxml2-dev libxmlsec1-dev libxmlsec1-openssl pkg-config
- name: Install dependencies
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
@@ -117,6 +136,12 @@ jobs:
run: |
poetry check --lock
- name: Prevents known compatibility error between lxml and libxml2/libxmlsec versions - https://github.com/xmlsec/python-xmlsec/issues/320
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run pip install --force-reinstall --no-binary lxml lxml
- name: Lint with ruff
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
@@ -144,8 +169,9 @@ jobs:
- name: Safety
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
# 76352 and 76353 come from SDK, but they cannot upgrade it yet. It does not affect API
run: |
poetry run safety check --ignore 70612,66963
poetry run safety check --ignore 70612,66963,74429,76352,76353
- name: Vulture
working-directory: ./api
@@ -167,7 +193,7 @@ jobs:
- name: Upload coverage reports to Codecov
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: codecov/codecov-action@v5
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
with:
@@ -175,11 +201,20 @@ jobs:
test-container-build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Test if changes are in non-ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: api/**
files_ignore: ${{ env.IGNORE_FILES }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Build Container
uses: docker/build-push-action@v6
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.API_WORKING_DIR }}
push: false
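The "Replace @master with current branch" step exists because the API pins the Prowler SDK as a git dependency on @master, and CI needs that reference to follow the branch under test. A local sketch of the same substitution (the sed and poetry commands are copied from the workflow; running them outside CI is illustrative):

cd api
BRANCH_NAME="$(git rev-parse --abbrev-ref HEAD)"
echo "Using branch: $BRANCH_NAME"
sed -i "s|@master|@$BRANCH_NAME|g" pyproject.toml
# The lock file must then be regenerated, as the workflow does
poetry lock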
+2 -2
@@ -23,7 +23,7 @@ jobs:
steps:
- name: Check labels
id: preview_label_check
uses: docker://agilepathway/pull-request-label-checker:v1.6.55
uses: agilepathway/label-checker@c3d16ad512e7cea5961df85ff2486bb774caf3c5 # v1.6.65
with:
allow_failure: true
prefix_mode: true
@@ -33,7 +33,7 @@ jobs:
- name: Backport Action
if: steps.preview_label_check.outputs.label_check == 'success'
uses: sorenlouv/backport-github-action@v9.5.1
uses: sorenlouv/backport-github-action@ad888e978060bc1b2798690dd9d03c4036560947 # v9.5.1
with:
github_token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
auto_backport_label_prefix: ${{ env.BACKPORT_LABEL_PREFIX }}
@@ -17,7 +17,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Leave PR comment with the Prowler Documentation URI
uses: peter-evans/create-or-update-comment@v4
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
issue-number: ${{ env.PR_NUMBER }}
body: |
+1 -1
@@ -18,6 +18,6 @@ jobs:
steps:
- name: conventional-commit-check
id: conventional-commit-check
uses: agenthunt/conventional-commit-checker-action@v2.0.0
uses: agenthunt/conventional-commit-checker-action@9e552d650d0e205553ec7792d447929fc78e012b # v2.0.0
with:
pr-title-regex: '^([^\s(]+)(?:\(([^)]+)\))?: (.+)'
@@ -0,0 +1,67 @@
name: Create Backport Label
on:
release:
types: [published]
jobs:
create_label:
runs-on: ubuntu-latest
permissions:
contents: write
issues: write
steps:
- name: Create backport label
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
RELEASE_TAG: ${{ github.event.release.tag_name }}
OWNER_REPO: ${{ github.repository }}
run: |
VERSION_ONLY=${RELEASE_TAG#v} # Remove 'v' prefix if present (e.g., v3.2.0 -> 3.2.0)
# Check if it's a minor version (X.Y.0)
if [[ "$VERSION_ONLY" =~ ^[0-9]+\.[0-9]+\.0$ ]]; then
echo "Release ${RELEASE_TAG} (version ${VERSION_ONLY}) is a minor version. Proceeding to create backport label."
TWO_DIGIT_VERSION=${VERSION_ONLY%.0} # Extract X.Y from X.Y.0 (e.g., 5.6 from 5.6.0)
FINAL_LABEL_NAME="backport-to-v${TWO_DIGIT_VERSION}"
FINAL_DESCRIPTION="Backport PR to the v${TWO_DIGIT_VERSION} branch"
echo "Effective label name will be: ${FINAL_LABEL_NAME}"
echo "Effective description will be: ${FINAL_DESCRIPTION}"
# Check if the label already exists
STATUS_CODE=$(curl -s -o /dev/null -w "%{http_code}" -H "Authorization: token ${GITHUB_TOKEN}" "https://api.github.com/repos/${OWNER_REPO}/labels/${FINAL_LABEL_NAME}")
if [ "${STATUS_CODE}" -eq 200 ]; then
echo "Label '${FINAL_LABEL_NAME}' already exists."
elif [ "${STATUS_CODE}" -eq 404 ]; then
echo "Label '${FINAL_LABEL_NAME}' does not exist. Creating it..."
# Prepare JSON data payload
JSON_DATA=$(printf '{"name":"%s","description":"%s","color":"B60205"}' "${FINAL_LABEL_NAME}" "${FINAL_DESCRIPTION}")
CREATE_STATUS_CODE=$(curl -s -o /tmp/curl_create_response.json -w "%{http_code}" -X POST \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token ${GITHUB_TOKEN}" \
--data "${JSON_DATA}" \
"https://api.github.com/repos/${OWNER_REPO}/labels")
CREATE_RESPONSE_BODY=$(cat /tmp/curl_create_response.json)
rm -f /tmp/curl_create_response.json
if [ "$CREATE_STATUS_CODE" -eq 201 ]; then
echo "Label '${FINAL_LABEL_NAME}' created successfully."
else
echo "Error creating label '${FINAL_LABEL_NAME}'. Status: $CREATE_STATUS_CODE"
echo "Response: $CREATE_RESPONSE_BODY"
exit 1
fi
else
echo "Error checking for label '${FINAL_LABEL_NAME}'. HTTP Status: ${STATUS_CODE}"
exit 1
fi
else
echo "Release ${RELEASE_TAG} (version ${VERSION_ONLY}) is not a minor version. Skipping backport label creation."
exit 0
fi
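The label is only created for minor releases, i.e. tags whose version matches X.Y.0. A small sketch of how the workflow's regex classifies tags (same pattern and label naming as the script above; the sample tags are illustrative):

for tag in v5.6.0 v5.6.1 v5.7.0; do
  version_only=${tag#v}
  if [[ "$version_only" =~ ^[0-9]+\.[0-9]+\.0$ ]]; then
    echo "$tag -> creates backport-to-v${version_only%.0}"
  else
    echo "$tag -> skipped, not a minor release"
  fi
done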
+2 -2
@@ -7,11 +7,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: TruffleHog OSS
uses: trufflesecurity/trufflehog@v3.88.16
uses: trufflesecurity/trufflehog@90694bf9af66e7536abc5824e7a87246dbf933cb # v3.88.35
with:
path: ./
base: ${{ github.event.repository.default_branch }}
+1 -1
View File
@@ -14,4 +14,4 @@ jobs:
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v5
- uses: actions/labeler@8558fd74291d67161a8a78ce36a881fa63b766a9 # v5.0.0
@@ -0,0 +1,83 @@
name: Check Changelog
on:
pull_request:
types: [opened, synchronize, reopened, labeled, unlabeled]
jobs:
check-changelog:
if: contains(github.event.pull_request.labels.*.name, 'no-changelog') == false
runs-on: ubuntu-latest
permissions:
pull-requests: write
env:
MONITORED_FOLDERS: "api ui prowler"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Get list of changed files
id: changed_files
run: |
git fetch origin ${{ github.base_ref }}
git diff --name-only origin/${{ github.base_ref }}...HEAD > changed_files.txt
cat changed_files.txt
- name: Check for folder changes and changelog presence
id: check_folders
run: |
missing_changelogs=""
for folder in $MONITORED_FOLDERS; do
if grep -q "^${folder}/" changed_files.txt; then
echo "Detected changes in ${folder}/"
if ! grep -q "^${folder}/CHANGELOG.md$" changed_files.txt; then
echo "No changelog update found for ${folder}/"
missing_changelogs="${missing_changelogs}- \`${folder}\`\n"
fi
fi
done
echo "missing_changelogs<<EOF" >> $GITHUB_OUTPUT
echo -e "${missing_changelogs}" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
- name: Find existing changelog comment
id: find_comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e #v3.1.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-includes: '<!-- changelog-check -->'
- name: Comment on PR if changelog is missing
if: steps.check_folders.outputs.missing_changelogs != ''
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-id: ${{ steps.find_comment.outputs.comment-id }}
body: |
<!-- changelog-check -->
⚠️ **Changes detected in the following folders without a corresponding update to the `CHANGELOG.md`:**
${{ steps.check_folders.outputs.missing_changelogs }}
Please add an entry to the corresponding `CHANGELOG.md` file to maintain a clear history of changes.
- name: Comment on PR if all changelogs are present
if: steps.check_folders.outputs.missing_changelogs == ''
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-id: ${{ steps.find_comment.outputs.comment-id }}
body: |
<!-- changelog-check -->
✅ All necessary `CHANGELOG.md` files have been updated. Great job! 🎉
- name: Fail if changelog is missing
if: steps.check_folders.outputs.missing_changelogs != ''
run: |
echo "ERROR: Missing changelog updates in some folders."
exit 1
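The same gate can be run locally before opening a pull request; here is a sketch mirroring the job's logic (the folder list and grep patterns are taken from the workflow; master is assumed as the base branch):

#!/usr/bin/env bash
# Local preview of the Check Changelog job
git fetch origin master
git diff --name-only origin/master...HEAD > changed_files.txt
for folder in api ui prowler; do
  if grep -q "^${folder}/" changed_files.txt; then
    grep -q "^${folder}/CHANGELOG.md$" changed_files.txt \
      || echo "missing changelog update for ${folder}/"
  fi
done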
+37
@@ -0,0 +1,37 @@
name: Prowler - Merged Pull Request
on:
pull_request_target:
branches: ['master']
types: ['closed']
jobs:
trigger-cloud-pull-request:
name: Trigger Cloud Pull Request
if: github.event.pull_request.merged == true && github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ github.event.pull_request.merge_commit_sha }}
- name: Set short git commit SHA
id: vars
run: |
shortSha=$(git rev-parse --short ${{ github.event.pull_request.merge_commit_sha }})
echo "SHORT_SHA=${shortSha}" >> $GITHUB_ENV
- name: Trigger pull request
uses: peter-evans/repository-dispatch@ff45666b9427631e3450c54a1bcbee4d9ff4d7c0 # v3.0.0
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
event-type: prowler-pull-request-merged
client-payload: '{
"PROWLER_COMMIT_SHA": "${{ github.event.pull_request.merge_commit_sha }}",
"PROWLER_COMMIT_SHORT_SHA": "${{ env.SHORT_SHA }}",
"PROWLER_PR_TITLE": "${{ github.event.pull_request.title }}",
"PROWLER_PR_LABELS": ${{ toJson(github.event.pull_request.labels.*.name) }},
"PROWLER_PR_BODY": ${{ toJson(github.event.pull_request.body) }},
"PROWLER_PR_URL":${{ toJson(github.event.pull_request.html_url) }}
}'
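Under the hood, repository-dispatch posts to the GitHub dispatches endpoint; a minimal sketch of the equivalent raw call (the event type and payload keys come from the workflow above, while OWNER/CLOUD_REPO and the token are placeholders for the secret-backed values):

curl -s -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer ${PROWLER_BOT_ACCESS_TOKEN}" \
  https://api.github.com/repos/OWNER/CLOUD_REPO/dispatches \
  -d '{"event_type": "prowler-pull-request-merged",
       "client_payload": {"PROWLER_COMMIT_SHA": "<merge commit sha>"}}'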
@@ -59,16 +59,16 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Setup Python
uses: actions/setup-python@v5
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Install Poetry
run: |
pipx install poetry==1.8.5
pipx install poetry==2.*
pipx inject poetry poetry-bumpversion
- name: Get Prowler version
@@ -108,13 +108,13 @@ jobs:
esac
- name: Login to DockerHub
uses: docker/login-action@v3
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to Public ECR
uses: docker/login-action@v3
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: public.ecr.aws
username: ${{ secrets.PUBLIC_ECR_AWS_ACCESS_KEY_ID }}
@@ -123,11 +123,11 @@ jobs:
AWS_REGION: ${{ env.AWS_REGION }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Build and push container image (latest)
if: github.event_name == 'push'
uses: docker/build-push-action@v6
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
push: true
tags: |
@@ -140,7 +140,7 @@ jobs:
- name: Build and push container image (release)
if: github.event_name == 'release'
uses: docker/build-push-action@v6
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
# Use local context to get changes
# https://github.com/docker/build-push-action#path-context
+145
@@ -0,0 +1,145 @@
name: SDK - Bump Version
on:
release:
types: [published]
env:
PROWLER_VERSION: ${{ github.event.release.tag_name }}
BASE_BRANCH: master
jobs:
bump-version:
name: Bump Version
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Get Prowler version
shell: bash
run: |
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
MAJOR_VERSION=${BASH_REMATCH[1]}
MINOR_VERSION=${BASH_REMATCH[2]}
FIX_VERSION=${BASH_REMATCH[3]}
# Export version components to GitHub environment
echo "MAJOR_VERSION=${MAJOR_VERSION}" >> "${GITHUB_ENV}"
echo "MINOR_VERSION=${MINOR_VERSION}" >> "${GITHUB_ENV}"
echo "FIX_VERSION=${FIX_VERSION}" >> "${GITHUB_ENV}"
if (( MAJOR_VERSION == 5 )); then
if (( FIX_VERSION == 0 )); then
echo "Minor Release: $PROWLER_VERSION"
# Set up next minor version for master
BUMP_VERSION_TO=${MAJOR_VERSION}.$((MINOR_VERSION + 1)).${FIX_VERSION}
echo "BUMP_VERSION_TO=${BUMP_VERSION_TO}" >> "${GITHUB_ENV}"
TARGET_BRANCH=${BASE_BRANCH}
echo "TARGET_BRANCH=${TARGET_BRANCH}" >> "${GITHUB_ENV}"
# Set up patch version for version branch
PATCH_VERSION_TO=${MAJOR_VERSION}.${MINOR_VERSION}.1
echo "PATCH_VERSION_TO=${PATCH_VERSION_TO}" >> "${GITHUB_ENV}"
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
echo "VERSION_BRANCH=${VERSION_BRANCH}" >> "${GITHUB_ENV}"
echo "Bumping to next minor version: ${BUMP_VERSION_TO} in branch ${TARGET_BRANCH}"
echo "Bumping to next patch version: ${PATCH_VERSION_TO} in branch ${VERSION_BRANCH}"
else
echo "Patch Release: $PROWLER_VERSION"
BUMP_VERSION_TO=${MAJOR_VERSION}.${MINOR_VERSION}.$((FIX_VERSION + 1))
echo "BUMP_VERSION_TO=${BUMP_VERSION_TO}" >> "${GITHUB_ENV}"
TARGET_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
echo "TARGET_BRANCH=${TARGET_BRANCH}" >> "${GITHUB_ENV}"
echo "Bumping to next patch version: ${BUMP_VERSION_TO} in branch ${TARGET_BRANCH}"
fi
else
echo "Releasing another Prowler major version, aborting..."
exit 1
fi
else
echo "Invalid version syntax: '$PROWLER_VERSION' (must be N.N.N)" >&2
exit 1
fi
- name: Bump versions in files
run: |
echo "Using PROWLER_VERSION=$PROWLER_VERSION"
echo "Using BUMP_VERSION_TO=$BUMP_VERSION_TO"
set -e
echo "Bumping version in pyproject.toml ..."
sed -i "s|version = \"${PROWLER_VERSION}\"|version = \"${BUMP_VERSION_TO}\"|" pyproject.toml
echo "Bumping version in prowler/config/config.py ..."
sed -i "s|prowler_version = \"${PROWLER_VERSION}\"|prowler_version = \"${BUMP_VERSION_TO}\"|" prowler/config/config.py
echo "Bumping version in .env ..."
sed -i "s|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${PROWLER_VERSION}|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${BUMP_VERSION_TO}|" .env
git --no-pager diff
- name: Create Pull Request
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.TARGET_BRANCH }}
commit-message: "chore(release): Bump version to v${{ env.BUMP_VERSION_TO }}"
branch: "version-bump-to-v${{ env.BUMP_VERSION_TO }}"
title: "chore(release): Bump version to v${{ env.BUMP_VERSION_TO }}"
body: |
### Description
Bump Prowler version to v${{ env.BUMP_VERSION_TO }}
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
- name: Handle patch version for minor release
if: env.FIX_VERSION == '0'
run: |
echo "Using PROWLER_VERSION=$PROWLER_VERSION"
echo "Using PATCH_VERSION_TO=$PATCH_VERSION_TO"
set -e
echo "Bumping version in pyproject.toml ..."
sed -i "s|version = \"${PROWLER_VERSION}\"|version = \"${PATCH_VERSION_TO}\"|" pyproject.toml
echo "Bumping version in prowler/config/config.py ..."
sed -i "s|prowler_version = \"${PROWLER_VERSION}\"|prowler_version = \"${PATCH_VERSION_TO}\"|" prowler/config/config.py
echo "Bumping version in .env ..."
sed -i "s|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${PROWLER_VERSION}|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${PATCH_VERSION_TO}|" .env
git --no-pager diff
- name: Create Pull Request for patch version
if: env.FIX_VERSION == '0'
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: "chore(release): Bump version to v${{ env.PATCH_VERSION_TO }}"
branch: "version-bump-to-v${{ env.PATCH_VERSION_TO }}"
title: "chore(release): Bump version to v${{ env.PATCH_VERSION_TO }}"
body: |
### Description
Bump Prowler version to v${{ env.PATCH_VERSION_TO }}
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
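The next version is derived from the release tag with bash arithmetic on the regex capture groups; a sketch of that derivation for both release kinds (same regex and arithmetic as the job; the sample tag is illustrative):

PROWLER_VERSION=5.6.0   # try 5.6.1 to exercise the patch path
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
  MAJOR=${BASH_REMATCH[1]}; MINOR=${BASH_REMATCH[2]}; FIX=${BASH_REMATCH[3]}
  if (( FIX == 0 )); then
    echo "minor release: master -> ${MAJOR}.$((MINOR + 1)).0, v${MAJOR}.${MINOR} -> ${MAJOR}.${MINOR}.1"
  else
    echo "patch release: v${MAJOR}.${MINOR} -> ${MAJOR}.${MINOR}.$((FIX + 1))"
  fi
fi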
+5 -3
@@ -21,6 +21,7 @@ on:
paths-ignore:
- 'ui/**'
- 'api/**'
- '.github/**'
pull_request:
branches:
- "master"
@@ -30,6 +31,7 @@ on:
paths-ignore:
- 'ui/**'
- 'api/**'
- '.github/**'
schedule:
- cron: '00 12 * * *'
@@ -50,16 +52,16 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
uses: github/codeql-action/init@ff0a06e83cb2de871e5a09832bc6a81e7276941f # v3.28.18
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/sdk-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
uses: github/codeql-action/analyze@ff0a06e83cb2de871e5a09832bc6a81e7276941f # v3.28.18
with:
category: "/language:${{matrix.language}}"
+135 -6
@@ -21,11 +21,11 @@ jobs:
python-version: ["3.9", "3.10", "3.11", "3.12"]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Test if changes are in non-ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@v45
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: ./**
files_ignore: |
@@ -34,6 +34,7 @@ jobs:
permissions/**
api/**
ui/**
prowler/CHANGELOG.md
README.md
mkdocs.yml
.backportrc.json
@@ -50,7 +51,7 @@ jobs:
- name: Set up Python ${{ matrix.python-version }}
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: actions/setup-python@v5
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: ${{ matrix.python-version }}
cache: "poetry"
@@ -106,15 +107,143 @@ jobs:
run: |
/tmp/hadolint Dockerfile --ignore=DL3013
- name: Test with pytest
# Test AWS
- name: AWS - Check if any file has changed
id: aws-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/aws/**
./tests/providers/aws/**
.poetry.lock
- name: AWS - Test
if: steps.aws-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/aws --cov-report=xml:aws_coverage.xml tests/providers/aws
# Test Azure
- name: Azure - Check if any file has changed
id: azure-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/azure/**
./tests/providers/azure/**
.poetry.lock
- name: Azure - Test
if: steps.azure-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/azure --cov-report=xml:azure_coverage.xml tests/providers/azure
# Test GCP
- name: GCP - Check if any file has changed
id: gcp-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/gcp/**
./tests/providers/gcp/**
.poetry.lock
- name: GCP - Test
if: steps.gcp-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/gcp --cov-report=xml:gcp_coverage.xml tests/providers/gcp
# Test Kubernetes
- name: Kubernetes - Check if any file has changed
id: kubernetes-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/kubernetes/**
./tests/providers/kubernetes/**
.poetry.lock
- name: Kubernetes - Test
if: steps.kubernetes-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/kubernetes --cov-report=xml:kubernetes_coverage.xml tests/providers/kubernetes
# Test GitHub
- name: GitHub - Check if any file has changed
id: github-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/github/**
./tests/providers/github/**
.poetry.lock
- name: GitHub - Test
if: steps.github-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/github --cov-report=xml:github_coverage.xml tests/providers/github
# Test NHN
- name: NHN - Check if any file has changed
id: nhn-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/nhn/**
./tests/providers/nhn/**
.poetry.lock
- name: NHN - Test
if: steps.nhn-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/nhn --cov-report=xml:nhn_coverage.xml tests/providers/nhn
# Test M365
- name: M365 - Check if any file has changed
id: m365-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/m365/**
./tests/providers/m365/**
.poetry.lock
- name: M365 - Test
if: steps.m365-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/m365 --cov-report=xml:m365_coverage.xml tests/providers/m365
# Test IaC
- name: IaC - Check if any file has changed
id: iac-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/iac/**
./tests/providers/iac/**
.poetry.lock
- name: IaC - Test
if: steps.iac-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/iac --cov-report=xml:iac_coverage.xml tests/providers/iac
# Common Tests
- name: Lib - Test
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler --cov-report=xml tests
poetry run pytest -n auto --cov=./prowler/lib --cov-report=xml:lib_coverage.xml tests/lib
- name: Config - Test
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/config --cov-report=xml:config_coverage.xml tests/config
# Codecov
- name: Upload coverage reports to Codecov
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: codecov/codecov-action@v5
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
with:
flags: prowler
files: ./aws_coverage.xml,./azure_coverage.xml,./gcp_coverage.xml,./kubernetes_coverage.xml,./github_coverage.xml,./nhn_coverage.xml,./m365_coverage.xml,./lib_coverage.xml,./config_coverage.xml
+5 -5
@@ -7,7 +7,7 @@ on:
env:
RELEASE_TAG: ${{ github.event.release.tag_name }}
PYTHON_VERSION: 3.11
CACHE: "poetry"
# CACHE: "poetry"
jobs:
repository-check:
@@ -64,17 +64,17 @@ jobs:
;;
esac
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install dependencies
run: |
pipx install poetry==1.8.5
pipx install poetry==2.1.1
- name: Setup Python
uses: actions/setup-python@v5
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: ${{ env.CACHE }}
# cache: ${{ env.CACHE }}
- name: Build Prowler package
run: |
@@ -4,7 +4,7 @@ name: SDK - Refresh AWS services' regions
on:
schedule:
- cron: "0 9 * * *" #runs at 09:00 UTC everyday
- cron: "0 9 * * 1" # runs at 09:00 UTC every Monday
env:
GITHUB_BRANCH: "master"
@@ -23,12 +23,12 @@ jobs:
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ env.GITHUB_BRANCH }}
- name: setup python
uses: actions/setup-python@v5
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: 3.9 # install the required Python version
@@ -38,7 +38,7 @@ jobs:
pip install boto3
- name: Configure AWS Credentials -- DEV
uses: aws-actions/configure-aws-credentials@v4
uses: aws-actions/configure-aws-credentials@b47578312673ae6fa5b5096b330d9fbac3d116df # v4.2.1
with:
aws-region: ${{ env.AWS_REGION_DEV }}
role-to-assume: ${{ secrets.DEV_IAM_ROLE_ARN }}
@@ -50,12 +50,13 @@ jobs:
# Create pull request
- name: Create Pull Request
uses: peter-evans/create-pull-request@v7
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
commit-message: "feat(regions_update): Update regions for AWS services"
branch: "aws-services-regions-updated-${{ github.sha }}"
labels: "status/waiting-for-revision, severity/low, provider/aws, backport-to-v3"
labels: "status/waiting-for-revision, severity/low, provider/aws"
title: "chore(regions_update): Changes in regions for AWS services"
body: |
### Description
@@ -61,7 +61,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Set short git commit SHA
id: vars
@@ -70,18 +70,18 @@ jobs:
echo "SHORT_SHA=${shortSha}" >> $GITHUB_ENV
- name: Login to DockerHub
uses: docker/login-action@v3
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Build and push container image (latest)
# Comment the following line for testing
if: github.event_name == 'push'
uses: docker/build-push-action@v6
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
build-args: |
@@ -96,7 +96,7 @@ jobs:
- name: Build and push container image (release)
if: github.event_name == 'release'
uses: docker/build-push-action@v6
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
build-args: |
@@ -110,7 +110,7 @@ jobs:
- name: Trigger deployment
if: github.event_name == 'push'
uses: peter-evans/repository-dispatch@v3
uses: peter-evans/repository-dispatch@ff45666b9427631e3450c54a1bcbee4d9ff4d7c0 # v3.0.0
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
+3 -3
@@ -44,16 +44,16 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
uses: github/codeql-action/init@ff0a06e83cb2de871e5a09832bc6a81e7276941f # v3.28.18
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/ui-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
uses: github/codeql-action/analyze@ff0a06e83cb2de871e5a09832bc6a81e7276941f # v3.28.18
with:
category: "/language:${{matrix.language}}"
+5 -5
@@ -27,11 +27,11 @@ jobs:
node-version: [20.x]
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Setup Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v4
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
with:
node-version: ${{ matrix.node-version }}
- name: Install dependencies
@@ -46,11 +46,11 @@ jobs:
test-container-build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Build Container
uses: docker/build-push-action@v6
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.UI_WORKING_DIR }}
# Always build using `prod` target
+4
@@ -42,6 +42,9 @@ junit-reports/
# VSCode files
.vscode/
# Cursor files
.cursorignore
# Terraform
.terraform*
*.tfstate
@@ -50,6 +53,7 @@ junit-reports/
# .env
ui/.env*
api/.env*
.env.local
# Coverage
.coverage*
+1 -1
@@ -115,7 +115,7 @@ repos:
- id: safety
name: safety
description: "Safety is a tool that checks your installed dependencies for known security vulnerabilities"
entry: bash -c 'safety check --ignore 70612,66963'
entry: bash -c 'safety check --ignore 70612,66963,74429,76352,76353'
language: system
- id: vulture
+32 -10
@@ -1,24 +1,43 @@
FROM python:3.12.9-alpine3.20
FROM python:3.12.10-slim-bookworm AS build
LABEL maintainer="https://github.com/prowler-cloud/prowler"
LABEL org.opencontainers.image.source="https://github.com/prowler-cloud/prowler"
# Update system dependencies and install essential tools
#hadolint ignore=DL3018
RUN apk --no-cache upgrade && apk --no-cache add curl git gcc python3-dev musl-dev linux-headers
ARG POWERSHELL_VERSION=7.5.0
# hadolint ignore=DL3008
RUN apt-get update && apt-get install -y --no-install-recommends wget libicu72 \
&& rm -rf /var/lib/apt/lists/*
# Install PowerShell
RUN ARCH=$(uname -m) && \
if [ "$ARCH" = "x86_64" ]; then \
wget --progress=dot:giga https://github.com/PowerShell/PowerShell/releases/download/v${POWERSHELL_VERSION}/powershell-${POWERSHELL_VERSION}-linux-x64.tar.gz -O /tmp/powershell.tar.gz ; \
elif [ "$ARCH" = "aarch64" ]; then \
wget --progress=dot:giga https://github.com/PowerShell/PowerShell/releases/download/v${POWERSHELL_VERSION}/powershell-${POWERSHELL_VERSION}-linux-arm64.tar.gz -O /tmp/powershell.tar.gz ; \
else \
echo "Unsupported architecture: $ARCH" && exit 1 ; \
fi && \
mkdir -p /opt/microsoft/powershell/7 && \
tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7 && \
chmod +x /opt/microsoft/powershell/7/pwsh && \
ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh && \
rm /tmp/powershell.tar.gz
# Add prowler user
RUN addgroup --gid 1000 prowler && \
adduser --uid 1000 --gid 1000 --disabled-password --gecos "" prowler
# Create non-root user
RUN mkdir -p /home/prowler && \
echo 'prowler:x:1000:1000:prowler:/home/prowler:' > /etc/passwd && \
echo 'prowler:x:1000:' > /etc/group && \
chown -R prowler:prowler /home/prowler
USER prowler
# Copy necessary files
WORKDIR /home/prowler
# Copy necessary files
COPY prowler/ /home/prowler/prowler/
COPY dashboard/ /home/prowler/dashboard/
COPY pyproject.toml /home/prowler
COPY README.md /home/prowler/
COPY prowler/providers/m365/lib/powershell/m365_powershell.py /home/prowler/prowler/providers/m365/lib/powershell/m365_powershell.py
# Install Python dependencies
ENV HOME='/home/prowler'
@@ -34,6 +53,9 @@ RUN pip install --no-cache-dir --upgrade pip && \
RUN poetry install --compile && \
rm -rf ~/.cache/pip
# Install PowerShell modules
RUN poetry run python prowler/providers/m365/lib/powershell/m365_powershell.py
# Remove deprecated dash dependencies
RUN pip uninstall dash-html-components -y && \
pip uninstall dash-core-components -y
+121 -53
@@ -3,7 +3,7 @@
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/master/docs/img/prowler-logo-white.png#gh-dark-mode-only" width="50%" height="50%">
</p>
<p align="center">
<b><i>Prowler Open Source</b> is as dynamic and adaptable as the environment they're meant to protect. Trusted by the leaders in security.
<b><i>Prowler</b> is the Open Cloud Security platform trusted by thousands to automate security and compliance in any cloud environment. With hundreds of ready-to-use checks and compliance frameworks, Prowler delivers real-time, customizable monitoring and seamless integrations, making cloud security simple, scalable, and cost-effective for organizations of any size.
</p>
<p align="center">
<b>Learn more at <a href="https://prowler.com">prowler.com</i></b>
@@ -43,15 +43,29 @@
# Description
**Prowler** is an Open Source security tool to perform AWS, Azure, Google Cloud and Kubernetes security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness, and also remediations! We have Prowler CLI (Command Line Interface) that we call Prowler Open Source and a service on top of it that we call <a href="https://prowler.com">Prowler Cloud</a>.
**Prowler** is an open-source security tool designed to assess and enforce security best practices across AWS, Azure, Google Cloud, and Kubernetes. It supports tasks such as security audits, incident response, continuous monitoring, system hardening, forensic readiness, and remediation processes.
Prowler includes hundreds of built-in controls to ensure compliance with standards and frameworks, including:
- **Industry Standards:** CIS, NIST 800, NIST CSF, and CISA
- **Regulatory Compliance and Governance:** RBI, FedRAMP, and PCI-DSS
- **Frameworks for Sensitive Data and Privacy:** GDPR, HIPAA, and FFIEC
- **Frameworks for Organizational Governance and Quality Control:** SOC2 and GXP
- **AWS-Specific Frameworks:** AWS Foundational Technical Review (FTR) and AWS Well-Architected Framework (Security Pillar)
- **National Security Standards:** ENS (Spanish National Security Scheme)
- **Custom Security Frameworks:** Tailored to your needs
## Prowler CLI and Prowler Cloud
Prowler offers a Command Line Interface (CLI), known as Prowler Open Source, and an additional service built on top of it, called <a href="https://prowler.com">Prowler Cloud</a>.
## Prowler App
Prowler App is a web application that allows you to run Prowler in your cloud provider accounts and visualize the results in a user-friendly interface.
Prowler App is a web-based application that simplifies running Prowler across your cloud provider accounts. It provides a user-friendly interface to visualize the results and streamline your security assessments.
![Prowler App](docs/img/overview.png)
>More details at [Prowler App Documentation](https://docs.prowler.com/projects/prowler-open-source/en/latest/#prowler-app-installation)
>For more details, refer to the [Prowler App Documentation](https://docs.prowler.com/projects/prowler-open-source/en/latest/#prowler-app-installation)
## Prowler CLI
@@ -60,6 +74,7 @@ prowler <provider>
```
![Prowler CLI Execution](docs/img/short-display.png)
## Prowler Dashboard
```console
@@ -67,25 +82,34 @@ prowler dashboard
```
![Prowler Dashboard](docs/img/dashboard.png)
It contains hundreds of controls covering CIS, NIST 800, NIST CSF, CISA, RBI, FedRAMP, PCI-DSS, GDPR, HIPAA, FFIEC, SOC2, GXP, AWS Well-Architected Framework Security Pillar, AWS Foundational Technical Review (FTR), ENS (Spanish National Security Scheme) and your custom security frameworks.
# Prowler at a Glance
| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) |
|---|---|---|---|---|
| AWS | 564 | 82 | 33 | 10 |
| GCP | 77 | 13 | 6 | 3 |
| Azure | 140 | 18 | 7 | 3 |
| Kubernetes | 83 | 7 | 4 | 7 |
| Microsoft365 | 5 | 2 | 1 | 0 |
| AWS | 567 | 82 | 36 | 10 |
| GCP | 79 | 13 | 10 | 3 |
| Azure | 142 | 18 | 10 | 3 |
| Kubernetes | 83 | 7 | 5 | 7 |
| GitHub | 16 | 2 | 1 | 0 |
| M365 | 69 | 7 | 3 | 2 |
| NHN (Unofficial) | 6 | 2 | 1 | 0 |
> You can list the checks, services, compliance frameworks and categories with `prowler <provider> --list-checks`, `prowler <provider> --list-services`, `prowler <provider> --list-compliance` and `prowler <provider> --list-categories`.
> [!Note]
> The numbers in the table are updated periodically.
> [!Tip]
> For the most accurate and up-to-date information about checks, services, frameworks, and categories, visit [**Prowler Hub**](https://hub.prowler.com).
> [!Note]
> Use the following commands to list Prowler's available checks, services, compliance frameworks, and categories: `prowler <provider> --list-checks`, `prowler <provider> --list-services`, `prowler <provider> --list-compliance` and `prowler <provider> --list-categories`.
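For instance, enumerating everything Prowler ships for one provider looks like this (a minimal sketch using the AWS provider; output depends on the installed version):
```console
prowler aws --list-checks
prowler aws --list-services
prowler aws --list-compliance
prowler aws --list-categories
```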
# 💻 Installation
## Prowler App
Prowler App can be installed in different ways, depending on your environment:
Prowler App offers flexible installation methods tailored to various environments:
> See how to use Prowler App in the [Prowler App Usage Guide](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/prowler-app/).
> For detailed instructions on using Prowler App, refer to the [Prowler App Usage Guide](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/prowler-app/).
### Docker Compose
@@ -101,8 +125,16 @@ curl -LO https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/mast
docker compose up -d
```
> Containers are built for `linux/amd64`. If your workstation's architecture is different, please set `DOCKER_DEFAULT_PLATFORM=linux/amd64` in your environment or use the `--platform linux/amd64` flag in the docker command.
> Enjoy Prowler App at http://localhost:3000 by signing up with your email and password.
> Containers are built for `linux/amd64`.
### Configuring Your Workstation for Prowler App
If your workstation's architecture is incompatible, you can resolve this by:
- **Setting the environment variable**: `DOCKER_DEFAULT_PLATFORM=linux/amd64`
- **Using the following flag in your Docker command**: `--platform linux/amd64`
> Once configured, access the Prowler App at http://localhost:3000. Sign up using your email and password to get started.
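For example, on a non-amd64 workstation the one-off form of the same command is:
```console
DOCKER_DEFAULT_PLATFORM=linux/amd64 docker compose up -d
```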
### From GitHub
@@ -128,12 +160,12 @@ python manage.py migrate --database admin
gunicorn -c config/guniconf.py config.wsgi:application
```
> [!IMPORTANT]
> Starting from Poetry v2.0.0, `poetry shell` has been deprecated in favor of `poetry env activate`.
> As of Poetry v2.0.0, the `poetry shell` command has been deprecated. Use `poetry env activate` instead for environment activation.
>
> If your poetry version is below 2.0.0 you must keep using `poetry shell` to activate your environment.
> In case you have any doubts, consult the Poetry environment activation guide: https://python-poetry.org/docs/managing-environments/#activating-the-environment
> If your Poetry version is below v2.0.0, continue using `poetry shell` to activate your environment.
> For further guidance, refer to the Poetry Environment Activation Guide: https://python-poetry.org/docs/managing-environments/#activating-the-environment.
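A minimal sketch of the activation step with Poetry v2.0.0 or later (the same invocation used in the From GitHub section below):
```console
eval $(poetry env activate)
```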
> Now, you can access the API documentation at http://localhost:8080/api/v1/docs.
> After completing the setup, access the API documentation at http://localhost:8080/api/v1/docs.
**Commands to run the API Worker**
@@ -171,29 +203,31 @@ npm run build
npm start
```
> Enjoy Prowler App at http://localhost:3000 by signing up with your email and password.
> Once configured, access the Prowler App at http://localhost:3000. Sign up using your email and password to get started.
## Prowler CLI
### Pip package
Prowler CLI is available as a project in [PyPI](https://pypi.org/project/prowler-cloud/), thus can be installed using pip with Python > 3.9.1, < 3.13:
Prowler CLI is available as a project in [PyPI](https://pypi.org/project/prowler-cloud/). Consequently, it can be installed using pip with Python >3.9.1, <3.13:
```console
pip install prowler
prowler -v
```
>More details at [https://docs.prowler.com](https://docs.prowler.com/projects/prowler-open-source/en/latest/#prowler-cli-installation)
>For further guidance, refer to [https://docs.prowler.com](https://docs.prowler.com/projects/prowler-open-source/en/latest/#prowler-cli-installation)
### Containers
The available versions of Prowler CLI are the following:
**Available Versions of Prowler CLI**
- `latest`: in sync with `master` branch (bear in mind that it is not a stable version)
- `v4-latest`: in sync with `v4` branch (bear in mind that it is not a stable version)
- `v3-latest`: in sync with `v3` branch (bear in mind that it is not a stable version)
- `<x.y.z>` (release): you can find the releases [here](https://github.com/prowler-cloud/prowler/releases), those are stable releases.
- `stable`: this tag always point to the latest release.
- `v4-stable`: this tag always point to the latest release for v4.
- `v3-stable`: this tag always point to the latest release for v3.
The following versions of Prowler CLI are available, depending on your requirements:
- `latest`: Synchronizes with the `master` branch. Note that this version is not stable.
- `v4-latest`: Synchronizes with the `v4` branch. Note that this version is not stable.
- `v3-latest`: Synchronizes with the `v3` branch. Note that this version is not stable.
- `<x.y.z>` (release): Stable releases corresponding to specific versions. You can find the complete list of releases [here](https://github.com/prowler-cloud/prowler/releases).
- `stable`: Always points to the latest release.
- `v4-stable`: Always points to the latest release for v4.
- `v3-stable`: Always points to the latest release for v3.
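As a sketch of how these tags are used, where `<registry>/<image>` is a placeholder for one of the Prowler CLI images referenced below, not a literal path:
```console
docker pull <registry>/<image>:stable      # latest stable release
docker pull <registry>/<image>:v4-latest   # tracks the v4 branch (not stable)
```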
The container images are available here:
- Prowler CLI:
@@ -205,35 +239,56 @@ The container images are available here:
### From GitHub
Python > 3.9.1, < 3.13 is required with pip and poetry:
Python >3.9.1, <3.13 is required with pip and Poetry:
``` console
git clone https://github.com/prowler-cloud/prowler
cd prowler
eval $(poetry env activate)
poetry install
python prowler.py -v
python prowler-cli.py -v
```
> [!IMPORTANT]
> Starting from Poetry v2.0.0, `poetry shell` has been deprecated in favor of `poetry env activate`.
>
> If your poetry version is below 2.0.0 you must keep using `poetry shell` to activate your environment.
> In case you have any doubts, consult the Poetry environment activation guide: https://python-poetry.org/docs/managing-environments/#activating-the-environment
> To clone Prowler on Windows, configure Git to support long file paths by running the following command: `git config core.longpaths true`.
> If you want to clone Prowler from Windows, use `git config core.longpaths true` to allow long file paths.
# 📐✏️ High level architecture
> [!IMPORTANT]
> As of Poetry v2.0.0, the `poetry shell` command has been deprecated. Use `poetry env activate` instead for environment activation.
>
> If your Poetry version is below v2.0.0, continue using `poetry shell` to activate your environment.
> For further guidance, refer to the Poetry Environment Activation Guide: https://python-poetry.org/docs/managing-environments/#activating-the-environment.
# ✏️ High level architecture
## Prowler App
The **Prowler App** consists of three main components:
**Prowler App** is composed of three key components:
- **Prowler UI**: A user-friendly web interface for running Prowler and viewing results, powered by Next.js.
- **Prowler API**: The backend API that executes Prowler scans and stores the results, built with Django REST Framework.
- **Prowler SDK**: A Python SDK that integrates with the Prowler CLI for advanced functionality.
- **Prowler UI**: A web-based interface, built with Next.js, providing a user-friendly experience for executing Prowler scans and visualizing results.
- **Prowler API**: A backend service, developed with Django REST Framework, responsible for running Prowler scans and storing the generated results.
- **Prowler SDK**: A Python SDK designed to extend the functionality of the Prowler CLI for advanced capabilities.
![Prowler App Architecture](docs/img/prowler-app-architecture.png)
## Prowler CLI
You can run Prowler from your workstation, a Kubernetes Job, a Google Compute Engine, an Azure VM, an EC2 instance, Fargate or any other container, CloudShell and many more.
**Running Prowler**
Prowler can be executed across various environments, offering flexibility to meet your needs. It can be run from:
- Your own workstation
- A Kubernetes Job
- Google Compute Engine
- Azure Virtual Machines (VMs)
- Amazon EC2 instances
- AWS Fargate or other container platforms
- CloudShell
And many more environments.
![Architecture](docs/img/architecture.png)
@@ -241,23 +296,36 @@ You can run Prowler from your workstation, a Kubernetes Job, a Google Compute En
## General
- `Allowlist` is now called `Mutelist`.
- The `--quiet` option has been deprecated, now use the `--status` flag to select the finding's status you want to get from PASS, FAIL or MANUAL.
- All `INFO` finding's status has changed to `MANUAL`.
- The CSV output format is common for all the providers.
- The `--quiet` option has been deprecated. Use the `--status` flag to filter findings based on their status: PASS, FAIL, or MANUAL.
- All findings with an `INFO` status have been reclassified as `MANUAL`.
- The CSV output format is standardized across all providers.
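For example, to keep only failed findings (the closest equivalent of the removed `--quiet` behavior):
```console
prowler <provider> --status FAIL
```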
We have deprecated some of our outputs formats:
- The native JSON is replaced for the JSON [OCSF](https://schema.ocsf.io/) v1.1.0, common for all the providers.
**Deprecated Output Formats**
The following formats are now deprecated:
- Native JSON has been replaced with JSON in [OCSF](https://schema.ocsf.io/) v1.1.0 format, which is standardized across all providers.
## AWS
- Deprecate the AWS flag --sts-endpoint-region since we use AWS STS regional tokens.
- To send only FAILS to AWS Security Hub, now use either `--send-sh-only-fails` or `--security-hub --status FAIL`.
**AWS Flag Deprecation**
The flag `--sts-endpoint-region` has been deprecated due to the adoption of AWS STS regional tokens.
**Sending FAIL Results to AWS Security Hub**
- To send only FAILS to AWS Security Hub, use one of the following options: `--send-sh-only-fails` or `--security-hub --status FAIL`.
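A minimal sketch of both options, assuming an AWS scan with the Security Hub integration enabled:
```console
prowler aws --security-hub --status FAIL
# or, equivalently
prowler aws --security-hub --send-sh-only-fails
```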
# 📖 Documentation
Install, Usage, Tutorials and Developer Guide is at https://docs.prowler.com/
**Documentation Resources**
For installation instructions, usage details, tutorials, and the Developer Guide, visit https://docs.prowler.com/
# 📃 License
Prowler is licensed as Apache License 2.0 as specified in each file. You may obtain a copy of the License at
<http://www.apache.org/licenses/LICENSE-2.0>
**Prowler License Information**
Prowler is licensed under the Apache License 2.0, as indicated in each file within the repository.
**Obtaining a Copy of the License**
A copy of the License is available at <http://www.apache.org/licenses/LICENSE-2.0>
+3
@@ -53,3 +53,6 @@ DJANGO_GOOGLE_OAUTH_CALLBACK_URL=""
DJANGO_GITHUB_OAUTH_CLIENT_ID=""
DJANGO_GITHUB_OAUTH_CLIENT_SECRET=""
DJANGO_GITHUB_OAUTH_CALLBACK_URL=""
# Deletion Task Batch Size
DJANGO_DELETION_BATCH_SIZE=5000
-168
@@ -1,168 +0,0 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.pyc
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
/_data/
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
*.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
.idea/
# VSCode
.vscode/
-91
@@ -1,91 +0,0 @@
repos:
## GENERAL
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
hooks:
- id: check-merge-conflict
- id: check-yaml
args: ["--unsafe"]
- id: check-json
- id: end-of-file-fixer
- id: trailing-whitespace
- id: no-commit-to-branch
- id: pretty-format-json
args: ["--autofix", "--no-sort-keys", "--no-ensure-ascii"]
exclude: 'src/backend/api/fixtures/dev/.*\.json$'
## TOML
- repo: https://github.com/macisamuele/language-formatters-pre-commit-hooks
rev: v2.13.0
hooks:
- id: pretty-format-toml
args: [--autofix]
files: pyproject.toml
## BASH
- repo: https://github.com/koalaman/shellcheck-precommit
rev: v0.10.0
hooks:
- id: shellcheck
exclude: contrib
## PYTHON
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.5.0
hooks:
# Run the linter.
- id: ruff
args: [ --fix ]
# Run the formatter.
- id: ruff-format
- repo: https://github.com/python-poetry/poetry
rev: 1.8.0
hooks:
- id: poetry-check
args: ["--directory=src"]
- id: poetry-lock
args: ["--no-update", "--directory=src"]
- repo: https://github.com/hadolint/hadolint
rev: v2.13.0-beta
hooks:
- id: hadolint
args: ["--ignore=DL3013", "Dockerfile"]
- repo: local
hooks:
- id: pylint
name: pylint
entry: bash -c 'poetry run pylint --disable=W,C,R,E -j 0 -rn -sn src/'
language: system
files: '.*\.py'
- id: trufflehog
name: TruffleHog
description: Detect secrets in your data.
entry: bash -c 'trufflehog --no-update git file://. --only-verified --fail'
# For running trufflehog in docker, use the following entry instead:
# entry: bash -c 'docker run -v "$(pwd):/workdir" -i --rm trufflesecurity/trufflehog:latest git file:///workdir --only-verified --fail'
language: system
stages: ["commit", "push"]
- id: bandit
name: bandit
description: "Bandit is a tool for finding common security issues in Python code"
entry: bash -c 'poetry run bandit -q -lll -x '*_test.py,./contrib/,./.venv/' -r .'
language: system
files: '.*\.py'
- id: safety
name: safety
description: "Safety is a tool that checks your installed dependencies for known security vulnerabilities"
entry: bash -c 'poetry run safety check --ignore 70612,66963'
language: system
- id: vulture
name: vulture
description: "Vulture finds unused code in Python programs."
entry: bash -c 'poetry run vulture --exclude "contrib,.venv,tests,conftest.py" --min-confidence 100 .'
language: system
files: '.*\.py'
+125 -8
@@ -2,13 +2,130 @@
All notable changes to the **Prowler API** are documented in this file.
## [v1.9.0] (Prowler UNRELEASED)
### Added
- SSO with SAML support [(#7822)](https://github.com/prowler-cloud/prowler/pull/7822)
- Support GCP Service Account key [(#7824)](https://github.com/prowler-cloud/prowler/pull/7824)
- `GET /compliance-overviews` endpoints to retrieve compliance metadata and specific requirements statuses [(#7877)](https://github.com/prowler-cloud/prowler/pull/7877)
- Lighthouse configuration support [(#7848)](https://github.com/prowler-cloud/prowler/pull/7848)
### Changed
- Reworked `GET /compliance-overviews` to return proper requirement metrics [(#7877)](https://github.com/prowler-cloud/prowler/pull/7877)
### Fixed
- Add missing mapping for ISO 27001 compliance for M365 provider [(#8069)](https://github.com/prowler-cloud/prowler/pull/8069)
---
## [v1.6.0] (Prowler UNRELEASED)
## [v1.8.5] (Prowler v5.7.5)
### Fixed
- Normalize provider UID to ensure safe and unique export directory paths [(#8007)](https://github.com/prowler-cloud/prowler/pull/8007)
- Blank resource types in `/metadata` endpoints [(#8027)](https://github.com/prowler-cloud/prowler/pull/8027)
---
## [v1.8.4] (Prowler v5.7.4)
### Removed
- Reverted RLS transaction handling and DB custom backend [(#7994)](https://github.com/prowler-cloud/prowler/pull/7994)
---
## [v1.8.3] (Prowler v5.7.3)
### Added
- Database backend to handle already closed connections [(#7935)](https://github.com/prowler-cloud/prowler/pull/7935)
### Changed
- Renamed field encrypted_password to password for M365 provider [(#7784)](https://github.com/prowler-cloud/prowler/pull/7784)
### Fixed
- Transaction persistence with RLS operations [(#7916)](https://github.com/prowler-cloud/prowler/pull/7916)
- Reverted the change `get_with_retry` to use the original `get` method for retrieving tasks [(#7932)](https://github.com/prowler-cloud/prowler/pull/7932)
---
## [v1.8.2] (Prowler v5.7.2)
### Fixed
- Task lookup to use task_kwargs instead of task_args for scan report resolution [(#7830)](https://github.com/prowler-cloud/prowler/pull/7830)
- Kubernetes UID validation to allow valid context names [(#7871)](https://github.com/prowler-cloud/prowler/pull/7871)
- Connection status verification before launching a scan [(#7831)](https://github.com/prowler-cloud/prowler/pull/7831)
- Race condition when creating background tasks [(#7876)](https://github.com/prowler-cloud/prowler/pull/7876)
- Error when modifying or retrieving tenants due to missing user UUID in transaction context [(#7890)](https://github.com/prowler-cloud/prowler/pull/7890)
---
## [v1.8.1] (Prowler v5.7.1)
### Fixed
- Added database index to improve performance on finding lookup [(#7800)](https://github.com/prowler-cloud/prowler/pull/7800)
---
## [v1.8.0] (Prowler v5.7.0)
### Added
- Huge improvements to `/findings/metadata` and resource related filters for findings [(#7690)](https://github.com/prowler-cloud/prowler/pull/7690)
- Improvements to `/overviews` endpoints [(#7690)](https://github.com/prowler-cloud/prowler/pull/7690)
- Queue to perform backfill background tasks [(#7690)](https://github.com/prowler-cloud/prowler/pull/7690)
- New endpoints to retrieve latest findings and metadata [(#7743)](https://github.com/prowler-cloud/prowler/pull/7743)
- Export support for Prowler ThreatScore in M365 [(#7783)](https://github.com/prowler-cloud/prowler/pull/7783)
---
## [v1.7.0] (Prowler v5.6.0)
### Added
- Support for developing new integrations [(#7167)](https://github.com/prowler-cloud/prowler/pull/7167).
- M365 as a new provider [(#7563)](https://github.com/prowler-cloud/prowler/pull/7563)
- `compliance/` folder and ZIP export functionality for all compliance reports [(#7653)](https://github.com/prowler-cloud/prowler/pull/7653)
- API endpoint to fetch and download any specific compliance file by name [(#7653)](https://github.com/prowler-cloud/prowler/pull/7653)
---
## [v1.6.0] (Prowler v5.5.0)
### Added
- Support for developing new integrations [(#7167)](https://github.com/prowler-cloud/prowler/pull/7167)
- HTTP Security Headers [(#7289)](https://github.com/prowler-cloud/prowler/pull/7289)
- New endpoint to get the compliance overviews metadata [(#7333)](https://github.com/prowler-cloud/prowler/pull/7333)
- Support for muted findings [(#7378)](https://github.com/prowler-cloud/prowler/pull/7378)
- Missing fields to API findings and resources [(#7318)](https://github.com/prowler-cloud/prowler/pull/7318)
---
## [v1.5.4] (Prowler v5.4.4)
### Fixed
- Bug with periodic tasks when trying to delete a provider [(#7466)](https://github.com/prowler-cloud/prowler/pull/7466)
---
## [v1.5.3] (Prowler v5.4.3)
### Fixed
- Duplicated scheduled scans handling [(#7401)](https://github.com/prowler-cloud/prowler/pull/7401)
- Environment variable to configure the deletion task batch size [(#7423)](https://github.com/prowler-cloud/prowler/pull/7423)
---
## [v1.5.2] (Prowler v5.4.2)
### Changed
- Refactored deletion logic and implemented retry mechanism for deletion tasks [(#7349)](https://github.com/prowler-cloud/prowler/pull/7349)
---
## [v1.5.1] (Prowler v5.4.1)
### Fixed
- Handle response in case local files are missing [(#7183)](https://github.com/prowler-cloud/prowler/pull/7183)
- Race condition when deleting export files after the S3 upload [(#7172)](https://github.com/prowler-cloud/prowler/pull/7172)
- Handle exception when a provider has no secret in test connection [(#7283)](https://github.com/prowler-cloud/prowler/pull/7283)
---
@@ -16,20 +133,20 @@ All notable changes to the **Prowler API** are documented in this file.
### Added
- Social login integration with Google and GitHub [(#6906)](https://github.com/prowler-cloud/prowler/pull/6906)
- Add API scan report system, now all scans launched from the API will generate a compressed file with the report in OCSF, CSV and HTML formats [(#6878)](https://github.com/prowler-cloud/prowler/pull/6878).
- API scan report system, now all scans launched from the API will generate a compressed file with the report in OCSF, CSV and HTML formats [(#6878)](https://github.com/prowler-cloud/prowler/pull/6878)
- Configurable Sentry integration [(#6874)](https://github.com/prowler-cloud/prowler/pull/6874)
### Changed
- Optimized `GET /findings` endpoint to improve response time and size [(#7019)](https://github.com/prowler-cloud/prowler/pull/7019).
- Optimized `GET /findings` endpoint to improve response time and size [(#7019)](https://github.com/prowler-cloud/prowler/pull/7019)
---
## [v1.4.0] (Prowler v5.3.0)
### Changed
- Daily scheduled scan instances are now created beforehand with `SCHEDULED` state [(#6700)](https://github.com/prowler-cloud/prowler/pull/6700).
- Findings endpoints now require at least one date filter [(#6800)](https://github.com/prowler-cloud/prowler/pull/6800).
- Findings metadata endpoint received a performance improvement [(#6863)](https://github.com/prowler-cloud/prowler/pull/6863).
- Increased the allowed length of the provider UID for Kubernetes providers [(#6869)](https://github.com/prowler-cloud/prowler/pull/6869).
- Daily scheduled scan instances are now created beforehand with `SCHEDULED` state [(#6700)](https://github.com/prowler-cloud/prowler/pull/6700)
- Findings endpoints now require at least one date filter [(#6800)](https://github.com/prowler-cloud/prowler/pull/6800)
- Findings metadata endpoint received a performance improvement [(#6863)](https://github.com/prowler-cloud/prowler/pull/6863)
- Increased the allowed length of the provider UID for Kubernetes providers [(#6869)](https://github.com/prowler-cloud/prowler/pull/6869)
---
+45 -15
@@ -1,13 +1,45 @@
FROM python:3.12.8-alpine3.20 AS build
FROM python:3.12.10-slim-bookworm AS build
LABEL maintainer="https://github.com/prowler-cloud/api"
# hadolint ignore=DL3018
RUN apk --no-cache add gcc python3-dev musl-dev linux-headers curl-dev
ARG POWERSHELL_VERSION=7.5.0
ENV POWERSHELL_VERSION=${POWERSHELL_VERSION}
# hadolint ignore=DL3008
RUN apt-get update && apt-get install -y --no-install-recommends \
wget \
libicu72 \
gcc \
g++ \
make \
libxml2-dev \
libxmlsec1-dev \
libxmlsec1-openssl \
pkg-config \
libtool \
libxslt1-dev \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
# Install PowerShell
RUN ARCH=$(uname -m) && \
if [ "$ARCH" = "x86_64" ]; then \
wget --progress=dot:giga https://github.com/PowerShell/PowerShell/releases/download/v${POWERSHELL_VERSION}/powershell-${POWERSHELL_VERSION}-linux-x64.tar.gz -O /tmp/powershell.tar.gz ; \
elif [ "$ARCH" = "aarch64" ]; then \
wget --progress=dot:giga https://github.com/PowerShell/PowerShell/releases/download/v${POWERSHELL_VERSION}/powershell-${POWERSHELL_VERSION}-linux-arm64.tar.gz -O /tmp/powershell.tar.gz ; \
else \
echo "Unsupported architecture: $ARCH" && exit 1 ; \
fi && \
mkdir -p /opt/microsoft/powershell/7 && \
tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7 && \
chmod +x /opt/microsoft/powershell/7/pwsh && \
ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh && \
rm /tmp/powershell.tar.gz
# Add prowler user
RUN addgroup --gid 1000 prowler && \
adduser --uid 1000 --gid 1000 --disabled-password --gecos "" prowler
RUN apk --no-cache upgrade && \
addgroup -g 1000 prowler && \
adduser -D -u 1000 -G prowler prowler
USER prowler
WORKDIR /home/prowler
@@ -17,28 +49,26 @@ COPY pyproject.toml ./
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir poetry
COPY src/backend/ ./backend/
ENV PATH="/home/prowler/.local/bin:$PATH"
# Add `--no-root` to avoid installing the current project as a package
RUN poetry install --no-root && \
rm -rf ~/.cache/pip
RUN poetry run python "$(poetry env info --path)/src/prowler/prowler/providers/m365/lib/powershell/m365_powershell.py"
# Prevents known compatibility error between lxml and libxml2/libxmlsec versions.
# See: https://github.com/xmlsec/python-xmlsec/issues/320
RUN poetry run pip install --force-reinstall --no-binary lxml lxml
COPY src/backend/ ./backend/
COPY docker-entrypoint.sh ./docker-entrypoint.sh
WORKDIR /home/prowler/backend
# Development image
# hadolint ignore=DL3006
FROM build AS dev
USER 0
# hadolint ignore=DL3018
RUN apk --no-cache add curl vim
USER prowler
ENTRYPOINT ["../docker-entrypoint.sh", "dev"]
# Production image
+1
@@ -235,6 +235,7 @@ To view the logs for any component (e.g., Django, Celery worker), you can use th
```console
docker logs -f $(docker ps --format "{{.Names}}" | grep 'api-')
```
## Applying migrations
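The commands behind this step are not shown in this hunk; a sketch based on the `docker-entrypoint.sh` changes later in this changeset would be:
```console
poetry run python manage.py check_and_fix_socialaccount_sites_migration --database admin
poetry run python manage.py migrate --database admin
```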
-125
@@ -1,125 +0,0 @@
services:
api:
build:
dockerfile: Dockerfile
image: prowler-api
env_file:
- path: ./.env
required: false
ports:
- "${DJANGO_PORT:-8000}:${DJANGO_PORT:-8000}"
profiles:
- prod
depends_on:
postgres:
condition: service_healthy
valkey:
condition: service_healthy
entrypoint:
- "../docker-entrypoint.sh"
- "prod"
api-dev:
build:
dockerfile: Dockerfile
target: dev
image: prowler-api-dev
environment:
- DJANGO_SETTINGS_MODULE=config.django.devel
- DJANGO_LOGGING_FORMATTER=human_readable
env_file:
- path: ./.env
required: false
ports:
- "${DJANGO_PORT:-8080}:${DJANGO_PORT:-8080}"
volumes:
- "./src/backend:/home/prowler/backend"
- "./pyproject.toml:/home/prowler/pyproject.toml"
profiles:
- dev
depends_on:
postgres:
condition: service_healthy
valkey:
condition: service_healthy
entrypoint:
- "../docker-entrypoint.sh"
- "dev"
postgres:
image: postgres:16.3-alpine
ports:
- "${POSTGRES_PORT:-5432}:${POSTGRES_PORT:-5432}"
hostname: "postgres-db"
volumes:
- ./_data/postgres:/var/lib/postgresql/data
environment:
- POSTGRES_USER=${POSTGRES_ADMIN_USER:-prowler}
- POSTGRES_PASSWORD=${POSTGRES_ADMIN_PASSWORD:-S3cret}
- POSTGRES_DB=${POSTGRES_DB:-prowler_db}
env_file:
- path: ./.env
required: false
healthcheck:
test: ["CMD-SHELL", "sh -c 'pg_isready -U ${POSTGRES_ADMIN_USER:-prowler} -d ${POSTGRES_DB:-prowler_db}'"]
interval: 5s
timeout: 5s
retries: 5
valkey:
image: valkey/valkey:7-alpine3.19
ports:
- "${VALKEY_PORT:-6379}:6379"
hostname: "valkey"
volumes:
- ./_data/valkey:/data
env_file:
- path: ./.env
required: false
healthcheck:
test: ["CMD-SHELL", "sh -c 'valkey-cli ping'"]
interval: 10s
timeout: 5s
retries: 3
worker:
build:
dockerfile: Dockerfile
image: prowler-worker
environment:
- DJANGO_SETTINGS_MODULE=${DJANGO_SETTINGS_MODULE:-config.django.production}
env_file:
- path: ./.env
required: false
profiles:
- dev
- prod
depends_on:
valkey:
condition: service_healthy
postgres:
condition: service_healthy
entrypoint:
- "../docker-entrypoint.sh"
- "worker"
worker-beat:
build:
dockerfile: Dockerfile
image: prowler-worker
environment:
- DJANGO_SETTINGS_MODULE=${DJANGO_SETTINGS_MODULE:-config.django.production}
env_file:
- path: ./.env
required: false
profiles:
- dev
- prod
depends_on:
valkey:
condition: service_healthy
postgres:
condition: service_healthy
entrypoint:
- "../docker-entrypoint.sh"
- "beat"
+5 -1
@@ -3,6 +3,10 @@
apply_migrations() {
echo "Applying database migrations..."
# Fix Inconsistent migration history after adding sites app
poetry run python manage.py check_and_fix_socialaccount_sites_migration --database admin
poetry run python manage.py migrate --database admin
}
@@ -28,7 +32,7 @@ start_prod_server() {
start_worker() {
echo "Starting the worker..."
poetry run python -m celery -A config.celery worker -l "${DJANGO_LOGGING_LEVEL:-info}" -Q celery,scans,scan-reports,deletion -E --max-tasks-per-child 1
poetry run python -m celery -A config.celery worker -l "${DJANGO_LOGGING_LEVEL:-info}" -Q celery,scans,scan-reports,deletion,backfill -E --max-tasks-per-child 1
}
start_worker_beat() {
+1322 -926
File diff suppressed because it is too large.
+6 -3
@@ -7,7 +7,8 @@ authors = [{name = "Prowler Engineering", email = "engineering@prowler.com"}]
dependencies = [
"celery[pytest] (>=5.4.0,<6.0.0)",
"dj-rest-auth[with_social,jwt] (==7.0.1)",
"django==5.1.7",
"django==5.1.10",
"django-allauth[saml] (>=65.8.0,<66.0.0)",
"django-celery-beat (>=2.7.0,<3.0.0)",
"django-celery-results (>=2.5.1,<3.0.0)",
"django-cors-headers==4.4.0",
@@ -26,7 +27,8 @@ dependencies = [
"psycopg2-binary==2.9.9",
"pytest-celery[redis] (>=1.0.1,<2.0.0)",
"sentry-sdk[django] (>=2.20.0,<3.0.0)",
"uuid6==2024.7.10"
"uuid6==2024.7.10",
"openai (>=1.82.0,<2.0.0)"
]
description = "Prowler's API (Django/DRF)"
license = "Apache-2.0"
@@ -34,7 +36,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
version = "1.5.0"
version = "1.9.0"
[project.scripts]
celery = "src.backend.config.settings.celery"
@@ -45,6 +47,7 @@ coverage = "7.5.4"
django-silk = "5.3.2"
docker = "7.1.0"
freezegun = "1.5.1"
marshmallow = ">=3.15.0,<4.0.0"
mypy = "1.10.1"
pylint = "3.2.5"
pytest = "8.2.2"
+89 -23
@@ -3,7 +3,14 @@ from django.db import transaction
from api.db_router import MainRouter
from api.db_utils import rls_transaction
from api.models import Membership, Role, Tenant, User, UserRoleRelationship
from api.models import (
Membership,
Role,
SAMLConfiguration,
Tenant,
User,
UserRoleRelationship,
)
class ProwlerSocialAccountAdapter(DefaultSocialAccountAdapter):
@@ -17,6 +24,8 @@ class ProwlerSocialAccountAdapter(DefaultSocialAccountAdapter):
def pre_social_login(self, request, sociallogin):
# Link existing accounts with the same email address
email = sociallogin.account.extra_data.get("email")
if sociallogin.account.provider == "saml":
email = sociallogin.user.email
if email:
existing_user = self.get_user_by_email(email)
if existing_user:
@@ -29,29 +38,86 @@ class ProwlerSocialAccountAdapter(DefaultSocialAccountAdapter):
"""
with transaction.atomic(using=MainRouter.admin_db):
user = super().save_user(request, sociallogin, form)
user.save(using=MainRouter.admin_db)
provider = sociallogin.account.provider
extra = sociallogin.account.extra_data
tenant = Tenant.objects.using(MainRouter.admin_db).create(
name=f"{user.email.split('@')[0]} default tenant"
)
with rls_transaction(str(tenant.id)):
Membership.objects.using(MainRouter.admin_db).create(
user=user, tenant=tenant, role=Membership.RoleChoices.OWNER
if provider == "saml":
# Handle SAML-specific logic
user.first_name = extra.get("firstName", [""])[0]
user.last_name = extra.get("lastName", [""])[0]
user.company_name = extra.get("organization", [""])[0]
user.name = f"{user.first_name} {user.last_name}".strip()
user.save(using=MainRouter.admin_db)
email_domain = user.email.split("@")[-1]
tenant = (
SAMLConfiguration.objects.using(MainRouter.admin_db)
.get(email_domain=email_domain)
.tenant
)
role = Role.objects.using(MainRouter.admin_db).create(
name="admin",
tenant_id=tenant.id,
manage_users=True,
manage_account=True,
manage_billing=True,
manage_providers=True,
manage_integrations=True,
manage_scans=True,
unlimited_visibility=True,
)
UserRoleRelationship.objects.using(MainRouter.admin_db).create(
user=user,
role=role,
tenant_id=tenant.id,
with rls_transaction(str(tenant.id)):
role_name = extra.get("userType", ["saml_default_role"])[0].strip()
try:
role = Role.objects.using(MainRouter.admin_db).get(
name=role_name, tenant_id=tenant.id
)
except Role.DoesNotExist:
role = Role.objects.using(MainRouter.admin_db).create(
name=role_name,
tenant_id=tenant.id,
manage_users=False,
manage_account=False,
manage_billing=False,
manage_providers=False,
manage_integrations=False,
manage_scans=False,
unlimited_visibility=False,
)
Membership.objects.using(MainRouter.admin_db).create(
user=user,
tenant=tenant,
role=Membership.RoleChoices.MEMBER,
)
UserRoleRelationship.objects.using(MainRouter.admin_db).create(
user=user,
role=role,
tenant_id=tenant.id,
)
else:
# Handle other providers (e.g., GitHub, Google)
user.save(using=MainRouter.admin_db)
social_account_name = extra.get("name")
if social_account_name:
user.name = social_account_name
user.save(using=MainRouter.admin_db)
tenant = Tenant.objects.using(MainRouter.admin_db).create(
name=f"{user.email.split('@')[0]} default tenant"
)
with rls_transaction(str(tenant.id)):
Membership.objects.using(MainRouter.admin_db).create(
user=user, tenant=tenant, role=Membership.RoleChoices.OWNER
)
role = Role.objects.using(MainRouter.admin_db).create(
name="admin",
tenant_id=tenant.id,
manage_users=True,
manage_account=True,
manage_billing=True,
manage_providers=True,
manage_integrations=True,
manage_scans=True,
unlimited_visibility=True,
)
UserRoleRelationship.objects.using(MainRouter.admin_db).create(
user=user,
role=role,
tenant_id=tenant.id,
)
return user
+2 -12
@@ -109,16 +109,6 @@ class BaseTenantViewset(BaseViewSet):
pass # Tenant might not exist, handle gracefully
def initial(self, request, *args, **kwargs):
if (
request.resolver_match.url_name != "tenant-detail"
and request.method != "DELETE"
):
user_id = str(request.user.id)
with rls_transaction(value=user_id, parameter=POSTGRES_USER_VAR):
return super().initial(request, *args, **kwargs)
# TODO: DRY this when we have time
if request.auth is None:
raise NotAuthenticated
@@ -126,8 +116,8 @@ class BaseTenantViewset(BaseViewSet):
if tenant_id is None:
raise NotAuthenticated("Tenant ID is not present in token")
with rls_transaction(tenant_id):
self.request.tenant_id = tenant_id
user_id = str(request.user.id)
with rls_transaction(value=user_id, parameter=POSTGRES_USER_VAR):
return super().initial(request, *args, **kwargs)
+39 -9
@@ -1,12 +1,38 @@
from types import MappingProxyType
from api.models import Provider
from prowler.config.config import get_available_compliance_frameworks
from prowler.lib.check.compliance_models import Compliance
from prowler.lib.check.models import CheckMetadata
from api.models import Provider
PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE = {}
PROWLER_CHECKS = {}
AVAILABLE_COMPLIANCE_FRAMEWORKS = {}
def get_compliance_frameworks(provider_type: Provider.ProviderChoices) -> list[str]:
"""
Retrieve and cache the list of available compliance frameworks for a specific cloud provider.
This function lazily loads and caches the available compliance frameworks (e.g., CIS, MITRE, ISO)
for each provider type (AWS, Azure, GCP, etc.) on first access. Subsequent calls for the same
provider will return the cached result.
Args:
provider_type (Provider.ProviderChoices): The cloud provider type for which to retrieve
available compliance frameworks (e.g., "aws", "azure", "gcp", "m365").
Returns:
list[str]: A list of framework identifiers (e.g., "cis_1.4_aws", "mitre_attack_azure") available
for the given provider.
"""
global AVAILABLE_COMPLIANCE_FRAMEWORKS
if provider_type not in AVAILABLE_COMPLIANCE_FRAMEWORKS:
AVAILABLE_COMPLIANCE_FRAMEWORKS[provider_type] = (
get_available_compliance_frameworks(provider_type)
)
return AVAILABLE_COMPLIANCE_FRAMEWORKS[provider_type]
def get_prowler_provider_checks(provider_type: Provider.ProviderChoices):
@@ -164,10 +190,16 @@ def generate_compliance_overview_template(prowler_compliance: dict):
total_checks = len(requirement.Checks)
checks_dict = {check: None for check in requirement.Checks}
req_status_val = "MANUAL" if total_checks == 0 else "PASS"
# Build requirement dictionary
requirement_dict = {
"name": requirement.Name or requirement.Id,
"description": requirement.Description,
"tactics": getattr(requirement, "Tactics", []),
"subtechniques": getattr(requirement, "SubTechniques", []),
"platforms": getattr(requirement, "Platforms", []),
"technique_url": getattr(requirement, "TechniqueURL", ""),
"attributes": [
dict(attribute) for attribute in requirement.Attributes
],
@@ -178,20 +210,18 @@ def generate_compliance_overview_template(prowler_compliance: dict):
"manual": 0,
"total": total_checks,
},
"status": "PASS",
"status": req_status_val,
}
# Update requirements status
if total_checks == 0:
# Update requirements status counts for the framework
if req_status_val == "MANUAL":
requirements_status["manual"] += 1
elif req_status_val == "PASS":
requirements_status["passed"] += 1
# Add requirement to compliance requirements
compliance_requirements[requirement.Id] = requirement_dict
# Calculate pending requirements
pending_requirements = total_requirements - requirements_status["manual"]
requirements_status["passed"] = pending_requirements
# Build compliance dictionary
compliance_dict = {
"framework": compliance_data.Framework,
+208 -9
@@ -1,3 +1,4 @@
import re
import secrets
import uuid
from contextlib import contextmanager
@@ -6,6 +7,7 @@ from datetime import datetime, timedelta, timezone
from django.conf import settings
from django.contrib.auth.models import BaseUserManager
from django.db import connection, models, transaction
from django_celery_beat.models import PeriodicTask
from psycopg2 import connect as psycopg2_connect
from psycopg2.extensions import AsIs, new_type, register_adapter, register_type
from rest_framework_json_api.serializers import ValidationError
@@ -105,11 +107,12 @@ def generate_random_token(length: int = 14, symbols: str | None = None) -> str:
return "".join(secrets.choice(symbols or _symbols) for _ in range(length))
def batch_delete(queryset, batch_size=5000):
def batch_delete(tenant_id, queryset, batch_size=settings.DJANGO_DELETION_BATCH_SIZE):
"""
Deletes objects in batches and returns the total number of deletions and a summary.
Args:
tenant_id (str): Tenant ID the queryset belongs to.
queryset (QuerySet): The queryset of objects to delete.
batch_size (int): The number of objects to delete in each batch.
@@ -120,15 +123,16 @@ def batch_delete(queryset, batch_size=5000):
deletion_summary = {}
while True:
# Get a batch of IDs to delete
batch_ids = set(
queryset.values_list("id", flat=True).order_by("id")[:batch_size]
)
if not batch_ids:
# No more objects to delete
break
with rls_transaction(tenant_id, POSTGRES_TENANT_VAR):
# Get a batch of IDs to delete
batch_ids = set(
queryset.values_list("id", flat=True).order_by("id")[:batch_size]
)
if not batch_ids:
# No more objects to delete
break
deleted_count, deleted_info = queryset.filter(id__in=batch_ids).delete()
deleted_count, deleted_info = queryset.filter(id__in=batch_ids).delete()
total_deleted += deleted_count
for model_label, count in deleted_info.items():
@@ -137,6 +141,40 @@ def batch_delete(queryset, batch_size=5000):
return total_deleted, deletion_summary
def delete_related_daily_task(provider_id: str):
"""
Deletes the periodic task associated with a specific provider.
Args:
provider_id (str): The unique identifier for the provider
whose related periodic task should be deleted.
"""
task_name = f"scan-perform-scheduled-{provider_id}"
PeriodicTask.objects.filter(name=task_name).delete()
def create_objects_in_batches(
tenant_id: str, model, objects: list, batch_size: int = 500
):
"""
Bulk-create model instances in repeated, per-tenant RLS transactions.
All chunks execute in their own transaction, so no single transaction
grows too large.
Args:
tenant_id (str): UUID string of the tenant under which to set RLS.
model: Django model class whose `.objects.bulk_create()` will be called.
objects (list): List of model instances (unsaved) to bulk-create.
batch_size (int): Maximum number of objects per bulk_create call.
"""
total = len(objects)
for i in range(0, total, batch_size):
chunk = objects[i : i + batch_size]
with rls_transaction(value=tenant_id, parameter=POSTGRES_TENANT_VAR):
model.objects.bulk_create(chunk, batch_size)
# Postgres Enums
@@ -212,6 +250,167 @@ def register_enum(apps, schema_editor, enum_class): # noqa: F841
register_adapter(enum_class, enum_adapter)
def _should_create_index_on_partition(
partition_name: str, all_partitions: bool = False
) -> bool:
"""
Determine if we should create an index on this partition.
Args:
partition_name: The name of the partition (e.g., "findings_2025_aug", "findings_default")
all_partitions: If True, create on all partitions. If False, only current/future partitions.
Returns:
bool: True if index should be created on this partition, False otherwise.
"""
if all_partitions:
return True
# Extract date from partition name if it follows the pattern
# Partition names look like: findings_2025_aug, findings_2025_jul, etc.
date_pattern = r"(\d{4})_([a-z]{3})$"
match = re.search(date_pattern, partition_name)
if not match:
# If we can't parse the date, include it to be safe (e.g., default partition)
return True
try:
year_str, month_abbr = match.groups()
year = int(year_str)
# Map month abbreviations to numbers
month_map = {
"jan": 1,
"feb": 2,
"mar": 3,
"apr": 4,
"may": 5,
"jun": 6,
"jul": 7,
"aug": 8,
"sep": 9,
"oct": 10,
"nov": 11,
"dec": 12,
}
month = month_map.get(month_abbr.lower())
if month is None:
# Unknown month abbreviation, include it to be safe
return True
partition_date = datetime(year, month, 1, tzinfo=timezone.utc)
# Get current month start
now = datetime.now(timezone.utc)
current_month_start = now.replace(
day=1, hour=0, minute=0, second=0, microsecond=0
)
# Include current month and future partitions
return partition_date >= current_month_start
except (ValueError, TypeError):
# If date parsing fails, include it to be safe
return True
def create_index_on_partitions(
apps, # noqa: F841
schema_editor,
parent_table: str,
index_name: str,
columns: str,
method: str = "BTREE",
where: str = "",
all_partitions: bool = True,
):
"""
Create an index on existing partitions of `parent_table`.
Args:
parent_table: The name of the root table (e.g. "findings").
index_name: A short name for the index (will be prefixed per-partition).
        columns: The column list to index, e.g. "tenant_id, scan_id, status"
            (wrapped in parentheses in the generated SQL).
        method: The index method (BTREE, GIN, etc.). Defaults to BTREE.
        where: Optional WHERE clause (without the leading "WHERE"), e.g. "status = 'FAIL'".
        all_partitions: Whether to create indexes on all partitions or only
            current/future ones. Defaults to True; pass False to skip past-month
            partitions and avoid maintenance overhead on old partitions where
            the index may not be needed.
    Examples:
        # Create index only on current and future partitions (recommended for new indexes)
        create_index_on_partitions(
            apps, schema_editor,
            parent_table="findings",
            index_name="new_performance_idx",
            columns="tenant_id, status, severity",
            all_partitions=False
        )
# Create index on all partitions (use when migrating existing critical indexes)
create_index_on_partitions(
apps, schema_editor,
parent_table="findings",
index_name="critical_existing_idx",
columns="tenant_id, scan_id",
all_partitions=True
)
"""
with connection.cursor() as cursor:
cursor.execute(
"""
SELECT inhrelid::regclass::text
FROM pg_inherits
WHERE inhparent = %s::regclass
""",
[parent_table],
)
partitions = [row[0] for row in cursor.fetchall()]
where_sql = f" WHERE {where}" if where else ""
for partition in partitions:
if _should_create_index_on_partition(partition, all_partitions):
idx_name = f"{partition.replace('.', '_')}_{index_name}"
sql = (
f"CREATE INDEX CONCURRENTLY IF NOT EXISTS {idx_name} "
f"ON {partition} USING {method} ({columns})"
f"{where_sql};"
)
schema_editor.execute(sql)
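Because these statements use CREATE INDEX CONCURRENTLY, PostgreSQL refuses to run them inside a transaction block; any migration that calls this helper (or drop_index_on_partitions below) must therefore set atomic = False, as the partition migrations later in this diff do. The IF NOT EXISTS clause also makes reruns idempotent if a non-atomic migration is interrupted partway through.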
def drop_index_on_partitions(
apps, # noqa: F841
schema_editor,
parent_table: str,
index_name: str,
):
"""
Drop the per-partition indexes that were created by create_index_on_partitions.
Args:
parent_table: The name of the root table (e.g. "findings").
index_name: The same short name used when creating them.
"""
with connection.cursor() as cursor:
cursor.execute(
"""
SELECT inhrelid::regclass::text
FROM pg_inherits
WHERE inhparent = %s::regclass
""",
[parent_table],
)
partitions = [row[0] for row in cursor.fetchall()]
for partition in partitions:
idx_name = f"{partition.replace('.', '_')}_{index_name}"
sql = f"DROP INDEX CONCURRENTLY IF EXISTS {idx_name};"
schema_editor.execute(sql)
# Postgres enum definition for member role
+34 -4
@@ -3,7 +3,7 @@ from rest_framework import status
from rest_framework.exceptions import APIException
from rest_framework_json_api.exceptions import exception_handler
from rest_framework_json_api.serializers import ValidationError
from rest_framework_simplejwt.exceptions import InvalidToken, TokenError
class ModelValidationError(ValidationError):
@@ -32,6 +32,31 @@ class InvitationTokenExpiredException(APIException):
default_code = "token_expired"
# Task Management Exceptions (non-HTTP)
class TaskManagementError(Exception):
"""Base exception for task management errors."""
def __init__(self, task=None):
self.task = task
super().__init__()
class TaskFailedException(TaskManagementError):
"""Raised when a task has failed."""
class TaskNotFoundException(TaskManagementError):
"""Raised when a task is not found."""
class TaskInProgressException(TaskManagementError):
"""Raised when a task is running but there's no related Task object to return."""
def __init__(self, task_result=None):
self.task_result = task_result
super().__init__()
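A hedged sketch of how callers might branch on these non-HTTP task errors; get_task_for_provider and logger are hypothetical stand-ins, not part of this diff:

try:
    task = get_task_for_provider(provider_id)  # hypothetical helper
except TaskNotFoundException:
    task = None  # nothing has been scheduled yet
except TaskInProgressException as exc:
    task_result = exc.task_result  # a raw result exists, but no Task row to return
except TaskFailedException as exc:
    logger.error("Task failed: %s", exc.task)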
def custom_exception_handler(exc, context):
if isinstance(exc, django_validation_error):
if hasattr(exc, "error_dict"):
@@ -39,7 +64,12 @@ def custom_exception_handler(exc, context):
else:
exc = ValidationError(detail=exc.messages[0], code=exc.code)
elif isinstance(exc, (TokenError, InvalidToken)):
exc.detail["messages"] = [
message_item["message"] for message_item in exc.detail["messages"]
]
if (
hasattr(exc, "detail")
and isinstance(exc.detail, dict)
and "messages" in exc.detail
):
exc.detail["messages"] = [
message_item["message"] for message_item in exc.detail["messages"]
]
return exception_handler(exc, context)
+147 -108
@@ -22,7 +22,7 @@ from api.db_utils import (
StatusEnumField,
)
from api.models import (
    ComplianceRequirementOverview,
Finding,
Integration,
Invitation,
@@ -81,6 +81,114 @@ class ChoiceInFilter(BaseInFilter, ChoiceFilter):
pass
class CommonFindingFilters(FilterSet):
# We filter providers from the scan in findings
provider = UUIDFilter(field_name="scan__provider__id", lookup_expr="exact")
provider__in = UUIDInFilter(field_name="scan__provider__id", lookup_expr="in")
provider_type = ChoiceFilter(
choices=Provider.ProviderChoices.choices, field_name="scan__provider__provider"
)
provider_type__in = ChoiceInFilter(
choices=Provider.ProviderChoices.choices, field_name="scan__provider__provider"
)
provider_uid = CharFilter(field_name="scan__provider__uid", lookup_expr="exact")
provider_uid__in = CharInFilter(field_name="scan__provider__uid", lookup_expr="in")
provider_uid__icontains = CharFilter(
field_name="scan__provider__uid", lookup_expr="icontains"
)
provider_alias = CharFilter(field_name="scan__provider__alias", lookup_expr="exact")
provider_alias__in = CharInFilter(
field_name="scan__provider__alias", lookup_expr="in"
)
provider_alias__icontains = CharFilter(
field_name="scan__provider__alias", lookup_expr="icontains"
)
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
uid = CharFilter(field_name="uid")
delta = ChoiceFilter(choices=Finding.DeltaChoices.choices)
status = ChoiceFilter(choices=StatusChoices.choices)
severity = ChoiceFilter(choices=SeverityChoices)
impact = ChoiceFilter(choices=SeverityChoices)
muted = BooleanFilter(
help_text="If this filter is not provided, muted and non-muted findings will be returned."
)
resources = UUIDInFilter(field_name="resource__id", lookup_expr="in")
region = CharFilter(method="filter_resource_region")
region__in = CharInFilter(field_name="resource_regions", lookup_expr="overlap")
region__icontains = CharFilter(
field_name="resource_regions", lookup_expr="icontains"
)
service = CharFilter(method="filter_resource_service")
service__in = CharInFilter(field_name="resource_services", lookup_expr="overlap")
service__icontains = CharFilter(
field_name="resource_services", lookup_expr="icontains"
)
resource_uid = CharFilter(field_name="resources__uid")
resource_uid__in = CharInFilter(field_name="resources__uid", lookup_expr="in")
resource_uid__icontains = CharFilter(
field_name="resources__uid", lookup_expr="icontains"
)
resource_name = CharFilter(field_name="resources__name")
resource_name__in = CharInFilter(field_name="resources__name", lookup_expr="in")
resource_name__icontains = CharFilter(
field_name="resources__name", lookup_expr="icontains"
)
resource_type = CharFilter(method="filter_resource_type")
resource_type__in = CharInFilter(field_name="resource_types", lookup_expr="overlap")
resource_type__icontains = CharFilter(
field_name="resources__type", lookup_expr="icontains"
)
# Temporarily disabled until we implement tag filtering in the UI
# resource_tag_key = CharFilter(field_name="resources__tags__key")
# resource_tag_key__in = CharInFilter(
# field_name="resources__tags__key", lookup_expr="in"
# )
# resource_tag_key__icontains = CharFilter(
# field_name="resources__tags__key", lookup_expr="icontains"
# )
# resource_tag_value = CharFilter(field_name="resources__tags__value")
# resource_tag_value__in = CharInFilter(
# field_name="resources__tags__value", lookup_expr="in"
# )
# resource_tag_value__icontains = CharFilter(
# field_name="resources__tags__value", lookup_expr="icontains"
# )
# resource_tags = CharInFilter(
# method="filter_resource_tag",
# lookup_expr="in",
# help_text="Filter by resource tags `key:value` pairs.\nMultiple values may be "
# "separated by commas.",
# )
def filter_resource_service(self, queryset, name, value):
return queryset.filter(resource_services__contains=[value])
def filter_resource_region(self, queryset, name, value):
return queryset.filter(resource_regions__contains=[value])
def filter_resource_type(self, queryset, name, value):
return queryset.filter(resource_types__contains=[value])
def filter_resource_tag(self, queryset, name, value):
overall_query = Q()
for key_value_pair in value:
tag_key, tag_value = key_value_pair.split(":", 1)
overall_query |= Q(
resources__tags__key__icontains=tag_key,
resources__tags__value__icontains=tag_value,
)
return queryset.filter(overall_query).distinct()
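As an illustration of how these filters reach the API, a few hedged example requests; the /findings path and the JSON:API filter[...] spelling are assumptions, not shown in this diff:

# Illustrative requests only; exact paths and parameter spellings are assumed.
#   GET /api/v1/findings?filter[service]=s3
#   GET /api/v1/findings?filter[region__in]=eu-west-1,us-east-1
#   GET /api/v1/findings?filter[resource_type__icontains]=bucket
# service/region/resource_type now resolve against the denormalized
# resource_services/resource_regions/resource_types arrays on Finding,
# so they no longer join through the resources table.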
class TenantFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
@@ -257,91 +365,7 @@ class ResourceFilter(ProviderRelationshipFilterSet):
return queryset.filter(tags__text_search=value)
class FindingFilter(FilterSet):
# We filter providers from the scan in findings
provider = UUIDFilter(field_name="scan__provider__id", lookup_expr="exact")
provider__in = UUIDInFilter(field_name="scan__provider__id", lookup_expr="in")
provider_type = ChoiceFilter(
choices=Provider.ProviderChoices.choices, field_name="scan__provider__provider"
)
provider_type__in = ChoiceInFilter(
choices=Provider.ProviderChoices.choices, field_name="scan__provider__provider"
)
provider_uid = CharFilter(field_name="scan__provider__uid", lookup_expr="exact")
provider_uid__in = CharInFilter(field_name="scan__provider__uid", lookup_expr="in")
provider_uid__icontains = CharFilter(
field_name="scan__provider__uid", lookup_expr="icontains"
)
provider_alias = CharFilter(field_name="scan__provider__alias", lookup_expr="exact")
provider_alias__in = CharInFilter(
field_name="scan__provider__alias", lookup_expr="in"
)
provider_alias__icontains = CharFilter(
field_name="scan__provider__alias", lookup_expr="icontains"
)
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
uid = CharFilter(field_name="uid")
delta = ChoiceFilter(choices=Finding.DeltaChoices.choices)
status = ChoiceFilter(choices=StatusChoices.choices)
severity = ChoiceFilter(choices=SeverityChoices)
impact = ChoiceFilter(choices=SeverityChoices)
resources = UUIDInFilter(field_name="resource__id", lookup_expr="in")
region = CharFilter(field_name="resources__region")
region__in = CharInFilter(field_name="resources__region", lookup_expr="in")
region__icontains = CharFilter(
field_name="resources__region", lookup_expr="icontains"
)
service = CharFilter(field_name="resources__service")
service__in = CharInFilter(field_name="resources__service", lookup_expr="in")
service__icontains = CharFilter(
field_name="resources__service", lookup_expr="icontains"
)
resource_uid = CharFilter(field_name="resources__uid")
resource_uid__in = CharInFilter(field_name="resources__uid", lookup_expr="in")
resource_uid__icontains = CharFilter(
field_name="resources__uid", lookup_expr="icontains"
)
resource_name = CharFilter(field_name="resources__name")
resource_name__in = CharInFilter(field_name="resources__name", lookup_expr="in")
resource_name__icontains = CharFilter(
field_name="resources__name", lookup_expr="icontains"
)
resource_type = CharFilter(field_name="resources__type")
resource_type__in = CharInFilter(field_name="resources__type", lookup_expr="in")
resource_type__icontains = CharFilter(
field_name="resources__type", lookup_expr="icontains"
)
# Temporarily disabled until we implement tag filtering in the UI
# resource_tag_key = CharFilter(field_name="resources__tags__key")
# resource_tag_key__in = CharInFilter(
# field_name="resources__tags__key", lookup_expr="in"
# )
# resource_tag_key__icontains = CharFilter(
# field_name="resources__tags__key", lookup_expr="icontains"
# )
# resource_tag_value = CharFilter(field_name="resources__tags__value")
# resource_tag_value__in = CharInFilter(
# field_name="resources__tags__value", lookup_expr="in"
# )
# resource_tag_value__icontains = CharFilter(
# field_name="resources__tags__value", lookup_expr="icontains"
# )
# resource_tags = CharInFilter(
# method="filter_resource_tag",
# lookup_expr="in",
# help_text="Filter by resource tags `key:value` pairs.\nMultiple values may be "
# "separated by commas.",
# )
class FindingFilter(CommonFindingFilters):
scan = UUIDFilter(method="filter_scan_id")
scan__in = UUIDInFilter(method="filter_scan_id_in")
@@ -382,6 +406,15 @@ class FindingFilter(FilterSet):
},
}
def filter_resource_type(self, queryset, name, value):
return queryset.filter(resource_types__contains=[value])
def filter_resource_region(self, queryset, name, value):
return queryset.filter(resource_regions__contains=[value])
def filter_resource_service(self, queryset, name, value):
return queryset.filter(resource_services__contains=[value])
def filter_queryset(self, queryset):
if not (self.data.get("scan") or self.data.get("scan__in")) and not (
self.data.get("inserted_at")
@@ -500,16 +533,6 @@ class FindingFilter(FilterSet):
return queryset.filter(id__lt=end)
def filter_resource_tag(self, queryset, name, value):
overall_query = Q()
for key_value_pair in value:
tag_key, tag_value = key_value_pair.split(":", 1)
overall_query |= Q(
resources__tags__key__icontains=tag_key,
resources__tags__value__icontains=tag_value,
)
return queryset.filter(overall_query).distinct()
@staticmethod
def maybe_date_to_datetime(value):
dt = value
@@ -518,6 +541,31 @@ class FindingFilter(FilterSet):
return dt
class LatestFindingFilter(CommonFindingFilters):
class Meta:
model = Finding
fields = {
"id": ["exact", "in"],
"uid": ["exact", "in"],
"delta": ["exact", "in"],
"status": ["exact", "in"],
"severity": ["exact", "in"],
"impact": ["exact", "in"],
"check_id": ["exact", "in", "icontains"],
}
filter_overrides = {
FindingDeltaEnumField: {
"filter_class": CharFilter,
},
StatusEnumField: {
"filter_class": CharFilter,
},
SeverityEnumField: {
"filter_class": CharFilter,
},
}
class ProviderSecretFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
@@ -589,12 +637,11 @@ class RoleFilter(FilterSet):
class ComplianceOverviewFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
provider_type = ChoiceFilter(choices=Provider.ProviderChoices.choices)
provider_type__in = ChoiceInFilter(choices=Provider.ProviderChoices.choices)
scan_id = UUIDFilter(field_name="scan__id")
scan_id = UUIDFilter(field_name="scan_id")
region = CharFilter(field_name="region")
class Meta:
        model = ComplianceRequirementOverview
fields = {
"inserted_at": ["date", "gte", "lte"],
"compliance_id": ["exact", "icontains"],
@@ -614,12 +661,6 @@ class ScanSummaryFilter(FilterSet):
field_name="scan__provider__provider", choices=Provider.ProviderChoices.choices
)
region = CharFilter(field_name="region")
muted_findings = BooleanFilter(method="filter_muted_findings")
def filter_muted_findings(self, queryset, name, value):
if not value:
return queryset.exclude(muted__gt=0)
return queryset
class Meta:
model = ScanSummary
@@ -630,8 +671,6 @@ class ScanSummaryFilter(FilterSet):
class ServiceOverviewFilter(ScanSummaryFilter):
muted_findings = None
def is_valid(self):
# Check if at least one of the inserted_at filters is present
inserted_at_filters = [
@@ -0,0 +1,80 @@
from django.contrib.sites.models import Site
from django.core.management.base import BaseCommand
from django.db import DEFAULT_DB_ALIAS, connection, connections, transaction
from django.db.migrations.recorder import MigrationRecorder
def table_exists(connection, table_name):
    # Use the caller's connection so the check honors the --database option
    with connection.cursor() as cursor:
cursor.execute(
"""
SELECT EXISTS (
SELECT 1 FROM information_schema.tables
WHERE table_name = %s
)
""",
[table_name],
)
return cursor.fetchone()[0]
class Command(BaseCommand):
help = "Fix migration inconsistency between socialaccount and sites"
def add_arguments(self, parser):
parser.add_argument(
"--database",
default=DEFAULT_DB_ALIAS,
help="Specifies the database to operate on.",
)
def handle(self, *args, **options):
db = options["database"]
connection = connections[db]
recorder = MigrationRecorder(connection)
applied = set(recorder.applied_migrations())
has_social = ("socialaccount", "0001_initial") in applied
with connection.cursor() as cursor:
cursor.execute(
"""
SELECT EXISTS (
SELECT FROM information_schema.tables
WHERE table_name = 'django_site'
);
"""
)
site_table_exists = cursor.fetchone()[0]
if has_social and not site_table_exists:
self.stdout.write(
f"Detected inconsistency in '{db}'. Creating 'django_site' table manually..."
)
with transaction.atomic(using=db):
with connection.schema_editor() as schema_editor:
schema_editor.create_model(Site)
recorder.record_applied("sites", "0001_initial")
recorder.record_applied("sites", "0002_alter_domain_unique")
self.stdout.write(
"Fixed: 'django_site' table created and migrations registered."
)
# Ensure the relationship table also exists
        if not table_exists(connection, "socialaccount_socialapp_sites"):
self.stdout.write(
"Detected missing 'socialaccount_socialapp_sites' table. Creating manually..."
)
with connection.schema_editor() as schema_editor:
from allauth.socialaccount.models import SocialApp
schema_editor.create_model(
SocialApp._meta.get_field("sites").remote_field.through
)
self.stdout.write(
"Fixed: 'socialaccount_socialapp_sites' table created."
)
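For reference, a hedged invocation sketch; the command name is an assumption, since it comes from this module's filename, which the diff does not show:

# Hypothetical invocation; replace the command name with the module's real one.
from django.core.management import call_command

call_command("fix_socialaccount_sites_migrations", database="default")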
@@ -12,6 +12,7 @@ from api.models import (
Provider,
Resource,
ResourceFindingMapping,
ResourceScanSummary,
Scan,
StatusChoices,
)
@@ -133,6 +134,7 @@ class Command(BaseCommand):
region=random.choice(possible_regions),
service=random.choice(possible_services),
type=random.choice(possible_types),
inserted_at="2024-10-01T00:00:00Z",
)
)
@@ -181,6 +183,10 @@ class Command(BaseCommand):
"servicename": assigned_resource.service,
"resourcetype": assigned_resource.type,
},
resource_types=[assigned_resource.type],
resource_regions=[assigned_resource.region],
resource_services=[assigned_resource.service],
inserted_at="2024-10-01T00:00:00Z",
)
)
@@ -197,12 +203,22 @@ class Command(BaseCommand):
# Create ResourceFindingMapping
mappings = []
            scan_resource_cache: set[tuple] = set()
            for index, finding_instance in enumerate(findings):
resource_instance = resources[findings_resources_mapping[index]]
mappings.append(
ResourceFindingMapping(
tenant_id=tenant_id,
                        resource=resource_instance,
                        finding=finding_instance,
)
)
scan_resource_cache.add(
(
str(resource_instance.id),
resource_instance.service,
resource_instance.region,
resource_instance.type,
)
)
@@ -220,6 +236,38 @@ class Command(BaseCommand):
"Resource-finding mappings created successfully.\n\n"
)
)
with rls_transaction(tenant_id):
scan.progress = 99
scan.save()
self.stdout.write(self.style.WARNING("Creating finding filter values..."))
resource_scan_summaries = [
ResourceScanSummary(
tenant_id=tenant_id,
scan_id=str(scan.id),
resource_id=resource_id,
service=service,
region=region,
resource_type=resource_type,
)
for resource_id, service, region, resource_type in scan_resource_cache
]
num_batches = ceil(len(resource_scan_summaries) / batch_size)
            for i in tqdm(
                range(0, len(resource_scan_summaries), batch_size),
                total=num_batches,
            ):
with rls_transaction(tenant_id):
ResourceScanSummary.objects.bulk_create(
resource_scan_summaries[i : i + batch_size],
ignore_conflicts=True,
)
self.stdout.write(
self.style.SUCCESS("Finding filter values created successfully.\n\n")
)
except Exception as e:
self.stdout.write(self.style.ERROR(f"Failed to populate test data: {e}"))
scan_state = "failed"
@@ -50,12 +50,6 @@ class Migration(migrations.Migration):
),
("configuration", models.JSONField(default=dict)),
("_credentials", models.BinaryField(db_column="credentials")),
(
"providers",
models.ManyToManyField(
related_name="integrations", to="api.provider", blank=True
),
),
(
"tenant",
models.ForeignKey(
@@ -124,4 +118,14 @@ class Migration(migrations.Migration):
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.AddField(
model_name="integration",
name="providers",
field=models.ManyToManyField(
blank=True,
related_name="integrations",
through="api.IntegrationProviderRelationship",
to="api.provider",
),
),
]
@@ -0,0 +1,26 @@
# Generated by Django 5.1.5 on 2025-03-25 11:29
from django.db import migrations, models
import api.db_utils
class Migration(migrations.Migration):
dependencies = [
("api", "0014_integrations"),
]
operations = [
migrations.AddField(
model_name="finding",
name="muted",
field=models.BooleanField(default=False),
),
migrations.AlterField(
model_name="finding",
name="status",
field=api.db_utils.StatusEnumField(
choices=[("FAIL", "Fail"), ("PASS", "Pass"), ("MANUAL", "Manual")]
),
),
]
@@ -0,0 +1,32 @@
# Generated by Django 5.1.5 on 2025-03-31 10:46
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0015_finding_muted"),
]
operations = [
migrations.AddField(
model_name="finding",
name="compliance",
field=models.JSONField(blank=True, default=dict, null=True),
),
migrations.AddField(
model_name="resource",
name="details",
field=models.TextField(blank=True, null=True),
),
migrations.AddField(
model_name="resource",
name="metadata",
field=models.TextField(blank=True, null=True),
),
migrations.AddField(
model_name="resource",
name="partition",
field=models.TextField(blank=True, null=True),
),
]
@@ -0,0 +1,32 @@
# Generated by Django 5.1.7 on 2025-04-16 08:47
from django.db import migrations
import api.db_utils
class Migration(migrations.Migration):
dependencies = [
("api", "0016_finding_compliance_resource_details_and_more"),
]
operations = [
migrations.AlterField(
model_name="provider",
name="provider",
field=api.db_utils.ProviderEnumField(
choices=[
("aws", "AWS"),
("azure", "Azure"),
("gcp", "GCP"),
("kubernetes", "Kubernetes"),
("m365", "M365"),
],
default="aws",
),
),
migrations.RunSQL(
"ALTER TYPE provider ADD VALUE IF NOT EXISTS 'm365';",
reverse_sql=migrations.RunSQL.noop,
),
]
@@ -0,0 +1,81 @@
# Generated by Django 5.1.7 on 2025-05-05 10:01
import uuid
import django.db.models.deletion
import uuid6
from django.db import migrations, models
import api.rls
class Migration(migrations.Migration):
dependencies = [
("api", "0017_m365_provider"),
]
operations = [
migrations.CreateModel(
name="ResourceScanSummary",
fields=[
(
"id",
models.BigAutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("scan_id", models.UUIDField(db_index=True, default=uuid6.uuid7)),
("resource_id", models.UUIDField(db_index=True, default=uuid.uuid4)),
("service", models.CharField(max_length=100)),
("region", models.CharField(max_length=100)),
("resource_type", models.CharField(max_length=100)),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "resource_scan_summaries",
"indexes": [
models.Index(
fields=["tenant_id", "scan_id", "service"],
name="rss_tenant_scan_svc_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "region"],
name="rss_tenant_scan_reg_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "resource_type"],
name="rss_tenant_scan_type_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "region", "service"],
name="rss_tenant_scan_reg_svc_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "service", "resource_type"],
name="rss_tenant_scan_svc_type_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "region", "resource_type"],
name="rss_tenant_scan_reg_type_idx",
),
],
"unique_together": {("tenant_id", "scan_id", "resource_id")},
},
),
migrations.AddConstraint(
model_name="resourcescansummary",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_resourcescansummary",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
]
@@ -0,0 +1,42 @@
import django.contrib.postgres.fields
import django.contrib.postgres.indexes
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0018_resource_scan_summaries"),
]
operations = [
migrations.AddField(
model_name="finding",
name="resource_regions",
field=django.contrib.postgres.fields.ArrayField(
base_field=models.CharField(max_length=100),
blank=True,
null=True,
size=None,
),
),
migrations.AddField(
model_name="finding",
name="resource_services",
field=django.contrib.postgres.fields.ArrayField(
base_field=models.CharField(max_length=100),
blank=True,
null=True,
size=None,
),
),
migrations.AddField(
model_name="finding",
name="resource_types",
field=django.contrib.postgres.fields.ArrayField(
base_field=models.CharField(max_length=100),
blank=True,
null=True,
size=None,
),
),
]
@@ -0,0 +1,86 @@
from functools import partial
from django.db import migrations
from api.db_utils import create_index_on_partitions, drop_index_on_partitions
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0019_finding_denormalize_resource_fields"),
]
operations = [
migrations.RunPython(
partial(
create_index_on_partitions,
parent_table="findings",
index_name="gin_find_service_idx",
columns="resource_services",
method="GIN",
),
reverse_code=partial(
drop_index_on_partitions,
parent_table="findings",
index_name="gin_find_service_idx",
),
),
migrations.RunPython(
partial(
create_index_on_partitions,
parent_table="findings",
index_name="gin_find_region_idx",
columns="resource_regions",
method="GIN",
),
reverse_code=partial(
drop_index_on_partitions,
parent_table="findings",
index_name="gin_find_region_idx",
),
),
migrations.RunPython(
partial(
create_index_on_partitions,
parent_table="findings",
index_name="gin_find_rtype_idx",
columns="resource_types",
method="GIN",
),
reverse_code=partial(
drop_index_on_partitions,
parent_table="findings",
index_name="gin_find_rtype_idx",
),
),
migrations.RunPython(
partial(
drop_index_on_partitions,
parent_table="findings",
index_name="findings_uid_idx",
),
reverse_code=partial(
create_index_on_partitions,
parent_table="findings",
index_name="findings_uid_idx",
columns="uid",
method="BTREE",
),
),
migrations.RunPython(
partial(
drop_index_on_partitions,
parent_table="findings",
index_name="findings_filter_idx",
),
reverse_code=partial(
create_index_on_partitions,
parent_table="findings",
index_name="findings_filter_idx",
columns="scan_id, impact, severity, status, check_id, delta",
method="BTREE",
),
),
]
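Note the two-step pattern used here and in the follow-up migration: this non-atomic migration builds or drops the physical indexes partition by partition with CONCURRENTLY, and the next migration records the same index changes on the parent Finding model so Django's migration state stays in step with the database.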
@@ -0,0 +1,37 @@
import django.contrib.postgres.indexes
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("api", "0020_findings_new_performance_indexes_partitions"),
]
operations = [
migrations.AddIndex(
model_name="finding",
index=django.contrib.postgres.indexes.GinIndex(
fields=["resource_services"], name="gin_find_service_idx"
),
),
migrations.AddIndex(
model_name="finding",
index=django.contrib.postgres.indexes.GinIndex(
fields=["resource_regions"], name="gin_find_region_idx"
),
),
migrations.AddIndex(
model_name="finding",
index=django.contrib.postgres.indexes.GinIndex(
fields=["resource_types"], name="gin_find_rtype_idx"
),
),
migrations.RemoveIndex(
model_name="finding",
name="findings_uid_idx",
),
migrations.RemoveIndex(
model_name="finding",
name="findings_filter_idx",
),
]
@@ -0,0 +1,38 @@
# Generated by Django 5.1.8 on 2025-05-12 10:04
from django.contrib.postgres.operations import AddIndexConcurrently
from django.db import migrations, models
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0021_findings_new_performance_indexes_parent"),
("django_celery_beat", "0019_alter_periodictasks_options"),
]
operations = [
AddIndexConcurrently(
model_name="scan",
index=models.Index(
condition=models.Q(("state", "completed")),
fields=["tenant_id", "provider_id", "state", "-inserted_at"],
name="scans_prov_state_ins_desc_idx",
),
),
AddIndexConcurrently(
model_name="scansummary",
index=models.Index(
fields=["tenant_id", "scan_id", "service"],
name="ss_tenant_scan_service_idx",
),
),
AddIndexConcurrently(
model_name="scansummary",
index=models.Index(
fields=["tenant_id", "scan_id", "severity"],
name="ss_tenant_scan_severity_idx",
),
),
]
@@ -0,0 +1,28 @@
# Generated by Django 5.1.8 on 2025-05-12 10:18
from django.contrib.postgres.operations import AddIndexConcurrently
from django.db import migrations, models
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0022_scan_summaries_performance_indexes"),
]
operations = [
AddIndexConcurrently(
model_name="resource",
index=models.Index(
fields=["tenant_id", "id"], name="resources_tenant_id_idx"
),
),
AddIndexConcurrently(
model_name="resource",
index=models.Index(
fields=["tenant_id", "provider_id"],
name="resources_tenant_provider_idx",
),
),
]
@@ -0,0 +1,29 @@
from functools import partial
from django.db import migrations
from api.db_utils import create_index_on_partitions, drop_index_on_partitions
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0023_resources_lookup_optimization"),
]
operations = [
migrations.RunPython(
partial(
create_index_on_partitions,
parent_table="findings",
index_name="find_tenant_uid_inserted_idx",
columns="tenant_id, uid, inserted_at DESC",
),
reverse_code=partial(
drop_index_on_partitions,
parent_table="findings",
index_name="find_tenant_uid_inserted_idx",
),
)
]
@@ -0,0 +1,17 @@
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0024_findings_uid_index_partitions"),
]
operations = [
migrations.AddIndex(
model_name="finding",
index=models.Index(
fields=["tenant_id", "uid", "-inserted_at"],
name="find_tenant_uid_inserted_idx",
),
),
]
@@ -0,0 +1,14 @@
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("api", "0025_findings_uid_index_parent"),
]
operations = [
migrations.RunSQL(
"ALTER TYPE provider_secret_type ADD VALUE IF NOT EXISTS 'service_account';",
reverse_sql=migrations.RunSQL.noop,
),
]
@@ -0,0 +1,124 @@
# Generated by Django 5.1.8 on 2025-05-21 11:37
import uuid
import django.db.models.deletion
from django.db import migrations, models
import api.db_utils
import api.rls
from api.rls import RowLevelSecurityConstraint
class Migration(migrations.Migration):
dependencies = [
("api", "0026_provider_secret_gcp_service_account"),
]
operations = [
migrations.CreateModel(
name="ComplianceRequirementOverview",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
("compliance_id", models.TextField(blank=False)),
("framework", models.TextField(blank=False)),
("version", models.TextField(blank=True)),
("description", models.TextField(blank=True)),
("region", models.TextField(blank=False)),
("requirement_id", models.TextField(blank=False)),
(
"requirement_status",
api.db_utils.StatusEnumField(
choices=[
("FAIL", "Fail"),
("PASS", "Pass"),
("MANUAL", "Manual"),
]
),
),
("passed_checks", models.IntegerField(default=0)),
("failed_checks", models.IntegerField(default=0)),
("total_checks", models.IntegerField(default=0)),
(
"scan",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="compliance_requirements_overviews",
related_query_name="compliance_requirements_overview",
to="api.scan",
),
),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "compliance_requirements_overviews",
"abstract": False,
"indexes": [
models.Index(
fields=["tenant_id", "scan_id"], name="cro_tenant_scan_idx"
),
models.Index(
fields=["tenant_id", "scan_id", "compliance_id"],
name="cro_scan_comp_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "compliance_id", "region"],
name="cro_scan_comp_reg_idx",
),
models.Index(
fields=[
"tenant_id",
"scan_id",
"compliance_id",
"requirement_id",
],
name="cro_scan_comp_req_idx",
),
models.Index(
fields=[
"tenant_id",
"scan_id",
"compliance_id",
"requirement_id",
"region",
],
name="cro_scan_comp_req_reg_idx",
),
],
"constraints": [
models.UniqueConstraint(
fields=(
"tenant_id",
"scan_id",
"compliance_id",
"requirement_id",
"region",
),
name="unique_tenant_compliance_requirement_overview",
)
],
},
),
migrations.AddConstraint(
model_name="ComplianceRequirementOverview",
constraint=RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_compliancerequirementoverview",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
]
@@ -0,0 +1,29 @@
from functools import partial
from django.db import migrations
from api.db_utils import create_index_on_partitions, drop_index_on_partitions
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0027_compliance_requirement_overviews"),
]
operations = [
migrations.RunPython(
partial(
create_index_on_partitions,
parent_table="findings",
index_name="find_tenant_scan_check_idx",
columns="tenant_id, scan_id, check_id",
),
reverse_code=partial(
drop_index_on_partitions,
parent_table="findings",
index_name="find_tenant_scan_check_idx",
),
)
]
@@ -0,0 +1,17 @@
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0028_findings_check_index_partitions"),
]
operations = [
migrations.AddIndex(
model_name="finding",
index=models.Index(
fields=["tenant_id", "scan_id", "check_id"],
name="find_tenant_scan_check_idx",
),
),
]
@@ -0,0 +1,120 @@
# Generated by Django 5.1.8 on 2025-05-15 09:54
import uuid
import django.db.models.deletion
from django.db import migrations, models
import api.rls
class Migration(migrations.Migration):
dependencies = [
("api", "0029_findings_check_index_parent"),
]
operations = [
migrations.CreateModel(
name="SAMLDomainIndex",
fields=[
(
"id",
models.BigAutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("email_domain", models.CharField(max_length=254, unique=True)),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "saml_domain_index",
},
),
migrations.AddConstraint(
model_name="samldomainindex",
constraint=models.UniqueConstraint(
fields=("email_domain", "tenant"),
name="unique_resources_by_email_domain",
),
),
migrations.AddConstraint(
model_name="samldomainindex",
constraint=api.rls.BaseSecurityConstraint(
name="statements_on_samldomainindex",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.CreateModel(
name="SAMLConfiguration",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
(
"email_domain",
models.CharField(
help_text="Email domain used to identify the tenant, e.g. prowlerdemo.com",
max_length=254,
unique=True,
),
),
(
"metadata_xml",
models.TextField(
help_text="Raw IdP metadata XML to configure SingleSignOnService, certificates, etc."
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
("updated_at", models.DateTimeField(auto_now=True)),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "saml_configurations",
},
),
migrations.AddConstraint(
model_name="samlconfiguration",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_samlconfiguration",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.AddConstraint(
model_name="samlconfiguration",
constraint=models.UniqueConstraint(
fields=("tenant",), name="unique_samlconfig_per_tenant"
),
),
migrations.AlterField(
model_name="integration",
name="integration_type",
field=api.db_utils.IntegrationTypeEnumField(
choices=[
("amazon_s3", "Amazon S3"),
("aws_security_hub", "AWS Security Hub"),
("jira", "JIRA"),
("slack", "Slack"),
]
),
),
]
@@ -0,0 +1,106 @@
# Generated by Django 5.1.10 on 2025-06-12 12:45
import uuid
import django.core.validators
import django.db.models.deletion
from django.db import migrations, models
import api.rls
class Migration(migrations.Migration):
dependencies = [
("api", "0030_samlconfigurations"),
]
operations = [
migrations.CreateModel(
name="LighthouseConfiguration",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
("updated_at", models.DateTimeField(auto_now=True)),
(
"name",
models.CharField(
help_text="Name of the configuration",
max_length=100,
validators=[django.core.validators.MinLengthValidator(3)],
),
),
(
"api_key",
models.BinaryField(
help_text="Encrypted API key for the LLM service"
),
),
(
"model",
models.CharField(
choices=[
("gpt-4o-2024-11-20", "GPT-4o v2024-11-20"),
("gpt-4o-2024-08-06", "GPT-4o v2024-08-06"),
("gpt-4o-2024-05-13", "GPT-4o v2024-05-13"),
("gpt-4o", "GPT-4o Default"),
("gpt-4o-mini-2024-07-18", "GPT-4o Mini v2024-07-18"),
("gpt-4o-mini", "GPT-4o Mini Default"),
],
help_text="Must be one of the supported model names",
max_length=50,
),
),
(
"temperature",
models.FloatField(default=0, help_text="Must be between 0 and 1"),
),
(
"max_tokens",
models.IntegerField(
default=4000, help_text="Must be between 500 and 5000"
),
),
(
"business_context",
models.TextField(
blank=True,
default="",
help_text="Additional business context for this AI model configuration",
),
),
("is_active", models.BooleanField(default=True)),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "lighthouse_configurations",
"abstract": False,
"constraints": [
models.UniqueConstraint(
fields=("tenant_id",),
name="unique_lighthouse_config_per_tenant",
),
],
},
),
migrations.AddConstraint(
model_name="lighthouseconfiguration",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_lighthouseconfiguration",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
]
+562 -19
@@ -1,12 +1,20 @@
import json
import logging
import re
import xml.etree.ElementTree as ET
from uuid import UUID, uuid4
from cryptography.fernet import Fernet
from allauth.socialaccount.models import SocialApp
from config.custom_logging import BackendLogger
from config.settings.social_login import SOCIALACCOUNT_PROVIDERS
from cryptography.fernet import Fernet, InvalidToken
from django.conf import settings
from django.contrib.auth.models import AbstractBaseUser
from django.contrib.postgres.fields import ArrayField
from django.contrib.postgres.indexes import GinIndex
from django.contrib.postgres.search import SearchVector, SearchVectorField
from django.contrib.sites.models import Site
from django.core.exceptions import ValidationError
from django.core.validators import MinLengthValidator
from django.db import models
from django.db.models import Q
@@ -18,6 +26,7 @@ from psqlextra.models import PostgresPartitionedModel
from psqlextra.types import PostgresPartitioningMethod
from uuid6 import uuid7
from api.db_router import MainRouter
from api.db_utils import (
CustomUserManager,
FindingDeltaEnumField,
@@ -48,6 +57,8 @@ fernet = Fernet(settings.SECRETS_ENCRYPTION_KEY.encode())
# Convert Prowler Severity enum to Django TextChoices
SeverityChoices = enum_to_choices(Severity)
logger = logging.getLogger(BackendLogger.API)
class StatusChoices(models.TextChoices):
"""
@@ -59,7 +70,6 @@ class StatusChoices(models.TextChoices):
FAIL = "FAIL", _("Fail")
PASS = "PASS", _("Pass")
MANUAL = "MANUAL", _("Manual")
MUTED = "MUTED", _("Muted")
class StateChoices(models.TextChoices):
@@ -192,6 +202,7 @@ class Provider(RowLevelSecurityProtectedModel):
AZURE = "azure", _("Azure")
GCP = "gcp", _("GCP")
KUBERNETES = "kubernetes", _("Kubernetes")
M365 = "m365", _("M365")
@staticmethod
def validate_aws_uid(value):
@@ -215,6 +226,19 @@ class Provider(RowLevelSecurityProtectedModel):
pointer="/data/attributes/uid",
)
@staticmethod
def validate_m365_uid(value):
if not re.match(
r"""^(?!-)[A-Za-z0-9](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?(?:\.(?!-)[A-Za-z0-9]"""
r"""(?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?)*\.[A-Za-z]{2,}$""",
value,
):
raise ModelValidationError(
detail="M365 domain ID must be a valid domain.",
code="m365-uid",
pointer="/data/attributes/uid",
)
@staticmethod
def validate_gcp_uid(value):
if not re.match(r"^[a-z][a-z0-9-]{5,29}$", value):
@@ -228,7 +252,7 @@ class Provider(RowLevelSecurityProtectedModel):
@staticmethod
def validate_kubernetes_uid(value):
if not re.match(
r"^[a-z0-9][A-Za-z0-9_.:\/-]{1,250}$",
r"^[a-zA-Z0-9][a-zA-Z0-9._@:\/-]{1,250}$",
value,
):
raise ModelValidationError(
@@ -416,6 +440,7 @@ class Scan(RowLevelSecurityProtectedModel):
PeriodicTask, on_delete=models.CASCADE, null=True, blank=True
)
output_location = models.CharField(blank=True, null=True, max_length=200)
# TODO: mutelist foreign key
class Meta(RowLevelSecurityProtectedModel.Meta):
@@ -438,6 +463,11 @@ class Scan(RowLevelSecurityProtectedModel):
fields=["tenant_id", "provider_id", "state", "inserted_at"],
name="scans_prov_state_insert_idx",
),
models.Index(
fields=["tenant_id", "provider_id", "state", "-inserted_at"],
condition=Q(state=StateChoices.COMPLETED),
name="scans_prov_state_ins_desc_idx",
),
]
class JSONAPIMeta:
@@ -519,6 +549,11 @@ class Resource(RowLevelSecurityProtectedModel):
editable=False,
)
metadata = models.TextField(blank=True, null=True)
details = models.TextField(blank=True, null=True)
partition = models.TextField(blank=True, null=True)
# Relationships
tags = models.ManyToManyField(
ResourceTag,
verbose_name="Tags associated with the resource, by provider",
@@ -559,6 +594,11 @@ class Resource(RowLevelSecurityProtectedModel):
name="resource_tenant_metadata_idx",
),
GinIndex(fields=["text_search"], name="gin_resources_search_idx"),
models.Index(fields=["tenant_id", "id"], name="resources_tenant_id_idx"),
models.Index(
fields=["tenant_id", "provider_id"],
name="resources_tenant_provider_idx",
),
]
constraints = [
@@ -656,6 +696,23 @@ class Finding(PostgresPartitionedModel, RowLevelSecurityProtectedModel):
tags = models.JSONField(default=dict, null=True, blank=True)
check_id = models.CharField(max_length=100, blank=False, null=False)
check_metadata = models.JSONField(default=dict, null=False)
muted = models.BooleanField(default=False, null=False)
compliance = models.JSONField(default=dict, null=True, blank=True)
# Denormalize resource data for performance
resource_regions = ArrayField(
models.CharField(max_length=100), blank=True, null=True
)
resource_services = ArrayField(
models.CharField(max_length=100),
blank=True,
null=True,
)
resource_types = ArrayField(
models.CharField(max_length=100),
blank=True,
null=True,
)
# Relationships
scan = models.ForeignKey(to=Scan, related_name="findings", on_delete=models.CASCADE)
@@ -697,18 +754,6 @@ class Finding(PostgresPartitionedModel, RowLevelSecurityProtectedModel):
]
indexes = [
models.Index(fields=["uid"], name="findings_uid_idx"),
models.Index(
fields=[
"scan_id",
"impact",
"severity",
"status",
"check_id",
"delta",
],
name="findings_filter_idx",
),
models.Index(fields=["tenant_id", "id"], name="findings_tenant_and_id_idx"),
GinIndex(fields=["text_search"], name="gin_findings_search_idx"),
models.Index(fields=["tenant_id", "scan_id"], name="find_tenant_scan_idx"),
@@ -720,19 +765,46 @@ class Finding(PostgresPartitionedModel, RowLevelSecurityProtectedModel):
condition=Q(delta="new"),
name="find_delta_new_idx",
),
models.Index(
fields=["tenant_id", "uid", "-inserted_at"],
name="find_tenant_uid_inserted_idx",
),
GinIndex(fields=["resource_services"], name="gin_find_service_idx"),
GinIndex(fields=["resource_regions"], name="gin_find_region_idx"),
GinIndex(fields=["resource_types"], name="gin_find_rtype_idx"),
models.Index(
fields=["tenant_id", "scan_id", "check_id"],
name="find_tenant_scan_check_idx",
),
]
class JSONAPIMeta:
resource_name = "findings"
def add_resources(self, resources: list[Resource] | None):
# Add new relationships with the tenant_id field
if not resources:
return
self.resource_regions = self.resource_regions or []
self.resource_services = self.resource_services or []
self.resource_types = self.resource_types or []
# Deduplication
regions = set(self.resource_regions)
services = set(self.resource_services)
types = set(self.resource_types)
for resource in resources:
ResourceFindingMapping.objects.update_or_create(
resource=resource, finding=self, tenant_id=self.tenant_id
)
regions.add(resource.region)
services.add(resource.service)
types.add(resource.type)
# Save the instance
self.resource_regions = list(regions)
self.resource_services = list(services)
self.resource_types = list(types)
self.save()
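add_resources is what keeps the denormalized columns honest: alongside maintaining the ResourceFindingMapping rows, it folds each resource's region, service, and type into the corresponding arrays, which is what lets the GIN indexes above answer the region/service/resource_type filters without joining through the resources table.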
@@ -792,6 +864,7 @@ class ProviderSecret(RowLevelSecurityProtectedModel):
class TypeChoices(models.TextChoices):
STATIC = "static", _("Key-value pairs")
ROLE = "role", _("Role assumption")
SERVICE_ACCOUNT = "service_account", _("GCP Service Account Key")
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
@@ -1084,6 +1157,78 @@ class ComplianceOverview(RowLevelSecurityProtectedModel):
resource_name = "compliance-overviews"
class ComplianceRequirementOverview(RowLevelSecurityProtectedModel):
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
compliance_id = models.TextField(blank=False)
framework = models.TextField(blank=False)
version = models.TextField(blank=True)
description = models.TextField(blank=True)
region = models.TextField(blank=False)
requirement_id = models.TextField(blank=False)
requirement_status = StatusEnumField(choices=StatusChoices)
passed_checks = models.IntegerField(default=0)
failed_checks = models.IntegerField(default=0)
total_checks = models.IntegerField(default=0)
scan = models.ForeignKey(
Scan,
on_delete=models.CASCADE,
related_name="compliance_requirements_overviews",
related_query_name="compliance_requirements_overview",
)
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "compliance_requirements_overviews"
constraints = [
models.UniqueConstraint(
fields=(
"tenant_id",
"scan_id",
"compliance_id",
"requirement_id",
"region",
),
name="unique_tenant_compliance_requirement_overview",
),
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "DELETE"],
),
]
indexes = [
models.Index(fields=["tenant_id", "scan_id"], name="cro_tenant_scan_idx"),
models.Index(
fields=["tenant_id", "scan_id", "compliance_id"],
name="cro_scan_comp_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "compliance_id", "region"],
name="cro_scan_comp_reg_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "compliance_id", "requirement_id"],
name="cro_scan_comp_req_idx",
),
models.Index(
fields=[
"tenant_id",
"scan_id",
"compliance_id",
"requirement_id",
"region",
],
name="cro_scan_comp_req_reg_idx",
),
]
class JSONAPIMeta:
resource_name = "compliance-requirements-overviews"
class ScanSummary(RowLevelSecurityProtectedModel):
objects = ActiveProviderManager()
all_objects = models.Manager()
@@ -1134,7 +1279,15 @@ class ScanSummary(RowLevelSecurityProtectedModel):
models.Index(
fields=["tenant_id", "scan_id"],
name="scan_summaries_tenant_scan_idx",
            ),
models.Index(
fields=["tenant_id", "scan_id", "service"],
name="ss_tenant_scan_service_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "severity"],
name="ss_tenant_scan_severity_idx",
),
]
class JSONAPIMeta:
@@ -1144,7 +1297,6 @@ class ScanSummary(RowLevelSecurityProtectedModel):
class Integration(RowLevelSecurityProtectedModel):
class IntegrationChoices(models.TextChoices):
S3 = "amazon_s3", _("Amazon S3")
SAML = "saml", _("SAML")
AWS_SECURITY_HUB = "aws_security_hub", _("AWS Security Hub")
JIRA = "jira", _("JIRA")
SLACK = "slack", _("Slack")
@@ -1216,3 +1368,394 @@ class IntegrationProviderRelationship(RowLevelSecurityProtectedModel):
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
]
class SAMLDomainIndex(models.Model):
"""
Public index of SAML domains. No RLS. Used for fast lookup in SAML login flow.
"""
email_domain = models.CharField(max_length=254, unique=True)
tenant = models.ForeignKey("Tenant", on_delete=models.CASCADE)
class Meta:
db_table = "saml_domain_index"
constraints = [
models.UniqueConstraint(
fields=("email_domain", "tenant"),
name="unique_resources_by_email_domain",
),
BaseSecurityConstraint(
name="statements_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
]
class SAMLConfiguration(RowLevelSecurityProtectedModel):
"""
Stores per-tenant SAML settings, including email domain and IdP metadata.
Automatically syncs to a SocialApp instance on save.
Note:
This model exists to provide a tenant-aware abstraction over SAML configuration.
It supports row-level security, custom validation, and metadata parsing, enabling
Prowler to expose a clean API and admin interface for managing SAML integrations.
Although Django Allauth uses the SocialApp model to store provider configuration,
it is not designed for multi-tenant use. SocialApp lacks support for tenant scoping,
email domain mapping, and structured metadata handling.
By managing SAMLConfiguration separately, we ensure:
- Strong isolation between tenants via RLS.
- Ownership of raw IdP metadata and its validation.
- An explicit link between SAML config and business-level identifiers (e.g. email domain).
- Programmatic transformation into the SocialApp format used by Allauth.
In short, this model acts as a secure and user-friendly layer over Allauth's lower-level primitives.
"""
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
email_domain = models.CharField(
max_length=254,
unique=True,
help_text="Email domain used to identify the tenant, e.g. prowlerdemo.com",
)
metadata_xml = models.TextField(
help_text="Raw IdP metadata XML to configure SingleSignOnService, certificates, etc."
)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class JSONAPIMeta:
resource_name = "saml-configurations"
class Meta:
db_table = "saml_configurations"
constraints = [
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
# 1 config per tenant
models.UniqueConstraint(
fields=["tenant"],
name="unique_samlconfig_per_tenant",
),
]
def clean(self, old_email_domain=None):
# Domain must not contain @
if "@" in self.email_domain:
raise ValidationError({"email_domain": "Domain must not contain @"})
# Enforce at most one config per tenant
qs = SAMLConfiguration.objects.filter(tenant=self.tenant)
# Exclude ourselves in case of update
if self.pk:
qs = qs.exclude(pk=self.pk)
if qs.exists():
raise ValidationError(
{"tenant": "A SAML configuration already exists for this tenant."}
)
# The email domain must be unique in the entire system
qs = SAMLConfiguration.objects.using(MainRouter.admin_db).filter(
email_domain__iexact=self.email_domain
)
if qs.exists() and old_email_domain != self.email_domain:
raise ValidationError(
{"tenant": "There is a problem with your email domain."}
)
def save(self, *args, **kwargs):
self.email_domain = self.email_domain.strip().lower()
is_create = not SAMLConfiguration.objects.filter(pk=self.pk).exists()
if not is_create:
old = SAMLConfiguration.objects.get(pk=self.pk)
old_email_domain = old.email_domain
old_metadata_xml = old.metadata_xml
else:
old_email_domain = None
old_metadata_xml = None
self.clean(old_email_domain)
super().save(*args, **kwargs)
if is_create or (
old_email_domain != self.email_domain
or old_metadata_xml != self.metadata_xml
):
self._sync_social_app(old_email_domain)
# Sync the public index
if not is_create and old_email_domain and old_email_domain != self.email_domain:
SAMLDomainIndex.objects.filter(email_domain=old_email_domain).delete()
# Create/update the new domain index
SAMLDomainIndex.objects.update_or_create(
email_domain=self.email_domain, defaults={"tenant": self.tenant}
)
def _parse_metadata(self):
"""
Parse the raw IdP metadata XML and extract:
- entity_id
- sso_url
- slo_url (may be None)
- x509cert (required)
"""
ns = {
"md": "urn:oasis:names:tc:SAML:2.0:metadata",
"ds": "http://www.w3.org/2000/09/xmldsig#",
}
try:
root = ET.fromstring(self.metadata_xml)
except ET.ParseError as e:
raise ValidationError({"metadata_xml": f"Invalid XML: {e}"})
# Entity ID
entity_id = root.attrib.get("entityID")
# SSO endpoint (must exist)
sso = root.find(".//md:IDPSSODescriptor/md:SingleSignOnService", ns)
if sso is None or "Location" not in sso.attrib:
raise ValidationError(
{"metadata_xml": "Missing SingleSignOnService in metadata."}
)
sso_url = sso.attrib["Location"]
# SLO endpoint (optional)
slo = root.find(".//md:IDPSSODescriptor/md:SingleLogoutService", ns)
slo_url = slo.attrib.get("Location") if slo is not None else None
# X.509 certificate (required)
cert = root.find(
'.//md:KeyDescriptor[@use="signing"]/ds:KeyInfo/ds:X509Data/ds:X509Certificate',
ns,
)
if cert is None or not cert.text or not cert.text.strip():
raise ValidationError(
{
"metadata_xml": 'Metadata must include a <ds:X509Certificate> under <KeyDescriptor use="signing">.'
}
)
x509cert = cert.text.strip()
return {
"entity_id": entity_id,
"sso_url": sso_url,
"slo_url": slo_url,
"x509cert": x509cert,
}
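For illustration, a minimal IdP metadata document that _parse_metadata would accept; every value below is a placeholder, not a real endpoint or certificate:

# Hedged sample only; real metadata comes from the identity provider.
SAMPLE_METADATA_XML = """
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
                     entityID="https://idp.example.com/metadata">
  <md:IDPSSODescriptor>
    <md:KeyDescriptor use="signing">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate>MIIC...base64placeholder...</ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </md:KeyDescriptor>
    <md:SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://idp.example.com/sso"/>
  </md:IDPSSODescriptor>
</md:EntityDescriptor>
"""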
def _sync_social_app(self, previous_email_domain=None):
"""
Create or update the corresponding SocialApp based on email_domain.
If the domain changed, update the matching SocialApp.
"""
idp_settings = self._parse_metadata()
settings_dict = SOCIALACCOUNT_PROVIDERS["saml"].copy()
settings_dict["idp"] = idp_settings
current_site = Site.objects.get(id=settings.SITE_ID)
social_app_qs = SocialApp.objects.filter(
provider="saml", client_id=previous_email_domain or self.email_domain
)
if social_app_qs.exists():
social_app = social_app_qs.first()
social_app.client_id = self.email_domain
social_app.name = f"{self.tenant.name} SAML ({self.email_domain})"
social_app.settings = settings_dict
social_app.save()
social_app.sites.set([current_site])
else:
social_app = SocialApp.objects.create(
provider="saml",
client_id=self.email_domain,
name=f"{self.tenant.name} SAML ({self.email_domain})",
settings=settings_dict,
)
social_app.sites.set([current_site])
class ResourceScanSummary(RowLevelSecurityProtectedModel):
scan_id = models.UUIDField(default=uuid7, db_index=True)
resource_id = models.UUIDField(default=uuid4, db_index=True)
service = models.CharField(max_length=100)
region = models.CharField(max_length=100)
resource_type = models.CharField(max_length=100)
class Meta:
db_table = "resource_scan_summaries"
unique_together = (("tenant_id", "scan_id", "resource_id"),)
indexes = [
# Single-dimension lookups:
models.Index(
fields=["tenant_id", "scan_id", "service"],
name="rss_tenant_scan_svc_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "region"],
name="rss_tenant_scan_reg_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "resource_type"],
name="rss_tenant_scan_type_idx",
),
# Two-dimension cross-filters:
models.Index(
fields=["tenant_id", "scan_id", "region", "service"],
name="rss_tenant_scan_reg_svc_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "service", "resource_type"],
name="rss_tenant_scan_svc_type_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "region", "resource_type"],
name="rss_tenant_scan_reg_type_idx",
),
]
constraints = [
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
]
class LighthouseConfiguration(RowLevelSecurityProtectedModel):
"""
Stores configuration and API keys for LLM services.
"""
class ModelChoices(models.TextChoices):
GPT_4O_2024_11_20 = "gpt-4o-2024-11-20", _("GPT-4o v2024-11-20")
GPT_4O_2024_08_06 = "gpt-4o-2024-08-06", _("GPT-4o v2024-08-06")
GPT_4O_2024_05_13 = "gpt-4o-2024-05-13", _("GPT-4o v2024-05-13")
GPT_4O = "gpt-4o", _("GPT-4o Default")
GPT_4O_MINI_2024_07_18 = "gpt-4o-mini-2024-07-18", _("GPT-4o Mini v2024-07-18")
GPT_4O_MINI = "gpt-4o-mini", _("GPT-4o Mini Default")
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
updated_at = models.DateTimeField(auto_now=True, editable=False)
name = models.CharField(
max_length=100,
validators=[MinLengthValidator(3)],
blank=False,
null=False,
help_text="Name of the configuration",
)
api_key = models.BinaryField(
blank=False, null=False, help_text="Encrypted API key for the LLM service"
)
model = models.CharField(
max_length=50,
choices=ModelChoices.choices,
blank=False,
null=False,
default=ModelChoices.GPT_4O_2024_08_06,
help_text="Must be one of the supported model names",
)
temperature = models.FloatField(default=0, help_text="Must be between 0 and 1")
max_tokens = models.IntegerField(
default=4000, help_text="Must be between 500 and 5000"
)
business_context = models.TextField(
blank=True,
null=False,
default="",
help_text="Additional business context for this AI model configuration",
)
is_active = models.BooleanField(default=True)
def __str__(self):
return self.name
def clean(self):
super().clean()
# Validate temperature
if not 0 <= self.temperature <= 1:
raise ModelValidationError(
detail="Temperature must be between 0 and 1",
code="invalid_temperature",
pointer="/data/attributes/temperature",
)
# Validate max_tokens
if not 500 <= self.max_tokens <= 5000:
raise ModelValidationError(
detail="Max tokens must be between 500 and 5000",
code="invalid_max_tokens",
pointer="/data/attributes/max_tokens",
)
@property
def api_key_decoded(self):
"""Return the decrypted API key, or None if unavailable or invalid."""
if not self.api_key:
return None
try:
decrypted_key = fernet.decrypt(bytes(self.api_key))
return decrypted_key.decode()
except InvalidToken:
logger.warning("Invalid token while decrypting API key.")
except Exception as e:
logger.exception("Unexpected error while decrypting API key: %s", e)
return None
@api_key_decoded.setter
def api_key_decoded(self, value):
"""Store the encrypted API key."""
if not value:
raise ModelValidationError(
detail="API key is required",
code="invalid_api_key",
pointer="/data/attributes/api_key",
)
# Validate OpenAI API key format
openai_key_pattern = r"^sk-[\w-]+T3BlbkFJ[\w-]+$"
if not re.match(openai_key_pattern, value):
raise ModelValidationError(
detail="Invalid OpenAI API key format.",
code="invalid_api_key",
pointer="/data/attributes/api_key",
)
self.api_key = fernet.encrypt(value.encode())
def save(self, *args, **kwargs):
self.full_clean()
super().save(*args, **kwargs)
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "lighthouse_configurations"
constraints = [
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
# Only one Lighthouse configuration is allowed per tenant
models.UniqueConstraint(
fields=["tenant_id"], name="unique_lighthouse_config_per_tenant"
),
]
class JSONAPIMeta:
resource_name = "lighthouse-configurations"
+1 -1
@@ -1,4 +1,4 @@
-from rest_framework_json_api.pagination import JsonApiPageNumberPagination
+from drf_spectacular_jsonapi.schemas.pagination import JsonApiPageNumberPagination
class ComplianceOverviewPagination(JsonApiPageNumberPagination):
+3 -4
@@ -1,12 +1,12 @@
from celery import states
from celery.signals import before_task_publish
-from config.celery import celery_app
from django.db.models.signals import post_delete
from django.dispatch import receiver
from django_celery_beat.models import PeriodicTask
from django_celery_results.backends.database import DatabaseBackend
+from api.db_utils import delete_related_daily_task
from api.models import Provider
+from config.celery import celery_app
def create_task_result_on_publish(sender=None, headers=None, **kwargs): # noqa: F841
@@ -31,5 +31,4 @@ before_task_publish.connect(
@receiver(post_delete, sender=Provider)
def delete_provider_scan_task(sender, instance, **kwargs): # noqa: F841
# Delete the associated periodic task when the provider is deleted
task_name = f"scan-perform-scheduled-{instance.id}"
PeriodicTask.objects.filter(name=task_name).delete()
delete_related_daily_task(instance.id)
File diff suppressed because it is too large
@@ -0,0 +1,82 @@
from unittest.mock import MagicMock
import pytest
from allauth.socialaccount.models import SocialLogin
from django.contrib.auth import get_user_model
from api.adapters import ProwlerSocialAccountAdapter
from api.db_router import MainRouter
from api.models import Membership, SAMLConfiguration, Tenant
User = get_user_model()
@pytest.mark.django_db
class TestProwlerSocialAccountAdapter:
def test_get_user_by_email_returns_user(self, create_test_user):
adapter = ProwlerSocialAccountAdapter()
user = adapter.get_user_by_email(create_test_user.email)
assert user == create_test_user
def test_get_user_by_email_returns_none_for_unknown_email(self):
adapter = ProwlerSocialAccountAdapter()
assert adapter.get_user_by_email("notfound@example.com") is None
def test_pre_social_login_links_existing_user(self, create_test_user, rf):
adapter = ProwlerSocialAccountAdapter()
sociallogin = MagicMock(spec=SocialLogin)
sociallogin.account = MagicMock()
sociallogin.account.provider = "saml"
sociallogin.account.extra_data = {}
sociallogin.user = create_test_user
sociallogin.connect = MagicMock()
adapter.pre_social_login(rf.get("/"), sociallogin)
call_args = sociallogin.connect.call_args
assert call_args is not None
called_request, called_user = call_args[0]
assert called_request.path == "/"
assert called_user.email == create_test_user.email
def test_pre_social_login_no_link_if_email_missing(self, rf):
adapter = ProwlerSocialAccountAdapter()
sociallogin = MagicMock(spec=SocialLogin)
sociallogin.account = MagicMock()
sociallogin.account.provider = "github"
sociallogin.account.extra_data = {}
sociallogin.connect = MagicMock()
adapter.pre_social_login(rf.get("/"), sociallogin)
sociallogin.connect.assert_not_called()
def test_save_user_saml_flow(
self,
rf,
saml_setup,
saml_sociallogin,
):
adapter = ProwlerSocialAccountAdapter()
request = rf.get("/")
saml_sociallogin.user.email = saml_setup["email"]
tenant = Tenant.objects.using(MainRouter.admin_db).get(
id=saml_setup["tenant_id"]
)
saml_config = SAMLConfiguration.objects.using(MainRouter.admin_db).get(
tenant=tenant
)
assert saml_config.email_domain == saml_setup["domain"]
user = adapter.save_user(request, saml_sociallogin)
assert user.email == saml_setup["email"]
assert (
Membership.objects.using(MainRouter.admin_db)
.filter(user=user, tenant=tenant)
.exists()
)
+22 -6
@@ -1,12 +1,12 @@
-from unittest.mock import patch, MagicMock
+from unittest.mock import MagicMock, patch
from api.compliance import (
+generate_compliance_overview_template,
+generate_scan_compliance,
get_prowler_provider_checks,
get_prowler_provider_compliance,
-load_prowler_compliance,
load_prowler_checks,
-generate_scan_compliance,
-generate_compliance_overview_template,
+load_prowler_compliance,
)
from api.models import Provider
@@ -69,7 +69,7 @@ class TestCompliance:
load_prowler_compliance()
-from api.compliance import PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE, PROWLER_CHECKS
+from api.compliance import PROWLER_CHECKS, PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE
assert PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE == {
"template_key": "template_value"
@@ -218,6 +218,10 @@ class TestCompliance:
Description="Description of requirement 1",
Attributes=[],
Checks=["check1", "check2"],
Tactics=["tactic1"],
SubTechniques=["subtechnique1"],
Platforms=["platform1"],
TechniqueURL="https://example.com",
)
requirement2 = MagicMock(
Id="requirement2",
@@ -225,6 +229,10 @@ class TestCompliance:
Description="Description of requirement 2",
Attributes=[],
Checks=[],
+Tactics=[],
+SubTechniques=[],
+Platforms=[],
+TechniqueURL="",
)
compliance1 = MagicMock(
Requirements=[requirement1, requirement2],
@@ -247,6 +255,10 @@ class TestCompliance:
"requirement1": {
"name": "Requirement 1",
"description": "Description of requirement 1",
"tactics": ["tactic1"],
"subtechniques": ["subtechnique1"],
"platforms": ["platform1"],
"technique_url": "https://example.com",
"attributes": [],
"checks": {"check1": None, "check2": None},
"checks_status": {
@@ -260,6 +272,10 @@ class TestCompliance:
"requirement2": {
"name": "Requirement 2",
"description": "Description of requirement 2",
"tactics": [],
"subtechniques": [],
"platforms": [],
"technique_url": "",
"attributes": [],
"checks": {},
"checks_status": {
@@ -268,7 +284,7 @@ class TestCompliance:
"manual": 0,
"total": 0,
},
"status": "PASS",
"status": "MANUAL",
},
},
"requirements_status": {
+92 -2
@@ -3,9 +3,13 @@ from enum import Enum
from unittest.mock import patch
import pytest
from django.conf import settings
from freezegun import freeze_time
from api.db_utils import (
_should_create_index_on_partition,
batch_delete,
create_objects_in_batches,
enum_to_choices,
generate_random_token,
one_week_from_now,
@@ -131,9 +135,95 @@ class TestBatchDelete:
return provider_count
@pytest.mark.django_db
-def test_batch_delete(self, create_test_providers):
+def test_batch_delete(self, tenants_fixture, create_test_providers):
+tenant_id = str(tenants_fixture[0].id)
_, summary = batch_delete(
-Provider.objects.all(), batch_size=create_test_providers // 2
+tenant_id, Provider.objects.all(), batch_size=create_test_providers // 2
)
assert Provider.objects.all().count() == 0
assert summary == {"api.Provider": create_test_providers}
class TestShouldCreateIndexOnPartition:
@freeze_time("2025-05-15 00:00:00Z")
@pytest.mark.parametrize(
"partition_name, all_partitions, expected",
[
("any_name", True, True),
("findings_default", True, True),
("findings_2022_jan", True, True),
("foo_bar", False, True),
("findings_2025_MAY", False, True),
("findings_2025_may", False, True),
("findings_2025_jun", False, True),
("findings_2025_apr", False, False),
("findings_2025_xyz", False, True),
],
)
def test_partition_inclusion_logic(self, partition_name, all_partitions, expected):
assert (
_should_create_index_on_partition(partition_name, all_partitions)
is expected
)
@freeze_time("2025-05-15 00:00:00Z")
def test_invalid_date_components(self):
# even if the regex matches, date parsing may still fail, and we fall back to True
# (e.g. the year is out of range, or the month abbreviation is unknown)
bad_name = "findings_99999_jan"
assert _should_create_index_on_partition(bad_name, False) is True
bad_name2 = "findings_2025_abc"
# abc not in month_map → fallback True
assert _should_create_index_on_partition(bad_name2, False) is True
@pytest.mark.django_db
class TestCreateObjectsInBatches:
@pytest.fixture
def tenant(self, tenants_fixture):
return tenants_fixture[0]
def make_provider_instances(self, tenant, count):
"""
Return a list of `count` unsaved Provider instances for the given tenant.
"""
base_uid = 1000
return [
Provider(
tenant=tenant,
uid=str(base_uid + i),
provider=Provider.ProviderChoices.AWS,
)
for i in range(count)
]
def test_exact_multiple_of_batch(self, tenant):
total = 6
batch_size = 3
objs = self.make_provider_instances(tenant, total)
create_objects_in_batches(str(tenant.id), Provider, objs, batch_size=batch_size)
qs = Provider.objects.filter(tenant=tenant)
assert qs.count() == total
def test_non_multiple_of_batch(self, tenant):
total = 7
batch_size = 3
objs = self.make_provider_instances(tenant, total)
create_objects_in_batches(str(tenant.id), Provider, objs, batch_size=batch_size)
qs = Provider.objects.filter(tenant=tenant)
assert qs.count() == total
def test_batch_size_default(self, tenant):
default_size = settings.DJANGO_DELETION_BATCH_SIZE
total = default_size + 2
objs = self.make_provider_instances(tenant, total)
create_objects_in_batches(str(tenant.id), Provider, objs)
qs = Provider.objects.filter(tenant=tenant)
assert qs.count() == total
+379
@@ -0,0 +1,379 @@
import json
from uuid import uuid4
import pytest
from django_celery_results.models import TaskResult
from rest_framework import status
from rest_framework.response import Response
from api.exceptions import (
TaskFailedException,
TaskInProgressException,
TaskNotFoundException,
)
from api.models import Task, User
from api.rls import Tenant
from api.v1.mixins import PaginateByPkMixin, TaskManagementMixin
@pytest.mark.django_db
class TestPaginateByPkMixin:
@pytest.fixture
def tenant(self):
return Tenant.objects.create(name="Test Tenant")
@pytest.fixture
def users(self, tenant):
# Create 5 users with proper email field
users = []
for i in range(5):
user = User.objects.create(email=f"user{i}@example.com", name=f"User {i}")
users.append(user)
return users
class DummyView(PaginateByPkMixin):
def __init__(self, page):
self._page = page
def paginate_queryset(self, qs):
return self._page
def get_serializer(self, queryset, many):
class S:
def __init__(self, data):
# serialize to list of ids
self.data = [obj.id for obj in data] if many else queryset.id
return S(queryset)
def get_paginated_response(self, data):
return Response({"results": data}, status=status.HTTP_200_OK)
def test_no_pagination(self, users):
base_qs = User.objects.all().order_by("id")
view = self.DummyView(page=None)
resp = view.paginate_by_pk(
request=None, base_queryset=base_qs, manager=User.objects
)
# since no pagination, should return all ids in order
expected = [u.id for u in base_qs]
assert isinstance(resp, Response)
assert resp.data == expected
def test_with_pagination(self, users):
base_qs = User.objects.all().order_by("id")
# simulate a page containing two non-consecutive ids
page = [base_qs[1].id, base_qs[3].id]
view = self.DummyView(page=page)
resp = view.paginate_by_pk(
request=None, base_queryset=base_qs, manager=User.objects
)
# should fetch only those two users, in the same order as page
assert resp.status_code == status.HTTP_200_OK
assert resp.data == {"results": page}
@pytest.mark.django_db
class TestTaskManagementMixin:
class DummyView(TaskManagementMixin):
pass
@pytest.fixture
def tenant(self):
return Tenant.objects.create(name="Test Tenant")
@pytest.fixture(autouse=True)
def cleanup(self):
Task.objects.all().delete()
TaskResult.objects.all().delete()
def test_no_task_and_no_taskresult_raises_not_found(self):
view = self.DummyView()
with pytest.raises(TaskNotFoundException):
view.check_task_status("task_xyz", {"foo": "bar"})
def test_no_task_and_no_taskresult_returns_none_when_not_raising(self):
view = self.DummyView()
result = view.check_task_status(
"task_xyz", {"foo": "bar"}, raise_on_not_found=False
)
assert result is None
def test_taskresult_pending_raises_in_progress(self):
task_kwargs = {"foo": "bar"}
tr = TaskResult.objects.create(
task_id=str(uuid4()),
task_name="task_xyz",
task_kwargs=json.dumps(task_kwargs),
status="PENDING",
)
view = self.DummyView()
with pytest.raises(TaskInProgressException) as excinfo:
view.check_task_status("task_xyz", task_kwargs, raise_on_not_found=False)
assert hasattr(excinfo.value, "task_result")
assert excinfo.value.task_result == tr
def test_taskresult_started_raises_in_progress(self):
task_kwargs = {"foo": "bar"}
tr = TaskResult.objects.create(
task_id=str(uuid4()),
task_name="task_xyz",
task_kwargs=json.dumps(task_kwargs),
status="STARTED",
)
view = self.DummyView()
with pytest.raises(TaskInProgressException) as excinfo:
view.check_task_status("task_xyz", task_kwargs, raise_on_not_found=False)
assert hasattr(excinfo.value, "task_result")
assert excinfo.value.task_result == tr
def test_taskresult_progress_raises_in_progress(self):
task_kwargs = {"foo": "bar"}
tr = TaskResult.objects.create(
task_id=str(uuid4()),
task_name="task_xyz",
task_kwargs=json.dumps(task_kwargs),
status="PROGRESS",
)
view = self.DummyView()
with pytest.raises(TaskInProgressException) as excinfo:
view.check_task_status("task_xyz", task_kwargs, raise_on_not_found=False)
assert hasattr(excinfo.value, "task_result")
assert excinfo.value.task_result == tr
def test_taskresult_failure_raises_failed(self):
task_kwargs = {"a": 1}
TaskResult.objects.create(
task_id=str(uuid4()),
task_name="task_fail",
task_kwargs=json.dumps(task_kwargs),
status="FAILURE",
)
view = self.DummyView()
with pytest.raises(TaskFailedException):
view.check_task_status("task_fail", task_kwargs, raise_on_not_found=False)
def test_taskresult_failure_returns_none_when_not_raising(self):
task_kwargs = {"a": 1}
TaskResult.objects.create(
task_id=str(uuid4()),
task_name="task_fail",
task_kwargs=json.dumps(task_kwargs),
status="FAILURE",
)
view = self.DummyView()
result = view.check_task_status(
"task_fail", task_kwargs, raise_on_failed=False, raise_on_not_found=False
)
assert result is None
def test_taskresult_success_returns_none(self):
task_kwargs = {"x": 2}
TaskResult.objects.create(
task_id=str(uuid4()),
task_name="task_ok",
task_kwargs=json.dumps(task_kwargs),
status="SUCCESS",
)
view = self.DummyView()
# should not raise, and returns None
assert (
view.check_task_status("task_ok", task_kwargs, raise_on_not_found=False)
is None
)
def test_taskresult_revoked_returns_none(self):
task_kwargs = {"x": 2}
TaskResult.objects.create(
task_id=str(uuid4()),
task_name="task_revoked",
task_kwargs=json.dumps(task_kwargs),
status="REVOKED",
)
view = self.DummyView()
# should not raise, and returns None
assert (
view.check_task_status(
"task_revoked", task_kwargs, raise_on_not_found=False
)
is None
)
def test_task_with_failed_status_raises_failed(self, tenant):
task_kwargs = {"provider_id": "test"}
tr = TaskResult.objects.create(
task_id=str(uuid4()),
task_name="scan_task",
task_kwargs=json.dumps(task_kwargs),
status="FAILURE",
)
task = Task.objects.create(tenant=tenant, task_runner_task=tr)
view = self.DummyView()
with pytest.raises(TaskFailedException) as excinfo:
view.check_task_status("scan_task", task_kwargs)
# Check that the exception contains the expected task
assert hasattr(excinfo.value, "task")
assert excinfo.value.task == task
def test_task_with_cancelled_status_raises_failed(self, tenant):
task_kwargs = {"provider_id": "test"}
tr = TaskResult.objects.create(
task_id=str(uuid4()),
task_name="scan_task",
task_kwargs=json.dumps(task_kwargs),
status="REVOKED",
)
task = Task.objects.create(tenant=tenant, task_runner_task=tr)
view = self.DummyView()
with pytest.raises(TaskFailedException) as excinfo:
view.check_task_status("scan_task", task_kwargs)
# Check that the exception contains the expected task
assert hasattr(excinfo.value, "task")
assert excinfo.value.task == task
def test_task_with_failed_status_returns_task_when_not_raising(self, tenant):
task_kwargs = {"provider_id": "test"}
tr = TaskResult.objects.create(
task_id=str(uuid4()),
task_name="scan_task",
task_kwargs=json.dumps(task_kwargs),
status="FAILURE",
)
task = Task.objects.create(tenant=tenant, task_runner_task=tr)
view = self.DummyView()
result = view.check_task_status("scan_task", task_kwargs, raise_on_failed=False)
assert result == task
def test_task_with_completed_status_returns_none(self, tenant):
task_kwargs = {"provider_id": "test"}
tr = TaskResult.objects.create(
task_id=str(uuid4()),
task_name="scan_task",
task_kwargs=json.dumps(task_kwargs),
status="SUCCESS",
)
Task.objects.create(tenant=tenant, task_runner_task=tr)
view = self.DummyView()
result = view.check_task_status("scan_task", task_kwargs)
assert result is None
def test_task_with_executing_status_returns_task(self, tenant):
task_kwargs = {"provider_id": "test"}
tr = TaskResult.objects.create(
task_id=str(uuid4()),
task_name="scan_task",
task_kwargs=json.dumps(task_kwargs),
status="STARTED",
)
task = Task.objects.create(tenant=tenant, task_runner_task=tr)
view = self.DummyView()
result = view.check_task_status("scan_task", task_kwargs)
assert result is not None
assert result.pk == task.pk
def test_task_with_pending_status_returns_task(self, tenant):
task_kwargs = {"provider_id": "test"}
tr = TaskResult.objects.create(
task_id=str(uuid4()),
task_name="scan_task",
task_kwargs=json.dumps(task_kwargs),
status="PENDING",
)
task = Task.objects.create(tenant=tenant, task_runner_task=tr)
view = self.DummyView()
result = view.check_task_status("scan_task", task_kwargs)
assert result is not None
assert result.pk == task.pk
def test_get_task_response_if_running_returns_none_for_completed_task(self, tenant):
task_kwargs = {"provider_id": "test"}
tr = TaskResult.objects.create(
task_id=str(uuid4()),
task_name="scan_task",
task_kwargs=json.dumps(task_kwargs),
status="SUCCESS",
)
Task.objects.create(tenant=tenant, task_runner_task=tr)
view = self.DummyView()
result = view.get_task_response_if_running("scan_task", task_kwargs)
assert result is None
def test_get_task_response_if_running_returns_none_for_no_task(self):
view = self.DummyView()
result = view.get_task_response_if_running(
"nonexistent", {"foo": "bar"}, raise_on_not_found=False
)
assert result is None
def test_get_task_response_if_running_returns_202_for_executing_task(self, tenant):
task_kwargs = {"provider_id": "test"}
tr = TaskResult.objects.create(
task_id=str(uuid4()),
task_name="scan_task",
task_kwargs=json.dumps(task_kwargs),
status="STARTED",
)
task = Task.objects.create(tenant=tenant, task_runner_task=tr)
view = self.DummyView()
result = view.get_task_response_if_running("scan_task", task_kwargs)
assert isinstance(result, Response)
assert result.status_code == status.HTTP_202_ACCEPTED
assert "Content-Location" in result.headers
# The response should contain the serialized task data
assert result.data is not None
assert "id" in result.data
assert str(result.data["id"]) == str(task.id)
def test_get_task_response_if_running_returns_none_for_available_task(self, tenant):
task_kwargs = {"provider_id": "test"}
tr = TaskResult.objects.create(
task_id=str(uuid4()),
task_name="scan_task",
task_kwargs=json.dumps(task_kwargs),
status="PENDING",
)
Task.objects.create(tenant=tenant, task_runner_task=tr)
view = self.DummyView()
result = view.get_task_response_if_running("scan_task", task_kwargs)
# PENDING maps to AVAILABLE, which is not EXECUTING, so should return None
assert result is None
def test_kwargs_filtering_works_correctly(self, tenant):
# Create tasks with different kwargs
task_kwargs_1 = {"provider_id": "test1", "scan_type": "full"}
task_kwargs_2 = {"provider_id": "test2", "scan_type": "quick"}
tr1 = TaskResult.objects.create(
task_id=str(uuid4()),
task_name="scan_task",
task_kwargs=json.dumps(task_kwargs_1),
status="STARTED",
)
tr2 = TaskResult.objects.create(
task_id=str(uuid4()),
task_name="scan_task",
task_kwargs=json.dumps(task_kwargs_2),
status="STARTED",
)
task1 = Task.objects.create(tenant=tenant, task_runner_task=tr1)
task2 = Task.objects.create(tenant=tenant, task_runner_task=tr2)
view = self.DummyView()
# Should find task1 when searching for its kwargs
result1 = view.check_task_status("scan_task", {"provider_id": "test1"})
assert result1 is not None
assert result1.pk == task1.pk
# Should find task2 when searching for its kwargs
result2 = view.check_task_status("scan_task", {"provider_id": "test2"})
assert result2 is not None
assert result2.pk == task2.pk
# Should not find anything when searching for non-existent kwargs
result3 = view.check_task_status(
"scan_task", {"provider_id": "test3"}, raise_on_not_found=False
)
assert result3 is None
+178 -1
@@ -1,6 +1,9 @@
import pytest
+from allauth.socialaccount.models import SocialApp
+from django.core.exceptions import ValidationError
-from api.models import Resource, ResourceTag
+from api.db_router import MainRouter
+from api.models import Resource, ResourceTag, SAMLConfiguration, Tenant
@pytest.mark.django_db
@@ -92,3 +95,177 @@ class TestResourceModel:
assert len(resource.tags.filter(tenant_id=tenant_id)) == 0
assert resource.get_tags(tenant_id=tenant_id) == {}
# @pytest.mark.django_db
# class TestFindingModel:
# def test_add_finding_with_long_uid(
# self, providers_fixture, scans_fixture, resources_fixture
# ):
# provider, *_ = providers_fixture
# tenant_id = provider.tenant_id
# long_uid = "1" * 500
# _ = Finding.objects.create(
# tenant_id=tenant_id,
# uid=long_uid,
# delta=Finding.DeltaChoices.NEW,
# check_metadata={},
# status=StatusChoices.PASS,
# status_extended="",
# severity="high",
# impact="high",
# raw_result={},
# check_id="test_check",
# scan=scans_fixture[0],
# first_seen_at=None,
# muted=False,
# compliance={},
# )
# assert Finding.objects.filter(uid=long_uid).exists()
@pytest.mark.django_db
class TestSAMLConfigurationModel:
VALID_METADATA = """<?xml version='1.0' encoding='UTF-8'?>
<md:EntityDescriptor entityID='TEST' xmlns:md='urn:oasis:names:tc:SAML:2.0:metadata'>
<md:IDPSSODescriptor WantAuthnRequestsSigned='false' protocolSupportEnumeration='urn:oasis:names:tc:SAML:2.0:protocol'>
<md:KeyDescriptor use='signing'>
<ds:KeyInfo xmlns:ds='http://www.w3.org/2000/09/xmldsig#'>
<ds:X509Data>
<ds:X509Certificate>FAKECERTDATA</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</md:KeyDescriptor>
<md:SingleSignOnService Binding='urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST' Location='https://idp.test/sso'/>
</md:IDPSSODescriptor>
</md:EntityDescriptor>
"""
def test_creates_valid_configuration(self):
tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant A")
config = SAMLConfiguration.objects.using(MainRouter.admin_db).create(
email_domain="ssoexample.com",
metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
tenant=tenant,
)
assert config.email_domain == "ssoexample.com"
assert SocialApp.objects.filter(client_id="ssoexample.com").exists()
def test_email_domain_with_at_symbol_fails(self):
tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant B")
config = SAMLConfiguration(
email_domain="invalid@domain.com",
metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
tenant=tenant,
)
with pytest.raises(ValidationError) as exc_info:
config.clean()
errors = exc_info.value.message_dict
assert "email_domain" in errors
assert "Domain must not contain @" in errors["email_domain"][0]
def test_duplicate_email_domain_fails(self):
tenant1 = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant C1")
tenant2 = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant C2")
SAMLConfiguration.objects.using(MainRouter.admin_db).create(
email_domain="duplicate.com",
metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
tenant=tenant1,
)
config = SAMLConfiguration(
email_domain="duplicate.com",
metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
tenant=tenant2,
)
with pytest.raises(ValidationError) as exc_info:
config.clean()
errors = exc_info.value.message_dict
assert "tenant" in errors
assert "There is a problem with your email domain." in errors["tenant"][0]
def test_duplicate_tenant_config_fails(self):
tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant D")
SAMLConfiguration.objects.using(MainRouter.admin_db).create(
email_domain="unique1.com",
metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
tenant=tenant,
)
config = SAMLConfiguration(
email_domain="unique2.com",
metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
tenant=tenant,
)
with pytest.raises(ValidationError) as exc_info:
config.clean()
errors = exc_info.value.message_dict
assert "tenant" in errors
assert (
"A SAML configuration already exists for this tenant."
in errors["tenant"][0]
)
def test_invalid_metadata_xml_fails(self):
tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant E")
config = SAMLConfiguration(
email_domain="brokenxml.com",
metadata_xml="<bad<xml>",
tenant=tenant,
)
with pytest.raises(ValidationError) as exc_info:
config._parse_metadata()
errors = exc_info.value.message_dict
assert "metadata_xml" in errors
assert "Invalid XML" in errors["metadata_xml"][0]
assert "not well-formed" in errors["metadata_xml"][0]
def test_metadata_missing_sso_fails(self):
tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant F")
xml = """<md:EntityDescriptor entityID="x" xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata">
<md:IDPSSODescriptor></md:IDPSSODescriptor>
</md:EntityDescriptor>"""
config = SAMLConfiguration(
email_domain="nosso.com",
metadata_xml=xml,
tenant=tenant,
)
with pytest.raises(ValidationError) as exc_info:
config._parse_metadata()
errors = exc_info.value.message_dict
assert "metadata_xml" in errors
assert "Missing SingleSignOnService" in errors["metadata_xml"][0]
def test_metadata_missing_certificate_fails(self):
tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant G")
xml = """<md:EntityDescriptor entityID="x" xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata">
<md:IDPSSODescriptor>
<md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://example.com/sso"/>
</md:IDPSSODescriptor>
</md:EntityDescriptor>"""
config = SAMLConfiguration(
email_domain="nocert.com",
metadata_xml=xml,
tenant=tenant,
)
with pytest.raises(ValidationError) as exc_info:
config._parse_metadata()
errors = exc_info.value.message_dict
assert "metadata_xml" in errors
assert "X509Certificate" in errors["metadata_xml"][0]
+80
@@ -0,0 +1,80 @@
import logging
from unittest.mock import MagicMock
from config.settings.sentry import before_send
def test_before_send_ignores_log_with_ignored_exception():
"""Test that before_send ignores logs containing ignored exceptions."""
log_record = MagicMock()
log_record.msg = "Provider kubernetes is not connected"
log_record.levelno = logging.ERROR # 40
hint = {"log_record": log_record}
event = MagicMock()
result = before_send(event, hint)
# Assert that the event was dropped (None returned)
assert result is None
def test_before_send_ignores_exception_with_ignored_exception():
"""Test that before_send ignores exceptions containing ignored exceptions."""
exc_info = (Exception, Exception("Provider kubernetes is not connected"), None)
hint = {"exc_info": exc_info}
event = MagicMock()
result = before_send(event, hint)
# Assert that the event was dropped (None returned)
assert result is None
def test_before_send_passes_through_non_ignored_log():
"""Test that before_send passes through logs that don't contain ignored exceptions."""
log_record = MagicMock()
log_record.msg = "Some other error message"
log_record.levelno = logging.ERROR # 40
hint = {"log_record": log_record}
event = MagicMock()
result = before_send(event, hint)
# Assert that the event was passed through
assert result == event
def test_before_send_passes_through_non_ignored_exception():
"""Test that before_send passes through exceptions that don't contain ignored exceptions."""
exc_info = (Exception, Exception("Some other error message"), None)
hint = {"exc_info": exc_info}
event = MagicMock()
result = before_send(event, hint)
# Assert that the event was passed through
assert result == event
def test_before_send_handles_warning_level():
"""Test that before_send handles warning level logs."""
log_record = MagicMock()
log_record.msg = "Provider kubernetes is not connected"
log_record.levelno = logging.WARNING # 30
hint = {"log_record": log_record}
event = MagicMock()
result = before_send(event, hint)
# Assert that the event was dropped (None returned)
assert result is None
+32 -15
@@ -1,25 +1,25 @@
from datetime import datetime, timedelta, timezone
-from unittest.mock import patch, MagicMock
+from unittest.mock import MagicMock, patch
import pytest
+from rest_framework.exceptions import NotFound, ValidationError
+from api.db_router import MainRouter
+from api.exceptions import InvitationTokenExpiredException
+from api.models import Invitation, Provider
+from api.utils import (
+get_prowler_provider_kwargs,
+initialize_prowler_provider,
+merge_dicts,
+prowler_provider_connection_test,
+return_prowler_provider,
+validate_invitation,
+)
from prowler.providers.aws.aws_provider import AwsProvider
from prowler.providers.azure.azure_provider import AzureProvider
from prowler.providers.gcp.gcp_provider import GcpProvider
from prowler.providers.kubernetes.kubernetes_provider import KubernetesProvider
-from rest_framework.exceptions import ValidationError, NotFound
-from api.db_router import MainRouter
-from api.exceptions import InvitationTokenExpiredException
-from api.models import Invitation
-from api.models import Provider
-from api.utils import (
-merge_dicts,
-return_prowler_provider,
-initialize_prowler_provider,
-prowler_provider_connection_test,
-get_prowler_provider_kwargs,
-)
-from api.utils import validate_invitation
+from prowler.providers.m365.m365_provider import M365Provider
class TestMergeDicts:
@@ -105,6 +105,7 @@ class TestReturnProwlerProvider:
(Provider.ProviderChoices.GCP.value, GcpProvider),
(Provider.ProviderChoices.AZURE.value, AzureProvider),
(Provider.ProviderChoices.KUBERNETES.value, KubernetesProvider),
+(Provider.ProviderChoices.M365.value, M365Provider),
],
)
def test_return_prowler_provider(self, provider_type, expected_provider):
@@ -144,6 +145,18 @@ class TestProwlerProviderConnectionTest:
key="value", provider_id="1234567890", raise_on_exception=False
)
+@pytest.mark.django_db
+@patch("api.utils.return_prowler_provider")
+def test_prowler_provider_connection_test_without_secret(
+self, mock_return_prowler_provider, providers_fixture
+):
+mock_return_prowler_provider.return_value = MagicMock()
+connection = prowler_provider_connection_test(providers_fixture[0])
+assert connection.is_connected is False
+assert isinstance(connection.error, Provider.secret.RelatedObjectDoesNotExist)
+assert str(connection.error) == "Provider has no secret."
class TestGetProwlerProviderKwargs:
@pytest.mark.parametrize(
@@ -165,6 +178,10 @@ class TestGetProwlerProviderKwargs:
Provider.ProviderChoices.KUBERNETES.value,
{"context": "provider_uid"},
),
+(
+Provider.ProviderChoices.M365.value,
+{},
+),
],
)
def test_get_prowler_provider_kwargs(self, provider_type, expected_extra_kwargs):
File diff suppressed because it is too large
+48 -7
@@ -1,16 +1,20 @@
from datetime import datetime, timezone
from allauth.socialaccount.providers.oauth2.client import OAuth2Client
+from django.contrib.postgres.aggregates import ArrayAgg
+from django.db.models import Subquery
from rest_framework.exceptions import NotFound, ValidationError
from api.db_router import MainRouter
from api.exceptions import InvitationTokenExpiredException
-from api.models import Invitation, Provider
+from api.models import Invitation, Provider, Resource
+from api.v1.serializers import FindingMetadataSerializer
from prowler.providers.aws.aws_provider import AwsProvider
from prowler.providers.azure.azure_provider import AzureProvider
from prowler.providers.common.models import Connection
from prowler.providers.gcp.gcp_provider import GcpProvider
from prowler.providers.kubernetes.kubernetes_provider import KubernetesProvider
+from prowler.providers.m365.m365_provider import M365Provider
class CustomOAuth2Client(OAuth2Client):
@@ -51,14 +55,14 @@ def merge_dicts(default_dict: dict, replacement_dict: dict) -> dict:
def return_prowler_provider(
provider: Provider,
-) -> [AwsProvider | AzureProvider | GcpProvider | KubernetesProvider]:
+) -> [AwsProvider | AzureProvider | GcpProvider | KubernetesProvider | M365Provider]:
"""Return the Prowler provider class based on the given provider type.
Args:
provider (Provider): The provider object containing the provider type and associated secrets.
Returns:
-AwsProvider | AzureProvider | GcpProvider | KubernetesProvider: The corresponding provider class.
+AwsProvider | AzureProvider | GcpProvider | KubernetesProvider | M365Provider: The corresponding provider class.
Raises:
ValueError: If the provider type specified in `provider.provider` is not supported.
@@ -72,6 +76,8 @@ def return_prowler_provider(
prowler_provider = AzureProvider
case Provider.ProviderChoices.KUBERNETES.value:
prowler_provider = KubernetesProvider
+case Provider.ProviderChoices.M365.value:
+prowler_provider = M365Provider
case _:
raise ValueError(f"Provider type {provider.provider} not supported")
return prowler_provider
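# Call sketch: with the new branch above, an "m365" provider row now resolves
# to M365Provider (provider here is any persisted Provider instance).
prowler_cls = return_prowler_provider(provider)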
@@ -104,15 +110,15 @@ def get_prowler_provider_kwargs(provider: Provider) -> dict:
def initialize_prowler_provider(
provider: Provider,
-) -> AwsProvider | AzureProvider | GcpProvider | KubernetesProvider:
+) -> AwsProvider | AzureProvider | GcpProvider | KubernetesProvider | M365Provider:
"""Initialize a Prowler provider instance based on the given provider type.
Args:
provider (Provider): The provider object containing the provider type and associated secrets.
Returns:
-AwsProvider | AzureProvider | GcpProvider | KubernetesProvider: An instance of the corresponding provider class
-(`AwsProvider`, `AzureProvider`, `GcpProvider`, or `KubernetesProvider`) initialized with the
+AwsProvider | AzureProvider | GcpProvider | KubernetesProvider | M365Provider: An instance of the corresponding provider class
+(`AwsProvider`, `AzureProvider`, `GcpProvider`, `KubernetesProvider` or `M365Provider`) initialized with the
provider's secrets.
"""
prowler_provider = return_prowler_provider(provider)
@@ -130,7 +136,12 @@ def prowler_provider_connection_test(provider: Provider) -> Connection:
Connection: A connection object representing the result of the connection test for the specified provider.
"""
prowler_provider = return_prowler_provider(provider)
-prowler_provider_kwargs = provider.secret.secret
+try:
+prowler_provider_kwargs = provider.secret.secret
+except Provider.secret.RelatedObjectDoesNotExist as secret_error:
+return Connection(is_connected=False, error=secret_error)
return prowler_provider.test_connection(
**prowler_provider_kwargs, provider_id=provider.uid, raise_on_exception=False
)
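# Sketch of the new failure mode introduced above: callers no longer need
# their own try/except for providers that have no secret stored yet.
connection = prowler_provider_connection_test(provider)
if not connection.is_connected:
    print(f"Connection test failed: {connection.error}")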
@@ -197,3 +208,33 @@ def validate_invitation(
)
return invitation
# ToRemove after removing the fallback mechanism in /findings/metadata
def get_findings_metadata_no_aggregations(tenant_id: str, filtered_queryset):
filtered_ids = filtered_queryset.order_by().values("id")
relevant_resources = Resource.all_objects.filter(
tenant_id=tenant_id, findings__id__in=Subquery(filtered_ids)
).only("service", "region", "type")
aggregation = relevant_resources.aggregate(
services=ArrayAgg("service", flat=True),
regions=ArrayAgg("region", flat=True),
resource_types=ArrayAgg("type", flat=True),
)
services = sorted(set(aggregation["services"] or []))
regions = sorted({region for region in aggregation["regions"] or [] if region})
resource_types = sorted(set(aggregation["resource_types"] or []))
result = {
"services": services,
"regions": regions,
"resource_types": resource_types,
}
serializer = FindingMetadataSerializer(data=result)
serializer.is_valid(raise_exception=True)
return serializer.data
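# Shape of the serialized fallback payload returned above (values illustrative):
# {
#     "services": ["ec2", "s3"],
#     "regions": ["eu-west-1", "us-east-1"],
#     "resource_types": ["bucket", "instance"],
# }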
+222
@@ -0,0 +1,222 @@
from django.urls import reverse
from django_celery_results.models import TaskResult
from rest_framework import status
from rest_framework.response import Response
from api.exceptions import (
TaskFailedException,
TaskInProgressException,
TaskNotFoundException,
)
from api.models import StateChoices, Task
from api.v1.serializers import TaskSerializer
class PaginateByPkMixin:
"""
Mixin to paginate on a list of PKs (cheaper than heavy JOINs),
re-fetch the full objects with the desired select/prefetch,
re-sort them to preserve DB ordering, then serialize + return.
"""
def paginate_by_pk(
self,
request, # noqa: F841
base_queryset,
manager,
select_related: list[str] | None = None,
prefetch_related: list[str] | None = None,
) -> Response:
pk_list = base_queryset.values_list("id", flat=True)
page = self.paginate_queryset(pk_list)
if page is None:
return Response(self.get_serializer(base_queryset, many=True).data)
queryset = manager.filter(id__in=page)
if select_related:
queryset = queryset.select_related(*select_related)
if prefetch_related:
queryset = queryset.prefetch_related(*prefetch_related)
queryset = sorted(queryset, key=lambda obj: page.index(obj.id))
serialized = self.get_serializer(queryset, many=True).data
return self.get_paginated_response(serialized)
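# Hypothetical view wiring (class and relation names are assumptions, not part
# of this file): paginate on primary keys first, then hydrate only the current
# page with its relations.
class FindingViewSet(PaginateByPkMixin, viewsets.ReadOnlyModelViewSet):
    def list(self, request, *args, **kwargs):
        base_qs = self.filter_queryset(self.get_queryset()).order_by("-inserted_at")
        return self.paginate_by_pk(
            request,
            base_queryset=base_qs,
            manager=Finding.objects,
            select_related=["scan"],
            prefetch_related=["resources"],
        )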
class TaskManagementMixin:
"""
Mixin to manage task status checking.
This mixin provides functionality to check if a task with specific parameters
is running, completed, failed, or doesn't exist. It returns the task when running
and raises specific exceptions for failed/not found scenarios that can be handled
at the view level.
"""
def check_task_status(
self,
task_name: str,
task_kwargs: dict,
raise_on_failed: bool = True,
raise_on_not_found: bool = True,
) -> Task | None:
"""
Check the status of a task with given name and kwargs.
This method first checks for a related Task object, and if not found,
checks TaskResult directly. If a TaskResult is found and running but
there's no related Task, it raises TaskInProgressException.
Args:
task_name (str): The name of the task to check
task_kwargs (dict): The kwargs to match against the task
raise_on_failed (bool): Whether to raise exception if task failed
raise_on_not_found (bool): Whether to raise exception if task not found
Returns:
Task | None: The task instance if found (regardless of state), None if not found and raise_on_not_found=False
Raises:
TaskFailedException: If task failed and raise_on_failed=True
TaskNotFoundException: If task not found and raise_on_not_found=True
TaskInProgressException: If task is running but no related Task object exists
"""
# First, try to find a Task object with related TaskResult
try:
# Build the filter for task kwargs
task_filter = {
"task_runner_task__task_name": task_name,
}
# Add kwargs filters - task_kwargs is stored as text, so match by substring.
# Note: the same filter key is reassigned on every iteration, so only the last
# kwarg value effectively constrains the query.
for key, value in task_kwargs.items():
task_filter["task_runner_task__task_kwargs__contains"] = str(value)
task = (
Task.objects.filter(**task_filter)
.select_related("task_runner_task")
.order_by("-inserted_at")
.first()
)
if task:
# Get task state using the same logic as TaskSerializer
task_state_mapping = {
"PENDING": StateChoices.AVAILABLE,
"STARTED": StateChoices.EXECUTING,
"PROGRESS": StateChoices.EXECUTING,
"SUCCESS": StateChoices.COMPLETED,
"FAILURE": StateChoices.FAILED,
"REVOKED": StateChoices.CANCELLED,
}
celery_status = (
task.task_runner_task.status if task.task_runner_task else None
)
task_state = task_state_mapping.get(
celery_status or "", StateChoices.AVAILABLE
)
# Check task state and raise exceptions accordingly
if task_state in (StateChoices.FAILED, StateChoices.CANCELLED):
if raise_on_failed:
raise TaskFailedException(task=task)
return task
elif task_state == StateChoices.COMPLETED:
return None
return task
except Task.DoesNotExist:
pass
# If no Task found, check TaskResult directly
try:
# Build the filter for TaskResult
task_result_filter = {
"task_name": task_name,
}
# Add kwargs filters - as above, only the last kwarg value is applied because
# the same filter key is reassigned on each iteration.
for key, value in task_kwargs.items():
task_result_filter["task_kwargs__contains"] = str(value)
task_result = (
TaskResult.objects.filter(**task_result_filter)
.order_by("-date_created")
.first()
)
if task_result:
# Check if the TaskResult indicates a running task
if task_result.status in ["PENDING", "STARTED", "PROGRESS"]:
# Task is running but no related Task object exists
raise TaskInProgressException(task_result=task_result)
elif task_result.status == "FAILURE":
if raise_on_failed:
raise TaskFailedException(task=None)
# For other statuses (SUCCESS, REVOKED), we don't have a Task to return,
# so we treat it as not found
except TaskResult.DoesNotExist:
pass
# No task found at all
if raise_on_not_found:
raise TaskNotFoundException()
return None
def get_task_response_if_running(
self,
task_name: str,
task_kwargs: dict,
raise_on_failed: bool = True,
raise_on_not_found: bool = True,
) -> Response | None:
"""
Get a 202 response with task details if the task is currently running.
This method is useful for endpoints that should return task status when
a background task is in progress, similar to the compliance overview endpoints.
Args:
task_name (str): The name of the task to check
task_kwargs (dict): The kwargs to match against the task
raise_on_failed (bool): Passed through to check_task_status
raise_on_not_found (bool): Passed through to check_task_status
Returns:
Response | None: 202 response with task details if running, None otherwise
"""
task = self.check_task_status(
task_name=task_name,
task_kwargs=task_kwargs,
raise_on_failed=raise_on_failed,
raise_on_not_found=raise_on_not_found,
)
if not task:
return None
# Get task state
task_state_mapping = {
"PENDING": StateChoices.AVAILABLE,
"STARTED": StateChoices.EXECUTING,
"PROGRESS": StateChoices.EXECUTING,
"SUCCESS": StateChoices.COMPLETED,
"FAILURE": StateChoices.FAILED,
"REVOKED": StateChoices.CANCELLED,
}
celery_status = task.task_runner_task.status if task.task_runner_task else None
task_state = task_state_mapping.get(celery_status or "", StateChoices.AVAILABLE)
if task_state == StateChoices.EXECUTING:
self.response_serializer_class = TaskSerializer
serializer = TaskSerializer(task)
return Response(
data=serializer.data,
status=status.HTTP_202_ACCEPTED,
headers={
"Content-Location": reverse("task-detail", kwargs={"pk": task.id})
},
)
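# Endpoint sketch (task name and kwargs are illustrative): short-circuit with
# a 202 while a matching background task is still executing, otherwise fall
# through and build the finished response.
running = self.get_task_response_if_running(
    "generate_compliance_report_task",
    {"scan_id": str(scan_id)},
    raise_on_not_found=False,
)
if running is not None:
    return running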
@@ -0,0 +1,183 @@
from drf_spectacular.utils import extend_schema_field
from rest_framework_json_api import serializers
@extend_schema_field(
{
"oneOf": [
{
"type": "object",
"title": "AWS Static Credentials",
"properties": {
"aws_access_key_id": {
"type": "string",
"description": "The AWS access key ID. Required for environments where no IAM role is being "
"assumed and direct AWS access is needed.",
},
"aws_secret_access_key": {
"type": "string",
"description": "The AWS secret access key. Must accompany 'aws_access_key_id' to authorize "
"access to AWS resources.",
},
"aws_session_token": {
"type": "string",
"description": "The session token associated with temporary credentials. Only needed for "
"session-based or temporary AWS access.",
},
},
"required": ["aws_access_key_id", "aws_secret_access_key"],
},
{
"type": "object",
"title": "AWS Assume Role",
"properties": {
"role_arn": {
"type": "string",
"description": "The Amazon Resource Name (ARN) of the role to assume. Required for AWS role "
"assumption.",
},
"external_id": {
"type": "string",
"description": "An identifier to enhance security for role assumption.",
},
"aws_access_key_id": {
"type": "string",
"description": "The AWS access key ID. Only required if the environment lacks pre-configured "
"AWS credentials.",
},
"aws_secret_access_key": {
"type": "string",
"description": "The AWS secret access key. Required if 'aws_access_key_id' is provided or if "
"no AWS credentials are pre-configured.",
},
"aws_session_token": {
"type": "string",
"description": "The session token for temporary credentials, if applicable.",
},
"session_duration": {
"type": "integer",
"minimum": 900,
"maximum": 43200,
"default": 3600,
"description": "The duration (in seconds) for the role session.",
},
"role_session_name": {
"type": "string",
"description": "An identifier for the role session, useful for tracking sessions in AWS logs. "
"The regex used to validate this parameter is a string of characters consisting of "
"upper- and lower-case alphanumeric characters with no spaces. You can also include "
"underscores or any of the following characters: =,.@-\n\n"
"Examples:\n"
"- MySession123\n"
"- User_Session-1\n"
"- Test.Session@2",
"pattern": "^[a-zA-Z0-9=,.@_-]+$",
},
},
"required": ["role_arn", "external_id"],
},
{
"type": "object",
"title": "Azure Static Credentials",
"properties": {
"client_id": {
"type": "string",
"description": "The Azure application (client) ID for authentication in Azure AD.",
},
"client_secret": {
"type": "string",
"description": "The client secret associated with the application (client) ID, providing "
"secure access.",
},
"tenant_id": {
"type": "string",
"description": "The Azure tenant ID, representing the directory where the application is "
"registered.",
},
},
"required": ["client_id", "client_secret", "tenant_id"],
},
{
"type": "object",
"title": "M365 Static Credentials",
"properties": {
"client_id": {
"type": "string",
"description": "The Azure application (client) ID for authentication in Azure AD.",
},
"client_secret": {
"type": "string",
"description": "The client secret associated with the application (client) ID, providing "
"secure access.",
},
"tenant_id": {
"type": "string",
"description": "The Azure tenant ID, representing the directory where the application is "
"registered.",
},
"user": {
"type": "email",
"description": "User microsoft email address.",
},
"password": {
"type": "string",
"description": "User password.",
},
},
"required": [
"client_id",
"client_secret",
"tenant_id",
"user",
"password",
],
},
{
"type": "object",
"title": "GCP Static Credentials",
"properties": {
"client_id": {
"type": "string",
"description": "The client ID from Google Cloud, used to identify the application for GCP "
"access.",
},
"client_secret": {
"type": "string",
"description": "The client secret associated with the GCP client ID, required for secure "
"access.",
},
"refresh_token": {
"type": "string",
"description": "A refresh token that allows the application to obtain new access tokens for "
"extended use.",
},
},
"required": ["client_id", "client_secret", "refresh_token"],
},
{
"type": "object",
"title": "GCP Service Account Key",
"properties": {
"service_account_key": {
"type": "object",
"description": "The service account key for GCP.",
}
},
"required": ["service_account_key"],
},
{
"type": "object",
"title": "Kubernetes Static Credentials",
"properties": {
"kubeconfig_content": {
"type": "string",
"description": "The content of the Kubernetes kubeconfig file, encoded as a string.",
}
},
"required": ["kubeconfig_content"],
},
]
}
)
class ProviderSecretField(serializers.JSONField):
pass
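# Example payload matching the "M365 Static Credentials" variant documented in
# the schema above (all values are placeholders):
m365_secret = {
    "client_id": "00000000-0000-0000-0000-000000000000",
    "client_secret": "placeholder-secret",
    "tenant_id": "11111111-1111-1111-1111-111111111111",
    "user": "admin@contoso.onmicrosoft.com",
    "password": "placeholder-password",
}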
+243 -246
@@ -14,12 +14,12 @@ from rest_framework_simplejwt.serializers import TokenObtainPairSerializer
from rest_framework_simplejwt.tokens import RefreshToken
from api.models import (
ComplianceOverview,
Finding,
Integration,
IntegrationProviderRelationship,
Invitation,
InvitationRoleRelationship,
LighthouseConfiguration,
Membership,
Provider,
ProviderGroup,
@@ -29,8 +29,10 @@ from api.models import (
ResourceTag,
Role,
RoleProviderGroupRelationship,
+SAMLConfiguration,
Scan,
StateChoices,
+StatusChoices,
Task,
User,
UserRoleRelationship,
@@ -42,6 +44,7 @@ from api.v1.serializer_utils.integrations import (
IntegrationCredentialField,
S3ConfigSerializer,
)
+from api.v1.serializer_utils.providers import ProviderSecretField
# Tokens
@@ -851,6 +854,10 @@ class ScanSerializer(RLSSerializer):
"url",
]
+included_serializers = {
+"provider": "api.v1.serializers.ProviderIncludeSerializer",
+}
class ScanIncludeSerializer(RLSSerializer):
trigger = serializers.ChoiceField(
@@ -955,6 +962,15 @@ class ScanReportSerializer(serializers.Serializer):
fields = ["id"]
+class ScanComplianceReportSerializer(serializers.Serializer):
+id = serializers.CharField(source="scan")
+name = serializers.CharField()
+class Meta:
+resource_name = "scan-reports"
+fields = ["id", "name"]
class ResourceTagSerializer(RLSSerializer):
"""
Serializer for the ResourceTag model
@@ -1087,6 +1103,7 @@ class FindingSerializer(RLSSerializer):
"inserted_at",
"updated_at",
"first_seen_at",
"muted",
"url",
# Relationships
"scan",
@@ -1136,12 +1153,16 @@ class BaseWriteProviderSecretSerializer(BaseWriteSerializer):
serializer = GCPProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.KUBERNETES.value:
serializer = KubernetesProviderSecret(data=secret)
+elif provider_type == Provider.ProviderChoices.M365.value:
+serializer = M365ProviderSecret(data=secret)
else:
raise serializers.ValidationError(
{"provider": f"Provider type not supported {provider_type}"}
)
elif secret_type == ProviderSecret.TypeChoices.ROLE:
serializer = AWSRoleAssumptionProviderSecret(data=secret)
+elif secret_type == ProviderSecret.TypeChoices.SERVICE_ACCOUNT:
+serializer = GCPServiceAccountProviderSecret(data=secret)
else:
raise serializers.ValidationError(
{"secret_type": f"Secret type not supported: {secret_type}"}
@@ -1175,6 +1196,17 @@ class AzureProviderSecret(serializers.Serializer):
resource_name = "provider-secrets"
+class M365ProviderSecret(serializers.Serializer):
+client_id = serializers.CharField()
+client_secret = serializers.CharField()
+tenant_id = serializers.CharField()
+user = serializers.EmailField()
+password = serializers.CharField()
+class Meta:
+resource_name = "provider-secrets"
class GCPProviderSecret(serializers.Serializer):
client_id = serializers.CharField()
client_secret = serializers.CharField()
@@ -1184,6 +1216,13 @@ class GCPProviderSecret(serializers.Serializer):
resource_name = "provider-secrets"
+class GCPServiceAccountProviderSecret(serializers.Serializer):
+service_account_key = serializers.JSONField()
+class Meta:
+resource_name = "provider-secrets"
class KubernetesProviderSecret(serializers.Serializer):
kubeconfig_content = serializers.CharField()
@@ -1206,141 +1245,6 @@ class AWSRoleAssumptionProviderSecret(serializers.Serializer):
resource_name = "provider-secrets"
@extend_schema_field(
{
"oneOf": [
{
"type": "object",
"title": "AWS Static Credentials",
"properties": {
"aws_access_key_id": {
"type": "string",
"description": "The AWS access key ID. Required for environments where no IAM role is being "
"assumed and direct AWS access is needed.",
},
"aws_secret_access_key": {
"type": "string",
"description": "The AWS secret access key. Must accompany 'aws_access_key_id' to authorize "
"access to AWS resources.",
},
"aws_session_token": {
"type": "string",
"description": "The session token associated with temporary credentials. Only needed for "
"session-based or temporary AWS access.",
},
},
"required": ["aws_access_key_id", "aws_secret_access_key"],
},
{
"type": "object",
"title": "AWS Assume Role",
"properties": {
"role_arn": {
"type": "string",
"description": "The Amazon Resource Name (ARN) of the role to assume. Required for AWS role "
"assumption.",
},
"external_id": {
"type": "string",
"description": "An identifier to enhance security for role assumption.",
},
"aws_access_key_id": {
"type": "string",
"description": "The AWS access key ID. Only required if the environment lacks pre-configured "
"AWS credentials.",
},
"aws_secret_access_key": {
"type": "string",
"description": "The AWS secret access key. Required if 'aws_access_key_id' is provided or if "
"no AWS credentials are pre-configured.",
},
"aws_session_token": {
"type": "string",
"description": "The session token for temporary credentials, if applicable.",
},
"session_duration": {
"type": "integer",
"minimum": 900,
"maximum": 43200,
"default": 3600,
"description": "The duration (in seconds) for the role session.",
},
"role_session_name": {
"type": "string",
"description": "An identifier for the role session, useful for tracking sessions in AWS logs. "
"The regex used to validate this parameter is a string of characters consisting of "
"upper- and lower-case alphanumeric characters with no spaces. You can also include "
"underscores or any of the following characters: =,.@-\n\n"
"Examples:\n"
"- MySession123\n"
"- User_Session-1\n"
"- Test.Session@2",
"pattern": "^[a-zA-Z0-9=,.@_-]+$",
},
},
"required": ["role_arn", "external_id"],
},
{
"type": "object",
"title": "Azure Static Credentials",
"properties": {
"client_id": {
"type": "string",
"description": "The Azure application (client) ID for authentication in Azure AD.",
},
"client_secret": {
"type": "string",
"description": "The client secret associated with the application (client) ID, providing "
"secure access.",
},
"tenant_id": {
"type": "string",
"description": "The Azure tenant ID, representing the directory where the application is "
"registered.",
},
},
"required": ["client_id", "client_secret", "tenant_id"],
},
{
"type": "object",
"title": "GCP Static Credentials",
"properties": {
"client_id": {
"type": "string",
"description": "The client ID from Google Cloud, used to identify the application for GCP "
"access.",
},
"client_secret": {
"type": "string",
"description": "The client secret associated with the GCP client ID, required for secure "
"access.",
},
"refresh_token": {
"type": "string",
"description": "A refresh token that allows the application to obtain new access tokens for "
"extended use.",
},
},
"required": ["client_id", "client_secret", "refresh_token"],
},
{
"type": "object",
"title": "Kubernetes Static Credentials",
"properties": {
"kubeconfig_content": {
"type": "string",
"description": "The content of the Kubernetes kubeconfig file, encoded as a string.",
}
},
"required": ["kubeconfig_content"],
},
]
}
)
class ProviderSecretField(serializers.JSONField):
pass
class ProviderSecretSerializer(RLSSerializer):
"""
Serializer for the ProviderSecret model.
@@ -1777,124 +1681,64 @@ class RoleProviderGroupRelationshipSerializer(RLSSerializer, BaseWriteSerializer
# Compliance overview
-class ComplianceOverviewSerializer(RLSSerializer):
+class ComplianceOverviewSerializer(serializers.Serializer):
"""
-Serializer for the ComplianceOverview model.
+Serializer for compliance requirement status aggregated by compliance framework.
+
+This serializer is used to format aggregated compliance framework data,
+providing counts of passed, failed, and manual requirements along with
+an overall global status for each framework.
"""
requirements_status = serializers.SerializerMethodField(
read_only=True, method_name="get_requirements_status"
)
provider_type = serializers.SerializerMethodField(read_only=True)
# Add ID field which will be used for resource identification
id = serializers.CharField()
framework = serializers.CharField()
version = serializers.CharField()
requirements_passed = serializers.IntegerField()
requirements_failed = serializers.IntegerField()
requirements_manual = serializers.IntegerField()
total_requirements = serializers.IntegerField()
class Meta:
model = ComplianceOverview
fields = [
"id",
"inserted_at",
"compliance_id",
"framework",
"version",
"requirements_status",
"region",
"provider_type",
"scan",
"url",
]
@extend_schema_field(
{
"type": "object",
"properties": {
"passed": {"type": "integer"},
"failed": {"type": "integer"},
"manual": {"type": "integer"},
"total": {"type": "integer"},
},
}
)
def get_requirements_status(self, obj):
return {
"passed": obj.requirements_passed,
"failed": obj.requirements_failed,
"manual": obj.requirements_manual,
"total": obj.total_requirements,
}
@extend_schema_field(serializers.CharField(allow_null=True))
def get_provider_type(self, obj):
"""
Retrieves the provider type from scan.provider.provider.
"""
try:
return obj.scan.provider.provider
except AttributeError:
return None
class JSONAPIMeta:
resource_name = "compliance-overviews"
class ComplianceOverviewFullSerializer(ComplianceOverviewSerializer):
requirements = serializers.SerializerMethodField(read_only=True)
class ComplianceOverviewDetailSerializer(serializers.Serializer):
"""
Serializer for detailed compliance requirement information.
class Meta(ComplianceOverviewSerializer.Meta):
fields = ComplianceOverviewSerializer.Meta.fields + [
"description",
"requirements",
]
This serializer formats the aggregated requirement data, showing detailed status
and counts for each requirement across all regions.
"""
@extend_schema_field(
{
"type": "object",
"properties": {
"requirement_id": {
"type": "object",
"properties": {
"name": {"type": "string"},
"checks": {
"type": "object",
"properties": {
"check_name": {
"type": "object",
"properties": {
"status": {
"type": "string",
"enum": ["PASS", "FAIL", None],
},
},
}
},
"description": "Each key in the 'checks' object is a check name, with values as "
"'PASS', 'FAIL', or null.",
},
"status": {
"type": "string",
"enum": ["PASS", "FAIL", "MANUAL"],
},
"attributes": {
"type": "array",
"items": {
"type": "object",
},
},
"description": {"type": "string"},
"checks_status": {
"type": "object",
"properties": {
"total": {"type": "integer"},
"pass": {"type": "integer"},
"fail": {"type": "integer"},
"manual": {"type": "integer"},
},
},
},
}
},
}
)
def get_requirements(self, obj):
"""
Returns the detailed structure of requirements.
"""
return obj.requirements
id = serializers.CharField()
framework = serializers.CharField()
version = serializers.CharField()
description = serializers.CharField()
status = serializers.ChoiceField(choices=StatusChoices.choices)
class JSONAPIMeta:
resource_name = "compliance-requirements-details"
class ComplianceOverviewAttributesSerializer(serializers.Serializer):
id = serializers.CharField()
framework_description = serializers.CharField()
name = serializers.CharField()
framework = serializers.CharField()
version = serializers.CharField()
description = serializers.CharField()
attributes = serializers.JSONField()
class JSONAPIMeta:
resource_name = "compliance-requirements-attributes"
class ComplianceOverviewMetadataSerializer(serializers.Serializer):
regions = serializers.ListField(child=serializers.CharField(), allow_empty=True)
class JSONAPIMeta:
resource_name = "compliance-overviews-metadata"
# Overviews
@@ -2219,3 +2063,156 @@ class IntegrationUpdateSerializer(BaseWriteIntegrationSerializer):
IntegrationProviderRelationship.objects.bulk_create(new_relationships)
return super().update(instance, validated_data)
# SSO
class SamlInitiateSerializer(serializers.Serializer):
email_domain = serializers.CharField()
class JSONAPIMeta:
resource_name = "saml-initiate"
class SamlMetadataSerializer(serializers.Serializer):
class JSONAPIMeta:
resource_name = "saml-meta"
class SAMLConfigurationSerializer(RLSSerializer):
class Meta:
model = SAMLConfiguration
fields = ["id", "email_domain", "metadata_xml", "created_at", "updated_at"]
read_only_fields = ["id", "created_at", "updated_at"]
class LighthouseConfigSerializer(RLSSerializer):
"""
Serializer for the LighthouseConfig model.
"""
api_key = serializers.CharField(required=False)
class Meta:
model = LighthouseConfiguration
fields = [
"id",
"name",
"api_key",
"model",
"temperature",
"max_tokens",
"business_context",
"is_active",
"inserted_at",
"updated_at",
"url",
]
extra_kwargs = {
"id": {"read_only": True},
"is_active": {"read_only": True},
"inserted_at": {"read_only": True},
"updated_at": {"read_only": True},
}
def to_representation(self, instance):
data = super().to_representation(instance)
# Check if api_key is specifically requested in fields param
fields_param = self.context.get("request", None) and self.context[
"request"
].query_params.get("fields[lighthouse-config]", "")
if fields_param == "api_key":
# Return decrypted key if specifically requested
data["api_key"] = instance.api_key_decoded if instance.api_key else None
else:
# Return masked key for general requests
data["api_key"] = "*" * len(instance.api_key) if instance.api_key else None
return data
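A minimal sketch of the resulting API behavior, assuming a standard JSON:API sparse-fieldset request; the endpoint path and id below are illustrative, not taken from the diff:
# Illustrative requests against the serializer above (path and id are placeholders):
# GET /api/v1/lighthouse-configurations/<id>
#   -> "api_key": "********" (masked; one asterisk per stored character)
# GET /api/v1/lighthouse-configurations/<id>?fields[lighthouse-config]=api_key
#   -> "api_key": "sk-..." (decrypted, returned only for this explicit field request)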
class LighthouseConfigCreateSerializer(RLSSerializer, BaseWriteSerializer):
"""Serializer for creating new Lighthouse configurations."""
api_key = serializers.CharField(write_only=True, required=True)
class Meta:
model = LighthouseConfiguration
fields = [
"id",
"name",
"api_key",
"model",
"temperature",
"max_tokens",
"business_context",
"is_active",
"inserted_at",
"updated_at",
]
extra_kwargs = {
"id": {"read_only": True},
"is_active": {"read_only": True},
"inserted_at": {"read_only": True},
"updated_at": {"read_only": True},
}
def validate(self, attrs):
tenant_id = self.context.get("request").tenant_id
if LighthouseConfiguration.objects.filter(tenant_id=tenant_id).exists():
raise serializers.ValidationError(
{
"tenant_id": "Lighthouse configuration already exists for this tenant."
}
)
return super().validate(attrs)
def create(self, validated_data):
api_key = validated_data.pop("api_key")
instance = super().create(validated_data)
instance.api_key_decoded = api_key
instance.save()
return instance
def to_representation(self, instance):
data = super().to_representation(instance)
# Always mask the API key in the response
data["api_key"] = "*" * len(instance.api_key) if instance.api_key else None
return data
class LighthouseConfigUpdateSerializer(BaseWriteSerializer):
"""
Serializer for updating LighthouseConfig instances.
"""
api_key = serializers.CharField(write_only=True, required=False)
class Meta:
model = LighthouseConfiguration
fields = [
"id",
"name",
"api_key",
"model",
"temperature",
"max_tokens",
"business_context",
"is_active",
]
extra_kwargs = {
"id": {"read_only": True},
"is_active": {"read_only": True},
"name": {"required": False},
"model": {"required": False},
"temperature": {"required": False},
"max_tokens": {"required": False},
}
def update(self, instance, validated_data):
api_key = validated_data.pop("api_key", None)
instance = super().update(instance, validated_data)
if api_key:
instance.api_key_decoded = api_key
instance.save()
return instance
+21
@@ -13,6 +13,7 @@ from api.v1.views import (
IntegrationViewSet,
InvitationAcceptViewSet,
InvitationViewSet,
LighthouseConfigViewSet,
MembershipViewSet,
OverviewViewSet,
ProviderGroupProvidersRelationshipView,
@@ -22,10 +23,13 @@ from api.v1.views import (
ResourceViewSet,
RoleProviderGroupRelationshipView,
RoleViewSet,
SAMLConfigurationViewSet,
SAMLInitiateAPIView,
ScanViewSet,
ScheduleViewSet,
SchemaView,
TaskViewSet,
TenantFinishACSView,
TenantMembersViewSet,
TenantViewSet,
UserRoleRelationshipView,
@@ -49,6 +53,12 @@ router.register(
router.register(r"overviews", OverviewViewSet, basename="overview")
router.register(r"schedules", ScheduleViewSet, basename="schedule")
router.register(r"integrations", IntegrationViewSet, basename="integration")
router.register(r"saml-config", SAMLConfigurationViewSet, basename="saml-config")
router.register(
r"lighthouse-configurations",
LighthouseConfigViewSet,
basename="lighthouseconfiguration",
)
tenants_router = routers.NestedSimpleRouter(router, r"tenants", lookup="tenant")
tenants_router.register(
@@ -112,6 +122,17 @@ urlpatterns = [
),
name="provider_group-providers-relationship",
),
# API endpoint to start SAML SSO flow
path(
"auth/saml/initiate/", SAMLInitiateAPIView.as_view(), name="api_saml_initiate"
),
# Allauth SAML endpoints for tenants
path("accounts/", include("allauth.urls")),
path(
"api/v1/accounts/saml/<organization_slug>/acs/finish/",
TenantFinishACSView.as_view(),
name="saml_finish_acs",
),
path("tokens/google", GoogleSocialLoginView.as_view(), name="token-google"),
path("tokens/github", GithubSocialLoginView.as_view(), name="token-github"),
path("", include(router.urls)),
File diff suppressed because it is too large
+9 -2
@@ -1,6 +1,13 @@
import warnings
from celery import Celery, Task
from config.env import env
# Suppress specific warnings from dj-rest-auth: https://github.com/iMerica/dj-rest-auth/issues/684
warnings.filterwarnings(
"ignore", category=UserWarning, module="dj_rest_auth.registration.serializers"
)
BROKER_VISIBILITY_TIMEOUT = env.int("DJANGO_BROKER_VISIBILITY_TIMEOUT", default=86400)
celery_app = Celery("tasks")
@@ -50,9 +57,9 @@ class RLSTask(Task):
tenant_id = kwargs.get("tenant_id")
with rls_transaction(tenant_id):
APITask.objects.create(
APITask.objects.update_or_create(
id=task_result_instance.task_id,
tenant_id=tenant_id,
task_runner_task=task_result_instance,
defaults={"task_runner_task": task_result_instance},
)
return result
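A short note on the switch to update_or_create: re-delivering a task with the same id (for example after the broker visibility timeout expires) no longer collides with the existing APITask row. A minimal sketch of the semantics, with placeholder ids:
# Illustrative only; ids are placeholders.
# First delivery:  update_or_create(id="task-1", tenant_id="t-1", ...) -> row created
# Redelivery:      update_or_create(id="task-1", tenant_id="t-1", ...) -> same row updated in place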
+12
@@ -10,6 +10,7 @@ from config.settings.social_login import * # noqa
SECRET_KEY = env("SECRET_KEY", default="secret")
DEBUG = env.bool("DJANGO_DEBUG", default=False)
ALLOWED_HOSTS = ["localhost", "127.0.0.1"]
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
# Application definition
@@ -26,16 +27,19 @@ INSTALLED_APPS = [
"rest_framework",
"corsheaders",
"drf_spectacular",
"drf_spectacular_jsonapi",
"django_guid",
"rest_framework_json_api",
"django_celery_results",
"django_celery_beat",
"rest_framework_simplejwt.token_blacklist",
"allauth",
"django.contrib.sites",
"allauth.account",
"allauth.socialaccount",
"allauth.socialaccount.providers.google",
"allauth.socialaccount.providers.github",
"allauth.socialaccount.providers.saml",
"dj_rest_auth.registration",
"rest_framework.authtoken",
]
@@ -111,6 +115,7 @@ SPECTACULAR_SETTINGS = {
"PREPROCESSING_HOOKS": [
"drf_spectacular_jsonapi.hooks.fix_nested_path_parameters",
],
"TITLE": "API Reference - Prowler",
}
WSGI_APPLICATION = "config.wsgi.application"
@@ -236,3 +241,10 @@ DJANGO_OUTPUT_S3_AWS_SECRET_ACCESS_KEY = env.str(
)
DJANGO_OUTPUT_S3_AWS_SESSION_TOKEN = env.str("DJANGO_OUTPUT_S3_AWS_SESSION_TOKEN", "")
DJANGO_OUTPUT_S3_AWS_DEFAULT_REGION = env.str("DJANGO_OUTPUT_S3_AWS_DEFAULT_REGION", "")
# HTTP Security Headers
SECURE_CONTENT_TYPE_NOSNIFF = True
X_FRAME_OPTIONS = "DENY"
SECURE_REFERRER_POLICY = "strict-origin-when-cross-origin"
DJANGO_DELETION_BATCH_SIZE = env.int("DJANGO_DELETION_BATCH_SIZE", 5000)
+24 -3
@@ -12,6 +12,8 @@ IGNORED_EXCEPTIONS = [
"UnauthorizedOperation",
"AuthFailure",
"InvalidClientTokenId",
"AWSInvalidProviderIdError",
"InternalServerErrorException",
"AccessDenied",
"No Shodan API Key", # Shodan Check
"RequestLimitExceeded", # For now we don't want to log the RequestLimitExceeded errors
@@ -33,6 +35,13 @@ IGNORED_EXCEPTIONS = [
"ValidationException",
"AWSSecretAccessKeyInvalidError",
"InvalidAction",
"InvalidRequestException",
"RequestExpired",
"ConnectionClosedError",
"MaxRetryError",
"AWSAccessKeyIDInvalidError",
"AWSSessionTokenExpiredError",
"EndpointConnectionError", # AWS Service is not available in a region
"Pool is closed", # The following comes from urllib3: eu-west-1 -- HTTPClientError[126]: An HTTP Client raised an unhandled exception: AWSHTTPSConnectionPool(host='hostname.s3.eu-west-1.amazonaws.com', port=443): Pool is closed.
# Authentication Errors from GCP
"ClientAuthenticationError",
@@ -41,6 +50,8 @@ IGNORED_EXCEPTIONS = [
"Permission denied to get service",
"API has not been used in project",
"HttpError 404 when requesting",
"HttpError 403 when requesting",
"HttpError 400 when requesting",
"GCPNoAccesibleProjectsError",
# Authentication Errors from Azure
"ClientAuthenticationError",
@@ -49,6 +60,7 @@ IGNORED_EXCEPTIONS = [
"AzureNotValidClientIdError",
"AzureNotValidClientSecretError",
"AzureNotValidTenantIdError",
"AzureInvalidProviderIdError",
"AzureTenantIdAndClientSecretNotBelongingToClientIdError",
"AzureTenantIdAndClientIdNotBelongingToClientSecretError",
"AzureClientIdAndClientSecretNotBelongingToTenantIdError",
@@ -67,9 +79,16 @@ def before_send(event, hint):
log_msg = hint["log_record"].msg
log_lvl = hint["log_record"].levelno
# Handle Error events and discard the rest
if log_lvl == 40 and any(ignored in log_msg for ignored in IGNORED_EXCEPTIONS):
return
# Drop events at Error level and below when the message matches an ignored exception
if log_lvl <= 40 and any(ignored in log_msg for ignored in IGNORED_EXCEPTIONS):
return None # Explicitly return None to drop the event
# Ignore exceptions with the ignored_exceptions
if "exc_info" in hint and hint["exc_info"]:
exc_value = str(hint["exc_info"][1])
if any(ignored in exc_value for ignored in IGNORED_EXCEPTIONS):
return None # Explicitly return None to drop the event
return event
@@ -85,4 +104,6 @@ sentry_sdk.init(
# possible.
"continuous_profiling_auto_start": True,
},
attach_stacktrace=True,
ignore_errors=IGNORED_EXCEPTIONS,
)
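A minimal sketch of the new filtering, assuming before_send receives a logging-based hint shaped as in the code above; the record below is fabricated for illustration:
# Illustrative only: "AccessDenied" is in IGNORED_EXCEPTIONS, so a level-40 event is dropped.
class _FakeLogRecord:
    msg = "AccessDenied when calling the ListBuckets operation"
    levelno = 40  # logging.ERROR

assert before_send({"message": _FakeLogRecord.msg}, {"log_record": _FakeLogRecord()}) is None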
@@ -1,18 +1,17 @@
from config.env import env
# Google Oauth settings
GOOGLE_OAUTH_CLIENT_ID = env("DJANGO_GOOGLE_OAUTH_CLIENT_ID", default="")
GOOGLE_OAUTH_CLIENT_SECRET = env("DJANGO_GOOGLE_OAUTH_CLIENT_SECRET", default="")
GOOGLE_OAUTH_CALLBACK_URL = env("DJANGO_GOOGLE_OAUTH_CALLBACK_URL", default="")
# Provider Oauth settings
GOOGLE_OAUTH_CLIENT_ID = env("SOCIAL_GOOGLE_OAUTH_CLIENT_ID", default="")
GOOGLE_OAUTH_CLIENT_SECRET = env("SOCIAL_GOOGLE_OAUTH_CLIENT_SECRET", default="")
GOOGLE_OAUTH_CALLBACK_URL = env("SOCIAL_GOOGLE_OAUTH_CALLBACK_URL", default="")
GITHUB_OAUTH_CLIENT_ID = env("DJANGO_GITHUB_OAUTH_CLIENT_ID", default="")
GITHUB_OAUTH_CLIENT_SECRET = env("DJANGO_GITHUB_OAUTH_CLIENT_SECRET", default="")
GITHUB_OAUTH_CALLBACK_URL = env("DJANGO_GITHUB_OAUTH_CALLBACK_URL", default="")
GITHUB_OAUTH_CLIENT_ID = env("SOCIAL_GITHUB_OAUTH_CLIENT_ID", default="")
GITHUB_OAUTH_CLIENT_SECRET = env("SOCIAL_GITHUB_OAUTH_CLIENT_SECRET", default="")
GITHUB_OAUTH_CALLBACK_URL = env("SOCIAL_GITHUB_OAUTH_CALLBACK_URL", default="")
# Allauth settings
ACCOUNT_LOGIN_METHODS = {"email"} # Use Email / Password authentication
ACCOUNT_USERNAME_REQUIRED = False
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_SIGNUP_FIELDS = ["email*", "password1*", "password2*"]
ACCOUNT_EMAIL_VERIFICATION = "none" # Do not require email confirmation
ACCOUNT_USER_MODEL_USERNAME_FIELD = None
REST_AUTH = {
@@ -25,6 +24,11 @@ SOCIALACCOUNT_EMAIL_AUTHENTICATION = True
# Connect local account and social account if local account with that email address already exists
SOCIALACCOUNT_EMAIL_AUTHENTICATION_AUTO_CONNECT = True
SOCIALACCOUNT_ADAPTER = "api.adapters.ProwlerSocialAccountAdapter"
# SAML keys
SAML_PUBLIC_CERT = env("SAML_PUBLIC_CERT", default="")
SAML_PRIVATE_KEY = env("SAML_PRIVATE_KEY", default="")
SOCIALACCOUNT_PROVIDERS = {
"google": {
"APP": {
@@ -50,4 +54,18 @@ SOCIALACCOUNT_PROVIDERS = {
"read:org",
],
},
"saml": {
"use_nameid_for_email": True,
"sp": {
"entity_id": "urn:prowler.com:sp",
},
"advanced": {
"x509cert": SAML_PUBLIC_CERT,
"private_key": SAML_PRIVATE_KEY,
"name_id_format": "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress",
"authn_request_signed": True,
"want_assertion_signed": True,
"want_message_signed": True,
},
},
}
+255 -1
@@ -1,8 +1,9 @@
import logging
from datetime import datetime, timedelta, timezone
from unittest.mock import patch
from unittest.mock import MagicMock, patch
import pytest
from allauth.socialaccount.models import SocialLogin
from django.conf import settings
from django.db import connection as django_connection
from django.db import connections as django_connections
@@ -10,14 +11,17 @@ from django.urls import reverse
from django_celery_results.models import TaskResult
from rest_framework import status
from rest_framework.test import APIClient
from tasks.jobs.backfill import backfill_resource_scan_summaries
from api.db_utils import rls_transaction
from api.models import (
ComplianceOverview,
ComplianceRequirementOverview,
Finding,
Integration,
IntegrationProviderRelationship,
Invitation,
LighthouseConfiguration,
Membership,
Provider,
ProviderGroup,
@@ -25,9 +29,12 @@ from api.models import (
Resource,
ResourceTag,
Role,
SAMLConfiguration,
SAMLDomainIndex,
Scan,
ScanSummary,
StateChoices,
StatusChoices,
Task,
User,
UserRoleRelationship,
@@ -655,6 +662,7 @@ def findings_fixture(scans_fixture, resources_fixture):
"Description": "test description orange juice",
},
first_seen_at="2024-01-02T00:00:00Z",
muted=True,
)
finding2.add_resources([resource2])
@@ -775,6 +783,131 @@ def compliance_overviews_fixture(scans_fixture, tenants_fixture):
return compliance_overview1, compliance_overview2
@pytest.fixture
def compliance_requirements_overviews_fixture(scans_fixture, tenants_fixture):
"""Fixture for ComplianceRequirementOverview objects used by the new ComplianceOverviewViewSet."""
tenant = tenants_fixture[0]
scan1, scan2, scan3 = scans_fixture
# Create ComplianceRequirementOverview objects for scan1
requirement_overview1 = ComplianceRequirementOverview.objects.create(
tenant=tenant,
scan=scan1,
compliance_id="aws_account_security_onboarding_aws",
framework="AWS-Account-Security-Onboarding",
version="1.0",
description="Description for AWS Account Security Onboarding",
region="eu-west-1",
requirement_id="requirement1",
requirement_status=StatusChoices.PASS,
passed_checks=2,
failed_checks=0,
total_checks=2,
)
requirement_overview2 = ComplianceRequirementOverview.objects.create(
tenant=tenant,
scan=scan1,
compliance_id="aws_account_security_onboarding_aws",
framework="AWS-Account-Security-Onboarding",
version="1.0",
description="Description for AWS Account Security Onboarding",
region="eu-west-1",
requirement_id="requirement2",
requirement_status=StatusChoices.PASS,
passed_checks=2,
failed_checks=0,
total_checks=2,
)
requirement_overview3 = ComplianceRequirementOverview.objects.create(
tenant=tenant,
scan=scan1,
compliance_id="aws_account_security_onboarding_aws",
framework="AWS-Account-Security-Onboarding",
version="1.0",
description="Description for AWS Account Security Onboarding",
region="eu-west-2",
requirement_id="requirement1",
requirement_status=StatusChoices.PASS,
passed_checks=2,
failed_checks=0,
total_checks=2,
)
requirement_overview4 = ComplianceRequirementOverview.objects.create(
tenant=tenant,
scan=scan1,
compliance_id="aws_account_security_onboarding_aws",
framework="AWS-Account-Security-Onboarding",
version="1.0",
description="Description for AWS Account Security Onboarding",
region="eu-west-2",
requirement_id="requirement2",
requirement_status=StatusChoices.FAIL,
passed_checks=1,
failed_checks=1,
total_checks=2,
)
requirement_overview5 = ComplianceRequirementOverview.objects.create(
tenant=tenant,
scan=scan1,
compliance_id="aws_account_security_onboarding_aws",
framework="AWS-Account-Security-Onboarding",
version="1.0",
description="Description for AWS Account Security Onboarding (MANUAL)",
region="eu-west-2",
requirement_id="requirement3",
requirement_status=StatusChoices.MANUAL,
passed_checks=0,
failed_checks=0,
total_checks=0,
)
# Create a different compliance framework for testing
requirement_overview6 = ComplianceRequirementOverview.objects.create(
tenant=tenant,
scan=scan1,
compliance_id="cis_1.4_aws",
framework="CIS-1.4-AWS",
version="1.4",
description="CIS AWS Foundations Benchmark v1.4.0",
region="eu-west-1",
requirement_id="cis_requirement1",
requirement_status=StatusChoices.FAIL,
passed_checks=0,
failed_checks=3,
total_checks=3,
)
# Create another compliance framework for testing MITRE ATT&CK
requirement_overview7 = ComplianceRequirementOverview.objects.create(
tenant=tenant,
scan=scan1,
compliance_id="mitre_attack_aws",
framework="MITRE-ATTACK",
version="1.0",
description="MITRE ATT&CK",
region="eu-west-1",
requirement_id="mitre_requirement1",
requirement_status=StatusChoices.FAIL,
passed_checks=0,
failed_checks=0,
total_checks=0,
)
return (
requirement_overview1,
requirement_overview2,
requirement_overview3,
requirement_overview4,
requirement_overview5,
requirement_overview6,
requirement_overview7,
)
def get_api_tokens(
api_client, user_email: str, user_password: str, tenant_id: str = None
) -> tuple[str, str]:
@@ -919,6 +1052,127 @@ def integrations_fixture(providers_fixture):
return integration1, integration2
@pytest.fixture
def backfill_scan_metadata_fixture(scans_fixture, findings_fixture):
for scan_instance in scans_fixture:
tenant_id = scan_instance.tenant_id
scan_id = scan_instance.id
backfill_resource_scan_summaries(tenant_id=tenant_id, scan_id=scan_id)
@pytest.fixture
def lighthouse_config_fixture(authenticated_client, tenants_fixture):
return LighthouseConfiguration.objects.create(
tenant_id=tenants_fixture[0].id,
name="OpenAI",
api_key_decoded="sk-test1234567890T3BlbkFJtest1234567890",
model="gpt-4o",
temperature=0,
max_tokens=4000,
business_context="Test business context",
is_active=True,
)
@pytest.fixture(scope="function")
def latest_scan_finding(authenticated_client, providers_fixture, resources_fixture):
provider = providers_fixture[0]
tenant_id = str(providers_fixture[0].tenant_id)
resource = resources_fixture[0]
scan = Scan.objects.create(
name="latest completed scan",
provider=provider,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant_id=tenant_id,
)
finding = Finding.objects.create(
tenant_id=tenant_id,
uid="test_finding_uid_1",
scan=scan,
delta="new",
status=Status.FAIL,
status_extended="test status extended ",
impact=Severity.critical,
impact_extended="test impact extended one",
severity=Severity.critical,
raw_result={
"status": Status.FAIL,
"impact": Severity.critical,
"severity": Severity.critical,
},
tags={"test": "dev-qa"},
check_id="test_check_id",
check_metadata={
"CheckId": "test_check_id",
"Description": "test description apple sauce",
},
first_seen_at="2024-01-02T00:00:00Z",
)
finding.add_resources([resource])
backfill_resource_scan_summaries(tenant_id, str(scan.id))
return finding
@pytest.fixture
def saml_setup(tenants_fixture):
tenant_id = tenants_fixture[0].id
domain = "example.com"
SAMLDomainIndex.objects.create(email_domain=domain, tenant_id=tenant_id)
metadata_xml = """<?xml version='1.0' encoding='UTF-8'?>
<md:EntityDescriptor entityID='TEST' xmlns:md='urn:oasis:names:tc:SAML:2.0:metadata'>
<md:IDPSSODescriptor WantAuthnRequestsSigned='false' protocolSupportEnumeration='urn:oasis:names:tc:SAML:2.0:protocol'>
<md:KeyDescriptor use='signing'>
<ds:KeyInfo xmlns:ds='http://www.w3.org/2000/09/xmldsig#'>
<ds:X509Data>
<ds:X509Certificate>TEST</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</md:KeyDescriptor>
<md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
<md:SingleSignOnService Binding='urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST' Location='https://TEST/sso/saml'/>
<md:SingleSignOnService Binding='urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect' Location='https://TEST/sso/saml'/>
</md:IDPSSODescriptor>
</md:EntityDescriptor>
"""
SAMLConfiguration.objects.create(
tenant_id=str(tenant_id),
email_domain=domain,
metadata_xml=metadata_xml,
)
return {
"email": f"user@{domain}",
"domain": domain,
"tenant_id": tenant_id,
}
@pytest.fixture
def saml_sociallogin(users_fixture):
user = users_fixture[0]
user.email = "samlsso@acme.com"
extra_data = {
"firstName": ["Test"],
"lastName": ["User"],
"organization": ["Prowler"],
"userType": ["member"],
}
account = MagicMock()
account.provider = "saml"
account.extra_data = extra_data
sociallogin = MagicMock(spec=SocialLogin)
sociallogin.account = account
sociallogin.user = user
return sociallogin
def get_authorization_header(access_token: str) -> dict:
return {"Authorization": f"Bearer {access_token}"}
+6
@@ -3,6 +3,12 @@
import os
import sys
import warnings
# Suppress specific warnings from dj-rest-auth: https://github.com/iMerica/dj-rest-auth/issues/684
warnings.filterwarnings(
"ignore", category=UserWarning, module="dj_rest_auth.registration.serializers"
)
def main():
+61
@@ -0,0 +1,61 @@
from api.db_utils import rls_transaction
from api.models import (
Resource,
ResourceFindingMapping,
ResourceScanSummary,
Scan,
StateChoices,
)
def backfill_resource_scan_summaries(tenant_id: str, scan_id: str):
with rls_transaction(tenant_id):
if ResourceScanSummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).exists():
return {"status": "already backfilled"}
with rls_transaction(tenant_id):
if not Scan.objects.filter(
tenant_id=tenant_id,
id=scan_id,
state__in=(StateChoices.COMPLETED, StateChoices.FAILED),
).exists():
return {"status": "scan is not completed"}
resource_ids_qs = (
ResourceFindingMapping.objects.filter(
tenant_id=tenant_id, finding__scan_id=scan_id
)
.values_list("resource_id", flat=True)
.distinct()
)
resource_ids = list(resource_ids_qs)
if not resource_ids:
return {"status": "no resources to backfill"}
resources_qs = Resource.objects.filter(
tenant_id=tenant_id, id__in=resource_ids
).only("id", "service", "region", "type")
summaries = []
for resource in resources_qs.iterator():
summaries.append(
ResourceScanSummary(
tenant_id=tenant_id,
scan_id=scan_id,
resource_id=str(resource.id),
service=resource.service,
region=resource.region,
resource_type=resource.type,
)
)
for i in range(0, len(summaries), 500):
ResourceScanSummary.objects.bulk_create(
summaries[i : i + 500], ignore_conflicts=True
)
return {"status": "backfilled", "inserted": len(summaries)}
+45 -1
@@ -1,8 +1,9 @@
from datetime import datetime, timezone
import openai
from celery.utils.log import get_task_logger
from api.models import Provider
from api.models import LighthouseConfiguration, Provider
from api.utils import prowler_provider_connection_test
logger = get_task_logger(__name__)
@@ -39,3 +40,46 @@ def check_provider_connection(provider_id: str):
connection_error = f"{connection_result.error}" if connection_result.error else None
return {"connected": connection_result.is_connected, "error": connection_error}
def check_lighthouse_connection(lighthouse_config_id: str):
"""
Business logic to check the connection status of a Lighthouse configuration.
Args:
lighthouse_config_id (str): The primary key of the LighthouseConfiguration instance to check.
Returns:
dict: A dictionary containing:
- 'connected' (bool): Indicates whether the connection is successful.
- 'error' (str or None): The error message if the connection failed, otherwise `None`.
- 'available_models' (list): List of available models if connection is successful.
Raises:
LighthouseConfiguration.DoesNotExist: If the lighthouse configuration does not exist.
"""
lighthouse_config = LighthouseConfiguration.objects.get(pk=lighthouse_config_id)
if not lighthouse_config.api_key_decoded:
lighthouse_config.is_active = False
lighthouse_config.save()
return {
"connected": False,
"error": "API key is invalid or missing.",
"available_models": [],
}
try:
client = openai.OpenAI(api_key=lighthouse_config.api_key_decoded)
models = client.models.list()
lighthouse_config.is_active = True
lighthouse_config.save()
return {
"connected": True,
"error": None,
"available_models": [model.id for model in models.data],
}
except Exception as e:
lighthouse_config.is_active = False
lighthouse_config.save()
return {"connected": False, "error": str(e), "available_models": []}
+27 -29
View File
@@ -1,5 +1,5 @@
from celery.utils.log import get_task_logger
from django.db import transaction
from django.db import DatabaseError
from api.db_router import MainRouter
from api.db_utils import batch_delete, rls_transaction
@@ -8,11 +8,12 @@ from api.models import Finding, Provider, Resource, Scan, ScanSummary, Tenant
logger = get_task_logger(__name__)
def delete_provider(pk: str):
def delete_provider(tenant_id: str, pk: str):
"""
Gracefully deletes an instance of a provider along with its related data.
Args:
tenant_id (str): Tenant ID the resources belong to.
pk (str): The primary key of the Provider instance to delete.
Returns:
@@ -22,33 +23,31 @@ def delete_provider(pk: str):
Raises:
Provider.DoesNotExist: If no instance with the provided primary key exists.
"""
instance = Provider.all_objects.get(pk=pk)
deletion_summary = {}
with rls_transaction(tenant_id):
instance = Provider.all_objects.get(pk=pk)
deletion_summary = {}
deletion_steps = [
("Scan Summaries", ScanSummary.all_objects.filter(scan__provider=instance)),
("Findings", Finding.all_objects.filter(scan__provider=instance)),
("Resources", Resource.all_objects.filter(provider=instance)),
("Scans", Scan.all_objects.filter(provider=instance)),
]
with transaction.atomic():
# Delete Scan Summaries
scan_summaries_qs = ScanSummary.all_objects.filter(scan__provider=instance)
_, scans_summ_summary = batch_delete(scan_summaries_qs)
deletion_summary.update(scans_summ_summary)
for step_name, queryset in deletion_steps:
try:
_, step_summary = batch_delete(tenant_id, queryset)
deletion_summary.update(step_summary)
except DatabaseError as db_error:
logger.error(f"Error deleting {step_name}: {db_error}")
raise
# Delete Findings
findings_qs = Finding.all_objects.filter(scan__provider=instance)
_, findings_summary = batch_delete(findings_qs)
deletion_summary.update(findings_summary)
# Delete Resources
resources_qs = Resource.all_objects.filter(provider=instance)
_, resources_summary = batch_delete(resources_qs)
deletion_summary.update(resources_summary)
# Delete Scans
scans_qs = Scan.all_objects.filter(provider=instance)
_, scans_summary = batch_delete(scans_qs)
deletion_summary.update(scans_summary)
provider_deleted_count, provider_summary = instance.delete()
try:
with rls_transaction(tenant_id):
_, provider_summary = instance.delete()
deletion_summary.update(provider_summary)
except DatabaseError as db_error:
logger.error(f"Error deleting Provider: {db_error}")
raise
return deletion_summary
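A minimal usage sketch of the refactored deletion flow; ids are placeholders and the per-model counts follow Django's delete() summary convention (values are illustrative):
# Illustrative only; ids and counts are placeholders.
summary = delete_provider(tenant_id="tenant-uuid", pk="provider-uuid")
# e.g. {"api.ScanSummary": 120, "api.Finding": 5400,
#       "api.Resource": 300, "api.Scan": 12, "api.Provider": 1}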
@@ -66,9 +65,8 @@ def delete_tenant(pk: str):
deletion_summary = {}
for provider in Provider.objects.using(MainRouter.admin_db).filter(tenant_id=pk):
with rls_transaction(pk):
summary = delete_provider(provider.id)
deletion_summary.update(summary)
summary = delete_provider(pk, provider.id)
deletion_summary.update(summary)
Tenant.objects.using(MainRouter.admin_db).filter(id=pk).delete()
+118 -15
@@ -1,4 +1,5 @@
import os
import re
import zipfile
import boto3
@@ -13,6 +14,42 @@ from prowler.config.config import (
json_ocsf_file_suffix,
output_file_timestamp,
)
from prowler.lib.outputs.compliance.aws_well_architected.aws_well_architected import (
AWSWellArchitected,
)
from prowler.lib.outputs.compliance.cis.cis_aws import AWSCIS
from prowler.lib.outputs.compliance.cis.cis_azure import AzureCIS
from prowler.lib.outputs.compliance.cis.cis_gcp import GCPCIS
from prowler.lib.outputs.compliance.cis.cis_kubernetes import KubernetesCIS
from prowler.lib.outputs.compliance.cis.cis_m365 import M365CIS
from prowler.lib.outputs.compliance.ens.ens_aws import AWSENS
from prowler.lib.outputs.compliance.ens.ens_azure import AzureENS
from prowler.lib.outputs.compliance.ens.ens_gcp import GCPENS
from prowler.lib.outputs.compliance.iso27001.iso27001_aws import AWSISO27001
from prowler.lib.outputs.compliance.iso27001.iso27001_azure import AzureISO27001
from prowler.lib.outputs.compliance.iso27001.iso27001_gcp import GCPISO27001
from prowler.lib.outputs.compliance.iso27001.iso27001_kubernetes import (
KubernetesISO27001,
)
from prowler.lib.outputs.compliance.iso27001.iso27001_m365 import M365ISO27001
from prowler.lib.outputs.compliance.kisa_ismsp.kisa_ismsp_aws import AWSKISAISMSP
from prowler.lib.outputs.compliance.mitre_attack.mitre_attack_aws import AWSMitreAttack
from prowler.lib.outputs.compliance.mitre_attack.mitre_attack_azure import (
AzureMitreAttack,
)
from prowler.lib.outputs.compliance.mitre_attack.mitre_attack_gcp import GCPMitreAttack
from prowler.lib.outputs.compliance.prowler_threatscore.prowler_threatscore_aws import (
ProwlerThreatScoreAWS,
)
from prowler.lib.outputs.compliance.prowler_threatscore.prowler_threatscore_azure import (
ProwlerThreatScoreAzure,
)
from prowler.lib.outputs.compliance.prowler_threatscore.prowler_threatscore_gcp import (
ProwlerThreatScoreGCP,
)
from prowler.lib.outputs.compliance.prowler_threatscore.prowler_threatscore_m365 import (
ProwlerThreatScoreM365,
)
from prowler.lib.outputs.csv.csv import CSV
from prowler.lib.outputs.html.html import HTML
from prowler.lib.outputs.ocsf.ocsf import OCSF
@@ -20,6 +57,45 @@ from prowler.lib.outputs.ocsf.ocsf import OCSF
logger = get_task_logger(__name__)
COMPLIANCE_CLASS_MAP = {
"aws": [
(lambda name: name.startswith("cis_"), AWSCIS),
(lambda name: name == "mitre_attack_aws", AWSMitreAttack),
(lambda name: name.startswith("ens_"), AWSENS),
(
lambda name: name.startswith("aws_well_architected_framework"),
AWSWellArchitected,
),
(lambda name: name.startswith("iso27001_"), AWSISO27001),
(lambda name: name.startswith("kisa"), AWSKISAISMSP),
(lambda name: name == "prowler_threatscore_aws", ProwlerThreatScoreAWS),
],
"azure": [
(lambda name: name.startswith("cis_"), AzureCIS),
(lambda name: name == "mitre_attack_azure", AzureMitreAttack),
(lambda name: name.startswith("ens_"), AzureENS),
(lambda name: name.startswith("iso27001_"), AzureISO27001),
(lambda name: name == "prowler_threatscore_azure", ProwlerThreatScoreAzure),
],
"gcp": [
(lambda name: name.startswith("cis_"), GCPCIS),
(lambda name: name == "mitre_attack_gcp", GCPMitreAttack),
(lambda name: name.startswith("ens_"), GCPENS),
(lambda name: name.startswith("iso27001_"), GCPISO27001),
(lambda name: name == "prowler_threatscore_gcp", ProwlerThreatScoreGCP),
],
"kubernetes": [
(lambda name: name.startswith("cis_"), KubernetesCIS),
(lambda name: name.startswith("iso27001_"), KubernetesISO27001),
],
"m365": [
(lambda name: name.startswith("iso27001_"), M365ISO27001),
(lambda name: name.startswith("cis_"), M365CIS),
(lambda name: name == "prowler_threatscore_m365", ProwlerThreatScoreM365),
],
}
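A minimal sketch of how this map is meant to be consulted: the first predicate matching the framework name wins, and callers fall back to the generic writer (GenericCompliance, imported in tasks.py later in this diff) when nothing matches. The helper name below is hypothetical:
# Hypothetical helper for illustration; not part of the diff.
def resolve_compliance_class(provider_type: str, framework_name: str):
    for condition, compliance_class in COMPLIANCE_CLASS_MAP.get(provider_type, []):
        if condition(framework_name):
            return compliance_class
    return GenericCompliance  # fallback writer

# e.g. resolve_compliance_class("aws", "cis_1.4_aws") -> AWSCIS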
# Predefined mapping for output formats and their configurations
OUTPUT_FORMATS_MAPPING = {
"csv": {
@@ -43,13 +119,17 @@ def _compress_output_files(output_directory: str) -> str:
str: The full path to the newly created ZIP archive.
"""
zip_path = f"{output_directory}.zip"
parent_dir = os.path.dirname(output_directory)
zip_path_abs = os.path.abspath(zip_path)
with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zipf:
for suffix in [config["suffix"] for config in OUTPUT_FORMATS_MAPPING.values()]:
zipf.write(
f"{output_directory}{suffix}",
f"output/{output_directory.split('/')[-1]}{suffix}",
)
for foldername, _, filenames in os.walk(parent_dir):
for filename in filenames:
file_path = os.path.join(foldername, filename)
if os.path.abspath(file_path) == zip_path_abs:
continue
arcname = os.path.relpath(file_path, start=parent_dir)
zipf.write(file_path, arcname)
return zip_path
@@ -102,25 +182,38 @@ def _upload_to_s3(tenant_id: str, zip_path: str, scan_id: str) -> str:
Raises:
botocore.exceptions.ClientError: If the upload attempt to S3 fails for any reason.
"""
if not base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET:
return
bucket = base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET
if not bucket:
return None
try:
s3 = get_s3_client()
s3_key = f"{tenant_id}/{scan_id}/{os.path.basename(zip_path)}"
# Upload the ZIP file (outputs) to the S3 bucket
zip_key = f"{tenant_id}/{scan_id}/{os.path.basename(zip_path)}"
s3.upload_file(
Filename=zip_path,
Bucket=base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET,
Key=s3_key,
Bucket=bucket,
Key=zip_key,
)
return f"s3://{base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET}/{s3_key}"
# Upload the compliance directory to the S3 bucket
compliance_dir = os.path.join(os.path.dirname(zip_path), "compliance")
for filename in os.listdir(compliance_dir):
local_path = os.path.join(compliance_dir, filename)
if not os.path.isfile(local_path):
continue
file_key = f"{tenant_id}/{scan_id}/compliance/{filename}"
s3.upload_file(Filename=local_path, Bucket=bucket, Key=file_key)
return f"s3://{base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET}/{zip_key}"
except (ClientError, NoCredentialsError, ParamValidationError, ValueError) as e:
logger.error(f"S3 upload failed: {str(e)}")
def _generate_output_directory(
output_directory, prowler_provider: object, tenant_id: str, scan_id: str
) -> str:
) -> tuple[str, str]:
"""
Generate a file system path for the output directory of a prowler scan.
@@ -145,12 +238,22 @@ def _generate_output_directory(
Example:
>>> _generate_output_directory("/tmp", "aws", "tenant-1234", "scan-5678")
'/tmp/tenant-1234/aws/scan-5678/prowler-output-2023-02-15T12:34:56'
('/tmp/tenant-1234/aws/scan-5678/prowler-output-2023-02-15T12:34:56',
'/tmp/tenant-1234/aws/scan-5678/compliance/prowler-output-2023-02-15T12:34:56')
"""
# Sanitize the prowler provider name to ensure it is a valid directory name
prowler_provider_sanitized = re.sub(r"[^\w\-]", "-", prowler_provider)
path = (
f"{output_directory}/{tenant_id}/{scan_id}/prowler-output-"
f"{prowler_provider}-{output_file_timestamp}"
f"{prowler_provider_sanitized}-{output_file_timestamp}"
)
os.makedirs("/".join(path.split("/")[:-1]), exist_ok=True)
return path
compliance_path = (
f"{output_directory}/{tenant_id}/{scan_id}/compliance/prowler-output-"
f"{prowler_provider_sanitized}-{output_file_timestamp}"
)
os.makedirs("/".join(compliance_path.split("/")[:-1]), exist_ok=True)
return path, compliance_path
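A minimal usage sketch; arguments are placeholders and the timestamp comes from output_file_timestamp:
# Illustrative only; arguments are placeholders.
out_dir, comp_dir = _generate_output_directory(
    "/tmp/prowler_api_output", "aws", "tenant-1234", "scan-5678"
)
# out_dir  -> /tmp/prowler_api_output/tenant-1234/scan-5678/prowler-output-aws-<timestamp>
# comp_dir -> /tmp/prowler_api_output/tenant-1234/scan-5678/compliance/prowler-output-aws-<timestamp>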
+184 -84
@@ -1,3 +1,4 @@
import json
import time
from copy import deepcopy
from datetime import datetime, timezone
@@ -6,17 +7,19 @@ from celery.utils.log import get_task_logger
from config.settings.celery import CELERY_DEADLOCK_ATTEMPTS
from django.db import IntegrityError, OperationalError
from django.db.models import Case, Count, IntegerField, Sum, When
from tasks.utils import CustomEncoder
from api.compliance import (
PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE,
generate_scan_compliance,
)
from api.db_utils import rls_transaction
from api.db_utils import create_objects_in_batches, rls_transaction
from api.models import (
ComplianceOverview,
ComplianceRequirementOverview,
Finding,
Provider,
Resource,
ResourceScanSummary,
ResourceTag,
Scan,
ScanSummary,
@@ -116,10 +119,11 @@ def perform_prowler_scan(
ValueError: If the provider cannot be connected.
"""
check_status_by_region = {}
exception = None
unique_resources = set()
scan_resource_cache: set[tuple[str, str, str, str]] = set()
start_time = time.time()
exc = None
with rls_transaction(tenant_id):
provider_instance = Provider.objects.get(pk=provider_id)
@@ -135,7 +139,7 @@ def perform_prowler_scan(
provider_instance.connected = True
except Exception as e:
provider_instance.connected = False
raise ValueError(
exc = ValueError(
f"Provider {provider_instance.provider} is not connected: {e}"
)
finally:
@@ -144,6 +148,11 @@ def perform_prowler_scan(
)
provider_instance.save()
# If the provider is not connected, raise an exception outside the transaction.
# If raised within the transaction, the transaction will be rolled back and the provider will not be marked as not connected.
if exc:
raise exc
prowler_scan = ProwlerScan(provider=prowler_provider, checks=checks_to_execute)
resource_cache = {}
@@ -191,6 +200,17 @@ def perform_prowler_scan(
if resource_instance.type != finding.resource_type:
resource_instance.type = finding.resource_type
updated_fields.append("type")
if resource_instance.metadata != finding.resource_metadata:
resource_instance.metadata = json.dumps(
finding.resource_metadata, cls=CustomEncoder
)
updated_fields.append("metadata")
if resource_instance.details != finding.resource_details:
resource_instance.details = finding.resource_details
updated_fields.append("details")
if resource_instance.partition != finding.partition:
resource_instance.partition = finding.partition
updated_fields.append("partition")
if updated_fields:
with rls_transaction(tenant_id):
resource_instance.save(update_fields=updated_fields)
@@ -267,18 +287,20 @@ def perform_prowler_scan(
check_id=finding.check_id,
scan=scan_instance,
first_seen_at=last_first_seen_at,
muted=finding.muted,
compliance=finding.compliance,
)
finding_instance.add_resources([resource_instance])
# Update compliance data if applicable
if finding.status.value == "MUTED":
continue
region_dict = check_status_by_region.setdefault(finding.region, {})
current_status = region_dict.get(finding.check_id)
if current_status == "FAIL":
continue
region_dict[finding.check_id] = finding.status.value
# Update scan resource summaries
scan_resource_cache.add(
(
str(resource_instance.id),
resource_instance.service,
resource_instance.region,
resource_instance.type,
)
)
# Update scan progress
with rls_transaction(tenant_id):
@@ -299,66 +321,33 @@ def perform_prowler_scan(
scan_instance.unique_resource_count = len(unique_resources)
scan_instance.save()
if exception is None:
try:
regions = prowler_provider.get_regions()
except AttributeError:
regions = set()
compliance_template = PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE[
provider_instance.provider
]
compliance_overview_by_region = {
region: deepcopy(compliance_template) for region in regions
}
for region, check_status in check_status_by_region.items():
compliance_data = compliance_overview_by_region.setdefault(
region, deepcopy(compliance_template)
)
for check_name, status in check_status.items():
generate_scan_compliance(
compliance_data,
provider_instance.provider,
check_name,
status,
)
# Prepare compliance overview objects
compliance_overview_objects = []
for region, compliance_data in compliance_overview_by_region.items():
for compliance_id, compliance in compliance_data.items():
compliance_overview_objects.append(
ComplianceOverview(
tenant_id=tenant_id,
scan=scan_instance,
region=region,
compliance_id=compliance_id,
framework=compliance["framework"],
version=compliance["version"],
description=compliance["description"],
requirements=compliance["requirements"],
requirements_passed=compliance["requirements_status"]["passed"],
requirements_failed=compliance["requirements_status"]["failed"],
requirements_manual=compliance["requirements_status"]["manual"],
total_requirements=compliance["total_requirements"],
)
)
try:
with rls_transaction(tenant_id):
ComplianceOverview.objects.bulk_create(
compliance_overview_objects, batch_size=100
)
except Exception as overview_exception:
import sentry_sdk
sentry_sdk.capture_exception(overview_exception)
logger.error(
f"Error storing compliance overview for scan {scan_id}: {overview_exception}"
)
if exception is not None:
raise exception
try:
resource_scan_summaries = [
ResourceScanSummary(
tenant_id=tenant_id,
scan_id=scan_id,
resource_id=resource_id,
service=service,
region=region,
resource_type=resource_type,
)
for resource_id, service, region, resource_type in scan_resource_cache
]
with rls_transaction(tenant_id):
ResourceScanSummary.objects.bulk_create(
resource_scan_summaries, batch_size=500, ignore_conflicts=True
)
except Exception as filter_exception:
import sentry_sdk
sentry_sdk.capture_exception(filter_exception)
logger.error(
f"Error storing filter values for scan {scan_id}: {filter_exception}"
)
serializer = ScanTaskSerializer(instance=scan_instance)
return serializer.data
@@ -402,21 +391,21 @@ def aggregate_findings(tenant_id: str, scan_id: str):
).annotate(
fail=Sum(
Case(
When(status="FAIL", then=1),
When(status="FAIL", muted=False, then=1),
default=0,
output_field=IntegerField(),
)
),
_pass=Sum(
Case(
When(status="PASS", then=1),
When(status="PASS", muted=False, then=1),
default=0,
output_field=IntegerField(),
)
),
muted=Sum(
muted_count=Sum(
Case(
When(status="MUTED", then=1),
When(muted=True, then=1),
default=0,
output_field=IntegerField(),
)
@@ -424,63 +413,63 @@ def aggregate_findings(tenant_id: str, scan_id: str):
total=Count("id"),
new=Sum(
Case(
When(delta="new", then=1),
When(delta="new", muted=False, then=1),
default=0,
output_field=IntegerField(),
)
),
changed=Sum(
Case(
When(delta="changed", then=1),
When(delta="changed", muted=False, then=1),
default=0,
output_field=IntegerField(),
)
),
unchanged=Sum(
Case(
When(delta__isnull=True, then=1),
When(delta__isnull=True, muted=False, then=1),
default=0,
output_field=IntegerField(),
)
),
fail_new=Sum(
Case(
When(delta="new", status="FAIL", then=1),
When(delta="new", status="FAIL", muted=False, then=1),
default=0,
output_field=IntegerField(),
)
),
fail_changed=Sum(
Case(
When(delta="changed", status="FAIL", then=1),
When(delta="changed", status="FAIL", muted=False, then=1),
default=0,
output_field=IntegerField(),
)
),
pass_new=Sum(
Case(
When(delta="new", status="PASS", then=1),
When(delta="new", status="PASS", muted=False, then=1),
default=0,
output_field=IntegerField(),
)
),
pass_changed=Sum(
Case(
When(delta="changed", status="PASS", then=1),
When(delta="changed", status="PASS", muted=False, then=1),
default=0,
output_field=IntegerField(),
)
),
muted_new=Sum(
Case(
When(delta="new", status="MUTED", then=1),
When(delta="new", muted=True, then=1),
default=0,
output_field=IntegerField(),
)
),
muted_changed=Sum(
Case(
When(delta="changed", status="MUTED", then=1),
When(delta="changed", muted=True, then=1),
default=0,
output_field=IntegerField(),
)
@@ -498,7 +487,7 @@ def aggregate_findings(tenant_id: str, scan_id: str):
region=agg["resources__region"],
fail=agg["fail"],
_pass=agg["_pass"],
muted=agg["muted"],
muted=agg["muted_count"],
total=agg["total"],
new=agg["new"],
changed=agg["changed"],
@@ -513,3 +502,114 @@ def aggregate_findings(tenant_id: str, scan_id: str):
for agg in aggregation
}
ScanSummary.objects.bulk_create(scan_aggregations, batch_size=3000)
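A worked example of the muted-aware aggregation semantics introduced above, with fabricated findings:
# Illustrative only:
# finding A: status=FAIL, muted=False -> counted in `fail`
# finding B: status=FAIL, muted=True  -> counted only in `muted`
# finding C: status=PASS, muted=False -> counted in `_pass`
# Result for these three: fail=1, _pass=1, muted=1, total=3.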
def create_compliance_requirements(tenant_id: str, scan_id: str):
"""
Create detailed compliance requirement overview records for a scan.
This function processes the compliance data collected during a scan and creates
individual records for each compliance requirement in each region. These detailed
records provide a granular view of compliance status.
Args:
tenant_id (str): The ID of the tenant for which to create records.
scan_id (str): The ID of the scan for which to create records.
Returns:
dict: A dictionary containing the number of requirements created and the regions processed.
Raises:
ValidationError: If tenant_id is not a valid UUID.
"""
try:
with rls_transaction(tenant_id):
scan_instance = Scan.objects.get(pk=scan_id)
provider_instance = scan_instance.provider
prowler_provider = initialize_prowler_provider(provider_instance)
# Get check status data by region from findings
check_status_by_region = {}
with rls_transaction(tenant_id):
findings = Finding.objects.filter(scan_id=scan_id, muted=False)
for finding in findings:
# Get region from resources
for resource in finding.resources.all():
region = resource.region
region_dict = check_status_by_region.setdefault(region, {})
current_status = region_dict.get(finding.check_id)
if current_status == "FAIL":
continue
region_dict[finding.check_id] = finding.status
try:
# Try to get regions from provider
regions = prowler_provider.get_regions()
except Exception:
# If not available, use regions from findings
regions = set(check_status_by_region.keys())
# Get compliance template for the provider
compliance_template = PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE[
provider_instance.provider
]
# Create compliance data by region
compliance_overview_by_region = {
region: deepcopy(compliance_template) for region in regions
}
# Apply check statuses to compliance data
for region, check_status in check_status_by_region.items():
compliance_data = compliance_overview_by_region.setdefault(
region, deepcopy(compliance_template)
)
for check_name, status in check_status.items():
generate_scan_compliance(
compliance_data,
provider_instance.provider,
check_name,
status,
)
# Prepare compliance requirement objects
compliance_requirement_objects = []
for region, compliance_data in compliance_overview_by_region.items():
for compliance_id, compliance in compliance_data.items():
# Create an overview record for each requirement within each compliance framework
for requirement_id, requirement in compliance["requirements"].items():
compliance_requirement_objects.append(
ComplianceRequirementOverview(
tenant_id=tenant_id,
scan=scan_instance,
region=region,
compliance_id=compliance_id,
framework=compliance["framework"],
version=compliance["version"],
requirement_id=requirement_id,
description=requirement["description"],
passed_checks=requirement["checks_status"]["pass"],
failed_checks=requirement["checks_status"]["fail"],
total_checks=requirement["checks_status"]["total"],
requirement_status=requirement["status"],
)
)
# Bulk create requirement records
create_objects_in_batches(
tenant_id, ComplianceRequirementOverview, compliance_requirement_objects
)
return {
"requirements_created": len(compliance_requirement_objects),
"regions_processed": list(regions),
"compliance_frameworks": (
list(compliance_overview_by_region.get(list(regions)[0], {}).keys())
if regions
else []
),
}
except Exception as e:
logger.error(f"Error creating compliance requirements for scan {scan_id}: {e}")
raise e
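A minimal usage sketch; ids are placeholders and the values are illustrative:
# Illustrative only; ids and values are placeholders.
result = create_compliance_requirements(
    tenant_id="11111111-1111-1111-1111-111111111111",
    scan_id="22222222-2222-2222-2222-222222222222",
)
# e.g. {"requirements_created": 512,
#       "regions_processed": ["eu-west-1", "eu-west-2"],
#       "compliance_frameworks": ["cis_1.4_aws", "mitre_attack_aws"]}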
+199 -62
@@ -1,3 +1,5 @@
from datetime import datetime, timedelta, timezone
from pathlib import Path
from shutil import rmtree
from celery import chain, shared_task
@@ -5,21 +7,31 @@ from celery.utils.log import get_task_logger
from config.celery import RLSTask
from config.django.base import DJANGO_FINDINGS_BATCH_SIZE, DJANGO_TMP_OUTPUT_DIRECTORY
from django_celery_beat.models import PeriodicTask
from tasks.jobs.connection import check_provider_connection
from tasks.jobs.backfill import backfill_resource_scan_summaries
from tasks.jobs.connection import check_lighthouse_connection, check_provider_connection
from tasks.jobs.deletion import delete_provider, delete_tenant
from tasks.jobs.export import (
COMPLIANCE_CLASS_MAP,
OUTPUT_FORMATS_MAPPING,
_compress_output_files,
_generate_output_directory,
_upload_to_s3,
)
from tasks.jobs.scan import aggregate_findings, perform_prowler_scan
from tasks.jobs.scan import (
aggregate_findings,
create_compliance_requirements,
perform_prowler_scan,
)
from tasks.utils import batched, get_next_execution_datetime
from api.compliance import get_compliance_frameworks
from api.db_utils import rls_transaction
from api.decorators import set_tenant
from api.models import Finding, Provider, Scan, ScanSummary, StateChoices
from api.utils import initialize_prowler_provider
from api.v1.serializers import ScanTaskSerializer
from prowler.lib.check.compliance_models import Compliance
from prowler.lib.outputs.compliance.generic.generic import GenericCompliance
from prowler.lib.outputs.finding import Finding as FindingOutput
logger = get_task_logger(__name__)
@@ -42,9 +54,10 @@ def check_provider_connection_task(provider_id: str):
return check_provider_connection(provider_id=provider_id)
@shared_task(base=RLSTask, name="provider-deletion", queue="deletion")
@set_tenant
def delete_provider_task(provider_id: str):
@shared_task(
base=RLSTask, name="provider-deletion", queue="deletion", autoretry_for=(Exception,)
)
def delete_provider_task(provider_id: str, tenant_id: str):
"""
Task to delete a specific Provider instance.
@@ -52,6 +65,7 @@ def delete_provider_task(provider_id: str):
Args:
provider_id (str): The primary key of the `Provider` instance to be deleted.
tenant_id (str): Tenant ID the provider belongs to.
Returns:
tuple: A tuple containing:
@@ -59,7 +73,7 @@ def delete_provider_task(provider_id: str):
- A dictionary with the count of deleted instances per model,
including related models if cascading deletes were triggered.
"""
return delete_provider(pk=provider_id)
return delete_provider(tenant_id=tenant_id, pk=provider_id)
@shared_task(base=RLSTask, name="scan-perform", queue="scans")
@@ -91,6 +105,7 @@ def perform_scan_task(
chain(
perform_scan_summary_task.si(tenant_id, scan_id),
create_compliance_requirements_task.si(tenant_id=tenant_id, scan_id=scan_id),
generate_outputs.si(
scan_id=scan_id, provider_id=provider_id, tenant_id=tenant_id
),
@@ -125,6 +140,43 @@ def perform_scheduled_scan_task(self, tenant_id: str, provider_id: str):
periodic_task_instance = PeriodicTask.objects.get(
name=f"scan-perform-scheduled-{provider_id}"
)
executed_scan = Scan.objects.filter(
tenant_id=tenant_id,
provider_id=provider_id,
task__task_runner_task__task_id=task_id,
).order_by("completed_at")
if (
Scan.objects.filter(
tenant_id=tenant_id,
provider_id=provider_id,
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.EXECUTING,
scheduler_task_id=periodic_task_instance.id,
scheduled_at__date=datetime.now(timezone.utc).date(),
).exists()
or executed_scan.exists()
):
# Duplicated task execution due to visibility timeout or scan is already running
logger.warning(f"Duplicated scheduled scan for provider {provider_id}.")
try:
affected_scan = executed_scan.first()
if not affected_scan:
raise ValueError(
"Error retrieving affected scan details after detecting duplicated scheduled "
"scan."
)
# Return the affected scan details to avoid losing data
serializer = ScanTaskSerializer(instance=affected_scan)
except Exception as duplicated_scan_exception:
logger.error(
f"Duplicated scheduled scan for provider {provider_id}. Error retrieving affected scan details: "
f"{str(duplicated_scan_exception)}"
)
raise duplicated_scan_exception
return serializer.data
next_scan_datetime = get_next_execution_datetime(task_id, provider_id)
scan_instance, _ = Scan.objects.get_or_create(
tenant_id=tenant_id,
@@ -132,7 +184,11 @@ def perform_scheduled_scan_task(self, tenant_id: str, provider_id: str):
trigger=Scan.TriggerChoices.SCHEDULED,
state__in=(StateChoices.SCHEDULED, StateChoices.AVAILABLE),
scheduler_task_id=periodic_task_instance.id,
defaults={"state": StateChoices.SCHEDULED},
defaults={
"state": StateChoices.SCHEDULED,
"name": "Daily scheduled scan",
"scheduled_at": next_scan_datetime - timedelta(days=1),
},
)
scan_instance.task_id = task_id
@@ -160,6 +216,9 @@ def perform_scheduled_scan_task(self, tenant_id: str, provider_id: str):
chain(
perform_scan_summary_task.si(tenant_id, scan_instance.id),
create_compliance_requirements_task.si(
tenant_id=tenant_id, scan_id=str(scan_instance.id)
),
generate_outputs.si(
scan_id=str(scan_instance.id), provider_id=provider_id, tenant_id=tenant_id
),
@@ -173,7 +232,7 @@ def perform_scan_summary_task(tenant_id: str, scan_id: str):
return aggregate_findings(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(name="tenant-deletion", queue="deletion")
@shared_task(name="tenant-deletion", queue="deletion", autoretry_for=(Exception,))
def delete_tenant_task(tenant_id: str):
return delete_tenant(pk=tenant_id)
@@ -200,80 +259,158 @@ def generate_outputs(scan_id: str, provider_id: str, tenant_id: str):
scan_id (str): The scan identifier.
provider_id (str): The provider id to be used in generating outputs.
"""
# Initialize the prowler provider
prowler_provider = initialize_prowler_provider(Provider.objects.get(id=provider_id))
# Check if the scan has findings
if not ScanSummary.objects.filter(scan_id=scan_id).exists():
logger.info(f"No findings found for scan {scan_id}")
return {"upload": False}
# Get the provider UID
provider_uid = Provider.objects.get(id=provider_id).uid
provider_obj = Provider.objects.get(id=provider_id)
prowler_provider = initialize_prowler_provider(provider_obj)
provider_uid = provider_obj.uid
provider_type = provider_obj.provider
# Generate and ensure the output directory exists
output_directory = _generate_output_directory(
frameworks_bulk = Compliance.get_bulk(provider_type)
frameworks_avail = get_compliance_frameworks(provider_type)
out_dir, comp_dir = _generate_output_directory(
DJANGO_TMP_OUTPUT_DIRECTORY, provider_uid, tenant_id, scan_id
)
# Define auxiliary variables
def get_writer(writer_map, name, factory, is_last):
"""
Return existing writer_map[name] or create via factory().
In both cases set `.close_file = is_last`.
"""
initialization = False
if name not in writer_map:
writer_map[name] = factory()
initialization = True
w = writer_map[name]
w.close_file = is_last
return w, initialization
output_writers = {}
compliance_writers = {}
scan_summary = FindingOutput._transform_findings_stats(
ScanSummary.objects.filter(scan_id=scan_id)
)
# Retrieve findings queryset
findings_qs = Finding.all_objects.filter(scan_id=scan_id).order_by("uid")
qs = Finding.all_objects.filter(scan_id=scan_id).order_by("uid").iterator()
for batch, is_last in batched(qs, DJANGO_FINDINGS_BATCH_SIZE):
fos = [FindingOutput.transform_api_finding(f, prowler_provider) for f in batch]
# Process findings in batches
for batch, is_last_batch in batched(
findings_qs.iterator(), DJANGO_FINDINGS_BATCH_SIZE
):
finding_outputs = [
FindingOutput.transform_api_finding(finding, prowler_provider)
for finding in batch
]
# Generate output files
for mode, config in OUTPUT_FORMATS_MAPPING.items():
kwargs = dict(config.get("kwargs", {}))
# Outputs
for mode, cfg in OUTPUT_FORMATS_MAPPING.items():
cls = cfg["class"]
suffix = cfg["suffix"]
extra = cfg.get("kwargs", {}).copy()
if mode == "html":
kwargs["provider"] = prowler_provider
kwargs["stats"] = scan_summary
extra.update(provider=prowler_provider, stats=scan_summary)
writer_class = config["class"]
if writer_class in output_writers:
writer = output_writers[writer_class]
writer.transform(finding_outputs)
writer.close_file = is_last_batch
else:
writer = writer_class(
findings=finding_outputs,
file_path=output_directory,
file_extension=config["suffix"],
writer, initialization = get_writer(
output_writers,
cls,
lambda cls=cls, fos=fos, suffix=suffix: cls(
findings=fos,
file_path=out_dir,
file_extension=suffix,
from_cli=False,
)
writer.close_file = is_last_batch
output_writers[writer_class] = writer
),
is_last,
)
if not initialization:
writer.transform(fos)
writer.batch_write_data_to_file(**extra)
writer._data.clear()
# Write the current batch using the writer
writer.batch_write_data_to_file(**kwargs)
# Compliance CSVs
for name in frameworks_avail:
compliance_obj = frameworks_bulk[name]
# TODO: Refactor the output classes to avoid this manual reset
writer._data = []
klass = GenericCompliance
for condition, cls in COMPLIANCE_CLASS_MAP.get(provider_type, []):
if condition(name):
klass = cls
break
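# Note: COMPLIANCE_CLASS_MAP is assumed here to map each provider type to an
# ordered list of (predicate, compliance writer class) pairs, roughly:
#     {"aws": [(lambda name: name.startswith("cis_"), SomeCISWriter), ...]}
# The first predicate matching the framework name selects the writer class;
# GenericCompliance is the fallback when nothing matches.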
# Compress output files
output_directory = _compress_output_files(output_directory)
filename = f"{comp_dir}_{name}.csv"
# Save to configured storage
uploaded = _upload_to_s3(tenant_id, output_directory, scan_id)
writer, initialization = get_writer(
compliance_writers,
name,
lambda klass=klass, fos=fos: klass(
findings=fos,
compliance=compliance_obj,
file_path=filename,
from_cli=False,
),
is_last,
)
if not initialization:
writer.transform(fos, compliance_obj, name)
writer.batch_write_data_to_file()
writer._data.clear()
if uploaded:
output_directory = uploaded
uploaded = True
# Remove the local files after upload
rmtree(DJANGO_TMP_OUTPUT_DIRECTORY, ignore_errors=True)
compressed = _compress_output_files(out_dir)
upload_uri = _upload_to_s3(tenant_id, compressed, scan_id)
if upload_uri:
try:
rmtree(Path(compressed).parent, ignore_errors=True)
except Exception as e:
logger.error(f"Error deleting output files: {e}")
final_location, did_upload = upload_uri, True
else:
uploaded = False
final_location, did_upload = compressed, False
# Update the scan instance with the output path
Scan.all_objects.filter(id=scan_id).update(output_location=output_directory)
Scan.all_objects.filter(id=scan_id).update(output_location=final_location)
logger.info(f"Scan outputs at {final_location}")
return {"upload": did_upload}
logger.info(f"Scan output files generated, output location: {output_directory}")
return {"upload": uploaded}
@shared_task(name="backfill-scan-resource-summaries", queue="backfill")
def backfill_scan_resource_summaries_task(tenant_id: str, scan_id: str):
"""
Tries to backfill the resource scan summaries table for a given scan.
Args:
tenant_id (str): The tenant identifier.
scan_id (str): The scan identifier.
"""
return backfill_resource_scan_summaries(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(base=RLSTask, name="scan-compliance-overviews")
def create_compliance_requirements_task(tenant_id: str, scan_id: str):
"""
Creates detailed compliance requirement records for a scan.
This task processes the compliance data collected during a scan and creates
individual records for each compliance requirement in each region. These detailed
records provide a granular view of compliance status.
Args:
tenant_id (str): The tenant ID for which to create records.
scan_id (str): The ID of the scan for which to create records.
"""
return create_compliance_requirements(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(base=RLSTask, name="lighthouse-connection-check")
@set_tenant
def check_lighthouse_connection_task(lighthouse_config_id: str, tenant_id: str = None):
"""
Task to check the connection status of a Lighthouse configuration.
Args:
lighthouse_config_id (str): The primary key of the LighthouseConfiguration instance to check.
tenant_id (str): The tenant ID for the task.
Returns:
dict: A dictionary containing:
- 'connected' (bool): Indicates whether the connection is successful.
- 'error' (str or None): The error message if the connection failed, otherwise `None`.
- 'available_models' (list): List of available models if connection is successful.
"""
return check_lighthouse_connection(lighthouse_config_id=lighthouse_config_id)
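# Hedged reconstruction of check_lighthouse_connection, inferred from the
# connection tests further below; the real job in tasks/jobs/connection.py may
# differ in details.
import openai

from api.models import LighthouseConfiguration


def check_lighthouse_connection_sketch(lighthouse_config_id: str) -> dict:
    config = LighthouseConfiguration.objects.get(pk=lighthouse_config_id)
    if not config.api_key_decoded:
        config.is_active = False
        config.save()
        return {
            "connected": False,
            "error": "API key is invalid or missing.",
            "available_models": [],
        }
    try:
        client = openai.OpenAI(api_key=config.api_key_decoded)
        models = client.models.list()
        config.is_active = True
        config.save()
        return {
            "connected": True,
            "error": None,
            "available_models": [model.id for model in models.data],
        }
    except Exception as exc:
        config.is_active = False
        config.save()
        return {"connected": False, "error": str(exc), "available_models": []}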
@@ -0,0 +1,79 @@
from uuid import uuid4
import pytest
from tasks.jobs.backfill import backfill_resource_scan_summaries
from api.models import ResourceScanSummary, Scan, StateChoices
@pytest.mark.django_db
class TestBackfillResourceScanSummaries:
@pytest.fixture(scope="function")
def resource_scan_summary_data(self, scans_fixture):
scan = scans_fixture[0]
return ResourceScanSummary.objects.create(
tenant_id=scan.tenant_id,
scan_id=scan.id,
resource_id=str(uuid4()),
service="aws",
region="us-east-1",
resource_type="instance",
)
@pytest.fixture(scope="function")
def get_not_completed_scans(self, providers_fixture):
provider_id = providers_fixture[0].id
tenant_id = providers_fixture[0].tenant_id
scan_1 = Scan.objects.create(
tenant_id=tenant_id,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.EXECUTING,
provider_id=provider_id,
)
scan_2 = Scan.objects.create(
tenant_id=tenant_id,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.AVAILABLE,
provider_id=provider_id,
)
return scan_1, scan_2
def test_already_backfilled(self, resource_scan_summary_data):
tenant_id = resource_scan_summary_data.tenant_id
scan_id = resource_scan_summary_data.scan_id
result = backfill_resource_scan_summaries(tenant_id, scan_id)
assert result == {"status": "already backfilled"}
def test_not_completed_scan(self, get_not_completed_scans):
for scan_instance in get_not_completed_scans:
tenant_id = scan_instance.tenant_id
scan_id = scan_instance.id
result = backfill_resource_scan_summaries(tenant_id, scan_id)
assert result == {"status": "scan is not completed"}
def test_successful_backfill_inserts_one_summary(
self, resources_fixture, findings_fixture
):
tenant_id = findings_fixture[0].tenant_id
scan_id = findings_fixture[0].scan_id
# This scan affects the first two resources
resources = resources_fixture[:2]
result = backfill_resource_scan_summaries(tenant_id, scan_id)
assert result == {"status": "backfilled", "inserted": len(resources)}
# Verify correct values
summaries = ResourceScanSummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
)
assert summaries.count() == len(resources)
for resource in resources:
summary = summaries.get(resource_id=resource.id)
assert summary.resource_id == resource.id
assert summary.service == resource.service
assert summary.region == resource.region
assert summary.resource_type == resource.type
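# Hedged reconstruction of the control flow tasks/jobs/backfill.py appears to
# implement, inferred from the tests above; the real function may differ. The
# Resource import, the findings__scan_id reverse relation, and the
# StateChoices.COMPLETED value are assumptions.
from api.models import Resource, ResourceScanSummary, Scan, StateChoices


def backfill_resource_scan_summaries_sketch(tenant_id: str, scan_id: str) -> dict:
    if ResourceScanSummary.objects.filter(tenant_id=tenant_id, scan_id=scan_id).exists():
        return {"status": "already backfilled"}
    scan = Scan.objects.get(tenant_id=tenant_id, id=scan_id)
    if scan.state != StateChoices.COMPLETED:
        return {"status": "scan is not completed"}
    rows = [
        ResourceScanSummary(
            tenant_id=tenant_id,
            scan_id=scan_id,
            resource_id=str(resource.id),
            service=resource.service,
            region=resource.region,
            resource_type=resource.type,
        )
        for resource in Resource.objects.filter(findings__scan_id=scan_id).distinct()
    ]
    ResourceScanSummary.objects.bulk_create(rows, ignore_conflicts=True)
    return {"status": "backfilled", "inserted": len(rows)}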
@@ -1,10 +1,10 @@
from datetime import datetime, timezone
from unittest.mock import patch, MagicMock
from unittest.mock import MagicMock, patch
import pytest
from tasks.jobs.connection import check_lighthouse_connection, check_provider_connection
from api.models import Provider
from tasks.jobs.connection import check_provider_connection
from api.models import LighthouseConfiguration, Provider
@pytest.mark.parametrize(
@@ -70,3 +70,60 @@ def test_check_provider_connection_exception(
mock_provider_instance.save.assert_called_once()
assert mock_provider_instance.connected is False
@pytest.mark.parametrize(
"lighthouse_data",
[
{
"name": "OpenAI",
"api_key_decoded": "sk-test1234567890T3BlbkFJtest1234567890",
"model": "gpt-4o",
"temperature": 0,
"max_tokens": 4000,
"business_context": "Test business context",
"is_active": True,
},
],
)
@patch("tasks.jobs.connection.openai.OpenAI")
@pytest.mark.django_db
def test_check_lighthouse_connection(
mock_openai_client, tenants_fixture, lighthouse_data
):
lighthouse_config = LighthouseConfiguration.objects.create(
**lighthouse_data, tenant_id=tenants_fixture[0].id
)
mock_models = MagicMock()
mock_models.data = [MagicMock(id="gpt-4o"), MagicMock(id="gpt-4o-mini")]
mock_openai_client.return_value.models.list.return_value = mock_models
result = check_lighthouse_connection(
lighthouse_config_id=str(lighthouse_config.id),
)
lighthouse_config.refresh_from_db()
mock_openai_client.assert_called_once_with(
api_key=lighthouse_data["api_key_decoded"]
)
assert lighthouse_config.is_active is True
assert result["connected"] is True
assert result["error"] is None
assert result["available_models"] == ["gpt-4o", "gpt-4o-mini"]
@patch("tasks.jobs.connection.LighthouseConfiguration.objects.get")
@pytest.mark.django_db
def test_check_lighthouse_connection_missing_api_key(mock_lighthouse_get):
mock_lighthouse_instance = MagicMock()
mock_lighthouse_instance.api_key_decoded = None
mock_lighthouse_get.return_value = mock_lighthouse_instance
result = check_lighthouse_connection("lighthouse_config_id")
assert result["connected"] is False
assert result["error"] == "API key is invalid or missing."
assert result["available_models"] == []
assert mock_lighthouse_instance.is_active is False
mock_lighthouse_instance.save.assert_called_once()
@@ -9,17 +9,19 @@ from api.models import Provider, Tenant
class TestDeleteProvider:
def test_delete_provider_success(self, providers_fixture):
instance = providers_fixture[0]
result = delete_provider(instance.id)
tenant_id = str(instance.tenant_id)
result = delete_provider(tenant_id, instance.id)
assert result
with pytest.raises(ObjectDoesNotExist):
Provider.objects.get(pk=instance.id)
def test_delete_provider_does_not_exist(self):
def test_delete_provider_does_not_exist(self, tenants_fixture):
tenant_id = str(tenants_fixture[0].id)
non_existent_pk = "babf6796-cfcc-4fd3-9dcf-88d012247645"
with pytest.raises(ObjectDoesNotExist):
delete_provider(non_existent_pk)
delete_provider(tenant_id, non_existent_pk)
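# Hedged sketch of the updated delete_provider job signature implied by the
# tests above; the real implementation may differ. rls_transaction (seen
# patched elsewhere as api.db_utils.rls_transaction) is assumed to scope the
# deletion to the tenant's row-level-security context.
from api.db_utils import rls_transaction
from api.models import Provider


def delete_provider_sketch(tenant_id: str, pk: str):
    with rls_transaction(tenant_id):
        provider = Provider.objects.get(pk=pk)  # raises ObjectDoesNotExist if missing
        deleted_count, _ = provider.delete()
        return deleted_count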
@pytest.mark.django_db
@@ -0,0 +1,166 @@
import os
import zipfile
from pathlib import Path
from unittest.mock import MagicMock, patch
import pytest
from botocore.exceptions import ClientError
from tasks.jobs.export import (
_compress_output_files,
_generate_output_directory,
_upload_to_s3,
get_s3_client,
)
@pytest.mark.django_db
class TestOutputs:
def test_compress_output_files_creates_zip(self, tmpdir):
base_tmp = Path(str(tmpdir.mkdir("compress_output")))
output_dir = base_tmp / "output"
output_dir.mkdir()
file_path = output_dir / "result.csv"
file_path.write_text("data")
zip_path = _compress_output_files(str(output_dir))
assert zip_path.endswith(".zip")
assert os.path.exists(zip_path)
with zipfile.ZipFile(zip_path, "r") as zipf:
assert "output/result.csv" in zipf.namelist()
@patch("tasks.jobs.export.boto3.client")
@patch("tasks.jobs.export.settings")
def test_get_s3_client_success(self, mock_settings, mock_boto_client):
mock_settings.DJANGO_OUTPUT_S3_AWS_ACCESS_KEY_ID = "test"
mock_settings.DJANGO_OUTPUT_S3_AWS_SECRET_ACCESS_KEY = "test"
mock_settings.DJANGO_OUTPUT_S3_AWS_SESSION_TOKEN = "token"
mock_settings.DJANGO_OUTPUT_S3_AWS_DEFAULT_REGION = "eu-west-1"
client_mock = MagicMock()
mock_boto_client.return_value = client_mock
client = get_s3_client()
assert client is not None
client_mock.list_buckets.assert_called()
@patch("tasks.jobs.export.boto3.client")
@patch("tasks.jobs.export.settings")
def test_get_s3_client_fallback(self, mock_settings, mock_boto_client):
mock_boto_client.side_effect = [
ClientError({"Error": {"Code": "403"}}, "ListBuckets"),
MagicMock(),
]
client = get_s3_client()
assert client is not None
@patch("tasks.jobs.export.get_s3_client")
@patch("tasks.jobs.export.base")
def test_upload_to_s3_success(self, mock_base, mock_get_client, tmpdir):
mock_base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET = "test-bucket"
base_tmp = Path(str(tmpdir.mkdir("upload_success")))
zip_path = base_tmp / "outputs.zip"
zip_path.write_bytes(b"dummy")
compliance_dir = base_tmp / "compliance"
compliance_dir.mkdir()
(compliance_dir / "report.csv").write_text("ok")
client_mock = MagicMock()
mock_get_client.return_value = client_mock
result = _upload_to_s3("tenant-id", str(zip_path), "scan-id")
expected_uri = "s3://test-bucket/tenant-id/scan-id/outputs.zip"
assert result == expected_uri
assert client_mock.upload_file.call_count == 2
@patch("tasks.jobs.export.get_s3_client")
@patch("tasks.jobs.export.base")
def test_upload_to_s3_missing_bucket(self, mock_base, mock_get_client):
mock_base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET = ""
result = _upload_to_s3("tenant", "/tmp/fake.zip", "scan")
assert result is None
@patch("tasks.jobs.export.get_s3_client")
@patch("tasks.jobs.export.base")
def test_upload_to_s3_skips_non_files(self, mock_base, mock_get_client, tmpdir):
mock_base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET = "test-bucket"
base_tmp = Path(str(tmpdir.mkdir("upload_skips_non_files")))
zip_path = base_tmp / "results.zip"
zip_path.write_bytes(b"zip")
compliance_dir = base_tmp / "compliance"
compliance_dir.mkdir()
(compliance_dir / "subdir").mkdir()
client_mock = MagicMock()
mock_get_client.return_value = client_mock
result = _upload_to_s3("tenant", str(zip_path), "scan")
expected_uri = "s3://test-bucket/tenant/scan/results.zip"
assert result == expected_uri
client_mock.upload_file.assert_called_once()
@patch(
"tasks.jobs.export.get_s3_client",
side_effect=ClientError({"Error": {}}, "Upload"),
)
@patch("tasks.jobs.export.base")
@patch("tasks.jobs.export.logger.error")
def test_upload_to_s3_failure_logs_error(
self, mock_logger, mock_base, mock_get_client, tmpdir
):
mock_base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET = "bucket"
base_tmp = Path(str(tmpdir.mkdir("upload_failure_logs")))
zip_path = base_tmp / "zipfile.zip"
zip_path.write_bytes(b"zip")
compliance_dir = base_tmp / "compliance"
compliance_dir.mkdir()
(compliance_dir / "report.csv").write_text("csv")
_upload_to_s3("tenant", str(zip_path), "scan")
mock_logger.assert_called()
def test_generate_output_directory_creates_paths(self, tmpdir):
from prowler.config.config import output_file_timestamp
base_tmp = Path(str(tmpdir.mkdir("generate_output")))
base_dir = str(base_tmp)
tenant_id = "t1"
scan_id = "s1"
provider = "aws"
path, compliance = _generate_output_directory(
base_dir, provider, tenant_id, scan_id
)
assert os.path.isdir(os.path.dirname(path))
assert os.path.isdir(os.path.dirname(compliance))
assert path.endswith(f"{provider}-{output_file_timestamp}")
assert compliance.endswith(f"{provider}-{output_file_timestamp}")
def test_generate_output_directory_invalid_character(self, tmpdir):
from prowler.config.config import output_file_timestamp
base_tmp = Path(str(tmpdir.mkdir("generate_output")))
base_dir = str(base_tmp)
tenant_id = "t1"
scan_id = "s1"
provider = "aws/test@check"
path, compliance = _generate_output_directory(
base_dir, provider, tenant_id, scan_id
)
assert os.path.isdir(os.path.dirname(path))
assert os.path.isdir(os.path.dirname(compliance))
assert path.endswith(f"aws-test-check-{output_file_timestamp}")
assert compliance.endswith(f"aws-test-check-{output_file_timestamp}")
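# Hedged sketch of _generate_output_directory consistent with the two tests
# above (the real code lives in tasks/jobs/export.py): it sanitizes the
# provider UID, creates the scan's output and compliance directories, and
# returns the file-name prefix paths for each.
import re
from pathlib import Path

from prowler.config.config import output_file_timestamp


def _generate_output_directory_sketch(base_dir, provider_uid, tenant_id, scan_id):
    # Replace any character outside [a-zA-Z0-9._-] so "aws/test@check"
    # becomes "aws-test-check"
    safe_uid = re.sub(r"[^a-zA-Z0-9._-]", "-", provider_uid)
    out_root = Path(base_dir) / tenant_id / scan_id
    comp_root = out_root / "compliance"
    comp_root.mkdir(parents=True, exist_ok=True)  # also creates out_root
    prefix = f"{safe_uid}-{output_file_timestamp}"
    return str(out_root / prefix), str(comp_root / prefix)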
@@ -1,14 +1,19 @@
import json
import uuid
from datetime import datetime
from unittest.mock import MagicMock, patch
import pytest
from tasks.jobs.scan import (
_create_finding_delta,
_store_resources,
create_compliance_requirements,
perform_prowler_scan,
)
from tasks.utils import CustomEncoder
from api.models import (
ComplianceRequirementOverview,
Finding,
Provider,
Resource,
@@ -107,7 +112,13 @@ class TestPerformScan:
finding.service_name = "service_name"
finding.resource_type = "resource_type"
finding.resource_tags = {"tag1": "value1", "tag2": "value2"}
finding.muted = False
finding.raw = {}
finding.resource_metadata = {"test": "metadata"}
finding.resource_details = {"details": "test"}
finding.partition = "partition"
finding.muted = True
finding.compliance = {"compliance1": "PASS"}
# Mock the ProwlerScan instance
mock_prowler_scan_instance = MagicMock()
@@ -145,6 +156,8 @@ class TestPerformScan:
assert scan_finding.severity == finding.severity
assert scan_finding.check_id == finding.check_id
assert scan_finding.raw_result == finding.raw
assert scan_finding.muted
assert scan_finding.compliance == finding.compliance
assert scan_resource.tenant == tenant
assert scan_resource.uid == finding.resource_uid
@@ -152,6 +165,11 @@ class TestPerformScan:
assert scan_resource.service == finding.service_name
assert scan_resource.type == finding.resource_type
assert scan_resource.name == finding.resource_name
assert scan_resource.metadata == json.dumps(
finding.resource_metadata, cls=CustomEncoder
)
assert scan_resource.details == f"{finding.resource_details}"
assert scan_resource.partition == finding.partition
# Assert that the resource tags have been created and associated
tags = scan_resource.tags.all()
@@ -191,6 +209,10 @@ class TestPerformScan:
scan.refresh_from_db()
assert scan.state == StateChoices.FAILED
provider.refresh_from_db()
assert provider.connected is False
assert isinstance(provider.connection_last_checked_at, datetime)
@pytest.mark.parametrize(
"last_status, new_status, expected_delta",
[
@@ -215,7 +237,7 @@ class TestPerformScan:
):
tenant_id = uuid.uuid4()
provider_instance = MagicMock()
provider_instance.id = "provider456"
provider_instance.id = "provider123"
finding = MagicMock()
finding.resource_uid = "resource_uid_123"
@@ -230,15 +252,16 @@ class TestPerformScan:
resource_instance.region = finding.region
mock_get_or_create_resource.return_value = (resource_instance, True)
tag_instance = MagicMock()
mock_get_or_create_tag.return_value = (tag_instance, True)
resource, resource_uid_tuple = _store_resources(
finding, tenant_id, provider_instance
finding, str(tenant_id), provider_instance
)
mock_get_or_create_resource.assert_called_once_with(
tenant_id=tenant_id,
tenant_id=str(tenant_id),
provider=provider_instance,
uid=finding.resource_uid,
defaults={
@@ -285,11 +308,11 @@ class TestPerformScan:
mock_get_or_create_tag.return_value = (tag_instance, True)
resource, resource_uid_tuple = _store_resources(
finding, tenant_id, provider_instance
finding, str(tenant_id), provider_instance
)
mock_get_or_create_resource.assert_called_once_with(
tenant_id=tenant_id,
tenant_id=str(tenant_id),
provider=provider_instance,
uid=finding.resource_uid,
defaults={
@@ -343,14 +366,14 @@ class TestPerformScan:
]
resource, resource_uid_tuple = _store_resources(
finding, tenant_id, provider_instance
finding, str(tenant_id), provider_instance
)
mock_get_or_create_tag.assert_any_call(
tenant_id=tenant_id, key="tag1", value="value1"
tenant_id=str(tenant_id), key="tag1", value="value1"
)
mock_get_or_create_tag.assert_any_call(
tenant_id=tenant_id, key="tag2", value="value2"
tenant_id=str(tenant_id), key="tag2", value="value2"
)
resource_instance.upsert_or_delete_tags.assert_called_once()
tags_passed = resource_instance.upsert_or_delete_tags.call_args[1]["tags"]
@@ -362,3 +385,808 @@ class TestPerformScan:
# TODO Add tests for aggregations
@pytest.mark.django_db
class TestCreateComplianceRequirements:
def test_create_compliance_requirements_success(
self,
tenants_fixture,
scans_fixture,
providers_fixture,
findings_fixture,
resources_fixture,
):
with (
patch("api.db_utils.rls_transaction"),
patch(
"tasks.jobs.scan.initialize_prowler_provider"
) as mock_initialize_prowler_provider,
patch(
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
) as mock_compliance_template,
patch("tasks.jobs.scan.generate_scan_compliance"),
patch("tasks.jobs.scan.create_objects_in_batches") as mock_create_objects,
patch("api.models.Finding.objects.filter") as mock_findings_filter,
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
provider.provider = Provider.ProviderChoices.AWS
provider.save()
scan.provider = provider
scan.save()
tenant_id = str(tenant.id)
scan_id = str(scan.id)
mock_prowler_provider_instance = MagicMock()
mock_prowler_provider_instance.get_regions.return_value = [
"us-east-1",
"us-west-2",
]
mock_initialize_prowler_provider.return_value = (
mock_prowler_provider_instance
)
mock_compliance_template.__getitem__.return_value = {
"cis_1.4_aws": {
"framework": "CIS AWS Foundations Benchmark",
"version": "1.4.0",
"requirements": {
"1.1": {
"description": "Ensure root access key does not exist",
"checks_status": {
"pass": 0,
"fail": 0,
"manual": 0,
"total": 1,
},
"status": "PASS",
},
"1.2": {
"description": "Ensure MFA is enabled for root account",
"checks_status": {
"pass": 0,
"fail": 1,
"manual": 0,
"total": 1,
},
"status": "FAIL",
},
},
},
"aws_account_security_onboarding_aws": {
"framework": "AWS Account Security Onboarding",
"version": "1.0",
"requirements": {
"requirement1": {
"description": "Basic security requirement",
"checks_status": {
"pass": 1,
"fail": 0,
"manual": 0,
"total": 1,
},
"status": "PASS",
},
},
},
}
mock_findings_filter.return_value = []
result = create_compliance_requirements(tenant_id, scan_id)
assert "requirements_created" in result
assert "regions_processed" in result
assert "compliance_frameworks" in result
assert result["regions_processed"] == ["us-east-1", "us-west-2"]
assert result["requirements_created"] == 6
assert len(result["compliance_frameworks"]) == 2
mock_create_objects.assert_called_once()
call_args = mock_create_objects.call_args[0]
assert call_args[0] == tenant_id
assert call_args[1] == ComplianceRequirementOverview
assert len(call_args[2]) == 6
compliance_objects = call_args[2]
for obj in compliance_objects:
assert isinstance(obj, ComplianceRequirementOverview)
assert obj.tenant.id == tenant.id
assert obj.scan == scan
assert obj.region in ["us-east-1", "us-west-2"]
assert obj.compliance_id in [
"cis_1.4_aws",
"aws_account_security_onboarding_aws",
]
def test_create_compliance_requirements_with_findings(
self,
tenants_fixture,
scans_fixture,
providers_fixture,
):
with (
patch("api.db_utils.rls_transaction"),
patch(
"tasks.jobs.scan.initialize_prowler_provider"
) as mock_initialize_prowler_provider,
patch(
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
) as mock_compliance_template,
patch(
"tasks.jobs.scan.generate_scan_compliance"
) as mock_generate_compliance,
patch("tasks.jobs.scan.create_objects_in_batches"),
patch("api.models.Finding.objects.filter") as mock_findings_filter,
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
provider.provider = Provider.ProviderChoices.AWS
provider.save()
scan.provider = provider
scan.save()
tenant_id = str(tenant.id)
scan_id = str(scan.id)
mock_finding1 = MagicMock()
mock_finding1.check_id = "check1"
mock_finding1.status = "PASS"
mock_resource1 = MagicMock()
mock_resource1.region = "us-east-1"
mock_finding1.resources.all.return_value = [mock_resource1]
mock_finding2 = MagicMock()
mock_finding2.check_id = "check2"
mock_finding2.status = "FAIL"
mock_resource2 = MagicMock()
mock_resource2.region = "us-west-2"
mock_finding2.resources.all.return_value = [mock_resource2]
mock_findings_filter.return_value = [mock_finding1, mock_finding2]
mock_prowler_provider_instance = MagicMock()
mock_prowler_provider_instance.get_regions.return_value = [
"us-east-1",
"us-west-2",
]
mock_initialize_prowler_provider.return_value = (
mock_prowler_provider_instance
)
mock_compliance_template.__getitem__.return_value = {
"test_compliance": {
"framework": "Test Framework",
"version": "1.0",
"requirements": {
"req_1": {
"description": "Test Requirement 1",
"checks": {"check_1": None},
"checks_status": {
"pass": 2,
"fail": 1,
"manual": 0,
"total": 3,
},
"status": "FAIL",
},
"req_2": {
"description": "Test Requirement 2",
"checks": {"check_2": None},
"checks_status": {
"pass": 2,
"fail": 0,
"manual": 0,
"total": 2,
},
"status": "PASS",
},
},
}
}
result = create_compliance_requirements(tenant_id, scan_id)
mock_findings_filter.assert_called_once_with(scan_id=scan_id, muted=False)
assert mock_generate_compliance.call_count == 2
assert result["requirements_created"] == 4
assert set(result["regions_processed"]) == {"us-east-1", "us-west-2"}
def test_create_compliance_requirements_no_provider_regions(
self,
tenants_fixture,
scans_fixture,
providers_fixture,
):
with (
patch("api.db_utils.rls_transaction"),
patch(
"tasks.jobs.scan.initialize_prowler_provider"
) as mock_initialize_prowler_provider,
patch(
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
) as mock_compliance_template,
patch("tasks.jobs.scan.generate_scan_compliance"),
patch("tasks.jobs.scan.create_objects_in_batches"),
patch("api.models.Finding.objects.filter") as mock_findings_filter,
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
provider.provider = Provider.ProviderChoices.KUBERNETES
provider.save()
scan.provider = provider
scan.save()
tenant_id = str(tenant.id)
scan_id = str(scan.id)
mock_finding = MagicMock()
mock_finding.check_id = "check1"
mock_finding.status = "PASS"
mock_resource = MagicMock()
mock_resource.region = "default"
mock_finding.resources.all.return_value = [mock_resource]
mock_findings_filter.return_value = [mock_finding]
mock_prowler_provider_instance = MagicMock()
mock_prowler_provider_instance.get_regions.side_effect = AttributeError(
"No get_regions method"
)
mock_initialize_prowler_provider.return_value = (
mock_prowler_provider_instance
)
mock_compliance_template.__getitem__.return_value = {
"kubernetes_cis": {
"framework": "CIS Kubernetes Benchmark",
"version": "1.6.0",
"requirements": {
"1.1": {
"description": "Test requirement",
"checks_status": {
"pass": 0,
"fail": 0,
"manual": 0,
"total": 1,
},
"status": "PASS",
},
},
},
}
result = create_compliance_requirements(tenant_id, scan_id)
assert result["regions_processed"] == ["default"]
def test_create_compliance_requirements_empty_findings(
self,
tenants_fixture,
scans_fixture,
providers_fixture,
):
with (
patch("api.db_utils.rls_transaction"),
patch(
"tasks.jobs.scan.initialize_prowler_provider"
) as mock_initialize_prowler_provider,
patch(
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
) as mock_compliance_template,
patch(
"tasks.jobs.scan.generate_scan_compliance"
) as mock_generate_compliance,
patch("tasks.jobs.scan.create_objects_in_batches"),
patch("api.models.Finding.objects.filter") as mock_findings_filter,
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
provider.provider = Provider.ProviderChoices.AWS
provider.save()
scan.provider = provider
scan.save()
tenant_id = str(tenant.id)
scan_id = str(scan.id)
mock_findings_filter.return_value = []
mock_prowler_provider_instance = MagicMock()
mock_prowler_provider_instance.get_regions.return_value = ["us-east-1"]
mock_initialize_prowler_provider.return_value = (
mock_prowler_provider_instance
)
mock_compliance_template.__getitem__.return_value = {
"cis_1.4_aws": {
"framework": "CIS AWS Foundations Benchmark",
"version": "1.4.0",
"requirements": {
"1.1": {
"description": "Test requirement",
"checks_status": {
"pass": 0,
"fail": 0,
"manual": 0,
"total": 1,
},
"status": "PASS",
},
},
},
}
mock_findings_filter.return_value = []
result = create_compliance_requirements(tenant_id, scan_id)
assert result["regions_processed"] == ["us-east-1"]
assert result["requirements_created"] == 1
mock_generate_compliance.assert_not_called()
def test_create_compliance_requirements_error_handling(
self,
tenants_fixture,
scans_fixture,
providers_fixture,
):
with (
patch("api.db_utils.rls_transaction"),
patch(
"tasks.jobs.scan.initialize_prowler_provider"
) as mock_initialize_prowler_provider,
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
provider.provider = Provider.ProviderChoices.AWS
provider.save()
scan.provider = provider
scan.save()
tenant_id = str(tenant.id)
scan_id = str(scan.id)
mock_initialize_prowler_provider.side_effect = Exception(
"Provider initialization failed"
)
with pytest.raises(Exception, match="Provider initialization failed"):
create_compliance_requirements(tenant_id, scan_id)
def test_create_compliance_requirements_muted_findings_excluded(
self,
tenants_fixture,
scans_fixture,
providers_fixture,
):
with (
patch("api.db_utils.rls_transaction"),
patch(
"tasks.jobs.scan.initialize_prowler_provider"
) as mock_initialize_prowler_provider,
patch(
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
) as mock_compliance_template,
patch("tasks.jobs.scan.generate_scan_compliance"),
patch("tasks.jobs.scan.create_objects_in_batches"),
patch("api.models.Finding.objects.filter") as mock_findings_filter,
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
provider.provider = Provider.ProviderChoices.AWS
provider.save()
scan.provider = provider
scan.save()
tenant_id = str(tenant.id)
scan_id = str(scan.id)
mock_findings_filter.return_value = []
mock_prowler_provider_instance = MagicMock()
mock_prowler_provider_instance.get_regions.return_value = ["us-east-1"]
mock_initialize_prowler_provider.return_value = (
mock_prowler_provider_instance
)
mock_compliance_template.__getitem__.return_value = {}
mock_findings_filter.return_value = []
create_compliance_requirements(tenant_id, scan_id)
mock_findings_filter.assert_called_once_with(scan_id=scan_id, muted=False)
def test_create_compliance_requirements_check_status_priority(
self,
tenants_fixture,
scans_fixture,
providers_fixture,
):
with (
patch("api.db_utils.rls_transaction"),
patch(
"tasks.jobs.scan.initialize_prowler_provider"
) as mock_initialize_prowler_provider,
patch(
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
) as mock_compliance_template,
patch(
"tasks.jobs.scan.generate_scan_compliance"
) as mock_generate_compliance,
patch("tasks.jobs.scan.create_objects_in_batches"),
patch("api.models.Finding.objects.filter") as mock_findings_filter,
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
provider.provider = Provider.ProviderChoices.AWS
provider.save()
scan.provider = provider
scan.save()
tenant_id = str(tenant.id)
scan_id = str(scan.id)
mock_finding1 = MagicMock()
mock_finding1.check_id = "check1"
mock_finding1.status = "PASS"
mock_resource1 = MagicMock()
mock_resource1.region = "us-east-1"
mock_finding1.resources.all.return_value = [mock_resource1]
mock_finding2 = MagicMock()
mock_finding2.check_id = "check1"
mock_finding2.status = "FAIL"
mock_resource2 = MagicMock()
mock_resource2.region = "us-east-1"
mock_finding2.resources.all.return_value = [mock_resource2]
mock_findings_filter.return_value = [mock_finding1, mock_finding2]
mock_prowler_provider_instance = MagicMock()
mock_prowler_provider_instance.get_regions.return_value = ["us-east-1"]
mock_initialize_prowler_provider.return_value = (
mock_prowler_provider_instance
)
mock_compliance_template.__getitem__.return_value = {
"cis_1.4_aws": {
"framework": "CIS AWS Foundations Benchmark",
"version": "1.4.0",
"requirements": {
"1.1": {
"description": "Test requirement",
"checks_status": {
"pass": 0,
"fail": 0,
"manual": 0,
"total": 1,
},
"status": "PASS",
},
},
},
}
create_compliance_requirements(tenant_id, scan_id)
assert mock_generate_compliance.call_count == 1
def test_compliance_overview_aggregation_requirement_fail_priority(
self,
tenants_fixture,
scans_fixture,
providers_fixture,
):
with (
patch("api.db_utils.rls_transaction"),
patch(
"tasks.jobs.scan.initialize_prowler_provider"
) as mock_initialize_prowler_provider,
patch(
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
) as mock_compliance_template,
patch(
"tasks.jobs.scan.generate_scan_compliance"
) as mock_generate_compliance,
patch("tasks.jobs.scan.create_objects_in_batches") as mock_create_objects,
patch("api.models.Finding.objects.filter") as mock_findings_filter,
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
providers_fixture[0]
mock_findings_filter.return_value = []
mock_prowler_provider = MagicMock()
mock_prowler_provider.get_regions.return_value = [
"us-east-1",
"us-west-2",
"eu-west-1",
]
mock_initialize_prowler_provider.return_value = mock_prowler_provider
mock_compliance_template.__getitem__.return_value = {
"test_compliance": {
"framework": "Test Framework",
"version": "1.0",
"requirements": {
"req_1": {
"description": "Test Requirement 1",
"checks": {"check_1": None},
"checks_status": {
"pass": 2,
"fail": 1,
"manual": 0,
"total": 3,
},
"status": "FAIL",
}
},
}
}
mock_generate_compliance.return_value = {
"test_compliance": {
"framework": "Test Framework",
"version": "1.0",
"requirements": {
"req_1": {
"description": "Test Requirement 1",
"checks": {
"check_1": {
"us-east-1": {"status": "PASS"},
"us-west-2": {"status": "FAIL"},
"eu-west-1": {"status": "PASS"},
}
},
"checks_status": {
"pass": 2,
"fail": 1,
"manual": 0,
"total": 3,
},
"status": "FAIL",
}
},
}
}
created_objects = []
mock_create_objects.side_effect = (
lambda tenant_id, model, objs, batch_size=500: created_objects.extend(
objs
)
)
create_compliance_requirements(str(tenant.id), str(scan.id))
assert len(created_objects) == 3
assert all(obj.requirement_status == "FAIL" for obj in created_objects)
def test_compliance_overview_aggregation_requirement_pass_all_regions(
self,
tenants_fixture,
scans_fixture,
providers_fixture,
):
with (
patch("api.db_utils.rls_transaction"),
patch(
"tasks.jobs.scan.initialize_prowler_provider"
) as mock_initialize_prowler_provider,
patch(
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
) as mock_compliance_template,
patch(
"tasks.jobs.scan.generate_scan_compliance"
) as mock_generate_compliance,
patch("tasks.jobs.scan.create_objects_in_batches") as mock_create_objects,
patch("api.models.Finding.objects.filter") as mock_findings_filter,
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
providers_fixture[0]
mock_findings_filter.return_value = []
mock_prowler_provider = MagicMock()
mock_prowler_provider.get_regions.return_value = ["us-east-1", "us-west-2"]
mock_initialize_prowler_provider.return_value = mock_prowler_provider
mock_compliance_template.__getitem__.return_value = {
"test_compliance": {
"framework": "Test Framework",
"version": "1.0",
"requirements": {
"req_1": {
"description": "Test Requirement 1",
"checks": {"check_1": None},
"checks_status": {
"pass": 2,
"fail": 0,
"manual": 0,
"total": 2,
},
"status": "PASS",
}
},
}
}
mock_generate_compliance.return_value = {
"test_compliance": {
"framework": "Test Framework",
"version": "1.0",
"requirements": {
"req_1": {
"description": "Test Requirement 1",
"checks": {
"check_1": {
"us-east-1": {"status": "PASS"},
"us-west-2": {"status": "PASS"},
}
},
"checks_status": {
"pass": 2,
"fail": 0,
"manual": 0,
"total": 2,
},
"status": "PASS",
}
},
}
}
created_objects = []
mock_create_objects.side_effect = (
lambda tenant_id, model, objs, batch_size=500: created_objects.extend(
objs
)
)
create_compliance_requirements(str(tenant.id), str(scan.id))
assert len(created_objects) == 2
assert all(obj.requirement_status == "PASS" for obj in created_objects)
def test_compliance_overview_aggregation_multiple_requirements_mixed_status(
self,
tenants_fixture,
scans_fixture,
providers_fixture,
):
with (
patch("api.db_utils.rls_transaction"),
patch(
"tasks.jobs.scan.initialize_prowler_provider"
) as mock_initialize_prowler_provider,
patch(
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
) as mock_compliance_template,
patch(
"tasks.jobs.scan.generate_scan_compliance"
) as mock_generate_compliance,
patch("tasks.jobs.scan.create_objects_in_batches") as mock_create_objects,
patch("api.models.Finding.objects.filter") as mock_findings_filter,
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
providers_fixture[0]
mock_findings_filter.return_value = []
mock_prowler_provider = MagicMock()
mock_prowler_provider.get_regions.return_value = ["us-east-1", "us-west-2"]
mock_initialize_prowler_provider.return_value = mock_prowler_provider
mock_compliance_template.__getitem__.return_value = {
"test_compliance": {
"framework": "Test Framework",
"version": "1.0",
"requirements": {
"req_1": {
"description": "Test Requirement 1",
"checks": {"check_1": None},
"checks_status": {
"pass": 2,
"fail": 0,
"manual": 0,
"total": 2,
},
"status": "PASS",
},
"req_2": {
"description": "Test Requirement 2",
"checks": {"check_2": None},
"checks_status": {
"pass": 1,
"fail": 1,
"manual": 0,
"total": 2,
},
"status": "FAIL",
},
},
}
}
mock_generate_compliance.return_value = {
"test_compliance": {
"framework": "Test Framework",
"version": "1.0",
"requirements": {
"req_1": {
"description": "Test Requirement 1",
"checks": {
"check_1": {
"us-east-1": {"status": "PASS"},
"us-west-2": {"status": "PASS"},
}
},
"checks_status": {
"pass": 2,
"fail": 0,
"manual": 0,
"total": 2,
},
"status": "PASS",
},
"req_2": {
"description": "Test Requirement 2",
"checks": {
"check_2": {
"us-east-1": {"status": "PASS"},
"us-west-2": {"status": "FAIL"},
}
},
"checks_status": {
"pass": 1,
"fail": 1,
"manual": 0,
"total": 2,
},
"status": "FAIL",
},
},
}
}
created_objects = []
mock_create_objects.side_effect = (
lambda tenant_id, model, objs, batch_size=500: created_objects.extend(
objs
)
)
create_compliance_requirements(str(tenant.id), str(scan.id))
assert len(created_objects) == 4
req_1_objects = [
obj for obj in created_objects if obj.requirement_id == "req_1"
]
req_2_objects = [
obj for obj in created_objects if obj.requirement_id == "req_2"
]
assert len(req_1_objects) == 2
assert len(req_2_objects) == 2
assert all(obj.requirement_status == "PASS" for obj in req_1_objects)
assert all(obj.requirement_status == "FAIL" for obj in req_2_objects)
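# Minimal sketch (an assumption, not the project's code) of the requirement
# status aggregation the tests above exercise: a requirement fails overall if
# any region fails, and that overall status is stamped on every per-region
# ComplianceRequirementOverview row.
def aggregate_requirement_status(per_region_statuses: dict) -> str:
    return "FAIL" if "FAIL" in per_region_statuses.values() else "PASS"


# e.g. aggregate_requirement_status({"us-east-1": "PASS", "us-west-2": "FAIL"})
# returns "FAIL", which is why every regional row in the FAIL-priority test
# carries requirement_status == "FAIL".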
@@ -0,0 +1,415 @@
import uuid
from pathlib import Path
from unittest.mock import MagicMock, patch
import pytest
from tasks.tasks import generate_outputs
@pytest.mark.django_db
class TestGenerateOutputs:
def setup_method(self):
self.scan_id = str(uuid.uuid4())
self.provider_id = str(uuid.uuid4())
self.tenant_id = str(uuid.uuid4())
def test_no_findings_returns_early(self):
with patch("tasks.tasks.ScanSummary.objects.filter") as mock_filter:
mock_filter.return_value.exists.return_value = False
result = generate_outputs(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
assert result == {"upload": False}
mock_filter.assert_called_once_with(scan_id=self.scan_id)
@patch("tasks.tasks.rmtree")
@patch("tasks.tasks._upload_to_s3")
@patch("tasks.tasks._compress_output_files")
@patch("tasks.tasks.get_compliance_frameworks")
@patch("tasks.tasks.Compliance.get_bulk")
@patch("tasks.tasks.initialize_prowler_provider")
@patch("tasks.tasks.Provider.objects.get")
@patch("tasks.tasks.ScanSummary.objects.filter")
@patch("tasks.tasks.Finding.all_objects.filter")
def test_generate_outputs_happy_path(
self,
mock_finding_filter,
mock_scan_summary_filter,
mock_provider_get,
mock_initialize_provider,
mock_compliance_get_bulk,
mock_get_available_frameworks,
mock_compress,
mock_upload,
mock_rmtree,
):
mock_scan_summary_filter.return_value.exists.return_value = True
mock_provider = MagicMock()
mock_provider.uid = "provider-uid"
mock_provider.provider = "aws"
mock_provider_get.return_value = mock_provider
prowler_provider = MagicMock()
mock_initialize_provider.return_value = prowler_provider
mock_compliance_get_bulk.return_value = {"cis": MagicMock()}
mock_get_available_frameworks.return_value = ["cis"]
dummy_finding = MagicMock(uid="f1")
mock_finding_filter.return_value.order_by.return_value.iterator.return_value = [
[dummy_finding],
True,
]
mock_transformed_stats = {"some": "stats"}
with (
patch(
"tasks.tasks.FindingOutput._transform_findings_stats",
return_value=mock_transformed_stats,
),
patch(
"tasks.tasks.FindingOutput.transform_api_finding",
return_value={"transformed": "f1"},
),
patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"json": {
"class": MagicMock(name="JSONWriter"),
"suffix": ".json",
"kwargs": {},
}
},
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, MagicMock(name="CSVCompliance"))]},
),
patch(
"tasks.tasks._generate_output_directory",
return_value=("out-dir", "comp-dir"),
),
patch("tasks.tasks.Scan.all_objects.filter") as mock_scan_update,
):
mock_compress.return_value = "/tmp/zipped.zip"
mock_upload.return_value = "s3://bucket/zipped.zip"
result = generate_outputs(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
assert result == {"upload": True}
mock_scan_update.return_value.update.assert_called_once_with(
output_location="s3://bucket/zipped.zip"
)
mock_rmtree.assert_called_once_with(
Path("/tmp/zipped.zip").parent, ignore_errors=True
)
def test_generate_outputs_fails_upload(self):
with (
patch("tasks.tasks.ScanSummary.objects.filter") as mock_filter,
patch("tasks.tasks.Provider.objects.get"),
patch("tasks.tasks.initialize_prowler_provider"),
patch("tasks.tasks.Compliance.get_bulk"),
patch("tasks.tasks.get_compliance_frameworks"),
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory", return_value=("out", "comp")
),
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
patch("tasks.tasks.FindingOutput.transform_api_finding"),
patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"json": {
"class": MagicMock(name="Writer"),
"suffix": ".json",
"kwargs": {},
}
},
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, MagicMock())]},
),
patch("tasks.tasks._compress_output_files", return_value="/tmp/compressed"),
patch("tasks.tasks._upload_to_s3", return_value=None),
patch("tasks.tasks.Scan.all_objects.filter") as mock_scan_update,
):
mock_filter.return_value.exists.return_value = True
mock_findings.return_value.order_by.return_value.iterator.return_value = [
[MagicMock()],
True,
]
result = generate_outputs(
scan_id="scan",
provider_id="provider",
tenant_id=self.tenant_id,
)
assert result == {"upload": False}
mock_scan_update.return_value.update.assert_called_once()
def test_generate_outputs_triggers_html_extra_update(self):
mock_finding_output = MagicMock()
mock_finding_output.compliance = {"cis": ["requirement-1", "requirement-2"]}
with (
patch("tasks.tasks.ScanSummary.objects.filter") as mock_filter,
patch("tasks.tasks.Provider.objects.get"),
patch("tasks.tasks.initialize_prowler_provider"),
patch("tasks.tasks.Compliance.get_bulk", return_value={"cis": MagicMock()}),
patch("tasks.tasks.get_compliance_frameworks", return_value=["cis"]),
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory", return_value=("out", "comp")
),
patch(
"tasks.tasks.FindingOutput._transform_findings_stats",
return_value={"some": "stats"},
),
patch(
"tasks.tasks.FindingOutput.transform_api_finding",
return_value=mock_finding_output,
),
patch("tasks.tasks._compress_output_files", return_value="/tmp/compressed"),
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/f.zip"),
patch("tasks.tasks.Scan.all_objects.filter"),
):
mock_filter.return_value.exists.return_value = True
mock_findings.return_value.order_by.return_value.iterator.return_value = [
[MagicMock()],
True,
]
html_writer_mock = MagicMock()
with (
patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"html": {
"class": lambda *args, **kwargs: html_writer_mock,
"suffix": ".html",
"kwargs": {},
}
},
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, MagicMock())]},
),
):
generate_outputs(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
html_writer_mock.batch_write_data_to_file.assert_called_once()
def test_transform_called_only_on_second_batch(self):
raw1 = MagicMock()
raw2 = MagicMock()
tf1 = MagicMock()
tf1.compliance = {}
tf2 = MagicMock()
tf2.compliance = {}
writer_instances = []
class TrackingWriter:
def __init__(self, findings, file_path, file_extension, from_cli):
self.transform_called = 0
self.batch_write_data_to_file = MagicMock()
self._data = []
self.close_file = False
writer_instances.append(self)
def transform(self, fos):
self.transform_called += 1
with (
patch("tasks.tasks.ScanSummary.objects.filter") as mock_summary,
patch("tasks.tasks.Provider.objects.get"),
patch("tasks.tasks.initialize_prowler_provider"),
patch("tasks.tasks.Compliance.get_bulk"),
patch("tasks.tasks.get_compliance_frameworks", return_value=[]),
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
patch(
"tasks.tasks.FindingOutput.transform_api_finding",
side_effect=[tf1, tf2],
),
patch(
"tasks.tasks._generate_output_directory",
return_value=("outdir", "compdir"),
),
patch("tasks.tasks._compress_output_files", return_value="outdir.zip"),
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/outdir.zip"),
patch("tasks.tasks.rmtree"),
patch("tasks.tasks.Scan.all_objects.filter"),
patch(
"tasks.tasks.batched",
return_value=[
([raw1], False),
([raw2], True),
],
),
):
mock_summary.return_value.exists.return_value = True
with patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"json": {
"class": TrackingWriter,
"suffix": ".json",
"kwargs": {},
}
},
):
result = generate_outputs(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
assert result == {"upload": True}
assert len(writer_instances) == 1
writer = writer_instances[0]
assert writer.transform_called == 1
def test_compliance_transform_called_on_second_batch(self):
raw1 = MagicMock()
raw2 = MagicMock()
compliance_obj = MagicMock()
writer_instances = []
class TrackingComplianceWriter:
def __init__(self, *args, **kwargs):
self.transform_calls = []
self._data = []
writer_instances.append(self)
def transform(self, fos, comp_obj, name):
self.transform_calls.append((fos, comp_obj, name))
def batch_write_data_to_file(self):
pass
two_batches = [
([raw1], False),
([raw2], True),
]
with (
patch("tasks.tasks.ScanSummary.objects.filter") as mock_summary,
patch(
"tasks.tasks.Provider.objects.get",
return_value=MagicMock(uid="UID", provider="aws"),
),
patch("tasks.tasks.initialize_prowler_provider"),
patch(
"tasks.tasks.Compliance.get_bulk", return_value={"cis": compliance_obj}
),
patch("tasks.tasks.get_compliance_frameworks", return_value=["cis"]),
patch(
"tasks.tasks._generate_output_directory",
return_value=("outdir", "compdir"),
),
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
patch(
"tasks.tasks.FindingOutput.transform_api_finding",
side_effect=lambda f, prov: f,
),
patch("tasks.tasks._compress_output_files", return_value="outdir.zip"),
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/outdir.zip"),
patch("tasks.tasks.rmtree"),
patch(
"tasks.tasks.Scan.all_objects.filter",
return_value=MagicMock(update=lambda **kw: None),
),
patch("tasks.tasks.batched", return_value=two_batches),
patch("tasks.tasks.OUTPUT_FORMATS_MAPPING", {}),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda name: True, TrackingComplianceWriter)]},
),
):
mock_summary.return_value.exists.return_value = True
result = generate_outputs(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
assert len(writer_instances) == 1
writer = writer_instances[0]
assert writer.transform_calls == [([raw2], compliance_obj, "cis")]
assert result == {"upload": True}
def test_generate_outputs_logs_rmtree_exception(self, caplog):
mock_finding_output = MagicMock()
mock_finding_output.compliance = {"cis": ["requirement-1", "requirement-2"]}
with (
patch("tasks.tasks.ScanSummary.objects.filter") as mock_filter,
patch("tasks.tasks.Provider.objects.get"),
patch("tasks.tasks.initialize_prowler_provider"),
patch("tasks.tasks.Compliance.get_bulk", return_value={"cis": MagicMock()}),
patch("tasks.tasks.get_compliance_frameworks", return_value=["cis"]),
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory", return_value=("out", "comp")
),
patch(
"tasks.tasks.FindingOutput._transform_findings_stats",
return_value={"some": "stats"},
),
patch(
"tasks.tasks.FindingOutput.transform_api_finding",
return_value=mock_finding_output,
),
patch("tasks.tasks._compress_output_files", return_value="/tmp/compressed"),
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/file.zip"),
patch("tasks.tasks.Scan.all_objects.filter"),
patch("tasks.tasks.rmtree", side_effect=Exception("Test deletion error")),
):
mock_filter.return_value.exists.return_value = True
mock_findings.return_value.order_by.return_value.iterator.return_value = [
[MagicMock()],
True,
]
with (
patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"json": {
"class": lambda *args, **kwargs: MagicMock(),
"suffix": ".json",
"kwargs": {},
}
},
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, MagicMock())]},
),
):
with caplog.at_level("ERROR"):
generate_outputs(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
assert "Error deleting output files" in caplog.text
