Compare commits

...

377 Commits

Author SHA1 Message Date
Daniel Barranquero bdae47d61b feat: add changelog 2025-10-08 16:59:32 +02:00
Daniel Barranquero e3a7d89948 fix(m365): attribute errors in defender, exchange and admincenter 2025-10-08 16:54:52 +02:00
Hugo Pereira Brito c7d7ec9a3b fix: add pagination for m365 and azure users retrieval (#8858) 2025-10-08 09:07:18 +02:00
Rubén De la Torre Vico 155a1813cc chore(aws): enhance metadata for cloudformation service (#8828)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-07 16:39:23 +02:00
Rubén De la Torre Vico 71e444d4ae chore: improve API docs for Provider endpoints (#8723)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-10-07 15:30:14 +02:00
Víctor Fernández Poyatos 42b7f0f1a9 fix(migrations): API key RLS migration (#8863) 2025-10-07 12:39:30 +02:00
Josema Camacho 5b3f0fbd7f fix(doc): document about using the same .env as the code version (#8804) 2025-10-07 09:38:20 +02:00
Josema Camacho 06eb69e455 chore(security): update Django to 5.1.13 (#8842) 2025-10-07 09:38:11 +02:00
Rubén De la Torre Vico 338a11eaaf chore(aws): enhance metadata for account service (#8715)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-06 12:27:47 -05:00
Alejandro Bailo 8814a0710a fix(scans): detail drawer fails after dependencies migration (#8856) 2025-10-06 17:52:38 +02:00
Chandrapal Badshah 76a55cdb54 fix: remove maxTokens for gpt-5 (#8843)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-10-06 17:25:20 +02:00
Rubén De la Torre Vico 736badb284 chore(aws): enhance metadata for appstream service (#8789)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-10-06 15:29:06 +02:00
Prowler Bot 37f77bb778 chore(regions_update): Changes in regions for AWS services (#8847)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-10-06 08:23:03 -05:00
Daniel Barranquero 7e5e48c588 fix(changelog): duplicated v5.12.4 in SDK changelog (#8852) 2025-10-06 08:22:15 -05:00
Hugo Pereira Brito 5f0017046f chore(findings): change References display in UI (#8793) 2025-10-06 14:04:20 +02:00
Víctor Fernández Poyatos 612d867838 fix(tests): Race condition on redundant API unit test (#8849) 2025-10-06 12:42:16 +02:00
Rubén De la Torre Vico 8c2668ebe4 chore: rename docs AGENTS (#8846) 2025-10-06 10:53:17 +02:00
Rubén De la Torre Vico be4b1bd99b chore: add first version of AGENTS.md (#8799) 2025-10-06 10:47:51 +02:00
Daniel Barranquero 502525eff1 fix(compliance): generate file extension correctly (#8791) 2025-10-06 10:27:16 +02:00
Rubén De la Torre Vico 09b5afe9c3 chore(aws): enhance metadata for awslambda service (#8825)
Co-authored-by: Daniel Barranquero <74871504+danibarranqueroo@users.noreply.github.com>
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-03 13:48:55 +02:00
Víctor Fernández Poyatos 9a4fc784db feat(api-keys): Add API Key support for the Prowler API (#8805) 2025-10-03 13:42:43 +02:00
Rubén De la Torre Vico 04177db648 chore(aws): enhance metadata for apigateway service (#8788)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-10-03 11:49:33 +02:00
Alejandro Bailo 2408dbf855 chore(ui): upgrade zod v4, zustand v5, and ai sdk v5 (#8801) 2025-10-03 09:57:46 +02:00
Pepe Fagoaga 9c4a8782e4 fix(conflict-checker): fail on conflict (#8840) 2025-10-03 13:11:45 +05:45
dependabot[bot] 0d549ea39e chore(deps): bump github/codeql-action from 3.29.7 to 3.30.5 (#8812)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: César Arroba <cesar@prowler.com>
2025-10-02 10:36:02 +02:00
dependabot[bot] 0060081cad chore(deps): bump peter-evans/repository-dispatch from 3.0.0 to 4.0.0 (#8821)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 10:35:02 +02:00
dependabot[bot] 0c2d06dd9a chore(deps): bump actions/setup-node from 4.4.0 to 5.0.0 (#8819)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 10:34:21 +02:00
dependabot[bot] 14b9be4c47 chore(deps): bump tj-actions/changed-files from 46.0.5 to 47.0.0 (#8814)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 10:33:13 +02:00
dependabot[bot] 6bac5650e6 chore(deps): bump aws-actions/configure-aws-credentials from 4.2.1 to 5.0.0 (#8813)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 10:32:55 +02:00
dependabot[bot] 6170462a61 chore(deps): bump actions/github-script from 7.0.1 to 8.0.0 (#8820)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 10:32:10 +02:00
dependabot[bot] 2ad5926b13 chore(deps): bump actions/setup-python from 5.6.0 to 6.0.0 (#8818)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 10:31:20 +02:00
dependabot[bot] a6ddc85e4c chore(deps): bump codecov/codecov-action from 5.4.3 to 5.5.1 (#8811)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 10:30:27 +02:00
dependabot[bot] aceff35f29 chore(deps): bump peter-evans/find-comment from 3.1.0 to 4.0.0 (#8817)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 10:29:46 +02:00
dependabot[bot] 3ae96c3aa6 chore(deps): bump actions/labeler from 5.0.0 to 6.0.1 (#8816)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 10:28:56 +02:00
dependabot[bot] 0dcaaa9083 chore(deps): bump actions/cache from 4.2.3 to 4.3.0 (#8815)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 10:28:43 +02:00
dependabot[bot] 323a7f0349 chore(deps): bump docker/login-action from 3.4.0 to 3.6.0 (#8810)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 10:25:49 +02:00
dependabot[bot] 736cbea862 chore(deps): bump softprops/action-gh-release from 2.3.2 to 2.3.3 (#8809)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 10:25:04 +02:00
dependabot[bot] d3e290978e chore(deps): bump actions/checkout from 4.2.2 to 5.0.0 (#8808)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 10:24:41 +02:00
dependabot[bot] 9c91cfcb7d chore(deps): bump trufflesecurity/trufflehog from 3.90.2 to 3.90.8 (#8807)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 10:23:41 +02:00
Daniel Barranquero e279f7fcfd fix: handle eks cluster version and listener certificate arn not in acm (#8802) 2025-10-01 13:55:26 -04:00
Hugo Pereira Brito a555cffebe fix(html): preserve markdown formatting in read-more functionality (#8803) 2025-10-01 13:48:20 -04:00
César Arroba 49f5435392 chore(gha): check API changes for versioning (#8532) 2025-10-01 15:32:08 +02:00
Rubén De la Torre Vico a087dd9b85 chore(aws): enhance metadata for accessanalyzer service (#8688)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-10-01 15:05:44 +02:00
Rubén De la Torre Vico 6e89c301b2 chore(aws): enhance metadata for athena service (#8790)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-01 13:59:03 +02:00
Pedro Martín d5dac448a6 fix(m365): add framework and name for iso27001 (#8792) 2025-10-01 13:43:55 +02:00
Pepe Fagoaga 00e6eb35f1 fix(workflows): load latest SDK only for master (#8796) 2025-10-01 13:35:43 +05:45
Hugo Pereira Brito cdb455b2b1 feat(aws): add new check ec2_instance_with_outdated_ami (#6910)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-09-30 13:54:36 -04:00
Sergio Garcia 837c65ba23 chore(securityhub): improve logging for Security Hub integration (#8608) 2025-09-30 10:36:42 -04:00
OlmeNav 035293b612 feat: Verify that the CheckID is the same as the filename and classname in the Check class (#8690)
Co-authored-by: angelolmn <e.angelolm#go.ugr.es>
Co-authored-by: César Arroba <cesar@prowler.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-09-30 13:46:59 +02:00
Rubén De la Torre Vico 250b5df836 chore(aws): enhance metadata for acm service (#8716)
Co-authored-by: Daniel Barranquero <74871504+danibarranqueroo@users.noreply.github.com>
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-09-30 13:33:09 +02:00
Josema Camacho ec59dbc6ee fix: move delete user 500 error fix to its right version (#8787) 2025-09-30 10:56:29 +02:00
Alan Buscaglia 4d5676f00e feat: upgrade to React 19, Next.js 15, React Compiler, HeroUI and Tailwind 4 (#8748)
Co-authored-by: Alan Buscaglia <alanbuscaglia@MacBook-Pro.local>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
Co-authored-by: César Arroba <cesar@prowler.com>
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
2025-09-30 09:59:51 +02:00
MustafaAamir 2a4b62527a fix(tests_iam): AWS managed policies are isolated (#8609)
Co-authored-by: MustafaAamir <mustafa@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-09-30 13:44:03 +05:45
Josema Camacho ec0341c696 fix(user): PermissionError, 500, when deleting user (#8731) 2025-09-30 09:49:33 +02:00
Rubén De la Torre Vico 2e5f3a5a66 feat(aws): enhance metadata for apigatewayv2 service (#8719)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-09-29 12:35:05 -04:00
dependabot[bot] 231a5fab86 chore(deps-dev): bump authlib from 1.6.1 to 1.6.4 (#8741)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
2025-09-29 12:08:47 -04:00
Andoni Alonso 10319ea69d docs(github): refactor getting started and auth (#8767) 2025-09-29 11:33:15 -04:00
Sergio Garcia 53bb5aff22 feat(llm): add LLM provider (#8555)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-09-29 11:24:10 -04:00
Rubén De la Torre Vico 52a5fff61f chore(aws): enhance metadata for appsync service (#8721)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-09-29 16:36:43 +02:00
Andoni Alonso f28754b883 docs(iac): refactor getting started and auth (#8779) 2025-09-29 15:41:25 +02:00
Pedro Martín 6fce797ca2 feat(compliance-mapper): add first version (#8568) 2025-09-29 15:40:29 +02:00
Adrián Jesús Peña Rodríguez a1fd315104 ref(actions): remove xmlsec step (#8482)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-09-29 13:04:33 +02:00
Prowler Bot a91f0ac8b5 chore(regions_update): Changes in regions for AWS services (#8777)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-09-29 16:27:27 +05:45
Andoni Alonso 2c96df05f4 docs(mongodbatlas): refactor getting started and auth (#8776) 2025-09-29 11:58:09 +02:00
Chandrapal Badshah b57788c7b9 fix: update prowler package version in api (#8778)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-09-29 11:44:45 +02:00
Pedro Martín 7431bab2a7 docs(threatscore): add info with Prowler ThreatScore (#8711)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-09-29 11:17:05 +02:00
Andoni Alonso a52697bfdf docs(m365): refactor getting started and auth (#8761) 2025-09-29 10:01:40 +02:00
Alejandro Bailo 9dc2199381 feat(ui): add compliance_name (#8775) 2025-09-29 09:59:18 +02:00
Rubén De la Torre Vico 89db760b89 docs(mcp): add preview feature disclaimer (#8774) 2025-09-29 09:42:16 +02:00
Chandrapal Badshah 4356c1e186 fix(ui): update ui changelog (#8771)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-09-26 17:08:17 +02:00
Rubén De la Torre Vico e32cebc553 feat(mcp): add Dockerfile for MCP Server containerization (#8768) 2025-09-26 15:04:24 +02:00
Andoni Alonso 23e1cc281d docs(azure): refactor getting started and auth (#8754)
Co-authored-by: Rubén De la Torre Vico <ruben@prowler.com>
2025-09-26 15:02:57 +02:00
Josema Camacho 48d3fb4fe3 feat(doc): 📚 add documenation about JWT keys autogeneration (#8766) 2025-09-26 13:52:46 +05:45
César Arroba ab727e6816 chore(gha): fix e2e workflow (#8769) 2025-09-25 22:13:53 +05:45
Rubén De la Torre Vico 23d882d7ab feat(mcp): add Prowler App MCP Server (#8744) 2025-09-25 15:21:34 +02:00
Alejandro Bailo 59435167ea fix(scans): update link disable condition for findings table (#8762) 2025-09-25 12:57:22 +02:00
Andoni Alonso 77cdd793f8 fix(aws): cover SNS ResourceID in Quick Inventory output (#8763) 2025-09-25 11:14:32 +02:00
Andoni Alonso d13f3f0e0c docs(gcp): refactor getting started and auth (#8758) 2025-09-25 10:19:01 +02:00
Víctor Fernández Poyatos 56821de2f4 feat(tasks): Move compliance tasks to compliance queue (#8755) 2025-09-24 14:00:17 +02:00
Daniel Barranquero 92190fa69f feat(docs): add renaming checks to developer guide (#8717)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-09-24 11:46:52 +02:00
Prowler Bot 85db7c5183 chore(regions_update): Changes in regions for AWS services (#8736)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-09-24 10:38:12 +02:00
Josema Camacho a55ac266bf chore(django): update django to 5.1.12 due to security problems (#8693) 2025-09-23 16:35:25 +05:45
Andoni Alonso 90622e0437 docs: update Entra SSO SAML video link (#8745) 2025-09-23 12:43:51 +02:00
Pepe Fagoaga 81596250dc fix(actions): lock poetry after changes (#8477) 2025-09-23 14:31:45 +05:45
Rubén De la Torre Vico 43db5fe527 feat(mcp): add basic logger (#8740) 2025-09-23 09:09:38 +02:00
Pepe Fagoaga dfb479fa80 chore(readme): remove deprecations and fix typo (#8739) 2025-09-22 20:31:42 +05:45
Pedro Martín aa88b453ff fix(compliance): change order in models and remove prints (#8738) 2025-09-22 15:45:09 +02:00
Pedro Martín fbda66c6d1 feat(compliance): add name for each compliance (#7920)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-09-22 14:53:27 +02:00
Adrián Jesús Peña Rodríguez 2200e65519 feat(auth): add safeguards to prevent self-role removal and enforce MANAGE_ACCOUNT role presence (#8729) 2025-09-22 14:04:39 +02:00
Josema Camacho b8537aa22d feat(config): add generation for JWT keys if missing (#8655) 2025-09-22 13:14:54 +02:00
Rubén De la Torre Vico cb4a5dec79 chore: set an appropiate User-Agent in requests (#8724) 2025-09-22 12:48:13 +02:00
Rubén De la Torre Vico 0286de7ce2 chore: add mcp_server component labeler configuration (#8737) 2025-09-22 15:40:23 +05:45
Pepe Fagoaga b00602f109 fix(users): only list roles and memberships with manage_account (#8281)
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-09-22 11:25:24 +02:00
Adrián Jesús Peña Rodríguez 1cfae546a0 chore(deps): add markdown package version 3.9 to dependencies (#8735) 2025-09-22 10:44:26 +02:00
Sergio Garcia 05dae4e8d1 fix(iac): handle empty results (#8733) 2025-09-16 14:20:15 +02:00
dependabot[bot] 52ddaca4c5 chore(deps-dev): bump moto from 5.0.28 to 5.1.11 (#7100)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-09-16 14:17:47 +02:00
Alejandro Bailo 940a1202b3 fix: handle 4XX and 204 properly (#8722)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-09-15 17:07:15 +02:00
Prowler Bot ec27451199 chore(regions_update): Changes in regions for AWS services (#8728)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-09-15 15:02:37 +02:00
Sergio Garcia 60e06dcc6e chore(html): support markdown in HTML (#8727) 2025-09-15 11:38:18 +02:00
Hugo Pereira Brito 7733aab088 feat: add additional_urls to finding details and markdown (#8704)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-09-15 11:33:27 +02:00
Pepe Fagoaga 5c6fadcfe7 chore(changelog): remove whitespace in links (#8712) 2025-09-12 17:09:19 +05:45
César Arroba 1bdb314e2c chore(gha): permissions missed for conflict checker action (#8714) 2025-09-12 12:37:12 +02:00
Rubén De la Torre Vico 5b0365947f feat: add first Prowler MCP server version (#8695) 2025-09-12 09:56:36 +02:00
Daniel Barranquero b512f6c421 fix(firehose): false positive in firehose_stream_encrypted_at_rest (#8599)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-09-11 09:55:16 -04:00
Alejandro Bailo c4a8771647 chore(dependencies): update package versions and track them (#8696) 2025-09-11 15:36:06 +02:00
Alejandro Bailo 6f967c6da7 fix(auth): validate email field (#8698) 2025-09-11 15:29:49 +02:00
Alejandro Bailo 82cd29d595 fix(auth): add method attribute to form for proper submission handling (#8699) 2025-09-11 15:02:36 +02:00
Daniel Barranquero 14c2334e1b fix(defender): change policies rules key (#8702) 2025-09-11 13:46:21 +02:00
Rubén De la Torre Vico 3598514cb4 chore(aws/config): adapt metadata to new standarized format (#8641)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-09-10 17:46:11 +02:00
Hugo Pereira Brito c4ba061f30 chore(outputs): adapt to new metadata specification (#8651) 2025-09-10 17:21:19 +02:00
Chandrapal Badshah f4530b21d2 fix(lighthouse): make Enter submit text (#8664)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-09-10 16:34:35 +02:00
Chandrapal Badshah 3949ab736d fix(lighthouse): allow scrolling during AI response streaming (#8669)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-09-10 16:34:24 +02:00
sumit-tft 9da5066b18 feat(ui): add copy link icon to finding detail page (#8685)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-09-10 16:30:16 +02:00
Rubén De la Torre Vico 941539616c chore(aws/neptune): adapt some metadata fields to new format (#8494)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2025-09-10 16:21:30 +02:00
sumit-tft 135fa044b7 feat(ui): Add Prowler Hub menu item with tooltip (#8692)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-09-10 16:09:09 +02:00
Andoni Alonso 48913c1886 docs(aws): refactor getting started and auth (#8683) 2025-09-10 13:45:36 +02:00
Pedro Martín ea20943f83 feat(actions): support dashboard changes in changelog (#8694) 2025-09-10 11:05:56 +02:00
Hugo Pereira Brito 2738cfd1bd feat(dashboard): add Description and markdown support (#8667) 2025-09-10 10:53:53 +02:00
Rubén De la Torre Vico 265c3d818e docs(developer-guide): enhance check metadata format (#8411)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-09-10 09:19:08 +02:00
Alejandro Bailo c0a9fdf8c8 docs(jira): add comprehensive guide for Jira integration in Prowler App (#8681)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
2025-09-09 17:01:12 +02:00
Rubén De la Torre Vico 8b3335f426 chore: add metadata-review label for .metadata.json files (#8689) 2025-09-09 20:32:04 +05:45
Daniel Barranquero 252033d113 fix(compliance): replace old check id with new one (#8682) 2025-09-09 14:25:56 +02:00
Prowler Bot 0bc00dbca4 chore(release): Bump version to v5.13.0 (#8679)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-09-09 16:36:22 +05:45
Adrián Jesús Peña Rodríguez 3f5178bffb chore: update api changelog (#8677) 2025-09-09 10:23:55 +02:00
Josema Camacho e08b272a1d fix(login): add DRF throttle option for dj-rest-auth lib (#8672) 2025-09-09 09:34:02 +02:00
Pedro Martín 64c43a288d feat(jira): add force accept language for requests (#8674) 2025-09-09 13:17:25 +05:45
Daniel Barranquero 74bf0e6b47 fix(aws): nonetype errors in opensearch, firehose and cognito (#8670) 2025-09-09 13:12:57 +05:45
Andoni Alonso 02b7c5328f docs: update providers table (#8676)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-09-09 09:25:20 +02:00
Alejandro Bailo bb02004e7c fix: social auth buttons showed for sign-up (#8673) 2025-09-09 09:23:56 +02:00
Andoni Alonso 82cf216a74 feat(mongodbatlas): add MongoDB Atlas provider PoC (#8312)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-09-09 09:18:37 +02:00
Daniel Barranquero 7916425ed4 fix(memorydb): handle clusters with no security groups (#8666) 2025-09-08 15:05:13 -04:00
Andoni Alonso d98063ed47 docs: add interface column to providers (#8675) 2025-09-08 15:03:17 -04:00
Andoni Alonso 27bf78a3a1 docs: update providers list (#8671) 2025-09-08 17:12:16 +02:00
Andoni Alonso f50bd50d60 docs: add SSO with SAML Entra ID video link (#8668) 2025-09-08 14:57:38 +02:00
Alejandro Bailo 80665e0396 feat(ui): send a finding to Jira (#8649) 2025-09-08 14:15:23 +02:00
Pedro Martín 4b259fa8dd chore(changelog): update with latest changes (#8665) 2025-09-08 17:24:31 +05:45
Hugo Pereira Brito 10db2ed6d8 chore(docs): add notes regarding gov accounts support (#8656) 2025-09-08 11:07:00 +02:00
Chandrapal Badshah 422a8a0f62 fix: change title in lighthouse settings (#8615)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-09-08 10:34:09 +02:00
Daniel Barranquero 906a2cc651 fix(entra): add metadata description for check entra_admin_users_phishing_resistant_mfa_enabled (#8654) 2025-09-08 08:11:46 +02:00
Víctor Fernández Poyatos 43fe9c6860 feat(integrations): allow sending findings to Jira from the API (#8645) 2025-09-05 14:28:34 +02:00
Andoni Alonso f87b2089fb docs: remove llms.txt (#8653) 2025-09-05 17:08:42 +05:45
Samuele Pasini 1884874ab6 fix: typo ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_* CheckID (#8294)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-09-05 13:16:12 +02:00
Andoni Alonso cd6d29e176 docs: reorg tutorials (#8652) 2025-09-05 16:49:14 +05:45
Pedro Martín 0b7055e983 feat(jira): add send_finding method with specific finding fields (#8648) 2025-09-05 12:25:53 +02:00
Josema Camacho ae53b76d78 feat(login): add DJANGO_THROTTLE_TOKEN_OBTAIN to main .env file (#8650) 2025-09-05 16:01:48 +05:45
Josema Camacho 406e473b5c feat(login): add throttling option for the /api/v1/tokens endpoint (#8647) 2025-09-05 14:37:31 +05:45
Pedro Martín 1a2bf461f0 feat(jira): support labels in jira tickets (#8603) 2025-09-05 09:53:24 +02:00
Samuele Pasini 1b49c0b27f feat: add --excluded-checks-file flag (#8301)
Co-authored-by: pedrooot <pedromarting3@gmail.com>
2025-09-05 09:33:21 +02:00
Pablo Lara 12ada66978 feat: add status filter to /overviews endpoint (#8186)
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
2025-09-04 18:46:14 +02:00
Alejandro Bailo daa2536005 feat: Jira UI integration - pages and server actions (#8640) 2025-09-04 15:59:37 +02:00
Chandrapal Badshah 69a62db19a chore: rename to lighthouse ai (#8614)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-09-04 15:30:07 +05:45
Pedro Martín 79450d6977 fix(securityhub): resolve TypeError from Python3.9 (#8619)
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2025-09-03 17:52:09 +02:00
Víctor Fernández Poyatos 0463fd0830 refactor(integrations-jira): Move domain to credentials and retrieve metadata during connection test (#8637) 2025-09-03 17:24:42 +02:00
Alejandro Bailo b15e3d339c fix(saml): remove validation call on email domain change (#8638) 2025-09-03 17:04:51 +02:00
Pedro Martín 1fc12952ba feat(jira): add color for manual status (#8642) 2025-09-03 16:53:31 +02:00
sumit-tft 088a6bcbda feat(ui): handle no-permissions on scan page (#8624)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-09-03 15:51:14 +02:00
Hugo Pereira Brito a3b0bb6d4b refactor(models): rename AdditionalUrls to AdditionalURLs (#8639) 2025-09-03 19:34:06 +05:45
Pedro Martín 3c819f8875 chore(changelog): update with latest changes (#8636) 2025-09-03 12:54:50 +02:00
Pedro Martín cdf0292bbc feat(jira): add get_metadata (#8630) 2025-09-03 10:59:07 +02:00
César Arroba 987121051b chore(sdk): comment push readme to dockerhub steps (#8628) 2025-09-02 21:48:42 +05:45
Hugo Pereira Brito c9ed7773d2 feat(models): add AdditionalUrls field to check metadata (#8590) 2025-09-02 21:27:21 +05:45
Pepe Fagoaga fdf45aac51 fix(img): prowler architecture (#8635) 2025-09-02 21:15:40 +05:45
Alejandro Bailo 3ded224a4b fix: new errors detected through the app (#8629) 2025-09-02 12:35:06 +02:00
sumit-tft 230a085c76 fix(ui): display NoProvidersAdded when no cloud providers are configured (#8626) 2025-09-02 12:33:58 +02:00
Chandrapal Badshah 8cd90e07dc chore(ui): eslint nextjs files (#8627)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-09-02 12:15:48 +02:00
Pedro Martín 06ded98d05 feat(jira): add data to table and error handling (#8601) 2025-09-02 11:48:52 +02:00
Pedro Martín a5066326bd chore(changelog): update with latests changes (#8620) 2025-09-02 11:27:13 +02:00
Alejandro Bailo 83a9ac2109 chore(ui): update CHANGELOG (#8625) 2025-09-02 10:45:34 +02:00
Alejandro Bailo 136eb4facd feat: 50X errors handler (#8621) 2025-09-02 10:12:03 +02:00
Víctor Fernández Poyatos d4eb4bdca7 feat(integrations): Support JIRA integration in the API (#8622) 2025-09-02 09:53:36 +02:00
Alejandro Bailo 665c9d878a chore(ui): update Next.js and ESLint dependencies to version 14.2.32 (#8623) 2025-09-01 18:38:39 +02:00
Hugo Pereira Brito a064e43302 chore(ui): render attributes as markdown (#8604)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-09-01 16:43:36 +02:00
Daniel Barranquero fdb76e7820 feat(docs): update mfa enforcement date for m365 (#8610) 2025-09-01 09:48:21 +02:00
Pepe Fagoaga 1259bb85e3 fix: remove dot (#8613) 2025-08-29 14:46:19 +05:45
Pepe Fagoaga 0db9ab91b2 chore(docs): review stats, imgs and update copy (#8612) 2025-08-29 14:44:01 +05:45
César Arroba f6ea314ec0 chore(sdk): push readme file to docker hub (#8611) 2025-08-29 14:43:53 +05:45
Alejandro Bailo 9e02da342b docs: Security Hub API and UI documentation (#8576)
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-08-28 20:43:42 +05:45
Prowler Bot 358d4239c7 chore(release): Bump version to v5.12.0 (#8605)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-08-28 16:56:24 +02:00
Víctor Fernández Poyatos b003fca377 fix(docs): remove empty sections (#8600) 2025-08-28 12:55:46 +02:00
Víctor Fernández Poyatos b4deda3c3f docs(api): fix API response samples (#8592) 2025-08-28 12:39:07 +02:00
Sergio Garcia 338bb74c0c fix(azure): query API management logs with not empty operations (#8598) 2025-08-28 12:03:35 +02:00
Alejandro Bailo 7342a8901f chore: update CHANGELOG.md for Prowler v5.11.0 release (#8597) 2025-08-28 11:43:24 +02:00
Sergio Garcia f484b83f15 feat(azure): Add APIM threat detection for LLM jacking attacks (#8571)
Co-authored-by: Rubén De la Torre Vico <ruben@prowler.com>
2025-08-28 11:42:07 +02:00
Adrián Jesús Peña Rodríguez c69187f484 chore: prepare api changelog for 5.11 (#8596) 2025-08-28 10:25:08 +02:00
Alejandro Bailo 5038afeb26 fix(security-hub): copy updated (#8594) 2025-08-27 18:42:34 +02:00
Sergio Garcia fce43cea16 chore: update changelog (#8593) 2025-08-27 17:57:07 +02:00
Andoni Alonso 43a14b89bc fix(github): provider always scans user instead of organization when using provider UID (#8587) 2025-08-27 17:45:13 +02:00
Tom 24364bd73e feat(gcp): Add support for skipping APIs check (#8575)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2025-08-27 14:44:34 +02:00
Adrián Jesús Peña Rodríguez a1abe6dd2d fix(sh): reset regions information if connection fails (#8588) 2025-08-27 14:15:09 +02:00
César Arroba 25098bc82a chore(gha): fix conflict checker action (#8586) 2025-08-27 13:41:39 +02:00
sumit-tft 20f2f45610 feat(ui): add S3 bucket link with folder for each integration (#8554)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-08-27 12:40:37 +02:00
Alejandro Bailo 06c2608a05 feat(integrations): external links and copies changed (#8574) 2025-08-27 12:40:25 +02:00
Alejandro Bailo 329ac113f2 chore(docs): update CHANGELOG properly (#8585) 2025-08-27 11:57:12 +02:00
Hugo Pereira Brito 97179d2b43 fix(docs): incorrect permission in sp creation guide (#8581) 2025-08-27 11:01:37 +02:00
sumit-tft 8317ea783f feat(ui): show all provider UIDs in scan page filter regardless of co… (#8375) 2025-08-27 10:50:16 +02:00
Andoni Alonso 65e7e89d61 fix(github): GitHub Personal Access Token authentication fails without user:email scope (#8580) 2025-08-27 09:57:32 +02:00
Víctor Fernández Poyatos 26a4dd4e8d chore: bump h2 to 4.3.0 (#8573) 2025-08-26 15:17:06 +02:00
Alejandro Bailo dab0cea2dd feat(ui): Security Hub (#8552) 2025-08-26 14:30:45 +02:00
Daniel Barranquero 3b42eb3818 fix(s3): resource metadata error in s3_bucket_shadow_resource_vulnerability (#8572) 2025-08-26 13:30:49 +02:00
Prowler Bot a5ba950627 chore(regions_update): Changes in regions for AWS services (#8567)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-08-26 09:57:45 +02:00
Andoni Alonso a1232446c1 docs: refactor several sections (#8570) 2025-08-26 09:55:18 +02:00
Pedro Martín aa6f851887 docs(aws): deploying prowler iam roles across aws organizations (#8427)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-08-26 09:45:14 +02:00
Adrián Jesús Peña Rodríguez 25f972e910 feat(sh): create asff of there is an enabled SecurityHub integration (#8569) 2025-08-25 16:58:21 +02:00
Pedro Martín 7216e5ce3d chore(github): improve pull request template (#7910) 2025-08-25 16:22:55 +02:00
Adrián Jesús Peña Rodríguez 83242da0ab feat(integrations): implement AWS Security Hub integration (#8365) 2025-08-25 15:53:48 +02:00
Alejandro Bailo d457166a0c fix(ui): AWS form selector default values (#8553) 2025-08-25 12:30:02 +02:00
Daniel Barranquero 88f38b2d2a feat(docs): remove old requirements links (#8561) 2025-08-22 14:22:50 +02:00
Pepe Fagoaga c2e0849d5f fix(conflict-checker): use prowler-bot (#8560) 2025-08-22 17:27:44 +05:45
Andoni Alonso 1fdebfa295 docs: remove "Requirements" page (#8559) 2025-08-22 15:55:25 +05:45
Sergio Garcia ea6d04ed3a chore(securityhub): add static credentials and role assumption support (#8539)
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
2025-08-22 11:58:35 +02:00
Sergio Garcia 2167683851 feat(aws): add Resource Explorer enumeration actions (#8557) 2025-08-22 11:47:51 +02:00
Pepe Fagoaga 6324be31ab fix(api): poetry lock up to date with the SDK (#8558) 2025-08-22 11:05:14 +02:00
Alejandro Bailo 525f152e51 fix(ui): update authorization logic to match right paths (#8556) 2025-08-22 10:35:28 +02:00
Sergio Garcia c3a2d79234 chore(iac): change engine to trivy (#8466)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-08-22 10:17:51 +02:00
Andoni Alonso cefa708322 docs: add provider bulk provisioning (#8551) 2025-08-21 16:33:45 +02:00
Andoni Alonso 1a9e14ab2a chore(bulk-provisioning-tool): add script to bulk provision providers (#8540) 2025-08-21 13:11:46 +02:00
Chandrapal Badshah b1c6094b6d fix: Remove temperature for GPT-5 models (#8550)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-08-21 12:40:49 +02:00
Pablo Lara 1038b11fe3 docs: update changelog (#8549) 2025-08-21 12:22:27 +02:00
Chandrapal Badshah d54e3b25db fix: Refactor getting lighthouse config (#8546)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-08-21 11:14:21 +02:00
Pepe Fagoaga 6a8e8750bb chore(actions): conflict checker (#8547) 2025-08-21 14:28:18 +05:45
Hugo Pereira Brito ad3d4536fb fix(m365): only evaluate enabled users in entra_users_mfa_capable (#8544) 2025-08-20 16:45:00 +02:00
Andoni Alonso 46c24055ee docs: refactor Overview into several files (#8543) 2025-08-20 17:44:06 +05:45
Pepe Fagoaga 4c6a1592ac chore(actions): update docs comment with link (#8448) 2025-08-20 17:42:32 +05:45
Hugo Pereira Brito 89e657561c feat(github): add User Email and APP name/installations information (#8501)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-08-20 12:26:38 +02:00
Hugo Pereira Brito 55099abc86 fix(organization): list all accessible organizations (#8535)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-08-20 12:13:01 +02:00
Andoni Alonso 3c599a75cc feat(iam): add ECS privilege escalation patterns to IAM checks (#8541) 2025-08-20 09:23:30 +02:00
Chandrapal Badshah f77897f813 feat: gpt-5 and gpt-5-mini integration with lighthouse (#8527)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
2025-08-19 16:49:21 +02:00
Sergio Garcia 30518f2e0e feat(aws): new check eks_cluster_deletion_protection_enabled (#8536) 2025-08-19 10:25:24 +02:00
Chandrapal Badshah efdeb431ba feat: Add resource agent to supervisor (#8509)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-08-19 09:40:14 +02:00
Sergio Garcia bb07cf9147 fix(aws): exact match in resource-arn filtering (#8533) 2025-08-18 12:11:13 +02:00
Prowler Bot 9214b5c26f chore(regions_update): Changes in regions for AWS services (#8531)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-08-18 11:58:41 +02:00
dependabot[bot] d57df3cc28 chore(deps): bump actions/upload-artifact from 4.5.0 to 4.6.2 (#8154)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-18 11:43:41 +02:00
Andoni Alonso 2f5fce41dc feat(iam): remove standalone iam:PassRole from privesc detection and add missing patterns (#8530) 2025-08-18 11:35:14 +02:00
Chandrapal Badshah 6918a75449 fix: add business context to lighthouse chat (#8528)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-08-18 09:49:23 +02:00
Pablo Lara 3aeaa3d992 feat(filters): improve provider connection filter UX (#8520) 2025-08-18 09:10:16 +02:00
Sergio Garcia fd833eecf0 fix(github): solve Github APP auth method (#8529) 2025-08-18 08:35:19 +02:00
Andoni Alonso 39e4d20b24 feat(iam): add Bedrock AgentCore privilege escalation combo (#8526) 2025-08-15 13:25:15 +02:00
Sergio Garcia dfdd45e4d0 fix(github): list all accessible repositories (#8522) 2025-08-14 10:38:38 +02:00
Hugo Pereira Brito 81478dfed3 fix(compliance): GitHub CIS 1.0 (#8519) 2025-08-13 16:45:36 +02:00
Chandrapal Badshah 2854f8405c fix: simplify error handling to use only error.message (#8518)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-08-13 10:59:47 +02:00
Jaen-923 0e1578cfbc chore(aws): Refine kisa isms-p compliance mapping (#8479)
Co-authored-by: ghkim583 <203069125+ghkim583@users.noreply.github.com>
2025-08-13 09:08:37 +02:00
Hugo Pereira Brito f5b1532647 fix(kafka): false positives in kafka_cluster_is_public check (#8514) 2025-08-13 09:05:09 +02:00
Sergio Garcia d9f3a6b88e docs(github): add Github onboarding documentation (#8510) 2025-08-12 17:11:30 +02:00
Hugo Pereira Brito b0c386fc60 fix(app): fix false positives in app_http_logs_enabled (#8507)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-08-12 14:47:17 +02:00
Hugo Pereira Brito 72b06261df fix(storage): fall positives in storage_geo_redundant_enabled (#8504) 2025-08-12 12:30:43 +02:00
sumit-tft 1562b77581 fix(ui): redirection after deleting providers group and improve erro… (#8389)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-08-12 11:31:45 +02:00
Daniel Barranquero 10e38ca407 fix: missing resource_name in GCP and Azure Defender checks (#8352) 2025-08-11 16:16:08 +02:00
Rubén De la Torre Vico 5842f2df37 feat(azure/vm): add new check vm_jit_access_enabled (#8202)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-08-11 13:12:36 +02:00
Prowler Bot 8b3b9ffd99 chore(regions_update): Changes in regions for AWS services (#8499)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-08-11 12:00:02 +02:00
Rubén De la Torre Vico d238050065 feat(azure/vm): add new check vm_sufficient_daily_backup_retention_period (#8200)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-08-11 11:44:45 +02:00
sumit-tft 5572d476ad fix(ui): adjust table headers to be single-line and consistent (#8480) 2025-08-11 10:47:10 +02:00
sumit-tft 3c94d3a56f fix(ui): disable See Compliance button until scan completes (#8487)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-08-11 10:37:35 +02:00
Hugo Pereira Brito 85af4ff77c feat(m365): add certificate auth method to cli (#8404)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-08-11 09:47:56 +02:00
Daniel Barranquero dcee114ef3 fix: validation errors in azure and m365 (#8368) 2025-08-11 09:42:30 +02:00
Pedro Martín 760723874c fix(prowler-threatscore): order the requirements by id (#8495)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-08-11 08:20:10 +02:00
Pedro Martín c0a4898074 chore(changelog): update (#8496) 2025-08-11 07:48:23 +02:00
Alejandro Bailo 03c0533b58 feat(ui): overview charts display improved (#8491)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-08-08 10:59:15 +02:00
sumit-tft c8dcb0edb0 feat(ui): add GitHub submenu under High Risk Findings (#8488)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-08-08 10:36:36 +02:00
Pablo Lara 82171ee916 docs: update changelog (#8489) 2025-08-08 10:20:53 +02:00
Pablo Lara df4bf18b97 feat(ui): add Mutelist menu item under Configuration (#8444)
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-08-08 09:09:37 +02:00
Alejandro Bailo 94e60f7329 fix(ui): assume role fields shown (#8484) 2025-08-07 17:44:46 +02:00
Rubén De la Torre Vico f1ba5abbec chore(docs): update provider statistics in README.md (#8483)
Co-authored-by: Claude <noreply@anthropic.com>
2025-08-07 17:10:56 +02:00
Hugo Pereira Brito 6cc1a9a2cb fix(compliance): delete invalid requirements for GitHub CIS 1.0 (#8472)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-08-07 20:51:20 +07:00
Pablo Lara 31f98092bf feat(ui): add provider type filter to providers page (#8473) 2025-08-07 14:34:04 +02:00
Pepe Fagoaga 85197036ca chore(env): Update NEXT_PUBLIC_PROWLER_RELEASE_VERSION (#8476) 2025-08-07 17:50:18 +05:45
Pepe Fagoaga be43025f00 fix(actions): always get latest SDK reference (#8474) 2025-08-07 17:38:40 +05:45
César Arroba c6b34f0a85 chore(api): open PR with API prowler version (#8475) 2025-08-07 13:49:39 +02:00
Prowler Bot 675698a26a chore(release): Bump version to v5.11.0 (#8470)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-08-07 12:40:55 +02:00
Alejandro Bailo 8d9bf2384f docs: S3 tutorial documentation (#8414)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
2025-08-07 16:04:42 +05:45
César Arroba ff900a2a45 chore(gha): use prowler-bot for push in action (#8469) 2025-08-07 10:50:58 +02:00
César Arroba a41663fb0d chore(gha): fix release preparation workflow (#8468) 2025-08-07 10:41:16 +02:00
César Arroba 033e9fd58c chore(gha): fix release preparation workflow (#8467) 2025-08-07 10:36:22 +02:00
sumit-tft 240b02b498 feat(ui): add SAML documentation link in config modal (#8461)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
2025-08-07 10:23:07 +02:00
Rubén De la Torre Vico 87eb2dfdf7 chore(changelog): move fixes from version 5.9.3 to 5.10 (#8464) 2025-08-07 13:43:56 +05:45
Alejandro Bailo b4d8d64f0e feat: update AWS role credentials form to set default credentials typ… (#8459) 2025-08-07 09:54:48 +02:00
Pablo Lara 7944ebe83a docs: update changelog (#8462) 2025-08-07 09:39:24 +02:00
Pepe Fagoaga bd138114c9 fix: changelog check update messages (#8465) 2025-08-07 13:22:54 +05:45
Adrián Jesús Peña Rodríguez d527a3f12b chore: update changelog (#8463) 2025-08-07 09:35:16 +02:00
Pepe Fagoaga 260fada3eb fix(s3): Use HeadBucket instead of GetBucketLocation (#8456) 2025-08-06 19:20:52 +05:45
Pepe Fagoaga 0ee0fc082a chore(s3): remove trailing 's' from docs helper (#8458) 2025-08-06 14:21:39 +02:00
Hugo Pereira Brito 9d66d86f66 fix(docs): m365 requirements Needed permissions link (#8457) 2025-08-06 13:51:16 +02:00
Alejandro Bailo 825e53c38f feat(ui): add a default Mutelist placeholder (#8455) 2025-08-06 13:11:31 +02:00
Daniel Barranquero 196c17d44d feat(gcp): add retry to avoid quota limit errors (#8412)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-08-06 16:59:41 +07:00
Andoni Alonso fc69e195e4 fix(github): handle GithubAppIdentityInfo in output generation (#8423)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-08-06 16:55:44 +07:00
Prowler Bot 5f53a9ec6f chore(regions_update): Changes in regions for AWS services (#8437)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-08-06 16:53:43 +07:00
dependabot[bot] 5e72a40898 chore(deps): bump github/codeql-action from 3.29.2 to 3.29.5 (#8434)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-06 16:52:09 +07:00
dependabot[bot] 496ada3cba chore(deps): bump trufflesecurity/trufflehog from 3.89.2 to 3.90.2 (#8433)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-06 16:51:42 +07:00
Adrián Jesús Peña Rodríguez 481a43f3f6 chore(integrations): remove unnecessary error alerts (#8453) 2025-08-06 09:16:26 +02:00
Pepe Fagoaga 58298706d4 docs(saml): IdP initiated flow (#8435) 2025-08-06 12:46:18 +05:45
Pepe Fagoaga e75a760da0 fix(ui): cfn quick link (#8452) 2025-08-05 22:42:57 +05:45
Pepe Fagoaga c313757ef2 fix(templates): only one cloudformation template (#8451) 2025-08-05 18:17:50 +02:00
Adrián Jesús Peña Rodríguez 284678fe48 fix(export): remove static timestamp (#8449) 2025-08-05 18:12:04 +02:00
Alejandro Bailo c3d25e6f39 feat(ui): S3 integrations pagination added (#8450) 2025-08-05 18:11:32 +02:00
Adrián Jesús Peña Rodríguez a9d16bbbce chore: change output folder (#8447) 2025-08-05 14:07:35 +02:00
Pepe Fagoaga 92bc992e7f feat(s3): templates for permissions (#8395) 2025-08-05 17:36:04 +05:45
Alejandro Bailo 903e4f8b9f feat(integrations): add enabled attribute to S3 integration (#8446) 2025-08-05 13:13:58 +02:00
Alejandro Bailo 2c09076f91 feat: output_directory default value added (#8445) 2025-08-05 12:20:31 +02:00
Adrián Jesús Peña Rodríguez 3d4902b057 feat(integrations): integrations enabled by default (#8439) 2025-08-05 11:25:42 +02:00
Chandrapal Badshah b30eab7935 fix: Don't invoke tools if no providers or completed scans (#8443)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-08-05 09:32:35 +02:00
sumit-tft cf8402e013 feat(ui): add notification system (#8394)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-08-05 09:06:15 +02:00
Pedro Martín af8fbaf2cd docs(mutelist): improve mutelist docs across all the providers (#8397)
Co-authored-by: Pablo Lara <larabjj@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-08-05 08:38:50 +02:00
Alejandro Bailo c748e57878 feat: manage integration permission behavior (#8441)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-08-04 17:49:04 +02:00
Alejandro Bailo a5187c6a42 feat(ui): S3 integration retouches (#8438) 2025-08-04 16:04:10 +02:00
Alejandro Bailo e19ed30ac7 feat(UI): xml validation (#8429) 2025-08-04 12:09:18 +02:00
Hugo Pereira Brito 96ce1461b9 chore(sentry): add powershell user auth module connection errors to ignored list (#8420) 2025-08-04 11:58:05 +02:00
Alejandro Bailo 9da5fb67c3 feat(ui): S3 integration (#8391) 2025-08-04 11:43:14 +02:00
Chandrapal Badshah eb1c1791e4 fix: clear only last message on error (#8431)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-08-04 10:33:45 +02:00
Adrián Jesús Peña Rodríguez 581afd38e6 fix: add default values for S3 class (#8417)
Co-authored-by: Pedro Martín <pedromarting3@gmail.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-08-01 13:50:51 +02:00
sumit-tft 19a735aafe chore(ui): remove misconfigurations from Top Failed Findings in the s… (#8426) 2025-08-01 12:47:17 +02:00
Paul Negedu 2170fbb1ab feat(aws): add s3_bucket_shadow_resource_vulnerability check (#8398)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-08-01 18:26:03 +08:00
Pablo Lara 90c6c6b98d feat: add new provider GitHub and update enum source of truth (#8421) 2025-08-01 10:03:47 +02:00
sumit-tft 02b416b4f8 chore(ui): remove browse all resources from the sidebar (#8418) 2025-07-31 16:13:30 +02:00
Hugo Pereira Brito 1022b5e413 chore(docs): add a step to check development guide (#8416) 2025-07-31 12:45:15 +02:00
Pablo Lara d1bad9d9ab chore: rename menu item (#8415) 2025-07-31 12:10:07 +02:00
Rubén De la Torre Vico 178f3850be chore: add M365 provider to PR labeler (#8406) 2025-07-31 17:32:18 +08:00
Adrián Jesús Peña Rodríguez d239d299e2 fix(s3): use enabled to filter (#8409) 2025-07-31 10:00:05 +02:00
Pepe Fagoaga 88fae9ecae chore(ui): remove changelog entry (#8410) 2025-07-31 09:27:11 +02:00
Hugo Pereira Brito a3bff9705c fix(tests): github and iac providers arguments_test naming and structure (#8408) 2025-07-30 17:16:34 +02:00
César Arroba 75989b09d7 chore(gha): fix payload on merged PR action (#8407) 2025-07-30 16:59:40 +02:00
Pablo Lara 9a622f60fe feat(providers): add GitHub provider support with credential types (#8405) 2025-07-30 15:55:40 +02:00
Rubén De la Torre Vico 7cd1966066 fix(azure,m365): use default tenant domain instead of first domain in list (#8402) 2025-07-30 13:23:25 +02:00
Pedro Martín 77e59203ae feat(prowler-threatscore): remove and add requirements (#8401) 2025-07-30 13:09:51 +02:00
Chandrapal Badshah 0a449c7e13 fix(lighthouse): Display errors in Lighthouse & allow resending message (#8358)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-07-30 12:32:48 +02:00
Adrián Jesús Peña Rodríguez 163fbaff19 feat(integrations): add s3 integration (#8056) 2025-07-30 12:05:46 +02:00
Sergio Garcia 7ec514d9dd feat(aws): new check bedrock_api_key_no_long_term_credentials (#8396) 2025-07-30 17:04:16 +08:00
Hugo Pereira Brito b63f70ac82 fix(m365): enhance execution to avoid multiple error calls (#8353) 2025-07-30 14:54:27 +08:00
Chandrapal Badshah 2c86b3a990 feat: Add lighthouse banner (#8259)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
Co-authored-by: Pablo Lara <larabjj@gmail.com>
2025-07-29 12:30:57 +02:00
Daniel Barranquero 12443f7cbb feat(docs): update m365 and azure docs (#8393) 2025-07-29 11:58:03 +02:00
Rubén De la Torre Vico 3a8c635b75 docs(dev-guide): add generic best practices for checks and services (#8074)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-07-29 11:04:26 +02:00
Rubén De la Torre Vico 8bc6e8b7ab docs(getting-started): improve quality redrive (#7963)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-07-29 11:04:12 +02:00
Rubén De la Torre Vico 9ca1899ebf docs(tutorials): improve quality redrive (#7915)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2025-07-29 11:03:52 +02:00
Sergio Garcia 1bdcf2c7f1 refactor(iac): revert importingcheckov as python library (#8385) 2025-07-29 15:55:28 +08:00
Pedro Martín 92a804bf88 fix(prowler-threatscore): remove typo from description req 1.2.3 - m365 (#8384)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-07-28 23:55:38 +08:00
ghkim583 f85ad9a7a2 chore(aws): minor fixes for the kisa isms-p compliance (#8386) 2025-07-28 17:51:20 +02:00
Pedro Martín 308c778bad fix(kisa): change the way of counting the PASS/FAILED reqs (#8382) 2025-07-28 21:56:58 +08:00
Jaen-923 ee06d3a68a chore(aws): update kisa-isms-p compliance (#8367)
Co-authored-by: ghkim583 <203069125+ghkim583@users.noreply.github.com>
2025-07-28 21:55:50 +08:00
Andoni Alonso 8dc4bd0be8 feat(github): add repository and organization scoping support (#8329) 2025-07-28 21:43:41 +08:00
Pedro Martín bf9e38dc5c fix(docs): remove typo from getting started - github (#8380) 2025-07-28 20:18:13 +08:00
Aviad Levy a85b89ffb5 fix(ec2): add check that protocol is matched in security group checks (#8374)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-07-28 19:53:08 +08:00
César Arroba 87da11b712 chore(gha): delete repo limitation for bump workflow (#8379) 2025-07-28 13:22:19 +02:00
César Arroba 8b57f178e0 chore(gha): improve e2e pipeline (#8378) 2025-07-28 13:22:12 +02:00
Prowler Bot 7830ed8b9f chore(regions_update): Changes in regions for AWS services (#8376)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-07-28 17:56:48 +08:00
Kay Agahd d4e66c4a6f chore(sqs): clean up code (#8366) 2025-07-25 20:10:34 +08:00
Rubén De la Torre Vico 1cfe610d47 feat(azure/vm): add new check vm_scaleset_not_empty (#8192)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-07-25 18:42:03 +08:00
Rubén De la Torre Vico d9a9236ab7 feat(azure/vm): add new check vm_desired_sku_size (#8191)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-07-25 17:51:01 +08:00
Hugo Pereira Brito 285aea3458 fix(docs): change Exchange Administrator role to Global Reader for M365 (#8360) 2025-07-25 15:45:30 +08:00
César Arroba b051aeeb64 chore(gha): automate e2e tests with new workflow (#8361) 2025-07-24 16:54:01 +02:00
Pedro Martín b99dce6a43 feat(azure): add CIS 4.0 (#7782) 2025-07-24 22:29:46 +08:00
Andoni Alonso 04749c1da1 fix(aws): sns_topics_not_publicly_accessible false positive with aws:SourceArn conditions (#8340)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-07-24 18:03:30 +08:00
Chandrapal Badshah 44d70f8467 fix(lighthouse): update prompt and tool schema for checks tool (#8265)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-07-24 10:50:36 +02:00
Andoni Alonso 95791a9909 chore(aws): replace known errors with warnings (#8347)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-07-24 15:34:45 +08:00
sumit-tft ad0b8a4208 feat(ui): create CustomLink component and refactor links to use it (#8341)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-07-23 19:10:51 +02:00
Cole Murray 5669a42039 fix(wazuh): patch command injection vulnerability in prowler-wrapper.py (#8331)
Co-authored-by: Test User <test@example.com>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-07-23 16:06:55 +02:00
Kay Agahd 83b328ea92 fix(aws): avoid false positives in SQS encryption check for ephemeral queues (#8330)
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2025-07-23 21:03:02 +08:00
Alejandro Bailo a6c88c0d9e test: timeout updated for E2E (#8351) 2025-07-23 13:11:32 +02:00
Sergio Garcia 922f9d2f91 docs(gcp): update GCP permissions (#8350) 2025-07-23 17:43:42 +08:00
Rubén De la Torre Vico a69d0d16c0 fix(azure/storage): handle when Azure API set values to None (#8325)
Co-authored-by: Pedro Martín <pedromarting3@gmail.com>
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-07-23 17:11:04 +08:00
Alejandro Bailo 676cc44fe2 feat: env keys behavior updated (#8348) 2025-07-23 10:44:28 +02:00
Alejandro Bailo 3840e40870 test(e2e): Sign-in (#8337)
Co-authored-by: César Arroba <cesar@prowler.com>
2025-07-22 18:04:54 +02:00
dependabot[bot] ab2d57554a chore(deps): bump form-data from 4.0.3 to 4.0.4 in /ui (#8346)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-22 17:53:32 +02:00
César Arroba cbb5b21e6c chore(gha): e2e tests pipeline with API services (#8338) 2025-07-22 17:49:23 +02:00
Sergio Garcia 1efd5668ce feat(api): add GitHub provider support (#8271) 2025-07-22 23:26:02 +08:00
Sergio Garcia ca86aeb1d7 feat(aws): new check bedrock_api_key_no_administrative_privileges (#8321) 2025-07-22 22:06:17 +08:00
Víctor Fernández Poyatos 4f2a8b71bb feat(performance): resources scenario (#8345) 2025-07-22 13:01:19 +02:00
Prowler Bot 3b0cb3db85 chore(regions_update): Changes in regions for AWS services (#8333)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-07-22 17:23:24 +08:00
Víctor Fernández Poyatos 00c527ff79 chore: update Prowler changelog for v5.9.2 (#8342) 2025-07-22 10:53:22 +02:00
Víctor Fernández Poyatos ab348d5752 feat(resources): Optimize findings prefetching during resource views (#8336) 2025-07-21 16:33:07 +02:00
Daniel Barranquero dd713351dc fix(defender): avoid duplicated findings in check defender_domain_dkim_enabled (#8334) 2025-07-21 13:07:26 +02:00
sumit-tft fa722f1dc7 feat(ui): add 32-character limit validation for scan name in create a… (#8319)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-07-21 10:00:25 +02:00
Pedro Martín b0cc3978d0 feat(docs): add info about updating Prowler App (#8320) 2025-07-21 07:44:07 +02:00
César Arroba aa843b823c chore(gha): fix action version (#8327) 2025-07-18 15:00:32 +02:00
Víctor Fernández Poyatos 020edc0d1d fix(tasks): calculate failed findings for resources during scan (#8322) 2025-07-18 13:19:22 +02:00
César Arroba 036da81bbd chore(gha): fix api prowler version (#8323) 2025-07-18 12:43:38 +02:00
sumit-tft 4428bcb2c0 feat(ui): update step title and description in cloud provider update … (#8303)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-07-18 10:11:44 +02:00
Prowler Bot 21de9a2f6f chore(release): Bump version to v5.10.0 (#8314)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-07-17 19:38:28 +02:00
Alejandro Bailo 231d933b9e chore(docs): SAML documentation (#8137)
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-07-17 23:22:49 +05:45
Alejandro Bailo 2ad360a7f9 docs(ui): Mutelist documentation (#8201) 2025-07-17 23:15:20 +05:45
1265 changed files with 262393 additions and 24812 deletions
+6 -38
@@ -10,6 +10,8 @@ NEXT_PUBLIC_API_BASE_URL=${API_BASE_URL}
NEXT_PUBLIC_API_DOCS_URL=http://prowler-api:8080/api/v1/docs
AUTH_TRUST_HOST=true
UI_PORT=3000
# Temp URL for feeds need to use actual
RSS_FEED_URL=https://prowler.com/blog/rss
# openssl rand -base64 32
AUTH_SECRET="N/c6mnaS5+SWq81+819OrzQZlmx1Vxtp/orjttJSmw8="
# Google Tag Manager ID
@@ -83,55 +85,21 @@ DJANGO_CACHE_MAX_AGE=3600
DJANGO_STALE_WHILE_REVALIDATE=60
DJANGO_MANAGE_DB_PARTITIONS=True
# openssl genrsa -out private.pem 2048
DJANGO_TOKEN_SIGNING_KEY="-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDs4e+kt7SnUJek
6V5r9zMGzXCoU5qnChfPiqu+BgANyawz+MyVZPs6RCRfeo6tlCknPQtOziyXYM2I
7X+qckmuzsjqp8+u+o1mw3VvUuJew5k2SQLPYwsiTzuFNVJEOgRo3hywGiGwS2iv
/5nh2QAl7fq2qLqZEXQa5+/xJlQggS1CYxOJgggvLyra50QZlBvPve/AxKJ/EV/Q
irWTZU5lLNI8sH2iZR05vQeBsxZ0dCnGMT+vGl+cGkqrvzQzKsYbDmabMcfTYhYi
78fpv6A4uharJFHayypYBjE39PwhMyyeycrNXlpm1jpq+03HgmDuDMHydk1tNwuT
nEC7m7iNAgMBAAECggEAA2m48nJcJbn9SVi8bclMwKkWmbJErOnyEGEy2sTK3Of+
NWx9BB0FmqAPNxn0ss8K7cANKOhDD7ZLF9E2MO4/HgfoMKtUzHRbM7MWvtEepldi
nnvcUMEgULD8Dk4HnqiIVjt3BdmGiTv46OpBnRWrkSBV56pUL+7msZmMZTjUZvh2
ZWv0+I3gtDIjo2Zo/FiwDV7CfwRjJarRpYUj/0YyuSA4FuOUYl41WAX1I301FKMH
xo3jiAYi1s7IneJ16OtPpOA34Wg5F6ebm/UO0uNe+iD4kCXKaZmxYQPh5tfB0Qa3
qj1T7GNpFNyvtG7VVdauhkb8iu8X/wl6PCwbg0RCKQKBgQD9HfpnpH0lDlHMRw9K
X7Vby/1fSYy1BQtlXFEIPTN/btJ/asGxLmAVwJ2HAPXWlrfSjVAH7CtVmzN7v8oj
HeIHfeSgoWEu1syvnv2AMaYSo03UjFFlfc/GUxF7DUScRIhcJUPCP8jkAROz9nFv
DByNjUL17Q9r43DmDiRsy0IFqQKBgQDvlJ9Uhl+Sp7gRgKYwa/IG0+I4AduAM+Gz
Dxbm52QrMGMTjaJFLmLHBUZ/ot+pge7tZZGws8YR8ufpyMJbMqPjxhIvRRa/p1Tf
E3TQPW93FMsHUvxAgY3MV5MzXFPhlNAKb+akP/RcXUhetGAuZKLubtDCWa55ZQuL
wj2OS+niRQKBgE7K8zUqNi6/22S8xhy/2GPgB1qPObbsABUofK0U6CAGLo6te+gc
6Jo84IyzFtQbDNQFW2Fr+j1m18rw9AqkdcUhQndiZS9AfG07D+zFB86LeWHt4DS4
ymIRX8Kvaak/iDcu/n3Mf0vCrhB6aetImObTj4GgrwlFvtJOmrYnO8EpAoGAIXXP
Xt25gWD9OyyNiVu6HKwA/zN7NYeJcRmdaDhO7B1A6R0x2Zml4AfjlbXoqOLlvLAf
zd79vcoAC82nH1eOPiSOq51plPDI0LMF8IN0CtyTkn1Lj7LIXA6rF1RAvtOqzppc
SvpHpZK9pcRpXnFdtBE0BMDDtl6fYzCIqlP94UUCgYEAnhXbAQMF7LQifEm34Dx8
BizRMOKcqJGPvbO2+Iyt50O5X6onU2ITzSV1QHtOvAazu+B1aG9pEuBFDQ+ASxEu
L9ruJElkOkb/o45TSF6KCsHd55ReTZ8AqnRjf5R+lyzPqTZCXXb8KTcRvWT4zQa3
VxyT2PnaSqEcexWUy4+UXoQ=
-----END PRIVATE KEY-----"
DJANGO_TOKEN_SIGNING_KEY=""
# openssl rsa -in private.pem -pubout -out public.pem
DJANGO_TOKEN_VERIFYING_KEY="-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA7OHvpLe0p1CXpOlea/cz
Bs1wqFOapwoXz4qrvgYADcmsM/jMlWT7OkQkX3qOrZQpJz0LTs4sl2DNiO1/qnJJ
rs7I6qfPrvqNZsN1b1LiXsOZNkkCz2MLIk87hTVSRDoEaN4csBohsEtor/+Z4dkA
Je36tqi6mRF0Gufv8SZUIIEtQmMTiYIILy8q2udEGZQbz73vwMSifxFf0Iq1k2VO
ZSzSPLB9omUdOb0HgbMWdHQpxjE/rxpfnBpKq780MyrGGw5mmzHH02IWIu/H6b+g
OLoWqyRR2ssqWAYxN/T8ITMsnsnKzV5aZtY6avtNx4Jg7gzB8nZNbTcLk5xAu5u4
jQIDAQAB
-----END PUBLIC KEY-----"
DJANGO_TOKEN_VERIFYING_KEY=""
# openssl rand -base64 32
DJANGO_SECRETS_ENCRYPTION_KEY="oE/ltOhp/n1TdbHjVmzcjDPLcLA41CVI/4Rk+UB5ESc="
DJANGO_BROKER_VISIBILITY_TIMEOUT=86400
DJANGO_SENTRY_DSN=
DJANGO_THROTTLE_TOKEN_OBTAIN=50/minute
# Sentry settings
SENTRY_ENVIRONMENT=local
SENTRY_RELEASE=local
#### Prowler release version ####
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.7.5
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.12.2
# Social login credentials
SOCIAL_GOOGLE_OAUTH_CALLBACK_URL="${AUTH_URL}/api/auth/callback/google"
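The inline comments above double as regeneration instructions now that the hardcoded key material is blanked out. Collected in one place, the commands look like this (a sketch; each output must then be pasted into the corresponding variable):

```bash
# Random 32-byte secrets for AUTH_SECRET and DJANGO_SECRETS_ENCRYPTION_KEY
openssl rand -base64 32

# RSA key pair for the Django token signing/verifying keys
openssl genrsa -out private.pem 2048                  # DJANGO_TOKEN_SIGNING_KEY
openssl rsa -in private.pem -pubout -out public.pem   # DJANGO_TOKEN_VERIFYING_KEY
```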
+20
@@ -22,6 +22,11 @@ provider/kubernetes:
- any-glob-to-any-file: "prowler/providers/kubernetes/**"
- any-glob-to-any-file: "tests/providers/kubernetes/**"
provider/m365:
- changed-files:
- any-glob-to-any-file: "prowler/providers/m365/**"
- any-glob-to-any-file: "tests/providers/m365/**"
provider/github:
- changed-files:
- any-glob-to-any-file: "prowler/providers/github/**"
@@ -32,6 +37,11 @@ provider/iac:
- any-glob-to-any-file: "prowler/providers/iac/**"
- any-glob-to-any-file: "tests/providers/iac/**"
provider/mongodbatlas:
- changed-files:
- any-glob-to-any-file: "prowler/providers/mongodbatlas/**"
- any-glob-to-any-file: "tests/providers/mongodbatlas/**"
github_actions:
- changed-files:
- any-glob-to-any-file: ".github/workflows/*"
@@ -47,11 +57,13 @@ mutelist:
- any-glob-to-any-file: "prowler/providers/azure/lib/mutelist/**"
- any-glob-to-any-file: "prowler/providers/gcp/lib/mutelist/**"
- any-glob-to-any-file: "prowler/providers/kubernetes/lib/mutelist/**"
- any-glob-to-any-file: "prowler/providers/mongodbatlas/lib/mutelist/**"
- any-glob-to-any-file: "tests/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/aws/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/azure/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/gcp/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/kubernetes/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/mongodbatlas/lib/mutelist/**"
integration/s3:
- changed-files:
@@ -98,6 +110,10 @@ component/ui:
- changed-files:
- any-glob-to-any-file: "ui/**"
component/mcp-server:
- changed-files:
- any-glob-to-any-file: "mcp_server/**"
compliance:
- changed-files:
- any-glob-to-any-file: "prowler/compliance/**"
@@ -107,3 +123,7 @@ compliance:
review-django-migrations:
- changed-files:
- any-glob-to-any-file: "api/src/backend/api/migrations/**"
metadata-review:
- changed-files:
- any-glob-to-any-file: "**/*.metadata.json"
+4
@@ -8,6 +8,10 @@ If fixes an issue please add it with `Fix #XXXX`
Please include a summary of the change and which issue is fixed. List any dependencies that are required for this change.
### Steps to review
Please add a detailed description of how to review this PR.
### Checklist
- Are there new checks included in this PR? Yes / No
@@ -62,7 +62,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set short git commit SHA
id: vars
@@ -71,7 +71,7 @@ jobs:
echo "SHORT_SHA=${shortSha}" >> $GITHUB_ENV
- name: Login to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
@@ -107,7 +107,7 @@ jobs:
- name: Trigger deployment
if: github.event_name == 'push'
uses: peter-evans/repository-dispatch@ff45666b9427631e3450c54a1bcbee4d9ff4d7c0 # v3.0.0
uses: peter-evans/repository-dispatch@5fc4efd1a4797ddb68ffd0714a238564e4cc0e6f # v4.0.0
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
+3 -3
@@ -44,16 +44,16 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/api-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
category: "/language:${{matrix.language}}"
+32 -17
@@ -13,6 +13,7 @@ on:
- "master"
- "v5.*"
paths:
- ".github/workflows/api-pull-request.yml"
- "api/**"
env:
@@ -75,18 +76,20 @@ jobs:
--health-retries 5
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: api/**
files: |
api/**
.github/workflows/api-pull-request.yml
files_ignore: ${{ env.IGNORE_FILES }}
- name: Replace @master with current branch in pyproject.toml
- name: Replace @master with current branch in pyproject.toml - Only for pull requests to `master`
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true' && github.event_name == 'pull_request' && github.base_ref == 'master'
run: |
BRANCH_NAME="${GITHUB_HEAD_REF:-${GITHUB_REF_NAME}}"
echo "Using branch: $BRANCH_NAME"
@@ -99,7 +102,24 @@ jobs:
python -m pip install --upgrade pip
pipx install poetry==2.1.1
- name: Update poetry.lock after the branch name change
- name: Update SDK's poetry.lock resolved_reference to latest commit - Only for push events to `master`
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true' && github.event_name == 'push' && github.ref == 'refs/heads/master'
run: |
# Get the latest commit hash from the prowler-cloud/prowler repository
LATEST_COMMIT=$(curl -s "https://api.github.com/repos/prowler-cloud/prowler/commits/master" | jq -r '.sha')
echo "Latest commit hash: $LATEST_COMMIT"
# Update the resolved_reference specifically for prowler-cloud/prowler repository
sed -i '/url = "https:\/\/github\.com\/prowler-cloud\/prowler\.git"/,/resolved_reference = / {
s/resolved_reference = "[a-f0-9]\{40\}"/resolved_reference = "'"$LATEST_COMMIT"'"/
}' poetry.lock
# Verify the change was made
echo "Updated resolved_reference:"
grep -A2 -B2 "resolved_reference" poetry.lock
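For context, the stanza that the `sed` range above rewrites is the git-source block of the SDK dependency in `poetry.lock`; it looks roughly like this (illustrative, with a made-up SHA):

```
[package.source]
type = "git"
url = "https://github.com/prowler-cloud/prowler.git"
reference = "master"
resolved_reference = "0123456789abcdef0123456789abcdef01234567"
```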
- name: Update poetry.lock
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
@@ -107,17 +127,11 @@ jobs:
- name: Set up Python ${{ matrix.python-version }}
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ matrix.python-version }}
cache: "poetry"
- name: Install system dependencies for xmlsec
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
sudo apt-get update
sudo apt-get install -y libxml2-dev libxmlsec1-dev libxmlsec1-openssl pkg-config
- name: Install dependencies
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
@@ -164,8 +178,9 @@ jobs:
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
# 76352, 76353, 77323 come from SDK, but they cannot upgrade it yet. It does not affect API
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
run: |
poetry run safety check --ignore 70612,66963,74429,76352,76353,77323
poetry run safety check --ignore 70612,66963,74429,76352,76353,77323,77744,77745
- name: Vulture
working-directory: ./api
@@ -187,7 +202,7 @@ jobs:
- name: Upload coverage reports to Codecov
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
with:
@@ -195,11 +210,11 @@ jobs:
test-container-build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: api/**
files_ignore: ${{ env.IGNORE_FILES }}
@@ -7,6 +7,7 @@ on:
- 'v3'
paths:
- 'docs/**'
- '.github/workflows/build-documentation-on-pr.yml'
env:
PR_NUMBER: ${{ github.event.pull_request.number }}
@@ -16,9 +17,20 @@ jobs:
name: Documentation Link
runs-on: ubuntu-latest
steps:
- name: Leave PR comment with the Prowler Documentation URI
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
- name: Find existing documentation comment
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad # v4.0.0
id: find-comment
with:
issue-number: ${{ env.PR_NUMBER }}
comment-author: 'github-actions[bot]'
body-includes: '<!-- prowler-docs-link -->'
- name: Create or update PR comment with the Prowler Documentation URI
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ env.PR_NUMBER }}
body: |
<!-- prowler-docs-link -->
You can check the documentation for this PR here -> [Prowler Documentation](https://prowler-prowler-docs--${{ env.PR_NUMBER }}.com.readthedocs.build/projects/prowler-open-source/en/${{ env.PR_NUMBER }}/)
edit-mode: replace
+1 -1
@@ -1,4 +1,4 @@
name: Create Backport Label
name: Prowler - Create Backport Label
on:
release:
+2 -2
@@ -7,11 +7,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
- name: TruffleHog OSS
uses: trufflesecurity/trufflehog@6641d4ba5b684fffe195b9820345de1bf19f3181 # v3.89.2
uses: trufflesecurity/trufflehog@466da5b0bb161144f6afca9afe5d57975828c410 # v3.90.8
with:
path: ./
base: ${{ github.event.repository.default_branch }}
+1 -1
@@ -14,4 +14,4 @@ jobs:
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@8558fd74291d67161a8a78ce36a881fa63b766a9 # v5.0.0
- uses: actions/labeler@634933edcd8ababfe52f92936142cc22ac488b1b # v6.0.1
+175
@@ -0,0 +1,175 @@
name: Prowler - PR Conflict Checker
on:
pull_request:
types:
- opened
- synchronize
- reopened
branches:
- "master"
- "v5.*"
# Leaving this commented until we find a way to run it for forks but in Prowler's context
# pull_request_target:
# types:
# - opened
# - synchronize
# - reopened
# branches:
# - "master"
# - "v5.*"
jobs:
conflict-checker:
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: write
issues: write
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
**
- name: Check for conflict markers
id: conflict-check
run: |
echo "Checking for conflict markers in changed files..."
CONFLICT_FILES=""
HAS_CONFLICTS=false
# Check each changed file for conflict markers
for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
if [ -f "$file" ]; then
echo "Checking file: $file"
# Look for conflict markers
if grep -l "^<<<<<<<\|^=======\|^>>>>>>>" "$file" 2>/dev/null; then
echo "Conflict markers found in: $file"
CONFLICT_FILES="$CONFLICT_FILES$file "
HAS_CONFLICTS=true
fi
fi
done
if [ "$HAS_CONFLICTS" = true ]; then
echo "has_conflicts=true" >> $GITHUB_OUTPUT
echo "conflict_files=$CONFLICT_FILES" >> $GITHUB_OUTPUT
echo "Conflict markers detected in files: $CONFLICT_FILES"
else
echo "has_conflicts=false" >> $GITHUB_OUTPUT
echo "No conflict markers found in changed files"
fi
- name: Add conflict label
if: steps.conflict-check.outputs.has_conflicts == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
github-token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
script: |
const { data: labels } = await github.rest.issues.listLabelsOnIssue({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
});
const hasConflictLabel = labels.some(label => label.name === 'has-conflicts');
if (!hasConflictLabel) {
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
labels: ['has-conflicts']
});
console.log('Added has-conflicts label');
} else {
console.log('has-conflicts label already exists');
}
- name: Remove conflict label
if: steps.conflict-check.outputs.has_conflicts == 'false'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
github-token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
script: |
try {
await github.rest.issues.removeLabel({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
name: 'has-conflicts'
});
console.log('Removed has-conflicts label');
} catch (error) {
if (error.status === 404) {
console.log('has-conflicts label was not present');
} else {
throw error;
}
}
- name: Find existing conflict comment
if: steps.conflict-check.outputs.has_conflicts == 'true'
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad # v4.0.0
id: find-comment
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-regex: '(⚠️ \*\*Conflict Markers Detected\*\*|✅ \*\*Conflict Markers Resolved\*\*)'
- name: Create or update conflict comment
if: steps.conflict-check.outputs.has_conflicts == 'true'
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ github.event.pull_request.number }}
edit-mode: replace
body: |
⚠️ **Conflict Markers Detected**
This pull request contains unresolved conflict markers in the following files:
```
${{ steps.conflict-check.outputs.conflict_files }}
```
Please resolve these conflicts by:
1. Locating the conflict markers: `<<<<<<<`, `=======`, and `>>>>>>>`
2. Manually editing the files to resolve the conflicts
3. Removing all conflict markers
4. Committing and pushing the changes
- name: Find existing conflict comment when resolved
if: steps.conflict-check.outputs.has_conflicts == 'false'
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad # v4.0.0
id: find-resolved-comment
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-regex: '(⚠️ \*\*Conflict Markers Detected\*\*|✅ \*\*Conflict Markers Resolved\*\*)'
- name: Update comment when conflicts resolved
if: steps.conflict-check.outputs.has_conflicts == 'false' && steps.find-resolved-comment.outputs.comment-id != ''
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-resolved-comment.outputs.comment-id }}
issue-number: ${{ github.event.pull_request.number }}
edit-mode: replace
body: |
✅ **Conflict Markers Resolved**
All conflict markers have been successfully resolved in this pull request.
- name: Fail workflow if conflicts detected
if: steps.conflict-check.outputs.has_conflicts == 'true'
run: |
echo "::error::Workflow failed due to conflict markers in files: ${{ steps.conflict-check.outputs.conflict_files }}"
exit 1
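The same check can be reproduced locally before pushing; a rough equivalent, assuming `origin/master` is the base branch:

```bash
# Grep every file changed against the base branch for conflict markers
for file in $(git diff --name-only origin/master...HEAD); do
  [ -f "$file" ] || continue
  if grep -nE '^(<<<<<<<|=======|>>>>>>>)' "$file"; then
    echo "Conflict markers found in: $file"
  fi
done
```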
+226 -71
@@ -1,4 +1,4 @@
name: Prowler Release Preparation
name: Prowler - Release Preparation
run-name: Prowler Release Preparation for ${{ inputs.prowler_version }}
@@ -19,11 +19,28 @@ jobs:
runs-on: ubuntu-latest
permissions:
contents: write
pull-requests: write
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: '3.12'
- name: Install Poetry
run: |
python3 -m pip install --user poetry
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Configure Git
run: |
git config --global user.name "prowler-bot"
git config --global user.email "179230569+prowler-bot@users.noreply.github.com"
- name: Parse version and determine branch
run: |
@@ -42,25 +59,157 @@ jobs:
BRANCH_NAME="v${MAJOR_VERSION}.${MINOR_VERSION}"
echo "BRANCH_NAME=${BRANCH_NAME}" >> "${GITHUB_ENV}"
# Calculate UI version (1.X.X format - matches Prowler minor version)
UI_VERSION="1.${MINOR_VERSION}.${PATCH_VERSION}"
echo "UI_VERSION=${UI_VERSION}" >> "${GITHUB_ENV}"
# Function to extract the latest version from changelog
extract_latest_version() {
local changelog_file="$1"
if [ -f "$changelog_file" ]; then
# Extract the first version entry (most recent) from changelog
# Format: ## [version] (1.2.3) or ## [vversion] (v1.2.3)
local version=$(grep -m 1 '^## \[' "$changelog_file" | sed 's/^## \[\(.*\)\].*/\1/' | sed 's/^v//' | tr -d '[:space:]')
echo "$version"
else
echo ""
fi
}
# Calculate API version (1.X.X format - one minor version ahead)
API_MINOR_VERSION=$((MINOR_VERSION + 1))
API_VERSION="1.${API_MINOR_VERSION}.${PATCH_VERSION}"
# Read actual versions from changelogs (source of truth)
UI_VERSION=$(extract_latest_version "ui/CHANGELOG.md")
API_VERSION=$(extract_latest_version "api/CHANGELOG.md")
SDK_VERSION=$(extract_latest_version "prowler/CHANGELOG.md")
echo "UI_VERSION=${UI_VERSION}" >> "${GITHUB_ENV}"
echo "API_VERSION=${API_VERSION}" >> "${GITHUB_ENV}"
echo "SDK_VERSION=${SDK_VERSION}" >> "${GITHUB_ENV}"
if [ -n "$UI_VERSION" ]; then
echo "Read UI version from changelog: $UI_VERSION"
else
echo "Warning: No UI version found in ui/CHANGELOG.md"
fi
if [ -n "$API_VERSION" ]; then
echo "Read API version from changelog: $API_VERSION"
else
echo "Warning: No API version found in api/CHANGELOG.md"
fi
if [ -n "$SDK_VERSION" ]; then
echo "Read SDK version from changelog: $SDK_VERSION"
else
echo "Warning: No SDK version found in prowler/CHANGELOG.md"
fi
echo "Prowler version: $PROWLER_VERSION"
echo "Branch name: $BRANCH_NAME"
echo "UI version: $UI_VERSION"
echo "API version: $API_VERSION"
echo "SDK version: $SDK_VERSION"
echo "Is minor release: $([ $PATCH_VERSION -eq 0 ] && echo 'true' || echo 'false')"
else
echo "Invalid version syntax: '$PROWLER_VERSION' (must be N.N.N)" >&2
exit 1
fi
- name: Extract changelog entries
run: |
set -e
# Function to extract changelog for a specific version
extract_changelog() {
local file="$1"
local version="$2"
local output_file="$3"
if [ ! -f "$file" ]; then
echo "Warning: $file not found, skipping..."
touch "$output_file"
return
fi
# Extract changelog section for this version
awk -v version="$version" '
/^## \[v?'"$version"'\]/ { found=1; next }
found && /^## \[v?[0-9]+\.[0-9]+\.[0-9]+\]/ { found=0 }
found && !/^## \[v?'"$version"'\]/ { print }
' "$file" > "$output_file"
# Remove --- separators
sed -i '/^---$/d' "$output_file"
# Remove trailing empty lines
sed -i '/^$/d' "$output_file"
}
# Calculate expected versions for this release
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
EXPECTED_UI_VERSION="1.${BASH_REMATCH[2]}.${BASH_REMATCH[3]}"
EXPECTED_API_VERSION="1.$((${BASH_REMATCH[2]} + 1)).${BASH_REMATCH[3]}"
echo "Expected UI version for this release: $EXPECTED_UI_VERSION"
echo "Expected API version for this release: $EXPECTED_API_VERSION"
fi
# Determine if components have changes for this specific release
# UI has changes if its current version matches what we expect for this release
if [ -n "$UI_VERSION" ] && [ "$UI_VERSION" = "$EXPECTED_UI_VERSION" ]; then
echo "HAS_UI_CHANGES=true" >> $GITHUB_ENV
echo "✓ UI changes detected - version matches expected: $UI_VERSION"
extract_changelog "ui/CHANGELOG.md" "$UI_VERSION" "ui_changelog.md"
else
echo "HAS_UI_CHANGES=false" >> $GITHUB_ENV
echo " No UI changes for this release (current: $UI_VERSION, expected: $EXPECTED_UI_VERSION)"
touch "ui_changelog.md"
fi
# API has changes if its current version matches what we expect for this release
if [ -n "$API_VERSION" ] && [ "$API_VERSION" = "$EXPECTED_API_VERSION" ]; then
echo "HAS_API_CHANGES=true" >> $GITHUB_ENV
echo "✓ API changes detected - version matches expected: $API_VERSION"
extract_changelog "api/CHANGELOG.md" "$API_VERSION" "api_changelog.md"
else
echo "HAS_API_CHANGES=false" >> $GITHUB_ENV
echo " No API changes for this release (current: $API_VERSION, expected: $EXPECTED_API_VERSION)"
touch "api_changelog.md"
fi
# SDK has changes if its current version matches the input version
if [ -n "$SDK_VERSION" ] && [ "$SDK_VERSION" = "$PROWLER_VERSION" ]; then
echo "HAS_SDK_CHANGES=true" >> $GITHUB_ENV
echo "✓ SDK changes detected - version matches input: $SDK_VERSION"
extract_changelog "prowler/CHANGELOG.md" "$PROWLER_VERSION" "prowler_changelog.md"
else
echo "HAS_SDK_CHANGES=false" >> $GITHUB_ENV
echo " No SDK changes for this release (current: $SDK_VERSION, input: $PROWLER_VERSION)"
touch "prowler_changelog.md"
fi
# Combine changelogs in order: UI, API, SDK
> combined_changelog.md
if [ "$HAS_UI_CHANGES" = "true" ] && [ -s "ui_changelog.md" ]; then
echo "## UI" >> combined_changelog.md
echo "" >> combined_changelog.md
cat ui_changelog.md >> combined_changelog.md
echo "" >> combined_changelog.md
fi
if [ "$HAS_API_CHANGES" = "true" ] && [ -s "api_changelog.md" ]; then
echo "## API" >> combined_changelog.md
echo "" >> combined_changelog.md
cat api_changelog.md >> combined_changelog.md
echo "" >> combined_changelog.md
fi
if [ "$HAS_SDK_CHANGES" = "true" ] && [ -s "prowler_changelog.md" ]; then
echo "## SDK" >> combined_changelog.md
echo "" >> combined_changelog.md
cat prowler_changelog.md >> combined_changelog.md
echo "" >> combined_changelog.md
fi
echo "Combined changelog preview:"
cat combined_changelog.md
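For reference, both `extract_latest_version` and `extract_changelog` key off `## [version]` headings, so the changelog entries they parse look roughly like this (illustrative entry, not taken from an actual changelog; the trailing `---` separator is what the `sed` step strips):

```
## [v5.10.0] (Prowler v5.10.0)

### Fixed
- Example fix entry.

---
```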
- name: Checkout existing branch for patch release
if: ${{ env.PATCH_VERSION != '0' }}
run: |
@@ -97,6 +246,7 @@ jobs:
echo "✓ prowler/config/config.py version: $CURRENT_VERSION"
- name: Verify version in api/pyproject.toml
if: ${{ env.HAS_API_CHANGES == 'true' }}
run: |
CURRENT_API_VERSION=$(grep '^version = ' api/pyproject.toml | sed -E 's/version = "([^"]+)"/\1/' | tr -d '[:space:]')
API_VERSION_TRIMMED=$(echo "$API_VERSION" | tr -d '[:space:]')
@@ -107,16 +257,18 @@ jobs:
echo "✓ api/pyproject.toml version: $CURRENT_API_VERSION"
- name: Verify prowler dependency in api/pyproject.toml
if: ${{ env.PATCH_VERSION != '0' && env.HAS_API_CHANGES == 'true' }}
run: |
CURRENT_PROWLER_REF=$(grep 'prowler @ git+https://github.com/prowler-cloud/prowler.git@' api/pyproject.toml | sed -E 's/.*@([^"]+)".*/\1/' | tr -d '[:space:]')
PROWLER_VERSION_TRIMMED=$(echo "$PROWLER_VERSION" | tr -d '[:space:]')
if [ "$CURRENT_PROWLER_REF" != "$PROWLER_VERSION_TRIMMED" ]; then
echo "ERROR: Prowler dependency mismatch in api/pyproject.toml (expected: '$PROWLER_VERSION_TRIMMED', found: '$CURRENT_PROWLER_REF')"
BRANCH_NAME_TRIMMED=$(echo "$BRANCH_NAME" | tr -d '[:space:]')
if [ "$CURRENT_PROWLER_REF" != "$BRANCH_NAME_TRIMMED" ]; then
echo "ERROR: Prowler dependency mismatch in api/pyproject.toml (expected: '$BRANCH_NAME_TRIMMED', found: '$CURRENT_PROWLER_REF')"
exit 1
fi
echo "✓ api/pyproject.toml prowler dependency: $CURRENT_PROWLER_REF"
- name: Verify version in api/src/backend/api/v1/views.py
if: ${{ env.HAS_API_CHANGES == 'true' }}
run: |
CURRENT_API_VERSION=$(grep 'spectacular_settings.VERSION = ' api/src/backend/api/v1/views.py | sed -E 's/.*spectacular_settings.VERSION = "([^"]+)".*/\1/' | tr -d '[:space:]')
API_VERSION_TRIMMED=$(echo "$API_VERSION" | tr -d '[:space:]')
@@ -126,87 +278,90 @@ jobs:
fi
echo "✓ api/src/backend/api/v1/views.py version: $CURRENT_API_VERSION"
- name: Create release branch for minor release
- name: Checkout existing release branch for minor release
if: ${{ env.PATCH_VERSION == '0' }}
run: |
echo "Minor release detected (patch = 0), creating new branch $BRANCH_NAME..."
if git show-ref --verify --quiet "refs/heads/$BRANCH_NAME" || git show-ref --verify --quiet "refs/remotes/origin/$BRANCH_NAME"; then
echo "ERROR: Branch $BRANCH_NAME already exists for minor release $PROWLER_VERSION"
echo "Minor release detected (patch = 0), checking out existing branch $BRANCH_NAME..."
if git show-ref --verify --quiet "refs/remotes/origin/$BRANCH_NAME"; then
echo "Branch $BRANCH_NAME exists remotely, checking out..."
git checkout -b "$BRANCH_NAME" "origin/$BRANCH_NAME"
else
echo "ERROR: Branch $BRANCH_NAME should exist for minor release $PROWLER_VERSION. Please create it manually first."
exit 1
fi
git checkout -b "$BRANCH_NAME"
- name: Extract changelog entries
- name: Prepare prowler dependency update for minor release
if: ${{ env.PATCH_VERSION == '0' }}
run: |
set -e
CURRENT_PROWLER_REF=$(grep 'prowler @ git+https://github.com/prowler-cloud/prowler.git@' api/pyproject.toml | sed -E 's/.*@([^"]+)".*/\1/' | tr -d '[:space:]')
BRANCH_NAME_TRIMMED=$(echo "$BRANCH_NAME" | tr -d '[:space:]')
# Function to extract changelog for a specific version
extract_changelog() {
local file="$1"
local version="$2"
local output_file="$3"
# Create a temporary branch for the PR from the minor version branch
TEMP_BRANCH="update-api-dependency-$BRANCH_NAME_TRIMMED-$(date +%s)"
echo "TEMP_BRANCH=$TEMP_BRANCH" >> $GITHUB_ENV
if [ ! -f "$file" ]; then
echo "Warning: $file not found, skipping..."
touch "$output_file"
return
fi
# Create temp branch from the current minor version branch
git checkout -b "$TEMP_BRANCH"
# Extract changelog section for this version
awk -v version="$version" '
/^## \[v?'"$version"'\]/ { found=1; next }
found && /^## \[v?[0-9]+\.[0-9]+\.[0-9]+\]/ { found=0 }
found && !/^## \[v?'"$version"'\]/ { print }
' "$file" > "$output_file"
# Minor release: update the dependency to use the release branch
echo "Updating prowler dependency from '$CURRENT_PROWLER_REF' to '$BRANCH_NAME_TRIMMED'"
sed -i "s|prowler @ git+https://github.com/prowler-cloud/prowler.git@[^\"]*\"|prowler @ git+https://github.com/prowler-cloud/prowler.git@$BRANCH_NAME_TRIMMED\"|" api/pyproject.toml
# Remove --- separators
sed -i '/^---$/d' "$output_file"
# Remove trailing empty lines
sed -i '/^$/d' "$output_file"
}
# Extract changelogs
echo "Extracting changelog entries..."
extract_changelog "prowler/CHANGELOG.md" "$PROWLER_VERSION" "prowler_changelog.md"
extract_changelog "api/CHANGELOG.md" "$API_VERSION" "api_changelog.md"
extract_changelog "ui/CHANGELOG.md" "$UI_VERSION" "ui_changelog.md"
# Combine changelogs in order: UI, API, SDK
> combined_changelog.md
if [ -s "ui_changelog.md" ]; then
echo "## UI" >> combined_changelog.md
echo "" >> combined_changelog.md
cat ui_changelog.md >> combined_changelog.md
echo "" >> combined_changelog.md
# Verify the change was made
UPDATED_PROWLER_REF=$(grep 'prowler @ git+https://github.com/prowler-cloud/prowler.git@' api/pyproject.toml | sed -E 's/.*@([^"]+)".*/\1/' | tr -d '[:space:]')
if [ "$UPDATED_PROWLER_REF" != "$BRANCH_NAME_TRIMMED" ]; then
echo "ERROR: Failed to update prowler dependency in api/pyproject.toml"
exit 1
fi
if [ -s "api_changelog.md" ]; then
echo "## API" >> combined_changelog.md
echo "" >> combined_changelog.md
cat api_changelog.md >> combined_changelog.md
echo "" >> combined_changelog.md
fi
# Update poetry lock file
echo "Updating poetry.lock file..."
cd api
poetry lock
cd ..
if [ -s "prowler_changelog.md" ]; then
echo "## SDK" >> combined_changelog.md
echo "" >> combined_changelog.md
cat prowler_changelog.md >> combined_changelog.md
echo "" >> combined_changelog.md
fi
# Commit and push the temporary branch
git add api/pyproject.toml api/poetry.lock
git commit -m "chore(api): update prowler dependency to $BRANCH_NAME_TRIMMED for release $PROWLER_VERSION"
git push origin "$TEMP_BRANCH"
echo "Combined changelog preview:"
cat combined_changelog.md
echo "✓ Prepared prowler dependency update to: $UPDATED_PROWLER_REF"
- name: Create Pull Request against release branch
if: ${{ env.PATCH_VERSION == '0' }}
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
branch: ${{ env.TEMP_BRANCH }}
base: ${{ env.BRANCH_NAME }}
title: "chore(api): Update prowler dependency to ${{ env.BRANCH_NAME }} for release ${{ env.PROWLER_VERSION }}"
body: |
### Description
Updates the API prowler dependency for release ${{ env.PROWLER_VERSION }}.
**Changes:**
- Updates `api/pyproject.toml` prowler dependency from `@master` to `@${{ env.BRANCH_NAME }}`
- Updates `api/poetry.lock` file with resolved dependencies
This PR should be merged into the `${{ env.BRANCH_NAME }}` release branch.
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
labels: |
component/api
no-changelog
- name: Create draft release
uses: softprops/action-gh-release@72f2c25fcb47643c292f7107632f7a47c1df5cd8 # v2.3.2
uses: softprops/action-gh-release@6cbd405e2c4e67a21c47fa9e383d020e4e28b836 # v2.3.3
with:
tag_name: ${{ env.PROWLER_VERSION }}
name: Prowler ${{ env.PROWLER_VERSION }}
body_path: combined_changelog.md
draft: true
target_commitish: ${{ env.PATCH_VERSION == '0' && 'master' || env.BRANCH_NAME }}
target_commitish: ${{ env.BRANCH_NAME }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -1,4 +1,4 @@
name: Check Changelog
name: Prowler - Check Changelog
on:
pull_request:
@@ -13,10 +13,10 @@ jobs:
contents: read
pull-requests: write
env:
MONITORED_FOLDERS: "api ui prowler"
MONITORED_FOLDERS: "api ui prowler dashboard"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
@@ -49,35 +49,26 @@ jobs:
- name: Find existing changelog comment
if: github.event.pull_request.head.repo.full_name == github.repository
id: find_comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e #v3.1.0
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad #v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-includes: '<!-- changelog-check -->'
- name: Comment on PR if changelog is missing
if: github.event.pull_request.head.repo.full_name == github.repository && steps.check_folders.outputs.missing_changelogs != ''
- name: Update PR comment with changelog status
if: github.event.pull_request.head.repo.full_name == github.repository
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-id: ${{ steps.find_comment.outputs.comment-id }}
edit-mode: replace
body: |
<!-- changelog-check -->
⚠️ **Changes detected in the following folders without a corresponding update to the `CHANGELOG.md`:**
${{ steps.check_folders.outputs.missing_changelogs != '' && format('⚠️ **Changes detected in the following folders without a corresponding update to the `CHANGELOG.md`:**
${{ steps.check_folders.outputs.missing_changelogs }}
{0}
Please add an entry to the corresponding `CHANGELOG.md` file to maintain a clear history of changes.
- name: Comment on PR if all changelogs are present
if: github.event.pull_request.head.repo.full_name == github.repository && steps.check_folders.outputs.missing_changelogs == ''
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-id: ${{ steps.find_comment.outputs.comment-id }}
body: |
<!-- changelog-check -->
✅ All necessary `CHANGELOG.md` files have been updated. Great job! 🎉
Please add an entry to the corresponding `CHANGELOG.md` file to maintain a clear history of changes.', steps.check_folders.outputs.missing_changelogs) || '✅ All necessary `CHANGELOG.md` files have been updated. Great job! 🎉' }}
- name: Fail if changelog is missing
if: steps.check_folders.outputs.missing_changelogs != ''
+11 -10
@@ -11,7 +11,7 @@ jobs:
if: github.event.pull_request.merged == true && github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
ref: ${{ github.event.pull_request.merge_commit_sha }}
@@ -22,16 +22,17 @@ jobs:
echo "SHORT_SHA=${shortSha}" >> $GITHUB_ENV
- name: Trigger pull request
uses: peter-evans/repository-dispatch@ff45666b9427631e3450c54a1bcbee4d9ff4d7c0 # v3.0.0
uses: peter-evans/repository-dispatch@5fc4efd1a4797ddb68ffd0714a238564e4cc0e6f # v4.0.0
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
event-type: prowler-pull-request-merged
client-payload: '{
"PROWLER_COMMIT_SHA": "${{ github.event.pull_request.merge_commit_sha }}",
"PROWLER_COMMIT_SHORT_SHA": "${{ env.SHORT_SHA }}",
"PROWLER_PR_TITLE": "${{ github.event.pull_request.title }}",
"PROWLER_PR_LABELS": ${{ toJson(github.event.pull_request.labels.*.name) }},
"PROWLER_PR_BODY": ${{ toJson(github.event.pull_request.body) }},
"PROWLER_PR_URL":${{ toJson(github.event.pull_request.html_url) }}
}'
client-payload: |
{
"PROWLER_COMMIT_SHA": "${{ github.event.pull_request.merge_commit_sha }}",
"PROWLER_COMMIT_SHORT_SHA": "${{ env.SHORT_SHA }}",
"PROWLER_PR_TITLE": ${{ toJson(github.event.pull_request.title) }},
"PROWLER_PR_LABELS": ${{ toJson(github.event.pull_request.labels.*.name) }},
"PROWLER_PR_BODY": ${{ toJson(github.event.pull_request.body) }},
"PROWLER_PR_URL": ${{ toJson(github.event.pull_request.html_url) }}
}
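The switch to `toJson()` for the title matters because the old hand-built JSON broke whenever a PR title contained characters significant in JSON; `toJson()` emits a properly escaped string. An illustrative payload fragment (hypothetical title):

```
# Old (raw interpolation): a title containing quotes yields invalid JSON
"PROWLER_PR_TITLE": "fix: handle "quoted" titles",
# New (toJson): the same title arrives properly escaped
"PROWLER_PR_TITLE": "fix: handle \"quoted\" titles",
```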
@@ -59,10 +59,10 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup Python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ env.PYTHON_VERSION }}
@@ -108,13 +108,13 @@ jobs:
esac
- name: Login to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to Public ECR
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: public.ecr.aws
username: ${{ secrets.PUBLIC_ECR_AWS_ACCESS_KEY_ID }}
@@ -157,6 +157,22 @@ jobs:
cache-from: type=gha
cache-to: type=gha,mode=max
# - name: Push README to Docker Hub (toniblyx)
# uses: peter-evans/dockerhub-description@432a30c9e07499fd01da9f8a49f0faf9e0ca5b77 # v4.0.2
# with:
# username: ${{ secrets.DOCKERHUB_USERNAME }}
# password: ${{ secrets.DOCKERHUB_TOKEN }}
# repository: ${{ env.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}
# readme-filepath: ./README.md
#
# - name: Push README to Docker Hub (prowlercloud)
# uses: peter-evans/dockerhub-description@432a30c9e07499fd01da9f8a49f0faf9e0ca5b77 # v4.0.2
# with:
# username: ${{ secrets.DOCKERHUB_USERNAME }}
# password: ${{ secrets.DOCKERHUB_TOKEN }}
# repository: ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}
# readme-filepath: ./README.md
dispatch-action:
needs: container-build-push
runs-on: ubuntu-latest
+1 -2
@@ -12,10 +12,9 @@ env:
jobs:
bump-version:
name: Bump Version
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Get Prowler version
shell: bash
+3 -3
@@ -52,16 +52,16 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/sdk-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
category: "/language:${{matrix.language}}"
+36 -21
@@ -21,11 +21,11 @@ jobs:
python-version: ["3.9", "3.10", "3.11", "3.12"]
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: ./**
files_ignore: |
@@ -51,7 +51,7 @@ jobs:
- name: Set up Python ${{ matrix.python-version }}
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ matrix.python-version }}
cache: "poetry"
@@ -104,7 +104,7 @@ jobs:
- name: Dockerfile - Check if Dockerfile has changed
id: dockerfile-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
Dockerfile
@@ -117,12 +117,12 @@ jobs:
# Test AWS
- name: AWS - Check if any file has changed
id: aws-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
./prowler/providers/aws/**
./tests/providers/aws/**
.poetry.lock
./poetry.lock
- name: AWS - Test
if: steps.aws-changed-files.outputs.any_changed == 'true'
@@ -132,12 +132,12 @@ jobs:
# Test Azure
- name: Azure - Check if any file has changed
id: azure-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
./prowler/providers/azure/**
./tests/providers/azure/**
.poetry.lock
./poetry.lock
- name: Azure - Test
if: steps.azure-changed-files.outputs.any_changed == 'true'
@@ -147,12 +147,12 @@ jobs:
# Test GCP
- name: GCP - Check if any file has changed
id: gcp-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
./prowler/providers/gcp/**
./tests/providers/gcp/**
.poetry.lock
./poetry.lock
- name: GCP - Test
if: steps.gcp-changed-files.outputs.any_changed == 'true'
@@ -162,12 +162,12 @@ jobs:
# Test Kubernetes
- name: Kubernetes - Check if any file has changed
id: kubernetes-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
./prowler/providers/kubernetes/**
./tests/providers/kubernetes/**
.poetry.lock
./poetry.lock
- name: Kubernetes - Test
if: steps.kubernetes-changed-files.outputs.any_changed == 'true'
@@ -177,12 +177,12 @@ jobs:
# Test GitHub
- name: GitHub - Check if any file has changed
id: github-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
./prowler/providers/github/**
./tests/providers/github/**
.poetry.lock
./poetry.lock
- name: GitHub - Test
if: steps.github-changed-files.outputs.any_changed == 'true'
@@ -192,12 +192,12 @@ jobs:
# Test NHN
- name: NHN - Check if any file has changed
id: nhn-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
./prowler/providers/nhn/**
./tests/providers/nhn/**
.poetry.lock
./poetry.lock
- name: NHN - Test
if: steps.nhn-changed-files.outputs.any_changed == 'true'
@@ -207,12 +207,12 @@ jobs:
# Test M365
- name: M365 - Check if any file has changed
id: m365-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
./prowler/providers/m365/**
./tests/providers/m365/**
.poetry.lock
./poetry.lock
- name: M365 - Test
if: steps.m365-changed-files.outputs.any_changed == 'true'
@@ -222,18 +222,33 @@ jobs:
# Test IaC
- name: IaC - Check if any file has changed
id: iac-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
./prowler/providers/iac/**
./tests/providers/iac/**
.poetry.lock
./poetry.lock
- name: IaC - Test
if: steps.iac-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/iac --cov-report=xml:iac_coverage.xml tests/providers/iac
# Test MongoDB Atlas
- name: MongoDB Atlas - Check if any file has changed
id: mongodb-atlas-changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
./prowler/providers/mongodbatlas/**
./tests/providers/mongodbatlas/**
.poetry.lock
- name: MongoDB Atlas - Test
if: steps.mongodb-atlas-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/mongodbatlas --cov-report=xml:mongodb_atlas_coverage.xml tests/providers/mongodbatlas
# Common Tests
- name: Lib - Test
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
@@ -248,7 +263,7 @@ jobs:
# Codecov
- name: Upload coverage reports to Codecov
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
with:
+2 -2
@@ -64,14 +64,14 @@ jobs:
;;
esac
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install dependencies
run: |
pipx install poetry==2.1.1
- name: Setup Python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ env.PYTHON_VERSION }}
# cache: ${{ env.CACHE }}
@@ -23,12 +23,12 @@ jobs:
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
ref: ${{ env.GITHUB_BRANCH }}
- name: setup python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.9 #install the python needed
@@ -38,7 +38,7 @@ jobs:
pip install boto3
- name: Configure AWS Credentials -- DEV
uses: aws-actions/configure-aws-credentials@b47578312673ae6fa5b5096b330d9fbac3d116df # v4.2.1
uses: aws-actions/configure-aws-credentials@a03048d87541d1d9fcf2ecf528a4a65ba9bd7838 # v5.0.0
with:
aws-region: ${{ env.AWS_REGION_DEV }}
role-to-assume: ${{ secrets.DEV_IAM_ROLE_ARN }}
@@ -56,7 +56,7 @@ jobs:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
commit-message: "feat(regions_update): Update regions for AWS services"
branch: "aws-services-regions-updated-${{ github.sha }}"
labels: "status/waiting-for-revision, severity/low, provider/aws"
labels: "status/waiting-for-revision, severity/low, provider/aws, no-changelog"
title: "chore(regions_update): Changes in regions for AWS services"
body: |
### Description
@@ -62,7 +62,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set short git commit SHA
id: vars
@@ -71,7 +71,7 @@ jobs:
echo "SHORT_SHA=${shortSha}" >> $GITHUB_ENV
- name: Login to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
@@ -113,7 +113,7 @@ jobs:
- name: Trigger deployment
if: github.event_name == 'push'
uses: peter-evans/repository-dispatch@ff45666b9427631e3450c54a1bcbee4d9ff4d7c0 # v3.0.0
uses: peter-evans/repository-dispatch@5fc4efd1a4797ddb68ffd0714a238564e4cc0e6f # v4.0.0
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
+3 -3
@@ -44,16 +44,16 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/ui-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
category: "/language:${{matrix.language}}"
+100
@@ -0,0 +1,100 @@
name: UI - E2E Tests
on:
pull_request:
branches:
- master
- "v5.*"
paths:
- '.github/workflows/ui-e2e-tests.yml'
- 'ui/**'
jobs:
e2e-tests:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
env:
AUTH_SECRET: 'fallback-ci-secret-for-testing'
AUTH_TRUST_HOST: true
NEXTAUTH_URL: 'http://localhost:3000'
NEXT_PUBLIC_API_BASE_URL: 'http://localhost:8080/api/v1'
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Fix API data directory permissions
run: docker run --rm -v $(pwd)/_data/api:/data alpine chown -R 1000:1000 /data
- name: Start API services
run: |
# Override docker-compose image tag to use latest instead of stable
# This overrides any PROWLER_API_VERSION set in .env file
export PROWLER_API_VERSION=latest
echo "Using PROWLER_API_VERSION=${PROWLER_API_VERSION}"
docker compose up -d api worker worker-beat
- name: Wait for API to be ready
run: |
echo "Waiting for prowler-api..."
timeout=150 # 150 seconds max
elapsed=0
while [ $elapsed -lt $timeout ]; do
if curl -s ${NEXT_PUBLIC_API_BASE_URL}/docs >/dev/null 2>&1; then
echo "Prowler API is ready!"
exit 0
fi
echo "Waiting for prowler-api... (${elapsed}s elapsed)"
sleep 5
elapsed=$((elapsed + 5))
done
echo "Timeout waiting for prowler-api to start"
exit 1
- name: Load database fixtures for E2E tests
run: |
docker compose exec -T api sh -c '
echo "Loading all fixtures from api/fixtures/dev/..."
for fixture in api/fixtures/dev/*.json; do
if [ -f "$fixture" ]; then
echo "Loading $fixture"
poetry run python manage.py loaddata "$fixture" --database admin
fi
done
echo "All database fixtures loaded successfully!"
'
- name: Setup Node.js environment
uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
with:
node-version: '20.x'
cache: 'npm'
cache-dependency-path: './ui/package-lock.json'
- name: Install UI dependencies
working-directory: ./ui
run: npm ci
- name: Build UI application
working-directory: ./ui
run: npm run build
- name: Cache Playwright browsers
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
id: playwright-cache
with:
path: ~/.cache/ms-playwright
key: ${{ runner.os }}-playwright-${{ hashFiles('ui/package-lock.json') }}
restore-keys: |
${{ runner.os }}-playwright-
- name: Install Playwright browsers
working-directory: ./ui
if: steps.playwright-cache.outputs.cache-hit != 'true'
run: npm run test:e2e:install
- name: Run E2E tests
working-directory: ./ui
run: npm run test:e2e
- name: Upload test reports
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: failure()
with:
name: playwright-report
path: ui/playwright-report/
retention-days: 30
- name: Cleanup services
if: always()
run: |
echo "Shutting down services..."
docker compose down -v || true
echo "Cleanup completed"
+3 -49
@@ -27,11 +27,11 @@ jobs:
node-version: [20.x]
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- name: Setup Node.js ${{ matrix.node-version }}
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
with:
node-version: ${{ matrix.node-version }}
cache: 'npm'
@@ -46,56 +46,10 @@ jobs:
working-directory: ./ui
run: npm run build
e2e-tests:
runs-on: ubuntu-latest
env:
AUTH_SECRET: 'fallback-ci-secret-for-testing'
AUTH_TRUST_HOST: true
NEXTAUTH_URL: http://localhost:3000
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Setup Node.js
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
with:
node-version: '20.x'
cache: 'npm'
cache-dependency-path: './ui/package-lock.json'
- name: Install dependencies
working-directory: ./ui
run: npm ci
- name: Cache Playwright browsers
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
id: playwright-cache
with:
path: ~/.cache/ms-playwright
key: ${{ runner.os }}-playwright-${{ hashFiles('ui/package-lock.json') }}
restore-keys: |
${{ runner.os }}-playwright-
- name: Install Playwright browsers
working-directory: ./ui
if: steps.playwright-cache.outputs.cache-hit != 'true'
run: npm run test:e2e:install
- name: Build the application
working-directory: ./ui
run: npm run build
- name: Run Playwright tests
working-directory: ./ui
run: npm run test:e2e
- name: Upload Playwright report
uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b # v4.5.0
if: failure()
with:
name: playwright-report
path: ui/playwright-report/
retention-days: 30
test-container-build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build Container
+8
@@ -63,6 +63,7 @@ junit-reports/
# .env
ui/.env*
api/.env*
mcp_server/.env*
.env.local
# Coverage
@@ -75,3 +76,10 @@ node_modules
# Persistent data
_data/
# Claude
CLAUDE.md
# MCP Server
mcp_server/prowler_mcp_server/prowler_app/server.py
mcp_server/prowler_mcp_server/prowler_app/utils/schema.yaml
+3 -1
@@ -6,6 +6,7 @@ repos:
- id: check-merge-conflict
- id: check-yaml
args: ["--unsafe"]
exclude: prowler/config/llm_config.yaml
- id: check-json
- id: end-of-file-fixer
- id: trailing-whitespace
@@ -115,7 +116,8 @@ repos:
- id: safety
name: safety
description: "Safety is a tool that checks your installed dependencies for known security vulnerabilities"
entry: bash -c 'safety check --ignore 70612,66963,74429,76352,76353'
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
entry: bash -c 'safety check --ignore 70612,66963,74429,76352,76353,77744,77745'
language: system
- id: vulture
+110
@@ -0,0 +1,110 @@
# Repository Guidelines
## How to Use This Guide
- Start here for cross-project norms; Prowler is a monorepo with several components. Every component should have an `AGENTS.md` file with the guidelines for agents working in that component, located beside the code you are touching (e.g. `api/AGENTS.md`, `ui/AGENTS.md`, `prowler/AGENTS.md`).
- Follow the stricter rule when guidance conflicts; component docs override this file for their scope.
- Keep instructions synchronized. When you add new workflows or scripts, update both the relevant component `AGENTS.md` and this file if they apply broadly.
## Project Overview
Prowler is an open-source cloud security assessment tool that supports multiple cloud providers (AWS, Azure, GCP, Kubernetes, GitHub, M365, etc.). The project is a monorepo with the following main components:
- **Prowler SDK**: Python SDK, includes the Prowler CLI, providers, services, checks, compliances, config, etc. (`prowler/`)
- **Prowler API**: Django-based REST API backend (`api/`)
- **Prowler UI**: Next.js frontend application (`ui/`)
- **Prowler MCP Server**: Model Context Protocol server that gives access to the entire Prowler ecosystem for LLMs (`mcp_server/`)
- **Prowler Dashboard**: Prowler CLI feature that lets you visualize scan results in a simple dashboard (`dashboard/`)
### Project Structure (Key Folders & Files)
- `prowler/`: Main source code for Prowler SDK (CLI, providers, services, checks, compliances, config, etc.)
- `api/`: Django-based REST API backend components
- `ui/`: Next.js frontend application
- `mcp_server/`: Model Context Protocol server that gives access to the entire Prowler ecosystem for LLMs
- `dashboard/`: Prowler CLI feature that lets you visualize scan results in a simple dashboard
- `docs/`: Documentation
- `examples/`: Example output formats for providers and scripts
- `permissions/`: Permission-related files and policies
- `contrib/`: Community-contributed scripts or modules
- `tests/`: Prowler SDK test suite
- `docker-compose.yml`: Docker Compose file to run the Prowler App (API + UI) in production
- `docker-compose-dev.yml`: Docker Compose file to run the Prowler App (API + UI) in development
- `pyproject.toml`: Poetry project file for the Prowler SDK
- `.pre-commit-config.yaml`: Pre-commit hooks configuration
- `Makefile`: Makefile to run the project
- `LICENSE`: License file
- `README.md`: README file
- `CONTRIBUTING.md`: Contributing guide
## Python Development
Most of the code is written in Python, so the main files in the repository root are Python-focused.
### Poetry Dev Environment
For Python development we recommend `poetry` (minimum version `2.1.1`) to manage dependencies, and running all commands through `poetry run ...`.
Install the core development dependencies with `poetry install --with dev`.
### Pre-commit hooks
The project has pre-commit hooks to lint and format the code. They are installed by running `poetry run pre-commit install`.
When committing a change, the hooks run automatically. Some of them are:
- Code formatting (black, isort)
- Linting (flake8, pylint)
- Security checks (bandit, safety, trufflehog)
- YAML/JSON validation
- Poetry lock file validation
### Linting and Formatting
We use the following tools to lint and format the code:
- `flake8`: for linting the code
- `black`: for formatting the code
- `pylint`: for linting the code
You can run them all using `make`:
```bash
poetry run make lint
poetry run make format
```
Or they will be run automatically when you commit your changes using pre-commit hooks.
## Commit & Pull Request Guidelines
For commit messages and pull request titles, follow the conventional-commit style.
Before creating a pull request, complete the checklist in `.github/pull_request_template.md`. Summaries should explain deployment impact, highlight review steps, and note changelog or permission updates. Run all relevant tests and linters before requesting review, and link screenshots for UI or dashboard changes.
### Conventional Commit Style
The Conventional Commits specification is a lightweight convention on top of commit messages. It provides an easy set of rules for creating an explicit commit history, which makes it easier to build automated tools on top of it.
The commit message should be structured as follows:
```
<type>[optional scope]: <description>
<BLANK LINE>
[optional body]
<BLANK LINE>
[optional footer(s)]
```
No line of the commit message can be longer than 100 characters. This keeps the message easy to read on GitHub as well as in various git tools.
#### Commit Types
- **feat**: code change that introduces new functionality to the application
- **fix**: code change that solves a bug in the codebase
- **docs**: documentation only changes
- **chore**: changes related to the build process or auxiliary tools and libraries, that do not affect the application's functionality
- **perf**: code change that improves performance
- **refactor**: code change that neither fixes a bug nor adds a feature
- **style**: changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)
- **test**: adding missing tests or correcting existing tests
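For example, a hypothetical commit message following this convention (the scope and wording are illustrative, not a real commit):
```
fix(azure): handle missing tags in storage accounts

Avoid a KeyError when a storage account has no tags by defaulting
to an empty dictionary.
```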
+4
View File
@@ -45,3 +45,7 @@ pypi-upload: ## Upload package
help: ## Show this help.
@echo "Prowler Makefile"
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_-]+:.*?##/ { printf " \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
##@ Development Environment
run-api-dev: ## Start development environment with API, PostgreSQL, Valkey, and workers
docker compose -f docker-compose-dev.yml up api-dev postgres valkey worker-dev worker-beat --build
+29 -55
View File
@@ -19,19 +19,16 @@
<a href="https://goto.prowler.com/slack"><img alt="Slack Shield" src="https://img.shields.io/badge/slack-prowler-brightgreen.svg?logo=slack"></a>
<a href="https://pypi.org/project/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/v/prowler.svg"></a>
<a href="https://pypi.python.org/pypi/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/pyversions/prowler.svg"></a>
<a href="https://pypistats.org/packages/prowler"><img alt="PyPI Prowler Downloads" src="https://img.shields.io/pypi/dw/prowler.svg?label=prowler%20downloads"></a>
<a href="https://pypistats.org/packages/prowler"><img alt="PyPI Downloads" src="https://img.shields.io/pypi/dw/prowler.svg?label=downloads"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/toniblyx/prowler"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/cloud/build/toniblyx/prowler"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/image-size/toniblyx/prowler"></a>
<a href="https://gallery.ecr.aws/prowler-cloud/prowler"><img width="120" height=19" alt="AWS ECR Gallery" src="https://user-images.githubusercontent.com/3985464/151531396-b6535a68-c907-44eb-95a1-a09508178616.png"></a>
<a href="https://codecov.io/gh/prowler-cloud/prowler"><img src="https://codecov.io/gh/prowler-cloud/prowler/graph/badge.svg?token=OflBGsdpDl"/></a>
</p>
<p align="center">
<a href="https://github.com/prowler-cloud/prowler"><img alt="Repo size" src="https://img.shields.io/github/repo-size/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler/issues"><img alt="Issues" src="https://img.shields.io/github/issues/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler/releases"><img alt="Version" src="https://img.shields.io/github/v/release/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler/releases"><img alt="Version" src="https://img.shields.io/github/v/release/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler/releases"><img alt="Version" src="https://img.shields.io/github/release-date/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler"><img alt="Contributors" src="https://img.shields.io/github/contributors-anon/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler/issues"><img alt="Issues" src="https://img.shields.io/github/issues/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler"><img alt="License" src="https://img.shields.io/github/license/prowler-cloud/prowler"></a>
<a href="https://twitter.com/ToniBlyx"><img alt="Twitter" src="https://img.shields.io/twitter/follow/toniblyx?style=social"></a>
<a href="https://twitter.com/prowlercloud"><img alt="Twitter" src="https://img.shields.io/twitter/follow/prowlercloud?style=social"></a>
@@ -55,15 +52,11 @@ Prowler includes hundreds of built-in controls to ensure compliance with standar
- **National Security Standards:** ENS (Spanish National Security Scheme)
- **Custom Security Frameworks:** Tailored to your needs
## Prowler CLI and Prowler Cloud
Prowler offers a Command Line Interface (CLI), known as Prowler Open Source, and an additional service built on top of it, called <a href="https://prowler.com">Prowler Cloud</a>.
## Prowler App
Prowler App is a web-based application that simplifies running Prowler across your cloud provider accounts. It provides a user-friendly interface to visualize the results and streamline your security assessments.
![Prowler App](docs/img/overview.png)
![Prowler App](docs/products/img/overview.png)
> For more details, refer to the [Prowler App Documentation](https://docs.prowler.com/projects/prowler-open-source/en/latest/#prowler-app-installation)
@@ -80,28 +73,37 @@ prowler <provider>
```console
prowler dashboard
```
![Prowler Dashboard](docs/img/dashboard.png)
![Prowler Dashboard](docs/products/img/dashboard.png)
# Prowler at a Glance
> [!Tip]
> For the most accurate and up-to-date information about checks, services, frameworks, and categories, visit [**Prowler Hub**](https://hub.prowler.com).
| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) |
|---|---|---|---|---|
| AWS | 567 | 82 | 36 | 10 |
| GCP | 79 | 13 | 10 | 3 |
| Azure | 142 | 18 | 10 | 3 |
| Kubernetes | 83 | 7 | 5 | 7 |
| GitHub | 16 | 2 | 1 | 0 |
| M365 | 69 | 7 | 3 | 2 |
| NHN (Unofficial) | 6 | 2 | 1 | 0 |
| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) | Support | Stage | Interface |
|---|---|---|---|---|---|---|---|
| AWS | 576 | 82 | 36 | 10 | Official | Stable | UI, API, CLI |
| GCP | 79 | 13 | 10 | 3 | Official | Stable | UI, API, CLI |
| Azure | 162 | 19 | 11 | 4 | Official | Stable | UI, API, CLI |
| Kubernetes | 83 | 7 | 5 | 7 | Official | Stable | UI, API, CLI |
| GitHub | 17 | 2 | 1 | 0 | Official | Stable | UI, API, CLI |
| M365 | 70 | 7 | 3 | 2 | Official | Stable | UI, API, CLI |
| IaC | [See `trivy` docs.](https://trivy.dev/latest/docs/coverage/iac/) | N/A | N/A | N/A | Official | Beta | CLI |
| MongoDB Atlas | 10 | 3 | 0 | 0 | Official | Beta | CLI |
| LLM | [See `promptfoo` docs.](https://www.promptfoo.dev/docs/red-team/plugins/) | N/A | N/A | N/A | Official | Beta | CLI |
| NHN | 6 | 2 | 1 | 0 | Unofficial | Beta | CLI |
> [!Note]
> The numbers in the table are updated periodically.
> [!Tip]
> For the most accurate and up-to-date information about checks, services, frameworks, and categories, visit [**Prowler Hub**](https://hub.prowler.com).
> [!Note]
> Use the following commands to list Prowler's available checks, services, compliance frameworks, and categories: `prowler <provider> --list-checks`, `prowler <provider> --list-services`, `prowler <provider> --list-compliance` and `prowler <provider> --list-categories`.
> Use the following commands to list Prowler's available checks, services, compliance frameworks, and categories:
> - `prowler <provider> --list-checks`
> - `prowler <provider> --list-services`
> - `prowler <provider> --list-compliance`
> - `prowler <provider> --list-categories`
# 💻 Installation
@@ -239,7 +241,7 @@ The following versions of Prowler CLI are available, depending on your requireme
The container images are available here:
- Prowler CLI:
- [DockerHub](https://hub.docker.com/r/toniblyx/prowler/tags)
- [DockerHub](https://hub.docker.com/r/prowlercloud/prowler/tags)
- [AWS Public ECR](https://gallery.ecr.aws/prowler-cloud/prowler)
- Prowler App:
- [DockerHub - Prowler UI](https://hub.docker.com/r/prowlercloud/prowler-ui/tags)
@@ -274,7 +276,7 @@ python prowler-cli.py -v
- **Prowler API**: A backend service, developed with Django REST Framework, responsible for running Prowler scans and storing the generated results.
- **Prowler SDK**: A Python SDK designed to extend the functionality of the Prowler CLI for advanced capabilities.
![Prowler App Architecture](docs/img/prowler-app-architecture.png)
![Prowler App Architecture](docs/products/img/prowler-app-architecture.png)
## Prowler CLI
@@ -300,40 +302,12 @@ And many more environments.
![Architecture](docs/img/architecture.png)
# Deprecations from v3
## General
- `Allowlist` now is called `Mutelist`.
- The `--quiet` option has been deprecated. Use the `--status` flag to filter findings based on their status: PASS, FAIL, or MANUAL.
- All findings with an `INFO` status have been reclassified as `MANUAL`.
- The CSV output format is standardized across all providers.
**Deprecated Output Formats**
The following formats are now deprecated:
- Native JSON has been replaced with JSON in [OCSF](https://schema.ocsf.io/) v1.1.0 format, which is standardized across all providers.
## AWS
**AWS Flag Deprecation**
The flag `--sts-endpoint-region` has been deprecated due to the adoption of AWS STS regional tokens.
**Sending FAIL Results to AWS Security Hub**
- To send only FAILS to AWS Security Hub, use one of the following options: `--send-sh-only-fails` or `--security-hub --status FAIL`.
# 📖 Documentation
**Documentation Resources**
For installation instructions, usage details, tutorials, and the Developer Guide, visit https://docs.prowler.com/
# 📃 License
**Prowler License Information**
Prowler is licensed under the Apache License 2.0, as indicated in each file within the repository. Obtaining a Copy of the License
Prowler is licensed under the Apache License 2.0.
A copy of the License is available at <http://www.apache.org/licenses/LICENSE-2.0>
+57 -15
View File
@@ -1,23 +1,65 @@
# Security Policy
# Security
## Software Security
As an **AWS Partner**, we have passed the [AWS Foundational Technical Review (FTR)](https://aws.amazon.com/partners/foundational-technical-review/), and we use the following tools and automation to keep our code secure and our dependencies up to date:
## Reporting Vulnerabilities
- `bandit` for code security review.
- `safety` and `dependabot` for dependencies.
- `hadolint` and `dockle` for our containers security.
- `snyk` in Docker Hub.
- `clair` in Amazon ECR.
- `vulture`, `flake8`, `black` and `pylint` for formatting and best practices.
At Prowler, we consider the security of our open source software and systems a top priority. But no matter how much effort we put into system security, there can still be vulnerabilities present.
## Reporting a Vulnerability
If you discover a vulnerability, we would like to know about it so we can take steps to address it as quickly as possible. We would like to ask you to help us better protect our users, our clients and our systems.
If you would like to report a vulnerability or have a security concern regarding Prowler Open Source or the ProwlerPro service, please submit the information by contacting us at https://support.prowler.com.
When reporting vulnerabilities, please consider (1) attack scenario / exploitability, and (2) the security impact of the bug. The following issues are considered out of scope:
The information you share with ProwlerPro as part of this process is kept confidential within ProwlerPro. We will only share this information with a third party if the vulnerability you report is found to affect a third-party product, in which case we will share this information with the third-party product's author or manufacturer. Otherwise, we will only share this information as permitted by you.
- Social engineering support or attacks requiring social engineering.
- Clickjacking on pages with no sensitive actions.
- Cross-Site Request Forgery (CSRF) on unauthenticated forms or forms with no sensitive actions.
- Attacks requiring Man-In-The-Middle (MITM) or physical access to a user's device.
- Previously known vulnerable libraries without a working Proof of Concept (PoC).
- Comma Separated Values (CSV) injection without demonstrating a vulnerability.
- Missing best practices in SSL/TLS configuration.
- Any activity that could lead to the disruption of service (DoS).
- Rate limiting or brute force issues on non-authentication endpoints.
- Missing best practices in Content Security Policy (CSP).
- Missing HttpOnly or Secure flags on cookies.
- Configuration of or missing security headers.
- Missing email best practices, such as invalid, incomplete, or missing SPF/DKIM/DMARC records.
- Vulnerabilities only affecting users of outdated or unpatched browsers (more than two stable versions behind the latest release).
- Software version disclosure, banner identification issues, or descriptive error messages.
- Tabnabbing.
- Issues that require unlikely user interaction.
- Improper logout functionality and improper session timeout.
- CORS misconfiguration without an exploitation scenario.
- Broken link hijacking.
- Automated scanning results (e.g., sqlmap, Burp active scanner) that have not been manually verified.
- Content spoofing and text injection issues without a clear attack vector.
- Email spoofing without exploiting security flaws.
- Dead links or broken links.
- User enumeration.
We will review the submitted report, and assign it a tracking number. We will then respond to you, acknowledging receipt of the report, and outline the next steps in the process.
Testing guidelines:
- Do not run automated scanners on other customers' projects. Automated scanners can run up costs for our users, and aggressively configured scanners might inadvertently disrupt services, exploit vulnerabilities, cause system instability or breaches, and violate the Terms of Service of our upstream providers. Our own security systems cannot distinguish hostile reconnaissance from whitehat research. If you wish to run an automated scanner, notify us at support@prowler.com and only run it against your own Prowler App project. Do NOT attack Prowler deployments used by other customers.
- Do not take advantage of the vulnerability or problem you have discovered, for example by downloading more data than necessary to demonstrate the vulnerability or deleting or modifying other people's data.
You will receive a non-automated response to your initial contact within 24 hours, confirming receipt of your reported vulnerability.
Reporting guidelines:
- File a report through our Support Desk at https://support.prowler.com
- If it concerns missing security functionality, please file a feature request instead at https://github.com/prowler-cloud/prowler/issues
- Do provide sufficient information to reproduce the problem, so we will be able to resolve it as quickly as possible.
- If you have further questions and want direct interaction with the Prowler team, please contact us via our Community Slack at goto.prowler.com/slack.
We will coordinate public notification of any validated vulnerability with you. Where possible, we prefer that our respective public disclosures be posted simultaneously.
Disclosure guidelines:
- In order to protect our users and customers, do not reveal the problem to others until we have researched and addressed it and informed our affected customers.
- If you want to publicly share your research about Prowler at a conference, in a blog or any other public forum, you should share a draft with us for review and approval at least 30 days prior to the publication date. Please note that the following should not be included:
- Data regarding any Prowler user or customer projects.
- Prowler customers' data.
- Information about Prowler employees, contractors or partners.
What we promise:
- We will respond to your report within 5 business days with our evaluation of the report and an expected resolution date.
- If you have followed the instructions above, we will not take any legal action against you in regard to the report.
- We will handle your report with strict confidentiality, and not pass on your personal details to third parties without your permission.
- We will keep you informed of the progress towards resolving the problem.
- In the public information concerning the problem reported, we will give your name as the discoverer of the problem (unless you desire otherwise).
We strive to resolve all problems as quickly as possible, and we would like to play an active role in the ultimate publication on the problem after it is resolved.
---
For more information about our security policies, please refer to our [Security](https://docs.prowler.com/projects/prowler-open-source/en/latest/security/) section in our documentation.
+2
View File
@@ -19,6 +19,8 @@ DJANGO_REFRESH_TOKEN_LIFETIME=1440
DJANGO_CACHE_MAX_AGE=3600
DJANGO_STALE_WHILE_REVALIDATE=60
DJANGO_SECRETS_ENCRYPTION_KEY=""
# Throttling has two options: leave it empty for no throttling, or set a rate in DRF format: https://www.django-rest-framework.org/api-guide/throttling/#setting-the-throttling-policy
DJANGO_THROTTLE_TOKEN_OBTAIN=50/minute
# Decide whether to allow Django manage database table partitions
DJANGO_MANAGE_DB_PARTITIONS=[True|False]
DJANGO_CELERY_DEADLOCK_ATTEMPTS=5
+80 -1
View File
@@ -2,6 +2,85 @@
All notable changes to the **Prowler API** are documented in this file.
## [1.14.0] (Prowler UNRELEASED)
### Added
- Default JWT keys are generated and stored if they are missing from configuration [(#8655)](https://github.com/prowler-cloud/prowler/pull/8655)
- `compliance_name` for each compliance [(#7920)](https://github.com/prowler-cloud/prowler/pull/7920)
- API Key support [(#8805)](https://github.com/prowler-cloud/prowler/pull/8805)
### Changed
- Now the MANAGE_ACCOUNT permission is required to modify or read user permissions instead of MANAGE_USERS [(#8281)](https://github.com/prowler-cloud/prowler/pull/8281)
- Now at least one user with MANAGE_ACCOUNT permission is required in the tenant [(#8729)](https://github.com/prowler-cloud/prowler/pull/8729)
### Security
- Django updated to the latest 5.1 security release, 5.1.13, due to problems with potential [SQL injection](https://github.com/prowler-cloud/prowler/security/dependabot/104) and [directory traversals](https://github.com/prowler-cloud/prowler/security/dependabot/103) [(#8842)](https://github.com/prowler-cloud/prowler/pull/8842)
---
## [1.13.2] (Prowler 5.12.3)
### Fixed
- 500 error when deleting user [(#8731)](https://github.com/prowler-cloud/prowler/pull/8731)
---
## [1.13.1] (Prowler 5.12.2)
### Changed
- Renamed compliance overview task queue to `compliance` [(#8755)](https://github.com/prowler-cloud/prowler/pull/8755)
### Security
- Django updated to the latest 5.1 security release, 5.1.12, due to [problems](https://www.djangoproject.com/weblog/2025/sep/03/security-releases/) with potential SQL injection in FilteredRelation column aliases [(#8693)](https://github.com/prowler-cloud/prowler/pull/8693)
---
## [1.13.0] (Prowler 5.12.0)
### Added
- Integration with JIRA, enabling sending findings to a JIRA project [(#8622)](https://github.com/prowler-cloud/prowler/pull/8622), [(#8637)](https://github.com/prowler-cloud/prowler/pull/8637)
- `GET /overviews/findings_severity` now supports `filter[status]` and `filter[status__in]` to aggregate by specific statuses (`FAIL`, `PASS`) [(#8186)](https://github.com/prowler-cloud/prowler/pull/8186)
- Throttling options for `/api/v1/tokens` using the `DJANGO_THROTTLE_TOKEN_OBTAIN` environment variable [(#8647)](https://github.com/prowler-cloud/prowler/pull/8647)
---
## [1.12.0] (Prowler 5.11.0)
### Added
- Lighthouse support for OpenAI GPT-5 [(#8527)](https://github.com/prowler-cloud/prowler/pull/8527)
- Integration with Amazon Security Hub, enabling sending findings to Security Hub [(#8365)](https://github.com/prowler-cloud/prowler/pull/8365)
- Generate ASFF output for AWS providers with SecurityHub integration enabled [(#8569)](https://github.com/prowler-cloud/prowler/pull/8569)
### Fixed
- GitHub provider always scans user instead of organization when using provider UID [(#8587)](https://github.com/prowler-cloud/prowler/pull/8587)
---
## [1.11.0] (Prowler 5.10.0)
### Added
- Github provider support [(#8271)](https://github.com/prowler-cloud/prowler/pull/8271)
- Integration with Amazon S3, enabling storage and retrieval of scan data via S3 buckets [(#8056)](https://github.com/prowler-cloud/prowler/pull/8056)
### Fixed
- Avoid sending errors to Sentry in M365 provider when user authentication fails [(#8420)](https://github.com/prowler-cloud/prowler/pull/8420)
---
## [1.10.2] (Prowler v5.9.2)
### Changed
- Optimized queries for resources views [(#8336)](https://github.com/prowler-cloud/prowler/pull/8336)
---
## [v1.10.1] (Prowler v5.9.1)
### Fixed
- Calculate failed findings during scans to prevent heavy database queries [(#8322)](https://github.com/prowler-cloud/prowler/pull/8322)
---
## [v1.10.0] (Prowler v5.9.0)
### Added
@@ -12,7 +91,7 @@ All notable changes to the **Prowler API** are documented in this file.
- `/processors` endpoints to post-process findings. Currently, only the Mutelist processor is supported, allowing findings to be muted.
- Optimized the underlying queries for resources endpoints [(#8112)](https://github.com/prowler-cloud/prowler/pull/8112)
- Optimized include parameters for resources view [(#8229)](https://github.com/prowler-cloud/prowler/pull/8229)
- Optimized overview background tasks [(#8300)](https://github.com/prowler-cloud/prowler/pull/8300)
- Optimized overview background tasks [(#8300)](https://github.com/prowler-cloud/prowler/pull/8300)
### Fixed
- Search filter for findings and resources [(#8112)](https://github.com/prowler-cloud/prowler/pull/8112)
+3
View File
@@ -44,6 +44,9 @@ USER prowler
WORKDIR /home/prowler
# Ensure output directory exists
RUN mkdir -p /tmp/prowler_api_output
COPY pyproject.toml ./
RUN pip install --no-cache-dir --upgrade pip && \
+5 -1
View File
@@ -18,7 +18,11 @@ Valkey exposes a Redis 7.2 compliant API. Any service that exposes the Redis API
# Modify environment variables
Under the root path of the project, you can find a file called `.env.example`. This file shows all the environment variables that the project uses. You *must* create a new file called `.env` and set the values for the variables.
Under the root path of the project, you can find a file called `.env`. This file shows all the environment variables that the project uses. You should review it and set values for the variables you want to change.
If you don't set `DJANGO_TOKEN_SIGNING_KEY` or `DJANGO_TOKEN_VERIFYING_KEY`, the API will generate them at `~/.config/prowler-api/` with `0600` and `0644` permissions; back up these files to keep the same signing identity across redeploys.
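If you prefer to provide the keys yourself, here is a minimal sketch (not the API's own startup code) of generating an equivalent PEM key pair, assuming the `cryptography` package is installed:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate an RSA key pair like the one the API creates on first start
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# PEM-encoded private key, suitable for DJANGO_TOKEN_SIGNING_KEY
signing_key = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
).decode("utf-8")

# PEM-encoded public key, suitable for DJANGO_TOKEN_VERIFYING_KEY
verifying_key = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
).decode("utf-8")

print(signing_key)
print(verifying_key)
```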
**Important note**: Every Prowler version (or repository branches and tags) could have different variables set in its `.env` file. Please use the `.env` file that corresponds with each version.
## Local deployment
Keep in mind that if you export the `.env` file for a local deployment, you must do it within the context of the Poetry interpreter, not before; otherwise, variables will not be loaded properly.
+1 -1
View File
@@ -32,7 +32,7 @@ start_prod_server() {
start_worker() {
echo "Starting the worker..."
poetry run python -m celery -A config.celery worker -l "${DJANGO_LOGGING_LEVEL:-info}" -Q celery,scans,scan-reports,deletion,backfill,overview -E --max-tasks-per-child 1
poetry run python -m celery -A config.celery worker -l "${DJANGO_LOGGING_LEVEL:-info}" -Q celery,scans,scan-reports,deletion,backfill,overview,integrations,compliance -E --max-tasks-per-child 1
}
start_worker_beat() {
+1581 -1238
View File
File diff suppressed because it is too large
+6 -3
View File
@@ -7,7 +7,7 @@ authors = [{name = "Prowler Engineering", email = "engineering@prowler.com"}]
dependencies = [
"celery[pytest] (>=5.4.0,<6.0.0)",
"dj-rest-auth[with_social,jwt] (==7.0.1)",
"django==5.1.10",
"django (==5.1.13)",
"django-allauth[saml] (>=65.8.0,<66.0.0)",
"django-celery-beat (>=2.7.0,<3.0.0)",
"django-celery-results (>=2.5.1,<3.0.0)",
@@ -30,7 +30,10 @@ dependencies = [
"sentry-sdk[django] (>=2.20.0,<3.0.0)",
"uuid6==2024.7.10",
"openai (>=1.82.0,<2.0.0)",
"xmlsec==1.3.14"
"xmlsec==1.3.14",
"h2 (==4.3.0)",
"markdown (>=3.9,<4.0)",
"drf-simple-apikey (==2.2.1)"
]
description = "Prowler's API (Django/DRF)"
license = "Apache-2.0"
@@ -38,7 +41,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
version = "1.10.0"
version = "1.14.0"
[project.scripts]
celery = "src.backend.config.settings.celery"
+157
View File
@@ -1,4 +1,26 @@
import logging
import os
import sys
from pathlib import Path
from config.custom_logging import BackendLogger
from config.env import env
from django.apps import AppConfig
from django.conf import settings
logger = logging.getLogger(BackendLogger.API)
SIGNING_KEY_ENV = "DJANGO_TOKEN_SIGNING_KEY"
VERIFYING_KEY_ENV = "DJANGO_TOKEN_VERIFYING_KEY"
PRIVATE_KEY_FILE = "jwt_private.pem"
PUBLIC_KEY_FILE = "jwt_public.pem"
KEYS_DIRECTORY = (
Path.home() / ".config" / "prowler-api"
) # `/home/prowler/.config/prowler-api` inside the container
_keys_initialized = False # Flag to prevent multiple executions within the same process
class ApiConfig(AppConfig):
@@ -6,7 +28,142 @@ class ApiConfig(AppConfig):
name = "api"
def ready(self):
from api import schema_extensions # noqa: F401
from api import signals # noqa: F401
from api.compliance import load_prowler_compliance
# Generate required cryptographic keys if not present, but only if:
# `"manage.py" not in sys.argv`: If an external server (e.g., Gunicorn) is running the app
# `os.environ.get("RUN_MAIN")`: If it's not a Django command or using `runserver`,
# only the main process will do it
if "manage.py" not in sys.argv or os.environ.get("RUN_MAIN"):
self._ensure_crypto_keys()
load_prowler_compliance()
def _ensure_crypto_keys(self):
"""
Orchestrator method that ensures all required cryptographic keys are present.
This method coordinates the generation of:
- RSA key pairs for JWT token signing and verification
Note: During development, Django spawns multiple processes (migrations, fixtures, etc.)
which will each generate their own keys. This is expected behavior and each process
will have consistent keys for its lifetime. In production, set the keys as environment
variables to avoid regeneration.
"""
global _keys_initialized
# Skip key generation if running tests
if hasattr(settings, "TESTING") and settings.TESTING:
return
# Skip if already initialized in this process
if _keys_initialized:
return
# Check if both JWT keys are set; if not, generate them
signing_key = env.str(SIGNING_KEY_ENV, default="").strip()
verifying_key = env.str(VERIFYING_KEY_ENV, default="").strip()
if not signing_key or not verifying_key:
logger.info(
f"Generating JWT RSA key pair. In production, set '{SIGNING_KEY_ENV}' and '{VERIFYING_KEY_ENV}' "
"environment variables."
)
self._ensure_jwt_keys()
# Mark as initialized to prevent future executions in this process
_keys_initialized = True
def _read_key_file(self, file_name):
"""
Utility method to read the contents of a file.
"""
file_path = KEYS_DIRECTORY / file_name
return file_path.read_text().strip() if file_path.is_file() else None
def _write_key_file(self, file_name, content, private=True):
"""
Utility method to write content to a file.
"""
try:
file_path = KEYS_DIRECTORY / file_name
file_path.parent.mkdir(parents=True, exist_ok=True)
file_path.write_text(content)
file_path.chmod(0o600 if private else 0o644)
except Exception as e:
logger.error(
f"Error writing key file '{file_name}': {e}. "
f"Please set '{SIGNING_KEY_ENV}' and '{VERIFYING_KEY_ENV}' manually."
)
raise e
def _ensure_jwt_keys(self):
"""
Generate RSA key pairs for JWT token signing and verification
if they are not already set in environment variables.
"""
# Read existing keys from files if they exist
signing_key = self._read_key_file(PRIVATE_KEY_FILE)
verifying_key = self._read_key_file(PUBLIC_KEY_FILE)
if not signing_key or not verifying_key:
# Generate and store the RSA key pair
signing_key, verifying_key = self._generate_jwt_keys()
self._write_key_file(PRIVATE_KEY_FILE, signing_key, private=True)
self._write_key_file(PUBLIC_KEY_FILE, verifying_key, private=False)
logger.info("JWT keys generated and stored successfully")
else:
logger.info("JWT keys already generated")
# Set environment variables and Django settings
os.environ[SIGNING_KEY_ENV] = signing_key
settings.SIMPLE_JWT["SIGNING_KEY"] = signing_key
os.environ[VERIFYING_KEY_ENV] = verifying_key
settings.SIMPLE_JWT["VERIFYING_KEY"] = verifying_key
def _generate_jwt_keys(self):
"""
Generate and set RSA key pairs for JWT token operations.
"""
try:
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa
# Generate RSA key pair
private_key = rsa.generate_private_key( # Future improvement: we could read the next values from env vars
public_exponent=65537,
key_size=2048,
)
# Serialize private key (for signing)
private_pem = private_key.private_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PrivateFormat.PKCS8,
encryption_algorithm=serialization.NoEncryption(),
).decode("utf-8")
# Serialize public key (for verification)
public_key = private_key.public_key()
public_pem = public_key.public_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PublicFormat.SubjectPublicKeyInfo,
).decode("utf-8")
logger.debug("JWT RSA key pair generated successfully.")
return private_pem, public_pem
except ImportError as e:
logger.warning(
"The 'cryptography' package is required for automatic JWT key generation."
)
raise e
except Exception as e:
logger.error(
f"Error generating JWT keys: {e}. Please set '{SIGNING_KEY_ENV}' and '{VERIFYING_KEY_ENV}' manually."
)
raise e
+76
View File
@@ -0,0 +1,76 @@
from typing import Optional, Tuple
from uuid import UUID
from cryptography.fernet import InvalidToken
from django.utils import timezone
from drf_simple_apikey.backends import APIKeyAuthentication as BaseAPIKeyAuth
from drf_simple_apikey.crypto import get_crypto
from rest_framework.authentication import BaseAuthentication
from rest_framework.exceptions import AuthenticationFailed
from rest_framework.request import Request
from rest_framework_simplejwt.authentication import JWTAuthentication
from api.models import TenantAPIKey, TenantAPIKeyManager
class TenantAPIKeyAuthentication(BaseAPIKeyAuth):
model = TenantAPIKey
def __init__(self):
self.key_crypto = get_crypto()
def authenticate(self, request: Request):
prefixed_key = self.get_key(request)
# Split prefix from key (format: pk_xxxxxxxx.encrypted_key)
try:
prefix, key = prefixed_key.split(TenantAPIKeyManager.separator, 1)
except ValueError:
raise AuthenticationFailed("Invalid API Key.")
try:
entity, _ = self._authenticate_credentials(request, key)
except InvalidToken:
raise AuthenticationFailed("Invalid API Key.")
# Get the API key instance to update last_used_at and retrieve tenant info
# We need to decrypt again to get the pk (already validated by _authenticate_credentials)
payload = self.key_crypto.decrypt(key)
api_key_pk = payload["_pk"]
# Convert string UUID back to UUID object for lookup
if isinstance(api_key_pk, str):
api_key_pk = UUID(api_key_pk)
try:
api_key_instance = TenantAPIKey.objects.get(id=api_key_pk, prefix=prefix)
except TenantAPIKey.DoesNotExist:
raise AuthenticationFailed("Invalid API Key.")
# Update last_used_at
api_key_instance.last_used_at = timezone.now()
api_key_instance.save(update_fields=["last_used_at"])
return entity, {
"tenant_id": str(api_key_instance.tenant_id),
"sub": str(api_key_instance.entity.id),
"api_key_prefix": prefix,
}
class CombinedJWTOrAPIKeyAuthentication(BaseAuthentication):
jwt_auth = JWTAuthentication()
api_key_auth = TenantAPIKeyAuthentication()
def authenticate(self, request: Request) -> Optional[Tuple[object, dict]]:
auth_header = request.headers.get("Authorization", "")
# Prioritize JWT authentication if both are present
if auth_header.startswith("Bearer "):
return self.jwt_auth.authenticate(request)
if auth_header.startswith("Api-Key "):
return self.api_key_auth.authenticate(request)
# Default fallback
return self.jwt_auth.authenticate(request)
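As a rough usage sketch (the base URL and credential values below are placeholders, not real endpoints or keys), a client selects the scheme through the `Authorization` header prefix:

```python
import requests

BASE_URL = "http://localhost:8080/api/v1"  # placeholder deployment URL

# JWT bearer token; used when the header starts with "Bearer "
jwt_resp = requests.get(
    f"{BASE_URL}/providers",
    headers={"Authorization": "Bearer <jwt-access-token>"},
)

# Tenant API key in the `prefix.encrypted_key` format produced by
# TenantAPIKeyManager.assign_api_key (placeholder value shown)
key_resp = requests.get(
    f"{BASE_URL}/providers",
    headers={"Authorization": "Api-Key pk_abc123de.<encrypted-key>"},
)
print(jwt_resp.status_code, key_resp.status_code)
```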
+2 -2
View File
@@ -5,8 +5,8 @@ from rest_framework.exceptions import NotAuthenticated
from rest_framework.filters import SearchFilter
from rest_framework_json_api import filters
from rest_framework_json_api.views import ModelViewSet
from rest_framework_simplejwt.authentication import JWTAuthentication
from api.authentication import CombinedJWTOrAPIKeyAuthentication
from api.db_router import MainRouter
from api.db_utils import POSTGRES_USER_VAR, rls_transaction
from api.filters import CustomDjangoFilterBackend
@@ -15,7 +15,7 @@ from api.rbac.permissions import HasPermissions
class BaseViewSet(ModelViewSet):
authentication_classes = [JWTAuthentication]
authentication_classes = [CombinedJWTOrAPIKeyAuthentication]
required_permissions = []
permission_classes = [permissions.IsAuthenticated, HasPermissions]
filter_backends = [
+1
View File
@@ -225,6 +225,7 @@ def generate_compliance_overview_template(prowler_compliance: dict):
# Build compliance dictionary
compliance_dict = {
"framework": compliance_data.Framework,
"name": compliance_data.Name,
"version": compliance_data.Version,
"provider": provider_type,
"description": compliance_data.Description,
+7 -1
View File
@@ -61,7 +61,7 @@ def rls_transaction(value: str, parameter: str = POSTGRES_TENANT_VAR):
with transaction.atomic():
with connection.cursor() as cursor:
try:
# just in case the value is an UUID object
# just in case the value is a UUID object
uuid.UUID(str(value))
except ValueError:
raise ValidationError("Must be a valid UUID")
@@ -434,6 +434,12 @@ def drop_index_on_partitions(
schema_editor.execute(sql)
def generate_api_key_prefix():
"""Generate a random 8-character prefix for API keys (e.g., 'pk_abc123de')."""
random_chars = generate_random_token(length=8)
return f"pk_{random_chars}"
# Postgres enum definition for member role
+17 -10
View File
@@ -1,6 +1,10 @@
from django.core.exceptions import ValidationError as django_validation_error
from rest_framework import status
from rest_framework.exceptions import APIException
from rest_framework.exceptions import (
APIException,
AuthenticationFailed,
NotAuthenticated,
)
from rest_framework_json_api.exceptions import exception_handler
from rest_framework_json_api.serializers import ValidationError
from rest_framework_simplejwt.exceptions import InvalidToken, TokenError
@@ -68,15 +72,18 @@ def custom_exception_handler(exc, context):
exc = ValidationError(exc.message_dict)
else:
exc = ValidationError(detail=exc.messages[0], code=exc.code)
elif isinstance(exc, (TokenError, InvalidToken)):
if (
hasattr(exc, "detail")
and isinstance(exc.detail, dict)
and "messages" in exc.detail
):
exc.detail["messages"] = [
message_item["message"] for message_item in exc.detail["messages"]
]
# Force 401 status for AuthenticationFailed exceptions regardless of the authentication backend
elif isinstance(exc, (AuthenticationFailed, NotAuthenticated, TokenError)):
exc.status_code = status.HTTP_401_UNAUTHORIZED
if isinstance(exc, (TokenError, InvalidToken)):
if (
hasattr(exc, "detail")
and isinstance(exc.detail, dict)
and "messages" in exc.detail
):
exc.detail["messages"] = [
message_item["message"] for message_item in exc.detail["messages"]
]
return exception_handler(exc, context)
+140 -6
View File
@@ -2,7 +2,7 @@ from datetime import date, datetime, timedelta, timezone
from dateutil.parser import parse
from django.conf import settings
from django.db.models import Q
from django.db.models import F, Q
from django_filters.rest_framework import (
BaseInFilter,
BooleanFilter,
@@ -28,6 +28,7 @@ from api.models import (
Integration,
Invitation,
Membership,
OverviewStatusChoices,
PermissionChoices,
Processor,
Provider,
@@ -42,6 +43,7 @@ from api.models import (
StateChoices,
StatusChoices,
Task,
TenantAPIKey,
User,
)
from api.rls import Tenant
@@ -218,10 +220,31 @@ class MembershipFilter(FilterSet):
class ProviderFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
connected = BooleanFilter()
inserted_at = DateFilter(
field_name="inserted_at",
lookup_expr="date",
help_text="""Filter by date when the provider was added
(format: YYYY-MM-DD)""",
)
updated_at = DateFilter(
field_name="updated_at",
lookup_expr="date",
help_text="""Filter by date when the provider was updated
(format: YYYY-MM-DD)""",
)
connected = BooleanFilter(
help_text="""Filter by connection status. Set to True to return only
connected providers, or False to return only providers with failed
connections. If not specified, both connected and failed providers are
included. Providers with no connection attempt (status is null) are
excluded from this filter."""
)
provider = ChoiceFilter(choices=Provider.ProviderChoices.choices)
provider__in = ChoiceInFilter(
field_name="provider",
choices=Provider.ProviderChoices.choices,
lookup_expr="in",
)
class Meta:
model = Provider
@@ -647,8 +670,16 @@ class LatestFindingFilter(CommonFindingFilters):
class ProviderSecretFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
inserted_at = DateFilter(
field_name="inserted_at",
lookup_expr="date",
help_text="Filter by date when the secret was added (format: YYYY-MM-DD)",
)
updated_at = DateFilter(
field_name="updated_at",
lookup_expr="date",
help_text="Filter by date when the secret was updated (format: YYYY-MM-DD)",
)
provider = UUIDFilter(field_name="provider__id", lookup_expr="exact")
class Meta:
@@ -750,6 +781,72 @@ class ScanSummaryFilter(FilterSet):
}
class ScanSummarySeverityFilter(ScanSummaryFilter):
"""Filter for findings_severity ScanSummary endpoint - includes status filters"""
# Custom status filters - only for severity grouping endpoint
status = ChoiceFilter(method="filter_status", choices=OverviewStatusChoices.choices)
status__in = CharInFilter(method="filter_status_in", lookup_expr="in")
def filter_status(self, queryset, name, value):
# Validate the status value
if value not in [choice[0] for choice in OverviewStatusChoices.choices]:
raise ValidationError(f"Invalid status value: {value}")
# Apply the filter by annotating the queryset with the status field
if value == OverviewStatusChoices.FAIL:
return queryset.annotate(status_count=F("fail"))
elif value == OverviewStatusChoices.PASS:
return queryset.annotate(status_count=F("_pass"))
else:
return queryset.annotate(status_count=F("total"))
def filter_status_in(self, queryset, name, value):
# Validate the status values
valid_statuses = [choice[0] for choice in OverviewStatusChoices.choices]
for status_val in value:
if status_val not in valid_statuses:
raise ValidationError(f"Invalid status value: {status_val}")
# If all statuses or no valid statuses, use total
if (
set(value)
>= {
OverviewStatusChoices.FAIL,
OverviewStatusChoices.PASS,
}
or not value
):
return queryset.annotate(status_count=F("total"))
# Build the sum expression based on status values
sum_expression = None
for status in value:
if status == OverviewStatusChoices.FAIL:
field_expr = F("fail")
elif status == OverviewStatusChoices.PASS:
field_expr = F("_pass")
else:
continue
if sum_expression is None:
sum_expression = field_expr
else:
sum_expression = sum_expression + field_expr
if sum_expression is None:
return queryset.annotate(status_count=F("total"))
return queryset.annotate(status_count=sum_expression)
class Meta:
model = ScanSummary
fields = {
"inserted_at": ["date", "gte", "lte"],
"region": ["exact", "icontains", "in"],
}
class ServiceOverviewFilter(ScanSummaryFilter):
def is_valid(self):
# Check if at least one of the inserted_at filters is present
@@ -793,3 +890,40 @@ class ProcessorFilter(FilterSet):
field_name="processor_type",
lookup_expr="in",
)
class IntegrationJiraFindingsFilter(FilterSet):
# To be expanded as needed
finding_id = UUIDFilter(field_name="id", lookup_expr="exact")
finding_id__in = UUIDInFilter(field_name="id", lookup_expr="in")
class Meta:
model = Finding
fields = {}
def filter_queryset(self, queryset):
# Validate that there is at least one filter provided
if not self.data:
raise ValidationError(
{
"findings": "No finding filters provided. At least one filter is required."
}
)
return super().filter_queryset(queryset)
class TenantApiKeyFilter(FilterSet):
inserted_at = DateFilter(field_name="created", lookup_expr="date")
inserted_at__gte = DateFilter(field_name="created", lookup_expr="gte")
inserted_at__lte = DateFilter(field_name="created", lookup_expr="lte")
expires_at = DateFilter(field_name="expiry_date", lookup_expr="date")
expires_at__gte = DateFilter(field_name="expiry_date", lookup_expr="gte")
expires_at__lte = DateFilter(field_name="expiry_date", lookup_expr="lte")
class Meta:
model = TenantAPIKey
fields = {
"prefix": ["exact", "icontains"],
"revoked": ["exact"],
"name": ["exact", "icontains"],
}
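For illustration only (the endpoint path is assumed from the model's `api-keys` resource name), these filters translate into JSON:API query parameters such as:

```python
import requests

# Placeholder host; filter names match TenantApiKeyFilter above
resp = requests.get(
    "http://localhost:8080/api/v1/api-keys",
    params={
        "filter[revoked]": "false",
        "filter[prefix__icontains]": "pk_",
        "filter[expires_at__gte]": "2025-01-01",
    },
    headers={"Authorization": "Bearer <jwt-access-token>"},
)
print(resp.json())
```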
@@ -24,5 +24,18 @@
"is_active": true,
"date_joined": "2024-09-18T09:04:20.850Z"
}
},
{
"model": "api.user",
"pk": "6d4f8a91-3c2e-4b5a-8f7d-1e9c5b2a4d6f",
"fields": {
"password": "pbkdf2_sha256$870000$Z63pGJ7nre48hfcGbk5S0O$rQpKczAmijs96xa+gPVJifpT3Fetb8DOusl5Eq6gxac=",
"last_login": null,
"name": "E2E Test User",
"email": "e2e@prowler.com",
"company_name": "Prowler E2E Tests",
"is_active": true,
"date_joined": "2024-01-01T00:00:00.850Z"
}
}
]
@@ -46,5 +46,24 @@
"role": "member",
"date_joined": "2024-09-19T11:03:59.712Z"
}
},
{
"model": "api.tenant",
"pk": "7c8f94a3-e2d1-4b3a-9f87-2c4d5e6f1a2b",
"fields": {
"inserted_at": "2024-01-01T00:00:00Z",
"updated_at": "2024-01-01T00:00:00Z",
"name": "E2E Test Tenant"
}
},
{
"model": "api.membership",
"pk": "9b1a2c3d-4e5f-6789-abc1-23456789def0",
"fields": {
"user": "6d4f8a91-3c2e-4b5a-8f7d-1e9c5b2a4d6f",
"tenant": "7c8f94a3-e2d1-4b3a-9f87-2c4d5e6f1a2b",
"role": "owner",
"date_joined": "2024-01-01T00:00:00.000Z"
}
}
]
@@ -149,5 +149,32 @@
"user": "8b38e2eb-6689-4f1e-a4ba-95b275130200",
"inserted_at": "2024-11-20T15:36:14.302Z"
}
},
{
"model": "api.role",
"pk": "a5b6c7d8-9e0f-1234-5678-90abcdef1234",
"fields": {
"tenant": "7c8f94a3-e2d1-4b3a-9f87-2c4d5e6f1a2b",
"name": "e2e_admin",
"manage_users": true,
"manage_account": true,
"manage_billing": true,
"manage_providers": true,
"manage_integrations": true,
"manage_scans": true,
"unlimited_visibility": true,
"inserted_at": "2024-01-01T00:00:00.000Z",
"updated_at": "2024-01-01T00:00:00.000Z"
}
},
{
"model": "api.userrolerelationship",
"pk": "f1e2d3c4-b5a6-9876-5432-10fedcba9876",
"fields": {
"tenant": "7c8f94a3-e2d1-4b3a-9f87-2c4d5e6f1a2b",
"role": "a5b6c7d8-9e0f-1234-5678-90abcdef1234",
"user": "6d4f8a91-3c2e-4b5a-8f7d-1e9c5b2a4d6f",
"inserted_at": "2024-01-01T00:00:00.000Z"
}
}
]
File diff suppressed because one or more lines are too long
+8 -2
View File
@@ -8,9 +8,14 @@ def extract_auth_info(request) -> dict:
if getattr(request, "auth", None) is not None:
tenant_id = request.auth.get("tenant_id", "N/A")
user_id = request.auth.get("sub", "N/A")
api_key_prefix = request.auth.get("api_key_prefix", "N/A")
else:
tenant_id, user_id = "N/A", "N/A"
return {"tenant_id": tenant_id, "user_id": user_id}
tenant_id, user_id, api_key_prefix = "N/A", "N/A", "N/A"
return {
"tenant_id": tenant_id,
"user_id": user_id,
"api_key_prefix": api_key_prefix,
}
class APILoggingMiddleware:
@@ -38,6 +43,7 @@ class APILoggingMiddleware:
extra={
"user_id": auth_info["user_id"],
"tenant_id": auth_info["tenant_id"],
"api_key_prefix": auth_info["api_key_prefix"],
"method": request.method,
"path": request.path,
"query_params": request.GET.dict(),
@@ -0,0 +1,30 @@
from functools import partial
from django.db import migrations
from api.db_utils import create_index_on_partitions, drop_index_on_partitions
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0039_resource_resources_failed_findings_idx"),
]
operations = [
migrations.RunPython(
partial(
create_index_on_partitions,
parent_table="resource_finding_mappings",
index_name="rfm_tenant_resource_idx",
columns="tenant_id, resource_id",
method="BTREE",
),
reverse_code=partial(
drop_index_on_partitions,
parent_table="resource_finding_mappings",
index_name="rfm_tenant_resource_idx",
),
),
]
@@ -0,0 +1,17 @@
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0040_rfm_tenant_resource_index_partitions"),
]
operations = [
migrations.AddIndex(
model_name="resourcefindingmapping",
index=models.Index(
fields=["tenant_id", "resource_id"],
name="rfm_tenant_resource_idx",
),
),
]
@@ -0,0 +1,23 @@
from django.contrib.postgres.operations import AddIndexConcurrently
from django.db import migrations, models
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0041_rfm_tenant_resource_parent_partitions"),
("django_celery_beat", "0019_alter_periodictasks_options"),
]
operations = [
AddIndexConcurrently(
model_name="scan",
index=models.Index(
condition=models.Q(("state", "completed")),
fields=["tenant_id", "provider_id", "-inserted_at"],
include=("id",),
name="scans_prov_ins_desc_idx",
),
),
]
@@ -0,0 +1,33 @@
# Generated by Django 5.1.7 on 2025-07-09 14:44
from django.db import migrations
import api.db_utils
class Migration(migrations.Migration):
dependencies = [
("api", "0042_scan_scans_prov_ins_desc_idx"),
]
operations = [
migrations.AlterField(
model_name="provider",
name="provider",
field=api.db_utils.ProviderEnumField(
choices=[
("aws", "AWS"),
("azure", "Azure"),
("gcp", "GCP"),
("kubernetes", "Kubernetes"),
("m365", "M365"),
("github", "GitHub"),
],
default="aws",
),
),
migrations.RunSQL(
"ALTER TYPE provider ADD VALUE IF NOT EXISTS 'github';",
reverse_sql=migrations.RunSQL.noop,
),
]
@@ -0,0 +1,19 @@
# Generated by Django 5.1.10 on 2025-07-17 11:52
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0043_github_provider"),
]
operations = [
migrations.AddConstraint(
model_name="integration",
constraint=models.UniqueConstraint(
fields=("configuration", "tenant"),
name="unique_configuration_per_tenant",
),
),
]
@@ -0,0 +1,17 @@
# Generated by Django 5.1.10 on 2025-07-21 16:08
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0044_integration_unique_configuration_per_tenant"),
]
operations = [
migrations.AlterField(
model_name="scan",
name="output_location",
field=models.CharField(blank=True, max_length=4096, null=True),
),
]
@@ -0,0 +1,33 @@
# Generated by Django 5.1.10 on 2025-08-20 09:04
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0045_alter_scan_output_location"),
]
operations = [
migrations.AlterField(
model_name="lighthouseconfiguration",
name="model",
field=models.CharField(
choices=[
("gpt-4o-2024-11-20", "GPT-4o v2024-11-20"),
("gpt-4o-2024-08-06", "GPT-4o v2024-08-06"),
("gpt-4o-2024-05-13", "GPT-4o v2024-05-13"),
("gpt-4o", "GPT-4o Default"),
("gpt-4o-mini-2024-07-18", "GPT-4o Mini v2024-07-18"),
("gpt-4o-mini", "GPT-4o Mini Default"),
("gpt-5-2025-08-07", "GPT-5 v2025-08-07"),
("gpt-5", "GPT-5 Default"),
("gpt-5-mini-2025-08-07", "GPT-5 Mini v2025-08-07"),
("gpt-5-mini", "GPT-5 Mini Default"),
],
default="gpt-4o-2024-08-06",
help_text="Must be one of the supported model names",
max_length=50,
),
),
]
@@ -0,0 +1,16 @@
# Generated by Django 5.1.10 on 2025-08-20 08:24
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("api", "0046_lighthouse_gpt5"),
]
operations = [
migrations.RemoveConstraint(
model_name="integration",
name="unique_configuration_per_tenant",
),
]
@@ -0,0 +1,125 @@
# Generated by Django 5.1.12 on 2025-09-30 13:10
import uuid
import django.db.models.deletion
import drf_simple_apikey.models
from django.conf import settings
from django.db import migrations, models
import api.db_utils
import api.rls
class Migration(migrations.Migration):
dependencies = [
("api", "0047_remove_integration_unique_configuration_per_tenant"),
]
operations = [
migrations.CreateModel(
name="TenantAPIKey",
fields=[
("name", models.CharField(blank=True, max_length=255, null=True)),
(
"expiry_date",
models.DateTimeField(
default=drf_simple_apikey.models._expiry_date,
help_text="Once API key expires, entities cannot use it anymore.",
verbose_name="Expires",
),
),
(
"revoked",
models.BooleanField(
blank=True,
default=False,
help_text="If the API key is revoked, entities cannot use it anymore. (This cannot be undone.)",
),
),
("created", models.DateTimeField(auto_now=True)),
(
"whitelisted_ips",
models.JSONField(
blank=True,
help_text="List of allowed IP addresses for this API key.",
null=True,
),
),
(
"blacklisted_ips",
models.JSONField(
blank=True,
help_text="List of denied IP addresses for this API key.",
null=True,
),
),
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
(
"prefix",
models.CharField(
default=api.db_utils.generate_api_key_prefix,
editable=False,
help_text="Unique prefix to identify the API key",
max_length=11,
unique=True,
),
),
(
"last_used_at",
models.DateTimeField(
blank=True,
help_text="Last time this API key was used for authentication",
null=True,
),
),
(
"entity",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="user_api_keys",
to=settings.AUTH_USER_MODEL,
),
),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "api_keys",
"abstract": False,
"indexes": [
models.Index(
fields=["tenant_id", "prefix"],
name="api_keys_tenant_prefix_idx",
)
],
"constraints": [
models.UniqueConstraint(
fields=("tenant_id", "prefix"), name="unique_api_key_prefixes"
)
],
},
),
migrations.AddConstraint(
model_name="tenantapikey",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_tenantapikey",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
]
+100 -2
View File
@@ -22,6 +22,8 @@ from django.db.models import Q
from django.utils.translation import gettext_lazy as _
from django_celery_beat.models import PeriodicTask
from django_celery_results.models import TaskResult
from drf_simple_apikey.crypto import get_crypto
from drf_simple_apikey.models import AbstractAPIKey, AbstractAPIKeyManager
from psqlextra.manager import PostgresManager
from psqlextra.models import PostgresPartitionedModel
from psqlextra.types import PostgresPartitioningMethod
@@ -42,6 +44,7 @@ from api.db_utils import (
StateEnumField,
StatusEnumField,
enum_to_choices,
generate_api_key_prefix,
generate_random_token,
one_week_from_now,
)
@@ -74,6 +77,15 @@ class StatusChoices(models.TextChoices):
MANUAL = "MANUAL", _("Manual")
class OverviewStatusChoices(models.TextChoices):
"""
Status filters allowed in overview/severity endpoints.
"""
FAIL = "FAIL", _("Fail")
PASS = "PASS", _("Pass")
class StateChoices(models.TextChoices):
AVAILABLE = "available", _("Available")
SCHEDULED = "scheduled", _("Scheduled")
@@ -116,6 +128,17 @@ class ActiveProviderPartitionedManager(PostgresManager, ActiveProviderManager):
return super().get_queryset().filter(self.active_provider_filter())
class TenantAPIKeyManager(AbstractAPIKeyManager):
separator = "."
def assign_api_key(self, obj) -> str:
payload = {"_pk": str(obj.pk), "_exp": obj.expiry_date.timestamp()}
key = get_crypto().generate(payload)
prefixed_key = f"{obj.prefix}{self.separator}{key}"
return prefixed_key
class User(AbstractBaseUser):
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
name = models.CharField(max_length=150, validators=[MinLengthValidator(3)])
@@ -195,6 +218,55 @@ class Membership(models.Model):
resource_name = "memberships"
class TenantAPIKey(AbstractAPIKey, RowLevelSecurityProtectedModel):
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
prefix = models.CharField(
max_length=11,
unique=True,
default=generate_api_key_prefix,
editable=False,
help_text="Unique prefix to identify the API key",
)
last_used_at = models.DateTimeField(
null=True,
blank=True,
help_text="Last time this API key was used for authentication",
)
entity = models.ForeignKey(
settings.AUTH_USER_MODEL,
on_delete=models.SET_NULL,
null=True,
blank=True,
related_name="user_api_keys",
)
objects = TenantAPIKeyManager()
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "api_keys"
constraints = [
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
models.UniqueConstraint(
fields=("tenant_id", "prefix"),
name="unique_api_key_prefixes",
),
]
indexes = [
models.Index(
fields=["tenant_id", "prefix"], name="api_keys_tenant_prefix_idx"
),
]
class JSONAPIMeta:
resource_name = "api-keys"
class Provider(RowLevelSecurityProtectedModel):
objects = ActiveProviderManager()
all_objects = models.Manager()
@@ -205,6 +277,7 @@ class Provider(RowLevelSecurityProtectedModel):
GCP = "gcp", _("GCP")
KUBERNETES = "kubernetes", _("Kubernetes")
M365 = "m365", _("M365")
GITHUB = "github", _("GitHub")
@staticmethod
def validate_aws_uid(value):
@@ -265,6 +338,16 @@ class Provider(RowLevelSecurityProtectedModel):
pointer="/data/attributes/uid",
)
@staticmethod
def validate_github_uid(value):
if not re.match(r"^[a-zA-Z0-9][a-zA-Z0-9-]{0,38}$", value):
raise ModelValidationError(
detail="GitHub provider ID must be a valid GitHub username or organization name (1-39 characters, "
"starting with alphanumeric, containing only alphanumeric characters and hyphens).",
code="github-uid",
pointer="/data/attributes/uid",
)
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
updated_at = models.DateTimeField(auto_now=True, editable=False)
@@ -427,7 +510,7 @@ class Scan(RowLevelSecurityProtectedModel):
scheduler_task = models.ForeignKey(
PeriodicTask, on_delete=models.SET_NULL, null=True, blank=True
)
output_location = models.CharField(blank=True, null=True, max_length=200)
output_location = models.CharField(blank=True, null=True, max_length=4096)
provider = models.ForeignKey(
Provider,
on_delete=models.CASCADE,
@@ -476,6 +559,13 @@ class Scan(RowLevelSecurityProtectedModel):
condition=Q(state=StateChoices.COMPLETED),
name="scans_prov_state_ins_desc_idx",
),
# TODO This might replace `scans_prov_state_ins_desc_idx` completely. Review usage
models.Index(
fields=["tenant_id", "provider_id", "-inserted_at"],
condition=Q(state=StateChoices.COMPLETED),
include=["id"],
name="scans_prov_ins_desc_idx",
),
]
class JSONAPIMeta:
@@ -860,6 +950,10 @@ class ResourceFindingMapping(PostgresPartitionedModel, RowLevelSecurityProtected
fields=["tenant_id", "finding_id"],
name="rfm_tenant_finding_idx",
),
models.Index(
fields=["tenant_id", "resource_id"],
name="rfm_tenant_resource_idx",
),
]
constraints = [
models.UniqueConstraint(
@@ -1324,7 +1418,7 @@ class ScanSummary(RowLevelSecurityProtectedModel):
class Integration(RowLevelSecurityProtectedModel):
class IntegrationChoices(models.TextChoices):
S3 = "amazon_s3", _("Amazon S3")
AMAZON_S3 = "amazon_s3", _("Amazon S3")
AWS_SECURITY_HUB = "aws_security_hub", _("AWS Security Hub")
JIRA = "jira", _("JIRA")
SLACK = "slack", _("Slack")
@@ -1726,6 +1820,10 @@ class LighthouseConfiguration(RowLevelSecurityProtectedModel):
GPT_4O = "gpt-4o", _("GPT-4o Default")
GPT_4O_MINI_2024_07_18 = "gpt-4o-mini-2024-07-18", _("GPT-4o Mini v2024-07-18")
GPT_4O_MINI = "gpt-4o-mini", _("GPT-4o Mini Default")
GPT_5_2025_08_07 = "gpt-5-2025-08-07", _("GPT-5 v2025-08-07")
GPT_5 = "gpt-5", _("GPT-5 Default")
GPT_5_MINI_2025_08_07 = "gpt-5-mini-2025-08-07", _("GPT-5 Mini v2025-08-07")
GPT_5_MINI = "gpt-5-mini", _("GPT-5 Mini Default")
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
@@ -0,0 +1,16 @@
from drf_spectacular.extensions import OpenApiAuthenticationExtension
from drf_spectacular.openapi import AutoSchema
class CombinedJWTOrAPIKeyAuthenticationScheme(OpenApiAuthenticationExtension):
target_class = "api.authentication.CombinedJWTOrAPIKeyAuthentication"
name = "JWT or API Key"
def get_security_definition(self, auto_schema: AutoSchema): # noqa: F841
return {
"type": "http",
"scheme": "bearer",
"bearerFormat": "JWT",
"description": "Supports both JWT Bearer tokens and API Key authentication. "
"Use `Bearer <token>` for JWT or `Api-Key <key>` for API keys.",
}
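# Illustrative client calls matching this definition (assumes the `requests`
# package and a placeholder BASE_URL; not part of this diff):
#
#     import requests
#     requests.get(f"{BASE_URL}/providers", headers={"Authorization": f"Bearer {jwt}"})
#     requests.get(f"{BASE_URL}/providers", headers={"Authorization": f"Api-Key {raw_key}"})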
@@ -0,0 +1,95 @@
def _pick_task_response_component(components):
schemas = components.get("schemas", {}) or {}
for candidate in ("TaskResponse",):
if candidate in schemas:
return candidate
return None
def _extract_task_example_from_components(components):
    schemas = components.get("schemas", {}) or {}
    doc = schemas.get("TaskResponse")
    if isinstance(doc, dict) and "example" in doc:
        example = doc["example"]
        return example if "data" in example else {"data": example}
# Fallback
return {
"data": {
"type": "tasks",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"attributes": {
"inserted_at": "2019-08-24T14:15:22Z",
"completed_at": "2019-08-24T14:15:22Z",
"name": "string",
"state": "available",
"result": None,
"task_args": None,
"metadata": None,
},
}
}
def attach_task_202_examples(result, generator, request, public): # noqa: F841
if not isinstance(result, dict):
return result
components = result.get("components", {}) or {}
task_resp_component = _pick_task_response_component(components)
task_example = _extract_task_example_from_components(components)
paths = result.get("paths", {}) or {}
for path_item in paths.values():
if not isinstance(path_item, dict):
continue
for method_obj in path_item.values():
if not isinstance(method_obj, dict):
continue
responses = method_obj.get("responses", {}) or {}
resp_202 = responses.get("202")
if not isinstance(resp_202, dict):
continue
content = resp_202.get("content", {}) or {}
jsonapi = content.get("application/vnd.api+json")
if not isinstance(jsonapi, dict):
continue
# Inject example if missing
if "examples" not in jsonapi and "example" not in jsonapi:
jsonapi["examples"] = {
"Task queued": {
"summary": "Task queued",
"value": task_example,
}
}
# Rewrite schema $ref if needed
if task_resp_component:
schema = jsonapi.get("schema")
must_replace = False
if not isinstance(schema, dict):
must_replace = True
else:
ref = schema.get("$ref")
if not ref:
must_replace = True
else:
current = ref.split("/")[-1]
if current != task_resp_component:
must_replace = True
if must_replace:
jsonapi["schema"] = {
"$ref": f"#/components/schemas/{task_resp_component}"
}
return result
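# A postprocessing hook like this is wired up through drf-spectacular settings;
# the dotted path below is an assumption for illustration:
#
#     SPECTACULAR_SETTINGS = {
#         "POSTPROCESSING_HOOKS": ["api.specs.attach_task_202_examples"],
#     }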
@@ -1,12 +1,12 @@
from celery import states
from celery.signals import before_task_publish
from config.celery import celery_app
from django.db.models.signals import post_delete
from django.db.models.signals import post_delete, pre_delete
from django.dispatch import receiver
from django_celery_results.backends.database import DatabaseBackend
from api.db_utils import delete_related_daily_task
from api.models import Provider
from api.models import Membership, Provider, TenantAPIKey, User
def create_task_result_on_publish(sender=None, headers=None, **kwargs): # noqa: F841
@@ -32,3 +32,27 @@ before_task_publish.connect(
def delete_provider_scan_task(sender, instance, **kwargs): # noqa: F841
# Delete the associated periodic task when the provider is deleted
delete_related_daily_task(instance.id)
@receiver(pre_delete, sender=User)
def revoke_user_api_keys(sender, instance, **kwargs): # noqa: F841
"""
Revoke all API keys associated with a user before deletion.
The entity field will be set to NULL by on_delete=SET_NULL,
but we explicitly revoke the keys to prevent further use.
"""
TenantAPIKey.objects.filter(entity=instance).update(revoked=True)
@receiver(post_delete, sender=Membership)
def revoke_membership_api_keys(sender, instance, **kwargs): # noqa: F841
"""
Revoke all API keys when a user is removed from a tenant.
When a membership is deleted, all API keys created by that user
in that tenant should be revoked to prevent further access.
"""
TenantAPIKey.objects.filter(
entity=instance.user, tenant_id=instance.tenant.id
).update(revoked=True)
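# Sketch of the flow these receivers guarantee (mirrors the API tests below;
# illustrative only):
#
#     Membership.objects.get(user=user, tenant=tenant).delete()  # fires post_delete
#     api_key.refresh_from_db()
#     assert api_key.revoked is True  # subsequent Api-Key auth returns 401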
File diff suppressed because it is too large
@@ -1,9 +1,14 @@
import time
from datetime import datetime, timedelta, timezone
from uuid import uuid4
import pytest
from conftest import TEST_PASSWORD, get_api_tokens, get_authorization_header
from django.urls import reverse
from drf_simple_apikey.crypto import get_crypto
from rest_framework.test import APIClient
from api.models import Membership, User
from api.models import Membership, Role, TenantAPIKey, User, UserRoleRelationship
@pytest.mark.django_db
@@ -298,3 +303,706 @@ class TestTokenSwitchTenant:
assert invalid_tenant_response.json()["errors"][0]["detail"] == (
"Tenant does not exist or user is not a " "member."
)
@pytest.mark.django_db
class TestAPIKeyAuthentication:
def test_successful_authentication_with_api_key(
self, create_test_user, tenants_fixture, api_keys_fixture
):
"""Verify API key can authenticate and access protected endpoints."""
client = APIClient()
api_key = api_keys_fixture[0]
# Use API key to authenticate and access protected endpoint
api_key_headers = get_api_key_header(api_key._raw_key)
response = client.get(reverse("provider-list"), headers=api_key_headers)
assert response.status_code == 200
assert "data" in response.json()
def test_api_key_one_time_display_on_creation(
self, create_test_user_rbac, tenants_fixture
):
"""Verify full key only returned on creation, subsequent retrieval shows prefix only."""
client = APIClient()
# Authenticate with JWT to create API key
access_token, _ = get_api_tokens(
client, create_test_user_rbac.email, TEST_PASSWORD
)
jwt_headers = get_authorization_header(access_token)
# Create API key
api_key_name = "Test One-Time Key"
create_response = client.post(
reverse("api-key-list"),
data={
"data": {
"type": "api-keys",
"attributes": {
"name": api_key_name,
},
}
},
format="vnd.api+json",
headers=jwt_headers,
)
assert create_response.status_code == 201
created_data = create_response.json()["data"]
api_key_id = created_data["id"]
# Verify full key is present in creation response
assert "api_key" in created_data["attributes"]
full_key = created_data["attributes"]["api_key"]
assert full_key.startswith("pk_")
assert "." in full_key
# Retrieve the same API key
retrieve_response = client.get(
reverse("api-key-detail", kwargs={"pk": api_key_id}),
headers=jwt_headers,
)
assert retrieve_response.status_code == 200
retrieved_data = retrieve_response.json()["data"]
# Verify full key is NOT present in retrieval response
assert "api_key" not in retrieved_data["attributes"]
# Only prefix should be visible
assert "prefix" in retrieved_data["attributes"]
assert retrieved_data["attributes"]["prefix"].startswith("pk_")
def test_last_used_at_tracking(
self, create_test_user, tenants_fixture, api_keys_fixture
):
"""Verify last_used_at timestamp updates on each authentication."""
client = APIClient()
api_key = api_keys_fixture[0]
# Verify initially last_used_at is None
assert api_key.last_used_at is None
# Use API key to authenticate
api_key_headers = get_api_key_header(api_key._raw_key)
first_response = client.get(reverse("provider-list"), headers=api_key_headers)
assert first_response.status_code == 200
# Reload from database and check last_used_at is set
api_key.refresh_from_db()
first_used_at = api_key.last_used_at
assert first_used_at is not None
# Use the same key again after a small delay
time.sleep(0.1)
second_response = client.get(reverse("provider-list"), headers=api_key_headers)
assert second_response.status_code == 200
# Reload and verify last_used_at was updated
api_key.refresh_from_db()
second_used_at = api_key.last_used_at
assert second_used_at is not None
assert second_used_at > first_used_at
@pytest.mark.django_db
class TestAPIKeyErrors:
def test_invalid_api_key_format_missing_separator(
self, create_test_user, tenants_fixture
):
"""Malformed key without . separator."""
client = APIClient()
# Create malformed key without separator
malformed_key = "pk_12345678abcdefgh"
api_key_headers = get_api_key_header(malformed_key)
response = client.get(reverse("provider-list"), headers=api_key_headers)
assert response.status_code == 401
assert "Invalid API Key." in response.json()["errors"][0]["detail"]
def test_invalid_api_key_format_malformed(self, create_test_user, tenants_fixture):
"""Completely invalid format."""
client = APIClient()
# Various malformed keys
malformed_keys = [
"invalid_key",
"Bearer some_token",
"",
"pk_.",
".encrypted_part",
]
for malformed_key in malformed_keys:
api_key_headers = get_api_key_header(malformed_key)
response = client.get(reverse("provider-list"), headers=api_key_headers)
assert response.status_code == 401
assert "Invalid API Key." in response.json()["errors"][0]["detail"]
def test_expired_api_key_rejected(self, create_test_user, tenants_fixture):
"""Key past expiry date returns 401."""
client = APIClient()
# Create API key with past expiry date
expired_key, raw_key = TenantAPIKey.objects.create_api_key(
name="Expired Key",
tenant_id=tenants_fixture[0].id,
entity=create_test_user,
expiry_date=datetime.now(timezone.utc) - timedelta(days=1),
)
api_key_headers = get_api_key_header(raw_key)
response = client.get(reverse("provider-list"), headers=api_key_headers)
assert response.status_code == 401
assert "API Key has already expired." in response.json()["errors"][0]["detail"]
def test_revoked_api_key_rejected(
self, create_test_user, tenants_fixture, api_keys_fixture
):
"""Revoked key returns 401."""
client = APIClient()
# Use the revoked key from fixture
revoked_key = api_keys_fixture[2]
assert revoked_key.revoked is True
api_key_headers = get_api_key_header(revoked_key._raw_key)
response = client.get(reverse("provider-list"), headers=api_key_headers)
assert response.status_code == 401
assert "API Key has been revoked." in response.json()["errors"][0]["detail"]
def test_non_existent_api_key(self, create_test_user, tenants_fixture):
"""Key UUID doesn't exist in database."""
client = APIClient()
# Create a valid-looking key with non-existent UUID
crypto = get_crypto()
fake_uuid = str(uuid4())
fake_expiry = (datetime.now(timezone.utc) + timedelta(days=30)).timestamp()
payload = {"_pk": fake_uuid, "_exp": fake_expiry}
encrypted_payload = crypto.generate(payload)
fake_key = f"pk_fakepfx.{encrypted_payload}"
api_key_headers = get_api_key_header(fake_key)
response = client.get(reverse("provider-list"), headers=api_key_headers)
assert response.status_code == 401
assert (
"No entity matching this api key." in response.json()["errors"][0]["detail"]
)
def test_corrupted_payload(self, create_test_user, tenants_fixture):
"""Tampered/corrupted encrypted payload."""
client = APIClient()
# Create key with corrupted encrypted portion
corrupted_key = "pk_12345678.corrupted_encrypted_data_here"
api_key_headers = get_api_key_header(corrupted_key)
response = client.get(reverse("provider-list"), headers=api_key_headers)
assert response.status_code == 401
assert "Invalid API Key." in response.json()["errors"][0]["detail"]
@pytest.mark.django_db
class TestAPIKeyTenantIsolation:
def test_api_key_tenant_isolation(
self, create_test_user, tenants_fixture, api_keys_fixture
):
"""User in tenant A cannot use API key from tenant B."""
client = APIClient()
# Create a second user in a different tenant
second_user = User.objects.create_user(
name="second_user",
email="second_user@prowler.com",
password="Test_password@1",
)
second_tenant = tenants_fixture[1]
Membership.objects.create(user=second_user, tenant=second_tenant)
# Create and assign role to second_user
second_role = Role.objects.create(
tenant_id=second_tenant.id,
name="Second Tenant Role",
unlimited_visibility=True,
manage_account=True,
)
UserRoleRelationship.objects.create(
user=second_user,
role=second_role,
tenant_id=second_tenant.id,
)
# Create API key for second user in second tenant
second_key, raw_key = TenantAPIKey.objects.create_api_key(
name="Second Tenant Key",
tenant_id=second_tenant.id,
entity=second_user,
)
# First user's API key from first tenant
first_key = api_keys_fixture[0]
# Verify both keys are from different tenants
assert first_key.tenant_id != second_key.tenant_id
# Each key should only access resources in its own tenant
# This is enforced by RLS at the database level
first_headers = get_api_key_header(first_key._raw_key)
second_headers = get_api_key_header(raw_key)
# Both should work for their respective tenants
first_response = client.get(reverse("provider-list"), headers=first_headers)
assert first_response.status_code == 200
second_response = client.get(reverse("provider-list"), headers=second_headers)
assert second_response.status_code == 200
# Verify tenant context is correct in each response
# The responses should contain only data for their respective tenants
def test_api_key_filters_by_tenant(
self, create_test_user, tenants_fixture, api_keys_fixture
):
"""List endpoint only shows keys for current tenant."""
client = APIClient()
# Create JWT token for first tenant
access_token, _ = get_api_tokens(client, create_test_user.email, TEST_PASSWORD)
jwt_headers = get_authorization_header(access_token)
# List API keys
list_response = client.get(reverse("api-key-list"), headers=jwt_headers)
assert list_response.status_code == 200
keys_data = list_response.json()["data"]
# Verify all returned keys belong to the current tenant
for key_data in keys_data:
            # tenant_id is not exposed in the response, but all keys come from the
            # fixtures, which are created in the first tenant
assert key_data["type"] == "api-keys"
# Count should match the number of non-revoked keys in api_keys_fixture for this tenant
# api_keys_fixture creates 3 keys (1 normal, 1 with expiry, 1 revoked)
assert len(keys_data) == 3
def test_api_key_revoked_when_user_removed_from_tenant(self, tenants_fixture):
"""When user membership is deleted, all user's API keys for that tenant are revoked."""
client = APIClient()
tenant = tenants_fixture[0]
# Create a fresh user for this test
test_user = User.objects.create_user(
name="test_membership_removal",
email="membership_removal@prowler.com",
password=TEST_PASSWORD,
)
# Create membership between user and tenant
Membership.objects.create(
user=test_user,
tenant=tenant,
role=Membership.RoleChoices.OWNER,
)
# Create role with manage_account permission
role = Role.objects.create(
tenant_id=tenant.id,
name="Membership Removal Role",
unlimited_visibility=True,
manage_account=True,
)
# Assign role to user
UserRoleRelationship.objects.create(
user=test_user,
role=role,
tenant_id=tenant.id,
)
# Create API key for this user in this tenant
api_key, raw_key = TenantAPIKey.objects.create_api_key(
name="Test Key for Membership Removal",
tenant_id=tenant.id,
entity=test_user,
)
# Verify API key works initially
api_key_headers = get_api_key_header(raw_key)
initial_response = client.get(reverse("provider-list"), headers=api_key_headers)
assert initial_response.status_code == 200
# Store API key ID for later verification
api_key_id = api_key.id
# Remove user from tenant by deleting membership
Membership.objects.filter(user=test_user, tenant=tenant).delete()
# Reload API key from database
api_key.refresh_from_db()
# Verify API key still exists in database
assert TenantAPIKey.objects.filter(id=api_key_id).exists()
# Verify API key is now revoked
assert api_key.revoked is True
# Verify authentication with this API key now fails with 401
auth_response = client.get(reverse("provider-list"), headers=api_key_headers)
assert auth_response.status_code == 401
# Verify error message indicates revocation
response_json = auth_response.json()
assert "errors" in response_json
error_detail = response_json["errors"][0]["detail"]
assert "revoked" in error_detail.lower()
@pytest.mark.django_db
class TestAPIKeyLifecycle:
def test_create_api_key(self, create_test_user_rbac, tenants_fixture):
"""Create via POST with name and optional expiry."""
client = APIClient()
# Authenticate with JWT
access_token, _ = get_api_tokens(
client, create_test_user_rbac.email, TEST_PASSWORD
)
jwt_headers = get_authorization_header(access_token)
# Create API key without expiry
key_name = "Test Lifecycle Key"
create_response = client.post(
reverse("api-key-list"),
data={
"data": {
"type": "api-keys",
"attributes": {
"name": key_name,
},
}
},
format="vnd.api+json",
headers=jwt_headers,
)
assert create_response.status_code == 201
created_data = create_response.json()["data"]
assert created_data["attributes"]["name"] == key_name
assert "api_key" in created_data["attributes"]
assert "prefix" in created_data["attributes"]
assert created_data["attributes"]["revoked"] is False
# Create API key with expiry
future_expiry = (datetime.now(timezone.utc) + timedelta(days=90)).isoformat()
create_with_expiry_response = client.post(
reverse("api-key-list"),
data={
"data": {
"type": "api-keys",
"attributes": {
"name": "Key with Expiry",
"expires_at": future_expiry,
},
}
},
format="vnd.api+json",
headers=jwt_headers,
)
assert create_with_expiry_response.status_code == 201
expiry_data = create_with_expiry_response.json()["data"]
assert expiry_data["attributes"]["expires_at"] is not None
def test_update_api_key_name_only(
self, create_test_user, tenants_fixture, api_keys_fixture
):
"""PATCH only allows name changes."""
client = APIClient()
# Authenticate with JWT
access_token, _ = get_api_tokens(client, create_test_user.email, TEST_PASSWORD)
jwt_headers = get_authorization_header(access_token)
api_key = api_keys_fixture[0]
new_name = "Updated API Key Name"
# Update name
update_response = client.patch(
reverse("api-key-detail", kwargs={"pk": api_key.id}),
data={
"data": {
"type": "api-keys",
"id": str(api_key.id),
"attributes": {
"name": new_name,
},
}
},
format="vnd.api+json",
headers=jwt_headers,
)
assert update_response.status_code == 200
updated_data = update_response.json()["data"]
assert updated_data["attributes"]["name"] == new_name
# Verify name was actually updated in database
api_key.refresh_from_db()
assert api_key.name == new_name
# Verify other fields remain unchanged
assert api_key.prefix == updated_data["attributes"]["prefix"]
assert api_key.revoked is False
def test_delete_api_key(self, create_test_user, tenants_fixture, api_keys_fixture):
"""DELETE revokes key (sets revoked=True)."""
client = APIClient()
# Authenticate with JWT
access_token, _ = get_api_tokens(client, create_test_user.email, TEST_PASSWORD)
jwt_headers = get_authorization_header(access_token)
api_key = api_keys_fixture[1]
api_key_id = api_key.id
# Revoke API key using the revoke endpoint
revoke_response = client.delete(
reverse("api-key-revoke", kwargs={"pk": api_key_id}),
headers=jwt_headers,
)
assert revoke_response.status_code == 200
# Verify key still exists but is revoked
api_key.refresh_from_db()
assert api_key.revoked is True
# Verify revoked key can no longer authenticate
api_key_headers = get_api_key_header(api_key._raw_key)
auth_response = client.get(reverse("provider-list"), headers=api_key_headers)
assert auth_response.status_code == 401
def test_multiple_keys_per_user(self, create_test_user_rbac, tenants_fixture):
"""User can have multiple active keys."""
client = APIClient()
# Authenticate with JWT
access_token, _ = get_api_tokens(
client, create_test_user_rbac.email, TEST_PASSWORD
)
jwt_headers = get_authorization_header(access_token)
# Create multiple API keys
key_names = ["Key One", "Key Two", "Key Three"]
created_keys = []
for name in key_names:
create_response = client.post(
reverse("api-key-list"),
data={
"data": {
"type": "api-keys",
"attributes": {
"name": name,
},
}
},
format="vnd.api+json",
headers=jwt_headers,
)
assert create_response.status_code == 201
created_keys.append(create_response.json()["data"])
# Verify all keys were created
assert len(created_keys) == 3
# List all keys and verify count
list_response = client.get(reverse("api-key-list"), headers=jwt_headers)
assert list_response.status_code == 200
# Should include the 3 new keys plus the ones from api_keys_fixture
keys_list = list_response.json()["data"]
assert len(keys_list) >= 3
# Verify each created key can authenticate independently
for key_data in created_keys:
full_key = key_data["attributes"]["api_key"]
api_key_headers = get_api_key_header(full_key)
auth_response = client.get(
reverse("provider-list"), headers=api_key_headers
)
assert auth_response.status_code == 200
def test_api_key_becomes_invalid_when_user_deleted(self, tenants_fixture):
"""When user is deleted, API key entity is set to None and authentication fails."""
client = APIClient()
tenant = tenants_fixture[0]
# Create a fresh user for this test to avoid affecting other tests
test_user = User.objects.create_user(
name="test_deletion_user",
email="deletion_test@prowler.com",
password=TEST_PASSWORD,
)
Membership.objects.create(
user=test_user,
tenant=tenant,
role=Membership.RoleChoices.OWNER,
)
# Create role for the user
role = Role.objects.create(
tenant_id=tenant.id,
name="Deletion Test Role",
unlimited_visibility=True,
manage_account=True,
)
UserRoleRelationship.objects.create(
user=test_user,
role=role,
tenant_id=tenant.id,
)
# Create API key for this user
api_key, raw_key = TenantAPIKey.objects.create_api_key(
name="Test Key for Deletion",
tenant_id=tenant.id,
entity=test_user,
)
# Verify the API key works initially
api_key_headers = get_api_key_header(raw_key)
initial_response = client.get(reverse("provider-list"), headers=api_key_headers)
assert initial_response.status_code == 200
# Store the API key ID for later verification
api_key_id = api_key.id
# Delete the user
test_user.delete()
# Reload the API key from database
api_key.refresh_from_db()
# Verify the API key still exists in database (not cascade deleted)
assert TenantAPIKey.objects.filter(id=api_key_id).exists()
        # Verify entity field is now None (on_delete=SET_NULL, not a cascade delete)
assert api_key.entity is None
# Verify authentication with this API key now fails
auth_response = client.get(reverse("provider-list"), headers=api_key_headers)
# Must return 401 Unauthorized, not 500 Internal Server Error
assert auth_response.status_code == 401, (
f"Expected 401 but got {auth_response.status_code}: "
f"{auth_response.json()}"
)
# Verify error message is present
response_json = auth_response.json()
assert "errors" in response_json
error_detail = response_json["errors"][0]["detail"]
# The error should indicate authentication failed due to invalid/orphaned key
assert (
"API Key" in error_detail
or "Invalid" in error_detail
or "entity" in error_detail.lower()
)
@pytest.mark.django_db
class TestCombinedAuthentication:
def test_jwt_takes_priority_over_api_key(
self, create_test_user, tenants_fixture, api_keys_fixture
):
"""When Bearer token present, JWT is used."""
client = APIClient()
# Get JWT token
access_token, _ = get_api_tokens(client, create_test_user.email, TEST_PASSWORD)
        # A single Authorization header cannot carry both a Bearer (JWT) token and
        # an API key, so we verify that a Bearer token is authenticated as JWT
jwt_headers = {"Authorization": f"Bearer {access_token}"}
response = client.get(reverse("provider-list"), headers=jwt_headers)
assert response.status_code == 200
# The authentication should have used JWT, not API key
# We can verify this worked as JWT authentication
def test_api_key_header_format_validation(
self, create_test_user, tenants_fixture, api_keys_fixture
):
"""Verify Authorization: Api-Key <key> format."""
client = APIClient()
api_key = api_keys_fixture[0]
# Correct format
correct_headers = {"Authorization": f"Api-Key {api_key._raw_key}"}
correct_response = client.get(reverse("provider-list"), headers=correct_headers)
assert correct_response.status_code == 200
# Wrong format - using Bearer instead of Api-Key
wrong_format_headers = {"Authorization": f"Bearer {api_key._raw_key}"}
wrong_response = client.get(
reverse("provider-list"), headers=wrong_format_headers
)
# Should fail because it tries to parse as JWT
assert wrong_response.status_code == 401
# Wrong format - missing Api-Key prefix
no_prefix_headers = {"Authorization": api_key._raw_key}
no_prefix_response = client.get(
reverse("provider-list"), headers=no_prefix_headers
)
assert no_prefix_response.status_code == 401
def test_concurrent_api_key_usage(
self, create_test_user, tenants_fixture, api_keys_fixture
):
"""Same key can be used multiple times concurrently."""
client = APIClient()
api_key = api_keys_fixture[0]
api_key_headers = get_api_key_header(api_key._raw_key)
# Make multiple concurrent requests with the same key
responses = []
for _ in range(5):
response = client.get(reverse("provider-list"), headers=api_key_headers)
responses.append(response)
# All requests should succeed
for response in responses:
assert response.status_code == 200
# Verify last_used_at was updated
api_key.refresh_from_db()
assert api_key.last_used_at is not None
def get_api_key_header(api_key: str) -> dict:
"""Helper to create API key authorization header."""
return {"Authorization": f"Api-Key {api_key}"}
@@ -0,0 +1,152 @@
import os
from pathlib import Path
from unittest.mock import MagicMock
import pytest
from django.conf import settings
import api.apps as api_apps_module
from api.apps import (
ApiConfig,
PRIVATE_KEY_FILE,
PUBLIC_KEY_FILE,
SIGNING_KEY_ENV,
VERIFYING_KEY_ENV,
)
@pytest.fixture(autouse=True)
def reset_keys_initialized(monkeypatch):
"""Ensure per-test clean state for the module-level guard flag."""
monkeypatch.setattr(api_apps_module, "_keys_initialized", False, raising=False)
def _stub_keys():
return (
"""-----BEGIN PRIVATE KEY-----\nPRIVATE\n-----END PRIVATE KEY-----\n""",
"""-----BEGIN PUBLIC KEY-----\nPUBLIC\n-----END PUBLIC KEY-----\n""",
)
def test_generate_jwt_keys_when_missing(monkeypatch, tmp_path):
# Arrange: isolate FS, env, and settings; force generation path
monkeypatch.setattr(
api_apps_module, "KEYS_DIRECTORY", Path(tmp_path), raising=False
)
monkeypatch.delenv(SIGNING_KEY_ENV, raising=False)
monkeypatch.delenv(VERIFYING_KEY_ENV, raising=False)
# Work on a copy of SIMPLE_JWT to avoid mutating the global settings dict for other tests
monkeypatch.setattr(
settings, "SIMPLE_JWT", settings.SIMPLE_JWT.copy(), raising=False
)
monkeypatch.setattr(settings, "TESTING", False, raising=False)
# Avoid dependency on the cryptography package
monkeypatch.setattr(ApiConfig, "_generate_jwt_keys", staticmethod(_stub_keys))
config = ApiConfig("api", api_apps_module)
# Act
config._ensure_crypto_keys()
# Assert: files created with expected content
priv_path = Path(tmp_path) / PRIVATE_KEY_FILE
pub_path = Path(tmp_path) / PUBLIC_KEY_FILE
assert priv_path.is_file()
assert pub_path.is_file()
assert priv_path.read_text() == _stub_keys()[0]
assert pub_path.read_text() == _stub_keys()[1]
# Env vars and Django settings updated
assert os.environ[SIGNING_KEY_ENV] == _stub_keys()[0]
assert os.environ[VERIFYING_KEY_ENV] == _stub_keys()[1]
assert settings.SIMPLE_JWT["SIGNING_KEY"] == _stub_keys()[0]
assert settings.SIMPLE_JWT["VERIFYING_KEY"] == _stub_keys()[1]
def test_ensure_crypto_keys_are_idempotent_within_process(monkeypatch, tmp_path):
# Arrange
monkeypatch.setattr(
api_apps_module, "KEYS_DIRECTORY", Path(tmp_path), raising=False
)
monkeypatch.delenv(SIGNING_KEY_ENV, raising=False)
monkeypatch.delenv(VERIFYING_KEY_ENV, raising=False)
monkeypatch.setattr(
settings, "SIMPLE_JWT", settings.SIMPLE_JWT.copy(), raising=False
)
monkeypatch.setattr(settings, "TESTING", False, raising=False)
mock_generate = MagicMock(side_effect=_stub_keys)
monkeypatch.setattr(ApiConfig, "_generate_jwt_keys", staticmethod(mock_generate))
config = ApiConfig("api", api_apps_module)
# Act: first call should generate, second should be a no-op (guard flag)
config._ensure_crypto_keys()
config._ensure_crypto_keys()
# Assert: generation occurred exactly once
assert mock_generate.call_count == 1
def test_ensure_jwt_keys_uses_existing_files(monkeypatch, tmp_path):
# Arrange: pre-create key files
monkeypatch.setattr(
api_apps_module, "KEYS_DIRECTORY", Path(tmp_path), raising=False
)
monkeypatch.setattr(
settings, "SIMPLE_JWT", settings.SIMPLE_JWT.copy(), raising=False
)
existing_private, existing_public = _stub_keys()
(Path(tmp_path) / PRIVATE_KEY_FILE).write_text(existing_private)
(Path(tmp_path) / PUBLIC_KEY_FILE).write_text(existing_public)
# If generation were called, fail the test
def _fail_generate():
raise AssertionError("_generate_jwt_keys should not be called when files exist")
monkeypatch.setattr(ApiConfig, "_generate_jwt_keys", staticmethod(_fail_generate))
config = ApiConfig("api", api_apps_module)
# Act: call the lower-level method directly to set env/settings from files
config._ensure_jwt_keys()
# Assert
# _read_key_file() strips trailing newlines; environment/settings should reflect stripped content
assert os.environ[SIGNING_KEY_ENV] == existing_private.strip()
assert os.environ[VERIFYING_KEY_ENV] == existing_public.strip()
assert settings.SIMPLE_JWT["SIGNING_KEY"] == existing_private.strip()
assert settings.SIMPLE_JWT["VERIFYING_KEY"] == existing_public.strip()
def test_ensure_crypto_keys_skips_when_env_vars(monkeypatch, tmp_path):
# Arrange: put values in env so the orchestrator doesn't generate
monkeypatch.setattr(
api_apps_module, "KEYS_DIRECTORY", Path(tmp_path), raising=False
)
monkeypatch.setenv(SIGNING_KEY_ENV, "ENV-PRIVATE")
monkeypatch.setenv(VERIFYING_KEY_ENV, "ENV-PUBLIC")
monkeypatch.setattr(
settings, "SIMPLE_JWT", settings.SIMPLE_JWT.copy(), raising=False
)
monkeypatch.setattr(settings, "TESTING", False, raising=False)
called = {"ensure": False}
def _track_call():
called["ensure"] = True
return _stub_keys()
monkeypatch.setattr(ApiConfig, "_generate_jwt_keys", staticmethod(_track_call))
config = ApiConfig("api", api_apps_module)
# Act
config._ensure_crypto_keys()
# Assert: orchestrator did not trigger generation when env present
assert called["ensure"] is False
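# Decision order implied by these tests (illustrative pseudocode; the real
# methods live in api.apps and may differ):
#
#     def _ensure_crypto_keys(self):
#         if _keys_initialized or settings.TESTING:
#             return  # per-process guard / test short-circuit
#         if os.environ.get(SIGNING_KEY_ENV) and os.environ.get(VERIFYING_KEY_ENV):
#             return  # env vars win; nothing is generated
#         self._ensure_jwt_keys()  # reuse existing key files or generate new ones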
@@ -239,6 +239,7 @@ class TestCompliance:
Framework="Framework 1",
Version="1.0",
Description="Description of compliance1",
Name="Compliance 1",
)
prowler_compliance = {"aws": {"compliance1": compliance1}}
@@ -248,6 +249,7 @@ class TestCompliance:
"aws": {
"compliance1": {
"framework": "Framework 1",
"name": "Compliance 1",
"version": "1.0",
"provider": "aws",
"description": "Description of compliance1",
@@ -11,6 +11,7 @@ from api.db_utils import (
batch_delete,
create_objects_in_batches,
enum_to_choices,
generate_api_key_prefix,
generate_random_token,
one_week_from_now,
update_objects_in_batches,
@@ -313,3 +314,28 @@ class TestUpdateObjectsInBatches:
qs = Provider.objects.filter(tenant=tenant, uid__endswith="_upd")
assert qs.count() == total
class TestGenerateApiKeyPrefix:
def test_prefix_format(self):
"""Test that generated prefix starts with 'pk_'."""
prefix = generate_api_key_prefix()
assert prefix.startswith("pk_")
def test_prefix_length(self):
"""Test that prefix has correct length (pk_ + 8 random chars = 11)."""
prefix = generate_api_key_prefix()
assert len(prefix) == 11
def test_prefix_uniqueness(self):
"""Test that multiple generations produce unique prefixes."""
prefixes = {generate_api_key_prefix() for _ in range(100)}
assert len(prefixes) == 100
def test_prefix_character_set(self):
"""Test that random part uses only allowed characters."""
allowed_chars = "23456789ABCDEFGHJKMNPQRSTVWXYZ"
for _ in range(50):
prefix = generate_api_key_prefix()
random_part = prefix[3:] # Strip 'pk_'
assert all(char in allowed_chars for char in random_part)
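# A generate_api_key_prefix sketch consistent with these tests (assumption; the
# real implementation lives in api.db_utils):
#
#     import secrets
#
#     _ALPHABET = "23456789ABCDEFGHJKMNPQRSTVWXYZ"  # skips ambiguous 0/O, 1/I/L, U
#
#     def generate_api_key_prefix() -> str:
#         return "pk_" + "".join(secrets.choice(_ALPHABET) for _ in range(8))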
@@ -24,6 +24,7 @@ def test_api_logging_middleware_logging(mock_logger):
mock_extract_auth_info.return_value = {
"user_id": "user123",
"tenant_id": "tenant456",
"api_key_prefix": "pk_test",
}
with patch("api.middleware.logging.getLogger") as mock_get_logger:
@@ -44,6 +45,7 @@ def test_api_logging_middleware_logging(mock_logger):
expected_extra = {
"user_id": "user123",
"tenant_id": "tenant456",
"api_key_prefix": "pk_test",
"method": "GET",
"path": "/test-path",
"query_params": {"param1": "value1", "param2": "value2"},
@@ -1,3 +1,4 @@
import json
from unittest.mock import ANY, Mock, patch
import pytest
@@ -151,6 +152,221 @@ class TestUserViewSet:
assert response.status_code == status.HTTP_200_OK
assert response.json()["data"]["attributes"]["email"] == "rbac_limited@rbac.com"
def test_me_shows_own_roles_and_memberships_without_manage_account(
self, authenticated_client_no_permissions_rbac
):
response = authenticated_client_no_permissions_rbac.get(reverse("user-me"))
assert response.status_code == status.HTTP_200_OK
rels = response.json()["data"]["relationships"]
# Self should see own roles and memberships even without manage_account
assert isinstance(rels["roles"]["data"], list)
assert rels["memberships"]["meta"]["count"] == 1
def test_me_shows_roles_and_memberships_with_manage_account(
self, authenticated_client_rbac
):
response = authenticated_client_rbac.get(reverse("user-me"))
assert response.status_code == status.HTTP_200_OK
rels = response.json()["data"]["relationships"]
# Roles should have data when manage_account is True
assert len(rels["roles"]["data"]) > 0
# Memberships should be present and count > 0
assert rels["memberships"]["meta"]["count"] > 0
def test_me_include_roles_and_memberships_included_block(
self, authenticated_client_rbac
):
# Request current user info including roles and memberships
response = authenticated_client_rbac.get(
reverse("user-me"), {"include": "roles,memberships"}
)
assert response.status_code == status.HTTP_200_OK
payload = response.json()
# Included must contain memberships corresponding to relationships data
rel_memberships = payload["data"]["relationships"]["memberships"]
ids_in_relationship = {item["id"] for item in rel_memberships["data"]}
included = payload["included"]
included_membership_ids = {
item["id"] for item in included if item["type"] == "memberships"
}
# If there are memberships in relationships, they must be present in included
if ids_in_relationship:
assert ids_in_relationship.issubset(included_membership_ids)
else:
# At minimum, included should contain the user's membership when requested
# (count should align with meta count)
assert rel_memberships["meta"]["count"] == len(included_membership_ids)
def test_list_users_with_manage_account_only_forbidden(
self, authenticated_client_rbac_manage_account
):
response = authenticated_client_rbac_manage_account.get(reverse("user-list"))
assert response.status_code == status.HTTP_403_FORBIDDEN
def test_retrieve_other_user_with_manage_account_only_forbidden(
self, authenticated_client_rbac_manage_account, create_test_user
):
response = authenticated_client_rbac_manage_account.get(
reverse("user-detail", kwargs={"pk": create_test_user.id})
)
assert response.status_code == status.HTTP_403_FORBIDDEN
def test_list_users_with_manage_users_only_hides_relationships(
self, authenticated_client_rbac_manage_users_only
):
# Ensure there is at least one other user in the same tenant
mu_user = authenticated_client_rbac_manage_users_only.user
mu_membership = Membership.objects.filter(user=mu_user).first()
tenant = mu_membership.tenant
other_user = User.objects.create_user(
name="other_in_tenant",
email="other_in_tenant@rbac.com",
password="Password123@",
)
Membership.objects.create(user=other_user, tenant=tenant)
response = authenticated_client_rbac_manage_users_only.get(reverse("user-list"))
assert response.status_code == status.HTTP_200_OK
data = response.json()["data"]
assert isinstance(data, list)
current_user_id = str(mu_user.id)
assert any(item["id"] == current_user_id for item in data)
for item in data:
rels = item["relationships"]
if item["id"] == current_user_id:
# Self should see own relationships
assert isinstance(rels["roles"]["data"], list)
assert rels["memberships"]["meta"].get("count", 0) >= 1
else:
# Others should be hidden without manage_account
assert rels["roles"]["data"] == []
assert rels["memberships"]["data"] == []
assert rels["memberships"]["meta"]["count"] == 0
def test_include_roles_hidden_without_manage_account(
self, authenticated_client_rbac_manage_users_only
):
# Arrange: ensure another user in the same tenant with its own role
mu_user = authenticated_client_rbac_manage_users_only.user
mu_membership = Membership.objects.filter(user=mu_user).first()
tenant = mu_membership.tenant
other_user = User.objects.create_user(
name="other_in_tenant_inc",
email="other_in_tenant_inc@rbac.com",
password="Password123@",
)
Membership.objects.create(user=other_user, tenant=tenant)
other_role = Role.objects.create(
name="other_inc_role",
tenant_id=tenant.id,
manage_users=False,
manage_account=False,
)
UserRoleRelationship.objects.create(
user=other_user, role=other_role, tenant_id=tenant.id
)
response = authenticated_client_rbac_manage_users_only.get(
reverse("user-list"), {"include": "roles"}
)
assert response.status_code == status.HTTP_200_OK
payload = response.json()
# Assert: included must not contain the other user's role
included = payload.get("included", [])
included_role_ids = {
item["id"] for item in included if item.get("type") == "roles"
}
assert str(other_role.id) not in included_role_ids
# Relationships for other user should be empty
for item in payload["data"]:
if item["id"] == str(other_user.id):
rels = item["relationships"]
assert rels["roles"]["data"] == []
def test_include_roles_visible_with_manage_account(
self, authenticated_client_rbac, tenants_fixture
):
# Arrange: another user in tenant[0] with its role
tenant = tenants_fixture[0]
other_user = User.objects.create_user(
name="other_with_role",
email="other_with_role@rbac.com",
password="Password123@",
)
Membership.objects.create(user=other_user, tenant=tenant)
other_role = Role.objects.create(
name="other_visible_role",
tenant_id=tenant.id,
manage_users=False,
manage_account=False,
)
UserRoleRelationship.objects.create(
user=other_user, role=other_role, tenant_id=tenant.id
)
response = authenticated_client_rbac.get(
reverse("user-list"), {"include": "roles"}
)
assert response.status_code == status.HTTP_200_OK
payload = response.json()
# Assert: included must contain the other user's role
included = payload.get("included", [])
included_role_ids = {
item["id"] for item in included if item.get("type") == "roles"
}
assert str(other_role.id) in included_role_ids
def test_retrieve_user_with_manage_users_only_hides_relationships(
self, authenticated_client_rbac_manage_users_only
):
# Create a target user in the same tenant to ensure visibility
mu_user = authenticated_client_rbac_manage_users_only.user
mu_membership = Membership.objects.filter(user=mu_user).first()
tenant = mu_membership.tenant
target_user = User.objects.create_user(
name="target_same_tenant",
email="target_same_tenant@rbac.com",
password="Password123@",
)
Membership.objects.create(user=target_user, tenant=tenant)
response = authenticated_client_rbac_manage_users_only.get(
reverse("user-detail", kwargs={"pk": target_user.id})
)
assert response.status_code == status.HTTP_200_OK
rels = response.json()["data"]["relationships"]
assert rels["roles"]["data"] == []
assert rels["memberships"]["data"] == []
assert rels["memberships"]["meta"]["count"] == 0
def test_list_users_with_all_permissions_shows_relationships(
self, authenticated_client_rbac
):
response = authenticated_client_rbac.get(reverse("user-list"))
assert response.status_code == status.HTTP_200_OK
data = response.json()["data"]
assert isinstance(data, list)
rels = data[0]["relationships"]
assert len(rels["roles"]["data"]) >= 0
assert rels["memberships"]["meta"]["count"] >= 0
@pytest.mark.django_db
class TestProviderViewSet:
@@ -494,3 +710,123 @@ class TestLimitedVisibility:
assert response.status_code == status.HTTP_200_OK
assert len(response.json()["data"]) == 0
@pytest.mark.django_db
class TestRolePermissions:
def test_role_create_with_manage_account_only_allowed(
self, authenticated_client_rbac_manage_account
):
data = {
"data": {
"type": "roles",
"attributes": {
"name": "Role Manage Account Only",
"manage_users": "false",
"manage_account": "true",
"manage_providers": "false",
"manage_scans": "false",
"unlimited_visibility": "false",
},
"relationships": {"provider_groups": {"data": []}},
}
}
response = authenticated_client_rbac_manage_account.post(
reverse("role-list"),
data=json.dumps(data),
content_type="application/vnd.api+json",
)
assert response.status_code == status.HTTP_201_CREATED
def test_role_create_with_manage_users_only_forbidden(
self, authenticated_client_rbac_manage_users_only
):
data = {
"data": {
"type": "roles",
"attributes": {
"name": "Role Manage Users Only",
"manage_users": "true",
"manage_account": "false",
"manage_providers": "false",
"manage_scans": "false",
"unlimited_visibility": "false",
},
"relationships": {"provider_groups": {"data": []}},
}
}
response = authenticated_client_rbac_manage_users_only.post(
reverse("role-list"),
data=json.dumps(data),
content_type="application/vnd.api+json",
)
assert response.status_code == status.HTTP_403_FORBIDDEN
@pytest.mark.django_db
class TestUserRoleLinkPermissions:
def test_link_user_roles_with_manage_account_only_allowed(
self, authenticated_client_rbac_manage_account
):
# Arrange: create a second user in the same tenant as the manage_account user
ma_user = authenticated_client_rbac_manage_account.user
ma_membership = Membership.objects.filter(user=ma_user).first()
tenant = ma_membership.tenant
user2 = User.objects.create_user(
name="target_user",
email="target_user_ma@rbac.com",
password="Password123@",
)
Membership.objects.create(user=user2, tenant=tenant)
# Create a role in the same tenant
role = Role.objects.create(
name="linkable_role",
tenant_id=tenant.id,
manage_users=False,
manage_account=False,
)
data = {"data": [{"type": "roles", "id": str(role.id)}]}
# Act
response = authenticated_client_rbac_manage_account.post(
reverse("user-roles-relationship", kwargs={"pk": user2.id}),
data=data,
content_type="application/vnd.api+json",
)
# Assert
assert response.status_code == status.HTTP_204_NO_CONTENT
def test_link_user_roles_with_manage_users_only_forbidden(
self, authenticated_client_rbac_manage_users_only
):
mu_user = authenticated_client_rbac_manage_users_only.user
mu_membership = Membership.objects.filter(user=mu_user).first()
tenant = mu_membership.tenant
user2 = User.objects.create_user(
name="target_user2",
email="target_user_mu@rbac.com",
password="Password123@",
)
Membership.objects.create(user=user2, tenant=tenant)
role = Role.objects.create(
name="linkable_role_mu",
tenant_id=tenant.id,
manage_users=False,
manage_account=False,
)
data = {"data": [{"type": "roles", "id": str(role.id)}]}
response = authenticated_client_rbac_manage_users_only.post(
reverse("user-roles-relationship", kwargs={"pk": user2.id}),
data=data,
content_type="application/vnd.api+json",
)
assert response.status_code == status.HTTP_403_FORBIDDEN
@@ -0,0 +1,100 @@
import pytest
from rest_framework.exceptions import ValidationError
from api.v1.serializer_utils.integrations import S3ConfigSerializer
class TestS3ConfigSerializer:
"""Test cases for S3ConfigSerializer validation."""
def test_validate_output_directory_valid_paths(self):
"""Test that valid output directory paths are accepted."""
serializer = S3ConfigSerializer()
# Test normal paths
assert serializer.validate_output_directory("test") == "test"
assert serializer.validate_output_directory("test/folder") == "test/folder"
assert serializer.validate_output_directory("my-folder_123") == "my-folder_123"
# Test paths with leading slashes (should be normalized)
assert serializer.validate_output_directory("/test") == "test"
assert serializer.validate_output_directory("/test/folder") == "test/folder"
# Test paths with excessive slashes (should be normalized)
assert serializer.validate_output_directory("///test") == "test"
assert serializer.validate_output_directory("///////test") == "test"
assert serializer.validate_output_directory("test//folder") == "test/folder"
assert serializer.validate_output_directory("test///folder") == "test/folder"
def test_validate_output_directory_empty_values(self):
"""Test that empty values raise validation errors."""
serializer = S3ConfigSerializer()
with pytest.raises(
ValidationError, match="Output directory cannot be empty or just"
):
serializer.validate_output_directory(".")
with pytest.raises(
ValidationError, match="Output directory cannot be empty or just"
):
serializer.validate_output_directory("/")
def test_validate_output_directory_invalid_characters(self):
"""Test that invalid characters are rejected."""
serializer = S3ConfigSerializer()
invalid_chars = ["<", ">", ":", '"', "|", "?", "*"]
for char in invalid_chars:
with pytest.raises(
ValidationError, match="Output directory contains invalid characters"
):
serializer.validate_output_directory(f"test{char}folder")
def test_validate_output_directory_too_long(self):
"""Test that paths that are too long are rejected."""
serializer = S3ConfigSerializer()
# Create a path longer than 900 characters
long_path = "a" * 901
with pytest.raises(ValidationError, match="Output directory path is too long"):
serializer.validate_output_directory(long_path)
def test_validate_output_directory_edge_cases(self):
"""Test edge cases for output directory validation."""
serializer = S3ConfigSerializer()
# Test path at the limit (900 characters)
path_at_limit = "a" * 900
assert serializer.validate_output_directory(path_at_limit) == path_at_limit
# Test complex normalization
assert serializer.validate_output_directory("//test/../folder//") == "folder"
assert serializer.validate_output_directory("/test/./folder/") == "test/folder"
def test_s3_config_serializer_full_validation(self):
"""Test the full S3ConfigSerializer with valid data."""
data = {
"bucket_name": "my-test-bucket",
"output_directory": "///////test", # This should be normalized
}
serializer = S3ConfigSerializer(data=data)
assert serializer.is_valid()
validated_data = serializer.validated_data
assert validated_data["bucket_name"] == "my-test-bucket"
assert validated_data["output_directory"] == "test" # Normalized
def test_s3_config_serializer_invalid_data(self):
"""Test the full S3ConfigSerializer with invalid data."""
data = {
"bucket_name": "my-test-bucket",
"output_directory": "test<invalid", # Contains invalid character
}
serializer = S3ConfigSerializer(data=data)
assert not serializer.is_valid()
assert "output_directory" in serializer.errors
@@ -6,16 +6,18 @@ from rest_framework.exceptions import NotFound, ValidationError
from api.db_router import MainRouter
from api.exceptions import InvitationTokenExpiredException
from api.models import Invitation, Provider
from api.models import Integration, Invitation, Provider
from api.utils import (
get_prowler_provider_kwargs,
initialize_prowler_provider,
merge_dicts,
prowler_integration_connection_test,
prowler_provider_connection_test,
return_prowler_provider,
validate_invitation,
)
from prowler.providers.aws.aws_provider import AwsProvider
from prowler.providers.aws.lib.security_hub.security_hub import SecurityHubConnection
from prowler.providers.azure.azure_provider import AzureProvider
from prowler.providers.gcp.gcp_provider import GcpProvider
from prowler.providers.kubernetes.kubernetes_provider import KubernetesProvider
@@ -197,6 +199,10 @@ class TestGetProwlerProviderKwargs:
Provider.ProviderChoices.M365.value,
{},
),
(
Provider.ProviderChoices.GITHUB.value,
{"organizations": ["provider_uid"]},
),
],
)
def test_get_prowler_provider_kwargs(self, provider_type, expected_extra_kwargs):
@@ -390,3 +396,258 @@ class TestValidateInvitation:
mock_db.get.assert_called_once_with(
token="VALID_TOKEN", email__iexact="user@example.com"
)
class TestProwlerIntegrationConnectionTest:
"""Test prowler_integration_connection_test function for SecurityHub regions reset."""
@patch("api.utils.SecurityHub")
def test_security_hub_connection_failure_resets_regions(
self, mock_security_hub_class
):
"""Test that SecurityHub connection failure resets regions to empty dict."""
# Create integration with existing regions configuration
integration = MagicMock()
integration.integration_type = Integration.IntegrationChoices.AWS_SECURITY_HUB
integration.credentials = {
"aws_access_key_id": "test_key",
"aws_secret_access_key": "test_secret",
}
integration.configuration = {
"send_only_fails": True,
"regions": {
"us-east-1": True,
"us-west-2": True,
"eu-west-1": False,
"ap-south-1": False,
},
}
# Mock provider relationship
mock_provider = MagicMock()
mock_provider.uid = "123456789012"
mock_relationship = MagicMock()
mock_relationship.provider = mock_provider
integration.integrationproviderrelationship_set.first.return_value = (
mock_relationship
)
# Mock failed SecurityHub connection
mock_connection = SecurityHubConnection(
is_connected=False,
error=Exception("SecurityHub testing"),
enabled_regions=set(),
disabled_regions=set(),
)
mock_security_hub_class.test_connection.return_value = mock_connection
# Call the function
result = prowler_integration_connection_test(integration)
# Assertions
assert result.is_connected is False
assert str(result.error) == "SecurityHub testing"
# Verify regions were completely reset to empty dict
assert integration.configuration["regions"] == {}
# Verify save was called to persist the change
integration.save.assert_called_once()
# Verify test_connection was called with correct parameters
mock_security_hub_class.test_connection.assert_called_once_with(
aws_account_id="123456789012",
raise_on_exception=False,
aws_access_key_id="test_key",
aws_secret_access_key="test_secret",
)
@patch("api.utils.SecurityHub")
def test_security_hub_connection_success_saves_regions(
self, mock_security_hub_class
):
"""Test that successful SecurityHub connection saves regions correctly."""
integration = MagicMock()
integration.integration_type = Integration.IntegrationChoices.AWS_SECURITY_HUB
integration.credentials = {
"aws_access_key_id": "valid_key",
"aws_secret_access_key": "valid_secret",
}
integration.configuration = {"send_only_fails": False}
# Mock provider relationship
mock_provider = MagicMock()
mock_provider.uid = "123456789012"
mock_relationship = MagicMock()
mock_relationship.provider = mock_provider
integration.integrationproviderrelationship_set.first.return_value = (
mock_relationship
)
# Mock successful SecurityHub connection with regions
mock_connection = SecurityHubConnection(
is_connected=True,
error=None,
enabled_regions={"us-east-1", "eu-west-1"},
disabled_regions={"ap-south-1"},
)
mock_security_hub_class.test_connection.return_value = mock_connection
result = prowler_integration_connection_test(integration)
assert result.is_connected is True
# Verify regions were saved correctly
assert integration.configuration["regions"]["us-east-1"] is True
assert integration.configuration["regions"]["eu-west-1"] is True
assert integration.configuration["regions"]["ap-south-1"] is False
integration.save.assert_called_once()
@patch("api.utils.rls_transaction")
@patch("api.utils.Jira")
def test_jira_connection_success_basic_auth(
self, mock_jira_class, mock_rls_transaction
):
integration = MagicMock()
integration.integration_type = Integration.IntegrationChoices.JIRA
integration.tenant_id = "test-tenant-id"
integration.credentials = {
"user_mail": "test@example.com",
"api_token": "test_api_token",
"domain": "example.atlassian.net",
}
integration.configuration = {}
# Mock successful JIRA connection with projects
mock_connection = MagicMock()
mock_connection.is_connected = True
mock_connection.error = None
mock_connection.projects = {"PROJ1": "Project 1", "PROJ2": "Project 2"}
mock_jira_class.test_connection.return_value = mock_connection
# Mock rls_transaction context manager
mock_rls_transaction.return_value.__enter__ = MagicMock()
mock_rls_transaction.return_value.__exit__ = MagicMock()
result = prowler_integration_connection_test(integration)
assert result.is_connected is True
assert result.error is None
# Verify JIRA connection was called with correct parameters including domain from credentials
mock_jira_class.test_connection.assert_called_once_with(
user_mail="test@example.com",
api_token="test_api_token",
domain="example.atlassian.net",
raise_on_exception=False,
)
# Verify rls_transaction was called with correct tenant_id
mock_rls_transaction.assert_called_once_with("test-tenant-id")
# Verify projects were saved to integration configuration
assert integration.configuration["projects"] == {
"PROJ1": "Project 1",
"PROJ2": "Project 2",
}
# Verify integration.save() was called
integration.save.assert_called_once()
@patch("api.utils.rls_transaction")
@patch("api.utils.Jira")
def test_jira_connection_failure_invalid_credentials(
self, mock_jira_class, mock_rls_transaction
):
integration = MagicMock()
integration.integration_type = Integration.IntegrationChoices.JIRA
integration.tenant_id = "test-tenant-id"
integration.credentials = {
"user_mail": "invalid@example.com",
"api_token": "invalid_token",
"domain": "invalid.atlassian.net",
}
integration.configuration = {}
# Mock failed JIRA connection
mock_connection = MagicMock()
mock_connection.is_connected = False
mock_connection.error = Exception("Authentication failed: Invalid credentials")
mock_connection.projects = {} # Empty projects when connection fails
mock_jira_class.test_connection.return_value = mock_connection
# Mock rls_transaction context manager
mock_rls_transaction.return_value.__enter__ = MagicMock()
mock_rls_transaction.return_value.__exit__ = MagicMock()
result = prowler_integration_connection_test(integration)
assert result.is_connected is False
assert "Authentication failed: Invalid credentials" in str(result.error)
# Verify JIRA connection was called with correct parameters
mock_jira_class.test_connection.assert_called_once_with(
user_mail="invalid@example.com",
api_token="invalid_token",
domain="invalid.atlassian.net",
raise_on_exception=False,
)
# Verify rls_transaction was called even on failure
mock_rls_transaction.assert_called_once_with("test-tenant-id")
# Verify empty projects dict was saved to integration configuration
assert integration.configuration["projects"] == {}
# Verify integration.save() was called even on connection failure
integration.save.assert_called_once()
@patch("api.utils.rls_transaction")
@patch("api.utils.Jira")
def test_jira_connection_projects_update_with_existing_configuration(
self, mock_jira_class, mock_rls_transaction
):
"""Test that projects are properly updated when integration already has configuration data"""
integration = MagicMock()
integration.integration_type = Integration.IntegrationChoices.JIRA
integration.tenant_id = "test-tenant-id"
integration.credentials = {
"user_mail": "test@example.com",
"api_token": "test_api_token",
"domain": "example.atlassian.net",
}
integration.configuration = {
"issue_types": ["Task"], # Existing configuration
"projects": {"OLD_PROJ": "Old Project"}, # Will be overwritten
}
# Mock successful JIRA connection with new projects
mock_connection = MagicMock()
mock_connection.is_connected = True
mock_connection.error = None
mock_connection.projects = {
"NEW_PROJ1": "New Project 1",
"NEW_PROJ2": "New Project 2",
}
mock_jira_class.test_connection.return_value = mock_connection
# Mock rls_transaction context manager
mock_rls_transaction.return_value.__enter__ = MagicMock()
mock_rls_transaction.return_value.__exit__ = MagicMock()
result = prowler_integration_connection_test(integration)
assert result.is_connected is True
assert result.error is None
# Verify projects were updated (old projects replaced with new ones)
assert integration.configuration["projects"] == {
"NEW_PROJ1": "New Project 1",
"NEW_PROJ2": "New Project 2",
}
# Verify other configuration fields were preserved
assert integration.configuration["issue_types"] == ["Task"]
# Verify integration.save() was called
integration.save.assert_called_once()
File diff suppressed because it is too large
+118 -6
@@ -6,13 +6,18 @@ from django.db.models import Subquery
from rest_framework.exceptions import NotFound, ValidationError
from api.db_router import MainRouter
from api.db_utils import rls_transaction
from api.exceptions import InvitationTokenExpiredException
from api.models import Invitation, Processor, Provider, Resource
from api.models import Integration, Invitation, Processor, Provider, Resource
from api.v1.serializers import FindingMetadataSerializer
from prowler.lib.outputs.jira.jira import Jira, JiraBasicAuthError
from prowler.providers.aws.aws_provider import AwsProvider
from prowler.providers.aws.lib.s3.s3 import S3
from prowler.providers.aws.lib.security_hub.security_hub import SecurityHub
from prowler.providers.azure.azure_provider import AzureProvider
from prowler.providers.common.models import Connection
from prowler.providers.gcp.gcp_provider import GcpProvider
from prowler.providers.github.github_provider import GithubProvider
from prowler.providers.kubernetes.kubernetes_provider import KubernetesProvider
from prowler.providers.m365.m365_provider import M365Provider
@@ -55,14 +60,21 @@ def merge_dicts(default_dict: dict, replacement_dict: dict) -> dict:
def return_prowler_provider(
provider: Provider,
) -> [AwsProvider | AzureProvider | GcpProvider | KubernetesProvider | M365Provider]:
) -> [
AwsProvider
| AzureProvider
| GcpProvider
| GithubProvider
| KubernetesProvider
| M365Provider
]:
"""Return the Prowler provider class based on the given provider type.
Args:
provider (Provider): The provider object containing the provider type and associated secrets.
Returns:
AwsProvider | AzureProvider | GcpProvider | KubernetesProvider | M365Provider: The corresponding provider class.
AwsProvider | AzureProvider | GcpProvider | GithubProvider | KubernetesProvider | M365Provider: The corresponding provider class.
Raises:
ValueError: If the provider type specified in `provider.provider` is not supported.
@@ -78,6 +90,8 @@ def return_prowler_provider(
prowler_provider = KubernetesProvider
case Provider.ProviderChoices.M365.value:
prowler_provider = M365Provider
case Provider.ProviderChoices.GITHUB.value:
prowler_provider = GithubProvider
case _:
raise ValueError(f"Provider type {provider.provider} not supported")
return prowler_provider
@@ -108,6 +122,12 @@ def get_prowler_provider_kwargs(
}
elif provider.provider == Provider.ProviderChoices.KUBERNETES.value:
prowler_provider_kwargs = {**prowler_provider_kwargs, "context": provider.uid}
elif provider.provider == Provider.ProviderChoices.GITHUB.value:
if provider.uid:
prowler_provider_kwargs = {
**prowler_provider_kwargs,
"organizations": [provider.uid],
}
if mutelist_processor:
mutelist_content = mutelist_processor.configuration.get("Mutelist", {})
@@ -120,7 +140,14 @@ def get_prowler_provider_kwargs(
def initialize_prowler_provider(
provider: Provider,
mutelist_processor: Processor | None = None,
) -> AwsProvider | AzureProvider | GcpProvider | KubernetesProvider | M365Provider:
) -> (
AwsProvider
| AzureProvider
| GcpProvider
| GithubProvider
| KubernetesProvider
| M365Provider
):
"""Initialize a Prowler provider instance based on the given provider type.
Args:
@@ -128,8 +155,8 @@ def initialize_prowler_provider(
mutelist_processor (Processor): The mutelist processor object containing the mutelist configuration.
Returns:
AwsProvider | AzureProvider | GcpProvider | KubernetesProvider | M365Provider: An instance of the corresponding provider class
(`AwsProvider`, `AzureProvider`, `GcpProvider`, `KubernetesProvider` or `M365Provider`) initialized with the
AwsProvider | AzureProvider | GcpProvider | GithubProvider | KubernetesProvider | M365Provider: An instance of the corresponding provider class
(`AwsProvider`, `AzureProvider`, `GcpProvider`, `GithubProvider`, `KubernetesProvider` or `M365Provider`) initialized with the
provider's secrets.
"""
prowler_provider = return_prowler_provider(provider)
@@ -158,6 +185,77 @@ def prowler_provider_connection_test(provider: Provider) -> Connection:
)
def prowler_integration_connection_test(integration: Integration) -> Connection:
"""Test the connection to a Prowler integration based on the given integration type.
Args:
integration (Integration): The integration object containing the integration type and associated credentials.
Returns:
Connection: A connection object representing the result of the connection test for the specified integration.
"""
if integration.integration_type == Integration.IntegrationChoices.AMAZON_S3:
return S3.test_connection(
**integration.credentials,
bucket_name=integration.configuration["bucket_name"],
raise_on_exception=False,
)
# TODO: The connection test could be unified across all integrations, but that needs refactoring
# to avoid code duplication. The AWS integrations are already similar, so SecurityHub and S3 could
# be unified with some changes to the SDK.
elif (
integration.integration_type == Integration.IntegrationChoices.AWS_SECURITY_HUB
):
# Get the provider associated with this integration
provider_relationship = integration.integrationproviderrelationship_set.first()
if not provider_relationship:
return Connection(
is_connected=False, error="No provider associated with this integration"
)
credentials = (
integration.credentials
if integration.credentials
else provider_relationship.provider.secret.secret
)
connection = SecurityHub.test_connection(
aws_account_id=provider_relationship.provider.uid,
raise_on_exception=False,
**credentials,
)
# Only save regions if connection is successful
if connection.is_connected:
regions_status = {r: True for r in connection.enabled_regions}
regions_status.update({r: False for r in connection.disabled_regions})
# Save regions information in the integration configuration
integration.configuration["regions"] = regions_status
integration.save()
else:
# Reset regions information if connection fails
integration.configuration["regions"] = {}
integration.save()
return connection
elif integration.integration_type == Integration.IntegrationChoices.JIRA:
jira_connection = Jira.test_connection(
**integration.credentials,
raise_on_exception=False,
)
project_keys = jira_connection.projects if jira_connection.is_connected else {}
with rls_transaction(str(integration.tenant_id)):
integration.configuration["projects"] = project_keys
integration.save()
return jira_connection
elif integration.integration_type == Integration.IntegrationChoices.SLACK:
pass
else:
raise ValueError(
f"Integration type {integration.integration_type} not supported"
)
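# Illustrative sketch (not part of the diff): the Security Hub branch above
# stores a per-region status map on success; the region names are placeholders.
regions_status_example = {"eu-west-1": True, "us-east-1": False}
# Enabled regions map to True and disabled regions to False; a failed test
# resets integration.configuration["regions"] to {} before returning.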
def validate_invitation(
invitation_token: str, email: str, raise_not_found=False
) -> Invitation:
@@ -249,3 +347,17 @@ def get_findings_metadata_no_aggregations(tenant_id: str, filtered_queryset):
serializer.is_valid(raise_exception=True)
return serializer.data
def initialize_prowler_integration(integration: Integration) -> Jira:
# TODO Refactor other integrations to use this function
if integration.integration_type == Integration.IntegrationChoices.JIRA:
try:
return Jira(**integration.credentials)
except JiraBasicAuthError as jira_auth_error:
with rls_transaction(str(integration.tenant_id)):
integration.configuration["projects"] = {}
integration.connected = False
integration.connection_last_checked_at = datetime.now(tz=timezone.utc)
integration.save()
raise jira_auth_error
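A minimal usage sketch for the function above, assuming a stored Integration row with JIRA credentials; the helper name is hypothetical and the import paths follow the api.utils module shown in this diff:
from api.models import Integration
from api.utils import initialize_prowler_integration
from prowler.lib.outputs.jira.jira import JiraBasicAuthError

def get_jira_client_or_none(integration: Integration):
    """Return a Jira client, or None when the stored credentials are stale."""
    try:
        return initialize_prowler_integration(integration)
    except JiraBasicAuthError:
        # Before re-raising, the function already cleared the cached projects,
        # marked the integration as disconnected and saved it.
        return None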
@@ -1,3 +1,6 @@
import os
import re
from drf_spectacular.utils import extend_schema_field
from rest_framework_json_api import serializers
@@ -6,7 +9,70 @@ from api.v1.serializer_utils.base import BaseValidateSerializer
class S3ConfigSerializer(BaseValidateSerializer):
bucket_name = serializers.CharField()
output_directory = serializers.CharField()
output_directory = serializers.CharField(allow_blank=True)
def validate_output_directory(self, value):
"""
Validate the output_directory field to ensure it's a properly formatted path.
Prevents paths with excessive slashes like "///////test".
If empty, sets a default value.
"""
# If empty or None, set default value
if not value:
return "output"
# Normalize the path to remove excessive slashes
normalized_path = os.path.normpath(value)
# Remove leading slashes for S3 paths
if normalized_path.startswith("/"):
normalized_path = normalized_path.lstrip("/")
# Check for invalid characters or patterns
if re.search(r'[<>:"|?*]', normalized_path):
raise serializers.ValidationError(
'Output directory contains invalid characters. Avoid: < > : " | ? *'
)
# Check for empty path after normalization
if not normalized_path or normalized_path == ".":
raise serializers.ValidationError(
"Output directory cannot be empty or just '.' or '/'."
)
# Check for paths that are too long (S3 key limit is 1024 characters, leave some room for filename)
if len(normalized_path) > 900:
raise serializers.ValidationError(
"Output directory path is too long (max 900 characters)."
)
return normalized_path
class Meta:
resource_name = "integrations"
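# A standalone sketch (not part of the serializer) mirroring how
# validate_output_directory above treats representative inputs (POSIX paths):
import os

assert os.path.normpath("///////test").lstrip("/") == "test"  # slashes collapsed
assert os.path.normpath("reports//2025/") == "reports/2025"   # trailing slash dropped
# "" falls back to "output"; names containing < > : " | ? * are rejected.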
class SecurityHubConfigSerializer(BaseValidateSerializer):
send_only_fails = serializers.BooleanField(default=False)
archive_previous_findings = serializers.BooleanField(default=False)
regions = serializers.DictField(default=dict, read_only=True)
def to_internal_value(self, data):
validated_data = super().to_internal_value(data)
# Always initialize regions as empty dict
validated_data["regions"] = {}
return validated_data
class Meta:
resource_name = "integrations"
class JiraConfigSerializer(BaseValidateSerializer):
domain = serializers.CharField(read_only=True)
issue_types = serializers.ListField(
read_only=True, child=serializers.CharField(), default=["Task"]
)
projects = serializers.DictField(read_only=True)
class Meta:
resource_name = "integrations"
@@ -27,6 +93,15 @@ class AWSCredentialSerializer(BaseValidateSerializer):
resource_name = "integrations"
class JiraCredentialSerializer(BaseValidateSerializer):
user_mail = serializers.EmailField(required=True)
api_token = serializers.CharField(required=True)
domain = serializers.CharField(required=True)
class Meta:
resource_name = "integrations"
@extend_schema_field(
{
"oneOf": [
@@ -78,6 +153,27 @@ class AWSCredentialSerializer(BaseValidateSerializer):
},
},
},
{
"type": "object",
"title": "JIRA Credentials",
"properties": {
"user_mail": {
"type": "string",
"format": "email",
"description": "The email address of the JIRA user account.",
},
"api_token": {
"type": "string",
"description": "The API token for authentication with JIRA. This can be generated from your "
"Atlassian account settings.",
},
"domain": {
"type": "string",
"description": "The JIRA domain/instance URL (e.g., 'your-domain.atlassian.net').",
},
},
"required": ["user_mail", "api_token", "domain"],
},
]
}
)
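# A credentials payload sketch that satisfies the JIRA schema above
# (all values are placeholders):
jira_credentials_example = {
    "user_mail": "integration-bot@example.com",
    "api_token": "<atlassian-api-token>",
    "domain": "example.atlassian.net",
}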
@@ -98,10 +194,40 @@ class IntegrationCredentialField(serializers.JSONField):
},
"output_directory": {
"type": "string",
"description": "The directory path within the bucket where files will be saved.",
"description": "The directory path within the bucket where files will be saved. Optional - "
'defaults to "output" if not provided. Path will be normalized to remove '
'excessive slashes and invalid characters are not allowed (< > : " | ? *). '
"Maximum length is 900 characters.",
"maxLength": 900,
"pattern": '^[^<>:"|?*]+$',
"default": "output",
},
},
"required": ["bucket_name", "output_directory"],
"required": ["bucket_name"],
},
{
"type": "object",
"title": "AWS Security Hub",
"properties": {
"send_only_fails": {
"type": "boolean",
"default": False,
"description": "If true, only findings with status 'FAIL' will be sent to Security Hub.",
},
"archive_previous_findings": {
"type": "boolean",
"default": False,
"description": "If true, archives findings that are not present in the current execution.",
},
},
},
{
"type": "object",
"title": "JIRA",
"description": "JIRA integration does not accept any configuration in the payload. Leave it as an "
"empty JSON object (`{}`).",
"properties": {},
"additionalProperties": False,
},
]
}
@@ -176,6 +176,43 @@ from rest_framework_json_api import serializers
},
"required": ["kubeconfig_content"],
},
{
"type": "object",
"title": "GitHub Personal Access Token",
"properties": {
"personal_access_token": {
"type": "string",
"description": "GitHub personal access token for authentication.",
}
},
"required": ["personal_access_token"],
},
{
"type": "object",
"title": "GitHub OAuth App Token",
"properties": {
"oauth_app_token": {
"type": "string",
"description": "GitHub OAuth App token for authentication.",
}
},
"required": ["oauth_app_token"],
},
{
"type": "object",
"title": "GitHub App Credentials",
"properties": {
"github_app_id": {
"type": "integer",
"description": "GitHub App ID for authentication.",
},
"github_app_key": {
"type": "string",
"description": "Path to the GitHub App private key file.",
},
},
"required": ["github_app_id", "github_app_key"],
},
]
}
)
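Placeholder payloads matching each of the three GitHub credential shapes documented above:
github_pat_example = {"personal_access_token": "<ghp_xxx>"}
github_oauth_example = {"oauth_app_token": "<gho_xxx>"}
github_app_example = {
    "github_app_id": 123456,
    "github_app_key": "/path/to/private-key.pem",
}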
+464 -17
@@ -15,6 +15,8 @@ from rest_framework_simplejwt.exceptions import TokenError
from rest_framework_simplejwt.serializers import TokenObtainPairSerializer
from rest_framework_simplejwt.tokens import RefreshToken
from api.db_router import MainRouter
from api.exceptions import ConflictException
from api.models import (
Finding,
Integration,
@@ -37,6 +39,7 @@ from api.models import (
StateChoices,
StatusChoices,
Task,
TenantAPIKey,
User,
UserRoleRelationship,
)
@@ -45,7 +48,10 @@ from api.v1.serializer_utils.integrations import (
AWSCredentialSerializer,
IntegrationConfigField,
IntegrationCredentialField,
JiraConfigSerializer,
JiraCredentialSerializer,
S3ConfigSerializer,
SecurityHubConfigSerializer,
)
from api.v1.serializer_utils.processors import ProcessorConfigField
from api.v1.serializer_utils.providers import ProviderSecretField
@@ -255,8 +261,15 @@ class UserSerializer(BaseSerializerV1):
Serializer for the User model.
"""
memberships = serializers.ResourceRelatedField(many=True, read_only=True)
roles = serializers.ResourceRelatedField(many=True, read_only=True)
# We use SerializerMethodResourceRelatedField so includes (e.g. ?include=roles)
# respect RBAC and do not leak relationships of other users when the requester
# lacks manage_account. The visibility logic lives in get_roles/get_memberships.
memberships = SerializerMethodResourceRelatedField(
many=True, read_only=True, source="memberships", method_name="get_memberships"
)
roles = SerializerMethodResourceRelatedField(
many=True, read_only=True, source="roles", method_name="get_roles"
)
class Meta:
model = User
@@ -274,9 +287,35 @@ class UserSerializer(BaseSerializerV1):
}
included_serializers = {
"roles": "api.v1.serializers.RoleSerializer",
"roles": "api.v1.serializers.RoleIncludeSerializer",
"memberships": "api.v1.serializers.MembershipIncludeSerializer",
}
def _can_view_relationships(self, instance) -> bool:
"""Allow self to view own relationships. Require manage_account to view others."""
role = self.context.get("role")
request = self.context.get("request")
is_self = bool(
request
and getattr(request, "user", None)
and getattr(instance, "id", None) == request.user.id
)
return is_self or (role and role.manage_account)
def get_roles(self, instance):
return (
instance.roles.all()
if self._can_view_relationships(instance)
else Role.objects.none()
)
def get_memberships(self, instance):
return (
instance.memberships.all()
if self._can_view_relationships(instance)
else Membership.objects.none()
)
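# A distilled sketch (hypothetical standalone form) of the visibility rule
# that _can_view_relationships implements above:
def can_view_relationships(is_self: bool, manage_account: bool) -> bool:
    return is_self or manage_account

assert can_view_relationships(True, False)       # users always see their own
assert can_view_relationships(False, True)       # manage_account sees everyone's
assert not can_view_relationships(False, False)  # others get empty lists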
class UserCreateSerializer(BaseWriteSerializer):
password = serializers.CharField(write_only=True)
@@ -384,6 +423,34 @@ class UserRoleRelationshipSerializer(RLSSerializer, BaseWriteSerializer):
roles = Role.objects.filter(id__in=role_ids)
tenant_id = self.context.get("tenant_id")
# Safeguard: A tenant must always have at least one user with MANAGE_ACCOUNT.
# If the target roles do NOT include MANAGE_ACCOUNT, and the current user is
# the only one in the tenant with MANAGE_ACCOUNT, block the update.
target_includes_manage_account = roles.filter(manage_account=True).exists()
if not target_includes_manage_account:
# Check if any other user has MANAGE_ACCOUNT
other_users_have_manage_account = (
UserRoleRelationship.objects.filter(
tenant_id=tenant_id, role__manage_account=True
)
.exclude(user_id=instance.id)
.exists()
)
# Check if the current user has MANAGE_ACCOUNT
instance_has_manage_account = instance.roles.filter(
tenant_id=tenant_id, manage_account=True
).exists()
# If the current user is the last holder of MANAGE_ACCOUNT, prevent removal
if instance_has_manage_account and not other_users_have_manage_account:
raise serializers.ValidationError(
{
"roles": "At least one user in the tenant must retain MANAGE_ACCOUNT. "
"Assign MANAGE_ACCOUNT to another user before removing it here."
}
)
instance.roles.clear()
new_relationships = [
UserRoleRelationship(user=instance, role=r, tenant_id=tenant_id)
@@ -498,6 +565,12 @@ class TenantSerializer(BaseSerializerV1):
fields = ["id", "name", "memberships"]
class TenantIncludeSerializer(BaseSerializerV1):
class Meta:
model = Tenant
fields = ["id", "name"]
# Memberships
@@ -519,6 +592,29 @@ class MembershipSerializer(serializers.ModelSerializer):
fields = ["id", "user", "tenant", "role", "date_joined"]
class MembershipIncludeSerializer(serializers.ModelSerializer):
"""
Include-oriented Membership serializer that enables including tenant objects with names
without altering the base MembershipSerializer behavior.
"""
role = MemberRoleEnumSerializerField()
user = serializers.ResourceRelatedField(read_only=True)
tenant = SerializerMethodResourceRelatedField(read_only=True, source="tenant")
class Meta:
model = Membership
fields = ["id", "user", "tenant", "role", "date_joined"]
included_serializers = {"tenant": "api.v1.serializers.TenantIncludeSerializer"}
def get_tenant(self, instance):
try:
return Tenant.objects.using(MainRouter.admin_db).get(id=instance.tenant_id)
except Tenant.DoesNotExist:
return None
# Provider Groups
class ProviderGroupSerializer(RLSSerializer, BaseWriteSerializer):
providers = serializers.ResourceRelatedField(
@@ -813,6 +909,17 @@ class ProviderCreateSerializer(RLSSerializer, BaseWriteSerializer):
"uid",
# "scanner_args"
]
extra_kwargs = {
"alias": {
"help_text": "Human readable name to identify the provider, e.g. 'Production AWS Account', 'Dev Environment'",
},
"provider": {
"help_text": "Type of provider to create.",
},
"uid": {
"help_text": "Unique identifier for the provider, set by the provider, e.g. AWS account ID, Azure subscription ID, GCP project ID, etc.",
},
}
class ProviderUpdateSerializer(BaseWriteSerializer):
@@ -827,6 +934,11 @@ class ProviderUpdateSerializer(BaseWriteSerializer):
"alias",
# "scanner_args"
]
extra_kwargs = {
"alias": {
"help_text": "Human readable name to identify the provider, e.g. 'Production AWS Account', 'Dev Environment'",
}
}
# Scans
@@ -1217,6 +1329,8 @@ class BaseWriteProviderSecretSerializer(BaseWriteSerializer):
serializer = AzureProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.GCP.value:
serializer = GCPProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.GITHUB.value:
serializer = GithubProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.KUBERNETES.value:
serializer = KubernetesProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.M365.value:
@@ -1296,6 +1410,16 @@ class KubernetesProviderSecret(serializers.Serializer):
resource_name = "provider-secrets"
class GithubProviderSecret(serializers.Serializer):
personal_access_token = serializers.CharField(required=False)
oauth_app_token = serializers.CharField(required=False)
github_app_id = serializers.IntegerField(required=False)
github_app_key_content = serializers.CharField(required=False)
class Meta:
resource_name = "provider-secrets"
class AWSRoleAssumptionProviderSecret(serializers.Serializer):
role_arn = serializers.CharField()
external_id = serializers.CharField()
@@ -1662,6 +1786,37 @@ class RoleUpdateSerializer(RoleSerializer):
if "users" in validated_data:
users = validated_data.pop("users")
# Prevent a user from removing their own role assignment via Role update
request = self.context.get("request")
if request and getattr(request, "user", None):
request_user = request.user
is_currently_assigned = instance.users.filter(
id=request_user.id
).exists()
will_be_assigned = any(u.id == request_user.id for u in users)
if is_currently_assigned and not will_be_assigned:
raise serializers.ValidationError(
{"users": "Users cannot remove their own role."}
)
# Safeguard MANAGE_ACCOUNT coverage when updating users of this role
if instance.manage_account:
# Existing MANAGE_ACCOUNT assignments on other roles within the tenant
other_ma_exists = (
UserRoleRelationship.objects.filter(
tenant_id=tenant_id, role__manage_account=True
)
.exclude(role_id=instance.id)
.exists()
)
if not other_ma_exists and len(users) == 0:
raise serializers.ValidationError(
{
"users": "At least one user in the tenant must retain MANAGE_ACCOUNT. "
"Assign this MANAGE_ACCOUNT role to at least one user or ensure another user has it."
}
)
instance.users.clear()
through_model_instances = [
UserRoleRelationship(
@@ -1676,6 +1831,37 @@ class RoleUpdateSerializer(RoleSerializer):
return super().update(instance, validated_data)
class RoleIncludeSerializer(RLSSerializer):
permission_state = serializers.SerializerMethodField()
def get_permission_state(self, obj) -> str:
return obj.permission_state
class Meta:
model = Role
fields = [
"id",
"name",
"manage_users",
"manage_account",
# Disable for the first release
# "manage_billing",
# /Disable for the first release
"manage_integrations",
"manage_providers",
"manage_scans",
"permission_state",
"unlimited_visibility",
"inserted_at",
"updated_at",
]
extra_kwargs = {
"id": {"read_only": True},
"inserted_at": {"read_only": True},
"updated_at": {"read_only": True},
}
class ProviderGroupResourceIdentifierSerializer(serializers.Serializer):
resource_type = serializers.CharField(source="type")
id = serializers.UUIDField()
@@ -1790,6 +1976,7 @@ class ComplianceOverviewDetailSerializer(serializers.Serializer):
class ComplianceOverviewAttributesSerializer(serializers.Serializer):
id = serializers.CharField()
compliance_name = serializers.CharField()
framework_description = serializers.CharField()
name = serializers.CharField()
framework = serializers.CharField()
@@ -1938,6 +2125,62 @@ class ScheduleDailyCreateSerializer(serializers.Serializer):
class BaseWriteIntegrationSerializer(BaseWriteSerializer):
def validate(self, attrs):
integration_type = attrs.get("integration_type")
if (
integration_type == Integration.IntegrationChoices.AMAZON_S3
and Integration.objects.filter(
configuration=attrs.get("configuration")
).exists()
):
raise ConflictException(
detail="This integration already exists.",
pointer="/data/attributes/configuration",
)
if (
integration_type == Integration.IntegrationChoices.JIRA
and Integration.objects.filter(
configuration__contains={
"domain": attrs.get("configuration").get("domain")
}
).exists()
):
raise ConflictException(
detail="This integration already exists.",
pointer="/data/attributes/configuration",
)
# Check if any provider already has a SecurityHub integration
if hasattr(self, "instance") and self.instance and not integration_type:
integration_type = self.instance.integration_type
if (
integration_type == Integration.IntegrationChoices.AWS_SECURITY_HUB
and "providers" in attrs
):
providers = attrs.get("providers", [])
tenant_id = self.context.get("tenant_id")
for provider in providers:
# For updates, exclude the current instance from the check
query = IntegrationProviderRelationship.objects.filter(
provider=provider,
integration__integration_type=Integration.IntegrationChoices.AWS_SECURITY_HUB,
tenant_id=tenant_id,
)
if hasattr(self, "instance") and self.instance:
query = query.exclude(integration=self.instance)
if query.exists():
raise ConflictException(
detail=f"Provider {provider.id} already has a Security Hub integration. Only one "
"Security Hub integration is allowed per provider.",
pointer="/data/relationships/providers",
)
return super().validate(attrs)
@staticmethod
def validate_integration_data(
integration_type: str,
@@ -1945,17 +2188,49 @@ class BaseWriteIntegrationSerializer(BaseWriteSerializer):
configuration: dict,
credentials: dict,
):
if integration_type == Integration.IntegrationChoices.S3:
if integration_type == Integration.IntegrationChoices.AMAZON_S3:
config_serializer = S3ConfigSerializer
credentials_serializers = [AWSCredentialSerializer]
# TODO: This will be required for AWS Security Hub
# if providers and not all(
# provider.provider == Provider.ProviderChoices.AWS
# for provider in providers
# ):
# raise serializers.ValidationError(
# {"providers": "All providers must be AWS for the S3 integration."}
# )
elif integration_type == Integration.IntegrationChoices.AWS_SECURITY_HUB:
if providers:
if len(providers) > 1:
raise serializers.ValidationError(
{
"providers": "Only one provider is supported for the Security Hub integration."
}
)
if providers[0].provider != Provider.ProviderChoices.AWS:
raise serializers.ValidationError(
{
"providers": "The provider must be AWS type for the Security Hub integration."
}
)
config_serializer = SecurityHubConfigSerializer
credentials_serializers = [AWSCredentialSerializer]
elif integration_type == Integration.IntegrationChoices.JIRA:
if providers:
raise serializers.ValidationError(
{
"providers": "Relationship field is not accepted. This integration applies to all providers."
}
)
if configuration:
raise serializers.ValidationError(
{
"configuration": "This integration does not support custom configuration."
}
)
config_serializer = JiraConfigSerializer
# Create non-editable configuration for JIRA integration
default_jira_issue_types = ["Task"]
configuration.update(
{
"projects": {},
"issue_types": default_jira_issue_types,
"domain": credentials.get("domain"),
}
)
credentials_serializers = [JiraCredentialSerializer]
else:
raise serializers.ValidationError(
{
@@ -1963,7 +2238,11 @@ class BaseWriteIntegrationSerializer(BaseWriteSerializer):
}
)
config_serializer(data=configuration).is_valid(raise_exception=True)
serializer_instance = config_serializer(data=configuration)
serializer_instance.is_valid(raise_exception=True)
# Apply the validated (and potentially transformed) data back to configuration
configuration.update(serializer_instance.validated_data)
for cred_serializer in credentials_serializers:
try:
@@ -2015,6 +2294,10 @@ class IntegrationSerializer(RLSSerializer):
for provider in representation["providers"]
if provider["id"] in allowed_provider_ids
]
if instance.integration_type == Integration.IntegrationChoices.JIRA:
representation["configuration"].update(
{"domain": instance.credentials.get("domain")}
)
return representation
@@ -2042,7 +2325,6 @@ class IntegrationCreateSerializer(BaseWriteIntegrationSerializer):
"inserted_at": {"read_only": True},
"updated_at": {"read_only": True},
"connected": {"read_only": True},
"enabled": {"read_only": True},
"connection_last_checked_at": {"read_only": True},
}
@@ -2052,10 +2334,18 @@ class IntegrationCreateSerializer(BaseWriteIntegrationSerializer):
configuration = attrs.get("configuration")
credentials = attrs.get("credentials")
validated_attrs = super().validate(attrs)
if (
not providers
and integration_type == Integration.IntegrationChoices.AWS_SECURITY_HUB
):
raise serializers.ValidationError(
{"providers": "At least one provider is required for this integration."}
)
self.validate_integration_data(
integration_type, providers, configuration, credentials
)
validated_attrs = super().validate(attrs)
return validated_attrs
def create(self, validated_data):
@@ -2108,13 +2398,16 @@ class IntegrationUpdateSerializer(BaseWriteIntegrationSerializer):
def validate(self, attrs):
integration_type = self.instance.integration_type
providers = attrs.get("providers")
configuration = attrs.get("configuration") or self.instance.configuration
if integration_type != Integration.IntegrationChoices.JIRA:
configuration = attrs.get("configuration") or self.instance.configuration
else:
configuration = attrs.get("configuration", {})
credentials = attrs.get("credentials") or self.instance.credentials
validated_attrs = super().validate(attrs)
self.validate_integration_data(
integration_type, providers, configuration, credentials
)
validated_attrs = super().validate(attrs)
return validated_attrs
def update(self, instance, validated_data):
@@ -2129,8 +2422,62 @@ class IntegrationUpdateSerializer(BaseWriteIntegrationSerializer):
]
IntegrationProviderRelationship.objects.bulk_create(new_relationships)
# Preserve regions field for Security Hub integrations
if instance.integration_type == Integration.IntegrationChoices.AWS_SECURITY_HUB:
if "configuration" in validated_data:
# Preserve the existing regions field if it exists
existing_regions = instance.configuration.get("regions", {})
validated_data["configuration"]["regions"] = existing_regions
return super().update(instance, validated_data)
def to_representation(self, instance):
representation = super().to_representation(instance)
# Ensure JIRA integrations show updated domain in configuration from credentials
if instance.integration_type == Integration.IntegrationChoices.JIRA:
representation["configuration"].update(
{"domain": instance.credentials.get("domain")}
)
return representation
class IntegrationJiraDispatchSerializer(serializers.Serializer):
"""
Serializer for dispatching findings to JIRA integration.
"""
project_key = serializers.CharField(required=True)
issue_type = serializers.ChoiceField(required=True, choices=["Task"])
class JSONAPIMeta:
resource_name = "integrations-jira-dispatches"
def validate(self, attrs):
validated_attrs = super().validate(attrs)
integration_instance = Integration.objects.get(
id=self.context.get("integration_id")
)
if integration_instance.integration_type != Integration.IntegrationChoices.JIRA:
raise ValidationError(
{"integration_type": "The given integration is not a JIRA integration"}
)
if not integration_instance.enabled:
raise ValidationError(
{"integration": "The given integration is not enabled"}
)
project_key = attrs.get("project_key")
if project_key not in integration_instance.configuration.get("projects", {}):
raise ValidationError(
{
"project_key": "The given project key is not available for this JIRA integration. Refresh the "
"connection if this is an error."
}
)
return validated_attrs
# Processors
@@ -2405,3 +2752,103 @@ class LighthouseConfigUpdateSerializer(BaseWriteSerializer):
instance.api_key_decoded = api_key
instance.save()
return instance
# API Keys
class TenantApiKeySerializer(RLSSerializer):
"""
Serializer for the TenantApiKey model.
"""
# Map database field names to API field names for consistency
expires_at = serializers.DateTimeField(source="expiry_date", read_only=True)
inserted_at = serializers.DateTimeField(source="created", read_only=True)
class Meta:
model = TenantAPIKey
fields = [
"id",
"name",
"prefix",
"expires_at",
"revoked",
"inserted_at",
"last_used_at",
"entity",
]
class TenantApiKeyCreateSerializer(RLSSerializer, BaseWriteSerializer):
"""Serializer for creating new API keys."""
# Map database field names to API field names for consistency
expires_at = serializers.DateTimeField(source="expiry_date", required=False)
inserted_at = serializers.DateTimeField(source="created", read_only=True)
api_key = serializers.SerializerMethodField()
class Meta:
model = TenantAPIKey
fields = [
"id",
"name",
"prefix",
"expires_at",
"revoked",
"entity",
"inserted_at",
"last_used_at",
"api_key",
]
extra_kwargs = {
"id": {"read_only": True},
"prefix": {"read_only": True},
"revoked": {"read_only": True},
"entity": {"read_only": True},
"inserted_at": {"read_only": True},
"last_used_at": {"read_only": True},
"api_key": {"read_only": True},
}
def get_api_key(self, obj):
"""Return the raw API key if it was stored during creation."""
return getattr(obj, "_raw_api_key", None)
def create(self, validated_data):
instance, raw_api_key = TenantAPIKey.objects.create_api_key(
**validated_data,
tenant_id=self.context.get("tenant_id"),
entity=self.context.get("request").user,
)
# Store the raw API key temporarily on the instance for the serializer
instance._raw_api_key = raw_api_key
return instance
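# A sketch (hypothetical helper, not part of the diff) of the one-time raw
# key exposure implemented above: the raw key lives only on the in-memory
# instance returned by create(), so only the creation response can show it.
def create_api_key(request, tenant_id: str):
    serializer = TenantApiKeyCreateSerializer(
        data={"name": "ci-pipeline"},
        context={"tenant_id": tenant_id, "request": request},
    )
    serializer.is_valid(raise_exception=True)
    instance = serializer.save()
    return serializer.data, instance._raw_api_key  # never persisted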
class TenantApiKeyUpdateSerializer(RLSSerializer, BaseWriteSerializer):
"""Serializer for updating API keys - only allows changing the name."""
# Map database field names to API field names for consistency
expires_at = serializers.DateTimeField(source="expiry_date", read_only=True)
inserted_at = serializers.DateTimeField(source="created", read_only=True)
class Meta:
model = TenantAPIKey
fields = [
"id",
"name",
"prefix",
"expires_at",
"entity",
"inserted_at",
"last_used_at",
]
extra_kwargs = {
"id": {"read_only": True},
"prefix": {"read_only": True},
"entity": {"read_only": True},
"expires_at": {"read_only": True},
"inserted_at": {"read_only": True},
"last_used_at": {"read_only": True},
}
+11
@@ -12,6 +12,7 @@ from api.v1.views import (
FindingViewSet,
GithubSocialLoginView,
GoogleSocialLoginView,
IntegrationJiraViewSet,
IntegrationViewSet,
InvitationAcceptViewSet,
InvitationViewSet,
@@ -38,6 +39,7 @@ from api.v1.views import (
TenantViewSet,
UserRoleRelationshipView,
UserViewSet,
TenantApiKeyViewSet,
)
router = routers.DefaultRouter(trailing_slash=False)
@@ -64,6 +66,7 @@ router.register(
LighthouseConfigViewSet,
basename="lighthouseconfiguration",
)
router.register(r"api-keys", TenantApiKeyViewSet, basename="api-key")
tenants_router = routers.NestedSimpleRouter(router, r"tenants", lookup="tenant")
tenants_router.register(
@@ -73,6 +76,13 @@ tenants_router.register(
users_router = routers.NestedSimpleRouter(router, r"users", lookup="user")
users_router.register(r"memberships", MembershipViewSet, basename="user-membership")
integrations_router = routers.NestedSimpleRouter(
router, r"integrations", lookup="integration"
)
integrations_router.register(
r"jira", IntegrationJiraViewSet, basename="integration-jira"
)
urlpatterns = [
path("tokens", CustomTokenObtainView.as_view(), name="token-obtain"),
path("tokens/refresh", CustomTokenRefreshView.as_view(), name="token-refresh"),
@@ -162,6 +172,7 @@ urlpatterns = [
path("", include(router.urls)),
path("", include(tenants_router.urls)),
path("", include(users_router.urls)),
path("", include(integrations_router.urls)),
path("schema", SchemaView.as_view(), name="schema"),
path("docs", SpectacularRedocView.as_view(url_name="schema"), name="docs"),
]
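Under the registrations above, the new endpoints resolve to paths like the following (treat the exact prefixes as an assumption; they depend on where this urlconf is mounted):
# POST /integrations/{integration_pk}/jira/dispatches  -> IntegrationJiraViewSet.dispatches
# GET|POST /api-keys ; GET|PATCH /api-keys/{id} ; DELETE /api-keys/{id}/revoke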
+396 -53
@@ -22,7 +22,7 @@ from django.conf import settings as django_settings
from django.contrib.postgres.aggregates import ArrayAgg
from django.contrib.postgres.search import SearchQuery
from django.db import transaction
from django.db.models import Count, F, Prefetch, Q, Sum
from django.db.models import Count, F, Prefetch, Q, Subquery, Sum
from django.db.models.functions import Coalesce
from django.http import HttpResponse
from django.shortcuts import redirect
@@ -57,10 +57,12 @@ from tasks.beat import schedule_provider_scan
from tasks.jobs.export import get_s3_client
from tasks.tasks import (
backfill_scan_resource_summaries_task,
check_integration_connection_task,
check_lighthouse_connection_task,
check_provider_connection_task,
delete_provider_task,
delete_tenant_task,
jira_integration_task,
perform_scan_task,
)
@@ -74,8 +76,10 @@ from api.db_utils import rls_transaction
from api.exceptions import TaskFailedException
from api.filters import (
ComplianceOverviewFilter,
CustomDjangoFilterBackend,
FindingFilter,
IntegrationFilter,
IntegrationJiraFindingsFilter,
InvitationFilter,
LatestFindingFilter,
LatestResourceFilter,
@@ -88,8 +92,10 @@ from api.filters import (
RoleFilter,
ScanFilter,
ScanSummaryFilter,
ScanSummarySeverityFilter,
ServiceOverviewFilter,
TaskFilter,
TenantApiKeyFilter,
TenantFilter,
UserFilter,
)
@@ -119,6 +125,7 @@ from api.models import (
SeverityChoices,
StateChoices,
Task,
TenantAPIKey,
User,
UserRoleRelationship,
)
@@ -141,6 +148,7 @@ from api.v1.serializers import (
FindingMetadataSerializer,
FindingSerializer,
IntegrationCreateSerializer,
IntegrationJiraDispatchSerializer,
IntegrationSerializer,
IntegrationUpdateSerializer,
InvitationAcceptSerializer,
@@ -183,6 +191,9 @@ from api.v1.serializers import (
ScanUpdateSerializer,
ScheduleDailyCreateSerializer,
TaskSerializer,
TenantApiKeyCreateSerializer,
TenantApiKeySerializer,
TenantApiKeyUpdateSerializer,
TenantSerializer,
TokenRefreshSerializer,
TokenSerializer,
@@ -213,6 +224,8 @@ class RelationshipViewSchema(JsonApiAutoSchema):
description="Obtain a token by providing valid credentials and an optional tenant ID.",
)
class CustomTokenObtainView(GenericAPIView):
throttle_scope = "token-obtain"
resource_name = "tokens"
serializer_class = TokenSerializer
http_method_names = ["post"]
@@ -292,7 +305,7 @@ class SchemaView(SpectacularAPIView):
def get(self, request, *args, **kwargs):
spectacular_settings.TITLE = "Prowler API"
spectacular_settings.VERSION = "1.10.0"
spectacular_settings.VERSION = "1.14.0"
spectacular_settings.DESCRIPTION = (
"Prowler API specification.\n\nThis file is auto-generated."
)
@@ -312,6 +325,11 @@ class SchemaView(SpectacularAPIView):
"description": "Endpoints for tenant invitations management, allowing retrieval and filtering of "
"invitations, creating new invitations, accepting and revoking them.",
},
{
"name": "Role",
"description": "Endpoints for managing RBAC roles within tenants, allowing creation, retrieval, "
"updating, and deletion of role configurations and permissions.",
},
{
"name": "Provider",
"description": "Endpoints for managing providers (AWS, GCP, Azure, etc...).",
@@ -320,10 +338,20 @@ class SchemaView(SpectacularAPIView):
"name": "Provider Group",
"description": "Endpoints for managing provider groups.",
},
{
"name": "Task",
"description": "Endpoints for task management, allowing retrieval of task status and "
"revoking tasks that have not started.",
},
{
"name": "Scan",
"description": "Endpoints for triggering manual scans and viewing scan results.",
},
{
"name": "Schedule",
"description": "Endpoints for managing scan schedules, allowing configuration of automated "
"scans with different scheduling options.",
},
{
"name": "Resource",
"description": "Endpoints for managing resources discovered by scans, allowing "
@@ -335,8 +363,9 @@ class SchemaView(SpectacularAPIView):
"findings that result from scans.",
},
{
"name": "Overview",
"description": "Endpoints for retrieving aggregated summaries of resources from the system.",
"name": "Processor",
"description": "Endpoints for managing post-processors used to process Prowler findings, including "
"registration, configuration, and deletion of post-processing actions.",
},
{
"name": "Compliance Overview",
@@ -344,9 +373,8 @@ class SchemaView(SpectacularAPIView):
" compliance framework ID.",
},
{
"name": "Task",
"description": "Endpoints for task management, allowing retrieval of task status and "
"revoking tasks that have not started.",
"name": "Overview",
"description": "Endpoints for retrieving aggregated summaries of resources from the system.",
},
{
"name": "Integration",
@@ -354,14 +382,20 @@ class SchemaView(SpectacularAPIView):
" retrieval, and deletion of integrations such as S3, JIRA, or other services.",
},
{
"name": "Lighthouse",
"description": "Endpoints for managing Lighthouse configurations, including creation, retrieval, "
"updating, and deletion of configurations such as OpenAI keys, models, and business context.",
"name": "Lighthouse AI",
"description": "Endpoints for managing Lighthouse AI configurations, including creation, retrieval, "
"updating, and deletion of configurations such as OpenAI keys, models, and business "
"context.",
},
{
"name": "Processor",
"description": "Endpoints for managing post-processors used to process Prowler findings, including "
"registration, configuration, and deletion of post-processing actions.",
"name": "SAML",
"description": "Endpoints for Single Sign-On authentication management via SAML for seamless user "
"authentication.",
},
{
"name": "API Keys",
"description": "Endpoints for API keys management. These can be used as an alternative to JWT "
"authorization.",
},
]
return super().get(request, *args, **kwargs)
@@ -744,11 +778,13 @@ class UserViewSet(BaseUserViewset):
# If called during schema generation, return an empty queryset
if getattr(self, "swagger_fake_view", False):
return User.objects.none()
queryset = (
User.objects.filter(membership__tenant__id=self.request.tenant_id)
if hasattr(self.request, "tenant_id")
else User.objects.all()
)
return queryset.prefetch_related("memberships", "roles")
def get_permissions(self):
@@ -766,6 +802,12 @@ class UserViewSet(BaseUserViewset):
else:
return UserSerializer
def get_serializer_context(self):
context = super().get_serializer_context()
if self.request.user.is_authenticated:
context["role"] = get_role(self.request.user)
return context
@action(detail=False, methods=["get"], url_name="me")
def me(self, request):
user = self.request.user
@@ -779,7 +821,9 @@ class UserViewSet(BaseUserViewset):
if kwargs["pk"] != str(self.request.user.id):
raise ValidationError("Only the current user can be deleted.")
return super().destroy(request, *args, **kwargs)
user = self.get_object()
user.delete(using=MainRouter.admin_db)
return Response(status=status.HTTP_204_NO_CONTENT)
@extend_schema(
parameters=[
@@ -870,7 +914,11 @@ class UserViewSet(BaseUserViewset):
partial_update=extend_schema(
tags=["User"],
summary="Partially update a user-roles relationship",
description="Update the user-roles relationship information without affecting other fields.",
description=(
"Update the user-roles relationship information without affecting other fields. "
"If the update would remove MANAGE_ACCOUNT from the last remaining user in the "
"tenant, the API rejects the request with a 400 response."
),
responses={
204: OpenApiResponse(
response=None, description="Relationship updated successfully"
@@ -880,7 +928,12 @@ class UserViewSet(BaseUserViewset):
destroy=extend_schema(
tags=["User"],
summary="Delete a user-roles relationship",
description="Remove the user-roles relationship from the system by their ID.",
description=(
"Remove the user-roles relationship from the system by their ID. If removing "
"MANAGE_ACCOUNT would take it away from the last remaining user in the tenant, "
"the API rejects the request with a 400 response. Users also cannot delete their "
"own role assignments; attempting to do so returns a 400 response."
),
responses={
204: OpenApiResponse(
response=None, description="Relationship deleted successfully"
@@ -895,11 +948,48 @@ class UserRoleRelationshipView(RelationshipView, BaseRLSViewSet):
http_method_names = ["post", "patch", "delete"]
schema = RelationshipViewSchema()
# RBAC required permissions
required_permissions = [Permissions.MANAGE_USERS]
required_permissions = [Permissions.MANAGE_ACCOUNT]
def get_queryset(self):
return User.objects.filter(membership__tenant__id=self.request.tenant_id)
def destroy(self, request, *args, **kwargs):
"""
Prevent deleting role relationships if it would leave the tenant with no
users having MANAGE_ACCOUNT. Supports deleting specific roles via a JSON:API
relationship payload, or clearing all of the user's roles when no payload is given.
"""
user = self.get_object()
# Disallow deleting own roles
if str(user.id) == str(request.user.id):
return Response(
data={
"detail": "Users cannot delete the relationship with their role."
},
status=status.HTTP_400_BAD_REQUEST,
)
tenant_id = self.request.tenant_id
payload = request.data if isinstance(request.data, dict) else None
# If the payload lists specific roles, delete only those relationships; otherwise clear all of the user's roles in the tenant
data = payload.get("data") if payload else None
if data:
try:
role_ids = [item["id"] for item in data]
except KeyError:
role_ids = []
roles_to_remove = Role.objects.filter(id__in=role_ids, tenant_id=tenant_id)
else:
roles_to_remove = user.roles.filter(tenant_id=tenant_id)
UserRoleRelationship.objects.filter(
user=user,
tenant_id=tenant_id,
role_id__in=roles_to_remove.values_list("id", flat=True),
).delete()
return Response(status=status.HTTP_204_NO_CONTENT)
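# A payload sketch for the selective branch above; the role ID is a
# placeholder. An empty body instead clears every role relationship
# for the user.
delete_roles_payload = {
    "data": [
        {"type": "roles", "id": "11111111-2222-3333-4444-555555555555"},
    ]
}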
def create(self, request, *args, **kwargs):
user = self.get_object()
@@ -938,12 +1028,6 @@ class UserRoleRelationshipView(RelationshipView, BaseRLSViewSet):
serializer.save()
return Response(status=status.HTTP_204_NO_CONTENT)
def destroy(self, request, *args, **kwargs):
user = self.get_object()
user.roles.clear()
return Response(status=status.HTTP_204_NO_CONTENT)
@extend_schema_view(
list=extend_schema(
@@ -1994,6 +2078,21 @@ class ResourceViewSet(PaginateByPkMixin, BaseRLSViewSet):
)
)
def _should_prefetch_findings(self) -> bool:
fields_param = self.request.query_params.get("fields[resources]", "")
include_param = self.request.query_params.get("include", "")
return (
fields_param == ""
or "findings" in fields_param.split(",")
or "findings" in include_param.split(",")
)
def _get_findings_prefetch(self):
findings_queryset = Finding.all_objects.defer("scan", "resources").filter(
tenant_id=self.request.tenant_id
)
return [Prefetch("findings", queryset=findings_queryset)]
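# A distilled sketch of the sparse-fieldset decision above
# (hypothetical standalone form):
def should_prefetch(fields_param: str, include_param: str) -> bool:
    return (
        fields_param == ""
        or "findings" in fields_param.split(",")
        or "findings" in include_param.split(",")
    )

assert should_prefetch("", "")                 # default listing prefetches
assert not should_prefetch("name,region", "")  # sparse fields skip the join
assert should_prefetch("name", "findings")     # explicit include wins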
def get_serializer_class(self):
if self.action in ["metadata", "metadata_latest"]:
return ResourceMetadataSerializer
@@ -2017,7 +2116,11 @@ class ResourceViewSet(PaginateByPkMixin, BaseRLSViewSet):
filtered_queryset,
manager=Resource.all_objects,
select_related=["provider"],
prefetch_related=["findings"],
prefetch_related=(
self._get_findings_prefetch()
if self._should_prefetch_findings()
else []
),
)
def retrieve(self, request, *args, **kwargs):
@@ -2042,14 +2145,18 @@ class ResourceViewSet(PaginateByPkMixin, BaseRLSViewSet):
tenant_id = request.tenant_id
filtered_queryset = self.filter_queryset(self.get_queryset())
latest_scan_ids = (
Scan.all_objects.filter(tenant_id=tenant_id, state=StateChoices.COMPLETED)
latest_scans = (
Scan.all_objects.filter(
tenant_id=tenant_id,
state=StateChoices.COMPLETED,
)
.order_by("provider_id", "-inserted_at")
.distinct("provider_id")
.values_list("id", flat=True)
.values("provider_id")
)
filtered_queryset = filtered_queryset.filter(
tenant_id=tenant_id, provider__scan__in=latest_scan_ids
provider_id__in=Subquery(latest_scans)
)
return self.paginate_by_pk(
@@ -2057,7 +2164,11 @@ class ResourceViewSet(PaginateByPkMixin, BaseRLSViewSet):
filtered_queryset,
manager=Resource.all_objects,
select_related=["provider"],
prefetch_related=["findings"],
prefetch_related=(
self._get_findings_prefetch()
if self._should_prefetch_findings()
else []
),
)
@action(detail=False, methods=["get"], url_name="metadata")
@@ -2821,13 +2932,11 @@ class InvitationAcceptViewSet(BaseRLSViewSet):
partial_update=extend_schema(
tags=["Role"],
summary="Partially update a role",
description="Update certain fields of an existing role's information without affecting other fields.",
responses={200: RoleSerializer},
),
destroy=extend_schema(
tags=["Role"],
summary="Delete a role",
description="Remove a role from the system by their ID.",
),
)
class RoleViewSet(BaseRLSViewSet):
@@ -2849,6 +2958,14 @@ class RoleViewSet(BaseRLSViewSet):
return RoleUpdateSerializer
return super().get_serializer_class()
@extend_schema(
description=(
"Update selected fields on an existing role. When changing the `users` "
"relationship of a role that grants MANAGE_ACCOUNT, the API blocks attempts "
"that would leave the tenant without any MANAGE_ACCOUNT assignees and prevents "
"callers from removing their own assignment to that role."
)
)
def partial_update(self, request, *args, **kwargs):
user_role = get_role(request.user)
# If the user is the owner of the role, the manage_account field is not editable
@@ -2856,6 +2973,12 @@ class RoleViewSet(BaseRLSViewSet):
request.data["manage_account"] = str(user_role.manage_account).lower()
return super().partial_update(request, *args, **kwargs)
@extend_schema(
description=(
"Delete the specified role. The API rejects deletion of the last role "
"in the tenant that grants MANAGE_ACCOUNT."
)
)
def destroy(self, request, *args, **kwargs):
instance = self.get_object()
if (
@@ -2863,6 +2986,21 @@ class RoleViewSet(BaseRLSViewSet):
): # TODO: Move to a constant/enum (in case other roles are created by default)
raise ValidationError(detail="The admin role cannot be deleted.")
# Prevent deleting the last MANAGE_ACCOUNT role in the tenant
if instance.manage_account:
has_other_ma = (
Role.objects.filter(tenant_id=instance.tenant_id, manage_account=True)
.exclude(id=instance.id)
.exists()
)
if not has_other_ma:
return Response(
data={
"detail": "Cannot delete the only role with MANAGE_ACCOUNT in the tenant."
},
status=status.HTTP_400_BAD_REQUEST,
)
return super().destroy(request, *args, **kwargs)
@@ -3005,7 +3143,9 @@ class RoleProviderGroupRelationshipView(RelationshipView, BaseRLSViewSet):
description="Compliance overviews metadata obtained successfully",
response=ComplianceOverviewMetadataSerializer,
),
202: OpenApiResponse(description="The task is in progress"),
202: OpenApiResponse(
description="The task is in progress", response=TaskSerializer
),
500: OpenApiResponse(
description="Compliance overviews generation task failed"
),
@@ -3037,7 +3177,9 @@ class RoleProviderGroupRelationshipView(RelationshipView, BaseRLSViewSet):
description="Compliance requirement details obtained successfully",
response=ComplianceOverviewDetailSerializer(many=True),
),
202: OpenApiResponse(description="The task is in progress"),
202: OpenApiResponse(
description="The task is in progress", response=TaskSerializer
),
500: OpenApiResponse(
description="Compliance overviews generation task failed"
),
@@ -3415,6 +3557,7 @@ class ComplianceOverviewViewSet(BaseRLSViewSet, TaskManagementMixin):
),
"name": requirement.get("name", ""),
"framework": compliance_framework.get("framework", ""),
"compliance_name": compliance_framework.get("name", ""),
"version": compliance_framework.get("version", ""),
"description": requirement.get("description", ""),
"attributes": base_attributes,
@@ -3498,8 +3641,10 @@ class OverviewViewSet(BaseRLSViewSet):
def get_filterset_class(self):
if self.action == "providers":
return None
elif self.action in ["findings", "findings_severity"]:
elif self.action == "findings":
return ScanSummaryFilter
elif self.action == "findings_severity":
return ScanSummarySeverityFilter
elif self.action == "services":
return ServiceOverviewFilter
return None
@@ -3621,7 +3766,12 @@ class OverviewViewSet(BaseRLSViewSet):
@action(detail=False, methods=["get"], url_name="findings_severity")
def findings_severity(self, request):
tenant_id = self.request.tenant_id
queryset = self.get_queryset()
# Load only required fields
queryset = self.get_queryset().only(
"tenant_id", "scan_id", "severity", "fail", "_pass", "total"
)
filtered_queryset = self.filter_queryset(queryset)
provider_filter = (
{"provider__in": self.allowed_providers}
@@ -3641,16 +3791,22 @@ class OverviewViewSet(BaseRLSViewSet):
tenant_id=tenant_id, scan_id__in=latest_scan_ids
)
# The filter will have added a status_count annotation if any status filter was used
if "status_count" in filtered_queryset.query.annotations:
sum_expression = Sum("status_count")
else:
sum_expression = Sum("total")
severity_counts = (
filtered_queryset.values("severity")
.annotate(count=Sum("total"))
.annotate(count=sum_expression)
.order_by("severity")
)
severity_data = {sev[0]: 0 for sev in SeverityChoices}
for item in severity_counts:
severity_data[item["severity"]] = item["count"]
severity_data.update(
{item["severity"]: item["count"] for item in severity_counts}
)
serializer = self.get_serializer(severity_data)
return Response(serializer.data, status=status.HTTP_200_OK)
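# The serialized shape the aggregation above produces: every severity from
# SeverityChoices is present and zero-filled when no rows match (the numbers
# below are made up for illustration).
severity_response_example = {
    "critical": 3,
    "high": 12,
    "medium": 0,
    "low": 7,
    "informational": 0,
}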
@@ -3811,32 +3967,138 @@ class IntegrationViewSet(BaseRLSViewSet):
context["allowed_providers"] = self.allowed_providers
return context
@extend_schema(
tags=["Integration"],
summary="Check integration connection",
description="Try to verify integration connection",
request=None,
responses={202: OpenApiResponse(response=TaskSerializer)},
)
@action(detail=True, methods=["post"], url_name="connection")
def connection(self, request, pk=None):
get_object_or_404(Integration, pk=pk)
with transaction.atomic():
task = check_integration_connection_task.delay(
integration_id=pk, tenant_id=self.request.tenant_id
)
prowler_task = Task.objects.get(id=task.id)
serializer = TaskSerializer(prowler_task)
return Response(
data=serializer.data,
status=status.HTTP_202_ACCEPTED,
headers={
"Content-Location": reverse(
"task-detail", kwargs={"pk": prowler_task.id}
)
},
)
@extend_schema_view(
dispatches=extend_schema(
tags=["Integration"],
summary="Send findings to a Jira integration",
description="Send a set of filtered findings to the given integration. At least one finding filter must be "
"provided.",
responses={202: OpenApiResponse(response=TaskSerializer)},
filters=True,
)
)
class IntegrationJiraViewSet(BaseRLSViewSet):
queryset = Finding.all_objects.all()
serializer_class = IntegrationJiraDispatchSerializer
http_method_names = ["post"]
filter_backends = [CustomDjangoFilterBackend]
filterset_class = IntegrationJiraFindingsFilter
# RBAC required permissions
required_permissions = [Permissions.MANAGE_INTEGRATIONS]
@extend_schema(exclude=True)
def create(self, request, *args, **kwargs):
raise MethodNotAllowed(method="POST")
def get_queryset(self):
tenant_id = self.request.tenant_id
user_roles = get_role(self.request.user)
if user_roles.unlimited_visibility:
# User has unlimited visibility, return all findings
queryset = Finding.all_objects.filter(tenant_id=tenant_id)
else:
# User lacks unlimited visibility; filter findings to the providers associated with the role
queryset = Finding.all_objects.filter(
scan__provider__in=get_providers(user_roles)
)
return queryset
@action(detail=False, methods=["post"], url_name="dispatches")
def dispatches(self, request, integration_pk=None):
get_object_or_404(Integration, pk=integration_pk)
serializer = self.get_serializer(
data=request.data, context={"integration_id": integration_pk}
)
serializer.is_valid(raise_exception=True)
if self.filter_queryset(self.get_queryset()).count() == 0:
raise ValidationError(
{"findings": "No findings match the provided filters"}
)
finding_ids = [
str(finding_id)
for finding_id in self.filter_queryset(self.get_queryset()).values_list(
"id", flat=True
)
]
project_key = serializer.validated_data["project_key"]
issue_type = serializer.validated_data["issue_type"]
with transaction.atomic():
task = jira_integration_task.delay(
tenant_id=self.request.tenant_id,
integration_id=integration_pk,
project_key=project_key,
issue_type=issue_type,
finding_ids=finding_ids,
)
prowler_task = Task.objects.get(id=task.id)
serializer = TaskSerializer(prowler_task)
return Response(
data=serializer.data,
status=status.HTTP_202_ACCEPTED,
headers={
"Content-Location": reverse(
"task-detail", kwargs={"pk": prowler_task.id}
)
},
)
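# A request sketch for the action above: a JSON:API body naming the project
# and issue type, sent with at least one finding filter in the query string.
# The project key and the filter parameter are placeholders.
#
#     POST .../integrations/{integration_id}/jira/dispatches?filter[severity]=critical
dispatch_body_example = {
    "data": {
        "type": "integrations-jira-dispatches",
        "attributes": {"project_key": "PROJ1", "issue_type": "Task"},
    }
}
# The endpoint answers 202 with the Task that performs the dispatch asynchronously.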
@extend_schema_view(
list=extend_schema(
tags=["Lighthouse"],
summary="List all Lighthouse configurations",
description="Retrieve a list of all Lighthouse configurations.",
tags=["Lighthouse AI"],
summary="List all Lighthouse AI configurations",
description="Retrieve a list of all Lighthouse AI configurations.",
),
create=extend_schema(
tags=["Lighthouse"],
summary="Create a new Lighthouse configuration",
description="Create a new Lighthouse configuration with the specified details.",
tags=["Lighthouse AI"],
summary="Create a new Lighthouse AI configuration",
description="Create a new Lighthouse AI configuration with the specified details.",
),
partial_update=extend_schema(
tags=["Lighthouse"],
summary="Partially update a Lighthouse configuration",
description="Update certain fields of an existing Lighthouse configuration.",
tags=["Lighthouse AI"],
summary="Partially update a Lighthouse AI configuration",
description="Update certain fields of an existing Lighthouse AI configuration.",
),
destroy=extend_schema(
tags=["Lighthouse"],
summary="Delete a Lighthouse configuration",
description="Remove a Lighthouse configuration by its ID.",
tags=["Lighthouse AI"],
summary="Delete a Lighthouse AI configuration",
description="Remove a Lighthouse AI configuration by its ID.",
),
connection=extend_schema(
tags=["Lighthouse"],
tags=["Lighthouse AI"],
summary="Check the connection to the OpenAI API",
description="Verify the connection to the OpenAI API for a specific Lighthouse configuration.",
description="Verify the connection to the OpenAI API for a specific Lighthouse AI configuration.",
request=None,
responses={202: OpenApiResponse(response=TaskSerializer)},
),
@@ -3938,3 +4200,84 @@ class ProcessorViewSet(BaseRLSViewSet):
elif self.action == "partial_update":
return ProcessorUpdateSerializer
return super().get_serializer_class()
@extend_schema_view(
list=extend_schema(
tags=["API Keys"],
summary="List API keys",
description="Retrieve a list of API keys for the tenant, with filtering support.",
),
retrieve=extend_schema(
tags=["API Keys"],
summary="Retrieve API key details",
description="Fetch detailed information about a specific API key by its ID.",
),
create=extend_schema(
tags=["API Keys"],
summary="Create a new API key",
description="Create a new API key for the tenant.",
),
partial_update=extend_schema(
tags=["API Keys"],
summary="Partially update an API key",
description="Modify certain fields of an existing API key without affecting other settings.",
),
revoke=extend_schema(
tags=["API Keys"],
summary="Revoke an API key",
description="Revoke an API key by its ID. This action is irreversible and will prevent the key from being "
"used.",
request=None,
responses={
200: OpenApiResponse(
response=TenantApiKeySerializer,
description="API key was successfully revoked",
)
},
),
)
class TenantApiKeyViewSet(BaseRLSViewSet):
queryset = TenantAPIKey.objects.all()
serializer_class = TenantApiKeySerializer
filterset_class = TenantApiKeyFilter
http_method_names = ["get", "post", "patch", "delete"]
ordering = ["revoked", "-created"]
ordering_fields = ["name", "prefix", "revoked", "inserted_at", "expires_at"]
# RBAC required permissions
required_permissions = [Permissions.MANAGE_ACCOUNT]
def get_queryset(self):
queryset = TenantAPIKey.objects.filter(
tenant_id=self.request.tenant_id
).annotate(inserted_at=F("created"), expires_at=F("expiry_date"))
return queryset
def get_serializer_class(self):
if self.action == "create":
return TenantApiKeyCreateSerializer
elif self.action == "partial_update":
return TenantApiKeyUpdateSerializer
return super().get_serializer_class()
@extend_schema(exclude=True)
def destroy(self, request, *args, **kwargs):
        raise MethodNotAllowed(method="DELETE")
@action(detail=True, methods=["delete"])
def revoke(self, request, *args, **kwargs):
instance = self.get_object()
# Check if already revoked
if instance.revoked:
raise ValidationError(
{
"detail": "API key is already revoked",
}
)
TenantAPIKey.objects.revoke_api_key(instance.pk)
instance.refresh_from_db()
serializer = self.get_serializer(instance)
return Response(data=serializer.data, status=status.HTTP_200_OK)
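A hedged sketch of the resulting client flow; the route paths follow DRF router defaults and the payload shape is an assumption:

import requests

API = "https://api.example.com/api/v1"  # hypothetical base URL
HEADERS = {
    "Authorization": "Bearer <jwt>",
    "Content-Type": "application/vnd.api+json",
}

# Create a key; the raw secret is assumed to be returned only once, on creation.
created = requests.post(
    f"{API}/api-keys",
    headers=HEADERS,
    json={"data": {"type": "api-keys", "attributes": {"name": "ci-key"}}},
).json()
key_id = created["data"]["id"]

# Revocation is a DELETE on the custom action, not on the resource itself:
# a plain DELETE hits the excluded destroy() and raises MethodNotAllowed.
requests.delete(f"{API}/api-keys/{key_id}/revoke", headers=HEADERS)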
@@ -48,6 +48,10 @@ class NDJSONFormatter(logging.Formatter):
log_record["user_id"] = record.user_id
if hasattr(record, "tenant_id"):
log_record["tenant_id"] = record.tenant_id
if hasattr(record, "api_key_prefix"):
log_record["api_key_prefix"] = (
record.api_key_prefix if record.api_key_prefix != "N/A" else None
)
if hasattr(record, "method"):
log_record["method"] = record.method
if hasattr(record, "path"):
@@ -90,6 +94,9 @@ class HumanReadableFormatter(logging.Formatter):
# Add REST API extra fields
if hasattr(record, "user_id"):
log_components.append(f"({record.user_id})")
if hasattr(record, "api_key_prefix"):
if record.api_key_prefix != "N/A":
log_components.append(f"(API-Key {record.api_key_prefix})")
if hasattr(record, "tenant_id"):
log_components.append(f"[{record.tenant_id}]")
if hasattr(record, "method"):
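For context, both formatters read these fields off the LogRecord, which callers populate through the standard extra mapping of the logging module; a minimal sketch with illustrative values:

import logging

logger = logging.getLogger("api")

logger.info(
    "request completed",
    extra={
        "user_id": "6f3c...",          # illustrative values
        "tenant_id": "8a2c...",
        "api_key_prefix": "AbCd1234",  # "N/A" is rendered as null / omitted
        "method": "GET",
        "path": "/api/v1/findings",
    },
)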
@@ -43,6 +43,7 @@ INSTALLED_APPS = [
"allauth.socialaccount.providers.saml",
"dj_rest_auth.registration",
"rest_framework.authtoken",
"drf_simple_apikey",
]
MIDDLEWARE = [
@@ -84,7 +85,7 @@ TEMPLATES = [
REST_FRAMEWORK = {
"DEFAULT_SCHEMA_CLASS": "drf_spectacular_jsonapi.schemas.openapi.JsonApiAutoSchema",
"DEFAULT_AUTHENTICATION_CLASSES": (
"rest_framework_simplejwt.authentication.JWTAuthentication",
"api.authentication.CombinedJWTOrAPIKeyAuthentication",
),
"PAGE_SIZE": 10,
"EXCEPTION_HANDLER": "api.exceptions.custom_exception_handler",
@@ -108,6 +109,13 @@ REST_FRAMEWORK = {
),
"TEST_REQUEST_DEFAULT_FORMAT": "vnd.api+json",
"JSON_API_UNIFORM_EXCEPTIONS": True,
"DEFAULT_THROTTLE_CLASSES": [
"rest_framework.throttling.ScopedRateThrottle",
],
"DEFAULT_THROTTLE_RATES": {
"token-obtain": env("DJANGO_THROTTLE_TOKEN_OBTAIN", default=None),
"dj_rest_auth": None,
},
}
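With ScopedRateThrottle as the default throttle class, only views that declare a matching throttle_scope are rate-limited; a hedged sketch of how the token-obtain scope would be wired (the view name is illustrative):

from rest_framework.views import APIView


class TokenObtainView(APIView):  # illustrative view name
    # Matched against DEFAULT_THROTTLE_RATES above, e.g.
    # DJANGO_THROTTLE_TOKEN_OBTAIN=10/minute; a rate of None (the default)
    # disables throttling for this scope.
    throttle_scope = "token-obtain"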
SPECTACULAR_SETTINGS = {
@@ -116,6 +124,9 @@ SPECTACULAR_SETTINGS = {
"PREPROCESSING_HOOKS": [
"drf_spectacular_jsonapi.hooks.fix_nested_path_parameters",
],
"POSTPROCESSING_HOOKS": [
"api.schema_hooks.attach_task_202_examples",
],
"TITLE": "API Reference - Prowler",
}
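drf-spectacular postprocessing hooks receive the generated schema and must return it, so attach_task_202_examples presumably follows this contract; a minimal sketch of the shape only, with the actual example-attaching logic omitted:

def attach_task_202_examples(result, generator, request, public):
    # Walk result["paths"] here and attach example bodies to 202 responses.
    return result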
@@ -210,7 +221,8 @@ SIMPLE_JWT = {
"JTI_CLAIM": "jti",
"USER_ID_FIELD": "id",
"USER_ID_CLAIM": "sub",
# Issuer and Audience claims, for the moment we will keep these values as default values, they may change in the
# future.
"AUDIENCE": env.str("DJANGO_JWT_AUDIENCE", "https://api.prowler.com"),
"ISSUER": env.str("DJANGO_JWT_ISSUER", "https://api.prowler.com"),
# Additional security settings
@@ -219,6 +231,13 @@ SIMPLE_JWT = {
SECRETS_ENCRYPTION_KEY = env.str("DJANGO_SECRETS_ENCRYPTION_KEY", "")
# DRF Simple API Key settings
DRF_API_KEY = {
"FERNET_SECRET": SECRETS_ENCRYPTION_KEY,
"API_KEY_LIFETIME": 365,
"AUTHENTICATION_KEYWORD_HEADER": "Api-Key",
}
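Given the AUTHENTICATION_KEYWORD_HEADER above, clients send the key with the Api-Key keyword in the Authorization header; a hedged sketch with an illustrative endpoint and key:

import requests

requests.get(
    "https://api.example.com/api/v1/findings",
    headers={"Authorization": "Api-Key <raw-key-returned-at-creation>"},
)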
# Internationalization
# https://docs.djangoproject.com/en/5.0/topics/i18n/
@@ -20,6 +20,13 @@ DATABASE_ROUTERS = []
TESTING = True
SECRETS_ENCRYPTION_KEY = "ZMiYVo7m4Fbe2eXXPyrwxdJss2WSalXSv3xHBcJkPl0="
# DRF Simple API Key settings
DRF_API_KEY = {
"FERNET_SECRET": SECRETS_ENCRYPTION_KEY,
"API_KEY_LIFETIME": 365,
"AUTHENTICATION_KEYWORD_HEADER": "Api-Key",
}
# JWT
SIMPLE_JWT["ALGORITHM"] = "HS256" # noqa: F405
@@ -69,6 +69,9 @@ IGNORED_EXCEPTIONS = [
"AzureClientIdAndClientSecretNotBelongingToTenantIdError",
"AzureHTTPResponseError",
"Error with credentials provided",
# PowerShell Errors in User Authentication
"Microsoft Teams User Auth connection failed: Please check your permissions and try again.",
"Exchange Online User Auth connection failed: Please check your permissions and try again.",
]
@@ -38,6 +38,7 @@ from api.models import (
StateChoices,
StatusChoices,
Task,
TenantAPIKey,
User,
UserRoleRelationship,
)
@@ -191,6 +192,108 @@ def create_test_user_rbac_limited(django_db_setup, django_db_blocker):
return user
@pytest.fixture(scope="function")
def create_test_user_rbac_manage_account(django_db_setup, django_db_blocker):
"""User with only manage_account permission (no manage_users)."""
with django_db_blocker.unblock():
user = User.objects.create_user(
name="testing_manage_account",
email="rbac_manage_account@rbac.com",
password=TEST_PASSWORD,
)
tenant = Tenant.objects.create(
name="Tenant Test Manage Account",
)
Membership.objects.create(
user=user,
tenant=tenant,
role=Membership.RoleChoices.OWNER,
)
role = Role.objects.create(
name="manage_account",
tenant_id=tenant.id,
manage_users=False,
manage_account=True,
manage_billing=False,
manage_providers=False,
manage_integrations=False,
manage_scans=False,
unlimited_visibility=False,
)
UserRoleRelationship.objects.create(
user=user,
role=role,
tenant_id=tenant.id,
)
return user
@pytest.fixture
def authenticated_client_rbac_manage_account(
create_test_user_rbac_manage_account, tenants_fixture, client
):
client.user = create_test_user_rbac_manage_account
serializer = TokenSerializer(
data={
"type": "tokens",
"email": "rbac_manage_account@rbac.com",
"password": TEST_PASSWORD,
}
)
serializer.is_valid()
access_token = serializer.validated_data["access"]
client.defaults["HTTP_AUTHORIZATION"] = f"Bearer {access_token}"
return client
@pytest.fixture(scope="function")
def create_test_user_rbac_manage_users_only(django_db_setup, django_db_blocker):
"""User with only manage_users permission (no manage_account)."""
with django_db_blocker.unblock():
user = User.objects.create_user(
name="testing_manage_users_only",
email="rbac_manage_users_only@rbac.com",
password=TEST_PASSWORD,
)
tenant = Tenant.objects.create(name="Tenant Test Manage Users Only")
Membership.objects.create(
user=user,
tenant=tenant,
role=Membership.RoleChoices.OWNER,
)
role = Role.objects.create(
name="manage_users_only",
tenant_id=tenant.id,
manage_users=True,
manage_account=False,
manage_billing=False,
manage_providers=False,
manage_integrations=False,
manage_scans=False,
unlimited_visibility=False,
)
UserRoleRelationship.objects.create(user=user, role=role, tenant_id=tenant.id)
return user
@pytest.fixture
def authenticated_client_rbac_manage_users_only(
create_test_user_rbac_manage_users_only, client
):
client.user = create_test_user_rbac_manage_users_only
serializer = TokenSerializer(
data={
"type": "tokens",
"email": "rbac_manage_users_only@rbac.com",
"password": TEST_PASSWORD,
}
)
serializer.is_valid()
access_token = serializer.validated_data["access"]
client.defaults["HTTP_AUTHORIZATION"] = f"Bearer {access_token}"
return client
@pytest.fixture
def authenticated_client_rbac(create_test_user_rbac, tenants_fixture, client):
client.user = create_test_user_rbac
@@ -1065,7 +1168,7 @@ def integrations_fixture(providers_fixture):
enabled=True,
connected=True,
integration_type="amazon_s3",
configuration={"key": "value1"},
credentials={"psswd": "1234"},
)
IntegrationProviderRelationship.objects.create(
@@ -1266,6 +1369,56 @@ def saml_sociallogin(users_fixture):
return sociallogin
@pytest.fixture
def api_keys_fixture(tenants_fixture, create_test_user):
"""Create test API keys for testing."""
tenant = tenants_fixture[0]
user = create_test_user
# Create and assign role to user for API key authentication
role = Role.objects.create(
tenant_id=tenant.id,
name="Test API Key Role",
unlimited_visibility=True,
manage_account=True,
)
UserRoleRelationship.objects.create(
user=user,
role=role,
tenant_id=tenant.id,
)
# Create API keys with different states
api_key1, raw_key1 = TenantAPIKey.objects.create_api_key(
name="Test API Key 1",
tenant_id=tenant.id,
entity=user,
)
api_key2, raw_key2 = TenantAPIKey.objects.create_api_key(
name="Test API Key 2",
tenant_id=tenant.id,
entity=user,
expiry_date=datetime.now(timezone.utc) + timedelta(days=60),
)
# Revoked API key
api_key3, raw_key3 = TenantAPIKey.objects.create_api_key(
name="Revoked API Key",
tenant_id=tenant.id,
entity=user,
)
api_key3.revoked = True
api_key3.save()
# Store raw keys on instances for testing
api_key1._raw_key = raw_key1
api_key2._raw_key = raw_key2
api_key3._raw_key = raw_key3
return [api_key1, api_key2, api_key3]
def get_authorization_header(access_token: str) -> dict:
return {"Authorization": f"Bearer {access_token}"}
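A hypothetical companion helper for the API-key flow, mirroring get_authorization_header above (not part of the diff):

def get_api_key_header(raw_key: str) -> dict:
    # Uses the Api-Key keyword configured in DRF_API_KEY.
    return {"Authorization": f"Api-Key {raw_key}"}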
@@ -3,8 +3,11 @@ from datetime import datetime, timezone
import openai
from celery.utils.log import get_task_logger
from api.models import Integration, LighthouseConfiguration, Provider
from api.utils import (
prowler_integration_connection_test,
prowler_provider_connection_test,
)
logger = get_task_logger(__name__)
@@ -83,3 +86,35 @@ def check_lighthouse_connection(lighthouse_config_id: str):
lighthouse_config.is_active = False
lighthouse_config.save()
return {"connected": False, "error": str(e), "available_models": []}
def check_integration_connection(integration_id: str):
"""
Business logic to check the connection status of an integration.
Args:
integration_id (str): The primary key of the Integration instance to check.
"""
integration = Integration.objects.filter(pk=integration_id, enabled=True).first()
if not integration:
logger.info(f"Integration {integration_id} is not enabled")
return {"connected": False, "error": "Integration is not enabled"}
try:
result = prowler_integration_connection_test(integration)
except Exception as e:
logger.warning(
f"Unexpected exception checking {integration.integration_type} integration connection: {str(e)}"
)
raise e
# Update integration connection status
integration.connected = result.is_connected
integration.connection_last_checked_at = datetime.now(tz=timezone.utc)
integration.save()
return {
"connected": result.is_connected,
"error": str(result.error) if result.error else None,
}
@@ -8,18 +8,22 @@ from botocore.exceptions import ClientError, NoCredentialsError, ParamValidation
from celery.utils.log import get_task_logger
from django.conf import settings
from api.db_utils import rls_transaction
from api.models import Scan
from prowler.config.config import (
csv_file_suffix,
html_file_suffix,
json_asff_file_suffix,
json_ocsf_file_suffix,
output_file_timestamp,
)
from prowler.lib.outputs.asff.asff import ASFF
from prowler.lib.outputs.compliance.aws_well_architected.aws_well_architected import (
AWSWellArchitected,
)
from prowler.lib.outputs.compliance.cis.cis_aws import AWSCIS
from prowler.lib.outputs.compliance.cis.cis_azure import AzureCIS
from prowler.lib.outputs.compliance.cis.cis_gcp import GCPCIS
from prowler.lib.outputs.compliance.cis.cis_github import GithubCIS
from prowler.lib.outputs.compliance.cis.cis_kubernetes import KubernetesCIS
from prowler.lib.outputs.compliance.cis.cis_m365 import M365CIS
from prowler.lib.outputs.compliance.ens.ens_aws import AWSENS
@@ -93,6 +97,9 @@ COMPLIANCE_CLASS_MAP = {
(lambda name: name == "prowler_threatscore_m365", ProwlerThreatScoreM365),
(lambda name: name.startswith("iso27001_"), M365ISO27001),
],
"github": [
(lambda name: name.startswith("cis_"), GithubCIS),
],
}
@@ -104,6 +111,7 @@ OUTPUT_FORMATS_MAPPING = {
"kwargs": {},
},
"json-ocsf": {"class": OCSF, "suffix": json_ocsf_file_suffix, "kwargs": {}},
"json-asff": {"class": ASFF, "suffix": json_asff_file_suffix, "kwargs": {}},
"html": {"class": HTML, "suffix": html_file_suffix, "kwargs": {"stats": {}}},
}
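Each entry pairs an output class with a file suffix and constructor kwargs; a hedged sketch of how the mapping is consumed downstream (findings and output_path are placeholders, and batch_write_data_to_file is assumed to be the writer entry point on Prowler output classes):

for mode, cfg in OUTPUT_FORMATS_MAPPING.items():
    writer = cfg["class"](
        findings=findings,  # a batch of FindingOutput objects
        file_path=f"{output_path}{cfg['suffix']}",
        **cfg["kwargs"],
    )
    writer.batch_write_data_to_file()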
@@ -167,7 +175,7 @@ def get_s3_client():
return s3_client
def _upload_to_s3(tenant_id: str, zip_path: str, scan_id: str) -> str | None:
"""
Upload the specified ZIP file to an S3 bucket.
If the S3 bucket environment variables are not configured,
@@ -184,7 +192,7 @@ def _upload_to_s3(tenant_id: str, zip_path: str, scan_id: str) -> str:
"""
bucket = base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET
if not bucket:
return
try:
s3 = get_s3_client()
@@ -244,15 +252,19 @@ def _generate_output_directory(
# Sanitize the prowler provider name to ensure it is a valid directory name
prowler_provider_sanitized = re.sub(r"[^\w\-]", "-", prowler_provider)
with rls_transaction(tenant_id):
started_at = Scan.objects.get(id=scan_id).started_at
timestamp = started_at.strftime("%Y%m%d%H%M%S")
path = (
f"{output_directory}/{tenant_id}/{scan_id}/prowler-output-"
f"{prowler_provider_sanitized}-{output_file_timestamp}"
f"{prowler_provider_sanitized}-{timestamp}"
)
os.makedirs("/".join(path.split("/")[:-1]), exist_ok=True)
compliance_path = (
f"{output_directory}/{tenant_id}/{scan_id}/compliance/prowler-output-"
f"{prowler_provider_sanitized}-{output_file_timestamp}"
f"{prowler_provider_sanitized}-{timestamp}"
)
os.makedirs("/".join(compliance_path.split("/")[:-1]), exist_ok=True)
@@ -0,0 +1,506 @@
import os
from glob import glob
from celery.utils.log import get_task_logger
from config.django.base import DJANGO_FINDINGS_BATCH_SIZE
from tasks.utils import batched
from api.db_utils import rls_transaction
from api.models import Finding, Integration, Provider
from api.utils import initialize_prowler_integration, initialize_prowler_provider
from prowler.lib.outputs.asff.asff import ASFF
from prowler.lib.outputs.compliance.generic.generic import GenericCompliance
from prowler.lib.outputs.csv.csv import CSV
from prowler.lib.outputs.finding import Finding as FindingOutput
from prowler.lib.outputs.html.html import HTML
from prowler.lib.outputs.ocsf.ocsf import OCSF
from prowler.providers.aws.aws_provider import AwsProvider
from prowler.providers.aws.lib.s3.s3 import S3
from prowler.providers.aws.lib.security_hub.security_hub import SecurityHub
from prowler.providers.common.models import Connection
logger = get_task_logger(__name__)
def get_s3_client_from_integration(
integration: Integration,
) -> tuple[bool, S3 | Connection]:
"""
Create and return a boto3 S3 client using AWS credentials from an integration.
Args:
integration (Integration): The integration to get the S3 client from.
Returns:
tuple[bool, S3 | Connection]: A tuple containing a boolean indicating if the connection was successful and the S3 client or connection object.
"""
s3 = S3(
**integration.credentials,
bucket_name=integration.configuration["bucket_name"],
output_directory=integration.configuration["output_directory"],
)
connection = s3.test_connection(
**integration.credentials,
bucket_name=integration.configuration["bucket_name"],
)
if connection.is_connected:
return True, s3
return False, connection
def upload_s3_integration(
tenant_id: str, provider_id: str, output_directory: str
) -> bool:
"""
Upload the specified output files to an S3 bucket from an integration.
Reconstructs output objects from files in the output directory instead of using serialized data.
Args:
tenant_id (str): The tenant identifier, used as part of the S3 key prefix.
provider_id (str): The provider identifier, used as part of the S3 key prefix.
output_directory (str): Path to the directory containing output files.
Returns:
        bool: True if all integrations executed successfully, False otherwise.
Raises:
botocore.exceptions.ClientError: If the upload attempt to S3 fails for any reason.
"""
logger.info(f"Processing S3 integrations for provider {provider_id}")
try:
with rls_transaction(tenant_id):
integrations = list(
Integration.objects.filter(
integrationproviderrelationship__provider_id=provider_id,
integration_type=Integration.IntegrationChoices.AMAZON_S3,
enabled=True,
)
)
if not integrations:
logger.error(f"No S3 integrations found for provider {provider_id}")
return False
integration_executions = 0
for integration in integrations:
try:
connected, s3 = get_s3_client_from_integration(integration)
except Exception as e:
logger.info(
f"S3 connection failed for integration {integration.id}: {e}"
)
integration.connected = False
integration.save()
continue
if connected:
try:
# Reconstruct generated_outputs from files in output directory
# This approach scans the output directory for files and creates the appropriate
# output objects based on file extensions and naming patterns.
generated_outputs = {"regular": [], "compliance": []}
# Find and recreate regular outputs (CSV, HTML, OCSF)
output_file_patterns = {
".csv": CSV,
".html": HTML,
".ocsf.json": OCSF,
".asff.json": ASFF,
}
base_dir = os.path.dirname(output_directory)
for extension, output_class in output_file_patterns.items():
pattern = f"{output_directory}*{extension}"
for file_path in glob(pattern):
if os.path.exists(file_path):
output = output_class(findings=[], file_path=file_path)
output.create_file_descriptor(file_path)
generated_outputs["regular"].append(output)
# Find and recreate compliance outputs
compliance_pattern = os.path.join(base_dir, "compliance", "*.csv")
for file_path in glob(compliance_pattern):
if os.path.exists(file_path):
output = GenericCompliance(
findings=[],
compliance=None,
file_path=file_path,
file_extension=".csv",
)
output.create_file_descriptor(file_path)
generated_outputs["compliance"].append(output)
# Use send_to_bucket with recreated generated_outputs objects
s3.send_to_bucket(generated_outputs)
except Exception as e:
logger.error(
f"S3 upload failed for integration {integration.id}: {e}"
)
continue
integration_executions += 1
else:
integration.connected = False
integration.save()
logger.error(
f"S3 upload failed, connection failed for integration {integration.id}: {s3.error}"
)
result = integration_executions == len(integrations)
if result:
logger.info(
f"All the S3 integrations completed successfully for provider {provider_id}"
)
else:
logger.info(f"Some S3 integrations failed for provider {provider_id}")
return result
except Exception as e:
logger.error(f"S3 integrations failed for provider {provider_id}: {str(e)}")
return False
def get_security_hub_client_from_integration(
integration: Integration, tenant_id: str, findings: list
) -> tuple[bool, SecurityHub | Connection]:
"""
Create and return a SecurityHub client using AWS credentials from an integration.
Args:
integration (Integration): The integration to get the Security Hub client from.
tenant_id (str): The tenant identifier.
findings (list): List of findings in ASFF format to send to Security Hub.
Returns:
tuple[bool, SecurityHub | Connection]: A tuple containing a boolean indicating
if the connection was successful and the SecurityHub client or connection object.
"""
# Get the provider associated with this integration
with rls_transaction(tenant_id):
provider_relationship = integration.integrationproviderrelationship_set.first()
if not provider_relationship:
            return False, Connection(
is_connected=False, error="No provider associated with this integration"
)
provider_uid = provider_relationship.provider.uid
provider_secret = provider_relationship.provider.secret.secret
credentials = (
integration.credentials if integration.credentials else provider_secret
)
connection = SecurityHub.test_connection(
aws_account_id=provider_uid,
raise_on_exception=False,
**credentials,
)
if connection.is_connected:
all_security_hub_regions = AwsProvider.get_available_aws_service_regions(
"securityhub", connection.partition
)
# Create regions status dictionary
regions_status = {}
for region in set(all_security_hub_regions):
regions_status[region] = region in connection.enabled_regions
# Save regions information in the integration configuration
with rls_transaction(tenant_id):
integration.configuration["regions"] = regions_status
integration.save()
# Create SecurityHub client with all necessary parameters
security_hub = SecurityHub(
aws_account_id=provider_uid,
findings=findings,
send_only_fails=integration.configuration.get("send_only_fails", False),
aws_security_hub_available_regions=list(connection.enabled_regions),
**credentials,
)
return True, security_hub
else:
# Reset regions information if connection fails
with rls_transaction(tenant_id):
integration.configuration["regions"] = {}
integration.save()
return False, connection
def upload_security_hub_integration(
tenant_id: str, provider_id: str, scan_id: str
) -> bool:
"""
Upload findings to AWS Security Hub using configured integrations.
This function retrieves findings from the database, transforms them to ASFF format,
and sends them to AWS Security Hub using the configured integration credentials.
Args:
tenant_id (str): The tenant identifier.
provider_id (str): The provider identifier.
scan_id (str): The scan identifier for which to send findings.
Returns:
bool: True if all integrations executed successfully, False otherwise.
"""
logger.info(f"Processing Security Hub integrations for provider {provider_id}")
try:
with rls_transaction(tenant_id):
# Get Security Hub integrations for this provider
integrations = list(
Integration.objects.filter(
integrationproviderrelationship__provider_id=provider_id,
integration_type=Integration.IntegrationChoices.AWS_SECURITY_HUB,
enabled=True,
)
)
if not integrations:
logger.error(
f"No Security Hub integrations found for provider {provider_id}"
)
return False
# Get the provider object
provider = Provider.objects.get(id=provider_id)
# Initialize prowler provider for finding transformation
prowler_provider = initialize_prowler_provider(provider)
# Process each Security Hub integration
integration_executions = 0
total_findings_sent = {} # Track findings sent per integration
for integration in integrations:
try:
# Initialize Security Hub client for this integration
# We'll create the client once and reuse it for all batches
security_hub_client = None
send_only_fails = integration.configuration.get(
"send_only_fails", False
)
total_findings_sent[integration.id] = 0
# Process findings in batches to avoid memory issues
has_findings = False
batch_number = 0
with rls_transaction(tenant_id):
qs = (
Finding.all_objects.filter(tenant_id=tenant_id, scan_id=scan_id)
.order_by("uid")
.iterator()
)
for batch, _ in batched(qs, DJANGO_FINDINGS_BATCH_SIZE):
batch_number += 1
has_findings = True
# Transform findings for this batch
transformed_findings = [
FindingOutput.transform_api_finding(
finding, prowler_provider
)
for finding in batch
]
# Convert to ASFF format
asff_transformer = ASFF(
findings=transformed_findings,
file_path="",
file_extension="json",
)
asff_transformer.transform(transformed_findings)
# Get the batch of ASFF findings
batch_asff_findings = asff_transformer.data
if batch_asff_findings:
# Create Security Hub client for first batch or reuse existing
if not security_hub_client:
connected, security_hub = (
get_security_hub_client_from_integration(
integration, tenant_id, batch_asff_findings
)
)
if not connected:
logger.error(
f"Security Hub connection failed for integration {integration.id}: "
f"{security_hub.error}"
)
integration.connected = False
integration.save()
break # Skip this integration
security_hub_client = security_hub
logger.info(
f"Sending {'fail' if send_only_fails else 'all'} findings to Security Hub via "
f"integration {integration.id}"
)
else:
# Update findings in existing client for this batch
security_hub_client._findings_per_region = (
security_hub_client.filter(
batch_asff_findings, send_only_fails
)
)
# Send this batch to Security Hub
try:
findings_sent = (
security_hub_client.batch_send_to_security_hub()
)
total_findings_sent[integration.id] += findings_sent
if findings_sent > 0:
logger.debug(
f"Sent batch {batch_number} with {findings_sent} findings to Security Hub"
)
except Exception as batch_error:
logger.error(
f"Failed to send batch {batch_number} to Security Hub: {str(batch_error)}"
)
# Clear memory after processing each batch
asff_transformer._data.clear()
del batch_asff_findings
del transformed_findings
if not has_findings:
logger.info(
f"No findings to send to Security Hub for scan {scan_id}"
)
integration_executions += 1
elif security_hub_client:
if total_findings_sent[integration.id] > 0:
logger.info(
f"Successfully sent {total_findings_sent[integration.id]} total findings to Security Hub via integration {integration.id}"
)
integration_executions += 1
else:
logger.warning(
f"No findings were sent to Security Hub via integration {integration.id}"
)
# Archive previous findings if configured to do so
if integration.configuration.get(
"archive_previous_findings", False
):
logger.info(
f"Archiving previous findings in Security Hub via integration {integration.id}"
)
try:
findings_archived = (
security_hub_client.archive_previous_findings()
)
logger.info(
f"Successfully archived {findings_archived} previous findings in Security Hub"
)
except Exception as archive_error:
logger.warning(
f"Failed to archive previous findings: {str(archive_error)}"
)
except Exception as e:
logger.error(
f"Security Hub integration {integration.id} failed: {str(e)}"
)
continue
result = integration_executions == len(integrations)
if result:
logger.info(
f"All Security Hub integrations completed successfully for provider {provider_id}"
)
else:
logger.error(
f"Some Security Hub integrations failed for provider {provider_id}"
)
return result
except Exception as e:
logger.error(
f"Security Hub integrations failed for provider {provider_id}: {str(e)}"
)
return False
def send_findings_to_jira(
tenant_id: str,
integration_id: str,
project_key: str,
issue_type: str,
finding_ids: list[str],
):
with rls_transaction(tenant_id):
integration = Integration.objects.get(id=integration_id)
jira_integration = initialize_prowler_integration(integration)
num_tickets_created = 0
for finding_id in finding_ids:
with rls_transaction(tenant_id):
finding_instance = (
Finding.all_objects.select_related("scan__provider")
.prefetch_related("resources")
.get(id=finding_id)
)
# Extract resource information
resource = (
finding_instance.resources.first()
if finding_instance.resources.exists()
else None
)
resource_uid = resource.uid if resource else ""
resource_name = resource.name if resource else ""
resource_tags = {}
if resource and hasattr(resource, "tags"):
resource_tags = resource.get_tags(tenant_id)
# Get region
region = resource.region if resource and resource.region else ""
# Extract remediation information from check_metadata
check_metadata = finding_instance.check_metadata
remediation = check_metadata.get("remediation", {})
recommendation = remediation.get("recommendation", {})
remediation_code = remediation.get("code", {})
# Send the individual finding to Jira
result = jira_integration.send_finding(
check_id=finding_instance.check_id,
check_title=check_metadata.get("checktitle", ""),
severity=finding_instance.severity,
status=finding_instance.status,
status_extended=finding_instance.status_extended or "",
provider=finding_instance.scan.provider.provider,
region=region,
resource_uid=resource_uid,
resource_name=resource_name,
risk=check_metadata.get("risk", ""),
recommendation_text=recommendation.get("text", ""),
recommendation_url=recommendation.get("url", ""),
remediation_code_native_iac=remediation_code.get("nativeiac", ""),
remediation_code_terraform=remediation_code.get("terraform", ""),
remediation_code_cli=remediation_code.get("cli", ""),
remediation_code_other=remediation_code.get("other", ""),
resource_tags=resource_tags,
compliance=finding_instance.compliance or {},
project_key=project_key,
issue_type=issue_type,
)
if result:
num_tickets_created += 1
else:
logger.error(f"Failed to send finding {finding_id} to Jira")
return {
"created_count": num_tickets_created,
"failed_count": len(finding_ids) - num_tickets_created,
}
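Both this module and generate_outputs_task iterate findings with batched(qs, DJANGO_FINDINGS_BATCH_SIZE) from tasks.utils; judging by the unpacking at the call sites, it yields (batch, is_last) pairs, and a minimal sketch of that assumed contract is:

from itertools import islice


def batched(iterable, batch_size):
    # Assumed contract: yield (list_of_items, is_last) pairs lazily, since it
    # is fed Django .iterator() querysets that should not be materialized.
    iterator = iter(iterable)
    batch = list(islice(iterator, batch_size))
    while batch:
        next_batch = list(islice(iterator, batch_size))
        yield batch, not next_batch
        batch = next_batch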
@@ -1,11 +1,12 @@
import json
import time
from collections import defaultdict
from copy import deepcopy
from datetime import datetime, timezone
from celery.utils.log import get_task_logger
from config.settings.celery import CELERY_DEADLOCK_ATTEMPTS
from django.db import IntegrityError, OperationalError
from django.db.models import Case, Count, IntegerField, Prefetch, Sum, When
from tasks.utils import CustomEncoder
@@ -13,7 +14,11 @@ from api.compliance import (
PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE,
generate_scan_compliance,
)
from api.db_utils import create_objects_in_batches, rls_transaction
from api.db_utils import (
create_objects_in_batches,
rls_transaction,
update_objects_in_batches,
)
from api.exceptions import ProviderConnectionError
from api.models import (
ComplianceRequirementOverview,
@@ -103,7 +108,10 @@ def _store_resources(
def perform_prowler_scan(
tenant_id: str,
scan_id: str,
provider_id: str,
checks_to_execute: list[str] | None = None,
):
"""
Perform a scan using Prowler and store the findings and resources in the database.
@@ -175,6 +183,7 @@ def perform_prowler_scan(
resource_cache = {}
tag_cache = {}
last_status_cache = {}
resource_failed_findings_cache = defaultdict(int)
for progress, findings in prowler_scan.scan():
for finding in findings:
@@ -200,6 +209,9 @@ def perform_prowler_scan(
},
)
resource_cache[resource_uid] = resource_instance
# Initialize all processed resources in the cache
resource_failed_findings_cache[resource_uid] = 0
else:
resource_instance = resource_cache[resource_uid]
@@ -313,6 +325,11 @@ def perform_prowler_scan(
)
finding_instance.add_resources([resource_instance])
# Increment failed_findings_count cache if the finding status is FAIL and not muted
if status == FindingStatus.FAIL and not finding.muted:
resource_uid = finding.resource_uid
resource_failed_findings_cache[resource_uid] += 1
# Update scan resource summaries
scan_resource_cache.add(
(
@@ -330,6 +347,24 @@ def perform_prowler_scan(
scan_instance.state = StateChoices.COMPLETED
# Update failed_findings_count for all resources in batches if scan completed successfully
if resource_failed_findings_cache:
resources_to_update = []
for resource_uid, failed_count in resource_failed_findings_cache.items():
if resource_uid in resource_cache:
resource_instance = resource_cache[resource_uid]
resource_instance.failed_findings_count = failed_count
resources_to_update.append(resource_instance)
if resources_to_update:
update_objects_in_batches(
tenant_id=tenant_id,
model=Resource,
objects=resources_to_update,
fields=["failed_findings_count"],
batch_size=1000,
)
except Exception as e:
logger.error(f"Error performing scan {scan_id}: {e}")
exception = e
@@ -376,7 +411,6 @@ def perform_prowler_scan(
def aggregate_findings(tenant_id: str, scan_id: str):
"""
Aggregates findings for a given scan and stores the results in the ScanSummary table.
Also updates the failed_findings_count for each resource based on the latest findings.
This function retrieves all findings associated with a given `scan_id` and calculates various
metrics such as counts of failed, passed, and muted findings, as well as their deltas (new,
@@ -405,8 +439,6 @@ def aggregate_findings(tenant_id: str, scan_id: str):
- muted_new: Muted findings with a delta of 'new'.
- muted_changed: Muted findings with a delta of 'changed'.
"""
_update_resource_failed_findings_count(tenant_id, scan_id)
with rls_transaction(tenant_id):
findings = Finding.objects.filter(tenant_id=tenant_id, scan_id=scan_id)
@@ -531,48 +563,6 @@ def aggregate_findings(tenant_id: str, scan_id: str):
ScanSummary.objects.bulk_create(scan_aggregations, batch_size=3000)
def _update_resource_failed_findings_count(tenant_id: str, scan_id: str):
"""
Update the failed_findings_count field for resources based on the latest findings.
This function calculates the number of failed findings for each resource by:
1. Getting the latest finding for each finding.uid
2. Counting failed findings per resource
3. Updating the failed_findings_count field for each resource
Args:
tenant_id (str): The ID of the tenant to which the scan belongs.
scan_id (str): The ID of the scan for which to update resource counts.
"""
with rls_transaction(tenant_id):
scan = Scan.objects.get(pk=scan_id)
provider_id = str(scan.provider_id)
with connection.cursor() as cursor:
cursor.execute(
"""
UPDATE resources AS r
SET failed_findings_count = COALESCE((
SELECT COUNT(*) FROM (
SELECT DISTINCT ON (f.uid) f.uid
FROM findings AS f
JOIN resource_finding_mappings AS rfm
ON rfm.finding_id = f.id
WHERE f.tenant_id = %s
AND f.status = %s
AND f.muted = FALSE
AND rfm.resource_id = r.id
ORDER BY f.uid, f.inserted_at DESC
) AS latest_uids
), 0)
WHERE r.tenant_id = %s
AND r.provider_id = %s
""",
[tenant_id, FindingStatus.FAIL, tenant_id, provider_id],
)
def create_compliance_requirements(tenant_id: str, scan_id: str):
"""
Create detailed compliance requirement overview records for a scan.
@@ -2,13 +2,17 @@ from datetime import datetime, timedelta, timezone
from pathlib import Path
from shutil import rmtree
from celery import chain, group, shared_task
from celery.utils.log import get_task_logger
from config.celery import RLSTask
from config.django.base import DJANGO_FINDINGS_BATCH_SIZE, DJANGO_TMP_OUTPUT_DIRECTORY
from django_celery_beat.models import PeriodicTask
from tasks.jobs.backfill import backfill_resource_scan_summaries
from tasks.jobs.connection import (
check_integration_connection,
check_lighthouse_connection,
check_provider_connection,
)
from tasks.jobs.deletion import delete_provider, delete_tenant
from tasks.jobs.export import (
COMPLIANCE_CLASS_MAP,
@@ -17,6 +21,11 @@ from tasks.jobs.export import (
_generate_output_directory,
_upload_to_s3,
)
from tasks.jobs.integrations import (
send_findings_to_jira,
upload_s3_integration,
upload_security_hub_integration,
)
from tasks.jobs.scan import (
aggregate_findings,
create_compliance_requirements,
@@ -27,7 +36,7 @@ from tasks.utils import batched, get_next_execution_datetime
from api.compliance import get_compliance_frameworks
from api.db_utils import rls_transaction
from api.decorators import set_tenant
from api.models import Finding, Integration, Provider, Scan, ScanSummary, StateChoices
from api.utils import initialize_prowler_provider
from api.v1.serializers import ScanTaskSerializer
from prowler.lib.check.compliance_models import Compliance
@@ -54,6 +63,11 @@ def _perform_scan_complete_tasks(tenant_id: str, scan_id: str, provider_id: str)
generate_outputs_task.si(
scan_id=scan_id, provider_id=provider_id, tenant_id=tenant_id
),
check_integrations_task.si(
tenant_id=tenant_id,
provider_id=provider_id,
scan_id=scan_id,
),
).apply_async()
@@ -74,6 +88,18 @@ def check_provider_connection_task(provider_id: str):
return check_provider_connection(provider_id=provider_id)
@shared_task(base=RLSTask, name="integration-connection-check")
@set_tenant
def check_integration_connection_task(integration_id: str):
"""
Task to check the connection status of an integration.
Args:
integration_id (str): The primary key of the Integration instance to check.
"""
return check_integration_connection(integration_id=integration_id)
@shared_task(
base=RLSTask, name="provider-deletion", queue="deletion", autoretry_for=(Exception,)
)
@@ -302,12 +328,30 @@ def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
ScanSummary.objects.filter(scan_id=scan_id)
)
# Check if we need to generate ASFF output for AWS providers with SecurityHub integration
generate_asff = False
if provider_type == "aws":
security_hub_integrations = Integration.objects.filter(
integrationproviderrelationship__provider_id=provider_id,
integration_type=Integration.IntegrationChoices.AWS_SECURITY_HUB,
enabled=True,
)
generate_asff = security_hub_integrations.exists()
qs = (
Finding.all_objects.filter(tenant_id=tenant_id, scan_id=scan_id)
.order_by("uid")
.iterator()
)
for batch, is_last in batched(qs, DJANGO_FINDINGS_BATCH_SIZE):
fos = [FindingOutput.transform_api_finding(f, prowler_provider) for f in batch]
# Outputs
for mode, cfg in OUTPUT_FORMATS_MAPPING.items():
# Skip ASFF generation if not needed
if mode == "json-asff" and not generate_asff:
continue
cls = cfg["class"]
suffix = cfg["suffix"]
extra = cfg.get("kwargs", {}).copy()
@@ -361,7 +405,34 @@ def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
compressed = _compress_output_files(out_dir)
upload_uri = _upload_to_s3(tenant_id, compressed, scan_id)
# S3 integrations (need output_directory)
with rls_transaction(tenant_id):
s3_integrations = Integration.objects.filter(
integrationproviderrelationship__provider_id=provider_id,
integration_type=Integration.IntegrationChoices.AMAZON_S3,
enabled=True,
)
if s3_integrations:
# Pass the output directory path to S3 integration task to reconstruct objects from files
s3_integration_task.apply_async(
kwargs={
"tenant_id": tenant_id,
"provider_id": provider_id,
"output_directory": out_dir,
}
).get(
disable_sync_subtasks=False
) # TODO: This synchronous execution is NOT recommended
# We're forced to do this because we need the files to exist before deletion occurs.
# Once we have the periodic file cleanup task implemented, we should:
# 1. Remove this .get() call and make it fully async
# 2. For Cloud deployments, develop a secondary approach where outputs are stored
# directly in S3 and read from there, eliminating local file dependencies
if upload_uri:
# TODO: We need to create a new periodic task to delete the output files
# This task shouldn't be responsible for deleting the output files
try:
rmtree(Path(compressed).parent, ignore_errors=True)
except Exception as e:
@@ -372,7 +443,10 @@ def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
Scan.all_objects.filter(id=scan_id).update(output_location=final_location)
logger.info(f"Scan outputs at {final_location}")
return {
"upload": did_upload,
}
@shared_task(name="backfill-scan-resource-summaries", queue="backfill")
@@ -387,7 +461,7 @@ def backfill_scan_resource_summaries_task(tenant_id: str, scan_id: str):
return backfill_resource_scan_summaries(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(base=RLSTask, name="scan-compliance-overviews", queue="compliance")
def create_compliance_requirements_task(tenant_id: str, scan_id: str):
"""
Creates detailed compliance requirement records for a scan.
@@ -420,3 +494,122 @@ def check_lighthouse_connection_task(lighthouse_config_id: str, tenant_id: str =
- 'available_models' (list): List of available models if connection is successful.
"""
return check_lighthouse_connection(lighthouse_config_id=lighthouse_config_id)
@shared_task(name="integration-check")
def check_integrations_task(tenant_id: str, provider_id: str, scan_id: str = None):
"""
Check and execute all configured integrations for a provider.
Args:
tenant_id (str): The tenant identifier
provider_id (str): The provider identifier
scan_id (str, optional): The scan identifier for integrations that need scan data
"""
logger.info(f"Checking integrations for provider {provider_id}")
try:
integration_tasks = []
with rls_transaction(tenant_id):
integrations = Integration.objects.filter(
integrationproviderrelationship__provider_id=provider_id,
enabled=True,
)
if not integrations.exists():
logger.info(f"No integrations configured for provider {provider_id}")
return {"integrations_processed": 0}
# Security Hub integration
security_hub_integrations = integrations.filter(
integration_type=Integration.IntegrationChoices.AWS_SECURITY_HUB
)
if security_hub_integrations.exists():
integration_tasks.append(
security_hub_integration_task.s(
tenant_id=tenant_id, provider_id=provider_id, scan_id=scan_id
)
)
# TODO: Add other integration types here
# slack_integrations = integrations.filter(
# integration_type=Integration.IntegrationChoices.SLACK
# )
# if slack_integrations.exists():
# integration_tasks.append(
# slack_integration_task.s(
# tenant_id=tenant_id,
# provider_id=provider_id,
# )
# )
except Exception as e:
logger.error(f"Integration check failed for provider {provider_id}: {str(e)}")
return {"integrations_processed": 0, "error": str(e)}
# Execute all integration tasks in parallel if any were found
if integration_tasks:
job = group(integration_tasks)
job.apply_async()
logger.info(f"Launched {len(integration_tasks)} integration task(s)")
return {"integrations_processed": len(integration_tasks)}
@shared_task(
base=RLSTask,
name="integration-s3",
queue="integrations",
)
def s3_integration_task(
tenant_id: str,
provider_id: str,
output_directory: str,
):
"""
Process S3 integrations for a provider.
Args:
tenant_id (str): The tenant identifier
provider_id (str): The provider identifier
output_directory (str): Path to the directory containing output files
"""
return upload_s3_integration(tenant_id, provider_id, output_directory)
@shared_task(
base=RLSTask,
name="integration-security-hub",
queue="integrations",
)
def security_hub_integration_task(
tenant_id: str,
provider_id: str,
scan_id: str,
):
"""
Process Security Hub integrations for a provider.
Args:
tenant_id (str): The tenant identifier
provider_id (str): The provider identifier
scan_id (str): The scan identifier
"""
return upload_security_hub_integration(tenant_id, provider_id, scan_id)
@shared_task(
base=RLSTask,
name="integration-jira",
queue="integrations",
)
def jira_integration_task(
tenant_id: str,
integration_id: str,
project_key: str,
issue_type: str,
finding_ids: list[str],
):
return send_findings_to_jira(
tenant_id, integration_id, project_key, issue_type, finding_ids
)
@@ -1,10 +1,15 @@
import uuid
from datetime import datetime, timezone
from unittest.mock import MagicMock, patch
import pytest
from tasks.jobs.connection import (
check_integration_connection,
check_lighthouse_connection,
check_provider_connection,
)
from api.models import Integration, LighthouseConfiguration, Provider
@pytest.mark.parametrize(
@@ -127,3 +132,128 @@ def test_check_lighthouse_connection_missing_api_key(mock_lighthouse_get):
assert result["available_models"] == []
assert mock_lighthouse_instance.is_active is False
mock_lighthouse_instance.save.assert_called_once()
@pytest.mark.django_db
class TestCheckIntegrationConnection:
def setup_method(self):
self.integration_id = str(uuid.uuid4())
@patch("tasks.jobs.connection.Integration.objects.filter")
@patch("tasks.jobs.connection.prowler_integration_connection_test")
def test_check_integration_connection_success(
self, mock_prowler_test, mock_integration_filter
):
"""Test successful integration connection check with enabled=True filter."""
mock_integration = MagicMock()
mock_integration.id = self.integration_id
mock_integration.integration_type = Integration.IntegrationChoices.AMAZON_S3
mock_queryset = MagicMock()
mock_queryset.first.return_value = mock_integration
mock_integration_filter.return_value = mock_queryset
mock_connection_result = MagicMock()
mock_connection_result.is_connected = True
mock_connection_result.error = None
mock_prowler_test.return_value = mock_connection_result
result = check_integration_connection(integration_id=self.integration_id)
# Verify that Integration.objects.filter was called with enabled=True filter
mock_integration_filter.assert_called_once_with(
pk=self.integration_id, enabled=True
)
mock_queryset.first.assert_called_once()
mock_prowler_test.assert_called_once_with(mock_integration)
# Verify the integration properties were updated
assert mock_integration.connected is True
assert mock_integration.connection_last_checked_at is not None
mock_integration.save.assert_called_once()
# Verify the return value
assert result["connected"] is True
assert result["error"] is None
@patch("tasks.jobs.connection.Integration.objects.filter")
@patch("tasks.jobs.connection.prowler_integration_connection_test")
def test_check_integration_connection_failure(
self, mock_prowler_test, mock_integration_filter
):
"""Test failed integration connection check."""
mock_integration = MagicMock()
mock_integration.id = self.integration_id
mock_queryset = MagicMock()
mock_queryset.first.return_value = mock_integration
mock_integration_filter.return_value = mock_queryset
test_error = Exception("Connection failed")
mock_connection_result = MagicMock()
mock_connection_result.is_connected = False
mock_connection_result.error = test_error
mock_prowler_test.return_value = mock_connection_result
result = check_integration_connection(integration_id=self.integration_id)
# Verify that Integration.objects.filter was called with enabled=True filter
mock_integration_filter.assert_called_once_with(
pk=self.integration_id, enabled=True
)
mock_queryset.first.assert_called_once()
# Verify the integration properties were updated
assert mock_integration.connected is False
assert mock_integration.connection_last_checked_at is not None
mock_integration.save.assert_called_once()
# Verify the return value
assert result["connected"] is False
assert result["error"] == str(test_error)
@patch("tasks.jobs.connection.Integration.objects.filter")
def test_check_integration_connection_not_enabled(self, mock_integration_filter):
"""Test that disabled integrations return proper error response."""
# Mock that no enabled integration is found
mock_queryset = MagicMock()
mock_queryset.first.return_value = None
mock_integration_filter.return_value = mock_queryset
result = check_integration_connection(integration_id=self.integration_id)
# Verify the filter was called with enabled=True
mock_integration_filter.assert_called_once_with(
pk=self.integration_id, enabled=True
)
mock_queryset.first.assert_called_once()
# Verify the return value matches the expected error response
assert result["connected"] is False
assert result["error"] == "Integration is not enabled"
@patch("tasks.jobs.connection.Integration.objects.filter")
@patch("tasks.jobs.connection.prowler_integration_connection_test")
def test_check_integration_connection_exception(
self, mock_prowler_test, mock_integration_filter
):
"""Test integration connection check when prowler test raises exception."""
mock_integration = MagicMock()
mock_integration.id = self.integration_id
mock_queryset = MagicMock()
mock_queryset.first.return_value = mock_integration
mock_integration_filter.return_value = mock_queryset
test_exception = Exception("Unexpected error during connection test")
mock_prowler_test.side_effect = test_exception
with pytest.raises(Exception, match="Unexpected error during connection test"):
check_integration_connection(integration_id=self.integration_id)
# Verify that Integration.objects.filter was called with enabled=True filter
mock_integration_filter.assert_called_once_with(
pk=self.integration_id, enabled=True
)
mock_queryset.first.assert_called_once()
mock_prowler_test.assert_called_once_with(mock_integration)
@@ -1,5 +1,7 @@
import os
import uuid
import zipfile
from datetime import datetime
from pathlib import Path
from unittest.mock import MagicMock, patch
@@ -127,14 +129,26 @@ class TestOutputs:
_upload_to_s3("tenant", str(zip_path), "scan")
mock_logger.assert_called()
@patch("tasks.jobs.export.rls_transaction")
@patch("tasks.jobs.export.Scan")
def test_generate_output_directory_creates_paths(
self, mock_scan, mock_rls_transaction, tmpdir
):
# Mock the scan object with a started_at timestamp
mock_scan_instance = MagicMock()
mock_scan_instance.started_at = datetime(2023, 6, 15, 10, 30, 45)
mock_scan.objects.get.return_value = mock_scan_instance
# Mock rls_transaction as a context manager
mock_rls_transaction.return_value.__enter__ = MagicMock()
mock_rls_transaction.return_value.__exit__ = MagicMock(return_value=False)
base_tmp = Path(str(tmpdir.mkdir("generate_output")))
base_dir = str(base_tmp)
tenant_id = str(uuid.uuid4())
scan_id = str(uuid.uuid4())
provider = "aws"
expected_timestamp = "20230615103045"
path, compliance = _generate_output_directory(
base_dir, provider, tenant_id, scan_id
@@ -143,17 +157,29 @@ class TestOutputs:
assert os.path.isdir(os.path.dirname(path))
assert os.path.isdir(os.path.dirname(compliance))
assert path.endswith(f"{provider}-{expected_timestamp}")
assert compliance.endswith(f"{provider}-{expected_timestamp}")
@patch("tasks.jobs.export.rls_transaction")
@patch("tasks.jobs.export.Scan")
def test_generate_output_directory_invalid_character(
self, mock_scan, mock_rls_transaction, tmpdir
):
# Mock the scan object with a started_at timestamp
mock_scan_instance = MagicMock()
mock_scan_instance.started_at = datetime(2023, 6, 15, 10, 30, 45)
mock_scan.objects.get.return_value = mock_scan_instance
# Mock rls_transaction as a context manager
mock_rls_transaction.return_value.__enter__ = MagicMock()
mock_rls_transaction.return_value.__exit__ = MagicMock(return_value=False)
base_tmp = Path(str(tmpdir.mkdir("generate_output")))
base_dir = str(base_tmp)
tenant_id = str(uuid.uuid4())
scan_id = str(uuid.uuid4())
provider = "aws/test@check"
expected_timestamp = "20230615103045"
path, compliance = _generate_output_directory(
base_dir, provider, tenant_id, scan_id
@@ -162,5 +188,5 @@ class TestOutputs:
assert os.path.isdir(os.path.dirname(path))
assert os.path.isdir(os.path.dirname(compliance))
assert path.endswith(f"aws-test-check-{expected_timestamp}")
assert compliance.endswith(f"aws-test-check-{expected_timestamp}")
File diff suppressed because it is too large.
@@ -7,22 +7,14 @@ import pytest
from tasks.jobs.scan import (
_create_finding_delta,
_store_resources,
_update_resource_failed_findings_count,
create_compliance_requirements,
perform_prowler_scan,
)
from tasks.utils import CustomEncoder
from api.exceptions import ProviderConnectionError
from api.models import (
Finding,
Provider,
Resource,
Scan,
Severity,
StateChoices,
StatusChoices,
)
from api.models import Finding, Provider, Resource, Scan, StateChoices, StatusChoices
from prowler.lib.check.models import Severity
@pytest.mark.django_db
@@ -182,6 +174,9 @@ class TestPerformScan:
assert tag_keys == set(finding.resource_tags.keys())
assert tag_values == set(finding.resource_tags.values())
# Assert that failed_findings_count is 0 (finding is PASS and muted)
assert scan_resource.failed_findings_count == 0
@patch("tasks.jobs.scan.ProwlerScan")
@patch(
"tasks.jobs.scan.initialize_prowler_provider",
@@ -386,6 +381,359 @@ class TestPerformScan:
assert resource == resource_instance
assert resource_uid_tuple == (resource_instance.uid, resource_instance.region)
def test_perform_prowler_scan_with_failed_findings(
self,
tenants_fixture,
scans_fixture,
providers_fixture,
):
"""Test that failed findings increment the failed_findings_count"""
with (
patch("api.db_utils.rls_transaction"),
patch(
"tasks.jobs.scan.initialize_prowler_provider"
) as mock_initialize_prowler_provider,
patch("tasks.jobs.scan.ProwlerScan") as mock_prowler_scan_class,
patch(
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE",
new_callable=dict,
),
patch("api.compliance.PROWLER_CHECKS", new_callable=dict),
):
# Ensure the database is empty
assert Finding.objects.count() == 0
assert Resource.objects.count() == 0
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
# Ensure the provider type is 'aws'
provider.provider = Provider.ProviderChoices.AWS
provider.save()
tenant_id = str(tenant.id)
scan_id = str(scan.id)
provider_id = str(provider.id)
# Mock a FAIL finding that is not muted
fail_finding = MagicMock()
fail_finding.uid = "fail_finding_uid"
fail_finding.status = StatusChoices.FAIL
fail_finding.status_extended = "test fail status"
fail_finding.severity = Severity.high
fail_finding.check_id = "fail_check"
fail_finding.get_metadata.return_value = {"key": "value"}
fail_finding.resource_uid = "resource_uid_fail"
fail_finding.resource_name = "fail_resource"
fail_finding.region = "us-east-1"
fail_finding.service_name = "ec2"
fail_finding.resource_type = "instance"
fail_finding.resource_tags = {"env": "test"}
fail_finding.muted = False
fail_finding.raw = {}
fail_finding.resource_metadata = {"test": "metadata"}
fail_finding.resource_details = {"details": "test"}
fail_finding.partition = "aws"
fail_finding.compliance = {"compliance1": "FAIL"}
# Mock the ProwlerScan instance
mock_prowler_scan_instance = MagicMock()
mock_prowler_scan_instance.scan.return_value = [(100, [fail_finding])]
mock_prowler_scan_class.return_value = mock_prowler_scan_instance
# Mock prowler_provider
mock_prowler_provider_instance = MagicMock()
mock_prowler_provider_instance.get_regions.return_value = ["us-east-1"]
mock_initialize_prowler_provider.return_value = (
mock_prowler_provider_instance
)
# Call the function under test
perform_prowler_scan(tenant_id, scan_id, provider_id, [])
# Refresh instances from the database
scan.refresh_from_db()
scan_resource = Resource.objects.get(provider=provider)
# Assert that failed_findings_count is 1 (one FAIL finding not muted)
assert scan_resource.failed_findings_count == 1
def test_perform_prowler_scan_multiple_findings_same_resource(
self,
tenants_fixture,
scans_fixture,
providers_fixture,
):
"""Test that multiple FAIL findings on the same resource increment the counter correctly"""
with (
patch("api.db_utils.rls_transaction"),
patch(
"tasks.jobs.scan.initialize_prowler_provider"
) as mock_initialize_prowler_provider,
patch("tasks.jobs.scan.ProwlerScan") as mock_prowler_scan_class,
patch(
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE",
new_callable=dict,
),
patch("api.compliance.PROWLER_CHECKS", new_callable=dict),
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
provider.provider = Provider.ProviderChoices.AWS
provider.save()
tenant_id = str(tenant.id)
scan_id = str(scan.id)
provider_id = str(provider.id)
# Create multiple findings for the same resource
# Two FAIL findings (not muted) and one PASS finding
resource_uid = "shared_resource_uid"
fail_finding_1 = MagicMock()
fail_finding_1.uid = "fail_finding_1"
fail_finding_1.status = StatusChoices.FAIL
fail_finding_1.status_extended = "fail 1"
fail_finding_1.severity = Severity.high
fail_finding_1.check_id = "fail_check_1"
fail_finding_1.get_metadata.return_value = {"key": "value1"}
fail_finding_1.resource_uid = resource_uid
fail_finding_1.resource_name = "shared_resource"
fail_finding_1.region = "us-east-1"
fail_finding_1.service_name = "ec2"
fail_finding_1.resource_type = "instance"
fail_finding_1.resource_tags = {}
fail_finding_1.muted = False
fail_finding_1.raw = {}
fail_finding_1.resource_metadata = {}
fail_finding_1.resource_details = {}
fail_finding_1.partition = "aws"
fail_finding_1.compliance = {}
fail_finding_2 = MagicMock()
fail_finding_2.uid = "fail_finding_2"
fail_finding_2.status = StatusChoices.FAIL
fail_finding_2.status_extended = "fail 2"
fail_finding_2.severity = Severity.medium
fail_finding_2.check_id = "fail_check_2"
fail_finding_2.get_metadata.return_value = {"key": "value2"}
fail_finding_2.resource_uid = resource_uid
fail_finding_2.resource_name = "shared_resource"
fail_finding_2.region = "us-east-1"
fail_finding_2.service_name = "ec2"
fail_finding_2.resource_type = "instance"
fail_finding_2.resource_tags = {}
fail_finding_2.muted = False
fail_finding_2.raw = {}
fail_finding_2.resource_metadata = {}
fail_finding_2.resource_details = {}
fail_finding_2.partition = "aws"
fail_finding_2.compliance = {}
pass_finding = MagicMock()
pass_finding.uid = "pass_finding"
pass_finding.status = StatusChoices.PASS
pass_finding.status_extended = "pass"
pass_finding.severity = Severity.low
pass_finding.check_id = "pass_check"
pass_finding.get_metadata.return_value = {"key": "value3"}
pass_finding.resource_uid = resource_uid
pass_finding.resource_name = "shared_resource"
pass_finding.region = "us-east-1"
pass_finding.service_name = "ec2"
pass_finding.resource_type = "instance"
pass_finding.resource_tags = {}
pass_finding.muted = False
pass_finding.raw = {}
pass_finding.resource_metadata = {}
pass_finding.resource_details = {}
pass_finding.partition = "aws"
pass_finding.compliance = {}
# Mock the ProwlerScan instance
mock_prowler_scan_instance = MagicMock()
mock_prowler_scan_instance.scan.return_value = [
(100, [fail_finding_1, fail_finding_2, pass_finding])
]
mock_prowler_scan_class.return_value = mock_prowler_scan_instance
# Mock prowler_provider
mock_prowler_provider_instance = MagicMock()
mock_prowler_provider_instance.get_regions.return_value = ["us-east-1"]
mock_initialize_prowler_provider.return_value = (
mock_prowler_provider_instance
)
# Call the function under test
perform_prowler_scan(tenant_id, scan_id, provider_id, [])
# Refresh instances from the database
scan_resource = Resource.objects.get(provider=provider, uid=resource_uid)
# Assert that failed_findings_count is 2 (two FAIL findings, one PASS)
assert scan_resource.failed_findings_count == 2
def test_perform_prowler_scan_with_muted_findings(
self,
tenants_fixture,
scans_fixture,
providers_fixture,
):
"""Test that muted FAIL findings do not increment the failed_findings_count"""
with (
patch("api.db_utils.rls_transaction"),
patch(
"tasks.jobs.scan.initialize_prowler_provider"
) as mock_initialize_prowler_provider,
patch("tasks.jobs.scan.ProwlerScan") as mock_prowler_scan_class,
patch(
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE",
new_callable=dict,
),
patch("api.compliance.PROWLER_CHECKS", new_callable=dict),
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
provider = providers_fixture[0]
provider.provider = Provider.ProviderChoices.AWS
provider.save()
tenant_id = str(tenant.id)
scan_id = str(scan.id)
provider_id = str(provider.id)
# Mock a FAIL finding that is muted
muted_fail_finding = MagicMock()
muted_fail_finding.uid = "muted_fail_finding"
muted_fail_finding.status = StatusChoices.FAIL
muted_fail_finding.status_extended = "muted fail"
muted_fail_finding.severity = Severity.high
muted_fail_finding.check_id = "muted_fail_check"
muted_fail_finding.get_metadata.return_value = {"key": "value"}
muted_fail_finding.resource_uid = "muted_resource_uid"
muted_fail_finding.resource_name = "muted_resource"
muted_fail_finding.region = "us-east-1"
muted_fail_finding.service_name = "ec2"
muted_fail_finding.resource_type = "instance"
muted_fail_finding.resource_tags = {}
muted_fail_finding.muted = True
muted_fail_finding.raw = {}
muted_fail_finding.resource_metadata = {}
muted_fail_finding.resource_details = {}
muted_fail_finding.partition = "aws"
muted_fail_finding.compliance = {}
# Mock the ProwlerScan instance
mock_prowler_scan_instance = MagicMock()
mock_prowler_scan_instance.scan.return_value = [(100, [muted_fail_finding])]
mock_prowler_scan_class.return_value = mock_prowler_scan_instance
# Mock prowler_provider
mock_prowler_provider_instance = MagicMock()
mock_prowler_provider_instance.get_regions.return_value = ["us-east-1"]
mock_initialize_prowler_provider.return_value = (
mock_prowler_provider_instance
)
# Call the function under test
perform_prowler_scan(tenant_id, scan_id, provider_id, [])
# Refresh instances from the database
scan_resource = Resource.objects.get(provider=provider)
# Assert that failed_findings_count is 0 (FAIL finding is muted)
assert scan_resource.failed_findings_count == 0
def test_perform_prowler_scan_reset_failed_findings_count(
self,
tenants_fixture,
providers_fixture,
resources_fixture,
):
"""Test that failed_findings_count is reset to 0 at the beginning of each scan"""
# Use existing resource from fixture and set initial failed_findings_count
tenant = tenants_fixture[0]
provider = providers_fixture[0]
resource = resources_fixture[0]
# Set a non-zero failed_findings_count initially
resource.failed_findings_count = 5
resource.save()
# Create a new scan
scan = Scan.objects.create(
name="Reset Test Scan",
provider=provider,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.AVAILABLE,
tenant_id=tenant.id,
)
with (
patch("api.db_utils.rls_transaction"),
patch(
"tasks.jobs.scan.initialize_prowler_provider"
) as mock_initialize_prowler_provider,
patch("tasks.jobs.scan.ProwlerScan") as mock_prowler_scan_class,
patch(
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE",
new_callable=dict,
),
patch("api.compliance.PROWLER_CHECKS", new_callable=dict),
):
provider.provider = Provider.ProviderChoices.AWS
provider.save()
tenant_id = str(tenant.id)
scan_id = str(scan.id)
provider_id = str(provider.id)
# Mock a PASS finding for the existing resource
pass_finding = MagicMock()
pass_finding.uid = "reset_test_finding"
pass_finding.status = StatusChoices.PASS
pass_finding.status_extended = "reset test pass"
pass_finding.severity = Severity.low
pass_finding.check_id = "reset_test_check"
pass_finding.get_metadata.return_value = {"key": "value"}
pass_finding.resource_uid = resource.uid
pass_finding.resource_name = resource.name
pass_finding.region = resource.region
pass_finding.service_name = resource.service
pass_finding.resource_type = resource.type
pass_finding.resource_tags = {}
pass_finding.muted = False
pass_finding.raw = {}
pass_finding.resource_metadata = {}
pass_finding.resource_details = {}
pass_finding.partition = "aws"
pass_finding.compliance = {}
# Mock the ProwlerScan instance
mock_prowler_scan_instance = MagicMock()
mock_prowler_scan_instance.scan.return_value = [(100, [pass_finding])]
mock_prowler_scan_class.return_value = mock_prowler_scan_instance
# Mock prowler_provider
mock_prowler_provider_instance = MagicMock()
mock_prowler_provider_instance.get_regions.return_value = [resource.region]
mock_initialize_prowler_provider.return_value = (
mock_prowler_provider_instance
)
# Call the function under test
perform_prowler_scan(tenant_id, scan_id, provider_id, [])
# Refresh resource from the database
resource.refresh_from_db()
# Assert that failed_findings_count was reset to 0 during the scan
assert resource.failed_findings_count == 0
# TODO Add tests for aggregations
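Taken together, these tests pin down one counting rule: a resource's failed_findings_count is reset at the start of each scan and then incremented once per non-muted FAIL finding. A minimal, self-contained sketch of that rule (FindingStub and count_failed_findings are illustrative names, not part of the API):

from dataclasses import dataclass

@dataclass
class FindingStub:
    status: str
    muted: bool
    resource_uid: str

def count_failed_findings(findings):
    """Count non-muted FAIL findings per resource UID, starting every
    scanned resource at zero (the reset the last test asserts)."""
    counts = {}
    for finding in findings:
        counts.setdefault(finding.resource_uid, 0)
        if finding.status == "FAIL" and not finding.muted:
            counts[finding.resource_uid] += 1
    return counts

# Mirrors the tests above: one FAIL, two FAIL plus one PASS, one muted FAIL.
assert count_failed_findings([FindingStub("FAIL", False, "r1")]) == {"r1": 1}
assert count_failed_findings(
    [
        FindingStub("FAIL", False, "r2"),
        FindingStub("FAIL", False, "r2"),
        FindingStub("PASS", False, "r2"),
    ]
) == {"r2": 2}
assert count_failed_findings([FindingStub("FAIL", True, "r3")]) == {"r3": 0}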
@@ -697,68 +1045,3 @@ class TestCreateComplianceRequirements:
assert "requirements_created" in result
assert result["requirements_created"] >= 0
@pytest.mark.django_db
class TestUpdateResourceFailedFindingsCount:
def test_execute_sql_update(
self, tenants_fixture, scans_fixture, providers_fixture, resources_fixture
):
resource = resources_fixture[0]
tenant_id = resource.tenant_id
scan_id = resource.provider.scans.first().id
# Common kwargs for all failing findings
base_kwargs = {
"tenant_id": tenant_id,
"scan_id": scan_id,
"delta": None,
"status": StatusChoices.FAIL,
"status_extended": "test status extended",
"impact": Severity.critical,
"impact_extended": "test impact extended",
"severity": Severity.critical,
"raw_result": {
"status": StatusChoices.FAIL,
"impact": Severity.critical,
"severity": Severity.critical,
},
"tags": {"test": "dev-qa"},
"check_id": "test_check_id",
"check_metadata": {
"CheckId": "test_check_id",
"Description": "test description apple sauce",
"servicename": "ec2",
},
"first_seen_at": "2024-01-02T00:00:00Z",
}
# UIDs to create (two share the same UID, one is unique)
uids = ["test_finding_uid_1", "test_finding_uid_1", "test_finding_uid_2"]
# Create findings and associate with the resource
for uid in uids:
finding = Finding.objects.create(uid=uid, **base_kwargs)
finding.add_resources([resource])
resource.refresh_from_db()
assert resource.failed_findings_count == 0
_update_resource_failed_findings_count(tenant_id=tenant_id, scan_id=scan_id)
resource.refresh_from_db()
# Only two are counted, since two findings share the same UID
assert resource.failed_findings_count == 2
@patch("tasks.jobs.scan.Scan.objects.get")
def test_scan_not_found(
self,
mock_scan_get,
):
mock_scan_get.side_effect = Scan.DoesNotExist
with pytest.raises(Scan.DoesNotExist):
_update_resource_failed_findings_count(
"8614ca97-8370-4183-a7f7-e96a6c7d2c93",
"4705bed5-8782-4e8b-bab6-55e8043edaa6",
)
+603 -10
@@ -1,9 +1,16 @@
import uuid
from pathlib import Path
from unittest.mock import MagicMock, patch
import pytest
from tasks.tasks import _perform_scan_complete_tasks, generate_outputs_task
from tasks.tasks import (
_perform_scan_complete_tasks,
check_integrations_task,
generate_outputs_task,
s3_integration_task,
security_hub_integration_task,
)
from api.models import Integration
# TODO Move this to outputs/reports jobs
@@ -27,7 +34,6 @@ class TestGenerateOutputs:
assert result == {"upload": False}
mock_filter.assert_called_once_with(scan_id=self.scan_id)
@patch("tasks.tasks.rmtree")
@patch("tasks.tasks._upload_to_s3")
@patch("tasks.tasks._compress_output_files")
@patch("tasks.tasks.get_compliance_frameworks")
@@ -46,7 +52,6 @@ class TestGenerateOutputs:
mock_get_available_frameworks,
mock_compress,
mock_upload,
mock_rmtree,
):
mock_scan_summary_filter.return_value.exists.return_value = True
@@ -96,6 +101,7 @@ class TestGenerateOutputs:
return_value=("out-dir", "comp-dir"),
),
patch("tasks.tasks.Scan.all_objects.filter") as mock_scan_update,
patch("tasks.tasks.rmtree"),
):
mock_compress.return_value = "/tmp/zipped.zip"
mock_upload.return_value = "s3://bucket/zipped.zip"
@@ -110,9 +116,6 @@ class TestGenerateOutputs:
mock_scan_update.return_value.update.assert_called_once_with(
output_location="s3://bucket/zipped.zip"
)
mock_rmtree.assert_called_once_with(
Path("/tmp/zipped.zip").parent, ignore_errors=True
)
def test_generate_outputs_fails_upload(self):
with (
@@ -144,6 +147,7 @@ class TestGenerateOutputs:
patch("tasks.tasks._compress_output_files", return_value="/tmp/compressed"),
patch("tasks.tasks._upload_to_s3", return_value=None),
patch("tasks.tasks.Scan.all_objects.filter") as mock_scan_update,
patch("tasks.tasks.rmtree"),
):
mock_filter.return_value.exists.return_value = True
mock_findings.return_value.order_by.return_value.iterator.return_value = [
@@ -153,7 +157,7 @@ class TestGenerateOutputs:
result = generate_outputs_task(
scan_id="scan",
provider_id="provider",
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
@@ -185,6 +189,7 @@ class TestGenerateOutputs:
patch("tasks.tasks._compress_output_files", return_value="/tmp/compressed"),
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/f.zip"),
patch("tasks.tasks.Scan.all_objects.filter"),
patch("tasks.tasks.rmtree"),
):
mock_filter.return_value.exists.return_value = True
mock_findings.return_value.order_by.return_value.iterator.return_value = [
@@ -255,8 +260,8 @@ class TestGenerateOutputs:
),
patch("tasks.tasks._compress_output_files", return_value="outdir.zip"),
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/outdir.zip"),
patch("tasks.tasks.rmtree"),
patch("tasks.tasks.Scan.all_objects.filter"),
patch("tasks.tasks.rmtree"),
patch(
"tasks.tasks.batched",
return_value=[
@@ -333,13 +338,13 @@ class TestGenerateOutputs:
),
patch("tasks.tasks._compress_output_files", return_value="outdir.zip"),
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/outdir.zip"),
patch("tasks.tasks.rmtree"),
patch(
"tasks.tasks.Scan.all_objects.filter",
return_value=MagicMock(update=lambda **kw: None),
),
patch("tasks.tasks.batched", return_value=two_batches),
patch("tasks.tasks.OUTPUT_FORMATS_MAPPING", {}),
patch("tasks.tasks.rmtree"),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda name: True, TrackingComplianceWriter)]},
@@ -358,6 +363,7 @@ class TestGenerateOutputs:
assert writer.transform_calls == [([raw2], compliance_obj, "cis")]
assert result == {"upload": True}
# TODO: We need to add a periodic task to delete old output files
def test_generate_outputs_logs_rmtree_exception(self, caplog):
mock_finding_output = MagicMock()
mock_finding_output.compliance = {"cis": ["requirement-1", "requirement-2"]}
@@ -415,6 +421,56 @@ class TestGenerateOutputs:
)
assert "Error deleting output files" in caplog.text
@patch("tasks.tasks.rls_transaction")
@patch("tasks.tasks.Integration.objects.filter")
def test_generate_outputs_filters_enabled_s3_integrations(
self, mock_integration_filter, mock_rls
):
"""Test that generate_outputs_task only processes enabled S3 integrations."""
with (
patch("tasks.tasks.ScanSummary.objects.filter") as mock_summary,
patch("tasks.tasks.Provider.objects.get"),
patch("tasks.tasks.initialize_prowler_provider"),
patch("tasks.tasks.Compliance.get_bulk"),
patch("tasks.tasks.get_compliance_frameworks", return_value=[]),
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory", return_value=("out", "comp")
),
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
patch("tasks.tasks.FindingOutput.transform_api_finding"),
patch("tasks.tasks._compress_output_files", return_value="/tmp/compressed"),
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/file.zip"),
patch("tasks.tasks.Scan.all_objects.filter"),
patch("tasks.tasks.rmtree"),
patch("tasks.tasks.s3_integration_task.apply_async") as mock_s3_task,
):
mock_summary.return_value.exists.return_value = True
mock_findings.return_value.order_by.return_value.iterator.return_value = [
[MagicMock()],
True,
]
mock_integration_filter.return_value = [MagicMock()]
mock_rls.return_value.__enter__.return_value = None
with (
patch("tasks.tasks.OUTPUT_FORMATS_MAPPING", {}),
patch("tasks.tasks.COMPLIANCE_CLASS_MAP", {"aws": []}),
):
generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
# Verify the S3 integration filter arguments
mock_integration_filter.assert_called_once_with(
integrationproviderrelationship__provider_id=self.provider_id,
integration_type=Integration.IntegrationChoices.AMAZON_S3,
enabled=True,
)
mock_s3_task.assert_called_once()
class TestScanCompleteTasks:
@patch("tasks.tasks.create_compliance_requirements_task.apply_async")
@@ -436,3 +492,540 @@ class TestScanCompleteTasks:
provider_id="provider-id",
tenant_id="tenant-id",
)
@pytest.mark.django_db
class TestCheckIntegrationsTask:
def setup_method(self):
self.scan_id = str(uuid.uuid4())
self.provider_id = str(uuid.uuid4())
self.tenant_id = str(uuid.uuid4())
self.output_directory = "/tmp/some-output-dir"
@patch("tasks.tasks.rls_transaction")
@patch("tasks.tasks.Integration.objects.filter")
def test_check_integrations_no_integrations(
self, mock_integration_filter, mock_rls
):
mock_integration_filter.return_value.exists.return_value = False
# Ensure rls_transaction is mocked
mock_rls.return_value.__enter__.return_value = None
result = check_integrations_task(
tenant_id=self.tenant_id,
provider_id=self.provider_id,
)
assert result == {"integrations_processed": 0}
mock_integration_filter.assert_called_once_with(
integrationproviderrelationship__provider_id=self.provider_id,
enabled=True,
)
@patch("tasks.tasks.security_hub_integration_task")
@patch("tasks.tasks.group")
@patch("tasks.tasks.rls_transaction")
@patch("tasks.tasks.Integration.objects.filter")
def test_check_integrations_security_hub_success(
self, mock_integration_filter, mock_rls, mock_group, mock_security_hub_task
):
"""Test that SecurityHub integrations are processed correctly."""
# Mock that we have SecurityHub integrations
mock_integrations = MagicMock()
mock_integrations.exists.return_value = True
# Mock SecurityHub integrations to return existing integrations
mock_security_hub_integrations = MagicMock()
mock_security_hub_integrations.exists.return_value = True
# Set up the filter chain
mock_integration_filter.return_value = mock_integrations
mock_integrations.filter.return_value = mock_security_hub_integrations
# Mock the task signature
mock_task_signature = MagicMock()
mock_security_hub_task.s.return_value = mock_task_signature
# Mock group job
mock_job = MagicMock()
mock_group.return_value = mock_job
# Ensure rls_transaction is mocked
mock_rls.return_value.__enter__.return_value = None
# Execute the function
result = check_integrations_task(
tenant_id=self.tenant_id,
provider_id=self.provider_id,
scan_id="test-scan-id",
)
# Should process 1 SecurityHub integration
assert result == {"integrations_processed": 1}
# Verify the integration filter was called
mock_integration_filter.assert_called_once_with(
integrationproviderrelationship__provider_id=self.provider_id,
enabled=True,
)
# Verify SecurityHub integrations were filtered
mock_integrations.filter.assert_called_once_with(
integration_type=Integration.IntegrationChoices.AWS_SECURITY_HUB
)
# Verify SecurityHub task was created with correct parameters
mock_security_hub_task.s.assert_called_once_with(
tenant_id=self.tenant_id,
provider_id=self.provider_id,
scan_id="test-scan-id",
)
# Verify group was called and job was executed
mock_group.assert_called_once_with([mock_task_signature])
mock_job.apply_async.assert_called_once()
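The group/apply_async assertions above reflect a fan-out pattern: one Celery signature per enabled integration, executed as a single group. A hedged sketch of that dispatch step under those assumptions (the function name and return shape are illustrative, not the verified implementation):

from celery import group

def dispatch_integration_tasks(signatures):
    """Run one task signature per enabled integration as a single Celery
    group, returning how many were dispatched — the shape the tests assert."""
    if not signatures:
        return {"integrations_processed": 0}
    group(signatures).apply_async()  # fire the whole batch asynchronously
    return {"integrations_processed": len(signatures)}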
@patch("tasks.tasks.rls_transaction")
@patch("tasks.tasks.Integration.objects.filter")
def test_check_integrations_disabled_integrations_ignored(
self, mock_integration_filter, mock_rls
):
"""Test that disabled integrations are not processed."""
mock_integration_filter.return_value.exists.return_value = False
mock_rls.return_value.__enter__.return_value = None
result = check_integrations_task(
tenant_id=self.tenant_id,
provider_id=self.provider_id,
)
assert result == {"integrations_processed": 0}
mock_integration_filter.assert_called_once_with(
integrationproviderrelationship__provider_id=self.provider_id,
enabled=True,
)
@patch("tasks.tasks.s3_integration_task")
@patch("tasks.tasks.Integration.objects.filter")
@patch("tasks.tasks.ScanSummary.objects.filter")
@patch("tasks.tasks.Provider.objects.get")
@patch("tasks.tasks.initialize_prowler_provider")
@patch("tasks.tasks.Compliance.get_bulk")
@patch("tasks.tasks.get_compliance_frameworks")
@patch("tasks.tasks.Finding.all_objects.filter")
@patch("tasks.tasks._generate_output_directory")
@patch("tasks.tasks.FindingOutput._transform_findings_stats")
@patch("tasks.tasks.FindingOutput.transform_api_finding")
@patch("tasks.tasks._compress_output_files")
@patch("tasks.tasks._upload_to_s3")
@patch("tasks.tasks.Scan.all_objects.filter")
@patch("tasks.tasks.rmtree")
def test_generate_outputs_with_asff_for_aws_with_security_hub(
self,
mock_rmtree,
mock_scan_update,
mock_upload,
mock_compress,
mock_transform_finding,
mock_transform_stats,
mock_generate_dir,
mock_findings,
mock_get_frameworks,
mock_compliance_bulk,
mock_initialize_provider,
mock_provider_get,
mock_scan_summary,
mock_integration_filter,
mock_s3_task,
):
"""Test that ASFF output is generated for AWS providers with SecurityHub integration."""
# Setup
mock_scan_summary_qs = MagicMock()
mock_scan_summary_qs.exists.return_value = True
mock_scan_summary.return_value = mock_scan_summary_qs
# Mock AWS provider
mock_provider = MagicMock()
mock_provider.uid = "aws-account-123"
mock_provider.provider = "aws"
mock_provider_get.return_value = mock_provider
# Mock SecurityHub integration exists
mock_security_hub_integrations = MagicMock()
mock_security_hub_integrations.exists.return_value = True
mock_integration_filter.return_value = mock_security_hub_integrations
# Mock s3_integration_task
mock_s3_task.apply_async.return_value.get.return_value = True
# Mock other necessary components
mock_initialize_provider.return_value = MagicMock()
mock_compliance_bulk.return_value = {}
mock_get_frameworks.return_value = []
mock_generate_dir.return_value = ("out-dir", "comp-dir")
mock_transform_stats.return_value = {"stats": "data"}
# Mock findings
mock_finding = MagicMock()
mock_findings.return_value.order_by.return_value.iterator.return_value = [
[mock_finding],
True,
]
mock_transform_finding.return_value = MagicMock(compliance={})
# Track which output formats were created
created_writers = {}
def track_writer_creation(cls_type):
def factory(*args, **kwargs):
writer = MagicMock()
writer._data = []
writer.transform = MagicMock()
writer.batch_write_data_to_file = MagicMock()
created_writers[cls_type] = writer
return writer
return factory
# Mock OUTPUT_FORMATS_MAPPING with tracking
with patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"csv": {
"class": track_writer_creation("csv"),
"suffix": ".csv",
"kwargs": {},
},
"json-asff": {
"class": track_writer_creation("asff"),
"suffix": ".asff.json",
"kwargs": {},
},
"json-ocsf": {
"class": track_writer_creation("ocsf"),
"suffix": ".ocsf.json",
"kwargs": {},
},
},
):
mock_compress.return_value = "/tmp/compressed.zip"
mock_upload.return_value = "s3://bucket/file.zip"
# Execute
result = generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
# Verify ASFF was created for AWS with SecurityHub
assert "asff" in created_writers, "ASFF writer should be created"
assert "csv" in created_writers, "CSV writer should be created"
assert "ocsf" in created_writers, "OCSF writer should be created"
# Verify SecurityHub integration was checked
assert mock_integration_filter.call_count == 2
mock_integration_filter.assert_any_call(
integrationproviderrelationship__provider_id=self.provider_id,
integration_type=Integration.IntegrationChoices.AWS_SECURITY_HUB,
enabled=True,
)
assert result == {"upload": True}
@patch("tasks.tasks.s3_integration_task")
@patch("tasks.tasks.Integration.objects.filter")
@patch("tasks.tasks.ScanSummary.objects.filter")
@patch("tasks.tasks.Provider.objects.get")
@patch("tasks.tasks.initialize_prowler_provider")
@patch("tasks.tasks.Compliance.get_bulk")
@patch("tasks.tasks.get_compliance_frameworks")
@patch("tasks.tasks.Finding.all_objects.filter")
@patch("tasks.tasks._generate_output_directory")
@patch("tasks.tasks.FindingOutput._transform_findings_stats")
@patch("tasks.tasks.FindingOutput.transform_api_finding")
@patch("tasks.tasks._compress_output_files")
@patch("tasks.tasks._upload_to_s3")
@patch("tasks.tasks.Scan.all_objects.filter")
@patch("tasks.tasks.rmtree")
def test_generate_outputs_no_asff_for_aws_without_security_hub(
self,
mock_rmtree,
mock_scan_update,
mock_upload,
mock_compress,
mock_transform_finding,
mock_transform_stats,
mock_generate_dir,
mock_findings,
mock_get_frameworks,
mock_compliance_bulk,
mock_initialize_provider,
mock_provider_get,
mock_scan_summary,
mock_integration_filter,
mock_s3_task,
):
"""Test that ASFF output is NOT generated for AWS providers without SecurityHub integration."""
# Setup
mock_scan_summary_qs = MagicMock()
mock_scan_summary_qs.exists.return_value = True
mock_scan_summary.return_value = mock_scan_summary_qs
# Mock AWS provider
mock_provider = MagicMock()
mock_provider.uid = "aws-account-123"
mock_provider.provider = "aws"
mock_provider_get.return_value = mock_provider
# Mock NO SecurityHub integration
mock_security_hub_integrations = MagicMock()
mock_security_hub_integrations.exists.return_value = False
mock_integration_filter.return_value = mock_security_hub_integrations
# Mock other necessary components
mock_initialize_provider.return_value = MagicMock()
mock_compliance_bulk.return_value = {}
mock_get_frameworks.return_value = []
mock_generate_dir.return_value = ("out-dir", "comp-dir")
mock_transform_stats.return_value = {"stats": "data"}
# Mock findings
mock_finding = MagicMock()
mock_findings.return_value.order_by.return_value.iterator.return_value = [
[mock_finding],
True,
]
mock_transform_finding.return_value = MagicMock(compliance={})
# Track which output formats were created
created_writers = {}
def track_writer_creation(cls_type):
def factory(*args, **kwargs):
writer = MagicMock()
writer._data = []
writer.transform = MagicMock()
writer.batch_write_data_to_file = MagicMock()
created_writers[cls_type] = writer
return writer
return factory
# Mock OUTPUT_FORMATS_MAPPING with tracking
with patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"csv": {
"class": track_writer_creation("csv"),
"suffix": ".csv",
"kwargs": {},
},
"json-asff": {
"class": track_writer_creation("asff"),
"suffix": ".asff.json",
"kwargs": {},
},
"json-ocsf": {
"class": track_writer_creation("ocsf"),
"suffix": ".ocsf.json",
"kwargs": {},
},
},
):
mock_compress.return_value = "/tmp/compressed.zip"
mock_upload.return_value = "s3://bucket/file.zip"
# Execute
result = generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
# Verify ASFF was NOT created when no SecurityHub integration
assert "asff" not in created_writers, "ASFF writer should NOT be created"
assert "csv" in created_writers, "CSV writer should be created"
assert "ocsf" in created_writers, "OCSF writer should be created"
# Verify SecurityHub integration was checked
assert mock_integration_filter.call_count == 2
mock_integration_filter.assert_any_call(
integrationproviderrelationship__provider_id=self.provider_id,
integration_type=Integration.IntegrationChoices.AWS_SECURITY_HUB,
enabled=True,
)
assert result == {"upload": True}
@patch("tasks.tasks.ScanSummary.objects.filter")
@patch("tasks.tasks.Provider.objects.get")
@patch("tasks.tasks.initialize_prowler_provider")
@patch("tasks.tasks.Compliance.get_bulk")
@patch("tasks.tasks.get_compliance_frameworks")
@patch("tasks.tasks.Finding.all_objects.filter")
@patch("tasks.tasks._generate_output_directory")
@patch("tasks.tasks.FindingOutput._transform_findings_stats")
@patch("tasks.tasks.FindingOutput.transform_api_finding")
@patch("tasks.tasks._compress_output_files")
@patch("tasks.tasks._upload_to_s3")
@patch("tasks.tasks.Scan.all_objects.filter")
@patch("tasks.tasks.rmtree")
def test_generate_outputs_no_asff_for_non_aws_provider(
self,
mock_rmtree,
mock_scan_update,
mock_upload,
mock_compress,
mock_transform_finding,
mock_transform_stats,
mock_generate_dir,
mock_findings,
mock_get_frameworks,
mock_compliance_bulk,
mock_initialize_provider,
mock_provider_get,
mock_scan_summary,
):
"""Test that ASFF output is NOT generated for non-AWS providers (e.g., Azure, GCP)."""
# Setup
mock_scan_summary_qs = MagicMock()
mock_scan_summary_qs.exists.return_value = True
mock_scan_summary.return_value = mock_scan_summary_qs
# Mock Azure provider (non-AWS)
mock_provider = MagicMock()
mock_provider.uid = "azure-subscription-123"
mock_provider.provider = "azure" # Non-AWS provider
mock_provider_get.return_value = mock_provider
# Mock other necessary components
mock_initialize_provider.return_value = MagicMock()
mock_compliance_bulk.return_value = {}
mock_get_frameworks.return_value = []
mock_generate_dir.return_value = ("out-dir", "comp-dir")
mock_transform_stats.return_value = {"stats": "data"}
# Mock findings
mock_finding = MagicMock()
mock_findings.return_value.order_by.return_value.iterator.return_value = [
[mock_finding],
True,
]
mock_transform_finding.return_value = MagicMock(compliance={})
# Track which output formats were created
created_writers = {}
def track_writer_creation(cls_type):
def factory(*args, **kwargs):
writer = MagicMock()
writer._data = []
writer.transform = MagicMock()
writer.batch_write_data_to_file = MagicMock()
created_writers[cls_type] = writer
return writer
return factory
# Mock OUTPUT_FORMATS_MAPPING with tracking
with patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"csv": {
"class": track_writer_creation("csv"),
"suffix": ".csv",
"kwargs": {},
},
"json-asff": {
"class": track_writer_creation("asff"),
"suffix": ".asff.json",
"kwargs": {},
},
"json-ocsf": {
"class": track_writer_creation("ocsf"),
"suffix": ".ocsf.json",
"kwargs": {},
},
},
):
mock_compress.return_value = "/tmp/compressed.zip"
mock_upload.return_value = "s3://bucket/file.zip"
# Execute
result = generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
# Verify ASFF was NOT created for non-AWS provider
assert (
"asff" not in created_writers
), "ASFF writer should NOT be created for non-AWS providers"
assert "csv" in created_writers, "CSV writer should be created"
assert "ocsf" in created_writers, "OCSF writer should be created"
assert result == {"upload": True}
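The three ASFF tests above pin the same gating rule from different angles: ASFF output requires both an AWS provider and an enabled SecurityHub integration. A compact sketch of that rule (the function name is an assumption; the base format list mirrors the mocked OUTPUT_FORMATS_MAPPING above):

def select_output_formats(provider_type, has_security_hub):
    """ASFF is only added for AWS providers with an enabled SecurityHub
    integration; CSV and OCSF are always generated."""
    formats = ["csv", "json-ocsf"]
    if provider_type == "aws" and has_security_hub:
        formats.append("json-asff")
    return formats

assert "json-asff" in select_output_formats("aws", True)
assert "json-asff" not in select_output_formats("aws", False)
assert "json-asff" not in select_output_formats("azure", True)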
@patch("tasks.tasks.upload_s3_integration")
def test_s3_integration_task_success(self, mock_upload):
mock_upload.return_value = True
output_directory = "/tmp/prowler_api_output/test"
result = s3_integration_task(
tenant_id=self.tenant_id,
provider_id=self.provider_id,
output_directory=output_directory,
)
assert result is True
mock_upload.assert_called_once_with(
self.tenant_id, self.provider_id, output_directory
)
@patch("tasks.tasks.upload_s3_integration")
def test_s3_integration_task_failure(self, mock_upload):
mock_upload.return_value = False
output_directory = "/tmp/prowler_api_output/test"
result = s3_integration_task(
tenant_id=self.tenant_id,
provider_id=self.provider_id,
output_directory=output_directory,
)
assert result is False
mock_upload.assert_called_once_with(
self.tenant_id, self.provider_id, output_directory
)
@patch("tasks.tasks.upload_security_hub_integration")
def test_security_hub_integration_task_success(self, mock_upload):
"""Test successful SecurityHub integration task execution."""
mock_upload.return_value = True
scan_id = "test-scan-123"
result = security_hub_integration_task(
tenant_id=self.tenant_id,
provider_id=self.provider_id,
scan_id=scan_id,
)
assert result is True
mock_upload.assert_called_once_with(self.tenant_id, self.provider_id, scan_id)
@patch("tasks.tasks.upload_security_hub_integration")
def test_security_hub_integration_task_failure(self, mock_upload):
"""Test SecurityHub integration task handling failure."""
mock_upload.return_value = False
scan_id = "test-scan-123"
result = security_hub_integration_task(
tenant_id=self.tenant_id,
provider_id=self.provider_id,
scan_id=scan_id,
)
assert result is False
mock_upload.assert_called_once_with(self.tenant_id, self.provider_id, scan_id)
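The integration tasks exercised here appear to be thin wrappers that delegate to an upload helper and return its boolean result. A hedged sketch under that assumption (the stub helper and task name are illustrative):

from celery import shared_task

def upload_s3_stub(tenant_id, provider_id, output_directory):
    """Stand-in for the real upload helper; returns True on success."""
    return True

@shared_task
def s3_integration_sketch(tenant_id, provider_id, output_directory):
    # Delegate to the upload helper and surface its boolean result,
    # which is exactly what the success/failure tests assert.
    return upload_s3_stub(tenant_id, provider_id, output_directory)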
@@ -0,0 +1,234 @@
from locust import events, task
from utils.config import (
L_PROVIDER_NAME,
M_PROVIDER_NAME,
RESOURCES_UI_FIELDS,
S_PROVIDER_NAME,
TARGET_INSERTED_AT,
)
from utils.helpers import (
APIUserBase,
get_api_token,
get_auth_headers,
get_dynamic_filters_pairs,
get_next_resource_filter,
get_scan_id_from_provider_name,
)
GLOBAL = {
"token": None,
"scan_ids": {},
"resource_filters": None,
"large_resource_filters": None,
}
@events.test_start.add_listener
def on_test_start(environment, **kwargs):
GLOBAL["token"] = get_api_token(environment.host)
GLOBAL["scan_ids"]["small"] = get_scan_id_from_provider_name(
environment.host, GLOBAL["token"], S_PROVIDER_NAME
)
GLOBAL["scan_ids"]["medium"] = get_scan_id_from_provider_name(
environment.host, GLOBAL["token"], M_PROVIDER_NAME
)
GLOBAL["scan_ids"]["large"] = get_scan_id_from_provider_name(
environment.host, GLOBAL["token"], L_PROVIDER_NAME
)
GLOBAL["resource_filters"] = get_dynamic_filters_pairs(
environment.host, GLOBAL["token"], "resources"
)
GLOBAL["large_resource_filters"] = get_dynamic_filters_pairs(
environment.host, GLOBAL["token"], "resources", GLOBAL["scan_ids"]["large"]
)
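The test_start listener runs once per Locust process, so the token, scan IDs, and filter pairs are fetched a single time and shared across all simulated users through the module-level GLOBAL dict. A hedged sketch of what get_api_token might do, assuming a JSON:API token endpoint (the URL, payload shape, and response path are assumptions; the real helper lives in utils.helpers):

import requests

def get_api_token_sketch(host, email, password):
    # Assumed token endpoint and JSON:API payload shape; adjust to the
    # actual helper if it differs.
    response = requests.post(
        f"{host}/tokens",
        json={
            "data": {
                "type": "tokens",
                "attributes": {"email": email, "password": password},
            }
        },
        headers={"Content-Type": "application/vnd.api+json"},
    )
    response.raise_for_status()
    return response.json()["data"]["attributes"]["access"]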
class APIUser(APIUserBase):
def on_start(self):
self.token = GLOBAL["token"]
self.s_scan_id = GLOBAL["scan_ids"]["small"]
self.m_scan_id = GLOBAL["scan_ids"]["medium"]
self.l_scan_id = GLOBAL["scan_ids"]["large"]
self.available_resource_filters = GLOBAL["resource_filters"]
self.available_resource_filters_large_scan = GLOBAL["large_resource_filters"]
@task
def resources_default(self):
name = "/resources"
page_number = self._next_page(name)
endpoint = (
f"/resources?page[number]={page_number}"
f"&filter[updated_at]={TARGET_INSERTED_AT}"
)
self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
@task(3)
def resources_default_ui_fields(self):
name = "/resources?fields"
page_number = self._next_page(name)
endpoint = (
f"/resources?page[number]={page_number}"
f"&fields[resources]={','.join(RESOURCES_UI_FIELDS)}"
f"&filter[updated_at]={TARGET_INSERTED_AT}"
)
self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
@task(3)
def resources_default_include(self):
name = "/resources?include"
page = self._next_page(name)
endpoint = (
f"/resources?page[number]={page}"
f"&filter[updated_at]={TARGET_INSERTED_AT}"
f"&include=provider"
)
self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
@task(3)
def resources_metadata(self):
name = "/resources/metadata"
endpoint = f"/resources/metadata?filter[updated_at]={TARGET_INSERTED_AT}"
self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
@task
def resources_scan_small(self):
name = "/resources?filter[scan_id] - 50k"
page_number = self._next_page(name)
endpoint = (
f"/resources?page[number]={page_number}" f"&filter[scan]={self.s_scan_id}"
)
self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
@task
def resources_metadata_scan_small(self):
name = "/resources/metadata?filter[scan_id] - 50k"
endpoint = f"/resources/metadata?&filter[scan]={self.s_scan_id}"
self.client.get(
endpoint,
headers=get_auth_headers(self.token),
name=name,
)
@task(2)
def resources_scan_medium(self):
name = "/resources?filter[scan_id] - 250k"
page_number = self._next_page(name)
endpoint = (
f"/resources?page[number]={page_number}" f"&filter[scan]={self.m_scan_id}"
)
self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
@task
def resources_metadata_scan_medium(self):
name = "/resources/metadata?filter[scan_id] - 250k"
endpoint = f"/resources/metadata?&filter[scan]={self.m_scan_id}"
self.client.get(
endpoint,
headers=get_auth_headers(self.token),
name=name,
)
@task
def resources_scan_large(self):
name = "/resources?filter[scan_id] - 500k"
page_number = self._next_page(name)
endpoint = (
f"/resources?page[number]={page_number}" f"&filter[scan]={self.l_scan_id}"
)
self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
@task
def resources_scan_large_include(self):
name = "/resources?filter[scan_id]&include - 500k"
page_number = self._next_page(name)
endpoint = (
f"/resources?page[number]={page_number}"
f"&filter[scan]={self.l_scan_id}"
f"&include=provider"
)
self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
@task
def resources_metadata_scan_large(self):
endpoint = f"/resources/metadata?&filter[scan]={self.l_scan_id}"
self.client.get(
endpoint,
headers=get_auth_headers(self.token),
name="/resources/metadata?filter[scan_id] - 500k",
)
@task(2)
def resources_filters(self):
name = "/resources?filter[resource_filter]&include"
filter_name, filter_value = get_next_resource_filter(
self.available_resource_filters
)
endpoint = (
f"/resources?filter[{filter_name}]={filter_value}"
f"&filter[updated_at]={TARGET_INSERTED_AT}"
f"&include=provider"
)
self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
@task(3)
def resources_metadata_filters(self):
name = "/resources/metadata?filter[resource_filter]"
filter_name, filter_value = get_next_resource_filter(
self.available_resource_filters
)
endpoint = (
f"/resources/metadata?filter[{filter_name}]={filter_value}"
f"&filter[updated_at]={TARGET_INSERTED_AT}"
)
self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
@task(3)
def resources_metadata_filters_scan_large(self):
name = "/resources/metadata?filter[resource_filter]&filter[scan_id] - 500k"
filter_name, filter_value = get_next_resource_filter(
self.available_resource_filters
)
endpoint = (
f"/resources/metadata?filter[{filter_name}]={filter_value}"
f"&filter[scan]={self.l_scan_id}"
)
self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
@task(2)
def resources_filter_large_scan_include(self):
name = "/resources?filter[resource_filter][scan]&include - 500k"
filter_name, filter_value = get_next_resource_filter(
self.available_resource_filters
)
endpoint = (
f"/resources?filter[{filter_name}]={filter_value}"
f"&filter[scan]={self.l_scan_id}"
f"&include=provider"
)
self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
@task(3)
def resources_latest_default_ui_fields(self):
name = "/resources/latest?fields"
page_number = self._next_page(name)
endpoint = (
f"/resources/latest?page[number]={page_number}"
f"&fields[resources]={','.join(RESOURCES_UI_FIELDS)}"
)
self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
@task(3)
def resources_latest_metadata_filters(self):
name = "/resources/metadata/latest?filter[resource_filter]"
filter_name, filter_value = get_next_resource_filter(
self.available_resource_filters
)
endpoint = f"/resources/metadata/latest?filter[{filter_name}]={filter_value}"
self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
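Several tasks above rely on self._next_page(name) from APIUserBase, which is not shown in this diff. A plausible stand-in, assuming it keeps a per-endpoint counter that cycles through page numbers (the page cap is an assumption):

from collections import defaultdict

class PageCycler:
    """Illustrative stand-in for APIUserBase._next_page: a per-endpoint
    counter so repeated tasks page through results instead of always
    requesting page 1."""

    def __init__(self, max_page=10):
        self._pages = defaultdict(int)
        self._max_page = max_page

    def next_page(self, name):
        self._pages[name] = self._pages[name] % self._max_page + 1
        return self._pages[name]

cycler = PageCycler(max_page=3)
assert [cycler.next_page("/resources") for _ in range(4)] == [1, 2, 3, 1]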
+17
@@ -13,6 +13,23 @@ FINDINGS_RESOURCE_METADATA = {
"resource_types": "resource_type",
"services": "service",
}
RESOURCE_METADATA = {
"regions": "region",
"types": "type",
"services": "service",
}
RESOURCES_UI_FIELDS = [
"name",
"failed_findings_count",
"region",
"service",
"type",
"provider",
"inserted_at",
"updated_at",
"uid",
]
S_PROVIDER_NAME = "provider-50k"
M_PROVIDER_NAME = "provider-250k"
+16 -6
@@ -7,6 +7,7 @@ from locust import HttpUser, between
from utils.config import (
BASE_HEADERS,
FINDINGS_RESOURCE_METADATA,
RESOURCE_METADATA,
TARGET_INSERTED_AT,
USER_EMAIL,
USER_PASSWORD,
@@ -121,13 +122,16 @@ def get_scan_id_from_provider_name(host: str, token: str, provider_name: str) ->
return response.json()["data"][0]["id"]
def get_resource_filters_pairs(host: str, token: str, scan_id: str = "") -> dict:
def get_dynamic_filters_pairs(
host: str, token: str, endpoint: str, scan_id: str = ""
) -> dict:
"""
Retrieves and maps resource metadata filter values from the findings endpoint.
Retrieves and maps metadata filter values from a given endpoint.
Args:
host (str): The host URL of the API.
token (str): Bearer token for authentication.
endpoint (str): The API endpoint to query for metadata.
scan_id (str, optional): Optional scan ID to filter metadata. When omitted, filters by the endpoint's date field (inserted_at for findings, updated_at otherwise).
Returns:
@@ -136,22 +140,28 @@ def get_resource_filters_pairs(host: str, token: str, scan_id: str = "") -> dict
Raises:
AssertionError: If the request fails or does not return a 200 status code.
"""
metadata_mapping = (
FINDINGS_RESOURCE_METADATA if endpoint == "findings" else RESOURCE_METADATA
)
date_filter = "inserted_at" if endpoint == "findings" else "updated_at"
metadata_filters = (
f"filter[scan]={scan_id}"
if scan_id
else f"filter[inserted_at]={TARGET_INSERTED_AT}"
else f"filter[{date_filter}]={TARGET_INSERTED_AT}"
)
response = requests.get(
f"{host}/findings/metadata?{metadata_filters}", headers=get_auth_headers(token)
f"{host}/{endpoint}/metadata?{metadata_filters}",
headers=get_auth_headers(token),
)
assert (
response.status_code == 200
), f"Failed to get resource filters values: {response.text}"
attributes = response.json()["data"]["attributes"]
return {
FINDINGS_RESOURCE_METADATA[key]: values
metadata_mapping[key]: values
for key, values in attributes.items()
if key in FINDINGS_RESOURCE_METADATA.keys()
if key in metadata_mapping.keys()
}
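A usage sketch of the generalized helper defined above (host and token are placeholders; the calls require a live API):

host = "https://api.example.com"  # placeholder
token = "<bearer-token>"  # placeholder

# "findings" keeps the previous behavior (inserted_at date filter, findings
# metadata mapping); "resources" uses updated_at and RESOURCE_METADATA.
finding_filters = get_dynamic_filters_pairs(host, token, "findings")
resource_filters = get_dynamic_filters_pairs(host, token, "resources")
# e.g. {"region": [...], "type": [...], "service": [...]}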
+5 -4
@@ -23,6 +23,7 @@ import argparse
import json
import os
import re
import shlex
import signal
import socket
import subprocess
@@ -145,11 +146,11 @@ def _get_script_arguments():
def _run_prowler(prowler_args):
_debug("Running prowler with args: {0}".format(prowler_args), 1)
_prowler_command = "{prowler}/prowler {args}".format(
prowler=PATH_TO_PROWLER, args=prowler_args
_prowler_command = shlex.split(
"{prowler}/prowler {args}".format(prowler=PATH_TO_PROWLER, args=prowler_args)
)
_debug("Running command: {0}".format(_prowler_command), 2)
_process = subprocess.Popen(_prowler_command, stdout=subprocess.PIPE, shell=True)
_debug("Running command: {0}".format(" ".join(_prowler_command)), 2)
_process = subprocess.Popen(_prowler_command, stdout=subprocess.PIPE)
_output, _error = _process.communicate()
_debug("Raw prowler output: {0}".format(_output), 3)
_debug("Raw prowler error: {0}".format(_error), 3)
+34
@@ -0,0 +1,34 @@
/* Override Tailwind CSS reset for markdown content */
.markdown-content ul {
list-style: disc !important;
margin-left: 20px !important;
padding-left: 10px !important;
margin-bottom: 8px !important;
}
.markdown-content ol {
list-style: decimal !important;
margin-left: 20px !important;
padding-left: 10px !important;
margin-bottom: 8px !important;
}
.markdown-content li {
margin-bottom: 4px !important;
display: list-item !important;
}
.markdown-content p {
margin-bottom: 8px !important;
}
/* Ensure nested lists work properly */
.markdown-content ul ul {
margin-top: 4px !important;
margin-bottom: 4px !important;
}
.markdown-content ol ol {
margin-top: 4px !important;
margin-bottom: 4px !important;
}
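These rules apply to anything rendered inside a container carrying the markdown-content class. A minimal Dash usage sketch (the component tree is illustrative):

from dash import dcc, html

# Any dcc.Markdown wrapped in a container with className="markdown-content"
# picks up the overrides above, restoring the list styling that Tailwind's
# CSS reset removes.
bullet_list = html.Div(
    dcc.Markdown("- first item\n- second item"),
    className="markdown-content",
)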
+25
@@ -0,0 +1,25 @@
import warnings
from dashboard.common_methods import get_section_containers_cis
warnings.filterwarnings("ignore")
def get_table(data):
aux = data[
[
"REQUIREMENTS_ID",
"REQUIREMENTS_DESCRIPTION",
"REQUIREMENTS_ATTRIBUTES_SECTION",
"CHECKID",
"STATUS",
"REGION",
"ACCOUNTID",
"RESOURCEID",
]
].copy()
return get_section_containers_cis(
aux, "REQUIREMENTS_ID", "REQUIREMENTS_ATTRIBUTES_SECTION"
)
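A usage sketch of get_table with a minimal one-row DataFrame (all values are placeholders):

import pandas as pd

# One-row frame with exactly the columns get_table selects.
df = pd.DataFrame(
    [
        {
            "REQUIREMENTS_ID": "1.1",
            "REQUIREMENTS_DESCRIPTION": "Maintain current contact details",
            "REQUIREMENTS_ATTRIBUTES_SECTION": "1. Identity and Access Management",
            "CHECKID": "account_maintain_current_contact_details",
            "STATUS": "PASS",
            "REGION": "us-east-1",
            "ACCOUNTID": "123456789012",
            "RESOURCEID": "arn:aws:iam::123456789012:root",
        }
    ]
)
containers = get_table(df)  # builds the per-section CIS containers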
+66 -16
@@ -1654,6 +1654,39 @@ def generate_table(data, index, color_mapping_severity, color_mapping_status):
[
html.Div(
[
# Description as first details item
html.Div(
[
html.P(
html.Strong(
"Description: ",
style={
"margin-bottom": "8px"
},
)
),
html.Div(
dcc.Markdown(
str(
data.get(
"DESCRIPTION",
"",
)
),
dangerously_allow_html=True,
style={
"margin-left": "0px",
"padding-left": "10px",
},
),
className="markdown-content",
style={
"margin-left": "0px",
"padding-left": "10px",
},
),
],
),
html.Div(
[
html.P(
@@ -1793,19 +1826,27 @@ def generate_table(data, index, color_mapping_severity, color_mapping_status):
html.P(
html.Strong(
"Risk: ",
style={
"margin-right": "5px"
},
style={},
)
),
html.P(
str(data.get("RISK", "")),
html.Div(
dcc.Markdown(
str(
data.get("RISK", "")
),
dangerously_allow_html=True,
style={
"margin-left": "0px",
"padding-left": "10px",
},
),
className="markdown-content",
style={
"margin-left": "5px"
"margin-left": "0px",
"padding-left": "10px",
},
),
],
style={"display": "flex"},
),
html.Div(
[
@@ -1847,23 +1888,32 @@ def generate_table(data, index, color_mapping_severity, color_mapping_status):
html.Strong(
"Recommendation: ",
style={
"margin-right": "5px"
"margin-bottom": "8px"
},
)
),
html.P(
str(
data.get(
"REMEDIATION_RECOMMENDATION_TEXT",
"",
)
html.Div(
dcc.Markdown(
str(
data.get(
"REMEDIATION_RECOMMENDATION_TEXT",
"",
)
),
dangerously_allow_html=True,
style={
"margin-left": "0px",
"padding-left": "10px",
},
),
className="markdown-content",
style={
"margin-left": "5px"
"margin-left": "0px",
"padding-left": "10px",
},
),
],
style={"display": "flex"},
style={"margin-bottom": "15px"},
),
html.Div(
[
