Compare commits

...

284 Commits

Author SHA1 Message Date
Alan Buscaglia f34a025acc fix(ui): improve category label formatting for acronyms and versions 2025-12-16 18:09:18 +01:00
Alan Buscaglia d2886a5e10 refactor(ui): simplify categories.ts to only use dynamic label formatting
Remove hardcoded CATEGORY_IDS and CATEGORY_LABELS constants.
Categories now come from the API and are formatted dynamically.
2025-12-15 19:16:06 +01:00
Alan Buscaglia 1e1dfa29c0 chore(ui): apply ThreatScore wording from PR #9524 2025-12-15 19:13:29 +01:00
Alan Buscaglia 747e6c9f81 feat(ui): add 'All categories' option to risk radar selector 2025-12-15 17:50:01 +01:00
Alan Buscaglia dab231d626 feat(ui): add category selector for risk radar and findings filters
- Add CategorySelector component to risk radar view for filtering by category
- Add fade effect to non-selected radar chart points
- Add category filter to findings page using shared labelFormatter
- Extract category labels to shared lib/categories.ts
- Remove category from active filter badges (now handled by select)
2025-12-15 17:46:03 +01:00
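The three commits above describe moving to dynamic category label formatting: labels are derived from API-provided category IDs, with special handling for acronyms and version-like tokens. As a rough illustration only - the helper name, acronym list, and rules below are assumptions, not the actual lib/categories.ts code - such a formatter might look like:

```typescript
// Hypothetical sketch of a dynamic category label formatter; names and rules
// are illustrative assumptions, not Prowler's actual implementation.
const ACRONYMS = new Set(["iam", "kms", "api", "ec2", "vpc"]);

export function formatCategoryLabel(categoryId: string): string {
  return categoryId
    .split("-")
    .map((part) => {
      if (ACRONYMS.has(part)) return part.toUpperCase(); // e.g. "ec2" -> "EC2"
      if (/\d/.test(part)) return part; // keep version-like tokens such as "imdsv1" untouched
      return part.charAt(0).toUpperCase() + part.slice(1); // "boundaries" -> "Boundaries"
    })
    .join(" ");
}

// formatCategoryLabel("trust-boundaries")      -> "Trust Boundaries"
// formatCategoryLabel("privilege-escalation")  -> "Privilege Escalation"
```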
Adrián Peña b549c8dbad fix: make scan_id mandatory in compliance overviews endpoint (#9560) 2025-12-15 17:27:45 +01:00
Víctor Fernández Poyatos 79ac7cf6d4 fix(beat): Increase scheduled scans countdown to 5 seconds (#9558) 2025-12-15 17:13:08 +01:00
Rubén De la Torre Vico d292c6e58a chore(aws): enhance metadata for memorydb service (#9266)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-12-15 17:11:44 +01:00
Alan Buscaglia 8f361e7e8d feat(ui): add Risk Radar component with API integration (#9532) 2025-12-15 17:02:21 +01:00
Rubén De la Torre Vico 3eb278cb9f chore(aws): enhance metadata for kms service (#9263)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-12-15 16:56:17 +01:00
Rubén De la Torre Vico 2f7eec8bca chore(aws): enhance metadata for kafka service (#9261)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-12-15 13:13:47 +01:00
César Arroba 00063c57de chore(github): fix container checks workflows (#9556) 2025-12-15 13:06:18 +01:00
César Arroba 2341b5bc7d chore(github): check containers workflow only for prowler (#9555) 2025-12-15 12:47:36 +01:00
Rubén De la Torre Vico 4015beff20 docs(mcp_server): update documentation and add developer guide for extensibility (#9533) 2025-12-15 12:35:59 +01:00
Rubén De la Torre Vico ab475bafc3 chore(aws): enhance metadata for glue service (#9258)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-12-15 12:07:11 +01:00
Andoni Alonso b4ce01afd4 feat(iac): set only misconfig and secret as default scanners (#9553) 2025-12-15 12:01:31 +01:00
Chandrapal Badshah 2b4b23c719 feat(lighthouse): filter out non-compatible OpenAI models (#9523)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
2025-12-15 11:31:04 +01:00
César Arroba 4398b00801 chore(github): use QEMU to build ARM images if repository is not prowler (#9547) 2025-12-15 11:23:39 +01:00
Rubén De la Torre Vico e0cf8bffd4 feat(mcp_server): update API base URL environment variable to include complete path (#9542) 2025-12-15 11:04:44 +01:00
Daniel Barranquero 6761f0ffd0 docs: add mongodbatlas app support (#9312) 2025-12-15 10:57:27 +01:00
Hugo Pereira Brito 51bbaeb403 fix: trustboundaries category typo to trust-boundaries (#9536) 2025-12-15 10:48:33 +01:00
Pepe Fagoaga 6158c16108 feat(categories): add privilege-escalation and ec2-imdsv1 (#9537) 2025-12-12 15:14:26 +01:00
Alejandro Bailo 0c2c5ea265 chore: update React 19.2.2 for security improvements (#9534) 2025-12-12 14:11:01 +01:00
bota4go 3b56166c34 fix(apigateway): retrieve correct loggingLevel status (#9304)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-12-12 13:44:37 +01:00
Víctor Fernández Poyatos b5151a8ee5 feat(api): new endpoint for categories overviews (#9529) 2025-12-12 13:30:59 +01:00
Alejandro Bailo 0495267351 feat: resource details added to findings and resource view (#9515) 2025-12-12 13:12:17 +01:00
Pepe Fagoaga eefe045c18 docs(security): add more details (#9525)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2025-12-12 11:03:12 +01:00
Alejandro Bailo d7d1b22c45 chore(dependencies): update @next/third-parties to version 15.5.7 (#9513) 2025-12-12 11:00:48 +01:00
dependabot[bot] 439dbe679b build(deps): bump next from 15.5.7 to 15.5.9 in /ui (#9522)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-12-12 10:17:34 +01:00
Adrián Peña 0e9ba4b116 fix(api): add one second countdown to scheduled scan task to ensure transaction completion (#9516) 2025-12-12 10:08:42 +01:00
Pepe Fagoaga 89295f7e7d chore(overview): adjust wording for Prowler ThreatScore (#9524) 2025-12-12 09:18:58 +01:00
StylusFrost 7cf7758851 docs(k8s): enhance token management guidance in getting started guide (#9519) 2025-12-12 08:37:33 +01:00
Pepe Fagoaga 06142094cd chore(readme): Add LFX health score badge (#9297) 2025-12-11 19:34:40 +01:00
Prowler Bot 93f1c02f44 chore(release): Bump version to v5.16.0 (#9520)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-12-11 17:23:45 +01:00
Pepe Fagoaga e2f30e0987 chore(changelog): v5.15.0 (#9495) 2025-12-11 09:29:55 +01:00
Rubén De la Torre Vico c80710adfc feat(mcp_server): add muting management tools (#9510) 2025-12-11 09:19:17 +01:00
Rubén De la Torre Vico 1410fe2ff1 feat(mcp_server): add scan management tools (#9509) 2025-12-11 09:16:36 +01:00
Pedro Martín 284910d402 chore(readme): update with latest changes (#9508) 2025-12-10 18:48:28 +01:00
Pepe Fagoaga 04f795bd49 revert(docs): remove old image from readme (#9507) 2025-12-10 18:42:12 +01:00
Pepe Fagoaga 8b5e00163e docs: remove old image from readme (#9506) 2025-12-10 18:34:36 +01:00
Hugo Pereira Brito 57d7f77c81 docs: enhance README (#9505) 2025-12-10 18:28:27 +01:00
Rubén De la Torre Vico 16b1052ff1 feat(mcp_server): add resource management tools (#9380) 2025-12-10 17:40:45 +01:00
Rubén De la Torre Vico 978e2c82af feat(mcp_server): add provider management tools (#9350) 2025-12-10 17:31:21 +01:00
Pepe Fagoaga 0c3ba0b737 fix(timeseries): Remove inserted_at and add muted=false (#9504) 2025-12-10 16:45:12 +01:00
Adrián Peña 4addfcc848 chore: add migration to perform the backfill (#9500) 2025-12-10 16:39:12 +01:00
Alan Buscaglia 8588cc03f4 fix(ui): use Sentry namespace for browserTracingIntegration (#9503) 2025-12-10 16:02:04 +01:00
Alan Buscaglia 7507fea24b fix(ui): update dependencies to address security vulnerabilities (#9357) 2025-12-10 12:54:38 +01:00
Alan Buscaglia 18f0fc693e revert(ci): update UI E2E tests workflow for cloud environments (#9499) 2025-12-10 10:53:10 +01:00
Hugo Pereira Brito 606f505ba3 feat(docs): add dependency table to unit-testing page (#9498) 2025-12-10 10:51:50 +01:00
lydiavilchez bfce602859 fix(gcp-cloudstorage): handle VPC-blocked API calls as PASS (#9478) 2025-12-10 10:40:52 +01:00
Alan Buscaglia ba45b86a82 chore(ci): update UI E2E tests workflow for cloud environments (#9497) 2025-12-10 10:31:07 +01:00
Pedro Martín d786bb4440 fix(compliance): make unique requirements IDs for ISO27001 2013 - AWS (#9488) 2025-12-10 09:54:05 +01:00
KonstGolfi 9424289416 feat(compliance): add RBI Framework for Azure (#8822)
Co-authored-by: pedrooot <pedromarting3@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-12-10 09:24:35 +01:00
Pedro Martín 3cbb6175a5 feat(compliance): add SOC2 Azure Processing Integrity requirements (#9463) 2025-12-10 08:53:08 +01:00
Pedro Martín 438deef3f8 feat(compliance): add SOC2 GCP Processing Integrity requirements (#9464) 2025-12-10 08:45:53 +01:00
Pedro Martín 1cdf4e65b2 feat(compliance): add SOC2 AWS Processing Integrity requirements (#9462) 2025-12-10 08:41:56 +01:00
Andoni Alonso dbdd02ebd1 fix(docs): solve broken link (#9493) 2025-12-10 08:09:25 +01:00
Pedro Martín d264f3daff fix(deps): install alibabacloud missing dep (#9487) 2025-12-09 17:18:32 +01:00
Hugo Pereira Brito 01fe379b55 fix: remove incorrect threat-detection category from checks (#9489) 2025-12-09 17:11:09 +01:00
Pedro Martín 50286846e0 fix(ui): show Top Failed Requirements for compliances without section hierarchy (#9471)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
2025-12-09 16:28:47 +01:00
Rubén De la Torre Vico 20ed8b3d2d fix: MCP findings tools errors (#9477) 2025-12-09 15:16:55 +01:00
Alan Buscaglia 45cc6e8b85 fix(ui): improve overview charts UX and consistency (#9484) 2025-12-09 13:33:41 +01:00
Hugo Pereira Brito 962c64eae5 chore: execute tests for only needed aws services (#9468) 2025-12-09 11:06:07 +01:00
César Arroba 7b56f0640f chore(github): fix release messages (#9459) 2025-12-09 10:06:55 +01:00
Alan Buscaglia 49c75cc418 fix(ui): add default date_from filter for severity over time endpoint (#9472) 2025-12-05 17:55:04 +01:00
Alan Buscaglia 56bca7c104 feat(ui): implement Risk Plot component with interactive legend and navigation (#9469) 2025-12-05 14:03:58 +01:00
Rubén De la Torre Vico faaa172b86 chore(aws): enhance metadata for macie service (#9265)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2025-12-05 12:03:13 +01:00
Alan Buscaglia 219ce0ba89 feat(ui): add navigation progress bar for better UX during page transitions (#9465) 2025-12-05 12:01:00 +01:00
Adrián Peña 2170e5fe12 feat(api): add findings severity timeseries endpoint (#9363) 2025-12-05 11:19:37 +01:00
Rubén De la Torre Vico e9efb12aa8 chore(aws): enhance metadata for networkfirewall service (#9382)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2025-12-05 09:39:01 +01:00
Chandrapal Badshah 74d72dd56b fix: remove importing non-existent classes (#9467)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-12-05 08:05:34 +01:00
Rubén De la Torre Vico 06d1d214fd chore(aws): enhance metadata for mq service (#9267)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2025-12-04 17:56:08 +01:00
Pepe Fagoaga 902bc9ad57 fix(api): unlimited limit-request-line (#9461)
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-12-04 17:45:58 +01:00
Rubén De la Torre Vico 3616c0a8c0 chore(aws): enhance metadata for lightsail service (#9264)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2025-12-04 16:05:10 +01:00
Alan Buscaglia 7288585fec chore(ui): migrate from npm to pnpm (#9442) 2025-12-04 15:12:39 +01:00
Rubén De la Torre Vico 6400dc1059 chore(aws): enhance metadata for guardduty service (#9259)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-12-04 14:35:41 +01:00
Alan Buscaglia 379c1dc7dd fix(ui): update severity trends endpoint and reorganize types (#9460) 2025-12-04 14:35:21 +01:00
Chandrapal Badshah eb247360c3 fix: return human readable error messages from lighthouse celery tasks (#9165)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-12-04 14:17:14 +01:00
Alan Buscaglia 7f12832808 feat(ui): add Finding Severity Over Time chart to overview page (#9405) 2025-12-04 13:19:15 +01:00
César Arroba 9c387d5742 chore(github): fix release notes (#9457) 2025-12-04 12:15:09 +01:00
César Arroba 4a5801c519 chore(github): debug release notes (#9456) 2025-12-04 12:07:02 +01:00
César Arroba 85cb39af28 chore(github): fix release notes (#9455) 2025-12-04 11:53:11 +01:00
Rubén De la Torre Vico c7abd77a1c feat(mcp_server): implement new Prowler App MCP server design (#9300)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-12-04 11:00:19 +01:00
César Arroba a622b9d965 chore(github): check and scan arm builds (#9450) 2025-12-04 10:50:39 +01:00
Alan Buscaglia 8bd95a04ce fix(ui): fix lint warnings and type issues in prompt-input (#9327) 2025-12-04 10:27:03 +01:00
Pepe Fagoaga 340454ba68 fix(overview): risk severity must show only fails (#9448) 2025-12-04 10:25:45 +01:00
Pedro Martín 6dff4bfd8b fix(ens): solve division by zero at reporting (#9443) 2025-12-04 10:08:12 +01:00
Alejandro Bailo 22c88e66a1 build(deps): update Next.js and React for CVE-2025-66478 (#9447)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-12-04 08:40:20 +01:00
Sergio Garcia 3b711f6143 fix(docker): add arm build toolchain for zstd compile (#9445) 2025-12-04 08:10:32 +01:00
Sergio Garcia dbdce98cf2 feat(alibaba): add Alibaba Cloud provider (#9329)
Co-authored-by: pedrooot <pedromarting3@gmail.com>
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-12-03 11:47:55 -05:00
Pepe Fagoaga 53404dfa62 docs(lighthouse): add version badge for bedrock long-term API keys (#9441) 2025-12-03 17:07:42 +01:00
Víctor Fernández Poyatos c8872dd6ac feat(db): Add admin read replica connection (#9440) 2025-12-03 16:53:48 +01:00
Chandrapal Badshah 26fd7d3adc feat(lighthouse): Support Amazon Bedrock Long-Term API Key (#9343)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-12-03 16:19:18 +01:00
Víctor Fernández Poyatos cb84bd0f94 fix(sentry): mute foreign key constraints alerts (#9439) 2025-12-03 16:08:47 +01:00
Pedro Martín cb3f3ab35d fix(ui): sort compliance overview by name (#9422) 2025-12-03 15:37:55 +01:00
Víctor Fernández Poyatos f58c1fddfb fix(compliance): ignore conflicts with unique summaries (#9436) 2025-12-03 15:37:04 +01:00
Alan Buscaglia c1bb51cf1a fix(ui): collection of UI bug fixes and improvements (#9346) 2025-12-03 14:31:23 +01:00
Adrián Peña a4e12a94f9 refactor(api): update compliance report endpoints and enhance query parameters (#9338) 2025-12-03 11:41:07 +01:00
César Arroba 7b1915e489 chore(github): update message when container is pushed (#9421) 2025-12-03 10:53:01 +01:00
César Arroba 56d092c87e chore(github): fix changelog extraction and verify API specs file (#9420) 2025-12-03 10:52:52 +01:00
Víctor Fernández Poyatos 29a1034658 feat(exception): Add decorator for deleted providers during scans (#9414) 2025-12-03 09:46:59 +01:00
Chandrapal Badshah f5c2146d19 fix(lighthouse): show all models in selector even without default model (#9402)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-12-03 09:23:13 +01:00
Chandrapal Badshah 069f0d106c docs(lighthouse): update lighthouse multi llm docs (#9362)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-12-03 08:53:34 +01:00
Sergio Garcia 803ada7b16 docs(oci): add Prowler Cloud onboarding flow (#9417) 2025-12-02 13:04:56 -05:00
Alan Buscaglia 5e033321e8 feat(ui): add attack surface overview component (#9412) 2025-12-02 13:57:07 +01:00
Alan Buscaglia 175d7f95f5 fix: clear core.hooksPath before installing pre-commit hooks (#9413) 2025-12-02 13:42:04 +01:00
Víctor Fernández Poyatos 07e82bde56 feat(attack-surfaces): add new endpoints to retrieve overview data (#9309) 2025-12-02 12:12:47 +01:00
Hugo Pereira Brito 4661e01c26 chore(changelog): update for 5.14.2 release (#9404) 2025-12-02 11:22:01 +01:00
Alan Buscaglia dda0a2567d fix(ui): skip Sentry initialization when DSN is not configured (#9368) 2025-12-01 18:05:45 +01:00
StylusFrost 56ea498cca test(ui): Add e2e test for OCI Provider (#9347)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
2025-12-01 16:13:12 +01:00
Hugo Pereira Brito f9e1e29631 fix(dashboard): typo and format errors (#9361) 2025-12-01 14:29:22 +01:00
lydiavilchez 3dadb264cc feat(gcp): add check for VM instance deletion protection (#9358) 2025-12-01 13:20:32 +01:00
Víctor Fernández Poyatos 495aee015e build: add gevent to API deps (#9359) 2025-12-01 13:11:38 +01:00
Pedro Martín d3a000cbc4 fix(report): update logic for threatscore (#9348) 2025-12-01 09:11:08 +01:00
lydiavilchez b2abdbeb60 feat(gcp-compute): add check to ensure VMs are not preemptible or spot (#9342)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-11-28 12:49:19 +01:00
lydiavilchez dc852b4595 feat(gcp-compute): add automatic restart check for VM instances (#9271) 2025-11-28 12:21:58 +01:00
Hugo Pereira Brito 1250f582a5 fix(check): custom check folder validation (#9335) 2025-11-28 12:19:47 +01:00
Pedro Martín bb43e924ee fix(report): use pagina for ENS in footer (#9345) 2025-11-28 12:04:30 +01:00
Andoni Alonso 0225627a98 fix(docs): fix image paths (#9341) 2025-11-28 11:20:54 +01:00
Alan Buscaglia 3097513525 fix(ui): filter Risk Pipeline chart by selected providers and show zero-data legends (#9340) 2025-11-27 17:39:01 +01:00
Alan Buscaglia 6af9ff4b4b feat(ui): add interactive charts with filter navigation (#9333) 2025-11-27 16:04:55 +01:00
Hugo Pereira Brito 06fa57a949 fix(docs): info warning format (#9339) 2025-11-27 09:57:05 -05:00
mattkeeler dc9e91ac4e fix(m365): Support multiple Exchange mailbox policies (#9241)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-11-27 14:10:15 +01:00
Shafkat Rahman 59f8dfe5ae feat(github): add immutable releases check (#9162)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2025-11-27 13:40:15 +01:00
Adrián Jesús Peña Rodríguez 7e0c5540bb feat(api): restore compliance overview endpoint (#9330) 2025-11-27 13:31:15 +01:00
Daniel Barranquero 79ec53bfc5 fix(ui): update changelog (#9334) 2025-11-27 13:16:50 +01:00
Daniel Barranquero ed5f6b3af6 feat(ui): add MongoDB Atlas provider support (#9253)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
2025-11-27 12:37:20 +01:00
Andoni Alonso 6e135abaa0 fix(iac): ignore mutelist in IaC scans (#9331) 2025-11-27 11:08:58 +01:00
Hugo Pereira Brito 65b054f798 feat: enhance m365 documentation (#9287) 2025-11-26 16:17:43 +01:00
Alan Buscaglia 28d5b2bb6c feat(ui): integrate threat map with regions API endpoint (#9324)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-11-26 16:12:31 +01:00
Prowler Bot c8d9f37e70 feat(aws): Update regions for AWS services (#9294)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-11-26 09:42:40 -05:00
lydiavilchez 9d7b9c3327 feat(gcp): Add VPC Service Controls check for Cloud Storage (#9256)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-11-26 14:45:27 +01:00
Hugo Pereira Brito 127b8d8e56 fix: typo in pdf report generation (#9322)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-11-26 13:58:40 +01:00
Alan Buscaglia 4e9dd46a5e feat(ui): add Risk Pipeline View with Sankey chart to Overview page (#9320)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-11-26 13:33:58 +01:00
Hugo Pereira Brito 880345bebe fix(sharepoint): false positives on disabled external sharing (#9298) 2025-11-26 12:23:04 +01:00
Andoni Alonso 1259713fd6 docs: remove AMD-only docker images warning (#9315) 2025-11-26 10:26:39 +01:00
Prowler Bot 26088868a2 chore(release): Bump version to v5.15.0 (#9318)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-11-26 10:19:25 +01:00
César Arroba e58574e2a4 chore(github): fix container actions (#9321) 2025-11-26 10:16:26 +01:00
Alan Buscaglia a07e599cfc feat(ui): add service watchlist component with real API integration (#9316)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-11-25 17:03:24 +01:00
Alejandro Bailo e020b3f74b feat: add watchlist component (#9199)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
2025-11-25 16:01:38 +01:00
Alan Buscaglia 8e7e376e4f feat(ui): hide new overview route and filter mongo providers (#9314) 2025-11-25 14:22:03 +01:00
Alan Buscaglia a63a3d3f68 fix: add filters for mongo providers and findings (#9311) 2025-11-25 13:19:49 +01:00
Andoni Alonso 10838de636 docs: refactor Lighthouse AI pages (#9310)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
2025-11-25 13:10:29 +01:00
Chandrapal Badshah 5ebf455e04 docs: Lighthouse multi LLM provider support (#9306)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-11-25 13:04:30 +01:00
Daniel Barranquero 0d59441c5f fix(api): add alter to mongodbatlas migration (#9308) 2025-11-25 11:29:07 +01:00
Pepe Fagoaga 3b05a1430e chore(changelog): reconcile for v5.14 (#9277)
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-11-24 19:03:53 +01:00
Alan Buscaglia ea953fb256 fix(ui): UI improvements - buttons, form validations, and chart alignment (#9299) 2025-11-24 17:14:12 +01:00
Andoni Alonso 2198e461c9 feat(iac): use branch as region for IaC findings (#9295) 2025-11-24 17:00:06 +01:00
Adrián Jesús Peña Rodríguez 75abd8f54d fix(threatscore): exclude muted findings from aggregated statistics in threatscore utils (#9296) 2025-11-24 13:25:20 +01:00
Adrián Jesús Peña Rodríguez 2f184a493b feat(threatscore): restore API threatscore snapshots (#9291) 2025-11-24 10:47:03 +01:00
Pepe Fagoaga e2e06a78f9 fix(lock): update poetry lock for prowler (#9290) 2025-11-24 10:05:14 +01:00
Adrián Jesús Peña Rodríguez de5aba6d4d feat(api): add new endpoint for retrieving findings data by region with associated filters and response schema (#9273) 2025-11-21 11:23:31 +01:00
César Arroba 6e7266eacf chore(github): fix sdk build action (#9288) 2025-11-21 11:03:52 +01:00
Alan Buscaglia 58bb66ff27 feat(ui/overview): add click navigation for charts and threat score improvements (#9281) 2025-11-20 18:47:42 +01:00
Pedro Martín 46bfe02ee8 feat(nis2): support PDF reporting (#9170)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
Co-authored-by: Josema Camacho <josema@prowler.com>
2025-11-20 17:14:54 +01:00
Pepe Fagoaga cee9a9a755 fix(html): logo URI (#9282) 2025-11-20 17:11:51 +01:00
Hugo Pereira Brito b11ba9b5cb feat(docs): add links for sp and cert from getting started to authentication (#9286) 2025-11-20 16:50:18 +01:00
Víctor Fernández Poyatos 789fc84e31 fix(overviews): exclude muted findings from severity overview (#9283) 2025-11-20 16:29:20 +01:00
Alejandro Bailo 6426558b18 fix(ui): pre-release fixes and improvements (#9278) 2025-11-20 16:18:25 +01:00
Hugo Pereira Brito 9a1ddedd94 fix(docs): typo (#9285) 2025-11-20 16:07:22 +01:00
Hugo Pereira Brito 0ae400d2b1 fix(docs): add link from getting started to auth for service accounts (#9284) 2025-11-20 15:55:19 +01:00
Víctor Fernández Poyatos ced122ac0d feat(migrations): add missing remove index operation (#9280) 2025-11-20 15:09:14 +01:00
Hugo Pereira Brito dc7d2d5aeb fix(outputs): refresh scan timestamps per run (#9272) 2025-11-20 13:12:39 +01:00
Alan Buscaglia b6ba6c6e31 feat(hooks): integrate Python pre-commit with Husky for monorepo (#9279) 2025-11-20 12:48:43 +01:00
Hugo Pereira Brito 30312bbc03 fix(docs): remove wrong threatscore warning (#9276) 2025-11-20 09:03:15 +01:00
Pedro Martín 94fe87b4a2 feat(ens): support PDF reporting (#9158)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-11-19 18:57:58 +01:00
Pedro Martín 219bc12365 feat(kubernetes): add Prowler ThreatScore compliance framework (#9235) 2025-11-19 18:31:54 +01:00
Pedro Martín 66394ab061 fix(threatscore): remove typo from 3. Logging and *m*onitoring (#9274) 2025-11-19 17:12:29 +01:00
Rubén De la Torre Vico 7348ed2179 chore(aws): enhance metadata for kinesis service (#9262)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-11-19 16:49:31 +01:00
Rubén De la Torre Vico 0b94f2929d chore(aws): enhance metadata for documentdb service (#8862)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-11-19 13:49:57 +01:00
Alejandro Bailo c23e2502f3 refactor(ui): redo the whole app with styles (#9234) 2025-11-19 11:37:17 +01:00
Adrián Jesús Peña Rodríguez c418c59b53 feat(compliance): enhance compliance overview filters and documentation (#9244) 2025-11-19 10:35:31 +01:00
Adrián Jesús Peña Rodríguez 3dc4ab5b83 refactor(api): remove ServiceOverviewFilter and update related tests (#9248) 2025-11-19 10:33:31 +01:00
Andoni Alonso 148a6f341b docs(sso): improve okta sso section (#9233) 2025-11-19 08:04:44 +01:00
Daniel Barranquero b5df26452a fix: split file_name not working on Windows (#9268) 2025-11-18 14:45:31 +01:00
Hugo Pereira Brito 45792686aa fix(docs): enhance gcp service account authentication and add missing permissions (#9231) 2025-11-18 14:09:03 +01:00
Rubén De la Torre Vico ee31e82707 fix: make JSON schema simpler to work with more MCP clients (#9257) 2025-11-18 13:35:11 +01:00
lydiavilchez 0ba1226d88 feat(gcp): implement Cloud Storage Data Access Audit Logs check (#9220)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-11-18 12:08:54 +01:00
Andoni Alonso 520cc31f73 docs: fix mutelist broken links (#9249) 2025-11-17 18:24:02 +01:00
Andoni Alonso a5a882a975 fix(iac): add trivy installation in CLI image (#9247) 2025-11-17 16:04:01 +01:00
Prowler Bot 84f9309a7c feat(aws): Update regions for AWS services (#9243)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-11-17 09:59:58 -05:00
Rubén De la Torre Vico cf3800dbbe chore(aws): enhance metadata for ecs service (#8888)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-11-17 15:25:30 +01:00
Adrián Jesús Peña Rodríguez d43455971b fix(scan): implement temporary workaround to skip findings with UID exceeding 300 characters (#9246) 2025-11-17 13:15:02 +01:00
Paco Sanchez Lopez 1ea0dabf42 feat(arm): adds support building multiarch prowler containers (#8773)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-11-17 12:35:33 +01:00
Rubén De la Torre Vico 0f43789666 chore(kubernetes): enhance metadata for etcd service (#9096)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-11-17 12:30:21 +01:00
Andoni Alonso 4f8e8ed935 chore(github): replace status/awaiting-response label with status/waiting-for-revision if comment added (#9245) 2025-11-17 12:20:33 +01:00
Rakan Farhouda 518508d5fe feat(api): add metadata attributes to ResourceSerializer and tests (#9098) 2025-11-17 14:10:45 +03:00
Rubén De la Torre Vico e715b9fbfb chore(aws): enhance metadata for ecr service (#8872)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-11-17 11:50:11 +01:00
Marc Espin 4167de39d2 fix(docs): Fix dead links leading to docs.prowler.cloud (#9240)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2025-11-17 09:56:51 +01:00
johannes-engler-mw 531ba5c31b feat(azure): new check for Entra ID authentication for Azure PostgreSQL Flexible Server (#8764)
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-11-14 13:54:57 +01:00
Chandrapal Badshah 031548ca7e feat: Update Lighthouse UI to support multi LLM (#8925)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-11-14 11:46:38 +01:00
Hugo Pereira Brito 866edfb167 chore(outputs): raise an error when using -M asff for a provider other than aws (#9225) 2025-11-13 16:53:22 +01:00
Daniel Barranquero d1380fc19d fix(azure): validation and other errors in cosmosdb, defender, storage and vm (#8915) 2025-11-13 09:17:44 -05:00
Víctor Fernández Poyatos 46666d29d3 feat(db): optimize write queries for scan related tasks (#9190)
Co-authored-by: Josema Camacho <josema@prowler.com>
2025-11-13 12:27:57 +01:00
Rubén De la Torre Vico ce5f2cc5ed chore(aws): enhance metadata for elbv2 service (#9001)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-11-13 10:45:20 +01:00
Lee Trout c5c7b84afd chore(ec2): prevent test from calling live AWS endpoint (#9228) 2025-11-13 10:12:19 +01:00
Ryan Nolette 3432c8108c chore: updated gitignore file to be more robust for VSCode development environments and AI coding assistants. (#9226) 2025-11-13 09:32:21 +01:00
Andoni Alonso 7c42a61e17 docs(aws): restore STS Ireland endpoint warning (#9229) 2025-11-13 09:30:27 +01:00
Rubén De la Torre Vico 575521c025 chore(oraclecloud): enhance metadata for cloudguard service (#9223)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-11-12 11:58:54 -05:00
Rubén De la Torre Vico eab6c23333 chore(oraclecloud): enhance metadata for blockstorage service (#9222)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-11-12 11:51:29 -05:00
Rubén De la Torre Vico 8ee9454dbc chore(aws): enhance metadata for elb service (#8935)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-11-12 16:46:12 +01:00
Pedro Martín b46a8fd0ba feat(compliance): change C5 logo (#9224) 2025-11-12 16:01:18 +01:00
Rubén De la Torre Vico 77ef4869e3 chore(oraclecloud): enhance metadata for audit service (#9221)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-11-12 09:24:20 -05:00
Alan Buscaglia 07ac96661e feat: implement Finding Severity Over Time chart with time range selector (#9106)
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-12 14:33:20 +01:00
Daniel Barranquero 98f8ef1b4b feat(mongodbatlas): add provider_id verification (#9211) 2025-11-12 13:50:00 +01:00
Pepe Fagoaga 5564b4c7ae fix(env): fallback to local (#9215) 2025-11-12 10:14:29 +01:00
Pedro Martín 427dab6810 fix(compliance): handle check_id not in Prowler Checks (#9208) 2025-11-12 09:11:34 +01:00
Andoni Alonso ee62ea384a chore(github): merge labeler actions (#9218) 2025-11-12 08:39:20 +01:00
Andoni Alonso ca4c4c8381 docs: remove Prowler App credentials handling duplicates (#9212) 2025-11-12 08:23:25 +01:00
Shaun e246c0cfd7 fix(aws): false negative in iam_role_cross_service_confused_deputy_prevention (#9213)
Co-authored-by: shaun <shaun@snotra.cloud>
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-11-11 17:39:16 -05:00
Rubén De la Torre Vico 74025b2b5e docs: add an architecture schema for MCP Server (#9214) 2025-11-11 11:53:01 -05:00
Alejandro Bailo ccb269caa2 chore(dependencies): add Sentry to /ui (#8730)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2025-11-11 17:12:42 +01:00
Rubén De la Torre Vico 0f22e754f2 chore(mongodbatlas): enhance metadata for projects service (#9093)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-11-11 11:10:40 -05:00
Josema Camacho 7cb0ed052d chore(security): upgrading django to 5.1.14 (#9176) 2025-11-11 16:51:28 +01:00
Andoni Alonso 1ec36d2285 docs: add Prowler Cloud public IPs (#9209) 2025-11-11 16:11:24 +01:00
lydiavilchez b0ec7daece feat(gcp): add check cloudstorage_bucket_sufficient_retention_period (#9149) 2025-11-11 15:51:57 +01:00
Hugo Pereira Brito 1292abcf91 fix(m365_powershell): restore MSAL.PS (#9210) 2025-11-11 15:35:45 +01:00
Rubén De la Torre Vico 136366f4d7 chore(github): enhance metadata for organization service (#9094)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-11-11 09:34:54 -05:00
StylusFrost 203b46196b fix(test-ui): update authentication method selection in ProvidersPage for AWS Add Provider e2e test (#9161) 2025-11-11 15:11:56 +01:00
Adrián Jesús Peña Rodríguez beec37b0da feat(threatscore): implement ThreatScoreSnapshot model, filter, serializer, and view for ThreatScore metrics retrieval (#9148) 2025-11-11 10:19:48 +01:00
Hugo Pereira Brito 73a277f27b chore(m365_powershell): remove unnecessary test_credentials (#9204) 2025-11-11 10:16:57 +01:00
Andoni Alonso 822d201159 fix(github): hardcode list of prowler-cloud organization members (#9207) 2025-11-11 10:03:12 +01:00
Andoni Alonso 8e07ec8727 docs: refactor contributing docs (#9202)
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2025-11-11 09:44:41 +01:00
Sergio Garcia 7c339ed9e4 docs(mutelist): fix misleading docstrings about tag and exception logic (#9205) 2025-11-10 13:39:24 -05:00
Sergio Garcia be0b8bba0d fix(html): rename get_oci_assessment_summary (#9200) 2025-11-10 10:15:54 -05:00
Prowler Bot 521afab4aa feat(aws): Update regions for AWS services (#9194)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-11-10 09:37:18 -05:00
Ethan Troy 789221d901 feat(compliance): add FedRAMP 20x KSI Low compliance frameworks (#9198)
Co-authored-by: pedrooot <pedromarting3@gmail.com>
2025-11-10 14:41:18 +01:00
Hugo Pereira Brito ef4e28da03 fix(m365_powershell): teams connection with --sp-env-auth and enhanced timeouts error logging (#9191) 2025-11-10 11:23:56 +01:00
Alejandro Bailo ee2d3ed052 feat: implement new design system variables across new components and add skeletons (#9193) 2025-11-10 09:19:10 +01:00
Pedro Martín 66a04b5547 feat(aws): improve nist_csf_2.0 mapping (#9189) 2025-11-07 10:59:40 -05:00
Hugo Pereira Brito fb9eda208e fix(powershell): depth truncation and parsing error (#9181) 2025-11-07 13:19:37 +01:00
Rakan Farhouda f0b1c4c29e fix(api): update unique constraint for Provider model to exclude soft… (#9054) 2025-11-07 13:16:55 +01:00
Alan Buscaglia a73a79f420 fix: exclude docs folder from Tailwind content scanning (#9184)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-11-07 10:49:27 +01:00
Rubén De la Torre Vico 5d4b7445f8 chore: fix image path in README for Prowler App (#9186) 2025-11-07 10:17:42 +01:00
Rubén De la Torre Vico 13e4866507 chore(oraclecloud): enhance metadata for analytics service (#9114)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-11-06 16:58:59 -05:00
UniCode 7d5c4d32ee feat(aws): add compliance NIST CSF 2.0 (#9185)
Co-authored-by: MrCloudSec <hello@mistercloudsec.com>
2025-11-06 16:55:16 -05:00
Daniel Barranquero 7e03b423dd feat(api): add MongoDBAtlas provider to api (#9167) 2025-11-06 16:37:38 -05:00
Maurício Harley 0ad5bbf350 feat(github): Add GitHub check ensuring repository creation is limited (#8844)
Signed-off-by: Mauricio Harley <mauricioharley@gmail.com>
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-11-06 16:13:10 +01:00
Alejandro Bailo 38f60966e5 fix(ui): improve pre commit (#9180) 2025-11-06 14:32:06 +01:00
Alan Buscaglia 7bbc0d8e1b feat: add claude code validation to pre-commit hook (#9177)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-11-06 13:48:19 +01:00
Pedro Martín edfef51e7a feat(compliance): add naming and visual improvements (#9145) 2025-11-06 13:06:59 +01:00
Hugo Pereira Brito 788113b539 fix: changelog (#9179) 2025-11-06 12:57:51 +01:00
Hugo Pereira Brito 8ab77b7dba fix(gcp): check check_name has no resource_name error (#9169) 2025-11-06 12:37:49 +01:00
Sergio Garcia e038b2fd11 chore(sdk): add validation for invalid checks, services, and categories (#8971)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2025-11-06 11:46:21 +01:00
dependabot[bot] 2e5f17538d chore(deps): bump agenthunt/conventional-commit-checker-action from 2.0.0 to 2.0.1 (#9127)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-06 10:12:17 +01:00
dependabot[bot] 54294c862b chore(deps): bump trufflesecurity/trufflehog from 3.90.11 to 3.90.12 (#9128)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-06 10:11:46 +01:00
dependabot[bot] ace2b88c07 chore(deps): bump sorenlouv/backport-github-action from 9.5.1 to 10.2.0 (#9132)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-06 10:11:30 +01:00
dependabot[bot] 3de8159de9 chore(deps): bump actions/setup-node from 5.0.0 to 6.0.0 (#9135)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-06 10:10:29 +01:00
dependabot[bot] 1a4ae33235 chore(deps): bump softprops/action-gh-release from 2.3.3 to 2.4.1 (#9134)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-06 10:09:29 +01:00
dependabot[bot] e0260b91e6 chore(deps): bump peter-evans/create-or-update-comment from 4.0.0 to 5.0.0 (#9133)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-06 10:08:38 +01:00
dependabot[bot] 66590f2128 chore(deps): bump github/codeql-action from 3.30.5 to 4.31.2 (#9131)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-06 10:07:27 +01:00
dependabot[bot] 33bb2782f0 chore(deps): bump aws-actions/configure-aws-credentials from 5.0.0 to 5.1.0 (#9130)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-06 10:05:52 +01:00
César Arroba 2f61c88f74 chore(github): improve container slack notifications (#9144) 2025-11-06 09:33:33 +01:00
Andoni Alonso b25ed9fd27 feat(github): add external resource link (#9153)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-11-05 15:57:41 +01:00
Sergio Garcia 191d51675c chore(ui): rename OCI provider to oraclecloud (#9166) 2025-11-05 08:59:55 -05:00
Andoni Alonso 5b20fd1b3b docs(iac): add IaC getting started in Cloud/App (#9152) 2025-11-05 09:20:18 +01:00
Pepe Fagoaga 02489a5eef docs: get latest version to install Prowler App (#9163) 2025-11-04 18:31:00 +01:00
Sergio Garcia f16f94acf3 chore(oci): rename OCI provider to oraclecloud with oci alias (#9126) 2025-11-04 11:44:56 -05:00
Alejandro Bailo 1e584c5b58 feat: new overview threat score component (#9125) 2025-11-04 15:08:58 +01:00
César Arroba 1bb6bc148e chore(github): fix prepare release action (#9159) 2025-11-04 14:44:25 +01:00
César Arroba 166ab1d2c1 chore(github): fix actions paths (#9154) 2025-11-04 12:27:34 +01:00
StylusFrost dd85ca7c72 test(ui): add M365 provider management E2E tests (#8954) 2025-11-04 11:22:39 +01:00
Andoni Alonso b9aef85aa2 fix(github): use previous command to set labels (#9099) 2025-11-04 11:08:35 +01:00
Andoni Alonso 601495166c feat(iac): add IaC to Prowler App (#8751) 2025-11-04 10:01:58 +01:00
Hugo Pereira Brito 61a66f2bbf fix(aws): firehose_stream_encrypted_at_rest description and logic (#9142) 2025-11-03 11:31:18 -05:00
Rakan Farhouda 8b0b9cad32 fix(aws): update logger import in to use the correct module (#9138) 2025-11-03 18:09:41 +03:00
Alejandro Bailo 000b48b492 feat(ui): add Customer Support link to sidebar (#9143) 2025-11-03 16:01:11 +01:00
JDeep a564d6a04e feat(compliance): Add HIPAA compliance framework for GCP (#8955)
Co-authored-by: pedrooot <pedromarting3@gmail.com>
2025-11-03 15:34:08 +01:00
Prowler Bot 82bacef7c7 feat(aws): Update regions for AWS services (#9140)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-11-03 09:10:28 -05:00
Alejandro Bailo a4ac7bb067 feat(ui): move Resource ID field up (#9141) 2025-11-03 11:39:42 +01:00
StylusFrost a41f8dcb18 test(ui): add Azure provider management E2E tests (#8949) 2025-11-03 09:20:24 +01:00
Alejandro Bailo 2bf93c0de6 feat: RSS system (#9109) 2025-11-03 09:17:37 +01:00
Sergio Garcia 39710a6841 fix(api): correct OCI provider compliance directory mapping (#9111) 2025-10-31 10:33:13 -04:00
Rubén De la Torre Vico f330440c54 chore(aws): enhance metadata for codeartifact service (#8850)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2025-10-31 15:21:50 +01:00
Chandrapal Badshah c3940c7454 feat: Add Amazon Bedrock & OpenAI Compatible provider to Lighthouse AI (#8957)
Co-authored-by: Chandrapal Badshah <12944530+Chan9390@users.noreply.github.com>
Co-authored-by: Adrián Jesús Peña Rodríguez <adrianjpr@gmail.com>
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-10-31 13:54:15 +01:00
Rubén De la Torre Vico df39f332e4 docs: add new definitions for check severities (#9123) 2025-10-31 13:22:16 +01:00
lydiavilchez 4a364d91be feat(gcp): add cloudstorage_bucket_logging_enabled check (#9091)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-31 13:01:55 +01:00
César Arroba 4b99c7b651 chore(github): missed conditional on sdk container action (#9120) 2025-10-31 11:43:09 +01:00
Rubén De la Torre Vico c441423d6a chore(aws): enhance metadata for codebuild service (#8851)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: Hugo Pereira Brito <101209179+HugoPBrito@users.noreply.github.com>
2025-10-31 11:41:34 +01:00
César Arroba 7e7f160b9a chore(sdk): allow sdk checks only on prowler repository (#9116) 2025-10-31 11:31:25 +01:00
César Arroba aaae73cd1c chore(github): rename jobs to know which component they belong (#9117) 2025-10-31 11:31:16 +01:00
Víctor Fernández Poyatos c5e88f4a74 feat(rls-transaction): add retry for read replica connections (#9064) 2025-10-31 11:09:05 +01:00
Víctor Fernández Poyatos 5d4415d090 feat(mute-rules): Support simple muting in API (#9051) 2025-10-31 10:49:17 +01:00
César Arroba 5d840385df chore(github): fix slack messages (#9107) 2025-10-30 17:21:11 +01:00
1539 changed files with 105547 additions and 40408 deletions
+18 -2
@@ -10,13 +10,16 @@ NEXT_PUBLIC_API_BASE_URL=${API_BASE_URL}
NEXT_PUBLIC_API_DOCS_URL=http://prowler-api:8080/api/v1/docs
AUTH_TRUST_HOST=true
UI_PORT=3000
# Temporary URL for feeds - replace with the actual feed URL
RSS_FEED_URL=https://prowler.com/blog/rss
# openssl rand -base64 32
AUTH_SECRET="N/c6mnaS5+SWq81+819OrzQZlmx1Vxtp/orjttJSmw8="
# Google Tag Manager ID
NEXT_PUBLIC_GOOGLE_TAG_MANAGER_ID=""
#### Code Review Configuration ####
# Enable Claude Code standards validation on pre-push hook
# Set to 'true' to validate changes against AGENTS.md standards via Claude Code
# Set to 'false' to skip validation
CODE_REVIEW_ENABLED=true
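The flag above gates an optional validation step on pre-push. A minimal sketch of how a hook could consult it - assuming a hypothetical Node/TypeScript wrapper script, not the repository's actual Husky hook - is:

```typescript
// Hypothetical gate for the pre-push hook; the real hook wiring may differ.
const codeReviewEnabled = process.env.CODE_REVIEW_ENABLED === "true";

if (!codeReviewEnabled) {
  console.log("CODE_REVIEW_ENABLED is not 'true' - skipping standards validation.");
  process.exit(0);
}

console.log("Validating pushed changes against AGENTS.md standards...");
// ...invoke the Claude Code validation tooling here (out of scope for this sketch)...
```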
#### Prowler API Configuration ####
PROWLER_API_VERSION="stable"
@@ -35,6 +38,8 @@ POSTGRES_DB=prowler_db
# POSTGRES_REPLICA_USER=prowler
# POSTGRES_REPLICA_PASSWORD=postgres
# POSTGRES_REPLICA_DB=prowler_db
# POSTGRES_REPLICA_MAX_ATTEMPTS=3
# POSTGRES_REPLICA_RETRY_BASE_DELAY=0.5
# Celery-Prowler task settings
TASK_RETRY_DELAY_SECONDS=0.1
@@ -103,6 +108,8 @@ DJANGO_THROTTLE_TOKEN_OBTAIN=50/minute
# Sentry settings
SENTRY_ENVIRONMENT=local
SENTRY_RELEASE=local
NEXT_PUBLIC_SENTRY_ENVIRONMENT=${SENTRY_ENVIRONMENT}
#### Prowler release version ####
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.12.2
@@ -124,3 +131,12 @@ LANGSMITH_TRACING=false
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY=""
LANGCHAIN_PROJECT=""
# RSS Feed Configuration
# Multiple feed sources can be configured as a JSON array (must be valid JSON, no trailing commas)
# Each source requires: id, name, type (github_releases|blog|custom), url, and enabled flag
# IMPORTANT: Must be a single line with valid JSON (no newlines, no trailing commas)
# Example with one source:
RSS_FEED_SOURCES='[{"id":"prowler-releases","name":"Prowler Releases","type":"github_releases","url":"https://github.com/prowler-cloud/prowler/releases.atom","enabled":true}]'
# Example with multiple sources (no trailing comma after last item):
# RSS_FEED_SOURCES='[{"id":"prowler-releases","name":"Prowler Releases","type":"github_releases","url":"https://github.com/prowler-cloud/prowler/releases.atom","enabled":true},{"id":"prowler-blog","name":"Prowler Blog","type":"blog","url":"https://prowler.com/blog/rss","enabled":false}]'
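The comments above spell out the RSS_FEED_SOURCES format: a single-line JSON array whose entries carry id, name, type, url, and enabled. A small TypeScript sketch of how such a value could be parsed and validated - an assumption for illustration, not the UI's actual loader - is:

```typescript
// Illustrative parser for RSS_FEED_SOURCES; field names follow the .env comments above.
interface RssFeedSource {
  id: string;
  name: string;
  type: "github_releases" | "blog" | "custom";
  url: string;
  enabled: boolean;
}

export function parseRssFeedSources(raw: string | undefined): RssFeedSource[] {
  if (!raw) return [];
  let parsed: unknown;
  try {
    // JSON.parse rejects trailing commas; the single-line requirement comes from .env parsing.
    parsed = JSON.parse(raw);
  } catch {
    throw new Error("RSS_FEED_SOURCES must be valid JSON");
  }
  if (!Array.isArray(parsed)) {
    throw new Error("RSS_FEED_SOURCES must be a JSON array of feed sources");
  }
  return (parsed as RssFeedSource[]).filter((source) => source.enabled);
}

// Usage: const sources = parseRssFeedSources(process.env.RSS_FEED_SOURCES);
```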
+14 -8
@@ -12,7 +12,7 @@ inputs:
required: false
default: ''
step-outcome:
description: 'Outcome of a step to determine status (success/failure) - automatically sets STATUS_EMOJI, STATUS_TEXT, and STATUS_COLOR env vars'
description: 'Outcome of a step to determine status (success/failure) - automatically sets STATUS_TEXT and STATUS_COLOR env vars'
required: false
default: ''
outputs:
@@ -27,35 +27,41 @@ runs:
shell: bash
run: |
if [[ "${{ inputs.step-outcome }}" == "success" ]]; then
echo "STATUS_EMOJI=[✓]" >> $GITHUB_ENV
echo "STATUS_TEXT=succeeded" >> $GITHUB_ENV
echo "STATUS_COLOR=6aa84f" >> $GITHUB_ENV
echo "STATUS_TEXT=Completed" >> $GITHUB_ENV
echo "STATUS_COLOR=#6aa84f" >> $GITHUB_ENV
elif [[ "${{ inputs.step-outcome }}" == "failure" ]]; then
echo "STATUS_EMOJI=[✗]" >> $GITHUB_ENV
echo "STATUS_TEXT=failed" >> $GITHUB_ENV
echo "STATUS_COLOR=fc3434" >> $GITHUB_ENV
echo "STATUS_TEXT=Failed" >> $GITHUB_ENV
echo "STATUS_COLOR=#fc3434" >> $GITHUB_ENV
else
# No outcome provided - pending/in progress state
echo "STATUS_COLOR=dbab09" >> $GITHUB_ENV
echo "STATUS_COLOR=#dbab09" >> $GITHUB_ENV
fi
- name: Send Slack notification (new message)
if: inputs.update-ts == ''
id: slack-notification-post
uses: slackapi/slack-github-action@91efab103c0de0a537f72a35f6b8cda0ee76bf0a # v2.1.1
env:
SLACK_PAYLOAD_FILE_PATH: ${{ inputs.payload-file-path }}
with:
method: chat.postMessage
token: ${{ inputs.slack-bot-token }}
payload-file-path: ${{ inputs.payload-file-path }}
payload-templated: true
errors: true
- name: Update Slack notification
if: inputs.update-ts != ''
id: slack-notification-update
uses: slackapi/slack-github-action@91efab103c0de0a537f72a35f6b8cda0ee76bf0a # v2.1.1
env:
SLACK_PAYLOAD_FILE_PATH: ${{ inputs.payload-file-path }}
with:
method: chat.update
token: ${{ inputs.slack-bot-token }}
payload-file-path: ${{ inputs.payload-file-path }}
payload-templated: true
errors: true
- name: Set output
id: slack-notification
+1 -1
@@ -87,7 +87,7 @@ runs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: always()
with:
name: trivy-scan-report-${{ inputs.image-name }}
name: trivy-scan-report-${{ inputs.image-name }}-${{ inputs.image-tag }}
path: trivy-report.json
retention-days: ${{ inputs.artifact-retention-days }}
+7
@@ -22,6 +22,13 @@ Please add a detailed description of how to review this PR.
- [ ] Review if is needed to change the [Readme.md](https://github.com/prowler-cloud/prowler/blob/master/README.md)
- [ ] Ensure new entries are added to [CHANGELOG.md](https://github.com/prowler-cloud/prowler/blob/master/prowler/CHANGELOG.md), if applicable.
#### UI
- [ ] All issue/task requirements work as expected on the UI
- [ ] Screenshots/Video of the functionality flow (if applicable) - Mobile (X < 640px)
- [ ] Screenshots/Video of the functionality flow (if applicable) - Tablet (640px < X < 1024px)
- [ ] Screenshots/Video of the functionality flow (if applicable) - Desktop (X > 1024px)
- [ ] Ensure new entries are added to [CHANGELOG.md](https://github.com/prowler-cloud/prowler/blob/master/ui/CHANGELOG.md), if applicable.
#### API
- [ ] Verify if API specs need to be regenerated.
- [ ] Check if version updates are required (e.g., specs, Poetry, etc.).
@@ -1,4 +1,18 @@
{
"channel": "$SLACK_CHANNEL_ID",
"text": "$STATUS_EMOJI $COMPONENT container release $RELEASE_TAG push $STATUS_TEXT <$GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID|View run>"
}
"channel": "${{ env.SLACK_CHANNEL_ID }}",
"ts": "${{ env.MESSAGE_TS }}",
"attachments": [
{
"color": "${{ env.STATUS_COLOR }}",
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*Status:*\n${{ env.STATUS_TEXT }}\n\n${{ env.COMPONENT }} container release ${{ env.RELEASE_TAG }} push ${{ env.STATUS_TEXT }}\n\n<${{ env.GITHUB_SERVER_URL }}/${{ env.GITHUB_REPOSITORY }}/actions/runs/${{ env.GITHUB_RUN_ID }}|View run>"
}
}
]
}
]
}
@@ -1,4 +1,17 @@
{
"channel": "$SLACK_CHANNEL_ID",
"text": "$COMPONENT container release $RELEASE_TAG push started... <$GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID|View run>"
"channel": "${{ env.SLACK_CHANNEL_ID }}",
"attachments": [
{
"color": "${{ env.STATUS_COLOR }}",
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*Status:*\nStarted\n\n${{ env.COMPONENT }} container release ${{ env.RELEASE_TAG }} push started...\n\n<${{ env.GITHUB_SERVER_URL }}/${{ env.GITHUB_REPOSITORY }}/actions/runs/${{ env.GITHUB_RUN_ID }}|View run>"
}
}
]
}
]
}
+3 -6
@@ -5,16 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'api/**'
- '.github/workflows/api-code-quality.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'api/**'
- '.github/workflows/api-code-quality.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -45,6 +39,9 @@ jobs:
id: check-changes
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
api/**
.github/workflows/api-code-quality.yml
files_ignore: |
api/docs/**
api/README.md
+3 -3
@@ -25,7 +25,7 @@ concurrency:
cancel-in-progress: true
jobs:
analyze:
api-analyze:
name: CodeQL Security Analysis
runs-on: ubuntu-latest
timeout-minutes: 30
@@ -45,12 +45,12 @@ jobs:
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Initialize CodeQL
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
uses: github/codeql-action/init@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/api-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
uses: github/codeql-action/analyze@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
with:
category: '/language:${{ matrix.language }}'
+117 -37
@@ -7,10 +7,16 @@ on:
paths:
- 'api/**'
- 'prowler/**'
- '.github/workflows/api-build-lint-push-containers.yml'
- '.github/workflows/api-container-build-push.yml'
release:
types:
- 'published'
workflow_dispatch:
inputs:
release_tag:
description: 'Release tag (e.g., 5.14.0)'
required: true
type: string
permissions:
contents: read
@@ -22,7 +28,7 @@ concurrency:
env:
# Tags
LATEST_TAG: latest
RELEASE_TAG: ${{ github.event.release.tag_name }}
RELEASE_TAG: ${{ github.event.release.tag_name || inputs.release_tag }}
STABLE_TAG: stable
WORKING_DIRECTORY: ./api
@@ -42,9 +48,44 @@ jobs:
id: set-short-sha
run: echo "short-sha=${GITHUB_SHA::7}" >> $GITHUB_OUTPUT
container-build-push:
notify-release-started:
if: github.repository == 'prowler-cloud/prowler' && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
needs: setup
runs-on: ubuntu-latest
timeout-minutes: 5
outputs:
message-ts: ${{ steps.slack-notification.outputs.ts }}
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Notify container push started
id: slack-notification
uses: ./.github/actions/slack-notification
env:
SLACK_CHANNEL_ID: ${{ secrets.SLACK_PLATFORM_DEPLOYMENTS }}
COMPONENT: API
RELEASE_TAG: ${{ env.RELEASE_TAG }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_REPOSITORY: ${{ github.repository }}
GITHUB_RUN_ID: ${{ github.run_id }}
with:
slack-bot-token: ${{ secrets.SLACK_BOT_TOKEN }}
payload-file-path: "./.github/scripts/slack-messages/container-release-started.json"
container-build-push:
needs: [setup, notify-release-started]
if: always() && needs.setup.result == 'success' && (needs.notify-release-started.result == 'success' || needs.notify-release-started.result == 'skipped')
runs-on: ${{ matrix.runner }}
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
arch: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
arch: arm64
timeout-minutes: 30
permissions:
contents: read
@@ -63,50 +104,88 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build and push API container (latest)
if: github.event_name == 'push'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
push: true
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.LATEST_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Notify container push started
if: github.event_name == 'release'
uses: ./.github/actions/slack-notification
env:
SLACK_CHANNEL_ID: ${{ secrets.SLACK_PLATFORM_DEPLOYMENTS }}
COMPONENT: API
RELEASE_TAG: ${{ env.RELEASE_TAG }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_REPOSITORY: ${{ github.repository }}
GITHUB_RUN_ID: ${{ github.run_id }}
with:
slack-bot-token: ${{ secrets.SLACK_BOT_TOKEN }}
payload-file-path: "./.github/scripts/slack-messages/container-release-started.json"
- name: Build and push API container (release)
if: github.event_name == 'release'
- name: Build and push API container for ${{ matrix.arch }}
id: container-push
if: github.event_name == 'push' || github.event_name == 'release' || github.event_name == 'workflow_dispatch'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
push: true
platforms: ${{ matrix.platform }}
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.RELEASE_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.STABLE_TAG }}
cache-from: type=gha
cache-to: type=gha,mode=max
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-${{ matrix.arch }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
# Create and push multi-architecture manifest
create-manifest:
needs: [setup, container-build-push]
if: github.event_name == 'push' || github.event_name == 'release' || github.event_name == 'workflow_dispatch'
runs-on: ubuntu-latest
steps:
- name: Login to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Create and push manifests for push event
if: github.event_name == 'push'
run: |
docker buildx imagetools create \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.LATEST_TAG }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64
- name: Create and push manifests for release event
if: github.event_name == 'release' || github.event_name == 'workflow_dispatch'
run: |
docker buildx imagetools create \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.RELEASE_TAG }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.STABLE_TAG }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64
- name: Install regctl
if: always()
uses: regclient/actions/regctl-installer@f61d18f46c86af724a9c804cb9ff2a6fec741c7c # main
- name: Cleanup intermediate architecture tags
if: always()
run: |
echo "Cleaning up intermediate tags..."
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64" || true
echo "Cleanup completed"
notify-release-completed:
if: always() && needs.notify-release-started.result == 'success' && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
needs: [setup, notify-release-started, container-build-push, create-manifest]
runs-on: ubuntu-latest
timeout-minutes: 5
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Determine overall outcome
id: outcome
run: |
if [[ "${{ needs.container-build-push.result }}" == "success" && "${{ needs.create-manifest.result }}" == "success" ]]; then
echo "outcome=success" >> $GITHUB_OUTPUT
else
echo "outcome=failure" >> $GITHUB_OUTPUT
fi
- name: Notify container push completed
if: github.event_name == 'release' && always()
uses: ./.github/actions/slack-notification
env:
SLACK_CHANNEL_ID: ${{ secrets.SLACK_PLATFORM_DEPLOYMENTS }}
MESSAGE_TS: ${{ needs.notify-release-started.outputs.message-ts }}
COMPONENT: API
RELEASE_TAG: ${{ env.RELEASE_TAG }}
GITHUB_SERVER_URL: ${{ github.server_url }}
@@ -115,7 +194,8 @@ jobs:
with:
slack-bot-token: ${{ secrets.SLACK_BOT_TOKEN }}
payload-file-path: "./.github/scripts/slack-messages/container-release-completed.json"
step-outcome: ${{ steps.container-push.outcome }}
step-outcome: ${{ steps.outcome.outputs.outcome }}
update-ts: ${{ needs.notify-release-started.outputs.message-ts }}
trigger-deployment:
if: github.event_name == 'push'
+21 -14
@@ -5,16 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'api/**'
- '.github/workflows/api-container-checks.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'api/**'
- '.github/workflows/api-container-checks.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -26,6 +20,7 @@ env:
jobs:
api-dockerfile-lint:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
@@ -49,7 +44,17 @@ jobs:
ignore: DL3013
api-container-build-and-scan:
runs-on: ubuntu-latest
if: github.repository == 'prowler-cloud/prowler'
runs-on: ${{ matrix.runner }}
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
arch: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
arch: arm64
timeout-minutes: 30
permissions:
contents: read
@@ -64,6 +69,7 @@ jobs:
id: check-changes
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: api/**
files_ignore: |
api/docs/**
api/README.md
@@ -73,22 +79,23 @@ jobs:
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build container
- name: Build container for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.API_WORKING_DIR }}
push: false
load: true
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
platforms: ${{ matrix.platform }}
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}-${{ matrix.arch }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
- name: Scan container with Trivy
if: github.repository == 'prowler-cloud/prowler' && steps.check-changes.outputs.any_changed == 'true'
- name: Scan container with Trivy for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: ./.github/actions/trivy-scan
with:
image-name: ${{ env.IMAGE_NAME }}
image-tag: ${{ github.sha }}
image-tag: ${{ github.sha }}-${{ matrix.arch }}
fail-on-critical: 'false'
severity: 'CRITICAL'
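The matrix rewrite above fans the build-and-scan job out to one native runner per architecture, each with its own GHA cache scope, so the two platforms never share or evict each other's layers. A rough local equivalent of a single matrix leg (the image name is illustrative, and the trivy call is only an assumption about what the trivy-scan composite action roughly does):

ARCH=arm64                                   # or amd64; one value per matrix leg
docker buildx build \
  --platform "linux/${ARCH}" \
  --load \
  -t "prowler-api:local-${ARCH}" \
  api/
trivy image --severity CRITICAL --exit-code 0 "prowler-api:local-${ARCH}"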
+3 -6
@@ -5,16 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'api/**'
- '.github/workflows/api-security.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'api/**'
- '.github/workflows/api-security.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -45,6 +39,9 @@ jobs:
id: check-changes
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
api/**
.github/workflows/api-security.yml
files_ignore: |
api/docs/**
api/README.md
+3 -6
@@ -5,16 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'api/**'
- '.github/workflows/api-tests.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'api/**'
- '.github/workflows/api-tests.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -85,6 +79,9 @@ jobs:
id: check-changes
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
api/**
.github/workflows/api-tests.yml
files_ignore: |
api/docs/**
api/README.md
+1 -1
@@ -38,7 +38,7 @@ jobs:
- name: Backport PR
if: steps.label_check.outputs.label_check == 'success'
uses: sorenlouv/backport-github-action@ad888e978060bc1b2798690dd9d03c4036560947 # v9.5.1
uses: sorenlouv/backport-github-action@516854e7c9f962b9939085c9a92ea28411d1ae90 # v10.2.0
with:
github_token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
auto_backport_label_prefix: ${{ env.BACKPORT_LABEL_PREFIX }}
@@ -0,0 +1,39 @@
name: 'Tools: Comment Label Update'
on:
issue_comment:
types:
- 'created'
concurrency:
group: ${{ github.workflow }}-${{ github.event.issue.number }}
cancel-in-progress: false
jobs:
update-labels:
if: contains(github.event.issue.labels.*.name, 'status/awaiting-response')
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
issues: write
pull-requests: write
steps:
- name: Remove 'status/awaiting-response' label
env:
GH_TOKEN: ${{ github.token }}
ISSUE_NUMBER: ${{ github.event.issue.number }}
run: |
echo "Removing 'status/awaiting-response' label from #$ISSUE_NUMBER"
gh api /repos/${{ github.repository }}/issues/$ISSUE_NUMBER/labels/status%2Fawaiting-response \
-X DELETE
- name: Add 'status/waiting-for-revision' label
env:
GH_TOKEN: ${{ github.token }}
ISSUE_NUMBER: ${{ github.event.issue.number }}
run: |
echo "Adding 'status/waiting-for-revision' label to #$ISSUE_NUMBER"
gh api /repos/${{ github.repository }}/issues/$ISSUE_NUMBER/labels \
-X POST \
-f labels[]='status/waiting-for-revision'
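Both steps talk to the Issues labels API directly through gh api; the only subtlety is that the label name must be percent-encoded in the DELETE path (the / in status/awaiting-response becomes %2F), while the POST sends it verbatim as a form field. The same swap against a hypothetical issue 123:

REPO=prowler-cloud/prowler
ISSUE=123   # hypothetical issue number
# remove the old status label ('/' has to be encoded in the URL path)
gh api "/repos/${REPO}/issues/${ISSUE}/labels/status%2Fawaiting-response" -X DELETE
# add the new status label (no encoding needed in the request body)
gh api "/repos/${REPO}/issues/${ISSUE}/labels" -X POST -f 'labels[]=status/waiting-for-revision'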
+1 -1
@@ -26,6 +26,6 @@ jobs:
steps:
- name: Check PR title format
uses: agenthunt/conventional-commit-checker-action@9e552d650d0e205553ec7792d447929fc78e012b # v2.0.0
uses: agenthunt/conventional-commit-checker-action@f1823f632e95a64547566dcd2c7da920e67117ad # v2.0.1
with:
pr-title-regex: '^(feat|fix|docs|style|refactor|perf|test|chore|build|ci|revert)(\([^)]+\))?!?: .+'
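The updated action pin keeps the same pr-title-regex, which accepts a conventional-commit type, an optional parenthesised scope, an optional ! for breaking changes, and a description after the colon. A quick local sanity check with made-up titles (GNU grep -E):

REGEX='^(feat|fix|docs|style|refactor|perf|test|chore|build|ci|revert)(\([^)]+\))?!?: .+'
echo 'feat(ui): add Risk Radar component' | grep -Eq "$REGEX" && echo accepted
echo 'Fix typo in README'                 | grep -Eq "$REGEX" || echo rejected   # wrong type casing and format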
+1 -1
@@ -28,6 +28,6 @@ jobs:
fetch-depth: 0
- name: Scan for secrets with TruffleHog
uses: trufflesecurity/trufflehog@ad6fc8fb446b8fafbf7ea8193d2d6bfd42f45690 # v3.90.11
uses: trufflesecurity/trufflehog@b84c3d14d189e16da175e2c27fa8136603783ffc # v3.90.12
with:
extra_args: '--results=verified,unknown'
-43
@@ -1,43 +0,0 @@
name: Community PR labelling
on:
# We need "write" permissions on the PR to be able to add a label.
pull_request_target: # We need this to have labelling permissions. There are no user inputs here, so we should be fine.
types:
- opened
permissions: {}
jobs:
label-if-community:
name: Add 'community' label if the PR is from a community contributor
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- name: Check if author is org member
id: check_membership
env:
GH_TOKEN: ${{ github.token }}
AUTHOR: ${{ github.event.pull_request.user.login }}
ORG: ${{ github.repository_owner }}
run: |
echo "Checking if $AUTHOR is a member of $ORG"
if gh api --method GET "orgs/$ORG/members/$AUTHOR" >/dev/null 2>&1; then
echo "is_member=true" >> $GITHUB_OUTPUT
echo "$AUTHOR is an organization member"
else
echo "is_member=false" >> $GITHUB_OUTPUT
echo "$AUTHOR is not an organization member"
fi
- name: Add community label
if: steps.check_membership.outputs.is_member == 'false'
env:
PR_NUMBER: ${{ github.event.pull_request.number }}
GH_TOKEN: ${{ github.token }}
run: |
echo "Adding 'community' label to PR #$PR_NUMBER"
gh pr edit "$PR_NUMBER" --add-label community
+63
@@ -27,3 +27,66 @@ jobs:
uses: actions/labeler@634933edcd8ababfe52f92936142cc22ac488b1b # v6.0.1
with:
sync-labels: true
label-community:
name: Add 'community' label if the PR is from a community contributor
needs: labeler
if: github.repository == 'prowler-cloud/prowler' && github.event.action == 'opened'
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- name: Check if author is org member
id: check_membership
env:
AUTHOR: ${{ github.event.pull_request.user.login }}
run: |
# Hardcoded list of prowler-cloud organization members
# This list includes members who have set their organization membership as private
ORG_MEMBERS=(
"AdriiiPRodri"
"Alan-TheGentleman"
"alejandrobailo"
"amitsharm"
"andoniaf"
"cesararroba"
"Chan9390"
"danibarranqueroo"
"HugoPBrito"
"jfagoagas"
"josemazo"
"lydiavilchez"
"mmuller88"
"MrCloudSec"
"pedrooot"
"prowler-bot"
"puchy22"
"rakan-pro"
"RosaRivasProwler"
"StylusFrost"
"toniblyx"
"vicferpoy"
)
echo "Checking if $AUTHOR is a member of prowler-cloud organization"
# Check if author is in the org members list
if printf '%s\n' "${ORG_MEMBERS[@]}" | grep -q "^${AUTHOR}$"; then
echo "is_member=true" >> $GITHUB_OUTPUT
echo "$AUTHOR is an organization member"
else
echo "is_member=false" >> $GITHUB_OUTPUT
echo "$AUTHOR is not an organization member"
fi
- name: Add community label
if: steps.check_membership.outputs.is_member == 'false'
env:
PR_NUMBER: ${{ github.event.pull_request.number }}
GH_TOKEN: ${{ github.token }}
run: |
echo "Adding 'community' label to PR #$PR_NUMBER"
gh api /repos/${{ github.repository }}/issues/${{ github.event.number }}/labels \
-X POST \
-f labels[]='community'
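Compared with the deleted workflow above, the membership probe no longer calls the orgs/members API (which cannot see private memberships) and instead matches the author against a hardcoded allow-list; anyone not on the list gets the community label. The core test, abbreviated:

AUTHOR=someuser                                        # PR author login (illustrative)
ORG_MEMBERS=("MrCloudSec" "jfagoagas" "prowler-bot")   # abbreviated list
if printf '%s\n' "${ORG_MEMBERS[@]}" | grep -q "^${AUTHOR}$"; then
  echo "organization member - no label added"
else
  echo "community contributor - 'community' label added"
fi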
+118 -45
@@ -10,6 +10,12 @@ on:
release:
types:
- 'published'
workflow_dispatch:
inputs:
release_tag:
description: 'Release tag (e.g., 5.14.0)'
required: true
type: string
permissions:
contents: read
@@ -21,7 +27,7 @@ concurrency:
env:
# Tags
LATEST_TAG: latest
RELEASE_TAG: ${{ github.event.release.tag_name }}
RELEASE_TAG: ${{ github.event.release.tag_name || inputs.release_tag }}
STABLE_TAG: stable
WORKING_DIRECTORY: ./mcp_server
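With the new workflow_dispatch trigger, RELEASE_TAG resolves to the release tag when the run was started by a publish and to the manual input otherwise, so a failed or skipped release push can be replayed by hand, roughly like this (the workflow file name here is an assumption):

# re-run the MCP container release for an already published tag
gh workflow run mcp-container-build.yml -f release_tag=5.14.0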
@@ -41,9 +47,44 @@ jobs:
id: set-short-sha
run: echo "short-sha=${GITHUB_SHA::7}" >> $GITHUB_OUTPUT
container-build-push:
notify-release-started:
if: github.repository == 'prowler-cloud/prowler' && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
needs: setup
runs-on: ubuntu-latest
timeout-minutes: 5
outputs:
message-ts: ${{ steps.slack-notification.outputs.ts }}
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Notify container push started
id: slack-notification
uses: ./.github/actions/slack-notification
env:
SLACK_CHANNEL_ID: ${{ secrets.SLACK_PLATFORM_DEPLOYMENTS }}
COMPONENT: MCP
RELEASE_TAG: ${{ env.RELEASE_TAG }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_REPOSITORY: ${{ github.repository }}
GITHUB_RUN_ID: ${{ github.run_id }}
with:
slack-bot-token: ${{ secrets.SLACK_BOT_TOKEN }}
payload-file-path: "./.github/scripts/slack-messages/container-release-started.json"
container-build-push:
needs: [setup, notify-release-started]
if: always() && needs.setup.result == 'success' && (needs.notify-release-started.result == 'success' || needs.notify-release-started.result == 'skipped')
runs-on: ${{ matrix.runner }}
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
arch: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
arch: arm64
timeout-minutes: 30
permissions:
contents: read
@@ -61,65 +102,96 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build and push MCP container (latest)
if: github.event_name == 'push'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
push: true
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.LATEST_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}
labels: |
org.opencontainers.image.title=Prowler MCP Server
org.opencontainers.image.description=Model Context Protocol server for Prowler
org.opencontainers.image.vendor=ProwlerPro, Inc.
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.created=${{ github.event.head_commit.timestamp }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Notify container push started
if: github.event_name == 'release'
uses: ./.github/actions/slack-notification
env:
SLACK_CHANNEL_ID: ${{ secrets.SLACK_PLATFORM_DEPLOYMENTS }}
COMPONENT: MCP
RELEASE_TAG: ${{ env.RELEASE_TAG }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_REPOSITORY: ${{ github.repository }}
GITHUB_RUN_ID: ${{ github.run_id }}
with:
slack-bot-token: ${{ secrets.SLACK_BOT_TOKEN }}
payload-file-path: "./.github/scripts/slack-messages/container-release-started.json"
- name: Build and push MCP container (release)
if: github.event_name == 'release'
- name: Build and push MCP container for ${{ matrix.arch }}
id: container-push
if: github.event_name == 'push' || github.event_name == 'release' || github.event_name == 'workflow_dispatch'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
push: true
platforms: ${{ matrix.platform }}
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.RELEASE_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.STABLE_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-${{ matrix.arch }}
labels: |
org.opencontainers.image.title=Prowler MCP Server
org.opencontainers.image.description=Model Context Protocol server for Prowler
org.opencontainers.image.vendor=ProwlerPro, Inc.
org.opencontainers.image.version=${{ env.RELEASE_TAG }}
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.created=${{ github.event.release.published_at }}
cache-from: type=gha
cache-to: type=gha,mode=max
org.opencontainers.image.created=${{ github.event_name == 'release' && github.event.release.published_at || github.event.head_commit.timestamp }}
${{ github.event_name == 'release' && format('org.opencontainers.image.version={0}', env.RELEASE_TAG) || '' }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
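Because one build step now serves push, release and workflow_dispatch events, the event-specific OCI metadata is folded into expressions: created falls back from the release timestamp to the head-commit timestamp, and the version label uses the "cond && value || ''" idiom so it simply disappears on push builds. Traced by hand with illustrative values:

EVENT=release RELEASE_TAG=5.14.0               # or EVENT=push
if [ "$EVENT" = "release" ]; then
  echo "org.opencontainers.image.created=2025-12-15T17:00:00Z"   # release.published_at
  echo "org.opencontainers.image.version=${RELEASE_TAG}"          # format('org.opencontainers.image.version={0}', env.RELEASE_TAG)
else
  echo "org.opencontainers.image.created=2025-12-15T16:00:00Z"   # head_commit.timestamp
  # the version label expression evaluates to '' and is dropped
fi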
# Create and push multi-architecture manifest
create-manifest:
needs: [setup, container-build-push]
if: github.event_name == 'push' || github.event_name == 'release' || github.event_name == 'workflow_dispatch'
runs-on: ubuntu-latest
steps:
- name: Login to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Create and push manifests for push event
if: github.event_name == 'push'
run: |
docker buildx imagetools create \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.LATEST_TAG }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64
- name: Create and push manifests for release event
if: github.event_name == 'release' || github.event_name == 'workflow_dispatch'
run: |
docker buildx imagetools create \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.RELEASE_TAG }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.STABLE_TAG }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64
- name: Install regctl
if: always()
uses: regclient/actions/regctl-installer@main
- name: Cleanup intermediate architecture tags
if: always()
run: |
echo "Cleaning up intermediate tags..."
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64" || true
echo "Cleanup completed"
notify-release-completed:
if: always() && needs.notify-release-started.result == 'success' && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
needs: [setup, notify-release-started, container-build-push, create-manifest]
runs-on: ubuntu-latest
timeout-minutes: 5
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Determine overall outcome
id: outcome
run: |
if [[ "${{ needs.container-build-push.result }}" == "success" && "${{ needs.create-manifest.result }}" == "success" ]]; then
echo "outcome=success" >> $GITHUB_OUTPUT
else
echo "outcome=failure" >> $GITHUB_OUTPUT
fi
- name: Notify container push completed
if: github.event_name == 'release' && always()
uses: ./.github/actions/slack-notification
env:
SLACK_CHANNEL_ID: ${{ secrets.SLACK_PLATFORM_DEPLOYMENTS }}
MESSAGE_TS: ${{ needs.notify-release-started.outputs.message-ts }}
COMPONENT: MCP
RELEASE_TAG: ${{ env.RELEASE_TAG }}
GITHUB_SERVER_URL: ${{ github.server_url }}
@@ -128,7 +200,8 @@ jobs:
with:
slack-bot-token: ${{ secrets.SLACK_BOT_TOKEN }}
payload-file-path: "./.github/scripts/slack-messages/container-release-completed.json"
step-outcome: ${{ steps.container-push.outcome }}
step-outcome: ${{ steps.outcome.outputs.outcome }}
update-ts: ${{ needs.notify-release-started.outputs.message-ts }}
trigger-deployment:
if: github.event_name == 'push'
+21 -14
@@ -5,16 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'mcp_server/**'
- '.github/workflows/mcp-container-checks.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'mcp_server/**'
- '.github/workflows/mcp-container-checks.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -26,6 +20,7 @@ env:
jobs:
mcp-dockerfile-lint:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
@@ -48,7 +43,17 @@ jobs:
dockerfile: mcp_server/Dockerfile
mcp-container-build-and-scan:
runs-on: ubuntu-latest
if: github.repository == 'prowler-cloud/prowler'
runs-on: ${{ matrix.runner }}
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
arch: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
arch: arm64
timeout-minutes: 30
permissions:
contents: read
@@ -63,6 +68,7 @@ jobs:
id: check-changes
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: mcp_server/**
files_ignore: |
mcp_server/README.md
mcp_server/CHANGELOG.md
@@ -71,22 +77,23 @@ jobs:
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build MCP container
- name: Build MCP container for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.MCP_WORKING_DIR }}
push: false
load: true
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
platforms: ${{ matrix.platform }}
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}-${{ matrix.arch }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
- name: Scan MCP container with Trivy
if: github.repository == 'prowler-cloud/prowler' && steps.check-changes.outputs.any_changed == 'true'
- name: Scan MCP container with Trivy for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: ./.github/actions/trivy-scan
with:
image-name: ${{ env.IMAGE_NAME }}
image-tag: ${{ github.sha }}
image-tag: ${{ github.sha }}-${{ matrix.arch }}
fail-on-critical: 'false'
severity: 'CRITICAL'
+1 -1
@@ -83,7 +83,7 @@ jobs:
- name: Update PR comment with changelog status
if: github.event.pull_request.head.repo.full_name == github.repository
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
uses: peter-evans/create-or-update-comment@e8674b075228eee787fea43ef493e45ece1004c9 # v5.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-id: ${{ steps.find-comment.outputs.comment-id }}
+1 -1
@@ -97,7 +97,7 @@ jobs:
body-includes: '<!-- conflict-checker-comment -->'
- name: Create or update comment
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
uses: peter-evans/create-or-update-comment@e8674b075228eee787fea43ef493e45ece1004c9 # v5.0.0
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ github.event.pull_request.number }}
+112 -130
@@ -47,7 +47,7 @@ jobs:
git config --global user.name 'prowler-bot'
git config --global user.email '179230569+prowler-bot@users.noreply.github.com'
- name: Parse version and read changelogs
- name: Parse version and determine branch
run: |
# Validate version format (reusing pattern from sdk-bump-version.yml)
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
@@ -64,66 +64,80 @@ jobs:
BRANCH_NAME="v${MAJOR_VERSION}.${MINOR_VERSION}"
echo "BRANCH_NAME=${BRANCH_NAME}" >> "${GITHUB_ENV}"
# Function to extract the latest version from changelog
extract_latest_version() {
local changelog_file="$1"
if [ -f "$changelog_file" ]; then
# Extract the first version entry (most recent) from changelog
# Format: ## [version] (1.2.3) or ## [vversion] (v1.2.3)
local version=$(grep -m 1 '^## \[' "$changelog_file" | sed 's/^## \[\(.*\)\].*/\1/' | sed 's/^v//' | tr -d '[:space:]')
echo "$version"
else
echo ""
fi
}
# Read actual versions from changelogs (source of truth)
UI_VERSION=$(extract_latest_version "ui/CHANGELOG.md")
API_VERSION=$(extract_latest_version "api/CHANGELOG.md")
SDK_VERSION=$(extract_latest_version "prowler/CHANGELOG.md")
MCP_VERSION=$(extract_latest_version "mcp_server/CHANGELOG.md")
echo "UI_VERSION=${UI_VERSION}" >> "${GITHUB_ENV}"
echo "API_VERSION=${API_VERSION}" >> "${GITHUB_ENV}"
echo "SDK_VERSION=${SDK_VERSION}" >> "${GITHUB_ENV}"
echo "MCP_VERSION=${MCP_VERSION}" >> "${GITHUB_ENV}"
if [ -n "$UI_VERSION" ]; then
echo "Read UI version from changelog: $UI_VERSION"
else
echo "Warning: No UI version found in ui/CHANGELOG.md"
fi
if [ -n "$API_VERSION" ]; then
echo "Read API version from changelog: $API_VERSION"
else
echo "Warning: No API version found in api/CHANGELOG.md"
fi
if [ -n "$SDK_VERSION" ]; then
echo "Read SDK version from changelog: $SDK_VERSION"
else
echo "Warning: No SDK version found in prowler/CHANGELOG.md"
fi
if [ -n "$MCP_VERSION" ]; then
echo "Read MCP version from changelog: $MCP_VERSION"
else
echo "Warning: No MCP version found in mcp_server/CHANGELOG.md"
fi
echo "Prowler version: $PROWLER_VERSION"
echo "Branch name: $BRANCH_NAME"
echo "UI version: $UI_VERSION"
echo "API version: $API_VERSION"
echo "SDK version: $SDK_VERSION"
echo "MCP version: $MCP_VERSION"
echo "Is minor release: $([ $PATCH_VERSION -eq 0 ] && echo 'true' || echo 'false')"
else
echo "Invalid version syntax: '$PROWLER_VERSION' (must be N.N.N)" >&2
exit 1
fi
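Everything downstream keys off this strict N.N.N match: BASH_REMATCH supplies the components, the release branch is rebuilt as vMAJOR.MINOR, and a zero patch component marks a minor release. Worked through for a made-up 5.14.2:

PROWLER_VERSION=5.14.2                          # illustrative input
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
  MAJOR=${BASH_REMATCH[1]} MINOR=${BASH_REMATCH[2]} PATCH=${BASH_REMATCH[3]}
  BRANCH_NAME="v${MAJOR}.${MINOR}"              # -> v5.14
  [ "$PATCH" -eq 0 ] && echo "minor release" || echo "patch release"   # -> patch release
fi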
- name: Checkout release branch
run: |
echo "Checking out branch $BRANCH_NAME for release $PROWLER_VERSION..."
if git show-ref --verify --quiet "refs/heads/$BRANCH_NAME"; then
echo "Branch $BRANCH_NAME exists locally, checking out..."
git checkout "$BRANCH_NAME"
elif git show-ref --verify --quiet "refs/remotes/origin/$BRANCH_NAME"; then
echo "Branch $BRANCH_NAME exists remotely, checking out..."
git checkout -b "$BRANCH_NAME" "origin/$BRANCH_NAME"
else
echo "ERROR: Branch $BRANCH_NAME does not exist. For minor releases (X.Y.0), create it manually first. For patch releases (X.Y.Z), the branch should already exist."
exit 1
fi
- name: Read changelog versions from release branch
run: |
# Function to extract the version for a specific Prowler release from changelog
# This looks for entries with "(Prowler X.Y.Z)" to find the released version
extract_version_for_release() {
local changelog_file="$1"
local prowler_version="$2"
if [ -f "$changelog_file" ]; then
# Extract version that matches this Prowler release
# Format: ## [version] (Prowler X.Y.Z) or ## [vversion] (Prowler vX.Y.Z)
local version=$(grep '^## \[' "$changelog_file" | grep "(Prowler v\?${prowler_version})" | head -1 | sed 's/^## \[\(.*\)\].*/\1/' | sed 's/^v//' | tr -d '[:space:]')
echo "$version"
else
echo ""
fi
}
# Read versions from changelogs for this specific Prowler release
SDK_VERSION=$(extract_version_for_release "prowler/CHANGELOG.md" "$PROWLER_VERSION")
API_VERSION=$(extract_version_for_release "api/CHANGELOG.md" "$PROWLER_VERSION")
UI_VERSION=$(extract_version_for_release "ui/CHANGELOG.md" "$PROWLER_VERSION")
MCP_VERSION=$(extract_version_for_release "mcp_server/CHANGELOG.md" "$PROWLER_VERSION")
echo "SDK_VERSION=${SDK_VERSION}" >> "${GITHUB_ENV}"
echo "API_VERSION=${API_VERSION}" >> "${GITHUB_ENV}"
echo "UI_VERSION=${UI_VERSION}" >> "${GITHUB_ENV}"
echo "MCP_VERSION=${MCP_VERSION}" >> "${GITHUB_ENV}"
if [ -n "$SDK_VERSION" ]; then
echo "✓ SDK version for Prowler $PROWLER_VERSION: $SDK_VERSION"
else
echo " No SDK version found for Prowler $PROWLER_VERSION in prowler/CHANGELOG.md"
fi
if [ -n "$API_VERSION" ]; then
echo "✓ API version for Prowler $PROWLER_VERSION: $API_VERSION"
else
echo " No API version found for Prowler $PROWLER_VERSION in api/CHANGELOG.md"
fi
if [ -n "$UI_VERSION" ]; then
echo "✓ UI version for Prowler $PROWLER_VERSION: $UI_VERSION"
else
echo " No UI version found for Prowler $PROWLER_VERSION in ui/CHANGELOG.md"
fi
if [ -n "$MCP_VERSION" ]; then
echo "✓ MCP version for Prowler $PROWLER_VERSION: $MCP_VERSION"
else
echo " No MCP version found for Prowler $PROWLER_VERSION in mcp_server/CHANGELOG.md"
fi
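extract_version_for_release keys on the "(Prowler X.Y.Z)" suffix that each component changelog header carries, so the step recovers the component version that actually shipped with this Prowler release rather than whatever entry happens to be newest. The same pipeline against a made-up changelog:

PROWLER_VERSION=5.14.0
changelog='## [1.15.0] (Prowler 5.14.0)
## [1.14.2] (Prowler 5.13.2)'                    # hypothetical api/CHANGELOG.md headers
echo "$changelog" \
  | grep '^## \[' \
  | grep "(Prowler v\?${PROWLER_VERSION})" \
  | head -1 \
  | sed 's/^## \[\(.*\)\].*/\1/' | sed 's/^v//' | tr -d '[:space:]'
# -> 1.15.0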
- name: Extract and combine changelog entries
run: |
set -e
@@ -149,70 +163,54 @@ jobs:
# Remove --- separators
sed -i '/^---$/d' "$output_file"
# Remove only trailing empty lines (not all empty lines)
sed -i -e :a -e '/^\s*$/d;N;ba' "$output_file"
}
# Calculate expected versions for this release
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
EXPECTED_UI_VERSION="1.${BASH_REMATCH[2]}.${BASH_REMATCH[3]}"
EXPECTED_API_VERSION="1.$((${BASH_REMATCH[2]} + 1)).${BASH_REMATCH[3]}"
echo "Expected UI version for this release: $EXPECTED_UI_VERSION"
echo "Expected API version for this release: $EXPECTED_API_VERSION"
fi
# Determine if components have changes for this specific release
# UI has changes if its current version matches what we expect for this release
if [ -n "$UI_VERSION" ] && [ "$UI_VERSION" = "$EXPECTED_UI_VERSION" ]; then
echo "HAS_UI_CHANGES=true" >> $GITHUB_ENV
echo "✓ UI changes detected - version matches expected: $UI_VERSION"
extract_changelog "ui/CHANGELOG.md" "$UI_VERSION" "ui_changelog.md"
else
echo "HAS_UI_CHANGES=false" >> $GITHUB_ENV
echo " No UI changes for this release (current: $UI_VERSION, expected: $EXPECTED_UI_VERSION)"
touch "ui_changelog.md"
fi
# API has changes if its current version matches what we expect for this release
if [ -n "$API_VERSION" ] && [ "$API_VERSION" = "$EXPECTED_API_VERSION" ]; then
echo "HAS_API_CHANGES=true" >> $GITHUB_ENV
echo "✓ API changes detected - version matches expected: $API_VERSION"
extract_changelog "api/CHANGELOG.md" "$API_VERSION" "api_changelog.md"
else
echo "HAS_API_CHANGES=false" >> $GITHUB_ENV
echo " No API changes for this release (current: $API_VERSION, expected: $EXPECTED_API_VERSION)"
touch "api_changelog.md"
fi
# SDK has changes if its current version matches the input version
if [ -n "$SDK_VERSION" ] && [ "$SDK_VERSION" = "$PROWLER_VERSION" ]; then
if [ -n "$SDK_VERSION" ]; then
echo "HAS_SDK_CHANGES=true" >> $GITHUB_ENV
echo "✓ SDK changes detected - version matches input: $SDK_VERSION"
extract_changelog "prowler/CHANGELOG.md" "$PROWLER_VERSION" "prowler_changelog.md"
HAS_SDK_CHANGES="true"
echo "✓ SDK changes detected - version: $SDK_VERSION"
extract_changelog "prowler/CHANGELOG.md" "$SDK_VERSION" "prowler_changelog.md"
else
echo "HAS_SDK_CHANGES=false" >> $GITHUB_ENV
echo " No SDK changes for this release (current: $SDK_VERSION, input: $PROWLER_VERSION)"
HAS_SDK_CHANGES="false"
echo " No SDK changes for this release"
touch "prowler_changelog.md"
fi
# MCP has changes if the changelog references this Prowler version
# Check if the changelog contains "(Prowler X.Y.Z)" or "(Prowler UNRELEASED)"
if [ -f "mcp_server/CHANGELOG.md" ]; then
MCP_PROWLER_REF=$(grep -m 1 "^## \[.*\] (Prowler" mcp_server/CHANGELOG.md | sed -E 's/.*\(Prowler ([^)]+)\).*/\1/' | tr -d '[:space:]')
if [ "$MCP_PROWLER_REF" = "$PROWLER_VERSION" ] || [ "$MCP_PROWLER_REF" = "UNRELEASED" ]; then
echo "HAS_MCP_CHANGES=true" >> $GITHUB_ENV
echo "✓ MCP changes detected - Prowler reference: $MCP_PROWLER_REF (version: $MCP_VERSION)"
extract_changelog "mcp_server/CHANGELOG.md" "$MCP_VERSION" "mcp_changelog.md"
else
echo "HAS_MCP_CHANGES=false" >> $GITHUB_ENV
echo " No MCP changes for this release (Prowler reference: $MCP_PROWLER_REF, input: $PROWLER_VERSION)"
touch "mcp_changelog.md"
fi
if [ -n "$API_VERSION" ]; then
echo "HAS_API_CHANGES=true" >> $GITHUB_ENV
HAS_API_CHANGES="true"
echo "✓ API changes detected - version: $API_VERSION"
extract_changelog "api/CHANGELOG.md" "$API_VERSION" "api_changelog.md"
else
echo "HAS_API_CHANGES=false" >> $GITHUB_ENV
HAS_API_CHANGES="false"
echo " No API changes for this release"
touch "api_changelog.md"
fi
if [ -n "$UI_VERSION" ]; then
echo "HAS_UI_CHANGES=true" >> $GITHUB_ENV
HAS_UI_CHANGES="true"
echo "✓ UI changes detected - version: $UI_VERSION"
extract_changelog "ui/CHANGELOG.md" "$UI_VERSION" "ui_changelog.md"
else
echo "HAS_UI_CHANGES=false" >> $GITHUB_ENV
HAS_UI_CHANGES="false"
echo " No UI changes for this release"
touch "ui_changelog.md"
fi
if [ -n "$MCP_VERSION" ]; then
echo "HAS_MCP_CHANGES=true" >> $GITHUB_ENV
HAS_MCP_CHANGES="true"
echo "✓ MCP changes detected - version: $MCP_VERSION"
extract_changelog "mcp_server/CHANGELOG.md" "$MCP_VERSION" "mcp_changelog.md"
else
echo "HAS_MCP_CHANGES=false" >> $GITHUB_ENV
echo " No MCP changelog found"
HAS_MCP_CHANGES="false"
echo " No MCP changes for this release"
touch "mcp_changelog.md"
fi
@@ -255,21 +253,6 @@ jobs:
echo "Combined changelog preview:"
cat combined_changelog.md
- name: Checkout release branch for patch release
if: ${{ env.PATCH_VERSION != '0' }}
run: |
echo "Patch release detected, checking out existing branch $BRANCH_NAME..."
if git show-ref --verify --quiet "refs/heads/$BRANCH_NAME"; then
echo "Branch $BRANCH_NAME exists locally, checking out..."
git checkout "$BRANCH_NAME"
elif git show-ref --verify --quiet "refs/remotes/origin/$BRANCH_NAME"; then
echo "Branch $BRANCH_NAME exists remotely, checking out..."
git checkout -b "$BRANCH_NAME" "origin/$BRANCH_NAME"
else
echo "ERROR: Branch $BRANCH_NAME should exist for patch release $PROWLER_VERSION"
exit 1
fi
- name: Verify SDK version in pyproject.toml
run: |
CURRENT_VERSION=$(grep '^version = ' pyproject.toml | sed -E 's/version = "([^"]+)"/\1/' | tr -d '[:space:]')
@@ -323,17 +306,16 @@ jobs:
fi
echo "✓ api/src/backend/api/v1/views.py version: $CURRENT_API_VERSION"
- name: Checkout release branch for minor release
if: ${{ env.PATCH_VERSION == '0' }}
- name: Verify API version in api/src/backend/api/specs/v1.yaml
if: ${{ env.HAS_API_CHANGES == 'true' }}
run: |
echo "Minor release detected (patch = 0), checking out existing branch $BRANCH_NAME..."
if git show-ref --verify --quiet "refs/remotes/origin/$BRANCH_NAME"; then
echo "Branch $BRANCH_NAME exists remotely, checking out..."
git checkout -b "$BRANCH_NAME" "origin/$BRANCH_NAME"
else
echo "ERROR: Branch $BRANCH_NAME should exist for minor release $PROWLER_VERSION. Please create it manually first."
CURRENT_API_VERSION=$(grep '^ version: ' api/src/backend/api/specs/v1.yaml | sed -E 's/ version: ([0-9]+\.[0-9]+\.[0-9]+)/\1/' | tr -d '[:space:]')
API_VERSION_TRIMMED=$(echo "$API_VERSION" | tr -d '[:space:]')
if [ "$CURRENT_API_VERSION" != "$API_VERSION_TRIMMED" ]; then
echo "ERROR: API version mismatch in api/src/backend/api/specs/v1.yaml (expected: '$API_VERSION_TRIMMED', found: '$CURRENT_API_VERSION')"
exit 1
fi
echo "✓ api/src/backend/api/specs/v1.yaml version: $CURRENT_API_VERSION"
- name: Update API prowler dependency for minor release
if: ${{ env.PATCH_VERSION == '0' }}
@@ -392,7 +374,7 @@ jobs:
no-changelog
- name: Create draft release
uses: softprops/action-gh-release@6cbd405e2c4e67a21c47fa9e383d020e4e28b836 # v2.3.3
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090 # v2.4.1
with:
tag_name: ${{ env.PROWLER_VERSION }}
name: Prowler ${{ env.PROWLER_VERSION }}
+3 -12
@@ -5,22 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'prowler/**'
- 'tests/**'
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/sdk-code-quality.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'prowler/**'
- 'tests/**'
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/sdk-code-quality.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -28,6 +16,7 @@ concurrency:
jobs:
sdk-code-quality:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 20
permissions:
@@ -48,7 +37,9 @@ jobs:
id: check-changes
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: ./**
files_ignore: |
.github/**
prowler/CHANGELOG.md
docs/**
permissions/**
+4 -3
@@ -31,7 +31,8 @@ concurrency:
cancel-in-progress: true
jobs:
analyze:
sdk-analyze:
if: github.repository == 'prowler-cloud/prowler'
name: CodeQL Security Analysis
runs-on: ubuntu-latest
timeout-minutes: 30
@@ -51,12 +52,12 @@ jobs:
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Initialize CodeQL
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
uses: github/codeql-action/init@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/sdk-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
uses: github/codeql-action/analyze@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
with:
category: '/language:${{ matrix.language }}'
+154 -60
@@ -16,6 +16,12 @@ on:
release:
types:
- 'published'
workflow_dispatch:
inputs:
release_tag:
description: 'Release tag (e.g., 5.14.0)'
required: true
type: string
permissions:
contents: read
@@ -44,19 +50,15 @@ env:
AWS_REGION: us-east-1
jobs:
container-build-push:
setup:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 45
permissions:
contents: read
packages: write
timeout-minutes: 5
outputs:
prowler_version: ${{ steps.get-prowler-version.outputs.prowler_version }}
prowler_version_major: ${{ steps.get-prowler-version.outputs.prowler_version_major }}
env:
POETRY_VIRTUALENVS_CREATE: 'false'
latest_tag: ${{ steps.get-prowler-version.outputs.latest_tag }}
stable_tag: ${{ steps.get-prowler-version.outputs.stable_tag }}
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
@@ -76,28 +78,26 @@ jobs:
run: |
PROWLER_VERSION="$(poetry version -s 2>/dev/null)"
echo "prowler_version=${PROWLER_VERSION}" >> "${GITHUB_OUTPUT}"
echo "PROWLER_VERSION=${PROWLER_VERSION}" >> "${GITHUB_ENV}"
# Extract major version
PROWLER_VERSION_MAJOR="${PROWLER_VERSION%%.*}"
echo "prowler_version_major=${PROWLER_VERSION_MAJOR}" >> "${GITHUB_OUTPUT}"
echo "PROWLER_VERSION_MAJOR=${PROWLER_VERSION_MAJOR}" >> "${GITHUB_ENV}"
# Set version-specific tags
case ${PROWLER_VERSION_MAJOR} in
3)
echo "LATEST_TAG=v3-latest" >> "${GITHUB_ENV}"
echo "STABLE_TAG=v3-stable" >> "${GITHUB_ENV}"
echo "latest_tag=v3-latest" >> "${GITHUB_OUTPUT}"
echo "stable_tag=v3-stable" >> "${GITHUB_OUTPUT}"
echo "✓ Prowler v3 detected - tags: v3-latest, v3-stable"
;;
4)
echo "LATEST_TAG=v4-latest" >> "${GITHUB_ENV}"
echo "STABLE_TAG=v4-stable" >> "${GITHUB_ENV}"
echo "latest_tag=v4-latest" >> "${GITHUB_OUTPUT}"
echo "stable_tag=v4-stable" >> "${GITHUB_OUTPUT}"
echo "✓ Prowler v4 detected - tags: v4-latest, v4-stable"
;;
5)
echo "LATEST_TAG=latest" >> "${GITHUB_ENV}"
echo "STABLE_TAG=stable" >> "${GITHUB_ENV}"
echo "latest_tag=latest" >> "${GITHUB_OUTPUT}"
echo "stable_tag=stable" >> "${GITHUB_OUTPUT}"
echo "✓ Prowler v5 detected - tags: latest, stable"
;;
*)
@@ -106,6 +106,53 @@ jobs:
;;
esac
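The setup job therefore derives everything from pyproject.toml: the full version, its major component, and the floating tags, with v3/v4 maintenance builds getting prefixed tags so they can never overwrite the unqualified latest/stable. Condensed to its essentials:

PROWLER_VERSION="$(poetry version -s)"            # e.g. 5.14.0
PROWLER_VERSION_MAJOR="${PROWLER_VERSION%%.*}"    # -> 5
case "$PROWLER_VERSION_MAJOR" in
  3) LATEST_TAG=v3-latest STABLE_TAG=v3-stable ;;
  4) LATEST_TAG=v4-latest STABLE_TAG=v4-stable ;;
  5) LATEST_TAG=latest    STABLE_TAG=stable ;;
esac
echo "${LATEST_TAG} ${STABLE_TAG}"                # 5.x -> "latest stable"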
notify-release-started:
if: github.repository == 'prowler-cloud/prowler' && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
needs: setup
runs-on: ubuntu-latest
timeout-minutes: 5
outputs:
message-ts: ${{ steps.slack-notification.outputs.ts }}
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Notify container push started
id: slack-notification
uses: ./.github/actions/slack-notification
env:
SLACK_CHANNEL_ID: ${{ secrets.SLACK_PLATFORM_DEPLOYMENTS }}
COMPONENT: SDK
RELEASE_TAG: ${{ needs.setup.outputs.prowler_version }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_REPOSITORY: ${{ github.repository }}
GITHUB_RUN_ID: ${{ github.run_id }}
with:
slack-bot-token: ${{ secrets.SLACK_BOT_TOKEN }}
payload-file-path: "./.github/scripts/slack-messages/container-release-started.json"
container-build-push:
needs: [setup, notify-release-started]
if: always() && needs.setup.result == 'success' && (needs.notify-release-started.result == 'success' || needs.notify-release-started.result == 'skipped')
runs-on: ${{ matrix.runner }}
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
arch: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
arch: arm64
timeout-minutes: 45
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Login to DockerHub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
@@ -124,70 +171,117 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build and push SDK container (latest)
if: github.event_name == 'push'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: .
file: ${{ env.DOCKERFILE_PATH }}
push: true
tags: |
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.LATEST_TAG }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Notify container push started
if: github.event_name == 'release'
uses: ./.github/actions/slack-notification
env:
SLACK_CHANNEL_ID: ${{ secrets.SLACK_PLATFORM_DEPLOYMENTS }}
COMPONENT: SDK
RELEASE_TAG: ${{ env.PROWLER_VERSION }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_REPOSITORY: ${{ github.repository }}
GITHUB_RUN_ID: ${{ github.run_id }}
with:
slack-bot-token: ${{ secrets.SLACK_BOT_TOKEN }}
payload-file-path: "./.github/scripts/slack-messages/container-release-started.json"
- name: Build and push SDK container (release)
if: github.event_name == 'release'
- name: Build and push SDK container for ${{ matrix.arch }}
id: container-push
if: github.event_name == 'push' || github.event_name == 'release' || github.event_name == 'workflow_dispatch'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: .
file: ${{ env.DOCKERFILE_PATH }}
push: true
platforms: ${{ matrix.platform }}
tags: |
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.PROWLER_VERSION }}
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.PROWLER_VERSION }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.PROWLER_VERSION }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.STABLE_TAG }}
cache-from: type=gha
cache-to: type=gha,mode=max
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-${{ matrix.arch }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
# Create and push multi-architecture manifest
create-manifest:
needs: [setup, container-build-push]
if: github.event_name == 'push' || github.event_name == 'release' || github.event_name == 'workflow_dispatch'
runs-on: ubuntu-latest
steps:
- name: Login to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to Public ECR
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: public.ecr.aws
username: ${{ secrets.PUBLIC_ECR_AWS_ACCESS_KEY_ID }}
password: ${{ secrets.PUBLIC_ECR_AWS_SECRET_ACCESS_KEY }}
env:
AWS_REGION: ${{ env.AWS_REGION }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Create and push manifests for push event
if: github.event_name == 'push'
run: |
docker buildx imagetools create \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }} \
-t ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }} \
-t ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-arm64
- name: Create and push manifests for release event
if: github.event_name == 'release' || github.event_name == 'workflow_dispatch'
run: |
docker buildx imagetools create \
-t ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ needs.setup.outputs.prowler_version }} \
-t ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ needs.setup.outputs.stable_tag }} \
-t ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ needs.setup.outputs.prowler_version }} \
-t ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ needs.setup.outputs.stable_tag }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.prowler_version }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.stable_tag }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-arm64
- name: Install regctl
if: always()
uses: regclient/actions/regctl-installer@main
- name: Cleanup intermediate architecture tags
if: always()
run: |
echo "Cleaning up intermediate tags..."
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-amd64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-arm64" || true
echo "Cleanup completed"
notify-release-completed:
if: always() && needs.notify-release-started.result == 'success' && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
needs: [setup, notify-release-started, container-build-push, create-manifest]
runs-on: ubuntu-latest
timeout-minutes: 5
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Determine overall outcome
id: outcome
run: |
if [[ "${{ needs.container-build-push.result }}" == "success" && "${{ needs.create-manifest.result }}" == "success" ]]; then
echo "outcome=success" >> $GITHUB_OUTPUT
else
echo "outcome=failure" >> $GITHUB_OUTPUT
fi
- name: Notify container push completed
if: github.event_name == 'release' && always()
uses: ./.github/actions/slack-notification
env:
SLACK_CHANNEL_ID: ${{ secrets.SLACK_PLATFORM_DEPLOYMENTS }}
MESSAGE_TS: ${{ needs.notify-release-started.outputs.message-ts }}
COMPONENT: SDK
RELEASE_TAG: ${{ env.PROWLER_VERSION }}
RELEASE_TAG: ${{ needs.setup.outputs.prowler_version }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_REPOSITORY: ${{ github.repository }}
GITHUB_RUN_ID: ${{ github.run_id }}
with:
slack-bot-token: ${{ secrets.SLACK_BOT_TOKEN }}
payload-file-path: "./.github/scripts/slack-messages/container-release-completed.json"
step-outcome: ${{ steps.container-push.outcome }}
step-outcome: ${{ steps.outcome.outputs.outcome }}
update-ts: ${{ needs.notify-release-started.outputs.message-ts }}
dispatch-v3-deployment:
if: needs.container-build-push.outputs.prowler_version_major == '3'
needs: container-build-push
if: needs.setup.outputs.prowler_version_major == '3'
needs: [setup, container-build-push]
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
@@ -214,4 +308,4 @@ jobs:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}
event-type: dispatch
client-payload: '{"version":"release","tag":"${{ needs.container-build-push.outputs.prowler_version }}"}'
client-payload: '{"version":"release","tag":"${{ needs.setup.outputs.prowler_version }}"}'
+22 -18
@@ -5,20 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'prowler/**'
- 'tests/**'
- 'Dockerfile'
- '.github/workflows/sdk-container-checks.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'prowler/**'
- 'tests/**'
- 'Dockerfile'
- '.github/workflows/sdk-container-checks.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -29,6 +19,7 @@ env:
jobs:
sdk-dockerfile-lint:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 10
permissions:
@@ -52,7 +43,17 @@ jobs:
ignore: DL3013
sdk-container-build-and-scan:
runs-on: ubuntu-latest
if: github.repository == 'prowler-cloud/prowler'
runs-on: ${{ matrix.runner }}
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
arch: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
arch: arm64
timeout-minutes: 30
permissions:
contents: read
@@ -67,7 +68,9 @@ jobs:
id: check-changes
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: ./**
files_ignore: |
.github/**
prowler/CHANGELOG.md
docs/**
permissions/**
@@ -88,22 +91,23 @@ jobs:
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build SDK container
- name: Build SDK container for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: .
push: false
load: true
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
platforms: ${{ matrix.platform }}
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}-${{ matrix.arch }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
- name: Scan SDK container with Trivy
if: github.repository == 'prowler-cloud/prowler' && steps.check-changes.outputs.any_changed == 'true'
- name: Scan SDK container with Trivy for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: ./.github/actions/trivy-scan
with:
image-name: ${{ env.IMAGE_NAME }}
image-tag: ${{ github.sha }}
image-tag: ${{ github.sha }}-${{ matrix.arch }}
fail-on-critical: 'false'
severity: 'CRITICAL'
@@ -39,7 +39,7 @@ jobs:
run: pip install boto3
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@a03048d87541d1d9fcf2ecf528a4a65ba9bd7838 # v5.0.0
uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
with:
aws-region: ${{ env.AWS_REGION }}
role-to-assume: ${{ secrets.DEV_IAM_ROLE_ARN }}
+3 -12
@@ -5,22 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'prowler/**'
- 'tests/**'
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/sdk-security.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'prowler/**'
- 'tests/**'
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/sdk-security.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -28,6 +16,7 @@ concurrency:
jobs:
sdk-security-scans:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
@@ -41,7 +30,9 @@ jobs:
id: check-changes
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: ./**
files_ignore: |
.github/**
prowler/CHANGELOG.md
docs/**
permissions/**
+105 -13
@@ -5,22 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'prowler/**'
- 'tests/**'
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/sdk-tests.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'prowler/**'
- 'tests/**'
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/sdk-tests.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -28,6 +16,7 @@ concurrency:
jobs:
sdk-tests:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 120
permissions:
@@ -48,7 +37,9 @@ jobs:
id: check-changes
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: ./**
files_ignore: |
.github/**
prowler/CHANGELOG.md
docs/**
permissions/**
@@ -91,9 +82,110 @@ jobs:
./tests/**/aws/**
./poetry.lock
- name: Resolve AWS services under test
if: steps.changed-aws.outputs.any_changed == 'true'
id: aws-services
shell: bash
run: |
python3 <<'PY'
import os
from pathlib import Path
dependents = {
"acm": ["elb"],
"autoscaling": ["dynamodb"],
"awslambda": ["ec2", "inspector2"],
"backup": ["dynamodb", "ec2", "rds"],
"cloudfront": ["shield"],
"cloudtrail": ["awslambda", "cloudwatch"],
"cloudwatch": ["bedrock"],
"ec2": ["dlm", "dms", "elbv2", "emr", "inspector2", "rds", "redshift", "route53", "shield", "ssm"],
"ecr": ["inspector2"],
"elb": ["shield"],
"elbv2": ["shield"],
"globalaccelerator": ["shield"],
"iam": ["bedrock", "cloudtrail", "cloudwatch", "codebuild"],
"kafka": ["firehose"],
"kinesis": ["firehose"],
"kms": ["kafka"],
"organizations": ["iam", "servicecatalog"],
"route53": ["shield"],
"s3": ["bedrock", "cloudfront", "cloudtrail", "macie"],
"ssm": ["ec2"],
"vpc": ["awslambda", "ec2", "efs", "elasticache", "neptune", "networkfirewall", "rds", "redshift", "workspaces"],
"waf": ["elbv2"],
"wafv2": ["cognito", "elbv2"],
}
changed_raw = """${{ steps.changed-aws.outputs.all_changed_files }}"""
# all_changed_files is space-separated, not newline-separated
# Strip leading "./" if present for consistent path handling
changed_files = [Path(f.lstrip("./")) for f in changed_raw.split() if f]
services = set()
run_all = False
for path in changed_files:
path_str = path.as_posix()
parts = path.parts
if path_str.startswith("prowler/providers/aws/services/"):
if len(parts) > 4 and "." not in parts[4]:
services.add(parts[4])
else:
run_all = True
elif path_str.startswith("tests/providers/aws/services/"):
if len(parts) > 4 and "." not in parts[4]:
services.add(parts[4])
else:
run_all = True
elif path_str.startswith("prowler/providers/aws/") or path_str.startswith("tests/providers/aws/"):
run_all = True
# Expand with direct dependent services (one level only)
# We only test services that directly depend on the changed services,
# not transitive dependencies (services that depend on dependents)
original_services = set(services)
for svc in original_services:
for dep in dependents.get(svc, []):
services.add(dep)
if run_all or not services:
run_all = True
services = set()
service_paths = " ".join(sorted(f"tests/providers/aws/services/{svc}" for svc in services))
output_lines = [
f"run_all={'true' if run_all else 'false'}",
f"services={' '.join(sorted(services))}",
f"service_paths={service_paths}",
]
with open(os.environ["GITHUB_OUTPUT"], "a") as gh_out:
for line in output_lines:
gh_out.write(line + "\n")
print(f"AWS changed files (filtered): {changed_raw or 'none'}")
print(f"Run all AWS tests: {run_all}")
if services:
print(f"AWS service test paths: {service_paths}")
else:
print("AWS service test paths: none detected")
PY
- name: Run AWS tests
if: steps.changed-aws.outputs.any_changed == 'true'
run: poetry run pytest -n auto --cov=./prowler/providers/aws --cov-report=xml:aws_coverage.xml tests/providers/aws
run: |
echo "AWS run_all=${{ steps.aws-services.outputs.run_all }}"
echo "AWS service_paths='${{ steps.aws-services.outputs.service_paths }}'"
if [ "${{ steps.aws-services.outputs.run_all }}" = "true" ]; then
poetry run pytest -n auto --cov=./prowler/providers/aws --cov-report=xml:aws_coverage.xml tests/providers/aws
elif [ -z "${{ steps.aws-services.outputs.service_paths }}" ]; then
echo "No AWS service paths detected; skipping AWS tests."
else
poetry run pytest -n auto --cov=./prowler/providers/aws --cov-report=xml:aws_coverage.xml ${{ steps.aws-services.outputs.service_paths }}
fi
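As a concrete trace of the resolver: a change that only touches prowler/providers/aws/services/ec2/ec2_service.py selects ec2 plus its direct dependents from the mapping, so the run step passes roughly a dozen service test directories to pytest instead of the whole tests/providers/aws tree. Illustrative outcome:

# changed file: prowler/providers/aws/services/ec2/ec2_service.py
# resolver output (illustrative):
#   run_all=false
#   services=dlm dms ec2 elbv2 emr inspector2 rds redshift route53 shield ssm
#   service_paths=tests/providers/aws/services/dlm ... tests/providers/aws/services/ssm
poetry run pytest -n auto \
  --cov=./prowler/providers/aws --cov-report=xml:aws_coverage.xml \
  tests/providers/aws/services/ec2 tests/providers/aws/services/rds   # list truncated for brevity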
- name: Upload AWS coverage to Codecov
if: steps.changed-aws.outputs.any_changed == 'true'
+4 -3
@@ -27,7 +27,8 @@ concurrency:
cancel-in-progress: true
jobs:
analyze:
ui-analyze:
if: github.repository == 'prowler-cloud/prowler'
name: CodeQL Security Analysis
runs-on: ubuntu-latest
timeout-minutes: 30
@@ -47,12 +48,12 @@ jobs:
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Initialize CodeQL
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
uses: github/codeql-action/init@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/ui-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
uses: github/codeql-action/analyze@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
with:
category: '/language:${{ matrix.language }}'
+117 -40
@@ -10,6 +10,12 @@ on:
release:
types:
- 'published'
workflow_dispatch:
inputs:
release_tag:
description: 'Release tag (e.g., 5.14.0)'
required: true
type: string
permissions:
contents: read
@@ -21,7 +27,7 @@ concurrency:
env:
# Tags
LATEST_TAG: latest
RELEASE_TAG: ${{ github.event.release.tag_name }}
RELEASE_TAG: ${{ github.event.release.tag_name || inputs.release_tag }}
STABLE_TAG: stable
WORKING_DIRECTORY: ./ui
@@ -44,9 +50,44 @@ jobs:
id: set-short-sha
run: echo "short-sha=${GITHUB_SHA::7}" >> $GITHUB_OUTPUT
container-build-push:
notify-release-started:
if: github.repository == 'prowler-cloud/prowler' && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
needs: setup
runs-on: ubuntu-latest
timeout-minutes: 5
outputs:
message-ts: ${{ steps.slack-notification.outputs.ts }}
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Notify container push started
id: slack-notification
uses: ./.github/actions/slack-notification
env:
SLACK_CHANNEL_ID: ${{ secrets.SLACK_PLATFORM_DEPLOYMENTS }}
COMPONENT: UI
RELEASE_TAG: ${{ env.RELEASE_TAG }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_REPOSITORY: ${{ github.repository }}
GITHUB_RUN_ID: ${{ github.run_id }}
with:
slack-bot-token: ${{ secrets.SLACK_BOT_TOKEN }}
payload-file-path: "./.github/scripts/slack-messages/container-release-started.json"
container-build-push:
needs: [setup, notify-release-started]
if: always() && needs.setup.result == 'success' && (needs.notify-release-started.result == 'success' || needs.notify-release-started.result == 'skipped')
runs-on: ${{ matrix.runner }}
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
arch: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
arch: arm64
timeout-minutes: 30
permissions:
contents: read
@@ -65,56 +106,91 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build and push UI container (latest)
if: github.event_name == 'push'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
build-args: |
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=${{ needs.setup.outputs.short-sha }}
NEXT_PUBLIC_API_BASE_URL=${{ env.NEXT_PUBLIC_API_BASE_URL }}
push: true
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.LATEST_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Notify container push started
if: github.event_name == 'release'
uses: ./.github/actions/slack-notification
env:
SLACK_CHANNEL_ID: ${{ secrets.SLACK_PLATFORM_DEPLOYMENTS }}
COMPONENT: UI
RELEASE_TAG: ${{ env.RELEASE_TAG }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_REPOSITORY: ${{ github.repository }}
GITHUB_RUN_ID: ${{ github.run_id }}
with:
slack-bot-token: ${{ secrets.SLACK_BOT_TOKEN }}
payload-file-path: "./.github/scripts/slack-messages/container-release-started.json"
- name: Build and push UI container (release)
if: github.event_name == 'release'
- name: Build and push UI container for ${{ matrix.arch }}
id: container-push
if: github.event_name == 'push' || github.event_name == 'release' || github.event_name == 'workflow_dispatch'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
build-args: |
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${{ env.RELEASE_TAG }}
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=${{ (github.event_name == 'release' || github.event_name == 'workflow_dispatch') && format('v{0}', env.RELEASE_TAG) || needs.setup.outputs.short-sha }}
NEXT_PUBLIC_API_BASE_URL=${{ env.NEXT_PUBLIC_API_BASE_URL }}
push: true
platforms: ${{ matrix.platform }}
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.RELEASE_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.STABLE_TAG }}
cache-from: type=gha
cache-to: type=gha,mode=max
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-${{ matrix.arch }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
# Create and push multi-architecture manifest
create-manifest:
needs: [setup, container-build-push]
if: github.event_name == 'push' || github.event_name == 'release' || github.event_name == 'workflow_dispatch'
runs-on: ubuntu-latest
steps:
- name: Login to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Create and push manifests for push event
if: github.event_name == 'push'
run: |
docker buildx imagetools create \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.LATEST_TAG }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64
- name: Create and push manifests for release event
if: github.event_name == 'release' || github.event_name == 'workflow_dispatch'
run: |
docker buildx imagetools create \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.RELEASE_TAG }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.STABLE_TAG }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64
- name: Install regctl
if: always()
uses: regclient/actions/regctl-installer@main
- name: Cleanup intermediate architecture tags
if: always()
run: |
echo "Cleaning up intermediate tags..."
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64" || true
echo "Cleanup completed"
notify-release-completed:
if: always() && needs.notify-release-started.result == 'success' && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
needs: [setup, notify-release-started, container-build-push, create-manifest]
runs-on: ubuntu-latest
timeout-minutes: 5
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Determine overall outcome
id: outcome
run: |
if [[ "${{ needs.container-build-push.result }}" == "success" && "${{ needs.create-manifest.result }}" == "success" ]]; then
echo "outcome=success" >> $GITHUB_OUTPUT
else
echo "outcome=failure" >> $GITHUB_OUTPUT
fi
- name: Notify container push completed
if: github.event_name == 'release' && always()
uses: ./.github/actions/slack-notification
env:
SLACK_CHANNEL_ID: ${{ secrets.SLACK_PLATFORM_DEPLOYMENTS }}
MESSAGE_TS: ${{ needs.notify-release-started.outputs.message-ts }}
COMPONENT: UI
RELEASE_TAG: ${{ env.RELEASE_TAG }}
GITHUB_SERVER_URL: ${{ github.server_url }}
@@ -123,7 +199,8 @@ jobs:
with:
slack-bot-token: ${{ secrets.SLACK_BOT_TOKEN }}
payload-file-path: "./.github/scripts/slack-messages/container-release-completed.json"
step-outcome: ${{ steps.container-push.outcome }}
step-outcome: ${{ steps.outcome.outputs.outcome }}
update-ts: ${{ needs.notify-release-started.outputs.message-ts }}
trigger-deployment:
if: github.event_name == 'push'
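The multi-arch flow above pushes one single-architecture image per matrix job, tagged `<short-sha>-amd64` or `<short-sha>-arm64`, and the `create-manifest` job then merges those into the final `latest`, `stable`, and release tags before deleting the intermediates with `regctl`. A minimal sketch for verifying the merged manifest, using placeholder repository and image names rather than the workflow's real env values:

```console
# Inspect the pushed multi-arch manifest; <repository>/<image> stands in for
# PROWLERCLOUD_DOCKERHUB_REPOSITORY/PROWLERCLOUD_DOCKERHUB_IMAGE.
docker buildx imagetools inspect <repository>/<image>:stable
# Expect one manifest entry per platform (linux/amd64 and linux/arm64).
```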
+21 -14
@@ -5,16 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'ui/**'
- '.github/workflows/ui-container-checks.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'ui/**'
- '.github/workflows/ui-container-checks.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -26,6 +20,7 @@ env:
jobs:
ui-dockerfile-lint:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 10
permissions:
@@ -49,7 +44,17 @@ jobs:
ignore: DL3018
ui-container-build-and-scan:
runs-on: ubuntu-latest
if: github.repository == 'prowler-cloud/prowler'
runs-on: ${{ matrix.runner }}
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
arch: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
arch: arm64
timeout-minutes: 30
permissions:
contents: read
@@ -64,6 +69,7 @@ jobs:
id: check-changes
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: ui/**
files_ignore: |
ui/CHANGELOG.md
ui/README.md
@@ -72,7 +78,7 @@ jobs:
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build UI container
- name: Build UI container for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
@@ -80,17 +86,18 @@ jobs:
target: prod
push: false
load: true
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
platforms: ${{ matrix.platform }}
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}-${{ matrix.arch }}
cache-from: type=gha,scope=${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
build-args: |
NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_test_51LwpXXXX
- name: Scan UI container with Trivy
if: github.repository == 'prowler-cloud/prowler' && steps.check-changes.outputs.any_changed == 'true'
- name: Scan UI container with Trivy for ${{ matrix.arch }}
if: steps.check-changes.outputs.any_changed == 'true'
uses: ./.github/actions/trivy-scan
with:
image-name: ${{ env.IMAGE_NAME }}
image-tag: ${{ github.sha }}
image-tag: ${{ github.sha }}-${{ matrix.arch }}
fail-on-critical: 'false'
severity: 'CRITICAL'
+70 -9
@@ -10,6 +10,7 @@ on:
- 'ui/**'
jobs:
e2e-tests:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
@@ -24,12 +25,59 @@ jobs:
E2E_AWS_PROVIDER_ACCESS_KEY: ${{ secrets.E2E_AWS_PROVIDER_ACCESS_KEY }}
E2E_AWS_PROVIDER_SECRET_KEY: ${{ secrets.E2E_AWS_PROVIDER_SECRET_KEY }}
E2E_AWS_PROVIDER_ROLE_ARN: ${{ secrets.E2E_AWS_PROVIDER_ROLE_ARN }}
E2E_NEW_PASSWORD: ${{ secrets.E2E_NEW_PASSWORD }}
E2E_AZURE_SUBSCRIPTION_ID: ${{ secrets.E2E_AZURE_SUBSCRIPTION_ID }}
E2E_AZURE_CLIENT_ID: ${{ secrets.E2E_AZURE_CLIENT_ID }}
E2E_AZURE_SECRET_ID: ${{ secrets.E2E_AZURE_SECRET_ID }}
E2E_AZURE_TENANT_ID: ${{ secrets.E2E_AZURE_TENANT_ID }}
E2E_M365_DOMAIN_ID: ${{ secrets.E2E_M365_DOMAIN_ID }}
E2E_M365_CLIENT_ID: ${{ secrets.E2E_M365_CLIENT_ID }}
E2E_M365_SECRET_ID: ${{ secrets.E2E_M365_SECRET_ID }}
E2E_M365_TENANT_ID: ${{ secrets.E2E_M365_TENANT_ID }}
E2E_M365_CERTIFICATE_CONTENT: ${{ secrets.E2E_M365_CERTIFICATE_CONTENT }}
E2E_KUBERNETES_CONTEXT: 'kind-kind'
E2E_KUBERNETES_KUBECONFIG_PATH: /home/runner/.kube/config
E2E_GCP_BASE64_SERVICE_ACCOUNT_KEY: ${{ secrets.E2E_GCP_BASE64_SERVICE_ACCOUNT_KEY }}
E2E_GCP_PROJECT_ID: ${{ secrets.E2E_GCP_PROJECT_ID }}
E2E_GITHUB_APP_ID: ${{ secrets.E2E_GITHUB_APP_ID }}
E2E_GITHUB_BASE64_APP_PRIVATE_KEY: ${{ secrets.E2E_GITHUB_BASE64_APP_PRIVATE_KEY }}
E2E_GITHUB_USERNAME: ${{ secrets.E2E_GITHUB_USERNAME }}
E2E_GITHUB_PERSONAL_ACCESS_TOKEN: ${{ secrets.E2E_GITHUB_PERSONAL_ACCESS_TOKEN }}
E2E_GITHUB_ORGANIZATION: ${{ secrets.E2E_GITHUB_ORGANIZATION }}
E2E_GITHUB_ORGANIZATION_ACCESS_TOKEN: ${{ secrets.E2E_GITHUB_ORGANIZATION_ACCESS_TOKEN }}
E2E_ORGANIZATION_ID: ${{ secrets.E2E_ORGANIZATION_ID }}
E2E_OCI_TENANCY_ID: ${{ secrets.E2E_OCI_TENANCY_ID }}
E2E_OCI_USER_ID: ${{ secrets.E2E_OCI_USER_ID }}
E2E_OCI_FINGERPRINT: ${{ secrets.E2E_OCI_FINGERPRINT }}
E2E_OCI_KEY_CONTENT: ${{ secrets.E2E_OCI_KEY_CONTENT }}
E2E_OCI_REGION: ${{ secrets.E2E_OCI_REGION }}
E2E_NEW_USER_PASSWORD: ${{ secrets.E2E_NEW_USER_PASSWORD }}
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Create k8s Kind Cluster
uses: helm/kind-action@v1
with:
cluster_name: kind
- name: Modify kubeconfig
run: |
# Point the kubeconfig at the kind cluster API server at https://kind-control-plane:6443
# so the worker service defined in docker-compose.yml can reach it
kubectl config set-cluster kind-kind --server=https://kind-control-plane:6443
kubectl config view
- name: Add network kind to docker compose
run: |
# Add the network kind to the docker compose to interconnect to kind cluster
yq -i '.networks.kind.external = true' docker-compose.yml
# Add network kind to worker service and default network too
yq -i '.services.worker.networks = ["kind","default"]' docker-compose.yml
- name: Fix API data directory permissions
run: docker run --rm -v $(pwd)/_data/api:/data alpine chown -R 1000:1000 /data
- name: Add AWS credentials for testing AWS SDK Default Adding Provider
run: |
echo "Adding AWS credentials for testing AWS SDK Default Adding Provider..."
echo "AWS_ACCESS_KEY_ID=${{ secrets.E2E_AWS_PROVIDER_ACCESS_KEY }}" >> .env
echo "AWS_SECRET_ACCESS_KEY=${{ secrets.E2E_AWS_PROVIDER_SECRET_KEY }}" >> .env
- name: Start API services
run: |
# Override docker-compose image tag to use latest instead of stable
@@ -66,32 +114,45 @@ jobs:
echo "All database fixtures loaded successfully!"
'
- name: Setup Node.js environment
uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
with:
node-version: '20.x'
cache: 'npm'
cache-dependency-path: './ui/package-lock.json'
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 10
run_install: false
- name: Get pnpm store directory
shell: bash
run: echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
- name: Setup pnpm cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: ${{ env.STORE_PATH }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('ui/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
- name: Install UI dependencies
working-directory: ./ui
run: npm ci
run: pnpm install --frozen-lockfile
- name: Build UI application
working-directory: ./ui
run: npm run build
run: pnpm run build
- name: Cache Playwright browsers
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
id: playwright-cache
with:
path: ~/.cache/ms-playwright
key: ${{ runner.os }}-playwright-${{ hashFiles('ui/package-lock.json') }}
key: ${{ runner.os }}-playwright-${{ hashFiles('ui/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-playwright-
- name: Install Playwright browsers
working-directory: ./ui
if: steps.playwright-cache.outputs.cache-hit != 'true'
run: npm run test:e2e:install
run: pnpm run test:e2e:install
- name: Run E2E tests
working-directory: ./ui
run: npm run test:e2e
run: pnpm run test:e2e
- name: Upload test reports
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: failure()
+27 -11
@@ -5,16 +5,10 @@ on:
branches:
- 'master'
- 'v5.*'
paths:
- 'ui/**'
- '.github/workflows/ui-tests.yml'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'ui/**'
- '.github/workflows/ui-tests.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -42,6 +36,9 @@ jobs:
id: check-changes
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
ui/**
.github/workflows/ui-tests.yml
files_ignore: |
ui/CHANGELOG.md
ui/README.md
@@ -51,17 +48,36 @@ jobs:
uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'npm'
cache-dependency-path: './ui/package-lock.json'
- name: Setup pnpm
if: steps.check-changes.outputs.any_changed == 'true'
uses: pnpm/action-setup@v4
with:
version: 10
run_install: false
- name: Get pnpm store directory
if: steps.check-changes.outputs.any_changed == 'true'
shell: bash
run: echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
- name: Setup pnpm cache
if: steps.check-changes.outputs.any_changed == 'true'
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: ${{ env.STORE_PATH }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('ui/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
- name: Install dependencies
if: steps.check-changes.outputs.any_changed == 'true'
run: npm ci
run: pnpm install --frozen-lockfile
- name: Run healthcheck
if: steps.check-changes.outputs.any_changed == 'true'
run: npm run healthcheck
run: pnpm run healthcheck
- name: Build application
if: steps.check-changes.outputs.any_changed == 'true'
run: npm run build
run: pnpm run build
+69 -9
@@ -45,21 +45,86 @@ pytest_*.xml
.coverage
htmlcov/
# VSCode files
# VSCode files and settings
.vscode/
*.code-workspace
.vscode-test/
# Cursor files
# VSCode extension settings and workspaces
.history/
.ionide/
# MCP Server Settings (various locations)
**/cline_mcp_settings.json
**/mcp_settings.json
**/mcp-config.json
**/mcpServers.json
.mcp/
# AI Coding Assistants - Cursor
.cursorignore
.cursor/
.cursorrules
# RooCode files
# AI Coding Assistants - RooCode
.roo/
.rooignore
.roomodes
# Cline files
# AI Coding Assistants - Cline (formerly Claude Dev)
.cline/
.clineignore
.clinerules
# AI Coding Assistants - Continue
.continue/
continue.json
.continuerc
.continuerc.json
# AI Coding Assistants - GitHub Copilot
.copilot/
.github/copilot/
# AI Coding Assistants - Amazon Q Developer (formerly CodeWhisperer)
.aws/
.codewhisperer/
.amazonq/
.aws-toolkit/
# AI Coding Assistants - Tabnine
.tabnine/
tabnine_config.json
# AI Coding Assistants - Kiro
.kiro/
.kiroignore
kiro.config.json
# AI Coding Assistants - Aider
.aider/
.aider.chat.history.md
.aider.input.history
.aider.tags.cache.v3/
# AI Coding Assistants - Windsurf
.windsurf/
.windsurfignore
# AI Coding Assistants - Replit Agent
.replit
.replitignore
# AI Coding Assistants - Supermaven
.supermaven/
# AI Coding Assistants - Sourcegraph Cody
.cody/
# AI Coding Assistants - General
.ai/
.aiconfig
ai-config.json
# Terraform
.terraform*
@@ -70,7 +135,6 @@ htmlcov/
ui/.env*
api/.env*
mcp_server/.env*
.env.local
# Coverage
.coverage*
@@ -86,9 +150,5 @@ _data/
# Claude
CLAUDE.md
# MCP Server
mcp_server/prowler_mcp_server/prowler_app/server.py
mcp_server/prowler_mcp_server/prowler_app/utils/schema.yaml
# Compliance report
*.pdf
+9
@@ -126,3 +126,12 @@ repos:
entry: bash -c 'vulture --exclude "contrib,.venv,api/src/backend/api/tests/,api/src/backend/conftest.py,api/src/backend/tasks/tests/" --min-confidence 100 .'
language: system
files: '.*\.py'
- id: ui-checks
name: UI - Husky Pre-commit
description: "Run UI pre-commit checks (Claude Code validation + healthcheck)"
entry: bash -c 'cd ui && .husky/pre-commit'
language: system
files: '^ui/.*\.(ts|tsx|js|jsx|json|css)$'
pass_filenames: false
verbose: true
+23
@@ -4,10 +4,15 @@ LABEL maintainer="https://github.com/prowler-cloud/prowler"
LABEL org.opencontainers.image.source="https://github.com/prowler-cloud/prowler"
ARG POWERSHELL_VERSION=7.5.0
ENV POWERSHELL_VERSION=${POWERSHELL_VERSION}
ARG TRIVY_VERSION=0.66.0
ENV TRIVY_VERSION=${TRIVY_VERSION}
# hadolint ignore=DL3008
RUN apt-get update && apt-get install -y --no-install-recommends \
wget libicu72 libunwind8 libssl3 libcurl4 ca-certificates apt-transport-https gnupg \
build-essential pkg-config libzstd-dev zlib1g-dev \
&& rm -rf /var/lib/apt/lists/*
# Install PowerShell
@@ -25,6 +30,24 @@ RUN ARCH=$(uname -m) && \
ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh && \
rm /tmp/powershell.tar.gz
# Install Trivy for IaC scanning
RUN ARCH=$(uname -m) && \
if [ "$ARCH" = "x86_64" ]; then \
TRIVY_ARCH="Linux-64bit" ; \
elif [ "$ARCH" = "aarch64" ]; then \
TRIVY_ARCH="Linux-ARM64" ; \
else \
echo "Unsupported architecture for Trivy: $ARCH" && exit 1 ; \
fi && \
wget --progress=dot:giga "https://github.com/aquasecurity/trivy/releases/download/v${TRIVY_VERSION}/trivy_${TRIVY_VERSION}_${TRIVY_ARCH}.tar.gz" -O /tmp/trivy.tar.gz && \
tar zxf /tmp/trivy.tar.gz -C /tmp && \
mv /tmp/trivy /usr/local/bin/trivy && \
chmod +x /usr/local/bin/trivy && \
rm /tmp/trivy.tar.gz && \
# Create trivy cache directory with proper permissions
mkdir -p /tmp/.cache/trivy && \
chmod 777 /tmp/.cache/trivy
# Add prowler user
RUN addgroup --gid 1000 prowler && \
adduser --uid 1000 --gid 1000 --disabled-password --gecos "" prowler
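Both this Dockerfile and the API one below add the same architecture-aware Trivy install for the IaC scanner. A quick sanity check after building the image locally, assuming a hypothetical `prowler:local` tag:

```console
# Build the image and confirm the pinned Trivy version is on PATH;
# overriding the entrypoint runs trivy directly regardless of the image's default entrypoint.
docker build -t prowler:local .
docker run --rm --entrypoint trivy prowler:local --version
```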
+35 -29
@@ -6,7 +6,7 @@
<b><i>Prowler</b> is the Open Cloud Security platform trusted by thousands to automate security and compliance in any cloud environment. With hundreds of ready-to-use checks and compliance frameworks, Prowler delivers real-time, customizable monitoring and seamless integrations, making cloud security simple, scalable, and cost-effective for organizations of any size.
</p>
<p align="center">
<b>Learn more at <a href="https://prowler.com">prowler.com</i></b>
<b>Secure ANY cloud at AI Speed at <a href="https://prowler.com">prowler.com</i></b>
</p>
<p align="center">
@@ -23,6 +23,7 @@
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/toniblyx/prowler"></a>
<a href="https://gallery.ecr.aws/prowler-cloud/prowler"><img width="120" height=19" alt="AWS ECR Gallery" src="https://user-images.githubusercontent.com/3985464/151531396-b6535a68-c907-44eb-95a1-a09508178616.png"></a>
<a href="https://codecov.io/gh/prowler-cloud/prowler"><img src="https://codecov.io/gh/prowler-cloud/prowler/graph/badge.svg?token=OflBGsdpDl"/></a>
<a href="https://insights.linuxfoundation.org/project/prowler-cloud-prowler"><img src="https://insights.linuxfoundation.org/api/badge/health-score?project=prowler-cloud-prowler"/></a>
</p>
<p align="center">
<a href="https://github.com/prowler-cloud/prowler/releases"><img alt="Version" src="https://img.shields.io/github/v/release/prowler-cloud/prowler"></a>
@@ -35,28 +36,32 @@
</p>
<hr>
<p align="center">
<img align="center" src="/docs/img/prowler-cli-quick.gif" width="100%" height="100%">
<img align="center" src="/docs/img/prowler-cloud.gif" width="100%" height="100%">
</p>
# Description
**Prowler** is an open-source security tool designed to assess and enforce security best practices across AWS, Azure, Google Cloud, and Kubernetes. It supports tasks such as security audits, incident response, continuous monitoring, system hardening, forensic readiness, and remediation processes.
**Prowler** is the world's most widely used _open-source cloud security platform_ that automates security and compliance across **any cloud environment**. With hundreds of ready-to-use security checks, remediation guidance, and compliance frameworks, Prowler is built to _“Secure ANY cloud at AI Speed”_. Prowler delivers **AI-driven**, **customizable**, and **easy-to-use** assessments, dashboards, reports, and integrations, making cloud security **simple**, **scalable**, and **cost-effective** for organizations of any size.
Prowler includes hundreds of built-in controls to ensure compliance with standards and frameworks, including:
- **Industry Standards:** CIS, NIST 800, NIST CSF, and CISA
- **Regulatory Compliance and Governance:** RBI, FedRAMP, and PCI-DSS
- **Prowler ThreatScore:** Weighted risk prioritization scoring that helps you focus on the most critical security findings first
- **Industry Standards:** CIS, NIST 800, NIST CSF, CISA, and MITRE ATT&CK
- **Regulatory Compliance and Governance:** RBI, FedRAMP, PCI-DSS, and NIS2
- **Frameworks for Sensitive Data and Privacy:** GDPR, HIPAA, and FFIEC
- **Frameworks for Organizational Governance and Quality Control:** SOC2 and GXP
- **AWS-Specific Frameworks:** AWS Foundational Technical Review (FTR) and AWS Well-Architected Framework (Security Pillar)
- **National Security Standards:** ENS (Spanish National Security Scheme)
- **Frameworks for Organizational Governance and Quality Control:** SOC2, GXP, and ISO 27001
- **Cloud-Specific Frameworks:** AWS Foundational Technical Review (FTR), AWS Well-Architected Framework, and BSI C5
- **National Security Standards:** ENS (Spanish National Security Scheme) and KISA ISMS-P (Korean)
- **Custom Security Frameworks:** Tailored to your needs
## Prowler App
## Prowler App / Prowler Cloud
Prowler App is a web-based application that simplifies running Prowler across your cloud provider accounts. It provides a user-friendly interface to visualize the results and streamline your security assessments.
Prowler App / [Prowler Cloud](https://cloud.prowler.com/) is a web-based application that simplifies running Prowler across your cloud provider accounts. It provides a user-friendly interface to visualize the results and streamline your security assessments.
![Prowler App](docs/images/products/overview.png)
![Risk Pipeline](docs/images/products/risk-pipeline.png)
![Threat Map](docs/images/products/threat-map.png)
![Prowler App](docs/products/img/overview.png)
>For more details, refer to the [Prowler App Documentation](https://docs.prowler.com/projects/prowler-open-source/en/latest/#prowler-app-installation)
@@ -73,26 +78,27 @@ prowler <provider>
```console
prowler dashboard
```
![Prowler Dashboard](docs/products/img/dashboard.png)
![Prowler Dashboard](docs/images/products/dashboard.png)
# Prowler at a Glance
> [!Tip]
> For the most accurate and up-to-date information about checks, services, frameworks, and categories, visit [**Prowler Hub**](https://hub.prowler.com).
| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) | Support | Stage | Interface |
|---|---|---|---|---|---|---|---|
| AWS | 576 | 82 | 38 | 10 | Official | Stable | UI, API, CLI |
| GCP | 79 | 13 | 11 | 3 | Official | Stable | UI, API, CLI |
| Azure | 162 | 19 | 12 | 4 | Official | Stable | UI, API, CLI |
| Kubernetes | 83 | 7 | 5 | 7 | Official | Stable | UI, API, CLI |
| GitHub | 17 | 2 | 1 | 0 | Official | Stable | UI, API, CLI |
| M365 | 70 | 7 | 3 | 2 | Official | Stable | UI, API, CLI |
| OCI | 51 | 13 | 1 | 10 | Official | Stable | UI, API, CLI |
| IaC | [See `trivy` docs.](https://trivy.dev/latest/docs/coverage/iac/) | N/A | N/A | N/A | Official | Beta | CLI |
| MongoDB Atlas | 10 | 3 | 0 | 0 | Official | Beta | CLI |
| LLM | [See `promptfoo` docs.](https://www.promptfoo.dev/docs/red-team/plugins/) | N/A | N/A | N/A | Official | Beta | CLI |
| NHN | 6 | 2 | 1 | 0 | Unofficial | Beta | CLI |
| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) | Support | Interface |
|---|---|---|---|---|---|---|
| AWS | 584 | 85 | 40 | 17 | Official | UI, API, CLI |
| GCP | 89 | 17 | 14 | 5 | Official | UI, API, CLI |
| Azure | 169 | 22 | 15 | 8 | Official | UI, API, CLI |
| Kubernetes | 84 | 7 | 6 | 9 | Official | UI, API, CLI |
| GitHub | 20 | 2 | 1 | 2 | Official | UI, API, CLI |
| M365 | 70 | 7 | 3 | 2 | Official | UI, API, CLI |
| OCI | 52 | 15 | 1 | 12 | Official | UI, API, CLI |
| Alibaba Cloud | 63 | 10 | 1 | 9 | Official | CLI |
| IaC | [See `trivy` docs.](https://trivy.dev/latest/docs/coverage/iac/) | N/A | N/A | N/A | Official | UI, API, CLI |
| MongoDB Atlas | 10 | 4 | 0 | 3 | Official | UI, API, CLI |
| LLM | [See `promptfoo` docs.](https://www.promptfoo.dev/docs/red-team/plugins/) | N/A | N/A | N/A | Official | CLI |
| NHN | 6 | 2 | 1 | 0 | Unofficial | CLI |
> [!Note]
> The numbers in the table are updated periodically.
@@ -153,7 +159,7 @@ You can find more information in the [Troubleshooting](./docs/troubleshooting.md
* `git` installed.
* `poetry` v2 installed: [poetry installation](https://python-poetry.org/docs/#installation).
* `npm` installed: [npm installation](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).
* `pnpm` installed: [pnpm installation](https://pnpm.io/installation).
* `Docker Compose` installed: https://docs.docker.com/compose/install/.
**Commands to run the API**
@@ -209,9 +215,9 @@ python -m celery -A config.celery beat -l info --scheduler django_celery_beat.sc
``` console
git clone https://github.com/prowler-cloud/prowler
cd prowler/ui
npm install
npm run build
npm start
pnpm install
pnpm run build
pnpm start
```
> Once configured, access the Prowler App at http://localhost:3000. Sign up using your email and password to get started.
+92 -8
@@ -2,23 +2,107 @@
All notable changes to the **Prowler API** are documented in this file.
## [1.15.0] (Prowler UNRELEASED)
## [1.17.0] (Prowler UNRELEASED)
### Added
- New endpoint to retrieve an overview of the categories based on finding severities [(#9529)](https://github.com/prowler-cloud/prowler/pull/9529)
- Endpoints `GET /findings` and `GET /findings/latests` can now use the category filter [(#9529)](https://github.com/prowler-cloud/prowler/pull/9529)
### Changed
- Endpoint `GET /overviews/attack-surfaces` no longer returns the related check IDs [(#9529)](https://github.com/prowler-cloud/prowler/pull/9529)
- OpenAI provider to only load chat-compatible models with tool calling support [(#9523)](https://github.com/prowler-cloud/prowler/pull/9523)
- Increased execution delay for the first scheduled scan tasks to 5 seconds [(#9558)](https://github.com/prowler-cloud/prowler/pull/9558)
### Fixed
- Make `scan_id` a required filter in the compliance overview endpoint [(#9560)](https://github.com/prowler-cloud/prowler/pull/9560)
---
## [1.16.1] (Prowler v5.15.1)
### Fixed
- Race condition in scheduled scan creation by adding countdown to task [(#9516)](https://github.com/prowler-cloud/prowler/pull/9516)
## [1.16.0] (Prowler v5.15.0)
### Added
- New endpoint to retrieve an overview of the attack surfaces [(#9309)](https://github.com/prowler-cloud/prowler/pull/9309)
- New endpoint `GET /api/v1/overviews/findings_severity/timeseries` to retrieve daily aggregated findings by severity level [(#9363)](https://github.com/prowler-cloud/prowler/pull/9363)
- Lighthouse AI support for Amazon Bedrock API key [(#9343)](https://github.com/prowler-cloud/prowler/pull/9343)
- Exception handler for provider deletions during scans [(#9414)](https://github.com/prowler-cloud/prowler/pull/9414)
- Support to use admin credentials through the read replica database [(#9440)](https://github.com/prowler-cloud/prowler/pull/9440)
### Changed
- Error messages from Lighthouse celery tasks [(#9165)](https://github.com/prowler-cloud/prowler/pull/9165)
- Restore the compliance overview endpoint's mandatory filters [(#9338)](https://github.com/prowler-cloud/prowler/pull/9338)
---
## [1.15.2] (Prowler v5.14.2)
### Fixed
- Unique constraint violation during compliance overviews task [(#9436)](https://github.com/prowler-cloud/prowler/pull/9436)
- Division by zero error in ENS PDF report when all requirements are manual [(#9443)](https://github.com/prowler-cloud/prowler/pull/9443)
---
## [1.15.1] (Prowler v5.14.1)
### Fixed
- Fix typo in PDF reporting [(#9345)](https://github.com/prowler-cloud/prowler/pull/9345)
- Fix IaC provider initialization failure when mutelist processor is configured [(#9331)](https://github.com/prowler-cloud/prowler/pull/9331)
- Match logic for ThreatScore when counting findings [(#9348)](https://github.com/prowler-cloud/prowler/pull/9348)
---
## [1.15.0] (Prowler v5.14.0)
### Added
- IaC (Infrastructure as Code) provider support for remote repositories [(#8751)](https://github.com/prowler-cloud/prowler/pull/8751)
- Extend `GET /api/v1/providers` with provider-type filters and optional pagination disable to support the new Overview filters [(#8975)](https://github.com/prowler-cloud/prowler/pull/8975)
- New endpoint to retrieve the number of providers grouped by provider type [(#8975)](https://github.com/prowler-cloud/prowler/pull/8975)
- Support for configuring multiple LLM providers [(#8772)](https://github.com/prowler-cloud/prowler/pull/8772)
- Support C5 compliance framework for Azure provider [(#9081)](https://github.com/prowler-cloud/prowler/pull/9081)
- Support for Oracle Cloud Infrastructure (OCI) provider [(#8927)](https://github.com/prowler-cloud/prowler/pull/8927)
- Support muting findings based on simple rules with custom reason [(#9051)](https://github.com/prowler-cloud/prowler/pull/9051)
- Support C5 compliance framework for the GCP provider [(#9097)](https://github.com/prowler-cloud/prowler/pull/9097)
- Support for Amazon Bedrock and OpenAI compatible providers in Lighthouse AI [(#8957)](https://github.com/prowler-cloud/prowler/pull/8957)
- Support PDF reporting for ENS compliance framework [(#9158)](https://github.com/prowler-cloud/prowler/pull/9158)
- Support PDF reporting for NIS2 compliance framework [(#9170)](https://github.com/prowler-cloud/prowler/pull/9170)
- Tenant-wide ThreatScore overview aggregation and snapshot persistence with backfill support [(#9148)](https://github.com/prowler-cloud/prowler/pull/9148)
- Added `metadata`, `details`, and `partition` attributes to `/resources` endpoint & `details`, and `partition` to `/findings` endpoint [(#9098)](https://github.com/prowler-cloud/prowler/pull/9098)
- Support for MongoDB Atlas provider [(#9167)](https://github.com/prowler-cloud/prowler/pull/9167)
- Support Prowler ThreatScore for the K8S provider [(#9235)](https://github.com/prowler-cloud/prowler/pull/9235)
- Enhanced compliance overview endpoint with provider filtering and latest scan aggregation [(#9244)](https://github.com/prowler-cloud/prowler/pull/9244)
- New endpoint `GET /api/v1/overview/regions` to retrieve aggregated findings data by region [(#9273)](https://github.com/prowler-cloud/prowler/pull/9273)
## [1.14.1] (Prowler 5.13.1)
### Changed
- Optimized database write queries for scan related tasks [(#9190)](https://github.com/prowler-cloud/prowler/pull/9190)
- Date filters are now optional for `GET /api/v1/overviews/services` endpoint; returns latest scan data by default [(#9248)](https://github.com/prowler-cloud/prowler/pull/9248)
### Fixed
- Scans no longer fail when findings have UIDs exceeding 300 characters; such findings are now skipped with detailed logging [(#9246)](https://github.com/prowler-cloud/prowler/pull/9246)
- Updated unique constraint for `Provider` model to exclude soft-deleted entries, resolving duplicate errors when re-deleting providers [(#9054)](https://github.com/prowler-cloud/prowler/pull/9054)
- Removed compliance generation for providers without compliance frameworks [(#9208)](https://github.com/prowler-cloud/prowler/pull/9208)
- Refresh output report timestamps for each scan [(#9272)](https://github.com/prowler-cloud/prowler/pull/9272)
- Severity overview endpoint now ignores muted findings as expected [(#9283)](https://github.com/prowler-cloud/prowler/pull/9283)
- Fixed discrepancy between ThreatScore PDF report values and database calculations [(#9296)](https://github.com/prowler-cloud/prowler/pull/9296)
### Security
- Django updated to the latest 5.1 security release, 5.1.14, due to problems with potential [SQL injection](https://github.com/prowler-cloud/prowler/security/dependabot/113) and [denial-of-service vulnerability](https://github.com/prowler-cloud/prowler/security/dependabot/114) [(#9176)](https://github.com/prowler-cloud/prowler/pull/9176)
---
## [1.14.1] (Prowler v5.13.1)
### Fixed
- `/api/v1/overviews/providers` collapses data by provider type so the UI receives a single aggregated record per cloud family even when multiple accounts exist [(#9053)](https://github.com/prowler-cloud/prowler/pull/9053)
- Added retry logic to database transactions to handle Aurora read replica connection failures during scale-down events [(#9064)](https://github.com/prowler-cloud/prowler/pull/9064)
- Security Hub integrations stop failing when they read relationships via the replica by allowing replica relations and saving updates through the primary [(#9080)](https://github.com/prowler-cloud/prowler/pull/9080)
## [1.14.0] (Prowler 5.13.0)
---
## [1.14.0] (Prowler v5.13.0)
### Added
- Default JWT keys are generated and stored if they are missing from configuration [(#8655)](https://github.com/prowler-cloud/prowler/pull/8655)
@@ -42,14 +126,14 @@ All notable changes to the **Prowler API** are documented in this file.
---
## [1.13.2] (Prowler 5.12.3)
## [1.13.2] (Prowler v5.12.3)
### Fixed
- 500 error when deleting user [(#8731)](https://github.com/prowler-cloud/prowler/pull/8731)
---
## [1.13.1] (Prowler 5.12.2)
## [1.13.1] (Prowler v5.12.2)
### Changed
- Renamed compliance overview task queue to `compliance` [(#8755)](https://github.com/prowler-cloud/prowler/pull/8755)
@@ -59,7 +143,7 @@ All notable changes to the **Prowler API** are documented in this file.
---
## [1.13.0] (Prowler 5.12.0)
## [1.13.0] (Prowler v5.12.0)
### Added
- Integration with JIRA, enabling sending findings to a JIRA project [(#8622)](https://github.com/prowler-cloud/prowler/pull/8622), [(#8637)](https://github.com/prowler-cloud/prowler/pull/8637)
@@ -68,7 +152,7 @@ All notable changes to the **Prowler API** are documented in this file.
---
## [1.12.0] (Prowler 5.11.0)
## [1.12.0] (Prowler v5.11.0)
### Added
- Lighthouse support for OpenAI GPT-5 [(#8527)](https://github.com/prowler-cloud/prowler/pull/8527)
@@ -80,7 +164,7 @@ All notable changes to the **Prowler API** are documented in this file.
---
## [1.11.0] (Prowler 5.10.0)
## [1.11.0] (Prowler v5.10.0)
### Added
- Github provider support [(#8271)](https://github.com/prowler-cloud/prowler/pull/8271)
+21
@@ -5,6 +5,9 @@ LABEL maintainer="https://github.com/prowler-cloud/api"
ARG POWERSHELL_VERSION=7.5.0
ENV POWERSHELL_VERSION=${POWERSHELL_VERSION}
ARG TRIVY_VERSION=0.66.0
ENV TRIVY_VERSION=${TRIVY_VERSION}
# hadolint ignore=DL3008
RUN apt-get update && apt-get install -y --no-install-recommends \
wget \
@@ -36,6 +39,24 @@ RUN ARCH=$(uname -m) && \
ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh && \
rm /tmp/powershell.tar.gz
# Install Trivy for IaC scanning
RUN ARCH=$(uname -m) && \
if [ "$ARCH" = "x86_64" ]; then \
TRIVY_ARCH="Linux-64bit" ; \
elif [ "$ARCH" = "aarch64" ]; then \
TRIVY_ARCH="Linux-ARM64" ; \
else \
echo "Unsupported architecture for Trivy: $ARCH" && exit 1 ; \
fi && \
wget --progress=dot:giga "https://github.com/aquasecurity/trivy/releases/download/v${TRIVY_VERSION}/trivy_${TRIVY_VERSION}_${TRIVY_ARCH}.tar.gz" -O /tmp/trivy.tar.gz && \
tar zxf /tmp/trivy.tar.gz -C /tmp && \
mv /tmp/trivy /usr/local/bin/trivy && \
chmod +x /usr/local/bin/trivy && \
rm /tmp/trivy.tar.gz && \
# Create trivy cache directory with proper permissions
mkdir -p /tmp/.cache/trivy && \
chmod 777 /tmp/.cache/trivy
# Add prowler user
RUN addgroup --gid 1000 prowler && \
adduser --uid 1000 --gid 1000 --disabled-password --gecos "" prowler
+290 -7
@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.2.0 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.1.1 and should not be changed by hand.
[[package]]
name = "about-time"
@@ -610,6 +610,24 @@ azure-mgmt-core = ">=1.3.2"
isodate = ">=0.6.1"
typing-extensions = ">=4.6.0"
[[package]]
name = "azure-mgmt-postgresqlflexibleservers"
version = "1.1.0"
description = "Microsoft Azure Postgresqlflexibleservers Management Client Library for Python"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "azure_mgmt_postgresqlflexibleservers-1.1.0-py3-none-any.whl", hash = "sha256:87ddb5a5e6d12c45769485d234cfe0322140e3a0a7636d0e61fb00ac544b5d20"},
{file = "azure_mgmt_postgresqlflexibleservers-1.1.0.tar.gz", hash = "sha256:9ede9d8ba63e9d2879cb74adc903c649af3bc5460a02787287b0cd18d754af14"},
]
[package.dependencies]
azure-common = ">=1.1"
azure-mgmt-core = ">=1.3.2"
isodate = ">=0.6.1"
typing-extensions = ">=4.6.0"
[[package]]
name = "azure-mgmt-rdbms"
version = "10.1.0"
@@ -1164,6 +1182,18 @@ files = [
{file = "charset_normalizer-3.4.3.tar.gz", hash = "sha256:6fce4b8500244f6fcb71465d4a4930d132ba9ab8e71a7859e6a5d59851068d14"},
]
[[package]]
name = "circuitbreaker"
version = "2.1.3"
description = "Python Circuit Breaker pattern implementation"
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "circuitbreaker-2.1.3-py3-none-any.whl", hash = "sha256:87ba6a3ed03fdc7032bc175561c2b04d52ade9d5faf94ca2b035fbdc5e6b1dd1"},
{file = "circuitbreaker-2.1.3.tar.gz", hash = "sha256:1a4baee510f7bea3c91b194dcce7c07805fe96c4423ed5594b75af438531d084"},
]
[[package]]
name = "click"
version = "8.2.1"
@@ -1671,14 +1701,14 @@ with-social = ["django-allauth[socialaccount] (>=64.0.0)"]
[[package]]
name = "django"
version = "5.1.13"
version = "5.1.14"
description = "A high-level Python web framework that encourages rapid development and clean, pragmatic design."
optional = false
python-versions = ">=3.10"
groups = ["main", "dev"]
files = [
{file = "django-5.1.13-py3-none-any.whl", hash = "sha256:06f257f79dc4c17f3f9e23b106a4c5ed1335abecbe731e83c598c941d14fbeed"},
{file = "django-5.1.13.tar.gz", hash = "sha256:543ff21679f15e80edfc01fe7ea35f8291b6d4ea589433882913626a7c1cf929"},
{file = "django-5.1.14-py3-none-any.whl", hash = "sha256:2a4b9c20404fd1bf50aaaa5542a19d860594cba1354f688f642feb271b91df27"},
{file = "django-5.1.14.tar.gz", hash = "sha256:b98409fb31fdd6e8c3a6ba2eef3415cc5c0020057b43b21ba7af6eff5f014831"},
]
[package.dependencies]
@@ -2438,6 +2468,72 @@ files = [
{file = "frozenlist-1.7.0.tar.gz", hash = "sha256:2e310d81923c2437ea8670467121cc3e9b0f76d3043cc1d2331d56c7fb7a3a8f"},
]
[[package]]
name = "gevent"
version = "25.9.1"
description = "Coroutine-based network library"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "gevent-25.9.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:856b990be5590e44c3a3dc6c8d48a40eaccbb42e99d2b791d11d1e7711a4297e"},
{file = "gevent-25.9.1-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:fe1599d0b30e6093eb3213551751b24feeb43db79f07e89d98dd2f3330c9063e"},
{file = "gevent-25.9.1-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:f0d8b64057b4bf1529b9ef9bd2259495747fba93d1f836c77bfeaacfec373fd0"},
{file = "gevent-25.9.1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:b56cbc820e3136ba52cd690bdf77e47a4c239964d5f80dc657c1068e0fe9521c"},
{file = "gevent-25.9.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:c5fa9ce5122c085983e33e0dc058f81f5264cebe746de5c401654ab96dddfca8"},
{file = "gevent-25.9.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:03c74fec58eda4b4edc043311fca8ba4f8744ad1632eb0a41d5ec25413581975"},
{file = "gevent-25.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:a8ae9f895e8651d10b0a8328a61c9c53da11ea51b666388aa99b0ce90f9fdc27"},
{file = "gevent-25.9.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:18e5aff9e8342dc954adb9c9c524db56c2f3557999463445ba3d9cbe3dada7b7"},
{file = "gevent-25.9.1-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:1cdf6db28f050ee103441caa8b0448ace545364f775059d5e2de089da975c457"},
{file = "gevent-25.9.1-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:812debe235a8295be3b2a63b136c2474241fa5c58af55e6a0f8cfc29d4936235"},
{file = "gevent-25.9.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:b28b61ff9216a3d73fe8f35669eefcafa957f143ac534faf77e8a19eb9e6883a"},
{file = "gevent-25.9.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:5e4b6278b37373306fc6b1e5f0f1cf56339a1377f67c35972775143d8d7776ff"},
{file = "gevent-25.9.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:d99f0cb2ce43c2e8305bf75bee61a8bde06619d21b9d0316ea190fc7a0620a56"},
{file = "gevent-25.9.1-cp311-cp311-win_amd64.whl", hash = "sha256:72152517ecf548e2f838c61b4be76637d99279dbaa7e01b3924df040aa996586"},
{file = "gevent-25.9.1-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:46b188248c84ffdec18a686fcac5dbb32365d76912e14fda350db5dc0bfd4f86"},
{file = "gevent-25.9.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:f2b54ea3ca6f0c763281cd3f96010ac7e98c2e267feb1221b5a26e2ca0b9a692"},
{file = "gevent-25.9.1-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:7a834804ac00ed8a92a69d3826342c677be651b1c3cd66cc35df8bc711057aa2"},
{file = "gevent-25.9.1-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:323a27192ec4da6b22a9e51c3d9d896ff20bc53fdc9e45e56eaab76d1c39dd74"},
{file = "gevent-25.9.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:6ea78b39a2c51d47ff0f130f4c755a9a4bbb2dd9721149420ad4712743911a51"},
{file = "gevent-25.9.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:dc45cd3e1cc07514a419960af932a62eb8515552ed004e56755e4bf20bad30c5"},
{file = "gevent-25.9.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:34e01e50c71eaf67e92c186ee0196a039d6e4f4b35670396baed4a2d8f1b347f"},
{file = "gevent-25.9.1-cp312-cp312-win_amd64.whl", hash = "sha256:4acd6bcd5feabf22c7c5174bd3b9535ee9f088d2bbce789f740ad8d6554b18f3"},
{file = "gevent-25.9.1-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:4f84591d13845ee31c13f44bdf6bd6c3dbf385b5af98b2f25ec328213775f2ed"},
{file = "gevent-25.9.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:9cdbb24c276a2d0110ad5c978e49daf620b153719ac8a548ce1250a7eb1b9245"},
{file = "gevent-25.9.1-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:88b6c07169468af631dcf0fdd3658f9246d6822cc51461d43f7c44f28b0abb82"},
{file = "gevent-25.9.1-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b7bb0e29a7b3e6ca9bed2394aa820244069982c36dc30b70eb1004dd67851a48"},
{file = "gevent-25.9.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:2951bb070c0ee37b632ac9134e4fdaad70d2e660c931bb792983a0837fe5b7d7"},
{file = "gevent-25.9.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:e4e17c2d57e9a42e25f2a73d297b22b60b2470a74be5a515b36c984e1a246d47"},
{file = "gevent-25.9.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:8d94936f8f8b23d9de2251798fcb603b84f083fdf0d7f427183c1828fb64f117"},
{file = "gevent-25.9.1-cp313-cp313-win_amd64.whl", hash = "sha256:eb51c5f9537b07da673258b4832f6635014fee31690c3f0944d34741b69f92fa"},
{file = "gevent-25.9.1-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:1a3fe4ea1c312dbf6b375b416925036fe79a40054e6bf6248ee46526ea628be1"},
{file = "gevent-25.9.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0adb937f13e5fb90cca2edf66d8d7e99d62a299687400ce2edee3f3504009356"},
{file = "gevent-25.9.1-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:427f869a2050a4202d93cf7fd6ab5cffb06d3e9113c10c967b6e2a0d45237cb8"},
{file = "gevent-25.9.1-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:c049880175e8c93124188f9d926af0a62826a3b81aa6d3074928345f8238279e"},
{file = "gevent-25.9.1-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:b5a67a0974ad9f24721034d1e008856111e0535f1541499f72a733a73d658d1c"},
{file = "gevent-25.9.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:1d0f5d8d73f97e24ea8d24d8be0f51e0cf7c54b8021c1fddb580bf239474690f"},
{file = "gevent-25.9.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:ddd3ff26e5c4240d3fbf5516c2d9d5f2a998ef87cfb73e1429cfaeaaec860fa6"},
{file = "gevent-25.9.1-cp314-cp314-win_amd64.whl", hash = "sha256:bb63c0d6cb9950cc94036a4995b9cc4667b8915366613449236970f4394f94d7"},
{file = "gevent-25.9.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f18f80aef6b1f6907219affe15b36677904f7cfeed1f6a6bc198616e507ae2d7"},
{file = "gevent-25.9.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:b274a53e818124a281540ebb4e7a2c524778f745b7a99b01bdecf0ca3ac0ddb0"},
{file = "gevent-25.9.1-cp39-cp39-win32.whl", hash = "sha256:c6c91f7e33c7f01237755884316110ee7ea076f5bdb9aa0982b6dc63243c0a38"},
{file = "gevent-25.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:012a44b0121f3d7c800740ff80351c897e85e76a7e4764690f35c5ad9ec17de5"},
{file = "gevent-25.9.1.tar.gz", hash = "sha256:adf9cd552de44a4e6754c51ff2e78d9193b7fa6eab123db9578a210e657235dd"},
]
[package.dependencies]
cffi = {version = ">=1.17.1", markers = "platform_python_implementation == \"CPython\" and sys_platform == \"win32\""}
greenlet = {version = ">=3.2.2", markers = "platform_python_implementation == \"CPython\""}
"zope.event" = "*"
"zope.interface" = "*"
[package.extras]
dnspython = ["dnspython (>=1.16.0,<2.0) ; python_version < \"3.10\"", "idna ; python_version < \"3.10\""]
docs = ["furo", "repoze.sphinx.autointerface", "sphinx", "sphinxcontrib-programoutput", "zope.schema"]
monitor = ["psutil (>=5.7.0) ; sys_platform != \"win32\" or platform_python_implementation == \"CPython\""]
recommended = ["cffi (>=1.17.1) ; platform_python_implementation == \"CPython\"", "dnspython (>=1.16.0,<2.0) ; python_version < \"3.10\"", "idna ; python_version < \"3.10\"", "psutil (>=5.7.0) ; sys_platform != \"win32\" or platform_python_implementation == \"CPython\""]
test = ["cffi (>=1.17.1) ; platform_python_implementation == \"CPython\"", "coverage (>=5.0) ; sys_platform != \"win32\"", "dnspython (>=1.16.0,<2.0) ; python_version < \"3.10\"", "idna ; python_version < \"3.10\"", "objgraph", "psutil (>=5.7.0) ; sys_platform != \"win32\" or platform_python_implementation == \"CPython\"", "requests"]
[[package]]
name = "google-api-core"
version = "2.25.1"
@@ -2571,6 +2667,87 @@ files = [
dev = ["pytest"]
docs = ["sphinx", "sphinx-autobuild"]
[[package]]
name = "greenlet"
version = "3.2.4"
description = "Lightweight in-process concurrent programming"
optional = false
python-versions = ">=3.9"
groups = ["main"]
markers = "platform_python_implementation == \"CPython\""
files = [
{file = "greenlet-3.2.4-cp310-cp310-macosx_11_0_universal2.whl", hash = "sha256:8c68325b0d0acf8d91dde4e6f930967dd52a5302cd4062932a6b2e7c2969f47c"},
{file = "greenlet-3.2.4-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:94385f101946790ae13da500603491f04a76b6e4c059dab271b3ce2e283b2590"},
{file = "greenlet-3.2.4-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f10fd42b5ee276335863712fa3da6608e93f70629c631bf77145021600abc23c"},
{file = "greenlet-3.2.4-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:c8c9e331e58180d0d83c5b7999255721b725913ff6bc6cf39fa2a45841a4fd4b"},
{file = "greenlet-3.2.4-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:58b97143c9cc7b86fc458f215bd0932f1757ce649e05b640fea2e79b54cedb31"},
{file = "greenlet-3.2.4-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c2ca18a03a8cfb5b25bc1cbe20f3d9a4c80d8c3b13ba3df49ac3961af0b1018d"},
{file = "greenlet-3.2.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9fe0a28a7b952a21e2c062cd5756d34354117796c6d9215a87f55e38d15402c5"},
{file = "greenlet-3.2.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8854167e06950ca75b898b104b63cc646573aa5fef1353d4508ecdd1ee76254f"},
{file = "greenlet-3.2.4-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:f47617f698838ba98f4ff4189aef02e7343952df3a615f847bb575c3feb177a7"},
{file = "greenlet-3.2.4-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:af41be48a4f60429d5cad9d22175217805098a9ef7c40bfef44f7669fb9d74d8"},
{file = "greenlet-3.2.4-cp310-cp310-win_amd64.whl", hash = "sha256:73f49b5368b5359d04e18d15828eecc1806033db5233397748f4ca813ff1056c"},
{file = "greenlet-3.2.4-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:96378df1de302bc38e99c3a9aa311967b7dc80ced1dcc6f171e99842987882a2"},
{file = "greenlet-3.2.4-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1ee8fae0519a337f2329cb78bd7a8e128ec0f881073d43f023c7b8d4831d5246"},
{file = "greenlet-3.2.4-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:94abf90142c2a18151632371140b3dba4dee031633fe614cb592dbb6c9e17bc3"},
{file = "greenlet-3.2.4-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:4d1378601b85e2e5171b99be8d2dc85f594c79967599328f95c1dc1a40f1c633"},
{file = "greenlet-3.2.4-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:0db5594dce18db94f7d1650d7489909b57afde4c580806b8d9203b6e79cdc079"},
{file = "greenlet-3.2.4-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2523e5246274f54fdadbce8494458a2ebdcdbc7b802318466ac5606d3cded1f8"},
{file = "greenlet-3.2.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:1987de92fec508535687fb807a5cea1560f6196285a4cde35c100b8cd632cc52"},
{file = "greenlet-3.2.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:55e9c5affaa6775e2c6b67659f3a71684de4c549b3dd9afca3bc773533d284fa"},
{file = "greenlet-3.2.4-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:c9c6de1940a7d828635fbd254d69db79e54619f165ee7ce32fda763a9cb6a58c"},
{file = "greenlet-3.2.4-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:03c5136e7be905045160b1b9fdca93dd6727b180feeafda6818e6496434ed8c5"},
{file = "greenlet-3.2.4-cp311-cp311-win_amd64.whl", hash = "sha256:9c40adce87eaa9ddb593ccb0fa6a07caf34015a29bf8d344811665b573138db9"},
{file = "greenlet-3.2.4-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:3b67ca49f54cede0186854a008109d6ee71f66bd57bb36abd6d0a0267b540cdd"},
{file = "greenlet-3.2.4-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ddf9164e7a5b08e9d22511526865780a576f19ddd00d62f8a665949327fde8bb"},
{file = "greenlet-3.2.4-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f28588772bb5fb869a8eb331374ec06f24a83a9c25bfa1f38b6993afe9c1e968"},
{file = "greenlet-3.2.4-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:5c9320971821a7cb77cfab8d956fa8e39cd07ca44b6070db358ceb7f8797c8c9"},
{file = "greenlet-3.2.4-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c60a6d84229b271d44b70fb6e5fa23781abb5d742af7b808ae3f6efd7c9c60f6"},
{file = "greenlet-3.2.4-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3b3812d8d0c9579967815af437d96623f45c0f2ae5f04e366de62a12d83a8fb0"},
{file = "greenlet-3.2.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:abbf57b5a870d30c4675928c37278493044d7c14378350b3aa5d484fa65575f0"},
{file = "greenlet-3.2.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:20fb936b4652b6e307b8f347665e2c615540d4b42b3b4c8a321d8286da7e520f"},
{file = "greenlet-3.2.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:ee7a6ec486883397d70eec05059353b8e83eca9168b9f3f9a361971e77e0bcd0"},
{file = "greenlet-3.2.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:326d234cbf337c9c3def0676412eb7040a35a768efc92504b947b3e9cfc7543d"},
{file = "greenlet-3.2.4-cp312-cp312-win_amd64.whl", hash = "sha256:a7d4e128405eea3814a12cc2605e0e6aedb4035bf32697f72deca74de4105e02"},
{file = "greenlet-3.2.4-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:1a921e542453fe531144e91e1feedf12e07351b1cf6c9e8a3325ea600a715a31"},
{file = "greenlet-3.2.4-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cd3c8e693bff0fff6ba55f140bf390fa92c994083f838fece0f63be121334945"},
{file = "greenlet-3.2.4-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:710638eb93b1fa52823aa91bf75326f9ecdfd5e0466f00789246a5280f4ba0fc"},
{file = "greenlet-3.2.4-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:c5111ccdc9c88f423426df3fd1811bfc40ed66264d35aa373420a34377efc98a"},
{file = "greenlet-3.2.4-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d76383238584e9711e20ebe14db6c88ddcedc1829a9ad31a584389463b5aa504"},
{file = "greenlet-3.2.4-cp313-cp313-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:23768528f2911bcd7e475210822ffb5254ed10d71f4028387e5a99b4c6699671"},
{file = "greenlet-3.2.4-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:00fadb3fedccc447f517ee0d3fd8fe49eae949e1cd0f6a611818f4f6fb7dc83b"},
{file = "greenlet-3.2.4-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:d25c5091190f2dc0eaa3f950252122edbbadbb682aa7b1ef2f8af0f8c0afefae"},
{file = "greenlet-3.2.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:6e343822feb58ac4d0a1211bd9399de2b3a04963ddeec21530fc426cc121f19b"},
{file = "greenlet-3.2.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:ca7f6f1f2649b89ce02f6f229d7c19f680a6238af656f61e0115b24857917929"},
{file = "greenlet-3.2.4-cp313-cp313-win_amd64.whl", hash = "sha256:554b03b6e73aaabec3745364d6239e9e012d64c68ccd0b8430c64ccc14939a8b"},
{file = "greenlet-3.2.4-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:49a30d5fda2507ae77be16479bdb62a660fa51b1eb4928b524975b3bde77b3c0"},
{file = "greenlet-3.2.4-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:299fd615cd8fc86267b47597123e3f43ad79c9d8a22bebdce535e53550763e2f"},
{file = "greenlet-3.2.4-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:c17b6b34111ea72fc5a4e4beec9711d2226285f0386ea83477cbb97c30a3f3a5"},
{file = "greenlet-3.2.4-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b4a1870c51720687af7fa3e7cda6d08d801dae660f75a76f3845b642b4da6ee1"},
{file = "greenlet-3.2.4-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:061dc4cf2c34852b052a8620d40f36324554bc192be474b9e9770e8c042fd735"},
{file = "greenlet-3.2.4-cp314-cp314-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:44358b9bf66c8576a9f57a590d5f5d6e72fa4228b763d0e43fee6d3b06d3a337"},
{file = "greenlet-3.2.4-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:2917bdf657f5859fbf3386b12d68ede4cf1f04c90c3a6bc1f013dd68a22e2269"},
{file = "greenlet-3.2.4-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:015d48959d4add5d6c9f6c5210ee3803a830dce46356e3bc326d6776bde54681"},
{file = "greenlet-3.2.4-cp314-cp314-win_amd64.whl", hash = "sha256:e37ab26028f12dbb0ff65f29a8d3d44a765c61e729647bf2ddfbbed621726f01"},
{file = "greenlet-3.2.4-cp39-cp39-macosx_11_0_universal2.whl", hash = "sha256:b6a7c19cf0d2742d0809a4c05975db036fdff50cd294a93632d6a310bf9ac02c"},
{file = "greenlet-3.2.4-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:27890167f55d2387576d1f41d9487ef171849ea0359ce1510ca6e06c8bece11d"},
{file = "greenlet-3.2.4-cp39-cp39-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:18d9260df2b5fbf41ae5139e1be4e796d99655f023a636cd0e11e6406cca7d58"},
{file = "greenlet-3.2.4-cp39-cp39-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:671df96c1f23c4a0d4077a325483c1503c96a1b7d9db26592ae770daa41233d4"},
{file = "greenlet-3.2.4-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:16458c245a38991aa19676900d48bd1a6f2ce3e16595051a4db9d012154e8433"},
{file = "greenlet-3.2.4-cp39-cp39-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c9913f1a30e4526f432991f89ae263459b1c64d1608c0d22a5c79c287b3c70df"},
{file = "greenlet-3.2.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b90654e092f928f110e0007f572007c9727b5265f7632c2fa7415b4689351594"},
{file = "greenlet-3.2.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:81701fd84f26330f0d5f4944d4e92e61afe6319dcd9775e39396e39d7c3e5f98"},
{file = "greenlet-3.2.4-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:28a3c6b7cd72a96f61b0e4b2a36f681025b60ae4779cc73c1535eb5f29560b10"},
{file = "greenlet-3.2.4-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:52206cd642670b0b320a1fd1cbfd95bca0e043179c1d8a045f2c6109dfe973be"},
{file = "greenlet-3.2.4-cp39-cp39-win32.whl", hash = "sha256:65458b409c1ed459ea899e939f0e1cdb14f58dbc803f2f93c5eab5694d32671b"},
{file = "greenlet-3.2.4-cp39-cp39-win_amd64.whl", hash = "sha256:d2e685ade4dafd447ede19c31277a224a239a0a1a4eca4e6390efedf20260cfb"},
{file = "greenlet-3.2.4.tar.gz", hash = "sha256:0dca0d95ff849f9a364385f36ab49f50065d76964944638be9691e1832e9f86d"},
]
[package.extras]
docs = ["Sphinx", "furo"]
test = ["objgraph", "psutil", "setuptools"]
[[package]]
name = "gunicorn"
version = "23.0.0"
@@ -4046,6 +4223,29 @@ rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "oci"
version = "2.160.3"
description = "Oracle Cloud Infrastructure Python SDK"
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "oci-2.160.3-py3-none-any.whl", hash = "sha256:858bff3e697098bdda44833d2476bfb4632126f0182178e7dbde4dbd156d71f0"},
{file = "oci-2.160.3.tar.gz", hash = "sha256:57514889be3b713a8385d86e3ba8a33cf46e3563c2a7e29a93027fb30b8a2537"},
]
[package.dependencies]
certifi = "*"
circuitbreaker = {version = ">=1.3.1,<3.0.0", markers = "python_version >= \"3.7\""}
cryptography = ">=3.2.1,<46.0.0"
pyOpenSSL = ">=17.5.0,<25.0.0"
python-dateutil = ">=2.5.3,<3.0.0"
pytz = ">=2016.10"
[package.extras]
adk = ["docstring-parser (>=0.16) ; python_version >= \"3.10\" and python_version < \"4\"", "mcp (>=1.6.0) ; python_version >= \"3.10\" and python_version < \"4\"", "pydantic (>=2.10.6) ; python_version >= \"3.10\" and python_version < \"4\"", "rich (>=13.9.4) ; python_version >= \"3.10\" and python_version < \"4\""]
[[package]]
name = "openai"
version = "1.101.0"
@@ -4580,7 +4780,7 @@ files = [
[[package]]
name = "prowler"
version = "5.13.0"
version = "5.14.0"
description = "Prowler is an Open Source security tool to perform AWS, GCP and Azure security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness. It contains hundreds of controls covering CIS, NIST 800, NIST CSF, CISA, RBI, FedRAMP, PCI-DSS, GDPR, HIPAA, FFIEC, SOC2, GXP, AWS Well-Architected Framework Security Pillar, AWS Foundational Technical Review (FTR), ENS (Spanish National Security Scheme) and your custom security frameworks."
optional = false
python-versions = ">3.9.1,<3.13"
@@ -4605,6 +4805,7 @@ azure-mgmt-keyvault = "10.3.1"
azure-mgmt-loganalytics = "12.0.0"
azure-mgmt-monitor = "6.0.2"
azure-mgmt-network = "28.1.0"
azure-mgmt-postgresqlflexibleservers = "1.1.0"
azure-mgmt-rdbms = "10.1.0"
azure-mgmt-recoveryservices = "3.1.0"
azure-mgmt-recoveryservicesbackup = "9.2.0"
@@ -4634,6 +4835,7 @@ markdown = "3.9.0"
microsoft-kiota-abstractions = "1.9.2"
msgraph-sdk = "1.23.0"
numpy = "2.0.2"
oci = "2.160.3"
pandas = "2.2.3"
py-iam-expand = "0.1.0"
py-ocsf-models = "0.5.0"
@@ -4651,7 +4853,7 @@ tzlocal = "5.3.1"
type = "git"
url = "https://github.com/prowler-cloud/prowler.git"
reference = "master"
resolved_reference = "a52697bfdfee83d14a49c11dcbe96888b5cd767e"
resolved_reference = "de5aba6d4db54eed4c95cb7629443da186c17afd"
[[package]]
name = "psutil"
@@ -5136,6 +5338,25 @@ cffi = ">=1.4.1"
docs = ["sphinx (>=1.6.5)", "sphinx-rtd-theme"]
tests = ["hypothesis (>=3.27.0)", "pytest (>=3.2.1,!=3.3.0)"]
[[package]]
name = "pyopenssl"
version = "24.3.0"
description = "Python wrapper module around the OpenSSL library"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "pyOpenSSL-24.3.0-py3-none-any.whl", hash = "sha256:e474f5a473cd7f92221cc04976e48f4d11502804657a08a989fb3be5514c904a"},
{file = "pyopenssl-24.3.0.tar.gz", hash = "sha256:49f7a019577d834746bc55c5fce6ecbcec0f2b4ec5ce1cf43a9a173b8138bb36"},
]
[package.dependencies]
cryptography = ">=41.0.5,<45"
[package.extras]
docs = ["sphinx (!=5.2.0,!=5.2.0.post0,!=7.2.5)", "sphinx_rtd_theme"]
test = ["pretend", "pytest (>=3.0.1)", "pytest-rerunfailures"]
[[package]]
name = "pyparsing"
version = "3.2.3"
@@ -6783,7 +7004,69 @@ enabler = ["pytest-enabler (>=2.2)"]
test = ["big-O", "jaraco.functools", "jaraco.itertools", "jaraco.test", "more_itertools", "pytest (>=6,!=8.1.*)", "pytest-ignore-flaky"]
type = ["pytest-mypy"]
[[package]]
name = "zope-event"
version = "6.1"
description = "Very basic event publishing system"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "zope_event-6.1-py3-none-any.whl", hash = "sha256:0ca78b6391b694272b23ec1335c0294cc471065ed10f7f606858fc54566c25a0"},
{file = "zope_event-6.1.tar.gz", hash = "sha256:6052a3e0cb8565d3d4ef1a3a7809336ac519bc4fe38398cb8d466db09adef4f0"},
]
[package.extras]
docs = ["Sphinx"]
test = ["zope.testrunner (>=6.4)"]
[[package]]
name = "zope-interface"
version = "8.1.1"
description = "Interfaces for Python"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "zope_interface-8.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5c6b12b656c7d7e3d79cad8e2afc4a37eae6b6076e2c209a33345143148e435e"},
{file = "zope_interface-8.1.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:557c0f1363c300db406e9eeaae8ab6d1ba429d4fed60d8ab7dadab5ca66ccd35"},
{file = "zope_interface-8.1.1-cp310-cp310-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:127b0e4c873752b777721543cf8525b3db5e76b88bd33bab807f03c568e9003f"},
{file = "zope_interface-8.1.1-cp310-cp310-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:e0892c9d2dd47b45f62d1861bcae8b427fcc49b4a04fff67f12c5c55e56654d7"},
{file = "zope_interface-8.1.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ff8a92dc8c8a2c605074e464984e25b9b5a8ac9b2a0238dd73a0f374df59a77e"},
{file = "zope_interface-8.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:54627ddf6034aab1f506ba750dd093f67d353be6249467d720e9f278a578efe5"},
{file = "zope_interface-8.1.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:e8a0fdd5048c1bb733e4693eae9bc4145a19419ea6a1c95299318a93fe9f3d72"},
{file = "zope_interface-8.1.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a4cb0ea75a26b606f5bc8524fbce7b7d8628161b6da002c80e6417ce5ec757c0"},
{file = "zope_interface-8.1.1-cp311-cp311-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:c267b00b5a49a12743f5e1d3b4beef45479d696dab090f11fe3faded078a5133"},
{file = "zope_interface-8.1.1-cp311-cp311-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:e25d3e2b9299e7ec54b626573673bdf0d740cf628c22aef0a3afef85b438aa54"},
{file = "zope_interface-8.1.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:63db1241804417aff95ac229c13376c8c12752b83cc06964d62581b493e6551b"},
{file = "zope_interface-8.1.1-cp311-cp311-win_amd64.whl", hash = "sha256:9639bf4ed07b5277fb231e54109117c30d608254685e48a7104a34618bcbfc83"},
{file = "zope_interface-8.1.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:a16715808408db7252b8c1597ed9008bdad7bf378ed48eb9b0595fad4170e49d"},
{file = "zope_interface-8.1.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ce6b58752acc3352c4aa0b55bbeae2a941d61537e6afdad2467a624219025aae"},
{file = "zope_interface-8.1.1-cp312-cp312-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:807778883d07177713136479de7fd566f9056a13aef63b686f0ab4807c6be259"},
{file = "zope_interface-8.1.1-cp312-cp312-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:50e5eb3b504a7d63dc25211b9298071d5b10a3eb754d6bf2f8ef06cb49f807ab"},
{file = "zope_interface-8.1.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:eee6f93b2512ec9466cf30c37548fd3ed7bc4436ab29cd5943d7a0b561f14f0f"},
{file = "zope_interface-8.1.1-cp312-cp312-win_amd64.whl", hash = "sha256:80edee6116d569883c58ff8efcecac3b737733d646802036dc337aa839a5f06b"},
{file = "zope_interface-8.1.1-cp313-cp313-macosx_10_9_x86_64.whl", hash = "sha256:84f9be6d959640de9da5d14ac1f6a89148b16da766e88db37ed17e936160b0b1"},
{file = "zope_interface-8.1.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:531fba91dcb97538f70cf4642a19d6574269460274e3f6004bba6fe684449c51"},
{file = "zope_interface-8.1.1-cp313-cp313-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:fc65f5633d5a9583ee8d88d1f5de6b46cd42c62e47757cfe86be36fb7c8c4c9b"},
{file = "zope_interface-8.1.1-cp313-cp313-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:efef80ddec4d7d99618ef71bc93b88859248075ca2e1ae1c78636654d3d55533"},
{file = "zope_interface-8.1.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:49aad83525eca3b4747ef51117d302e891f0042b06f32aa1c7023c62642f962b"},
{file = "zope_interface-8.1.1-cp313-cp313-win_amd64.whl", hash = "sha256:71cf329a21f98cb2bd9077340a589e316ac8a415cac900575a32544b3dffcb98"},
{file = "zope_interface-8.1.1-cp314-cp314-macosx_10_9_x86_64.whl", hash = "sha256:da311e9d253991ca327601f47c4644d72359bac6950fbb22f971b24cd7850f8c"},
{file = "zope_interface-8.1.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:3fb25fca0442c7fb93c4ee40b42e3e033fef2f648730c4b7ae6d43222a3e8946"},
{file = "zope_interface-8.1.1-cp314-cp314-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:bac588d0742b4e35efb7c7df1dacc0397b51ed37a17d4169a38019a1cebacf0a"},
{file = "zope_interface-8.1.1-cp314-cp314-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:3d1f053d2d5e2b393e619bce1e55954885c2e63969159aa521839e719442db49"},
{file = "zope_interface-8.1.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:64a1ad7f4cb17d948c6bdc525a1d60c0e567b2526feb4fa38b38f249961306b8"},
{file = "zope_interface-8.1.1-cp314-cp314-win_amd64.whl", hash = "sha256:169214da1b82b7695d1a36f92d70b11166d66b6b09d03df35d150cc62ac52276"},
{file = "zope_interface-8.1.1.tar.gz", hash = "sha256:51b10e6e8e238d719636a401f44f1e366146912407b58453936b781a19be19ec"},
]
[package.extras]
docs = ["Sphinx", "furo", "repoze.sphinx.autointerface"]
test = ["coverage[toml]", "zope.event", "zope.testing"]
testing = ["coverage[toml]", "zope.event", "zope.testing"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.11,<3.13"
content-hash = "3c9164d668d37d6373eb5200bbe768232ead934d9312b9c68046b1df922789f3"
content-hash = "77ef098291cb8631565a1ab5027ce33e7fcb5a04883dc7160bf373eac9e1fb49"
+4 -3
@@ -7,7 +7,7 @@ authors = [{name = "Prowler Engineering", email = "engineering@prowler.com"}]
dependencies = [
"celery[pytest] (>=5.4.0,<6.0.0)",
"dj-rest-auth[with_social,jwt] (==7.0.1)",
"django (==5.1.13)",
"django (==5.1.14)",
"django-allauth[saml] (>=65.8.0,<66.0.0)",
"django-celery-beat (>=2.7.0,<3.0.0)",
"django-celery-results (>=2.5.1,<3.0.0)",
@@ -35,7 +35,8 @@ dependencies = [
"markdown (>=3.9,<4.0)",
"drf-simple-apikey (==2.2.1)",
"matplotlib (>=3.10.6,<4.0.0)",
"reportlab (>=4.4.4,<5.0.0)"
"reportlab (>=4.4.4,<5.0.0)",
"gevent (>=25.9.1,<26.0.0)"
]
description = "Prowler's API (Django/DRF)"
license = "Apache-2.0"
@@ -43,7 +44,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
version = "1.15.0"
version = "1.16.0"
[project.scripts]
celery = "src.backend.config.settings.celery"
+4 -25
@@ -9,25 +9,6 @@ PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE = {}
PROWLER_CHECKS = {}
AVAILABLE_COMPLIANCE_FRAMEWORKS = {}
# Map API provider names to Prowler directory names
# This is needed because the OCI provider directory is 'oraclecloud' but the provider type is 'oci'
PROVIDER_NAME_MAPPING = {
"oci": "oraclecloud",
}
def get_prowler_provider_name(provider_type: str) -> str:
"""
Map API provider type to Prowler provider directory name.
Args:
provider_type: The provider type from the API (e.g., 'oci', 'aws', 'azure')
Returns:
The provider name used in Prowler's directory structure (e.g., 'oraclecloud', 'aws', 'azure')
"""
return PROVIDER_NAME_MAPPING.get(provider_type, provider_type)
def get_compliance_frameworks(provider_type: Provider.ProviderChoices) -> list[str]:
"""
@@ -47,9 +28,8 @@ def get_compliance_frameworks(provider_type: Provider.ProviderChoices) -> list[s
"""
global AVAILABLE_COMPLIANCE_FRAMEWORKS
if provider_type not in AVAILABLE_COMPLIANCE_FRAMEWORKS:
prowler_provider_name = get_prowler_provider_name(provider_type)
AVAILABLE_COMPLIANCE_FRAMEWORKS[provider_type] = (
get_available_compliance_frameworks(prowler_provider_name)
get_available_compliance_frameworks(provider_type)
)
return AVAILABLE_COMPLIANCE_FRAMEWORKS[provider_type]
@@ -69,8 +49,7 @@ def get_prowler_provider_checks(provider_type: Provider.ProviderChoices):
Returns:
Iterable[str]: An iterable of check IDs associated with the specified provider type.
"""
prowler_provider_name = get_prowler_provider_name(provider_type)
return CheckMetadata.get_bulk(prowler_provider_name).keys()
return CheckMetadata.get_bulk(provider_type).keys()
def get_prowler_provider_compliance(provider_type: Provider.ProviderChoices) -> dict:
@@ -88,8 +67,7 @@ def get_prowler_provider_compliance(provider_type: Provider.ProviderChoices) ->
dict: A dictionary mapping compliance framework names to their respective
Compliance objects for the specified provider.
"""
prowler_provider_name = get_prowler_provider_name(provider_type)
return Compliance.get_bulk(prowler_provider_name)
return Compliance.get_bulk(provider_type)
def load_prowler_compliance():
@@ -166,6 +144,7 @@ def generate_scan_compliance(
Returns:
None: This function modifies the compliance_overview in place.
"""
for compliance_id in PROWLER_CHECKS[provider_type][check_id]:
for requirement in compliance_overview[compliance_id]["requirements"].values():
if check_id in requirement["checks"]:
+7 -1
@@ -26,6 +26,7 @@ class MainRouter:
default_db = "default"
admin_db = "admin"
replica_db = "replica"
admin_replica_db = "admin_replica"
def db_for_read(self, model, **hints): # noqa: F841
model_table_name = model._meta.db_table
@@ -49,7 +50,12 @@ class MainRouter:
def allow_relation(self, obj1, obj2, **hints): # noqa: F841
# Allow relations when both objects originate from allowed connectors
allowed_dbs = {self.default_db, self.admin_db, self.replica_db}
allowed_dbs = {
self.default_db,
self.admin_db,
self.replica_db,
self.admin_replica_db,
}
if {obj1._state.db, obj2._state.db} <= allowed_dbs:
return True
return None
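For reference, a minimal stand-alone sketch (not part of the diff) of the relation check above once admin_replica is allowed; the alias strings match the router attributes shown earlier:
allowed_dbs = {"default", "admin", "replica", "admin_replica"}
def allow_relation_sketch(db1: str, db2: str) -> bool | None:
    # Relations are allowed only when both objects come from allowed aliases;
    # otherwise the router stays neutral and returns None.
    if {db1, db2} <= allowed_dbs:
        return True
    return None
print(allow_relation_sketch("default", "admin_replica"))  # True
print(allow_relation_sketch("default", "some_other_alias"))  # None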
+66 -19
@@ -1,18 +1,35 @@
import re
import secrets
import time
import uuid
from contextlib import contextmanager
from datetime import datetime, timedelta, timezone
from celery.utils.log import get_task_logger
from config.env import env
from django.conf import settings
from django.contrib.auth.models import BaseUserManager
from django.db import DEFAULT_DB_ALIAS, connection, connections, models, transaction
from django.db import (
DEFAULT_DB_ALIAS,
OperationalError,
connection,
connections,
models,
transaction,
)
from django_celery_beat.models import PeriodicTask
from psycopg2 import connect as psycopg2_connect
from psycopg2.extensions import AsIs, new_type, register_adapter, register_type
from rest_framework_json_api.serializers import ValidationError
from api.db_router import get_read_db_alias, reset_read_db_alias, set_read_db_alias
from api.db_router import (
READ_REPLICA_ALIAS,
get_read_db_alias,
reset_read_db_alias,
set_read_db_alias,
)
logger = get_task_logger(__name__)
DB_USER = settings.DATABASES["default"]["USER"] if not settings.TESTING else "test"
DB_PASSWORD = (
@@ -28,6 +45,9 @@ TASK_RUNNER_DB_TABLE = "django_celery_results_taskresult"
POSTGRES_TENANT_VAR = "api.tenant_id"
POSTGRES_USER_VAR = "api.user_id"
REPLICA_MAX_ATTEMPTS = env.int("POSTGRES_REPLICA_MAX_ATTEMPTS", default=3)
REPLICA_RETRY_BASE_DELAY = env.float("POSTGRES_REPLICA_RETRY_BASE_DELAY", default=0.5)
SET_CONFIG_QUERY = "SELECT set_config(%s, %s::text, TRUE);"
@@ -71,24 +91,51 @@ def rls_transaction(
if db_alias not in connections:
db_alias = DEFAULT_DB_ALIAS
router_token = None
try:
if db_alias != DEFAULT_DB_ALIAS:
router_token = set_read_db_alias(db_alias)
alias = db_alias
is_replica = READ_REPLICA_ALIAS and alias == READ_REPLICA_ALIAS
max_attempts = REPLICA_MAX_ATTEMPTS if is_replica else 1
with transaction.atomic(using=db_alias):
conn = connections[db_alias]
with conn.cursor() as cursor:
try:
# just in case the value is a UUID object
uuid.UUID(str(value))
except ValueError:
raise ValidationError("Must be a valid UUID")
cursor.execute(SET_CONFIG_QUERY, [parameter, value])
yield cursor
finally:
if router_token is not None:
reset_read_db_alias(router_token)
for attempt in range(1, max_attempts + 1):
router_token = None
# On the final attempt, fall back to the primary
if attempt == max_attempts and is_replica:
logger.warning(
f"RLS transaction failed after {attempt - 1} attempts on replica, "
f"falling back to primary DB"
)
alias = DEFAULT_DB_ALIAS
conn = connections[alias]
try:
if alias != DEFAULT_DB_ALIAS:
router_token = set_read_db_alias(alias)
with transaction.atomic(using=alias):
with conn.cursor() as cursor:
try:
# just in case the value is a UUID object
uuid.UUID(str(value))
except ValueError:
raise ValidationError("Must be a valid UUID")
cursor.execute(SET_CONFIG_QUERY, [parameter, value])
yield cursor
return
except OperationalError as e:
# If on primary or max attempts reached, raise
if not is_replica or attempt == max_attempts:
raise
# Retry with exponential backoff
delay = REPLICA_RETRY_BASE_DELAY * (2 ** (attempt - 1))
logger.info(
f"RLS transaction failed on replica (attempt {attempt}/{max_attempts}), "
f"retrying in {delay}s. Error: {e}"
)
time.sleep(delay)
finally:
if router_token is not None:
reset_read_db_alias(router_token)
class CustomUserManager(BaseUserManager):
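For reference, a quick sketch (not part of the diff) of the retry delays the backoff in rls_transaction produces, assuming the defaults shown above (REPLICA_MAX_ATTEMPTS=3, REPLICA_RETRY_BASE_DELAY=0.5); the final attempt switches to the primary instead of sleeping again:
REPLICA_MAX_ATTEMPTS = 3
REPLICA_RETRY_BASE_DELAY = 0.5
# Sleeps only happen after failed replica attempts 1..max_attempts-1.
delays = [
    REPLICA_RETRY_BASE_DELAY * (2 ** (attempt - 1))
    for attempt in range(1, REPLICA_MAX_ATTEMPTS)
]
print(delays)  # [0.5, 1.0] -> two replica retries, then the last attempt falls back to the primary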
+52 -2
@@ -1,10 +1,14 @@
import uuid
from functools import wraps
from django.db import connection, transaction
from django.core.exceptions import ObjectDoesNotExist
from django.db import IntegrityError, connection, transaction
from rest_framework_json_api.serializers import ValidationError
from api.db_utils import POSTGRES_TENANT_VAR, SET_CONFIG_QUERY
from api.db_router import READ_REPLICA_ALIAS
from api.db_utils import POSTGRES_TENANT_VAR, SET_CONFIG_QUERY, rls_transaction
from api.exceptions import ProviderDeletedException
from api.models import Provider, Scan
def set_tenant(func=None, *, keep_tenant=False):
@@ -66,3 +70,49 @@ def set_tenant(func=None, *, keep_tenant=False):
return decorator
else:
return decorator(func)
def handle_provider_deletion(func):
"""
Decorator that raises ProviderDeletedException if the provider was deleted during execution.
Catches ObjectDoesNotExist and IntegrityError, checks whether the provider still exists,
and raises ProviderDeletedException if it does not; otherwise, it re-raises the original exception.
Requires tenant_id and either provider_id or scan_id in kwargs.
Example:
@shared_task
@handle_provider_deletion
def scan_task(scan_id, tenant_id, provider_id):
...
"""
@wraps(func)
def wrapper(*args, **kwargs):
try:
return func(*args, **kwargs)
except (ObjectDoesNotExist, IntegrityError):
tenant_id = kwargs.get("tenant_id")
provider_id = kwargs.get("provider_id")
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
if provider_id is None:
scan_id = kwargs.get("scan_id")
if scan_id is None:
raise AssertionError(
"This task does not have provider or scan in the kwargs"
)
scan = Scan.objects.filter(pk=scan_id).first()
if scan is None:
raise ProviderDeletedException(
f"Provider for scan '{scan_id}' was deleted during the scan"
) from None
provider_id = str(scan.provider_id)
if not Provider.objects.filter(pk=provider_id).exists():
raise ProviderDeletedException(
f"Provider '{provider_id}' was deleted during the scan"
) from None
raise
return wrapper
+4
@@ -66,6 +66,10 @@ class ProviderConnectionError(Exception):
"""Base exception for provider connection errors."""
class ProviderDeletedException(Exception):
"""Raised when a provider has been deleted during scan/task execution."""
def custom_exception_handler(exc, context):
if isinstance(exc, django_validation_error):
if hasattr(exc, "error_dict"):
+167 -25
@@ -23,13 +23,16 @@ from api.db_utils import (
StatusEnumField,
)
from api.models import (
AttackSurfaceOverview,
ComplianceRequirementOverview,
DailySeveritySummary,
Finding,
Integration,
Invitation,
LighthouseProviderConfiguration,
LighthouseProviderModels,
Membership,
MuteRule,
OverviewStatusChoices,
PermissionChoices,
Processor,
@@ -40,12 +43,14 @@ from api.models import (
ResourceTag,
Role,
Scan,
ScanCategorySummary,
ScanSummary,
SeverityChoices,
StateChoices,
StatusChoices,
Task,
TenantAPIKey,
ThreatScoreSnapshot,
User,
)
from api.rls import Tenant
@@ -153,6 +158,9 @@ class CommonFindingFilters(FilterSet):
field_name="resources__type", lookup_expr="icontains"
)
category = CharFilter(method="filter_category")
category__in = CharInFilter(field_name="categories", lookup_expr="overlap")
# Temporarily disabled until we implement tag filtering in the UI
# resource_tag_key = CharFilter(field_name="resources__tags__key")
# resource_tag_key__in = CharInFilter(
@@ -184,6 +192,9 @@ class CommonFindingFilters(FilterSet):
def filter_resource_type(self, queryset, name, value):
return queryset.filter(resource_types__contains=[value])
def filter_category(self, queryset, name, value):
return queryset.filter(categories__contains=[value])
def filter_resource_tag(self, queryset, name, value):
overall_query = Q()
for key_value_pair in value:
@@ -758,7 +769,7 @@ class RoleFilter(FilterSet):
class ComplianceOverviewFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
scan_id = UUIDFilter(field_name="scan_id")
scan_id = UUIDFilter(field_name="scan_id", required=True)
region = CharFilter(field_name="region")
class Meta:
@@ -792,6 +803,68 @@ class ScanSummaryFilter(FilterSet):
}
class DailySeveritySummaryFilter(FilterSet):
"""Filter for findings_severity/timeseries endpoint."""
MAX_DATE_RANGE_DAYS = 365
provider_id = UUIDFilter(field_name="provider_id", lookup_expr="exact")
provider_id__in = UUIDInFilter(field_name="provider_id", lookup_expr="in")
provider_type = ChoiceFilter(
field_name="provider__provider", choices=Provider.ProviderChoices.choices
)
provider_type__in = ChoiceInFilter(
field_name="provider__provider", choices=Provider.ProviderChoices.choices
)
date_from = DateFilter(method="filter_noop")
date_to = DateFilter(method="filter_noop")
class Meta:
model = DailySeveritySummary
fields = ["provider_id"]
def filter_noop(self, queryset, name, value):
return queryset
def filter_queryset(self, queryset):
if not self.data.get("date_from"):
raise ValidationError(
[
{
"detail": "This query parameter is required.",
"status": "400",
"source": {"pointer": "filter[date_from]"},
"code": "required",
}
]
)
today = date.today()
date_from = self.form.cleaned_data.get("date_from")
date_to = min(self.form.cleaned_data.get("date_to") or today, today)
if (date_to - date_from).days > self.MAX_DATE_RANGE_DAYS:
raise ValidationError(
[
{
"detail": f"Date range cannot exceed {self.MAX_DATE_RANGE_DAYS} days.",
"status": "400",
"source": {"pointer": "filter[date_from]"},
"code": "invalid",
}
]
)
# Expose the parsed dates to the view
self.request._date_from = date_from
self.request._date_to = date_to
# Apply date filter (only lte for fill-forward logic)
queryset = queryset.filter(date__lte=date_to)
return super().filter_queryset(queryset)
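A plain-Python sketch (dates chosen arbitrarily, not from the diff) of the range check performed above: a missing date_to falls back to today, future dates are clamped, and ranges longer than MAX_DATE_RANGE_DAYS are rejected:
from datetime import date
MAX_DATE_RANGE_DAYS = 365
def check_range(date_from: date, date_to: date | None, today: date) -> date:
    # Mirrors the filter: clamp date_to to today and reject over-long ranges.
    date_to = min(date_to or today, today)
    if (date_to - date_from).days > MAX_DATE_RANGE_DAYS:
        raise ValueError(f"Date range cannot exceed {MAX_DATE_RANGE_DAYS} days.")
    return date_to
print(check_range(date(2025, 1, 1), None, today=date(2025, 6, 1)))  # 2025-06-01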
class ScanSummarySeverityFilter(ScanSummaryFilter):
"""Filter for findings_severity ScanSummary endpoint - includes status filters"""
@@ -810,7 +883,8 @@ class ScanSummarySeverityFilter(ScanSummaryFilter):
elif value == OverviewStatusChoices.PASS:
return queryset.annotate(status_count=F("_pass"))
else:
return queryset.annotate(status_count=F("total"))
# Exclude muted findings by default
return queryset.annotate(status_count=F("_pass") + F("fail"))
def filter_status_in(self, queryset, name, value):
# Validate the status values
@@ -819,7 +893,7 @@ class ScanSummarySeverityFilter(ScanSummaryFilter):
if status_val not in valid_statuses:
raise ValidationError(f"Invalid status value: {status_val}")
# If all statuses or no valid statuses, use total
# If all statuses or no valid statuses, exclude muted findings (pass + fail)
if (
set(value)
>= {
@@ -828,7 +902,7 @@ class ScanSummarySeverityFilter(ScanSummaryFilter):
}
or not value
):
return queryset.annotate(status_count=F("total"))
return queryset.annotate(status_count=F("_pass") + F("fail"))
# Build the sum expression based on status values
sum_expression = None
@@ -846,7 +920,7 @@ class ScanSummarySeverityFilter(ScanSummaryFilter):
sum_expression = sum_expression + field_expr
if sum_expression is None:
return queryset.annotate(status_count=F("total"))
return queryset.annotate(status_count=F("_pass") + F("fail"))
return queryset.annotate(status_count=sum_expression)
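A hedged reading of the default-case change above, assuming ScanSummary's total column also counts muted findings (which is what the "Exclude muted findings by default" comment implies):
from django.db.models import F
def annotate_default_status_count(queryset):
    # Before: status_count = total (muted findings included).
    # After: status_count = _pass + fail, excluding muted findings by default.
    return queryset.annotate(status_count=F("_pass") + F("fail"))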
@@ -858,26 +932,6 @@ class ScanSummarySeverityFilter(ScanSummaryFilter):
}
class ServiceOverviewFilter(ScanSummaryFilter):
def is_valid(self):
# Check if at least one of the inserted_at filters is present
inserted_at_filters = [
self.data.get("inserted_at"),
self.data.get("inserted_at__gte"),
self.data.get("inserted_at__lte"),
]
if not any(inserted_at_filters):
raise ValidationError(
{
"inserted_at": [
"At least one of filter[inserted_at], filter[inserted_at__gte], or "
"filter[inserted_at__lte] is required."
]
}
)
return super().is_valid()
class IntegrationFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
integration_type = ChoiceFilter(choices=Integration.IntegrationChoices.choices)
@@ -980,3 +1034,91 @@ class LighthouseProviderModelsFilter(FilterSet):
fields = {
"model_id": ["exact", "icontains", "in"],
}
class MuteRuleFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
created_by = UUIDFilter(field_name="created_by__id", lookup_expr="exact")
class Meta:
model = MuteRule
fields = {
"id": ["exact", "in"],
"name": ["exact", "icontains"],
"reason": ["icontains"],
"enabled": ["exact"],
"inserted_at": ["gte", "lte"],
"updated_at": ["gte", "lte"],
}
class ThreatScoreSnapshotFilter(FilterSet):
"""
Filter for ThreatScore snapshots.
Allows filtering by scan, provider, compliance_id, and date ranges.
"""
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
scan_id = UUIDFilter(field_name="scan__id", lookup_expr="exact")
scan_id__in = UUIDInFilter(field_name="scan__id", lookup_expr="in")
provider_id = UUIDFilter(field_name="provider__id", lookup_expr="exact")
provider_id__in = UUIDInFilter(field_name="provider__id", lookup_expr="in")
provider_type = ChoiceFilter(
field_name="provider__provider", choices=Provider.ProviderChoices.choices
)
provider_type__in = ChoiceInFilter(
field_name="provider__provider",
choices=Provider.ProviderChoices.choices,
lookup_expr="in",
)
compliance_id = CharFilter(field_name="compliance_id", lookup_expr="exact")
compliance_id__in = CharInFilter(field_name="compliance_id", lookup_expr="in")
class Meta:
model = ThreatScoreSnapshot
fields = {
"scan": ["exact", "in"],
"provider": ["exact", "in"],
"compliance_id": ["exact", "in"],
"inserted_at": ["date", "gte", "lte"],
"overall_score": ["exact", "gte", "lte"],
}
class AttackSurfaceOverviewFilter(FilterSet):
"""Filter for attack surface overview aggregations by provider."""
provider_id = UUIDFilter(field_name="scan__provider__id", lookup_expr="exact")
provider_id__in = UUIDInFilter(field_name="scan__provider__id", lookup_expr="in")
provider_type = ChoiceFilter(
field_name="scan__provider__provider", choices=Provider.ProviderChoices.choices
)
provider_type__in = ChoiceInFilter(
field_name="scan__provider__provider",
choices=Provider.ProviderChoices.choices,
lookup_expr="in",
)
class Meta:
model = AttackSurfaceOverview
fields = {}
class CategoryOverviewFilter(FilterSet):
provider_id = UUIDFilter(field_name="scan__provider__id", lookup_expr="exact")
provider_id__in = UUIDInFilter(field_name="scan__provider__id", lookup_expr="in")
provider_type = ChoiceFilter(
field_name="scan__provider__provider", choices=Provider.ProviderChoices.choices
)
provider_type__in = ChoiceInFilter(
field_name="scan__provider__provider",
choices=Provider.ProviderChoices.choices,
lookup_expr="in",
)
category = CharFilter(field_name="category", lookup_expr="exact")
category__in = CharInFilter(field_name="category", lookup_expr="in")
class Meta:
model = ScanCategorySummary
fields = {}
@@ -22,13 +22,13 @@ class Migration(migrations.Migration):
("kubernetes", "Kubernetes"),
("m365", "M365"),
("github", "GitHub"),
("oci", "Oracle Cloud Infrastructure"),
("oraclecloud", "Oracle Cloud Infrastructure"),
],
default="aws",
),
),
migrations.RunSQL(
"ALTER TYPE provider ADD VALUE IF NOT EXISTS 'oci';",
"ALTER TYPE provider ADD VALUE IF NOT EXISTS 'oraclecloud';",
reverse_sql=migrations.RunSQL.noop,
),
]
@@ -0,0 +1,117 @@
# Generated by Django 5.1.13 on 2025-10-22 11:56
import uuid
import django.contrib.postgres.fields
import django.core.validators
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models
import api.rls
class Migration(migrations.Migration):
dependencies = [
("api", "0051_oraclecloud_provider"),
]
operations = [
migrations.CreateModel(
name="MuteRule",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
("updated_at", models.DateTimeField(auto_now=True)),
(
"name",
models.CharField(
help_text="Human-readable name for this rule",
max_length=100,
validators=[django.core.validators.MinLengthValidator(3)],
),
),
(
"reason",
models.TextField(
help_text="Reason for muting",
max_length=500,
validators=[django.core.validators.MinLengthValidator(3)],
),
),
(
"enabled",
models.BooleanField(
default=True, help_text="Whether this rule is currently enabled"
),
),
(
"finding_uids",
django.contrib.postgres.fields.ArrayField(
base_field=models.CharField(max_length=255),
help_text="List of finding UIDs to mute",
size=None,
),
),
],
options={
"db_table": "mute_rules",
"abstract": False,
},
),
migrations.AddField(
model_name="finding",
name="muted_at",
field=models.DateTimeField(
blank=True, help_text="Timestamp when this finding was muted", null=True
),
),
migrations.AlterField(
model_name="tenantapikey",
name="name",
field=models.CharField(
max_length=100,
validators=[django.core.validators.MinLengthValidator(3)],
),
),
migrations.AddField(
model_name="muterule",
name="created_by",
field=models.ForeignKey(
help_text="User who created this rule",
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="created_mute_rules",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AddField(
model_name="muterule",
name="tenant",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
migrations.AddConstraint(
model_name="muterule",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_muterule",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.AddConstraint(
model_name="muterule",
constraint=models.UniqueConstraint(
fields=("tenant_id", "name"), name="unique_mute_rule_name_per_tenant"
),
),
]
@@ -0,0 +1,25 @@
# Generated by Django 5.1.12 on 2025-10-14 11:46
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0052_mute_rules"),
]
operations = [
migrations.AlterField(
model_name="lighthouseproviderconfiguration",
name="provider_type",
field=models.CharField(
choices=[
("openai", "OpenAI"),
("bedrock", "AWS Bedrock"),
("openai_compatible", "OpenAI Compatible"),
],
help_text="LLM provider name",
max_length=50,
),
)
]
@@ -0,0 +1,35 @@
# Generated by Django 5.1.10 on 2025-09-09 09:25
from django.db import migrations
import api.db_utils
class Migration(migrations.Migration):
dependencies = [
("api", "0053_lighthouse_bedrock_openai_compatible"),
]
operations = [
migrations.AlterField(
model_name="provider",
name="provider",
field=api.db_utils.ProviderEnumField(
choices=[
("aws", "AWS"),
("azure", "Azure"),
("gcp", "GCP"),
("kubernetes", "Kubernetes"),
("m365", "M365"),
("github", "GitHub"),
("oraclecloud", "Oracle Cloud Infrastructure"),
("iac", "IaC"),
],
default="aws",
),
),
migrations.RunSQL(
"ALTER TYPE provider ADD VALUE IF NOT EXISTS 'iac';",
reverse_sql=migrations.RunSQL.noop,
),
]
@@ -0,0 +1,36 @@
# Generated by Django 5.1.13 on 2025-11-05 08:37
from django.db import migrations
import api.db_utils
class Migration(migrations.Migration):
dependencies = [
("api", "0054_iac_provider"),
]
operations = [
migrations.AlterField(
model_name="provider",
name="provider",
field=api.db_utils.ProviderEnumField(
choices=[
("aws", "AWS"),
("azure", "Azure"),
("gcp", "GCP"),
("kubernetes", "Kubernetes"),
("m365", "M365"),
("github", "GitHub"),
("mongodbatlas", "MongoDB Atlas"),
("iac", "IaC"),
("oraclecloud", "Oracle Cloud Infrastructure"),
],
default="aws",
),
),
migrations.RunSQL(
"ALTER TYPE provider ADD VALUE IF NOT EXISTS 'mongodbatlas';",
reverse_sql=migrations.RunSQL.noop,
),
]
@@ -0,0 +1,24 @@
# Generated by Django 5.1.13 on 2025-11-06 09:20
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0055_mongodbatlas_provider"),
]
operations = [
migrations.RemoveConstraint(
model_name="provider",
name="unique_provider_uids",
),
migrations.AddConstraint(
model_name="provider",
constraint=models.UniqueConstraint(
condition=models.Q(("is_deleted", False)),
fields=("tenant_id", "provider", "uid"),
name="unique_provider_uids",
),
),
]
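A hypothetical walk-through (rows invented for illustration) of what the partial unique constraint above changes: only rows with is_deleted=False participate in the uniqueness check, so a soft-deleted provider no longer blocks re-adding the same uid:
rows = [
    {"tenant_id": "t1", "provider": "aws", "uid": "123456789012", "is_deleted": True},
    {"tenant_id": "t1", "provider": "aws", "uid": "123456789012", "is_deleted": False},
]
# Only non-deleted rows fall under the Q(is_deleted=False) condition.
active = [r for r in rows if not r["is_deleted"]]
assert len(active) == 1  # no conflict: the soft-deleted row sits outside the constraint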
@@ -0,0 +1,170 @@
# Generated by Django 5.1.13 on 2025-10-31 09:04
import uuid
import django.db.models.deletion
from django.db import migrations, models
import api.rls
class Migration(migrations.Migration):
dependencies = [
("api", "0056_remove_provider_unique_provider_uids_and_more"),
]
operations = [
migrations.CreateModel(
name="ThreatScoreSnapshot",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
(
"compliance_id",
models.CharField(
help_text="Compliance framework ID (e.g., 'prowler_threatscore_aws')",
max_length=100,
),
),
(
"overall_score",
models.DecimalField(
decimal_places=2,
help_text="Overall ThreatScore percentage (0-100)",
max_digits=5,
),
),
(
"score_delta",
models.DecimalField(
blank=True,
decimal_places=2,
help_text="Score change compared to previous snapshot (positive = improvement)",
max_digits=5,
null=True,
),
),
(
"section_scores",
models.JSONField(
blank=True,
default=dict,
help_text="ThreatScore breakdown by section",
),
),
(
"critical_requirements",
models.JSONField(
blank=True,
default=list,
help_text="List of critical failed requirements (risk >= 4)",
),
),
(
"total_requirements",
models.IntegerField(
default=0, help_text="Total number of requirements evaluated"
),
),
(
"passed_requirements",
models.IntegerField(
default=0, help_text="Number of requirements with PASS status"
),
),
(
"failed_requirements",
models.IntegerField(
default=0, help_text="Number of requirements with FAIL status"
),
),
(
"manual_requirements",
models.IntegerField(
default=0, help_text="Number of requirements with MANUAL status"
),
),
(
"total_findings",
models.IntegerField(
default=0,
help_text="Total number of findings across all requirements",
),
),
(
"passed_findings",
models.IntegerField(
default=0, help_text="Number of findings with PASS status"
),
),
(
"failed_findings",
models.IntegerField(
default=0, help_text="Number of findings with FAIL status"
),
),
(
"provider",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="threatscore_snapshots",
related_query_name="threatscore_snapshot",
to="api.provider",
),
),
(
"scan",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="threatscore_snapshots",
related_query_name="threatscore_snapshot",
to="api.scan",
),
),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "threatscore_snapshots",
"abstract": False,
},
),
migrations.AddIndex(
model_name="threatscoresnapshot",
index=models.Index(
fields=["tenant_id", "scan_id"], name="threatscore_snap_t_scan_idx"
),
),
migrations.AddIndex(
model_name="threatscoresnapshot",
index=models.Index(
fields=["tenant_id", "provider_id"], name="threatscore_snap_t_prov_idx"
),
),
migrations.AddIndex(
model_name="threatscoresnapshot",
index=models.Index(
fields=["tenant_id", "inserted_at"], name="threatscore_snap_t_time_idx"
),
),
migrations.AddConstraint(
model_name="threatscoresnapshot",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_threatscoresnapshot",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
]
@@ -0,0 +1,29 @@
from django.contrib.postgres.operations import RemoveIndexConcurrently
from django.db import migrations
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0057_threatscoresnapshot"),
]
operations = [
RemoveIndexConcurrently(
model_name="compliancerequirementoverview",
name="cro_tenant_scan_idx",
),
RemoveIndexConcurrently(
model_name="compliancerequirementoverview",
name="cro_scan_comp_idx",
),
RemoveIndexConcurrently(
model_name="compliancerequirementoverview",
name="cro_scan_comp_req_idx",
),
RemoveIndexConcurrently(
model_name="compliancerequirementoverview",
name="cro_scan_comp_req_reg_idx",
),
]
@@ -0,0 +1,75 @@
# Generated by Django 5.1.13 on 2025-10-30 15:23
import uuid
import django.db.models.deletion
from django.db import migrations, models
import api.rls
class Migration(migrations.Migration):
dependencies = [
("api", "0058_drop_redundant_compliance_requirement_indexes"),
]
operations = [
migrations.CreateModel(
name="ComplianceOverviewSummary",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
("compliance_id", models.TextField()),
("requirements_passed", models.IntegerField(default=0)),
("requirements_failed", models.IntegerField(default=0)),
("requirements_manual", models.IntegerField(default=0)),
("total_requirements", models.IntegerField(default=0)),
(
"scan",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="compliance_summaries",
related_query_name="compliance_summary",
to="api.scan",
),
),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "compliance_overview_summaries",
"abstract": False,
"indexes": [
models.Index(
fields=["tenant_id", "scan_id"], name="cos_tenant_scan_idx"
)
],
"constraints": [
models.UniqueConstraint(
fields=("tenant_id", "scan_id", "compliance_id"),
name="unique_compliance_summary_per_scan",
)
],
},
),
migrations.AddConstraint(
model_name="complianceoverviewsummary",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_complianceoverviewsummary",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
]
@@ -0,0 +1,89 @@
# Generated by Django 5.1.14 on 2025-11-19 13:03
import uuid
import django.db.models.deletion
from django.db import migrations, models
import api.rls
class Migration(migrations.Migration):
dependencies = [
("api", "0059_compliance_overview_summary"),
]
operations = [
migrations.CreateModel(
name="AttackSurfaceOverview",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
(
"attack_surface_type",
models.CharField(
choices=[
("internet-exposed", "Internet Exposed"),
("secrets", "Exposed Secrets"),
("privilege-escalation", "Privilege Escalation"),
("ec2-imdsv1", "EC2 IMDSv1 Enabled"),
],
max_length=50,
),
),
("total_findings", models.IntegerField(default=0)),
("failed_findings", models.IntegerField(default=0)),
("muted_failed_findings", models.IntegerField(default=0)),
],
options={
"db_table": "attack_surface_overviews",
"abstract": False,
},
),
migrations.AddField(
model_name="attacksurfaceoverview",
name="scan",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="attack_surface_overviews",
related_query_name="attack_surface_overview",
to="api.scan",
),
),
migrations.AddField(
model_name="attacksurfaceoverview",
name="tenant",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
migrations.AddIndex(
model_name="attacksurfaceoverview",
index=models.Index(
fields=["tenant_id", "scan_id"], name="attack_surf_tenant_scan_idx"
),
),
migrations.AddConstraint(
model_name="attacksurfaceoverview",
constraint=models.UniqueConstraint(
fields=("tenant_id", "scan_id", "attack_surface_type"),
name="unique_attack_surface_per_scan",
),
),
migrations.AddConstraint(
model_name="attacksurfaceoverview",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_attacksurfaceoverview",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
]
@@ -0,0 +1,96 @@
# Generated by Django 5.1.14 on 2025-12-03 13:38
import uuid
import django.db.models.deletion
from django.db import migrations, models
import api.rls
class Migration(migrations.Migration):
dependencies = [
("api", "0060_attack_surface_overview"),
]
operations = [
migrations.CreateModel(
name="DailySeveritySummary",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("date", models.DateField()),
("critical", models.IntegerField(default=0)),
("high", models.IntegerField(default=0)),
("medium", models.IntegerField(default=0)),
("low", models.IntegerField(default=0)),
("informational", models.IntegerField(default=0)),
("muted", models.IntegerField(default=0)),
(
"provider",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="daily_severity_summaries",
related_query_name="daily_severity_summary",
to="api.provider",
),
),
(
"scan",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="daily_severity_summaries",
related_query_name="daily_severity_summary",
to="api.scan",
),
),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
to="api.tenant",
),
),
],
options={
"db_table": "daily_severity_summaries",
"abstract": False,
},
),
migrations.AddIndex(
model_name="dailyseveritysummary",
index=models.Index(
fields=["tenant_id", "id"],
name="dss_tenant_id_idx",
),
),
migrations.AddIndex(
model_name="dailyseveritysummary",
index=models.Index(
fields=["tenant_id", "provider_id"],
name="dss_tenant_provider_idx",
),
),
migrations.AddConstraint(
model_name="dailyseveritysummary",
constraint=models.UniqueConstraint(
fields=("tenant_id", "provider", "date"),
name="unique_daily_severity_summary",
),
),
migrations.AddConstraint(
model_name="dailyseveritysummary",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_dailyseveritysummary",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
]
@@ -0,0 +1,30 @@
# Generated by Django 5.1.14 on 2025-12-10
from django.db import migrations
from tasks.tasks import backfill_daily_severity_summaries_task
from api.db_router import MainRouter
from api.rls import Tenant
def trigger_backfill_task(apps, schema_editor):
"""
Trigger the backfill task for all tenants.
This dispatches backfill_daily_severity_summaries_task for each tenant
in the system to populate DailySeveritySummary records from historical scans.
"""
tenant_ids = Tenant.objects.using(MainRouter.admin_db).values_list("id", flat=True)
for tenant_id in tenant_ids:
backfill_daily_severity_summaries_task.delay(tenant_id=str(tenant_id), days=90)
class Migration(migrations.Migration):
dependencies = [
("api", "0061_daily_severity_summary"),
]
operations = [
migrations.RunPython(trigger_backfill_task, migrations.RunPython.noop),
]
@@ -0,0 +1,108 @@
import uuid
import django.db.models.deletion
from django.db import migrations, models
import api.db_utils
import api.rls
class Migration(migrations.Migration):
dependencies = [
("api", "0062_backfill_daily_severity_summaries"),
]
operations = [
migrations.CreateModel(
name="ScanCategorySummary",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
(
"tenant_id",
models.UUIDField(db_index=True, editable=False),
),
(
"inserted_at",
models.DateTimeField(auto_now_add=True),
),
(
"scan",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="category_summaries",
related_query_name="category_summary",
to="api.scan",
),
),
(
"category",
models.CharField(max_length=100),
),
(
"severity",
api.db_utils.SeverityEnumField(
choices=[
("critical", "Critical"),
("high", "High"),
("medium", "Medium"),
("low", "Low"),
("informational", "Informational"),
],
max_length=50,
),
),
(
"total_findings",
models.IntegerField(
default=0, help_text="Non-muted findings (PASS + FAIL)"
),
),
(
"failed_findings",
models.IntegerField(
default=0,
help_text="Non-muted FAIL findings (subset of total_findings)",
),
),
(
"new_failed_findings",
models.IntegerField(
default=0,
help_text="Non-muted FAIL with delta='new' (subset of failed_findings)",
),
),
],
options={
"db_table": "scan_category_summaries",
},
),
migrations.AddIndex(
model_name="scancategorysummary",
index=models.Index(
fields=["tenant_id", "scan"], name="scs_tenant_scan_idx"
),
),
migrations.AddConstraint(
model_name="scancategorysummary",
constraint=models.UniqueConstraint(
fields=("tenant_id", "scan_id", "category", "severity"),
name="unique_category_severity_per_scan",
),
),
migrations.AddConstraint(
model_name="scancategorysummary",
constraint=api.rls.RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_scancategorysummary",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
]
@@ -0,0 +1,21 @@
import django.contrib.postgres.fields
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0063_scan_category_summary"),
]
operations = [
migrations.AddField(
model_name="finding",
name="categories",
field=django.contrib.postgres.fields.ArrayField(
base_field=models.CharField(max_length=100),
blank=True,
null=True,
size=None,
),
),
]
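A hedged sketch of how this ArrayField backs the category filters added earlier; the values are examples only, while the contains/overlap lookups match the filter definitions shown in the filters diff:
from api.models import Finding
# filter[category] uses array containment: every listed value must be present.
exposed = Finding.objects.filter(categories__contains=["internet-exposed"])
# filter[category__in] uses overlap: at least one listed value must be present.
risky = Finding.objects.filter(categories__overlap=["secrets", "ec2-imdsv1"])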
+457 -23
@@ -284,7 +284,9 @@ class Provider(RowLevelSecurityProtectedModel):
KUBERNETES = "kubernetes", _("Kubernetes")
M365 = "m365", _("M365")
GITHUB = "github", _("GitHub")
OCI = "oci", _("Oracle Cloud Infrastructure")
MONGODBATLAS = "mongodbatlas", _("MongoDB Atlas")
IAC = "iac", _("IaC")
ORACLECLOUD = "oraclecloud", _("Oracle Cloud Infrastructure")
@staticmethod
def validate_aws_uid(value):
@@ -356,14 +358,36 @@ class Provider(RowLevelSecurityProtectedModel):
)
@staticmethod
def validate_oci_uid(value):
def validate_iac_uid(value):
# Validate that it's a valid repository URL (git URL format)
if not re.match(
r"^(https?://|git@|ssh://)[^\s/]+[^\s]*\.git$|^(https?://)[^\s/]+[^\s]*$",
value,
):
raise ModelValidationError(
detail="IaC provider ID must be a valid repository URL (e.g., https://github.com/user/repo or https://github.com/user/repo.git).",
code="iac-uid",
pointer="/data/attributes/uid",
)
@staticmethod
def validate_oraclecloud_uid(value):
if not re.match(
r"^ocid1\.([a-z0-9_-]+)\.([a-z0-9_-]+)\.([a-z0-9_-]*)\.([a-z0-9]+)$", value
):
raise ModelValidationError(
detail="Oracle Cloud Infrastructure provider ID must be a valid tenancy OCID in the format: "
"ocid1.<resource_type>.<realm>.<region>.<unique_id>",
code="oci-uid",
code="oraclecloud-uid",
pointer="/data/attributes/uid",
)
@staticmethod
def validate_mongodbatlas_uid(value):
if not re.match(r"^[0-9a-fA-F]{24}$", value):
raise ModelValidationError(
detail="MongoDB Atlas organization ID must be a 24-character hexadecimal string.",
code="mongodbatlas-uid",
pointer="/data/attributes/uid",
)
@@ -401,7 +425,8 @@ class Provider(RowLevelSecurityProtectedModel):
constraints = [
models.UniqueConstraint(
fields=("tenant_id", "provider", "uid", "is_deleted"),
fields=("tenant_id", "provider", "uid"),
condition=Q(is_deleted=False),
name="unique_provider_uids",
),
RowLevelSecurityConstraint(
@@ -823,6 +848,9 @@ class Finding(PostgresPartitionedModel, RowLevelSecurityProtectedModel):
muted_reason = models.TextField(
blank=True, null=True, validators=[MinLengthValidator(3)], max_length=500
)
muted_at = models.DateTimeField(
null=True, blank=True, help_text="Timestamp when this finding was muted"
)
compliance = models.JSONField(default=dict, null=True, blank=True)
# Denormalize resource data for performance
@@ -840,6 +868,14 @@ class Finding(PostgresPartitionedModel, RowLevelSecurityProtectedModel):
null=True,
)
# Check metadata denormalization
categories = ArrayField(
models.CharField(max_length=100),
blank=True,
null=True,
help_text="Categories from check metadata for efficient filtering",
)
# Relationships
scan = models.ForeignKey(to=Scan, related_name="findings", on_delete=models.CASCADE)
@@ -1343,35 +1379,70 @@ class ComplianceRequirementOverview(RowLevelSecurityProtectedModel):
),
]
indexes = [
models.Index(fields=["tenant_id", "scan_id"], name="cro_tenant_scan_idx"),
models.Index(
fields=["tenant_id", "scan_id", "compliance_id"],
name="cro_scan_comp_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "compliance_id", "region"],
name="cro_scan_comp_reg_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "compliance_id", "requirement_id"],
name="cro_scan_comp_req_idx",
),
models.Index(
fields=[
"tenant_id",
"scan_id",
"compliance_id",
"requirement_id",
"region",
],
name="cro_scan_comp_req_reg_idx",
),
]
class JSONAPIMeta:
resource_name = "compliance-requirements-overviews"
class ComplianceOverviewSummary(RowLevelSecurityProtectedModel):
"""
Compliance overview pre-aggregated across ALL regions.
One row per (scan_id, compliance_id) combination.
This table optimizes the common case where users view overall compliance
without filtering by region. For region-specific views, the detailed
ComplianceRequirementOverview table is used instead.
"""
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
scan = models.ForeignKey(
Scan,
on_delete=models.CASCADE,
related_name="compliance_summaries",
related_query_name="compliance_summary",
)
compliance_id = models.TextField(blank=False)
# Pre-aggregated scores (computed across ALL regions)
requirements_passed = models.IntegerField(default=0)
requirements_failed = models.IntegerField(default=0)
requirements_manual = models.IntegerField(default=0)
total_requirements = models.IntegerField(default=0)
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "compliance_overview_summaries"
constraints = [
models.UniqueConstraint(
fields=("tenant_id", "scan_id", "compliance_id"),
name="unique_compliance_summary_per_scan",
),
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "DELETE"],
),
]
indexes = [
models.Index(
fields=["tenant_id", "scan_id"],
name="cos_tenant_scan_idx",
),
]
class JSONAPIMeta:
resource_name = "compliance-overview-summaries"
class ScanSummary(RowLevelSecurityProtectedModel):
objects = ActiveProviderManager()
all_objects = models.Manager()
@@ -1437,6 +1508,65 @@ class ScanSummary(RowLevelSecurityProtectedModel):
resource_name = "scan-summaries"
class DailySeveritySummary(RowLevelSecurityProtectedModel):
"""
Pre-aggregated daily severity counts per provider.
Used by the findings_severity/timeseries endpoint for efficient queries.
"""
objects = ActiveProviderManager()
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
date = models.DateField()
provider = models.ForeignKey(
Provider,
on_delete=models.CASCADE,
related_name="daily_severity_summaries",
related_query_name="daily_severity_summary",
)
scan = models.ForeignKey(
Scan,
on_delete=models.CASCADE,
related_name="daily_severity_summaries",
related_query_name="daily_severity_summary",
)
# Aggregated fail counts by severity
critical = models.IntegerField(default=0)
high = models.IntegerField(default=0)
medium = models.IntegerField(default=0)
low = models.IntegerField(default=0)
informational = models.IntegerField(default=0)
muted = models.IntegerField(default=0)
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "daily_severity_summaries"
constraints = [
models.UniqueConstraint(
fields=("tenant_id", "provider", "date"),
name="unique_daily_severity_summary",
),
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
]
indexes = [
models.Index(
fields=["tenant_id", "id"],
name="dss_tenant_id_idx",
),
models.Index(
fields=["tenant_id", "provider_id"],
name="dss_tenant_provider_idx",
),
]
class Integration(RowLevelSecurityProtectedModel):
class IntegrationChoices(models.TextChoices):
AMAZON_S3 = "amazon_s3", _("Amazon S3")
@@ -1829,6 +1959,64 @@ class ResourceScanSummary(RowLevelSecurityProtectedModel):
]
class ScanCategorySummary(RowLevelSecurityProtectedModel):
"""
Pre-aggregated category metrics per scan by severity.
Stores one row per (category, severity) combination per scan for efficient
overview queries. Categories come from check_metadata.categories.
Count relationships (each is a subset of the previous):
- total_findings >= failed_findings >= new_failed_findings
"""
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
scan = models.ForeignKey(
Scan,
on_delete=models.CASCADE,
related_name="category_summaries",
related_query_name="category_summary",
)
category = models.CharField(max_length=100)
severity = SeverityEnumField(choices=SeverityChoices)
total_findings = models.IntegerField(
default=0, help_text="Non-muted findings (PASS + FAIL)"
)
failed_findings = models.IntegerField(
default=0, help_text="Non-muted FAIL findings (subset of total_findings)"
)
new_failed_findings = models.IntegerField(
default=0,
help_text="Non-muted FAIL with delta='new' (subset of failed_findings)",
)
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "scan_category_summaries"
indexes = [
models.Index(fields=["tenant_id", "scan"], name="scs_tenant_scan_idx"),
]
constraints = [
models.UniqueConstraint(
fields=("tenant_id", "scan_id", "category", "severity"),
name="unique_category_severity_per_scan",
),
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
]
class JSONAPIMeta:
resource_name = "scan-category-summaries"
class LighthouseConfiguration(RowLevelSecurityProtectedModel):
"""
Stores configuration and API keys for LLM services.
@@ -1935,6 +2123,59 @@ class LighthouseConfiguration(RowLevelSecurityProtectedModel):
resource_name = "lighthouse-configurations"
class MuteRule(RowLevelSecurityProtectedModel):
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
updated_at = models.DateTimeField(auto_now=True, editable=False)
# Rule metadata
name = models.CharField(
max_length=100,
validators=[MinLengthValidator(3)],
help_text="Human-readable name for this rule",
)
reason = models.TextField(
validators=[MinLengthValidator(3)],
max_length=500,
help_text="Reason for muting",
)
enabled = models.BooleanField(
default=True, help_text="Whether this rule is currently enabled"
)
# Audit fields
created_by = models.ForeignKey(
User,
on_delete=models.SET_NULL,
null=True,
related_name="created_mute_rules",
help_text="User who created this rule",
)
# Rule criteria - array of finding UIDs
finding_uids = ArrayField(
models.CharField(max_length=255), help_text="List of finding UIDs to mute"
)
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "mute_rules"
constraints = [
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
models.UniqueConstraint(
fields=("tenant_id", "name"),
name="unique_mute_rule_name_per_tenant",
),
]
class JSONAPIMeta:
resource_name = "mute-rules"
class Processor(RowLevelSecurityProtectedModel):
class ProcessorChoices(models.TextChoices):
MUTELIST = "mutelist", _("Mutelist")
@@ -1983,6 +2224,8 @@ class LighthouseProviderConfiguration(RowLevelSecurityProtectedModel):
class LLMProviderChoices(models.TextChoices):
OPENAI = "openai", _("OpenAI")
BEDROCK = "bedrock", _("AWS Bedrock")
OPENAI_COMPATIBLE = "openai_compatible", _("OpenAI Compatible")
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
@@ -2156,3 +2399,194 @@ class LighthouseProviderModels(RowLevelSecurityProtectedModel):
class JSONAPIMeta:
resource_name = "lighthouse-models"
class ThreatScoreSnapshot(RowLevelSecurityProtectedModel):
"""
Stores historical ThreatScore metrics for a given scan.
Snapshots are created automatically after each ThreatScore report generation.
"""
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
scan = models.ForeignKey(
Scan,
on_delete=models.CASCADE,
related_name="threatscore_snapshots",
related_query_name="threatscore_snapshot",
)
provider = models.ForeignKey(
Provider,
on_delete=models.CASCADE,
related_name="threatscore_snapshots",
related_query_name="threatscore_snapshot",
)
compliance_id = models.CharField(
max_length=100,
blank=False,
null=False,
help_text="Compliance framework ID (e.g., 'prowler_threatscore_aws')",
)
# Overall ThreatScore metrics
overall_score = models.DecimalField(
max_digits=5,
decimal_places=2,
help_text="Overall ThreatScore percentage (0-100)",
)
# Score improvement/degradation compared to previous snapshot
score_delta = models.DecimalField(
max_digits=5,
decimal_places=2,
null=True,
blank=True,
help_text="Score change compared to previous snapshot (positive = improvement)",
)
# Section breakdown stored as JSON
# Format: {"1. IAM": 85.5, "2. Attack Surface": 92.3, ...}
section_scores = models.JSONField(
default=dict,
blank=True,
help_text="ThreatScore breakdown by section",
)
# Critical requirements metadata stored as JSON
# Format: [{"requirement_id": "...", "risk_level": 5, "weight": 150, ...}, ...]
critical_requirements = models.JSONField(
default=list,
blank=True,
help_text="List of critical failed requirements (risk >= 4)",
)
# Summary statistics
total_requirements = models.IntegerField(
default=0,
help_text="Total number of requirements evaluated",
)
passed_requirements = models.IntegerField(
default=0,
help_text="Number of requirements with PASS status",
)
failed_requirements = models.IntegerField(
default=0,
help_text="Number of requirements with FAIL status",
)
manual_requirements = models.IntegerField(
default=0,
help_text="Number of requirements with MANUAL status",
)
total_findings = models.IntegerField(
default=0,
help_text="Total number of findings across all requirements",
)
passed_findings = models.IntegerField(
default=0,
help_text="Number of findings with PASS status",
)
failed_findings = models.IntegerField(
default=0,
help_text="Number of findings with FAIL status",
)
def __str__(self):
return f"ThreatScore {self.overall_score}% for scan {self.scan_id} ({self.inserted_at})"
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "threatscore_snapshots"
constraints = [
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
]
indexes = [
models.Index(
fields=["tenant_id", "scan_id"],
name="threatscore_snap_t_scan_idx",
),
models.Index(
fields=["tenant_id", "provider_id"],
name="threatscore_snap_t_prov_idx",
),
models.Index(
fields=["tenant_id", "inserted_at"],
name="threatscore_snap_t_time_idx",
),
]
class JSONAPIMeta:
resource_name = "threatscore-snapshots"
class AttackSurfaceOverview(RowLevelSecurityProtectedModel):
"""
Pre-aggregated attack surface metrics per scan.
Stores counts for each attack surface type (internet-exposed, secrets,
privilege-escalation, ec2-imdsv1) to enable fast overview queries.
"""
class AttackSurfaceTypeChoices(models.TextChoices):
INTERNET_EXPOSED = "internet-exposed", _("Internet Exposed")
SECRETS = "secrets", _("Exposed Secrets")
PRIVILEGE_ESCALATION = "privilege-escalation", _("Privilege Escalation")
EC2_IMDSV1 = "ec2-imdsv1", _("EC2 IMDSv1 Enabled")
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
scan = models.ForeignKey(
Scan,
on_delete=models.CASCADE,
related_name="attack_surface_overviews",
related_query_name="attack_surface_overview",
)
attack_surface_type = models.CharField(
max_length=50,
choices=AttackSurfaceTypeChoices.choices,
)
# Finding counts
total_findings = models.IntegerField(default=0) # All findings (PASS + FAIL)
failed_findings = models.IntegerField(default=0) # Non-muted failed findings
muted_failed_findings = models.IntegerField(default=0) # Muted failed findings
class Meta(RowLevelSecurityProtectedModel.Meta):
db_table = "attack_surface_overviews"
constraints = [
models.UniqueConstraint(
fields=("tenant_id", "scan_id", "attack_surface_type"),
name="unique_attack_surface_per_scan",
),
RowLevelSecurityConstraint(
field="tenant_id",
name="rls_on_%(class)s",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
]
indexes = [
models.Index(
fields=["tenant_id", "scan_id"],
name="attack_surf_tenant_scan_idx",
),
]
class JSONAPIMeta:
resource_name = "attack-surface-overviews"
+2 -2

@@ -65,11 +65,11 @@ def get_providers(role: Role) -> QuerySet[Provider]:
A QuerySet of Provider objects filtered by the role's provider groups.
If the role has no provider groups, returns an empty queryset.
"""
tenant = role.tenant
tenant_id = role.tenant_id
provider_groups = role.provider_groups.all()
if not provider_groups.exists():
return Provider.objects.none()
return Provider.objects.filter(
tenant=tenant, provider_groups__in=provider_groups
tenant_id=tenant_id, provider_groups__in=provider_groups
).distinct()
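Filtering on role.tenant_id instead of role.tenant avoids loading the Tenant row just to extract its id, since the foreign key column is already present on the Role instance; the generated SQL is the same. A minimal caller sketch (names assumed):

providers = get_providers(role)  # QuerySet[Provider], possibly empty
visible_uids = list(providers.values_list("uid", flat=True))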
File diff suppressed because it is too large.
@@ -0,0 +1,39 @@
"""Tests for rls_transaction retry and fallback logic."""
import pytest
from django.db import DEFAULT_DB_ALIAS
from rest_framework_json_api.serializers import ValidationError
from api.db_utils import rls_transaction
@pytest.mark.django_db
class TestRLSTransaction:
"""Simple integration tests for rls_transaction using real DB."""
@pytest.fixture
def tenant(self, tenants_fixture):
return tenants_fixture[0]
def test_success_on_primary(self, tenant):
"""Basic: transaction succeeds on primary database."""
with rls_transaction(str(tenant.id), using=DEFAULT_DB_ALIAS) as cursor:
cursor.execute("SELECT 1")
result = cursor.fetchone()
assert result == (1,)
def test_invalid_uuid_raises_validation_error(self):
"""Invalid UUID raises ValidationError before DB operations."""
with pytest.raises(ValidationError, match="Must be a valid UUID"):
with rls_transaction("not-a-uuid", using=DEFAULT_DB_ALIAS):
pass
def test_custom_parameter_name(self, tenant):
"""Test custom RLS parameter name."""
custom_param = "api.custom_id"
with rls_transaction(
str(tenant.id), parameter=custom_param, using=DEFAULT_DB_ALIAS
) as cursor:
cursor.execute("SELECT current_setting(%s, true)", [custom_param])
result = cursor.fetchone()
assert result == (str(tenant.id),)
+510 -1
@@ -1,12 +1,15 @@
from datetime import datetime, timezone
from enum import Enum
from unittest.mock import patch
from unittest.mock import MagicMock, patch
import pytest
from django.conf import settings
from django.db import DEFAULT_DB_ALIAS, OperationalError
from freezegun import freeze_time
from rest_framework_json_api.serializers import ValidationError
from api.db_utils import (
POSTGRES_TENANT_VAR,
_should_create_index_on_partition,
batch_delete,
create_objects_in_batches,
@@ -14,11 +17,22 @@ from api.db_utils import (
generate_api_key_prefix,
generate_random_token,
one_week_from_now,
rls_transaction,
update_objects_in_batches,
)
from api.models import Provider
@pytest.fixture
def enable_read_replica():
"""
Fixture to enable READ_REPLICA_ALIAS for tests that need replica functionality.
This avoids polluting the global test configuration.
"""
with patch("api.db_utils.READ_REPLICA_ALIAS", "replica"):
yield "replica"
class TestEnumToChoices:
def test_enum_to_choices_simple(self):
class Color(Enum):
@@ -339,3 +353,498 @@ class TestGenerateApiKeyPrefix:
prefix = generate_api_key_prefix()
random_part = prefix[3:] # Strip 'pk_'
assert all(char in allowed_chars for char in random_part)
@pytest.mark.django_db
class TestRlsTransaction:
def test_rls_transaction_valid_uuid_string(self, tenants_fixture):
"""Test rls_transaction with valid UUID string."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
with rls_transaction(tenant_id) as cursor:
assert cursor is not None
cursor.execute("SELECT current_setting(%s)", [POSTGRES_TENANT_VAR])
result = cursor.fetchone()
assert result[0] == tenant_id
def test_rls_transaction_valid_uuid_object(self, tenants_fixture):
"""Test rls_transaction with UUID object."""
tenant = tenants_fixture[0]
with rls_transaction(tenant.id) as cursor:
assert cursor is not None
cursor.execute("SELECT current_setting(%s)", [POSTGRES_TENANT_VAR])
result = cursor.fetchone()
assert result[0] == str(tenant.id)
def test_rls_transaction_invalid_uuid_raises_validation_error(self):
"""Test rls_transaction raises ValidationError for invalid UUID."""
invalid_uuid = "not-a-valid-uuid"
with pytest.raises(ValidationError, match="Must be a valid UUID"):
with rls_transaction(invalid_uuid):
pass
def test_rls_transaction_uses_default_database_when_no_alias(self, tenants_fixture):
"""Test rls_transaction uses DEFAULT_DB_ALIAS when no alias specified."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
with patch("api.db_utils.get_read_db_alias", return_value=None):
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
mock_connections.__getitem__.return_value = mock_conn
mock_connections.__contains__.return_value = True
with patch("api.db_utils.transaction.atomic"):
with rls_transaction(tenant_id):
pass
mock_connections.__getitem__.assert_called_with(DEFAULT_DB_ALIAS)
def test_rls_transaction_uses_specified_alias(self, tenants_fixture):
"""Test rls_transaction uses specified database alias via using parameter."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
custom_alias = "custom_db"
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
mock_connections.__getitem__.return_value = mock_conn
mock_connections.__contains__.return_value = True
with patch("api.db_utils.transaction.atomic"):
with patch("api.db_utils.set_read_db_alias") as mock_set_alias:
with patch("api.db_utils.reset_read_db_alias") as mock_reset_alias:
mock_set_alias.return_value = "test_token"
with rls_transaction(tenant_id, using=custom_alias):
pass
mock_connections.__getitem__.assert_called_with(custom_alias)
mock_set_alias.assert_called_once_with(custom_alias)
mock_reset_alias.assert_called_once_with("test_token")
def test_rls_transaction_uses_read_replica_from_router(
self, tenants_fixture, enable_read_replica
):
"""Test rls_transaction uses read replica alias from router."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
with patch("api.db_utils.get_read_db_alias", return_value=enable_read_replica):
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
mock_connections.__getitem__.return_value = mock_conn
mock_connections.__contains__.return_value = True
with patch("api.db_utils.transaction.atomic"):
with patch("api.db_utils.set_read_db_alias") as mock_set_alias:
with patch(
"api.db_utils.reset_read_db_alias"
) as mock_reset_alias:
mock_set_alias.return_value = "test_token"
with rls_transaction(tenant_id):
pass
mock_connections.__getitem__.assert_called()
mock_set_alias.assert_called_once()
mock_reset_alias.assert_called_once()
def test_rls_transaction_fallback_to_default_when_alias_not_in_connections(
self, tenants_fixture
):
"""Test rls_transaction falls back to DEFAULT_DB_ALIAS when alias not in connections."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
invalid_alias = "nonexistent_db"
with patch("api.db_utils.get_read_db_alias", return_value=invalid_alias):
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
def contains_check(alias):
return alias == DEFAULT_DB_ALIAS
mock_connections.__contains__.side_effect = contains_check
mock_connections.__getitem__.return_value = mock_conn
with patch("api.db_utils.transaction.atomic"):
with rls_transaction(tenant_id):
pass
mock_connections.__getitem__.assert_called_with(DEFAULT_DB_ALIAS)
def test_rls_transaction_successful_execution_on_replica_no_retries(
self, tenants_fixture, enable_read_replica
):
"""Test successful execution on replica without retries."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
with patch("api.db_utils.get_read_db_alias", return_value=enable_read_replica):
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
mock_connections.__getitem__.return_value = mock_conn
mock_connections.__contains__.return_value = True
with patch("api.db_utils.transaction.atomic"):
with patch("api.db_utils.set_read_db_alias", return_value="token"):
with patch("api.db_utils.reset_read_db_alias"):
with rls_transaction(tenant_id):
pass
assert mock_cursor.execute.call_count == 1
def test_rls_transaction_retry_with_exponential_backoff_on_operational_error(
self, tenants_fixture, enable_read_replica
):
"""Test retry with exponential backoff on OperationalError on replica."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
with patch("api.db_utils.get_read_db_alias", return_value=enable_read_replica):
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
mock_connections.__getitem__.return_value = mock_conn
mock_connections.__contains__.return_value = True
call_count = 0
def atomic_side_effect(*args, **kwargs):
nonlocal call_count
call_count += 1
if call_count < 3:
raise OperationalError("Connection error")
return MagicMock(
__enter__=MagicMock(return_value=None),
__exit__=MagicMock(return_value=False),
)
with patch(
"api.db_utils.transaction.atomic", side_effect=atomic_side_effect
):
with patch("api.db_utils.time.sleep") as mock_sleep:
with patch(
"api.db_utils.set_read_db_alias", return_value="token"
):
with patch("api.db_utils.reset_read_db_alias"):
with patch("api.db_utils.logger") as mock_logger:
with rls_transaction(tenant_id):
pass
assert mock_sleep.call_count == 2
mock_sleep.assert_any_call(0.5)
mock_sleep.assert_any_call(1.0)
assert mock_logger.info.call_count == 2
def test_rls_transaction_max_three_attempts_for_replica(
self, tenants_fixture, enable_read_replica
):
"""Test maximum 3 attempts for replica database."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
with patch("api.db_utils.get_read_db_alias", return_value=enable_read_replica):
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
mock_connections.__getitem__.return_value = mock_conn
mock_connections.__contains__.return_value = True
with patch("api.db_utils.transaction.atomic") as mock_atomic:
mock_atomic.side_effect = OperationalError("Persistent error")
with patch("api.db_utils.time.sleep"):
with patch(
"api.db_utils.set_read_db_alias", return_value="token"
):
with patch("api.db_utils.reset_read_db_alias"):
with pytest.raises(OperationalError):
with rls_transaction(tenant_id):
pass
assert mock_atomic.call_count == 3
def test_rls_transaction_only_one_attempt_for_primary(self, tenants_fixture):
"""Test only 1 attempt for primary database."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
with patch("api.db_utils.get_read_db_alias", return_value=None):
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
mock_connections.__getitem__.return_value = mock_conn
mock_connections.__contains__.return_value = True
with patch("api.db_utils.transaction.atomic") as mock_atomic:
mock_atomic.side_effect = OperationalError("Primary error")
with pytest.raises(OperationalError):
with rls_transaction(tenant_id):
pass
assert mock_atomic.call_count == 1
def test_rls_transaction_fallback_to_primary_after_max_attempts(
self, tenants_fixture, enable_read_replica
):
"""Test fallback to primary DB after max attempts on replica."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
with patch("api.db_utils.get_read_db_alias", return_value=enable_read_replica):
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
mock_connections.__getitem__.return_value = mock_conn
mock_connections.__contains__.return_value = True
call_count = 0
def atomic_side_effect(*args, **kwargs):
nonlocal call_count
call_count += 1
if call_count < 3:
raise OperationalError("Replica error")
return MagicMock(
__enter__=MagicMock(return_value=None),
__exit__=MagicMock(return_value=False),
)
with patch(
"api.db_utils.transaction.atomic", side_effect=atomic_side_effect
):
with patch("api.db_utils.time.sleep"):
with patch(
"api.db_utils.set_read_db_alias", return_value="token"
):
with patch("api.db_utils.reset_read_db_alias"):
with patch("api.db_utils.logger") as mock_logger:
with rls_transaction(tenant_id):
pass
mock_logger.warning.assert_called_once()
warning_msg = mock_logger.warning.call_args[0][0]
assert "falling back to primary DB" in warning_msg
def test_rls_transaction_logger_warning_on_fallback(
self, tenants_fixture, enable_read_replica
):
"""Test logger warnings are emitted on fallback to primary."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
with patch("api.db_utils.get_read_db_alias", return_value=enable_read_replica):
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
mock_connections.__getitem__.return_value = mock_conn
mock_connections.__contains__.return_value = True
call_count = 0
def atomic_side_effect(*args, **kwargs):
nonlocal call_count
call_count += 1
if call_count < 3:
raise OperationalError("Replica error")
return MagicMock(
__enter__=MagicMock(return_value=None),
__exit__=MagicMock(return_value=False),
)
with patch(
"api.db_utils.transaction.atomic", side_effect=atomic_side_effect
):
with patch("api.db_utils.time.sleep"):
with patch(
"api.db_utils.set_read_db_alias", return_value="token"
):
with patch("api.db_utils.reset_read_db_alias"):
with patch("api.db_utils.logger") as mock_logger:
with rls_transaction(tenant_id):
pass
assert mock_logger.info.call_count == 2
assert mock_logger.warning.call_count == 1
def test_rls_transaction_operational_error_raised_immediately_on_primary(
self, tenants_fixture
):
"""Test OperationalError raised immediately on primary without retry."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
with patch("api.db_utils.get_read_db_alias", return_value=None):
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
mock_connections.__getitem__.return_value = mock_conn
mock_connections.__contains__.return_value = True
with patch("api.db_utils.transaction.atomic") as mock_atomic:
mock_atomic.side_effect = OperationalError("Primary error")
with patch("api.db_utils.time.sleep") as mock_sleep:
with pytest.raises(OperationalError):
with rls_transaction(tenant_id):
pass
mock_sleep.assert_not_called()
def test_rls_transaction_operational_error_raised_after_max_attempts(
self, tenants_fixture, enable_read_replica
):
"""Test OperationalError raised after max attempts on replica."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
with patch("api.db_utils.get_read_db_alias", return_value=enable_read_replica):
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
mock_connections.__getitem__.return_value = mock_conn
mock_connections.__contains__.return_value = True
with patch("api.db_utils.transaction.atomic") as mock_atomic:
mock_atomic.side_effect = OperationalError(
"Persistent replica error"
)
with patch("api.db_utils.time.sleep"):
with patch(
"api.db_utils.set_read_db_alias", return_value="token"
):
with patch("api.db_utils.reset_read_db_alias"):
with pytest.raises(OperationalError):
with rls_transaction(tenant_id):
pass
def test_rls_transaction_router_token_set_for_non_default_alias(
self, tenants_fixture
):
"""Test router token is set when using non-default alias."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
custom_alias = "custom_db"
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
mock_connections.__getitem__.return_value = mock_conn
mock_connections.__contains__.return_value = True
with patch("api.db_utils.transaction.atomic"):
with patch("api.db_utils.set_read_db_alias") as mock_set_alias:
with patch("api.db_utils.reset_read_db_alias") as mock_reset_alias:
mock_set_alias.return_value = "test_token"
with rls_transaction(tenant_id, using=custom_alias):
pass
mock_set_alias.assert_called_once_with(custom_alias)
mock_reset_alias.assert_called_once_with("test_token")
def test_rls_transaction_router_token_reset_in_finally_block(self, tenants_fixture):
"""Test router token is reset in finally block even on error."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
custom_alias = "custom_db"
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
mock_connections.__getitem__.return_value = mock_conn
mock_connections.__contains__.return_value = True
with patch("api.db_utils.transaction.atomic") as mock_atomic:
mock_atomic.side_effect = Exception("Unexpected error")
with patch("api.db_utils.set_read_db_alias", return_value="test_token"):
with patch("api.db_utils.reset_read_db_alias") as mock_reset_alias:
with pytest.raises(Exception):
with rls_transaction(tenant_id, using=custom_alias):
pass
mock_reset_alias.assert_called_once_with("test_token")
def test_rls_transaction_router_token_not_set_for_default_alias(
self, tenants_fixture
):
"""Test router token is not set when using default alias."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
with patch("api.db_utils.get_read_db_alias", return_value=None):
with patch("api.db_utils.connections") as mock_connections:
mock_conn = MagicMock()
mock_cursor = MagicMock()
mock_conn.cursor.return_value.__enter__.return_value = mock_cursor
mock_connections.__getitem__.return_value = mock_conn
mock_connections.__contains__.return_value = True
with patch("api.db_utils.transaction.atomic"):
with patch("api.db_utils.set_read_db_alias") as mock_set_alias:
with patch(
"api.db_utils.reset_read_db_alias"
) as mock_reset_alias:
with rls_transaction(tenant_id):
pass
mock_set_alias.assert_not_called()
mock_reset_alias.assert_not_called()
def test_rls_transaction_set_config_query_executed_with_correct_params(
self, tenants_fixture
):
"""Test SET_CONFIG_QUERY executed with correct parameters."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
with rls_transaction(tenant_id) as cursor:
cursor.execute("SELECT current_setting(%s)", [POSTGRES_TENANT_VAR])
result = cursor.fetchone()
assert result[0] == tenant_id
def test_rls_transaction_custom_parameter(self, tenants_fixture):
"""Test rls_transaction with custom parameter name."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
custom_param = "api.user_id"
with rls_transaction(tenant_id, parameter=custom_param) as cursor:
cursor.execute("SELECT current_setting(%s)", [custom_param])
result = cursor.fetchone()
assert result[0] == tenant_id
def test_rls_transaction_cursor_yielded_correctly(self, tenants_fixture):
"""Test cursor is yielded correctly."""
tenant = tenants_fixture[0]
tenant_id = str(tenant.id)
with rls_transaction(tenant_id) as cursor:
assert cursor is not None
cursor.execute("SELECT 1")
result = cursor.fetchone()
assert result[0] == 1
+143 -1
@@ -2,9 +2,12 @@ import uuid
from unittest.mock import call, patch
import pytest
from django.core.exceptions import ObjectDoesNotExist
from django.db import IntegrityError
from api.db_utils import POSTGRES_TENANT_VAR, SET_CONFIG_QUERY
from api.decorators import set_tenant
from api.decorators import handle_provider_deletion, set_tenant
from api.exceptions import ProviderDeletedException
@pytest.mark.django_db
@@ -34,3 +37,142 @@ class TestSetTenantDecorator:
with pytest.raises(KeyError):
random_func("test_arg")
@pytest.mark.django_db
class TestHandleProviderDeletionDecorator:
def test_success_no_exception(self, tenants_fixture, providers_fixture):
"""Decorated function runs normally when no exception is raised."""
tenant = tenants_fixture[0]
provider = providers_fixture[0]
@handle_provider_deletion
def task_func(**kwargs):
return "success"
result = task_func(
tenant_id=str(tenant.id),
provider_id=str(provider.id),
)
assert result == "success"
@patch("api.decorators.rls_transaction")
@patch("api.decorators.Provider.objects.filter")
def test_provider_deleted_with_provider_id(
self, mock_filter, mock_rls, tenants_fixture
):
"""Raises ProviderDeletedException when provider_id provided and provider deleted."""
tenant = tenants_fixture[0]
deleted_provider_id = str(uuid.uuid4())
mock_rls.return_value.__enter__ = lambda s: None
mock_rls.return_value.__exit__ = lambda s, *args: None
mock_filter.return_value.exists.return_value = False
@handle_provider_deletion
def task_func(**kwargs):
raise ObjectDoesNotExist("Some object not found")
with pytest.raises(ProviderDeletedException) as exc_info:
task_func(tenant_id=str(tenant.id), provider_id=deleted_provider_id)
assert deleted_provider_id in str(exc_info.value)
@patch("api.decorators.rls_transaction")
@patch("api.decorators.Provider.objects.filter")
@patch("api.decorators.Scan.objects.filter")
def test_provider_deleted_with_scan_id(
self, mock_scan_filter, mock_provider_filter, mock_rls, tenants_fixture
):
"""Raises ProviderDeletedException when scan exists but provider deleted."""
tenant = tenants_fixture[0]
scan_id = str(uuid.uuid4())
provider_id = str(uuid.uuid4())
mock_rls.return_value.__enter__ = lambda s: None
mock_rls.return_value.__exit__ = lambda s, *args: None
mock_scan = type("MockScan", (), {"provider_id": provider_id})()
mock_scan_filter.return_value.first.return_value = mock_scan
mock_provider_filter.return_value.exists.return_value = False
@handle_provider_deletion
def task_func(**kwargs):
raise ObjectDoesNotExist("Some object not found")
with pytest.raises(ProviderDeletedException) as exc_info:
task_func(tenant_id=str(tenant.id), scan_id=scan_id)
assert provider_id in str(exc_info.value)
@patch("api.decorators.rls_transaction")
@patch("api.decorators.Scan.objects.filter")
def test_scan_deleted_cascade(self, mock_scan_filter, mock_rls, tenants_fixture):
"""Raises ProviderDeletedException when scan was deleted (CASCADE from provider)."""
tenant = tenants_fixture[0]
scan_id = str(uuid.uuid4())
mock_rls.return_value.__enter__ = lambda s: None
mock_rls.return_value.__exit__ = lambda s, *args: None
mock_scan_filter.return_value.first.return_value = None
@handle_provider_deletion
def task_func(**kwargs):
raise ObjectDoesNotExist("Some object not found")
with pytest.raises(ProviderDeletedException) as exc_info:
task_func(tenant_id=str(tenant.id), scan_id=scan_id)
assert scan_id in str(exc_info.value)
@patch("api.decorators.rls_transaction")
@patch("api.decorators.Provider.objects.filter")
def test_provider_exists_reraises_original(
self, mock_filter, mock_rls, tenants_fixture, providers_fixture
):
"""Re-raises original exception when provider still exists."""
tenant = tenants_fixture[0]
provider = providers_fixture[0]
mock_rls.return_value.__enter__ = lambda s: None
mock_rls.return_value.__exit__ = lambda s, *args: None
mock_filter.return_value.exists.return_value = True
@handle_provider_deletion
def task_func(**kwargs):
raise ObjectDoesNotExist("Actual object missing")
with pytest.raises(ObjectDoesNotExist):
task_func(tenant_id=str(tenant.id), provider_id=str(provider.id))
@patch("api.decorators.rls_transaction")
@patch("api.decorators.Provider.objects.filter")
def test_integrity_error_provider_deleted(
self, mock_filter, mock_rls, tenants_fixture
):
"""Raises ProviderDeletedException on IntegrityError when provider deleted."""
tenant = tenants_fixture[0]
deleted_provider_id = str(uuid.uuid4())
mock_rls.return_value.__enter__ = lambda s: None
mock_rls.return_value.__exit__ = lambda s, *args: None
mock_filter.return_value.exists.return_value = False
@handle_provider_deletion
def task_func(**kwargs):
raise IntegrityError("FK constraint violation")
with pytest.raises(ProviderDeletedException):
task_func(tenant_id=str(tenant.id), provider_id=deleted_provider_id)
def test_missing_provider_and_scan_raises_assertion(self, tenants_fixture):
"""Raises AssertionError when neither provider_id nor scan_id in kwargs."""
@handle_provider_deletion
def task_func(**kwargs):
raise ObjectDoesNotExist("Some object not found")
with pytest.raises(AssertionError) as exc_info:
task_func(tenant_id=str(tenants_fixture[0].id))
assert "provider or scan" in str(exc_info.value)
+79 -3
@@ -20,9 +20,12 @@ from prowler.providers.aws.aws_provider import AwsProvider
from prowler.providers.aws.lib.security_hub.security_hub import SecurityHubConnection
from prowler.providers.azure.azure_provider import AzureProvider
from prowler.providers.gcp.gcp_provider import GcpProvider
from prowler.providers.github.github_provider import GithubProvider
from prowler.providers.iac.iac_provider import IacProvider
from prowler.providers.kubernetes.kubernetes_provider import KubernetesProvider
from prowler.providers.m365.m365_provider import M365Provider
from prowler.providers.oraclecloud.oci_provider import OciProvider
from prowler.providers.mongodbatlas.mongodbatlas_provider import MongodbatlasProvider
from prowler.providers.oraclecloud.oraclecloud_provider import OraclecloudProvider
class TestMergeDicts:
@@ -109,7 +112,10 @@ class TestReturnProwlerProvider:
(Provider.ProviderChoices.AZURE.value, AzureProvider),
(Provider.ProviderChoices.KUBERNETES.value, KubernetesProvider),
(Provider.ProviderChoices.M365.value, M365Provider),
(Provider.ProviderChoices.OCI.value, OciProvider),
(Provider.ProviderChoices.GITHUB.value, GithubProvider),
(Provider.ProviderChoices.MONGODBATLAS.value, MongodbatlasProvider),
(Provider.ProviderChoices.ORACLECLOUD.value, OraclecloudProvider),
(Provider.ProviderChoices.IAC.value, IacProvider),
],
)
def test_return_prowler_provider(self, provider_type, expected_provider):
@@ -206,9 +212,13 @@ class TestGetProwlerProviderKwargs:
{"organizations": ["provider_uid"]},
),
(
Provider.ProviderChoices.OCI.value,
Provider.ProviderChoices.ORACLECLOUD.value,
{},
),
(
Provider.ProviderChoices.MONGODBATLAS.value,
{"atlas_organization_id": "provider_uid"},
),
],
)
def test_get_prowler_provider_kwargs(self, provider_type, expected_extra_kwargs):
@@ -246,6 +256,72 @@ class TestGetProwlerProviderKwargs:
expected_result = {**secret_dict, "mutelist_content": {"key": "value"}}
assert result == expected_result
def test_get_prowler_provider_kwargs_iac_provider(self):
"""Test that IaC provider gets correct kwargs with repository URL."""
provider_uid = "https://github.com/org/repo"
secret_dict = {"access_token": "test_token"}
secret_mock = MagicMock()
secret_mock.secret = secret_dict
provider = MagicMock()
provider.provider = Provider.ProviderChoices.IAC.value
provider.secret = secret_mock
provider.uid = provider_uid
result = get_prowler_provider_kwargs(provider)
expected_result = {
"scan_repository_url": provider_uid,
"oauth_app_token": "test_token",
}
assert result == expected_result
def test_get_prowler_provider_kwargs_iac_provider_without_token(self):
"""Test that IaC provider works without access token for public repos."""
provider_uid = "https://github.com/org/public-repo"
secret_dict = {}
secret_mock = MagicMock()
secret_mock.secret = secret_dict
provider = MagicMock()
provider.provider = Provider.ProviderChoices.IAC.value
provider.secret = secret_mock
provider.uid = provider_uid
result = get_prowler_provider_kwargs(provider)
expected_result = {"scan_repository_url": provider_uid}
assert result == expected_result
def test_get_prowler_provider_kwargs_iac_provider_ignores_mutelist(self):
"""Test that IaC provider does NOT receive mutelist_content.
IaC provider uses Trivy's built-in mutelist logic, so it should not
receive mutelist_content even when a mutelist processor is configured.
"""
provider_uid = "https://github.com/org/repo"
secret_dict = {"access_token": "test_token"}
secret_mock = MagicMock()
secret_mock.secret = secret_dict
mutelist_processor = MagicMock()
mutelist_processor.configuration = {"Mutelist": {"key": "value"}}
provider = MagicMock()
provider.provider = Provider.ProviderChoices.IAC.value
provider.secret = secret_mock
provider.uid = provider_uid
result = get_prowler_provider_kwargs(provider, mutelist_processor)
# IaC provider should NOT have mutelist_content
assert "mutelist_content" not in result
expected_result = {
"scan_repository_url": provider_uid,
"oauth_app_token": "test_token",
}
assert result == expected_result
def test_get_prowler_provider_kwargs_unsupported_provider(self):
# Setup
provider_uid = "provider_uid"
File diff suppressed because it is too large.
+60 -12
@@ -18,9 +18,11 @@ from prowler.providers.azure.azure_provider import AzureProvider
from prowler.providers.common.models import Connection
from prowler.providers.gcp.gcp_provider import GcpProvider
from prowler.providers.github.github_provider import GithubProvider
from prowler.providers.iac.iac_provider import IacProvider
from prowler.providers.kubernetes.kubernetes_provider import KubernetesProvider
from prowler.providers.m365.m365_provider import M365Provider
from prowler.providers.oraclecloud.oci_provider import OciProvider
from prowler.providers.mongodbatlas.mongodbatlas_provider import MongodbatlasProvider
from prowler.providers.oraclecloud.oraclecloud_provider import OraclecloudProvider
class CustomOAuth2Client(OAuth2Client):
@@ -66,9 +68,11 @@ def return_prowler_provider(
| AzureProvider
| GcpProvider
| GithubProvider
| IacProvider
| KubernetesProvider
| M365Provider
| OciProvider
| MongodbatlasProvider
| OraclecloudProvider
]:
"""Return the Prowler provider class based on the given provider type.
@@ -76,7 +80,7 @@ def return_prowler_provider(
provider (Provider): The provider object containing the provider type and associated secrets.
Returns:
AwsProvider | AzureProvider | GcpProvider | GithubProvider | KubernetesProvider | M365Provider | OciProvider: The corresponding provider class.
AwsProvider | AzureProvider | GcpProvider | GithubProvider | IacProvider | KubernetesProvider | M365Provider | OraclecloudProvider | MongodbatlasProvider: The corresponding provider class.
Raises:
ValueError: If the provider type specified in `provider.provider` is not supported.
@@ -94,8 +98,12 @@ def return_prowler_provider(
prowler_provider = M365Provider
case Provider.ProviderChoices.GITHUB.value:
prowler_provider = GithubProvider
case Provider.ProviderChoices.OCI.value:
prowler_provider = OciProvider
case Provider.ProviderChoices.MONGODBATLAS.value:
prowler_provider = MongodbatlasProvider
case Provider.ProviderChoices.IAC.value:
prowler_provider = IacProvider
case Provider.ProviderChoices.ORACLECLOUD.value:
prowler_provider = OraclecloudProvider
case _:
raise ValueError(f"Provider type {provider.provider} not supported")
return prowler_provider
@@ -132,10 +140,26 @@ def get_prowler_provider_kwargs(
**prowler_provider_kwargs,
"organizations": [provider.uid],
}
elif provider.provider == Provider.ProviderChoices.IAC.value:
# For IaC provider, uid contains the repository URL
# Extract the access token if present in the secret
prowler_provider_kwargs = {
"scan_repository_url": provider.uid,
}
if "access_token" in provider.secret.secret:
prowler_provider_kwargs["oauth_app_token"] = provider.secret.secret[
"access_token"
]
elif provider.provider == Provider.ProviderChoices.MONGODBATLAS.value:
prowler_provider_kwargs = {
**prowler_provider_kwargs,
"atlas_organization_id": provider.uid,
}
if mutelist_processor:
mutelist_content = mutelist_processor.configuration.get("Mutelist", {})
if mutelist_content:
# IaC provider doesn't support mutelist (uses Trivy's built-in logic)
if mutelist_content and provider.provider != Provider.ProviderChoices.IAC.value:
prowler_provider_kwargs["mutelist_content"] = mutelist_content
return prowler_provider_kwargs
@@ -149,9 +173,11 @@ def initialize_prowler_provider(
| AzureProvider
| GcpProvider
| GithubProvider
| IacProvider
| KubernetesProvider
| M365Provider
| OciProvider
| MongodbatlasProvider
| OraclecloudProvider
):
"""Initialize a Prowler provider instance based on the given provider type.
@@ -160,8 +186,8 @@ def initialize_prowler_provider(
mutelist_processor (Processor): The mutelist processor object containing the mutelist configuration.
Returns:
AwsProvider | AzureProvider | GcpProvider | GithubProvider | KubernetesProvider | M365Provider | OciProvider: An instance of the corresponding provider class
(`AwsProvider`, `AzureProvider`, `GcpProvider`, `GithubProvider`, `KubernetesProvider`, `M365Provider` or `OciProvider`) initialized with the
AwsProvider | AzureProvider | GcpProvider | GithubProvider | IacProvider | KubernetesProvider | M365Provider | OraclecloudProvider | MongodbatlasProvider: An instance of the corresponding provider class
(`AwsProvider`, `AzureProvider`, `GcpProvider`, `GithubProvider`, `IacProvider`, `KubernetesProvider`, `M365Provider`, `OraclecloudProvider` or `MongodbatlasProvider`) initialized with the
provider's secrets.
"""
prowler_provider = return_prowler_provider(provider)
@@ -185,9 +211,23 @@ def prowler_provider_connection_test(provider: Provider) -> Connection:
except Provider.secret.RelatedObjectDoesNotExist as secret_error:
return Connection(is_connected=False, error=secret_error)
return prowler_provider.test_connection(
**prowler_provider_kwargs, provider_id=provider.uid, raise_on_exception=False
)
# For IaC provider, construct the kwargs properly for test_connection
if provider.provider == Provider.ProviderChoices.IAC.value:
# Don't pass repository_url from secret, use scan_repository_url with the UID
iac_test_kwargs = {
"scan_repository_url": provider.uid,
"raise_on_exception": False,
}
# Add access_token if present in the secret
if "access_token" in prowler_provider_kwargs:
iac_test_kwargs["access_token"] = prowler_provider_kwargs["access_token"]
return prowler_provider.test_connection(**iac_test_kwargs)
else:
return prowler_provider.test_connection(
**prowler_provider_kwargs,
provider_id=provider.uid,
raise_on_exception=False,
)
def prowler_integration_connection_test(integration: Integration) -> Connection:
@@ -342,10 +382,18 @@ def get_findings_metadata_no_aggregations(tenant_id: str, filtered_queryset):
regions = sorted({region for region in aggregation["regions"] or [] if region})
resource_types = sorted(set(aggregation["resource_types"] or []))
# Aggregate categories from findings
categories_set = set()
for categories_list in filtered_queryset.values_list("categories", flat=True):
if categories_list:
categories_set.update(categories_list)
categories = sorted(categories_set)
result = {
"services": services,
"regions": regions,
"resource_types": resource_types,
"categories": categories,
}
serializer = FindingMetadataSerializer(data=result)
@@ -1,5 +1,6 @@
import re
from drf_spectacular.utils import extend_schema_field
from rest_framework_json_api import serializers
@@ -11,3 +12,289 @@ class OpenAICredentialsSerializer(serializers.Serializer):
if not re.match(pattern, value or ""):
raise serializers.ValidationError("Invalid OpenAI API key format.")
return value
def to_internal_value(self, data):
"""Check for unknown fields before DRF filters them out."""
if not isinstance(data, dict):
raise serializers.ValidationError(
{"non_field_errors": ["Credentials must be an object"]}
)
allowed_fields = set(self.fields.keys())
provided_fields = set(data.keys())
extra_fields = provided_fields - allowed_fields
if extra_fields:
raise serializers.ValidationError(
{
"non_field_errors": [
f"Unknown fields in credentials: {', '.join(sorted(extra_fields))}"
]
}
)
return super().to_internal_value(data)
class BedrockCredentialsSerializer(serializers.Serializer):
"""
Serializer for AWS Bedrock credentials validation.
Supports two authentication methods:
1. AWS access key + secret key
2. Bedrock API key (bearer token)
In both cases, region is mandatory.
"""
access_key_id = serializers.CharField(required=False, allow_blank=False)
secret_access_key = serializers.CharField(required=False, allow_blank=False)
api_key = serializers.CharField(required=False, allow_blank=False)
region = serializers.CharField()
def validate_access_key_id(self, value: str) -> str:
"""Validate AWS access key ID format (AKIA for long-term credentials)."""
pattern = r"^AKIA[0-9A-Z]{16}$"
if not re.match(pattern, value or ""):
raise serializers.ValidationError(
"Invalid AWS access key ID format. Must be AKIA followed by 16 alphanumeric characters."
)
return value
def validate_secret_access_key(self, value: str) -> str:
"""Validate AWS secret access key format (40 base64 characters)."""
pattern = r"^[A-Za-z0-9/+=]{40}$"
if not re.match(pattern, value or ""):
raise serializers.ValidationError(
"Invalid AWS secret access key format. Must be 40 base64 characters."
)
return value
def validate_api_key(self, value: str) -> str:
"""
Validate Bedrock API key (bearer token).
"""
pattern = r"^ABSKQmVkcm9ja0FQSUtleS[A-Za-z0-9+/=]{110}$"
if not re.match(pattern, value or ""):
raise serializers.ValidationError("Invalid Bedrock API key format.")
return value
def validate_region(self, value: str) -> str:
"""Validate AWS region format."""
pattern = r"^[a-z]{2}-[a-z]+-\d+$"
if not re.match(pattern, value or ""):
raise serializers.ValidationError(
"Invalid AWS region format. Expected format like 'us-east-1' or 'eu-west-2'."
)
return value
def validate(self, attrs):
"""
Enforce either:
- access_key_id + secret_access_key + region
OR
- api_key + region
"""
access_key_id = attrs.get("access_key_id")
secret_access_key = attrs.get("secret_access_key")
api_key = attrs.get("api_key")
region = attrs.get("region")
errors = {}
if not region:
errors["region"] = ["Region is required."]
using_access_keys = bool(access_key_id or secret_access_key)
using_api_key = api_key is not None and api_key != ""
if using_access_keys and using_api_key:
errors["non_field_errors"] = [
"Provide either access key + secret key OR api key, not both."
]
elif not using_access_keys and not using_api_key:
errors["non_field_errors"] = [
"You must provide either access key + secret key OR api key."
]
elif using_access_keys:
# Both access_key_id and secret_access_key must be present together
if not access_key_id:
errors.setdefault("access_key_id", []).append(
"AWS access key ID is required when using access key authentication."
)
if not secret_access_key:
errors.setdefault("secret_access_key", []).append(
"AWS secret access key is required when using access key authentication."
)
if errors:
raise serializers.ValidationError(errors)
return attrs
def to_internal_value(self, data):
"""Check for unknown fields before DRF filters them out."""
if not isinstance(data, dict):
raise serializers.ValidationError(
{"non_field_errors": ["Credentials must be an object"]}
)
allowed_fields = set(self.fields.keys())
provided_fields = set(data.keys())
extra_fields = provided_fields - allowed_fields
if extra_fields:
raise serializers.ValidationError(
{
"non_field_errors": [
f"Unknown fields in credentials: {', '.join(sorted(extra_fields))}"
]
}
)
return super().to_internal_value(data)
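Both accepted shapes, as a minimal usage sketch with synthetic values that satisfy the format validators above:

access_key_payload = {
    "access_key_id": "AKIA" + "A" * 16,  # 'AKIA' + 16 uppercase alphanumerics
    "secret_access_key": "A" * 40,  # 40 base64-alphabet characters
    "region": "us-east-1",
}
assert BedrockCredentialsSerializer(data=access_key_payload).is_valid()

api_key_payload = {
    "api_key": "ABSKQmVkcm9ja0FQSUtleS" + "A" * 110,  # synthetic bearer token
    "region": "eu-west-2",
}
assert BedrockCredentialsSerializer(data=api_key_payload).is_valid()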
class BedrockCredentialsUpdateSerializer(BedrockCredentialsSerializer):
"""
Serializer for AWS Bedrock credentials during UPDATE operations.
Inherits all validation logic from BedrockCredentialsSerializer but makes
all fields optional to support partial updates.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Make all fields optional for updates
for field in self.fields.values():
field.required = False
def validate(self, attrs):
"""
For updates, this serializer only checks individual fields.
It does NOT enforce the "either access keys OR api key" rule.
That rule is applied later, after merging with existing stored
credentials, in LighthouseProviderConfigUpdateSerializer.
"""
return attrs
class OpenAICompatibleCredentialsSerializer(serializers.Serializer):
"""
Minimal serializer for OpenAI-compatible credentials.
Many OpenAI-compatible providers do not use the same key format as OpenAI.
We only require a non-empty API key string. Additional fields can be added later
without breaking existing configurations.
"""
api_key = serializers.CharField()
def validate_api_key(self, value: str) -> str:
if not isinstance(value, str) or not value.strip():
raise serializers.ValidationError("API key is required.")
return value.strip()
def to_internal_value(self, data):
"""Check for unknown fields before DRF filters them out."""
if not isinstance(data, dict):
raise serializers.ValidationError(
{"non_field_errors": ["Credentials must be an object"]}
)
allowed_fields = set(self.fields.keys())
provided_fields = set(data.keys())
extra_fields = provided_fields - allowed_fields
if extra_fields:
raise serializers.ValidationError(
{
"non_field_errors": [
f"Unknown fields in credentials: {', '.join(sorted(extra_fields))}"
]
}
)
return super().to_internal_value(data)
@extend_schema_field(
{
"oneOf": [
{
"type": "object",
"title": "OpenAI Credentials",
"properties": {
"api_key": {
"type": "string",
"description": "OpenAI API key. Must start with 'sk-' followed by alphanumeric characters, "
"hyphens, or underscores.",
"pattern": "^sk-[\\w-]+$",
}
},
"required": ["api_key"],
},
{
"title": "AWS Bedrock Credentials",
"oneOf": [
{
"title": "IAM Access Key Pair",
"type": "object",
"description": "Authenticate with AWS access key and secret key. Recommended when you manage IAM users or roles.",
"properties": {
"access_key_id": {
"type": "string",
"description": "AWS access key ID.",
"pattern": "^AKIA[0-9A-Z]{16}$",
},
"secret_access_key": {
"type": "string",
"description": "AWS secret access key.",
"pattern": "^[A-Za-z0-9/+=]{40}$",
},
"region": {
"type": "string",
"description": "AWS region identifier where Bedrock is available. Examples: us-east-1, "
"us-west-2, eu-west-1, ap-northeast-1.",
"pattern": "^[a-z]{2}-[a-z]+-\\d+$",
},
},
"required": ["access_key_id", "secret_access_key", "region"],
},
{
"title": "Amazon Bedrock API Key",
"type": "object",
"description": "Authenticate with an Amazon Bedrock API key (bearer token). Region is still required.",
"properties": {
"api_key": {
"type": "string",
"description": "Amazon Bedrock API key (bearer token).",
},
"region": {
"type": "string",
"description": "AWS region identifier where Bedrock is available. Examples: us-east-1, "
"us-west-2, eu-west-1, ap-northeast-1.",
"pattern": "^[a-z]{2}-[a-z]+-\\d+$",
},
},
"required": ["api_key", "region"],
},
],
},
{
"type": "object",
"title": "OpenAI Compatible Credentials",
"properties": {
"api_key": {
"type": "string",
"description": "API key for OpenAI-compatible provider. The format varies by provider. "
"Note: The 'base_url' field (separate from credentials) is required when using this provider type.",
}
},
"required": ["api_key"],
},
]
}
)
class LighthouseCredentialsField(serializers.JSONField):
pass
@@ -239,6 +239,21 @@ from rest_framework_json_api import serializers
},
"required": ["github_app_id", "github_app_key"],
},
{
"type": "object",
"title": "IaC Repository Credentials",
"properties": {
"repository_url": {
"type": "string",
"description": "Repository URL to scan for IaC files.",
},
"access_token": {
"type": "string",
"description": "Optional access token for private repositories.",
},
},
"required": ["repository_url"],
},
{
"type": "object",
"title": "Oracle Cloud Infrastructure (OCI) API Key Credentials",
@@ -274,6 +289,21 @@ from rest_framework_json_api import serializers
},
"required": ["user", "fingerprint", "tenancy", "region"],
},
{
"type": "object",
"title": "MongoDB Atlas API Key",
"properties": {
"atlas_public_key": {
"type": "string",
"description": "MongoDB Atlas API public key.",
},
"atlas_private_key": {
"type": "string",
"description": "MongoDB Atlas API private key.",
},
},
"required": ["atlas_public_key", "atlas_private_key"],
},
]
}
)
+524 -74
@@ -31,6 +31,7 @@ from api.models import (
LighthouseProviderModels,
LighthouseTenantConfiguration,
Membership,
MuteRule,
Processor,
Provider,
ProviderGroup,
@@ -46,6 +47,7 @@ from api.models import (
StatusChoices,
Task,
TenantAPIKey,
ThreatScoreSnapshot,
User,
UserRoleRelationship,
)
@@ -59,11 +61,53 @@ from api.v1.serializer_utils.integrations import (
S3ConfigSerializer,
SecurityHubConfigSerializer,
)
from api.v1.serializer_utils.lighthouse import OpenAICredentialsSerializer
from api.v1.serializer_utils.lighthouse import (
BedrockCredentialsSerializer,
BedrockCredentialsUpdateSerializer,
LighthouseCredentialsField,
OpenAICompatibleCredentialsSerializer,
OpenAICredentialsSerializer,
)
from api.v1.serializer_utils.processors import ProcessorConfigField
from api.v1.serializer_utils.providers import ProviderSecretField
from prowler.lib.mutelist.mutelist import Mutelist
# Base
class BaseModelSerializerV1(serializers.ModelSerializer):
def get_root_meta(self, _resource, _many):
return {"version": "v1"}
class BaseSerializerV1(serializers.Serializer):
def get_root_meta(self, _resource, _many):
return {"version": "v1"}
class BaseWriteSerializer(BaseModelSerializerV1):
def validate(self, data):
if hasattr(self, "initial_data"):
initial_data = set(self.initial_data.keys()) - {"id", "type"}
unknown_keys = initial_data - set(self.fields.keys())
if unknown_keys:
raise ValidationError(f"Invalid fields: {unknown_keys}")
return data
class RLSSerializer(BaseModelSerializerV1):
def create(self, validated_data):
tenant_id = self.context.get("tenant_id")
validated_data["tenant_id"] = tenant_id
return super().create(validated_data)
class StateEnumSerializerField(serializers.ChoiceField):
def __init__(self, **kwargs):
kwargs["choices"] = StateChoices.choices
super().__init__(**kwargs)
# Tokens
@@ -171,7 +215,7 @@ class TokenSocialLoginSerializer(BaseTokenSerializer):
# TODO: Check if we can change the parent class to TokenRefreshSerializer from rest_framework_simplejwt.serializers
class TokenRefreshSerializer(serializers.Serializer):
class TokenRefreshSerializer(BaseSerializerV1):
refresh = serializers.CharField()
# Output token
@@ -205,7 +249,7 @@ class TokenRefreshSerializer(serializers.Serializer):
raise ValidationError({"refresh": "Invalid or expired token"})
class TokenSwitchTenantSerializer(serializers.Serializer):
class TokenSwitchTenantSerializer(BaseSerializerV1):
tenant_id = serializers.UUIDField(
write_only=True, help_text="The tenant ID for which to request a new token."
)
@@ -229,41 +273,10 @@ class TokenSwitchTenantSerializer(serializers.Serializer):
return generate_tokens(user, tenant_id)
# Base
class BaseSerializerV1(serializers.ModelSerializer):
def get_root_meta(self, _resource, _many):
return {"version": "v1"}
class BaseWriteSerializer(BaseSerializerV1):
def validate(self, data):
if hasattr(self, "initial_data"):
initial_data = set(self.initial_data.keys()) - {"id", "type"}
unknown_keys = initial_data - set(self.fields.keys())
if unknown_keys:
raise ValidationError(f"Invalid fields: {unknown_keys}")
return data
class RLSSerializer(BaseSerializerV1):
def create(self, validated_data):
tenant_id = self.context.get("tenant_id")
validated_data["tenant_id"] = tenant_id
return super().create(validated_data)
class StateEnumSerializerField(serializers.ChoiceField):
def __init__(self, **kwargs):
kwargs["choices"] = StateChoices.choices
super().__init__(**kwargs)
# Users
class UserSerializer(BaseSerializerV1):
class UserSerializer(BaseModelSerializerV1):
"""
Serializer for the User model.
"""
@@ -394,7 +407,7 @@ class UserUpdateSerializer(BaseWriteSerializer):
return super().update(instance, validated_data)
class RoleResourceIdentifierSerializer(serializers.Serializer):
class RoleResourceIdentifierSerializer(BaseSerializerV1):
resource_type = serializers.CharField(source="type")
id = serializers.UUIDField()
@@ -577,7 +590,7 @@ class TaskSerializer(RLSSerializer, TaskBase):
# Tenants
class TenantSerializer(BaseSerializerV1):
class TenantSerializer(BaseModelSerializerV1):
"""
Serializer for the Tenant model.
"""
@@ -589,7 +602,7 @@ class TenantSerializer(BaseSerializerV1):
fields = ["id", "name", "memberships"]
class TenantIncludeSerializer(BaseSerializerV1):
class TenantIncludeSerializer(BaseModelSerializerV1):
class Meta:
model = Tenant
fields = ["id", "name"]
@@ -765,7 +778,7 @@ class ProviderGroupUpdateSerializer(ProviderGroupSerializer):
return super().update(instance, validated_data)
class ProviderResourceIdentifierSerializer(serializers.Serializer):
class ProviderResourceIdentifierSerializer(BaseSerializerV1):
resource_type = serializers.CharField(source="type")
id = serializers.UUIDField()
@@ -1102,7 +1115,7 @@ class ScanTaskSerializer(RLSSerializer):
]
class ScanReportSerializer(serializers.Serializer):
class ScanReportSerializer(BaseSerializerV1):
id = serializers.CharField(source="scan")
class Meta:
@@ -1110,7 +1123,7 @@ class ScanReportSerializer(serializers.Serializer):
fields = ["id"]
class ScanComplianceReportSerializer(serializers.Serializer):
class ScanComplianceReportSerializer(BaseSerializerV1):
id = serializers.CharField(source="scan")
name = serializers.CharField()
@@ -1159,11 +1172,17 @@ class ResourceSerializer(RLSSerializer):
"findings",
"failed_findings_count",
"url",
"metadata",
"details",
"partition",
]
extra_kwargs = {
"id": {"read_only": True},
"inserted_at": {"read_only": True},
"updated_at": {"read_only": True},
"metadata": {"read_only": True},
"details": {"read_only": True},
"partition": {"read_only": True},
}
included_serializers = {
@@ -1220,11 +1239,15 @@ class ResourceIncludeSerializer(RLSSerializer):
"service",
"type_",
"tags",
"details",
"partition",
]
extra_kwargs = {
"id": {"read_only": True},
"inserted_at": {"read_only": True},
"updated_at": {"read_only": True},
"details": {"read_only": True},
"partition": {"read_only": True},
}
@extend_schema_field(
@@ -1249,7 +1272,7 @@ class ResourceIncludeSerializer(RLSSerializer):
return fields
class ResourceMetadataSerializer(serializers.Serializer):
class ResourceMetadataSerializer(BaseSerializerV1):
services = serializers.ListField(child=serializers.CharField(), allow_empty=True)
regions = serializers.ListField(child=serializers.CharField(), allow_empty=True)
types = serializers.ListField(child=serializers.CharField(), allow_empty=True)
@@ -1278,6 +1301,7 @@ class FindingSerializer(RLSSerializer):
"severity",
"check_id",
"check_metadata",
"categories",
"raw_result",
"inserted_at",
"updated_at",
@@ -1319,7 +1343,7 @@ class FindingIncludeSerializer(RLSSerializer):
# To be removed when the related endpoint is removed as well
class FindingDynamicFilterSerializer(serializers.Serializer):
class FindingDynamicFilterSerializer(BaseSerializerV1):
services = serializers.ListField(child=serializers.CharField(), allow_empty=True)
regions = serializers.ListField(child=serializers.CharField(), allow_empty=True)
@@ -1327,12 +1351,13 @@ class FindingDynamicFilterSerializer(serializers.Serializer):
resource_name = "finding-dynamic-filters"
class FindingMetadataSerializer(serializers.Serializer):
class FindingMetadataSerializer(BaseSerializerV1):
services = serializers.ListField(child=serializers.CharField(), allow_empty=True)
regions = serializers.ListField(child=serializers.CharField(), allow_empty=True)
resource_types = serializers.ListField(
child=serializers.CharField(), allow_empty=True
)
categories = serializers.ListField(child=serializers.CharField(), allow_empty=True)
# Temporarily disabled until we implement tag filtering in the UI
# tags = serializers.JSONField(help_text="Tags are described as key-value pairs.")
@@ -1355,12 +1380,16 @@ class BaseWriteProviderSecretSerializer(BaseWriteSerializer):
serializer = GCPProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.GITHUB.value:
serializer = GithubProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.IAC.value:
serializer = IacProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.KUBERNETES.value:
serializer = KubernetesProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.M365.value:
serializer = M365ProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.OCI.value:
elif provider_type == Provider.ProviderChoices.ORACLECLOUD.value:
serializer = OracleCloudProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.MONGODBATLAS.value:
serializer = MongoDBAtlasProviderSecret(data=secret)
else:
raise serializers.ValidationError(
{"provider": f"Provider type not supported {provider_type}"}
@@ -1457,6 +1486,14 @@ class GCPServiceAccountProviderSecret(serializers.Serializer):
resource_name = "provider-secrets"
class MongoDBAtlasProviderSecret(serializers.Serializer):
atlas_public_key = serializers.CharField()
atlas_private_key = serializers.CharField()
class Meta:
resource_name = "provider-secrets"
class KubernetesProviderSecret(serializers.Serializer):
kubeconfig_content = serializers.CharField()
@@ -1474,6 +1511,14 @@ class GithubProviderSecret(serializers.Serializer):
resource_name = "provider-secrets"
class IacProviderSecret(serializers.Serializer):
repository_url = serializers.CharField()
access_token = serializers.CharField(required=False)
class Meta:
resource_name = "provider-secrets"
class OracleCloudProviderSecret(serializers.Serializer):
user = serializers.CharField()
fingerprint = serializers.CharField()
@@ -2001,7 +2046,7 @@ class RoleProviderGroupRelationshipSerializer(RLSSerializer, BaseWriteSerializer
# Compliance overview
class ComplianceOverviewSerializer(serializers.Serializer):
class ComplianceOverviewSerializer(BaseSerializerV1):
"""
Serializer for compliance requirement status aggregated by compliance framework.
@@ -2023,7 +2068,7 @@ class ComplianceOverviewSerializer(serializers.Serializer):
resource_name = "compliance-overviews"
class ComplianceOverviewDetailSerializer(serializers.Serializer):
class ComplianceOverviewDetailSerializer(BaseSerializerV1):
"""
Serializer for detailed compliance requirement information.
@@ -2052,7 +2097,7 @@ class ComplianceOverviewDetailThreatscoreSerializer(ComplianceOverviewDetailSeri
total_findings = serializers.IntegerField()
class ComplianceOverviewAttributesSerializer(serializers.Serializer):
class ComplianceOverviewAttributesSerializer(BaseSerializerV1):
id = serializers.CharField()
compliance_name = serializers.CharField()
framework_description = serializers.CharField()
@@ -2066,7 +2111,7 @@ class ComplianceOverviewAttributesSerializer(serializers.Serializer):
resource_name = "compliance-requirements-attributes"
class ComplianceOverviewMetadataSerializer(serializers.Serializer):
class ComplianceOverviewMetadataSerializer(BaseSerializerV1):
regions = serializers.ListField(child=serializers.CharField(), allow_empty=True)
class JSONAPIMeta:
@@ -2076,7 +2121,7 @@ class ComplianceOverviewMetadataSerializer(serializers.Serializer):
# Overviews
class OverviewProviderSerializer(serializers.Serializer):
class OverviewProviderSerializer(BaseSerializerV1):
id = serializers.CharField(source="provider")
findings = serializers.SerializerMethodField(read_only=True)
resources = serializers.SerializerMethodField(read_only=True)
@@ -2084,9 +2129,6 @@ class OverviewProviderSerializer(serializers.Serializer):
class JSONAPIMeta:
resource_name = "providers-overview"
def get_root_meta(self, _resource, _many):
return {"version": "v1"}
@extend_schema_field(
{
"type": "object",
@@ -2120,18 +2162,15 @@ class OverviewProviderSerializer(serializers.Serializer):
}
class OverviewProviderCountSerializer(serializers.Serializer):
class OverviewProviderCountSerializer(BaseSerializerV1):
id = serializers.CharField(source="provider")
count = serializers.IntegerField()
class JSONAPIMeta:
resource_name = "providers-count-overview"
def get_root_meta(self, _resource, _many):
return {"version": "v1"}
class OverviewFindingSerializer(serializers.Serializer):
class OverviewFindingSerializer(BaseSerializerV1):
id = serializers.CharField(default="n/a")
new = serializers.IntegerField()
changed = serializers.IntegerField()
@@ -2150,15 +2189,12 @@ class OverviewFindingSerializer(serializers.Serializer):
class JSONAPIMeta:
resource_name = "findings-overview"
def get_root_meta(self, _resource, _many):
return {"version": "v1"}
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.fields["pass"] = self.fields.pop("_pass")
class OverviewSeveritySerializer(serializers.Serializer):
class OverviewSeveritySerializer(BaseSerializerV1):
id = serializers.CharField(default="n/a")
critical = serializers.IntegerField()
high = serializers.IntegerField()
@@ -2169,11 +2205,24 @@ class OverviewSeveritySerializer(serializers.Serializer):
class JSONAPIMeta:
resource_name = "findings-severity-overview"
def get_root_meta(self, _resource, _many):
return {"version": "v1"}
class FindingsSeverityOverTimeSerializer(BaseSerializerV1):
"""Serializer for daily findings severity trend data."""
id = serializers.DateField(source="date")
critical = serializers.IntegerField()
high = serializers.IntegerField()
medium = serializers.IntegerField()
low = serializers.IntegerField()
informational = serializers.IntegerField()
muted = serializers.IntegerField()
scan_ids = serializers.ListField(child=serializers.UUIDField())
class JSONAPIMeta:
resource_name = "findings-severity-over-time"
class OverviewServiceSerializer(serializers.Serializer):
class OverviewServiceSerializer(BaseSerializerV1):
id = serializers.CharField(source="service")
total = serializers.IntegerField()
_pass = serializers.IntegerField()
@@ -2187,6 +2236,54 @@ class OverviewServiceSerializer(serializers.Serializer):
super().__init__(*args, **kwargs)
self.fields["pass"] = self.fields.pop("_pass")
class AttackSurfaceOverviewSerializer(BaseSerializerV1):
"""Serializer for attack surface overview aggregations."""
id = serializers.CharField(source="attack_surface_type")
total_findings = serializers.IntegerField()
failed_findings = serializers.IntegerField()
muted_failed_findings = serializers.IntegerField()
class JSONAPIMeta:
resource_name = "attack-surface-overviews"
class CategoryOverviewSerializer(BaseSerializerV1):
"""Serializer for category overview aggregations."""
id = serializers.CharField(source="category")
total_findings = serializers.IntegerField()
failed_findings = serializers.IntegerField()
new_failed_findings = serializers.IntegerField()
severity = serializers.JSONField(
help_text="Severity breakdown: {informational, low, medium, high, critical}"
)
class JSONAPIMeta:
resource_name = "category-overviews"
class OverviewRegionSerializer(serializers.Serializer):
id = serializers.SerializerMethodField()
provider_type = serializers.CharField()
region = serializers.CharField()
total = serializers.IntegerField()
_pass = serializers.IntegerField()
fail = serializers.IntegerField()
muted = serializers.IntegerField()
class JSONAPIMeta:
resource_name = "regions-overview"
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.fields["pass"] = self.fields.pop("_pass")
def get_id(self, obj):
"""Generate unique ID from provider_type and region."""
return f"{obj['provider_type']}:{obj['region']}"
def get_root_meta(self, _resource, _many):
return {"version": "v1"}
@@ -2194,7 +2291,7 @@ class OverviewServiceSerializer(serializers.Serializer):
# Schedules
class ScheduleDailyCreateSerializer(serializers.Serializer):
class ScheduleDailyCreateSerializer(BaseSerializerV1):
provider_id = serializers.UUIDField(required=True)
class JSONAPIMeta:
@@ -2530,7 +2627,7 @@ class IntegrationUpdateSerializer(BaseWriteIntegrationSerializer):
return representation
class IntegrationJiraDispatchSerializer(serializers.Serializer):
class IntegrationJiraDispatchSerializer(BaseSerializerV1):
"""
Serializer for dispatching findings to JIRA integration.
"""
@@ -2693,14 +2790,14 @@ class ProcessorUpdateSerializer(BaseWriteSerializer):
# SSO
class SamlInitiateSerializer(serializers.Serializer):
class SamlInitiateSerializer(BaseSerializerV1):
email_domain = serializers.CharField()
class JSONAPIMeta:
resource_name = "saml-initiate"
class SamlMetadataSerializer(serializers.Serializer):
class SamlMetadataSerializer(BaseSerializerV1):
class JSONAPIMeta:
resource_name = "saml-meta"
@@ -3075,7 +3172,12 @@ class LighthouseProviderConfigCreateSerializer(RLSSerializer, BaseWriteSerialize
Accepts credentials as JSON; stored encrypted via credentials_decoded.
"""
credentials = serializers.JSONField(write_only=True, required=True)
credentials = LighthouseCredentialsField(write_only=True, required=True)
base_url = serializers.URLField(
required=False,
allow_null=True,
help_text="Base URL for the LLM provider API. Required for 'openai_compatible' provider type.",
)
class Meta:
model = LighthouseProviderConfiguration
@@ -3087,7 +3189,10 @@ class LighthouseProviderConfigCreateSerializer(RLSSerializer, BaseWriteSerialize
]
extra_kwargs = {
"is_active": {"required": False},
"base_url": {"required": False, "allow_null": True},
"provider_type": {
"help_text": "LLM provider type. Determines which credential format to use. "
"See 'credentials' field documentation for provider-specific requirements."
},
}
def create(self, validated_data):
@@ -3110,6 +3215,7 @@ class LighthouseProviderConfigCreateSerializer(RLSSerializer, BaseWriteSerialize
def validate(self, attrs):
provider_type = attrs.get("provider_type")
credentials = attrs.get("credentials") or {}
base_url = attrs.get("base_url")
if provider_type == LighthouseProviderConfiguration.LLMProviderChoices.OPENAI:
try:
@@ -3122,6 +3228,35 @@ class LighthouseProviderConfigCreateSerializer(RLSSerializer, BaseWriteSerialize
e.detail[f"credentials/{key}"] = value
del e.detail[key]
raise e
elif (
provider_type == LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK
):
try:
BedrockCredentialsSerializer(data=credentials).is_valid(
raise_exception=True
)
except ValidationError as e:
details = e.detail.copy()
for key, value in details.items():
e.detail[f"credentials/{key}"] = value
del e.detail[key]
raise e
elif (
provider_type
== LighthouseProviderConfiguration.LLMProviderChoices.OPENAI_COMPATIBLE
):
if not base_url:
raise ValidationError({"base_url": "Base URL is required."})
try:
OpenAICompatibleCredentialsSerializer(data=credentials).is_valid(
raise_exception=True
)
except ValidationError as e:
details = e.detail.copy()
for key, value in details.items():
e.detail[f"credentials/{key}"] = value
del e.detail[key]
raise e
return super().validate(attrs)
@@ -3131,7 +3266,12 @@ class LighthouseProviderConfigUpdateSerializer(BaseWriteSerializer):
Update serializer for LighthouseProviderConfiguration.
"""
credentials = serializers.JSONField(write_only=True, required=False)
credentials = LighthouseCredentialsField(write_only=True, required=False)
base_url = serializers.URLField(
required=False,
allow_null=True,
help_text="Base URL for the LLM provider API. Required for 'openai_compatible' provider type.",
)
class Meta:
model = LighthouseProviderConfiguration
@@ -3145,7 +3285,6 @@ class LighthouseProviderConfigUpdateSerializer(BaseWriteSerializer):
extra_kwargs = {
"id": {"read_only": True},
"provider_type": {"read_only": True},
"base_url": {"required": False, "allow_null": True},
"is_active": {"required": False},
}
@@ -3156,7 +3295,11 @@ class LighthouseProviderConfigUpdateSerializer(BaseWriteSerializer):
setattr(instance, attr, value)
if credentials is not None:
instance.credentials_decoded = credentials
# Merge partial credentials with existing ones
# New values overwrite existing ones, but unspecified fields are preserved
existing_credentials = instance.credentials_decoded or {}
merged_credentials = {**existing_credentials, **credentials}
instance.credentials_decoded = merged_credentials
instance.save()
return instance
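The merge described in the comments above is a plain dict merge; a minimal sketch with hypothetical values:
existing_credentials = {"access_key_id": "AKIAEXAMPLE", "secret_access_key": "old-secret", "region": "us-east-1"}
partial_update = {"secret_access_key": "new-secret"}
merged_credentials = {**existing_credentials, **partial_update}
# merged_credentials == {"access_key_id": "AKIAEXAMPLE", "secret_access_key": "new-secret", "region": "us-east-1"}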
@@ -3164,6 +3307,7 @@ class LighthouseProviderConfigUpdateSerializer(BaseWriteSerializer):
def validate(self, attrs):
provider_type = getattr(self.instance, "provider_type", None)
credentials = attrs.get("credentials", None)
base_url = attrs.get("base_url", None)
if (
credentials is not None
@@ -3180,6 +3324,78 @@ class LighthouseProviderConfigUpdateSerializer(BaseWriteSerializer):
e.detail[f"credentials/{key}"] = value
del e.detail[key]
raise e
elif (
credentials is not None
and provider_type
== LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK
):
# For updates, enforce that the authentication method (access keys vs API key)
# is immutable. To switch methods, the UI must delete and recreate the provider.
existing_credentials = (
self.instance.credentials_decoded if self.instance else {}
) or {}
existing_uses_api_key = "api_key" in existing_credentials
existing_uses_access_keys = any(
k in existing_credentials
for k in ("access_key_id", "secret_access_key")
)
# First run field-level validation on the partial payload
try:
BedrockCredentialsUpdateSerializer(data=credentials).is_valid(
raise_exception=True
)
except ValidationError as e:
details = e.detail.copy()
for key, value in details.items():
e.detail[f"credentials/{key}"] = value
del e.detail[key]
raise e
# Then enforce invariants about not changing the auth method
# If the existing config uses an API key, forbid introducing access keys.
if existing_uses_api_key and any(
k in credentials for k in ("access_key_id", "secret_access_key")
):
raise ValidationError(
{
"credentials/non_field_errors": [
"Cannot change Bedrock authentication method from API key "
"to access key via update. Delete and recreate the provider instead."
]
}
)
# If the existing config uses access keys, forbid introducing an API key.
if existing_uses_access_keys and "api_key" in credentials:
raise ValidationError(
{
"credentials/non_field_errors": [
"Cannot change Bedrock authentication method from access key "
"to API key via update. Delete and recreate the provider instead."
]
}
)
elif (
credentials is not None
and provider_type
== LighthouseProviderConfiguration.LLMProviderChoices.OPENAI_COMPATIBLE
):
if base_url is None:
pass
elif not base_url:
raise ValidationError({"base_url": "Base URL cannot be empty."})
try:
OpenAICompatibleCredentialsSerializer(data=credentials).is_valid(
raise_exception=True
)
except ValidationError as e:
details = e.detail.copy()
for key, value in details.items():
e.detail[f"credentials/{key}"] = value
del e.detail[key]
raise e
return super().validate(attrs)
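To make the Bedrock immutability rule above concrete, a hypothetical partial update the validator rejects:
# Stored credentials use a Bedrock API key (bearer token):
existing = {"api_key": "stored-bearer-token", "region": "us-east-1"}
# Introducing access keys in the update would switch the auth method:
incoming = {"access_key_id": "AKIAEXAMPLE", "secret_access_key": "example-secret"}
# -> ValidationError on "credentials/non_field_errors"; the provider must be deleted and recreated instead.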
@@ -3345,3 +3561,237 @@ class LighthouseProviderModelsUpdateSerializer(BaseWriteSerializer):
extra_kwargs = {
"id": {"read_only": True},
}
# Mute Rules
class MuteRuleSerializer(RLSSerializer):
"""
Serializer for reading MuteRule instances.
"""
finding_uids = serializers.ListField(
child=serializers.CharField(),
read_only=True,
help_text="List of finding UIDs that are muted by this rule",
)
class Meta:
model = MuteRule
fields = [
"id",
"inserted_at",
"updated_at",
"name",
"reason",
"enabled",
"created_by",
"finding_uids",
]
included_serializers = {
"created_by": "api.v1.serializers.UserIncludeSerializer",
}
class MuteRuleCreateSerializer(RLSSerializer, BaseWriteSerializer):
"""
Serializer for creating new MuteRule instances.
Accepts finding_ids in the request, converts them to UIDs, and stores in finding_uids.
"""
finding_ids = serializers.ListField(
child=serializers.UUIDField(),
write_only=True,
required=True,
help_text="List of Finding IDs to mute (will be converted to UIDs)",
)
finding_uids = serializers.ListField(
child=serializers.CharField(),
read_only=True,
help_text="List of finding UIDs that are muted by this rule",
)
class Meta:
model = MuteRule
fields = [
"id",
"inserted_at",
"updated_at",
"name",
"reason",
"enabled",
"created_by",
"finding_ids",
"finding_uids",
]
extra_kwargs = {
"id": {"read_only": True},
"inserted_at": {"read_only": True},
"updated_at": {"read_only": True},
"enabled": {"read_only": True},
"created_by": {"read_only": True},
}
def validate_name(self, value):
"""Validate that the name is unique within the tenant."""
tenant_id = self.context.get("tenant_id")
if MuteRule.objects.filter(tenant_id=tenant_id, name=value).exists():
raise ValidationError("A mute rule with this name already exists.")
return value
def validate_finding_ids(self, value):
"""Validate that all finding IDs exist and belong to the tenant."""
if not value:
raise ValidationError("At least one finding_id must be provided.")
tenant_id = self.context.get("tenant_id")
# Check that all findings exist and belong to this tenant
findings = Finding.all_objects.filter(tenant_id=tenant_id, id__in=value)
found_ids = set(findings.values_list("id", flat=True))
provided_ids = set(value)
missing_ids = provided_ids - found_ids
if missing_ids:
raise ValidationError(
f"The following finding IDs do not exist or do not belong to your tenant: {missing_ids}"
)
return value
def validate(self, data):
"""Validate the entire mute rule, including overlap detection."""
data = super().validate(data)
tenant_id = self.context.get("tenant_id")
finding_ids = data.get("finding_ids", [])
if not finding_ids:
return data
# Convert finding IDs to UIDs (deduplicate in case multiple findings have same UID)
findings = Finding.all_objects.filter(id__in=finding_ids, tenant_id=tenant_id)
finding_uids = list(set(findings.values_list("uid", flat=True)))
# Check for overlaps with existing enabled rules
existing_rules = MuteRule.objects.filter(tenant_id=tenant_id, enabled=True)
for rule in existing_rules:
overlap = set(finding_uids) & set(rule.finding_uids)
if overlap:
raise ConflictException(
detail=f"The following finding UIDs are already muted by rule '{rule.name}': {overlap}"
)
# Store finding_uids in validated_data for create
data["finding_uids"] = finding_uids
return data
def create(self, validated_data):
"""Create a new mute rule and set created_by."""
# Remove finding_ids from validated_data (we've already converted to finding_uids)
validated_data.pop("finding_ids", None)
# Set created_by to the current user
request = self.context.get("request")
if request and hasattr(request, "user"):
validated_data["created_by"] = request.user
return super().create(validated_data)
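The overlap check in validate() reduces to a set intersection over finding UIDs; a sketch with hypothetical UIDs:
new_rule_uids = {"finding-uid-a", "finding-uid-b"}
enabled_rule_uids = {"finding-uid-b", "finding-uid-c"}
overlap = new_rule_uids & enabled_rule_uids
# overlap == {"finding-uid-b"} -> a ConflictException naming the existing rule is raised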
class MuteRuleUpdateSerializer(BaseWriteSerializer):
"""
Serializer for updating MuteRule instances.
"""
class Meta:
model = MuteRule
fields = [
"id",
"name",
"reason",
"enabled",
]
extra_kwargs = {
"id": {"read_only": True},
"name": {"required": False},
"reason": {"required": False},
"enabled": {"required": False},
}
def validate_name(self, value):
"""Validate that the name is unique within the tenant, excluding current instance."""
tenant_id = self.context.get("tenant_id")
if (
MuteRule.objects.filter(tenant_id=tenant_id, name=value)
.exclude(id=self.instance.id)
.exists()
):
raise ValidationError("A mute rule with this name already exists.")
return value
# ThreatScore Snapshots
class ThreatScoreSnapshotSerializer(RLSSerializer):
"""
Serializer for ThreatScore snapshots.
Read-only serializer for retrieving historical ThreatScore metrics.
"""
id = serializers.SerializerMethodField()
class Meta:
model = ThreatScoreSnapshot
fields = [
"id",
"inserted_at",
"scan",
"provider",
"compliance_id",
"overall_score",
"score_delta",
"section_scores",
"critical_requirements",
"total_requirements",
"passed_requirements",
"failed_requirements",
"manual_requirements",
"total_findings",
"passed_findings",
"failed_findings",
]
extra_kwargs = {
"id": {"read_only": True},
"inserted_at": {"read_only": True},
"scan": {"read_only": True},
"provider": {"read_only": True},
"compliance_id": {"read_only": True},
"overall_score": {"read_only": True},
"score_delta": {"read_only": True},
"section_scores": {"read_only": True},
"critical_requirements": {"read_only": True},
"total_requirements": {"read_only": True},
"passed_requirements": {"read_only": True},
"failed_requirements": {"read_only": True},
"manual_requirements": {"read_only": True},
"total_findings": {"read_only": True},
"passed_findings": {"read_only": True},
"failed_findings": {"read_only": True},
}
included_serializers = {
"scan": "api.v1.serializers.ScanIncludeSerializer",
"provider": "api.v1.serializers.ProviderIncludeSerializer",
}
def get_id(self, obj):
if getattr(obj, "_aggregated", False):
return "n/a"
return str(obj.id)
+2 -1

@@ -21,6 +21,7 @@ from api.v1.views import (
LighthouseProviderModelsViewSet,
LighthouseTenantConfigViewSet,
MembershipViewSet,
MuteRuleViewSet,
OverviewViewSet,
ProcessorViewSet,
ProviderGroupProvidersRelationshipView,
@@ -80,6 +81,7 @@ router.register(
LighthouseProviderModelsViewSet,
basename="lighthouse-models",
)
router.register(r"mute-rules", MuteRuleViewSet, basename="mute-rule")
tenants_router = routers.NestedSimpleRouter(router, r"tenants", lookup="tenant")
tenants_router.register(
@@ -150,7 +152,6 @@ urlpatterns = [
),
name="provider_group-providers-relationship",
),
# Lighthouse tenant config as singleton endpoint
path(
"lighthouse/configuration",
LighthouseTenantConfigViewSet.as_view(
File diff suppressed because it is too large.
+8
@@ -36,6 +36,14 @@ DATABASES = {
"HOST": env("POSTGRES_REPLICA_HOST", default=default_db_host),
"PORT": env("POSTGRES_REPLICA_PORT", default=default_db_port),
},
"admin_replica": {
"ENGINE": "psqlextra.backend",
"NAME": env("POSTGRES_REPLICA_DB", default=default_db_name),
"USER": env("POSTGRES_ADMIN_USER", default="prowler"),
"PASSWORD": env("POSTGRES_ADMIN_PASSWORD", default="S3cret"),
"HOST": env("POSTGRES_REPLICA_HOST", default=default_db_host),
"PORT": env("POSTGRES_REPLICA_PORT", default=default_db_port),
},
}
DATABASES["default"] = DATABASES["prowler_user"]
@@ -37,6 +37,14 @@ DATABASES = {
"HOST": env("POSTGRES_REPLICA_HOST", default=default_db_host),
"PORT": env("POSTGRES_REPLICA_PORT", default=default_db_port),
},
"admin_replica": {
"ENGINE": "psqlextra.backend",
"NAME": env("POSTGRES_REPLICA_DB", default=default_db_name),
"USER": env("POSTGRES_ADMIN_USER"),
"PASSWORD": env("POSTGRES_ADMIN_PASSWORD"),
"HOST": env("POSTGRES_REPLICA_HOST", default=default_db_host),
"PORT": env("POSTGRES_REPLICA_PORT", default=default_db_port),
},
}
DATABASES["default"] = DATABASES["prowler_user"]
@@ -5,6 +5,9 @@ IGNORED_EXCEPTIONS = [
# Provider is not connected due to credentials errors
"is not connected",
"ProviderConnectionError",
# Provider was deleted during a scan
"ProviderDeletedException",
"violates foreign key constraint",
# Authentication Errors from AWS
"InvalidToken",
"AccessDeniedException",
+207 -12
@@ -11,10 +11,14 @@ from django.urls import reverse
from django_celery_results.models import TaskResult
from rest_framework import status
from rest_framework.test import APIClient
from tasks.jobs.backfill import backfill_resource_scan_summaries
from tasks.jobs.backfill import (
backfill_resource_scan_summaries,
backfill_scan_category_summaries,
)
from api.db_utils import rls_transaction
from api.models import (
AttackSurfaceOverview,
ComplianceOverview,
ComplianceRequirementOverview,
Finding,
@@ -23,6 +27,7 @@ from api.models import (
Invitation,
LighthouseConfiguration,
Membership,
MuteRule,
Processor,
Provider,
ProviderGroup,
@@ -34,6 +39,7 @@ from api.models import (
SAMLConfiguration,
SAMLDomainIndex,
Scan,
ScanCategorySummary,
ScanSummary,
StateChoices,
StatusChoices,
@@ -500,13 +506,28 @@ def providers_fixture(tenants_fixture):
tenant_id=tenant.id,
)
provider7 = Provider.objects.create(
provider="oci",
provider="oraclecloud",
uid="ocid1.tenancy.oc1..aaaaaaaa3dwoazoox4q7wrvriywpokp5grlhgnkwtyt6dmwyou7no6mdmzda",
alias="oci_testing",
tenant_id=tenant.id,
)
provider8 = Provider.objects.create(
provider="mongodbatlas",
uid="64b1d3c0e4b03b1234567890",
alias="mongodbatlas_testing",
tenant_id=tenant.id,
)
return provider1, provider2, provider3, provider4, provider5, provider6, provider7
return (
provider1,
provider2,
provider3,
provider4,
provider5,
provider6,
provider7,
provider8,
)
@pytest.fixture
@@ -1092,8 +1113,8 @@ def scan_summaries_fixture(tenants_fixture, providers_fixture):
region="region1",
_pass=1,
fail=0,
muted=0,
total=1,
muted=2,
total=3,
new=1,
changed=0,
unchanged=0,
@@ -1101,7 +1122,7 @@ def scan_summaries_fixture(tenants_fixture, providers_fixture):
fail_changed=0,
pass_new=1,
pass_changed=0,
muted_new=0,
muted_new=2,
muted_changed=0,
scan=scan,
)
@@ -1114,8 +1135,8 @@ def scan_summaries_fixture(tenants_fixture, providers_fixture):
region="region2",
_pass=0,
fail=1,
muted=1,
total=2,
muted=3,
total=4,
new=2,
changed=0,
unchanged=0,
@@ -1123,7 +1144,7 @@ def scan_summaries_fixture(tenants_fixture, providers_fixture):
fail_changed=0,
pass_new=0,
pass_changed=0,
muted_new=1,
muted_new=3,
muted_changed=0,
scan=scan,
)
@@ -1136,8 +1157,8 @@ def scan_summaries_fixture(tenants_fixture, providers_fixture):
region="region1",
_pass=1,
fail=0,
muted=0,
total=1,
muted=1,
total=2,
new=1,
changed=0,
unchanged=0,
@@ -1145,7 +1166,7 @@ def scan_summaries_fixture(tenants_fixture, providers_fixture):
fail_changed=0,
pass_new=1,
pass_changed=0,
muted_new=0,
muted_new=1,
muted_changed=0,
scan=scan,
)
@@ -1254,6 +1275,113 @@ def latest_scan_finding(authenticated_client, providers_fixture, resources_fixtu
return finding
@pytest.fixture(scope="function")
def findings_with_categories(scans_fixture, resources_fixture):
scan = scans_fixture[0]
resource = resources_fixture[0]
finding = Finding.objects.create(
tenant_id=scan.tenant_id,
uid="finding_with_categories_1",
scan=scan,
delta=None,
status=Status.FAIL,
status_extended="test status",
impact=Severity.critical,
impact_extended="test impact",
severity=Severity.critical,
raw_result={"status": Status.FAIL},
check_id="genai_check",
check_metadata={"CheckId": "genai_check"},
categories=["gen-ai", "security"],
first_seen_at="2024-01-02T00:00:00Z",
)
finding.add_resources([resource])
backfill_resource_scan_summaries(str(scan.tenant_id), str(scan.id))
return finding
@pytest.fixture(scope="function")
def findings_with_multiple_categories(scans_fixture, resources_fixture):
scan = scans_fixture[0]
resource1, resource2 = resources_fixture[:2]
finding1 = Finding.objects.create(
tenant_id=scan.tenant_id,
uid="finding_multi_cat_1",
scan=scan,
delta=None,
status=Status.FAIL,
status_extended="test status",
impact=Severity.critical,
impact_extended="test impact",
severity=Severity.critical,
raw_result={"status": Status.FAIL},
check_id="genai_check",
check_metadata={"CheckId": "genai_check"},
categories=["gen-ai", "security"],
first_seen_at="2024-01-02T00:00:00Z",
)
finding1.add_resources([resource1])
finding2 = Finding.objects.create(
tenant_id=scan.tenant_id,
uid="finding_multi_cat_2",
scan=scan,
delta=None,
status=Status.FAIL,
status_extended="test status 2",
impact=Severity.high,
impact_extended="test impact 2",
severity=Severity.high,
raw_result={"status": Status.FAIL},
check_id="iam_check",
check_metadata={"CheckId": "iam_check"},
categories=["iam", "security"],
first_seen_at="2024-01-02T00:00:00Z",
)
finding2.add_resources([resource2])
backfill_resource_scan_summaries(str(scan.tenant_id), str(scan.id))
return finding1, finding2
@pytest.fixture(scope="function")
def latest_scan_finding_with_categories(
authenticated_client, providers_fixture, resources_fixture
):
provider = providers_fixture[0]
tenant_id = str(providers_fixture[0].tenant_id)
resource = resources_fixture[0]
scan = Scan.objects.create(
name="latest completed scan with categories",
provider=provider,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant_id=tenant_id,
)
finding = Finding.objects.create(
tenant_id=tenant_id,
uid="latest_finding_with_categories",
scan=scan,
delta="new",
status=Status.FAIL,
status_extended="test status",
impact=Severity.critical,
impact_extended="test impact",
severity=Severity.critical,
raw_result={"status": Status.FAIL},
check_id="genai_iam_check",
check_metadata={"CheckId": "genai_iam_check"},
categories=["gen-ai", "iam"],
first_seen_at="2024-01-02T00:00:00Z",
)
finding.add_resources([resource])
backfill_resource_scan_summaries(tenant_id, str(scan.id))
backfill_scan_category_summaries(tenant_id, str(scan.id))
return finding
@pytest.fixture(scope="function")
def latest_scan_resource(authenticated_client, providers_fixture):
provider = providers_fixture[0]
@@ -1425,6 +1553,73 @@ def api_keys_fixture(tenants_fixture, create_test_user):
return [api_key1, api_key2, api_key3]
@pytest.fixture
def mute_rules_fixture(tenants_fixture, create_test_user, findings_fixture):
"""Create test mute rules for testing."""
tenant = tenants_fixture[0]
user = create_test_user
# Create two mute rules: one enabled, one disabled
mute_rule1 = MuteRule.objects.create(
tenant_id=tenant.id,
name="Test Rule 1",
reason="Security exception for testing",
enabled=True,
created_by=user,
finding_uids=[findings_fixture[0].uid],
)
mute_rule2 = MuteRule.objects.create(
tenant_id=tenant.id,
name="Test Rule 2",
reason="Compliance exception approved",
enabled=False,
created_by=user,
finding_uids=[findings_fixture[1].uid],
)
return mute_rule1, mute_rule2
@pytest.fixture
def create_attack_surface_overview():
def _create(tenant, scan, attack_surface_type, total=10, failed=5, muted_failed=2):
return AttackSurfaceOverview.objects.create(
tenant=tenant,
scan=scan,
attack_surface_type=attack_surface_type,
total_findings=total,
failed_findings=failed,
muted_failed_findings=muted_failed,
)
return _create
@pytest.fixture
def create_scan_category_summary():
def _create(
tenant,
scan,
category,
severity,
total_findings=10,
failed_findings=5,
new_failed_findings=2,
):
return ScanCategorySummary.objects.create(
tenant=tenant,
scan=scan,
category=category,
severity=severity,
total_findings=total_findings,
failed_findings=failed_findings,
new_failed_findings=new_failed_findings,
)
return _create
def get_authorization_header(access_token: str) -> dict:
return {"Authorization": f"Bearer {access_token}"}
Two binary image files added (95 KiB and 94 KiB); contents not shown.

+1
@@ -61,4 +61,5 @@ def schedule_provider_scan(provider_instance: Provider):
"tenant_id": str(provider_instance.tenant_id),
"provider_id": provider_id,
},
countdown=5, # Avoid race conditions between the worker and the database
)
+283 -1
@@ -1,15 +1,29 @@
from collections import defaultdict
from datetime import timedelta
from django.db.models import Sum
from django.utils import timezone
from tasks.jobs.scan import aggregate_category_counts
from api.db_router import READ_REPLICA_ALIAS
from api.db_utils import rls_transaction
from api.models import (
ComplianceOverviewSummary,
ComplianceRequirementOverview,
DailySeveritySummary,
Finding,
Resource,
ResourceFindingMapping,
ResourceScanSummary,
Scan,
ScanCategorySummary,
ScanSummary,
StateChoices,
)
def backfill_resource_scan_summaries(tenant_id: str, scan_id: str):
with rls_transaction(tenant_id):
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
if ResourceScanSummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).exists():
@@ -59,3 +73,271 @@ def backfill_resource_scan_summaries(tenant_id: str, scan_id: str):
)
return {"status": "backfilled", "inserted": len(summaries)}
def backfill_compliance_summaries(tenant_id: str, scan_id: str):
"""
Backfill ComplianceOverviewSummary records for a completed scan.
This function checks if summary records already exist for the scan.
If not, it aggregates compliance requirement data and creates the summaries.
Args:
tenant_id: Target tenant UUID
scan_id: Scan UUID to backfill
Returns:
dict: Status indicating whether backfill was performed
"""
with rls_transaction(tenant_id):
if ComplianceOverviewSummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).exists():
return {"status": "already backfilled"}
with rls_transaction(tenant_id):
if not Scan.objects.filter(
tenant_id=tenant_id,
id=scan_id,
state__in=(StateChoices.COMPLETED, StateChoices.FAILED),
).exists():
return {"status": "scan is not completed"}
# Fetch all compliance requirement overview rows for this scan
requirement_rows = ComplianceRequirementOverview.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).values(
"compliance_id",
"requirement_id",
"requirement_status",
)
if not requirement_rows:
return {"status": "no compliance data to backfill"}
# Group by (compliance_id, requirement_id) across regions
requirement_statuses = defaultdict(
lambda: {"fail_count": 0, "pass_count": 0, "total_count": 0}
)
for row in requirement_rows:
compliance_id = row["compliance_id"]
requirement_id = row["requirement_id"]
requirement_status = row["requirement_status"]
# Aggregate requirement status across regions
key = (compliance_id, requirement_id)
requirement_statuses[key]["total_count"] += 1
if requirement_status == "FAIL":
requirement_statuses[key]["fail_count"] += 1
elif requirement_status == "PASS":
requirement_statuses[key]["pass_count"] += 1
# Determine per-requirement status and aggregate to compliance level
compliance_summaries = defaultdict(
lambda: {
"total_requirements": 0,
"requirements_passed": 0,
"requirements_failed": 0,
"requirements_manual": 0,
}
)
for (compliance_id, requirement_id), counts in requirement_statuses.items():
# Apply business rule: any FAIL → requirement fails
if counts["fail_count"] > 0:
req_status = "FAIL"
elif counts["pass_count"] == counts["total_count"]:
req_status = "PASS"
else:
req_status = "MANUAL"
# Aggregate to compliance level
compliance_summaries[compliance_id]["total_requirements"] += 1
if req_status == "PASS":
compliance_summaries[compliance_id]["requirements_passed"] += 1
elif req_status == "FAIL":
compliance_summaries[compliance_id]["requirements_failed"] += 1
else:
compliance_summaries[compliance_id]["requirements_manual"] += 1
# Create summary objects
summary_objects = []
for compliance_id, data in compliance_summaries.items():
summary_objects.append(
ComplianceOverviewSummary(
tenant_id=tenant_id,
scan_id=scan_id,
compliance_id=compliance_id,
requirements_passed=data["requirements_passed"],
requirements_failed=data["requirements_failed"],
requirements_manual=data["requirements_manual"],
total_requirements=data["total_requirements"],
)
)
# Bulk insert summaries
if summary_objects:
ComplianceOverviewSummary.objects.bulk_create(
summary_objects, batch_size=500, ignore_conflicts=True
)
return {"status": "backfilled", "inserted": len(summary_objects)}
def backfill_daily_severity_summaries(tenant_id: str, days: int = None):
"""
Backfill DailySeveritySummary from completed scans.
Groups by provider+date, keeps latest scan per day.
"""
created_count = 0
updated_count = 0
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
scan_filter = {
"tenant_id": tenant_id,
"state": StateChoices.COMPLETED,
"completed_at__isnull": False,
}
if days is not None:
cutoff_date = timezone.now() - timedelta(days=days)
scan_filter["completed_at__gte"] = cutoff_date
completed_scans = (
Scan.objects.filter(**scan_filter)
.order_by("provider_id", "-completed_at")
.values("id", "provider_id", "completed_at")
)
if not completed_scans:
return {"status": "no scans to backfill"}
# Keep only latest scan per provider/day
latest_scans_by_day = {}
for scan in completed_scans:
key = (scan["provider_id"], scan["completed_at"].date())
if key not in latest_scans_by_day:
latest_scans_by_day[key] = scan
# Process each provider/day
for (provider_id, scan_date), scan in latest_scans_by_day.items():
scan_id = scan["id"]
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
severity_totals = (
ScanSummary.objects.filter(
tenant_id=tenant_id,
scan_id=scan_id,
)
.values("severity")
.annotate(total_fail=Sum("fail"), total_muted=Sum("muted"))
)
severity_data = {
"critical": 0,
"high": 0,
"medium": 0,
"low": 0,
"informational": 0,
"muted": 0,
}
for row in severity_totals:
severity = row["severity"]
if severity in severity_data:
severity_data[severity] = row["total_fail"] or 0
severity_data["muted"] += row["total_muted"] or 0
with rls_transaction(tenant_id):
_, created = DailySeveritySummary.objects.update_or_create(
tenant_id=tenant_id,
provider_id=provider_id,
date=scan_date,
defaults={
"scan_id": scan_id,
"critical": severity_data["critical"],
"high": severity_data["high"],
"medium": severity_data["medium"],
"low": severity_data["low"],
"informational": severity_data["informational"],
"muted": severity_data["muted"],
},
)
if created:
created_count += 1
else:
updated_count += 1
return {
"status": "backfilled",
"created": created_count,
"updated": updated_count,
"total_days": len(latest_scans_by_day),
}
def backfill_scan_category_summaries(tenant_id: str, scan_id: str):
"""
Backfill ScanCategorySummary for a completed scan.
Aggregates category counts from all findings in the scan and creates
one ScanCategorySummary row per (category, severity) combination.
Args:
tenant_id: Target tenant UUID
scan_id: Scan UUID to backfill
Returns:
dict: Status indicating whether backfill was performed
"""
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
if ScanCategorySummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).exists():
return {"status": "already backfilled"}
if not Scan.objects.filter(
tenant_id=tenant_id,
id=scan_id,
state__in=(StateChoices.COMPLETED, StateChoices.FAILED),
).exists():
return {"status": "scan is not completed"}
category_counts: dict[tuple[str, str], dict[str, int]] = {}
for finding in Finding.all_objects.filter(
tenant_id=tenant_id, scan_id=scan_id
).values("categories", "severity", "status", "delta", "muted"):
aggregate_category_counts(
categories=finding.get("categories") or [],
severity=finding.get("severity"),
status=finding.get("status"),
delta=finding.get("delta"),
muted=finding.get("muted", False),
cache=category_counts,
)
if not category_counts:
return {"status": "no categories to backfill"}
category_summaries = [
ScanCategorySummary(
tenant_id=tenant_id,
scan_id=scan_id,
category=category,
severity=severity,
total_findings=counts["total"],
failed_findings=counts["failed"],
new_failed_findings=counts["new_failed"],
)
for (category, severity), counts in category_counts.items()
]
with rls_transaction(tenant_id):
ScanCategorySummary.objects.bulk_create(
category_summaries, batch_size=500, ignore_conflicts=True
)
return {"status": "backfilled", "categories_count": len(category_counts)}
+131 -38
@@ -15,6 +15,7 @@ from prowler.config.config import (
html_file_suffix,
json_asff_file_suffix,
json_ocsf_file_suffix,
set_output_timestamp,
)
from prowler.lib.outputs.asff.asff import ASFF
from prowler.lib.outputs.compliance.aws_well_architected.aws_well_architected import (
@@ -32,7 +33,7 @@ from prowler.lib.outputs.compliance.cis.cis_gcp import GCPCIS
from prowler.lib.outputs.compliance.cis.cis_github import GithubCIS
from prowler.lib.outputs.compliance.cis.cis_kubernetes import KubernetesCIS
from prowler.lib.outputs.compliance.cis.cis_m365 import M365CIS
from prowler.lib.outputs.compliance.cis.cis_oci import OCICIS
from prowler.lib.outputs.compliance.cis.cis_oraclecloud import OracleCloudCIS
from prowler.lib.outputs.compliance.ens.ens_aws import AWSENS
from prowler.lib.outputs.compliance.ens.ens_azure import AzureENS
from prowler.lib.outputs.compliance.ens.ens_gcp import GCPENS
@@ -58,6 +59,9 @@ from prowler.lib.outputs.compliance.prowler_threatscore.prowler_threatscore_azur
from prowler.lib.outputs.compliance.prowler_threatscore.prowler_threatscore_gcp import (
ProwlerThreatScoreGCP,
)
from prowler.lib.outputs.compliance.prowler_threatscore.prowler_threatscore_kubernetes import (
ProwlerThreatScoreKubernetes,
)
from prowler.lib.outputs.compliance.prowler_threatscore.prowler_threatscore_m365 import (
ProwlerThreatScoreM365,
)
@@ -104,6 +108,10 @@ COMPLIANCE_CLASS_MAP = {
"kubernetes": [
(lambda name: name.startswith("cis_"), KubernetesCIS),
(lambda name: name.startswith("iso27001_"), KubernetesISO27001),
(
lambda name: name == "prowler_threatscore_kubernetes",
ProwlerThreatScoreKubernetes,
),
],
"m365": [
(lambda name: name.startswith("cis_"), M365CIS),
@@ -113,8 +121,12 @@ COMPLIANCE_CLASS_MAP = {
"github": [
(lambda name: name.startswith("cis_"), GithubCIS),
],
"oci": [
(lambda name: name.startswith("cis_"), OCICIS),
"iac": [
# IaC provider doesn't have specific compliance frameworks yet
# Trivy handles its own compliance checks
],
"oraclecloud": [
(lambda name: name.startswith("cis_"), OracleCloudCIS),
],
}
@@ -230,36 +242,33 @@ def _upload_to_s3(
logger.error(f"S3 upload failed: {str(e)}")
def _generate_output_directory(
output_directory, prowler_provider: object, tenant_id: str, scan_id: str
) -> tuple[str, str, str]:
def _build_output_path(
output_directory: str,
prowler_provider: str,
tenant_id: str,
scan_id: str,
subdirectory: str = None,
) -> str:
"""
Generate a file system path for the output directory of a prowler scan.
This function constructs the output directory path by combining a base
temporary output directory, the tenant ID, the scan ID, and details about
the prowler provider along with a timestamp. The resulting path is used to
store the output files of a prowler scan.
Note:
This function depends on one external variable:
- `output_file_timestamp`: A timestamp (as a string) used to uniquely identify the output.
Build a file system path for the output directory of a prowler scan.
Args:
output_directory (str): The base output directory.
prowler_provider (object): An identifier or descriptor for the prowler provider.
Typically, this is a string indicating the provider (e.g., "aws").
prowler_provider (str): An identifier or descriptor for the prowler provider.
Typically, this is a string indicating the provider (e.g., "aws").
tenant_id (str): The unique identifier for the tenant.
scan_id (str): The unique identifier for the scan.
subdirectory (str, optional): Optional subdirectory to include in the path
(e.g., "compliance", "threatscore", "ens").
Returns:
str: The constructed file system path for the prowler scan output directory.
str: The constructed path with directory created.
Example:
>>> _generate_output_directory("/tmp", "aws", "tenant-1234", "scan-5678")
'/tmp/tenant-1234/aws/scan-5678/prowler-output-2023-02-15T12:34:56',
'/tmp/tenant-1234/aws/scan-5678/compliance/prowler-output-2023-02-15T12:34:56'
'/tmp/tenant-1234/aws/scan-5678/threatscore/prowler-output-2023-02-15T12:34:56'
>>> _build_output_path("/tmp", "aws", "tenant-1234", "scan-5678")
'/tmp/tenant-1234/scan-5678/prowler-output-aws-20230215123456'
>>> _build_output_path("/tmp", "aws", "tenant-1234", "scan-5678", "threatscore")
'/tmp/tenant-1234/scan-5678/threatscore/prowler-output-aws-20230215123456'
"""
# Sanitize the prowler provider name to ensure it is a valid directory name
prowler_provider_sanitized = re.sub(r"[^\w\-]", "-", prowler_provider)
@@ -267,23 +276,107 @@ def _generate_output_directory(
with rls_transaction(tenant_id):
started_at = Scan.objects.get(id=scan_id).started_at
set_output_timestamp(started_at)
timestamp = started_at.strftime("%Y%m%d%H%M%S")
path = (
f"{output_directory}/{tenant_id}/{scan_id}/prowler-output-"
f"{prowler_provider_sanitized}-{timestamp}"
)
if subdirectory:
path = (
f"{output_directory}/{tenant_id}/{scan_id}/{subdirectory}/prowler-output-"
f"{prowler_provider_sanitized}-{timestamp}"
)
else:
path = (
f"{output_directory}/{tenant_id}/{scan_id}/prowler-output-"
f"{prowler_provider_sanitized}-{timestamp}"
)
# Create directory for the path if it doesn't exist
os.makedirs("/".join(path.split("/")[:-1]), exist_ok=True)
compliance_path = (
f"{output_directory}/{tenant_id}/{scan_id}/compliance/prowler-output-"
f"{prowler_provider_sanitized}-{timestamp}"
)
os.makedirs("/".join(compliance_path.split("/")[:-1]), exist_ok=True)
return path
threatscore_path = (
f"{output_directory}/{tenant_id}/{scan_id}/threatscore/prowler-output-"
f"{prowler_provider_sanitized}-{timestamp}"
)
os.makedirs("/".join(threatscore_path.split("/")[:-1]), exist_ok=True)
return path, compliance_path, threatscore_path
def _generate_compliance_output_directory(
output_directory: str,
prowler_provider: str,
tenant_id: str,
scan_id: str,
compliance_framework: str,
) -> str:
"""
Generate a file system path for a compliance framework output directory.
This function constructs the output directory path specifically for a compliance
framework (e.g., "threatscore", "ens") by combining a base temporary output directory,
the tenant ID, the scan ID, the compliance framework name, and details about the
prowler provider along with a timestamp.
Args:
output_directory (str): The base output directory.
prowler_provider (str): An identifier or descriptor for the prowler provider.
Typically, this is a string indicating the provider (e.g., "aws").
tenant_id (str): The unique identifier for the tenant.
scan_id (str): The unique identifier for the scan.
compliance_framework (str): The compliance framework name (e.g., "threatscore", "ens").
Returns:
str: The path for the compliance framework output directory.
Example:
>>> _generate_compliance_output_directory("/tmp", "aws", "tenant-1234", "scan-5678", "threatscore")
'/tmp/tenant-1234/scan-5678/threatscore/prowler-output-aws-20230215123456'
>>> _generate_compliance_output_directory("/tmp", "aws", "tenant-1234", "scan-5678", "ens")
'/tmp/tenant-1234/scan-5678/ens/prowler-output-aws-20230215123456'
>>> _generate_compliance_output_directory("/tmp", "aws", "tenant-1234", "scan-5678", "nis2")
'/tmp/tenant-1234/scan-5678/nis2/prowler-output-aws-20230215123456'
"""
return _build_output_path(
output_directory,
prowler_provider,
tenant_id,
scan_id,
subdirectory=compliance_framework,
)
def _generate_output_directory(
output_directory: str,
prowler_provider: str,
tenant_id: str,
scan_id: str,
) -> tuple[str, str]:
"""
Generate file system paths for the standard and compliance output directories of a prowler scan.
This function constructs both the standard output directory path and the compliance
output directory path by combining a base temporary output directory, the tenant ID,
the scan ID, and details about the prowler provider along with a timestamp.
Args:
output_directory (str): The base output directory.
prowler_provider (str): An identifier or descriptor for the prowler provider.
Typically, this is a string indicating the provider (e.g., "aws").
tenant_id (str): The unique identifier for the tenant.
scan_id (str): The unique identifier for the scan.
Returns:
tuple[str, str]: A tuple containing (standard_path, compliance_path).
Example:
>>> _generate_output_directory("/tmp", "aws", "tenant-1234", "scan-5678")
('/tmp/tenant-1234/scan-5678/prowler-output-aws-20230215123456',
'/tmp/tenant-1234/scan-5678/compliance/prowler-output-aws-20230215123456')
"""
standard_path = _build_output_path(
output_directory, prowler_provider, tenant_id, scan_id
)
compliance_path = _build_output_path(
output_directory,
prowler_provider,
tenant_id,
scan_id,
subdirectory="compliance",
)
return standard_path, compliance_path
@@ -1,12 +1,84 @@
from typing import Dict, Set
from typing import Dict
import boto3
import openai
from botocore import UNSIGNED
from botocore.config import Config
from botocore.exceptions import BotoCoreError, ClientError
from celery.utils.log import get_task_logger
from api.models import LighthouseProviderConfiguration, LighthouseProviderModels
logger = get_task_logger(__name__)
# OpenAI model prefixes to exclude from Lighthouse model selection.
# These models do not support text chat completions with tool calling.
EXCLUDED_OPENAI_MODEL_PREFIXES = (
"dall-e", # Image generation
"whisper", # Audio transcription
"tts-", # Text-to-speech (tts-1, tts-1-hd, etc.)
"sora", # Text-to-video (sora-2, sora-2-pro, etc.)
"text-embedding", # Embeddings
"embedding", # Embeddings (alternative naming)
"text-moderation", # Content moderation
"omni-moderation", # Content moderation
"text-davinci", # Legacy completion models
"text-curie", # Legacy completion models
"text-babbage", # Legacy completion models
"text-ada", # Legacy completion models
"davinci", # Legacy completion models
"curie", # Legacy completion models
"babbage", # Legacy completion models
"ada", # Legacy completion models
"computer-use", # Computer control agent
"gpt-image", # Image generation
"gpt-audio", # Audio models
"gpt-realtime", # Realtime voice API
)
# OpenAI model substrings to exclude (patterns that can appear anywhere in model ID).
# These patterns identify non-chat model variants.
EXCLUDED_OPENAI_MODEL_SUBSTRINGS = (
"-audio-", # Audio preview models (gpt-4o-audio-preview, etc.)
"-realtime-", # Realtime preview models (gpt-4o-realtime-preview, etc.)
"-transcribe", # Transcription models (gpt-4o-transcribe, etc.)
"-tts", # TTS models (gpt-4o-mini-tts)
"-instruct", # Legacy instruct models (gpt-3.5-turbo-instruct, etc.)
)
def _extract_error_message(e: Exception) -> str:
"""
Extract a user-friendly error message from various exception types.
This function handles exceptions from different providers (OpenAI, AWS Bedrock)
and extracts the most relevant error message for display to users.
Args:
e: The exception to extract a message from.
Returns:
str: A user-friendly error message.
"""
# For OpenAI SDK errors (>= v1.0)
# OpenAI exceptions have a 'body' attribute with error details
if hasattr(e, "body") and isinstance(e.body, dict):
if "message" in e.body:
return e.body["message"]
# Sometimes nested under 'error' key
if "error" in e.body and isinstance(e.body["error"], dict):
return e.body["error"].get("message", str(e))
# For boto3 ClientError
# Boto3 exceptions have a 'response' attribute with error details
if hasattr(e, "response") and isinstance(e.response, dict):
error_info = e.response.get("Error", {})
if error_info.get("Message"):
return error_info["Message"]
# Fallback to string representation for unknown error types
return str(e)
def _extract_openai_api_key(
provider_cfg: LighthouseProviderConfiguration,
@@ -30,16 +102,132 @@ def _extract_openai_api_key(
return api_key
def _extract_openai_compatible_params(
provider_cfg: LighthouseProviderConfiguration,
) -> Dict[str, str] | None:
"""
Extract base_url and api_key for OpenAI-compatible providers.
"""
creds = provider_cfg.credentials_decoded
base_url = provider_cfg.base_url
if not isinstance(creds, dict):
return None
api_key = creds.get("api_key")
if not isinstance(api_key, str) or not api_key:
return None
if not isinstance(base_url, str) or not base_url:
return None
return {"base_url": base_url, "api_key": api_key}
def _extract_bedrock_credentials(
provider_cfg: LighthouseProviderConfiguration,
) -> Dict[str, str] | None:
"""
Safely extract AWS Bedrock credentials from a provider configuration.
Supports two authentication methods:
1. AWS access key + secret key + region
2. Bedrock API key (bearer token) + region
Args:
provider_cfg (LighthouseProviderConfiguration): The provider configuration instance
containing the credentials.
Returns:
Dict[str, str] | None: Dictionary with either:
- 'access_key_id', 'secret_access_key', and 'region' for access key auth
- 'api_key' and 'region' for API key (bearer token) auth
Returns None if credentials are invalid or missing.
"""
creds = provider_cfg.credentials_decoded
if not isinstance(creds, dict):
return None
region = creds.get("region")
if not isinstance(region, str) or not region:
return None
# Check for API key authentication first
api_key = creds.get("api_key")
if isinstance(api_key, str) and api_key:
return {
"api_key": api_key,
"region": region,
}
# Fall back to access key authentication
access_key_id = creds.get("access_key_id")
secret_access_key = creds.get("secret_access_key")
# Validate all required fields are present and are strings
if (
not isinstance(access_key_id, str)
or not access_key_id
or not isinstance(secret_access_key, str)
or not secret_access_key
):
return None
return {
"access_key_id": access_key_id,
"secret_access_key": secret_access_key,
"region": region,
}
def _create_bedrock_client(
bedrock_creds: Dict[str, str], service_name: str = "bedrock"
):
"""
Create a boto3 Bedrock client with the appropriate authentication method.
Supports two authentication methods:
1. API key (bearer token) - uses unsigned requests with Authorization header
2. AWS access key + secret key - uses standard SigV4 signing
Args:
bedrock_creds: Dictionary with either:
- 'api_key' and 'region' for API key (bearer token) auth
- 'access_key_id', 'secret_access_key', and 'region' for access key auth
service_name: The Bedrock service name. Use 'bedrock' for control plane
operations (list_foundation_models, etc.) or 'bedrock-runtime' for
inference operations.
Returns:
boto3 client configured for the specified Bedrock service.
"""
region = bedrock_creds["region"]
if "api_key" in bedrock_creds:
bearer_token = bedrock_creds["api_key"]
client = boto3.client(
service_name=service_name,
region_name=region,
config=Config(signature_version=UNSIGNED),
)
def inject_bearer_token(request, **kwargs):
request.headers["Authorization"] = f"Bearer {bearer_token}"
client.meta.events.register("before-send.*.*", inject_bearer_token)
return client
return boto3.client(
service_name=service_name,
region_name=region,
aws_access_key_id=bedrock_creds["access_key_id"],
aws_secret_access_key=bedrock_creds["secret_access_key"],
)
def check_lighthouse_provider_connection(provider_config_id: str) -> Dict:
"""
Validate a Lighthouse provider configuration by calling the provider API and
toggle its active state accordingly.
Supports the OpenAI, Amazon Bedrock, and OpenAI-compatible providers by listing
models (foundation models for Bedrock) to verify that the provided credentials are valid.
Args:
provider_config_id (str): The primary key of the `LighthouseProviderConfiguration`
provider_config_id: The primary key of the `LighthouseProviderConfiguration`
to validate.
Returns:
@@ -55,42 +243,369 @@ def check_lighthouse_provider_connection(provider_config_id: str) -> Dict:
"""
provider_cfg = LighthouseProviderConfiguration.objects.get(pk=provider_config_id)
# TODO: Add support for other providers
if (
provider_cfg.provider_type
!= LighthouseProviderConfiguration.LLMProviderChoices.OPENAI
):
return {"connected": False, "error": "Unsupported provider type"}
api_key = _extract_openai_api_key(provider_cfg)
if not api_key:
provider_cfg.is_active = False
provider_cfg.save()
return {"connected": False, "error": "API key is invalid or missing"}
try:
client = openai.OpenAI(api_key=api_key)
_ = client.models.list()
if (
provider_cfg.provider_type
== LighthouseProviderConfiguration.LLMProviderChoices.OPENAI
):
api_key = _extract_openai_api_key(provider_cfg)
if not api_key:
provider_cfg.is_active = False
provider_cfg.save()
return {"connected": False, "error": "API key is invalid or missing"}
# Test connection by listing models
client = openai.OpenAI(api_key=api_key)
_ = client.models.list()
elif (
provider_cfg.provider_type
== LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK
):
bedrock_creds = _extract_bedrock_credentials(provider_cfg)
if not bedrock_creds:
provider_cfg.is_active = False
provider_cfg.save()
return {
"connected": False,
"error": "AWS credentials are invalid or missing",
}
# Test connection by listing foundation models
bedrock_client = _create_bedrock_client(bedrock_creds)
_ = bedrock_client.list_foundation_models()
elif (
provider_cfg.provider_type
== LighthouseProviderConfiguration.LLMProviderChoices.OPENAI_COMPATIBLE
):
params = _extract_openai_compatible_params(provider_cfg)
if not params:
provider_cfg.is_active = False
provider_cfg.save()
return {
"connected": False,
"error": "Base URL or API key is invalid or missing",
}
# Test connection using OpenAI SDK with custom base_url
# Note: base_url should include version (e.g., https://openrouter.ai/api/v1)
client = openai.OpenAI(
api_key=params["api_key"],
base_url=params["base_url"],
)
_ = client.models.list()
else:
return {"connected": False, "error": "Unsupported provider type"}
# Connection successful
provider_cfg.is_active = True
provider_cfg.save()
return {"connected": True, "error": None}
except Exception as e:
error_message = _extract_error_message(e)
logger.warning(
"%s connection check failed: %s", provider_cfg.provider_type, error_message
)
provider_cfg.is_active = False
provider_cfg.save()
return {"connected": False, "error": error_message}
def _fetch_openai_models(api_key: str) -> Dict[str, str]:
"""
Fetch available models from OpenAI API.
Filters out models that don't support text input/output and tool calling,
such as image generation (DALL-E), audio transcription (Whisper),
text-to-speech (TTS), embeddings, and moderation models.
Args:
api_key: OpenAI API key for authentication.
Returns:
Dict mapping model_id to model_name. For OpenAI, both are the same
as the API doesn't provide separate display names. Only includes
models that support text input/output and tool calling.
Raises:
Exception: If the API call fails.
"""
client = openai.OpenAI(api_key=api_key)
models = client.models.list()
# Filter models to only include those supporting chat completions + tool calling
filtered_models = {}
for model in getattr(models, "data", []):
model_id = model.id
# Skip if model ID starts with excluded prefixes
if model_id.startswith(EXCLUDED_OPENAI_MODEL_PREFIXES):
continue
# Skip if model ID contains excluded substrings
if any(substring in model_id for substring in EXCLUDED_OPENAI_MODEL_SUBSTRINGS):
continue
# Include model (supports chat completions + tool calling)
filtered_models[model_id] = model_id
return filtered_models
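# Illustrative filtering behaviour (the actual prefix/substring constants are
# defined elsewhere in this module): if EXCLUDED_OPENAI_MODEL_PREFIXES contains
# "dall-e" and "whisper", a listing of ["gpt-4o", "dall-e-3", "whisper-1"]
# would be reduced to {"gpt-4o": "gpt-4o"}.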
def _fetch_openai_compatible_models(base_url: str, api_key: str) -> Dict[str, str]:
"""
Fetch available models from an OpenAI-compatible API using the OpenAI SDK.
Returns a mapping of model_id -> model_name. Prefers the 'name' attribute
if available (e.g., from OpenRouter), otherwise falls back to 'id'.
Note: base_url should include version (e.g., https://openrouter.ai/api/v1)
"""
client = openai.OpenAI(api_key=api_key, base_url=base_url)
models = client.models.list()
available_models: Dict[str, str] = {}
for model in models.data:
model_id = model.id
# Prefer provider-supplied human-friendly name when available
name = getattr(model, "name", None)
if name:
available_models[model_id] = name
else:
available_models[model_id] = model_id
return available_models
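# Illustrative call against an OpenRouter-style endpoint (model names are made
# up); note the base URL already carries the API version:
#   _fetch_openai_compatible_models(
#       base_url="https://openrouter.ai/api/v1", api_key="<api-key>"
#   )
#   # -> {"anthropic/claude-3.5-sonnet": "Anthropic: Claude 3.5 Sonnet", ...}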
def _get_region_prefix(region: str) -> str:
"""
Determine geographic prefix for AWS region.
Examples: ap-south-1 -> apac, us-east-1 -> us, eu-west-1 -> eu
"""
if region.startswith(("us-", "ca-", "sa-")):
return "us"
elif region.startswith("eu-"):
return "eu"
elif region.startswith("ap-"):
return "apac"
return "global"
def _clean_inference_profile_name(profile_name: str) -> str:
"""
Remove geographic prefix from inference profile name.
AWS includes geographic prefixes in profile names which are redundant
since the profile ID already contains this information.
Examples:
"APAC Anthropic Claude 3.5 Sonnet" -> "Anthropic Claude 3.5 Sonnet"
"GLOBAL Claude Sonnet 4.5" -> "Claude Sonnet 4.5"
"US Anthropic Claude 3 Haiku" -> "Anthropic Claude 3 Haiku"
"""
prefixes = ["APAC ", "GLOBAL ", "US ", "EU ", "APAC-", "GLOBAL-", "US-", "EU-"]
for prefix in prefixes:
if profile_name.upper().startswith(prefix.upper()):
return profile_name[len(prefix) :].strip()
return profile_name
def _supports_text_modality(input_modalities: list, output_modalities: list) -> bool:
"""Check if model supports TEXT for both input and output."""
return "TEXT" in input_modalities and "TEXT" in output_modalities
def _get_foundation_model_modalities(
bedrock_client, model_id: str
) -> tuple[list, list] | None:
"""
Fetch input and output modalities for a foundation model.
Returns:
(input_modalities, output_modalities) or None if fetch fails
"""
try:
model_info = bedrock_client.get_foundation_model(modelIdentifier=model_id)
model_details = model_info.get("modelDetails", {})
input_mods = model_details.get("inputModalities", [])
output_mods = model_details.get("outputModalities", [])
return (input_mods, output_mods)
except (BotoCoreError, ClientError) as e:
logger.debug("Could not fetch model details for %s: %s", model_id, str(e))
return None
def _extract_foundation_model_ids(profile_models: list) -> list[str]:
"""
Extract foundation model IDs from inference profile model ARNs.
Args:
profile_models: List of model references from inference profile
Returns:
List of foundation model IDs extracted from ARNs
"""
model_ids = []
for model_ref in profile_models:
model_arn = model_ref.get("modelArn", "")
if "foundation-model/" in model_arn:
model_id = model_arn.split("foundation-model/")[1]
model_ids.append(model_id)
return model_ids
def _build_inference_profile_map(
bedrock_client, region: str
) -> Dict[str, tuple[str, str]]:
"""
Build map of foundation_model_id -> best inference profile.
Returns:
Dict mapping foundation_model_id to (profile_id, profile_name)
Only includes profiles with TEXT modality support
Prefers region-matched profiles over others
"""
region_prefix = _get_region_prefix(region)
model_to_profile: Dict[str, tuple[str, str]] = {}
try:
response = bedrock_client.list_inference_profiles()
profiles = response.get("inferenceProfileSummaries", [])
for profile in profiles:
profile_id = profile.get("inferenceProfileId")
profile_name = profile.get("inferenceProfileName")
if not profile_id or not profile_name:
continue
profile_models = profile.get("models", [])
if not profile_models:
continue
foundation_model_ids = _extract_foundation_model_ids(profile_models)
if not foundation_model_ids:
continue
modalities = _get_foundation_model_modalities(
bedrock_client, foundation_model_ids[0]
)
if not modalities:
continue
input_mods, output_mods = modalities
if not _supports_text_modality(input_mods, output_mods):
continue
is_preferred = profile_id.startswith(f"{region_prefix}.")
clean_name = _clean_inference_profile_name(profile_name)
for foundation_model_id in foundation_model_ids:
if foundation_model_id not in model_to_profile:
model_to_profile[foundation_model_id] = (profile_id, clean_name)
elif is_preferred and not model_to_profile[foundation_model_id][
0
].startswith(f"{region_prefix}."):
model_to_profile[foundation_model_id] = (profile_id, clean_name)
except (BotoCoreError, ClientError) as e:
logger.info("Could not fetch inference profiles in %s: %s", region, str(e))
return model_to_profile
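# Illustrative result for region "ap-south-1", reusing the IDs from the module
# docstrings (values are examples only):
#   {
#       "anthropic.claude-3-5-sonnet-20240620-v1:0": (
#           "apac.anthropic.claude-3-5-sonnet-20240620-v1:0",
#           "Anthropic Claude 3.5 Sonnet",
#       ),
#   }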
def _check_on_demand_availability(bedrock_client, model_id: str) -> bool:
"""Check if an ON_DEMAND foundation model is entitled and available."""
try:
availability = bedrock_client.get_foundation_model_availability(
modelId=model_id
)
entitlement = availability.get("entitlementAvailability")
return entitlement == "AVAILABLE"
except (BotoCoreError, ClientError) as e:
logger.debug("Could not check availability for %s: %s", model_id, str(e))
return False
def _fetch_bedrock_models(bedrock_creds: Dict[str, str]) -> Dict[str, str]:
"""
Fetch available models from AWS Bedrock, preferring inference profiles over ON_DEMAND.
Strategy:
1. Build map of foundation_model -> best_inference_profile (with TEXT validation)
2. For each TEXT-capable foundation model:
- Use inference profile ID if available (preferred - better throughput)
- Fallback to foundation model ID if only ON_DEMAND available
3. Verify entitlement for ON_DEMAND models
Args:
bedrock_creds: Dict with 'region' and auth credentials
Returns:
Dict mapping model_id to model_name. IDs can be:
- Inference profile IDs (e.g., "apac.anthropic.claude-3-5-sonnet-20240620-v1:0")
- Foundation model IDs (e.g., "anthropic.claude-3-5-sonnet-20240620-v1:0")
"""
bedrock_client = _create_bedrock_client(bedrock_creds)
region = bedrock_creds["region"]
model_to_profile = _build_inference_profile_map(bedrock_client, region)
foundation_response = bedrock_client.list_foundation_models()
model_summaries = foundation_response.get("modelSummaries", [])
models_to_return: Dict[str, str] = {}
on_demand_models: set[str] = set()
for model in model_summaries:
input_mods = model.get("inputModalities", [])
output_mods = model.get("outputModalities", [])
if not _supports_text_modality(input_mods, output_mods):
continue
model_id = model.get("modelId")
model_name = model.get("modelName")
if not model_id or not model_name:
continue
if model_id in model_to_profile:
profile_id, profile_name = model_to_profile[model_id]
models_to_return[profile_id] = profile_name
else:
inference_types = model.get("inferenceTypesSupported", [])
if "ON_DEMAND" in inference_types:
models_to_return[model_id] = model_name
on_demand_models.add(model_id)
available_models: Dict[str, str] = {}
for model_id, model_name in models_to_return.items():
if model_id in on_demand_models:
if _check_on_demand_availability(bedrock_client, model_id):
available_models[model_id] = model_name
else:
available_models[model_id] = model_name
return available_models
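# Illustrative output mixing both cases described above (IDs borrowed from the
# docstring examples): a profile-backed model and an ON_DEMAND-only fallback.
#   {
#       "apac.anthropic.claude-3-5-sonnet-20240620-v1:0": "Anthropic Claude 3.5 Sonnet",
#       "anthropic.claude-3-haiku-20240307-v1:0": "Claude 3 Haiku",  # ON_DEMAND fallback
#   }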
def refresh_lighthouse_provider_models(provider_config_id: str) -> Dict:
"""
Refresh the catalog of models for a Lighthouse provider configuration.
Fetches the current list of models from the provider, upserts entries into
`LighthouseProviderModels`, and deletes stale entries no longer returned.
Args:
provider_config_id: The primary key of the `LighthouseProviderConfiguration`
whose models should be refreshed.
Returns:
@@ -104,45 +619,81 @@ def refresh_lighthouse_provider_models(provider_config_id: str) -> Dict:
LighthouseProviderConfiguration.DoesNotExist: If no configuration exists with the given ID.
"""
provider_cfg = LighthouseProviderConfiguration.objects.get(pk=provider_config_id)
fetched_models: Dict[str, str] = {}
try:
if (
provider_cfg.provider_type
== LighthouseProviderConfiguration.LLMProviderChoices.OPENAI
):
api_key = _extract_openai_api_key(provider_cfg)
if not api_key:
return {
"created": 0,
"updated": 0,
"deleted": 0,
"error": "API key is invalid or missing",
}
fetched_models = _fetch_openai_models(api_key)
elif (
provider_cfg.provider_type
== LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK
):
bedrock_creds = _extract_bedrock_credentials(provider_cfg)
if not bedrock_creds:
return {
"created": 0,
"updated": 0,
"deleted": 0,
"error": "AWS credentials are invalid or missing",
}
fetched_models = _fetch_bedrock_models(bedrock_creds)
elif (
provider_cfg.provider_type
== LighthouseProviderConfiguration.LLMProviderChoices.OPENAI_COMPATIBLE
):
params = _extract_openai_compatible_params(provider_cfg)
if not params:
return {
"created": 0,
"updated": 0,
"deleted": 0,
"error": "Base URL or API key is invalid or missing",
}
fetched_models = _fetch_openai_compatible_models(
params["base_url"], params["api_key"]
)
else:
return {
"created": 0,
"updated": 0,
"deleted": 0,
"error": "Unsupported provider type",
}
except Exception as e:
error_message = _extract_error_message(e)
logger.warning(
"Unexpected error refreshing %s models: %s",
provider_cfg.provider_type,
error_message,
)
return {"created": 0, "updated": 0, "deleted": 0, "error": error_message}
# Upsert models into the catalog
created = 0
updated = 0
for model_id, model_name in fetched_models.items():
obj, was_created = LighthouseProviderModels.objects.update_or_create(
tenant_id=provider_cfg.tenant_id,
provider_configuration=provider_cfg,
model_id=model_id,
defaults={
"model_name": model_id, # OpenAI doesn't return a separate display name
"model_name": model_name,
"default_parameters": {},
},
)
@@ -156,7 +707,7 @@ def refresh_lighthouse_provider_models(provider_config_id: str) -> Dict:
LighthouseProviderModels.objects.filter(
tenant_id=provider_cfg.tenant_id, provider_configuration=provider_cfg
)
.exclude(model_id__in=fetched_models.keys())
.delete()
)
+64
@@ -0,0 +1,64 @@
from celery.utils.log import get_task_logger
from config.django.base import DJANGO_FINDINGS_BATCH_SIZE
from tasks.utils import batched
from api.db_utils import rls_transaction
from api.models import Finding, MuteRule
logger = get_task_logger(__name__)
def mute_historical_findings(tenant_id: str, mute_rule_id: str):
"""
Mute historical findings that match the given mute rule.
This function processes findings in batches, updating their muted status
and adding the mute reason.
Args:
tenant_id (str): The tenant ID for RLS context
mute_rule_id (str): The ID of the mute rule to apply
Returns:
dict: Summary of the muting operation with findings_muted count
"""
findings_muted_count = 0
# Get the list of UIDs to mute and the reason
with rls_transaction(tenant_id):
mute_rule = MuteRule.objects.get(id=mute_rule_id, tenant_id=tenant_id)
finding_uids = mute_rule.finding_uids
mute_reason = mute_rule.reason
muted_at = mute_rule.inserted_at
# Query findings that match the UIDs and are not already muted
with rls_transaction(tenant_id):
findings_to_mute = Finding.objects.filter(
tenant_id=tenant_id, uid__in=finding_uids, muted=False
)
total_findings = findings_to_mute.count()
logger.info(
f"Processing {total_findings} findings for mute rule {mute_rule_id}"
)
if total_findings > 0:
for batch, is_last in batched(
findings_to_mute.iterator(), DJANGO_FINDINGS_BATCH_SIZE
):
batch_ids = [f.id for f in batch]
updated_count = Finding.all_objects.filter(
id__in=batch_ids, tenant_id=tenant_id
).update(
muted=True,
muted_at=muted_at,
muted_reason=mute_reason,
)
findings_muted_count += updated_count
logger.info(f"Muted {findings_muted_count} findings for rule {mute_rule_id}")
return {
"findings_muted": findings_muted_count,
"rule_id": mute_rule_id,
}
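# The ``batched`` helper imported from tasks.utils is expected to yield
# ``(batch, is_last)`` pairs over an iterable. A minimal sketch of such a helper,
# assuming that contract (this is not the actual implementation), could be:
#
#   from itertools import islice
#
#   def batched(iterable, batch_size):
#       iterator = iter(iterable)
#       batch = list(islice(iterator, batch_size))
#       while batch:
#           next_batch = list(islice(iterator, batch_size))
#           yield batch, not next_batch
#           batch = next_batch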
File diff suppressed because it is too large
File diff suppressed because it is too large
+214
@@ -0,0 +1,214 @@
from celery.utils.log import get_task_logger
from tasks.jobs.threatscore_utils import (
_aggregate_requirement_statistics_from_database,
_calculate_requirements_data_from_statistics,
)
from api.db_router import READ_REPLICA_ALIAS
from api.db_utils import rls_transaction
from api.models import Provider, StatusChoices
from prowler.lib.check.compliance_models import Compliance
logger = get_task_logger(__name__)
def compute_threatscore_metrics(
tenant_id: str,
scan_id: str,
provider_id: str,
compliance_id: str,
min_risk_level: int = 4,
) -> dict:
"""
Compute ThreatScore metrics for a given scan.
This function calculates all the metrics needed for a ThreatScore snapshot:
- Overall ThreatScore percentage
- Section-by-section scores
- Critical failed requirements (risk >= min_risk_level)
- Summary statistics (requirements and findings counts)
Args:
tenant_id (str): The tenant ID for Row-Level Security context.
scan_id (str): The ID of the scan to analyze.
provider_id (str): The ID of the provider used in the scan.
compliance_id (str): Compliance framework ID (e.g., "prowler_threatscore_aws").
min_risk_level (int): Minimum risk level for critical requirements. Defaults to 4.
Returns:
dict: A dictionary containing:
- overall_score (float): Overall ThreatScore percentage (0-100)
- section_scores (dict): Section name -> score percentage mapping
- critical_requirements (list): List of critical failed requirement dicts
- total_requirements (int): Total number of requirements
- passed_requirements (int): Number of PASS requirements
- failed_requirements (int): Number of FAIL requirements
- manual_requirements (int): Number of MANUAL requirements
- total_findings (int): Total findings count
- passed_findings (int): Passed findings count
- failed_findings (int): Failed findings count
Example:
>>> metrics = compute_threatscore_metrics(
... tenant_id="tenant-123",
... scan_id="scan-456",
... provider_id="provider-789",
... compliance_id="prowler_threatscore_aws"
... )
>>> print(f"Overall ThreatScore: {metrics['overall_score']:.2f}%")
"""
# Get provider and compliance information
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
provider_obj = Provider.objects.get(id=provider_id)
provider_type = provider_obj.provider
frameworks_bulk = Compliance.get_bulk(provider_type)
compliance_obj = frameworks_bulk[compliance_id]
# Aggregate requirement statistics from database
requirement_statistics_by_check_id = (
_aggregate_requirement_statistics_from_database(tenant_id, scan_id)
)
# Calculate requirements data using aggregated statistics
attributes_by_requirement_id, requirements_list = (
_calculate_requirements_data_from_statistics(
compliance_obj, requirement_statistics_by_check_id
)
)
# Initialize metrics
overall_numerator = 0
overall_denominator = 0
overall_has_findings = False
sections_data = {}
total_requirements = len(requirements_list)
passed_requirements = 0
failed_requirements = 0
manual_requirements = 0
total_findings = 0
passed_findings = 0
failed_findings = 0
critical_requirements_list = []
# Process each requirement
for requirement in requirements_list:
requirement_id = requirement["id"]
requirement_status = requirement["attributes"]["status"]
requirement_attributes = attributes_by_requirement_id.get(requirement_id, {})
# Count requirements by status
if requirement_status == StatusChoices.PASS:
passed_requirements += 1
elif requirement_status == StatusChoices.FAIL:
failed_requirements += 1
elif requirement_status == StatusChoices.MANUAL:
manual_requirements += 1
# Get findings data
req_passed_findings = requirement["attributes"].get("passed_findings", 0)
req_total_findings = requirement["attributes"].get("total_findings", 0)
# Accumulate findings counts
total_findings += req_total_findings
passed_findings += req_passed_findings
failed_findings += req_total_findings - req_passed_findings
# Skip requirements with no findings
if req_total_findings == 0:
continue
overall_has_findings = True
# Get requirement metadata
metadata = requirement_attributes.get("attributes", {}).get(
"req_attributes", []
)
if not metadata or len(metadata) == 0:
continue
m = metadata[0]
risk_level = getattr(m, "LevelOfRisk", 0)
weight = getattr(m, "Weight", 0)
section = getattr(m, "Section", "Unknown")
# Calculate ThreatScore components using formula from UI
rate_i = req_passed_findings / req_total_findings
rfac_i = 1 + 0.25 * risk_level
# Update overall score
overall_numerator += rate_i * req_total_findings * weight * rfac_i
overall_denominator += req_total_findings * weight * rfac_i
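# The two running sums implement the weighted score
#   overall_score = 100 * sum(rate_i * n_i * w_i * rfac_i) / sum(n_i * w_i * rfac_i)
# where n_i is the requirement's finding count and w_i its weight. For example,
# a requirement with 3 of 4 findings passing, weight 100 and risk level 4
# (illustrative values) contributes rate_i = 0.75 and rfac_i = 2.0, i.e. 600 to
# the numerator and 800 to the denominator.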
# Update section scores
if section not in sections_data:
sections_data[section] = {
"numerator": 0,
"denominator": 0,
"has_findings": False,
}
sections_data[section]["has_findings"] = True
sections_data[section]["numerator"] += (
rate_i * req_total_findings * weight * rfac_i
)
sections_data[section]["denominator"] += req_total_findings * weight * rfac_i
# Identify critical failed requirements
if requirement_status == StatusChoices.FAIL and risk_level >= min_risk_level:
critical_requirements_list.append(
{
"requirement_id": requirement_id,
"title": getattr(m, "Title", "N/A"),
"section": section,
"subsection": getattr(m, "SubSection", "N/A"),
"risk_level": risk_level,
"weight": weight,
"passed_findings": req_passed_findings,
"total_findings": req_total_findings,
"description": getattr(m, "AttributeDescription", "N/A"),
}
)
# Calculate overall ThreatScore
if not overall_has_findings:
overall_score = 100.0
elif overall_denominator > 0:
overall_score = (overall_numerator / overall_denominator) * 100
else:
overall_score = 0.0
# Calculate section scores
section_scores = {}
for section, data in sections_data.items():
if data["has_findings"] and data["denominator"] > 0:
section_scores[section] = (data["numerator"] / data["denominator"]) * 100
else:
section_scores[section] = 100.0
# Sort critical requirements by risk level (desc) and weight (desc)
critical_requirements_list.sort(
key=lambda x: (x["risk_level"], x["weight"]), reverse=True
)
logger.info(
f"ThreatScore computed: {overall_score:.2f}% "
f"({passed_requirements}/{total_requirements} requirements passed, "
f"{len(critical_requirements_list)} critical failures)"
)
return {
"overall_score": round(overall_score, 2),
"section_scores": {k: round(v, 2) for k, v in section_scores.items()},
"critical_requirements": critical_requirements_list,
"total_requirements": total_requirements,
"passed_requirements": passed_requirements,
"failed_requirements": failed_requirements,
"manual_requirements": manual_requirements,
"total_findings": total_findings,
"passed_findings": passed_findings,
"failed_findings": failed_findings,
}
@@ -0,0 +1,239 @@
from collections import defaultdict
from celery.utils.log import get_task_logger
from config.django.base import DJANGO_FINDINGS_BATCH_SIZE
from django.db.models import Count, Q
from tasks.utils import batched
from api.db_router import READ_REPLICA_ALIAS
from api.db_utils import rls_transaction
from api.models import Finding, StatusChoices
from prowler.lib.outputs.finding import Finding as FindingOutput
logger = get_task_logger(__name__)
def _aggregate_requirement_statistics_from_database(
tenant_id: str, scan_id: str
) -> dict[str, dict[str, int]]:
"""
Aggregate finding statistics by check_id using database aggregation.
This function uses Django ORM aggregation to calculate pass/fail statistics
entirely in the database, avoiding the need to load findings into memory.
Args:
tenant_id (str): The tenant ID for Row-Level Security context.
scan_id (str): The ID of the scan to retrieve findings for.
Returns:
dict[str, dict[str, int]]: Dictionary mapping check_id to statistics:
- 'passed' (int): Number of passed findings for this check
- 'total' (int): Total number of findings for this check
Example:
{
'aws_iam_user_mfa_enabled': {'passed': 10, 'total': 15},
'aws_s3_bucket_public_access': {'passed': 0, 'total': 5}
}
"""
requirement_statistics_by_check_id = {}
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
aggregated_statistics_queryset = (
Finding.all_objects.filter(
tenant_id=tenant_id, scan_id=scan_id, muted=False
)
.values("check_id")
.annotate(
total_findings=Count(
"id",
filter=Q(status__in=[StatusChoices.PASS, StatusChoices.FAIL]),
),
passed_findings=Count("id", filter=Q(status=StatusChoices.PASS)),
)
)
for aggregated_stat in aggregated_statistics_queryset:
check_id = aggregated_stat["check_id"]
requirement_statistics_by_check_id[check_id] = {
"passed": aggregated_stat["passed_findings"],
"total": aggregated_stat["total_findings"],
}
logger.info(
f"Aggregated statistics for {len(requirement_statistics_by_check_id)} unique checks"
)
return requirement_statistics_by_check_id
def _calculate_requirements_data_from_statistics(
compliance_obj, requirement_statistics_by_check_id: dict[str, dict[str, int]]
) -> tuple[dict[str, dict], list[dict]]:
"""
Calculate requirement status and statistics using pre-aggregated database statistics.
Args:
compliance_obj: The compliance framework object containing requirements.
requirement_statistics_by_check_id (dict[str, dict[str, int]]): Pre-aggregated statistics
mapping check_id to {'passed': int, 'total': int} counts.
Returns:
tuple[dict[str, dict], list[dict]]: A tuple containing:
- attributes_by_requirement_id: Dictionary mapping requirement IDs to their attributes.
- requirements_list: List of requirement dictionaries with status and statistics.
"""
attributes_by_requirement_id = {}
requirements_list = []
compliance_framework = getattr(compliance_obj, "Framework", "N/A")
compliance_version = getattr(compliance_obj, "Version", "N/A")
for requirement in compliance_obj.Requirements:
requirement_id = requirement.Id
requirement_description = getattr(requirement, "Description", "")
requirement_checks = getattr(requirement, "Checks", [])
requirement_attributes = getattr(requirement, "Attributes", [])
attributes_by_requirement_id[requirement_id] = {
"attributes": {
"req_attributes": requirement_attributes,
"checks": requirement_checks,
},
"description": requirement_description,
}
total_passed_findings = 0
total_findings_count = 0
for check_id in requirement_checks:
if check_id in requirement_statistics_by_check_id:
check_statistics = requirement_statistics_by_check_id[check_id]
total_findings_count += check_statistics["total"]
total_passed_findings += check_statistics["passed"]
if total_findings_count > 0:
if total_passed_findings == total_findings_count:
requirement_status = StatusChoices.PASS
else:
requirement_status = StatusChoices.FAIL
else:
requirement_status = StatusChoices.MANUAL
requirements_list.append(
{
"id": requirement_id,
"attributes": {
"framework": compliance_framework,
"version": compliance_version,
"status": requirement_status,
"description": requirement_description,
"passed_findings": total_passed_findings,
"total_findings": total_findings_count,
},
}
)
return attributes_by_requirement_id, requirements_list
def _load_findings_for_requirement_checks(
tenant_id: str,
scan_id: str,
check_ids: list[str],
prowler_provider,
findings_cache: dict[str, list[FindingOutput]] | None = None,
) -> dict[str, list[FindingOutput]]:
"""
Load findings for specific check IDs on-demand with optional caching.
This function loads only the findings needed for a specific set of checks,
minimizing memory usage by avoiding loading all findings at once. This is used
when generating detailed findings tables for specific requirements in the PDF.
Supports optional caching to avoid duplicate queries when generating multiple
reports for the same scan.
Args:
tenant_id (str): The tenant ID for Row-Level Security context.
scan_id (str): The ID of the scan to retrieve findings for.
check_ids (list[str]): List of check IDs to load findings for.
prowler_provider: The initialized Prowler provider instance.
findings_cache (dict, optional): Cache of already loaded findings.
If provided, checks are first looked up in cache before querying database.
Returns:
dict[str, list[FindingOutput]]: Dictionary mapping check_id to list of FindingOutput objects.
Example:
{
'aws_iam_user_mfa_enabled': [FindingOutput(...), FindingOutput(...)],
'aws_s3_bucket_public_access': [FindingOutput(...)]
}
"""
findings_by_check_id = defaultdict(list)
if not check_ids:
return dict(findings_by_check_id)
# Initialize cache if not provided
if findings_cache is None:
findings_cache = {}
# Separate cached and non-cached check_ids
check_ids_to_load = []
cache_hits = 0
cache_misses = 0
for check_id in check_ids:
if check_id in findings_cache:
# Reuse from cache
findings_by_check_id[check_id] = findings_cache[check_id]
cache_hits += 1
else:
# Need to load from database
check_ids_to_load.append(check_id)
cache_misses += 1
if cache_hits > 0:
logger.info(
f"Findings cache: {cache_hits} hits, {cache_misses} misses "
f"({cache_hits / (cache_hits + cache_misses) * 100:.1f}% hit rate)"
)
# If all check_ids were in cache, return early
if not check_ids_to_load:
return dict(findings_by_check_id)
logger.info(f"Loading findings for {len(check_ids_to_load)} checks on-demand")
findings_queryset = (
Finding.all_objects.filter(
tenant_id=tenant_id, scan_id=scan_id, check_id__in=check_ids_to_load
)
.order_by("uid")
.iterator()
)
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
for batch, is_last_batch in batched(
findings_queryset, DJANGO_FINDINGS_BATCH_SIZE
):
for finding_model in batch:
finding_output = FindingOutput.transform_api_finding(
finding_model, prowler_provider
)
findings_by_check_id[finding_output.check_id].append(finding_output)
# Update cache with newly loaded findings
if finding_output.check_id not in findings_cache:
findings_cache[finding_output.check_id] = []
findings_cache[finding_output.check_id].append(finding_output)
total_findings_loaded = sum(
len(findings) for findings in findings_by_check_id.values()
)
logger.info(
f"Loaded {total_findings_loaded} findings for {len(findings_by_check_id)} checks"
)
return dict(findings_by_check_id)
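# Illustrative cache reuse across two requirements (check IDs taken from the
# docstring example; other names are placeholders):
#   cache: dict[str, list[FindingOutput]] = {}
#   first = _load_findings_for_requirement_checks(
#       tenant_id, scan_id, ["aws_iam_user_mfa_enabled"], prowler_provider, cache
#   )
#   second = _load_findings_for_requirement_checks(
#       tenant_id,
#       scan_id,
#       ["aws_iam_user_mfa_enabled", "aws_s3_bucket_public_access"],
#       prowler_provider,
#       cache,
#   )  # the first check is served from the cache; only the second hits the database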
+132 -12
@@ -8,7 +8,12 @@ from celery.utils.log import get_task_logger
from config.celery import RLSTask
from config.django.base import DJANGO_FINDINGS_BATCH_SIZE, DJANGO_TMP_OUTPUT_DIRECTORY
from django_celery_beat.models import PeriodicTask
from tasks.jobs.backfill import (
backfill_compliance_summaries,
backfill_daily_severity_summaries,
backfill_resource_scan_summaries,
backfill_scan_category_summaries,
)
from tasks.jobs.connection import (
check_integration_connection,
check_lighthouse_connection,
@@ -31,8 +36,11 @@ from tasks.jobs.lighthouse_providers import (
check_lighthouse_provider_connection,
refresh_lighthouse_provider_models,
)
from tasks.jobs.muting import mute_historical_findings
from tasks.jobs.report import generate_compliance_reports_job
from tasks.jobs.scan import (
aggregate_attack_surface,
aggregate_daily_severity,
aggregate_findings,
create_compliance_requirements,
perform_prowler_scan,
@@ -42,7 +50,7 @@ from tasks.utils import batched, get_next_execution_datetime
from api.compliance import get_compliance_frameworks
from api.db_router import READ_REPLICA_ALIAS
from api.db_utils import rls_transaction
from api.decorators import handle_provider_deletion, set_tenant
from api.models import Finding, Integration, Provider, Scan, ScanSummary, StateChoices
from api.utils import initialize_prowler_provider
from api.v1.serializers import ScanTaskSerializer
@@ -65,13 +73,20 @@ def _perform_scan_complete_tasks(tenant_id: str, scan_id: str, provider_id: str)
create_compliance_requirements_task.apply_async(
kwargs={"tenant_id": tenant_id, "scan_id": scan_id}
)
aggregate_attack_surface_task.apply_async(
kwargs={"tenant_id": tenant_id, "scan_id": scan_id}
)
chain(
perform_scan_summary_task.si(tenant_id=tenant_id, scan_id=scan_id),
group(
aggregate_daily_severity_task.si(tenant_id=tenant_id, scan_id=scan_id),
generate_outputs_task.si(
scan_id=scan_id, provider_id=provider_id, tenant_id=tenant_id
),
),
group(
# Use optimized task that generates both reports with shared queries
generate_compliance_reports_task.si(
tenant_id=tenant_id, scan_id=scan_id, provider_id=provider_id
),
check_integrations_task.si(
@@ -135,6 +150,7 @@ def delete_provider_task(provider_id: str, tenant_id: str):
@shared_task(base=RLSTask, name="scan-perform", queue="scans")
@handle_provider_deletion
def perform_scan_task(
tenant_id: str, scan_id: str, provider_id: str, checks_to_execute: list[str] = None
):
@@ -167,6 +183,7 @@ def perform_scan_task(
@shared_task(base=RLSTask, bind=True, name="scan-perform-scheduled", queue="scans")
@handle_provider_deletion
def perform_scheduled_scan_task(self, tenant_id: str, provider_id: str):
"""
Task to perform a scheduled Prowler scan on a given provider.
@@ -272,6 +289,7 @@ def perform_scheduled_scan_task(self, tenant_id: str, provider_id: str):
@shared_task(name="scan-summary", queue="overview")
@handle_provider_deletion
def perform_scan_summary_task(tenant_id: str, scan_id: str):
return aggregate_findings(tenant_id=tenant_id, scan_id=scan_id)
@@ -287,6 +305,7 @@ def delete_tenant_task(tenant_id: str):
queue="scan-reports",
)
@set_tenant(keep_tenant=True)
@handle_provider_deletion
def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
"""
Process findings in batches and generate output files in multiple formats.
@@ -315,7 +334,7 @@ def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
frameworks_bulk = Compliance.get_bulk(provider_type)
frameworks_avail = get_compliance_frameworks(provider_type)
out_dir, comp_dir = _generate_output_directory(
DJANGO_TMP_OUTPUT_DIRECTORY, provider_uid, tenant_id, scan_id
)
@@ -482,6 +501,7 @@ def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
@shared_task(name="backfill-scan-resource-summaries", queue="backfill")
@handle_provider_deletion
def backfill_scan_resource_summaries_task(tenant_id: str, scan_id: str):
"""
Tries to backfill the resource scan summaries table for a given scan.
@@ -493,7 +513,45 @@ def backfill_scan_resource_summaries_task(tenant_id: str, scan_id: str):
return backfill_resource_scan_summaries(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(name="backfill-compliance-summaries", queue="backfill")
@handle_provider_deletion
def backfill_compliance_summaries_task(tenant_id: str, scan_id: str):
"""
Tries to backfill compliance overview summaries for a completed scan.
This task aggregates compliance requirement data across regions
to create pre-computed summary records for fast compliance overview queries.
Args:
tenant_id (str): The tenant identifier.
scan_id (str): The scan identifier.
"""
return backfill_compliance_summaries(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(name="backfill-daily-severity-summaries", queue="backfill")
def backfill_daily_severity_summaries_task(tenant_id: str, days: int = None):
"""Backfill DailySeveritySummary from historical scans. Use days param to limit scope."""
return backfill_daily_severity_summaries(tenant_id=tenant_id, days=days)
@shared_task(name="backfill-scan-category-summaries", queue="backfill")
@handle_provider_deletion
def backfill_scan_category_summaries_task(tenant_id: str, scan_id: str):
"""
Backfill ScanCategorySummary for a completed scan.
Aggregates unique categories from findings and creates a summary row.
Args:
tenant_id (str): The tenant identifier.
scan_id (str): The scan identifier.
"""
return backfill_scan_category_summaries(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(base=RLSTask, name="scan-compliance-overviews", queue="compliance")
@handle_provider_deletion
def create_compliance_requirements_task(tenant_id: str, scan_id: str):
"""
Creates detailed compliance requirement records for a scan.
@@ -509,6 +567,29 @@ def create_compliance_requirements_task(tenant_id: str, scan_id: str):
return create_compliance_requirements(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(name="scan-attack-surface-overviews", queue="overview")
@handle_provider_deletion
def aggregate_attack_surface_task(tenant_id: str, scan_id: str):
"""
Creates attack surface overview records for a scan.
This task processes findings and aggregates them into attack surface categories
(internet-exposed, secrets, privilege-escalation, ec2-imdsv1) for quick overview queries.
Args:
tenant_id (str): The tenant ID for which to create records.
scan_id (str): The ID of the scan for which to create records.
"""
return aggregate_attack_surface(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(name="scan-daily-severity", queue="overview")
@handle_provider_deletion
def aggregate_daily_severity_task(tenant_id: str, scan_id: str):
"""Aggregate scan severity into DailySeveritySummary for findings_severity/timeseries endpoint."""
return aggregate_daily_severity(tenant_id=tenant_id, scan_id=scan_id)
@shared_task(base=RLSTask, name="lighthouse-connection-check")
@set_tenant
def check_lighthouse_connection_task(lighthouse_config_id: str, tenant_id: str = None):
@@ -547,6 +628,7 @@ def refresh_lighthouse_provider_models_task(
@shared_task(name="integration-check")
@handle_provider_deletion
def check_integrations_task(tenant_id: str, provider_id: str, scan_id: str = None):
"""
Check and execute all configured integrations for a provider.
@@ -611,6 +693,7 @@ def check_integrations_task(tenant_id: str, provider_id: str, scan_id: str = Non
name="integration-s3",
queue="integrations",
)
@handle_provider_deletion
def s3_integration_task(
tenant_id: str,
provider_id: str,
@@ -667,17 +750,54 @@ def jira_integration_task(
@shared_task(
base=RLSTask,
name="scan-threatscore-report",
name="scan-compliance-reports",
queue="scan-reports",
)
@handle_provider_deletion
def generate_compliance_reports_task(tenant_id: str, scan_id: str, provider_id: str):
"""
Optimized task to generate ThreatScore, ENS, and NIS2 reports with shared queries.
This task is more efficient than running separate report tasks because it reuses database queries:
- Provider object fetched once (instead of three times)
- Requirement statistics aggregated once (instead of three times)
- Can reduce database load by up to 50-70%
Args:
tenant_id (str): The tenant identifier.
scan_id (str): The scan identifier.
provider_id (str): The provider identifier.
Returns:
dict: Results for all reports containing upload status and paths.
"""
return generate_compliance_reports_job(
tenant_id=tenant_id,
scan_id=scan_id,
provider_id=provider_id,
generate_threatscore=True,
generate_ens=True,
generate_nis2=True,
)
@shared_task(name="findings-mute-historical")
def mute_historical_findings_task(tenant_id: str, mute_rule_id: str):
"""
Background task to mute all historical findings matching a mute rule.
This task processes findings in batches to avoid memory issues with large datasets.
It updates the Finding.muted, Finding.muted_at, and Finding.muted_reason fields
for all findings whose UID is in the mute rule's finding_uids list.
Args:
tenant_id (str): The tenant ID for RLS context.
mute_rule_id (str): The primary key of the MuteRule to apply.
Returns:
dict: A dictionary containing:
- 'findings_muted' (int): Total number of findings muted.
- 'rule_id' (str): The mute rule ID.
"""
return mute_historical_findings(tenant_id, mute_rule_id)
+215 -32
@@ -1,43 +1,97 @@
from uuid import uuid4
import pytest
from tasks.jobs.backfill import (
backfill_compliance_summaries,
backfill_resource_scan_summaries,
backfill_scan_category_summaries,
)
from api.models import (
ComplianceOverviewSummary,
Finding,
ResourceScanSummary,
Scan,
ScanCategorySummary,
StateChoices,
)
from prowler.lib.check.models import Severity
from prowler.lib.outputs.finding import Status
@pytest.fixture(scope="function")
def resource_scan_summary_data(scans_fixture):
scan = scans_fixture[0]
return ResourceScanSummary.objects.create(
tenant_id=scan.tenant_id,
scan_id=scan.id,
resource_id=str(uuid4()),
service="aws",
region="us-east-1",
resource_type="instance",
)
@pytest.fixture(scope="function")
def get_not_completed_scans(providers_fixture):
provider_id = providers_fixture[0].id
tenant_id = providers_fixture[0].tenant_id
scan_1 = Scan.objects.create(
tenant_id=tenant_id,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.EXECUTING,
provider_id=provider_id,
)
scan_2 = Scan.objects.create(
tenant_id=tenant_id,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.AVAILABLE,
provider_id=provider_id,
)
return scan_1, scan_2
@pytest.fixture(scope="function")
def findings_with_categories_fixture(scans_fixture, resources_fixture):
scan = scans_fixture[0]
resource = resources_fixture[0]
finding = Finding.objects.create(
tenant_id=scan.tenant_id,
uid="finding_with_categories",
scan=scan,
delta="new",
status=Status.FAIL,
status_extended="test status",
impact=Severity.critical,
impact_extended="test impact",
severity=Severity.critical,
raw_result={"status": Status.FAIL},
check_id="test_check",
check_metadata={"CheckId": "test_check"},
categories=["gen-ai", "security"],
first_seen_at="2024-01-02T00:00:00Z",
)
finding.add_resources([resource])
return finding
@pytest.fixture(scope="function")
def scan_category_summary_fixture(scans_fixture):
scan = scans_fixture[0]
return ScanCategorySummary.objects.create(
tenant_id=scan.tenant_id,
scan=scan,
category="existing-category",
severity=Severity.critical,
total_findings=1,
failed_findings=0,
new_failed_findings=0,
)
@pytest.mark.django_db
class TestBackfillResourceScanSummaries:
@pytest.fixture(scope="function")
def resource_scan_summary_data(self, scans_fixture):
scan = scans_fixture[0]
return ResourceScanSummary.objects.create(
tenant_id=scan.tenant_id,
scan_id=scan.id,
resource_id=str(uuid4()),
service="aws",
region="us-east-1",
resource_type="instance",
)
@pytest.fixture(scope="function")
def get_not_completed_scans(self, providers_fixture):
provider_id = providers_fixture[0].id
tenant_id = providers_fixture[0].tenant_id
scan_1 = Scan.objects.create(
tenant_id=tenant_id,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.EXECUTING,
provider_id=provider_id,
)
scan_2 = Scan.objects.create(
tenant_id=tenant_id,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.AVAILABLE,
provider_id=provider_id,
)
return scan_1, scan_2
def test_already_backfilled(self, resource_scan_summary_data):
tenant_id = resource_scan_summary_data.tenant_id
scan_id = resource_scan_summary_data.scan_id
@@ -77,3 +131,132 @@ class TestBackfillResourceScanSummaries:
assert summary.service == resource.service
assert summary.region == resource.region
assert summary.resource_type == resource.type
def test_no_resources_to_backfill(self, scans_fixture):
scan = scans_fixture[1] # Failed scan with no findings/resources
tenant_id = str(scan.tenant_id)
scan_id = str(scan.id)
result = backfill_resource_scan_summaries(tenant_id, scan_id)
assert result == {"status": "no resources to backfill"}
@pytest.mark.django_db
class TestBackfillComplianceSummaries:
def test_already_backfilled(self, scans_fixture):
scan = scans_fixture[0]
tenant_id = str(scan.tenant_id)
ComplianceOverviewSummary.objects.create(
tenant_id=scan.tenant_id,
scan=scan,
compliance_id="aws_account_security_onboarding_aws",
requirements_passed=1,
requirements_failed=0,
requirements_manual=0,
total_requirements=1,
)
result = backfill_compliance_summaries(tenant_id, str(scan.id))
assert result == {"status": "already backfilled"}
def test_not_completed_scan(self, get_not_completed_scans):
for scan in get_not_completed_scans:
result = backfill_compliance_summaries(str(scan.tenant_id), str(scan.id))
assert result == {"status": "scan is not completed"}
def test_no_compliance_data(self, scans_fixture):
scan = scans_fixture[1] # Failed scan with no compliance rows
result = backfill_compliance_summaries(str(scan.tenant_id), str(scan.id))
assert result == {"status": "no compliance data to backfill"}
def test_backfill_creates_compliance_summaries(
self, tenants_fixture, scans_fixture, compliance_requirements_overviews_fixture
):
tenant = tenants_fixture[0]
scan = scans_fixture[0]
result = backfill_compliance_summaries(str(tenant.id), str(scan.id))
expected = {
"aws_account_security_onboarding_aws": {
"requirements_passed": 1,
"requirements_failed": 1,
"requirements_manual": 1,
"total_requirements": 3,
},
"cis_1.4_aws": {
"requirements_passed": 0,
"requirements_failed": 1,
"requirements_manual": 0,
"total_requirements": 1,
},
"mitre_attack_aws": {
"requirements_passed": 0,
"requirements_failed": 1,
"requirements_manual": 0,
"total_requirements": 1,
},
}
assert result == {"status": "backfilled", "inserted": len(expected)}
summaries = ComplianceOverviewSummary.objects.filter(
tenant_id=str(tenant.id), scan_id=str(scan.id)
)
assert summaries.count() == len(expected)
for summary in summaries:
assert summary.compliance_id in expected
expected_counts = expected[summary.compliance_id]
assert summary.requirements_passed == expected_counts["requirements_passed"]
assert summary.requirements_failed == expected_counts["requirements_failed"]
assert summary.requirements_manual == expected_counts["requirements_manual"]
assert summary.total_requirements == expected_counts["total_requirements"]
@pytest.mark.django_db
class TestBackfillScanCategorySummaries:
def test_already_backfilled(self, scan_category_summary_fixture):
tenant_id = scan_category_summary_fixture.tenant_id
scan_id = scan_category_summary_fixture.scan_id
result = backfill_scan_category_summaries(str(tenant_id), str(scan_id))
assert result == {"status": "already backfilled"}
def test_not_completed_scan(self, get_not_completed_scans):
for scan in get_not_completed_scans:
result = backfill_scan_category_summaries(str(scan.tenant_id), str(scan.id))
assert result == {"status": "scan is not completed"}
def test_no_categories_to_backfill(self, scans_fixture):
scan = scans_fixture[1] # Failed scan with no findings
result = backfill_scan_category_summaries(str(scan.tenant_id), str(scan.id))
assert result == {"status": "no categories to backfill"}
def test_successful_backfill(self, findings_with_categories_fixture):
finding = findings_with_categories_fixture
tenant_id = str(finding.tenant_id)
scan_id = str(finding.scan_id)
result = backfill_scan_category_summaries(tenant_id, scan_id)
# 2 categories × 1 severity = 2 rows
assert result == {"status": "backfilled", "categories_count": 2}
summaries = ScanCategorySummary.objects.filter(
tenant_id=tenant_id, scan_id=scan_id
)
assert summaries.count() == 2
categories = set(summaries.values_list("category", flat=True))
assert categories == {"gen-ai", "security"}
for summary in summaries:
assert summary.severity == Severity.critical
assert summary.total_findings == 1
assert summary.failed_findings == 1
assert summary.new_failed_findings == 1
+1
@@ -28,6 +28,7 @@ class TestScheduleProviderScan:
"tenant_id": str(provider_instance.tenant_id),
"provider_id": str(provider_instance.id),
},
countdown=5,
)
task_name = f"scan-perform-scheduled-{provider_instance.id}"
+40 -9
@@ -9,6 +9,7 @@ import pytest
from botocore.exceptions import ClientError
from tasks.jobs.export import (
_compress_output_files,
_generate_compliance_output_directory,
_generate_output_directory,
_upload_to_s3,
get_s3_client,
@@ -147,10 +148,11 @@ class TestOutputs:
)
mock_logger.assert_called()
@patch("tasks.jobs.export.set_output_timestamp")
@patch("tasks.jobs.export.rls_transaction")
@patch("tasks.jobs.export.Scan")
def test_generate_output_directory_creates_paths(
self, mock_scan, mock_rls_transaction, mock_set_timestamp, tmpdir
):
# Mock the scan object with a started_at timestamp
mock_scan_instance = MagicMock()
@@ -168,22 +170,40 @@ class TestOutputs:
provider = "aws"
expected_timestamp = "20230615103045"
# Test _generate_output_directory (returns standard and compliance paths)
path, compliance = _generate_output_directory(
base_dir, provider, tenant_id, scan_id
)
assert os.path.isdir(os.path.dirname(path))
assert os.path.isdir(os.path.dirname(compliance))
assert path.endswith(f"{provider}-{expected_timestamp}")
assert compliance.endswith(f"{provider}-{expected_timestamp}")
assert threatscore.endswith(f"{provider}-{expected_timestamp}")
assert "/compliance/" in compliance
# Test _generate_compliance_output_directory with "threatscore"
threatscore = _generate_compliance_output_directory(
base_dir, provider, tenant_id, scan_id, compliance_framework="threatscore"
)
assert os.path.isdir(os.path.dirname(threatscore))
assert threatscore.endswith(f"{provider}-{expected_timestamp}")
assert "/threatscore/" in threatscore
# Test _generate_compliance_output_directory with "ens"
ens = _generate_compliance_output_directory(
base_dir, provider, tenant_id, scan_id, compliance_framework="ens"
)
assert os.path.isdir(os.path.dirname(ens))
assert ens.endswith(f"{provider}-{expected_timestamp}")
assert "/ens/" in ens
@patch("tasks.jobs.export.set_output_timestamp")
@patch("tasks.jobs.export.rls_transaction")
@patch("tasks.jobs.export.Scan")
def test_generate_output_directory_invalid_character(
self, mock_scan, mock_rls_transaction, mock_set_timestamp, tmpdir
):
# Mock the scan object with a started_at timestamp
mock_scan_instance = MagicMock()
@@ -201,14 +221,25 @@ class TestOutputs:
provider = "aws/test@check"
expected_timestamp = "20230615103045"
# Test provider name sanitization with _generate_output_directory
path, compliance = _generate_output_directory(
base_dir, provider, tenant_id, scan_id
)
assert os.path.isdir(os.path.dirname(path))
assert os.path.isdir(os.path.dirname(compliance))
assert path.endswith(f"aws-test-check-{expected_timestamp}")
assert compliance.endswith(f"aws-test-check-{expected_timestamp}")
# Test provider name sanitization with _generate_compliance_output_directory
threatscore = _generate_compliance_output_directory(
base_dir, provider, tenant_id, scan_id, compliance_framework="threatscore"
)
ens = _generate_compliance_output_directory(
base_dir, provider, tenant_id, scan_id, compliance_framework="ens"
)
assert os.path.isdir(os.path.dirname(threatscore))
assert os.path.isdir(os.path.dirname(ens))
assert threatscore.endswith(f"aws-test-check-{expected_timestamp}")
assert ens.endswith(f"aws-test-check-{expected_timestamp}")
+532
@@ -0,0 +1,532 @@
from datetime import datetime, timezone
from uuid import uuid4
import pytest
from django.core.exceptions import ObjectDoesNotExist
from tasks.jobs.muting import mute_historical_findings
from api.models import Finding, MuteRule
from prowler.lib.check.models import Severity
from prowler.lib.outputs.finding import Status
@pytest.mark.django_db
class TestMuteHistoricalFindings:
"""
Test suite for the mute_historical_findings function.
This class tests the batch processing of findings to update their muted status
based on MuteRule criteria.
"""
@pytest.fixture(scope="function")
def test_user(self, create_test_user):
"""Create a test user for mute rule creation."""
return create_test_user
@pytest.fixture(scope="function")
def mute_rule_with_findings(self, tenants_fixture, findings_fixture, test_user):
"""
Create a mute rule that targets the first finding in the fixture.
"""
tenant = tenants_fixture[0]
finding = findings_fixture[0]
mute_rule = MuteRule.objects.create(
tenant_id=tenant.id,
name="Test Mute Rule",
reason="Testing mute functionality",
enabled=True,
created_by=test_user,
finding_uids=[finding.uid],
)
return mute_rule
@pytest.fixture(scope="function")
def mute_rule_multiple_findings(self, scans_fixture, test_user):
"""
Create multiple unmuted findings and a mute rule targeting all of them.
"""
scan = scans_fixture[0]
tenant_id = scan.tenant_id
# Create 5 unmuted findings
finding_uids = []
for i in range(5):
finding = Finding.objects.create(
tenant_id=tenant_id,
uid=f"test_finding_uid_mute_{i}",
scan=scan,
status=Status.FAIL,
status_extended=f"Test status {i}",
impact=Severity.high,
severity=Severity.high,
raw_result={
"status": Status.FAIL,
"impact": Severity.high,
"severity": Severity.high,
},
check_id=f"test_check_id_{i}",
check_metadata={
"CheckId": f"test_check_id_{i}",
"Description": f"Test description {i}",
},
muted=False,
)
finding_uids.append(finding.uid)
# Create mute rule targeting all findings
mute_rule = MuteRule.objects.create(
tenant_id=tenant_id,
name="Test Multiple Findings Mute Rule",
reason="Testing batch muting",
enabled=True,
created_by=test_user,
finding_uids=finding_uids,
)
return mute_rule, finding_uids
@pytest.fixture(scope="function")
def mute_rule_already_muted(self, findings_fixture, test_user):
"""
Create a mute rule that targets an already-muted finding.
"""
tenant_id = findings_fixture[1].tenant_id
already_muted_finding = findings_fixture[1]
mute_rule = MuteRule.objects.create(
tenant_id=tenant_id,
name="Test Already Muted Rule",
reason="Testing already muted findings",
enabled=True,
created_by=test_user,
finding_uids=[already_muted_finding.uid],
)
return mute_rule
@pytest.fixture(scope="function")
def mute_rule_mixed_findings(self, scans_fixture, test_user):
"""
Create a mute rule with a mix of muted and unmuted findings.
"""
scan = scans_fixture[0]
tenant_id = scan.tenant_id
# Create 3 unmuted findings
unmuted_uids = []
for i in range(3):
finding = Finding.objects.create(
tenant_id=tenant_id,
uid=f"unmuted_finding_{i}",
scan=scan,
status=Status.FAIL,
status_extended=f"Unmuted status {i}",
impact=Severity.medium,
severity=Severity.medium,
raw_result={
"status": Status.FAIL,
"impact": Severity.medium,
"severity": Severity.medium,
},
check_id=f"unmuted_check_{i}",
check_metadata={
"CheckId": f"unmuted_check_{i}",
"Description": f"Unmuted description {i}",
},
muted=False,
)
unmuted_uids.append(finding.uid)
# Create 2 already muted findings
muted_uids = []
for i in range(2):
finding = Finding.objects.create(
tenant_id=tenant_id,
uid=f"muted_finding_{i}",
scan=scan,
status=Status.FAIL,
status_extended=f"Muted status {i}",
impact=Severity.low,
severity=Severity.low,
raw_result={
"status": Status.FAIL,
"impact": Severity.low,
"severity": Severity.low,
},
check_id=f"muted_check_{i}",
check_metadata={
"CheckId": f"muted_check_{i}",
"Description": f"Muted description {i}",
},
muted=True,
muted_at=datetime.now(timezone.utc),
muted_reason="Already muted",
)
muted_uids.append(finding.uid)
# Create mute rule targeting all findings
all_uids = unmuted_uids + muted_uids
mute_rule = MuteRule.objects.create(
tenant_id=tenant_id,
name="Test Mixed Findings Rule",
reason="Testing mixed muted/unmuted findings",
enabled=True,
created_by=test_user,
finding_uids=all_uids,
)
return mute_rule, unmuted_uids, muted_uids
@pytest.fixture(scope="function")
def mute_rule_batch_test(self, scans_fixture, test_user):
"""
Create enough findings to test batch processing (>1000 for default batch size).
"""
scan = scans_fixture[0]
tenant_id = scan.tenant_id
# Create 1500 findings to exceed default batch size of 1000
finding_uids = []
for i in range(1500):
finding = Finding.objects.create(
tenant_id=tenant_id,
uid=f"batch_test_finding_{i}",
scan=scan,
status=Status.FAIL,
status_extended=f"Batch test status {i}",
impact=Severity.critical,
severity=Severity.critical,
raw_result={
"status": Status.FAIL,
"impact": Severity.critical,
"severity": Severity.critical,
},
check_id=f"batch_test_check_{i}",
check_metadata={
"CheckId": f"batch_test_check_{i}",
"Description": f"Batch test description {i}",
},
muted=False,
)
finding_uids.append(finding.uid)
# Create mute rule targeting all findings
mute_rule = MuteRule.objects.create(
tenant_id=tenant_id,
name="Test Batch Processing Rule",
reason="Testing batch processing functionality",
enabled=True,
created_by=test_user,
finding_uids=finding_uids,
)
return mute_rule, finding_uids
def test_mute_historical_findings_single_finding(
self, mute_rule_with_findings, findings_fixture
):
"""
Test muting a single historical finding.
"""
mute_rule = mute_rule_with_findings
tenant_id = str(mute_rule.tenant_id)
finding = findings_fixture[0]
# Ensure the finding is not muted before execution
finding.refresh_from_db()
assert finding.muted is False
assert finding.muted_at is None
assert finding.muted_reason is None
# Execute the muting function
result = mute_historical_findings(tenant_id, str(mute_rule.id))
# Verify return value
assert result["findings_muted"] == 1
assert result["rule_id"] == str(mute_rule.id)
# Verify the finding was muted
finding.refresh_from_db()
assert finding.muted is True
assert finding.muted_at == mute_rule.inserted_at
assert finding.muted_reason == mute_rule.reason
def test_mute_historical_findings_multiple_findings(
self, mute_rule_multiple_findings
):
"""
Test muting multiple historical findings.
"""
mute_rule, finding_uids = mute_rule_multiple_findings
tenant_id = str(mute_rule.tenant_id)
# Verify all findings are unmuted
findings = Finding.objects.filter(tenant_id=tenant_id, uid__in=finding_uids)
assert findings.count() == 5
for finding in findings:
assert finding.muted is False
# Execute the muting function
result = mute_historical_findings(tenant_id, str(mute_rule.id))
# Verify return value
assert result["findings_muted"] == 5
assert result["rule_id"] == str(mute_rule.id)
# Verify all findings were muted
findings = Finding.objects.filter(tenant_id=tenant_id, uid__in=finding_uids)
for finding in findings:
assert finding.muted is True
assert finding.muted_at == mute_rule.inserted_at
assert finding.muted_reason == mute_rule.reason
def test_mute_historical_findings_already_muted(
self, mute_rule_already_muted, findings_fixture
):
"""
Test that already-muted findings are not counted or updated.
"""
mute_rule = mute_rule_already_muted
tenant_id = str(mute_rule.tenant_id)
finding = findings_fixture[1]
# Verify the finding is already muted
finding.refresh_from_db()
assert finding.muted is True
original_muted_at = finding.muted_at
original_muted_reason = finding.muted_reason
# Execute the muting function
result = mute_historical_findings(tenant_id, str(mute_rule.id))
# Verify no findings were muted
assert result["findings_muted"] == 0
assert result["rule_id"] == str(mute_rule.id)
# Verify the finding's mute status did not change
finding.refresh_from_db()
assert finding.muted is True
assert finding.muted_at == original_muted_at
assert finding.muted_reason == original_muted_reason
def test_mute_historical_findings_mixed_status(self, mute_rule_mixed_findings):
"""
Test muting when some findings are already muted and others are not.
"""
mute_rule, unmuted_uids, muted_uids = mute_rule_mixed_findings
tenant_id = str(mute_rule.tenant_id)
# Execute the muting function
result = mute_historical_findings(tenant_id, str(mute_rule.id))
# Verify only unmuted findings were counted
assert result["findings_muted"] == 3
assert result["rule_id"] == str(mute_rule.id)
# Verify unmuted findings are now muted
unmuted_findings = Finding.objects.filter(
tenant_id=tenant_id, uid__in=unmuted_uids
)
for finding in unmuted_findings:
assert finding.muted is True
assert finding.muted_at == mute_rule.inserted_at
assert finding.muted_reason == mute_rule.reason
# Verify already-muted findings remained unchanged
already_muted_findings = Finding.objects.filter(
tenant_id=tenant_id, uid__in=muted_uids
)
for finding in already_muted_findings:
assert finding.muted is True
assert finding.muted_reason == "Already muted"
def test_mute_historical_findings_nonexistent_rule(self, tenants_fixture):
"""
Test that a nonexistent mute rule raises ObjectDoesNotExist.
"""
tenant_id = str(tenants_fixture[0].id)
nonexistent_rule_id = str(uuid4())
with pytest.raises(ObjectDoesNotExist):
mute_historical_findings(tenant_id, nonexistent_rule_id)
def test_mute_historical_findings_no_matching_findings(
self, tenants_fixture, test_user
):
"""
Test muting when no findings match the rule's UIDs.
"""
tenant_id = str(tenants_fixture[0].id)
# Create a mute rule with non-existent finding UIDs
mute_rule = MuteRule.objects.create(
tenant_id=tenant_id,
name="Test No Match Rule",
reason="Testing no matching findings",
enabled=True,
created_by=test_user,
finding_uids=[
"nonexistent_uid_1",
"nonexistent_uid_2",
"nonexistent_uid_3",
],
)
# Execute the muting function
result = mute_historical_findings(tenant_id, str(mute_rule.id))
# Verify no findings were muted
assert result["findings_muted"] == 0
assert result["rule_id"] == str(mute_rule.id)
def test_mute_historical_findings_batch_processing(self, mute_rule_batch_test):
"""
Test that large numbers of findings are processed in batches correctly.
"""
mute_rule, finding_uids = mute_rule_batch_test
tenant_id = str(mute_rule.tenant_id)
# Verify all findings exist and are unmuted
findings = Finding.objects.filter(tenant_id=tenant_id, uid__in=finding_uids)
assert findings.count() == 1500
for finding in findings:
assert finding.muted is False
# Execute the muting function
result = mute_historical_findings(tenant_id, str(mute_rule.id))
# Verify return value
assert result["findings_muted"] == 1500
assert result["rule_id"] == str(mute_rule.id)
# Verify all findings were muted
findings = Finding.objects.filter(tenant_id=tenant_id, uid__in=finding_uids)
for finding in findings:
assert finding.muted is True
assert finding.muted_at == mute_rule.inserted_at
assert finding.muted_reason == mute_rule.reason
def test_mute_historical_findings_preserves_muted_at_timestamp(
self, mute_rule_with_findings, findings_fixture
):
"""
Test that muted_at is set to the rule's inserted_at, not the current time.
"""
mute_rule = mute_rule_with_findings
tenant_id = str(mute_rule.tenant_id)
finding = findings_fixture[0]
# Execute the muting function
result = mute_historical_findings(tenant_id, str(mute_rule.id))
# Verify the finding was muted
assert result["findings_muted"] == 1
# Verify muted_at matches the rule's inserted_at timestamp
finding.refresh_from_db()
assert finding.muted_at == mute_rule.inserted_at
assert finding.muted_at is not None
def test_mute_historical_findings_partial_match(self, scans_fixture, test_user):
"""
Test muting when only some of the rule's UIDs exist as findings.
"""
scan = scans_fixture[0]
tenant_id = str(scan.tenant_id)
# Create 3 findings
existing_uids = []
for i in range(3):
finding = Finding.objects.create(
tenant_id=tenant_id,
uid=f"partial_match_finding_{i}",
scan=scan,
status=Status.FAIL,
status_extended=f"Partial match status {i}",
impact=Severity.high,
severity=Severity.high,
raw_result={
"status": Status.FAIL,
"impact": Severity.high,
"severity": Severity.high,
},
check_id=f"partial_match_check_{i}",
check_metadata={
"CheckId": f"partial_match_check_{i}",
"Description": f"Partial match description {i}",
},
muted=False,
)
existing_uids.append(finding.uid)
# Create a mute rule with both existing and non-existing UIDs
all_uids = existing_uids + [
"nonexistent_uid_1",
"nonexistent_uid_2",
]
mute_rule = MuteRule.objects.create(
tenant_id=tenant_id,
name="Test Partial Match Rule",
reason="Testing partial matching",
enabled=True,
created_by=test_user,
finding_uids=all_uids,
)
# Execute the muting function
result = mute_historical_findings(tenant_id, str(mute_rule.id))
# Verify only existing findings were muted
assert result["findings_muted"] == 3
assert result["rule_id"] == str(mute_rule.id)
# Verify the existing findings were muted
findings = Finding.objects.filter(tenant_id=tenant_id, uid__in=existing_uids)
assert findings.count() == 3
for finding in findings:
assert finding.muted is True
assert finding.muted_at == mute_rule.inserted_at
assert finding.muted_reason == mute_rule.reason
def test_mute_historical_findings_empty_uids(self, tenants_fixture, test_user):
"""
Test muting when the rule has an empty finding_uids array.
"""
tenant_id = str(tenants_fixture[0].id)
# Create a mute rule with empty finding_uids
mute_rule = MuteRule.objects.create(
tenant_id=tenant_id,
name="Test Empty UIDs Rule",
reason="Testing empty UIDs",
enabled=True,
created_by=test_user,
finding_uids=[],
)
# Execute the muting function
result = mute_historical_findings(tenant_id, str(mute_rule.id))
# Verify no findings were muted
assert result["findings_muted"] == 0
assert result["rule_id"] == str(mute_rule.id)
def test_mute_historical_findings_return_format(self, mute_rule_with_findings):
"""
Test that the return value has the correct format and fields.
"""
mute_rule = mute_rule_with_findings
tenant_id = str(mute_rule.tenant_id)
result = mute_historical_findings(tenant_id, str(mute_rule.id))
# Verify return value structure
assert isinstance(result, dict)
assert "findings_muted" in result
assert "rule_id" in result
assert isinstance(result["findings_muted"], int)
assert isinstance(result["rule_id"], str)
assert result["rule_id"] == str(mute_rule.id)
@@ -1,16 +1,220 @@
import uuid
from unittest.mock import MagicMock, patch
import openai
import pytest
from botocore.exceptions import ClientError
from tasks.jobs.lighthouse_providers import (
_create_bedrock_client,
_extract_bedrock_credentials,
)
from tasks.tasks import (
_perform_scan_complete_tasks,
check_integrations_task,
check_lighthouse_provider_connection_task,
generate_outputs_task,
refresh_lighthouse_provider_models_task,
s3_integration_task,
security_hub_integration_task,
)
from api.models import Integration
from api.models import (
Integration,
LighthouseProviderConfiguration,
LighthouseProviderModels,
)
@pytest.mark.django_db
class TestExtractBedrockCredentials:
"""Unit tests for _extract_bedrock_credentials helper function."""
def test_extract_access_key_credentials(self, tenants_fixture):
"""Test extraction of access key + secret key credentials."""
provider_cfg = LighthouseProviderConfiguration(
tenant_id=tenants_fixture[0].id,
provider_type=LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
is_active=True,
)
provider_cfg.credentials_decoded = {
"access_key_id": "AKIAIOSFODNN7EXAMPLE",
"secret_access_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"region": "us-east-1",
}
provider_cfg.save()
result = _extract_bedrock_credentials(provider_cfg)
assert result is not None
assert result["access_key_id"] == "AKIAIOSFODNN7EXAMPLE"
assert result["secret_access_key"] == "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
assert result["region"] == "us-east-1"
assert "api_key" not in result
def test_extract_api_key_credentials(self, tenants_fixture):
"""Test extraction of API key (bearer token) credentials."""
valid_api_key = "ABSKQmVkcm9ja0FQSUtleS" + ("A" * 110)
provider_cfg = LighthouseProviderConfiguration(
tenant_id=tenants_fixture[0].id,
provider_type=LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
is_active=True,
)
provider_cfg.credentials_decoded = {
"api_key": valid_api_key,
"region": "us-west-2",
}
provider_cfg.save()
result = _extract_bedrock_credentials(provider_cfg)
assert result is not None
assert result["api_key"] == valid_api_key
assert result["region"] == "us-west-2"
assert "access_key_id" not in result
assert "secret_access_key" not in result
def test_api_key_takes_precedence_over_access_keys(self, tenants_fixture):
"""Test that API key is preferred when both auth methods are present."""
valid_api_key = "ABSKQmVkcm9ja0FQSUtleS" + ("B" * 110)
provider_cfg = LighthouseProviderConfiguration(
tenant_id=tenants_fixture[0].id,
provider_type=LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
is_active=True,
)
provider_cfg.credentials_decoded = {
"api_key": valid_api_key,
"access_key_id": "AKIAIOSFODNN7EXAMPLE",
"secret_access_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"region": "eu-west-1",
}
provider_cfg.save()
result = _extract_bedrock_credentials(provider_cfg)
assert result is not None
assert result["api_key"] == valid_api_key
assert result["region"] == "eu-west-1"
assert "access_key_id" not in result
def test_missing_region_returns_none(self, tenants_fixture):
"""Test that missing region returns None."""
provider_cfg = LighthouseProviderConfiguration(
tenant_id=tenants_fixture[0].id,
provider_type=LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
is_active=True,
)
provider_cfg.credentials_decoded = {
"api_key": "ABSKQmVkcm9ja0FQSUtleS" + ("A" * 110),
}
provider_cfg.save()
result = _extract_bedrock_credentials(provider_cfg)
assert result is None
def test_empty_credentials_returns_none(self, tenants_fixture):
"""Test that empty credentials dict returns None (region only is not enough)."""
provider_cfg = LighthouseProviderConfiguration(
tenant_id=tenants_fixture[0].id,
provider_type=LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
is_active=True,
)
# Only region, no auth credentials - should return None
provider_cfg.credentials_decoded = {
"region": "us-east-1",
}
provider_cfg.save()
result = _extract_bedrock_credentials(provider_cfg)
assert result is None
def test_non_dict_credentials_returns_none(self, tenants_fixture):
"""Test that non-dict credentials returns None."""
provider_cfg = LighthouseProviderConfiguration(
tenant_id=tenants_fixture[0].id,
provider_type=LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
is_active=True,
)
# Store valid credentials first to pass model validation
provider_cfg.credentials_decoded = {
"api_key": "ABSKQmVkcm9ja0FQSUtleS" + ("A" * 110),
"region": "us-east-1",
}
provider_cfg.save()
# Mock the credentials_decoded property to return a non-dict value
# This simulates corrupted/invalid stored data
with patch.object(
type(provider_cfg),
"credentials_decoded",
new_callable=lambda: property(lambda self: "invalid"),
):
result = _extract_bedrock_credentials(provider_cfg)
assert result is None
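The TestExtractBedrockCredentials cases above pin down a simple precedence rule: a region is always required, an API key (bearer token) wins over access keys, and anything that is not a dict with usable auth material yields None. A hedged sketch of that rule, inferred from the assertions rather than copied from the actual helper:

# Illustrative sketch only: behaviour inferred from the tests above.
def _extract_bedrock_credentials_sketch(provider_cfg):
    creds = provider_cfg.credentials_decoded
    if not isinstance(creds, dict):  # corrupted / non-dict stored data
        return None
    region = creds.get("region")
    if not region:  # region is mandatory
        return None
    if creds.get("api_key"):  # bearer-token auth takes precedence over access keys
        return {"api_key": creds["api_key"], "region": region}
    if creds.get("access_key_id") and creds.get("secret_access_key"):
        return {
            "access_key_id": creds["access_key_id"],
            "secret_access_key": creds["secret_access_key"],
            "region": region,
        }
    return None  # a region alone is not enough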
class TestCreateBedrockClient:
"""Unit tests for _create_bedrock_client helper function."""
@patch("tasks.jobs.lighthouse_providers.boto3.client")
def test_create_client_with_access_keys(self, mock_boto_client):
"""Test creating client with access key authentication."""
mock_client = MagicMock()
mock_boto_client.return_value = mock_client
creds = {
"access_key_id": "AKIAIOSFODNN7EXAMPLE",
"secret_access_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"region": "us-east-1",
}
result = _create_bedrock_client(creds)
assert result == mock_client
mock_boto_client.assert_called_once_with(
service_name="bedrock",
region_name="us-east-1",
aws_access_key_id="AKIAIOSFODNN7EXAMPLE",
aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
)
@patch("tasks.jobs.lighthouse_providers.Config")
@patch("tasks.jobs.lighthouse_providers.boto3.client")
def test_create_client_with_api_key(self, mock_boto_client, mock_config):
"""Test creating client with API key authentication."""
mock_client = MagicMock()
mock_events = MagicMock()
mock_client.meta.events = mock_events
mock_boto_client.return_value = mock_client
mock_config_instance = MagicMock()
mock_config.return_value = mock_config_instance
valid_api_key = "ABSKQmVkcm9ja0FQSUtleS" + ("A" * 110)
creds = {
"api_key": valid_api_key,
"region": "us-west-2",
}
result = _create_bedrock_client(creds)
assert result == mock_client
mock_boto_client.assert_called_once_with(
service_name="bedrock",
region_name="us-west-2",
config=mock_config_instance,
)
mock_events.register.assert_called_once()
call_args = mock_events.register.call_args
assert call_args[0][0] == "before-send.*.*"
# Verify handler injects bearer token
handler_fn = call_args[0][1]
mock_request = MagicMock()
mock_request.headers = {}
handler_fn(mock_request)
assert mock_request.headers["Authorization"] == f"Bearer {valid_api_key}"
# TODO Move this to outputs/reports jobs
@@ -101,7 +305,6 @@ class TestGenerateOutputs:
return_value=(
"/tmp/test/out-dir",
"/tmp/test/comp-dir",
"/tmp/test/threat-dir",
),
),
patch("tasks.tasks.Scan.all_objects.filter") as mock_scan_update,
@@ -131,7 +334,7 @@ class TestGenerateOutputs:
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory",
return_value=("/tmp/test/out", "/tmp/test/comp", "/tmp/test/threat"),
return_value=("/tmp/test/out", "/tmp/test/comp"),
),
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
patch("tasks.tasks.FindingOutput.transform_api_finding"),
@@ -201,7 +404,7 @@ class TestGenerateOutputs:
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory",
return_value=("/tmp/test/out", "/tmp/test/comp", "/tmp/test/threat"),
return_value=("/tmp/test/out", "/tmp/test/comp"),
),
patch(
"tasks.tasks.FindingOutput._transform_findings_stats",
@@ -281,7 +484,6 @@ class TestGenerateOutputs:
return_value=(
"/tmp/test/outdir",
"/tmp/test/compdir",
"/tmp/test/threatdir",
),
),
patch("tasks.tasks._compress_output_files", return_value="outdir.zip"),
@@ -360,7 +562,6 @@ class TestGenerateOutputs:
return_value=(
"/tmp/test/outdir",
"/tmp/test/compdir",
"/tmp/test/threatdir",
),
),
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
@@ -428,7 +629,7 @@ class TestGenerateOutputs:
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory",
return_value=("/tmp/test/out", "/tmp/test/comp", "/tmp/test/threat"),
return_value=("/tmp/test/out", "/tmp/test/comp"),
),
patch(
"tasks.tasks.FindingOutput._transform_findings_stats",
@@ -486,7 +687,7 @@ class TestGenerateOutputs:
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory",
return_value=("/tmp/test/out", "/tmp/test/comp", "/tmp/test/threat"),
return_value=("/tmp/test/out", "/tmp/test/comp"),
),
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
patch("tasks.tasks.FindingOutput.transform_api_finding"),
@@ -524,37 +725,55 @@ class TestGenerateOutputs:
class TestScanCompleteTasks:
@patch("tasks.tasks.aggregate_attack_surface_task.apply_async")
@patch("tasks.tasks.create_compliance_requirements_task.apply_async")
@patch("tasks.tasks.perform_scan_summary_task.si")
@patch("tasks.tasks.generate_outputs_task.si")
@patch("tasks.tasks.generate_threatscore_report_task.si")
@patch("tasks.tasks.generate_compliance_reports_task.si")
@patch("tasks.tasks.check_integrations_task.si")
def test_scan_complete_tasks(
self,
mock_check_integrations_task,
mock_threatscore_task,
mock_compliance_reports_task,
mock_outputs_task,
mock_scan_summary_task,
mock_compliance_tasks,
mock_compliance_requirements_task,
mock_attack_surface_task,
):
"""Test that scan complete tasks are properly orchestrated with optimized reports."""
_perform_scan_complete_tasks("tenant-id", "scan-id", "provider-id")
mock_compliance_tasks.assert_called_once_with(
# Verify compliance requirements task is called
mock_compliance_requirements_task.assert_called_once_with(
kwargs={"tenant_id": "tenant-id", "scan_id": "scan-id"},
)
# Verify attack surface task is called
mock_attack_surface_task.assert_called_once_with(
kwargs={"tenant_id": "tenant-id", "scan_id": "scan-id"},
)
# Verify scan summary task is called
mock_scan_summary_task.assert_called_once_with(
scan_id="scan-id",
tenant_id="tenant-id",
)
# Verify outputs task is called
mock_outputs_task.assert_called_once_with(
scan_id="scan-id",
provider_id="provider-id",
tenant_id="tenant-id",
)
mock_threatscore_task.assert_called_once_with(
# Verify optimized compliance reports task is called (replaces individual tasks)
mock_compliance_reports_task.assert_called_once_with(
tenant_id="tenant-id",
scan_id="scan-id",
provider_id="provider-id",
)
# Verify integrations task is called
mock_check_integrations_task.assert_called_once_with(
tenant_id="tenant-id",
provider_id="provider-id",
@@ -730,7 +949,7 @@ class TestCheckIntegrationsTask:
mock_initialize_provider.return_value = MagicMock()
mock_compliance_bulk.return_value = {}
mock_get_frameworks.return_value = []
mock_generate_dir.return_value = ("out-dir", "comp-dir", "threat-dir")
mock_generate_dir.return_value = ("out-dir", "comp-dir")
mock_transform_stats.return_value = {"stats": "data"}
# Mock findings
@@ -855,7 +1074,7 @@ class TestCheckIntegrationsTask:
mock_initialize_provider.return_value = MagicMock()
mock_compliance_bulk.return_value = {}
mock_get_frameworks.return_value = []
mock_generate_dir.return_value = ("out-dir", "comp-dir", "threat-dir")
mock_generate_dir.return_value = ("out-dir", "comp-dir")
mock_transform_stats.return_value = {"stats": "data"}
# Mock findings
@@ -971,7 +1190,7 @@ class TestCheckIntegrationsTask:
mock_initialize_provider.return_value = MagicMock()
mock_compliance_bulk.return_value = {}
mock_get_frameworks.return_value = []
mock_generate_dir.return_value = ("out-dir", "comp-dir", "threat-dir")
mock_generate_dir.return_value = ("out-dir", "comp-dir")
mock_transform_stats.return_value = {"stats": "data"}
# Mock findings
@@ -1097,3 +1316,402 @@ class TestCheckIntegrationsTask:
assert result is False
mock_upload.assert_called_once_with(self.tenant_id, self.provider_id, scan_id)
@pytest.mark.django_db
class TestCheckLighthouseProviderConnectionTask:
def setup_method(self):
self.tenant_id = str(uuid.uuid4())
@pytest.mark.parametrize(
"provider_type,credentials,base_url,expected_result",
[
(
LighthouseProviderConfiguration.LLMProviderChoices.OPENAI,
{"api_key": "sk-test123"},
None,
{"connected": True, "error": None},
),
(
LighthouseProviderConfiguration.LLMProviderChoices.OPENAI_COMPATIBLE,
{"api_key": "sk-test123"},
"https://openrouter.ai/api/v1",
{"connected": True, "error": None},
),
(
LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
{
"access_key_id": "AKIA123",
"secret_access_key": "secret",
"region": "us-east-1",
},
None,
{"connected": True, "error": None},
),
# Bedrock API key authentication
(
LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
{
"api_key": "ABSKQmVkcm9ja0FQSUtleS" + ("A" * 110),
"region": "us-east-1",
},
None,
{"connected": True, "error": None},
),
],
)
def test_check_connection_success_all_providers(
self, tenants_fixture, provider_type, credentials, base_url, expected_result
):
"""Test successful connection check for all provider types."""
# Create provider configuration
provider_cfg = LighthouseProviderConfiguration(
tenant_id=tenants_fixture[0].id,
provider_type=provider_type,
base_url=base_url,
is_active=False,
)
provider_cfg.credentials_decoded = credentials
provider_cfg.save()
# Mock the appropriate API calls
with (
patch("tasks.jobs.lighthouse_providers.openai.OpenAI") as mock_openai,
patch("tasks.jobs.lighthouse_providers.boto3.client") as mock_boto3,
):
mock_client = MagicMock()
mock_client.models.list.return_value = MagicMock()
mock_client.list_foundation_models.return_value = {}
mock_openai.return_value = mock_client
mock_boto3.return_value = mock_client
# Execute
result = check_lighthouse_provider_connection_task(
provider_config_id=str(provider_cfg.id),
tenant_id=str(tenants_fixture[0].id),
)
# Assert
assert result == expected_result
provider_cfg.refresh_from_db()
assert provider_cfg.is_active is True
@pytest.mark.parametrize(
"provider_type,credentials,base_url,exception_to_raise",
[
(
LighthouseProviderConfiguration.LLMProviderChoices.OPENAI,
{"api_key": "sk-invalid"},
None,
openai.AuthenticationError(
"Invalid API key", response=MagicMock(), body=None
),
),
(
LighthouseProviderConfiguration.LLMProviderChoices.OPENAI_COMPATIBLE,
{"api_key": "sk-invalid"},
"https://openrouter.ai/api/v1",
openai.APIConnectionError(request=MagicMock()),
),
(
LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
{
"access_key_id": "AKIA123",
"secret_access_key": "secret",
"region": "us-east-1",
},
None,
ClientError(
{"Error": {"Code": "AccessDenied", "Message": "Access Denied"}},
"list_foundation_models",
),
),
# Bedrock API key authentication failure
(
LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
{
"api_key": "ABSKQmVkcm9ja0FQSUtleS" + ("X" * 110),
"region": "us-east-1",
},
None,
ClientError(
{
"Error": {
"Code": "UnrecognizedClientException",
"Message": "Invalid API key",
}
},
"list_foundation_models",
),
),
],
)
def test_check_connection_api_failure(
self,
tenants_fixture,
provider_type,
credentials,
base_url,
exception_to_raise,
):
"""Test connection check when API calls fail."""
# Create provider configuration
provider_cfg = LighthouseProviderConfiguration(
tenant_id=tenants_fixture[0].id,
provider_type=provider_type,
base_url=base_url,
is_active=True,
)
provider_cfg.credentials_decoded = credentials
provider_cfg.save()
# Mock the API to raise exception
with (
patch("tasks.jobs.lighthouse_providers.openai.OpenAI") as mock_openai,
patch("tasks.jobs.lighthouse_providers.boto3.client") as mock_boto3,
):
mock_client = MagicMock()
if (
provider_type
== LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK
):
mock_client.list_foundation_models.side_effect = exception_to_raise
mock_boto3.return_value = mock_client
else:
mock_client.models.list.side_effect = exception_to_raise
mock_openai.return_value = mock_client
# Execute
result = check_lighthouse_provider_connection_task(
provider_config_id=str(provider_cfg.id),
tenant_id=str(tenants_fixture[0].id),
)
# Assert
assert result["connected"] is False
assert result["error"] is not None
provider_cfg.refresh_from_db()
assert provider_cfg.is_active is False
def test_check_connection_updates_active_status(self, tenants_fixture):
"""Test that connection check toggles is_active from True to False on failure."""
# Create provider with is_active=True
provider_cfg = LighthouseProviderConfiguration(
tenant_id=tenants_fixture[0].id,
provider_type=LighthouseProviderConfiguration.LLMProviderChoices.OPENAI,
base_url=None,
is_active=True,
)
provider_cfg.credentials_decoded = {"api_key": "sk-test123"}
provider_cfg.save()
# Mock API to fail
with patch("tasks.jobs.lighthouse_providers.openai.OpenAI") as mock_openai:
mock_client = MagicMock()
mock_client.models.list.side_effect = openai.AuthenticationError(
"Invalid", response=MagicMock(), body=None
)
mock_openai.return_value = mock_client
# Execute
result = check_lighthouse_provider_connection_task(
provider_config_id=str(provider_cfg.id),
tenant_id=str(tenants_fixture[0].id),
)
# Assert status changed
assert result["connected"] is False
provider_cfg.refresh_from_db()
assert provider_cfg.is_active is False
def test_check_connection_provider_does_not_exist(self, tenants_fixture):
"""Test that checking non-existent provider raises DoesNotExist."""
non_existent_id = str(uuid.uuid4())
with pytest.raises(LighthouseProviderConfiguration.DoesNotExist):
check_lighthouse_provider_connection_task(
provider_config_id=non_existent_id,
tenant_id=str(tenants_fixture[0].id),
)
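The connection tests above describe the task's contract rather than its internals: the configuration is loaded for the tenant (DoesNotExist propagates), the provider is probed with a cheap call (models.list() for OpenAI-style providers, list_foundation_models() for Bedrock), is_active is updated to reflect the outcome, and a {"connected", "error"} dict is returned. A rough sketch under those assumptions; probe_provider is a hypothetical helper, not part of the codebase:

# Illustrative sketch of the contract exercised by TestCheckLighthouseProviderConnectionTask;
# the real task's control flow and error formatting may differ.
def check_lighthouse_provider_connection_sketch(provider_config_id: str, tenant_id: str) -> dict:
    cfg = LighthouseProviderConfiguration.objects.get(  # DoesNotExist bubbles up to the caller
        id=provider_config_id, tenant_id=tenant_id
    )
    try:
        probe_provider(cfg)  # hypothetical helper: models.list() / list_foundation_models()
        cfg.is_active = True
        error = None
    except Exception as exc:  # auth, network and AWS client errors all mark the provider inactive
        cfg.is_active = False
        error = str(exc)
    cfg.save()
    return {"connected": error is None, "error": error}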
@pytest.mark.django_db
class TestRefreshLighthouseProviderModelsTask:
def setup_method(self):
self.tenant_id = str(uuid.uuid4())
@pytest.mark.parametrize(
"provider_type,credentials,base_url,mock_models,expected_count",
[
(
LighthouseProviderConfiguration.LLMProviderChoices.OPENAI,
{"api_key": "sk-test123"},
None,
{"gpt-5": "gpt-5", "gpt-4o": "gpt-4o"},
2,
),
(
LighthouseProviderConfiguration.LLMProviderChoices.OPENAI_COMPATIBLE,
{"api_key": "sk-test123"},
"https://openrouter.ai/api/v1",
{"model-1": "Model One", "model-2": "Model Two"},
2,
),
(
LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
{
"access_key_id": "AKIA123",
"secret_access_key": "secret",
"region": "us-east-1",
},
None,
{"openai.gpt-oss-120b-1:0": "gpt-oss-120b"},
1,
),
# Bedrock API key authentication
(
LighthouseProviderConfiguration.LLMProviderChoices.BEDROCK,
{
"api_key": "ABSKQmVkcm9ja0FQSUtleS" + ("A" * 110),
"region": "us-east-1",
},
None,
{"anthropic.claude-v3": "Claude 3"},
1,
),
],
)
def test_refresh_models_create_new(
self,
tenants_fixture,
provider_type,
credentials,
base_url,
mock_models,
expected_count,
):
"""Test creating new models for all provider types."""
# Create provider configuration
provider_cfg = LighthouseProviderConfiguration(
tenant_id=tenants_fixture[0].id,
provider_type=provider_type,
base_url=base_url,
is_active=True,
)
provider_cfg.credentials_decoded = credentials
provider_cfg.save()
# Mock the fetch functions
with (
patch(
"tasks.jobs.lighthouse_providers._fetch_openai_models",
return_value=mock_models,
),
patch(
"tasks.jobs.lighthouse_providers._fetch_openai_compatible_models",
return_value=mock_models,
),
patch(
"tasks.jobs.lighthouse_providers._fetch_bedrock_models",
return_value=mock_models,
),
):
# Execute
result = refresh_lighthouse_provider_models_task(
provider_config_id=str(provider_cfg.id),
tenant_id=str(tenants_fixture[0].id),
)
# Assert
assert result["created"] == expected_count
assert result["updated"] == 0
assert result["deleted"] == 0
assert (
LighthouseProviderModels.objects.filter(
provider_configuration=provider_cfg
).count()
== expected_count
)
def test_refresh_models_mixed_operations(self, tenants_fixture):
"""Test mixed create, update, and delete operations."""
# Create provider configuration
provider_cfg = LighthouseProviderConfiguration(
tenant_id=tenants_fixture[0].id,
provider_type=LighthouseProviderConfiguration.LLMProviderChoices.OPENAI,
base_url=None,
is_active=True,
)
provider_cfg.credentials_decoded = {"api_key": "sk-test123"}
provider_cfg.save()
# Create 2 existing models (A, B)
LighthouseProviderModels.objects.create(
tenant_id=tenants_fixture[0].id,
provider_configuration=provider_cfg,
model_id="model-a",
model_name="Model A",
)
LighthouseProviderModels.objects.create(
tenant_id=tenants_fixture[0].id,
provider_configuration=provider_cfg,
model_id="model-b",
model_name="Model B",
)
# Mock API to return models B (existing), C (new) - A will be deleted
mock_models = {"model-b": "Model B", "model-c": "Model C"}
with patch(
"tasks.jobs.lighthouse_providers._fetch_openai_models",
return_value=mock_models,
):
# Execute
result = refresh_lighthouse_provider_models_task(
provider_config_id=str(provider_cfg.id),
tenant_id=str(tenants_fixture[0].id),
)
# Assert
assert result["created"] == 1 # model-c created
assert result["updated"] == 1 # model-b updated
assert result["deleted"] == 1 # model-a deleted
# Verify only B and C exist
remaining_models = LighthouseProviderModels.objects.filter(
provider_configuration=provider_cfg
)
assert remaining_models.count() == 2
assert set(remaining_models.values_list("model_id", flat=True)) == {
"model-b",
"model-c",
}
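test_refresh_models_mixed_operations implies a straightforward reconciliation of the fetched model list against the stored LighthouseProviderModels rows: existing ids are updated (and counted as updated even if the name is unchanged), new ids are created, and ids the provider no longer returns are deleted. A sketch of that reconciliation, with the helper name assumed for illustration:

# Illustrative reconciliation sketch matching the created/updated/deleted counts asserted above.
def _sync_models_sketch(provider_cfg, fetched: dict) -> dict:
    existing = {
        m.model_id: m
        for m in LighthouseProviderModels.objects.filter(provider_configuration=provider_cfg)
    }
    created = updated = 0
    for model_id, model_name in fetched.items():
        if model_id in existing:
            row = existing.pop(model_id)
            row.model_name = model_name
            row.save()
            updated += 1  # counted as updated even when the name did not change
        else:
            LighthouseProviderModels.objects.create(
                tenant_id=provider_cfg.tenant_id,
                provider_configuration=provider_cfg,
                model_id=model_id,
                model_name=model_name,
            )
            created += 1
    deleted = len(existing)
    for row in existing.values():  # anything the provider no longer lists is removed
        row.delete()
    return {"created": created, "updated": updated, "deleted": deleted}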
def test_refresh_models_api_exception(self, tenants_fixture):
"""Test refresh when API raises an exception."""
# Create provider configuration
provider_cfg = LighthouseProviderConfiguration(
tenant_id=tenants_fixture[0].id,
provider_type=LighthouseProviderConfiguration.LLMProviderChoices.OPENAI,
base_url=None,
is_active=True,
)
provider_cfg.credentials_decoded = {"api_key": "sk-test123"}
provider_cfg.save()
# Mock fetch to raise exception
with patch(
"tasks.jobs.lighthouse_providers._fetch_openai_models",
side_effect=openai.APIError("API Error", request=MagicMock(), body=None),
):
# Execute
result = refresh_lighthouse_provider_models_task(
provider_config_id=str(provider_cfg.id),
tenant_id=str(tenants_fixture[0].id),
)
# Assert
assert result["created"] == 0
assert result["updated"] == 0
assert result["deleted"] == 0
assert "error" in result
assert result["error"] is not None
