Compare commits


102 Commits

Author SHA1 Message Date
Alan Buscaglia 173659ae0b fix(ci): prevent grep exit code 1 from failing empty dir check
- Add || true to grep -v that filters empty lines from VALID_PATHS
- GitHub Actions bash uses set -eo pipefail, so grep returning 1 (no
  matches) killed the script before reaching the graceful exit 0
2026-03-12 11:33:33 +01:00
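A minimal sketch of the failure mode this commit describes, using the same variable names as the path-resolution script further down in this compare:

    set -eo pipefail
    # grep -v exits 1 when its output is empty, which aborts the script here:
    VALID_PATHS=$(echo "$VALID_PATHS" | grep -v '^$')
    # with the fix, the non-zero exit is swallowed and the empty case falls
    # through to the graceful skip branch instead:
    VALID_PATHS=$(echo "$VALID_PATHS" | grep -v '^$' || true)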
Alan Buscaglia 37abe6b47e test(ci): add path resolution tests and architecture documentation
- Add test script for E2E path resolution logic (18 edge case scenarios)
- Add developer guide for test impact analysis system
2026-03-12 10:13:00 +01:00
Alan Buscaglia 7b2ad97317 fix(ci): gracefully skip E2E when test directories are empty
- Filter out resolved test paths that contain no spec/test files
- Exit gracefully instead of failing with Playwright "No tests found"
- Supports forward-looking e2e patterns for modules without tests yet
2026-03-12 09:58:41 +01:00
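A hedged sketch of the skip logic this commit describes, assuming the same *.spec.ts/*.test.ts suffixes that the test script later in this compare checks for:

    VALID_PATHS=""
    while IFS= read -r p; do
      # keep only directories that actually contain a runnable test file
      if find "$p" -name '*.spec.ts' -o -name '*.test.ts' 2>/dev/null | grep -q .; then
        VALID_PATHS="${VALID_PATHS}${p}"$'\n'
      fi
    done <<< "$TEST_PATHS"
    if [[ -z "$VALID_PATHS" ]]; then
      echo "No E2E test files in resolved paths; skipping."
      exit 0   # graceful exit instead of a Playwright "No tests found" failure
    fi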
Josema Camacho 628a076118 docs(attack-paths): add module docstring to scan orchestrator (#10277) 2026-03-12 08:49:48 +01:00
Daniel Barranquero b08cb8ffb3 fix(csv): move OU columns to the end (#10307) 2026-03-12 08:28:52 +01:00
Josema Camacho 57bcb74d0d fix(api): upgrade Cartography to 0.132.0 to fix exposed_internet on ELB/ELBv2 nodes (#10272)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-11 18:12:43 +01:00
Raajhesh Kannaa Chidambaram 39385567fc feat(organizations): add OU metadata to outputs (#10283)
Co-authored-by: Raajhesh Kannaa Chidambaram <495042+raajheshkannaa@users.noreply.github.com>
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-11 16:41:44 +01:00
Alan Buscaglia 125ba830f7 fix(ci): prevent E2E auth setups from running on broad path matches (#10304) 2026-03-11 15:38:18 +01:00
Alejandro Bailo db7554c8fb feat(ui): redesign providers page with modern table and cloud recursion (#10292) 2026-03-11 13:13:28 +01:00
lydiavilchez 65a7098104 feat(api): add Google Workspace provider API integration (#10247) 2026-03-11 12:06:30 +01:00
Daniel Barranquero e28bde797f feat(openstack): object storage service with 7 new checks (#10258) 2026-03-11 12:00:43 +01:00
Rubén De la Torre Vico cc0d83de91 docs(mcp_server): add Attack Paths MCP tools documentation (#10302)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 10:11:37 +01:00
Utwo e40beee315 feat: Helm CD (#10079)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-03-11 10:07:22 +01:00
Daniel Barranquero e9855bbf2f docs: update mutelist docs (#10296) 2026-03-10 16:58:31 +01:00
Daniel Barranquero 2768b7ad4e docs: update readme and docs with new providers (#10295) 2026-03-10 16:58:08 +01:00
Josema Camacho 57f3920e66 refactor(api): migrate Attack Paths network exposure queries from APOC to openCypher (#10266) 2026-03-10 16:48:16 +01:00
Josema Camacho 3288a4a131 fix(api): add missing logging for Attack Paths query execution and scan error handling (#10269) 2026-03-10 16:47:53 +01:00
Michael Wentz c4d692f77b feat(guardduty): add org-wide delegated admin check across all regions (#9867)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-10 12:56:00 +01:00
Adrián Peña 344a098ddc docs: document required permissions for mutelist features (#10294) 2026-03-10 12:20:25 +01:00
Eran Cohen 0b461233c1 feat(iam): Add trusted IP configurable option to reduce false positives in 'opensearch' check (#8631)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-10 12:12:54 +01:00
Pepe Fagoaga d3213e9f1e chore(providers): Return 409 on conflict (#10293)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-10 10:54:09 +01:00
Alejandro Bailo e4bccfb26e chore(ui): move security changelog entry from v19.1 to v20 (#10291) 2026-03-10 09:54:30 +01:00
Rubén De la Torre Vico e3e2408717 chore(m365): enhance metadata for purview service (#9092)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-09 20:42:33 +01:00
Rubén De la Torre Vico 20efe001ff chore(m365): enhance metadata for defender service (#9681)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-09 20:13:45 +01:00
Rubén De la Torre Vico 9b64efeec2 chore(m365): enhance metadata for admincenter service (#9680)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-09 19:48:23 +01:00
Pedro Martín 23a8d4e680 feat(ui): improve organizations onboarding (#10274) 2026-03-09 16:54:50 +01:00
Daniel Barranquero 809142de35 chore(alibaba): update all metadata files (#10289) 2026-03-09 16:37:19 +01:00
Alejandro Bailo 1e95b48c86 fix(ui): rename error text token to text-text-error-primary (#10285) 2026-03-09 13:36:31 +01:00
Pepe Fagoaga 5a062b19dc chore: remove SaaS reference in dashboard (#10288) 2026-03-09 13:14:19 +01:00
Rubén De la Torre Vico b60867c5b6 chore(oraclecloud): enhance metadata for identity service (#9375)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-06 14:12:06 +01:00
Rubén De la Torre Vico 25c982d915 chore(oraclecloud): enhance metadata for events service (#9373)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-06 13:46:13 +01:00
Alejandro Bailo 2e60bb82d5 fix(ui): skip launch step when updating provider credentials (#10278) 2026-03-06 13:39:25 +01:00
Rubén De la Torre Vico ab92755e47 chore(oraclecloud): enhance metadata for objectstorage service (#9379)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-06 13:17:14 +01:00
Rubén De la Torre Vico 2e236a2cd1 chore(oraclecloud): enhance metadata for network service (#9378)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-06 13:05:51 +01:00
Rubén De la Torre Vico be6d1823c9 chore(oraclecloud): enhance metadata for kms service (#9377)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-06 12:39:01 +01:00
Pedro Martín 86daf7bc05 fix(pdf): align ENS report requirement status (#10270) 2026-03-06 12:36:50 +01:00
Rubén De la Torre Vico 1a6285c6a0 chore(oraclecloud): enhance metadata for integration service (#9376)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-06 12:30:50 +01:00
Alejandro Bailo acc6f731b4 chore(ui): update changelog for v1.20.0 (#10275) 2026-03-06 12:26:59 +01:00
Rubén De la Torre Vico 6aa524c47d chore(oraclecloud): enhance metadata for filestorage service (#9374)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-06 12:21:45 +01:00
Rubén De la Torre Vico ca992006b8 chore(oraclecloud): enhance metadata for database service (#9372)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-06 12:10:14 +01:00
Rubén De la Torre Vico 77c70114dc chore(oraclecloud): enhance metadata for compute service (#9371)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-06 12:01:15 +01:00
Daniel Barranquero 7ae14ea1ac chore(github): enhance metadata for 'organization' service (#10273) 2026-03-06 11:02:45 +01:00
Alejandro Bailo 48df613095 feat(ui): improve attack paths page layout and UX (#10249) 2026-03-06 10:49:11 +01:00
Rubén De la Torre Vico 97f4cb716d chore(github): enhance metadata for repository service (#9659)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-06 10:36:07 +01:00
Alejandro Bailo b1c5fa4c46 refactor(ui): migrate provider wizard forms from HeroUI to shadcn (#10259) 2026-03-06 10:13:47 +01:00
Rubén De la Torre Vico cc02c6f880 chore(mongodbatlas): enhance metadata for clusters service (#9657)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-06 10:09:24 +01:00
Rubén De la Torre Vico d5827f3e83 chore(mongodbatlas): enhance metadata for organizations service (#9658)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-06 09:58:38 +01:00
Hugo Pereira Brito 9cf63a2a68 feat(m365): add custom entra_conditional_access_policy_compliant_device_hybrid_joined_device_mfa_required check (#10197)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-05 18:11:20 +01:00
Alejandro Bailo e2fe482238 fix(ui): bump pnpm overrides to resolve 11 npm security vulnerabilities (#10267) 2026-03-05 14:00:44 +01:00
Pedro Martín 72938ca797 docs(aws): improve organizations (#10265) 2026-03-05 12:56:42 +01:00
Rubén De la Torre Vico fe9dbdfd2c chore(kubernetes): enhance metadata for scheduler service (#9679)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-05 12:08:47 +01:00
Rubén De la Torre Vico a5763289dd chore(kubernetes): enhance metadata for rbac service (#9678)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-05 11:57:18 +01:00
Rubén De la Torre Vico 36f4daf646 chore(kubernetes): enhance metadata for kubelet service (#9677)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-05 11:36:50 +01:00
Rubén De la Torre Vico 4a2d8111bc chore(kubernetes): enhance metadata for core service (#9676)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-05 11:24:54 +01:00
Hugo Pereira Brito 726b5665d0 feat(m365): add entra_conditional_access_policy_approved_client_app_required_for_mobile security check (#10216)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-05 10:58:18 +01:00
Rubén De la Torre Vico 5968441f59 chore(kubernetes): enhance metadata for controllermanager service (#9675)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-05 10:44:22 +01:00
Rubén De la Torre Vico 6069d6e231 chore(kubernetes): enhance metadata for apiserver service (#9674)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2026-03-05 10:29:27 +01:00
Daniel Barranquero 9a4167d947 feat(docs): add Prowler Cloud docs to Openstack getting started (#10100)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2026-03-05 10:13:34 +01:00
Prowler Bot 43792f39c8 docs: Update version to v5.19.0 (#10255)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-03-04 21:18:41 +01:00
Prowler Bot 4e80e0564d chore(api): Bump version to v1.21.0 (#10254)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-03-04 21:18:34 +01:00
Prowler Bot a81931bb35 chore(release): Bump version to v5.20.0 (#10252)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2026-03-04 21:18:24 +01:00
Hugo Pereira Brito 6ad991c63c docs(docs): add Prowler Cloud documentation for Cloudflare provider (#10151)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2026-03-04 17:36:21 +01:00
mintlify[bot] 104a4a92c3 docs: Add OCSF field requirements for Prowler Cloud integration (#10245)
Co-authored-by: mintlify[bot] <109931778+mintlify[bot]@users.noreply.github.com>
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2026-03-04 11:59:22 +01:00
Pedro Martín 62988821a7 chore(mcp_server): update for release 5.19 (#10248) 2026-03-04 11:46:15 +01:00
Pepe Fagoaga 7a712d5fef chore(changelog): review latest entries (#10246) 2026-03-04 11:26:53 +01:00
Josema Camacho 8a3d27139a docs: add Attack Paths UI documentation (#10230)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2026-03-04 10:54:45 +01:00
Alejandro Bailo 73415e2f8a chore(ui): improve provider wizard docs link labels (#10244) 2026-03-04 09:33:32 +01:00
Andoni Alonso e8d2b4a189 fix(iac): include resource line range in finding UID to prevent duplicates (#10241)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-03-03 17:40:36 +01:00
Andoni Alonso b61b6cba53 feat(sdk): add provider identity fields to OCSF unmapped output (#10240)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-03-03 16:42:08 +01:00
Pepe Fagoaga 71ee4213b3 chore(ingestions): rename flag, update docs (#10236) 2026-03-03 15:04:34 +01:00
Hugo Pereira Brito e96ea54f3b feat(m365): add entra_break_glass_users_fido2_security_key_registered security check (#10213)
Co-authored-by: Daniel Barranquero <74871504+danibarranqueroo@users.noreply.github.com>
2026-03-03 13:58:44 +01:00
Andoni Alonso dfca97633e feat(sdk): add provider_uid to OCSF unmapped output (#10231)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-03-03 13:35:58 +01:00
Daniel Barranquero 3538e7accf chore: modify Cloudflare account and resource UIDs (#10227)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2026-03-03 13:09:38 +01:00
Hugo Pereira Brito 548a137046 feat(m365): add entra_authentication_method_sms_voice_disabled security check (#10212) 2026-03-03 13:08:02 +01:00
Daniel Barranquero 012fd84cb0 chore: add provider-uid flag for iac provider (#10233)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-03-03 13:07:15 +01:00
Hugo Pereira Brito 8f3e69f571 docs(tutorials): add note about latest scan results in Overview and Resources (#10221)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 12:58:05 +01:00
Pepe Fagoaga 9c2cb5efa8 fix(elbv2): Handle post-quantum (PQ) TLS policies (#10219) 2026-03-03 10:18:00 +01:00
Pepe Fagoaga fa93cabc0b chore: print OCSF import result in the CLI (#10229) 2026-03-03 10:17:04 +01:00
Andoni Alonso efcbbf63c2 docs: review and fix documentation coverage for provider CLI flags (#10040) 2026-03-03 09:57:05 +01:00
Harsh Mishra 150abce4a8 fix(aws): respect AWS_ENDPOINT_URL for STS session creation (#10228)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-03-03 08:25:59 +01:00
Daniel Barranquero dcf74113fc chore: modify M365 and Github account UIDs (#10226) 2026-03-02 17:22:09 +01:00
mintlify[bot] 42f9b5fb2f docs: rename Findings Ingestion to Import Findings (#10224)
Co-authored-by: mintlify[bot] <109931778+mintlify[bot]@users.noreply.github.com>
2026-03-02 16:25:06 +01:00
Alejandro Bailo c74fa131ea fix(ui): navigate to launch step after successful test in update mode (#10223) 2026-03-02 15:59:16 +01:00
Hugo Pereira Brito 07dea4f402 refactor(m365): rename conditional access policy checks to include policy prefix (#10217) 2026-03-02 13:41:24 +01:00
Pepe Fagoaga c71ae75c70 chore(changelog): release v5.19.0 (#10180) 2026-03-02 13:24:03 +01:00
Daniel Barranquero b21ded6d46 feat(openstack): add image service with 6 checks (#10096) 2026-03-02 12:47:49 +01:00
Daniel Barranquero 8eddb48b16 feat(openstack): add blockstorage service with 7 checks (#10120)
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
2026-03-02 12:08:08 +01:00
Daniel Barranquero d3ba93f0c0 feat(openstack): add networking service with 6 checks (#9970)
Co-authored-by: Andoni Alonso <14891798+andoniaf@users.noreply.github.com>
2026-03-02 11:55:37 +01:00
Andoni Alonso 8adb4f43ad chore: bump Trivy to 0.69.2 (#10210) 2026-03-02 09:54:34 +01:00
Pepe Fagoaga 8af9b333c9 ci: restore persist credentials when no output is generated (#10211) 2026-03-02 09:14:02 +01:00
Pepe Fagoaga 4e71a9dcf1 ci(security): Add zizmor (#10208) 2026-03-02 08:25:13 +01:00
Pepe Fagoaga 7adcbed727 fix(ci): zizmor security improvements (#10207) 2026-03-02 08:24:51 +01:00
Andoni Alonso 8be218b29f fix(ci): harden GitHub Actions workflows against expression injection (#10200) 2026-03-01 19:58:43 +01:00
Alejandro Bailo 80e84d1da4 fix(ui): stabilize provider wizard modal and DataTable rendering (#10194) 2026-02-27 14:35:13 +01:00
mintlify[bot] fff80a920b chore(docs): Add Reo tracking beacon (#10193)
Co-authored-by: mintlify[bot] <109931778+mintlify[bot]@users.noreply.github.com>
2026-02-27 13:07:46 +01:00
mintlify[bot] 90a4579230 docs(install): Add missing notes for Docker Compose installation (#10192)
Co-authored-by: mintlify[bot] <109931778+mintlify[bot]@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2026-02-27 12:53:59 +01:00
Pedro Martín 2f44be8db4 docs(aws): add AWS Organizations (#10183) 2026-02-27 12:28:16 +01:00
Alejandro Bailo 288593d01e fix(ui): patch npm transitive dependency vulnerabilities (#10187) 2026-02-27 10:31:20 +01:00
Alejandro Bailo ddb6c03c0e test(ui): fix provider E2E test selectors and reliability (#10178) 2026-02-27 10:12:54 +01:00
mintlify[bot] 79d4476713 docs(import): Add billing impact section to Findings Import (#10186)
Co-authored-by: mintlify[bot] <109931778+mintlify[bot]@users.noreply.github.com>
2026-02-27 10:11:16 +01:00
Anthony 06f6e8b99b fix(ui): apply provider/account filters to Findings Severity Over Time chart (#10103)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2026-02-27 10:10:47 +01:00
Adrián Peña 8ee4a9e3fc fix(sdk): scope scan_id by provider and account (#10184) 2026-02-26 19:19:29 +01:00
724 changed files with 30758 additions and 6100 deletions
@@ -35,7 +35,9 @@ runs:
shell: bash
run: |
python -m pip install --upgrade pip
pipx install poetry==${{ inputs.poetry-version }}
pipx install poetry==${INPUTS_POETRY_VERSION}
env:
INPUTS_POETRY_VERSION: ${{ inputs.poetry-version }}
- name: Update poetry.lock with latest Prowler commit
if: github.repository_owner == 'prowler-cloud' && github.repository != 'prowler-cloud/prowler'
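This hunk is the first of many in this compare applying the same hardening: ${{ ... }} expressions are moved out of run: scripts and into env: mappings, so attacker-influenced values reach bash as environment data rather than being spliced into the script text (the expression-injection class that zizmor flags). A generic sketch, with the step and input names purely illustrative:

    - name: Example hardened step
      shell: bash
      run: |
        # plain shell expansion; the value can no longer rewrite the script
        pipx install "some-tool==${TOOL_VERSION}"
      env:
        TOOL_VERSION: ${{ inputs.tool-version }}  # expression confined to env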
+10 -5
@@ -26,16 +26,18 @@ runs:
id: status
shell: bash
run: |
if [[ "${{ inputs.step-outcome }}" == "success" ]]; then
if [[ "${INPUTS_STEP_OUTCOME}" == "success" ]]; then
echo "STATUS_TEXT=Completed" >> $GITHUB_ENV
echo "STATUS_COLOR=#6aa84f" >> $GITHUB_ENV
elif [[ "${{ inputs.step-outcome }}" == "failure" ]]; then
elif [[ "${INPUTS_STEP_OUTCOME}" == "failure" ]]; then
echo "STATUS_TEXT=Failed" >> $GITHUB_ENV
echo "STATUS_COLOR=#fc3434" >> $GITHUB_ENV
else
# No outcome provided - pending/in progress state
echo "STATUS_COLOR=#dbab09" >> $GITHUB_ENV
fi
env:
INPUTS_STEP_OUTCOME: ${{ inputs.step-outcome }}
- name: Send Slack notification (new message)
if: inputs.update-ts == ''
@@ -67,8 +69,11 @@ runs:
id: slack-notification
shell: bash
run: |
if [[ "${{ inputs.update-ts }}" == "" ]]; then
echo "ts=${{ steps.slack-notification-post.outputs.ts }}" >> $GITHUB_OUTPUT
if [[ "${INPUTS_UPDATE_TS}" == "" ]]; then
echo "ts=${STEPS_SLACK_NOTIFICATION_POST_OUTPUTS_TS}" >> $GITHUB_OUTPUT
else
echo "ts=${{ inputs.update-ts }}" >> $GITHUB_OUTPUT
echo "ts=${INPUTS_UPDATE_TS}" >> $GITHUB_OUTPUT
fi
env:
INPUTS_UPDATE_TS: ${{ inputs.update-ts }}
STEPS_SLACK_NOTIFICATION_POST_OUTPUTS_TS: ${{ steps.slack-notification-post.outputs.ts }}
+13 -5
@@ -54,7 +54,7 @@ runs:
trivy-db-${{ runner.os }}-
- name: Run Trivy vulnerability scan (JSON)
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # v0.33.1
uses: aquasecurity/trivy-action@e368e328979b113139d6f9068e03accaed98a518 # 0.34.1
with:
image-ref: ${{ inputs.image-name }}:${{ inputs.image-tag }}
format: 'json'
@@ -63,10 +63,11 @@ runs:
exit-code: '0'
scanners: 'vuln'
timeout: '5m'
version: 'v0.69.2'
- name: Run Trivy vulnerability scan (SARIF)
if: inputs.upload-sarif == 'true' && github.event_name == 'push'
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # v0.33.1
uses: aquasecurity/trivy-action@e368e328979b113139d6f9068e03accaed98a518 # 0.34.1
with:
image-ref: ${{ inputs.image-name }}:${{ inputs.image-tag }}
format: 'sarif'
@@ -75,6 +76,7 @@ runs:
exit-code: '0'
scanners: 'vuln'
timeout: '5m'
version: 'v0.69.2'
- name: Upload Trivy results to GitHub Security tab
if: inputs.upload-sarif == 'true' && github.event_name == 'push'
@@ -105,11 +107,14 @@ runs:
echo "### 🔒 Container Security Scan" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Image:** \`${{ inputs.image-name }}:${{ inputs.image-tag }}\`" >> $GITHUB_STEP_SUMMARY
echo "**Image:** \`${INPUTS_IMAGE_NAME}:${INPUTS_IMAGE_TAG}\`" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "- 🔴 Critical: $CRITICAL" >> $GITHUB_STEP_SUMMARY
echo "- 🟠 High: $HIGH" >> $GITHUB_STEP_SUMMARY
echo "- **Total**: $TOTAL" >> $GITHUB_STEP_SUMMARY
env:
INPUTS_IMAGE_NAME: ${{ inputs.image-name }}
INPUTS_IMAGE_TAG: ${{ inputs.image-tag }}
- name: Comment scan results on PR
if: inputs.create-pr-comment == 'true' && github.event_name == 'pull_request'
@@ -123,7 +128,7 @@ runs:
const comment = require('./.github/scripts/trivy-pr-comment.js');
// Unique identifier to find our comment
const marker = '<!-- trivy-scan-comment:${{ inputs.image-name }} -->';
const marker = `<!-- trivy-scan-comment:${process.env.IMAGE_NAME} -->`;
const body = marker + '\n' + comment;
// Find existing comment
@@ -159,6 +164,9 @@ runs:
if: inputs.fail-on-critical == 'true' && steps.security-check.outputs.critical != '0'
shell: bash
run: |
echo "::error::Found ${{ steps.security-check.outputs.critical }} critical vulnerabilities"
echo "::error::Found ${STEPS_SECURITY_CHECK_OUTPUTS_CRITICAL} critical vulnerabilities"
echo "::warning::Please update packages or use a different base image"
exit 1
env:
STEPS_SECURITY_CHECK_OUTPUTS_CRITICAL: ${{ steps.security-check.outputs.critical }}
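Alongside the env hardening, this hunk bumps trivy-action while keeping it pinned to a full commit SHA, with the human-readable version as a trailing comment, so a moved or re-pushed tag cannot silently change the code that runs. The general form (action name and SHA here are placeholders):

    - uses: some-org/some-action@0123456789abcdef0123456789abcdef01234567 # v1.2.3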
+6
@@ -15,6 +15,8 @@ updates:
labels:
- "dependencies"
- "pip"
cooldown:
default-days: 7
# Dependabot Updates are temporarily disabled - 2025/03/19
# - package-ecosystem: "pip"
@@ -37,6 +39,8 @@ updates:
labels:
- "dependencies"
- "github_actions"
cooldown:
default-days: 7
# Dependabot Updates are temporarily disabled - 2025/03/19
# - package-ecosystem: "npm"
@@ -59,6 +63,8 @@ updates:
labels:
- "dependencies"
- "docker"
cooldown:
default-days: 7
# Dependabot Updates are temporarily disabled - 2025/04/15
# v4.6
+350
@@ -0,0 +1,350 @@
#!/usr/bin/env bash
#
# Test script for E2E test path resolution logic from ui-e2e-tests-v2.yml.
# Validates that the shell logic correctly transforms E2E_TEST_PATHS into
# Playwright-compatible paths.
#
# Usage: .github/scripts/test-e2e-path-resolution.sh
set -euo pipefail
# -- Colors ------------------------------------------------------------------
RED='\033[0;31m'
GREEN='\033[0;32m'
BOLD='\033[1m'
RESET='\033[0m'
# -- Counters ----------------------------------------------------------------
TOTAL=0
PASSED=0
FAILED=0
# -- Temp directory setup & cleanup ------------------------------------------
TMPDIR_ROOT="$(mktemp -d)"
trap 'rm -rf "$TMPDIR_ROOT"' EXIT
# ---------------------------------------------------------------------------
# create_test_tree DIR [SUBDIRS_WITH_TESTS...]
#
# Creates a fake ui/tests/ tree inside DIR.
# All standard subdirs are created (empty).
# For each name in SUBDIRS_WITH_TESTS, a fake .spec.ts file is placed inside.
# ---------------------------------------------------------------------------
create_test_tree() {
local base="$1"; shift
local all_subdirs=(
auth home invitations profile providers scans
setups sign-in-base sign-up attack-paths findings
compliance browse manage-groups roles users overview
integrations
)
for d in "${all_subdirs[@]}"; do
mkdir -p "${base}/tests/${d}"
done
# Populate requested subdirs with a fake test file
for d in "$@"; do
mkdir -p "${base}/tests/${d}"
touch "${base}/tests/${d}/example.spec.ts"
done
}
# ---------------------------------------------------------------------------
# resolve_paths E2E_TEST_PATHS WORKING_DIR
#
# Extracted EXACT logic from .github/workflows/ui-e2e-tests-v2.yml lines 212-250.
# Outputs space-separated TEST_PATHS, or "SKIP" if no tests found.
# Must be run with WORKING_DIR as the cwd equivalent (we cd into it).
# ---------------------------------------------------------------------------
resolve_paths() {
local E2E_TEST_PATHS="$1"
local WORKING_DIR="$2"
(
cd "$WORKING_DIR"
# --- Line 212-214: strip ui/ prefix, strip **, deduplicate ---------------
TEST_PATHS="${E2E_TEST_PATHS}"
TEST_PATHS=$(echo "$TEST_PATHS" | sed 's|ui/||g' | sed 's|\*\*||g' | tr ' ' '\n' | sort -u)
# --- Line 216: drop setup helpers ----------------------------------------
TEST_PATHS=$(echo "$TEST_PATHS" | grep -v '^tests/setups/' || true)
# --- Lines 219-230: safety net for bare tests/ --------------------------
if echo "$TEST_PATHS" | grep -qx 'tests/'; then
SPECIFIC_DIRS=""
for dir in tests/*/; do
[[ "$dir" == "tests/setups/" ]] && continue
SPECIFIC_DIRS="${SPECIFIC_DIRS}${dir}"$'\n'
done
TEST_PATHS=$(echo "$TEST_PATHS" | grep -vx 'tests/' || true)
TEST_PATHS="${TEST_PATHS}"$'\n'"${SPECIFIC_DIRS}"
TEST_PATHS=$(echo "$TEST_PATHS" | grep -v '^$' | sort -u)
fi
# --- Lines 231-234: bail if empty ----------------------------------------
if [[ -z "$TEST_PATHS" ]]; then
echo "SKIP"
return
fi
# --- Lines 236-245: filter dirs with no test files -----------------------
VALID_PATHS=""
while IFS= read -r p; do
[[ -z "$p" ]] && continue
if find "$p" -name '*.spec.ts' -o -name '*.test.ts' 2>/dev/null | head -1 | grep -q .; then
VALID_PATHS="${VALID_PATHS}${p}"$'\n'
fi
done <<< "$TEST_PATHS"
VALID_PATHS=$(echo "$VALID_PATHS" | grep -v '^$' || true)
# --- Lines 246-249: bail if all empty ------------------------------------
if [[ -z "$VALID_PATHS" ]]; then
echo "SKIP"
return
fi
# --- Line 250: final output (space-separated) ---------------------------
echo "$VALID_PATHS" | tr '\n' ' ' | sed 's/ $//'
)
}
# ---------------------------------------------------------------------------
# run_test NAME INPUT EXPECTED_TYPE [EXPECTED_VALUE]
#
# EXPECTED_TYPE is one of:
# "contains <path>" — output must contain this path
# "equals <value>" — output must exactly equal this value
# "skip" — expect SKIP (no runnable tests)
# "not_contains <p>" — output must NOT contain this path
#
# Multiple expectations can be specified by calling _check again after run_test.
# For convenience, run_test supports a single assertion inline.
# ---------------------------------------------------------------------------
CURRENT_RESULT=""
CURRENT_TEST_NAME=""
run_test() {
local name="$1"
local input="$2"
local expect_type="$3"
local expect_value="${4:-}"
TOTAL=$((TOTAL + 1))
CURRENT_TEST_NAME="$name"
# Create a fresh temp tree per test
local test_dir="${TMPDIR_ROOT}/test_${TOTAL}"
mkdir -p "$test_dir"
# Default populated dirs: scans, providers, auth, home, profile, sign-up, sign-in-base
create_test_tree "$test_dir" scans providers auth home profile sign-up sign-in-base
CURRENT_RESULT=$(resolve_paths "$input" "$test_dir")
_check "$expect_type" "$expect_value"
}
# Like run_test but lets caller specify which subdirs have test files.
run_test_custom_tree() {
local name="$1"
local input="$2"
local expect_type="$3"
local expect_value="${4:-}"
shift 4
local populated_dirs=("$@")
TOTAL=$((TOTAL + 1))
CURRENT_TEST_NAME="$name"
local test_dir="${TMPDIR_ROOT}/test_${TOTAL}"
mkdir -p "$test_dir"
create_test_tree "$test_dir" "${populated_dirs[@]}"
CURRENT_RESULT=$(resolve_paths "$input" "$test_dir")
_check "$expect_type" "$expect_value"
}
_check() {
local expect_type="$1"
local expect_value="$2"
case "$expect_type" in
skip)
if [[ "$CURRENT_RESULT" == "SKIP" ]]; then
_pass
else
_fail "expected SKIP, got: '$CURRENT_RESULT'"
fi
;;
contains)
if [[ "$CURRENT_RESULT" == *"$expect_value"* ]]; then
_pass
else
_fail "expected to contain '$expect_value', got: '$CURRENT_RESULT'"
fi
;;
not_contains)
if [[ "$CURRENT_RESULT" != *"$expect_value"* ]]; then
_pass
else
_fail "expected NOT to contain '$expect_value', got: '$CURRENT_RESULT'"
fi
;;
equals)
if [[ "$CURRENT_RESULT" == "$expect_value" ]]; then
_pass
else
_fail "expected exactly '$expect_value', got: '$CURRENT_RESULT'"
fi
;;
*)
_fail "unknown expect_type: $expect_type"
;;
esac
}
_pass() {
PASSED=$((PASSED + 1))
printf '%b PASS%b %s\n' "$GREEN" "$RESET" "$CURRENT_TEST_NAME"
}
_fail() {
FAILED=$((FAILED + 1))
printf '%b FAIL%b %s\n' "$RED" "$RESET" "$CURRENT_TEST_NAME"
printf " %s\n" "$1"
}
# ===========================================================================
# TEST CASES
# ===========================================================================
echo ""
printf '%bE2E Path Resolution Tests%b\n' "$BOLD" "$RESET"
echo "=========================================="
# 1. Normal single module
run_test \
"1. Normal single module" \
"ui/tests/scans/**" \
"contains" "tests/scans/"
# 2. Multiple modules
run_test \
"2. Multiple modules — scans present" \
"ui/tests/scans/** ui/tests/providers/**" \
"contains" "tests/scans/"
run_test \
"2. Multiple modules — providers present" \
"ui/tests/scans/** ui/tests/providers/**" \
"contains" "tests/providers/"
# 3. Broad pattern (many modules)
run_test \
"3. Broad pattern — no bare tests/" \
"ui/tests/auth/** ui/tests/scans/** ui/tests/providers/** ui/tests/home/** ui/tests/profile/**" \
"not_contains" "tests/ "
# 4. Empty directory
run_test \
"4. Empty directory — skipped" \
"ui/tests/attack-paths/**" \
"skip"
# 5. Mix of populated and empty dirs
run_test \
"5. Mix populated+empty — scans present" \
"ui/tests/scans/** ui/tests/attack-paths/**" \
"contains" "tests/scans/"
run_test \
"5. Mix populated+empty — attack-paths absent" \
"ui/tests/scans/** ui/tests/attack-paths/**" \
"not_contains" "tests/attack-paths/"
# 6. All empty directories
run_test \
"6. All empty directories" \
"ui/tests/attack-paths/** ui/tests/findings/**" \
"skip"
# 7. Setup paths filtered
run_test \
"7. Setup paths filtered out" \
"ui/tests/setups/**" \
"skip"
# 8. Bare tests/ from broad pattern — safety net expands
run_test \
"8. Bare tests/ expands — scans present" \
"ui/tests/**" \
"contains" "tests/scans/"
run_test \
"8. Bare tests/ expands — setups excluded" \
"ui/tests/**" \
"not_contains" "tests/setups/"
# 9. Bare tests/ with all empty subdirs (only setups has files)
run_test_custom_tree \
"9. Bare tests/ — only setups has files" \
"ui/tests/**" \
"skip" "" \
setups
# 10. Duplicate paths
run_test \
"10. Duplicate paths — deduplicated" \
"ui/tests/scans/** ui/tests/scans/**" \
"equals" "tests/scans/"
# 11. Empty input
TOTAL=$((TOTAL + 1))
CURRENT_TEST_NAME="11. Empty input"
test_dir="${TMPDIR_ROOT}/test_${TOTAL}"
mkdir -p "$test_dir"
create_test_tree "$test_dir" scans providers
CURRENT_RESULT=$(resolve_paths "" "$test_dir")
_check "skip" ""
# 12. Trailing/leading whitespace
run_test \
"12. Whitespace handling" \
" ui/tests/scans/** " \
"contains" "tests/scans/"
# 13. Path without ui/ prefix
run_test \
"13. Path without ui/ prefix" \
"tests/scans/**" \
"contains" "tests/scans/"
# 14. Setup mixed with valid paths — only valid pass through
run_test \
"14. Setups + valid — setups filtered" \
"ui/tests/setups/** ui/tests/scans/**" \
"contains" "tests/scans/"
run_test \
"14. Setups + valid — setups absent" \
"ui/tests/setups/** ui/tests/scans/**" \
"not_contains" "tests/setups/"
# ===========================================================================
# SUMMARY
# ===========================================================================
echo ""
echo "=========================================="
if [[ "$FAILED" -eq 0 ]]; then
printf '%b%bAll tests passed: %d/%d%b\n' "$GREEN" "$BOLD" "$PASSED" "$TOTAL" "$RESET"
else
printf '%b%b%d/%d passed, %d FAILED%b\n' "$RED" "$BOLD" "$PASSED" "$TOTAL" "$FAILED" "$RESET"
fi
echo ""
exit "$FAILED"
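Because the script ends with exit "$FAILED", its exit status equals the number of failing cases, so it can gate CI directly. A local run would look roughly like this, with output following the printf formats above (18 cases total, matching the commit message):

    $ .github/scripts/test-e2e-path-resolution.sh

    E2E Path Resolution Tests
    ==========================================
     PASS 1. Normal single module
     ...
    All tests passed: 18/18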
+54 -6
@@ -224,8 +224,24 @@ modules:
tests:
- api/src/backend/api/tests/test_views.py
e2e:
# API view changes can break UI
- ui/tests/**
# All E2E test suites (explicit to avoid triggering auth setups in tests/setups/)
- ui/tests/auth/**
- ui/tests/sign-in/**
- ui/tests/sign-up/**
- ui/tests/sign-in-base/**
- ui/tests/scans/**
- ui/tests/providers/**
- ui/tests/findings/**
- ui/tests/compliance/**
- ui/tests/invitations/**
- ui/tests/roles/**
- ui/tests/users/**
- ui/tests/integrations/**
- ui/tests/resources/**
- ui/tests/profile/**
- ui/tests/lighthouse/**
- ui/tests/home/**
- ui/tests/attack-paths/**
- name: api-serializers
match:
@@ -234,8 +250,24 @@ modules:
tests:
- api/src/backend/api/tests/**
e2e:
# Serializer changes affect API responses → UI
- ui/tests/**
# All E2E test suites (explicit to avoid triggering auth setups in tests/setups/)
- ui/tests/auth/**
- ui/tests/sign-in/**
- ui/tests/sign-up/**
- ui/tests/sign-in-base/**
- ui/tests/scans/**
- ui/tests/providers/**
- ui/tests/findings/**
- ui/tests/compliance/**
- ui/tests/invitations/**
- ui/tests/roles/**
- ui/tests/users/**
- ui/tests/integrations/**
- ui/tests/resources/**
- ui/tests/profile/**
- ui/tests/lighthouse/**
- ui/tests/home/**
- ui/tests/attack-paths/**
- name: api-filters
match:
@@ -407,8 +439,24 @@ modules:
- ui/components/ui/**
tests: []
e2e:
# Shared components can affect any E2E
- ui/tests/**
# All E2E test suites (explicit to avoid triggering auth setups in tests/setups/)
- ui/tests/auth/**
- ui/tests/sign-in/**
- ui/tests/sign-up/**
- ui/tests/sign-in-base/**
- ui/tests/scans/**
- ui/tests/providers/**
- ui/tests/findings/**
- ui/tests/compliance/**
- ui/tests/invitations/**
- ui/tests/roles/**
- ui/tests/users/**
- ui/tests/integrations/**
- ui/tests/resources/**
- ui/tests/profile/**
- ui/tests/lighthouse/**
- ui/tests/home/**
- ui/tests/attack-paths/**
- name: ui-attack-paths
match:
+30 -10
@@ -29,6 +29,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Get current API version
id: get_api_version
@@ -79,12 +81,14 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Calculate next API minor version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
CURRENT_API_VERSION="${{ needs.detect-release-type.outputs.current_api_version }}"
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
CURRENT_API_VERSION="${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_API_VERSION}"
# API version follows Prowler minor + 1
# For Prowler 5.17.0 -> API 1.18.0
@@ -97,6 +101,10 @@ jobs:
echo "Prowler release version: ${MAJOR_VERSION}.${MINOR_VERSION}.0"
echo "Current API version: $CURRENT_API_VERSION"
echo "Next API minor version (for master): $NEXT_API_VERSION"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_API_VERSION: ${{ needs.detect-release-type.outputs.current_api_version }}
- name: Bump API versions in files for master
run: |
@@ -132,12 +140,13 @@ jobs:
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
ref: v${{ needs.detect-release-type.outputs.major_version }}.${{ needs.detect-release-type.outputs.minor_version }}
persist-credentials: false
- name: Calculate first API patch version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
CURRENT_API_VERSION="${{ needs.detect-release-type.outputs.current_api_version }}"
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
CURRENT_API_VERSION="${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_API_VERSION}"
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
# API version follows Prowler minor + 1
@@ -151,6 +160,10 @@ jobs:
echo "Prowler release version: ${MAJOR_VERSION}.${MINOR_VERSION}.0"
echo "First API patch version (for ${VERSION_BRANCH}): $FIRST_API_PATCH_VERSION"
echo "Version branch: $VERSION_BRANCH"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_API_VERSION: ${{ needs.detect-release-type.outputs.current_api_version }}
- name: Bump API versions in files for version branch
run: |
@@ -193,13 +206,15 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Calculate next API patch version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
PATCH_VERSION=${{ needs.detect-release-type.outputs.patch_version }}
CURRENT_API_VERSION="${{ needs.detect-release-type.outputs.current_api_version }}"
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
PATCH_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_PATCH_VERSION}
CURRENT_API_VERSION="${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_API_VERSION}"
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
# Extract current API patch to increment it
@@ -222,6 +237,11 @@ jobs:
echo "::error::Invalid API version format: $CURRENT_API_VERSION"
exit 1
fi
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_PATCH_VERSION: ${{ needs.detect-release-type.outputs.patch_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_API_VERSION: ${{ needs.detect-release-type.outputs.current_api_version }}
- name: Bump API versions in files for version branch
run: |
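The persist-credentials changes across this compare follow one rule: checkout sets false by default so the GitHub token is not left in .git/config for later steps, and only workflows that need the PR branch for tj-actions/changed-files keep true, marked with an explicit zizmor ignore. A sketch of the two variants (action SHA elided):

    - uses: actions/checkout@<pinned-sha> # v6.0.1
      with:
        persist-credentials: false   # default posture: no credential left on disk

    - uses: actions/checkout@<pinned-sha> # v6.0.1
      with:
        # zizmor: ignore[artipacked]
        persist-credentials: true    # required by tj-actions/changed-files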
+3
@@ -34,6 +34,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check for API changes
id: check-changes
+2
@@ -43,6 +43,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Initialize CodeQL
uses: github/codeql-action/init@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v4.31.9
+24 -9
@@ -58,6 +58,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Notify container push started
id: slack-notification
@@ -94,6 +96,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Login to DockerHub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
@@ -138,18 +142,22 @@ jobs:
run: |
docker buildx imagetools create \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.LATEST_TAG }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-arm64
env:
NEEDS_SETUP_OUTPUTS_SHORT_SHA: ${{ needs.setup.outputs.short-sha }}
- name: Create and push manifests for release event
if: github.event_name == 'release' || github.event_name == 'workflow_dispatch'
run: |
docker buildx imagetools create \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.RELEASE_TAG }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${RELEASE_TAG} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.STABLE_TAG }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-arm64
env:
NEEDS_SETUP_OUTPUTS_SHORT_SHA: ${{ needs.setup.outputs.short-sha }}
- name: Install regctl
if: always()
@@ -159,9 +167,11 @@ jobs:
if: always()
run: |
echo "Cleaning up intermediate tags..."
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-amd64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-arm64" || true
echo "Cleanup completed"
env:
NEEDS_SETUP_OUTPUTS_SHORT_SHA: ${{ needs.setup.outputs.short-sha }}
notify-release-completed:
if: always() && needs.notify-release-started.result == 'success' && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
@@ -171,15 +181,20 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Determine overall outcome
id: outcome
run: |
if [[ "${{ needs.container-build-push.result }}" == "success" && "${{ needs.create-manifest.result }}" == "success" ]]; then
if [[ "${NEEDS_CONTAINER_BUILD_PUSH_RESULT}" == "success" && "${NEEDS_CREATE_MANIFEST_RESULT}" == "success" ]]; then
echo "outcome=success" >> $GITHUB_OUTPUT
else
echo "outcome=failure" >> $GITHUB_OUTPUT
fi
env:
NEEDS_CONTAINER_BUILD_PUSH_RESULT: ${{ needs.container-build-push.result }}
NEEDS_CREATE_MANIFEST_RESULT: ${{ needs.create-manifest.result }}
- name: Notify container push completed
uses: ./.github/actions/slack-notification
@@ -29,6 +29,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check if Dockerfile changed
id: dockerfile-changed
@@ -64,6 +67,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check for API changes
id: check-changes
+3
@@ -34,6 +34,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check for API changes
id: check-changes
+4 -1
@@ -43,7 +43,7 @@ jobs:
services:
postgres:
image: postgres
image: postgres:17@sha256:2cd82735a36356842d5eb1ef80db3ae8f1154172f0f653db48fde079b2a0b7f7
env:
POSTGRES_HOST: ${{ env.POSTGRES_HOST }}
POSTGRES_PORT: ${{ env.POSTGRES_PORT }}
@@ -74,6 +74,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check for API changes
id: check-changes
+1
@@ -1,6 +1,7 @@
name: 'Tools: Backport'
on:
# zizmor: ignore[dangerous-triggers] - intentional: needs write access for backport PRs, no PR code checkout
pull_request_target:
branches:
- 'master'
+44
@@ -0,0 +1,44 @@
name: 'CI: Zizmor'
on:
push:
branches:
- 'master'
- 'v5.*'
paths:
- '.github/**'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- '.github/**'
schedule:
- cron: '30 06 * * *'
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
zizmor:
if: github.repository == 'prowler-cloud/prowler'
name: GitHub Actions Security Audit
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
security-events: write
contents: read
actions: read
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Run zizmor
uses: zizmorcore/zizmor-action@0dce2577a4760a2749d8cfb7a84b7d5585ebcb7d # v0.5.0
with:
token: ${{ github.token }}
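For local use, the same audit this workflow runs can presumably be reproduced with something like the following (assuming zizmor is installed from PyPI and its CLI accepts a workflow directory):

    uvx zizmor .github/workflows/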
+2 -1
@@ -25,8 +25,9 @@ jobs:
- name: Create backport label for minor releases
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_EVENT_RELEASE_TAG_NAME: ${{ github.event.release.tag_name }}
run: |
RELEASE_TAG="${{ github.event.release.tag_name }}"
RELEASE_TAG="${GITHUB_EVENT_RELEASE_TAG_NAME}"
if [ -z "$RELEASE_TAG" ]; then
echo "Error: No release tag provided"
+30 -10
@@ -29,6 +29,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Get current documentation version
id: get_docs_version
@@ -79,12 +81,14 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Calculate next minor version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
CURRENT_DOCS_VERSION="${{ needs.detect-release-type.outputs.current_docs_version }}"
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
CURRENT_DOCS_VERSION="${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION}"
NEXT_MINOR_VERSION=${MAJOR_VERSION}.$((MINOR_VERSION + 1)).0
echo "CURRENT_DOCS_VERSION=${CURRENT_DOCS_VERSION}" >> "${GITHUB_ENV}"
@@ -93,6 +97,10 @@ jobs:
echo "Current documentation version: $CURRENT_DOCS_VERSION"
echo "Current release version: $PROWLER_VERSION"
echo "Next minor version: $NEXT_MINOR_VERSION"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION: ${{ needs.detect-release-type.outputs.current_docs_version }}
- name: Bump versions in documentation for master
run: |
@@ -132,12 +140,13 @@ jobs:
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
ref: v${{ needs.detect-release-type.outputs.major_version }}.${{ needs.detect-release-type.outputs.minor_version }}
persist-credentials: false
- name: Calculate first patch version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
CURRENT_DOCS_VERSION="${{ needs.detect-release-type.outputs.current_docs_version }}"
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
CURRENT_DOCS_VERSION="${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION}"
FIRST_PATCH_VERSION=${MAJOR_VERSION}.${MINOR_VERSION}.1
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
@@ -148,6 +157,10 @@ jobs:
echo "First patch version: $FIRST_PATCH_VERSION"
echo "Version branch: $VERSION_BRANCH"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION: ${{ needs.detect-release-type.outputs.current_docs_version }}
- name: Bump versions in documentation for version branch
run: |
@@ -193,13 +206,15 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Calculate next patch version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
PATCH_VERSION=${{ needs.detect-release-type.outputs.patch_version }}
CURRENT_DOCS_VERSION="${{ needs.detect-release-type.outputs.current_docs_version }}"
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
PATCH_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_PATCH_VERSION}
CURRENT_DOCS_VERSION="${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION}"
NEXT_PATCH_VERSION=${MAJOR_VERSION}.${MINOR_VERSION}.$((PATCH_VERSION + 1))
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
@@ -212,6 +227,11 @@ jobs:
echo "Current release version: $PROWLER_VERSION"
echo "Next patch version: $NEXT_PATCH_VERSION"
echo "Target branch: $VERSION_BRANCH"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_PATCH_VERSION: ${{ needs.detect-release-type.outputs.patch_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_CURRENT_DOCS_VERSION: ${{ needs.detect-release-type.outputs.current_docs_version }}
- name: Bump versions in documentation for patch version
run: |
+1
@@ -26,6 +26,7 @@ jobs:
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: false
- name: Scan for secrets with TruffleHog
uses: trufflesecurity/trufflehog@ef6e76c3c4023279497fab4721ffa071a722fd05 # v3.92.4
+48
@@ -0,0 +1,48 @@
name: 'Helm: Chart Checks'
# DISCLAIMER: This workflow is not maintained by the Prowler team. Refer to contrib/k8s/helm/prowler-app for the source code.
on:
push:
branches:
- 'master'
- 'v5.*'
paths:
- 'contrib/k8s/helm/prowler-app/**'
pull_request:
branches:
- 'master'
- 'v5.*'
paths:
- 'contrib/k8s/helm/prowler-app/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
CHART_PATH: contrib/k8s/helm/prowler-app
jobs:
helm-lint:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 10
permissions:
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Set up Helm
uses: azure/setup-helm@1a275c3b69536ee54be43f2070a358922e12c8d4 # v4.3.1
- name: Update chart dependencies
run: helm dependency update ${{ env.CHART_PATH }}
- name: Lint Helm chart
run: helm lint ${{ env.CHART_PATH }}
- name: Validate Helm chart template rendering
run: helm template prowler ${{ env.CHART_PATH }}
+54
@@ -0,0 +1,54 @@
name: 'Helm: Chart Release'
# DISCLAIMER: This workflow is not maintained by the Prowler team. Refer to contrib/k8s/helm/prowler-app for the source code.
on:
release:
types:
- 'published'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: false
env:
CHART_PATH: contrib/k8s/helm/prowler-app
jobs:
release-helm-chart:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 10
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Set up Helm
uses: azure/setup-helm@b9e51907a09c216f16ebe8536097933489208112 # v4.3.0
- name: Set appVersion from release tag
run: |
RELEASE_TAG="${GITHUB_EVENT_RELEASE_TAG_NAME}"
echo "Setting appVersion to ${RELEASE_TAG}"
sed -i "s/^appVersion:.*/appVersion: \"${RELEASE_TAG}\"/" ${{ env.CHART_PATH }}/Chart.yaml
env:
GITHUB_EVENT_RELEASE_TAG_NAME: ${{ github.event.release.tag_name }}
- name: Login to GHCR
run: echo "${{ secrets.GITHUB_TOKEN }}" | helm registry login ghcr.io -u ${GITHUB_ACTOR} --password-stdin
- name: Update chart dependencies
run: helm dependency update ${{ env.CHART_PATH }}
- name: Package Helm chart
run: helm package ${{ env.CHART_PATH }} --destination .helm-packages
- name: Push chart to GHCR
run: |
PACKAGE=$(ls .helm-packages/*.tgz)
helm push "$PACKAGE" oci://ghcr.io/${{ github.repository_owner }}/charts
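Once pushed, the chart can presumably be consumed straight from GHCR; a hedged example, assuming the packaged chart keeps the prowler-app name from CHART_PATH and the repository owner is prowler-cloud as the workflow's if: guard implies:

    helm pull oci://ghcr.io/prowler-cloud/charts/prowler-app
    helm install prowler oci://ghcr.io/prowler-cloud/charts/prowler-app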
+1
@@ -1,6 +1,7 @@
name: 'Tools: PR Labeler'
on:
# zizmor: ignore[dangerous-triggers] - intentional: needs write access to apply labels, no PR code checkout
pull_request_target:
branches:
- 'master'
+25 -10
@@ -57,6 +57,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Notify container push started
id: slack-notification
@@ -92,6 +94,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Login to DockerHub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
@@ -144,30 +148,36 @@ jobs:
run: |
docker buildx imagetools create \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.LATEST_TAG }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-arm64
env:
NEEDS_SETUP_OUTPUTS_SHORT_SHA: ${{ needs.setup.outputs.short-sha }}
- name: Create and push manifests for release event
if: github.event_name == 'release' || github.event_name == 'workflow_dispatch'
run: |
docker buildx imagetools create \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.RELEASE_TAG }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${RELEASE_TAG} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.STABLE_TAG }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-arm64
env:
NEEDS_SETUP_OUTPUTS_SHORT_SHA: ${{ needs.setup.outputs.short-sha }}
- name: Install regctl
if: always()
uses: regclient/actions/regctl-installer@main
uses: regclient/actions/regctl-installer@da9319db8e44e8b062b3a147e1dfb2f574d41a03 # main
- name: Cleanup intermediate architecture tags
if: always()
run: |
echo "Cleaning up intermediate tags..."
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-amd64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-arm64" || true
echo "Cleanup completed"
env:
NEEDS_SETUP_OUTPUTS_SHORT_SHA: ${{ needs.setup.outputs.short-sha }}
notify-release-completed:
if: always() && needs.notify-release-started.result == 'success' && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
@@ -177,15 +187,20 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Determine overall outcome
id: outcome
run: |
if [[ "${{ needs.container-build-push.result }}" == "success" && "${{ needs.create-manifest.result }}" == "success" ]]; then
if [[ "${NEEDS_CONTAINER_BUILD_PUSH_RESULT}" == "success" && "${NEEDS_CREATE_MANIFEST_RESULT}" == "success" ]]; then
echo "outcome=success" >> $GITHUB_OUTPUT
else
echo "outcome=failure" >> $GITHUB_OUTPUT
fi
env:
NEEDS_CONTAINER_BUILD_PUSH_RESULT: ${{ needs.container-build-push.result }}
NEEDS_CREATE_MANIFEST_RESULT: ${{ needs.create-manifest.result }}
- name: Notify container push completed
uses: ./.github/actions/slack-notification
@@ -29,6 +29,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check if Dockerfile changed
id: dockerfile-changed
@@ -63,6 +66,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check for MCP changes
id: check-changes
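The checkouts here also set `persist-credentials` explicitly: `false` is the hardened default for jobs that only read the tree, while `true` is kept (behind a zizmor suppression) where `tj-actions/changed-files` must fetch the PR branch afterwards. To verify what a checkout actually left behind, the persisted token lives in the clone's local git config:

```bash
# Prints the auth header actions/checkout persisted in this clone,
# or the fallback message when persist-credentials was false.
git config --local --get http.https://github.com/.extraheader \
  || echo "no persisted credentials"
```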
@@ -29,7 +29,7 @@ jobs:
- name: Parse and validate version
id: parse-version
run: |
PROWLER_VERSION="${{ env.RELEASE_TAG }}"
PROWLER_VERSION="${RELEASE_TAG}"
echo "version=${PROWLER_VERSION}" >> "${GITHUB_OUTPUT}"
# Extract major version
@@ -61,9 +61,13 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Install uv
uses: astral-sh/setup-uv@v7
uses: astral-sh/setup-uv@5a095e7a2014a4212f075830d4f7277575a9d098 # v7
with:
enable-cache: false
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
@@ -32,6 +32,8 @@ jobs:
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Get changed files
id: changed-files
@@ -50,11 +52,11 @@ jobs:
run: |
missing_changelogs=""
if [[ "${{ steps.changed-files.outputs.any_changed }}" == "true" ]]; then
if [[ "${STEPS_CHANGED_FILES_OUTPUTS_ANY_CHANGED}" == "true" ]]; then
# Check monitored folders
for folder in $MONITORED_FOLDERS; do
# Get files changed in this folder
changed_in_folder=$(echo "${{ steps.changed-files.outputs.all_changed_files }}" | tr ' ' '\n' | grep "^${folder}/" || true)
changed_in_folder=$(echo "${STEPS_CHANGED_FILES_OUTPUTS_ALL_CHANGED_FILES}" | tr ' ' '\n' | grep "^${folder}/" || true)
if [ -n "$changed_in_folder" ]; then
echo "Detected changes in ${folder}/"
@@ -69,11 +71,11 @@ jobs:
# Check root-level dependency files (poetry.lock, pyproject.toml)
# These are associated with the prowler folder changelog
root_deps_changed=$(echo "${{ steps.changed-files.outputs.all_changed_files }}" | tr ' ' '\n' | grep -E "^(poetry\.lock|pyproject\.toml)$" || true)
root_deps_changed=$(echo "${STEPS_CHANGED_FILES_OUTPUTS_ALL_CHANGED_FILES}" | tr ' ' '\n' | grep -E "^(poetry\.lock|pyproject\.toml)$" || true)
if [ -n "$root_deps_changed" ]; then
echo "Detected changes in root dependency files: $root_deps_changed"
# Check if prowler/CHANGELOG.md was already updated (might have been caught above)
prowler_changelog_updated=$(echo "${{ steps.changed-files.outputs.all_changed_files }}" | tr ' ' '\n' | grep "^prowler/CHANGELOG.md$" || true)
prowler_changelog_updated=$(echo "${STEPS_CHANGED_FILES_OUTPUTS_ALL_CHANGED_FILES}" | tr ' ' '\n' | grep "^prowler/CHANGELOG.md$" || true)
if [ -z "$prowler_changelog_updated" ]; then
# Only add if prowler wasn't already flagged
if ! echo "$missing_changelogs" | grep -q "prowler"; then
@@ -89,6 +91,9 @@ jobs:
echo -e "${missing_changelogs}"
echo "EOF"
} >> $GITHUB_OUTPUT
env:
STEPS_CHANGED_FILES_OUTPUTS_ANY_CHANGED: ${{ steps.changed-files.outputs.any_changed }}
STEPS_CHANGED_FILES_OUTPUTS_ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
- name: Find existing changelog comment
if: github.event.pull_request.head.repo.full_name == github.repository
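The changelog check above walks `MONITORED_FOLDERS` and filters the changed-file list with an anchored grep. The trailing `|| true` matters because the runner shell uses `pipefail`: grep exits 1 when a folder has no matches, which would otherwise abort the step. A standalone sketch of that filter:

```bash
set -euo pipefail

# Hypothetical inputs for the demo; in the workflow these arrive from
# tj-actions/changed-files through an env var.
CHANGED_FILES="prowler/cli/main.py api/src/views.py docs/index.md"
MONITORED_FOLDERS="prowler api ui"

for folder in $MONITORED_FOLDERS; do
  # "|| true" absorbs grep's exit code 1 on zero matches (here: ui/),
  # which would otherwise kill the script under pipefail.
  changed_in_folder=$(echo "$CHANGED_FILES" | tr ' ' '\n' | grep "^${folder}/" || true)
  if [ -n "$changed_in_folder" ]; then
    echo "Detected changes in ${folder}/"
  fi
done
```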
@@ -1,6 +1,7 @@
name: 'Tools: PR Conflict Checker'
on:
# zizmor: ignore[dangerous-triggers] - intentional: needs write access for conflict labels/comments, checkout uses PR head SHA for read-only grep
pull_request_target:
types:
- 'opened'
@@ -29,6 +30,7 @@ jobs:
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
- name: Get changed files
id: changed-files
@@ -45,7 +47,7 @@ jobs:
HAS_CONFLICTS=false
# Check each changed file for conflict markers
for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
for file in ${STEPS_CHANGED_FILES_OUTPUTS_ALL_CHANGED_FILES}; do
if [ -f "$file" ]; then
echo "Checking file: $file"
@@ -70,6 +72,8 @@ jobs:
echo "has_conflicts=false" >> $GITHUB_OUTPUT
echo "No conflict markers found in changed files"
fi
env:
STEPS_CHANGED_FILES_OUTPUTS_ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
- name: Manage conflict label
env:
@@ -1,6 +1,7 @@
name: 'Tools: PR Merged'
on:
# zizmor: ignore[dangerous-triggers] - intentional: needs read access to merged PR metadata, no PR code checkout
pull_request_target:
branches:
- 'master'
@@ -25,8 +26,10 @@ jobs:
- name: Calculate short commit SHA
id: vars
run: |
SHORT_SHA="${{ github.event.pull_request.merge_commit_sha }}"
echo "SHORT_SHA=${SHORT_SHA::7}" >> $GITHUB_ENV
SHORT_SHA="${GITHUB_EVENT_PULL_REQUEST_MERGE_COMMIT_SHA}"
echo "short_sha=${SHORT_SHA::7}" >> $GITHUB_OUTPUT
env:
GITHUB_EVENT_PULL_REQUEST_MERGE_COMMIT_SHA: ${{ github.event.pull_request.merge_commit_sha }}
- name: Trigger Cloud repository pull request
uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1
@@ -37,7 +40,7 @@ jobs:
client-payload: |
{
"PROWLER_COMMIT_SHA": "${{ github.event.pull_request.merge_commit_sha }}",
"PROWLER_COMMIT_SHORT_SHA": "${{ env.SHORT_SHA }}",
"PROWLER_COMMIT_SHORT_SHA": "${{ steps.vars.outputs.short_sha }}",
"PROWLER_PR_NUMBER": "${{ github.event.pull_request.number }}",
"PROWLER_PR_TITLE": ${{ toJson(github.event.pull_request.title) }},
"PROWLER_PR_LABELS": ${{ toJson(github.event.pull_request.labels.*.name) }},
@@ -31,6 +31,7 @@ jobs:
with:
fetch-depth: 0
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
@@ -68,17 +68,22 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Calculate next minor version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
NEXT_MINOR_VERSION=${MAJOR_VERSION}.$((MINOR_VERSION + 1)).0
echo "NEXT_MINOR_VERSION=${NEXT_MINOR_VERSION}" >> "${GITHUB_ENV}"
echo "Current version: $PROWLER_VERSION"
echo "Next minor version: $NEXT_MINOR_VERSION"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
- name: Bump versions in files for master
run: |
@@ -113,11 +118,12 @@ jobs:
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
ref: v${{ needs.detect-release-type.outputs.major_version }}.${{ needs.detect-release-type.outputs.minor_version }}
persist-credentials: false
- name: Calculate first patch version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
FIRST_PATCH_VERSION=${MAJOR_VERSION}.${MINOR_VERSION}.1
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
@@ -127,6 +133,9 @@ jobs:
echo "First patch version: $FIRST_PATCH_VERSION"
echo "Version branch: $VERSION_BRANCH"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
- name: Bump versions in files for version branch
run: |
@@ -168,12 +177,14 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Calculate next patch version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
PATCH_VERSION=${{ needs.detect-release-type.outputs.patch_version }}
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
PATCH_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_PATCH_VERSION}
NEXT_PATCH_VERSION=${MAJOR_VERSION}.${MINOR_VERSION}.$((PATCH_VERSION + 1))
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
@@ -184,6 +195,10 @@ jobs:
echo "Current version: $PROWLER_VERSION"
echo "Next patch version: $NEXT_PATCH_VERSION"
echo "Target branch: $VERSION_BRANCH"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_PATCH_VERSION: ${{ needs.detect-release-type.outputs.patch_version }}
- name: Bump versions in files for version branch
run: |
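All three bump jobs compute the next version with shell arithmetic over the `detect-release-type` outputs, now passed in through `env:`. The arithmetic is ordinary `$(( ))` expansion:

```bash
# Placeholder inputs standing in for the detect-release-type outputs.
MAJOR_VERSION=5
MINOR_VERSION=19
PATCH_VERSION=2

NEXT_MINOR_VERSION="${MAJOR_VERSION}.$((MINOR_VERSION + 1)).0"
NEXT_PATCH_VERSION="${MAJOR_VERSION}.${MINOR_VERSION}.$((PATCH_VERSION + 1))"
echo "$NEXT_MINOR_VERSION"   # -> 5.20.0
echo "$NEXT_PATCH_VERSION"   # -> 5.19.3
```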
@@ -21,6 +21,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Check for duplicate test names across providers
run: |
@@ -32,6 +32,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check for SDK changes
id: check-changes
@@ -50,6 +50,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Initialize CodeQL
uses: github/codeql-action/init@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v4.31.9
@@ -62,6 +62,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
@@ -116,6 +118,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Notify container push started
id: slack-notification
@@ -152,6 +156,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Login to DockerHub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
@@ -214,36 +220,44 @@ jobs:
if: github.event_name == 'push'
run: |
docker buildx imagetools create \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }} \
-t ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }} \
-t ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-arm64
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_LATEST_TAG} \
-t ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_LATEST_TAG} \
-t ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_LATEST_TAG} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_LATEST_TAG}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_LATEST_TAG}-arm64
env:
NEEDS_SETUP_OUTPUTS_LATEST_TAG: ${{ needs.setup.outputs.latest_tag }}
- name: Create and push manifests for release event
if: github.event_name == 'release' || github.event_name == 'workflow_dispatch'
run: |
docker buildx imagetools create \
-t ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ needs.setup.outputs.prowler_version }} \
-t ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ needs.setup.outputs.stable_tag }} \
-t ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ needs.setup.outputs.prowler_version }} \
-t ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ needs.setup.outputs.stable_tag }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.prowler_version }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.stable_tag }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-arm64
-t ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${NEEDS_SETUP_OUTPUTS_PROWLER_VERSION} \
-t ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${NEEDS_SETUP_OUTPUTS_STABLE_TAG} \
-t ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${NEEDS_SETUP_OUTPUTS_PROWLER_VERSION} \
-t ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${NEEDS_SETUP_OUTPUTS_STABLE_TAG} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_PROWLER_VERSION} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_STABLE_TAG} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_LATEST_TAG}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_LATEST_TAG}-arm64
env:
NEEDS_SETUP_OUTPUTS_PROWLER_VERSION: ${{ needs.setup.outputs.prowler_version }}
NEEDS_SETUP_OUTPUTS_STABLE_TAG: ${{ needs.setup.outputs.stable_tag }}
NEEDS_SETUP_OUTPUTS_LATEST_TAG: ${{ needs.setup.outputs.latest_tag }}
- name: Install regctl
if: always()
uses: regclient/actions/regctl-installer@main
uses: regclient/actions/regctl-installer@da9319db8e44e8b062b3a147e1dfb2f574d41a03 # main
- name: Cleanup intermediate architecture tags
if: always()
run: |
echo "Cleaning up intermediate tags..."
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-amd64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.latest_tag }}-arm64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_LATEST_TAG}-amd64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_LATEST_TAG}-arm64" || true
echo "Cleanup completed"
env:
NEEDS_SETUP_OUTPUTS_LATEST_TAG: ${{ needs.setup.outputs.latest_tag }}
notify-release-completed:
if: always() && needs.notify-release-started.result == 'success' && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
@@ -253,15 +267,20 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Determine overall outcome
id: outcome
run: |
if [[ "${{ needs.container-build-push.result }}" == "success" && "${{ needs.create-manifest.result }}" == "success" ]]; then
if [[ "${NEEDS_CONTAINER_BUILD_PUSH_RESULT}" == "success" && "${NEEDS_CREATE_MANIFEST_RESULT}" == "success" ]]; then
echo "outcome=success" >> $GITHUB_OUTPUT
else
echo "outcome=failure" >> $GITHUB_OUTPUT
fi
env:
NEEDS_CONTAINER_BUILD_PUSH_RESULT: ${{ needs.container-build-push.result }}
NEEDS_CREATE_MANIFEST_RESULT: ${{ needs.create-manifest.result }}
- name: Notify container push completed
uses: ./.github/actions/slack-notification
@@ -28,6 +28,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check if Dockerfile changed
id: dockerfile-changed
@@ -63,6 +66,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check for SDK changes
id: check-changes
@@ -28,7 +28,7 @@ jobs:
- name: Parse and validate version
id: parse-version
run: |
PROWLER_VERSION="${{ env.RELEASE_TAG }}"
PROWLER_VERSION="${RELEASE_TAG}"
echo "version=${PROWLER_VERSION}" >> "${GITHUB_OUTPUT}"
# Extract major version
@@ -60,6 +60,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Install Poetry
run: pipx install poetry==2.1.1
@@ -68,7 +70,6 @@ jobs:
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'poetry'
- name: Build Prowler package
run: poetry build
@@ -92,6 +93,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Install Poetry
run: pipx install poetry==2.1.1
@@ -100,7 +103,6 @@ jobs:
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'poetry'
- name: Install toml package
run: pip install toml
@@ -28,6 +28,7 @@ jobs:
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
ref: 'master'
persist-credentials: false
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
@@ -82,9 +83,14 @@ jobs:
- name: PR creation result
run: |
if [[ "${{ steps.create-pr.outputs.pull-request-number }}" ]]; then
echo "✓ Pull request #${{ steps.create-pr.outputs.pull-request-number }} created successfully"
echo "URL: ${{ steps.create-pr.outputs.pull-request-url }}"
if [[ "${STEPS_CREATE_PR_OUTPUTS_PULL_REQUEST_NUMBER}" ]]; then
echo "✓ Pull request #${STEPS_CREATE_PR_OUTPUTS_PULL_REQUEST_NUMBER} created successfully"
echo "URL: ${STEPS_CREATE_PR_OUTPUTS_PULL_REQUEST_URL}"
else
echo "✓ No changes detected - AWS regions are up to date"
fi
env:
STEPS_CREATE_PR_OUTPUTS_PULL_REQUEST_NUMBER: ${{ steps.create-pr.outputs.pull-request-number }}
STEPS_CREATE_PR_OUTPUTS_PULL_REQUEST_URL: ${{ steps.create-pr.outputs.pull-request-url }}
@@ -26,6 +26,7 @@ jobs:
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
ref: 'master'
persist-credentials: false
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
@@ -85,9 +86,14 @@ jobs:
- name: PR creation result
run: |
if [[ "${{ steps.create-pr.outputs.pull-request-number }}" ]]; then
echo "✓ Pull request #${{ steps.create-pr.outputs.pull-request-number }} created successfully"
echo "URL: ${{ steps.create-pr.outputs.pull-request-url }}"
if [[ "${STEPS_CREATE_PR_OUTPUTS_PULL_REQUEST_NUMBER}" ]]; then
echo "✓ Pull request #${STEPS_CREATE_PR_OUTPUTS_PULL_REQUEST_NUMBER} created successfully"
echo "URL: ${STEPS_CREATE_PR_OUTPUTS_PULL_REQUEST_URL}"
else
echo "✓ No changes detected - OCI regions are up to date"
fi
env:
STEPS_CREATE_PR_OUTPUTS_PULL_REQUEST_NUMBER: ${{ steps.create-pr.outputs.pull-request-number }}
STEPS_CREATE_PR_OUTPUTS_PULL_REQUEST_URL: ${{ steps.create-pr.outputs.pull-request-url }}
@@ -25,12 +25,15 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check for SDK changes
id: check-changes
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
with:
files: |
./**
.github/workflows/sdk-security.yml
files_ignore: |
@@ -32,6 +32,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check for SDK changes
id: check-changes
@@ -119,7 +122,7 @@ jobs:
"wafv2": ["cognito", "elbv2"],
}
changed_raw = """${{ steps.changed-aws.outputs.all_changed_files }}"""
changed_raw = os.environ.get("STEPS_CHANGED_AWS_OUTPUTS_ALL_CHANGED_FILES", "")
# all_changed_files is space-separated, not newline-separated
# Strip leading "./" if present for consistent path handling
changed_files = [Path(f.lstrip("./")) for f in changed_raw.split() if f]
@@ -174,20 +177,25 @@ jobs:
else:
print("AWS service test paths: none detected")
PY
env:
STEPS_CHANGED_AWS_OUTPUTS_ALL_CHANGED_FILES: ${{ steps.changed-aws.outputs.all_changed_files }}
- name: Run AWS tests
if: steps.changed-aws.outputs.any_changed == 'true'
run: |
echo "AWS run_all=${{ steps.aws-services.outputs.run_all }}"
echo "AWS service_paths='${{ steps.aws-services.outputs.service_paths }}'"
echo "AWS run_all=${STEPS_AWS_SERVICES_OUTPUTS_RUN_ALL}"
echo "AWS service_paths='${STEPS_AWS_SERVICES_OUTPUTS_SERVICE_PATHS}'"
if [ "${{ steps.aws-services.outputs.run_all }}" = "true" ]; then
if [ "${STEPS_AWS_SERVICES_OUTPUTS_RUN_ALL}" = "true" ]; then
poetry run pytest -n auto --cov=./prowler/providers/aws --cov-report=xml:aws_coverage.xml tests/providers/aws
elif [ -z "${{ steps.aws-services.outputs.service_paths }}" ]; then
elif [ -z "${STEPS_AWS_SERVICES_OUTPUTS_SERVICE_PATHS}" ]; then
echo "No AWS service paths detected; skipping AWS tests."
else
poetry run pytest -n auto --cov=./prowler/providers/aws --cov-report=xml:aws_coverage.xml ${{ steps.aws-services.outputs.service_paths }}
poetry run pytest -n auto --cov=./prowler/providers/aws --cov-report=xml:aws_coverage.xml ${STEPS_AWS_SERVICES_OUTPUTS_SERVICE_PATHS}
fi
env:
STEPS_AWS_SERVICES_OUTPUTS_RUN_ALL: ${{ steps.aws-services.outputs.run_all }}
STEPS_AWS_SERVICES_OUTPUTS_SERVICE_PATHS: ${{ steps.aws-services.outputs.service_paths }}
- name: Upload AWS coverage to Codecov
if: steps.changed-aws.outputs.any_changed == 'true'
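Note that `${STEPS_AWS_SERVICES_OUTPUTS_SERVICE_PATHS}` is deliberately left unquoted in the pytest call: the output is a space-separated list, and word splitting is what turns it into multiple path arguments. A quick illustration:

```bash
# Space-separated list as emitted by the impact step (placeholder paths).
SERVICE_PATHS="tests/providers/aws/services/ec2 tests/providers/aws/services/s3"

count_args() { echo "$# argument(s)"; }

count_args ${SERVICE_PATHS}     # unquoted: word-splits -> 2 argument(s)
count_args "${SERVICE_PATHS}"   # quoted: stays one string -> 1 argument(s)
```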
@@ -49,6 +49,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Get changed files
id: changed-files
@@ -66,47 +69,60 @@ jobs:
id: impact
run: |
echo "Changed files:"
echo "${{ steps.changed-files.outputs.all_changed_files }}" | tr ' ' '\n'
echo "${STEPS_CHANGED_FILES_OUTPUTS_ALL_CHANGED_FILES}" | tr ' ' '\n'
echo ""
python .github/scripts/test-impact.py ${{ steps.changed-files.outputs.all_changed_files }}
python .github/scripts/test-impact.py ${STEPS_CHANGED_FILES_OUTPUTS_ALL_CHANGED_FILES}
env:
STEPS_CHANGED_FILES_OUTPUTS_ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
- name: Set convenience flags
id: set-flags
run: |
if [[ -n "${{ steps.impact.outputs.sdk-tests }}" ]]; then
if [[ -n "${STEPS_IMPACT_OUTPUTS_SDK_TESTS}" ]]; then
echo "has-sdk-tests=true" >> $GITHUB_OUTPUT
else
echo "has-sdk-tests=false" >> $GITHUB_OUTPUT
fi
if [[ -n "${{ steps.impact.outputs.api-tests }}" ]]; then
if [[ -n "${STEPS_IMPACT_OUTPUTS_API_TESTS}" ]]; then
echo "has-api-tests=true" >> $GITHUB_OUTPUT
else
echo "has-api-tests=false" >> $GITHUB_OUTPUT
fi
if [[ -n "${{ steps.impact.outputs.ui-e2e }}" ]]; then
if [[ -n "${STEPS_IMPACT_OUTPUTS_UI_E2E}" ]]; then
echo "has-ui-e2e=true" >> $GITHUB_OUTPUT
else
echo "has-ui-e2e=false" >> $GITHUB_OUTPUT
fi
env:
STEPS_IMPACT_OUTPUTS_SDK_TESTS: ${{ steps.impact.outputs.sdk-tests }}
STEPS_IMPACT_OUTPUTS_API_TESTS: ${{ steps.impact.outputs.api-tests }}
STEPS_IMPACT_OUTPUTS_UI_E2E: ${{ steps.impact.outputs.ui-e2e }}
- name: Summary
run: |
echo "## Test Impact Analysis" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
if [[ "${{ steps.impact.outputs.run-all }}" == "true" ]]; then
if [[ "${STEPS_IMPACT_OUTPUTS_RUN_ALL}" == "true" ]]; then
echo "🚨 **Critical path changed - running ALL tests**" >> $GITHUB_STEP_SUMMARY
else
echo "### Affected Modules" >> $GITHUB_STEP_SUMMARY
echo "\`${{ steps.impact.outputs.modules }}\`" >> $GITHUB_STEP_SUMMARY
echo "\`${STEPS_IMPACT_OUTPUTS_MODULES}\`" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "### Tests to Run" >> $GITHUB_STEP_SUMMARY
echo "| Category | Paths |" >> $GITHUB_STEP_SUMMARY
echo "|----------|-------|" >> $GITHUB_STEP_SUMMARY
echo "| SDK Tests | \`${{ steps.impact.outputs.sdk-tests || 'none' }}\` |" >> $GITHUB_STEP_SUMMARY
echo "| API Tests | \`${{ steps.impact.outputs.api-tests || 'none' }}\` |" >> $GITHUB_STEP_SUMMARY
echo "| UI E2E | \`${{ steps.impact.outputs.ui-e2e || 'none' }}\` |" >> $GITHUB_STEP_SUMMARY
echo "| SDK Tests | \`${STEPS_IMPACT_OUTPUTS_SDK_TESTS:-none}\` |" >> $GITHUB_STEP_SUMMARY
echo "| API Tests | \`${STEPS_IMPACT_OUTPUTS_API_TESTS:-none}\` |" >> $GITHUB_STEP_SUMMARY
echo "| UI E2E | \`${STEPS_IMPACT_OUTPUTS_UI_E2E:-none}\` |" >> $GITHUB_STEP_SUMMARY
fi
env:
STEPS_IMPACT_OUTPUTS_RUN_ALL: ${{ steps.impact.outputs.run-all }}
STEPS_IMPACT_OUTPUTS_SDK_TESTS: ${{ steps.impact.outputs.sdk-tests }}
STEPS_IMPACT_OUTPUTS_API_TESTS: ${{ steps.impact.outputs.api-tests }}
STEPS_IMPACT_OUTPUTS_UI_E2E: ${{ steps.impact.outputs.ui-e2e }}
STEPS_IMPACT_OUTPUTS_MODULES: ${{ steps.impact.outputs.modules }}
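The summary table swaps GitHub's `|| 'none'` expression fallback for bash's `${var:-default}` parameter expansion, which substitutes the default only when the variable is unset or empty:

```bash
SDK_TESTS=""                      # empty -> default kicks in
API_TESTS="tests/api/test_x.py"   # non-empty (placeholder path) -> used as-is

echo "| SDK Tests | \`${SDK_TESTS:-none}\` |"   # -> | SDK Tests | `none` |
echo "| API Tests | \`${API_TESTS:-none}\` |"   # -> | API Tests | `tests/api/test_x.py` |
```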
@@ -68,17 +68,22 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Calculate next minor version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
NEXT_MINOR_VERSION=${MAJOR_VERSION}.$((MINOR_VERSION + 1)).0
echo "NEXT_MINOR_VERSION=${NEXT_MINOR_VERSION}" >> "${GITHUB_ENV}"
echo "Current version: $PROWLER_VERSION"
echo "Next minor version: $NEXT_MINOR_VERSION"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
- name: Bump UI version in .env for master
run: |
@@ -115,11 +120,12 @@ jobs:
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
ref: v${{ needs.detect-release-type.outputs.major_version }}.${{ needs.detect-release-type.outputs.minor_version }}
persist-credentials: false
- name: Calculate first patch version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
FIRST_PATCH_VERSION=${MAJOR_VERSION}.${MINOR_VERSION}.1
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
@@ -129,6 +135,9 @@ jobs:
echo "First patch version: $FIRST_PATCH_VERSION"
echo "Version branch: $VERSION_BRANCH"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
- name: Bump UI version in .env for version branch
run: |
@@ -172,12 +181,14 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Calculate next patch version
run: |
MAJOR_VERSION=${{ needs.detect-release-type.outputs.major_version }}
MINOR_VERSION=${{ needs.detect-release-type.outputs.minor_version }}
PATCH_VERSION=${{ needs.detect-release-type.outputs.patch_version }}
MAJOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION}
MINOR_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION}
PATCH_VERSION=${NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_PATCH_VERSION}
NEXT_PATCH_VERSION=${MAJOR_VERSION}.${MINOR_VERSION}.$((PATCH_VERSION + 1))
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
@@ -188,6 +199,10 @@ jobs:
echo "Current version: $PROWLER_VERSION"
echo "Next patch version: $NEXT_PATCH_VERSION"
echo "Target branch: $VERSION_BRANCH"
env:
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MAJOR_VERSION: ${{ needs.detect-release-type.outputs.major_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_MINOR_VERSION: ${{ needs.detect-release-type.outputs.minor_version }}
NEEDS_DETECT_RELEASE_TYPE_OUTPUTS_PATCH_VERSION: ${{ needs.detect-release-type.outputs.patch_version }}
- name: Bump UI version in .env for version branch
run: |
@@ -46,6 +46,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Initialize CodeQL
uses: github/codeql-action/init@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v4.31.9
@@ -60,6 +60,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Notify container push started
id: slack-notification
@@ -96,6 +98,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Login to DockerHub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
@@ -143,30 +147,36 @@ jobs:
run: |
docker buildx imagetools create \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.LATEST_TAG }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-arm64
env:
NEEDS_SETUP_OUTPUTS_SHORT_SHA: ${{ needs.setup.outputs.short-sha }}
- name: Create and push manifests for release event
if: github.event_name == 'release' || github.event_name == 'workflow_dispatch'
run: |
docker buildx imagetools create \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.RELEASE_TAG }} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${RELEASE_TAG} \
-t ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.STABLE_TAG }} \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-amd64 \
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-arm64
env:
NEEDS_SETUP_OUTPUTS_SHORT_SHA: ${{ needs.setup.outputs.short-sha }}
- name: Install regctl
if: always()
uses: regclient/actions/regctl-installer@main
uses: regclient/actions/regctl-installer@da9319db8e44e8b062b3a147e1dfb2f574d41a03 # main
- name: Cleanup intermediate architecture tags
if: always()
run: |
echo "Cleaning up intermediate tags..."
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-amd64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ needs.setup.outputs.short-sha }}-arm64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-amd64" || true
regctl tag delete "${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${NEEDS_SETUP_OUTPUTS_SHORT_SHA}-arm64" || true
echo "Cleanup completed"
env:
NEEDS_SETUP_OUTPUTS_SHORT_SHA: ${{ needs.setup.outputs.short-sha }}
notify-release-completed:
if: always() && needs.notify-release-started.result == 'success' && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
@@ -176,15 +186,20 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Determine overall outcome
id: outcome
run: |
if [[ "${{ needs.container-build-push.result }}" == "success" && "${{ needs.create-manifest.result }}" == "success" ]]; then
if [[ "${NEEDS_CONTAINER_BUILD_PUSH_RESULT}" == "success" && "${NEEDS_CREATE_MANIFEST_RESULT}" == "success" ]]; then
echo "outcome=success" >> $GITHUB_OUTPUT
else
echo "outcome=failure" >> $GITHUB_OUTPUT
fi
env:
NEEDS_CONTAINER_BUILD_PUSH_RESULT: ${{ needs.container-build-push.result }}
NEEDS_CREATE_MANIFEST_RESULT: ${{ needs.create-manifest.result }}
- name: Notify container push completed
uses: ./.github/actions/slack-notification
@@ -29,6 +29,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check if Dockerfile changed
id: dockerfile-changed
@@ -64,6 +67,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check for UI changes
id: check-changes
@@ -15,6 +15,9 @@ on:
- 'ui/**'
- 'api/**' # API changes can affect UI E2E
permissions:
contents: read
jobs:
# First, analyze which tests need to run
impact-analysis:
@@ -76,20 +79,24 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Show test scope
run: |
echo "## E2E Test Scope" >> $GITHUB_STEP_SUMMARY
if [[ "${{ env.RUN_ALL_TESTS }}" == "true" ]]; then
if [[ "${RUN_ALL_TESTS}" == "true" ]]; then
echo "Running **ALL** E2E tests (critical path changed)" >> $GITHUB_STEP_SUMMARY
else
echo "Running tests matching: \`${{ env.E2E_TEST_PATHS }}\`" >> $GITHUB_STEP_SUMMARY
echo "Running tests matching: \`${E2E_TEST_PATHS}\`" >> $GITHUB_STEP_SUMMARY
fi
echo ""
echo "Affected modules: \`${{ needs.impact-analysis.outputs.modules }}\`" >> $GITHUB_STEP_SUMMARY
echo "Affected modules: \`${NEEDS_IMPACT_ANALYSIS_OUTPUTS_MODULES}\`" >> $GITHUB_STEP_SUMMARY
env:
NEEDS_IMPACT_ANALYSIS_OUTPUTS_MODULES: ${{ needs.impact-analysis.outputs.modules }}
- name: Create k8s Kind Cluster
uses: helm/kind-action@v1
uses: helm/kind-action@ef37e7f390d99f746eb8b610417061a60e82a6cc # v1
with:
cluster_name: kind
@@ -150,7 +157,7 @@ jobs:
node-version: '24.13.0'
- name: Setup pnpm
uses: pnpm/action-setup@v4
uses: pnpm/action-setup@41ff72655975bd51cab0327fa583b6e92b6d3061 # v4
with:
version: 10
run_install: false
@@ -195,23 +202,52 @@ jobs:
- name: Run E2E tests
working-directory: ./ui
run: |
if [[ "${{ env.RUN_ALL_TESTS }}" == "true" ]]; then
if [[ "${RUN_ALL_TESTS}" == "true" ]]; then
echo "Running ALL E2E tests..."
pnpm run test:e2e
else
echo "Running targeted E2E tests: ${{ env.E2E_TEST_PATHS }}"
echo "Running targeted E2E tests: ${E2E_TEST_PATHS}"
# Convert glob patterns to playwright test paths
# e.g., "ui/tests/providers/**" -> "tests/providers"
TEST_PATHS="${{ env.E2E_TEST_PATHS }}"
TEST_PATHS="${E2E_TEST_PATHS}"
# Remove ui/ prefix and convert ** to empty (playwright handles recursion)
TEST_PATHS=$(echo "$TEST_PATHS" | sed 's|ui/||g' | sed 's|\*\*||g' | tr ' ' '\n' | sort -u)
# Drop auth setup helpers (not runnable test suites)
TEST_PATHS=$(echo "$TEST_PATHS" | grep -v '^tests/setups/')
# Safety net: if bare "tests/" appears (from broad patterns like ui/tests/**),
# expand to specific subdirs to avoid Playwright discovering setup files
if echo "$TEST_PATHS" | grep -qx 'tests/'; then
echo "Expanding bare 'tests/' to specific subdirs (excluding setups)..."
SPECIFIC_DIRS=""
for dir in tests/*/; do
[[ "$dir" == "tests/setups/" ]] && continue
SPECIFIC_DIRS="${SPECIFIC_DIRS}${dir}"$'\n'
done
# Replace "tests/" with specific dirs, keep other paths
TEST_PATHS=$(echo "$TEST_PATHS" | grep -vx 'tests/')
TEST_PATHS="${TEST_PATHS}"$'\n'"${SPECIFIC_DIRS}"
TEST_PATHS=$(echo "$TEST_PATHS" | grep -v '^$' | sort -u)
fi
if [[ -z "$TEST_PATHS" ]]; then
echo "No runnable E2E test paths after filtering setups"
exit 0
fi
TEST_PATHS=$(echo "$TEST_PATHS" | tr '\n' ' ')
# Filter out directories that don't contain any test files
VALID_PATHS=""
while IFS= read -r p; do
[[ -z "$p" ]] && continue
if find "$p" -name '*.spec.ts' -o -name '*.test.ts' 2>/dev/null | head -1 | grep -q .; then
VALID_PATHS="${VALID_PATHS}${p}"$'\n'
else
echo "Skipping empty test directory: $p"
fi
done <<< "$TEST_PATHS"
VALID_PATHS=$(echo "$VALID_PATHS" | grep -v '^$' || true)
if [[ -z "$VALID_PATHS" ]]; then
echo "No test files found in any resolved paths — skipping E2E"
exit 0
fi
TEST_PATHS=$(echo "$VALID_PATHS" | tr '\n' ' ')
echo "Resolved test paths: $TEST_PATHS"
pnpm exec playwright test $TEST_PATHS
fi
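The empty-directory filter is the piece the final fix hardens: `grep -v '^$'` exits 1 when `VALID_PATHS` contains only blank lines, and under the runner's `set -eo pipefail` that would fail the step before the graceful `exit 0`, hence the `|| true`. A self-contained sketch of the filter (the directory names are placeholders):

```bash
set -euo pipefail

# Placeholder candidate directories; in the workflow these come from the
# resolved E2E glob patterns.
TEST_PATHS=$'tests/providers/\ntests/setups/\ntests/empty-module/'

VALID_PATHS=""
while IFS= read -r p; do
  [[ -z "$p" ]] && continue
  # Keep only directories that actually contain spec/test files.
  if find "$p" -name '*.spec.ts' -o -name '*.test.ts' 2>/dev/null | head -1 | grep -q .; then
    VALID_PATHS="${VALID_PATHS}${p}"$'\n'
  else
    echo "Skipping empty test directory: $p"
  fi
done <<< "$TEST_PATHS"

# "|| true": grep -v exits 1 when nothing survives, which pipefail would
# otherwise turn into a step failure before the graceful exit below.
VALID_PATHS=$(echo "$VALID_PATHS" | grep -v '^$' || true)
[[ -z "$VALID_PATHS" ]] && { echo "No test files found, skipping E2E"; exit 0; }
echo "Resolved test paths: $(echo "$VALID_PATHS" | tr '\n' ' ')"
```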
@@ -244,6 +280,8 @@ jobs:
echo "" >> $GITHUB_STEP_SUMMARY
echo "No UI E2E tests needed for this change." >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "Affected modules: \`${{ needs.impact-analysis.outputs.modules }}\`" >> $GITHUB_STEP_SUMMARY
echo "Affected modules: \`${NEEDS_IMPACT_ANALYSIS_OUTPUTS_MODULES}\`" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "To run all tests, modify a file in a critical path (e.g., \`ui/lib/**\`)." >> $GITHUB_STEP_SUMMARY
env:
NEEDS_IMPACT_ANALYSIS_OUTPUTS_MODULES: ${{ needs.impact-analysis.outputs.modules }}
@@ -31,6 +31,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# zizmor: ignore[artipacked]
persist-credentials: true # Required by tj-actions/changed-files to fetch PR branch
- name: Check for UI changes
id: check-changes
@@ -81,7 +84,7 @@ jobs:
- name: Setup pnpm
if: steps.check-changes.outputs.any_changed == 'true'
uses: pnpm/action-setup@v4
uses: pnpm/action-setup@41ff72655975bd51cab0327fa583b6e92b6d3061 # v4
with:
version: 10
run_install: false
@@ -122,10 +125,12 @@ jobs:
if: steps.check-changes.outputs.any_changed == 'true' && steps.critical-changes.outputs.any_changed != 'true' && steps.changed-source.outputs.all_changed_files != ''
run: |
echo "Running tests related to changed files:"
echo "${{ steps.changed-source.outputs.all_changed_files }}"
echo "${STEPS_CHANGED_SOURCE_OUTPUTS_ALL_CHANGED_FILES}"
# Convert space-separated to vitest related format (remove ui/ prefix for relative paths)
CHANGED_FILES=$(echo "${{ steps.changed-source.outputs.all_changed_files }}" | tr ' ' '\n' | sed 's|^ui/||' | tr '\n' ' ')
CHANGED_FILES=$(echo "${STEPS_CHANGED_SOURCE_OUTPUTS_ALL_CHANGED_FILES}" | tr ' ' '\n' | sed 's|^ui/||' | tr '\n' ' ')
pnpm exec vitest related $CHANGED_FILES --run
env:
STEPS_CHANGED_SOURCE_OUTPUTS_ALL_CHANGED_FILES: ${{ steps.changed-source.outputs.all_changed_files }}
- name: Run unit tests (test files only changed)
if: steps.check-changes.outputs.any_changed == 'true' && steps.critical-changes.outputs.any_changed != 'true' && steps.changed-source.outputs.all_changed_files == ''
@@ -6,7 +6,7 @@ LABEL org.opencontainers.image.source="https://github.com/prowler-cloud/prowler"
ARG POWERSHELL_VERSION=7.5.0
ENV POWERSHELL_VERSION=${POWERSHELL_VERSION}
ARG TRIVY_VERSION=0.66.0
ARG TRIVY_VERSION=0.69.2
ENV TRIVY_VERSION=${TRIVY_VERSION}
# hadolint ignore=DL3008
@@ -109,14 +109,16 @@ Every AWS provider scan will enqueue an Attack Paths ingestion job automatically
| GCP | 100 | 13 | 15 | 11 | Official | UI, API, CLI |
| Kubernetes | 83 | 7 | 7 | 9 | Official | UI, API, CLI |
| GitHub | 21 | 2 | 1 | 2 | Official | UI, API, CLI |
| M365 | 75 | 7 | 4 | 4 | Official | UI, API, CLI |
| OCI | 51 | 13 | 3 | 12 | Official | UI, API, CLI |
| M365 | 89 | 9 | 4 | 5 | Official | UI, API, CLI |
| OCI | 48 | 13 | 3 | 10 | Official | UI, API, CLI |
| Alibaba Cloud | 61 | 9 | 3 | 9 | Official | UI, API, CLI |
| Cloudflare | 29 | 2 | 0 | 5 | Official | CLI, API |
| Cloudflare | 29 | 2 | 0 | 5 | Official | UI, API, CLI |
| IaC | [See `trivy` docs.](https://trivy.dev/latest/docs/coverage/iac/) | N/A | N/A | N/A | Official | UI, API, CLI |
| MongoDB Atlas | 10 | 3 | 0 | 3 | Official | UI, API, CLI |
| MongoDB Atlas | 10 | 3 | 0 | 8 | Official | UI, API, CLI |
| LLM | [See `promptfoo` docs.](https://www.promptfoo.dev/docs/red-team/plugins/) | N/A | N/A | N/A | Official | CLI |
| OpenStack | 1 | 1 | 0 | 2 | Official | CLI |
| Image | N/A | N/A | N/A | N/A | Official | CLI, API |
| Google Workspace | 1 | 1 | 0 | 1 | Official | CLI |
| OpenStack | 27 | 4 | 0 | 8 | Official | UI, API, CLI |
| NHN | 6 | 2 | 1 | 0 | Unofficial | CLI |
> [!Note]
@@ -148,21 +150,17 @@ Prowler App offers flexible installation methods tailored to various environment
**Commands**
``` console
curl -LO https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/master/docker-compose.yml
curl -LO https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/master/.env
VERSION=$(curl -s https://api.github.com/repos/prowler-cloud/prowler/releases/latest | jq -r .tag_name)
curl -sLO "https://raw.githubusercontent.com/prowler-cloud/prowler/refs/tags/${VERSION}/docker-compose.yml"
# Environment variables can be customized in the .env file. Using default values in production environments is not recommended.
curl -sLO "https://raw.githubusercontent.com/prowler-cloud/prowler/refs/tags/${VERSION}/.env"
docker compose up -d
```
> Containers are built for `linux/amd64`.
> [!WARNING]
> 🔒 For a secure setup, the API auto-generates a unique key pair, `DJANGO_TOKEN_SIGNING_KEY` and `DJANGO_TOKEN_VERIFYING_KEY`, and stores it in `~/.config/prowler-api` (non-container) or the bound Docker volume in `_data/api` (container). Never commit or reuse static/default keys. To rotate keys, delete the stored key files and restart the API.
### Configuring Your Workstation for Prowler App
If your workstation's architecture is incompatible, you can resolve this by:
- **Setting the environment variable**: `DOCKER_DEFAULT_PLATFORM=linux/amd64`
- **Using the following flag in your Docker command**: `--platform linux/amd64`
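For instance, either form pins the image platform (`hello-world` below is a stand-in image for illustration):

```bash
# Globally for the session:
DOCKER_DEFAULT_PLATFORM=linux/amd64 docker compose up -d
# Or per command:
docker run --platform linux/amd64 hello-world
```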
> Once configured, access the Prowler App at http://localhost:3000. Sign up using your email and password to get started.
Once configured, access the Prowler App at http://localhost:3000. Sign up using your email and password to get started.
### Common Issues with Docker Pull Installation
@@ -2,7 +2,25 @@
All notable changes to the **Prowler API** are documented in this file.
## [1.20.0] (Prowler UNRELEASED)
## [1.21.0] (Prowler UNRELEASED)
### 🔄 Changed
- Attack Paths: Migrate network exposure queries from APOC to standard openCypher for Neo4j and Neptune compatibility [(#10266)](https://github.com/prowler-cloud/prowler/pull/10266)
- `POST /api/v1/providers` returns `409 Conflict` if the provider already exists [(#10293)](https://github.com/prowler-cloud/prowler/pull/10293)
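With the conflict change, a duplicate create surfaces directly as an HTTP status. A hypothetical request sketch, where the host, token, and attribute names are assumptions for illustration rather than the documented API shape:

```bash
# Hypothetical: create the same provider twice and print the status codes.
create_provider() {
  curl -s -o /dev/null -w '%{http_code}\n' -X POST "http://localhost:8080/api/v1/providers" \
    -H "Authorization: Bearer ${TOKEN}" \
    -H "Content-Type: application/vnd.api+json" \
    -d '{"data": {"type": "providers", "attributes": {"provider": "aws", "uid": "123456789012"}}}'
}

create_provider   # first call: created
create_provider   # second call: 409 Conflict per the change above
```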
---
## [1.20.1] (Prowler UNRELEASED)
### 🐞 Fixed
- Attack Paths: Add missing logging for query execution and exception details in scan error handling [(#10269)](https://github.com/prowler-cloud/prowler/pull/10269)
- Attack Paths: Upgrade Cartography from 0.129.0 to 0.132.0, fixing `exposed_internet` not set on ELB/ELBv2 nodes [(#10272)](https://github.com/prowler-cloud/prowler/pull/10272)
---
## [1.20.0] (Prowler v5.19.0)
### 🚀 Added
@@ -11,29 +29,26 @@ All notable changes to the **Prowler API** are documented in this file.
- PDF report for the CSA CCM compliance framework [(#10088)](https://github.com/prowler-cloud/prowler/pull/10088)
- `image` provider support for container image scanning [(#10128)](https://github.com/prowler-cloud/prowler/pull/10128)
- Attack Paths: Custom query and Cartography schema endpoints (temporarily blocked) [(#10149)](https://github.com/prowler-cloud/prowler/pull/10149)
- `googleworkspace` provider support [(#10247)](https://github.com/prowler-cloud/prowler/pull/10247)
### 🔄 Changed
- Attack Paths: Queries definition now has short description and attribution [(#9983)](https://github.com/prowler-cloud/prowler/pull/9983)
- Attack Paths: Internet node is created during the scan [(#9992)](https://github.com/prowler-cloud/prowler/pull/9992)
- Attack Paths: Add full paths set from [pathfinding.cloud](https://pathfinding.cloud/) [(#10008)](https://github.com/prowler-cloud/prowler/pull/10008)
- Support CSA CCM 4.0 for the AWS provider [(#10018)](https://github.com/prowler-cloud/prowler/pull/10018)
- Support CSA CCM 4.0 for the GCP provider [(#10042)](https://github.com/prowler-cloud/prowler/pull/10042)
- Support CSA CCM 4.0 for the Azure provider [(#10039)](https://github.com/prowler-cloud/prowler/pull/10039)
- Support CSA CCM 4.0 for the Oracle Cloud provider [(#10057)](https://github.com/prowler-cloud/prowler/pull/10057)
- Support CSA CCM 4.0 for the Alibaba Cloud provider [(#10061)](https://github.com/prowler-cloud/prowler/pull/10061)
- Attack Paths: Mark Attack Paths scan as failed when the Celery task fails outside job error handling [(#10065)](https://github.com/prowler-cloud/prowler/pull/10065)
- Attack Paths: Remove legacy per-scan `graph_database` and `is_graph_database_deleted` fields from AttackPathsScan model [(#10077)](https://github.com/prowler-cloud/prowler/pull/10077)
- Attack Paths: Add `graph_data_ready` field to decouple query availability from scan state [(#10089)](https://github.com/prowler-cloud/prowler/pull/10089)
- AI agent guidelines with TDD and testing skills references [(#9925)](https://github.com/prowler-cloud/prowler/pull/9925)
- Attack Paths: Upgrade Cartography from fork 0.126.1 to upstream 0.129.0 and Neo4j driver from 5.x to 6.x [(#10110)](https://github.com/prowler-cloud/prowler/pull/10110)
- Attack Paths: Query results now filtered by provider, preventing future cross-tenant and cross-provider data leakage [(#10118)](https://github.com/prowler-cloud/prowler/pull/10118)
- Attack Paths: Add private labels and properties to Attack Paths graphs to avoid future overlap with Cartography's own [(#10124)](https://github.com/prowler-cloud/prowler/pull/10124)
- Attack Paths: Query endpoint executes queries in read-only mode [(#10140)](https://github.com/prowler-cloud/prowler/pull/10140)
- Attack Paths: Query endpoints' `Accept` header also accepts `text/plain`, supporting a compact plain-text format for LLM consumption [(#10162)](https://github.com/prowler-cloud/prowler/pull/10162)
- Bump Trivy from 0.69.1 to 0.69.2 [(#10210)](https://github.com/prowler-cloud/prowler/pull/10210)
### 🐞 Fixed
- PDF compliance reports consistency with UI: exclude resourceless findings and fix ENS MANUAL status handling [(#10270)](https://github.com/prowler-cloud/prowler/pull/10270)
- Attack Paths: Orphaned temporary Neo4j databases are now cleaned up on scan failure and provider deletion [(#10101)](https://github.com/prowler-cloud/prowler/pull/10101)
- Attack Paths: scan no longer raises `DatabaseError` when provider is deleted mid-scan [(#10116)](https://github.com/prowler-cloud/prowler/pull/10116)
- Tenant compliance summaries recalculated after provider deletion [(#10172)](https://github.com/prowler-cloud/prowler/pull/10172)
@@ -46,7 +61,7 @@ All notable changes to the **Prowler API** are documented in this file.
---
## [1.19.3] (Prowler UNRELEASED)
## [1.19.3] (Prowler v5.18.3)
### 🐞 Fixed
@@ -5,7 +5,7 @@ LABEL maintainer="https://github.com/prowler-cloud/api"
ARG POWERSHELL_VERSION=7.5.0
ENV POWERSHELL_VERSION=${POWERSHELL_VERSION}
ARG TRIVY_VERSION=0.69.1
ARG TRIVY_VERSION=0.69.2
ENV TRIVY_VERSION=${TRIVY_VERSION}
# hadolint ignore=DL3008
@@ -24,13 +24,6 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
# Cartography depends on `dockerfile` which has no pre-built arm64 wheel and requires Go to compile
# hadolint ignore=DL3008
RUN if [ "$(uname -m)" = "aarch64" ]; then \
apt-get update && apt-get install -y --no-install-recommends golang-go \
&& rm -rf /var/lib/apt/lists/* ; \
fi
# Install PowerShell
RUN ARCH=$(uname -m) && \
if [ "$ARCH" = "x86_64" ]; then \
@@ -1822,14 +1822,14 @@ crt = ["awscrt (==0.27.6)"]
[[package]]
name = "cartography"
version = "0.129.0"
version = "0.132.0"
description = "Explore assets and their relationships across your technical infrastructure."
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "cartography-0.129.0-py3-none-any.whl", hash = "sha256:d42c840369be9e4d0ac4d024074e3732416e40bab3d9a3023b6a247918daed4c"},
{file = "cartography-0.129.0.tar.gz", hash = "sha256:cb47d603e652554a4cbcc1a868c96014eb02b3d5cc1affea0428b2ed7fa61699"},
{file = "cartography-0.132.0-py3-none-any.whl", hash = "sha256:c070aa51d0ab4479cb043cae70b35e7df49f2fb5f1fa95ccf10000bbeb952262"},
{file = "cartography-0.132.0.tar.gz", hash = "sha256:7c6332bc57fd2629d7b83aee7bd95a7b2edb0d51ef746efa0461399e0b66625c"},
]
[package.dependencies]
@@ -1864,8 +1864,8 @@ boto3 = ">=1.15.1"
botocore = ">=1.18.1"
cloudflare = ">=4.1.0,<5.0.0"
crowdstrike-falconpy = ">=0.5.1"
cryptography = "*"
dnspython = ">=1.15.0"
dockerfile = ">=3.0.0"
duo-client = "*"
google-api-python-client = ">=1.7.8"
google-auth = ">=2.37.0"
@@ -3095,21 +3095,6 @@ docs = ["myst-parser (==0.18.0)", "sphinx (==5.1.1)"]
ssh = ["paramiko (>=2.4.3)"]
websockets = ["websocket-client (>=1.3.0)"]
[[package]]
name = "dockerfile"
version = "3.4.0"
description = "Parse a dockerfile into a high-level representation using the official go parser."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "dockerfile-3.4.0-cp39-abi3-macosx_13_0_x86_64.whl", hash = "sha256:ed33446a76007cbb3f28c247f189cc06db34667d4f59a398a5c44912d7c13f36"},
{file = "dockerfile-3.4.0-cp39-abi3-macosx_14_0_arm64.whl", hash = "sha256:a4549d4f038483c25906d4fec56bb6ffe82ae26e0f80a15f2c0fedbb50712053"},
{file = "dockerfile-3.4.0-cp39-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:b95102bd82e6f67c836186b51c13114aa586a20e8cb6441bde24d4070542009d"},
{file = "dockerfile-3.4.0-cp39-abi3-win_amd64.whl", hash = "sha256:30202187f1885f99ac839fd41ca8150b2fd0a66fac12db0166361d0c4622e71a"},
{file = "dockerfile-3.4.0.tar.gz", hash = "sha256:238bb950985c55a525daef8bbfe994a0230aa0978c419f4caa4d9ce0a37343f1"},
]
[[package]]
name = "dogpile-cache"
version = "1.5.0"
@@ -9397,4 +9382,4 @@ files = [
[metadata]
lock-version = "2.1"
python-versions = ">=3.11,<3.13"
content-hash = "42759b370c9e38da727e73f9d8ec0fa61bc6137eab18f11ccd7deff79a0dee69"
content-hash = "6e38c38b1f8dc05b881f49703fa445eec299527e6697992b18e4613534fbcdb6"
@@ -37,7 +37,7 @@ dependencies = [
"matplotlib (>=3.10.6,<4.0.0)",
"reportlab (>=4.4.4,<5.0.0)",
"neo4j (>=6.0.0,<7.0.0)",
"cartography (==0.129.0)",
"cartography (==0.132.0)",
"gevent (>=25.9.1,<26.0.0)",
"werkzeug (>=3.1.4)",
"sqlparse (>=0.5.4)",
@@ -49,7 +49,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
version = "1.20.0"
version = "1.21.0"
[project.scripts]
celery = "src.backend.config.settings.celery"
@@ -16,8 +16,7 @@ AWS_INTERNET_EXPOSED_EC2_SENSITIVE_S3_ACCESS = AttackPathsQueryDefinition(
description="Detect EC2 instances with SSH exposed to the internet that can assume higher-privileged roles to read tagged sensitive S3 buckets despite bucket-level public access blocks.",
provider="aws",
cypher=f"""
CALL apoc.create.vNode(['Internet'], {{id: 'Internet', name: 'Internet', provider_id: $provider_id}})
YIELD node AS internet
OPTIONAL MATCH (internet:Internet {{_provider_id: $provider_id}})
MATCH path_s3 = (aws:AWSAccount {{id: $provider_uid}})--(s3:S3Bucket)--(t:AWSTag)
WHERE toLower(t.key) = toLower($tag_key) AND toLower(t.value) = toLower($tag_value)
@@ -32,8 +31,7 @@ AWS_INTERNET_EXPOSED_EC2_SENSITIVE_S3_ACCESS = AttackPathsQueryDefinition(
MATCH path_assume_role = (ec2)-[p:STS_ASSUMEROLE_ALLOW*1..9]-(r:AWSRole)
CALL apoc.create.vRelationship(internet, 'CAN_ACCESS', {{provider_id: $provider_id}}, ec2)
YIELD rel AS can_access
OPTIONAL MATCH (internet)-[can_access:CAN_ACCESS]->(ec2)
UNWIND nodes(path_s3) + nodes(path_ec2) + nodes(path_role) + nodes(path_assume_role) as n
OPTIONAL MATCH (n)-[pfr]-(pf:{PROWLER_FINDING_LABEL} {{status: 'FAIL', provider_uid: $provider_uid}})
@@ -181,14 +179,12 @@ AWS_EC2_INSTANCES_INTERNET_EXPOSED = AttackPathsQueryDefinition(
description="Find EC2 instances flagged as exposed to the internet within the selected account.",
provider="aws",
cypher=f"""
CALL apoc.create.vNode(['Internet'], {{id: 'Internet', name: 'Internet', provider_id: $provider_id}})
YIELD node AS internet
OPTIONAL MATCH (internet:Internet {{_provider_id: $provider_id}})
MATCH path = (aws:AWSAccount {{id: $provider_uid}})--(ec2:EC2Instance)
WHERE ec2.exposed_internet = true
CALL apoc.create.vRelationship(internet, 'CAN_ACCESS', {{provider_id: $provider_id}}, ec2)
YIELD rel AS can_access
OPTIONAL MATCH (internet)-[can_access:CAN_ACCESS]->(ec2)
UNWIND nodes(path) as n
OPTIONAL MATCH (n)-[pfr]-(pf:{PROWLER_FINDING_LABEL} {{status: 'FAIL', provider_uid: $provider_uid}})
@@ -205,16 +201,14 @@ AWS_SECURITY_GROUPS_OPEN_INTERNET_FACING = AttackPathsQueryDefinition(
description="Find internet-facing resources associated with security groups that allow inbound access from '0.0.0.0/0'.",
provider="aws",
cypher=f"""
CALL apoc.create.vNode(['Internet'], {{id: 'Internet', name: 'Internet', provider_id: $provider_id}})
YIELD node AS internet
OPTIONAL MATCH (internet:Internet {{_provider_id: $provider_id}})
// Match EC2 instances that are internet-exposed with open security groups (0.0.0.0/0)
MATCH path_ec2 = (aws:AWSAccount {{id: $provider_uid}})--(ec2:EC2Instance)--(sg:EC2SecurityGroup)--(ipi:IpPermissionInbound)--(ir:IpRange)
WHERE ec2.exposed_internet = true
AND ir.range = "0.0.0.0/0"
CALL apoc.create.vRelationship(internet, 'CAN_ACCESS', {{provider_id: $provider_id}}, ec2)
YIELD rel AS can_access
OPTIONAL MATCH (internet)-[can_access:CAN_ACCESS]->(ec2)
UNWIND nodes(path_ec2) as n
OPTIONAL MATCH (n)-[pfr]-(pf:{PROWLER_FINDING_LABEL} {{status: 'FAIL', provider_uid: $provider_uid}})
@@ -231,14 +225,12 @@ AWS_CLASSIC_ELB_INTERNET_EXPOSED = AttackPathsQueryDefinition(
description="Find Classic Load Balancers exposed to the internet along with their listeners.",
provider="aws",
cypher=f"""
CALL apoc.create.vNode(['Internet'], {{id: 'Internet', name: 'Internet', provider_id: $provider_id}})
YIELD node AS internet
OPTIONAL MATCH (internet:Internet {{_provider_id: $provider_id}})
MATCH path = (aws:AWSAccount {{id: $provider_uid}})--(elb:LoadBalancer)--(listener:ELBListener)
WHERE elb.exposed_internet = true
CALL apoc.create.vRelationship(internet, 'CAN_ACCESS', {{provider_id: $provider_id}}, elb)
YIELD rel AS can_access
OPTIONAL MATCH (internet)-[can_access:CAN_ACCESS]->(elb)
UNWIND nodes(path) as n
OPTIONAL MATCH (n)-[pfr]-(pf:{PROWLER_FINDING_LABEL} {{status: 'FAIL', provider_uid: $provider_uid}})
@@ -255,14 +247,12 @@ AWS_ELBV2_INTERNET_EXPOSED = AttackPathsQueryDefinition(
description="Find ELBv2 load balancers exposed to the internet along with their listeners.",
provider="aws",
cypher=f"""
CALL apoc.create.vNode(['Internet'], {{id: 'Internet', name: 'Internet', provider_id: $provider_id}})
YIELD node AS internet
OPTIONAL MATCH (internet:Internet {{_provider_id: $provider_id}})
MATCH path = (aws:AWSAccount {{id: $provider_uid}})--(elbv2:LoadBalancerV2)--(listener:ELBV2Listener)
WHERE elbv2.exposed_internet = true
CALL apoc.create.vRelationship(internet, 'CAN_ACCESS', {{provider_id: $provider_id}}, elbv2)
YIELD rel AS can_access
OPTIONAL MATCH (internet)-[can_access:CAN_ACCESS]->(elbv2)
UNWIND nodes(path) as n
OPTIONAL MATCH (n)-[pfr]-(pf:{PROWLER_FINDING_LABEL} {{status: 'FAIL', provider_uid: $provider_uid}})
@@ -279,31 +269,15 @@ AWS_PUBLIC_IP_RESOURCE_LOOKUP = AttackPathsQueryDefinition(
description="Given a public IP address, find the related AWS resource and its adjacent node within the selected account.",
provider="aws",
cypher=f"""
CALL apoc.create.vNode(['Internet'], {{id: 'Internet', name: 'Internet', provider_id: $provider_id}})
YIELD node AS internet
OPTIONAL MATCH (internet:Internet {{_provider_id: $provider_id}})
CALL () {{
MATCH path = (aws:AWSAccount {{id: $provider_uid}})-[r]-(x:EC2PrivateIp)-[q]-(y)
WHERE x.public_ip = $ip
RETURN path, x
MATCH path = (aws:AWSAccount {{id: $provider_uid}})-[r]-(x)-[q]-(y)
WHERE (x:EC2PrivateIp AND x.public_ip = $ip)
OR (x:EC2Instance AND x.publicipaddress = $ip)
OR (x:NetworkInterface AND x.public_ip = $ip)
OR (x:ElasticIPAddress AND x.public_ip = $ip)
UNION MATCH path = (aws:AWSAccount {{id: $provider_uid}})-[r]-(x:EC2Instance)-[q]-(y)
WHERE x.publicipaddress = $ip
RETURN path, x
UNION MATCH path = (aws:AWSAccount {{id: $provider_uid}})-[r]-(x:NetworkInterface)-[q]-(y)
WHERE x.public_ip = $ip
RETURN path, x
UNION MATCH path = (aws:AWSAccount {{id: $provider_uid}})-[r]-(x:ElasticIPAddress)-[q]-(y)
WHERE x.public_ip = $ip
RETURN path, x
}}
WITH path, x, internet
CALL apoc.create.vRelationship(internet, 'CAN_ACCESS', {{provider_id: $provider_id}}, x)
YIELD rel AS can_access
OPTIONAL MATCH (internet)-[can_access:CAN_ACCESS]->(x)
UNWIND nodes(path) as n
OPTIONAL MATCH (n)-[pfr]-(pf:{PROWLER_FINDING_LABEL} {{status: 'FAIL', provider_uid: $provider_uid}})
@@ -0,0 +1,39 @@
from django.db import migrations
import api.db_utils
class Migration(migrations.Migration):
dependencies = [
("api", "0083_image_provider"),
]
operations = [
migrations.AlterField(
model_name="provider",
name="provider",
field=api.db_utils.ProviderEnumField(
choices=[
("aws", "AWS"),
("azure", "Azure"),
("gcp", "GCP"),
("kubernetes", "Kubernetes"),
("m365", "M365"),
("github", "GitHub"),
("mongodbatlas", "MongoDB Atlas"),
("iac", "IaC"),
("oraclecloud", "Oracle Cloud Infrastructure"),
("alibabacloud", "Alibaba Cloud"),
("cloudflare", "Cloudflare"),
("openstack", "OpenStack"),
("image", "Image"),
("googleworkspace", "Google Workspace"),
],
default="aws",
),
),
migrations.RunSQL(
"ALTER TYPE provider ADD VALUE IF NOT EXISTS 'googleworkspace';",
reverse_sql=migrations.RunSQL.noop,
),
]
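The `RunSQL` step is what actually extends the database type: `AlterField` mostly updates the Django-side choices, while `ALTER TYPE ... ADD VALUE` adds the new label to the underlying Postgres `provider` enum. A quick way to confirm the label landed, sketched with `psycopg2` and a placeholder DSN (driver choice and connection details are assumptions, not part of this change):

```python
import psycopg2  # assumption: psycopg2 is available in the environment

# Placeholder DSN; substitute real connection details.
conn = psycopg2.connect("dbname=prowler user=prowler host=localhost")
with conn, conn.cursor() as cur:
    # enum_range(NULL::provider) returns every label of the 'provider' enum type.
    cur.execute("SELECT unnest(enum_range(NULL::provider))")
    labels = [row[0] for row in cur.fetchall()]

assert "googleworkspace" in labels, "enum label missing; did the migration run?"
```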
+10
@@ -293,6 +293,7 @@ class Provider(RowLevelSecurityProtectedModel):
CLOUDFLARE = "cloudflare", _("Cloudflare")
OPENSTACK = "openstack", _("OpenStack")
IMAGE = "image", _("Image")
GOOGLEWORKSPACE = "googleworkspace", _("Google Workspace")
@staticmethod
def validate_aws_uid(value):
@@ -342,6 +343,15 @@ class Provider(RowLevelSecurityProtectedModel):
pointer="/data/attributes/uid",
)
@staticmethod
def validate_googleworkspace_uid(value):
if not re.match(r"^C[0-9a-zA-Z]+$", value):
raise ModelValidationError(
detail="Google Workspace Customer ID must start with 'C' followed by one or more alphanumeric characters (e.g., C01234abc, C12345678).",
code="googleworkspace-uid",
pointer="/data/attributes/uid",
)
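The pattern is strict on purpose: an uppercase `C`, then one or more alphanumerics, nothing else. A minimal standalone sketch of the same regex against the valid and invalid UIDs exercised by the tests further down (the helper name here is hypothetical):

```python
import re

GOOGLEWORKSPACE_UID_PATTERN = re.compile(r"^C[0-9a-zA-Z]+$")

def is_valid_googleworkspace_uid(value: str) -> bool:
    # Mirrors validate_googleworkspace_uid: 'C' prefix plus 1+ alphanumerics.
    return bool(GOOGLEWORKSPACE_UID_PATTERN.match(value))

assert is_valid_googleworkspace_uid("C01234abc")      # mixed alphanumerics
assert is_valid_googleworkspace_uid("C12")            # minimum length
assert not is_valid_googleworkspace_uid("01234abc")   # missing 'C' prefix
assert not is_valid_googleworkspace_uid("C0123-abc")  # special character
assert not is_valid_googleworkspace_uid("c12345678")  # lowercase prefix
```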
@staticmethod
def validate_kubernetes_uid(value):
if not re.match(
File diff suppressed because it is too large
+8
@@ -23,6 +23,9 @@ from prowler.providers.azure.azure_provider import AzureProvider
from prowler.providers.cloudflare.cloudflare_provider import CloudflareProvider
from prowler.providers.gcp.gcp_provider import GcpProvider
from prowler.providers.github.github_provider import GithubProvider
from prowler.providers.googleworkspace.googleworkspace_provider import (
GoogleworkspaceProvider,
)
from prowler.providers.iac.iac_provider import IacProvider
from prowler.providers.image.image_provider import ImageProvider
from prowler.providers.kubernetes.kubernetes_provider import KubernetesProvider
@@ -113,6 +116,7 @@ class TestReturnProwlerProvider:
[
(Provider.ProviderChoices.AWS.value, AwsProvider),
(Provider.ProviderChoices.GCP.value, GcpProvider),
(Provider.ProviderChoices.GOOGLEWORKSPACE.value, GoogleworkspaceProvider),
(Provider.ProviderChoices.AZURE.value, AzureProvider),
(Provider.ProviderChoices.KUBERNETES.value, KubernetesProvider),
(Provider.ProviderChoices.M365.value, M365Provider),
@@ -248,6 +252,10 @@ class TestGetProwlerProviderKwargs:
Provider.ProviderChoices.GCP.value,
{"project_ids": ["provider_uid"]},
),
(
Provider.ProviderChoices.GOOGLEWORKSPACE.value,
{},
),
(
Provider.ProviderChoices.KUBERNETES.value,
{"context": "provider_uid"},
+68 -5
@@ -1190,6 +1190,26 @@ class TestProviderViewSet:
"uid": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"alias": "OpenStack Project",
},
{
"provider": "googleworkspace",
"uid": "C01234abc",
"alias": "Google Workspace Customer",
},
{
"provider": "googleworkspace",
"uid": "C12345678",
"alias": "Google Workspace All Digits",
},
{
"provider": "googleworkspace",
"uid": "CABCDEF123",
"alias": "Google Workspace Uppercase",
},
{
"provider": "googleworkspace",
"uid": "C12",
"alias": "Google Workspace Minimum Length",
},
]
),
)
@@ -1342,7 +1362,11 @@ class TestProviderViewSet:
response = authenticated_client.post(
reverse("provider-list"), data=provider_json_payload, format="json"
)
assert response.status_code == status.HTTP_400_BAD_REQUEST
assert response.status_code == status.HTTP_409_CONFLICT
error = response.json()["errors"][0]
assert error["detail"] == "Provider already exists."
assert error["code"] == "conflict"
assert error["source"]["pointer"] == "/data/attributes/uid"
mock_delete_task.reset_mock()
mock_delete_task.return_value = task_mock
@@ -1634,6 +1658,36 @@ class TestProviderViewSet:
"min_length",
"uid",
),
# Google Workspace UID validation - missing 'C' prefix
(
{
"provider": "googleworkspace",
"uid": "01234abc",
"alias": "test",
},
"googleworkspace-uid",
"uid",
),
# Google Workspace UID validation - contains special characters
(
{
"provider": "googleworkspace",
"uid": "C0123-abc",
"alias": "test",
},
"googleworkspace-uid",
"uid",
),
# Google Workspace UID validation - lowercase 'c' prefix
(
{
"provider": "googleworkspace",
"uid": "c12345678",
"alias": "test",
},
"googleworkspace-uid",
"uid",
),
]
),
)
@@ -1807,21 +1861,21 @@ class TestProviderViewSet:
(
"uid.icontains",
"1",
10,
11,
),
("alias", "aws_testing_1", 1),
("alias.icontains", "aws", 2),
("inserted_at", TODAY, 11),
("inserted_at", TODAY, 12),
(
"inserted_at.gte",
"2024-01-01",
11,
12,
),
("inserted_at.lte", "2024-01-01", 0),
(
"updated_at.gte",
"2024-01-01",
11,
12,
),
("updated_at.lte", "2024-01-01", 0),
]
@@ -2437,6 +2491,15 @@ class TestProviderSecretViewSet:
"clouds_yaml_cloud": "mycloud",
},
),
# Google Workspace with service account credentials
(
Provider.ProviderChoices.GOOGLEWORKSPACE.value,
ProviderSecret.TypeChoices.STATIC,
{
"credentials_content": '{"type": "service_account", "project_id": "test-project", "private_key_id": "key123", "private_key": "-----BEGIN PRIVATE KEY-----\\ntest\\n-----END PRIVATE KEY-----\\n", "client_email": "test@test-project.iam.gserviceaccount.com", "client_id": "123456789"}',
"delegated_user": "admin@example.com",
},
),
],
)
def test_provider_secrets_create_valid(
+13 -2
@@ -27,6 +27,9 @@ if TYPE_CHECKING:
from prowler.providers.cloudflare.cloudflare_provider import CloudflareProvider
from prowler.providers.gcp.gcp_provider import GcpProvider
from prowler.providers.github.github_provider import GithubProvider
from prowler.providers.googleworkspace.googleworkspace_provider import (
GoogleworkspaceProvider,
)
from prowler.providers.iac.iac_provider import IacProvider
from prowler.providers.image.image_provider import ImageProvider
from prowler.providers.kubernetes.kubernetes_provider import KubernetesProvider
@@ -83,6 +86,7 @@ def return_prowler_provider(
| CloudflareProvider
| GcpProvider
| GithubProvider
| GoogleworkspaceProvider
| IacProvider
| ImageProvider
| KubernetesProvider
@@ -97,7 +101,7 @@ def return_prowler_provider(
provider (Provider): The provider object containing the provider type and associated secrets.
Returns:
AlibabacloudProvider | AwsProvider | AzureProvider | CloudflareProvider | GcpProvider | GithubProvider | IacProvider | ImageProvider | KubernetesProvider | M365Provider | MongodbatlasProvider | OpenstackProvider | OraclecloudProvider: The corresponding provider class.
AlibabacloudProvider | AwsProvider | AzureProvider | CloudflareProvider | GcpProvider | GithubProvider | GoogleworkspaceProvider | IacProvider | ImageProvider | KubernetesProvider | M365Provider | MongodbatlasProvider | OpenstackProvider | OraclecloudProvider: The corresponding provider class.
Raises:
ValueError: If the provider type specified in `provider.provider` is not supported.
@@ -111,6 +115,12 @@ def return_prowler_provider(
from prowler.providers.gcp.gcp_provider import GcpProvider
prowler_provider = GcpProvider
case Provider.ProviderChoices.GOOGLEWORKSPACE.value:
from prowler.providers.googleworkspace.googleworkspace_provider import (
GoogleworkspaceProvider,
)
prowler_provider = GoogleworkspaceProvider
case Provider.ProviderChoices.AZURE.value:
from prowler.providers.azure.azure_provider import AzureProvider
@@ -263,6 +273,7 @@ def initialize_prowler_provider(
| CloudflareProvider
| GcpProvider
| GithubProvider
| GoogleworkspaceProvider
| IacProvider
| ImageProvider
| KubernetesProvider
@@ -278,7 +289,7 @@ def initialize_prowler_provider(
mutelist_processor (Processor): The mutelist processor object containing the mutelist configuration.
Returns:
AlibabacloudProvider | AwsProvider | AzureProvider | CloudflareProvider | GcpProvider | GithubProvider | IacProvider | ImageProvider | KubernetesProvider | M365Provider | MongodbatlasProvider | OpenstackProvider | OraclecloudProvider: An instance of the corresponding provider class
AlibabacloudProvider | AwsProvider | AzureProvider | CloudflareProvider | GcpProvider | GithubProvider | GoogleworkspaceProvider | IacProvider | ImageProvider | KubernetesProvider | M365Provider | MongodbatlasProvider | OpenstackProvider | OraclecloudProvider: An instance of the corresponding provider class
initialized with the provider's secrets.
"""
prowler_provider = return_prowler_provider(provider)
@@ -191,6 +191,22 @@ from rest_framework_json_api import serializers
},
"required": ["service_account_key"],
},
{
"type": "object",
"title": "Google Workspace Service Account",
"properties": {
"credentials_content": {
"type": "string",
"description": "The service account JSON credentials content for Google Workspace API access with domain-wide delegation enabled.",
},
"delegated_user": {
"type": "string",
"format": "email",
"description": "The email address of the Google Workspace super admin user to impersonate for domain-wide delegation.",
},
},
"required": ["credentials_content", "delegated_user"],
},
{
"type": "object",
"title": "Kubernetes Static Credentials",
+31
@@ -6,6 +6,7 @@ from django.conf import settings
from django.contrib.auth import authenticate
from django.contrib.auth.models import update_last_login
from django.contrib.auth.password_validation import validate_password
from django.core.exceptions import ValidationError as DjangoValidationError
from django.db import IntegrityError
from drf_spectacular.utils import extend_schema_field
from jwt.exceptions import InvalidKeyError
@@ -959,6 +960,26 @@ class ProviderCreateSerializer(RLSSerializer, BaseWriteSerializer):
},
}
def create(self, validated_data):
try:
return super().create(validated_data)
except DjangoValidationError as e:
if "unique_provider_uids" in str(e):
raise ConflictException(
detail="Provider already exists.",
pointer="/data/attributes/uid",
)
raise
except IntegrityError as e:
# Handle race conditions where the unique constraint is enforced at the DB level
# after validation has already passed.
if "unique_provider_uids" in str(e):
raise ConflictException(
detail="Provider already exists.",
pointer="/data/attributes/uid",
)
raise
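From a client's point of view, re-creating the same provider UID now yields a 409 with a stable error payload instead of a generic 400. A rough sketch with `requests` (the base URL, token, and exact JSON:API payload shape are assumptions for illustration):

```python
import requests

API = "http://localhost:8080/api/v1"  # assumption: local Prowler API
headers = {
    "Authorization": "Bearer <token>",          # assumption: a valid access token
    "Content-Type": "application/vnd.api+json",
}
payload = {
    "data": {
        "type": "providers",
        "attributes": {"provider": "aws", "uid": "123456789012", "alias": "dup"},
    }
}

first = requests.post(f"{API}/providers", json=payload, headers=headers)
second = requests.post(f"{API}/providers", json=payload, headers=headers)

print(first.status_code)   # created
print(second.status_code)  # 409, detail: "Provider already exists."
```

The second `except` branch above exists for exactly this scenario under concurrency: two simultaneous creates can both pass validation, and only the database-level unique constraint catches the loser.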
class ProviderUpdateSerializer(BaseWriteSerializer):
"""
@@ -1520,6 +1541,8 @@ class BaseWriteProviderSecretSerializer(BaseWriteSerializer):
serializer = AzureProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.GCP.value:
serializer = GCPProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.GOOGLEWORKSPACE.value:
serializer = GoogleWorkspaceProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.GITHUB.value:
serializer = GithubProviderSecret(data=secret)
elif provider_type == Provider.ProviderChoices.IAC.value:
@@ -1655,6 +1678,14 @@ class GCPServiceAccountProviderSecret(serializers.Serializer):
resource_name = "provider-secrets"
class GoogleWorkspaceProviderSecret(serializers.Serializer):
credentials_content = serializers.CharField()
delegated_user = serializers.EmailField()
class Meta:
resource_name = "provider-secrets"
class MongoDBAtlasProviderSecret(serializers.Serializer):
atlas_public_key = serializers.CharField()
atlas_private_key = serializers.CharField()
+45 -1
@@ -3,6 +3,7 @@ import glob
import json
import logging
import os
import time
from collections import defaultdict
from copy import deepcopy
@@ -407,7 +408,7 @@ class SchemaView(SpectacularAPIView):
def get(self, request, *args, **kwargs):
spectacular_settings.TITLE = "Prowler API"
spectacular_settings.VERSION = "1.20.0"
spectacular_settings.VERSION = "1.21.0"
spectacular_settings.DESCRIPTION = (
"Prowler API specification.\n\nThis file is auto-generated."
)
@@ -2570,14 +2571,35 @@ class AttackPathsScanViewSet(BaseRLSViewSet):
provider_id,
)
start = time.monotonic()
graph = attack_paths_views_helpers.execute_query(
database_name,
query_definition,
parameters,
provider_id,
)
query_duration = time.monotonic() - start
graph_database.clear_cache(database_name)
result_nodes = len(graph.get("nodes", []))
result_relationships = len(graph.get("relationships", []))
logger.info(
"attack_paths_query_run",
extra={
"user_id": str(request.user.id),
"tenant_id": str(attack_paths_scan.provider.tenant_id),
"metadata": {
"query_id": query_definition.id,
"provider": query_definition.provider,
"scan_id": pk,
"provider_id": provider_id,
"result_nodes": result_nodes,
"result_relationships": result_relationships,
"query_duration": round(query_duration, 3),
},
},
)
status_code = status.HTTP_200_OK
if not graph.get("nodes"):
status_code = status.HTTP_404_NOT_FOUND
@@ -2618,13 +2640,35 @@ class AttackPathsScanViewSet(BaseRLSViewSet):
)
provider_id = str(attack_paths_scan.provider_id)
start = time.monotonic()
graph = attack_paths_views_helpers.execute_custom_query(
database_name,
serializer.validated_data["query"],
provider_id,
)
query_duration = time.monotonic() - start
graph_database.clear_cache(database_name)
query_length = len(serializer.validated_data["query"])
result_nodes = len(graph.get("nodes", []))
result_relationships = len(graph.get("relationships", []))
logger.info(
"attack_paths_custom_query_run",
extra={
"user_id": str(request.user.id),
"tenant_id": str(attack_paths_scan.provider.tenant_id),
"metadata": {
"provider": attack_paths_scan.provider.provider,
"scan_id": pk,
"provider_id": provider_id,
"query_length": query_length,
"result_nodes": result_nodes,
"result_relationships": result_relationships,
"query_duration": round(query_duration, 3),
},
},
)
status_code = status.HTTP_200_OK
if not graph.get("nodes"):
status_code = status.HTTP_404_NOT_FOUND
+5
@@ -2,6 +2,7 @@ import json
import logging
from enum import StrEnum
from config.env import env
from django_guid.log_filters import CorrelationId
@@ -62,6 +63,8 @@ class NDJSONFormatter(logging.Formatter):
log_record["duration"] = record.duration
if hasattr(record, "status_code"):
log_record["status_code"] = record.status_code
if hasattr(record, "metadata"):
log_record["metadata"] = record.metadata
if record.exc_info:
log_record["exc_info"] = self.formatException(record.exc_info)
@@ -107,6 +110,8 @@ class HumanReadableFormatter(logging.Formatter):
log_components.append(f"done in {record.duration}s:")
if hasattr(record, "status_code"):
log_components.append(f"{record.status_code}")
if hasattr(record, "metadata"):
log_components.append(f"metadata={record.metadata}")
if record.exc_info:
log_components.append(self.formatException(record.exc_info))
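Both formatters treat `metadata` the same way `duration` and `status_code` are already handled: any dict passed via `extra` surfaces on the log record and gets serialized. A self-contained sketch of the NDJSON side, with a stripped-down formatter standing in for the real `NDJSONFormatter`:

```python
import json
import logging

class MiniNDJSONFormatter(logging.Formatter):
    """Stripped-down stand-in: message plus optional metadata only."""

    def format(self, record: logging.LogRecord) -> str:
        log_record = {"message": record.getMessage()}
        if hasattr(record, "metadata"):
            log_record["metadata"] = record.metadata
        return json.dumps(log_record)

handler = logging.StreamHandler()
handler.setFormatter(MiniNDJSONFormatter())
logger = logging.getLogger("attack_paths_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info(
    "attack_paths_query_run",
    extra={"metadata": {"query_id": "example_query", "result_nodes": 3}},  # hypothetical values
)
# -> {"message": "attack_paths_query_run", "metadata": {"query_id": "example_query", "result_nodes": 3}}
```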
+7
@@ -543,6 +543,12 @@ def providers_fixture(tenants_fixture):
alias="openstack_testing",
tenant_id=tenant.id,
)
provider12 = Provider.objects.create(
provider="googleworkspace",
uid="C12345678",
alias="googleworkspace_testing",
tenant_id=tenant.id,
)
return (
provider1,
@@ -556,6 +562,7 @@ def providers_fixture(tenants_fixture):
provider9,
provider10,
provider11,
provider12,
)
+28 -2
@@ -43,6 +43,7 @@ def start_aws_ingestion(
"aws_guardduty_severity_threshold": cartography_config.aws_guardduty_severity_threshold,
"aws_cloudtrail_management_events_lookback_hours": cartography_config.aws_cloudtrail_management_events_lookback_hours,
"experimental_aws_inspector_batch": cartography_config.experimental_aws_inspector_batch,
"aws_tagging_api_cleanup_batch": cartography_config.aws_tagging_api_cleanup_batch,
}
boto3_session = get_boto3_session(prowler_api_provider, prowler_sdk_provider)
@@ -116,6 +117,30 @@ def start_aws_ingestion(
neo4j_session,
common_job_parameters,
)
if all(
s in requested_syncs
for s in ["ecs", "ec2:load_balancer_v2", "ec2:load_balancer_v2:expose"]
):
logger.info(
f"Syncing lb_container_exposure scoped analysis for AWS account {prowler_api_provider.uid}"
)
cartography_aws.run_scoped_analysis_job(
"aws_lb_container_exposure.json",
neo4j_session,
common_job_parameters,
)
if all(s in requested_syncs for s in ["ec2:network_acls", "ec2:load_balancer_v2"]):
logger.info(
f"Syncing lb_nacl_direct scoped analysis for AWS account {prowler_api_provider.uid}"
)
cartography_aws.run_scoped_analysis_job(
"aws_lb_nacl_direct.json",
neo4j_session,
common_job_parameters,
)
db_utils.update_attack_paths_scan_progress(attack_paths_scan, 91)
logger.info(f"Syncing metadata for AWS account {prowler_api_provider.uid}")
@@ -239,8 +264,9 @@ def sync_aws_account(
failed_syncs[func_name] = exception_message
logger.warning(
f"Caught exception syncing function {func_name} from AWS account {prowler_api_provider.uid}. We "
"are continuing on to the next AWS sync function.",
f"Caught exception syncing function {func_name} from AWS account {prowler_api_provider.uid}: {e}. "
"Continuing to the next AWS sync function.",
exc_info=True,
)
continue
@@ -1,3 +1,58 @@
"""
Attack Paths scan orchestrator.
Runs the full scan lifecycle for a single provider, called from a Celery task.
The idea is simple: ingest everything into a throwaway Neo4j database, enrich
it with Prowler-specific data, then swap it into the tenant's long-lived
database so queries never see a half-built graph.
Two databases are involved:
- Temporary (db-tmp-scan-<attack_paths_scan_id>): short-lived, single-provider, dropped after sync.
- Tenant (db-tenant-<tenant_uuid>): long-lived, multi-provider, what the API queries against.
Pipeline steps:
1. Resolve the Prowler provider and SDK credentials from the scan ID.
Retrieve or create the AttackPathsScan row. Exit early if the provider
type has no ingestion function (only AWS is supported today).
2. Create a fresh temporary Neo4j database and set up Cartography indexes
plus ProwlerFinding indexes before writing any data.
3. Run the provider-specific Cartography ingestion (e.g. aws.start_aws_ingestion).
This iterates over cloud services and writes the standard Cartography nodes
(AWSAccount, EC2Instance, IAMRole, etc.) and relationships (RESOURCE,
POLICY, STATEMENT, TRUSTS_AWS_PRINCIPAL, ...) into the temp database.
Wrapped in call_within_event_loop because some Cartography modules use async.
4. Run Cartography post-processing: ontology for label propagation and
analysis for derived relationships.
5. Create an Internet singleton node and add CAN_ACCESS relationships to
internet-exposed resources (EC2Instance, LoadBalancer, LoadBalancerV2).
6. Stream Prowler findings from Postgres in batches. Each finding becomes a
ProwlerFinding node linked to its cloud-resource node via HAS_FINDING.
Before that, an _AWSResource label (provider-specific) is added to all
nodes connected to the AWSAccount so finding lookups can use an index.
Stale findings from previous scans are cleaned up.
7. Sync the temp database into the tenant database:
- Drop the old provider subgraph (matched by _provider_id property).
graph_data_ready is set to False for all scans of this provider while
the swap happens so the API doesn't serve partial data.
- Copy nodes and relationships in batches. Every synced node gets a
_ProviderResource label and _provider_id / _provider_element_id
properties for multi-provider isolation.
- Set graph_data_ready back to True.
8. Drop the temporary database, mark the AttackPathsScan as COMPLETED.
On failure the temp database is dropped, the scan is marked FAILED, and the
exception propagates to Celery.
"""
import logging
import time
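Step 7 of the docstring is where the ordering is load-bearing: readiness is flipped off before anything is dropped and flipped back on only after the copy finishes, so a concurrent API query never observes a half-copied provider subgraph. A condensed sketch of that sequencing, with hypothetical stand-ins for the real `graph_database` / `db_utils` helpers:

```python
# Hypothetical helpers standing in for graph_database / db_utils.
def set_graph_data_ready(provider_id: str, ready: bool) -> None:
    print(f"graph_data_ready[{provider_id}] = {ready}")

def drop_provider_subgraph(tenant_db: str, provider_id: str) -> None:
    print(f"delete nodes with _provider_id = {provider_id!r} in {tenant_db}")

def copy_in_batches(tmp_db: str, tenant_db: str, provider_id: str) -> None:
    # Each copied node gets _ProviderResource plus _provider_id properties.
    print(f"copy {tmp_db} -> {tenant_db}, tagging _provider_id = {provider_id!r}")

def sync_temp_into_tenant(tmp_db: str, tenant_db: str, provider_id: str) -> None:
    set_graph_data_ready(provider_id, False)       # hide provider during the swap
    drop_provider_subgraph(tenant_db, provider_id)
    copy_in_batches(tmp_db, tenant_db, provider_id)
    set_graph_data_ready(provider_id, True)        # safe to serve again
    # Failure handling (marking the scan FAILED, dropping the temp DB) is
    # omitted here; the hunk below shows the real cleanup path.

sync_temp_into_tenant("db-tmp-scan-123", "db-tenant-abc", "provider-1")
```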
@@ -212,18 +267,20 @@ def run(tenant_id: str, scan_id: str, task_id: str) -> dict[str, Any]:
try:
graph_database.drop_database(tmp_cartography_config.neo4j_database)
except Exception:
except Exception as e:
logger.error(
f"Failed to drop temporary Neo4j database {tmp_cartography_config.neo4j_database} during cleanup"
f"Failed to drop temporary Neo4j database {tmp_cartography_config.neo4j_database} during cleanup: {e}",
exc_info=True,
)
try:
db_utils.finish_attack_paths_scan(
attack_paths_scan, StateChoices.FAILED, ingestion_exceptions
)
except Exception:
logger.warning(
f"Could not mark attack paths scan {attack_paths_scan.id} as FAILED (row may have been deleted)"
except Exception as e:
logger.error(
f"Could not mark attack paths scan {attack_paths_scan.id} as FAILED (row may have been deleted): {e}",
exc_info=True,
)
raise
+11 -6
@@ -336,7 +336,6 @@ class ENSReportGenerator(BaseComplianceReportGenerator):
for req in data.requirements:
if req.status == StatusChoices.MANUAL:
continue
m = get_requirement_metadata(req.id, data.attributes_by_requirement_id)
if m:
marco = getattr(m, "Marco", "Otros")
@@ -365,9 +364,12 @@ class ENSReportGenerator(BaseComplianceReportGenerator):
elements.append(Paragraph(f"{categoria_name}", self.styles["h3"]))
for req in reqs:
status_indicator = (
"" if req["status"] == StatusChoices.PASS else ""
)
if req["status"] == StatusChoices.PASS:
status_indicator = ""
elif req["status"] == StatusChoices.MANUAL:
status_indicator = ""
else:
status_indicator = ""
nivel_badge = f"[{req['nivel'].upper()}]" if req["nivel"] else ""
elements.append(
Paragraph(
@@ -841,11 +843,14 @@ class ENSReportGenerator(BaseComplianceReportGenerator):
elements.append(Spacer(1, 0.15 * inch))
# Status and Nivel badges row
status_color = COLOR_HIGH_RISK # FAIL
status_text = str(req.status).upper()
status_color = (
COLOR_HIGH_RISK if req.status == StatusChoices.FAIL else COLOR_GRAY
)
nivel_color = nivel_colors.get(nivel, COLOR_GRAY)
badges_row1 = [
["State:", "FAIL", "", f"Nivel: {nivel.upper()}"],
["State:", status_text, "", f"Nivel: {nivel.upper()}"],
]
badges_table1 = Table(
badges_row1,
@@ -35,19 +35,27 @@ def _aggregate_requirement_statistics_from_database(
}
"""
requirement_statistics_by_check_id = {}
# TODO: take into account that now the relation is 1 finding == 1 resource, review this when the logic changes
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
aggregated_statistics_queryset = (
Finding.all_objects.filter(
tenant_id=tenant_id, scan_id=scan_id, muted=False
tenant_id=tenant_id,
scan_id=scan_id,
muted=False,
resources__provider__is_deleted=False,
)
.values("check_id")
.annotate(
total_findings=Count(
"id",
distinct=True,
filter=Q(status__in=[StatusChoices.PASS, StatusChoices.FAIL]),
),
passed_findings=Count("id", filter=Q(status=StatusChoices.PASS)),
passed_findings=Count(
"id",
distinct=True,
filter=Q(status=StatusChoices.PASS),
),
)
)
+116 -75
@@ -29,7 +29,7 @@ from tasks.jobs.threatscore_utils import (
_load_findings_for_requirement_checks,
)
from api.models import Finding, StatusChoices
from api.models import Finding, Resource, ResourceFindingMapping, StatusChoices
from prowler.lib.check.models import Severity
matplotlib.use("Agg") # Use non-interactive backend for tests
@@ -39,43 +39,50 @@ matplotlib.use("Agg") # Use non-interactive backend for tests
class TestAggregateRequirementStatistics:
"""Test suite for _aggregate_requirement_statistics_from_database function."""
def _create_finding_with_resource(
self, tenant, scan, uid, check_id, status, severity=Severity.high
):
"""Helper to create a finding linked to a resource (matching scan processing behavior)."""
finding = Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid=uid,
check_id=check_id,
status=status,
severity=severity,
impact=severity,
check_metadata={},
raw_result={},
)
resource = Resource.objects.create(
tenant_id=tenant.id,
provider=scan.provider,
uid=f"resource-{uid}",
name=f"resource-{uid}",
region="us-east-1",
service="test",
type="test::resource",
)
ResourceFindingMapping.objects.create(
tenant_id=tenant.id,
finding=finding,
resource=resource,
)
return finding
def test_aggregates_findings_correctly(self, tenants_fixture, scans_fixture):
"""Verify correct pass/total counts per check are aggregated from database."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-1",
check_id="check_1",
status=StatusChoices.PASS,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
raw_result={},
self._create_finding_with_resource(
tenant, scan, "finding-1", "check_1", StatusChoices.PASS
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-2",
check_id="check_1",
status=StatusChoices.FAIL,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
raw_result={},
self._create_finding_with_resource(
tenant, scan, "finding-2", "check_1", StatusChoices.FAIL
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-3",
check_id="check_2",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
self._create_finding_with_resource(
tenant, scan, "finding-3", "check_2", StatusChoices.PASS, Severity.medium
)
result = _aggregate_requirement_statistics_from_database(
@@ -106,27 +113,11 @@ class TestAggregateRequirementStatistics:
tenant = tenants_fixture[0]
scan = scans_fixture[0]
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-1",
check_id="check_1",
status=StatusChoices.FAIL,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
raw_result={},
self._create_finding_with_resource(
tenant, scan, "finding-1", "check_1", StatusChoices.FAIL
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-2",
check_id="check_1",
status=StatusChoices.FAIL,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
raw_result={},
self._create_finding_with_resource(
tenant, scan, "finding-2", "check_1", StatusChoices.FAIL
)
result = _aggregate_requirement_statistics_from_database(
@@ -142,16 +133,12 @@ class TestAggregateRequirementStatistics:
scan = scans_fixture[0]
for i in range(5):
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid=f"finding-{i}",
check_id="check_1",
status=StatusChoices.PASS if i % 2 == 0 else StatusChoices.FAIL,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
raw_result={},
self._create_finding_with_resource(
tenant,
scan,
f"finding-{i}",
"check_1",
StatusChoices.PASS if i % 2 == 0 else StatusChoices.FAIL,
)
result = _aggregate_requirement_statistics_from_database(
@@ -162,27 +149,43 @@ class TestAggregateRequirementStatistics:
assert result["check_1"]["total"] == 5
def test_mixed_statuses(self, tenants_fixture, scans_fixture):
"""Verify MANUAL status is counted in total but not passed."""
"""Verify MANUAL status is not counted in total or passed."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-1",
check_id="check_1",
status=StatusChoices.PASS,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
raw_result={},
self._create_finding_with_resource(
tenant, scan, "finding-1", "check_1", StatusChoices.PASS
)
self._create_finding_with_resource(
tenant, scan, "finding-2", "check_1", StatusChoices.MANUAL
)
result = _aggregate_requirement_statistics_from_database(
str(tenant.id), str(scan.id)
)
# MANUAL findings are excluded from the aggregation query
# since it only counts PASS and FAIL statuses
assert result["check_1"]["passed"] == 1
assert result["check_1"]["total"] == 1
def test_excludes_findings_without_resources(self, tenants_fixture, scans_fixture):
"""Verify findings without resources are excluded from aggregation."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
# Finding WITH resource → should be counted
self._create_finding_with_resource(
tenant, scan, "finding-1", "check_1", StatusChoices.PASS
)
# Finding WITHOUT resource → should be EXCLUDED
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-2",
check_id="check_1",
status=StatusChoices.MANUAL,
status=StatusChoices.FAIL,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
@@ -193,8 +196,46 @@ class TestAggregateRequirementStatistics:
str(tenant.id), str(scan.id)
)
# MANUAL findings are excluded from the aggregation query
# since it only counts PASS and FAIL statuses
assert result["check_1"]["passed"] == 1
assert result["check_1"]["total"] == 1
def test_multiple_resources_no_double_count(self, tenants_fixture, scans_fixture):
"""Verify a finding with multiple resources is only counted once."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
finding = Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-1",
check_id="check_1",
status=StatusChoices.PASS,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
raw_result={},
)
# Link two resources to the same finding
for i in range(2):
resource = Resource.objects.create(
tenant_id=tenant.id,
provider=scan.provider,
uid=f"resource-{i}",
name=f"resource-{i}",
region="us-east-1",
service="test",
type="test::resource",
)
ResourceFindingMapping.objects.create(
tenant_id=tenant.id,
finding=finding,
resource=resource,
)
result = _aggregate_requirement_statistics_from_database(
str(tenant.id), str(scan.id)
)
assert result["check_1"]["passed"] == 1
assert result["check_1"]["total"] == 1
+1
@@ -0,0 +1 @@
charts/
+2
@@ -13,6 +13,8 @@ keywords:
- gcp
- kubernetes
maintainers:
- name: Dani
email: andre.gomes@promptlyhealth.com
- name: Mihai
email: mihai.legat@gmail.com
dependencies:
+1 -1
@@ -21,7 +21,7 @@ print(
f"{Fore.GREEN}Loading all CSV files from the folder {folder_path_overview} ...\n{Style.RESET_ALL}"
)
cli.show_server_banner = lambda *x: click.echo(
f"{Fore.YELLOW}NOTE:{Style.RESET_ALL} If you are using {Fore.GREEN}{Style.BRIGHT}Prowler SaaS{Style.RESET_ALL} with the S3 integration or that integration \nfrom {Fore.CYAN}{Style.BRIGHT}Prowler Open Source{Style.RESET_ALL} and you want to use your data from your S3 bucket,\nrun: `{orange_color}aws s3 cp s3://<your-bucket>/output/csv ./output --recursive{Style.RESET_ALL}`\nand then run `prowler dashboard` again to load the new files."
f"{Fore.YELLOW}NOTE:{Style.RESET_ALL} If you are using {Fore.GREEN}{Style.BRIGHT}Prowler Cloud{Style.RESET_ALL} with the S3 integration or that integration \nfrom {Fore.CYAN}{Style.BRIGHT}Prowler CLI{Style.RESET_ALL} and you want to use your data from your S3 bucket,\nrun: `{orange_color}aws s3 cp s3://<your-bucket>/output/csv ./output --recursive{Style.RESET_ALL}`\nand then run `prowler dashboard` again to load the new files."
)
# Initialize the app - incorporate css
+1
@@ -315,6 +315,7 @@ The type of resource being audited. This field helps categorize and organize fin
- **Kubernetes**: Use types shown under `KIND` from `kubectl api-resources`.
- **Oracle Cloud Infrastructure**: Use types from [Oracle Cloud Infrastructure documentation](https://docs.public.oneportal.content.oci.oraclecloud.com/en-us/iaas/Content/Search/Tasks/queryingresources_topic-Listing_Supported_Resource_Types.htm).
- **OpenStack**: Use types from [OpenStack Heat resource types](https://docs.openstack.org/heat/latest/template_guide/openstack.html).
- **Alibaba Cloud**: Use types from [Alibaba Cloud ROS resource types](https://www.alibabacloud.com/help/en/ros/developer-reference/list-of-resource-types-by-service).
- **Any other provider**: Use `NotDefined` due to lack of standardized resource types in their SDK or documentation.
#### ResourceGroup
+34
@@ -3406,6 +3406,40 @@ Use existing providers as templates, this will help you to understand better the
- **Use Rules**: Use rules to ensure the code generated by AI is following the way of working in Prowler.
---
## OCSF Field Requirements for Prowler Cloud Integration
When implementing a new provider that supports the `--push-to-cloud` feature, specific OCSF fields must be correctly populated to ensure proper findings ingestion into Prowler Cloud.
### Required OCSF Fields
The following fields in the OCSF output are critical for successful ingestion:
| Field | Requirement | Description |
|-------|-------------|-------------|
| `provider_uid` | Must match the UID used when registering the provider in the API | This identifier links findings to the correct provider in Prowler Cloud |
| `provider` | Must be the provider name | The name of the provider (e.g., `aws`, `azure`, `gcp`, `googleworkspace`) |
| `finding_info.uid` | Must be unique | Each finding must have a unique identifier to avoid duplicates |
| `resources.uid` | Must have a value | The resource UID cannot be empty; it identifies the specific resource being assessed |
### Implementation Reference
These fields are set in the OCSF output generation. See the [OCSF output implementation](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/outputs/ocsf/ocsf.py) for reference.
### Validation Checklist
Before releasing a new provider with `--push-to-cloud` support:
- [ ] Verify `provider_uid` matches the UID used in the API to register the provider
- [ ] Confirm `provider` field contains the correct provider name
- [ ] Ensure all `finding_info.uid` values are unique across findings
- [ ] Validate that `resources.uid` is populated for every finding
<Tip>
Use `python scripts/validate_ocsf_output.py output/*.ocsf.json` to automate these checks.
</Tip>
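For teams without that script handy, the same checks reduce to a few lines of plain Python. A hedged sketch, reading one OCSF JSON file (field access follows the table's dotted names; adjust it to the exact nesting your output uses):

```python
import json

def validate_ocsf_findings(path: str, expected_provider_uid: str, expected_provider: str) -> list[str]:
    """Sketch of the four ingestion-critical checks from the table above."""
    errors: list[str] = []
    with open(path) as f:
        findings = json.load(f)  # assumption: the file holds a JSON array of findings

    seen_uids: set[str] = set()
    for i, finding in enumerate(findings):
        if finding.get("provider_uid") != expected_provider_uid:
            errors.append(f"finding {i}: provider_uid does not match the registered UID")
        if finding.get("provider") != expected_provider:
            errors.append(f"finding {i}: unexpected provider name")
        uid = finding.get("finding_info", {}).get("uid")
        if not uid or uid in seen_uids:
            errors.append(f"finding {i}: missing or duplicate finding_info.uid")
        seen_uids.add(uid)
        if not all(r.get("uid") for r in finding.get("resources", [])):
            errors.append(f"finding {i}: resource with an empty uid")
    return errors

print(validate_ocsf_findings("output/example.ocsf.json", "123456789012", "aws"))  # hypothetical file
```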
## Checklist for New Providers
### CLI Integration Only
@@ -0,0 +1,315 @@
---
title: 'Test Impact Analysis'
---
Test impact analysis (TIA) determines which tests to run based on the files changed in a pull request. Instead of running the full test suite on every pull request, TIA maps changed files to the specific Prowler SDK, API, and end-to-end (E2E) tests that cover them. This approach reduces continuous integration (CI) time and resource usage while maintaining confidence that relevant code paths are tested.
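At its core the analysis is a few ordered pattern checks: drop ignored files, short-circuit on critical ones, then accumulate the tests of every matched module. A condensed sketch of that flow with a toy inline config (the real config lives in `.github/test-impact.yml`, and the real engine is `.github/scripts/test-impact.py`):

```python
from fnmatch import fnmatch

# Toy config standing in for .github/test-impact.yml.
IGNORED = ["docs/**", "*.md"]
CRITICAL = ["prowler/lib/**", "ui/lib/**", ".github/workflows/**"]
MODULES = [
    {"name": "ui-providers", "match": ["ui/components/providers/**"],
     "tests": [], "e2e": ["ui/tests/providers/**"]},
]

def matches(path: str, patterns: list[str]) -> bool:
    # fnmatch's '*' crosses '/' boundaries, which approximates '**' well enough here.
    return any(fnmatch(path, p) for p in patterns)

def analyze(changed_files: list[str]) -> dict:
    relevant = [f for f in changed_files if not matches(f, IGNORED)]
    if any(matches(f, CRITICAL) for f in relevant):
        return {"run-all": True}
    modules, e2e, tests = set(), set(), set()
    for f in relevant:
        for module in MODULES:
            if matches(f, module["match"]):
                modules.add(module["name"])
                e2e.update(module["e2e"])
                tests.update(module["tests"])
    return {"run-all": False, "modules": sorted(modules),
            "ui-e2e": sorted(e2e), "has-tests": bool(e2e or tests)}

print(analyze(["ui/components/providers/table.tsx", "README.md"]))
# -> run-all is False, modules == ['ui-providers'], ui-e2e == ['ui/tests/providers/**']
```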
## Architecture
### Components
| Component | Path | Role |
|-----------|------|------|
| Configuration | `.github/test-impact.yml` | Defines ignored, critical, and module path mappings |
| Analysis engine | `.github/scripts/test-impact.py` | Python script that evaluates changed files against the configuration |
| Reusable workflow | `.github/workflows/test-impact-analysis.yml` | GitHub Actions reusable workflow that orchestrates the analysis |
| E2E consumer | `.github/workflows/ui-e2e-tests-v2.yml` | Consumes TIA outputs to run targeted Playwright tests |
### Flow Diagram
```
PR opened/updated
        |
        v
+----------------------------------------------+
| tj-actions/changed-files                     |  Gets list of changed files from PR
+----------------------------------------------+
        |
        v
+----------------------------------------------+
| test-impact.py                               |
|                                              |
| 1. Filter ignored paths                      |  docs/**, *.md, .gitignore, etc.
| 2. Check critical paths                      |  prowler/lib/**, ui/lib/**, .github/workflows/**
| 3. Match modules                             |  Map remaining files to module definitions
| 4. Categorize tests                          |  Split into sdk-tests, api-tests, ui-e2e
+----------------------------------------------+
        |
        v
+----------------------------------------------+
| GitHub Actions Outputs                       |
|                                              |
| run-all: true/false                          |
| sdk-tests: "tests/providers/aws/**"          |
| api-tests: "api/src/backend/api/tests/**"    |
| ui-e2e: "ui/tests/providers/**"              |
| modules: "sdk-aws,ui-providers"              |
| has-tests: true/false                        |
| has-sdk-tests: true/false                    |
| has-api-tests: true/false                    |
| has-ui-e2e: true/false                       |
+----------------------------------------------+
        |
        v
+----------------------------------------------+
| Consumer Workflows                           |
|                                              |
| ui-e2e-tests-v2.yml:                         |
| - Path resolution pipeline                   |
| - Playwright execution                       |
+----------------------------------------------+
```
## Configuration Reference
The configuration lives in `.github/test-impact.yml` and contains three sections.
### `ignored` — Paths That Never Trigger Tests
Files matching these patterns are filtered out before any analysis takes place. This section is intended for non-code files.
```yaml
ignored:
paths:
- docs/**
- "*.md"
- .gitignore
- skills/**
- ui/tests/setups/** # E2E auth setup helpers (not runnable tests)
```
### `critical` — Paths That Trigger All Tests
If any changed file matches a critical path, the system short-circuits and outputs `run-all: true`. All downstream consumers then run their complete test suites.
```yaml
critical:
paths:
- prowler/lib/** # SDK core
- ui/lib/** # UI shared utilities
- ui/playwright.config.ts # Test infrastructure
- .github/workflows/** # CI changes
- .github/test-impact.yml # This config itself
```
### `modules` — Path-to-Test Mappings
Each module maps source file patterns to the tests that cover them.
```yaml
- name: ui-providers # Unique identifier
match: # Source file glob patterns
- ui/components/providers/**
- ui/actions/providers/**
- ui/app/**/providers/**
- ui/tests/providers/** # Test file changes also trigger themselves
tests: [] # SDK/API unit test patterns (empty for UI modules)
e2e: # Playwright E2E test patterns
- ui/tests/providers/**
```
#### Module Schema
| Field | Type | Description |
|-------|------|-------------|
| `name` | `string` | Unique module identifier (for example, `sdk-aws`, `ui-providers`, `api-views`) |
| `match` | `list[glob]` | Source file patterns that trigger this module |
| `tests` | `list[glob]` | Prowler SDK (`tests/`) or API (`api/`) unit test patterns to run |
| `e2e` | `list[glob]` | UI E2E test patterns (`ui/tests/`) to run |
#### Module Categories
- **`sdk-*`:** Provider and lib modules. These only produce `tests` output, not `e2e`.
- **`api-*`:** API views, serializers, filters, and role-based access control (RBAC). These produce `tests` and sometimes `e2e` (API changes can affect UI flows).
- **`ui-*`:** UI feature modules. These only produce `e2e` output, not `tests`.
## Path Resolution Pipeline
The E2E consumer workflow (`.github/workflows/ui-e2e-tests-v2.yml`, lines 202–253) transforms the `ui-e2e` output from glob patterns into paths that Playwright can execute. This transformation follows a multi-step shell pipeline.
### Step 1: Check Run Mode
```bash
if [[ "${RUN_ALL_TESTS}" == "true" ]]; then
pnpm run test:e2e # Run everything, skip pipeline
fi
```
### Step 2: Strip the `ui/` Prefix and `**` Suffix
```bash
# "ui/tests/providers/**" -> "tests/providers/"
TEST_PATHS=$(echo "$E2E_TEST_PATHS" | sed 's|ui/||g' | sed 's|\*\*||g')
```
### Step 3: Filter Out Setup Paths
```bash
# Remove auth setup helpers (not runnable test suites)
TEST_PATHS=$(echo "$TEST_PATHS" | grep -v '^tests/setups/')
```
### Step 4: Safety Net for Bare `tests/`
If the pattern `ui/tests/**` was present in the output (from a critical path or a broad module like `ui-shadcn`), it resolves to bare `tests/` after stripping. This would cause Playwright to discover setup files in `tests/setups/`, so it gets expanded instead:
```bash
if echo "$TEST_PATHS" | grep -qx 'tests/'; then
# Expand to specific subdirs, excluding tests/setups/
for dir in tests/*/; do
[[ "$dir" == "tests/setups/" ]] && continue
SPECIFIC_DIRS="${SPECIFIC_DIRS}${dir}"
done
fi
```
### Step 5: Empty Directory Check
Directories that do not contain any `.spec.ts` or `.test.ts` files are skipped. This handles forward-looking patterns where a module is configured but tests have not been written yet.
```bash
if find "$p" -name '*.spec.ts' -o -name '*.test.ts' | head -1 | grep -q .; then
VALID_PATHS="${VALID_PATHS}${p}"
else
echo "Skipping empty test directory: $p"
fi
```
### Step 6: Execute Playwright
```bash
pnpm exec playwright test $TEST_PATHS
# For example: pnpm exec playwright test tests/providers/ tests/scans/
```
## Playwright Project Mapping
Playwright discovers tests by scanning the directories passed to it. The `playwright.config.ts` file defines projects with `testMatch` patterns that control which spec files each project claims:
```
tests/providers/providers.spec.ts -> "providers" project -> depends on admin.auth.setup
tests/scans/scans.spec.ts -> "scans" project -> depends on admin.auth.setup
tests/sign-in-base/*.spec.ts -> "sign-in-base" -> no auth dependency
tests/auth/*.spec.ts -> "auth" -> no auth dependency
tests/sign-up/sign-up.spec.ts -> "sign-up" -> no auth dependency
tests/invitations/invitations.spec.ts -> "invitations" -> depends on admin.auth.setup
```
Auth setup projects (`admin.auth.setup`, `manage-scans.auth.setup`, and others) create authenticated browser state files. Projects that declare them as `dependencies` wait for the setup to complete before running.
When TIA runs only `tests/providers/`, Playwright still automatically runs `admin.auth.setup` because the `providers` project declares it as a dependency.
## Edge Cases and Known Considerations
### Forward-Looking Patterns (Empty Test Directories)
A module can reference `ui/tests/attack-paths/**` before any tests exist there. The empty directory check (step 5) gracefully skips it instead of failing.
### Broad Patterns and the Safety Net
Modules like `ui-shadcn` and `api-views` list every E2E test suite explicitly to avoid using `ui/tests/**`. If a broad pattern does produce bare `tests/`, the safety net expands it to specific subdirectories, excluding `tests/setups/`.
### Setup Files and Auth Dependencies
`ui/tests/setups/**` is listed in the `ignored` section and also filtered in the path resolution pipeline. This double protection ensures setup files are never passed as test targets to Playwright. Auth setups run only when declared as project dependencies.
### Critical Path Triggering Run-All
Changes to `.github/workflows/**` or `.github/test-impact.yml` trigger `run-all: true`. This means editing any workflow file (even unrelated ones) runs the full test suite. This behavior is intentional — CI infrastructure changes should be validated broadly.
### Unmatched Files
Files that do not match any ignored, critical, or module pattern produce no test output. The `has-tests` flag is set to `false` and consumer workflows skip entirely via the `skip-e2e` job.
## Adding New Test Modules
To add tests for a new UI feature (for example, `dashboards`):
1. **Add the module to `.github/test-impact.yml`:**
```yaml
- name: ui-dashboards
match:
- ui/components/dashboards/**
- ui/actions/dashboards/**
- ui/app/**/dashboards/**
- ui/tests/dashboards/**
tests: []
e2e:
- ui/tests/dashboards/**
```
2. **Create the test directory and spec file:**
```
ui/tests/dashboards/dashboards.spec.ts
```
3. **Add a Playwright project in `ui/playwright.config.ts`:**
```typescript
{
name: "dashboards",
testMatch: "dashboards.spec.ts",
dependencies: ["admin.auth.setup"], // if tests need auth
},
```
4. **Register E2E paths in shared UI modules (if applicable):**
If the feature uses shared UI components, add the E2E path to the `ui-shadcn` module so that changes to shared components also trigger dashboard tests:
```yaml
- name: ui-shadcn
match:
- ui/components/shadcn/**
- ui/components/ui/**
e2e:
- ui/tests/dashboards/** # Add here
# ... existing paths
```
5. **Register E2E paths in API modules (if applicable):**
If API changes affect this feature, add the E2E path to the relevant `api-*` module (for example, `api-views`).
## Troubleshooting
### Tests Not Running When Expected
1. Check whether the changed file matches an `ignored` pattern. The script logs `[IGNORED]` to stderr.
2. Verify the file matches a module's `match` pattern. To test locally, run:
```bash
python .github/scripts/test-impact.py path/to/changed/file.ts
```
3. Confirm the module has non-empty `e2e` (for E2E) or `tests` (for unit tests).
4. Check the `has-ui-e2e` output — the consumer workflow gates on this flag.
### Unexpected Auth Setup Errors
Auth setup projects run automatically when a test project declares them as `dependencies`. If auth failures occur:
- **Verify secrets:** Confirm that the `E2E_ADMIN_USER` and `E2E_ADMIN_PASSWORD` secrets are set.
- **Check setup file existence:** Ensure the auth setup file exists in `ui/tests/setups/`.
- **Validate test match patterns:** Ensure the `testMatch` pattern in `playwright.config.ts` correctly matches the setup file.
### "No Tests Found" Errors
This typically means the path resolution pipeline produced valid directories but Playwright could not match any spec files to a project:
- **Check project configuration:** Verify that `playwright.config.ts` has a project with a `testMatch` pattern for the spec files in that directory.
- **Verify file naming:** Confirm the spec file naming matches the expected pattern (for example, `feature.spec.ts`).
### "No Runnable E2E Test Paths After Filtering Setups"
All resolved paths were under `tests/setups/`. This indicates the module's `e2e` patterns only point to setup files, which is a configuration error. The module should be updated to point to actual test directories.
### Debugging Locally
```bash
# See what the analysis engine produces for specific files
python .github/scripts/test-impact.py ui/components/providers/some-file.tsx
# Output goes to stderr (analysis log) and GITHUB_OUTPUT (structured output)
# Without the GITHUB_OUTPUT env var, results print to stderr only
```
+9 -1
@@ -99,7 +99,7 @@
},
"user-guide/tutorials/prowler-app-rbac",
"user-guide/tutorials/prowler-app-api-keys",
"user-guide/tutorials/prowler-app-findings-ingestion",
"user-guide/tutorials/prowler-app-import-findings",
{
"group": "Mutelist",
"expanded": true,
@@ -117,6 +117,13 @@
"user-guide/tutorials/prowler-app-jira-integration"
]
},
{
"group": "AWS Organizations",
"expanded": true,
"pages": [
"user-guide/tutorials/prowler-cloud-aws-organizations"
]
},
{
"group": "Lighthouse AI",
"pages": [
@@ -124,6 +131,7 @@
"user-guide/tutorials/prowler-app-lighthouse-multi-llm"
]
},
"user-guide/tutorials/prowler-app-attack-paths",
"user-guide/tutorials/prowler-cloud-public-ips",
{
"group": "Tutorials",
@@ -10,7 +10,7 @@ Complete reference guide for all tools available in the Prowler MCP Server. Tool
|----------|------------|------------------------|
| Prowler Hub | 10 tools | No |
| Prowler Documentation | 2 tools | No |
| Prowler Cloud/App | 24 tools | Yes |
| Prowler Cloud/App | 27 tools | Yes |
## Tool Naming Convention
@@ -80,6 +80,14 @@ Tools for managing finding muting, including pattern-based bulk muting (mutelist
- **`prowler_app_update_mute_rule`** - Update a mute rule's name, reason, or enabled status
- **`prowler_app_delete_mute_rule`** - Delete a mute rule from the system
### Attack Paths Analysis
Tools for analyzing privilege escalation chains and security misconfigurations using graph-based analysis. Attack Paths maps relationships between cloud resources, permissions, and security findings to detect how privileges can be escalated and how misconfigurations can be exploited.
- **`prowler_app_list_attack_paths_scans`** - List Attack Paths scans with filtering by provider, provider type, and scan state (available, scheduled, executing, completed, failed, cancelled)
- **`prowler_app_list_attack_paths_queries`** - Discover available Attack Paths queries for a completed scan, including query names, descriptions, and required parameters
- **`prowler_app_run_attack_paths_query`** - Execute an Attack Paths query against a completed scan and retrieve graph results with nodes (cloud resources, findings, virtual nodes) and relationships (access paths, role assumptions, security group memberships)
### Compliance Management
Tools for viewing compliance status and framework details across all cloud providers.
@@ -23,9 +23,15 @@ Refer to the [Prowler App Tutorial](/user-guide/tutorials/prowler-app) for detai
```bash
VERSION=$(curl -s https://api.github.com/repos/prowler-cloud/prowler/releases/latest | jq -r .tag_name)
curl -sLO "https://raw.githubusercontent.com/prowler-cloud/prowler/refs/tags/${VERSION}/docker-compose.yml"
# Environment variables can be customized in the .env file. Using default values in production environments is not recommended.
curl -sLO "https://raw.githubusercontent.com/prowler-cloud/prowler/refs/tags/${VERSION}/.env"
docker compose up -d
```
<Callout icon="lock" iconType="regular" color="#e74c3c">
For a secure setup, the API auto-generates a unique key pair, `DJANGO_TOKEN_SIGNING_KEY` and `DJANGO_TOKEN_VERIFYING_KEY`, and stores it in `~/.config/prowler-api` (non-container) or the bound Docker volume in `_data/api` (container). Never commit or reuse static/default keys. To rotate keys, delete the stored key files and restart the API.
</Callout>
</Tab>
<Tab title="GitHub">
_Requirements_:
@@ -115,8 +121,8 @@ To update the environment file:
Edit the `.env` file and change version values:
```env
PROWLER_UI_VERSION="5.18.0"
PROWLER_API_VERSION="5.18.0"
PROWLER_UI_VERSION="5.19.0"
PROWLER_API_VERSION="5.19.0"
```
<Note>
@@ -24,6 +24,7 @@ Full access to Prowler Cloud platform and self-managed Prowler App for:
- **Scan Orchestration**: Trigger on-demand scans and schedule recurring security assessments
- **Resource Inventory**: Search and view detailed information about your audited resources
- **Muting Management**: Create and manage muting lists/rules to suppress non-relevant findings
- **Attack Paths Analysis**: Analyze privilege escalation chains and security misconfigurations through graph-based analysis of cloud resource relationships
### 2. Prowler Hub
@@ -61,6 +62,7 @@ The Prowler MCP Server enables powerful workflows through AI assistants:
- "Show me all critical findings from my AWS production accounts"
- "Register my new AWS account in Prowler and run a scheduled scan every day"
- "List all muted findings and detect what findgings are muted by a not enough good reason in relation to their severity"
- "Run an attack paths query to find EC2 instances exposed to the Internet with access to sensitive S3 buckets"
**Security Research**
- "Explain what the S3 bucket public access Prowler check does"
Binary files not shown (6 new images added).
@@ -0,0 +1,71 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1100 420" font-family="'Inter', 'Segoe UI', system-ui, -apple-system, sans-serif">
<defs>
<filter id="shadow" x="-4%" y="-4%" width="108%" height="108%">
<feDropShadow dx="0" dy="2" stdDeviation="3" flood-opacity="0.08"/>
</filter>
</defs>
<!-- Title -->
<text x="550" y="40" text-anchor="middle" font-size="22" font-weight="700" fill="#4285F4">Onboarding Flow</text>
<!-- Step 1 -->
<rect x="30" y="70" width="220" height="220" rx="12" fill="#fff" stroke="#4285F4" stroke-width="2.5" stroke-dasharray="8 4" filter="url(#shadow)"/>
<circle cx="140" cy="100" r="22" fill="#4285F4"/>
<text x="140" y="107" text-anchor="middle" font-size="16" font-weight="700" fill="#fff">1</text>
<text x="140" y="145" text-anchor="middle" font-size="15" font-weight="700" fill="#1a1a2e">Create Management</text>
<text x="140" y="165" text-anchor="middle" font-size="15" font-weight="700" fill="#1a1a2e">Account Role</text>
<rect x="60" y="185" width="160" height="24" rx="12" fill="#E8F0FE"/>
<text x="140" y="201" text-anchor="middle" font-size="11" font-weight="600" fill="#4285F4">Quick Create or Manual</text>
<text x="140" y="232" text-anchor="middle" font-size="12" fill="#5f6368">Allows Prowler to</text>
<text x="140" y="248" text-anchor="middle" font-size="12" fill="#5f6368">discover your org</text>
<text x="140" y="264" text-anchor="middle" font-size="12" fill="#5f6368">structure</text>
<!-- Arrow 1→2 -->
<path d="M260 180 L290 180" stroke="#9aa0a6" stroke-width="2" fill="none" marker-end="url(#arrowhead)"/>
<defs>
<marker id="arrowhead" markerWidth="10" markerHeight="7" refX="9" refY="3.5" orient="auto">
<polygon points="0 0, 10 3.5, 0 7" fill="#9aa0a6"/>
</marker>
</defs>
<!-- Step 2 -->
<rect x="300" y="70" width="220" height="220" rx="12" fill="#fff" stroke="#7B61FF" stroke-width="2.5" filter="url(#shadow)"/>
<circle cx="410" cy="100" r="22" fill="#7B61FF"/>
<text x="410" y="107" text-anchor="middle" font-size="16" font-weight="700" fill="#fff">2</text>
<text x="410" y="145" text-anchor="middle" font-size="15" font-weight="700" fill="#1a1a2e">Deploy StackSet</text>
<rect x="340" y="165" width="140" height="24" rx="12" fill="#F3F0FF"/>
<text x="410" y="181" text-anchor="middle" font-size="11" font-weight="600" fill="#7B61FF">In AWS Console</text>
<text x="410" y="212" text-anchor="middle" font-size="12" fill="#5f6368">Creates ProwlerScan</text>
<text x="410" y="228" text-anchor="middle" font-size="12" fill="#5f6368">role in every</text>
<text x="410" y="244" text-anchor="middle" font-size="12" fill="#5f6368">member account</text>
<!-- Arrow 2→3 -->
<path d="M530 180 L560 180" stroke="#9aa0a6" stroke-width="2" fill="none" marker-end="url(#arrowhead)"/>
<!-- Step 3 -->
<rect x="570" y="70" width="220" height="220" rx="12" fill="#fff" stroke="#00BFA5" stroke-width="2.5" filter="url(#shadow)"/>
<circle cx="680" cy="100" r="22" fill="#00BFA5"/>
<text x="680" y="107" text-anchor="middle" font-size="16" font-weight="700" fill="#fff">3</text>
<text x="680" y="145" text-anchor="middle" font-size="15" font-weight="700" fill="#1a1a2e">Run the Wizard</text>
<rect x="615" y="165" width="130" height="24" rx="12" fill="#E0F7F4"/>
<text x="680" y="181" text-anchor="middle" font-size="11" font-weight="600" fill="#00BFA5">In Prowler Cloud</text>
<text x="680" y="212" text-anchor="middle" font-size="12" fill="#5f6368">Discovers accounts,</text>
<text x="680" y="228" text-anchor="middle" font-size="12" fill="#5f6368">tests connections</text>
<!-- Arrow 3→4 -->
<path d="M800 180 L830 180" stroke="#9aa0a6" stroke-width="2" fill="none" marker-end="url(#arrowhead)"/>
<!-- Step 4 -->
<rect x="840" y="70" width="220" height="220" rx="12" fill="#fff" stroke="#F9AB00" stroke-width="2.5" filter="url(#shadow)"/>
<circle cx="950" cy="100" r="22" fill="#F9AB00"/>
<text x="950" y="107" text-anchor="middle" font-size="16" font-weight="700" fill="#fff">4</text>
<text x="950" y="145" text-anchor="middle" font-size="15" font-weight="700" fill="#1a1a2e">Launch Scans</text>
<rect x="898" y="165" width="104" height="24" rx="12" fill="#FEF7E0"/>
<text x="950" y="181" text-anchor="middle" font-size="11" font-weight="600" fill="#F9AB00">Automatic</text>
<text x="950" y="212" text-anchor="middle" font-size="12" fill="#5f6368">Scans run on all</text>
<text x="950" y="228" text-anchor="middle" font-size="12" fill="#5f6368">connected accounts</text>
<text x="950" y="244" text-anchor="middle" font-size="12" fill="#5f6368">on your schedule</text>
<!-- Footer -->
<text x="550" y="340" text-anchor="middle" font-size="13" fill="#9aa0a6">Steps 1 and 2 are done once in AWS | Steps 3 and 4 are done in Prowler Cloud</text>
</svg>
Binary files not shown (8 new images added).
@@ -0,0 +1,96 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1100 520" font-family="'Inter', 'Segoe UI', system-ui, -apple-system, sans-serif">
<defs>
<filter id="shadow" x="-4%" y="-4%" width="108%" height="108%">
<feDropShadow dx="0" dy="2" stdDeviation="4" flood-opacity="0.08"/>
</filter>
<linearGradient id="mgmtHeader" x1="0" y1="0" x2="1" y2="0">
<stop offset="0%" stop-color="#4285F4"/>
<stop offset="100%" stop-color="#5E97F6"/>
</linearGradient>
<linearGradient id="memberHeader" x1="0" y1="0" x2="1" y2="0">
<stop offset="0%" stop-color="#00BFA5"/>
<stop offset="100%" stop-color="#1DE9B6"/>
</linearGradient>
</defs>
<!-- Title -->
<text x="550" y="36" text-anchor="middle" font-size="22" font-weight="700" fill="#4285F4">Two Roles Architecture</text>
<!-- ===== Management Account Card ===== -->
<rect x="30" y="60" width="440" height="380" rx="14" fill="#fff" stroke="#4285F4" stroke-width="2" filter="url(#shadow)"/>
<!-- Header bar -->
<rect x="30" y="60" width="440" height="44" rx="14" fill="url(#mgmtHeader)"/>
<rect x="30" y="90" width="440" height="14" fill="url(#mgmtHeader)"/>
<text x="250" y="88" text-anchor="middle" font-size="16" font-weight="700" fill="#fff">Management Account</text>
<!-- Inner card -->
<rect x="50" y="118" width="400" height="300" rx="10" fill="#F0F4FF" stroke="#C5D6F7" stroke-width="1"/>
<text x="250" y="148" text-anchor="middle" font-size="15" font-weight="700" fill="#4285F4">Management Role</text>
<!-- Purpose -->
<text x="70" y="178" font-size="13" font-weight="700" fill="#1a1a2e">Purpose:</text>
<text x="70" y="196" font-size="12" fill="#5f6368">Discover Organization structure + scan management account</text>
<!-- Permissions -->
<text x="70" y="222" font-size="13" font-weight="700" fill="#1a1a2e">Permissions:</text>
<text x="82" y="240" font-size="11" fill="#5f6368">SecurityAudit (AWS managed policy)</text>
<text x="82" y="256" font-size="11" fill="#5f6368">ViewOnlyAccess (AWS managed policy)</text>
<text x="82" y="272" font-size="11" fill="#5f6368">Additional read-only (inline policy)</text>
<text x="82" y="288" font-size="11" fill="#4285F4" font-weight="600">organizations:DescribeAccount</text>
<text x="82" y="304" font-size="11" fill="#4285F4" font-weight="600">organizations:DescribeOrganization</text>
<text x="82" y="320" font-size="11" fill="#4285F4" font-weight="600">organizations:ListAccounts</text>
<text x="82" y="336" font-size="11" fill="#4285F4" font-weight="600">organizations:ListAccountsForParent</text>
<text x="82" y="352" font-size="11" fill="#4285F4" font-weight="600">organizations:ListOrganizationalUnitsForParent</text>
<text x="82" y="368" font-size="11" fill="#4285F4" font-weight="600">organizations:ListRoots</text>
<text x="82" y="384" font-size="11" fill="#4285F4" font-weight="600">organizations:ListTagsForResource</text>
<!-- Deploy badge -->
<rect x="115" y="400" width="270" height="28" rx="14" fill="#FFF3E0" stroke="#F9AB00" stroke-width="1.5"/>
<text x="250" y="419" text-anchor="middle" font-size="12" font-weight="700" fill="#E65100">Deploy: Quick Create link or Manual</text>
<!-- ===== Prowler Cloud connector ===== -->
<rect x="490" y="195" width="120" height="36" rx="8" fill="#F5F5F5" stroke="#E0E0E0" stroke-width="1"/>
<text x="550" y="218" text-anchor="middle" font-size="12" font-weight="600" fill="#5f6368">Prowler Cloud</text>
<!-- Connector lines -->
<line x1="490" y1="213" x2="470" y2="213" stroke="#9aa0a6" stroke-width="1.5" stroke-dasharray="4 3"/>
<line x1="610" y1="213" x2="630" y2="213" stroke="#9aa0a6" stroke-width="1.5" stroke-dasharray="4 3"/>
<!-- ===== Member Accounts Card ===== -->
<rect x="630" y="60" width="440" height="380" rx="14" fill="#fff" stroke="#00BFA5" stroke-width="2" filter="url(#shadow)"/>
<!-- Header bar -->
<rect x="630" y="60" width="440" height="44" rx="14" fill="url(#memberHeader)"/>
<rect x="630" y="90" width="440" height="14" fill="url(#memberHeader)"/>
<text x="850" y="88" text-anchor="middle" font-size="16" font-weight="700" fill="#fff">Member Accounts</text>
<!-- Inner card -->
<rect x="650" y="118" width="400" height="300" rx="10" fill="#E8F8F5" stroke="#B2DFDB" stroke-width="1"/>
<text x="850" y="148" text-anchor="middle" font-size="15" font-weight="700" fill="#00897B">ProwlerScan Role (per account)</text>
<!-- Purpose -->
<text x="670" y="178" font-size="13" font-weight="700" fill="#1a1a2e">Purpose:</text>
<text x="670" y="196" font-size="12" fill="#5f6368">Security scanning of each account</text>
<!-- Permissions -->
<text x="670" y="222" font-size="13" font-weight="700" fill="#1a1a2e">Permissions:</text>
<text x="682" y="240" font-size="11" fill="#5f6368">SecurityAudit (AWS managed policy)</text>
<text x="682" y="256" font-size="11" fill="#5f6368">ViewOnlyAccess (AWS managed policy)</text>
<text x="682" y="272" font-size="11" fill="#5f6368">Additional read-only (inline policy)</text>
<!-- Scope -->
<text x="670" y="302" font-size="13" font-weight="700" fill="#1a1a2e">Scope:</text>
<text x="682" y="320" font-size="12" fill="#5f6368">Read-only access across all AWS services</text>
<text x="682" y="338" font-size="12" fill="#5f6368">No write or modify permissions</text>
<!-- Deploy badge -->
<rect x="735" y="400" width="230" height="28" rx="14" fill="#E8F5E9" stroke="#66BB6A" stroke-width="1.5"/>
<text x="850" y="419" text-anchor="middle" font-size="12" font-weight="700" fill="#2E7D32">Deploy: via CloudFormation StackSet</text>
<!-- Footer labels -->
<text x="250" y="478" text-anchor="middle" font-size="14" font-weight="700" fill="#4285F4">Prowler discovers</text>
<text x="250" y="496" text-anchor="middle" font-size="12" fill="#5f6368">your org structure</text>
<text x="850" y="478" text-anchor="middle" font-size="14" font-weight="700" fill="#00BFA5">Prowler scans each</text>
<text x="850" y="496" text-anchor="middle" font-size="12" fill="#5f6368">account for findings</text>
</svg>
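The organizations:* actions listed on the Management Role map one-to-one onto the AWS Organizations read APIs, which is what "discover Organization structure" means in practice. A minimal sketch of that discovery, assuming credentials for the management account (not Prowler's actual implementation; pagination omitted for brevity):

import boto3

org = boto3.client("organizations")

print(org.describe_organization()["Organization"]["Id"])  # organizations:DescribeOrganization

root_id = org.list_roots()["Roots"][0]["Id"]  # organizations:ListRoots

def walk(parent_id):
    # organizations:ListAccountsForParent
    for acct in org.list_accounts_for_parent(ParentId=parent_id)["Accounts"]:
        print(acct["Id"], acct["Name"])
    # organizations:ListOrganizationalUnitsForParent
    for ou in org.list_organizational_units_for_parent(ParentId=parent_id)["OrganizationalUnits"]:
        walk(ou["Id"])  # recurse into nested OUs

walk(root_id)

Every call here is read-only, consistent with the "no write or modify permissions" scope shown for both roles.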

Binary files not shown (sizes after change: 6.0 KiB, 554 KiB, 326 KiB, 760 KiB, 665 KiB, 568 KiB).

Some files were not shown because too many files have changed in this diff.