Compare commits

...

58 Commits

Author SHA1 Message Date
Víctor Fernández Poyatos f90b4d22b9 feat: add django command to handle stuck scans 2025-10-23 16:05:16 +02:00
Andoni Alonso e6d1b5639b chore(github): include roadmap in feature request template (#9000) 2025-10-23 15:06:34 +02:00
Alan Buscaglia b1856e42f0 chore: update changelog for release v5.13.0 (#8996) 2025-10-23 13:54:30 +02:00
Víctor Fernández Poyatos ba8dbb0d28 fix(s3): file uploading for threatscore (#8993) 2025-10-23 16:07:06 +05:45
Daniel Barranquero b436cc1cac chore(sdk): update changelog to released (#8994) 2025-10-23 15:55:50 +05:45
Josema Camacho 51baa88644 chore(api): Update changelog for API's version 1.14.0 to Prowler 5.13.0 (#8992) 2025-10-23 12:03:07 +02:00
Rubén De la Torre Vico 5098b12e97 chore(mcp): update changelog to released (#8991) 2025-10-23 11:47:58 +02:00
Daniel Barranquero 3d1e7015a6 fix(entra): value errors due to enums (#8919) 2025-10-23 11:36:51 +02:00
Alejandro Bailo 0b7f02f7e4 feat: Check Findings component (#8976)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
2025-10-23 10:38:25 +02:00
Daniel Barranquero c0396e97bf feat(docs): add new provider e2e guide (#8430)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
2025-10-23 10:09:15 +02:00
Andoni Alonso 8d4fa46038 chore: script to generate AWS accounts list from AWS Org for bulk provisioning (#8903) 2025-10-22 16:23:14 -04:00
Daniel Barranquero 4b160257b9 chore(sdk): update changelog for v5.13.0 (#8989) 2025-10-22 12:26:58 -04:00
César Arroba 6184de52d9 chore(github): fix pr merged action (#8988) 2025-10-22 18:05:31 +02:00
César Arroba fdf45ea777 chore(github): improve pr merged action (#8987) 2025-10-22 17:52:00 +02:00
César Arroba b7ce9ae5f3 chore(github): improve mcp container action (#8986) 2025-10-22 17:35:38 +02:00
César Arroba 2039a5005c chore(github): rename prepare release action (#8985) 2025-10-22 17:29:22 +02:00
César Arroba 52ed92ac6a chore(github): improve check changelog action (#8983) 2025-10-22 17:17:22 +02:00
César Arroba f5cccecac6 chore(github): improve prepare release action (#8981) 2025-10-22 17:02:51 +02:00
César Arroba a47f6444f8 chore(github): improve conflicts checker action (#8980) 2025-10-22 16:45:38 +02:00
lydiavilchez f8c8dee2b3 feat(gcp): add cloudstorage_bucket_lifecycle_management_enabled check (#8936)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-22 16:45:26 +02:00
Andoni Alonso 6656629391 docs: include docker platform warning in App installation too (#8979) 2025-10-22 16:07:28 +02:00
Pedro Martín 9f372902ad feat(threatscore): support compliance pdf reporting (#8867)
Co-authored-by: Sergio Garcia <hello@mistercloudsec.com>
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
Co-authored-by: Víctor Fernández Poyatos <victor@prowler.com>
2025-10-22 15:59:56 +02:00
Alan Buscaglia b4ff1dcc75 refactor(graphs): graph components kebab case (#8966)
Co-authored-by: alejandrobailo <alejandrobailo94@gmail.com>
2025-10-22 15:51:43 +02:00
César Arroba f596907223 chore(github): improve labeler action (#8978) 2025-10-22 12:50:19 +02:00
César Arroba fe768c0a3e chore(github): improve trufflehog action (#8977) 2025-10-22 12:39:39 +02:00
César Arroba 18f3bc098c chore(github): trigger only if repository is prowler (#8974) 2025-10-22 09:27:33 +02:00
César Arroba 67b1983d85 chore(github): fix action (#8973) 2025-10-22 09:10:47 +02:00
César Arroba a3db23af7d chore(github): improve conventional commits action (#8969) 2025-10-21 17:57:29 +02:00
César Arroba 3eaa21f06f chore(github): improve backport label action (#8970) 2025-10-21 17:57:04 +02:00
Rubén De la Torre Vico 5d5c109067 chore(aws): enhance metadata for dlm service (#8860)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-21 17:40:19 +02:00
César Arroba c6cb4e4814 chore(github): improve backport action (#8968) 2025-10-21 17:14:40 +02:00
César Arroba ab06a09173 chore(api): improve pull request action (#8963) 2025-10-21 17:10:48 +02:00
Rubén De la Torre Vico 9c6c007f73 fix(mcp): add missing argument to health check (#8967) 2025-10-21 16:45:05 +02:00
Rubén De la Torre Vico 206f23b5a5 chore(aws): enhance metadata for dms service (#8861)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-21 16:31:18 +02:00
Andoni Alonso 5c9e9bc86a docs: fix security heading (#8965) 2025-10-21 16:13:55 +02:00
Rubén De la Torre Vico 34554d6123 feat(mcp): add support for production deployment with uvicorn (#8958) 2025-10-21 16:03:24 +02:00
Pepe Fagoaga 000cb93157 chore: remove security template as it's already there (#8964) 2025-10-21 19:34:42 +05:45
Adrián Jesús Peña Rodríguez 524209bdf2 feat(api): add provider_id__in filter for ScanSummary queries (#8951) 2025-10-21 15:24:09 +02:00
César Arroba c4a0da8204 chore(github): review and update issue templates (#8961) 2025-10-21 13:40:25 +02:00
César Arroba f0cba0321c chore(codeql): improve API CodeQL action and settings (#8962) 2025-10-21 13:40:07 +02:00
dependabot[bot] 79888c9312 chore(deps): bump playwright and @playwright/test in /ui (#8956)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-21 13:22:21 +02:00
Rubén De la Torre Vico a79910a694 chore(aws): enhance metadata for cloudtrail service (#8831)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-10-21 12:45:31 +02:00
César Arroba 4cadee7bb1 chore(github): update codeowners file (#8960) 2025-10-21 11:48:21 +02:00
Pedro Martín 756d436a2f feat(compliance): improve CCC catalogs (#8944) 2025-10-21 03:16:05 +02:00
Alejandro Bailo 5e85ef5835 feat(ui): new card components and derivates for overview (#8921)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
2025-10-20 16:49:09 +02:00
Prowler Bot 0fa9e2da6c chore(regions_update): Changes in regions for AWS services (#8946)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-10-20 09:20:29 -04:00
Andoni Alonso ce7510db28 docs: remove anchors from redirects (#8953) 2025-10-20 14:58:53 +02:00
Pepe Fagoaga 8e3d50c807 fix(docs): redirect user-guide-tutorials (#8945) 2025-10-20 14:51:15 +02:00
Pepe Fagoaga d8908d2ccc docs(fix): space in providers table (#8938) 2025-10-20 14:39:03 +02:00
Alejandro Bailo 0b9969a723 feat: update M365 credentials form (#8929)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-10-20 13:51:11 +02:00
StylusFrost 985d73f44f test(ui): enhance Playwright test setups for user authentication (#8881)
Co-authored-by: Alejandro Bailo <59607668+alejandrobailo@users.noreply.github.com>
2025-10-20 13:45:20 +02:00
Pedro Martín 1d705e22da feat(util): add from_yaml_to_json.py (#8943) 2025-10-20 12:29:29 +02:00
Rubén De la Torre Vico ca55d4ce86 chore(aws): enhance metadata for directoryservice service (#8859)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-20 12:20:16 +02:00
Hugo Pereira Brito 0201073fcb fix(docs): small enhancement in warning (#8950) 2025-10-20 12:19:49 +02:00
Alejandro Bailo 928c556721 fix: Mutelist view blinks at opening (#8932) 2025-10-17 19:26:57 +02:00
Rubén De la Torre Vico a653ad7852 chore(deps): remove docs group dependency (#8937) 2025-10-17 16:37:32 +02:00
Sergio Garcia a3c811f801 docs(github): clarify GitHub App configuration requirements (#8930) 2025-10-17 09:30:54 -04:00
Hugo Pereira Brito c85d3e9188 feat(docs): add M365 certificate and azure cli authentication methods (#8939) 2025-10-17 13:42:48 +02:00
204 changed files with 18,479 additions and 9,951 deletions
+27 -5
@@ -1,6 +1,28 @@
# SDK
/* @prowler-cloud/sdk
/.github/ @prowler-cloud/sdk
prowler @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
tests @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
api @prowler-cloud/api
ui @prowler-cloud/ui
/prowler/ @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
/tests/ @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
/dashboard/ @prowler-cloud/sdk
/docs/ @prowler-cloud/sdk
/examples/ @prowler-cloud/sdk
/util/ @prowler-cloud/sdk
/contrib/ @prowler-cloud/sdk
/permissions/ @prowler-cloud/sdk
/codecov.yml @prowler-cloud/sdk @prowler-cloud/api
# API
/api/ @prowler-cloud/api
# UI
/ui/ @prowler-cloud/ui
# AI
/mcp_server/ @prowler-cloud/ai
# Platform
/.github/ @prowler-cloud/platform
/Makefile @prowler-cloud/platform
/kubernetes/ @prowler-cloud/platform
**/Dockerfile* @prowler-cloud/platform
**/docker-compose*.yml @prowler-cloud/platform
**/docker-compose*.yaml @prowler-cloud/platform
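
CODEOWNERS resolves ownership by the last matching pattern, so the ordering of the sections above matters. A hypothetical validation step (not part of this PR; assumes the `gh` CLI and GitHub's CODEOWNERS errors endpoint):

```yaml
# Hypothetical check: surface CODEOWNERS syntax errors via the REST API
- name: Validate CODEOWNERS
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: gh api "repos/${{ github.repository }}/codeowners/errors" --jq '.errors'
```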
+44
@@ -3,6 +3,41 @@ description: Create a report to help us improve
labels: ["bug", "status/needs-triage"]
body:
- type: checkboxes
id: search
attributes:
label: Issue search
options:
- label: I have searched the existing issues and this bug has not been reported yet
required: true
- type: dropdown
id: component
attributes:
label: Which component is affected?
multiple: true
options:
- Prowler CLI/SDK
- Prowler API
- Prowler UI
- Prowler Dashboard
- Prowler MCP Server
- Documentation
- Other
validations:
required: true
- type: dropdown
id: provider
attributes:
label: Cloud Provider (if applicable)
multiple: true
options:
- AWS
- Azure
- GCP
- Kubernetes
- GitHub
- Microsoft 365
- Not applicable
- type: textarea
id: reproduce
attributes:
@@ -78,6 +113,15 @@ body:
prowler --version
validations:
required: true
- type: input
id: python-version
attributes:
label: Python version
description: Which Python version are you using?
placeholder: |-
python --version
validations:
required: true
- type: input
id: pip-version
attributes:
+10
@@ -1 +1,11 @@
blank_issues_enabled: false
contact_links:
- name: 📖 Documentation
url: https://docs.prowler.com
about: Check our comprehensive documentation for guides and tutorials
- name: 💬 GitHub Discussions
url: https://github.com/prowler-cloud/prowler/discussions
about: Ask questions and discuss with the community
- name: 🌟 Prowler Community
url: https://goto.prowler.com/slack
about: Join our community for support and updates
@@ -3,6 +3,42 @@ description: Suggest an idea for this project
labels: ["feature-request", "status/needs-triage"]
body:
- type: checkboxes
id: search
attributes:
label: Feature search
options:
- label: I have searched the existing issues and this feature has not been requested yet or is already in our [Public Roadmap](https://roadmap.prowler.com/roadmap)
required: true
- type: dropdown
id: component
attributes:
label: Which component would this feature affect?
multiple: true
options:
- Prowler CLI/SDK
- Prowler API
- Prowler UI
- Prowler Dashboard
- Prowler MCP Server
- Documentation
- New component/Integration
validations:
required: true
- type: dropdown
id: provider
attributes:
label: Related to specific cloud provider?
multiple: true
options:
- AWS
- Azure
- GCP
- Kubernetes
- GitHub
- Microsoft 365
- All providers
- Not provider-specific
- type: textarea
id: Problem
attributes:
@@ -19,6 +55,14 @@ body:
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: use-case
attributes:
label: Use case and benefits
description: Who would benefit from this feature and how?
placeholder: This would help security teams by...
validations:
required: true
- type: textarea
id: Alternatives
attributes:
@@ -0,0 +1,71 @@
name: 'Setup Python with Poetry'
description: 'Setup Python environment with Poetry and install dependencies'
author: 'Prowler'
inputs:
python-version:
description: 'Python version to use'
required: true
working-directory:
description: 'Working directory for Poetry'
required: false
default: '.'
poetry-version:
description: 'Poetry version to install'
required: false
default: '2.1.1'
install-dependencies:
description: 'Install Python dependencies with Poetry'
required: false
default: 'true'
runs:
using: 'composite'
steps:
- name: Replace @master with current branch in pyproject.toml
if: github.event_name == 'pull_request' && github.base_ref == 'master'
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
BRANCH_NAME="${GITHUB_HEAD_REF:-${GITHUB_REF_NAME}}"
echo "Using branch: $BRANCH_NAME"
sed -i "s|@master|@$BRANCH_NAME|g" pyproject.toml
- name: Install poetry
shell: bash
run: |
python -m pip install --upgrade pip
pipx install poetry==${{ inputs.poetry-version }}
- name: Update SDK resolved_reference to latest commit
if: github.event_name == 'push' && github.ref == 'refs/heads/master'
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
LATEST_COMMIT=$(curl -s "https://api.github.com/repos/prowler-cloud/prowler/commits/master" | jq -r '.sha')
echo "Latest commit hash: $LATEST_COMMIT"
sed -i '/url = "https:\/\/github\.com\/prowler-cloud\/prowler\.git"/,/resolved_reference = / {
s/resolved_reference = "[a-f0-9]\{40\}"/resolved_reference = "'"$LATEST_COMMIT"'"/
}' poetry.lock
echo "Updated resolved_reference:"
grep -A2 -B2 "resolved_reference" poetry.lock
- name: Update poetry.lock
shell: bash
working-directory: ${{ inputs.working-directory }}
run: poetry lock
- name: Set up Python ${{ inputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ inputs.python-version }}
cache: 'poetry'
cache-dependency-path: ${{ inputs.working-directory }}/poetry.lock
- name: Install Python dependencies
if: inputs.install-dependencies == 'true'
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --no-root
poetry run pip list
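
This composite action is consumed later in this same compare by the API pull-request workflow; a minimal call site looks like the following sketch (the `@v5` tag is illustrative only, the workflows in this PR pin actions by SHA):

```yaml
steps:
  - name: Checkout repository
    uses: actions/checkout@v5
  - name: Setup Python with Poetry
    uses: ./.github/actions/setup-python-poetry
    with:
      python-version: '3.12'
      working-directory: ./api
```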
+152
@@ -0,0 +1,152 @@
name: 'Container Security Scan with Trivy'
description: 'Scans container images for vulnerabilities using Trivy and reports results'
author: 'Prowler'
inputs:
image-name:
description: 'Container image name to scan'
required: true
image-tag:
description: 'Container image tag to scan'
required: true
default: ${{ github.sha }}
severity:
description: 'Severities to scan for (comma-separated)'
required: false
default: 'CRITICAL,HIGH,MEDIUM,LOW'
fail-on-critical:
description: 'Fail the build if critical vulnerabilities are found'
required: false
default: 'false'
upload-sarif:
description: 'Upload results to GitHub Security tab'
required: false
default: 'true'
create-pr-comment:
description: 'Create a comment on the PR with scan results'
required: false
default: 'true'
artifact-retention-days:
description: 'Days to retain the Trivy report artifact'
required: false
default: '2'
outputs:
critical-count:
description: 'Number of critical vulnerabilities found'
value: ${{ steps.security-check.outputs.critical }}
high-count:
description: 'Number of high vulnerabilities found'
value: ${{ steps.security-check.outputs.high }}
total-count:
description: 'Total number of vulnerabilities found'
value: ${{ steps.security-check.outputs.total }}
runs:
using: 'composite'
steps:
- name: Run Trivy vulnerability scan (SARIF)
if: inputs.upload-sarif == 'true'
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # v0.33.1
with:
image-ref: ${{ inputs.image-name }}:${{ inputs.image-tag }}
format: 'sarif'
output: 'trivy-results.sarif'
severity: 'CRITICAL,HIGH'
exit-code: '0'
- name: Upload Trivy results to GitHub Security tab
if: inputs.upload-sarif == 'true'
uses: github/codeql-action/upload-sarif@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
sarif_file: 'trivy-results.sarif'
category: 'trivy-container'
- name: Run Trivy vulnerability scan (JSON)
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # v0.33.1
with:
image-ref: ${{ inputs.image-name }}:${{ inputs.image-tag }}
format: 'json'
output: 'trivy-report.json'
severity: ${{ inputs.severity }}
exit-code: '0'
- name: Upload Trivy report artifact
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: always()
with:
name: trivy-scan-report-${{ inputs.image-name }}
path: trivy-report.json
retention-days: ${{ inputs.artifact-retention-days }}
- name: Generate security summary
id: security-check
shell: bash
run: |
CRITICAL=$(jq '[.Results[]?.Vulnerabilities[]? | select(.Severity=="CRITICAL")] | length' trivy-report.json)
HIGH=$(jq '[.Results[]?.Vulnerabilities[]? | select(.Severity=="HIGH")] | length' trivy-report.json)
TOTAL=$(jq '[.Results[]?.Vulnerabilities[]?] | length' trivy-report.json)
echo "critical=$CRITICAL" >> $GITHUB_OUTPUT
echo "high=$HIGH" >> $GITHUB_OUTPUT
echo "total=$TOTAL" >> $GITHUB_OUTPUT
echo "### 🔒 Container Security Scan" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Image:** \`${{ inputs.image-name }}:${{ inputs.image-tag }}\`" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "- 🔴 Critical: $CRITICAL" >> $GITHUB_STEP_SUMMARY
echo "- 🟠 High: $HIGH" >> $GITHUB_STEP_SUMMARY
echo "- **Total**: $TOTAL" >> $GITHUB_STEP_SUMMARY
- name: Comment scan results on PR
if: inputs.create-pr-comment == 'true' && github.event_name == 'pull_request'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
env:
IMAGE_NAME: ${{ inputs.image-name }}
GITHUB_SHA: ${{ inputs.image-tag }}
SEVERITY: ${{ inputs.severity }}
with:
script: |
const comment = require('./.github/scripts/trivy-pr-comment.js');
// Unique identifier to find our comment
const marker = '<!-- trivy-scan-comment:${{ inputs.image-name }} -->';
const body = marker + '\n' + comment;
// Find existing comment
const { data: comments } = await github.rest.issues.listComments({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
});
const existingComment = comments.find(c => c.body?.includes(marker));
if (existingComment) {
// Update existing comment
await github.rest.issues.updateComment({
owner: context.repo.owner,
repo: context.repo.repo,
comment_id: existingComment.id,
body: body
});
console.log('✅ Updated existing Trivy scan comment');
} else {
// Create new comment
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: body
});
console.log('✅ Created new Trivy scan comment');
}
- name: Check for critical vulnerabilities
if: inputs.fail-on-critical == 'true' && steps.security-check.outputs.critical != '0'
shell: bash
run: |
echo "::error::Found ${{ steps.security-check.outputs.critical }} critical vulnerabilities"
echo "::warning::Please update packages or use a different base image"
exit 1
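
A minimal call site for this action, matching how the API workflow later in this compare invokes it after building the image:

```yaml
- name: Scan container with Trivy
  uses: ./.github/actions/trivy-scan
  with:
    image-name: prowler-api          # image built and loaded in a prior step
    image-tag: ${{ github.sha }}
    fail-on-critical: 'false'
    severity: 'CRITICAL'
```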
+11 -2
@@ -1,3 +1,12 @@
name: "API - CodeQL Config"
name: 'API: CodeQL Config'
paths:
- "api/"
- 'api/'
paths-ignore:
- 'api/tests/**'
- 'api/**/__pycache__/**'
- 'api/**/migrations/**'
- 'api/**/*.md'
queries:
- uses: security-and-quality
+102
@@ -0,0 +1,102 @@
const fs = require('fs');
// Configuration from environment variables
const REPORT_FILE = process.env.TRIVY_REPORT_FILE || 'trivy-report.json';
const IMAGE_NAME = process.env.IMAGE_NAME || 'container-image';
const GITHUB_SHA = process.env.GITHUB_SHA || 'unknown';
const GITHUB_REPOSITORY = process.env.GITHUB_REPOSITORY || '';
const GITHUB_RUN_ID = process.env.GITHUB_RUN_ID || '';
const SEVERITY = process.env.SEVERITY || 'CRITICAL,HIGH,MEDIUM,LOW';
// Parse severities to scan
const scannedSeverities = SEVERITY.split(',').map(s => s.trim());
// Read and parse the Trivy report
const report = JSON.parse(fs.readFileSync(REPORT_FILE, 'utf-8'));
let vulnCount = 0;
let vulnsByType = { CRITICAL: 0, HIGH: 0, MEDIUM: 0, LOW: 0 };
let affectedPackages = new Set();
if (report.Results && Array.isArray(report.Results)) {
for (const result of report.Results) {
if (result.Vulnerabilities && Array.isArray(result.Vulnerabilities)) {
for (const vuln of result.Vulnerabilities) {
vulnCount++;
if (vulnsByType[vuln.Severity] !== undefined) {
vulnsByType[vuln.Severity]++;
}
if (vuln.PkgName) {
affectedPackages.add(vuln.PkgName);
}
}
}
}
}
const shortSha = GITHUB_SHA.substring(0, 7);
const timestamp = new Date().toISOString().replace('T', ' ').substring(0, 19) + ' UTC';
// Severity icons and labels
const severityConfig = {
CRITICAL: { icon: '🔴', label: 'Critical' },
HIGH: { icon: '🟠', label: 'High' },
MEDIUM: { icon: '🟡', label: 'Medium' },
LOW: { icon: '🔵', label: 'Low' }
};
let comment = '## 🔒 Container Security Scan\n\n';
comment += `**Image:** \`${IMAGE_NAME}:${shortSha}\`\n`;
comment += `**Last scan:** ${timestamp}\n\n`;
if (vulnCount === 0) {
comment += '### ✅ No Vulnerabilities Detected\n\n';
comment += 'The container image passed all security checks. No known CVEs were found.\n';
} else {
comment += '### 📊 Vulnerability Summary\n\n';
comment += '| Severity | Count |\n';
comment += '|----------|-------|\n';
// Only show severities that were scanned
for (const severity of scannedSeverities) {
const config = severityConfig[severity];
const count = vulnsByType[severity] || 0;
const isBold = (severity === 'CRITICAL' || severity === 'HIGH') && count > 0;
const countDisplay = isBold ? `**${count}**` : count;
comment += `| ${config.icon} ${config.label} | ${countDisplay} |\n`;
}
comment += `| **Total** | **${vulnCount}** |\n\n`;
if (affectedPackages.size > 0) {
comment += `**${affectedPackages.size}** package(s) affected\n\n`;
}
if (vulnsByType.CRITICAL > 0) {
comment += '### ⚠️ Action Required\n\n';
comment += '**Critical severity vulnerabilities detected.** These should be addressed before merging:\n';
comment += '- Review the detailed scan results\n';
comment += '- Update affected packages to patched versions\n';
comment += '- Consider using a different base image if updates are unavailable\n\n';
} else if (vulnsByType.HIGH > 0) {
comment += '### ⚠️ Attention Needed\n\n';
comment += '**High severity vulnerabilities found.** Please review and plan remediation:\n';
comment += '- Assess the risk and exploitability\n';
comment += '- Prioritize updates in the next maintenance cycle\n\n';
} else {
comment += '### ℹ️ Review Recommended\n\n';
comment += 'Medium/Low severity vulnerabilities found. Consider addressing during regular maintenance.\n\n';
}
}
comment += '---\n';
comment += '📋 **Resources:**\n';
if (GITHUB_REPOSITORY && GITHUB_RUN_ID) {
comment += `- [Download full report](https://github.com/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}) (see artifacts)\n`;
}
comment += '- [View in Security tab](https://github.com/' + (GITHUB_REPOSITORY || 'repository') + '/security/code-scanning)\n';
comment += '- Scanned with [Trivy](https://github.com/aquasecurity/trivy)\n';
module.exports = comment;
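
Since the script exports the finished Markdown string through `module.exports`, it can be previewed outside the `github-script` step. A hypothetical smoke-test step (assumes Node.js and a `trivy-report.json` in the working directory):

```yaml
# Hypothetical preview step, not part of this PR
- name: Preview Trivy PR comment
  env:
    TRIVY_REPORT_FILE: trivy-report.json
    IMAGE_NAME: prowler-api
  run: node -e "console.log(require('./.github/scripts/trivy-pr-comment.js'))"
```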
+30 -33
@@ -1,36 +1,34 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: API - CodeQL
name: 'API: CodeQL'
on:
push:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
paths:
- "api/**"
- 'api/**'
- '.github/workflows/api-codeql.yml'
- '.github/codeql/api-codeql-config.yml'
pull_request:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
paths:
- "api/**"
- 'api/**'
- '.github/workflows/api-codeql.yml'
- '.github/codeql/api-codeql-config.yml'
schedule:
- cron: '00 12 * * *'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
analyze:
name: Analyze
name: CodeQL Security Analysis
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
actions: read
contents: read
@@ -39,21 +37,20 @@ jobs:
strategy:
fail-fast: false
matrix:
language: [ 'python' ]
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
language:
- 'python'
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/api-codeql-config.yml
- name: Initialize CodeQL
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/api-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
category: "/language:${{matrix.language}}"
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
category: '/language:${{ matrix.language }}'
+148 -153
@@ -1,20 +1,30 @@
name: API - Pull Request
name: 'API: Pull Request'
on:
push:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
paths:
- ".github/workflows/api-pull-request.yml"
- "api/**"
- '.github/workflows/api-pull-request.yml'
- 'api/**'
- '!api/docs/**'
- '!api/README.md'
- '!api/CHANGELOG.md'
pull_request:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
paths:
- ".github/workflows/api-pull-request.yml"
- "api/**"
- '.github/workflows/api-pull-request.yml'
- 'api/**'
- '!api/docs/**'
- '!api/README.md'
- '!api/CHANGELOG.md'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
POSTGRES_HOST: localhost
@@ -29,21 +39,94 @@ env:
VALKEY_DB: 0
API_WORKING_DIR: ./api
IMAGE_NAME: prowler-api
IGNORE_FILES: |
api/docs/**
api/README.md
api/CHANGELOG.md
jobs:
test:
code-quality:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
strategy:
matrix:
python-version: ["3.12"]
python-version:
- '3.12'
defaults:
run:
working-directory: ./api
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup Python with Poetry
uses: ./.github/actions/setup-python-poetry
with:
python-version: ${{ matrix.python-version }}
working-directory: ./api
- name: Poetry check
run: poetry check --lock
- name: Ruff lint
run: poetry run ruff check . --exclude contrib
- name: Ruff format
run: poetry run ruff format --check . --exclude contrib
- name: Pylint
run: poetry run pylint --disable=W,C,R,E -j 0 -rn -sn src/
security-scans:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
strategy:
matrix:
python-version:
- '3.12'
defaults:
run:
working-directory: ./api
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup Python with Poetry
uses: ./.github/actions/setup-python-poetry
with:
python-version: ${{ matrix.python-version }}
working-directory: ./api
- name: Bandit
run: poetry run bandit -q -lll -x '*_test.py,./contrib/' -r .
- name: Safety
# 76352, 76353, 77323 come from the SDK, which cannot be upgraded yet; they do not affect the API
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
run: poetry run safety check --ignore 70612,66963,74429,76352,76353,77323,77744,77745
- name: Vulture
run: poetry run vulture --exclude "contrib,tests,conftest.py" --min-confidence 100 .
tests:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
strategy:
matrix:
python-version:
- '3.12'
defaults:
run:
working-directory: ./api
# Service containers to run with `test`
services:
# Label used to access the service container
postgres:
image: postgres
env:
@@ -52,7 +135,6 @@ jobs:
POSTGRES_USER: ${{ env.POSTGRES_USER }}
POSTGRES_PASSWORD: ${{ env.POSTGRES_PASSWORD }}
POSTGRES_DB: ${{ env.POSTGRES_DB }}
# Set health checks to wait until postgres has started
ports:
- 5432:5432
options: >-
@@ -66,7 +148,6 @@ jobs:
VALKEY_HOST: ${{ env.VALKEY_HOST }}
VALKEY_PORT: ${{ env.VALKEY_PORT }}
VALKEY_DB: ${{ env.VALKEY_DB }}
# Set health checks to wait until postgres has started
ports:
- 6379:6379
options: >-
@@ -76,158 +157,72 @@ jobs:
--health-retries 5
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
api/**
.github/workflows/api-pull-request.yml
files_ignore: ${{ env.IGNORE_FILES }}
- name: Replace @master with current branch in pyproject.toml - Only for pull requests to `master`
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true' && github.event_name == 'pull_request' && github.base_ref == 'master'
run: |
BRANCH_NAME="${GITHUB_HEAD_REF:-${GITHUB_REF_NAME}}"
echo "Using branch: $BRANCH_NAME"
sed -i "s|@master|@$BRANCH_NAME|g" pyproject.toml
- name: Install poetry
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
python -m pip install --upgrade pip
pipx install poetry==2.1.1
- name: Update SDK's poetry.lock resolved_reference to latest commit - Only for push events to `master`
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true' && github.event_name == 'push' && github.ref == 'refs/heads/master'
run: |
# Get the latest commit hash from the prowler-cloud/prowler repository
LATEST_COMMIT=$(curl -s "https://api.github.com/repos/prowler-cloud/prowler/commits/master" | jq -r '.sha')
echo "Latest commit hash: $LATEST_COMMIT"
# Update the resolved_reference specifically for prowler-cloud/prowler repository
sed -i '/url = "https:\/\/github\.com\/prowler-cloud\/prowler\.git"/,/resolved_reference = / {
s/resolved_reference = "[a-f0-9]\{40\}"/resolved_reference = "'"$LATEST_COMMIT"'"/
}' poetry.lock
# Verify the change was made
echo "Updated resolved_reference:"
grep -A2 -B2 "resolved_reference" poetry.lock
- name: Update poetry.lock
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry lock
- name: Set up Python ${{ matrix.python-version }}
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
- name: Setup Python with Poetry
uses: ./.github/actions/setup-python-poetry
with:
python-version: ${{ matrix.python-version }}
cache: "poetry"
working-directory: ./api
- name: Install dependencies
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry install --no-root
poetry run pip list
VERSION=$(curl --silent "https://api.github.com/repos/hadolint/hadolint/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"v([^"]+)".*/\1/' \
) && curl -L -o /tmp/hadolint "https://github.com/hadolint/hadolint/releases/download/v${VERSION}/hadolint-Linux-x86_64" \
&& chmod +x /tmp/hadolint
- name: Poetry check
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry check --lock
- name: Lint with ruff
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run ruff check . --exclude contrib
- name: Check Format with ruff
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run ruff format --check . --exclude contrib
- name: Lint with pylint
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run pylint --disable=W,C,R,E -j 0 -rn -sn src/
- name: Bandit
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run bandit -q -lll -x '*_test.py,./contrib/' -r .
- name: Safety
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
# 76352, 76353, 77323 come from the SDK, which cannot be upgraded yet; they do not affect the API
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
run: |
poetry run safety check --ignore 70612,66963,74429,76352,76353,77323,77744,77745
- name: Vulture
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run vulture --exclude "contrib,tests,conftest.py" --min-confidence 100 .
- name: Hadolint
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
/tmp/hadolint Dockerfile --ignore=DL3013
- name: Test with pytest
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run pytest --cov=./src/backend --cov-report=xml src/backend
- name: Run tests with pytest
run: poetry run pytest --cov=./src/backend --cov-report=xml src/backend
- name: Upload coverage reports to Codecov
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
with:
flags: api
test-container-build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
dockerfile-lint:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Lint Dockerfile with Hadolint
uses: hadolint/hadolint-action@2332a7b74a6de0dda2e2221d575162eba76ba5e5 # v3.3.0
with:
files: api/**
files_ignore: ${{ env.IGNORE_FILES }}
dockerfile: api/Dockerfile
ignore: DL3013
container-build-and-scan:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
security-events: write
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Docker Buildx
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build Container
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
- name: Build container
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.API_WORKING_DIR }}
push: false
tags: ${{ env.IMAGE_NAME }}:latest
outputs: type=docker
load: true
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Scan container with Trivy
uses: ./.github/actions/trivy-scan
with:
image-name: ${{ env.IMAGE_NAME }}
image-tag: ${{ github.sha }}
fail-on-critical: 'false'
severity: 'CRITICAL'
+22 -15
@@ -1,28 +1,35 @@
name: Prowler - Automatic Backport
name: 'Tools: Backport'
on:
pull_request_target:
branches: ['master']
types: ['labeled', 'closed']
branches:
- 'master'
types:
- 'labeled'
- 'closed'
paths:
- '.github/workflows/backport.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: false
env:
# The prefix of the label that triggers the backport must not contain the branch name
# so, for example, if the branch is 'master', the label should be 'backport-to-<branch>'
BACKPORT_LABEL_PREFIX: backport-to-
BACKPORT_LABEL_IGNORE: was-backported
jobs:
backport:
name: Backport PR
if: github.event.pull_request.merged == true && !(contains(github.event.pull_request.labels.*.name, 'backport')) && !(contains(github.event.pull_request.labels.*.name, 'was-backported'))
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
id-token: write
pull-requests: write
contents: write
pull-requests: write
steps:
- name: Check labels
id: preview_label_check
id: label_check
uses: agilepathway/label-checker@c3d16ad512e7cea5961df85ff2486bb774caf3c5 # v1.6.65
with:
allow_failure: true
@@ -31,17 +38,17 @@ jobs:
none_of: ${{ env.BACKPORT_LABEL_IGNORE }}
repo_token: ${{ secrets.GITHUB_TOKEN }}
- name: Backport Action
if: steps.preview_label_check.outputs.label_check == 'success'
- name: Backport PR
if: steps.label_check.outputs.label_check == 'success'
uses: sorenlouv/backport-github-action@ad888e978060bc1b2798690dd9d03c4036560947 # v9.5.1
with:
github_token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
auto_backport_label_prefix: ${{ env.BACKPORT_LABEL_PREFIX }}
- name: Info log
if: ${{ success() && steps.preview_label_check.outputs.label_check == 'success' }}
- name: Display backport info log
if: success() && steps.label_check.outputs.label_check == 'success'
run: cat ~/.backport/backport.info.log
- name: Debug log
if: ${{ failure() && steps.preview_label_check.outputs.label_check == 'success' }}
- name: Display backport debug log
if: failure() && steps.label_check.outputs.label_check == 'success'
run: cat ~/.backport/backport.debug.log
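
With `BACKPORT_LABEL_PREFIX: backport-to-`, the action is driven entirely by labels on the merged PR. A hypothetical trigger (PR number and branch invented for illustration):

```yaml
# Hypothetical: labeling PR 1234 makes the action open a backport PR against v5.13
- run: gh pr edit 1234 --add-label 'backport-to-v5.13' --repo "$GITHUB_REPOSITORY"
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```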
+20 -13
@@ -1,24 +1,31 @@
name: Prowler - Conventional Commit
name: 'Tools: Conventional Commit'
on:
pull_request:
types:
- "opened"
- "edited"
- "synchronize"
branches:
- "master"
- "v3"
- "v4.*"
- "v5.*"
- 'master'
- 'v3'
- 'v4.*'
- 'v5.*'
types:
- 'opened'
- 'edited'
- 'synchronize'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
conventional-commit-check:
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: read
steps:
- name: conventional-commit-check
id: conventional-commit-check
- name: Check PR title format
uses: agenthunt/conventional-commit-checker-action@9e552d650d0e205553ec7792d447929fc78e012b # v2.0.0
with:
pr-title-regex: '^(feat|fix|docs|style|refactor|perf|test|chore|build|ci|revert)(\([^)]+\))?!?: .+'
pr-title-regex: '^(feat|fix|docs|style|refactor|perf|test|chore|build|ci|revert)(\([^)]+\))?!?: .+'
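
Titles accepted by this regex include scoped, unscoped, and breaking-change forms. A quick illustrative self-check against the same pattern:

```yaml
- name: Demo of the title regex (illustrative only)
  run: |
    REGEX='^(feat|fix|docs|style|refactor|perf|test|chore|build|ci|revert)(\([^)]+\))?!?: .+'
    for title in 'feat(api): add filter' 'fix: typo' 'chore(deps)!: bump major'; do
      echo "$title" | grep -qE "$REGEX" && echo "OK: $title"
    done
```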
+49 -46
@@ -1,67 +1,70 @@
name: Prowler - Create Backport Label
name: 'Tools: Backport Label'
on:
release:
types: [published]
types:
- 'published'
concurrency:
group: ${{ github.workflow }}-${{ github.event.release.tag_name }}
cancel-in-progress: false
env:
BACKPORT_LABEL_PREFIX: backport-to-
BACKPORT_LABEL_COLOR: B60205
jobs:
create_label:
create-label:
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: write
contents: read
issues: write
steps:
- name: Create backport label
- name: Create backport label for minor releases
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
RELEASE_TAG: ${{ github.event.release.tag_name }}
OWNER_REPO: ${{ github.repository }}
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
VERSION_ONLY=${RELEASE_TAG#v} # Remove 'v' prefix if present (e.g., v3.2.0 -> 3.2.0)
RELEASE_TAG="${{ github.event.release.tag_name }}"
if [ -z "$RELEASE_TAG" ]; then
echo "Error: No release tag provided"
exit 1
fi
echo "Processing release tag: $RELEASE_TAG"
# Remove 'v' prefix if present (e.g., v3.2.0 -> 3.2.0)
VERSION_ONLY="${RELEASE_TAG#v}"
# Check if it's a minor version (X.Y.0)
if [[ "$VERSION_ONLY" =~ ^[0-9]+\.[0-9]+\.0$ ]]; then
echo "Release ${RELEASE_TAG} (version ${VERSION_ONLY}) is a minor version. Proceeding to create backport label."
if [[ "$VERSION_ONLY" =~ ^([0-9]+)\.([0-9]+)\.0$ ]]; then
echo "Release $RELEASE_TAG (version $VERSION_ONLY) is a minor version. Proceeding to create backport label."
TWO_DIGIT_VERSION=${VERSION_ONLY%.0} # Extract X.Y from X.Y.0 (e.g., 5.6 from 5.6.0)
# Extract X.Y from X.Y.0 (e.g., 5.6 from 5.6.0)
MAJOR="${BASH_REMATCH[1]}"
MINOR="${BASH_REMATCH[2]}"
TWO_DIGIT_VERSION="${MAJOR}.${MINOR}"
FINAL_LABEL_NAME="backport-to-v${TWO_DIGIT_VERSION}"
FINAL_DESCRIPTION="Backport PR to the v${TWO_DIGIT_VERSION} branch"
LABEL_NAME="${BACKPORT_LABEL_PREFIX}v${TWO_DIGIT_VERSION}"
LABEL_DESC="Backport PR to the v${TWO_DIGIT_VERSION} branch"
LABEL_COLOR="$BACKPORT_LABEL_COLOR"
echo "Effective label name will be: ${FINAL_LABEL_NAME}"
echo "Effective description will be: ${FINAL_DESCRIPTION}"
echo "Label name: $LABEL_NAME"
echo "Label description: $LABEL_DESC"
# Check if the label already exists
STATUS_CODE=$(curl -s -o /dev/null -w "%{http_code}" -H "Authorization: token ${GITHUB_TOKEN}" "https://api.github.com/repos/${OWNER_REPO}/labels/${FINAL_LABEL_NAME}")
if [ "${STATUS_CODE}" -eq 200 ]; then
echo "Label '${FINAL_LABEL_NAME}' already exists."
elif [ "${STATUS_CODE}" -eq 404 ]; then
echo "Label '${FINAL_LABEL_NAME}' does not exist. Creating it..."
# Prepare JSON data payload
JSON_DATA=$(printf '{"name":"%s","description":"%s","color":"B60205"}' "${FINAL_LABEL_NAME}" "${FINAL_DESCRIPTION}")
CREATE_STATUS_CODE=$(curl -s -o /tmp/curl_create_response.json -w "%{http_code}" -X POST \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token ${GITHUB_TOKEN}" \
--data "${JSON_DATA}" \
"https://api.github.com/repos/${OWNER_REPO}/labels")
CREATE_RESPONSE_BODY=$(cat /tmp/curl_create_response.json)
rm -f /tmp/curl_create_response.json
if [ "$CREATE_STATUS_CODE" -eq 201 ]; then
echo "Label '${FINAL_LABEL_NAME}' created successfully."
else
echo "Error creating label '${FINAL_LABEL_NAME}'. Status: $CREATE_STATUS_CODE"
echo "Response: $CREATE_RESPONSE_BODY"
exit 1
fi
# Check if label already exists
if gh label list --repo ${{ github.repository }} --limit 1000 | grep -q "^${LABEL_NAME}[[:space:]]"; then
echo "Label '$LABEL_NAME' already exists."
else
echo "Error checking for label '${FINAL_LABEL_NAME}'. HTTP Status: ${STATUS_CODE}"
exit 1
echo "Label '$LABEL_NAME' does not exist. Creating it..."
gh label create "$LABEL_NAME" \
--description "$LABEL_DESC" \
--color "$LABEL_COLOR" \
--repo ${{ github.repository }}
echo "Label '$LABEL_NAME' created successfully."
fi
else
echo "Release ${RELEASE_TAG} (version ${VERSION_ONLY}) is not a minor version. Skipping backport label creation."
exit 0
echo "Release $RELEASE_TAG (version $VERSION_ONLY) is not a minor version. Skipping backport label creation."
fi
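
The `BASH_REMATCH` rewrite above can be exercised on its own; a sketch with a hypothetical tag:

```yaml
- run: |
    # Illustrative only: derive the backport label from a minor-release tag
    RELEASE_TAG='v5.13.0'
    VERSION_ONLY="${RELEASE_TAG#v}"
    if [[ "$VERSION_ONLY" =~ ^([0-9]+)\.([0-9]+)\.0$ ]]; then
      echo "backport-to-v${BASH_REMATCH[1]}.${BASH_REMATCH[2]}"  # prints backport-to-v5.13
    fi
```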
+24 -10
@@ -1,19 +1,33 @@
name: Prowler - Find secrets
name: 'Tools: TruffleHog'
on: pull_request
on:
push:
branches:
- 'master'
- 'v5.*'
pull_request:
branches:
- 'master'
- 'v5.*'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
trufflehog:
scan-secrets:
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
steps:
- name: Checkout
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
- name: TruffleHog OSS
uses: trufflesecurity/trufflehog@466da5b0bb161144f6afca9afe5d57975828c410 # v3.90.8
- name: Scan for secrets with TruffleHog
uses: trufflesecurity/trufflehog@ad6fc8fb446b8fafbf7ea8193d2d6bfd42f45690 # v3.90.11
with:
path: ./
base: ${{ github.event.repository.default_branch }}
head: HEAD
extra_args: --only-verified
extra_args: '--results=verified,unknown'
+20 -8
@@ -1,17 +1,29 @@
name: Prowler - PR Labeler
name: 'Tools: PR Labeler'
on:
pull_request_target:
branches:
- "master"
- "v3"
- "v4.*"
pull_request_target:
branches:
- 'master'
- 'v5.*'
types:
- 'opened'
- 'reopened'
- 'synchronize'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
labeler:
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@634933edcd8ababfe52f92936142cc22ac488b1b # v6.0.1
- name: Apply labels to PR
uses: actions/labeler@634933edcd8ababfe52f92936142cc22ac488b1b # v6.0.1
with:
sync-labels: true
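
`actions/labeler` reads its rules from `.github/labeler.yml`, which is not shown in this compare; an entry typically looks like this (hypothetical label and glob):

```yaml
# Hypothetical .github/labeler.yml entry (not part of this diff)
documentation:
  - changed-files:
      - any-glob-to-any-file: 'docs/**'
```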
+14 -17
@@ -3,21 +3,13 @@ name: 'MCP: Container Build and Push'
on:
push:
branches:
- "master"
- 'master'
paths:
- "mcp_server/**"
- ".github/workflows/mcp-container-build-push.yml"
# Uncomment to test this workflow on PRs
# pull_request:
# branches:
# - "master"
# paths:
# - "mcp_server/**"
# - ".github/workflows/mcp-container-build-push.yml"
- 'mcp_server/**'
- '.github/workflows/mcp-container-build-push.yml'
release:
types: [published]
types:
- 'published'
permissions:
contents: read
@@ -41,6 +33,7 @@ jobs:
setup:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 5
outputs:
short-sha: ${{ steps.set-short-sha.outputs.short-sha }}
steps:
@@ -51,8 +44,12 @@ jobs:
container-build-push:
needs: setup
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
packages: write
steps:
- name: Checkout
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Login to DockerHub
@@ -64,7 +61,7 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build and push container (latest)
- name: Build and push MCP container (latest)
if: github.event_name == 'push'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
@@ -83,7 +80,7 @@ jobs:
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Build and push container (release)
- name: Build and push MCP container (release)
if: github.event_name == 'release'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
@@ -103,7 +100,7 @@ jobs:
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Trigger deployment
- name: Trigger MCP deployment
if: github.event_name == 'push'
uses: peter-evans/repository-dispatch@5fc4efd1a4797ddb68ffd0714a238564e4cc0e6f # v4.0.0
with:
+103
@@ -0,0 +1,103 @@
name: 'Tools: Check Changelog'
on:
pull_request:
types:
- 'opened'
- 'synchronize'
- 'reopened'
- 'labeled'
- 'unlabeled'
branches:
- 'master'
- 'v5.*'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
check-changelog:
if: contains(github.event.pull_request.labels.*.name, 'no-changelog') == false
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
env:
MONITORED_FOLDERS: 'api ui prowler mcp_server'
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
api/**
ui/**
prowler/**
mcp_server/**
- name: Check for folder changes and changelog presence
id: check-folders
run: |
missing_changelogs=""
# Check each monitored folder
if [[ "${{ steps.changed-files.outputs.any_changed }}" == "true" ]]; then
for folder in $MONITORED_FOLDERS; do
# Get files changed in this folder
changed_in_folder=$(echo "${{ steps.changed-files.outputs.all_changed_files }}" | tr ' ' '\n' | grep "^${folder}/" || true)
if [ -n "$changed_in_folder" ]; then
echo "Detected changes in ${folder}/"
# Check if CHANGELOG.md was updated
if ! echo "$changed_in_folder" | grep -q "^${folder}/CHANGELOG.md$"; then
echo "No changelog update found for ${folder}/"
missing_changelogs="${missing_changelogs}- \`${folder}\`"$'\n'
fi
fi
done
fi
{
echo "missing_changelogs<<EOF"
echo -e "${missing_changelogs}"
echo "EOF"
} >> $GITHUB_OUTPUT
- name: Find existing changelog comment
if: github.event.pull_request.head.repo.full_name == github.repository
id: find-comment
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad # v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-includes: '<!-- changelog-check -->'
- name: Update PR comment with changelog status
if: github.event.pull_request.head.repo.full_name == github.repository
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-id: ${{ steps.find-comment.outputs.comment-id }}
edit-mode: replace
body: |
<!-- changelog-check -->
${{ steps.check-folders.outputs.missing_changelogs != '' && format('⚠️ **Changes detected in the following folders without a corresponding update to the `CHANGELOG.md`:**
{0}
Please add an entry to the corresponding `CHANGELOG.md` file to maintain a clear history of changes.', steps.check-folders.outputs.missing_changelogs) || '✅ All necessary `CHANGELOG.md` files have been updated.' }}
- name: Fail if changelog is missing
if: steps.check-folders.outputs.missing_changelogs != ''
run: |
echo "::error::Missing changelog updates in some folders"
exit 1
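
The `<<EOF` blocks above use the heredoc syntax GitHub Actions supports for multiline step outputs; a minimal standalone sketch of the same technique:

```yaml
- id: demo
  run: |
    {
      echo "missing_changelogs<<EOF"
      printf '%s\n' '- `api`' '- `ui`'
      echo "EOF"
    } >> "$GITHUB_OUTPUT"
- run: printf '%s\n' "$MISSING"
  env:
    MISSING: ${{ steps.demo.outputs.missing_changelogs }}
```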
+52 -104
@@ -1,42 +1,40 @@
name: Prowler - PR Conflict Checker
name: 'Tools: PR Conflict Checker'
on:
pull_request:
pull_request_target:
types:
- opened
- synchronize
- reopened
- 'opened'
- 'synchronize'
- 'reopened'
branches:
- "master"
- "v5.*"
# Leaving this commented until we find a way to run it for forks but in Prowler's context
# pull_request_target:
# types:
# - opened
# - synchronize
# - reopened
# branches:
# - "master"
# - "v5.*"
- 'master'
- 'v5.*'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
conflict-checker:
check-conflicts:
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
issues: write
steps:
- name: Checkout repository
- name: Checkout PR head
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
**
files: '**'
- name: Check for conflict markers
id: conflict-check
@@ -51,10 +49,10 @@ jobs:
if [ -f "$file" ]; then
echo "Checking file: $file"
# Look for conflict markers
if grep -l "^<<<<<<<\|^=======\|^>>>>>>>" "$file" 2>/dev/null; then
# Look for conflict markers (more precise regex)
if grep -qE '^(<<<<<<<|=======|>>>>>>>)' "$file" 2>/dev/null; then
echo "Conflict markers found in: $file"
CONFLICT_FILES="$CONFLICT_FILES$file "
CONFLICT_FILES="${CONFLICT_FILES}- \`${file}\`"$'\n'
HAS_CONFLICTS=true
fi
fi
@@ -62,114 +60,64 @@ jobs:
if [ "$HAS_CONFLICTS" = true ]; then
echo "has_conflicts=true" >> $GITHUB_OUTPUT
echo "conflict_files=$CONFLICT_FILES" >> $GITHUB_OUTPUT
echo "Conflict markers detected in files: $CONFLICT_FILES"
{
echo "conflict_files<<EOF"
echo "$CONFLICT_FILES"
echo "EOF"
} >> $GITHUB_OUTPUT
echo "Conflict markers detected"
else
echo "has_conflicts=false" >> $GITHUB_OUTPUT
echo "No conflict markers found in changed files"
fi
- name: Add conflict label
if: steps.conflict-check.outputs.has_conflicts == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
github-token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
script: |
const { data: labels } = await github.rest.issues.listLabelsOnIssue({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
});
- name: Manage conflict label
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ github.event.pull_request.number }}
HAS_CONFLICTS: ${{ steps.conflict-check.outputs.has_conflicts }}
run: |
LABEL_NAME="has-conflicts"
const hasConflictLabel = labels.some(label => label.name === 'has-conflicts');
# Add or remove label based on conflict status
if [ "$HAS_CONFLICTS" = "true" ]; then
echo "Adding conflict label to PR #${PR_NUMBER}..."
gh pr edit "$PR_NUMBER" --add-label "$LABEL_NAME" --repo ${{ github.repository }} || true
else
echo "Removing conflict label from PR #${PR_NUMBER}..."
gh pr edit "$PR_NUMBER" --remove-label "$LABEL_NAME" --repo ${{ github.repository }} || true
fi
if (!hasConflictLabel) {
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
labels: ['has-conflicts']
});
console.log('Added has-conflicts label');
} else {
console.log('has-conflicts label already exists');
}
- name: Remove conflict label
if: steps.conflict-check.outputs.has_conflicts == 'false'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
github-token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
script: |
try {
await github.rest.issues.removeLabel({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
name: 'has-conflicts'
});
console.log('Removed has-conflicts label');
} catch (error) {
if (error.status === 404) {
console.log('has-conflicts label was not present');
} else {
throw error;
}
}
- name: Find existing conflict comment
if: steps.conflict-check.outputs.has_conflicts == 'true'
- name: Find existing comment
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad # v4.0.0
id: find-comment
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-regex: '(⚠️ \*\*Conflict Markers Detected\*\*|✅ \*\*Conflict Markers Resolved\*\*)'
body-includes: '<!-- conflict-checker-comment -->'
- name: Create or update conflict comment
if: steps.conflict-check.outputs.has_conflicts == 'true'
- name: Create or update comment
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ github.event.pull_request.number }}
edit-mode: replace
body: |
⚠️ **Conflict Markers Detected**
<!-- conflict-checker-comment -->
${{ steps.conflict-check.outputs.has_conflicts == 'true' && '⚠️ **Conflict Markers Detected**' || '✅ **Conflict Markers Resolved**' }}
This pull request contains unresolved conflict markers in the following files:
```
${{ steps.conflict-check.outputs.conflict_files }}
```
${{ steps.conflict-check.outputs.has_conflicts == 'true' && format('This pull request contains unresolved conflict markers in the following files:
{0}
Please resolve these conflicts by:
1. Locating the conflict markers: `<<<<<<<`, `=======`, and `>>>>>>>`
2. Manually editing the files to resolve the conflicts
3. Removing all conflict markers
4. Committing and pushing the changes
- name: Find existing conflict comment when resolved
if: steps.conflict-check.outputs.has_conflicts == 'false'
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad # v4.0.0
id: find-resolved-comment
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-regex: '(⚠️ \*\*Conflict Markers Detected\*\*|✅ \*\*Conflict Markers Resolved\*\*)'
- name: Update comment when conflicts resolved
if: steps.conflict-check.outputs.has_conflicts == 'false' && steps.find-resolved-comment.outputs.comment-id != ''
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-resolved-comment.outputs.comment-id }}
issue-number: ${{ github.event.pull_request.number }}
edit-mode: replace
body: |
✅ **Conflict Markers Resolved**
All conflict markers have been successfully resolved in this pull request.
4. Committing and pushing the changes', steps.conflict-check.outputs.conflict_files) || 'All conflict markers have been successfully resolved in this pull request.' }}
- name: Fail workflow if conflicts detected
if: steps.conflict-check.outputs.has_conflicts == 'true'
run: |
echo "::error::Workflow failed due to conflict markers in files: ${{ steps.conflict-check.outputs.conflict_files }}"
echo "::error::Workflow failed due to conflict markers detected in the PR"
exit 1
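
The tightened `grep -qE '^(<<<<<<<|=======|>>>>>>>)'` only matches markers at the start of a line (note that a literal `=======` setext underline in Markdown would still match). A quick demonstration:

```yaml
- name: Demo of the conflict-marker check (illustrative only)
  run: |
    printf '<<<<<<< HEAD\nours\n=======\ntheirs\n>>>>>>> feature\n' > /tmp/demo.txt
    grep -qE '^(<<<<<<<|=======|>>>>>>>)' /tmp/demo.txt && echo "markers found"
```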
@@ -1,27 +1,31 @@
name: Prowler - Merged Pull Request
name: 'Tools: PR Merged'
on:
pull_request_target:
branches: ['master']
types: ['closed']
branches:
- 'master'
types:
- 'closed'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: false
jobs:
trigger-cloud-pull-request:
name: Trigger Cloud Pull Request
if: github.event.pull_request.merged == true && github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 10
permissions:
contents: read
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
ref: ${{ github.event.pull_request.merge_commit_sha }}
- name: Set short git commit SHA
- name: Calculate short commit SHA
id: vars
run: |
shortSha=$(git rev-parse --short ${{ github.event.pull_request.merge_commit_sha }})
echo "SHORT_SHA=${shortSha}" >> $GITHUB_ENV
SHORT_SHA="${{ github.event.pull_request.merge_commit_sha }}"
echo "SHORT_SHA=${SHORT_SHA::7}" >> $GITHUB_ENV
- name: Trigger pull request
- name: Trigger Cloud repository pull request
uses: peter-evans/repository-dispatch@5fc4efd1a4797ddb68ffd0714a238564e4cc0e6f # v4.0.0
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
@@ -31,8 +35,12 @@ jobs:
{
"PROWLER_COMMIT_SHA": "${{ github.event.pull_request.merge_commit_sha }}",
"PROWLER_COMMIT_SHORT_SHA": "${{ env.SHORT_SHA }}",
"PROWLER_PR_NUMBER": "${{ github.event.pull_request.number }}",
"PROWLER_PR_TITLE": ${{ toJson(github.event.pull_request.title) }},
"PROWLER_PR_LABELS": ${{ toJson(github.event.pull_request.labels.*.name) }},
"PROWLER_PR_BODY": ${{ toJson(github.event.pull_request.body) }},
"PROWLER_PR_URL": ${{ toJson(github.event.pull_request.html_url) }}
"PROWLER_PR_URL": ${{ toJson(github.event.pull_request.html_url) }},
"PROWLER_PR_MERGED_BY": "${{ github.event.pull_request.merged_by.login }}",
"PROWLER_PR_BASE_BRANCH": "${{ github.event.pull_request.base.ref }}",
"PROWLER_PR_HEAD_BRANCH": "${{ github.event.pull_request.head.ref }}"
}
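Note the contrast in this payload: fixed-format fields are interpolated as quoted strings, while free-form fields (title, body, labels) go through `toJson()` so that quotes and newlines in user-supplied text cannot break the JSON. A small illustration of the same escaping using the `jq` CLI as a stand-in (assumed available; not part of the workflow):

```bash
# Why free-form text needs JSON encoding before being embedded in a payload.
# jq plays the role of the toJson() expression function here.
PR_TITLE='fix(entra): handle "quoted" values'
jq -n --arg t "$PR_TITLE" '{PROWLER_PR_TITLE: $t}'
# -> { "PROWLER_PR_TITLE": "fix(entra): handle \"quoted\" values" }
```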
@@ -1,6 +1,6 @@
name: Prowler - Release Preparation
name: 'Tools: Prepare Release'
run-name: Prowler Release Preparation for ${{ inputs.prowler_version }}
run-name: 'Prepare Release for Prowler ${{ inputs.prowler_version }}'
on:
workflow_dispatch:
@@ -10,18 +10,23 @@ on:
required: true
type: string
concurrency:
group: ${{ github.workflow }}-${{ inputs.prowler_version }}
cancel-in-progress: false
env:
PROWLER_VERSION: ${{ github.event.inputs.prowler_version }}
PROWLER_VERSION: ${{ inputs.prowler_version }}
jobs:
prepare-release:
if: github.repository == 'prowler-cloud/prowler'
if: github.event_name == 'workflow_dispatch' && github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: write
pull-requests: write
steps:
- name: Checkout code
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
@@ -34,15 +39,15 @@ jobs:
- name: Install Poetry
run: |
python3 -m pip install --user poetry
python3 -m pip install --user poetry==2.1.1
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Configure Git
run: |
git config --global user.name "prowler-bot"
git config --global user.email "179230569+prowler-bot@users.noreply.github.com"
git config --global user.name 'prowler-bot'
git config --global user.email '179230569+prowler-bot@users.noreply.github.com'
- name: Parse version and determine branch
- name: Parse version and read changelogs
run: |
# Validate version format (reusing pattern from sdk-bump-version.yml)
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
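The `BASH_REMATCH` array populated by that `[[ =~ ]]` test is what lets the step split the version into its components. A minimal sketch of the likely continuation follows; the hunk is truncated here, so the body and variable names are assumptions:

```bash
# Sketch of semver parsing with the regex shown above; the step's real
# body is truncated in this hunk, so variable names are assumptions.
PROWLER_VERSION="5.13.1"
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
  MAJOR="${BASH_REMATCH[1]}"
  MINOR="${BASH_REMATCH[2]}"
  PATCH="${BASH_REMATCH[3]}"
  echo "major=$MAJOR minor=$MINOR patch=$PATCH"
else
  echo "::error::Invalid version format: $PROWLER_VERSION"
  exit 1
fi
```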
@@ -119,7 +124,7 @@ jobs:
exit 1
fi
- name: Extract changelog entries
- name: Extract and combine changelog entries
run: |
set -e
@@ -245,7 +250,7 @@ jobs:
echo "Combined changelog preview:"
cat combined_changelog.md
- name: Checkout existing branch for patch release
- name: Checkout release branch for patch release
if: ${{ env.PATCH_VERSION != '0' }}
run: |
echo "Patch release detected, checking out existing branch $BRANCH_NAME..."
@@ -260,7 +265,7 @@ jobs:
exit 1
fi
- name: Verify version in pyproject.toml
- name: Verify SDK version in pyproject.toml
run: |
CURRENT_VERSION=$(grep '^version = ' pyproject.toml | sed -E 's/version = "([^"]+)"/\1/' | tr -d '[:space:]')
PROWLER_VERSION_TRIMMED=$(echo "$PROWLER_VERSION" | tr -d '[:space:]')
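The same grep-and-sed extraction recurs in each verification step below. The truncated hunk body presumably compares the extracted value against the expected version, along these lines (assumed continuation, inferred from the visible `fi` and success echo):

```bash
# Assumed shape of the truncated comparison; the visible hunk shows only
# the extraction pipeline and the trailing `fi`.
if [ "$CURRENT_VERSION" != "$PROWLER_VERSION_TRIMMED" ]; then
  echo "::error::pyproject.toml version ($CURRENT_VERSION) does not match $PROWLER_VERSION_TRIMMED"
  exit 1
fi
```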
@@ -270,7 +275,7 @@ jobs:
fi
echo "✓ pyproject.toml version: $CURRENT_VERSION"
- name: Verify version in prowler/config/config.py
- name: Verify SDK version in prowler/config/config.py
run: |
CURRENT_VERSION=$(grep '^prowler_version = ' prowler/config/config.py | sed -E 's/prowler_version = "([^"]+)"/\1/' | tr -d '[:space:]')
PROWLER_VERSION_TRIMMED=$(echo "$PROWLER_VERSION" | tr -d '[:space:]')
@@ -280,7 +285,7 @@ jobs:
fi
echo "✓ prowler/config/config.py version: $CURRENT_VERSION"
- name: Verify version in api/pyproject.toml
- name: Verify API version in api/pyproject.toml
if: ${{ env.HAS_API_CHANGES == 'true' }}
run: |
CURRENT_API_VERSION=$(grep '^version = ' api/pyproject.toml | sed -E 's/version = "([^"]+)"/\1/' | tr -d '[:space:]')
@@ -291,7 +296,7 @@ jobs:
fi
echo "✓ api/pyproject.toml version: $CURRENT_API_VERSION"
- name: Verify prowler dependency in api/pyproject.toml
- name: Verify API prowler dependency in api/pyproject.toml
if: ${{ env.PATCH_VERSION != '0' && env.HAS_API_CHANGES == 'true' }}
run: |
CURRENT_PROWLER_REF=$(grep 'prowler @ git+https://github.com/prowler-cloud/prowler.git@' api/pyproject.toml | sed -E 's/.*@([^"]+)".*/\1/' | tr -d '[:space:]')
@@ -302,7 +307,7 @@ jobs:
fi
echo "✓ api/pyproject.toml prowler dependency: $CURRENT_PROWLER_REF"
- name: Verify version in api/src/backend/api/v1/views.py
- name: Verify API version in api/src/backend/api/v1/views.py
if: ${{ env.HAS_API_CHANGES == 'true' }}
run: |
CURRENT_API_VERSION=$(grep 'spectacular_settings.VERSION = ' api/src/backend/api/v1/views.py | sed -E 's/.*spectacular_settings.VERSION = "([^"]+)".*/\1/' | tr -d '[:space:]')
@@ -313,7 +318,7 @@ jobs:
fi
echo "✓ api/src/backend/api/v1/views.py version: $CURRENT_API_VERSION"
- name: Checkout existing release branch for minor release
- name: Checkout release branch for minor release
if: ${{ env.PATCH_VERSION == '0' }}
run: |
echo "Minor release detected (patch = 0), checking out existing branch $BRANCH_NAME..."
@@ -325,7 +330,7 @@ jobs:
exit 1
fi
- name: Prepare prowler dependency update for minor release
- name: Update API prowler dependency for minor release
if: ${{ env.PATCH_VERSION == '0' }}
run: |
CURRENT_PROWLER_REF=$(grep 'prowler @ git+https://github.com/prowler-cloud/prowler.git@' api/pyproject.toml | sed -E 's/.*@([^"]+)".*/\1/' | tr -d '[:space:]')
@@ -362,7 +367,7 @@ jobs:
echo "✓ Prepared prowler dependency update to: $UPDATED_PROWLER_REF"
- name: Create Pull Request against release branch
- name: Create PR for API dependency update
if: ${{ env.PATCH_VERSION == '0' }}
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
@@ -1,77 +0,0 @@
name: Prowler - Check Changelog
on:
pull_request:
types: [opened, synchronize, reopened, labeled, unlabeled]
jobs:
check-changelog:
if: contains(github.event.pull_request.labels.*.name, 'no-changelog') == false
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read
pull-requests: write
env:
MONITORED_FOLDERS: "api ui prowler mcp_server"
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
- name: Get list of changed files
id: changed_files
run: |
git fetch origin ${{ github.base_ref }}
git diff --name-only origin/${{ github.base_ref }}...HEAD > changed_files.txt
cat changed_files.txt
- name: Check for folder changes and changelog presence
id: check_folders
run: |
missing_changelogs=""
for folder in $MONITORED_FOLDERS; do
if grep -q "^${folder}/" changed_files.txt; then
echo "Detected changes in ${folder}/"
if ! grep -q "^${folder}/CHANGELOG.md$" changed_files.txt; then
echo "No changelog update found for ${folder}/"
missing_changelogs="${missing_changelogs}- \`${folder}\`\n"
fi
fi
done
echo "missing_changelogs<<EOF" >> $GITHUB_OUTPUT
echo -e "${missing_changelogs}" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
- name: Find existing changelog comment
if: github.event.pull_request.head.repo.full_name == github.repository
id: find_comment
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad #v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-includes: '<!-- changelog-check -->'
- name: Update PR comment with changelog status
if: github.event.pull_request.head.repo.full_name == github.repository
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-id: ${{ steps.find_comment.outputs.comment-id }}
edit-mode: replace
body: |
<!-- changelog-check -->
${{ steps.check_folders.outputs.missing_changelogs != '' && format('⚠️ **Changes detected in the following folders without a corresponding update to the `CHANGELOG.md`:**
{0}
Please add an entry to the corresponding `CHANGELOG.md` file to maintain a clear history of changes.', steps.check_folders.outputs.missing_changelogs) || '✅ All necessary `CHANGELOG.md` files have been updated. Great job! 🎉' }}
- name: Fail if changelog is missing
if: steps.check_folders.outputs.missing_changelogs != ''
run: |
echo "ERROR: Missing changelog updates in some folders."
exit 1

@@ -83,3 +83,6 @@ CLAUDE.md
# MCP Server
mcp_server/prowler_mcp_server/prowler_app/server.py
mcp_server/prowler_mcp_server/prowler_app/utils/schema.yaml
# Compliance report
*.pdf
@@ -46,6 +46,14 @@ help: ## Show this help.
@echo "Prowler Makefile"
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_-]+:.*?##/ { printf " \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
##@ Build no cache
build-no-cache-dev:
docker compose -f docker-compose-dev.yml build --no-cache api-dev worker-dev worker-beat
##@ Development Environment
run-api-dev: ## Start development environment with API, PostgreSQL, Valkey, and workers
docker compose -f docker-compose-dev.yml up api-dev postgres valkey worker-dev worker-beat --build
docker compose -f docker-compose-dev.yml up api-dev postgres valkey worker-dev worker-beat
##@ Development Environment
build-and-run-api-dev: build-no-cache-dev run-api-dev
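Typical usage of the targets touched by this hunk (assuming `docker-compose-dev.yml` exists at the repository root):

```bash
# Rebuild the dev images without cache, then bring up the API stack.
make build-no-cache-dev
make run-api-dev
# Or do both with the combined target added in this change:
make build-and-run-api-dev
```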
@@ -2,7 +2,7 @@
All notable changes to the **Prowler API** are documented in this file.
## [1.14.0] (Prowler UNRELEASED)
## [1.14.0] (Prowler 5.13.0)
### Added
- Default JWT keys are generated and stored if they are missing from configuration [(#8655)](https://github.com/prowler-cloud/prowler/pull/8655)
@@ -12,8 +12,10 @@ All notable changes to the **Prowler API** are documented in this file.
- API Key support [(#8805)](https://github.com/prowler-cloud/prowler/pull/8805)
- SAML role mapping protection for single-admin tenants to prevent accidental lockout [(#8882)](https://github.com/prowler-cloud/prowler/pull/8882)
- Support for `passed_findings` and `total_findings` fields in compliance requirement overview for accurate Prowler ThreatScore calculation [(#8582)](https://github.com/prowler-cloud/prowler/pull/8582)
- PDF reporting for Prowler ThreatScore [(#8867)](https://github.com/prowler-cloud/prowler/pull/8867)
- Database read replica support [(#8869)](https://github.com/prowler-cloud/prowler/pull/8869)
- Support Common Cloud Controls for AWS, Azure and GCP [(#8000)](https://github.com/prowler-cloud/prowler/pull/8000)
- Add `provider_id__in` filter support to findings and findings severity overview endpoints [(#8951)](https://github.com/prowler-cloud/prowler/pull/8951)
### Changed
- The MANAGE_ACCOUNT permission is now required to modify or read user permissions, instead of MANAGE_USERS [(#8281)](https://github.com/prowler-cloud/prowler/pull/8281)
@@ -1256,6 +1256,98 @@ files = [
{file = "contextlib2-21.6.0.tar.gz", hash = "sha256:ab1e2bfe1d01d968e1b7e8d9023bc51ef3509bba217bb730cee3827e1ee82869"},
]
[[package]]
name = "contourpy"
version = "1.3.3"
description = "Python library for calculating contours of 2D quadrilateral grids"
optional = false
python-versions = ">=3.11"
groups = ["main"]
files = [
{file = "contourpy-1.3.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:709a48ef9a690e1343202916450bc48b9e51c049b089c7f79a267b46cffcdaa1"},
{file = "contourpy-1.3.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:23416f38bfd74d5d28ab8429cc4d63fa67d5068bd711a85edb1c3fb0c3e2f381"},
{file = "contourpy-1.3.3-cp311-cp311-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:929ddf8c4c7f348e4c0a5a3a714b5c8542ffaa8c22954862a46ca1813b667ee7"},
{file = "contourpy-1.3.3-cp311-cp311-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:9e999574eddae35f1312c2b4b717b7885d4edd6cb46700e04f7f02db454e67c1"},
{file = "contourpy-1.3.3-cp311-cp311-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:0bf67e0e3f482cb69779dd3061b534eb35ac9b17f163d851e2a547d56dba0a3a"},
{file = "contourpy-1.3.3-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:51e79c1f7470158e838808d4a996fa9bac72c498e93d8ebe5119bc1e6becb0db"},
{file = "contourpy-1.3.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:598c3aaece21c503615fd59c92a3598b428b2f01bfb4b8ca9c4edeecc2438620"},
{file = "contourpy-1.3.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:322ab1c99b008dad206d406bb61d014cf0174df491ae9d9d0fac6a6fda4f977f"},
{file = "contourpy-1.3.3-cp311-cp311-win32.whl", hash = "sha256:fd907ae12cd483cd83e414b12941c632a969171bf90fc937d0c9f268a31cafff"},
{file = "contourpy-1.3.3-cp311-cp311-win_amd64.whl", hash = "sha256:3519428f6be58431c56581f1694ba8e50626f2dd550af225f82fb5f5814d2a42"},
{file = "contourpy-1.3.3-cp311-cp311-win_arm64.whl", hash = "sha256:15ff10bfada4bf92ec8b31c62bf7c1834c244019b4a33095a68000d7075df470"},
{file = "contourpy-1.3.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:b08a32ea2f8e42cf1d4be3169a98dd4be32bafe4f22b6c4cb4ba810fa9e5d2cb"},
{file = "contourpy-1.3.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:556dba8fb6f5d8742f2923fe9457dbdd51e1049c4a43fd3986a0b14a1d815fc6"},
{file = "contourpy-1.3.3-cp312-cp312-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:92d9abc807cf7d0e047b95ca5d957cf4792fcd04e920ca70d48add15c1a90ea7"},
{file = "contourpy-1.3.3-cp312-cp312-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:b2e8faa0ed68cb29af51edd8e24798bb661eac3bd9f65420c1887b6ca89987c8"},
{file = "contourpy-1.3.3-cp312-cp312-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:626d60935cf668e70a5ce6ff184fd713e9683fb458898e4249b63be9e28286ea"},
{file = "contourpy-1.3.3-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4d00e655fcef08aba35ec9610536bfe90267d7ab5ba944f7032549c55a146da1"},
{file = "contourpy-1.3.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:451e71b5a7d597379ef572de31eeb909a87246974d960049a9848c3bc6c41bf7"},
{file = "contourpy-1.3.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:459c1f020cd59fcfe6650180678a9993932d80d44ccde1fa1868977438f0b411"},
{file = "contourpy-1.3.3-cp312-cp312-win32.whl", hash = "sha256:023b44101dfe49d7d53932be418477dba359649246075c996866106da069af69"},
{file = "contourpy-1.3.3-cp312-cp312-win_amd64.whl", hash = "sha256:8153b8bfc11e1e4d75bcb0bff1db232f9e10b274e0929de9d608027e0d34ff8b"},
{file = "contourpy-1.3.3-cp312-cp312-win_arm64.whl", hash = "sha256:07ce5ed73ecdc4a03ffe3e1b3e3c1166db35ae7584be76f65dbbe28a7791b0cc"},
{file = "contourpy-1.3.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:177fb367556747a686509d6fef71d221a4b198a3905fe824430e5ea0fda54eb5"},
{file = "contourpy-1.3.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:d002b6f00d73d69333dac9d0b8d5e84d9724ff9ef044fd63c5986e62b7c9e1b1"},
{file = "contourpy-1.3.3-cp313-cp313-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:348ac1f5d4f1d66d3322420f01d42e43122f43616e0f194fc1c9f5d830c5b286"},
{file = "contourpy-1.3.3-cp313-cp313-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:655456777ff65c2c548b7c454af9c6f33f16c8884f11083244b5819cc214f1b5"},
{file = "contourpy-1.3.3-cp313-cp313-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:644a6853d15b2512d67881586bd03f462c7ab755db95f16f14d7e238f2852c67"},
{file = "contourpy-1.3.3-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4debd64f124ca62069f313a9cb86656ff087786016d76927ae2cf37846b006c9"},
{file = "contourpy-1.3.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a15459b0f4615b00bbd1e91f1b9e19b7e63aea7483d03d804186f278c0af2659"},
{file = "contourpy-1.3.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:ca0fdcd73925568ca027e0b17ab07aad764be4706d0a925b89227e447d9737b7"},
{file = "contourpy-1.3.3-cp313-cp313-win32.whl", hash = "sha256:b20c7c9a3bf701366556e1b1984ed2d0cedf999903c51311417cf5f591d8c78d"},
{file = "contourpy-1.3.3-cp313-cp313-win_amd64.whl", hash = "sha256:1cadd8b8969f060ba45ed7c1b714fe69185812ab43bd6b86a9123fe8f99c3263"},
{file = "contourpy-1.3.3-cp313-cp313-win_arm64.whl", hash = "sha256:fd914713266421b7536de2bfa8181aa8c699432b6763a0ea64195ebe28bff6a9"},
{file = "contourpy-1.3.3-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:88df9880d507169449d434c293467418b9f6cbe82edd19284aa0409e7fdb933d"},
{file = "contourpy-1.3.3-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:d06bb1f751ba5d417047db62bca3c8fde202b8c11fb50742ab3ab962c81e8216"},
{file = "contourpy-1.3.3-cp313-cp313t-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e4e6b05a45525357e382909a4c1600444e2a45b4795163d3b22669285591c1ae"},
{file = "contourpy-1.3.3-cp313-cp313t-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:ab3074b48c4e2cf1a960e6bbeb7f04566bf36b1861d5c9d4d8ac04b82e38ba20"},
{file = "contourpy-1.3.3-cp313-cp313t-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:6c3d53c796f8647d6deb1abe867daeb66dcc8a97e8455efa729516b997b8ed99"},
{file = "contourpy-1.3.3-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:50ed930df7289ff2a8d7afeb9603f8289e5704755c7e5c3bbd929c90c817164b"},
{file = "contourpy-1.3.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:4feffb6537d64b84877da813a5c30f1422ea5739566abf0bd18065ac040e120a"},
{file = "contourpy-1.3.3-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:2b7e9480ffe2b0cd2e787e4df64270e3a0440d9db8dc823312e2c940c167df7e"},
{file = "contourpy-1.3.3-cp313-cp313t-win32.whl", hash = "sha256:283edd842a01e3dcd435b1c5116798d661378d83d36d337b8dde1d16a5fc9ba3"},
{file = "contourpy-1.3.3-cp313-cp313t-win_amd64.whl", hash = "sha256:87acf5963fc2b34825e5b6b048f40e3635dd547f590b04d2ab317c2619ef7ae8"},
{file = "contourpy-1.3.3-cp313-cp313t-win_arm64.whl", hash = "sha256:3c30273eb2a55024ff31ba7d052dde990d7d8e5450f4bbb6e913558b3d6c2301"},
{file = "contourpy-1.3.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:fde6c716d51c04b1c25d0b90364d0be954624a0ee9d60e23e850e8d48353d07a"},
{file = "contourpy-1.3.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:cbedb772ed74ff5be440fa8eee9bd49f64f6e3fc09436d9c7d8f1c287b121d77"},
{file = "contourpy-1.3.3-cp314-cp314-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:22e9b1bd7a9b1d652cd77388465dc358dafcd2e217d35552424aa4f996f524f5"},
{file = "contourpy-1.3.3-cp314-cp314-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:a22738912262aa3e254e4f3cb079a95a67132fc5a063890e224393596902f5a4"},
{file = "contourpy-1.3.3-cp314-cp314-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:afe5a512f31ee6bd7d0dda52ec9864c984ca3d66664444f2d72e0dc4eb832e36"},
{file = "contourpy-1.3.3-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:f64836de09927cba6f79dcd00fdd7d5329f3fccc633468507079c829ca4db4e3"},
{file = "contourpy-1.3.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:1fd43c3be4c8e5fd6e4f2baeae35ae18176cf2e5cced681cca908addf1cdd53b"},
{file = "contourpy-1.3.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:6afc576f7b33cf00996e5c1102dc2a8f7cc89e39c0b55df93a0b78c1bd992b36"},
{file = "contourpy-1.3.3-cp314-cp314-win32.whl", hash = "sha256:66c8a43a4f7b8df8b71ee1840e4211a3c8d93b214b213f590e18a1beca458f7d"},
{file = "contourpy-1.3.3-cp314-cp314-win_amd64.whl", hash = "sha256:cf9022ef053f2694e31d630feaacb21ea24224be1c3ad0520b13d844274614fd"},
{file = "contourpy-1.3.3-cp314-cp314-win_arm64.whl", hash = "sha256:95b181891b4c71de4bb404c6621e7e2390745f887f2a026b2d99e92c17892339"},
{file = "contourpy-1.3.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:33c82d0138c0a062380332c861387650c82e4cf1747aaa6938b9b6516762e772"},
{file = "contourpy-1.3.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:ea37e7b45949df430fe649e5de8351c423430046a2af20b1c1961cae3afcda77"},
{file = "contourpy-1.3.3-cp314-cp314t-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d304906ecc71672e9c89e87c4675dc5c2645e1f4269a5063b99b0bb29f232d13"},
{file = "contourpy-1.3.3-cp314-cp314t-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:ca658cd1a680a5c9ea96dc61cdbae1e85c8f25849843aa799dfd3cb370ad4fbe"},
{file = "contourpy-1.3.3-cp314-cp314t-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:ab2fd90904c503739a75b7c8c5c01160130ba67944a7b77bbf36ef8054576e7f"},
{file = "contourpy-1.3.3-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b7301b89040075c30e5768810bc96a8e8d78085b47d8be6e4c3f5a0b4ed478a0"},
{file = "contourpy-1.3.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:2a2a8b627d5cc6b7c41a4beff6c5ad5eb848c88255fda4a8745f7e901b32d8e4"},
{file = "contourpy-1.3.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:fd6ec6be509c787f1caf6b247f0b1ca598bef13f4ddeaa126b7658215529ba0f"},
{file = "contourpy-1.3.3-cp314-cp314t-win32.whl", hash = "sha256:e74a9a0f5e3fff48fb5a7f2fd2b9b70a3fe014a67522f79b7cca4c0c7e43c9ae"},
{file = "contourpy-1.3.3-cp314-cp314t-win_amd64.whl", hash = "sha256:13b68d6a62db8eafaebb8039218921399baf6e47bf85006fd8529f2a08ef33fc"},
{file = "contourpy-1.3.3-cp314-cp314t-win_arm64.whl", hash = "sha256:b7448cb5a725bb1e35ce88771b86fba35ef418952474492cf7c764059933ff8b"},
{file = "contourpy-1.3.3-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:cd5dfcaeb10f7b7f9dc8941717c6c2ade08f587be2226222c12b25f0483ed497"},
{file = "contourpy-1.3.3-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:0c1fc238306b35f246d61a1d416a627348b5cf0648648a031e14bb8705fcdfe8"},
{file = "contourpy-1.3.3-pp311-pypy311_pp73-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:70f9aad7de812d6541d29d2bbf8feb22ff7e1c299523db288004e3157ff4674e"},
{file = "contourpy-1.3.3-pp311-pypy311_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5ed3657edf08512fc3fe81b510e35c2012fbd3081d2e26160f27ca28affec989"},
{file = "contourpy-1.3.3-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:3d1a3799d62d45c18bafd41c5fa05120b96a28079f2393af559b843d1a966a77"},
{file = "contourpy-1.3.3.tar.gz", hash = "sha256:083e12155b210502d0bca491432bb04d56dc3432f95a979b429f2848c3dbe880"},
]
[package.dependencies]
numpy = ">=1.25"
[package.extras]
bokeh = ["bokeh", "selenium"]
docs = ["furo", "sphinx (>=7.2)", "sphinx-copybutton"]
mypy = ["bokeh", "contourpy[bokeh,docs]", "docutils-stubs", "mypy (==1.17.0)", "types-Pillow"]
test = ["Pillow", "contourpy[test-no-images]", "matplotlib"]
test-no-images = ["pytest", "pytest-cov", "pytest-rerunfailures", "pytest-xdist", "wurlitzer"]
[[package]]
name = "coverage"
version = "7.5.4"
@@ -1390,6 +1482,22 @@ ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==44.0.1)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]
[[package]]
name = "cycler"
version = "0.12.1"
description = "Composable style cycles"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "cycler-0.12.1-py3-none-any.whl", hash = "sha256:85cef7cff222d8644161529808465972e51340599459b8ac3ccbac5a854e0d30"},
{file = "cycler-0.12.1.tar.gz", hash = "sha256:88bb128f02ba341da8ef447245a9e138fae777f6a23943da4540077d3601eb1c"},
]
[package.extras]
docs = ["ipython", "matplotlib", "numpydoc", "sphinx"]
tests = ["pytest", "pytest-cov", "pytest-xdist"]
[[package]]
name = "dash"
version = "3.1.1"
@@ -2120,6 +2228,87 @@ werkzeug = ">=3.1.0"
async = ["asgiref (>=3.2)"]
dotenv = ["python-dotenv"]
[[package]]
name = "fonttools"
version = "4.60.1"
description = "Tools to manipulate font files"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "fonttools-4.60.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:9a52f254ce051e196b8fe2af4634c2d2f02c981756c6464dc192f1b6050b4e28"},
{file = "fonttools-4.60.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c7420a2696a44650120cdd269a5d2e56a477e2bfa9d95e86229059beb1c19e15"},
{file = "fonttools-4.60.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee0c0b3b35b34f782afc673d503167157094a16f442ace7c6c5e0ca80b08f50c"},
{file = "fonttools-4.60.1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:282dafa55f9659e8999110bd8ed422ebe1c8aecd0dc396550b038e6c9a08b8ea"},
{file = "fonttools-4.60.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:4ba4bd646e86de16160f0fb72e31c3b9b7d0721c3e5b26b9fa2fc931dfdb2652"},
{file = "fonttools-4.60.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:0b0835ed15dd5b40d726bb61c846a688f5b4ce2208ec68779bc81860adb5851a"},
{file = "fonttools-4.60.1-cp310-cp310-win32.whl", hash = "sha256:1525796c3ffe27bb6268ed2a1bb0dcf214d561dfaf04728abf01489eb5339dce"},
{file = "fonttools-4.60.1-cp310-cp310-win_amd64.whl", hash = "sha256:268ecda8ca6cb5c4f044b1fb9b3b376e8cd1b361cef275082429dc4174907038"},
{file = "fonttools-4.60.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:7b4c32e232a71f63a5d00259ca3d88345ce2a43295bb049d21061f338124246f"},
{file = "fonttools-4.60.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3630e86c484263eaac71d117085d509cbcf7b18f677906824e4bace598fb70d2"},
{file = "fonttools-4.60.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5c1015318e4fec75dd4943ad5f6a206d9727adf97410d58b7e32ab644a807914"},
{file = "fonttools-4.60.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e6c58beb17380f7c2ea181ea11e7db8c0ceb474c9dd45f48e71e2cb577d146a1"},
{file = "fonttools-4.60.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:ec3681a0cb34c255d76dd9d865a55f260164adb9fa02628415cdc2d43ee2c05d"},
{file = "fonttools-4.60.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f4b5c37a5f40e4d733d3bbaaef082149bee5a5ea3156a785ff64d949bd1353fa"},
{file = "fonttools-4.60.1-cp311-cp311-win32.whl", hash = "sha256:398447f3d8c0c786cbf1209711e79080a40761eb44b27cdafffb48f52bcec258"},
{file = "fonttools-4.60.1-cp311-cp311-win_amd64.whl", hash = "sha256:d066ea419f719ed87bc2c99a4a4bfd77c2e5949cb724588b9dd58f3fd90b92bf"},
{file = "fonttools-4.60.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:7b0c6d57ab00dae9529f3faf187f2254ea0aa1e04215cf2f1a8ec277c96661bc"},
{file = "fonttools-4.60.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:839565cbf14645952d933853e8ade66a463684ed6ed6c9345d0faf1f0e868877"},
{file = "fonttools-4.60.1-cp312-cp312-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8177ec9676ea6e1793c8a084a90b65a9f778771998eb919d05db6d4b1c0b114c"},
{file = "fonttools-4.60.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:996a4d1834524adbb423385d5a629b868ef9d774670856c63c9a0408a3063401"},
{file = "fonttools-4.60.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a46b2f450bc79e06ef3b6394f0c68660529ed51692606ad7f953fc2e448bc903"},
{file = "fonttools-4.60.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:6ec722ee589e89a89f5b7574f5c45604030aa6ae24cb2c751e2707193b466fed"},
{file = "fonttools-4.60.1-cp312-cp312-win32.whl", hash = "sha256:b2cf105cee600d2de04ca3cfa1f74f1127f8455b71dbad02b9da6ec266e116d6"},
{file = "fonttools-4.60.1-cp312-cp312-win_amd64.whl", hash = "sha256:992775c9fbe2cf794786fa0ffca7f09f564ba3499b8fe9f2f80bd7197db60383"},
{file = "fonttools-4.60.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:6f68576bb4bbf6060c7ab047b1574a1ebe5c50a17de62830079967b211059ebb"},
{file = "fonttools-4.60.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:eedacb5c5d22b7097482fa834bda0dafa3d914a4e829ec83cdea2a01f8c813c4"},
{file = "fonttools-4.60.1-cp313-cp313-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:b33a7884fabd72bdf5f910d0cf46be50dce86a0362a65cfc746a4168c67eb96c"},
{file = "fonttools-4.60.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2409d5fb7b55fd70f715e6d34e7a6e4f7511b8ad29a49d6df225ee76da76dd77"},
{file = "fonttools-4.60.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:c8651e0d4b3bdeda6602b85fdc2abbefc1b41e573ecb37b6779c4ca50753a199"},
{file = "fonttools-4.60.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:145daa14bf24824b677b9357c5e44fd8895c2a8f53596e1b9ea3496081dc692c"},
{file = "fonttools-4.60.1-cp313-cp313-win32.whl", hash = "sha256:2299df884c11162617a66b7c316957d74a18e3758c0274762d2cc87df7bc0272"},
{file = "fonttools-4.60.1-cp313-cp313-win_amd64.whl", hash = "sha256:a3db56f153bd4c5c2b619ab02c5db5192e222150ce5a1bc10f16164714bc39ac"},
{file = "fonttools-4.60.1-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:a884aef09d45ba1206712c7dbda5829562d3fea7726935d3289d343232ecb0d3"},
{file = "fonttools-4.60.1-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8a44788d9d91df72d1a5eac49b31aeb887a5f4aab761b4cffc4196c74907ea85"},
{file = "fonttools-4.60.1-cp314-cp314-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:e852d9dda9f93ad3651ae1e3bb770eac544ec93c3807888798eccddf84596537"},
{file = "fonttools-4.60.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:154cb6ee417e417bf5f7c42fe25858c9140c26f647c7347c06f0cc2d47eff003"},
{file = "fonttools-4.60.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:5664fd1a9ea7f244487ac8f10340c4e37664675e8667d6fee420766e0fb3cf08"},
{file = "fonttools-4.60.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:583b7f8e3c49486e4d489ad1deacfb8d5be54a8ef34d6df824f6a171f8511d99"},
{file = "fonttools-4.60.1-cp314-cp314-win32.whl", hash = "sha256:66929e2ea2810c6533a5184f938502cfdaea4bc3efb7130d8cc02e1c1b4108d6"},
{file = "fonttools-4.60.1-cp314-cp314-win_amd64.whl", hash = "sha256:f3d5be054c461d6a2268831f04091dc82753176f6ea06dc6047a5e168265a987"},
{file = "fonttools-4.60.1-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:b6379e7546ba4ae4b18f8ae2b9bc5960936007a1c0e30b342f662577e8bc3299"},
{file = "fonttools-4.60.1-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9d0ced62b59e0430b3690dbc5373df1c2aa7585e9a8ce38eff87f0fd993c5b01"},
{file = "fonttools-4.60.1-cp314-cp314t-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:875cb7764708b3132637f6c5fb385b16eeba0f7ac9fa45a69d35e09b47045801"},
{file = "fonttools-4.60.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a184b2ea57b13680ab6d5fbde99ccef152c95c06746cb7718c583abd8f945ccc"},
{file = "fonttools-4.60.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:026290e4ec76583881763fac284aca67365e0be9f13a7fb137257096114cb3bc"},
{file = "fonttools-4.60.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:f0e8817c7d1a0c2eedebf57ef9a9896f3ea23324769a9a2061a80fe8852705ed"},
{file = "fonttools-4.60.1-cp314-cp314t-win32.whl", hash = "sha256:1410155d0e764a4615774e5c2c6fc516259fe3eca5882f034eb9bfdbee056259"},
{file = "fonttools-4.60.1-cp314-cp314t-win_amd64.whl", hash = "sha256:022beaea4b73a70295b688f817ddc24ed3e3418b5036ffcd5658141184ef0d0c"},
{file = "fonttools-4.60.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:122e1a8ada290423c493491d002f622b1992b1ab0b488c68e31c413390dc7eb2"},
{file = "fonttools-4.60.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a140761c4ff63d0cb9256ac752f230460ee225ccef4ad8f68affc723c88e2036"},
{file = "fonttools-4.60.1-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0eae96373e4b7c9e45d099d7a523444e3554360927225c1cdae221a58a45b856"},
{file = "fonttools-4.60.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:596ecaca36367027d525b3b426d8a8208169d09edcf8c7506aceb3a38bfb55c7"},
{file = "fonttools-4.60.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:2ee06fc57512144d8b0445194c2da9f190f61ad51e230f14836286470c99f854"},
{file = "fonttools-4.60.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:b42d86938e8dda1cd9a1a87a6d82f1818eaf933348429653559a458d027446da"},
{file = "fonttools-4.60.1-cp39-cp39-win32.whl", hash = "sha256:8b4eb332f9501cb1cd3d4d099374a1e1306783ff95489a1026bde9eb02ccc34a"},
{file = "fonttools-4.60.1-cp39-cp39-win_amd64.whl", hash = "sha256:7473a8ed9ed09aeaa191301244a5a9dbe46fe0bf54f9d6cd21d83044c3321217"},
{file = "fonttools-4.60.1-py3-none-any.whl", hash = "sha256:906306ac7afe2156fcf0042173d6ebbb05416af70f6b370967b47f8f00103bbb"},
{file = "fonttools-4.60.1.tar.gz", hash = "sha256:ef00af0439ebfee806b25f24c8f92109157ff3fac5731dc7867957812e87b8d9"},
]
[package.extras]
all = ["brotli (>=1.0.1) ; platform_python_implementation == \"CPython\"", "brotlicffi (>=0.8.0) ; platform_python_implementation != \"CPython\"", "lxml (>=4.0)", "lz4 (>=1.7.4.2)", "matplotlib", "munkres ; platform_python_implementation == \"PyPy\"", "pycairo", "scipy ; platform_python_implementation != \"PyPy\"", "skia-pathops (>=0.5.0)", "sympy", "uharfbuzz (>=0.23.0)", "unicodedata2 (>=15.1.0) ; python_version <= \"3.12\"", "xattr ; sys_platform == \"darwin\"", "zopfli (>=0.1.4)"]
graphite = ["lz4 (>=1.7.4.2)"]
interpolatable = ["munkres ; platform_python_implementation == \"PyPy\"", "pycairo", "scipy ; platform_python_implementation != \"PyPy\""]
lxml = ["lxml (>=4.0)"]
pathops = ["skia-pathops (>=0.5.0)"]
plot = ["matplotlib"]
repacker = ["uharfbuzz (>=0.23.0)"]
symfont = ["sympy"]
type1 = ["xattr ; sys_platform == \"darwin\""]
unicode = ["unicodedata2 (>=15.1.0) ; python_version <= \"3.12\""]
woff = ["brotli (>=1.0.1) ; platform_python_implementation == \"CPython\"", "brotlicffi (>=0.8.0) ; platform_python_implementation != \"CPython\"", "zopfli (>=0.1.4)"]
[[package]]
name = "freezegun"
version = "1.5.1"
@@ -2787,6 +2976,117 @@ files = [
[package.dependencies]
referencing = ">=0.31.0"
[[package]]
name = "kiwisolver"
version = "1.4.9"
description = "A fast implementation of the Cassowary constraint solver"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "kiwisolver-1.4.9-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:b4b4d74bda2b8ebf4da5bd42af11d02d04428b2c32846e4c2c93219df8a7987b"},
{file = "kiwisolver-1.4.9-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:fb3b8132019ea572f4611d770991000d7f58127560c4889729248eb5852a102f"},
{file = "kiwisolver-1.4.9-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:84fd60810829c27ae375114cd379da1fa65e6918e1da405f356a775d49a62bcf"},
{file = "kiwisolver-1.4.9-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b78efa4c6e804ecdf727e580dbb9cba85624d2e1c6b5cb059c66290063bd99a9"},
{file = "kiwisolver-1.4.9-cp310-cp310-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d4efec7bcf21671db6a3294ff301d2fc861c31faa3c8740d1a94689234d1b415"},
{file = "kiwisolver-1.4.9-cp310-cp310-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:90f47e70293fc3688b71271100a1a5453aa9944a81d27ff779c108372cf5567b"},
{file = "kiwisolver-1.4.9-cp310-cp310-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:8fdca1def57a2e88ef339de1737a1449d6dbf5fab184c54a1fca01d541317154"},
{file = "kiwisolver-1.4.9-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:9cf554f21be770f5111a1690d42313e140355e687e05cf82cb23d0a721a64a48"},
{file = "kiwisolver-1.4.9-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:fc1795ac5cd0510207482c3d1d3ed781143383b8cfd36f5c645f3897ce066220"},
{file = "kiwisolver-1.4.9-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:ccd09f20ccdbbd341b21a67ab50a119b64a403b09288c27481575105283c1586"},
{file = "kiwisolver-1.4.9-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:540c7c72324d864406a009d72f5d6856f49693db95d1fbb46cf86febef873634"},
{file = "kiwisolver-1.4.9-cp310-cp310-win_amd64.whl", hash = "sha256:ede8c6d533bc6601a47ad4046080d36b8fc99f81e6f1c17b0ac3c2dc91ac7611"},
{file = "kiwisolver-1.4.9-cp310-cp310-win_arm64.whl", hash = "sha256:7b4da0d01ac866a57dd61ac258c5607b4cd677f63abaec7b148354d2b2cdd536"},
{file = "kiwisolver-1.4.9-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:eb14a5da6dc7642b0f3a18f13654847cd8b7a2550e2645a5bda677862b03ba16"},
{file = "kiwisolver-1.4.9-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:39a219e1c81ae3b103643d2aedb90f1ef22650deb266ff12a19e7773f3e5f089"},
{file = "kiwisolver-1.4.9-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2405a7d98604b87f3fc28b1716783534b1b4b8510d8142adca34ee0bc3c87543"},
{file = "kiwisolver-1.4.9-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:dc1ae486f9abcef254b5618dfb4113dd49f94c68e3e027d03cf0143f3f772b61"},
{file = "kiwisolver-1.4.9-cp311-cp311-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8a1f570ce4d62d718dce3f179ee78dac3b545ac16c0c04bb363b7607a949c0d1"},
{file = "kiwisolver-1.4.9-cp311-cp311-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:cb27e7b78d716c591e88e0a09a2139c6577865d7f2e152488c2cc6257f460872"},
{file = "kiwisolver-1.4.9-cp311-cp311-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:15163165efc2f627eb9687ea5f3a28137217d217ac4024893d753f46bce9de26"},
{file = "kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:bdee92c56a71d2b24c33a7d4c2856bd6419d017e08caa7802d2963870e315028"},
{file = "kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:412f287c55a6f54b0650bd9b6dce5aceddb95864a1a90c87af16979d37c89771"},
{file = "kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:2c93f00dcba2eea70af2be5f11a830a742fe6b579a1d4e00f47760ef13be247a"},
{file = "kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f117e1a089d9411663a3207ba874f31be9ac8eaa5b533787024dc07aeb74f464"},
{file = "kiwisolver-1.4.9-cp311-cp311-win_amd64.whl", hash = "sha256:be6a04e6c79819c9a8c2373317d19a96048e5a3f90bec587787e86a1153883c2"},
{file = "kiwisolver-1.4.9-cp311-cp311-win_arm64.whl", hash = "sha256:0ae37737256ba2de764ddc12aed4956460277f00c4996d51a197e72f62f5eec7"},
{file = "kiwisolver-1.4.9-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:ac5a486ac389dddcc5bef4f365b6ae3ffff2c433324fb38dd35e3fab7c957999"},
{file = "kiwisolver-1.4.9-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f2ba92255faa7309d06fe44c3a4a97efe1c8d640c2a79a5ef728b685762a6fd2"},
{file = "kiwisolver-1.4.9-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:4a2899935e724dd1074cb568ce7ac0dce28b2cd6ab539c8e001a8578eb106d14"},
{file = "kiwisolver-1.4.9-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f6008a4919fdbc0b0097089f67a1eb55d950ed7e90ce2cc3e640abadd2757a04"},
{file = "kiwisolver-1.4.9-cp312-cp312-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:67bb8b474b4181770f926f7b7d2f8c0248cbcb78b660fdd41a47054b28d2a752"},
{file = "kiwisolver-1.4.9-cp312-cp312-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:2327a4a30d3ee07d2fbe2e7933e8a37c591663b96ce42a00bc67461a87d7df77"},
{file = "kiwisolver-1.4.9-cp312-cp312-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:7a08b491ec91b1d5053ac177afe5290adacf1f0f6307d771ccac5de30592d198"},
{file = "kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d8fc5c867c22b828001b6a38d2eaeb88160bf5783c6cb4a5e440efc981ce286d"},
{file = "kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:3b3115b2581ea35bb6d1f24a4c90af37e5d9b49dcff267eeed14c3893c5b86ab"},
{file = "kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:858e4c22fb075920b96a291928cb7dea5644e94c0ee4fcd5af7e865655e4ccf2"},
{file = "kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:ed0fecd28cc62c54b262e3736f8bb2512d8dcfdc2bcf08be5f47f96bf405b145"},
{file = "kiwisolver-1.4.9-cp312-cp312-win_amd64.whl", hash = "sha256:f68208a520c3d86ea51acf688a3e3002615a7f0238002cccc17affecc86a8a54"},
{file = "kiwisolver-1.4.9-cp312-cp312-win_arm64.whl", hash = "sha256:2c1a4f57df73965f3f14df20b80ee29e6a7930a57d2d9e8491a25f676e197c60"},
{file = "kiwisolver-1.4.9-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:a5d0432ccf1c7ab14f9949eec60c5d1f924f17c037e9f8b33352fa05799359b8"},
{file = "kiwisolver-1.4.9-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:efb3a45b35622bb6c16dbfab491a8f5a391fe0e9d45ef32f4df85658232ca0e2"},
{file = "kiwisolver-1.4.9-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:1a12cf6398e8a0a001a059747a1cbf24705e18fe413bc22de7b3d15c67cffe3f"},
{file = "kiwisolver-1.4.9-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:b67e6efbf68e077dd71d1a6b37e43e1a99d0bff1a3d51867d45ee8908b931098"},
{file = "kiwisolver-1.4.9-cp313-cp313-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5656aa670507437af0207645273ccdfee4f14bacd7f7c67a4306d0dcaeaf6eed"},
{file = "kiwisolver-1.4.9-cp313-cp313-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:bfc08add558155345129c7803b3671cf195e6a56e7a12f3dde7c57d9b417f525"},
{file = "kiwisolver-1.4.9-cp313-cp313-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:40092754720b174e6ccf9e845d0d8c7d8e12c3d71e7fc35f55f3813e96376f78"},
{file = "kiwisolver-1.4.9-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:497d05f29a1300d14e02e6441cf0f5ee81c1ff5a304b0d9fb77423974684e08b"},
{file = "kiwisolver-1.4.9-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:bdd1a81a1860476eb41ac4bc1e07b3f07259e6d55bbf739b79c8aaedcf512799"},
{file = "kiwisolver-1.4.9-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:e6b93f13371d341afee3be9f7c5964e3fe61d5fa30f6a30eb49856935dfe4fc3"},
{file = "kiwisolver-1.4.9-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:d75aa530ccfaa593da12834b86a0724f58bff12706659baa9227c2ccaa06264c"},
{file = "kiwisolver-1.4.9-cp313-cp313-win_amd64.whl", hash = "sha256:dd0a578400839256df88c16abddf9ba14813ec5f21362e1fe65022e00c883d4d"},
{file = "kiwisolver-1.4.9-cp313-cp313-win_arm64.whl", hash = "sha256:d4188e73af84ca82468f09cadc5ac4db578109e52acb4518d8154698d3a87ca2"},
{file = "kiwisolver-1.4.9-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:5a0f2724dfd4e3b3ac5a82436a8e6fd16baa7d507117e4279b660fe8ca38a3a1"},
{file = "kiwisolver-1.4.9-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:1b11d6a633e4ed84fc0ddafd4ebfd8ea49b3f25082c04ad12b8315c11d504dc1"},
{file = "kiwisolver-1.4.9-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:61874cdb0a36016354853593cffc38e56fc9ca5aa97d2c05d3dcf6922cd55a11"},
{file = "kiwisolver-1.4.9-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:60c439763a969a6af93b4881db0eed8fadf93ee98e18cbc35bc8da868d0c4f0c"},
{file = "kiwisolver-1.4.9-cp313-cp313t-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:92a2f997387a1b79a75e7803aa7ded2cfbe2823852ccf1ba3bcf613b62ae3197"},
{file = "kiwisolver-1.4.9-cp313-cp313t-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:a31d512c812daea6d8b3be3b2bfcbeb091dbb09177706569bcfc6240dcf8b41c"},
{file = "kiwisolver-1.4.9-cp313-cp313t-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:52a15b0f35dad39862d376df10c5230155243a2c1a436e39eb55623ccbd68185"},
{file = "kiwisolver-1.4.9-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:a30fd6fdef1430fd9e1ba7b3398b5ee4e2887783917a687d86ba69985fb08748"},
{file = "kiwisolver-1.4.9-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:cc9617b46837c6468197b5945e196ee9ca43057bb7d9d1ae688101e4e1dddf64"},
{file = "kiwisolver-1.4.9-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:0ab74e19f6a2b027ea4f845a78827969af45ce790e6cb3e1ebab71bdf9f215ff"},
{file = "kiwisolver-1.4.9-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:dba5ee5d3981160c28d5490f0d1b7ed730c22470ff7f6cc26cfcfaacb9896a07"},
{file = "kiwisolver-1.4.9-cp313-cp313t-win_arm64.whl", hash = "sha256:0749fd8f4218ad2e851e11cc4dc05c7cbc0cbc4267bdfdb31782e65aace4ee9c"},
{file = "kiwisolver-1.4.9-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:9928fe1eb816d11ae170885a74d074f57af3a0d65777ca47e9aeb854a1fba386"},
{file = "kiwisolver-1.4.9-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:d0005b053977e7b43388ddec89fa567f43d4f6d5c2c0affe57de5ebf290dc552"},
{file = "kiwisolver-1.4.9-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:2635d352d67458b66fd0667c14cb1d4145e9560d503219034a18a87e971ce4f3"},
{file = "kiwisolver-1.4.9-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:767c23ad1c58c9e827b649a9ab7809fd5fd9db266a9cf02b0e926ddc2c680d58"},
{file = "kiwisolver-1.4.9-cp314-cp314-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:72d0eb9fba308b8311685c2268cf7d0a0639a6cd027d8128659f72bdd8a024b4"},
{file = "kiwisolver-1.4.9-cp314-cp314-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f68e4f3eeca8fb22cc3d731f9715a13b652795ef657a13df1ad0c7dc0e9731df"},
{file = "kiwisolver-1.4.9-cp314-cp314-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d84cd4061ae292d8ac367b2c3fa3aad11cb8625a95d135fe93f286f914f3f5a6"},
{file = "kiwisolver-1.4.9-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:a60ea74330b91bd22a29638940d115df9dc00af5035a9a2a6ad9399ffb4ceca5"},
{file = "kiwisolver-1.4.9-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:ce6a3a4e106cf35c2d9c4fa17c05ce0b180db622736845d4315519397a77beaf"},
{file = "kiwisolver-1.4.9-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:77937e5e2a38a7b48eef0585114fe7930346993a88060d0bf886086d2aa49ef5"},
{file = "kiwisolver-1.4.9-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:24c175051354f4a28c5d6a31c93906dc653e2bf234e8a4bbfb964892078898ce"},
{file = "kiwisolver-1.4.9-cp314-cp314-win_amd64.whl", hash = "sha256:0763515d4df10edf6d06a3c19734e2566368980d21ebec439f33f9eb936c07b7"},
{file = "kiwisolver-1.4.9-cp314-cp314-win_arm64.whl", hash = "sha256:0e4e2bf29574a6a7b7f6cb5fa69293b9f96c928949ac4a53ba3f525dffb87f9c"},
{file = "kiwisolver-1.4.9-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:d976bbb382b202f71c67f77b0ac11244021cfa3f7dfd9e562eefcea2df711548"},
{file = "kiwisolver-1.4.9-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:2489e4e5d7ef9a1c300a5e0196e43d9c739f066ef23270607d45aba368b91f2d"},
{file = "kiwisolver-1.4.9-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:e2ea9f7ab7fbf18fffb1b5434ce7c69a07582f7acc7717720f1d69f3e806f90c"},
{file = "kiwisolver-1.4.9-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:b34e51affded8faee0dfdb705416153819d8ea9250bbbf7ea1b249bdeb5f1122"},
{file = "kiwisolver-1.4.9-cp314-cp314t-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d8aacd3d4b33b772542b2e01beb50187536967b514b00003bdda7589722d2a64"},
{file = "kiwisolver-1.4.9-cp314-cp314t-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:7cf974dd4e35fa315563ac99d6287a1024e4dc2077b8a7d7cd3d2fb65d283134"},
{file = "kiwisolver-1.4.9-cp314-cp314t-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:85bd218b5ecfbee8c8a82e121802dcb519a86044c9c3b2e4aef02fa05c6da370"},
{file = "kiwisolver-1.4.9-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:0856e241c2d3df4efef7c04a1e46b1936b6120c9bcf36dd216e3acd84bc4fb21"},
{file = "kiwisolver-1.4.9-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = "sha256:9af39d6551f97d31a4deebeac6f45b156f9755ddc59c07b402c148f5dbb6482a"},
{file = "kiwisolver-1.4.9-cp314-cp314t-musllinux_1_2_s390x.whl", hash = "sha256:bb4ae2b57fc1d8cbd1cf7b1d9913803681ffa903e7488012be5b76dedf49297f"},
{file = "kiwisolver-1.4.9-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:aedff62918805fb62d43a4aa2ecd4482c380dc76cd31bd7c8878588a61bd0369"},
{file = "kiwisolver-1.4.9-cp314-cp314t-win_amd64.whl", hash = "sha256:1fa333e8b2ce4d9660f2cda9c0e1b6bafcfb2457a9d259faa82289e73ec24891"},
{file = "kiwisolver-1.4.9-cp314-cp314t-win_arm64.whl", hash = "sha256:4a48a2ce79d65d363597ef7b567ce3d14d68783d2b2263d98db3d9477805ba32"},
{file = "kiwisolver-1.4.9-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:4d1d9e582ad4d63062d34077a9a1e9f3c34088a2ec5135b1f7190c07cf366527"},
{file = "kiwisolver-1.4.9-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:deed0c7258ceb4c44ad5ec7d9918f9f14fd05b2be86378d86cf50e63d1e7b771"},
{file = "kiwisolver-1.4.9-pp310-pypy310_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:0a590506f303f512dff6b7f75fd2fd18e16943efee932008fe7140e5fa91d80e"},
{file = "kiwisolver-1.4.9-pp310-pypy310_pp73-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e09c2279a4d01f099f52d5c4b3d9e208e91edcbd1a175c9662a8b16e000fece9"},
{file = "kiwisolver-1.4.9-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:c9e7cdf45d594ee04d5be1b24dd9d49f3d1590959b2271fb30b5ca2b262c00fb"},
{file = "kiwisolver-1.4.9-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:720e05574713db64c356e86732c0f3c5252818d05f9df320f0ad8380641acea5"},
{file = "kiwisolver-1.4.9-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:17680d737d5335b552994a2008fab4c851bcd7de33094a82067ef3a576ff02fa"},
{file = "kiwisolver-1.4.9-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:85b5352f94e490c028926ea567fc569c52ec79ce131dadb968d3853e809518c2"},
{file = "kiwisolver-1.4.9-pp311-pypy311_pp73-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:464415881e4801295659462c49461a24fb107c140de781d55518c4b80cb6790f"},
{file = "kiwisolver-1.4.9-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:fb940820c63a9590d31d88b815e7a3aa5915cad3ce735ab45f0c730b39547de1"},
{file = "kiwisolver-1.4.9.tar.gz", hash = "sha256:c3b22c26c6fd6811b0ae8363b95ca8ce4ea3c202d3d0975b2914310ceb1bcc4d"},
]
[[package]]
name = "kombu"
version = "5.5.4"
@@ -3137,6 +3437,85 @@ dev = ["marshmallow[tests]", "pre-commit (>=3.5,<5.0)", "tox"]
docs = ["autodocsumm (==0.2.14)", "furo (==2024.8.6)", "sphinx (==8.1.3)", "sphinx-copybutton (==0.5.2)", "sphinx-issues (==5.0.0)", "sphinxext-opengraph (==0.9.1)"]
tests = ["pytest", "simplejson"]
[[package]]
name = "matplotlib"
version = "3.10.6"
description = "Python plotting package"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "matplotlib-3.10.6-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:bc7316c306d97463a9866b89d5cc217824e799fa0de346c8f68f4f3d27c8693d"},
{file = "matplotlib-3.10.6-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d00932b0d160ef03f59f9c0e16d1e3ac89646f7785165ce6ad40c842db16cc2e"},
{file = "matplotlib-3.10.6-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:8fa4c43d6bfdbfec09c733bca8667de11bfa4970e8324c471f3a3632a0301c15"},
{file = "matplotlib-3.10.6-cp310-cp310-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ea117a9c1627acaa04dbf36265691921b999cbf515a015298e54e1a12c3af837"},
{file = "matplotlib-3.10.6-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:08fc803293b4e1694ee325896030de97f74c141ccff0be886bb5915269247676"},
{file = "matplotlib-3.10.6-cp310-cp310-win_amd64.whl", hash = "sha256:2adf92d9b7527fbfb8818e050260f0ebaa460f79d61546374ce73506c9421d09"},
{file = "matplotlib-3.10.6-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:905b60d1cb0ee604ce65b297b61cf8be9f4e6cfecf95a3fe1c388b5266bc8f4f"},
{file = "matplotlib-3.10.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7bac38d816637343e53d7185d0c66677ff30ffb131044a81898b5792c956ba76"},
{file = "matplotlib-3.10.6-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:942a8de2b5bfff1de31d95722f702e2966b8a7e31f4e68f7cd963c7cd8861cf6"},
{file = "matplotlib-3.10.6-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a3276c85370bc0dfca051ec65c5817d1e0f8f5ce1b7787528ec8ed2d524bbc2f"},
{file = "matplotlib-3.10.6-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9df5851b219225731f564e4b9e7f2ac1e13c9e6481f941b5631a0f8e2d9387ce"},
{file = "matplotlib-3.10.6-cp311-cp311-win_amd64.whl", hash = "sha256:abb5d9478625dd9c9eb51a06d39aae71eda749ae9b3138afb23eb38824026c7e"},
{file = "matplotlib-3.10.6-cp311-cp311-win_arm64.whl", hash = "sha256:886f989ccfae63659183173bb3fced7fd65e9eb793c3cc21c273add368536951"},
{file = "matplotlib-3.10.6-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:31ca662df6a80bd426f871105fdd69db7543e28e73a9f2afe80de7e531eb2347"},
{file = "matplotlib-3.10.6-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1678bb61d897bb4ac4757b5ecfb02bfb3fddf7f808000fb81e09c510712fda75"},
{file = "matplotlib-3.10.6-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:56cd2d20842f58c03d2d6e6c1f1cf5548ad6f66b91e1e48f814e4fb5abd1cb95"},
{file = "matplotlib-3.10.6-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:662df55604a2f9a45435566d6e2660e41efe83cd94f4288dfbf1e6d1eae4b0bb"},
{file = "matplotlib-3.10.6-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:08f141d55148cd1fc870c3387d70ca4df16dee10e909b3b038782bd4bda6ea07"},
{file = "matplotlib-3.10.6-cp312-cp312-win_amd64.whl", hash = "sha256:590f5925c2d650b5c9d813c5b3b5fc53f2929c3f8ef463e4ecfa7e052044fb2b"},
{file = "matplotlib-3.10.6-cp312-cp312-win_arm64.whl", hash = "sha256:f44c8d264a71609c79a78d50349e724f5d5fc3684ead7c2a473665ee63d868aa"},
{file = "matplotlib-3.10.6-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:819e409653c1106c8deaf62e6de6b8611449c2cd9939acb0d7d4e57a3d95cc7a"},
{file = "matplotlib-3.10.6-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:59c8ac8382fefb9cb71308dde16a7c487432f5255d8f1fd32473523abecfecdf"},
{file = "matplotlib-3.10.6-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:84e82d9e0fd70c70bc55739defbd8055c54300750cbacf4740c9673a24d6933a"},
{file = "matplotlib-3.10.6-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:25f7a3eb42d6c1c56e89eacd495661fc815ffc08d9da750bca766771c0fd9110"},
{file = "matplotlib-3.10.6-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:f9c862d91ec0b7842920a4cfdaaec29662195301914ea54c33e01f1a28d014b2"},
{file = "matplotlib-3.10.6-cp313-cp313-win_amd64.whl", hash = "sha256:1b53bd6337eba483e2e7d29c5ab10eee644bc3a2491ec67cc55f7b44583ffb18"},
{file = "matplotlib-3.10.6-cp313-cp313-win_arm64.whl", hash = "sha256:cbd5eb50b7058b2892ce45c2f4e92557f395c9991f5c886d1bb74a1582e70fd6"},
{file = "matplotlib-3.10.6-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:acc86dd6e0e695c095001a7fccff158c49e45e0758fdf5dcdbb0103318b59c9f"},
{file = "matplotlib-3.10.6-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:e228cd2ffb8f88b7d0b29e37f68ca9aaf83e33821f24a5ccc4f082dd8396bc27"},
{file = "matplotlib-3.10.6-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:658bc91894adeab669cf4bb4a186d049948262987e80f0857216387d7435d833"},
{file = "matplotlib-3.10.6-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8913b7474f6dd83ac444c9459c91f7f0f2859e839f41d642691b104e0af056aa"},
{file = "matplotlib-3.10.6-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:091cea22e059b89f6d7d1a18e2c33a7376c26eee60e401d92a4d6726c4e12706"},
{file = "matplotlib-3.10.6-cp313-cp313t-win_amd64.whl", hash = "sha256:491e25e02a23d7207629d942c666924a6b61e007a48177fdd231a0097b7f507e"},
{file = "matplotlib-3.10.6-cp313-cp313t-win_arm64.whl", hash = "sha256:3d80d60d4e54cda462e2cd9a086d85cd9f20943ead92f575ce86885a43a565d5"},
{file = "matplotlib-3.10.6-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:70aaf890ce1d0efd482df969b28a5b30ea0b891224bb315810a3940f67182899"},
{file = "matplotlib-3.10.6-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1565aae810ab79cb72e402b22facfa6501365e73ebab70a0fdfb98488d2c3c0c"},
{file = "matplotlib-3.10.6-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f3b23315a01981689aa4e1a179dbf6ef9fbd17143c3eea77548c2ecfb0499438"},
{file = "matplotlib-3.10.6-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:30fdd37edf41a4e6785f9b37969de57aea770696cb637d9946eb37470c94a453"},
{file = "matplotlib-3.10.6-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:bc31e693da1c08012c764b053e702c1855378e04102238e6a5ee6a7117c53a47"},
{file = "matplotlib-3.10.6-cp314-cp314-win_amd64.whl", hash = "sha256:05be9bdaa8b242bc6ff96330d18c52f1fc59c6fb3a4dd411d953d67e7e1baf98"},
{file = "matplotlib-3.10.6-cp314-cp314-win_arm64.whl", hash = "sha256:f56a0d1ab05d34c628592435781d185cd99630bdfd76822cd686fb5a0aecd43a"},
{file = "matplotlib-3.10.6-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:94f0b4cacb23763b64b5dace50d5b7bfe98710fed5f0cef5c08135a03399d98b"},
{file = "matplotlib-3.10.6-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:cc332891306b9fb39462673d8225d1b824c89783fee82840a709f96714f17a5c"},
{file = "matplotlib-3.10.6-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee1d607b3fb1590deb04b69f02ea1d53ed0b0bf75b2b1a5745f269afcbd3cdd3"},
{file = "matplotlib-3.10.6-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:376a624a218116461696b27b2bbf7a8945053e6d799f6502fc03226d077807bf"},
{file = "matplotlib-3.10.6-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:83847b47f6524c34b4f2d3ce726bb0541c48c8e7692729865c3df75bfa0f495a"},
{file = "matplotlib-3.10.6-cp314-cp314t-win_amd64.whl", hash = "sha256:c7e0518e0d223683532a07f4b512e2e0729b62674f1b3a1a69869f98e6b1c7e3"},
{file = "matplotlib-3.10.6-cp314-cp314t-win_arm64.whl", hash = "sha256:4dd83e029f5b4801eeb87c64efd80e732452781c16a9cf7415b7b63ec8f374d7"},
{file = "matplotlib-3.10.6-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:13fcd07ccf17e354398358e0307a1f53f5325dca22982556ddb9c52837b5af41"},
{file = "matplotlib-3.10.6-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:470fc846d59d1406e34fa4c32ba371039cd12c2fe86801159a965956f2575bd1"},
{file = "matplotlib-3.10.6-pp310-pypy310_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f7173f8551b88f4ef810a94adae3128c2530e0d07529f7141be7f8d8c365f051"},
{file = "matplotlib-3.10.6-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:f2d684c3204fa62421bbf770ddfebc6b50130f9cad65531eeba19236d73bb488"},
{file = "matplotlib-3.10.6-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:6f4a69196e663a41d12a728fab8751177215357906436804217d6d9cf0d4d6cf"},
{file = "matplotlib-3.10.6-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:4d6ca6ef03dfd269f4ead566ec6f3fb9becf8dab146fb999022ed85ee9f6b3eb"},
{file = "matplotlib-3.10.6.tar.gz", hash = "sha256:ec01b645840dd1996df21ee37f208cd8ba57644779fa20464010638013d3203c"},
]
[package.dependencies]
contourpy = ">=1.0.1"
cycler = ">=0.10"
fonttools = ">=4.22.0"
kiwisolver = ">=1.3.1"
numpy = ">=1.23"
packaging = ">=20.0"
pillow = ">=8"
pyparsing = ">=2.3.1"
python-dateutil = ">=2.7"
[package.extras]
dev = ["meson-python (>=0.13.1,<0.17.0)", "pybind11 (>=2.13.2,!=2.13.3)", "setuptools (>=64)", "setuptools_scm (>=7)"]
[[package]]
name = "mccabe"
version = "0.7.0"
@@ -3857,6 +4236,131 @@ files = [
[package.dependencies]
setuptools = "*"
[[package]]
name = "pillow"
version = "11.3.0"
description = "Python Imaging Library (Fork)"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pillow-11.3.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1b9c17fd4ace828b3003dfd1e30bff24863e0eb59b535e8f80194d9cc7ecf860"},
{file = "pillow-11.3.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:65dc69160114cdd0ca0f35cb434633c75e8e7fad4cf855177a05bf38678f73ad"},
{file = "pillow-11.3.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7107195ddc914f656c7fc8e4a5e1c25f32e9236ea3ea860f257b0436011fddd0"},
{file = "pillow-11.3.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:cc3e831b563b3114baac7ec2ee86819eb03caa1a2cef0b481a5675b59c4fe23b"},
{file = "pillow-11.3.0-cp310-cp310-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f1f182ebd2303acf8c380a54f615ec883322593320a9b00438eb842c1f37ae50"},
{file = "pillow-11.3.0-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4445fa62e15936a028672fd48c4c11a66d641d2c05726c7ec1f8ba6a572036ae"},
{file = "pillow-11.3.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:71f511f6b3b91dd543282477be45a033e4845a40278fa8dcdbfdb07109bf18f9"},
{file = "pillow-11.3.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:040a5b691b0713e1f6cbe222e0f4f74cd233421e105850ae3b3c0ceda520f42e"},
{file = "pillow-11.3.0-cp310-cp310-win32.whl", hash = "sha256:89bd777bc6624fe4115e9fac3352c79ed60f3bb18651420635f26e643e3dd1f6"},
{file = "pillow-11.3.0-cp310-cp310-win_amd64.whl", hash = "sha256:19d2ff547c75b8e3ff46f4d9ef969a06c30ab2d4263a9e287733aa8b2429ce8f"},
{file = "pillow-11.3.0-cp310-cp310-win_arm64.whl", hash = "sha256:819931d25e57b513242859ce1876c58c59dc31587847bf74cfe06b2e0cb22d2f"},
{file = "pillow-11.3.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:1cd110edf822773368b396281a2293aeb91c90a2db00d78ea43e7e861631b722"},
{file = "pillow-11.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9c412fddd1b77a75aa904615ebaa6001f169b26fd467b4be93aded278266b288"},
{file = "pillow-11.3.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7d1aa4de119a0ecac0a34a9c8bde33f34022e2e8f99104e47a3ca392fd60e37d"},
{file = "pillow-11.3.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:91da1d88226663594e3f6b4b8c3c8d85bd504117d043740a8e0ec449087cc494"},
{file = "pillow-11.3.0-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:643f189248837533073c405ec2f0bb250ba54598cf80e8c1e043381a60632f58"},
{file = "pillow-11.3.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:106064daa23a745510dabce1d84f29137a37224831d88eb4ce94bb187b1d7e5f"},
{file = "pillow-11.3.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:cd8ff254faf15591e724dc7c4ddb6bf4793efcbe13802a4ae3e863cd300b493e"},
{file = "pillow-11.3.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:932c754c2d51ad2b2271fd01c3d121daaa35e27efae2a616f77bf164bc0b3e94"},
{file = "pillow-11.3.0-cp311-cp311-win32.whl", hash = "sha256:b4b8f3efc8d530a1544e5962bd6b403d5f7fe8b9e08227c6b255f98ad82b4ba0"},
{file = "pillow-11.3.0-cp311-cp311-win_amd64.whl", hash = "sha256:1a992e86b0dd7aeb1f053cd506508c0999d710a8f07b4c791c63843fc6a807ac"},
{file = "pillow-11.3.0-cp311-cp311-win_arm64.whl", hash = "sha256:30807c931ff7c095620fe04448e2c2fc673fcbb1ffe2a7da3fb39613489b1ddd"},
{file = "pillow-11.3.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:fdae223722da47b024b867c1ea0be64e0df702c5e0a60e27daad39bf960dd1e4"},
{file = "pillow-11.3.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:921bd305b10e82b4d1f5e802b6850677f965d8394203d182f078873851dada69"},
{file = "pillow-11.3.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:eb76541cba2f958032d79d143b98a3a6b3ea87f0959bbe256c0b5e416599fd5d"},
{file = "pillow-11.3.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:67172f2944ebba3d4a7b54f2e95c786a3a50c21b88456329314caaa28cda70f6"},
{file = "pillow-11.3.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:97f07ed9f56a3b9b5f49d3661dc9607484e85c67e27f3e8be2c7d28ca032fec7"},
{file = "pillow-11.3.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:676b2815362456b5b3216b4fd5bd89d362100dc6f4945154ff172e206a22c024"},
{file = "pillow-11.3.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3e184b2f26ff146363dd07bde8b711833d7b0202e27d13540bfe2e35a323a809"},
{file = "pillow-11.3.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:6be31e3fc9a621e071bc17bb7de63b85cbe0bfae91bb0363c893cbe67247780d"},
{file = "pillow-11.3.0-cp312-cp312-win32.whl", hash = "sha256:7b161756381f0918e05e7cb8a371fff367e807770f8fe92ecb20d905d0e1c149"},
{file = "pillow-11.3.0-cp312-cp312-win_amd64.whl", hash = "sha256:a6444696fce635783440b7f7a9fc24b3ad10a9ea3f0ab66c5905be1c19ccf17d"},
{file = "pillow-11.3.0-cp312-cp312-win_arm64.whl", hash = "sha256:2aceea54f957dd4448264f9bf40875da0415c83eb85f55069d89c0ed436e3542"},
{file = "pillow-11.3.0-cp313-cp313-ios_13_0_arm64_iphoneos.whl", hash = "sha256:1c627742b539bba4309df89171356fcb3cc5a9178355b2727d1b74a6cf155fbd"},
{file = "pillow-11.3.0-cp313-cp313-ios_13_0_arm64_iphonesimulator.whl", hash = "sha256:30b7c02f3899d10f13d7a48163c8969e4e653f8b43416d23d13d1bbfdc93b9f8"},
{file = "pillow-11.3.0-cp313-cp313-ios_13_0_x86_64_iphonesimulator.whl", hash = "sha256:7859a4cc7c9295f5838015d8cc0a9c215b77e43d07a25e460f35cf516df8626f"},
{file = "pillow-11.3.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ec1ee50470b0d050984394423d96325b744d55c701a439d2bd66089bff963d3c"},
{file = "pillow-11.3.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:7db51d222548ccfd274e4572fdbf3e810a5e66b00608862f947b163e613b67dd"},
{file = "pillow-11.3.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:2d6fcc902a24ac74495df63faad1884282239265c6839a0a6416d33faedfae7e"},
{file = "pillow-11.3.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f0f5d8f4a08090c6d6d578351a2b91acf519a54986c055af27e7a93feae6d3f1"},
{file = "pillow-11.3.0-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c37d8ba9411d6003bba9e518db0db0c58a680ab9fe5179f040b0463644bc9805"},
{file = "pillow-11.3.0-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:13f87d581e71d9189ab21fe0efb5a23e9f28552d5be6979e84001d3b8505abe8"},
{file = "pillow-11.3.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:023f6d2d11784a465f09fd09a34b150ea4672e85fb3d05931d89f373ab14abb2"},
{file = "pillow-11.3.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:45dfc51ac5975b938e9809451c51734124e73b04d0f0ac621649821a63852e7b"},
{file = "pillow-11.3.0-cp313-cp313-win32.whl", hash = "sha256:a4d336baed65d50d37b88ca5b60c0fa9d81e3a87d4a7930d3880d1624d5b31f3"},
{file = "pillow-11.3.0-cp313-cp313-win_amd64.whl", hash = "sha256:0bce5c4fd0921f99d2e858dc4d4d64193407e1b99478bc5cacecba2311abde51"},
{file = "pillow-11.3.0-cp313-cp313-win_arm64.whl", hash = "sha256:1904e1264881f682f02b7f8167935cce37bc97db457f8e7849dc3a6a52b99580"},
{file = "pillow-11.3.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:4c834a3921375c48ee6b9624061076bc0a32a60b5532b322cc0ea64e639dd50e"},
{file = "pillow-11.3.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:5e05688ccef30ea69b9317a9ead994b93975104a677a36a8ed8106be9260aa6d"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1019b04af07fc0163e2810167918cb5add8d74674b6267616021ab558dc98ced"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f944255db153ebb2b19c51fe85dd99ef0ce494123f21b9db4877ffdfc5590c7c"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1f85acb69adf2aaee8b7da124efebbdb959a104db34d3a2cb0f3793dbae422a8"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:05f6ecbeff5005399bb48d198f098a9b4b6bdf27b8487c7f38ca16eeb070cd59"},
{file = "pillow-11.3.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:a7bc6e6fd0395bc052f16b1a8670859964dbd7003bd0af2ff08342eb6e442cfe"},
{file = "pillow-11.3.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:83e1b0161c9d148125083a35c1c5a89db5b7054834fd4387499e06552035236c"},
{file = "pillow-11.3.0-cp313-cp313t-win32.whl", hash = "sha256:2a3117c06b8fb646639dce83694f2f9eac405472713fcb1ae887469c0d4f6788"},
{file = "pillow-11.3.0-cp313-cp313t-win_amd64.whl", hash = "sha256:857844335c95bea93fb39e0fa2726b4d9d758850b34075a7e3ff4f4fa3aa3b31"},
{file = "pillow-11.3.0-cp313-cp313t-win_arm64.whl", hash = "sha256:8797edc41f3e8536ae4b10897ee2f637235c94f27404cac7297f7b607dd0716e"},
{file = "pillow-11.3.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:d9da3df5f9ea2a89b81bb6087177fb1f4d1c7146d583a3fe5c672c0d94e55e12"},
{file = "pillow-11.3.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:0b275ff9b04df7b640c59ec5a3cb113eefd3795a8df80bac69646ef699c6981a"},
{file = "pillow-11.3.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0743841cabd3dba6a83f38a92672cccbd69af56e3e91777b0ee7f4dba4385632"},
{file = "pillow-11.3.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:2465a69cf967b8b49ee1b96d76718cd98c4e925414ead59fdf75cf0fd07df673"},
{file = "pillow-11.3.0-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:41742638139424703b4d01665b807c6468e23e699e8e90cffefe291c5832b027"},
{file = "pillow-11.3.0-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:93efb0b4de7e340d99057415c749175e24c8864302369e05914682ba642e5d77"},
{file = "pillow-11.3.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7966e38dcd0fa11ca390aed7c6f20454443581d758242023cf36fcb319b1a874"},
{file = "pillow-11.3.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:98a9afa7b9007c67ed84c57c9e0ad86a6000da96eaa638e4f8abe5b65ff83f0a"},
{file = "pillow-11.3.0-cp314-cp314-win32.whl", hash = "sha256:02a723e6bf909e7cea0dac1b0e0310be9d7650cd66222a5f1c571455c0a45214"},
{file = "pillow-11.3.0-cp314-cp314-win_amd64.whl", hash = "sha256:a418486160228f64dd9e9efcd132679b7a02a5f22c982c78b6fc7dab3fefb635"},
{file = "pillow-11.3.0-cp314-cp314-win_arm64.whl", hash = "sha256:155658efb5e044669c08896c0c44231c5e9abcaadbc5cd3648df2f7c0b96b9a6"},
{file = "pillow-11.3.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:59a03cdf019efbfeeed910bf79c7c93255c3d54bc45898ac2a4140071b02b4ae"},
{file = "pillow-11.3.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:f8a5827f84d973d8636e9dc5764af4f0cf2318d26744b3d902931701b0d46653"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ee92f2fd10f4adc4b43d07ec5e779932b4eb3dbfbc34790ada5a6669bc095aa6"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c96d333dcf42d01f47b37e0979b6bd73ec91eae18614864622d9b87bbd5bbf36"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4c96f993ab8c98460cd0c001447bff6194403e8b1d7e149ade5f00594918128b"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:41342b64afeba938edb034d122b2dda5db2139b9a4af999729ba8818e0056477"},
{file = "pillow-11.3.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:068d9c39a2d1b358eb9f245ce7ab1b5c3246c7c8c7d9ba58cfa5b43146c06e50"},
{file = "pillow-11.3.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:a1bc6ba083b145187f648b667e05a2534ecc4b9f2784c2cbe3089e44868f2b9b"},
{file = "pillow-11.3.0-cp314-cp314t-win32.whl", hash = "sha256:118ca10c0d60b06d006be10a501fd6bbdfef559251ed31b794668ed569c87e12"},
{file = "pillow-11.3.0-cp314-cp314t-win_amd64.whl", hash = "sha256:8924748b688aa210d79883357d102cd64690e56b923a186f35a82cbc10f997db"},
{file = "pillow-11.3.0-cp314-cp314t-win_arm64.whl", hash = "sha256:79ea0d14d3ebad43ec77ad5272e6ff9bba5b679ef73375ea760261207fa8e0aa"},
{file = "pillow-11.3.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:48d254f8a4c776de343051023eb61ffe818299eeac478da55227d96e241de53f"},
{file = "pillow-11.3.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7aee118e30a4cf54fdd873bd3a29de51e29105ab11f9aad8c32123f58c8f8081"},
{file = "pillow-11.3.0-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:23cff760a9049c502721bdb743a7cb3e03365fafcdfc2ef9784610714166e5a4"},
{file = "pillow-11.3.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:6359a3bc43f57d5b375d1ad54a0074318a0844d11b76abccf478c37c986d3cfc"},
{file = "pillow-11.3.0-cp39-cp39-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:092c80c76635f5ecb10f3f83d76716165c96f5229addbd1ec2bdbbda7d496e06"},
{file = "pillow-11.3.0-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:cadc9e0ea0a2431124cde7e1697106471fc4c1da01530e679b2391c37d3fbb3a"},
{file = "pillow-11.3.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:6a418691000f2a418c9135a7cf0d797c1bb7d9a485e61fe8e7722845b95ef978"},
{file = "pillow-11.3.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:97afb3a00b65cc0804d1c7abddbf090a81eaac02768af58cbdcaaa0a931e0b6d"},
{file = "pillow-11.3.0-cp39-cp39-win32.whl", hash = "sha256:ea944117a7974ae78059fcc1800e5d3295172bb97035c0c1d9345fca1419da71"},
{file = "pillow-11.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:e5c5858ad8ec655450a7c7df532e9842cf8df7cc349df7225c60d5d348c8aada"},
{file = "pillow-11.3.0-cp39-cp39-win_arm64.whl", hash = "sha256:6abdbfd3aea42be05702a8dd98832329c167ee84400a1d1f61ab11437f1717eb"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:3cee80663f29e3843b68199b9d6f4f54bd1d4a6b59bdd91bceefc51238bcb967"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:b5f56c3f344f2ccaf0dd875d3e180f631dc60a51b314295a3e681fe8cf851fbe"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:e67d793d180c9df62f1f40aee3accca4829d3794c95098887edc18af4b8b780c"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d000f46e2917c705e9fb93a3606ee4a819d1e3aa7a9b442f6444f07e77cf5e25"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:527b37216b6ac3a12d7838dc3bd75208ec57c1c6d11ef01902266a5a0c14fc27"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:be5463ac478b623b9dd3937afd7fb7ab3d79dd290a28e2b6df292dc75063eb8a"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:8dc70ca24c110503e16918a658b869019126ecfe03109b754c402daff12b3d9f"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7c8ec7a017ad1bd562f93dbd8505763e688d388cde6e4a010ae1486916e713e6"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:9ab6ae226de48019caa8074894544af5b53a117ccb9d3b3dcb2871464c829438"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:fe27fb049cdcca11f11a7bfda64043c37b30e6b91f10cb5bab275806c32f6ab3"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:465b9e8844e3c3519a983d58b80be3f668e2a7a5db97f2784e7079fbc9f9822c"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5418b53c0d59b3824d05e029669efa023bbef0f3e92e75ec8428f3799487f361"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:504b6f59505f08ae014f724b6207ff6222662aab5cc9542577fb084ed0676ac7"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:c84d689db21a1c397d001aa08241044aa2069e7587b398c8cc63020390b1c1b8"},
{file = "pillow-11.3.0.tar.gz", hash = "sha256:3828ee7586cd0b2091b6209e5ad53e20d0649bbe87164a459d0676e035e8f523"},
]
[package.extras]
docs = ["furo", "olefile", "sphinx (>=8.2)", "sphinx-autobuild", "sphinx-copybutton", "sphinx-inline-tabs", "sphinxext-opengraph"]
fpx = ["olefile"]
mic = ["olefile"]
test-arrow = ["pyarrow"]
tests = ["check-manifest", "coverage (>=7.4.2)", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout", "pytest-xdist", "trove-classifiers (>=2024.10.12)"]
typing = ["typing-extensions ; python_version < \"3.10\""]
xmp = ["defusedxml"]
[[package]]
name = "platformdirs"
version = "4.3.8"
@@ -5016,6 +5520,29 @@ attrs = ">=22.2.0"
rpds-py = ">=0.7.0"
typing-extensions = {version = ">=4.4.0", markers = "python_version < \"3.13\""}
[[package]]
name = "reportlab"
version = "4.4.4"
description = "The Reportlab Toolkit"
optional = false
python-versions = "<4,>=3.9"
groups = ["main"]
files = [
{file = "reportlab-4.4.4-py3-none-any.whl", hash = "sha256:299b3b0534e7202bb94ed2ddcd7179b818dcda7de9d8518a57c85a58a1ebaadb"},
{file = "reportlab-4.4.4.tar.gz", hash = "sha256:cb2f658b7f4a15be2cc68f7203aa67faef67213edd4f2d4bdd3eb20dab75a80d"},
]
[package.dependencies]
charset-normalizer = "*"
pillow = ">=9.0.0"
[package.extras]
accel = ["rl_accel (>=0.9.0,<1.1)"]
bidi = ["rlbidi"]
pycairo = ["freetype-py (>=2.3.0,<2.4)", "rlPyCairo (>=0.2.0,<1)"]
renderpm = ["rl_renderPM (>=4.0.3,<4.1)"]
shaping = ["uharfbuzz"]
[[package]]
name = "requests"
version = "2.32.5"
@@ -6259,4 +6786,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.11,<3.13"
content-hash = "03442fd4673006c5a74374f90f53621fd1c9d117279fe6cc0355ef833eb7f9bb"
content-hash = "3c9164d668d37d6373eb5200bbe768232ead934d9312b9c68046b1df922789f3"
+3 -1
@@ -33,7 +33,9 @@ dependencies = [
"xmlsec==1.3.14",
"h2 (==4.3.0)",
"markdown (>=3.9,<4.0)",
"drf-simple-apikey (==2.2.1)"
"drf-simple-apikey (==2.2.1)",
"matplotlib (>=3.10.6,<4.0.0)",
"reportlab (>=4.4.4,<5.0.0)"
]
description = "Prowler's API (Django/DRF)"
license = "Apache-2.0"
+1
@@ -765,6 +765,7 @@ class ComplianceOverviewFilter(FilterSet):
class ScanSummaryFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
provider_id = UUIDFilter(field_name="scan__provider__id", lookup_expr="exact")
provider_id__in = UUIDInFilter(field_name="scan__provider__id", lookup_expr="in")
provider_type = ChoiceFilter(
field_name="scan__provider__provider", choices=Provider.ProviderChoices.choices
)
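The new provider_id__in filter accepts several provider UUIDs in one query parameter. A minimal client-side sketch, assuming django-filter's standard comma-splitting for "in" filters; the endpoint path, host, token, and UUIDs below are illustrative, not taken from this diff:

import requests

# Hypothetical request against an overview endpoint backed by ScanSummaryFilter.
params = {
    "filter[provider_id__in]": ",".join(
        [
            "11111111-1111-1111-1111-111111111111",
            "22222222-2222-2222-2222-222222222222",
        ]
    )
}
response = requests.get(
    "https://api.example.com/api/v1/overviews/findings",
    params=params,
    headers={"Authorization": "Bearer <token>"},
)
response.raise_for_status()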
@@ -0,0 +1,558 @@
import json
import logging
import time
from datetime import datetime, timezone
from typing import Dict, List, Set
from django.core.management.base import BaseCommand, CommandError
from django.db import transaction
from rich.align import Align
from rich.console import Console
from rich.logging import RichHandler
from rich.panel import Panel
from rich.progress import (
BarColumn,
Progress,
SpinnerColumn,
TaskProgressColumn,
TextColumn,
)
from rich.prompt import Confirm
from rich.table import Table
from rich.text import Text
from rich.theme import Theme
from ...db_router import MainRouter
from ...models import Scan, StateChoices
class Command(BaseCommand):
help = "Check for stuck scans and mark them as failed"
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.console = Console(theme=self.get_custom_theme())
self.logger = None
def get_custom_theme(self):
"""Create a custom theme without purple colors"""
return Theme(
{
"prompt.choices": "bright_cyan",
"prompt.default": "bright_white",
"progress.description": "bright_white",
"progress.percentage": "bright_cyan",
"progress.data.speed": "bright_green",
"progress.spinner": "bright_cyan",
}
)
def setup_logging(self, verbose=False):
"""Setup rich logging handler"""
if verbose:
logging.basicConfig(
level=logging.INFO,
format="%(message)s",
datefmt="[%X]",
handlers=[RichHandler(console=self.console, rich_tracebacks=True)],
)
self.logger = logging.getLogger(__name__)
else:
# Effectively silence logging: NullHandler plus a CRITICAL threshold
self.logger = logging.getLogger(__name__)
self.logger.addHandler(logging.NullHandler())
self.logger.setLevel(logging.CRITICAL)
def add_arguments(self, parser):
parser.add_argument(
"--force",
action="store_true",
help="Mark stuck scans as failed without confirmation",
)
parser.add_argument(
"--dry-run",
action="store_true",
help="Show what would be done without making changes",
)
parser.add_argument(
"--verbose", action="store_true", help="Enable verbose logging"
)
def get_celery_app(self):
"""Get the Celery application instance"""
try:
from config.celery import celery_app
return celery_app
except ImportError:
raise CommandError("Could not import Celery app from config.celery")
def get_active_task_ids(self) -> Set[str]:
"""Get all active task IDs from all Celery workers"""
celery_app = self.get_celery_app()
inspect = celery_app.control.inspect()
active_task_ids = set()
try:
# Get active tasks from all workers
active_tasks = inspect.active()
if active_tasks:
for worker, tasks in active_tasks.items():
for task in tasks:
active_task_ids.add(task["id"])
# Get scheduled tasks from all workers
scheduled_tasks = inspect.scheduled()
if scheduled_tasks:
for worker, tasks in scheduled_tasks.items():
for task in tasks:
active_task_ids.add(task["id"])
# Get reserved tasks from all workers
reserved_tasks = inspect.reserved()
if reserved_tasks:
for worker, tasks in reserved_tasks.items():
for task in tasks:
active_task_ids.add(task["id"])
except Exception as e:
if self.logger and hasattr(self.logger, "error"):
self.logger.error(f"Error connecting to Celery broker: {e}")
raise CommandError(f"Failed to connect to Celery broker: {e}")
return active_task_ids
def find_stuck_scans(self) -> List[Dict]:
"""Find scans that appear to be stuck with interactive progress"""
with Progress(
SpinnerColumn(),
TextColumn("[progress.description]{task.description}"),
BarColumn(),
TaskProgressColumn(),
console=self.console,
transient=True,
) as progress:
# Step 1: Find executing scans
scan_task = progress.add_task(
"🔍 Scanning for executing scans...", total=100
)
executing_scans = (
Scan.objects.using(MainRouter.admin_db)
.filter(state=StateChoices.EXECUTING)
.select_related("task__task_runner_task", "provider")
.exclude(task__isnull=True)
.exclude(task__task_runner_task__isnull=True)
)
progress.update(scan_task, advance=50)
time.sleep(0.5) # Small delay for visual effect
scan_count = executing_scans.count()
progress.update(
scan_task,
advance=50,
description=f"✅ Found {scan_count} executing scans",
)
time.sleep(0.3)
if scan_count == 0:
return []
# Step 2: Get active tasks from Celery
celery_task = progress.add_task("🔄 Checking Celery workers...", total=100)
active_task_ids = self.get_active_task_ids()
progress.update(
celery_task,
advance=100,
description=f"✅ Found {len(active_task_ids)} active tasks",
)
time.sleep(0.3)
# Step 3: Check each scan
check_task = progress.add_task("🕵️ Analyzing scans...", total=scan_count)
stuck_scans = []
for i, scan in enumerate(executing_scans):
progress.update(
check_task,
advance=1,
description=f"🕵️ Analyzing scan {i + 1}/{scan_count}",
)
task_result = scan.task.task_runner_task
task_id = task_result.task_id
# Check if task is still active in any worker
if task_id not in active_task_ids:
stuck_scans.append(
{
"scan": scan,
"task_result": task_result,
}
)
time.sleep(0.3) # Small delay for visual effect
progress.update(
check_task,
description=f"✅ Analysis complete - {len(stuck_scans)} stuck scans found",
)
time.sleep(2)
return stuck_scans
def display_scan_details(self, scan, task_result):
"""Display detailed information about a single scan"""
# Create scan details panel
scan_info = Text()
scan_info.append("🆔 Scan ID: ", style="bold cyan")
scan_info.append(f"{scan.id}\n", style="cyan")
scan_info.append("🏢 Tenant ID: ", style="bold bright_blue")
scan_info.append(f"{scan.tenant_id}\n", style="bright_blue")
scan_info.append("☁️ Provider: ", style="bold green")
scan_info.append(f"{scan.provider.provider.upper()}\n", style="green")
scan_info.append("🔗 Provider UID: ", style="bold green")
scan_info.append(f"{scan.provider.uid}\n", style="green")
scan_info.append("⏰ Started At: ", style="bold yellow")
started_time = (
scan.started_at.strftime("%Y-%m-%d %H:%M:%S UTC")
if scan.started_at
else "Unknown"
)
scan_info.append(f"{started_time}\n", style="yellow")
scan_info.append("📝 Scan Name: ", style="bold white")
scan_info.append(f"{scan.name or 'No name'}\n", style="white")
scan_info.append("🔄 Task ID: ", style="bold blue")
scan_info.append(f"{task_result.task_id}\n", style="blue")
scan_info.append("📊 Task Status: ", style="bold red")
scan_info.append(f"{task_result.status or 'Unknown'}\n", style="red")
if scan.started_at:
duration = datetime.now(timezone.utc) - scan.started_at
scan_info.append("⏱️ Running For: ", style="bold bright_cyan")
scan_info.append(f"{duration}\n", style="bright_cyan")
return Panel(
scan_info,
title="🚨 Stuck Scan Detected",
border_style="red",
title_align="center",
)
def display_stuck_scans(self, stuck_scans: List[Dict], force: bool = False):
"""Display stuck scans interactively"""
if not stuck_scans:
self.console.print("\n")
self.console.print(
Panel(
Align.center(
"🎉 No stuck scans found!\nAll scans are running properly."
),
style="green",
title="✅ All Clear",
)
)
return []
# Show summary first
self.console.print("\n")
self.console.print(
Panel(
Align.center(
f"⚠️ Found {len(stuck_scans)} stuck scan{'s' if len(stuck_scans) != 1 else ''}"
),
style="yellow",
title="🔍 Detection Results",
)
)
if force:
self.console.print(
Panel(
Align.center(
"🚀 Force mode enabled - marking all stuck scans as failed"
),
style="cyan",
)
)
return stuck_scans
confirmed_scans = []
for i, stuck_scan in enumerate(stuck_scans, 1):
self.console.clear()
self.console.print(
Panel.fit("🔍 Prowler Stuck Scans Checker", style="bold blue")
)
scan = stuck_scan["scan"]
task_result = stuck_scan["task_result"]
# Show progress
progress_text = f"Reviewing scan {i} of {len(stuck_scans)}"
self.console.print(f"\n{progress_text}", style="dim")
# Show scan details
self.console.print("\n")
self.console.print(self.display_scan_details(scan, task_result))
# Ask for confirmation
self.console.print("\n")
if Confirm.ask(
"❓ Mark this scan as failed?", console=self.console, default=False
):
confirmed_scans.append(stuck_scan)
self.console.print("✅ Scan will be marked as failed", style="green")
else:
self.console.print("⏭️ Scan skipped", style="yellow")
# Small pause before next scan (except for last one)
if i < len(stuck_scans):
time.sleep(0.5)
return confirmed_scans
def mark_scans_as_failed(self, stuck_scans: List[Dict], dry_run: bool = False):
"""Mark stuck scans as failed with interactive progress"""
if not stuck_scans:
return
if dry_run:
self.console.print("\n")
self.console.print(
Panel(
Align.center(
f"🧪 DRY RUN: Would mark {len(stuck_scans)} scan{'s' if len(stuck_scans) != 1 else ''} as failed"
),
style="yellow",
title="🔍 Dry Run Results",
)
)
return
# Show processing animation
with Progress(
SpinnerColumn(),
TextColumn("[progress.description]{task.description}"),
BarColumn(),
TaskProgressColumn(),
console=self.console,
transient=True,
) as progress:
task = progress.add_task(
"🔧 Marking scans as failed...", total=len(stuck_scans)
)
failed_count = 0
with transaction.atomic():
for i, stuck_scan in enumerate(stuck_scans):
scan = stuck_scan["scan"]
task_result = stuck_scan["task_result"]
progress.update(
task,
advance=1,
description=f"🔧 Processing scan {i + 1}/{len(stuck_scans)}",
)
try:
# Update scan state to FAILED using admin connection
scan.state = StateChoices.FAILED
scan.completed_at = datetime.now(timezone.utc)
scan.save(
using=MainRouter.admin_db,
update_fields=["state", "completed_at"],
)
task_result.status = "FAILURE"
task_result.result = json.dumps(
{
"exc_type": "ScanStuckError",
"exc_message": [
"Scan was detected as stuck and marked as failed."
],
}
)
task_result.date_done = datetime.now(timezone.utc)
task_result.save(using=MainRouter.admin_db)
failed_count += 1
if self.logger and hasattr(self.logger, "info"):
self.logger.info(
f"Marked scan {scan.id} (tenant: {scan.tenant_id}) as failed"
)
except Exception as e:
if self.logger and hasattr(self.logger, "error"):
self.logger.error(f"Failed to update scan {scan.id}: {e}")
time.sleep(0.2) # Small delay for visual effect
progress.update(
task,
description=f"✅ Completed - {failed_count} scans marked as failed",
)
time.sleep(0.5)
# Show final results
self.console.print("\n")
if failed_count > 0:
self.console.print(
Panel(
Align.center(
f"🎉 Successfully marked {failed_count} scan{'s' if failed_count != 1 else ''} as failed"
),
style="green",
title="✅ Task Complete",
)
)
# Show summary table
self.show_summary_table(stuck_scans, failed_count)
else:
self.console.print(
Panel(
Align.center("⚠️ No scans were updated"),
style="yellow",
title="⚠️ Warning",
)
)
def show_summary_table(self, processed_scans: List[Dict], success_count: int):
"""Show a summary table of processed scans"""
if success_count == 0:
return
self.console.print("\n")
# Create summary table
table = Table(
title=f"📋 Summary - {success_count} Scan{'s' if success_count != 1 else ''} Marked as Failed",
show_header=True,
header_style="bold white",
title_style="bold green",
border_style="green",
)
table.add_column("🆔 Scan ID", style="cyan", no_wrap=True)
table.add_column("🏢 Tenant", style="bright_blue", no_wrap=True)
table.add_column("☁️ Provider", style="green", no_wrap=True)
table.add_column("⏰ Started At", style="yellow")
table.add_column("📝 Scan Name", style="blue")
for scan_data in processed_scans:
scan = scan_data["scan"]
# Show full IDs since we have no_wrap=True
scan_id_full = str(scan.id)
tenant_id_full = str(scan.tenant_id) if scan.tenant_id else "Unknown"
# Format provider info with full details
provider_info = f"{scan.provider.provider.upper()}: {scan.provider.uid}"
# Format start time
started_time = (
scan.started_at.strftime("%Y-%m-%d %H:%M:%S UTC")
if scan.started_at
else "Unknown"
)
# Get scan name
scan_name = scan.name or "N/A"
table.add_row(
scan_id_full, tenant_id_full, provider_info, started_time, scan_name
)
self.console.print(table)
# Add helpful note
self.console.print("\n")
self.console.print(
Panel(
"💡 These scans were stuck (executing but no active task in workers) and have been marked as failed.\n"
"You can now retry them from the Prowler interface.",
style="dim",
title="️ Note",
border_style="dim",
)
)
def handle(self, *args, **options):
force = options["force"]
dry_run = options["dry_run"]
verbose = options["verbose"]
# Setup logging based on verbose flag
self.setup_logging(verbose)
# Clear screen and show header
self.console.clear()
self.console.print(
Panel.fit("🔍 Prowler Stuck Scans Checker", style="bold blue")
)
if self.logger and hasattr(self.logger, "info"):
self.logger.info("Starting stuck scans check across all tenants...")
try:
# Find stuck scans with interactive progress
stuck_scans = self.find_stuck_scans()
# Display results interactively
scans_to_process = self.display_stuck_scans(stuck_scans, force)
if not scans_to_process:
if stuck_scans and not force:
# User didn't confirm any scans
self.console.print("\n")
self.console.print(
Panel(
Align.center("🚫 No scans selected for processing"),
style="yellow",
title="❌ Operation Cancelled",
)
)
return
# Mark confirmed scans as failed
self.mark_scans_as_failed(scans_to_process, dry_run)
except KeyboardInterrupt:
self.console.print("\n")
self.console.print(
Panel(
Align.center("🛑 Operation cancelled by user"),
style="red",
title="❌ Interrupted",
)
)
return
except Exception as e:
if self.logger and hasattr(self.logger, "error"):
self.logger.error(f"Error during stuck scans check: {e}")
self.console.print("\n")
self.console.print(
Panel(
Align.center(f"💥 Error: {str(e)}"),
style="red",
title="❌ Command Failed",
)
)
raise CommandError(f"Command failed: {e}")
if self.logger and hasattr(self.logger, "info"):
self.logger.info("Stuck scans check completed successfully")
@@ -24,7 +24,7 @@ class Migration(migrations.Migration):
(
"name",
models.CharField(
max_length=255,
max_length=100,
validators=[django.core.validators.MinLengthValidator(3)],
),
),
+30
@@ -3611,6 +3611,16 @@ paths:
schema:
type: string
format: uuid
- in: query
name: filter[provider_id__in]
schema:
type: array
items:
type: string
format: uuid
description: Multiple values may be separated by commas.
explode: false
style: form
- in: query
name: filter[provider_type]
schema:
@@ -3778,6 +3788,16 @@ paths:
schema:
type: string
format: uuid
- in: query
name: filter[provider_id__in]
schema:
type: array
items:
type: string
format: uuid
description: Multiple values may be separated by commas.
explode: false
style: form
- in: query
name: filter[provider_type]
schema:
@@ -3980,6 +4000,16 @@ paths:
schema:
type: string
format: uuid
- in: query
name: filter[provider_id__in]
schema:
type: array
items:
type: string
format: uuid
description: Multiple values may be separated by commas.
explode: false
style: form
- in: query
name: filter[provider_type]
schema:
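Since the new filter[provider_id__in] parameter is declared with style: form and explode: false, clients send multiple UUIDs as a single comma-separated value rather than repeating the parameter, e.g. ?filter[provider_id__in]=<uuid-1>,<uuid-2>.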
+215
@@ -46,6 +46,7 @@ from api.models import (
SAMLConfiguration,
SAMLToken,
Scan,
ScanSummary,
StateChoices,
Task,
TenantAPIKey,
@@ -2688,6 +2689,55 @@ class TestScanViewSet:
== "There is a problem with credentials."
)
@patch("api.v1.views.ScanViewSet._get_task_status")
@patch("api.v1.views.get_s3_client")
@patch("api.v1.views.env.str")
def test_threatscore_s3_wildcard(
self,
mock_env_str,
mock_get_s3_client,
mock_get_task_status,
authenticated_client,
scans_fixture,
):
"""
When the threatscore endpoint is called with an S3 output_location,
the view should list objects in S3 using wildcard pattern matching,
retrieve the matching PDF file, and return it with HTTP 200 and proper headers.
"""
scan = scans_fixture[0]
scan.state = StateChoices.COMPLETED
bucket = "test-bucket"
zip_key = "tenant-id/scan-id/prowler-output-foo.zip"
scan.output_location = f"s3://{bucket}/{zip_key}"
scan.save()
pdf_key = os.path.join(
os.path.dirname(zip_key),
"threatscore",
"prowler-output-123_threatscore_report.pdf",
)
mock_s3_client = Mock()
mock_s3_client.list_objects_v2.return_value = {"Contents": [{"Key": pdf_key}]}
mock_s3_client.get_object.return_value = {"Body": io.BytesIO(b"pdf-bytes")}
mock_env_str.return_value = bucket
mock_get_s3_client.return_value = mock_s3_client
mock_get_task_status.return_value = None
url = reverse("scan-threatscore", kwargs={"pk": scan.id})
response = authenticated_client.get(url)
assert response.status_code == status.HTTP_200_OK
assert response["Content-Type"] == "application/pdf"
assert response["Content-Disposition"].endswith(
'"prowler-output-123_threatscore_report.pdf"'
)
assert response.content == b"pdf-bytes"
mock_s3_client.list_objects_v2.assert_called_once()
mock_s3_client.get_object.assert_called_once_with(Bucket=bucket, Key=pdf_key)
def test_report_s3_success(self, authenticated_client, scans_fixture, monkeypatch):
"""
When output_location is an S3 URL and the S3 client returns the file successfully,
@@ -5766,6 +5816,171 @@ class TestOverviewViewSet:
assert service1_data["attributes"]["muted"] == 1
assert service2_data["attributes"]["muted"] == 0
def test_overview_findings_provider_id_in_filter(
self, authenticated_client, tenants_fixture, providers_fixture
):
tenant = tenants_fixture[0]
provider1, provider2, *_ = providers_fixture
scan1 = Scan.objects.create(
name="scan-one",
provider=provider1,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant=tenant,
)
scan2 = Scan.objects.create(
name="scan-two",
provider=provider2,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant=tenant,
)
ScanSummary.objects.create(
tenant=tenant,
scan=scan1,
check_id="check-provider-one",
service="service-a",
severity="high",
region="region-a",
_pass=5,
fail=1,
muted=2,
total=8,
new=5,
changed=2,
unchanged=1,
fail_new=1,
fail_changed=0,
pass_new=3,
pass_changed=2,
muted_new=1,
muted_changed=1,
)
ScanSummary.objects.create(
tenant=tenant,
scan=scan2,
check_id="check-provider-two",
service="service-b",
severity="medium",
region="region-b",
_pass=2,
fail=3,
muted=1,
total=6,
new=3,
changed=2,
unchanged=1,
fail_new=2,
fail_changed=1,
pass_new=1,
pass_changed=1,
muted_new=1,
muted_changed=0,
)
single_response = authenticated_client.get(
reverse("overview-findings"),
{"filter[provider_id__in]": str(provider1.id)},
)
assert single_response.status_code == status.HTTP_200_OK
single_attributes = single_response.json()["data"]["attributes"]
assert single_attributes["pass"] == 5
assert single_attributes["fail"] == 1
assert single_attributes["muted"] == 2
assert single_attributes["total"] == 8
combined_response = authenticated_client.get(
reverse("overview-findings"),
{"filter[provider_id__in]": f"{provider1.id},{provider2.id}"},
)
assert combined_response.status_code == status.HTTP_200_OK
combined_attributes = combined_response.json()["data"]["attributes"]
assert combined_attributes["pass"] == 7
assert combined_attributes["fail"] == 4
assert combined_attributes["muted"] == 3
assert combined_attributes["total"] == 14
def test_overview_findings_severity_provider_id_in_filter(
self, authenticated_client, tenants_fixture, providers_fixture
):
tenant = tenants_fixture[0]
provider1, provider2, *_ = providers_fixture
scan1 = Scan.objects.create(
name="severity-scan-one",
provider=provider1,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant=tenant,
)
scan2 = Scan.objects.create(
name="severity-scan-two",
provider=provider2,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant=tenant,
)
ScanSummary.objects.create(
tenant=tenant,
scan=scan1,
check_id="severity-check-one",
service="service-a",
severity="high",
region="region-a",
_pass=4,
fail=4,
muted=0,
total=8,
)
ScanSummary.objects.create(
tenant=tenant,
scan=scan1,
check_id="severity-check-two",
service="service-a",
severity="medium",
region="region-b",
_pass=2,
fail=2,
muted=0,
total=4,
)
ScanSummary.objects.create(
tenant=tenant,
scan=scan2,
check_id="severity-check-three",
service="service-b",
severity="critical",
region="region-c",
_pass=1,
fail=2,
muted=0,
total=3,
)
single_response = authenticated_client.get(
reverse("overview-findings_severity"),
{"filter[provider_id__in]": str(provider1.id)},
)
assert single_response.status_code == status.HTTP_200_OK
single_attributes = single_response.json()["data"]["attributes"]
assert single_attributes["high"] == 8
assert single_attributes["medium"] == 4
assert single_attributes["critical"] == 0
combined_response = authenticated_client.get(
reverse("overview-findings_severity"),
{"filter[provider_id__in]": f"{provider1.id},{provider2.id}"},
)
assert combined_response.status_code == status.HTTP_200_OK
combined_attributes = combined_response.json()["data"]["attributes"]
assert combined_attributes["high"] == 8
assert combined_attributes["medium"] == 4
assert combined_attributes["critical"] == 3
@pytest.mark.django_db
class TestScheduleViewSet:
+74 -1
@@ -1,3 +1,4 @@
import fnmatch
import glob
import logging
import os
@@ -1593,6 +1594,25 @@ class ProviderViewSet(BaseRLSViewSet):
},
request=None,
),
threatscore=extend_schema(
tags=["Scan"],
summary="Retrieve threatscore report",
description="Download a specific threatscore report (e.g., 'prowler_threatscore_aws') as a PDF file.",
request=None,
responses={
200: OpenApiResponse(
description="PDF file containing the threatscore report"
),
202: OpenApiResponse(description="The task is in progress"),
401: OpenApiResponse(
description="API key missing or user not Authenticated"
),
403: OpenApiResponse(description="There is a problem with credentials"),
404: OpenApiResponse(
description="The scan has no threatscore reports, or the threatscore report generation task has not started yet"
),
},
),
)
@method_decorator(CACHE_DECORATOR, name="list")
@method_decorator(CACHE_DECORATOR, name="retrieve")
@@ -1649,6 +1669,9 @@ class ScanViewSet(BaseRLSViewSet):
if hasattr(self, "response_serializer_class"):
return self.response_serializer_class
return ScanComplianceReportSerializer
elif self.action == "threatscore":
if hasattr(self, "response_serializer_class"):
return self.response_serializer_class
return super().get_serializer_class()
def partial_update(self, request, *args, **kwargs):
@@ -1753,7 +1776,18 @@ class ScanViewSet(BaseRLSViewSet):
status=status.HTTP_502_BAD_GATEWAY,
)
contents = resp.get("Contents", [])
keys = [obj["Key"] for obj in contents if obj["Key"].endswith(suffix)]
keys = []
for obj in contents:
key = obj["Key"]
key_basename = os.path.basename(key)
if any(ch in suffix for ch in ("*", "?", "[")):
if fnmatch.fnmatch(key_basename, suffix):
keys.append(key)
elif key_basename == suffix:
keys.append(key)
elif key.endswith(suffix):
# Backward compatibility if suffix already includes directories
keys.append(key)
if not keys:
return Response(
{
@@ -1880,6 +1914,45 @@ class ScanViewSet(BaseRLSViewSet):
content, filename = loader
return self._serve_file(content, filename, "text/csv")
@action(
detail=True,
methods=["get"],
url_name="threatscore",
)
def threatscore(self, request, pk=None):
scan = self.get_object()
running_resp = self._get_task_status(scan)
if running_resp:
return running_resp
if not scan.output_location:
return Response(
{
"detail": "The scan has no reports, or the threatscore report generation task has not started yet."
},
status=status.HTTP_404_NOT_FOUND,
)
if scan.output_location.startswith("s3://"):
bucket = env.str("DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET", "")
key_prefix = scan.output_location.removeprefix(f"s3://{bucket}/")
prefix = os.path.join(
os.path.dirname(key_prefix),
"threatscore",
"*_threatscore_report.pdf",
)
loader = self._load_file(prefix, s3=True, bucket=bucket, list_objects=True)
else:
base = os.path.dirname(scan.output_location)
pattern = os.path.join(base, "threatscore", "*_threatscore_report.pdf")
loader = self._load_file(pattern, s3=False)
if isinstance(loader, Response):
return loader
content, filename = loader
return self._serve_file(content, filename, "application/pdf")
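# Sketch of the S3 key matching above (values illustrative; _load_file's
# internals are not shown in this diff). With an output_location of
# "s3://test-bucket/tenant/scan/prowler-output-foo.zip", the computed prefix is
# "tenant/scan/threatscore/*_threatscore_report.pdf", and a key such as
# "tenant/scan/threatscore/prowler-output-123_threatscore_report.pdf" matches
# via fnmatch.fnmatch(os.path.basename(key), "*_threatscore_report.pdf").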
def create(self, request, *args, **kwargs):
input_serializer = self.get_serializer(data=request.data)
input_serializer.is_valid(raise_exception=True)
Binary file not shown (new image, 24 KiB).
+31 -29
@@ -20,10 +20,10 @@ from prowler.lib.outputs.asff.asff import ASFF
from prowler.lib.outputs.compliance.aws_well_architected.aws_well_architected import (
AWSWellArchitected,
)
from prowler.lib.outputs.compliance.c5.c5_aws import AWSC5
from prowler.lib.outputs.compliance.ccc.ccc_aws import CCC_AWS
from prowler.lib.outputs.compliance.ccc.ccc_azure import CCC_Azure
from prowler.lib.outputs.compliance.ccc.ccc_gcp import CCC_GCP
from prowler.lib.outputs.compliance.c5.c5_aws import AWSC5
from prowler.lib.outputs.compliance.cis.cis_aws import AWSCIS
from prowler.lib.outputs.compliance.cis.cis_azure import AzureCIS
from prowler.lib.outputs.compliance.cis.cis_gcp import GCPCIS
@@ -183,18 +183,21 @@ def get_s3_client():
return s3_client
def _upload_to_s3(tenant_id: str, zip_path: str, scan_id: str) -> str | None:
def _upload_to_s3(
tenant_id: str, scan_id: str, local_path: str, relative_key: str
) -> str | None:
"""
Upload the specified ZIP file to an S3 bucket.
If the S3 bucket environment variables are not configured,
the function returns None without performing an upload.
Upload a local artifact to an S3 bucket under the tenant/scan prefix.
Args:
tenant_id (str): The tenant identifier, used as part of the S3 key prefix.
zip_path (str): The local file system path to the ZIP file to be uploaded.
scan_id (str): The scan identifier, used as part of the S3 key prefix.
tenant_id (str): The tenant identifier used as the first segment of the S3 key.
scan_id (str): The scan identifier used as the second segment of the S3 key.
local_path (str): Filesystem path to the artifact to upload.
relative_key (str): Object key relative to `<tenant_id>/<scan_id>/`.
Returns:
str: The S3 URI of the uploaded file (e.g., "s3://<bucket>/<key>") if successful.
None: If the required environment variables for the S3 bucket are not set.
str | None: S3 URI of the uploaded artifact, or None if the upload is skipped.
Raises:
botocore.exceptions.ClientError: If the upload attempt to S3 fails for any reason.
"""
@@ -202,34 +205,26 @@ def _upload_to_s3(tenant_id: str, zip_path: str, scan_id: str) -> str | None:
if not bucket:
return
if not relative_key:
return
if not os.path.isfile(local_path):
return
try:
s3 = get_s3_client()
# Upload the ZIP file (outputs) to the S3 bucket
zip_key = f"{tenant_id}/{scan_id}/{os.path.basename(zip_path)}"
s3.upload_file(
Filename=zip_path,
Bucket=bucket,
Key=zip_key,
)
s3_key = f"{tenant_id}/{scan_id}/{relative_key}"
s3.upload_file(Filename=local_path, Bucket=bucket, Key=s3_key)
# Upload the compliance directory to the S3 bucket
compliance_dir = os.path.join(os.path.dirname(zip_path), "compliance")
for filename in os.listdir(compliance_dir):
local_path = os.path.join(compliance_dir, filename)
if not os.path.isfile(local_path):
continue
file_key = f"{tenant_id}/{scan_id}/compliance/{filename}"
s3.upload_file(Filename=local_path, Bucket=bucket, Key=file_key)
return f"s3://{base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET}/{zip_key}"
return f"s3://{base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET}/{s3_key}"
except (ClientError, NoCredentialsError, ParamValidationError, ValueError) as e:
logger.error(f"S3 upload failed: {str(e)}")
def _generate_output_directory(
output_directory, prowler_provider: object, tenant_id: str, scan_id: str
) -> tuple[str, str]:
) -> tuple[str, str, str]:
"""
Generate a file system path for the output directory of a prowler scan.
@@ -256,6 +251,7 @@ def _generate_output_directory(
>>> _generate_output_directory("/tmp", "aws", "tenant-1234", "scan-5678")
'/tmp/tenant-1234/aws/scan-5678/prowler-output-2023-02-15T12:34:56',
'/tmp/tenant-1234/aws/scan-5678/compliance/prowler-output-2023-02-15T12:34:56'
'/tmp/tenant-1234/aws/scan-5678/threatscore/prowler-output-2023-02-15T12:34:56'
"""
# Sanitize the prowler provider name to ensure it is a valid directory name
prowler_provider_sanitized = re.sub(r"[^\w\-]", "-", prowler_provider)
@@ -276,4 +272,10 @@ def _generate_output_directory(
)
os.makedirs("/".join(compliance_path.split("/")[:-1]), exist_ok=True)
return path, compliance_path
threatscore_path = (
f"{output_directory}/{tenant_id}/{scan_id}/threatscore/prowler-output-"
f"{prowler_provider_sanitized}-{timestamp}"
)
os.makedirs("/".join(threatscore_path.split("/")[:-1]), exist_ok=True)
return path, compliance_path, threatscore_path
File diff suppressed because it is too large.
+48 -6
@@ -1,3 +1,4 @@
import os
from datetime import datetime, timedelta, timezone
from pathlib import Path
from shutil import rmtree
@@ -26,6 +27,7 @@ from tasks.jobs.integrations import (
upload_s3_integration,
upload_security_hub_integration,
)
from tasks.jobs.report import generate_threatscore_report_job
from tasks.jobs.scan import (
aggregate_findings,
create_compliance_requirements,
@@ -64,10 +66,15 @@ def _perform_scan_complete_tasks(tenant_id: str, scan_id: str, provider_id: str)
generate_outputs_task.si(
scan_id=scan_id, provider_id=provider_id, tenant_id=tenant_id
),
check_integrations_task.si(
tenant_id=tenant_id,
provider_id=provider_id,
scan_id=scan_id,
group(
generate_threatscore_report_task.si(
tenant_id=tenant_id, scan_id=scan_id, provider_id=provider_id
),
check_integrations_task.si(
tenant_id=tenant_id,
provider_id=provider_id,
scan_id=scan_id,
),
),
).apply_async()
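# Note: chain() runs generate_outputs_task to completion before the group()
# fans out generate_threatscore_report_task and check_integrations_task in
# parallel; .si() builds immutable signatures, so upstream results are not
# passed into the downstream tasks.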
@@ -304,7 +311,7 @@ def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
frameworks_bulk = Compliance.get_bulk(provider_type)
frameworks_avail = get_compliance_frameworks(provider_type)
out_dir, comp_dir = _generate_output_directory(
out_dir, comp_dir, _ = _generate_output_directory(
DJANGO_TMP_OUTPUT_DIRECTORY, provider_uid, tenant_id, scan_id
)
@@ -407,7 +414,24 @@ def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
writer._data.clear()
compressed = _compress_output_files(out_dir)
upload_uri = _upload_to_s3(tenant_id, compressed, scan_id)
upload_uri = _upload_to_s3(
tenant_id,
scan_id,
compressed,
os.path.basename(compressed),
)
compliance_dir_path = Path(comp_dir).parent
if compliance_dir_path.exists():
for artifact_path in sorted(compliance_dir_path.iterdir()):
if artifact_path.is_file():
_upload_to_s3(
tenant_id,
scan_id,
str(artifact_path),
f"compliance/{artifact_path.name}",
)
# S3 integrations (need output_directory)
with rls_transaction(tenant_id, using=READ_REPLICA_ALIAS):
@@ -617,3 +641,21 @@ def jira_integration_task(
return send_findings_to_jira(
tenant_id, integration_id, project_key, issue_type, finding_ids
)
@shared_task(
base=RLSTask,
name="scan-threatscore-report",
queue="scan-reports",
)
def generate_threatscore_report_task(tenant_id: str, scan_id: str, provider_id: str):
"""
Task to generate a threatscore report for a given scan.
Args:
tenant_id (str): The tenant identifier.
scan_id (str): The scan identifier.
provider_id (str): The provider identifier.
"""
return generate_threatscore_report_job(
tenant_id=tenant_id, scan_id=scan_id, provider_id=provider_id
)
+32 -10
@@ -72,17 +72,26 @@ class TestOutputs:
client_mock = MagicMock()
mock_get_client.return_value = client_mock
result = _upload_to_s3("tenant-id", str(zip_path), "scan-id")
result = _upload_to_s3(
"tenant-id",
"scan-id",
str(zip_path),
"outputs.zip",
)
expected_uri = "s3://test-bucket/tenant-id/scan-id/outputs.zip"
assert result == expected_uri
assert client_mock.upload_file.call_count == 2
client_mock.upload_file.assert_called_once_with(
Filename=str(zip_path),
Bucket="test-bucket",
Key="tenant-id/scan-id/outputs.zip",
)
@patch("tasks.jobs.export.get_s3_client")
@patch("tasks.jobs.export.base")
def test_upload_to_s3_missing_bucket(self, mock_base, mock_get_client):
mock_base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET = ""
result = _upload_to_s3("tenant", "/tmp/fake.zip", "scan")
result = _upload_to_s3("tenant", "scan", "/tmp/fake.zip", "fake.zip")
assert result is None
@patch("tasks.jobs.export.get_s3_client")
@@ -101,11 +110,15 @@ class TestOutputs:
client_mock = MagicMock()
mock_get_client.return_value = client_mock
result = _upload_to_s3("tenant", str(zip_path), "scan")
result = _upload_to_s3(
"tenant",
"scan",
str(compliance_dir / "subdir"),
"compliance/subdir",
)
expected_uri = "s3://test-bucket/tenant/scan/results.zip"
assert result == expected_uri
client_mock.upload_file.assert_called_once()
assert result is None
client_mock.upload_file.assert_not_called()
@patch(
"tasks.jobs.export.get_s3_client",
@@ -126,7 +139,12 @@ class TestOutputs:
compliance_dir.mkdir()
(compliance_dir / "report.csv").write_text("csv")
_upload_to_s3("tenant", str(zip_path), "scan")
_upload_to_s3(
"tenant",
"scan",
str(zip_path),
"zipfile.zip",
)
mock_logger.assert_called()
@patch("tasks.jobs.export.rls_transaction")
@@ -150,15 +168,17 @@ class TestOutputs:
provider = "aws"
expected_timestamp = "20230615103045"
path, compliance = _generate_output_directory(
path, compliance, threatscore = _generate_output_directory(
base_dir, provider, tenant_id, scan_id
)
assert os.path.isdir(os.path.dirname(path))
assert os.path.isdir(os.path.dirname(compliance))
assert os.path.isdir(os.path.dirname(threatscore))
assert path.endswith(f"{provider}-{expected_timestamp}")
assert compliance.endswith(f"{provider}-{expected_timestamp}")
assert threatscore.endswith(f"{provider}-{expected_timestamp}")
@patch("tasks.jobs.export.rls_transaction")
@patch("tasks.jobs.export.Scan")
@@ -181,12 +201,14 @@ class TestOutputs:
provider = "aws/test@check"
expected_timestamp = "20230615103045"
path, compliance = _generate_output_directory(
path, compliance, threatscore = _generate_output_directory(
base_dir, provider, tenant_id, scan_id
)
assert os.path.isdir(os.path.dirname(path))
assert os.path.isdir(os.path.dirname(compliance))
assert os.path.isdir(os.path.dirname(threatscore))
assert path.endswith(f"aws-test-check-{expected_timestamp}")
assert compliance.endswith(f"aws-test-check-{expected_timestamp}")
assert threatscore.endswith(f"aws-test-check-{expected_timestamp}")
+963
@@ -0,0 +1,963 @@
import uuid
from pathlib import Path
from unittest.mock import MagicMock, patch
import matplotlib
import pytest
from tasks.jobs.report import (
_aggregate_requirement_statistics_from_database,
_calculate_requirements_data_from_statistics,
_load_findings_for_requirement_checks,
generate_threatscore_report,
generate_threatscore_report_job,
)
from tasks.tasks import generate_threatscore_report_task
from api.models import Finding, StatusChoices
from prowler.lib.check.models import Severity
matplotlib.use("Agg") # Use non-interactive backend for tests
@pytest.mark.django_db
class TestGenerateThreatscoreReport:
def setup_method(self):
self.scan_id = str(uuid.uuid4())
self.provider_id = str(uuid.uuid4())
self.tenant_id = str(uuid.uuid4())
def test_no_findings_returns_early(self):
with patch("tasks.jobs.report.ScanSummary.objects.filter") as mock_filter:
mock_filter.return_value.exists.return_value = False
result = generate_threatscore_report_job(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
assert result == {"upload": False}
mock_filter.assert_called_once_with(scan_id=self.scan_id)
@patch("tasks.jobs.report.rmtree")
@patch("tasks.jobs.report._upload_to_s3")
@patch("tasks.jobs.report.generate_threatscore_report")
@patch("tasks.jobs.report._generate_output_directory")
@patch("tasks.jobs.report.Provider.objects.get")
@patch("tasks.jobs.report.ScanSummary.objects.filter")
def test_generate_threatscore_report_happy_path(
self,
mock_scan_summary_filter,
mock_provider_get,
mock_generate_output_directory,
mock_generate_report,
mock_upload,
mock_rmtree,
):
mock_scan_summary_filter.return_value.exists.return_value = True
mock_provider = MagicMock()
mock_provider.uid = "provider-uid"
mock_provider.provider = "aws"
mock_provider_get.return_value = mock_provider
mock_generate_output_directory.return_value = (
"/tmp/output",
"/tmp/compressed",
"/tmp/threatscore_path",
)
mock_upload.return_value = "s3://bucket/threatscore_report.pdf"
result = generate_threatscore_report_job(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
assert result == {"upload": True}
mock_generate_report.assert_called_once_with(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
compliance_id="prowler_threatscore_aws",
output_path="/tmp/threatscore_path_threatscore_report.pdf",
provider_id=self.provider_id,
only_failed=True,
min_risk_level=4,
)
mock_upload.assert_called_once_with(
self.tenant_id,
self.scan_id,
"/tmp/threatscore_path_threatscore_report.pdf",
"threatscore/threatscore_path_threatscore_report.pdf",
)
mock_rmtree.assert_called_once_with(
Path("/tmp/threatscore_path_threatscore_report.pdf").parent,
ignore_errors=True,
)
def test_generate_threatscore_report_fails_upload(self):
with (
patch("tasks.jobs.report.ScanSummary.objects.filter") as mock_filter,
patch("tasks.jobs.report.Provider.objects.get") as mock_provider_get,
patch("tasks.jobs.report._generate_output_directory") as mock_gen_dir,
patch("tasks.jobs.report.generate_threatscore_report"),
patch("tasks.jobs.report._upload_to_s3", return_value=None),
):
mock_filter.return_value.exists.return_value = True
# Mock provider
mock_provider = MagicMock()
mock_provider.uid = "aws-provider-uid"
mock_provider.provider = "aws"
mock_provider_get.return_value = mock_provider
mock_gen_dir.return_value = (
"/tmp/output",
"/tmp/compressed",
"/tmp/threatscore_path",
)
result = generate_threatscore_report_job(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
assert result == {"upload": False}
def test_generate_threatscore_report_logs_rmtree_exception(self, caplog):
with (
patch("tasks.jobs.report.ScanSummary.objects.filter") as mock_filter,
patch("tasks.jobs.report.Provider.objects.get") as mock_provider_get,
patch("tasks.jobs.report._generate_output_directory") as mock_gen_dir,
patch("tasks.jobs.report.generate_threatscore_report"),
patch(
"tasks.jobs.report._upload_to_s3", return_value="s3://bucket/report.pdf"
),
patch(
"tasks.jobs.report.rmtree", side_effect=Exception("Test deletion error")
),
):
mock_filter.return_value.exists.return_value = True
# Mock provider
mock_provider = MagicMock()
mock_provider.uid = "aws-provider-uid"
mock_provider.provider = "aws"
mock_provider_get.return_value = mock_provider
mock_gen_dir.return_value = (
"/tmp/output",
"/tmp/compressed",
"/tmp/threatscore_path",
)
with caplog.at_level("ERROR"):
generate_threatscore_report_job(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
assert "Error deleting output files" in caplog.text
def test_generate_threatscore_report_azure_provider(self):
with (
patch("tasks.jobs.report.ScanSummary.objects.filter") as mock_filter,
patch("tasks.jobs.report.Provider.objects.get") as mock_provider_get,
patch("tasks.jobs.report._generate_output_directory") as mock_gen_dir,
patch("tasks.jobs.report.generate_threatscore_report") as mock_generate,
patch(
"tasks.jobs.report._upload_to_s3", return_value="s3://bucket/report.pdf"
),
patch("tasks.jobs.report.rmtree"),
):
mock_filter.return_value.exists.return_value = True
mock_provider = MagicMock()
mock_provider.uid = "azure-provider-uid"
mock_provider.provider = "azure"
mock_provider_get.return_value = mock_provider
mock_gen_dir.return_value = (
"/tmp/output",
"/tmp/compressed",
"/tmp/threatscore_path",
)
generate_threatscore_report_job(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
mock_generate.assert_called_once_with(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
compliance_id="prowler_threatscore_azure",
output_path="/tmp/threatscore_path_threatscore_report.pdf",
provider_id=self.provider_id,
only_failed=True,
min_risk_level=4,
)
@pytest.mark.django_db
class TestAggregateRequirementStatistics:
"""Test suite for _aggregate_requirement_statistics_from_database function."""
def test_aggregates_findings_correctly(self, tenants_fixture, scans_fixture):
"""Verify correct pass/total counts per check are aggregated from database."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
# Create findings with different check_ids and statuses
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-1",
check_id="check_1",
status=StatusChoices.PASS,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
raw_result={},
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-2",
check_id="check_1",
status=StatusChoices.FAIL,
severity=Severity.high,
impact=Severity.high,
check_metadata={},
raw_result={},
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-3",
check_id="check_2",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
result = _aggregate_requirement_statistics_from_database(
str(tenant.id), str(scan.id)
)
assert result == {
"check_1": {"passed": 1, "total": 2},
"check_2": {"passed": 1, "total": 1},
}
def test_handles_empty_scan(self, tenants_fixture, scans_fixture):
"""Return empty dict when no findings exist for the scan."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
result = _aggregate_requirement_statistics_from_database(
str(tenant.id), str(scan.id)
)
assert result == {}
def test_multiple_findings_same_check(self, tenants_fixture, scans_fixture):
"""Aggregate multiple findings for same check_id correctly."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
# Create 5 findings for same check, 3 passed
for i in range(3):
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid=f"finding-pass-{i}",
check_id="check_same",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
for i in range(2):
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid=f"finding-fail-{i}",
check_id="check_same",
status=StatusChoices.FAIL,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
result = _aggregate_requirement_statistics_from_database(
str(tenant.id), str(scan.id)
)
assert result == {"check_same": {"passed": 3, "total": 5}}
def test_only_failed_findings(self, tenants_fixture, scans_fixture):
"""Correctly count when all findings are FAIL status."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-fail-1",
check_id="check_fail",
status=StatusChoices.FAIL,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-fail-2",
check_id="check_fail",
status=StatusChoices.FAIL,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
result = _aggregate_requirement_statistics_from_database(
str(tenant.id), str(scan.id)
)
assert result == {"check_fail": {"passed": 0, "total": 2}}
def test_mixed_statuses(self, tenants_fixture, scans_fixture):
"""Test with PASS, FAIL, and MANUAL statuses mixed."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-pass",
check_id="check_mixed",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-fail",
check_id="check_mixed",
status=StatusChoices.FAIL,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-manual",
check_id="check_mixed",
status=StatusChoices.MANUAL,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
result = _aggregate_requirement_statistics_from_database(
str(tenant.id), str(scan.id)
)
# Only PASS status is counted as passed
assert result == {"check_mixed": {"passed": 1, "total": 3}}
@pytest.mark.django_db
class TestLoadFindingsForChecks:
"""Test suite for _load_findings_for_requirement_checks function."""
def test_loads_only_requested_checks(
self, tenants_fixture, scans_fixture, providers_fixture
):
"""Verify only findings for specified check_ids are loaded."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
providers_fixture[0]
# Create findings with different check_ids
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-1",
check_id="check_requested",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-2",
check_id="check_not_requested",
status=StatusChoices.FAIL,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
mock_provider = MagicMock()
with patch(
"tasks.jobs.report.FindingOutput.transform_api_finding"
) as mock_transform:
mock_finding_output = MagicMock()
mock_finding_output.check_id = "check_requested"
mock_transform.return_value = mock_finding_output
result = _load_findings_for_requirement_checks(
str(tenant.id), str(scan.id), ["check_requested"], mock_provider
)
# Only one finding should be loaded
assert "check_requested" in result
assert "check_not_requested" not in result
assert len(result["check_requested"]) == 1
assert mock_transform.call_count == 1
def test_empty_check_ids_returns_empty(
self, tenants_fixture, scans_fixture, providers_fixture
):
"""Return empty dict when check_ids list is empty."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
mock_provider = MagicMock()
result = _load_findings_for_requirement_checks(
str(tenant.id), str(scan.id), [], mock_provider
)
assert result == {}
def test_groups_by_check_id(
self, tenants_fixture, scans_fixture, providers_fixture
):
"""Multiple findings for same check are grouped correctly."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
# Create multiple findings for same check
for i in range(3):
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid=f"finding-{i}",
check_id="check_group",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
mock_provider = MagicMock()
with patch(
"tasks.jobs.report.FindingOutput.transform_api_finding"
) as mock_transform:
mock_finding_output = MagicMock()
mock_finding_output.check_id = "check_group"
mock_transform.return_value = mock_finding_output
result = _load_findings_for_requirement_checks(
str(tenant.id), str(scan.id), ["check_group"], mock_provider
)
assert len(result["check_group"]) == 3
def test_transforms_to_finding_output(
self, tenants_fixture, scans_fixture, providers_fixture
):
"""Findings are transformed using FindingOutput.transform_api_finding."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid="finding-transform",
check_id="check_transform",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
mock_provider = MagicMock()
with patch(
"tasks.jobs.report.FindingOutput.transform_api_finding"
) as mock_transform:
mock_finding_output = MagicMock()
mock_finding_output.check_id = "check_transform"
mock_transform.return_value = mock_finding_output
result = _load_findings_for_requirement_checks(
str(tenant.id), str(scan.id), ["check_transform"], mock_provider
)
# Verify transform was called
mock_transform.assert_called_once()
# Verify the transformed output is in the result
assert result["check_transform"][0] == mock_finding_output
def test_batched_iteration(self, tenants_fixture, scans_fixture, providers_fixture):
"""Works correctly with multiple batches of findings."""
tenant = tenants_fixture[0]
scan = scans_fixture[0]
# Create enough findings to ensure batching (assuming batch size > 1)
for i in range(10):
Finding.objects.create(
tenant_id=tenant.id,
scan=scan,
uid=f"finding-batch-{i}",
check_id="check_batch",
status=StatusChoices.PASS,
severity=Severity.medium,
impact=Severity.medium,
check_metadata={},
raw_result={},
)
mock_provider = MagicMock()
with patch(
"tasks.jobs.report.FindingOutput.transform_api_finding"
) as mock_transform:
mock_finding_output = MagicMock()
mock_finding_output.check_id = "check_batch"
mock_transform.return_value = mock_finding_output
result = _load_findings_for_requirement_checks(
str(tenant.id), str(scan.id), ["check_batch"], mock_provider
)
# All 10 findings should be loaded regardless of batching
assert len(result["check_batch"]) == 10
assert mock_transform.call_count == 10
@pytest.mark.django_db
class TestCalculateRequirementsData:
"""Test suite for _calculate_requirements_data_from_statistics function."""
def test_requirement_status_all_pass(self):
"""Status is PASS when all findings for requirement checks pass."""
mock_compliance = MagicMock()
mock_compliance.Framework = "TestFramework"
mock_compliance.Version = "1.0"
mock_requirement = MagicMock()
mock_requirement.Id = "req_1"
mock_requirement.Description = "Test requirement"
mock_requirement.Checks = ["check_1", "check_2"]
mock_requirement.Attributes = [MagicMock()]
mock_compliance.Requirements = [mock_requirement]
requirement_statistics = {
"check_1": {"passed": 5, "total": 5},
"check_2": {"passed": 3, "total": 3},
}
attributes_by_id, requirements_list = (
_calculate_requirements_data_from_statistics(
mock_compliance, requirement_statistics
)
)
assert len(requirements_list) == 1
assert requirements_list[0]["attributes"]["status"] == StatusChoices.PASS
assert requirements_list[0]["attributes"]["passed_findings"] == 8
assert requirements_list[0]["attributes"]["total_findings"] == 8
def test_requirement_status_some_fail(self):
"""Status is FAIL when some findings fail."""
mock_compliance = MagicMock()
mock_compliance.Framework = "TestFramework"
mock_compliance.Version = "1.0"
mock_requirement = MagicMock()
mock_requirement.Id = "req_2"
mock_requirement.Description = "Test requirement with failures"
mock_requirement.Checks = ["check_3"]
mock_requirement.Attributes = [MagicMock()]
mock_compliance.Requirements = [mock_requirement]
requirement_statistics = {
"check_3": {"passed": 2, "total": 5},
}
attributes_by_id, requirements_list = (
_calculate_requirements_data_from_statistics(
mock_compliance, requirement_statistics
)
)
assert len(requirements_list) == 1
assert requirements_list[0]["attributes"]["status"] == StatusChoices.FAIL
assert requirements_list[0]["attributes"]["passed_findings"] == 2
assert requirements_list[0]["attributes"]["total_findings"] == 5
def test_requirement_status_no_findings(self):
"""Status is MANUAL when no findings exist for requirement."""
mock_compliance = MagicMock()
mock_compliance.Framework = "TestFramework"
mock_compliance.Version = "1.0"
mock_requirement = MagicMock()
mock_requirement.Id = "req_3"
mock_requirement.Description = "Manual requirement"
mock_requirement.Checks = ["check_nonexistent"]
mock_requirement.Attributes = [MagicMock()]
mock_compliance.Requirements = [mock_requirement]
requirement_statistics = {}
attributes_by_id, requirements_list = (
_calculate_requirements_data_from_statistics(
mock_compliance, requirement_statistics
)
)
assert len(requirements_list) == 1
assert requirements_list[0]["attributes"]["status"] == StatusChoices.MANUAL
assert requirements_list[0]["attributes"]["passed_findings"] == 0
assert requirements_list[0]["attributes"]["total_findings"] == 0
def test_aggregates_multiple_checks(self):
"""Correctly sum stats across multiple checks in requirement."""
mock_compliance = MagicMock()
mock_compliance.Framework = "TestFramework"
mock_compliance.Version = "1.0"
mock_requirement = MagicMock()
mock_requirement.Id = "req_4"
mock_requirement.Description = "Multi-check requirement"
mock_requirement.Checks = ["check_a", "check_b", "check_c"]
mock_requirement.Attributes = [MagicMock()]
mock_compliance.Requirements = [mock_requirement]
requirement_statistics = {
"check_a": {"passed": 10, "total": 15},
"check_b": {"passed": 5, "total": 10},
"check_c": {"passed": 0, "total": 5},
}
attributes_by_id, requirements_list = (
_calculate_requirements_data_from_statistics(
mock_compliance, requirement_statistics
)
)
assert len(requirements_list) == 1
# 10 + 5 + 0 = 15 passed
assert requirements_list[0]["attributes"]["passed_findings"] == 15
# 15 + 10 + 5 = 30 total
assert requirements_list[0]["attributes"]["total_findings"] == 30
# Not all passed, so should be FAIL
assert requirements_list[0]["attributes"]["status"] == StatusChoices.FAIL
def test_returns_correct_structure(self):
"""Verify tuple structure and dict keys are correct."""
mock_compliance = MagicMock()
mock_compliance.Framework = "TestFramework"
mock_compliance.Version = "1.0"
mock_attribute = MagicMock()
mock_requirement = MagicMock()
mock_requirement.Id = "req_5"
mock_requirement.Description = "Structure test"
mock_requirement.Checks = ["check_struct"]
mock_requirement.Attributes = [mock_attribute]
mock_compliance.Requirements = [mock_requirement]
requirement_statistics = {"check_struct": {"passed": 1, "total": 1}}
attributes_by_id, requirements_list = (
_calculate_requirements_data_from_statistics(
mock_compliance, requirement_statistics
)
)
# Verify attributes_by_id structure
assert "req_5" in attributes_by_id
assert "attributes" in attributes_by_id["req_5"]
assert "description" in attributes_by_id["req_5"]
assert "req_attributes" in attributes_by_id["req_5"]["attributes"]
assert "checks" in attributes_by_id["req_5"]["attributes"]
# Verify requirements_list structure
assert len(requirements_list) == 1
req = requirements_list[0]
assert "id" in req
assert "attributes" in req
assert "framework" in req["attributes"]
assert "version" in req["attributes"]
assert "status" in req["attributes"]
assert "description" in req["attributes"]
assert "passed_findings" in req["attributes"]
assert "total_findings" in req["attributes"]
@pytest.mark.django_db
class TestGenerateThreatscoreReportFunction:
def setup_method(self):
self.scan_id = str(uuid.uuid4())
self.provider_id = str(uuid.uuid4())
self.tenant_id = str(uuid.uuid4())
self.compliance_id = "prowler_threatscore_aws"
self.output_path = "/tmp/test_threatscore_report.pdf"
@patch("tasks.jobs.report.initialize_prowler_provider")
@patch("tasks.jobs.report.Provider.objects.get")
@patch("tasks.jobs.report.Compliance.get_bulk")
@patch("tasks.jobs.report._aggregate_requirement_statistics_from_database")
@patch("tasks.jobs.report._calculate_requirements_data_from_statistics")
@patch("tasks.jobs.report._load_findings_for_requirement_checks")
@patch("tasks.jobs.report.SimpleDocTemplate")
@patch("tasks.jobs.report.Image")
@patch("tasks.jobs.report.Spacer")
@patch("tasks.jobs.report.Paragraph")
@patch("tasks.jobs.report.PageBreak")
@patch("tasks.jobs.report.Table")
@patch("tasks.jobs.report.TableStyle")
@patch("tasks.jobs.report.plt.subplots")
@patch("tasks.jobs.report.plt.savefig")
@patch("tasks.jobs.report.io.BytesIO")
def test_generate_threatscore_report_success(
self,
mock_bytesio,
mock_savefig,
mock_subplots,
mock_table_style,
mock_table,
mock_page_break,
mock_paragraph,
mock_spacer,
mock_image,
mock_doc_template,
mock_load_findings,
mock_calculate_requirements,
mock_aggregate_statistics,
mock_compliance_get_bulk,
mock_provider_get,
mock_initialize_provider,
):
"""Test the updated generate_threatscore_report using new memory-efficient architecture."""
mock_provider = MagicMock()
mock_provider.provider = "aws"
mock_provider_get.return_value = mock_provider
prowler_provider = MagicMock()
mock_initialize_provider.return_value = prowler_provider
# Mock compliance object with requirements
mock_compliance_obj = MagicMock()
mock_compliance_obj.Framework = "ProwlerThreatScore"
mock_compliance_obj.Version = "1.0"
mock_compliance_obj.Description = "Test Description"
# Configure requirement with properly set numeric attributes for chart generation
mock_requirement = MagicMock()
mock_requirement.Id = "req_1"
mock_requirement.Description = "Test requirement"
mock_requirement.Checks = ["check_1"]
# Create a properly configured attribute mock with numeric values
mock_requirement_attr = MagicMock()
mock_requirement_attr.Section = "1. IAM"
mock_requirement_attr.SubSection = "1.1 Identity"
mock_requirement_attr.Title = "Test Requirement Title"
mock_requirement_attr.LevelOfRisk = 3
mock_requirement_attr.Weight = 100
mock_requirement_attr.AttributeDescription = "Test requirement description"
mock_requirement_attr.AdditionalInformation = "Additional test information"
mock_requirement.Attributes = [mock_requirement_attr]
mock_compliance_obj.Requirements = [mock_requirement]
mock_compliance_get_bulk.return_value = {
self.compliance_id: mock_compliance_obj
}
# Mock the aggregated statistics from database
mock_aggregate_statistics.return_value = {"check_1": {"passed": 5, "total": 10}}
# Mock the calculated requirements data with properly configured attributes
mock_attributes_by_id = {
"req_1": {
"attributes": {
"req_attributes": [mock_requirement_attr],
"checks": ["check_1"],
},
"description": "Test requirement",
}
}
mock_requirements_list = [
{
"id": "req_1",
"attributes": {
"framework": "ProwlerThreatScore",
"version": "1.0",
"status": StatusChoices.FAIL,
"description": "Test requirement",
"passed_findings": 5,
"total_findings": 10,
},
}
]
mock_calculate_requirements.return_value = (
mock_attributes_by_id,
mock_requirements_list,
)
# Mock the on-demand loaded findings
mock_finding_output = MagicMock()
mock_finding_output.check_id = "check_1"
mock_finding_output.status = "FAIL"
mock_finding_output.metadata = MagicMock()
mock_finding_output.metadata.CheckTitle = "Test Check"
mock_finding_output.metadata.Severity = "HIGH"
mock_finding_output.resource_name = "test-resource"
mock_finding_output.region = "us-east-1"
mock_load_findings.return_value = {"check_1": [mock_finding_output]}
# Mock PDF generation components
mock_doc = MagicMock()
mock_doc_template.return_value = mock_doc
mock_fig, mock_ax = MagicMock(), MagicMock()
mock_subplots.return_value = (mock_fig, mock_ax)
mock_buffer = MagicMock()
mock_bytesio.return_value = mock_buffer
mock_image.return_value = MagicMock()
mock_spacer.return_value = MagicMock()
mock_paragraph.return_value = MagicMock()
mock_page_break.return_value = MagicMock()
mock_table.return_value = MagicMock()
mock_table_style.return_value = MagicMock()
# Execute the function
generate_threatscore_report(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
compliance_id=self.compliance_id,
output_path=self.output_path,
provider_id=self.provider_id,
only_failed=True,
min_risk_level=4,
)
# Verify the new workflow was followed
mock_provider_get.assert_called_once_with(id=self.provider_id)
mock_initialize_provider.assert_called_once_with(mock_provider)
mock_compliance_get_bulk.assert_called_once_with("aws")
# Verify the new functions were called in correct order with correct parameters
mock_aggregate_statistics.assert_called_once_with(self.tenant_id, self.scan_id)
mock_calculate_requirements.assert_called_once_with(
mock_compliance_obj, {"check_1": {"passed": 5, "total": 10}}
)
mock_load_findings.assert_called_once_with(
self.tenant_id, self.scan_id, ["check_1"], prowler_provider
)
# Verify PDF was built
mock_doc_template.assert_called_once()
mock_doc.build.assert_called_once()
@patch("tasks.jobs.report.initialize_prowler_provider")
@patch("tasks.jobs.report.Provider.objects.get")
@patch("tasks.jobs.report.Compliance.get_bulk")
@patch("tasks.jobs.report.Finding.all_objects.filter")
def test_generate_threatscore_report_exception_handling(
self,
mock_finding_filter,
mock_compliance_get_bulk,
mock_provider_get,
mock_initialize_provider,
):
mock_provider_get.side_effect = Exception("Provider not found")
with pytest.raises(Exception, match="Provider not found"):
generate_threatscore_report(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
compliance_id=self.compliance_id,
output_path=self.output_path,
provider_id=self.provider_id,
only_failed=True,
min_risk_level=4,
)
@pytest.mark.django_db
class TestGenerateThreatscoreReportTask:
def setup_method(self):
self.scan_id = str(uuid.uuid4())
self.provider_id = str(uuid.uuid4())
self.tenant_id = str(uuid.uuid4())
@patch("tasks.tasks.generate_threatscore_report_job")
def test_generate_threatscore_report_task_calls_job(self, mock_generate_job):
mock_generate_job.return_value = {"upload": True}
result = generate_threatscore_report_task(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
assert result == {"upload": True}
mock_generate_job.assert_called_once_with(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
@patch("tasks.tasks.generate_threatscore_report_job")
def test_generate_threatscore_report_task_handles_job_exception(
self, mock_generate_job
):
mock_generate_job.side_effect = Exception("Job failed")
with pytest.raises(Exception, match="Job failed"):
generate_threatscore_report_task(
tenant_id=self.tenant_id,
scan_id=self.scan_id,
provider_id=self.provider_id,
)
+127 -59
@@ -98,7 +98,11 @@ class TestGenerateOutputs:
),
patch(
"tasks.tasks._generate_output_directory",
return_value=("out-dir", "comp-dir"),
return_value=(
"/tmp/test/out-dir",
"/tmp/test/comp-dir",
"/tmp/test/threat-dir",
),
),
patch("tasks.tasks.Scan.all_objects.filter") as mock_scan_update,
patch("tasks.tasks.rmtree"),
@@ -126,7 +130,8 @@ class TestGenerateOutputs:
patch("tasks.tasks.get_compliance_frameworks"),
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory", return_value=("out", "comp")
"tasks.tasks._generate_output_directory",
return_value=("/tmp/test/out", "/tmp/test/comp", "/tmp/test/threat"),
),
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
patch("tasks.tasks.FindingOutput.transform_api_finding"),
@@ -168,15 +173,35 @@ class TestGenerateOutputs:
mock_finding_output = MagicMock()
mock_finding_output.compliance = {"cis": ["requirement-1", "requirement-2"]}
html_writer_mock = MagicMock()
html_writer_mock._data = []
html_writer_mock.close_file = False
html_writer_mock.transform = MagicMock()
html_writer_mock.batch_write_data_to_file = MagicMock()
compliance_writer_mock = MagicMock()
compliance_writer_mock._data = []
compliance_writer_mock.close_file = False
compliance_writer_mock.transform = MagicMock()
compliance_writer_mock.batch_write_data_to_file = MagicMock()
# Create a mock class that returns our mock instance when called
mock_compliance_class = MagicMock(return_value=compliance_writer_mock)
mock_provider = MagicMock()
mock_provider.provider = "aws"
mock_provider.uid = "test-provider-uid"
with (
patch("tasks.tasks.ScanSummary.objects.filter") as mock_filter,
patch("tasks.tasks.Provider.objects.get"),
patch("tasks.tasks.Provider.objects.get", return_value=mock_provider),
patch("tasks.tasks.initialize_prowler_provider"),
patch("tasks.tasks.Compliance.get_bulk", return_value={"cis": MagicMock()}),
patch("tasks.tasks.get_compliance_frameworks", return_value=["cis"]),
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory", return_value=("out", "comp")
"tasks.tasks._generate_output_directory",
return_value=("/tmp/test/out", "/tmp/test/comp", "/tmp/test/threat"),
),
patch(
"tasks.tasks.FindingOutput._transform_findings_stats",
@@ -190,6 +215,20 @@ class TestGenerateOutputs:
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/f.zip"),
patch("tasks.tasks.Scan.all_objects.filter"),
patch("tasks.tasks.rmtree"),
patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"html": {
"class": lambda *args, **kwargs: html_writer_mock,
"suffix": ".html",
"kwargs": {},
}
},
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, mock_compliance_class)]},
),
):
mock_filter.return_value.exists.return_value = True
mock_findings.return_value.order_by.return_value.iterator.return_value = [
@@ -197,29 +236,12 @@ class TestGenerateOutputs:
True,
]
html_writer_mock = MagicMock()
with (
patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"html": {
"class": lambda *args, **kwargs: html_writer_mock,
"suffix": ".html",
"kwargs": {},
}
},
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, MagicMock())]},
),
):
generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
html_writer_mock.batch_write_data_to_file.assert_called_once()
generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
html_writer_mock.batch_write_data_to_file.assert_called_once()
def test_transform_called_only_on_second_batch(self):
raw1 = MagicMock()
@@ -256,7 +278,11 @@ class TestGenerateOutputs:
),
patch(
"tasks.tasks._generate_output_directory",
return_value=("outdir", "compdir"),
return_value=(
"/tmp/test/outdir",
"/tmp/test/compdir",
"/tmp/test/threatdir",
),
),
patch("tasks.tasks._compress_output_files", return_value="outdir.zip"),
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/outdir.zip"),
@@ -303,12 +329,14 @@ class TestGenerateOutputs:
def __init__(self, *args, **kwargs):
self.transform_calls = []
self._data = []
self.close_file = False
writer_instances.append(self)
def transform(self, fos, comp_obj, name):
self.transform_calls.append((fos, comp_obj, name))
def batch_write_data_to_file(self):
# Mock implementation - do nothing
pass
two_batches = [
@@ -329,7 +357,11 @@ class TestGenerateOutputs:
patch("tasks.tasks.get_compliance_frameworks", return_value=["cis"]),
patch(
"tasks.tasks._generate_output_directory",
return_value=("outdir", "compdir"),
return_value=(
"/tmp/test/outdir",
"/tmp/test/compdir",
"/tmp/test/threatdir",
),
),
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
patch(
@@ -368,15 +400,35 @@ class TestGenerateOutputs:
mock_finding_output = MagicMock()
mock_finding_output.compliance = {"cis": ["requirement-1", "requirement-2"]}
json_writer_mock = MagicMock()
json_writer_mock._data = []
json_writer_mock.close_file = False
json_writer_mock.transform = MagicMock()
json_writer_mock.batch_write_data_to_file = MagicMock()
compliance_writer_mock = MagicMock()
compliance_writer_mock._data = []
compliance_writer_mock.close_file = False
compliance_writer_mock.transform = MagicMock()
compliance_writer_mock.batch_write_data_to_file = MagicMock()
# Create a mock class that returns our mock instance when called
mock_compliance_class = MagicMock(return_value=compliance_writer_mock)
mock_provider = MagicMock()
mock_provider.provider = "aws"
mock_provider.uid = "test-provider-uid"
with (
patch("tasks.tasks.ScanSummary.objects.filter") as mock_filter,
patch("tasks.tasks.Provider.objects.get"),
patch("tasks.tasks.Provider.objects.get", return_value=mock_provider),
patch("tasks.tasks.initialize_prowler_provider"),
patch("tasks.tasks.Compliance.get_bulk", return_value={"cis": MagicMock()}),
patch("tasks.tasks.get_compliance_frameworks", return_value=["cis"]),
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory", return_value=("out", "comp")
"tasks.tasks._generate_output_directory",
return_value=("/tmp/test/out", "/tmp/test/comp", "/tmp/test/threat"),
),
patch(
"tasks.tasks.FindingOutput._transform_findings_stats",
@@ -390,6 +442,20 @@ class TestGenerateOutputs:
patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/file.zip"),
patch("tasks.tasks.Scan.all_objects.filter"),
patch("tasks.tasks.rmtree", side_effect=Exception("Test deletion error")),
patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"json": {
"class": lambda *args, **kwargs: json_writer_mock,
"suffix": ".json",
"kwargs": {},
}
},
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, mock_compliance_class)]},
),
):
mock_filter.return_value.exists.return_value = True
mock_findings.return_value.order_by.return_value.iterator.return_value = [
@@ -397,29 +463,13 @@ class TestGenerateOutputs:
True,
]
with (
patch(
"tasks.tasks.OUTPUT_FORMATS_MAPPING",
{
"json": {
"class": lambda *args, **kwargs: MagicMock(),
"suffix": ".json",
"kwargs": {},
}
},
),
patch(
"tasks.tasks.COMPLIANCE_CLASS_MAP",
{"aws": [(lambda x: True, MagicMock())]},
),
):
with caplog.at_level("ERROR"):
generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
assert "Error deleting output files" in caplog.text
with caplog.at_level("ERROR"):
generate_outputs_task(
scan_id=self.scan_id,
provider_id=self.provider_id,
tenant_id=self.tenant_id,
)
assert "Error deleting output files" in caplog.text
@patch("tasks.tasks.rls_transaction")
@patch("tasks.tasks.Integration.objects.filter")
@@ -435,7 +485,8 @@ class TestGenerateOutputs:
patch("tasks.tasks.get_compliance_frameworks", return_value=[]),
patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
patch(
"tasks.tasks._generate_output_directory", return_value=("out", "comp")
"tasks.tasks._generate_output_directory",
return_value=("/tmp/test/out", "/tmp/test/comp", "/tmp/test/threat"),
),
patch("tasks.tasks.FindingOutput._transform_findings_stats"),
patch("tasks.tasks.FindingOutput.transform_api_finding"),
@@ -476,8 +527,15 @@ class TestScanCompleteTasks:
@patch("tasks.tasks.create_compliance_requirements_task.apply_async")
@patch("tasks.tasks.perform_scan_summary_task.si")
@patch("tasks.tasks.generate_outputs_task.si")
@patch("tasks.tasks.generate_threatscore_report_task.si")
@patch("tasks.tasks.check_integrations_task.si")
def test_scan_complete_tasks(
self, mock_outputs_task, mock_scan_summary_task, mock_compliance_tasks
self,
mock_check_integrations_task,
mock_threatscore_task,
mock_outputs_task,
mock_scan_summary_task,
mock_compliance_tasks,
):
_perform_scan_complete_tasks("tenant-id", "scan-id", "provider-id")
mock_compliance_tasks.assert_called_once_with(
@@ -492,6 +550,16 @@ class TestScanCompleteTasks:
provider_id="provider-id",
tenant_id="tenant-id",
)
mock_threatscore_task.assert_called_once_with(
tenant_id="tenant-id",
scan_id="scan-id",
provider_id="provider-id",
)
mock_check_integrations_task.assert_called_once_with(
tenant_id="tenant-id",
provider_id="provider-id",
scan_id="scan-id",
)
@pytest.mark.django_db
@@ -662,7 +730,7 @@ class TestCheckIntegrationsTask:
mock_initialize_provider.return_value = MagicMock()
mock_compliance_bulk.return_value = {}
mock_get_frameworks.return_value = []
mock_generate_dir.return_value = ("out-dir", "comp-dir")
mock_generate_dir.return_value = ("out-dir", "comp-dir", "threat-dir")
mock_transform_stats.return_value = {"stats": "data"}
# Mock findings
@@ -787,7 +855,7 @@ class TestCheckIntegrationsTask:
mock_initialize_provider.return_value = MagicMock()
mock_compliance_bulk.return_value = {}
mock_get_frameworks.return_value = []
mock_generate_dir.return_value = ("out-dir", "comp-dir")
mock_generate_dir.return_value = ("out-dir", "comp-dir", "threat-dir")
mock_transform_stats.return_value = {"stats": "data"}
# Mock findings
@@ -903,7 +971,7 @@ class TestCheckIntegrationsTask:
mock_initialize_provider.return_value = MagicMock()
mock_compliance_bulk.return_value = {}
mock_get_frameworks.return_value = []
mock_generate_dir.return_value = ("out-dir", "comp-dir")
mock_generate_dir.return_value = ("out-dir", "comp-dir", "threat-dir")
mock_transform_stats.return_value = {"stats": "data"}
# Mock findings
+27
@@ -118,6 +118,33 @@ services:
- "../docker-entrypoint.sh"
- "beat"
check-scans:
build:
context: ./api
dockerfile: Dockerfile
target: dev
environment:
- DJANGO_SETTINGS_MODULE=config.django.devel
- DJANGO_LOGGING_FORMATTER=${LOGGING_FORMATTER:-human_readable}
env_file:
- path: .env
required: false
volumes:
- "./api/src/backend:/home/prowler/backend"
- "./api/pyproject.toml:/home/prowler/pyproject.toml"
depends_on:
postgres:
condition: service_healthy
valkey:
condition: service_healthy
stdin_open: true
tty: true
working_dir: /home/prowler/backend
entrypoint: []
command: ["poetry", "run", "python", "manage.py", "check_scans"]
profiles:
- tools
volumes:
outputs:
driver: local
File diff suppressed because it is too large
+300 -4
@@ -5,8 +5,7 @@ title: 'Prowler Services'
Here you can find how to create a new service, or extend an existing one, for a [Prowler Provider](/developer-guide/provider).
<Note>
First ensure that the provider you want to add the service to is already created. It can be checked [here](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers). If the provider is not present, please refer to the [Provider](/developer-guide/provider) documentation to create it from scratch.
First ensure that the provider you want to add the service to is already created. It can be checked [here](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers). If the provider is not present, please refer to the [Provider](./provider.md) documentation to create it from scratch.
</Note>
## Introduction
@@ -201,11 +200,11 @@ class <Item>(BaseModel):
#### Service Attributes
*Optimized Data Storage with Python Dictionaries*
_Optimized Data Storage with Python Dictionaries_
Each group of resources within a service should be structured as a Python [dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) to enable efficient lookups. The dictionary lookup operation has [O(1) complexity](https://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions), and lookups are executed constantly during a scan.
*Assigning Unique Identifiers*
_Assigning Unique Identifiers_
Each dictionary key must be a unique ID that identifies the resource unambiguously.
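As a minimal sketch of this pattern (the `Bucket` model and its fields are illustrative, not a real Prowler service), resources are stored in a dictionary keyed by a unique identifier such as the ARN:

```python
from pydantic import BaseModel


class Bucket(BaseModel):
    # Illustrative resource model; a real service would carry many more fields
    arn: str
    name: str
    region: str


# Resources keyed by their unique ARN, so every lookup is O(1)
buckets: dict[str, Bucket] = {}
bucket = Bucket(arn="arn:aws:s3:::my-bucket", name="my-bucket", region="us-east-1")
buckets[bucket.arn] = bucket

# Constant-time membership test and retrieval during check execution
if "arn:aws:s3:::my-bucket" in buckets:
    print(buckets["arn:aws:s3:::my-bucket"].region)
```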
@@ -241,6 +240,301 @@ Provider-Specific Permissions Documentation:
- [M365](/user-guide/providers/microsoft365/authentication#required-permissions)
- [GitHub](/user-guide/providers/github/authentication)
## Service Architecture and Cross-Service Communication
### Core Principle: Service Isolation with Client Communication
Each service must contain **ONLY** the information unique to that specific service. When a check requires information from multiple services, it must use the **client objects** of other services rather than directly accessing their data structures.
This architecture ensures:
- **Loose coupling** between services
- **Clear separation of concerns**
- **Maintainable and testable code**
- **Consistent data access patterns**
### Cross-Service Communication Pattern
Instead of services directly accessing each other's internal data, checks should import and use client objects:
**❌ INCORRECT - Direct data access:**
```python
# DON'T DO THIS
from prowler.providers.aws.services.cloudtrail.cloudtrail_service import cloudtrail_service
from prowler.providers.aws.services.s3.s3_service import s3_service
class cloudtrail_bucket_requires_mfa_delete(Check):
def execute(self):
# WRONG: Directly accessing service data
for trail in cloudtrail_service.trails.values():
for bucket in s3_service.buckets.values():
# Direct access violates separation of concerns
```
**✅ CORRECT - Client-based communication:**
```python
# DO THIS INSTEAD
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import cloudtrail_client
from prowler.providers.aws.services.s3.s3_client import s3_client
class cloudtrail_bucket_requires_mfa_delete(Check):
def execute(self):
# CORRECT: Using client objects for cross-service communication
for trail in cloudtrail_client.trails.values():
trail_bucket = trail.s3_bucket
for bucket in s3_client.buckets.values():
if trail_bucket == bucket.name:
# Use bucket properties through s3_client
if bucket.mfa_delete:
# Implementation logic
```
### Real-World Example: CloudTrail + S3 Integration
This example demonstrates how CloudTrail checks validate S3 bucket configurations:
```python
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import cloudtrail_client
from prowler.providers.aws.services.s3.s3_client import s3_client
class cloudtrail_bucket_requires_mfa_delete(Check):
    def execute(self):
        findings = []
        if cloudtrail_client.trails is not None:
            for trail in cloudtrail_client.trails.values():
                if trail.is_logging:
                    # Initialize the report for this trail (default FAIL until proven otherwise)
                    report = Check_Report_AWS(metadata=self.metadata(), resource=trail)
                    report.status = "FAIL"
                    report.status_extended = f"Trail {trail.name} bucket ({trail.s3_bucket}) does not have MFA delete enabled."
                    trail_bucket_is_in_account = False
                    trail_bucket = trail.s3_bucket
                    # Cross-service communication: CloudTrail check uses S3 client
                    for bucket in s3_client.buckets.values():
                        if trail_bucket == bucket.name:
                            trail_bucket_is_in_account = True
                            if bucket.mfa_delete:
                                report.status = "PASS"
                                report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) has MFA delete enabled."
                    # Handle cross-account scenarios
                    if not trail_bucket_is_in_account:
                        report.status = "MANUAL"
                        report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) is a cross-account bucket or out of Prowler's audit scope, please check it manually."
                    findings.append(report)
        return findings
```
**Key Benefits:**
- **CloudTrail service** only contains CloudTrail-specific data (trails, configurations)
- **S3 service** only contains S3-specific data (buckets, policies, ACLs)
- **Check logic** orchestrates between services using their public client interfaces
- **Cross-account detection** is handled gracefully when resources span accounts
### Service Consolidation Guidelines
**When to combine services in the same file:**
Implement multiple services as **separate classes in the same file** when two services are **practically the same** or one is a **direct extension** of another.
**Example: S3 and S3Control**
S3Control is an extension of S3 that provides account-level controls and access points. Both are implemented in `s3_service.py`:
```python
# File: prowler/providers/aws/services/s3/s3_service.py
class S3(AWSService):
"""Standard S3 service for bucket operations"""
def __init__(self, provider):
super().__init__(__class__.__name__, provider)
self.buckets = {}
self.regions_with_buckets = []
# S3-specific initialization
self._list_buckets(provider)
self._get_bucket_versioning()
# ... other S3-specific operations
class S3Control(AWSService):
"""S3Control service for account-level and access point operations"""
def __init__(self, provider):
super().__init__(__class__.__name__, provider)
self.account_public_access_block = None
self.access_points = {}
# S3Control-specific initialization
self._get_public_access_block()
self._list_access_points()
# ... other S3Control-specific operations
```
**Separate client files:**
```python
# File: prowler/providers/aws/services/s3/s3_client.py
from prowler.providers.aws.services.s3.s3_service import S3
s3_client = S3(Provider.get_global_provider())
# File: prowler/providers/aws/services/s3/s3control_client.py
from prowler.providers.aws.services.s3.s3_service import S3Control
s3control_client = S3Control(Provider.get_global_provider())
```
**When NOT to consolidate services:**
Keep services separate when they:
- **Operate on different resource types** (EC2 vs RDS)
- **Have different authentication mechanisms** (different API endpoints)
- **Serve different operational domains** (IAM vs CloudTrail)
- **Have different regional behaviors** (global vs regional services)
### Cross-Service Dependencies Guidelines
**1. Always use client imports:**
```python
# Correct pattern
from prowler.providers.aws.services.service_a.service_a_client import service_a_client
from prowler.providers.aws.services.service_b.service_b_client import service_b_client
```
**2. Handle missing resources gracefully:**
```python
# Handle cross-service scenarios
resource_found_in_account = False
for external_resource in other_service_client.resources.values():
if target_resource_id == external_resource.id:
resource_found_in_account = True
# Process found resource
break
if not resource_found_in_account:
# Handle cross-account or missing resource scenarios
report.status = "MANUAL"
report.status_extended = "Resource is cross-account or out of audit scope"
```
**3. Document cross-service dependencies:**
```python
class check_with_dependencies(Check):
"""
Check Description
Dependencies:
- service_a_client: For primary resource information
- service_b_client: For related resource validation
- service_c_client: For policy analysis
"""
```
## Regional Service Implementation
When implementing services for regional providers (like AWS, Azure, GCP), special considerations are needed to handle resource discovery across multiple geographic locations. This section provides a complete guide using AWS as the reference example.
### Regional vs Non-Regional Services
**Regional Services:** Require iteration across multiple geographic locations where resources may exist (e.g., EC2 instances, VPC, RDS databases).
**Non-Regional/Global Services:** Operate at a global or tenant level without regional concepts (e.g., IAM users, Route53 hosted zones).
### AWS Regional Implementation Example
AWS is the perfect example of a regional provider. Here's how Prowler handles AWS's regional architecture:
```python
# File: prowler/providers/aws/services/ec2/ec2_service.py
class EC2(AWSService):
def __init__(self, provider):
super().__init__(__class__.__name__, provider)
self.instances = {}
self.security_groups = {}
# Regional resource discovery across all AWS regions
self.__threading_call__(self._describe_instances)
self.__threading_call__(self._describe_security_groups)
def _describe_instances(self, regional_client):
"""Discover EC2 instances in a specific region"""
try:
describe_instances_paginator = regional_client.get_paginator("describe_instances")
for page in describe_instances_paginator.paginate():
for reservation in page["Reservations"]:
for instance in reservation["Instances"]:
# Each instance includes its region
self.instances[instance["InstanceId"]] = Instance(
id=instance["InstanceId"],
region=regional_client.region,
state=instance["State"]["Name"],
# ... other properties
)
except Exception as error:
logger.error(f"Failed to describe instances in {regional_client.region}: {error}")
```
#### Regional Check Execution
```python
# File: prowler/providers/aws/services/ec2/ec2_instance_public_ip/ec2_instance_public_ip.py
class ec2_instance_public_ip(Check):
def execute(self):
findings = []
# Automatically iterates across ALL AWS regions where instances exist
for instance in ec2_client.instances.values():
report = Check_Report_AWS(metadata=self.metadata(), resource=instance)
report.region = instance.region # Critical: region attribution
report.resource_arn = f"arn:aws:ec2:{instance.region}:{instance.account_id}:instance/{instance.id}"
if instance.public_ip:
report.status = "FAIL"
report.status_extended = f"Instance {instance.id} in {instance.region} has public IP {instance.public_ip}"
else:
report.status = "PASS"
report.status_extended = f"Instance {instance.id} in {instance.region} does not have a public IP"
findings.append(report)
return findings
```
#### Key AWS Regional Features
**Region-Specific ARNs:**
```
arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0
arn:aws:lambda:eu-west-1:123456789012:function:my-function
arn:aws:rds:ap-southeast-2:123456789012:db:my-database
```
**Parallel Processing:**
- Each region processed independently in separate threads
- Failed regions don't affect other regions
- User can filter specific regions: `-f us-east-1`
**Global vs Regional Services:**
- **Regional**: EC2, RDS, VPC (require region iteration)
- **Global**: IAM, Route53, CloudFront (single `us-east-1` call)
This architecture allows Prowler to efficiently scan AWS accounts with resources spread across multiple regions while maintaining performance and error isolation.
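To make the threading model concrete, here is a minimal sketch of how a regional fan-out helper in the spirit of `__threading_call__` could work; the `RegionalClient` class, worker count, and error handling below are assumptions for illustration, not Prowler's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor


class RegionalClient:
    # Hypothetical stand-in for a provider SDK client bound to one region
    def __init__(self, region: str):
        self.region = region


def threading_call(call, regional_clients):
    # Run `call` once per regional client; a failure in one region is
    # reported but does not stop discovery in the other regions
    with ThreadPoolExecutor(max_workers=10) as executor:
        futures = {
            executor.submit(call, client): client.region
            for client in regional_clients
        }
        for future, region in futures.items():
            try:
                future.result()
            except Exception as error:
                print(f"Region {region} failed: {error}")


def describe_resources(regional_client):
    # Per-region discovery; a real service would paginate SDK calls here
    print(f"Discovering resources in {regional_client.region}")


threading_call(
    describe_resources,
    [RegionalClient("us-east-1"), RegionalClient("eu-west-1")],
)
```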
### Regional Service Best Practices
1. **Use Threading for Regional Discovery**: Leverage the `__threading_call__` method to parallelize resource discovery across regions
2. **Store Region Information**: Always include region metadata in resource objects for proper attribution
3. **Handle Regional Failures Gracefully**: Ensure that failures in one region don't affect others
4. **Optimize for Performance**: Use paginated calls and efficient data structures for large-scale resource discovery
5. **Support Region Filtering**: Allow users to limit scans to specific regions for focused audits
## Best Practices
- When available in the provider, use threading or parallelization utilities for all methods that can be parallelized to maximize performance and reduce scan time.
@@ -252,3 +546,5 @@ Provider-Specific Permissions Documentation:
- Collect and store resource tags and additional attributes to support richer checks and reporting.
- Leverage shared utility helpers for session setup, identifier parsing, and other cross-cutting concerns to avoid code duplication. This kind of code is typically stored in a `lib` folder in the service folder.
- Keep code modular, maintainable, and well-documented for ease of extension and troubleshooting.
- **Each service should contain only information unique to that specific service** - use client objects for cross-service communication.
- **Handle cross-account and missing resources gracefully** when checks span multiple services.
+6 -9
@@ -114,7 +114,8 @@
"group": "Tutorials",
"pages": [
"user-guide/tutorials/prowler-app-sso-entra",
"user-guide/tutorials/bulk-provider-provisioning"
"user-guide/tutorials/bulk-provider-provisioning",
"user-guide/tutorials/aws-organizations-bulk-provisioning"
]
}
]
@@ -404,14 +405,6 @@
"source": "/projects/prowler-open-source/en/latest/tutorials/gcp/getting-started-gcp",
"destination": "/user-guide/providers/gcp/getting-started-gcp"
},
{
"source": "/projects/prowler-open-source/en/latest/tutorials/prowler-app",
"destination": "/user-guide/tutorials/prowler-app#step-4-4%3A-kubernetes-credentials%3A"
},
{
"source": "/projects/prowler-open-source/en/latest/tutorials/prowler-app/#step-3-add-a-provider",
"destination": "/user-guide/tutorials/prowler-app#step-3-add-a-provider"
},
{
"source": "/projects/prowler-open-source/en/latest/tutorials/microsoft365/getting-started-m365",
"destination": "/user-guide/providers/microsoft365/getting-started-m365"
@@ -431,6 +424,10 @@
{
"source": "/projects/prowler-saas/en/latest/:slug*",
"destination": "https://docs.prowler.pro/en/latest/:slug*"
},
{
"source": "/projects/prowler-open-source/en/latest/tutorials/:slug*",
"destination": "/user-guide/tutorials/:slug*"
}
]
}
@@ -25,6 +25,9 @@ Prowler configuration is based in `.env` files. Every version of Prowler can hav
curl -LO https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/master/.env
docker compose up -d
```
> Containers are built for `linux/amd64`. If your workstation's architecture is different, please set `DOCKER_DEFAULT_PLATFORM=linux/amd64` in your environment or use the `--platform linux/amd64` flag in the docker command.
</Tab>
<Tab title="GitHub">
_Requirements_:
@@ -182,19 +182,19 @@ Configure the server using environment variables:
|----------|-------------|----------|---------|
| `PROWLER_APP_API_KEY` | Prowler API key | Only for STDIO mode | - |
| `PROWLER_API_BASE_URL` | Custom Prowler API endpoint | No | `https://api.prowler.com` |
| `PROWLER_MCP_MODE` | Default transport mode (overwritten by `--transport` argument) | No | `stdio` |
| `PROWLER_MCP_TRANSPORT_MODE` | Default transport mode (overwritten by `--transport` argument) | No | `stdio` |
<CodeGroup>
```bash macOS/Linux
export PROWLER_APP_API_KEY="pk_your_api_key_here"
export PROWLER_API_BASE_URL="https://api.prowler.com"
export PROWLER_MCP_MODE="http"
export PROWLER_MCP_TRANSPORT_MODE="http"
```
```bash Windows PowerShell
$env:PROWLER_APP_API_KEY="pk_your_api_key_here"
$env:PROWLER_API_BASE_URL="https://api.prowler.com"
$env:PROWLER_MCP_MODE="http"
$env:PROWLER_MCP_TRANSPORT_MODE="http"
```
</CodeGroup>
@@ -209,7 +209,7 @@ For convenience, create a `.env` file in the `mcp_server` directory:
```bash .env
PROWLER_APP_API_KEY=pk_your_api_key_here
PROWLER_API_BASE_URL=https://api.prowler.com
PROWLER_MCP_MODE=stdio
PROWLER_MCP_TRANSPORT_MODE=stdio
```
When using Docker, pass the environment file:
@@ -228,6 +228,35 @@ uvx /path/to/prowler/mcp_server/
This is particularly useful when configuring MCP clients that need to launch the server from a specific path.
## Production Deployment
For production deployments that require customization, it is recommended to use the ASGI application that can be found in `prowler_mcp_server.server`. This can be run with uvicorn:
```bash
uvicorn prowler_mcp_server.server:app --host 0.0.0.0 --port 8000
```
For more details on production deployment options, see the [FastMCP production deployment guide](https://gofastmcp.com/deployment/http#production-deployment) and [uvicorn settings](https://www.uvicorn.org/settings/).
### Entrypoint Script
The source tree includes `entrypoint.sh` to simplify switching between the standard CLI runner and the ASGI app. The first argument selects the mode, and any additional flags are passed straight through:
```bash
# Default CLI experience (prowler-mcp console script)
./entrypoint.sh main --transport http --host 0.0.0.0
# ASGI app via uvicorn
./entrypoint.sh uvicorn --host 0.0.0.0 --port 9000
```
Omitting the mode defaults to `main`, matching the `prowler-mcp` console script.
When `uvicorn` mode is selected, the script exports `PROWLER_MCP_TRANSPORT_MODE=http` automatically.
This is the default entrypoint for the Docker container.
## Next Steps
Now that you have the Prowler MCP Server installed, proceed to configure your MCP client:
Binary files not shown: four new images added (412 KiB, 285 KiB, 346 KiB, and 71 KiB).
+22 -44
@@ -5,69 +5,47 @@
![](/images/products/overview.png)
<Columns cols={2}>
<Card
title="Prowler CLI"
icon="terminal"
href="/getting-started/products/prowler-cli"
>
<Card title="Prowler CLI" icon="terminal" href="/getting-started/products/prowler-cli">
Command Line Interface
</Card>
<Card
title="Prowler App"
icon="pen-to-square"
href="/getting-started/products/prowler-app"
>
<Card title="Prowler App" icon="pen-to-square" href="/getting-started/products/prowler-app">
Web Application
</Card>
<Card
title="Prowler Cloud"
icon="pen-to-square"
href="/getting-started/products/prowler-cloud"
>
<Card title="Prowler Cloud" icon="pen-to-square" href="/getting-started/products/prowler-cloud">
A managed service built on top of Prowler App.
</Card>
<Card
title="Prowler Hub"
icon="map"
href="/getting-started/products/prowler-hub"
>
<Card title="Prowler Hub" icon="map" href="/getting-started/products/prowler-hub">
A public library of versioned checks, cloud service artifacts, and compliance frameworks.
</Card>
</Columns>
## Supported Providers
The supported providers right now are:
| Provider | Support | Interface |
|----------|--------|----------|
| [AWS](/user-guide/providers/aws/getting-started-aws) | Official | UI, API, CLI |
| [Azure](/user-guide/providers/azure/getting-started-azure) | Official | UI, API, CLI |
| [Google Cloud](/user-guide/providers/gcp/getting-started-gcp) | Official | UI, API, CLI |
| [Kubernetes](/user-guide/providers/kubernetes/in-cluster) | Official | UI, API, CLI |
| [M365](/user-guide/providers/microsoft365/getting-started-m365) | Official | UI, API, CLI |
| [Github](/user-guide/providers/github/getting-started-github) | Official | UI, API, CLI |
| [Oracle Cloud](/user-guide/providers/oci/getting-started-oci) | Official | CLI |
| [Infra as Code](/user-guide/providers/iac/getting-started-iac) | Official | CLI |
| [MongoDB Atlas](/user-guide/providers/mongodbatlas/getting-started-mongodbatlas) | Official | CLI |
| [LLM](/user-guide/providers/llm/getting-started-llm) | Official | CLI |
| **NHN** | Unofficial | CLI |
| Provider | Support | Interface |
| -------------------------------------------------------------------------------- | ---------- | ------------ |
| [AWS](/user-guide/providers/aws/getting-started-aws) | Official | UI, API, CLI |
| [Azure](/user-guide/providers/azure/getting-started-azure) | Official | UI, API, CLI |
| [Google Cloud](/user-guide/providers/gcp/getting-started-gcp) | Official | UI, API, CLI |
| [Kubernetes](/user-guide/providers/kubernetes/in-cluster) | Official | UI, API, CLI |
| [M365](/user-guide/providers/microsoft365/getting-started-m365) | Official | UI, API, CLI |
| [GitHub](/user-guide/providers/github/getting-started-github)                    | Official   | UI, API, CLI |
| [Oracle Cloud](/user-guide/providers/oci/getting-started-oci) | Official | CLI |
| [Infra as Code](/user-guide/providers/iac/getting-started-iac) | Official | CLI |
| [MongoDB Atlas](/user-guide/providers/mongodbatlas/getting-started-mongodbatlas) | Official | CLI |
| [LLM](/user-guide/providers/llm/getting-started-llm) | Official | CLI |
| **NHN** | Unofficial | CLI |
For more information about the checks and compliance of each provider visit [Prowler Hub](https://hub.prowler.com).
## Where to go next?
<Columns cols={2}>
<Card
title="User Guide"
icon="terminal"
href="/user-guide/tutorials/prowler-app"
>
<Card title="User Guide" icon="terminal" href="/user-guide/tutorials/prowler-app">
Detailed instructions on how to use Prowler.
</Card>
<Card
title="Development Guide"
icon="pen-to-square"
href="/developer-guide/introduction"
>
<Card title="Development Guide" icon="pen-to-square" href="/developer-guide/introduction">
Interested in contributing to Prowler?
</Card>
</Columns>
+1 -1
@@ -16,7 +16,7 @@ We use encryption everywhere possible. The data and communications used by **Pro
Prowler Cloud is GDPR compliant with regard to personal data and the ["right to be forgotten"](https://gdpr.eu/right-to-be-forgotten/). When a user deletes their account, their user information will be deleted from Prowler Cloud's online and backup systems within 10 calendar days.
## Software Security
We follow a **security-by-design approach** throughout our software development lifecycle. All changes go through automated checks at every stage, from local development to production deployment.
@@ -144,10 +144,19 @@ GitHub Apps provide the recommended integration method for accessing multiple re
2. **Create New GitHub App**
- Click "New GitHub App"
- Complete the required fields:
- **GitHub App name**: Choose a unique, descriptive name (e.g., "Prowler Security Scanner")
- **Homepage URL**: Enter your organization's website or the Prowler documentation URL (e.g., `https://prowler.com` or `https://docs.prowler.com`). This is just for reference and doesn't affect functionality.
- **Webhook URL**: Leave blank or uncheck "Active" under Webhook. Prowler doesn't require webhooks since it performs on-demand scans rather than responding to GitHub events.
- **Webhook secret**: Leave blank (not needed for Prowler)
- **Permissions**: Configure in the next step (see below)
<Note>
**About Homepage URL and Webhooks**
The Homepage URL is purely informational and can be any valid URL; it's just displayed to users who view the app. Use your company website, your GitHub organization URL, or even `https://docs.prowler.com`.
Webhooks are **not required** for Prowler. Since Prowler performs on-demand security scans when you run it (rather than automatically responding to GitHub events), you can safely disable webhooks or leave the URL blank.
</Note>
3. **Configure Permissions**
To enable Prowler functionality, configure these permissions:
@@ -1,24 +1,26 @@
---
title: "Microsoft 365 Authentication in Prowler"
---
Prowler for Microsoft 365 supports multiple authentication types. Authentication methods vary between Prowler App and Prowler CLI:
**Prowler App:**
- [**Application Certificate Authentication**](#certificate-based-authentication) (**Recommended**)
- [**Application Client Secret Authentication**](#client-secret-authentication)

**Prowler CLI:**

- [**Application Certificate Authentication**](#certificate-based-authentication) (**Recommended**)
- [**Application Client Secret Authentication**](#client-secret-authentication)
- [**Azure CLI Authentication**](#azure-cli-authentication)
- [**Interactive Browser Authentication**](#interactive-browser-authentication)
## Required Permissions
To run the full Prowler provider, including PowerShell checks, two types of permission scopes must be set in **Microsoft Entra ID**.
### Application Permissions for App-Only Authentication
When using service principal authentication, add these **Application Permissions**:
@@ -35,11 +37,11 @@ When using service principal authentication, add these **Application Permissions
- `application_access` from external API `Skype and Teams Tenant Admin API`: Required for Teams PowerShell module app authentication.
<Note>
`Directory.Read.All` can be replaced with `Domain.Read.All` for more restrictive permissions, but Entra checks related to DirectoryRoles and GetUsers will not run. If using this option, you must also add the `Organization.Read.All` permission to the application registration for authentication.
</Note>
<Note>
These permissions enable application-based authentication methods (client secret and certificate). Using certificate-based authentication is the recommended way to run the full M365 provider, including PowerShell checks.
</Note>
### Browser Authentication Permissions
@@ -47,96 +49,189 @@ This is the **recommended authentication method** because it allows running the
When using browser authentication, permissions are delegated to the user, so the user must have the appropriate permissions rather than the application.
<Warning>
Browser and Azure CLI authentication methods limit scanning capabilities to checks that operate through Microsoft Graph API. Checks requiring PowerShell modules will not execute, as they need application-level permissions that cannot be delegated through browser authentication.
</Warning>
### Step-by-Step Permission Assignment
#### Create Application Registration
1. Access **Microsoft Entra ID**
![Overview of Microsoft Entra ID](/images/providers/microsoft-entra-id.png)
2. Navigate to "Applications" > "App registrations"
![App Registration nav](/images/providers/app-registration-menu.png)
3. Click "+ New registration", complete the form, and click "Register"
![New Registration](/images/providers/new-registration.png)
4. Go to "Certificates & secrets" > "Client secrets" > "+ New client secret"
![Certificate & Secrets nav](/images/providers/certificates-and-secrets.png)
5. Fill in the required fields and click "Add", then copy the generated value (this will be `AZURE_CLIENT_SECRET`)
![New Client Secret](/images/providers/new-client-secret.png)
#### Grant Microsoft Graph API Permissions
1. Go to App Registration > Select your Prowler App > click on "API permissions"
![API Permission Page](/images/providers/api-permissions-page.png)
2. Click "+ Add a permission" > "Microsoft Graph" > "Application permissions"
![Add API Permission](/images/providers/add-app-api-permission.png)
3. Search and select the required permissions:
- `AuditLog.Read.All`: Required for Entra service
- `Directory.Read.All`: Required for all services
- `Policy.Read.All`: Required for all services
- `SharePointTenantSettings.Read.All`: Required for SharePoint service
![Permission Screenshots](/images/providers/directory-permission.png)
4. Click "Add permissions", then click "Grant admin consent for `<your-tenant-name>`"
![Application Permissions](/images/providers/app-permissions.png)
#### Grant PowerShell Module Permissions
1. **Add Exchange API:**
- Search and select "Office 365 Exchange Online" API in **APIs my organization uses**
- Search and select "Office 365 Exchange Online" API in **APIs my organization uses**
![Office 365 Exchange Online API](/images/providers/search-exchange-api.png)
![Office 365 Exchange Online API](/images/providers/search-exchange-api.png)
- Select "Exchange.ManageAsApp" permission and click "Add permissions"
- Select "Exchange.ManageAsApp" permission and click "Add permissions"
![Exchange.ManageAsApp Permission](/images/providers/exchange-permission.png)
![Exchange.ManageAsApp Permission](/images/providers/exchange-permission.png)
- Assign `Global Reader` role to the app: Go to `Roles and administrators` > click `here` for directory level assignment
- Assign `Global Reader` role to the app: Go to `Roles and administrators` > click `here` for directory level assignment
![Roles and administrators](/images/providers/here.png)
![Roles and administrators](/images/providers/here.png)
- Search for `Global Reader` and assign it to your application
- Search for `Global Reader` and assign it to your application
![Global Reader Role](/images/providers/global-reader-role.png)
![Global Reader Role](/images/providers/global-reader-role.png)
2. **Add Teams API:**
- Search and select "Skype and Teams Tenant Admin API" in **APIs my organization uses**
- Search and select "Skype and Teams Tenant Admin API" in **APIs my organization uses**
![Skype and Teams Tenant Admin API](/images/providers/search-skype-teams-tenant-admin-api.png)
![Skype and Teams Tenant Admin API](/images/providers/search-skype-teams-tenant-admin-api.png)
- Select "application_access" permission and click "Add permissions"
- Select "application_access" permission and click "Add permissions"
![application_access Permission](/images/providers/teams-permission.png)
![application_access Permission](/images/providers/teams-permission.png)
3. Click "Grant admin consent for `<your-tenant-name>`" to grant admin consent
![Grant Admin Consent](/images/providers/grant-external-api-permissions.png)
![Grant Admin Consent](/images/providers/grant-external-api-permissions.png)
Final permissions should look like this:

![Final Permissions](/images/providers/final-permissions.png)

<a id="certificate-based-authentication"></a>
## Application Certificate Authentication (Recommended)
_Available for both Prowler App and Prowler CLI_
**Authentication flag for CLI:** `--certificate-auth`
Certificate-based authentication replaces the client secret with an X.509 certificate that signs Microsoft Entra ID tokens for the Prowler application registration.
This is the recommended approach for production environments because it avoids long-lived secrets, supports the full provider (including PowerShell checks), and simplifies unattended automation. Microsoft also recommends certificate credentials for app-only access; see [Manage certificates for applications](https://learn.microsoft.com/en-us/entra/identity-platform/certificate-credentials).
### Generate the Certificate
The application registration needs a certificate whose private key stays local (for Prowler) and whose public key is uploaded to Microsoft Entra ID. The following commands show a secure baseline workflow on macOS or Linux using OpenSSL:
```console
# 1. Create a private key (keep this file private; do not upload it to the portal)
openssl genrsa -out prowlerm365.key 2048
# 2. Create a self-signed certificate valid for two years
openssl req -x509 -new -nodes -key prowlerm365.key -sha256 -days 730 -out prowlerm365.cer -subj "/CN=ProwlerM365Cert"
# 3. Package the key and certificate into a passwordless PFX bundle for Prowler
openssl pkcs12 -export \
-out prowlerm365.pfx \
-inkey prowlerm365.key \
-in prowlerm365.cer \
-passout pass:
```
<Warning>
Guard `prowlerm365.key` and `prowlerm365.pfx`. Only upload the `.cer` file to the Entra ID portal. Rotate or revoke the certificate before it expires or if there is any suspicion of exposure.
</Warning>
If your organization uses a certificate authority, you can replace step 2 with a CSR workflow and import the signed certificate instead.
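Before uploading, you can confirm the certificate's subject and expiration with a quick OpenSSL check:

```console
openssl x509 -in prowlerm365.cer -noout -subject -enddate
```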
### Upload the Certificate to Microsoft Entra ID
1. Open **Microsoft Entra ID** > **App registrations** > your application.
2. Go to **Certificates & secrets** > **Certificates**.
3. Select **Upload certificate** and choose `prowlerm365.cer`.
4. Confirm the certificate appears with the expected expiration date.
After the certificate is in place, encode the PFX file so it can be stored in an environment variable (macOS example shown; on Linux, `base64 -w 0 prowlerm365.pfx > prowlerm365.pfx.b64` produces the same single-line output):
```console
base64 -i prowlerm365.pfx -o prowlerm365.pfx.b64
cat prowlerm365.pfx.b64 | tr -d '\n'
```
Copy the resulting single-line Base64 string (or the contents of `prowlerm365.pfx.b64`)—you will use it in the next step.
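As an optional sanity check, decode the string and compare it byte-for-byte against the original bundle (`--decode` works with both GNU and BSD `base64`):

```console
base64 --decode < prowlerm365.pfx.b64 | cmp - prowlerm365.pfx && echo "round-trip OK"
```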
### Provide the Certificate to Prowler
You can supply the private certificate to Prowler in two ways:
- **Environment variables (recommended for headless execution)**
```console
export AZURE_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export AZURE_TENANT_ID="11111111-1111-1111-1111-111111111111"
export M365_CERTIFICATE_CONTENT="$(base64 < prowlerm365.pfx | tr -d '\n')"
```
The `M365_CERTIFICATE_CONTENT` variable must contain a single-line Base64 string. Remove any line breaks or spaces before exporting.
- **Local file path**
Store the PFX securely and reference it when you run the CLI:
```console
python3 prowler-cli.py m365 --certificate-auth --certificate-path /secure/path/prowlerm365.pfx
```
The CLI still needs `AZURE_CLIENT_ID` and `AZURE_TENANT_ID` in the environment when you use `--certificate-path`.
For the **Prowler App**, paste the Base64-encoded PFX in the `certificate_content` field when you configure the provider secrets. The platform persists the encrypted certificate and supplies it during scans.
<Note>
Do not mix certificate authentication with a client secret. Provide either a certificate **or** a secret to the application registration and Prowler configuration.
</Note>
<a id="client-secret-authentication"></a>
<a id="service-principal-authentication"></a>
<a id="service-principal-authentication-recommended"></a>
## Application Client Secret Authentication
_Available for both Prowler App and Prowler CLI_
**Authentication flag for CLI:** `--sp-env-auth`
Authenticate using a **Microsoft Entra application registration with a client secret** by configuring the following environment variables:
```console
export AZURE_CLIENT_ID="XXXXXXXXX"
@@ -150,13 +245,61 @@ Refer to the [Step-by-Step Permission Assignment](#step-by-step-permission-assig
If the external API permissions described in the section above are not added, only checks that work through MS Graph will be executed, meaning the full provider will not run.
This workflow is helpful for initial validation or temporary access. Plan to transition to certificate-based authentication to remove long-lived secrets and keep full provider coverage in unattended environments.
<Note>
To scan every M365 check, ensure the required permissions are added to the application registration. Refer to the [PowerShell Module Permissions](#grant-powershell-module-permissions-for-app-only-authentication) section for more information.
</Note>
### Run Prowler with Certificate Authentication
After the variables or path are in place, run the Microsoft 365 provider as usual:
```console
python3 prowler-cli.py m365 --certificate-auth --init-modules --log-level ERROR
```
The command above initializes PowerShell modules if needed. You can combine other standard flags (for example, `--region M365USGovernment` or custom outputs) with `--certificate-auth`.
Prowler prints the certificate thumbprint during execution so you can confirm the correct credential is in use.
<a id="azure-cli-authentication"></a>
## Azure CLI Authentication
_Available only for Prowler CLI_
**Authentication flag for CLI:** `--az-cli-auth`
Azure CLI authentication relies on the identity that is already signed in with the Azure CLI. Before running Prowler, make sure you have an active CLI session in the target tenant:
```console
az login --tenant <TENANT_ID>
# Optional: enforce the tenant when several are available
az account set --tenant <TENANT_ID>
```
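Before launching Prowler, you can confirm the active session points at the expected tenant (a standard Azure CLI check):

```console
az account show --query "{tenant:tenantId, user:user.name}" --output table
```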
If you prefer to reuse the same service principal that powers certificate-based authentication, authenticate it through Azure CLI instead of exporting environment variables. Azure CLI expects the certificate in PEM format; convert the PFX produced earlier and sign in:
```console
openssl pkcs12 -in prowlerm365.pfx -out prowlerm365.pem -nodes
az login --service-principal \
--username <AZURE_CLIENT_ID> \
--password /secure/path/prowlerm365.pem \
--tenant <AZURE_TENANT_ID>
```
After the CLI session is authenticated, launch Prowler with the Azure CLI flag:
```console
python3 prowler-cli.py m365 --az-cli-auth
```
The Azure CLI identity must hold the same Microsoft Graph and external API permissions required for the full provider. Signing in with a user account limits the scan to delegated Microsoft Graph endpoints and skips PowerShell-based checks. Use a service principal with the necessary application permissions to keep complete coverage.
## Interactive Browser Authentication
_Available only for Prowler CLI_
**Authentication flag:** `--browser-auth`
@@ -171,6 +314,7 @@ Since this is a **delegated permission** authentication method, necessary permis
PowerShell is required to run certain M365 checks.
**Supported versions:**
- **PowerShell 7.4 or higher** (7.5 is recommended)
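You can verify the installed version from any shell before running Prowler (a quick check, assuming `pwsh` is on your PATH):

```console
pwsh -Command '$PSVersionTable.PSVersion'
```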
#### Why Is PowerShell 7.4+ Required?
@@ -193,6 +337,7 @@ Installing PowerShell is different depending on your OS:
```console
winget install --id Microsoft.PowerShell --source winget
```
</Tab>
<Tab title="MacOS">
[MacOS](https://learn.microsoft.com/es-es/powershell/scripting/install/installing-powershell-on-macos?view=powershell-7.5#install-the-latest-stable-release-of-powershell): installing PowerShell on macOS requires [brew](https://brew.sh/); once brew is installed, simply run the command shown above. Pwsh is only supported on macOS 15 (Sequoia) x64 and Arm64, macOS 14 (Sonoma) x64 and Arm64, and macOS 13 (Ventura) x64 and Arm64.
@@ -202,6 +347,7 @@ Installing PowerShell is different depending on your OS:
```
Once it's installed run `pwsh` on your terminal to verify it's working.
</Tab>
<Tab title="Linux (Ubuntu)">
[Ubuntu](https://learn.microsoft.com/es-es/powershell/scripting/install/install-ubuntu?view=powershell-7.5#installation-via-package-repository-the-package-repository): The supported versions for installing PowerShell 7.4+ on Ubuntu are Ubuntu 22.04 and Ubuntu 24.04.
@@ -241,6 +387,7 @@ Installing PowerShell is different depending on your OS:
# Start PowerShell
pwsh
```
</Tab>
<Tab title="Linux (Alpine)">
[Alpine](https://learn.microsoft.com/es-es/powershell/scripting/install/install-alpine?view=powershell-7.5#installation-steps): The only supported version for installing PowerShell 7.4+ on Alpine is Alpine 3.20. The only way to install it is by downloading the tar.gz package available on [PowerShell GitHub](https://github.com/PowerShell/PowerShell/releases/download/v7.5.0/powershell-7.5.0-linux-musl-x64.tar.gz).
@@ -286,6 +433,7 @@ Installing PowerShell is different depending on your OS:
# Start PowerShell
pwsh
```
</Tab>
<Tab title="Linux (Debian)">
[Debian](https://learn.microsoft.com/es-es/powershell/scripting/install/install-debian?view=powershell-7.5#installation-on-debian-11-or-12-via-the-package-repository): The supported versions for installing PowerShell 7.4+ on Debian are Debian 11 and Debian 12. The recommended way to install it is to download the package available on PMC.
@@ -324,6 +472,7 @@ Installing PowerShell is different depending on your OS:
# Start PowerShell
pwsh
```
</Tab>
<Tab title="Linux (RHEL)">
[RHEL](https://learn.microsoft.com/es-es/powershell/scripting/install/install-rhel?view=powershell-7.5#installation-via-the-package-repository): The supported versions for installing PowerShell 7.4+ on Red Hat are RHEL 8 and RHEL 9. The recommended way to install it is to download the package available on PMC.
@@ -357,6 +506,7 @@ Installing PowerShell is different depending on your OS:
# Install PowerShell
sudo dnf install powershell -y
```
</Tab>
<Tab title="Docker">
[Docker](https://learn.microsoft.com/es-es/powershell/scripting/install/powershell-in-docker?view=powershell-7.5#use-powershell-in-a-container): The following command downloads the latest stable version of PowerShell:
@@ -370,6 +520,7 @@ Installing PowerShell is different depending on your OS:
```console
docker run -it mcr.microsoft.com/dotnet/sdk:9.0 pwsh
```
</Tab>
</Tabs>
### Required PowerShell Modules
@@ -386,6 +537,7 @@ Example command:
```console
python3 prowler-cli.py m365 --verbose --log-level ERROR --sp-env-auth --init-modules
```
If the modules are already installed, running this command will not cause issues—it will simply verify that the necessary modules are available.
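You can also confirm the required modules and their versions directly, without starting a scan:

```console
pwsh -Command 'Get-Module -ListAvailable ExchangeOnlineManagement, MicrosoftTeams, MSAL.PS | Select-Object Name, Version'
```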
<Note>
@@ -399,7 +551,6 @@ Install-Module -Name "ModuleName" -Scope AllUsers -Force
</Note>
#### Modules Version
- [ExchangeOnlineManagement](https://www.powershellgallery.com/packages/ExchangeOnlineManagement/3.6.0) (Minimum version: 3.6.0): Required for checks across Exchange, Defender, and Purview.
- [MicrosoftTeams](https://www.powershellgallery.com/packages/MicrosoftTeams/6.6.0) (Minimum version: 6.6.0): Required for all Teams checks.
- [MSAL.PS](https://www.powershellgallery.com/packages/MSAL.PS/4.32.0): Required for Exchange module via application authentication.
@@ -12,8 +12,9 @@ Government cloud accounts or tenants (Microsoft 365 Government) are currently un
Configure authentication for Microsoft 365 by following the [Microsoft 365 Authentication](/user-guide/providers/microsoft365/authentication) guide. This includes:
- Registering an application in Microsoft Entra ID
- Granting all required Microsoft Graph and external API permissions
- Generating the application certificate (recommended) or client secret
- Setting up PowerShell module permissions (for full security coverage)
## Prowler App
@@ -47,25 +48,38 @@ Configure authentication for Microsoft 365 by following the [Microsoft 365 Authe
![Add Domain ID](/images/providers/add-domain-id.png)
### Step 3: Select Authentication Method and Provide Credentials

Prowler App now separates Microsoft 365 authentication into two app-only options. After adding the Domain ID, choose the method that matches your setup:

<img src="/images/providers/m365-auth-selection-form.png" alt="M365 authentication method selection" width="700" />

#### Application Certificate Authentication (Recommended)

1. Copy the Application (client) ID and Tenant ID from the app registration overview page.
2. Paste both values into the Prowler App form.
3. Upload the PFX bundle or paste the Base64-encoded certificate (`M365_CERTIFICATE_CONTENT`), then click **Test Connection**.

<img src="/images/providers/certificate-form.png" alt="M365 certificate authentication form" width="700" />

Use this method whenever possible to avoid managing client secrets and to unlock every Microsoft 365 check, including those that require PowerShell modules.

#### Application Client Secret Authentication

1. From the app registration, copy the Application (client) ID and Tenant ID.
2. Paste both values plus the client secret into the Prowler App form.
3. Click **Test Connection** to validate the credentials.

<img src="/images/providers/secret-form.png" alt="M365 client secret authentication form" width="700" />

### Step 4: Launch the Scan

1. Review the summary, then click **Next**.

![Next Detail](/images/providers/click-next-m365.png)

2. Click **Launch Scan** to start auditing Microsoft 365.
![Launch Scan M365](/images/providers/launch-scan.png)
@@ -83,7 +97,9 @@ PowerShell 7.4+ is required for comprehensive Microsoft 365 security coverage. I
Select an authentication method from the [Microsoft 365 Authentication](/user-guide/providers/microsoft365/authentication) guide:
- **Application Certificate Authentication** (recommended): `--certificate-auth`
- **Application Client Secret Authentication**: `--sp-env-auth`
- **Azure CLI Authentication**: `--az-cli-auth`
- **Interactive Browser Authentication**: `--browser-auth`
### Basic Usage
@@ -0,0 +1,491 @@
---
title: 'AWS Organizations Bulk Provisioning in Prowler'
---
Prowler offers an automated tool to discover and provision all AWS accounts within an AWS Organization. This streamlines onboarding for organizations managing multiple AWS accounts by automatically generating the configuration needed for bulk provisioning.
The tool, `aws_org_generator.py`, complements the [Bulk Provider Provisioning](./bulk-provider-provisioning) tool and is available in the Prowler repository at: [util/prowler-bulk-provisioning](https://github.com/prowler-cloud/prowler/tree/master/util/prowler-bulk-provisioning)
<Note>
Native support for bulk provisioning AWS Organizations and similar multi-account structures directly in the Prowler UI/API is on the official roadmap.
Track progress and vote for this feature at: [Bulk Provisioning in the UI/API for AWS Organizations](https://roadmap.prowler.com/p/builk-provisioning-in-the-uiapi-for-aws-organizations-and-alike)
</Note>
{/* TODO: Add screenshot of the tool in action */}
## Overview
The AWS Organizations Bulk Provisioning tool simplifies multi-account onboarding by:
* Automatically discovering all active accounts in an AWS Organization
* Generating YAML configuration files for bulk provisioning
* Supporting account filtering and custom role configurations
* Eliminating manual entry of account IDs and role ARNs
## Prerequisites
### Requirements
* Python 3.7 or higher
* AWS credentials with Organizations read access
* ProwlerRole (or custom role) deployed across all target accounts
* Prowler API key (from Prowler Cloud or self-hosted Prowler App)
* For self-hosted Prowler App, remember to [point to your API base URL](./bulk-provider-provisioning#custom-api-endpoints)
* Learn how to create API keys: [Prowler App API Keys](../providers/prowler-app-api-keys)
### Deploying ProwlerRole Across AWS Organizations
Before using the AWS Organizations generator, deploy the ProwlerRole across all accounts in the organization using CloudFormation StackSets.
<Note>
**Follow the official documentation:**
[Deploying Prowler IAM Roles Across AWS Organizations](../providers/aws/organizations#deploying-prowler-iam-roles-across-aws-organizations)
**Key points:**
* Use CloudFormation StackSets from the management account
* Deploy to all organizational units (OUs) or specific OUs
* Use an external ID for enhanced security
* Ensure the role has necessary permissions for Prowler scans
</Note>
### Installation
Clone the repository and install required dependencies:
```bash
git clone https://github.com/prowler-cloud/prowler.git
cd prowler/util/prowler-bulk-provisioning
pip install -r requirements-aws-org.txt
```
### AWS Credentials Setup
Configure AWS credentials with Organizations read access:
* **Management account credentials**, or
* **Delegated administrator account** with `organizations:ListAccounts` permission
Required IAM permissions:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"organizations:ListAccounts",
"organizations:DescribeOrganization"
],
"Resource": "*"
}
]
}
```
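Before running the generator, it is worth smoke-testing that the active credentials can actually read the organization (assuming the AWS CLI is configured):

```bash
aws organizations describe-organization
aws organizations list-accounts --max-items 1
```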
### Prowler API Key Setup
Configure your Prowler API key:
```bash
export PROWLER_API_KEY="pk_example-api-key"
```
To create an API key:
1. Log in to Prowler Cloud or Prowler App
2. Click **Profile** → **Account**
3. Click **Create API Key**
4. Provide a descriptive name and optionally set an expiration date
5. Copy the generated API key (it will only be shown once)
For detailed instructions, see: [Prowler App API Keys](../providers/prowler-app-api-keys)
## Basic Usage
### Generate Configuration for All Accounts
To generate a YAML configuration file for all active accounts in the organization:
```bash
python aws_org_generator.py -o aws-accounts.yaml --external-id prowler-ext-id-2024
```
This command:
1. Lists all ACTIVE accounts in the organization
2. Generates YAML entries for each account
3. Saves the configuration to `aws-accounts.yaml`
**Output:**
```
Fetching accounts from AWS Organizations...
Found 47 active accounts in organization
Generated configuration for 47 accounts
Configuration written to: aws-accounts.yaml
Next steps:
1. Review the generated file: cat aws-accounts.yaml | head -n 20
2. Run bulk provisioning: python prowler_bulk_provisioning.py aws-accounts.yaml
```
### Review Generated Configuration
Review the generated YAML configuration:
```bash
head -n 20 aws-accounts.yaml
```
**Example output:**
```yaml
- provider: aws
uid: '111111111111'
alias: Production-Account
auth_method: role
credentials:
role_arn: arn:aws:iam::111111111111:role/ProwlerRole
external_id: prowler-ext-id-2024
- provider: aws
uid: '222222222222'
alias: Development-Account
auth_method: role
credentials:
role_arn: arn:aws:iam::222222222222:role/ProwlerRole
external_id: prowler-ext-id-2024
```
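To confirm the file parses and count the generated entries, a one-liner like the following works (assuming PyYAML is available in your Python environment):

```bash
python -c "import yaml; print(len(yaml.safe_load(open('aws-accounts.yaml'))))"
```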
### Dry Run Mode
Test the configuration without writing a file:
```bash
python aws_org_generator.py \
--external-id prowler-ext-id-2024 \
--dry-run
```
## Advanced Configuration
### Using a Specific AWS Profile
Specify an AWS profile when multiple profiles are configured:
```bash
python aws_org_generator.py \
-o aws-accounts.yaml \
--profile org-management-admin \
--external-id prowler-ext-id-2024
```
### Excluding Specific Accounts
Exclude the management account or other accounts from provisioning:
```bash
python aws_org_generator.py \
-o aws-accounts.yaml \
--external-id prowler-ext-id-2024 \
--exclude 123456789012,210987654321
```
Common exclusion scenarios:
* Management account (requires different permissions)
* Break-glass accounts (emergency access)
* Suspended or archived accounts
### Including Only Specific Accounts
Generate configuration for specific accounts only:
```bash
python aws_org_generator.py \
-o aws-accounts.yaml \
--external-id prowler-ext-id-2024 \
--include 111111111111,222222222222,333333333333
```
### Custom Role Name
Specify a custom role name if not using the default `ProwlerRole`:
```bash
python aws_org_generator.py \
-o aws-accounts.yaml \
--role-name ProwlerExecutionRole \
--external-id prowler-ext-id-2024
```
### Custom Alias Format
Customize account aliases using template variables:
```bash
# Use account name and ID
python aws_org_generator.py \
-o aws-accounts.yaml \
--alias-format "{name}-{id}" \
--external-id prowler-ext-id-2024
# Use email prefix
python aws_org_generator.py \
-o aws-accounts.yaml \
--alias-format "{email}" \
--external-id prowler-ext-id-2024
```
Available template variables:
* `{name}` - Account name
* `{id}` - Account ID
* `{email}` - Account email
### Additional Role Assumption Options
Configure optional role assumption parameters:
```bash
python aws_org_generator.py \
-o aws-accounts.yaml \
--role-name ProwlerRole \
--external-id prowler-ext-id-2024 \
--session-name prowler-scan-session \
--duration-seconds 3600
```
## Complete Workflow Example
<Steps>
<Step title="Deploy ProwlerRole Using StackSets">
1. Log in to the AWS management account
2. Open CloudFormation → StackSets
3. Create a new StackSet using the [Prowler role template](https://github.com/prowler-cloud/prowler/blob/master/permissions/templates/cloudformation/prowler-scan-role.yml)
4. Deploy to all organizational units
5. Use a unique external ID (e.g., `prowler-org-2024-abc123`)
{/* TODO: Add screenshot of CloudFormation StackSets deployment */}
</Step>
<Step title="Generate YAML Configuration">
Configure AWS credentials and generate the YAML file:
```bash
# Using management account credentials
export AWS_PROFILE=org-management
# Generate configuration
python aws_org_generator.py \
-o aws-org-accounts.yaml \
--external-id prowler-org-2024-abc123 \
--exclude 123456789012
```
**Output:**
```
Fetching accounts from AWS Organizations...
Using AWS profile: org-management
Found 47 active accounts in organization
Generated configuration for 46 accounts
Configuration written to: aws-org-accounts.yaml
Next steps:
1. Review the generated file: cat aws-org-accounts.yaml | head -n 20
2. Run bulk provisioning: python prowler_bulk_provisioning.py aws-org-accounts.yaml
```
</Step>
<Step title="Review Generated Configuration">
Verify the generated YAML configuration:
```bash
# View first 20 lines
head -n 20 aws-org-accounts.yaml
# Check for unexpected accounts
grep "uid:" aws-org-accounts.yaml
# Verify role ARNs
grep "role_arn:" aws-org-accounts.yaml | head -5
# Count accounts
grep "provider: aws" aws-org-accounts.yaml | wc -l
```
</Step>
<Step title="Run Bulk Provisioning">
Provision all accounts to Prowler Cloud or Prowler App:
```bash
# Set Prowler API key
export PROWLER_API_KEY="pk_example-api-key"
# Run bulk provisioning with connection testing
python prowler_bulk_provisioning.py aws-org-accounts.yaml
```
**With custom options:**
```bash
python prowler_bulk_provisioning.py aws-org-accounts.yaml \
--concurrency 10 \
--timeout 120
```
**Successful output:**
```
[1] ✅ Created provider (id=db9a8985-f9ec-4dd8-b5a0-e05ab3880bed)
[1] ✅ Created secret (id=466f76c6-5878-4602-a4bc-13f9522c1fd2)
[1] ✅ Connection test: Connected
[2] ✅ Created provider (id=7a99f789-0cf5-4329-8279-2d443a962676)
[2] ✅ Created secret (id=c5702180-f7c4-40fd-be0e-f6433479b126)
[2] ✅ Connection test: Connected
Done. Success: 47 Failures: 0
```
{/* TODO: Add screenshot of successful bulk provisioning output */}
</Step>
</Steps>
## Command Reference
### Full Command-Line Options
```bash
python aws_org_generator.py \
-o OUTPUT_FILE \
--role-name ROLE_NAME \
--external-id EXTERNAL_ID \
--session-name SESSION_NAME \
--duration-seconds SECONDS \
--alias-format FORMAT \
--exclude ACCOUNT_IDS \
--include ACCOUNT_IDS \
--profile AWS_PROFILE \
--region AWS_REGION \
--dry-run
```
## Troubleshooting
### Error: "No AWS credentials found"
**Solution:** Configure AWS credentials using one of these methods:
```bash
# Method 1: AWS CLI configure
aws configure
# Method 2: Environment variables
export AWS_ACCESS_KEY_ID=your-key-id
export AWS_SECRET_ACCESS_KEY=your-secret-key
# Method 3: Use AWS profile
export AWS_PROFILE=org-management
```
### Error: "Access denied to AWS Organizations API"
**Cause:** Current credentials don't have permission to list organization accounts.
**Solution:**
* Ensure management account credentials are used
* Verify IAM permissions include `organizations:ListAccounts`
* Check IAM policies for Organizations access
### Error: "AWS Organizations is not enabled"
**Cause:** The account is not part of an organization.
**Solution:** This tool requires an AWS Organization. Create one in the AWS Organizations console or use standard bulk provisioning for standalone accounts.
### No Accounts Generated After Filters
**Cause:** All accounts were filtered out by `--exclude` or `--include` options.
**Solution:** Review filter options and verify account IDs are correct:
```bash
# List all accounts in organization
aws organizations list-accounts --query "Accounts[?Status=='ACTIVE'].[Id,Name]" --output table
```
### Connection Test Failures During Bulk Provisioning
**Cause:** ProwlerRole may not be deployed correctly or credentials are invalid.
**Solution:**
* Verify StackSet deployment status in CloudFormation
* Check role trust policy includes correct external ID
* Test role assumption manually:
```bash
aws sts assume-role \
--role-arn arn:aws:iam::123456789012:role/ProwlerRole \
--role-session-name test \
--external-id prowler-ext-id-2024
```
## Security Best Practices
### Use External ID
Always use an external ID when assuming cross-account roles:
```bash
python aws_org_generator.py \
-o aws-accounts.yaml \
--external-id $(uuidgen | tr '[:upper:]' '[:lower:]')
```
The external ID must match the one configured in the ProwlerRole trust policy across all accounts.
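For reference, the statement in each account's ProwlerRole trust policy should carry a matching `sts:ExternalId` condition, along the lines of this sketch (the principal ARN below is a placeholder for whichever identity performs the scans):

```json
{
  "Effect": "Allow",
  "Principal": { "AWS": "<SCANNING_IDENTITY_ARN>" },
  "Action": "sts:AssumeRole",
  "Condition": {
    "StringEquals": { "sts:ExternalId": "prowler-ext-id-2024" }
  }
}
```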
### Exclude Sensitive Accounts
Exclude accounts that shouldn't be scanned or require special handling:
```bash
python aws_org_generator.py \
-o aws-accounts.yaml \
--external-id prowler-ext-id \
--exclude 123456789012,111111111111 # management, break-glass accounts
```
### Review Generated Configuration
Always review the generated YAML before provisioning:
```bash
# Check for unexpected accounts
grep "uid:" aws-org-accounts.yaml
# Verify role ARNs
grep "role_arn:" aws-org-accounts.yaml | head -5
# Count accounts
grep "provider: aws" aws-org-accounts.yaml | wc -l
```
## Next Steps
<Columns cols={2}>
<Card title="Bulk Provider Provisioning" icon="terminal" href="/user-guide/tutorials/bulk-provider-provisioning">
Learn how to bulk provision providers in Prowler.
</Card>
<Card title="Prowler App" icon="pen-to-square" href="/user-guide/tutorials/prowler-app">
Detailed instructions on how to use Prowler.
</Card>
</Columns>
@@ -17,14 +17,18 @@ The Bulk Provider Provisioning tool automates the creation of cloud providers in
* Testing connections to verify successful authentication
* Processing multiple providers concurrently for efficiency
<Tip>
**Using AWS Organizations?** For organizations with many AWS accounts, use the automated [AWS Organizations Bulk Provisioning](./aws-organizations-bulk-provisioning) tool to automatically discover and generate configuration for all accounts in your organization.
</Tip>
## Prerequisites
### Requirements
* Python 3.7 or higher
* Prowler API token (from Prowler Cloud or self-hosted Prowler App)
* Prowler API key (from Prowler Cloud or self-hosted Prowler App)
* For self-hosted Prowler App, remember to [point to your API base URL](#custom-api-endpoints)
* Learn how to create API keys: [Prowler App API Keys](../providers/prowler-app-api-keys)
* Authentication credentials for target cloud providers
### Installation
@@ -39,28 +43,21 @@ pip install -r requirements.txt
### Authentication Setup
Configure your Prowler API key:

```bash
export PROWLER_API_KEY="pk_example-api-key"
```

To create an API key:

1. Log in to Prowler Cloud or Prowler App
2. Click **Profile** → **Account**
3. Click **Create API Key**
4. Provide a descriptive name and optionally set an expiration date
5. Copy the generated API key (it will only be shown once)
For detailed instructions, see: [Prowler App API Keys](../providers/prowler-app-api-keys)
## Configuration File Structure
@@ -340,11 +337,11 @@ Done. Success: 2 Failures: 0
## Troubleshooting
### Invalid API Key
```
Error: 401 Unauthorized
Solution: Verify your PROWLER_API_KEY environment variable or --api-key parameter
```
### Network Timeouts
@@ -207,17 +207,28 @@ If you are adding an **EKS**, **GKE**, **AKS** or external cluster, follow these
4. Now you can add the modified `kubeconfig` in Prowler Cloud. Then test the connection.
### **Step 4.5: M365 Credentials**
Enter your Microsoft Entra domain (primary tenant domain) and select how the provider should authenticate. Prowler App guides you through the process:

<img src="/images/providers/m365-auth-selection-form.png" alt="M365 authentication method selection" width="700" />

- **Application Certificate Authentication (Recommended)**: Certificate-based authentication. Recommended by Microsoft.
- **Application Client Secret Authentication**: Client secret-based authentication.

<Warning>
User authentication with M365_USER and M365_PASSWORD is deprecated and will be removed.
</Warning>

For full setup instructions, certificate generation commands, and required permissions, review the [Microsoft 365 provider requirements](/user-guide/providers/microsoft365/getting-started-m365).

#### Step 4.5.1: Application Client Secret Authentication

1. **Enter your tenant ID**: This is the unique identifier for your Microsoft Entra ID directory.
2. **Enter your application (client) ID**: This is the unique identifier assigned to your app registration in Microsoft Entra ID.
3. **Enter your client secret**: This is the secret key used to authenticate your application.

<img src="/images/providers/secret-form.png" alt="M365 client secret authentication form" width="700" />
#### Step 4.5.2: Application Certificate Authentication (Recommended)
1. **Enter your tenant ID**: This is the unique identifier for your Microsoft Entra ID directory.
2. **Enter your application (client) ID**: This is the unique identifier assigned to your app registration in Microsoft Entra ID.
3. **Upload your certificate file content**: This is the **Base64** encoded certificate content used to authenticate your application.
<img src="/images/providers/certificate-form.png" alt="M365 certificate authentication form" width="700" />
### **Step 4.6: GitHub Credentials**
For GitHub, you must enter your Provider ID (username or organization name) and choose the authentication method you want to use:
@@ -1,3 +1,3 @@
PROWLER_APP_API_KEY="pk_your_api_key_here"
PROWLER_API_BASE_URL="https://api.prowler.com"
PROWLER_MCP_MODE="stdio"
PROWLER_MCP_TRANSPORT_MODE="stdio"
@@ -2,7 +2,7 @@
All notable changes to the **Prowler MCP Server** are documented in this file.
## [0.1.0] (Prowler 5.13.0)
### Added
- Initial release of Prowler MCP Server [(#8695)](https://github.com/prowler-cloud/prowler/pull/8695)
@@ -13,4 +13,5 @@ All notable changes to the **Prowler MCP Server** are documented in this file.
- Add new MCP Server for Prowler Documentation [(#8795)](https://github.com/prowler-cloud/prowler/pull/8795)
- API key support for STDIO mode and enhanced HTTP mode authentication [(#8823)](https://github.com/prowler-cloud/prowler/pull/8823)
- Add health check endpoint [(#8905)](https://github.com/prowler-cloud/prowler/pull/8905)
- Update Prowler Documentation MCP Server to use Mintlify API [(#8916)](https://github.com/prowler-cloud/prowler/pull/8916)
- Add custom production deployment using uvicorn [(#8958)](https://github.com/prowler-cloud/prowler/pull/8958)
@@ -47,13 +47,12 @@ COPY --from=builder --chown=prowler /app/prowler_mcp_server /app/prowler_mcp_ser
# 3. Project metadata file (may be needed by some packages at runtime)
COPY --from=builder --chown=prowler /app/pyproject.toml /app/pyproject.toml
# 4. Entrypoint helper script for selecting runtime mode
COPY --from=builder --chown=prowler /app/entrypoint.sh /app/entrypoint.sh
# Add virtual environment to PATH so prowler-mcp command is available
ENV PATH="/app/.venv/bin:$PATH"
# Entrypoint wrapper defaults to CLI mode; override with `uvicorn` to run ASGI app
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["main"]
@@ -144,11 +144,11 @@ uv run prowler-mcp --transport http
uv run prowler-mcp --transport http --host 0.0.0.0 --port 8080
```
For a self-deployed MCP remote server, you can also configure the server to use a custom API base URL with the environment variable `PROWLER_API_BASE_URL`, and the transport mode with the environment variable `PROWLER_MCP_TRANSPORT_MODE`.
```bash
export PROWLER_API_BASE_URL="https://api.prowler.com"
export PROWLER_MCP_MODE="http"
export PROWLER_MCP_TRANSPORT_MODE="http"
```
### Using uv directly
@@ -190,6 +190,16 @@ docker run --rm --env-file ./.env -p 8000:8000 -it prowler-mcp --transport http
docker run --rm --env-file ./.env -p 8080:8080 -it prowler-mcp --transport http --host 0.0.0.0 --port 8080
```
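If the image was built with the new entrypoint wrapper (shown later in this changeset), the same image can also serve the ASGI app by overriding the command, for example:

```bash
docker run --rm --env-file ./.env -p 8000:8000 prowler-mcp uvicorn --host 0.0.0.0 --port 8000
```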
## Production Deployment
For production deployments that require customization, it is recommended to use the ASGI application that can be found in `prowler_mcp_server.server`. This can be run with uvicorn:
```bash
uvicorn prowler_mcp_server.server:app --host 0.0.0.0 --port 8000
```
For more details on production deployment options, see the [FastMCP production deployment guide](https://gofastmcp.com/deployment/http#production-deployment) and [uvicorn settings](https://www.uvicorn.org/settings/).
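Once the server is running over HTTP, you can verify liveness through the built-in health check endpoint (assuming the default port from the examples above):

```bash
curl -s http://localhost:8000/health
```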
## Command Line Arguments
The Prowler MCP server supports the following command line arguments:
@@ -482,6 +492,10 @@ If you want to have it globally available, add the example server to Cursor's co
If you want to have it only for the current project, add the example server to the project's root in a new `.cursor/mcp.json` file.
## Documentation
For detailed documentation about the Prowler MCP Server, including guides, tutorials, and use cases, visit the [official Prowler documentation](https://docs.prowler.com).
## License
This project follows the repository's main license. See the [LICENSE](../LICENSE) file at the repository root.
@@ -0,0 +1,50 @@
#!/bin/sh
set -eu
usage() {
cat <<'EOF'
Usage: ./entrypoint.sh [main|uvicorn] [args...]
Modes:
main (default) Run prowler-mcp
uvicorn Run uvicorn prowler_mcp_server.server:app
All additional arguments are forwarded to the selected command.
EOF
}
mode="main"
if [ "$#" -gt 0 ]; then
case "$1" in
main|cli)
mode="main"
shift
;;
uvicorn|asgi)
mode="uvicorn"
shift
;;
-h|--help)
usage
exit 0
;;
*)
mode="main"
;;
esac
fi
case "$mode" in
main)
exec prowler-mcp "$@"
;;
uvicorn)
export PROWLER_MCP_TRANSPORT_MODE="http"
exec uvicorn prowler_mcp_server.server:app "$@"
;;
*)
usage
exit 1
;;
esac
@@ -1,10 +1,8 @@
import argparse
import asyncio
import os
import sys
from prowler_mcp_server.lib.logger import logger
from prowler_mcp_server.server import setup_main_server
def parse_arguments():
@@ -13,7 +11,7 @@ def parse_arguments():
parser.add_argument(
"--transport",
choices=["stdio", "http"],
default=None,
help="Transport method (default: stdio)",
)
parser.add_argument(
@@ -35,13 +33,26 @@ def main():
try:
args = parse_arguments()
if args.transport is None:
args.transport = os.getenv("PROWLER_MCP_TRANSPORT_MODE", "stdio")
else:
os.environ["PROWLER_MCP_TRANSPORT_MODE"] = args.transport
from prowler_mcp_server.server import prowler_mcp_server
if args.transport == "stdio":
prowler_mcp_server.run(transport="stdio")
prowler_mcp_server.run(transport=args.transport, show_banner=False)
elif args.transport == "http":
prowler_mcp_server.run(transport="http", host=args.host, port=args.port)
prowler_mcp_server.run(
transport=args.transport,
host=args.host,
port=args.port,
show_banner=False,
)
else:
logger.error(f"Invalid transport: {args.transport}")
except KeyboardInterrupt:
logger.info("Shutting down Prowler MCP server...")
@@ -14,7 +14,7 @@ class ProwlerAppAuth:
def __init__(
self,
mode: str = os.getenv("PROWLER_MCP_MODE", "stdio"),
mode: str = os.getenv("PROWLER_MCP_TRANSPORT_MODE", "stdio"),
base_url: str = os.getenv("PROWLER_API_BASE_URL", "https://api.prowler.com"),
):
self.base_url = base_url.rstrip("/")
@@ -33,7 +33,14 @@ class ProwlerAppAuth:
raise ValueError("Prowler App API key format is incorrect")
def _parse_jwt(self, token: str) -> Optional[Dict]:
"""Parse JWT token and return payload, similar to JS parseJwt function."""
"""Parse JWT token and return payload
Args:
token: JWT token to parse
Returns:
Parsed JWT payload, or None if parsing fails
"""
if not token:
return None
@@ -1,16 +1,16 @@
import asyncio
import os
from fastmcp import FastMCP
from prowler_mcp_server import __version__
from prowler_mcp_server.lib.logger import logger
from starlette.responses import JSONResponse
prowler_mcp_server = FastMCP("prowler-mcp-server")
async def setup_main_server():
"""Set up the main Prowler MCP server with all available integrations."""
# Import Prowler Hub tools with prowler_hub_ prefix
try:
logger.info("Importing Prowler Hub server...")
@@ -21,12 +21,10 @@ async def setup_main_server(transport: str) -> FastMCP:
except Exception as e:
logger.error(f"Failed to import Prowler Hub server: {e}")
# Import Prowler App tools with prowler_app_ prefix
try:
logger.info("Importing Prowler App server...")
if os.getenv("PROWLER_MCP_MODE", None) is None:
os.environ["PROWLER_MCP_MODE"] = transport
if not os.path.exists(
os.path.join(os.path.dirname(__file__), "prowler_app", "server.py")
):
@@ -44,6 +42,7 @@ async def setup_main_server(transport: str) -> FastMCP:
except Exception as e:
logger.error(f"Failed to import Prowler App server: {e}")
# Import Prowler Documentation tools with prowler_docs_ prefix
try:
logger.info("Importing Prowler Documentation server...")
from prowler_mcp_server.prowler_documentation.server import docs_mcp_server
@@ -53,9 +52,23 @@ async def setup_main_server(transport: str) -> FastMCP:
except Exception as e:
logger.error(f"Failed to import Prowler Documentation server: {e}")
# Add health check endpoint
@prowler_mcp_server.custom_route("/health", methods=["GET"])
async def health_check(request) -> JSONResponse:
"""Health check endpoint."""
return JSONResponse(
{"status": "healthy", "service": "prowler-mcp-server", "version": __version__}
)
# Get or create the event loop
try:
loop = asyncio.get_running_loop()
# If we have a running loop, schedule the setup as a task
loop.create_task(setup_main_server())
except RuntimeError:
# No running loop, use asyncio.run (for standalone execution)
asyncio.run(setup_main_server())
app = prowler_mcp_server.http_app()
@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.1.1 and should not be changed by hand.
[[package]]
name = "about-time"
@@ -834,21 +834,6 @@ typing-extensions = ">=4.6.0"
[package.extras]
aio = ["azure-core[aio] (>=1.30.0)"]
[[package]]
name = "bandit"
version = "1.8.3"
@@ -994,7 +979,7 @@ version = "2025.7.14"
description = "Python package for providing Mozilla's CA Bundle."
optional = false
python-versions = ">=3.7"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
files = [
{file = "certifi-2025.7.14-py3-none-any.whl", hash = "sha256:6b31f564a415d79ee77df69d757bb49a5bb53bd9f756cbbe24394ffd6fc1f4b2"},
{file = "certifi-2025.7.14.tar.gz", hash = "sha256:8ea99dbdfaaf2ba2f9bac77b9249ef62ec5218e7c2b2e903378ed5fccf765995"},
@@ -1126,7 +1111,7 @@ version = "3.4.2"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
optional = false
python-versions = ">=3.7"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
files = [
{file = "charset_normalizer-3.4.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:7c48ed483eb946e6c04ccbe02c6b4d1d48e51944b6db70f697e089c193404941"},
{file = "charset_normalizer-3.4.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b2d318c11350e10662026ad0eb71bb51c7812fc8590825304ae0bdd4ac283acd"},
@@ -1240,7 +1225,7 @@ version = "8.1.8"
description = "Composable command line interface toolkit"
optional = false
python-versions = ">=3.7"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
markers = "python_version < \"3.10\""
files = [
{file = "click-8.1.8-py3-none-any.whl", hash = "sha256:63c132bbbed01578a06712a2d1f497bb62d9c1c0d329b7903a866228027263b2"},
@@ -1256,7 +1241,7 @@ version = "8.2.1"
description = "Composable command line interface toolkit"
optional = false
python-versions = ">=3.10"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
markers = "python_version >= \"3.10\""
files = [
{file = "click-8.2.1-py3-none-any.whl", hash = "sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b"},
@@ -1290,7 +1275,7 @@ version = "0.4.6"
description = "Cross-platform colored terminal text."
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
files = [
{file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
{file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
@@ -1921,59 +1906,6 @@ files = [
{file = "frozenlist-1.7.0.tar.gz", hash = "sha256:2e310d81923c2437ea8670467121cc3e9b0f76d3043cc1d2331d56c7fb7a3a8f"},
]
[[package]]
name = "google-api-core"
version = "2.25.1"
@@ -2258,7 +2190,7 @@ version = "3.10"
description = "Internationalized Domain Names in Applications (IDNA)"
optional = false
python-versions = ">=3.6"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
files = [
{file = "idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3"},
{file = "idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9"},
@@ -2273,12 +2205,12 @@ version = "8.7.0"
description = "Read metadata from Python packages"
optional = false
python-versions = ">=3.9"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
files = [
{file = "importlib_metadata-8.7.0-py3-none-any.whl", hash = "sha256:e5dd1551894c77868a30651cef00984d50e1002d06942a7101d34870c5f02afd"},
{file = "importlib_metadata-8.7.0.tar.gz", hash = "sha256:d13b81ad223b890aa16c5471f2ac3056cf76c5f10f82d6f9292f0b415f389000"},
]
markers = {dev = "python_version < \"3.10\"", docs = "python_version < \"3.10\""}
markers = {dev = "python_version < \"3.10\""}
[package.dependencies]
zipp = ">=3.20"
@@ -2350,7 +2282,7 @@ version = "3.1.6"
description = "A very fast and expressive template engine."
optional = false
python-versions = ">=3.7"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
files = [
{file = "jinja2-3.1.6-py3-none-any.whl", hash = "sha256:85ece4451f492d0c13c5dd7c13a64681a86afae63a5f347908daf103ce6d2f67"},
{file = "jinja2-3.1.6.tar.gz", hash = "sha256:0137fb05990d35f1275a587e9aee6d56da821fc83491a0fb838183be43f66d6d"},
@@ -2416,6 +2348,8 @@ python-versions = "*"
groups = ["dev"]
files = [
{file = "jsonpath-ng-1.7.0.tar.gz", hash = "sha256:f6f5f7fd4e5ff79c785f1573b394043b39849fb2bb47bcead935d12b00beab3c"},
{file = "jsonpath_ng-1.7.0-py2-none-any.whl", hash = "sha256:898c93fc173f0c336784a3fa63d7434297544b7198124a68f9a3ef9597b0ae6e"},
{file = "jsonpath_ng-1.7.0-py3-none-any.whl", hash = "sha256:f3d7f9e848cba1b6da28c55b1c26ff915dc9e0b1ba7e752a53d6da8d5cbd00b6"},
]
[package.dependencies]
@@ -2546,7 +2480,7 @@ version = "3.9"
description = "Python implementation of John Gruber's Markdown."
optional = false
python-versions = ">=3.9"
groups = ["main", "docs"]
groups = ["main"]
files = [
{file = "markdown-3.9-py3-none-any.whl", hash = "sha256:9f4d91ed810864ea88a6f32c07ba8bee1346c0cc1f6b1f9f6c822f2a9667d280"},
{file = "markdown-3.9.tar.gz", hash = "sha256:d2900fe1782bd33bdbbd56859defef70c2e78fc46668f8eb9df3128138f2cb6a"},
@@ -2590,7 +2524,7 @@ version = "3.0.2"
description = "Safely add untrusted strings to HTML/XML markup."
optional = false
python-versions = ">=3.9"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
files = [
{file = "MarkupSafe-3.0.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:7e94c425039cde14257288fd61dcfb01963e658efbc0ff54f5306b06054700f8"},
{file = "MarkupSafe-3.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9e2d922824181480953426608b81967de705c3cef4d1af983af849d7bd619158"},
@@ -2699,18 +2633,6 @@ files = [
{file = "mdurl-0.1.2.tar.gz", hash = "sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba"},
]
[[package]]
name = "mergedeep"
version = "1.3.4"
description = "A deep merge function for 🐍."
optional = false
python-versions = ">=3.6"
groups = ["docs"]
files = [
{file = "mergedeep-1.3.4-py3-none-any.whl", hash = "sha256:70775750742b25c0d8f36c55aed03d24c3384d17c951b3175d898bd778ef0307"},
{file = "mergedeep-1.3.4.tar.gz", hash = "sha256:0096d52e9dad9939c3d975a774666af186eda617e6ca84df4c94dec30004f2a8"},
]
[[package]]
name = "microsoft-kiota-abstractions"
version = "1.9.2"
@@ -2825,116 +2747,6 @@ files = [
[package.dependencies]
microsoft-kiota-abstractions = ">=1.9.2,<1.10.0"
[[package]]
name = "mkdocs"
version = "1.6.1"
description = "Project documentation with Markdown."
optional = false
python-versions = ">=3.8"
groups = ["docs"]
files = [
{file = "mkdocs-1.6.1-py3-none-any.whl", hash = "sha256:db91759624d1647f3f34aa0c3f327dd2601beae39a366d6e064c03468d35c20e"},
{file = "mkdocs-1.6.1.tar.gz", hash = "sha256:7b432f01d928c084353ab39c57282f29f92136665bdd6abf7c1ec8d822ef86f2"},
]
[package.dependencies]
click = ">=7.0"
colorama = {version = ">=0.4", markers = "platform_system == \"Windows\""}
ghp-import = ">=1.0"
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
jinja2 = ">=2.11.1"
markdown = ">=3.3.6"
markupsafe = ">=2.0.1"
mergedeep = ">=1.3.4"
mkdocs-get-deps = ">=0.2.0"
packaging = ">=20.5"
pathspec = ">=0.11.1"
pyyaml = ">=5.1"
pyyaml-env-tag = ">=0.1"
watchdog = ">=2.0"
[package.extras]
i18n = ["babel (>=2.9.0)"]
min-versions = ["babel (==2.9.0)", "click (==7.0)", "colorama (==0.4) ; platform_system == \"Windows\"", "ghp-import (==1.0)", "importlib-metadata (==4.4) ; python_version < \"3.10\"", "jinja2 (==2.11.1)", "markdown (==3.3.6)", "markupsafe (==2.0.1)", "mergedeep (==1.3.4)", "mkdocs-get-deps (==0.2.0)", "packaging (==20.5)", "pathspec (==0.11.1)", "pyyaml (==5.1)", "pyyaml-env-tag (==0.1)", "watchdog (==2.0)"]
[[package]]
name = "mkdocs-get-deps"
version = "0.2.0"
description = "MkDocs extension that lists all dependencies according to a mkdocs.yml file"
optional = false
python-versions = ">=3.8"
groups = ["docs"]
files = [
{file = "mkdocs_get_deps-0.2.0-py3-none-any.whl", hash = "sha256:2bf11d0b133e77a0dd036abeeb06dec8775e46efa526dc70667d8863eefc6134"},
{file = "mkdocs_get_deps-0.2.0.tar.gz", hash = "sha256:162b3d129c7fad9b19abfdcb9c1458a651628e4b1dea628ac68790fb3061c60c"},
]
[package.dependencies]
importlib-metadata = {version = ">=4.3", markers = "python_version < \"3.10\""}
mergedeep = ">=1.3.4"
platformdirs = ">=2.2.0"
pyyaml = ">=5.1"
[[package]]
name = "mkdocs-git-revision-date-localized-plugin"
version = "1.4.1"
description = "Mkdocs plugin that enables displaying the localized date of the last git modification of a markdown file."
optional = false
python-versions = ">=3.8"
groups = ["docs"]
files = [
{file = "mkdocs_git_revision_date_localized_plugin-1.4.1-py3-none-any.whl", hash = "sha256:bb1eca7f156e0c8a587167662923d76efed7f7e0c06b84471aa5ae72a744a434"},
{file = "mkdocs_git_revision_date_localized_plugin-1.4.1.tar.gz", hash = "sha256:364d7c4c45c4f333c750e34bc298ac685a7a8bf9b7b52890d52b2f90f1812c4b"},
]
[package.dependencies]
babel = ">=2.7.0"
gitpython = ">=3.1.44"
mkdocs = ">=1.0"
pytz = ">=2025.1"
[[package]]
name = "mkdocs-material"
version = "9.6.5"
description = "Documentation that simply works"
optional = false
python-versions = ">=3.8"
groups = ["docs"]
files = [
{file = "mkdocs_material-9.6.5-py3-none-any.whl", hash = "sha256:aad3e6fb860c20870f75fb2a69ef901f1be727891e41adb60b753efcae19453b"},
{file = "mkdocs_material-9.6.5.tar.gz", hash = "sha256:b714679a8c91b0ffe2188e11ed58c44d2523e9c2ae26a29cc652fa7478faa21f"},
]
[package.dependencies]
babel = ">=2.10,<3.0"
colorama = ">=0.4,<1.0"
jinja2 = ">=3.0,<4.0"
markdown = ">=3.2,<4.0"
mkdocs = ">=1.6,<2.0"
mkdocs-material-extensions = ">=1.3,<2.0"
paginate = ">=0.5,<1.0"
pygments = ">=2.16,<3.0"
pymdown-extensions = ">=10.2,<11.0"
regex = ">=2022.4"
requests = ">=2.26,<3.0"
[package.extras]
git = ["mkdocs-git-committers-plugin-2 (>=1.1,<3)", "mkdocs-git-revision-date-localized-plugin (>=1.2.4,<2.0)"]
imaging = ["cairosvg (>=2.6,<3.0)", "pillow (>=10.2,<11.0)"]
recommended = ["mkdocs-minify-plugin (>=0.7,<1.0)", "mkdocs-redirects (>=1.2,<2.0)", "mkdocs-rss-plugin (>=1.6,<2.0)"]
[[package]]
name = "mkdocs-material-extensions"
version = "1.3.1"
description = "Extension pack for Python Markdown and MkDocs Material."
optional = false
python-versions = ">=3.8"
groups = ["docs"]
files = [
{file = "mkdocs_material_extensions-1.3.1-py3-none-any.whl", hash = "sha256:adff8b62700b25cb77b53358dad940f3ef973dd6db797907c49e3c2ef3ab4e31"},
{file = "mkdocs_material_extensions-1.3.1.tar.gz", hash = "sha256:10c9511cea88f568257f960358a467d12b970e1f7b2c0e5fb2bb48cab1928443"},
]
[[package]]
name = "mock"
version = "5.2.0"
@@ -3582,28 +3394,12 @@ version = "25.0"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.8"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
files = [
{file = "packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484"},
{file = "packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f"},
]
[[package]]
name = "paginate"
version = "0.5.7"
description = "Divides large result sets into pages for easier browsing"
optional = false
python-versions = "*"
groups = ["docs"]
files = [
{file = "paginate-0.5.7-py2.py3-none-any.whl", hash = "sha256:b885e2af73abcf01d9559fd5216b57ef722f8c42affbb63942377668e35c7591"},
{file = "paginate-0.5.7.tar.gz", hash = "sha256:22bd083ab41e1a8b4f3690544afb2c60c25e5c9a63a30fa2f483f6c60c8e5945"},
]
[package.extras]
dev = ["pytest", "tox"]
lint = ["black"]
[[package]]
name = "pandas"
version = "2.2.3"
@@ -3709,7 +3505,7 @@ version = "0.12.1"
description = "Utility library for gitignore style pattern matching of file paths."
optional = false
python-versions = ">=3.8"
groups = ["dev", "docs"]
groups = ["dev"]
files = [
{file = "pathspec-0.12.1-py3-none-any.whl", hash = "sha256:a0d503e138a4c123b27490a4f7beda6a01c6f288df0e4a8b79c7eb0dc7b4cc08"},
{file = "pathspec-0.12.1.tar.gz", hash = "sha256:a482d51503a1ab33b1c67a6c3813a26953dbdc71c31dacaef9a838c4e29f5712"},
@@ -3736,7 +3532,7 @@ version = "4.3.8"
description = "A small Python package for determining appropriate platform-specific dirs, e.g. a `user data dir`."
optional = false
python-versions = ">=3.9"
groups = ["dev", "docs"]
groups = ["dev"]
files = [
{file = "platformdirs-4.3.8-py3-none-any.whl", hash = "sha256:ff7059bb7eb1179e2685604f4aaf157cfd9535242bd23742eadc3c13542139b4"},
{file = "platformdirs-4.3.8.tar.gz", hash = "sha256:3d512d96e16bcb959a814c9f348431070822a6496326a4be0911c40b5a74c2bc"},
@@ -4264,7 +4060,7 @@ version = "2.19.2"
description = "Pygments is a syntax highlighting package written in Python."
optional = false
python-versions = ">=3.8"
groups = ["dev", "docs"]
groups = ["dev"]
files = [
{file = "pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b"},
{file = "pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887"},
@@ -4325,25 +4121,6 @@ typing-extensions = {version = ">=3.10.0", markers = "python_version < \"3.10\""
spelling = ["pyenchant (>=3.2,<4.0)"]
testutils = ["gitpython (>3)"]
[[package]]
name = "pymdown-extensions"
version = "10.16"
description = "Extension pack for Python Markdown."
optional = false
python-versions = ">=3.9"
groups = ["docs"]
files = [
{file = "pymdown_extensions-10.16-py3-none-any.whl", hash = "sha256:f5dd064a4db588cb2d95229fc4ee63a1b16cc8b4d0e6145c0899ed8723da1df2"},
{file = "pymdown_extensions-10.16.tar.gz", hash = "sha256:71dac4fca63fabeffd3eb9038b756161a33ec6e8d230853d3cecf562155ab3de"},
]
[package.dependencies]
markdown = ">=3.6"
pyyaml = "*"
[package.extras]
extra = ["pygments (>=2.19.1)"]
[[package]]
name = "pynacl"
version = "1.5.0"
@@ -4509,7 +4286,7 @@ version = "2.9.0.post0"
description = "Extensions to the standard Python datetime module"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
files = [
{file = "python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3"},
{file = "python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427"},
@@ -4524,7 +4301,7 @@ version = "2025.1"
description = "World timezone definitions, modern and historical"
optional = false
python-versions = "*"
groups = ["main", "docs"]
groups = ["main"]
files = [
{file = "pytz-2025.1-py2.py3-none-any.whl", hash = "sha256:89dd22dca55b46eac6eda23b2d72721bf1bdfef212645d81513ef5d03038de57"},
{file = "pytz-2025.1.tar.gz", hash = "sha256:c2db42be2a2518b28e65f9207c4d05e6ff547d1efa4086469ef855e4ab70178e"},
@@ -4567,7 +4344,7 @@ version = "6.0.2"
description = "YAML parser and emitter for Python"
optional = false
python-versions = ">=3.8"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
files = [
{file = "PyYAML-6.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0a9a2848a5b7feac301353437eb7d5957887edbf81d56e903999a75a3d743086"},
{file = "PyYAML-6.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:29717114e51c84ddfba879543fb232a6ed60086602313ca38cce623c1d62cfbf"},
@@ -4624,21 +4401,6 @@ files = [
{file = "pyyaml-6.0.2.tar.gz", hash = "sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e"},
]
[[package]]
name = "pyyaml-env-tag"
version = "1.1"
description = "A custom YAML tag for referencing environment variables in YAML files."
optional = false
python-versions = ">=3.9"
groups = ["docs"]
files = [
{file = "pyyaml_env_tag-1.1-py3-none-any.whl", hash = "sha256:17109e1a528561e32f026364712fee1264bc2ea6715120891174ed1b980d2e04"},
{file = "pyyaml_env_tag-1.1.tar.gz", hash = "sha256:2eb38b75a2d21ee0475d6d97ec19c63287a7e140231e4214969d0eac923cd7ff"},
]
[package.dependencies]
pyyaml = "*"
[[package]]
name = "referencing"
version = "0.36.2"
@@ -4662,7 +4424,7 @@ version = "2024.11.6"
description = "Alternative regular expression module, to replace re."
optional = false
python-versions = ">=3.8"
groups = ["dev", "docs"]
groups = ["dev"]
files = [
{file = "regex-2024.11.6-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:ff590880083d60acc0433f9c3f713c51f7ac6ebb9adf889c79a261ecf541aa91"},
{file = "regex-2024.11.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:658f90550f38270639e83ce492f27d2c8d2cd63805c65a13a14d36ca126753f0"},
@@ -4766,7 +4528,7 @@ version = "2.32.4"
description = "Python HTTP for Humans."
optional = false
python-versions = ">=3.8"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
files = [
{file = "requests-2.32.4-py3-none-any.whl", hash = "sha256:27babd3cda2a6d50b30443204ee89830707d396671944c998b5975b031ac2b2c"},
{file = "requests-2.32.4.tar.gz", hash = "sha256:27d0316682c8a29834d3264820024b62a36942083d52caf2f14c0591336d3422"},
@@ -5085,6 +4847,7 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f66efbc1caa63c088dead1c4170d148eabc9b80d95fb75b6c92ac0aad2437d76"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:22353049ba4181685023b25b5b51a574bce33e7f51c759371a7422dcae5402a6"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:932205970b9f9991b34f55136be327501903f7c66830e9760a8ffb15b07f05cd"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:a52d48f4e7bf9005e8f0a89209bf9a73f7190ddf0489eee5eb51377385f59f2a"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-win32.whl", hash = "sha256:3eac5a91891ceb88138c113f9db04f3cebdae277f5d44eaa3651a4f573e6a5da"},
{file = "ruamel.yaml.clib-0.2.12-cp310-cp310-win_amd64.whl", hash = "sha256:ab007f2f5a87bd08ab1499bdf96f3d5c6ad4dcfa364884cb4549aa0154b13a28"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-macosx_13_0_arm64.whl", hash = "sha256:4a6679521a58256a90b0d89e03992c15144c5f3858f40d7c18886023d7943db6"},
@@ -5093,6 +4856,7 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:811ea1594b8a0fb466172c384267a4e5e367298af6b228931f273b111f17ef52"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:cf12567a7b565cbf65d438dec6cfbe2917d3c1bdddfce84a9930b7d35ea59642"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:7dd5adc8b930b12c8fc5b99e2d535a09889941aa0d0bd06f4749e9a9397c71d2"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1492a6051dab8d912fc2adeef0e8c72216b24d57bd896ea607cb90bb0c4981d3"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-win32.whl", hash = "sha256:bd0a08f0bab19093c54e18a14a10b4322e1eacc5217056f3c063bd2f59853ce4"},
{file = "ruamel.yaml.clib-0.2.12-cp311-cp311-win_amd64.whl", hash = "sha256:a274fb2cb086c7a3dea4322ec27f4cb5cc4b6298adb583ab0e211a4682f241eb"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:20b0f8dc160ba83b6dcc0e256846e1a02d044e13f7ea74a3d1d56ede4e48c632"},
@@ -5101,6 +4865,7 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:749c16fcc4a2b09f28843cda5a193e0283e47454b63ec4b81eaa2242f50e4ccd"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:bf165fef1f223beae7333275156ab2022cffe255dcc51c27f066b4370da81e31"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:32621c177bbf782ca5a18ba4d7af0f1082a3f6e517ac2a18b3974d4edf349680"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b82a7c94a498853aa0b272fd5bc67f29008da798d4f93a2f9f289feb8426a58d"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-win32.whl", hash = "sha256:e8c4ebfcfd57177b572e2040777b8abc537cdef58a2120e830124946aa9b42c5"},
{file = "ruamel.yaml.clib-0.2.12-cp312-cp312-win_amd64.whl", hash = "sha256:0467c5965282c62203273b838ae77c0d29d7638c8a4e3a1c8bdd3602c10904e4"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:4c8c5d82f50bb53986a5e02d1b3092b03622c02c2eb78e29bec33fd9593bae1a"},
@@ -5109,6 +4874,7 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96777d473c05ee3e5e3c3e999f5d23c6f4ec5b0c38c098b3a5229085f74236c6"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_1_i686.whl", hash = "sha256:3bc2a80e6420ca8b7d3590791e2dfc709c88ab9152c00eeb511c9875ce5778bf"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:e188d2699864c11c36cdfdada94d781fd5d6b0071cd9c427bceb08ad3d7c70e1"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:4f6f3eac23941b32afccc23081e1f50612bdbe4e982012ef4f5797986828cd01"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-win32.whl", hash = "sha256:6442cb36270b3afb1b4951f060eccca1ce49f3d087ca1ca4563a6eb479cb3de6"},
{file = "ruamel.yaml.clib-0.2.12-cp313-cp313-win_amd64.whl", hash = "sha256:e5b8daf27af0b90da7bb903a876477a9e6d7270be6146906b276605997c7e9a3"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:fc4b630cd3fa2cf7fce38afa91d7cfe844a9f75d7f0f36393fa98815e911d987"},
@@ -5117,6 +4883,7 @@ files = [
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e2f1c3765db32be59d18ab3953f43ab62a761327aafc1594a2a1fbe038b8b8a7"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:d85252669dc32f98ebcd5d36768f5d4faeaeaa2d655ac0473be490ecdae3c285"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:e143ada795c341b56de9418c58d028989093ee611aa27ffb9b7f609c00d813ed"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:2c59aa6170b990d8d2719323e628aaf36f3bfbc1c26279c0eeeb24d05d2d11c7"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-win32.whl", hash = "sha256:beffaed67936fbbeffd10966a4eb53c402fafd3d6833770516bf7314bc6ffa12"},
{file = "ruamel.yaml.clib-0.2.12-cp39-cp39-win_amd64.whl", hash = "sha256:040ae85536960525ea62868b642bdb0c2cc6021c9f9d507810c0c604e66f5a7b"},
{file = "ruamel.yaml.clib-0.2.12.tar.gz", hash = "sha256:6c8fbb13ec503f99a91901ab46e0b07ae7941cd527393187039aec586fdfd36f"},
@@ -5268,7 +5035,7 @@ version = "1.17.0"
description = "Python 2 and 3 compatibility utilities"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
files = [
{file = "six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274"},
{file = "six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81"},
@@ -5289,18 +5056,6 @@ files = [
[package.extras]
optional = ["SQLAlchemy (>=1.4,<3)", "aiodns (>1.0)", "aiohttp (>=3.7.3,<4)", "boto3 (<=2)", "websocket-client (>=1,<2)", "websockets (>=9.1,<15)"]
[[package]]
name = "smmap"
version = "5.0.2"
description = "A pure Python implementation of a sliding window memory map manager"
optional = false
python-versions = ">=3.7"
groups = ["docs"]
files = [
{file = "smmap-5.0.2-py3-none-any.whl", hash = "sha256:b30115f0def7d7531d22a0fb6502488d879e75b260a9db4d0819cfb25403af5e"},
{file = "smmap-5.0.2.tar.gz", hash = "sha256:26ea65a03958fa0c8a1c7e8c7a58fdc77221b8910f6be2131affade476898ad5"},
]
[[package]]
name = "sniffio"
version = "1.3.1"
@@ -5474,12 +5229,11 @@ version = "4.14.1"
description = "Backported and Experimental Type Hints for Python 3.9+"
optional = false
python-versions = ">=3.9"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
files = [
{file = "typing_extensions-4.14.1-py3-none-any.whl", hash = "sha256:d1e1e3b58374dc93031d6eda2420a48ea44a36c2b4766a4fdeb3710755731d76"},
{file = "typing_extensions-4.14.1.tar.gz", hash = "sha256:38b39f4aeeab64884ce9f74c94263ef78f3c22467c8724005483154c26648d36"},
]
markers = {docs = "python_version < \"3.10\""}
[[package]]
name = "typing-inspection"
@@ -5544,7 +5298,7 @@ version = "1.26.20"
description = "HTTP library with thread-safe connection pooling, file post, and more."
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
markers = "python_version < \"3.10\""
files = [
{file = "urllib3-1.26.20-py2.py3-none-any.whl", hash = "sha256:0ed14ccfbf1c30a9072c7ca157e4319b70d65f623e91e7b32fadb2853431016e"},
@@ -5562,7 +5316,7 @@ version = "2.5.0"
description = "HTTP library with thread-safe connection pooling, file post, and more."
optional = false
python-versions = ">=3.9"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
markers = "python_version >= \"3.10\""
files = [
{file = "urllib3-2.5.0-py3-none-any.whl", hash = "sha256:e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc"},
@@ -5611,49 +5365,6 @@ files = [
[package.dependencies]
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
[[package]]
name = "watchdog"
version = "6.0.0"
description = "Filesystem events monitoring"
optional = false
python-versions = ">=3.9"
groups = ["docs"]
files = [
{file = "watchdog-6.0.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d1cdb490583ebd691c012b3d6dae011000fe42edb7a82ece80965b42abd61f26"},
{file = "watchdog-6.0.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bc64ab3bdb6a04d69d4023b29422170b74681784ffb9463ed4870cf2f3e66112"},
{file = "watchdog-6.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c897ac1b55c5a1461e16dae288d22bb2e412ba9807df8397a635d88f671d36c3"},
{file = "watchdog-6.0.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:6eb11feb5a0d452ee41f824e271ca311a09e250441c262ca2fd7ebcf2461a06c"},
{file = "watchdog-6.0.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ef810fbf7b781a5a593894e4f439773830bdecb885e6880d957d5b9382a960d2"},
{file = "watchdog-6.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:afd0fe1b2270917c5e23c2a65ce50c2a4abb63daafb0d419fde368e272a76b7c"},
{file = "watchdog-6.0.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:bdd4e6f14b8b18c334febb9c4425a878a2ac20efd1e0b231978e7b150f92a948"},
{file = "watchdog-6.0.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c7c15dda13c4eb00d6fb6fc508b3c0ed88b9d5d374056b239c4ad1611125c860"},
{file = "watchdog-6.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6f10cb2d5902447c7d0da897e2c6768bca89174d0c6e1e30abec5421af97a5b0"},
{file = "watchdog-6.0.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:490ab2ef84f11129844c23fb14ecf30ef3d8a6abafd3754a6f75ca1e6654136c"},
{file = "watchdog-6.0.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:76aae96b00ae814b181bb25b1b98076d5fc84e8a53cd8885a318b42b6d3a5134"},
{file = "watchdog-6.0.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a175f755fc2279e0b7312c0035d52e27211a5bc39719dd529625b1930917345b"},
{file = "watchdog-6.0.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e6f0e77c9417e7cd62af82529b10563db3423625c5fce018430b249bf977f9e8"},
{file = "watchdog-6.0.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:90c8e78f3b94014f7aaae121e6b909674df5b46ec24d6bebc45c44c56729af2a"},
{file = "watchdog-6.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e7631a77ffb1f7d2eefa4445ebbee491c720a5661ddf6df3498ebecae5ed375c"},
{file = "watchdog-6.0.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:c7ac31a19f4545dd92fc25d200694098f42c9a8e391bc00bdd362c5736dbf881"},
{file = "watchdog-6.0.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:9513f27a1a582d9808cf21a07dae516f0fab1cf2d7683a742c498b93eedabb11"},
{file = "watchdog-6.0.0-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7a0e56874cfbc4b9b05c60c8a1926fedf56324bb08cfbc188969777940aef3aa"},
{file = "watchdog-6.0.0-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:e6439e374fc012255b4ec786ae3c4bc838cd7309a540e5fe0952d03687d8804e"},
{file = "watchdog-6.0.0-py3-none-manylinux2014_aarch64.whl", hash = "sha256:7607498efa04a3542ae3e05e64da8202e58159aa1fa4acddf7678d34a35d4f13"},
{file = "watchdog-6.0.0-py3-none-manylinux2014_armv7l.whl", hash = "sha256:9041567ee8953024c83343288ccc458fd0a2d811d6a0fd68c4c22609e3490379"},
{file = "watchdog-6.0.0-py3-none-manylinux2014_i686.whl", hash = "sha256:82dc3e3143c7e38ec49d61af98d6558288c415eac98486a5c581726e0737c00e"},
{file = "watchdog-6.0.0-py3-none-manylinux2014_ppc64.whl", hash = "sha256:212ac9b8bf1161dc91bd09c048048a95ca3a4c4f5e5d4a7d1b1a7d5752a7f96f"},
{file = "watchdog-6.0.0-py3-none-manylinux2014_ppc64le.whl", hash = "sha256:e3df4cbb9a450c6d49318f6d14f4bbc80d763fa587ba46ec86f99f9e6876bb26"},
{file = "watchdog-6.0.0-py3-none-manylinux2014_s390x.whl", hash = "sha256:2cce7cfc2008eb51feb6aab51251fd79b85d9894e98ba847408f662b3395ca3c"},
{file = "watchdog-6.0.0-py3-none-manylinux2014_x86_64.whl", hash = "sha256:20ffe5b202af80ab4266dcd3e91aae72bf2da48c0d33bdb15c66658e685e94e2"},
{file = "watchdog-6.0.0-py3-none-win32.whl", hash = "sha256:07df1fdd701c5d4c8e55ef6cf55b8f0120fe1aef7ef39a1c6fc6bc2e606d517a"},
{file = "watchdog-6.0.0-py3-none-win_amd64.whl", hash = "sha256:cbafb470cf848d93b5d013e2ecb245d4aa1c8fd0504e863ccefa32445359d680"},
{file = "watchdog-6.0.0-py3-none-win_ia64.whl", hash = "sha256:a1914259fa9e1454315171103c6a30961236f508b9b623eae470268bbcc6a22f"},
{file = "watchdog-6.0.0.tar.gz", hash = "sha256:9ddf7c82fda3ae8e24decda1338ede66e1c99883db93711d8fb941eaa2d8c282"},
]
[package.extras]
watchmedo = ["PyYAML (>=3.10)"]
[[package]]
name = "websocket-client"
version = "1.8.0"
@@ -5927,12 +5638,12 @@ version = "3.23.0"
description = "Backport of pathlib-compatible object wrapper for zip files"
optional = false
python-versions = ">=3.9"
groups = ["main", "dev", "docs"]
groups = ["main", "dev"]
files = [
{file = "zipp-3.23.0-py3-none-any.whl", hash = "sha256:071652d6115ed432f5ce1d34c336c0adfd6a884660d1e9712a256d3d3bd4b14e"},
{file = "zipp-3.23.0.tar.gz", hash = "sha256:a07157588a12518c9d4034df3fbbee09c814741a33ff63c05fa29d26a2404166"},
]
markers = {dev = "python_version < \"3.10\"", docs = "python_version < \"3.10\""}
markers = {dev = "python_version < \"3.10\""}
[package.extras]
check = ["pytest-checkdocs (>=2.4)", "pytest-ruff (>=0.2.1) ; sys_platform != \"cygwin\""]
@@ -5945,4 +5656,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">3.9.1,<3.13"
content-hash = "c2fb8567f1a6be319ae73f8c3a30e7b5be6f6fc65deee567d2a9e09eadd984c9"
content-hash = "ea79d82b4e255ec4604f440a507da6dac38b57af93356761ac793678aa615cf5"
@@ -2,7 +2,7 @@
All notable changes to the **Prowler SDK** are documented in this file.
## [v5.13.0] (Prowler UNRELEASED)
## [v5.13.0] (Prowler v5.13.0)
### Added
- Support for AdditionalURLs in outputs [(#8651)](https://github.com/prowler-cloud/prowler/pull/8651)
@@ -17,6 +17,8 @@ All notable changes to the **Prowler SDK** are documented in this file.
- Oracle Cloud provider with CIS 3.0 benchmark [(#8893)](https://github.com/prowler-cloud/prowler/pull/8893)
- Support for Atlassian Document Format (ADF) in Jira integration [(#8878)](https://github.com/prowler-cloud/prowler/pull/8878)
- Add Common Cloud Controls for AWS, Azure and GCP [(#8000)](https://github.com/prowler-cloud/prowler/pull/8000)
- Improve Provider documentation guide [(#8430)](https://github.com/prowler-cloud/prowler/pull/8430)
- `cloudstorage_bucket_lifecycle_management_enabled` check for GCP provider [(#8936)](https://github.com/prowler-cloud/prowler/pull/8936)
### Changed
@@ -32,10 +34,14 @@ All notable changes to the **Prowler SDK** are documented in this file.
- Update AWS AppStream service metadata to new format [(#8789)](https://github.com/prowler-cloud/prowler/pull/8789)
- Update AWS API Gateway service metadata to new format [(#8788)](https://github.com/prowler-cloud/prowler/pull/8788)
- Update AWS Athena service metadata to new format [(#8790)](https://github.com/prowler-cloud/prowler/pull/8790)
- Update AWS CloudTrail service metadata to new format [(#8831)](https://github.com/prowler-cloud/prowler/pull/8831)
- Update AWS Auto Scaling service metadata to new format [(#8824)](https://github.com/prowler-cloud/prowler/pull/8824)
- Update AWS Backup service metadata to new format [(#8826)](https://github.com/prowler-cloud/prowler/pull/8826)
- Update AWS CloudFormation service metadata to new format [(#8828)](https://github.com/prowler-cloud/prowler/pull/8828)
- Update AWS Lambda service metadata to new format [(#8825)](https://github.com/prowler-cloud/prowler/pull/8825)
- Update AWS DLM service metadata to new format [(#8860)](https://github.com/prowler-cloud/prowler/pull/8860)
- Update AWS DMS service metadata to new format [(#8861)](https://github.com/prowler-cloud/prowler/pull/8861)
- Update AWS Directory Service service metadata to new format [(#8859)](https://github.com/prowler-cloud/prowler/pull/8859)
- Update AWS CloudFront service metadata to new format [(#8829)](https://github.com/prowler-cloud/prowler/pull/8829)
- Deprecate user authentication for M365 provider [(#8865)](https://github.com/prowler-cloud/prowler/pull/8865)
- Update AWS EFS service metadata to new format [(#8889)](https://github.com/prowler-cloud/prowler/pull/8889)
@@ -46,13 +52,8 @@ All notable changes to the **Prowler SDK** are documented in this file.
- Prowler ThreatScore scoring calculation CLI [(#8582)](https://github.com/prowler-cloud/prowler/pull/8582)
- Add missing attributes for Mitre Attack AWS, Azure and GCP [(#8907)](https://github.com/prowler-cloud/prowler/pull/8907)
- Fix KeyError in CloudSQL and Monitoring services in GCP provider [(#8909)](https://github.com/prowler-cloud/prowler/pull/8909)
- Fix Value Errors in Entra service for M365 provider [(#8919)](https://github.com/prowler-cloud/prowler/pull/8919)
- Fix ResourceName in GCP provider [(#8928)](https://github.com/prowler-cloud/prowler/pull/8928)
---
## [v5.12.4] (Prowler UNRELEASED)
### Fixed
- Fix KeyError in `elb_ssl_listeners_use_acm_certificate` check and handle None cluster version in `eks_cluster_uses_a_supported_version` check [(#8791)](https://github.com/prowler-cloud/prowler/pull/8791)
- Fix file extension parsing for compliance reports [(#8791)](https://github.com/prowler-cloud/prowler/pull/8791)
- Added user pagination to Entra and Admincenter services [(#8858)](https://github.com/prowler-cloud/prowler/pull/8858)
Three file diffs suppressed because they are too large.
@@ -819,18 +819,6 @@
"aws-us-gov": []
}
},
"apptest": {
"regions": {
"aws": [
"ap-southeast-2",
"eu-central-1",
"sa-east-1",
"us-east-1"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"aps": {
"regions": {
"aws": [
@@ -8723,6 +8711,7 @@
"ap-southeast-5",
"ca-central-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-2",
"eu-west-1",
@@ -9207,11 +9196,13 @@
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-5",
"ap-southeast-7",
"ca-central-1",
"eu-central-1",
@@ -12436,7 +12427,12 @@
"workspaces-instances": {
"regions": {
"aws": [
"ap-northeast-2"
"ap-east-1",
"ap-northeast-2",
"ap-southeast-5",
"eu-south-2",
"me-central-1",
"us-east-2"
],
"aws-cn": [],
"aws-us-gov": []
@@ -1,33 +1,38 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_bucket_requires_mfa_delete",
"CheckTitle": "Ensure the S3 bucket CloudTrail bucket requires MFA delete",
"CheckTitle": "CloudTrail trail S3 bucket has MFA delete enabled",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure the S3 bucket CloudTrail bucket requires MFA",
"Risk": "If the S3 bucket CloudTrail bucket does not require MFA, it can be deleted by an attacker.",
"Description": "**CloudTrail log buckets** for actively logging trails are evaluated for **MFA Delete** on the associated S3 bucket. The assessment determines whether `MFA Delete` is configured on the in-account log bucket; *if the bucket resides in another account, its configuration should be verified separately*.",
"Risk": "Without **MFA Delete**, stolen or over-privileged credentials can permanently delete log versions or change versioning, compromising log **integrity** and **availability**. This enables attacker cover-ups, hinders **forensics**, and weakens evidence for investigations.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiFactorAuthenticationDelete.html",
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/CloudTrail/cloudtrail-bucket-mfa-delete-enabled.html"
],
"Remediation": {
"Code": {
"CLI": "aws s3api put-bucket-versioning --bucket DOC-EXAMPLE-BUCKET1 --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa \"SERIAL 123456\"",
"CLI": "aws s3api put-bucket-versioning --bucket <CLOUDTRAIL_BUCKET_NAME> --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa \"<MFA_SERIAL> <MFA_CODE>\"",
"NativeIaC": "",
"Other": "",
"Other": "1. Sign in to the AWS Management Console as the root user with MFA enabled\n2. Open AWS CloudShell (from the top navigation bar)\n3. Run:\n ```bash\n aws s3api put-bucket-versioning --bucket <CLOUDTRAIL_BUCKET_NAME> --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa \"<MFA_SERIAL> <MFA_CODE>\"\n ```",
"Terraform": ""
},
"Recommendation": {
"Text": "Configure MFA Delete for the S3 bucket CloudTrail bucket",
"Url": "https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiFactorAuthenticationDelete.html"
"Text": "Enable `MFA Delete` on the CloudTrail log bucket with versioning enabled. Enforce **least privilege** so only tightly controlled identities can delete or alter logs, and require MFA for such actions. Apply **defense in depth** using a dedicated logging account and log file integrity validation.",
"Url": "https://hub.prowler.com/check/cloudtrail_bucket_requires_mfa_delete"
}
},
"Categories": [
"identity-access",
"forensics-ready"
],
"DependsOn": [],
@@ -1,35 +1,39 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_cloudwatch_logging_enabled",
"CheckTitle": "Ensure CloudTrail trails are integrated with CloudWatch Logs",
"CheckTitle": "CloudTrail trail has delivered logs to CloudWatch Logs in the last 24 hours",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "low",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail trails are integrated with CloudWatch Logs",
"Risk": "Sending CloudTrail logs to CloudWatch Logs will facilitate real-time and historic activity logging based on user, API, resource, and IP address, and provides opportunity to establish alarms and notifications for anomalous or sensitivity account activity.",
"Description": "**CloudTrail trails** are configured to send events to **CloudWatch Logs**, and show recent delivery within the last `24h`. Trails without integration or without recent CloudWatch delivery are identified, across single-Region and multi-Region trails.",
"Risk": "Missing or stale CloudWatch delivery weakens visibility and delays detection, impacting confidentiality and integrity. Adversaries can:\n- Hide **privilege escalation**\n- Perform unauthorized **resource changes**\n- Exfiltrate data via API misuse",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.prowler.com/checks/aws/logging-policies/logging_4#aws-console",
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail update-trail --name <trail_name> --cloudwatch-logs-log-group- arn <cloudtrail_log_group_arn> --cloudwatch-logs-role-arn <cloudtrail_cloudwatchLogs_role_arn>",
"NativeIaC": "",
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_4#aws-console",
"Terraform": ""
"CLI": "aws cloudtrail update-trail --name <trail_name> --cloud-watch-logs-log-group-arn <cloudwatch_log_group_arn> --cloud-watch-logs-role-arn <cloudwatch_logs_role_arn>",
"NativeIaC": "```yaml\n# CloudFormation: enable CloudTrail delivery to CloudWatch Logs\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n S3BucketName: \"<example_resource_name>\"\n CloudWatchLogsLogGroupArn: \"<cloudwatch_log_group_arn>\" # CRITICAL: sends CloudTrail events to CloudWatch Logs\n CloudWatchLogsRoleArn: \"<cloudwatch_logs_role_arn>\" # CRITICAL: role CloudTrail assumes to deliver events\n```",
"Other": "1. In AWS Console, go to CloudTrail > Trails and select the trail\n2. In the CloudWatch Logs section, click Edit\n3. Set CloudWatch Logs to Enabled\n4. Choose an existing Log group (or create new) and select an IAM role with permissions for CreateLogStream/PutLogEvents\n5. Click Save changes\n6. After a few minutes, verify events appear in the chosen CloudWatch Logs log group",
"Terraform": "```hcl\n# Terraform: enable CloudTrail delivery to CloudWatch Logs\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n cloud_watch_logs_group_arn = \"<cloudwatch_log_group_arn>\" # CRITICAL: sends CloudTrail events to CloudWatch Logs\n cloud_watch_logs_role_arn = \"<cloudwatch_logs_role_arn>\" # CRITICAL: role CloudTrail assumes to deliver events\n}\n```"
},
"Recommendation": {
"Text": "Validate that the trails in CloudTrail have an arn set in the CloudWatchLogsLogGroupArn property.",
"Url": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html"
"Text": "Integrate every trail with **CloudWatch Logs** and maintain continuous, near-real-time delivery. Enforce **least privilege** on the delivery role, prefer **multi-Region** coverage, and implement **metric filters and alerts** for sensitive actions. Centralize retention to support **defense in depth**.",
"Url": "https://hub.prowler.com/check/cloudtrail_cloudwatch_logging_enabled"
}
},
"Categories": [
"forensics-ready",
"logging"
"logging",
"forensics-ready"
],
"DependsOn": [],
"RelatedTo": [],
@@ -1,34 +1,39 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_insights_exist",
"CheckTitle": "Ensure CloudTrail Insight is enabled",
"CheckTitle": "CloudTrail trail has Insights enabled",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices/Runtime Behavior Analysis",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "low",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail Insight is enabled",
"Risk": "CloudTrail Insights provides a powerful way to search and analyze CloudTrail log data using pre-built queries and machine learning algorithms. This can help you to identify potential security threats and suspicious activity in near real-time, such as unauthorized access attempts, policy changes, or resource modifications.",
"Description": "**CloudTrail trails** that are logging are evaluated for **Insights** via `insight selectors`, which enable anomaly detection on management-event patterns (API call and error rates). The finding pinpoints logging trails where these selectors are missing.",
"Risk": "Without **Insights**, abnormal API call or error rates can go unnoticed, delaying detection of credential abuse, privilege escalation, or runaway automation. Attackers may rapidly alter policies, delete resources, or exfiltrate data before response, impacting confidentiality and availability.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-insights-events-with-cloudtrail.html",
"https://awscli.amazonaws.com/v2/documentation/api/2.18.18/reference/cloudtrail/put-insight-selectors.html",
"https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudtrail"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cloudtrail put-insight-selectors --trail-name <TRAIL_NAME> --insight-selectors '[{\"InsightType\":\"ApiCallRateInsight\"}]'",
"NativeIaC": "```yaml\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n TrailName: <example_resource_name>\n S3BucketName: <example_resource_name>\n IsLogging: true\n InsightSelectors:\n - InsightType: ApiCallRateInsight # Critical fix: enables CloudTrail Insights on the trail\n```",
"Other": "1. In the AWS Console, go to CloudTrail > Trails\n2. Select the trail that is logging\n3. Click Edit on the CloudTrail Insights section\n4. Enable Insights and select API call rate (or Error rate)\n5. Save changes",
"Terraform": "```hcl\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n enable_logging = true\n\n insight_selector {\n insight_type = \"ApiCallRateInsight\" # Critical fix: enables CloudTrail Insights on the trail\n }\n}\n```"
},
"Recommendation": {
"Text": "Enable CloudTrail Insight",
"Url": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-insights-events-with-cloudtrail.html"
"Text": "Enable **CloudTrail Insights** on all logging trails (ideally all-Region or organization trails). Activate both `ApiCallRateInsight` and `ApiErrorRateInsight`. Integrate alerts with monitoring and review anomalies regularly. Apply **defense in depth** and least privilege to reduce potential blast radius.",
"Url": "https://hub.prowler.com/check/cloudtrail_insights_exist"
}
},
"Categories": [
"forensics-ready"
"threat-detection"
],
"DependsOn": [],
"RelatedTo": [],
@@ -1,34 +1,39 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_kms_encryption_enabled",
"CheckTitle": "Ensure CloudTrail logs are encrypted at rest using KMS CMKs",
"CheckTitle": "CloudTrail trail logs are encrypted at rest with a KMS key",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail logs are encrypted at rest using KMS CMKs",
"Risk": "By default, the log files delivered by CloudTrail to your bucket are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3). To provide a security layer that is directly manageable, you can instead use server-side encryption with AWS KMSmanaged keys (SSE-KMS) for your CloudTrail log files.",
"Description": "**AWS CloudTrail trails** are evaluated for use of **SSE-KMS** with a customer-managed KMS key to encrypt delivered log files at rest in S3. Trails without a configured KMS key are identified. *Applies to single-Region and multi-Region trails.*",
"Risk": "Absent a **customer-managed KMS key**, log protection relies only on storage permissions. Bucket misconfigurations or stolen credentials can expose audit data, aiding evasion and lateral movement. Missing key-level controls, rotation, and usage audit weaken **confidentiality** and **forensic integrity**.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/encrypting-cloudtrail-log-files-with-aws-kms.html",
"https://trendmicro.com/cloudoneconformity/knowledge-base/aws/CloudTrail/cloudtrail-logs-encrypted.html",
"https://www.stream.security/rules/ensure-cloudtrail-logs-are-encrypted-at-rest",
"https://www.clouddefense.ai/compliance-rules/cis-v130/logging/cis-v130-3-7"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail update-trail --name <trail_name> --kms-id <cloudtrail_kms_key> aws kms put-key-policy --key-id <cloudtrail_kms_key> --policy <cloudtrail_kms_key_policy>",
"NativeIaC": "https://docs.prowler.com/checks/aws/logging-policies/logging_7#fix---buildtime",
"Other": "",
"Terraform": ""
"CLI": "aws cloudtrail update-trail --name <trail_name> --kms-key-id <kms_key_arn_or_id>",
"NativeIaC": "```yaml\n# CloudFormation: enable KMS encryption for an existing/new CloudTrail\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n S3BucketName: <example_resource_name>\n KmsKeyId: <example_resource_id> # Critical: sets the KMS key to encrypt CloudTrail logs at rest\n```",
"Other": "1. In the AWS Console, go to CloudTrail > Trails\n2. Select the trail <trail_name>, click Edit\n3. Under Log file encryption, choose Use a KMS key and select <cloudtrail_kms_key>\n4. Click Save changes",
"Terraform": "```hcl\n# Enable KMS encryption for CloudTrail\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n kms_key_id = \"<example_resource_id>\" # Critical: uses this KMS key to encrypt CloudTrail logs\n}\n```"
},
"Recommendation": {
"Text": "This approach has the following advantages: You can create and manage the CMK encryption keys yourself. You can use a single CMK to encrypt and decrypt log files for multiple accounts across all regions. You have control over who can use your key for encrypting and decrypting CloudTrail log files. You can assign permissions for the key to the users. You have enhanced security.",
"Url": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/encrypting-cloudtrail-log-files-with-aws-kms.html"
"Text": "Enable **SSE-KMS** on every trail using a **customer-managed KMS key**. Apply **least privilege** so only authorized roles can `Decrypt`, and enforce **separation of duties** between key admins and log readers. Rotate keys and monitor key usage to provide **defense in depth** for CloudTrail data.",
"Url": "https://hub.prowler.com/check/cloudtrail_kms_encryption_enabled"
}
},
"Categories": [
"forensics-ready",
"encryption"
],
"DependsOn": [],
@@ -1,33 +1,40 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_log_file_validation_enabled",
"CheckTitle": "Ensure CloudTrail log file validation is enabled",
"CheckTitle": "CloudTrail trail has log file validation enabled",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail log file validation is enabled",
"Risk": "Enabling log file validation will provide additional integrity checking of CloudTrail logs. ",
"Description": "**AWS CloudTrail trails** are evaluated for **log file integrity validation** being enabled (`LogFileValidationEnabled`).\n\nWhen enabled, CloudTrail generates signed digest files to verify that S3-delivered log files remain unchanged.",
"Risk": "Without validation, adversaries can alter, forge, or delete audit entries without detection, compromising log **integrity** and non-repudiation.\n\nThis impairs investigations, enables alert evasion, and obscures unauthorized changes across regions or accounts.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html",
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-enabling.html",
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/CloudTrail/cloudtrail-log-file-integrity-validation.html",
"https://deepwiki.com/acantril/learn-cantrill-io-labs/7.1-cloudtrail-log-file-integrity"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail update-trail --name <trail_name> --enable-log-file-validation",
"NativeIaC": "https://docs.prowler.com/checks/aws/logging-policies/logging_2#cloudformation",
"Other": "",
"Terraform": "https://docs.prowler.com/checks/aws/logging-policies/logging_2#terraform"
"CLI": "aws cloudtrail update-trail --name <trail_name> --enable-log-file-validation",
"NativeIaC": "```yaml\n# CloudFormation: Enable log file validation on a CloudTrail trail\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n S3BucketName: <example_resource_name>\n EnableLogFileValidation: true # Critical: enables integrity validation for delivered log files\n```",
"Other": "1. Open the AWS Console and go to CloudTrail\n2. Click Trails and select <trail_name>\n3. Click Edit\n4. In Additional/Advanced settings, check Enable log file validation\n5. Click Save changes",
"Terraform": "```hcl\n# Enable log file validation on a CloudTrail trail\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n enable_log_file_validation = true # Critical: ensures CloudTrail writes signed digests to detect tampering\n}\n```"
},
"Recommendation": {
"Text": "Ensure LogFileValidationEnabled is set to true for each trail.",
"Url": "http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-filevalidation-enabling.html"
"Text": "Enable **log file integrity validation** on all trails (`LogFileValidationEnabled=true`).\n\nEnforce **least privilege** on the logs bucket, retain and protect digest files (e.g., S3 Object Lock/MFA Delete), and monitor validation results to support **defense in depth**.",
"Url": "https://hub.prowler.com/check/cloudtrail_log_file_validation_enabled"
}
},
"Categories": [
"logging",
"forensics-ready"
],
"DependsOn": [],
@@ -1,33 +1,38 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_logs_s3_bucket_access_logging_enabled",
"CheckTitle": "Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket",
"CheckTitle": "CloudTrail trail destination S3 bucket has access logging enabled",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket",
"Risk": "Server access logs can assist you in security and access audits, help you learn about your customer base, and understand your Amazon S3 bill.",
"Description": "CloudTrail trails deliver logs to an S3 bucket; this evaluates whether that bucket has **S3 server access logging** enabled to record requests against it.\n\n*If the destination bucket is outside the account or audit scope, a manual review is indicated.*",
"Risk": "Without access logging on the CloudTrail logs bucket, access and changes to log files lack an independent audit trail. Attackers could read, delete, or replace logs without attribution, undermining **log confidentiality** and **integrity**, and slowing **incident response**.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/securityhub/latest/userguide/cloudtrail-controls.html",
"https://docs.aws.amazon.com/AmazonS3/latest/dev/security-best-practices.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_6#aws-console",
"Terraform": ""
"CLI": "aws s3api put-bucket-logging --bucket <CLOUDTRAIL_BUCKET_NAME> --bucket-logging-status \"{\\\"LoggingEnabled\\\":{\\\"TargetBucket\\\":\\\"<TARGET_BUCKET_NAME>\\\"}}\"",
"NativeIaC": "```yaml\n# CloudFormation: enable S3 access logging on the CloudTrail destination bucket\nResources:\n <example_log_bucket_name>:\n Type: AWS::S3::Bucket\n\n <example_cloudtrail_bucket>:\n Type: AWS::S3::Bucket\n Properties:\n LoggingConfiguration:\n DestinationBucketName: !Ref <example_log_bucket_name> # Critical: turns on server access logging to this destination bucket\n # This enables access logging so the check passes\n```",
"Other": "1. In the AWS Console, go to S3 and open the bucket used by your CloudTrail trail\n2. Select the Properties tab\n3. In Server access logging, click Edit\n4. Enable logging and choose a different destination S3 bucket for the logs\n5. Click Save changes",
"Terraform": "```hcl\n# Enable access logging on the CloudTrail S3 bucket\nresource \"aws_s3_bucket\" \"<example_log_bucket_name>\" {\n bucket = \"<example_log_bucket_name>\"\n}\n\nresource \"aws_s3_bucket\" \"<example_bucket_name>\" {\n bucket = \"<example_bucket_name>\"\n}\n\nresource \"aws_s3_bucket_logging\" \"<example_resource_name>\" {\n bucket = aws_s3_bucket.<example_bucket_name>.id\n target_bucket = aws_s3_bucket.<example_log_bucket_name>.id # Critical: enables server access logging to the target bucket\n}\n```"
},
"Recommendation": {
"Text": "Ensure that S3 buckets have Logging enabled. CloudTrail data events can be used in place of S3 bucket logging. If that is the case, this finding can be considered a false positive.",
"Url": "https://docs.aws.amazon.com/AmazonS3/latest/dev/security-best-practices.html"
"Text": "Enable **S3 server access logging** on the CloudTrail logs bucket and write logs to a separate, tightly controlled bucket. Apply **least privilege**, enable **versioning**, and consider **Object Lock** to deter tampering. Centralize monitoring to support defense-in-depth and rapid investigation.",
"Url": "https://hub.prowler.com/check/cloudtrail_logs_s3_bucket_access_logging_enabled"
}
},
"Categories": [
"logging",
"forensics-ready"
],
"DependsOn": [],
@@ -1,37 +1,45 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_logs_s3_bucket_is_not_publicly_accessible",
"CheckTitle": "Ensure the S3 bucket CloudTrail logs is not publicly accessible",
"CheckTitle": "CloudTrail trail S3 bucket is not publicly accessible",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices/Network Reachability",
"Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Industry and Regulatory Standards/CIS AWS Foundations Benchmark",
"Effects/Data Exposure"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "critical",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure the S3 bucket CloudTrail logs to is not publicly accessible",
"Risk": "Allowing public access to CloudTrail log content may aid an adversary in identifying weaknesses in the affected accounts use or configuration.",
"ResourceType": "AwsS3Bucket",
"Description": "CloudTrail log destination **S3 buckets** are inspected for ACL grants that expose data to the public `AllUsers` group.\n\nBuckets hosted in other accounts are flagged for out-of-scope review.",
"Risk": "Exposed CloudTrail logs erode **confidentiality** and **integrity**.\n\nAdversaries can harvest API activity to map accounts, roles, and keys, enabling **reconnaissance** and evasion. If write is allowed, logs can be **poisoned** or deleted, thwarting investigations and compromising incident timelines.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/CloudTrail/cloudtrail-bucket-publicly-accessible.html",
"https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html",
"https://docs.aws.amazon.com/config/latest/developerguide/cloudtrail-s3-bucket-public-access-prohibited.html",
"https://docs.panther.com/alerts/alert-runbooks/built-in-policies/aws-cloudtrail-logs-s3-bucket-not-publicly-accessible"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_3#aws-console",
"Terraform": ""
"CLI": "aws s3api put-bucket-acl --bucket <example_resource_name> --acl private",
"NativeIaC": "```yaml\n# CloudFormation: ensure the CloudTrail S3 bucket ACL is not public\nResources:\n CloudTrailLogsBucket:\n Type: AWS::S3::Bucket\n Properties:\n BucketName: <example_resource_name>\n AccessControl: Private # CRITICAL: sets bucket ACL to private, removing any AllUsers (public) grants\n```",
"Other": "1. Open the AWS S3 Console\n2. Select the bucket used by CloudTrail\n3. Go to Permissions > Access control list (ACL)\n4. Click Edit under Public access, remove any grants to \"Everyone (public access)\" (uncheck Read/Write)\n5. Save changes",
"Terraform": "```hcl\n# Ensure the CloudTrail S3 bucket ACL is private\nresource \"aws_s3_bucket_acl\" \"fix_cloudtrail_logs_bucket\" {\n bucket = \"<example_resource_name>\"\n acl = \"private\" # CRITICAL: removes any public (AllUsers) ACL grants\n}\n```"
},
"Recommendation": {
"Text": "Analyze Bucket policy to validate appropriate permissions. Ensure the AllUsers principal is not granted privileges. Ensure the AuthenticatedUsers principal is not granted privileges.",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html"
"Text": "Apply **least privilege** to the log bucket:\n- Enable S3 `Block Public Access` (account and bucket)\n- Remove `AllUsers`/`AuthenticatedUsers` ACLs; avoid wildcard principals\n- Permit only CloudTrail and constrain with `aws:SourceArn`\n\nUse a dedicated private bucket and monitor for permission changes.",
"Url": "https://hub.prowler.com/check/cloudtrail_logs_s3_bucket_is_not_publicly_accessible"
}
},
"Categories": [
"forensics-ready",
"internet-exposed"
],
"DependsOn": [],
"DependsOn": [
"s3_bucket_public_access"
],
"RelatedTo": [],
"Notes": ""
}
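A boto3 sketch of the same cleanup; the bucket name is a placeholder, and Block Public Access is layered on top of the ACL reset so future public grants are also prevented:

```python
import boto3

s3 = boto3.client("s3")
bucket = "cloudtrail-logs-bucket"  # placeholder

# Reset the ACL to private, removing AllUsers/AuthenticatedUsers grants.
s3.put_bucket_acl(Bucket=bucket, ACL="private")

# Block any future public ACLs or policies on this bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```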
@@ -1,33 +1,37 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_multi_region_enabled",
"CheckTitle": "Ensure CloudTrail is enabled in all regions",
"CheckTitle": "Region has at least one CloudTrail trail logging",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail is enabled in all regions",
"Risk": "AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service.",
"Description": "**AWS CloudTrail** has at least one trail with `logging` enabled in every region. A **multi-region trail** or a regional trail counts for coverage in that region.",
"Risk": "Missing coverage in any region creates **visibility gaps**.\n\nAttackers can use lesser-monitored regions to run API actions, hide **unauthorized changes**, and exfiltrate data without audit trails, weakening **detective controls**, hindering **forensics**, and delaying response (confidentiality and integrity).",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrailconcepts.html#cloudtrail-concepts-management-events"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail create-trail --name <trail_name> --bucket-name <s3_bucket_for_cloudtrail> --is-multi-region-trail aws cloudtrail update-trail --name <trail_name> --is-multi-region-trail ",
"NativeIaC": "https://docs.prowler.com/checks/aws/logging-policies/logging_1#cloudformation",
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_1#aws-console",
"Terraform": "https://docs.prowler.com/checks/aws/logging-policies/logging_1#terraform"
"CLI": "",
"NativeIaC": "```yaml\n# CloudFormation: Create a multi-region CloudTrail and start logging\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n TrailName: <example_resource_name>\n S3BucketName: <example_resource_name>\n IsMultiRegionTrail: true # Critical: applies the trail to all regions\n IsLogging: true # Critical: ensures the trail is logging\n```",
"Other": "1. In the AWS Console, go to CloudTrail > Trails\n2. If no trail exists: Click Create trail, enter a name, choose an S3 bucket, set Apply trail to all regions = Yes, then Create (logging starts)\n3. If a trail exists: Select it, click Edit, set Apply trail to all regions = Yes, Save\n4. If Status shows Not logging, click Start logging",
"Terraform": "```hcl\n# Terraform: Multi-region CloudTrail with logging enabled\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n\n is_multi_region_trail = true # Critical: applies the trail to all regions\n enable_logging = true # Critical: ensures the trail is logging\n}\n```"
},
"Recommendation": {
"Text": "Ensure Logging is set to ON on all regions (even if they are not being used at the moment.",
"Url": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrailconcepts.html#cloudtrail-concepts-management-events"
"Text": "Use a **multi-region CloudTrail trail** or per-region trails so `logging` is active in every region, including unused ones.\n\nCentralize logs, enforce **least privilege** to log stores, and add **defense-in-depth** with encryption, integrity validation, and retention. Continuously monitor trail health to catch gaps.",
"Url": "https://hub.prowler.com/check/cloudtrail_multi_region_enabled"
}
},
"Categories": [
"logging",
"forensics-ready"
],
"DependsOn": [],
@@ -1,31 +1,38 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_multi_region_enabled_logging_management_events",
"CheckTitle": "Ensure CloudTrail logging management events in All Regions",
"CheckTitle": "CloudTrail trail logs management events for read and write operations",
"CheckType": [
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "low",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail logging management events in All Regions",
"Risk": "AWS CloudTrail enables governance, compliance, operational auditing, and risk auditing of your AWS account. To meet FTR requirements, you must have management events enabled for all AWS accounts and in all regions and aggregate these logs into an Amazon Simple Storage Service (Amazon S3) bucket owned by a separate AWS account.",
"RelatedUrl": "https://docs.prowler.com/checks/aws/logging-policies/logging_14",
"Description": "**CloudTrail trails** record **management events** (`read` and `write`) in every AWS region and are actively logging, using a multi-region trail or per-region coverage.",
"Risk": "Without region-wide management event logging, changes to identities, networking, and audit settings can go untracked.\n\nAdversaries can operate in overlooked regions to create resources, modify permissions, or disable logging, undermining **integrity**, **confidentiality**, and incident response.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.prowler.com/checks/aws/logging-policies/logging_14#terraform",
"https://docs.prowler.com/checks/aws/logging-policies/logging_14"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail update-trail --name <trail_name> --is-multi-region-trail",
"NativeIaC": "",
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_14",
"Terraform": "https://docs.prowler.com/checks/aws/logging-policies/logging_14#terraform"
"CLI": "",
"NativeIaC": "```yaml\n# CloudFormation: enable multi-region and log management events (read & write)\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n S3BucketName: <example_resource_name>\n IsMultiRegionTrail: true # CRITICAL: apply the trail to all regions\n EventSelectors:\n - IncludeManagementEvents: true # CRITICAL: log management events\n ReadWriteType: All # CRITICAL: log both read and write\n```",
"Other": "1. In the AWS Console, go to CloudTrail > Trails and select your trail\n2. Click Edit\n3. Set Apply trail to all regions to Yes\n4. Under Management events, set Read/write events to All\n5. Click Save changes\n6. If Logging is Off, click Start logging",
"Terraform": "```hcl\n# Terraform: enable multi-region and log management events (read & write)\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n\n is_multi_region_trail = true # CRITICAL: apply the trail to all regions\n\n event_selector {\n include_management_events = true # CRITICAL: log management events\n read_write_type = \"All\" # CRITICAL: log both read & write\n }\n}\n```"
},
"Recommendation": {
"Text": "Enable CloudTrail logging management events in All Regions",
"Url": "https://docs.prowler.com/checks/aws/logging-policies/logging_14"
"Text": "Enable a **multi-region CloudTrail** that logs **management events** for `read` and `write` in all regions.\n\nCentralize logs in a separate, locked-down account; apply **least privilege**, encryption, retention, and integrity validation; and protect trails and storage with tamper-evident, deny-delete controls for **defense-in-depth**.",
"Url": "https://hub.prowler.com/check/cloudtrail_multi_region_enabled_logging_management_events"
}
},
"Categories": [
"logging",
"forensics-ready"
],
"DependsOn": [],
@@ -1,31 +1,41 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_s3_dataevents_read_enabled",
"CheckTitle": "Check if S3 buckets have Object-level logging for read events is enabled in CloudTrail.",
"CheckTitle": "CloudTrail trail records S3 object-level read events for all S3 buckets",
"CheckType": [
"Logging and Monitoring"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "low",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure that all your AWS CloudTrail trails are configured to log Data events in order to record S3 object-level API operations, such as GetObject, DeleteObject and PutObject, for individual S3 buckets or for all current and future S3 buckets provisioned in your AWS account.",
"Risk": "If logs are not enabled, monitoring of service use and threat analysis is not possible.",
"Description": "**CloudTrail trails** log **S3 object-level read data events** for all buckets, capturing object access (for example `GetObject`) via selectors targeting `AWS::S3::Object`",
"Risk": "Without **object-level read logging**, S3 access is opaque. Attackers or insiders can exfiltrate data via `GetObject` without audit trails, eroding **confidentiality** and hindering **forensics**, anomaly detection, and incident response.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://awswala.medium.com/enable-cloudtrail-data-events-logging-for-objects-in-an-s3-bucket-33cade51ae2b",
"https://docs.aws.amazon.com/securityhub/latest/userguide/s3-controls.html#s3-23",
"https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-cloudtrail-logging-for-s3.html",
"https://www.plerion.com/cloud-knowledge-base/ensure-object-level-logging-for-read-events-enabled-for-s3-bucket"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail put-event-selectors --trail-name <YOUR_TRAIL_NAME_HERE> --event-selectors '[{ 'ReadWriteType': 'ReadOnly', 'IncludeManagementEvents':true, 'DataResources': [{ 'Type': 'AWS::S3::Object', 'Values': ['arn:aws:s3'] }] }]'",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/s3-controls.html#s3-23",
"Terraform": ""
"CLI": "aws cloudtrail put-event-selectors --trail-name <example_resource_name> --event-selectors '[{\"ReadWriteType\":\"ReadOnly\",\"DataResources\":[{\"Type\":\"AWS::S3::Object\",\"Values\":[\"arn:aws:s3\"]}]}]'",
"NativeIaC": "```yaml\n# CloudFormation: enable S3 object-level READ data events for all buckets on a trail\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n S3BucketName: <example_resource_name>\n EventSelectors:\n - ReadWriteType: ReadOnly # CRITICAL: log read-only data events\n DataResources:\n - Type: AWS::S3::Object # CRITICAL: target S3 object-level events\n Values:\n - arn:aws:s3 # CRITICAL: applies to all S3 buckets/objects\n```",
"Other": "1. In the AWS Console, open CloudTrail and select Trails\n2. Open your trail and go to the Data events section\n3. Add data event for S3 and choose All current and future S3 buckets\n4. Select only Read events (or All if Read-only is unavailable)\n5. Save changes",
"Terraform": "```hcl\n# Terraform: enable S3 object-level READ data events for all buckets on a trail\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n\n event_selector {\n read_write_type = \"ReadOnly\" # CRITICAL: log read-only data events\n data_resource {\n type = \"AWS::S3::Object\" # CRITICAL: target S3 object-level events\n values = [\"arn:aws:s3\"] # CRITICAL: apply to all S3 buckets/objects\n }\n }\n}\n```"
},
"Recommendation": {
"Text": "Enable logs. Create an S3 lifecycle policy. Define use cases, metrics and automated responses where applicable.",
"Url": "https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-cloudtrail-logging-for-s3.html"
"Text": "Enable CloudTrail **data events** for S3 objects with `ReadOnly` (or `All`) across all current and future buckets. Use a multi-Region trail, centralize logs in an encrypted bucket with lifecycle retention, and integrate monitoring/alerts to support **defense in depth** and accountable access.",
"Url": "https://hub.prowler.com/check/cloudtrail_s3_dataevents_read_enabled"
}
},
"Categories": [],
"Categories": [
"logging",
"forensics-ready"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,31 +1,41 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_s3_dataevents_write_enabled",
"CheckTitle": "Check if S3 buckets have Object-level logging for write events is enabled in CloudTrail.",
"CheckTitle": "CloudTrail trail records all S3 object-level API operations for all buckets",
"CheckType": [
"Logging and Monitoring"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "low",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure that all your AWS CloudTrail trails are configured to log Data events in order to record S3 object-level API operations, such as GetObject, DeleteObject and PutObject, for individual S3 buckets or for all current and future S3 buckets provisioned in your AWS account.",
"Risk": "If logs are not enabled, monitoring of service use and threat analysis is not possible.",
"Description": "**CloudTrail trails** include **S3 object-level data events** for **write (or all) operations** across **all current and future buckets**, via classic or advanced selectors. This records actions like `PutObject`, `DeleteObject`, and multipart uploads at the object level.",
"Risk": "Without object-level write logging, unauthorized or accidental changes and deletions can go unobserved, undermining data **integrity** and **availability**. Forensics lose visibility into who modified or removed objects, hindering detection of ransomware, rogue automation, or insider tampering.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html",
"https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-cloudtrail-logging-for-s3.html",
"https://www.go2share.net/article/s3-bucket-logging",
"https://docs.amazonaws.cn/en_us/AmazonS3/latest/userguide/cloudtrail-logging-s3-info.html",
"https://docs.aws.amazon.com/securityhub/latest/userguide/s3-controls.html#s3-22"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail put-event-selectors --trail-name <YOUR_TRAIL_NAME_HERE> --event-selectors '[{ 'ReadWriteType': 'WriteOnly', 'IncludeManagementEvents':true, 'DataResources': [{ 'Type': 'AWS::S3::Object', 'Values': ['arn:aws:s3'] }] }]'",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/s3-controls.html#s3-22",
"Terraform": ""
"CLI": "aws cloudtrail put-event-selectors --trail-name <example_resource_name> --event-selectors '[{\"ReadWriteType\":\"WriteOnly\",\"DataResources\":[{\"Type\":\"AWS::S3::Object\",\"Values\":[\"arn:aws:s3\"]}]}]'",
"NativeIaC": "```yaml\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n TrailName: <example_resource_name>\n S3BucketName: <example_resource_name>\n EventSelectors:\n - ReadWriteType: WriteOnly\n DataResources:\n - Type: AWS::S3::Object\n Values:\n - arn:aws:s3 # Critical: enables S3 object-level write data events for all buckets, fixing the check\n```",
"Other": "1. In the AWS Console, open CloudTrail and go to Trails\n2. Select <your trail> and click Edit under Data events\n3. For Data event source, choose S3\n4. Select All current and future S3 buckets\n5. Check Write events (or All events)\n6. Click Save changes",
"Terraform": "```hcl\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n\n event_selector {\n read_write_type = \"WriteOnly\"\n data_resource {\n type = \"AWS::S3::Object\"\n values = [\"arn:aws:s3\"] # Critical: logs S3 object-level write events for all buckets to pass the check\n }\n }\n}\n```"
},
"Recommendation": {
"Text": "Enable logs. Create an S3 lifecycle policy. Define use cases, metrics and automated responses where applicable.",
"Url": "https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-cloudtrail-logging-for-s3.html"
"Text": "Enable **CloudTrail S3 data events** for object-level **write** (and *optionally* read) across all buckets on a multi-Region trail. Apply **least privilege** to log storage, set **lifecycle** retention, and integrate alerts. Use **advanced selectors** to target sensitive buckets/operations for cost control and **defense in depth**.",
"Url": "https://hub.prowler.com/check/cloudtrail_s3_dataevents_write_enabled"
}
},
"Categories": [
"logging",
"forensics-ready"
],
"DependsOn": [],
@@ -1,26 +1,37 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_threat_detection_enumeration",
"CheckTitle": "Ensure there are no potential enumeration threats in CloudTrail",
"CheckType": [],
"CheckTitle": "CloudTrail logs show no potential enumeration activity",
"CheckType": [
"TTPs/Discovery",
"Software and Configuration Checks/AWS Security Best Practices/Runtime Behavior Analysis",
"Unusual Behaviors/User"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "critical",
"ResourceType": "AwsCloudTrailTrail",
"Description": "This check ensures that there are no potential enumeration threats in CloudTrail.",
"Risk": "Potential enumeration threats in CloudTrail can lead to unauthorized access to resources.",
"Description": "**CloudTrail activity** is analyzed for AWS identities executing a broad mix of discovery APIs like `List*`, `Describe*`, and `Get*` within a recent time window.\n\nAn identity exceeding a configurable ratio of these actions indicates potential enumeration behavior by that principal.",
"Risk": "Concentrated discovery activity signals **reconnaissance** with valid credentials. Adversaries can map assets and policies to enable **privilege escalation**, target data stores for **exfiltration** (confidentiality), and identify services to disrupt (availability), supporting stealthy lateral movement.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://medium.com/falconforce/falconfriday-detecting-enumeration-in-aws-0xff25-orangecon-25-edition-4aee83651088",
"https://www.elastic.co/guide/en/security/8.19/aws-discovery-api-calls-via-cli-from-a-single-resource.html",
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-logging-data-events",
"https://aws.plainenglish.io/aws-cloudtrail-event-cheatsheet-a-detection-engineers-guide-to-critical-api-calls-part-1-04fb1588556f",
"https://support.icompaas.com/support/solutions/articles/62000233455-ensure-there-are-no-potential-enumeration-threats-in-cloudtrail-"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws iam update-access-key --user-name <USER_NAME> --access-key-id <ACCESS_KEY_ID> --status Inactive",
"NativeIaC": "```yaml\n# CloudFormation: deny common enumeration APIs for a specific IAM user\nResources:\n DenyEnumerationPolicy:\n Type: AWS::IAM::Policy\n Properties:\n PolicyName: deny-enumeration\n PolicyDocument:\n Version: \"2012-10-17\"\n Statement:\n - Effect: Deny # CRITICAL: blocks typical enumeration calls\n Action:\n - ec2:Describe* # CRITICAL: deny EC2 describe APIs\n - iam:List* # CRITICAL: deny IAM list APIs\n - s3:List* # CRITICAL: deny S3 list APIs\n - s3:Get* # CRITICAL: deny S3 get APIs (e.g., GetBucketAcl)\n Resource: \"*\"\n Users:\n - \"<example_resource_name>\" # CRITICAL: target the enumerating user\n```",
"Other": "1. In AWS Console, go to IAM > Users and open the user shown in the alert (ARN in the finding)\n2. Select the Security credentials tab\n3. For each active Access key, click Deactivate to set status to Inactive\n4. If the activity came from an EC2 instance role: go to EC2 > Instances > select the instance > Security > IAM role > Detach IAM role\n5. Re-run the check to confirm no new enumeration events occur",
"Terraform": "```hcl\n# Deny common enumeration APIs for a specific IAM user\nresource \"aws_iam_user_policy\" \"<example_resource_name>\" {\n name = \"deny-enumeration\"\n user = \"<example_user_name>\"\n\n policy = jsonencode({\n Version = \"2012-10-17\",\n Statement = [{\n Effect = \"Deny\", # CRITICAL: blocks typical enumeration calls\n Action = [\n \"ec2:Describe*\", # CRITICAL\n \"iam:List*\", # CRITICAL\n \"s3:List*\", # CRITICAL\n \"s3:Get*\" # CRITICAL\n ],\n Resource = \"*\"\n }]\n })\n}\n```"
},
"Recommendation": {
"Text": "To remediate this issue, ensure that there are no potential enumeration threats in CloudTrail.",
"Url": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-logging-data-events"
"Text": "Apply **least privilege** to limit `List*`/`Describe*`/`Get*` to necessary resources and roles; use **separation of duties**.\n- Enforce MFA and short-lived sessions\n- Use **SCPs** to curb unnecessary discovery\n- Baseline expected reads and alert on spikes as **defense in depth**",
"Url": "https://hub.prowler.com/check/cloudtrail_threat_detection_enumeration"
}
},
"Categories": [
@@ -1,30 +1,43 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_threat_detection_llm_jacking",
"CheckTitle": "Ensure there are no potential LLM Jacking threats in CloudTrail.",
"CheckType": [],
"CheckTitle": "No potential LLM jacking activity detected in CloudTrail",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices/Runtime Behavior Analysis",
"TTPs/Discovery",
"TTPs/Execution",
"TTPs/Defense Evasion",
"Effects/Resource Consumption",
"Unusual Behaviors/User"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "critical",
"ResourceType": "AwsCloudTrailTrail",
"Description": "This check ensures that there are no potential LLM Jacking threats in CloudTrail. LLM Jacking attacks involve unauthorized access to cloud-hosted large language model (LLM) services, such as AWS Bedrock, by exploiting exposed credentials or vulnerabilities. These attacks can lead to resource hijacking, unauthorized model invocations, and high operational costs for the victim organization.",
"Risk": "Potential LLM Jacking threats in CloudTrail can lead to unauthorized access to sensitive AI models, stolen credentials, resource hijacking, or running costly workloads. Attackers may use reverse proxies or malicious credentials to sell access to models, exfiltrate sensitive data, or disrupt business operations.",
"RelatedUrl": "https://sysdig.com/blog/llmjacking-stolen-cloud-credentials-used-in-new-ai-attack/",
"Description": "**CloudTrail Bedrock activity** is analyzed per identity for a high diversity of LLM-related API calls (e.g., `InvokeModel`, `InvokeModelWithResponseStream`, `GetFoundationModelAvailability`). *If an identity's share of these actions exceeds a configured threshold over a recent window*, it is surfaced as potential **LLM-jacking** behavior.",
"Risk": "Such patterns suggest **stolen credential** abuse to drive LLM usage.\n- Availability: cost exhaustion and service disruption\n- Confidentiality: leakage of prompts/outputs and model settings\n- Integrity: misuse of permissions for broader access\nAttackers may use reverse proxies to resell access and obfuscate sources.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://furkangungor.medium.com/automating-anomaly-detection-in-aws-cloudtrail-logs-4efb2ad9b958",
"https://help.sumologic.com/docs/integrations/amazon-aws/amazon-bedrock/",
"https://dzone.com/articles/ai-powered-aws-cloudtrail-analysis-strands-agent-bedrock"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"NativeIaC": "```yaml\n# CloudFormation SCP that blocks all Amazon Bedrock actions to stop LLM jacking\nResources:\n <example_resource_name>:\n Type: AWS::Organizations::Policy\n Properties:\n Name: <example_resource_name>\n Type: SERVICE_CONTROL_POLICY\n TargetIds:\n - \"<example_resource_id>\" # CRITICAL: Attach SCP to the root/OU/account to enforce the deny\n Content:\n Version: \"2012-10-17\"\n Statement:\n - Sid: DenyBedrock\n Effect: Deny\n Action: \"bedrock:*\" # CRITICAL: Denies all Bedrock APIs (Invoke/Converse/list/entitlements/etc.)\n Resource: \"*\" # CRITICAL: Apply deny to all resources\n```",
"Other": "1. In the AWS Console, go to Organizations > Policies > Service control policies\n2. Click Create policy\n3. Set Name to <example_resource_name>\n4. In Policy, paste a deny for Bedrock:\n {\n \"Version\": \"2012-10-17\",\n \"Statement\": [{\"Sid\":\"DenyBedrock\",\"Effect\":\"Deny\",\"Action\":\"bedrock:*\",\"Resource\":\"*\"}]\n }\n5. Save the policy and click Attach\n6. Select the target (Root, OU, or the affected account ID <example_resource_id>) and attach the policy\n7. Wait for propagation; no further Bedrock calls will occur, and the finding will clear after the detection window elapses",
"Terraform": "```hcl\n# SCP denying all Amazon Bedrock actions; attach it to the root/OU/account to halt LLM jacking\nresource \"aws_organizations_policy\" \"main\" {\n name = \"<example_resource_name>\"\n type = \"SERVICE_CONTROL_POLICY\"\n\n content = jsonencode({\n Version = \"2012-10-17\"\n Statement = [{\n Sid = \"DenyBedrock\"\n Effect = \"Deny\"\n Action = \"bedrock:*\" // CRITICAL: blocks all Bedrock APIs (prevents further suspicious activity)\n Resource = \"*\" // CRITICAL: deny across all resources\n }]\n })\n}\n\nresource \"aws_organizations_policy_attachment\" \"attach\" {\n policy_id = aws_organizations_policy.main.id\n target_id = \"<example_resource_id>\" // CRITICAL: attach to the affected account/OU/root to enforce the deny\n}\n```"
},
"Recommendation": {
"Text": "To remediate this issue, enable detailed CloudTrail logging for Bedrock API calls, monitor suspicious activities, and secure sensitive credentials. Enable logging of model invocation inputs and outputs, and restrict access using IAM policies. Review CloudTrail logs regularly for suspicious `InvokeModel` actions or unauthorized access to models.",
"Url": "https://permiso.io/blog/exploiting-hosted-models"
"Text": "Apply **least privilege** to Bedrock; restrict `Invoke*` only to required roles and deny broadly via **SCPs** where unused. Enforce **MFA** and short-lived creds; rotate/remove exposed keys. Enable **model invocation logging** and budgets/quotas. Continuously monitor for Bedrock enumeration plus invoke bursts. Use **defense in depth** across identities and networks.",
"Url": "https://hub.prowler.com/check/cloudtrail_threat_detection_llm_jacking"
}
},
"Categories": [
"threat-detection"
"threat-detection",
"gen-ai"
],
"DependsOn": [],
"RelatedTo": [],
@@ -1,26 +1,34 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_threat_detection_privilege_escalation",
"CheckTitle": "Ensure there are no potential privilege escalation threats in CloudTrail",
"CheckType": [],
"CheckTitle": "No potential privilege escalation activity detected in CloudTrail",
"CheckType": [
"TTPs/Privilege Escalation",
"Software and Configuration Checks/AWS Security Best Practices/Runtime Behavior Analysis"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "critical",
"ResourceType": "AwsCloudTrailTrail",
"Description": "This check ensures that there are no potential privilege escalation threats in CloudTrail.",
"Risk": "Potential privilege escalation threats in CloudTrail can lead to unauthorized access to resources.",
"Description": "**CloudTrail** activity is analyzed for **identities** executing high-risk actions linked to **privilege escalation** (e.g., `Attach*Policy`, `PassRole`, `AssumeRole`, `CreateAccessKey`). Identities exceeding a configurable share of such events within a *recent time window* are highlighted for investigation.",
"Risk": "Escalation patterns can grant elevated entitlements, enabling:\n- Confidentiality loss via unauthorized data/secret access\n- Integrity compromise by changing IAM policies/roles\n- Availability impact by tampering with logging or resources\nThis also facilitates lateral movement and persistence.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/",
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-logging-data-events",
"https://signmycode.com/blog/what-is-privilege-escalation-in-aws-recommendations-to-prevent-it"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"NativeIaC": "```yaml\n# CloudFormation: Organization SCP to block common IAM privilege-escalation actions\nResources:\n <example_resource_name>:\n Type: AWS::Organizations::Policy\n Properties:\n Name: deny-iam-privesc\n Type: SERVICE_CONTROL_POLICY\n # Critical: This SCP denies risky IAM actions often used for privilege escalation\n # Explanation: Denying these actions organization-wide prevents future privesc activity detected by CloudTrail\n Content: |\n {\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Deny\",\n \"Action\": [\n \"iam:AttachUserPolicy\",\n \"iam:AttachRolePolicy\",\n \"iam:PutUserPolicy\",\n \"iam:PutRolePolicy\",\n \"iam:PutGroupPolicy\",\n \"iam:AddUserToGroup\",\n \"iam:CreateAccessKey\",\n \"iam:CreateLoginProfile\",\n \"iam:UpdateLoginProfile\",\n \"iam:UpdateAssumeRolePolicy\",\n \"iam:CreatePolicyVersion\",\n \"iam:SetDefaultPolicyVersion\",\n \"iam:PassRole\"\n ],\n \"Resource\": \"*\"\n }\n ]\n }\n <example_resource_name>Attachment:\n Type: AWS::Organizations::PolicyAttachment\n Properties:\n # Critical: Attach the SCP so it is enforced\n PolicyId: !Ref <example_resource_name>\n TargetId: <example_resource_id> # OU, Root, or Account ID\n```",
"Other": "1. In AWS Console, open IAM and identify the AWS identity shown in the Prowler finding (user or role ARN)\n2. If it is an IAM user:\n - Go to Security credentials > Access keys, set active keys to Inactive\n - Go to Permissions, detach all managed policies and delete inline policies\n - Go to Groups, remove the user from privileged groups\n - Go to Console password, delete the login profile\n3. If it is an IAM role:\n - Go to Permissions, detach managed policies and delete inline policies\n - Go to Trust relationships, remove principals that should not assume the role and save\n4. Re-run the scan after the detection window elapses to confirm no further privilege-escalation activity is detected",
"Terraform": "```hcl\n# SCP to block common IAM privilege-escalation actions\nresource \"aws_organizations_policy\" \"<example_resource_name>\" {\n name = \"deny-iam-privesc\"\n type = \"SERVICE_CONTROL_POLICY\"\n\n # Critical: Deny risky IAM actions to prevent future privesc\n # Explanation: Blocks escalation techniques commonly seen in CloudTrail\n content = jsonencode({\n Version = \"2012-10-17\",\n Statement = [\n {\n Effect = \"Deny\",\n Action = [\n \"iam:AttachUserPolicy\",\n \"iam:AttachRolePolicy\",\n \"iam:PutUserPolicy\",\n \"iam:PutRolePolicy\",\n \"iam:PutGroupPolicy\",\n \"iam:AddUserToGroup\",\n \"iam:CreateAccessKey\",\n \"iam:CreateLoginProfile\",\n \"iam:UpdateLoginProfile\",\n \"iam:UpdateAssumeRolePolicy\",\n \"iam:CreatePolicyVersion\",\n \"iam:SetDefaultPolicyVersion\",\n \"iam:PassRole\"\n ],\n Resource = \"*\"\n }\n ]\n })\n}\n\nresource \"aws_organizations_policy_attachment\" \"<example_resource_name>_attach\" {\n # Critical: Attach the SCP so it takes effect\n policy_id = aws_organizations_policy.<example_resource_name>.id\n target_id = \"<example_resource_id>\" # OU, Root, or Account ID\n}\n```"
},
"Recommendation": {
"Text": "To remediate this issue, ensure that there are no potential privilege escalation threats in CloudTrail.",
"Url": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-logging-data-events"
"Text": "Apply **least privilege** and **defense in depth**:\n- Restrict `PassRole`, `Attach*Policy`, `UpdateAssumeRolePolicy`, `CreateAccessKey`\n- Enforce permission boundaries and SCPs\n- Require MFA and change approvals\n- Use multi-Region CloudTrail, immutable retention, and alerting on anomalous sequences",
"Url": "https://hub.prowler.com/check/cloudtrail_threat_detection_privilege_escalation"
}
},
"Categories": [
@@ -1,26 +1,35 @@
{
"Provider": "aws",
"CheckID": "directoryservice_directory_log_forwarding_enabled",
"CheckTitle": "Directory Service monitoring with CloudWatch logs.",
"CheckType": [],
"CheckTitle": "Directory Service directory has log forwarding to CloudWatch Logs enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices"
],
"ServiceName": "directoryservice",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:codeartifact:region:account-id:directory/directory-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "Other",
"Description": "Directory Service monitoring with CloudWatch logs.",
"Risk": "As a best practice, monitor your organization to ensure that changes are logged. This helps you to ensure that any unexpected change can be investigated and unwanted changes can be rolled back.",
"RelatedUrl": "https://docs.aws.amazon.com/directoryservice/latest/admin-guide/incident-response.html",
"Description": "**AWS Directory Service directories** are configured to forward domain controller security event logs to **CloudWatch Logs** using log subscriptions.\n\nEvaluation identifies directories with or without this forwarding in place.",
"Risk": "Without forwarding, visibility into AD security events is lost, delaying detection of suspicious authentications, policy changes, or privilege grants. Attackers can escalate and persist unnoticed, risking unauthorized access (confidentiality) and identity/policy manipulation (integrity), while hampering forensics and response.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.amazonaws.cn/en_us/directoryservice/latest/admin-guide/ms_ad_enable_log_forwarding.html",
"https://docs.aws.amazon.com/directoryservice/latest/admin-guide/incident-response.html",
"https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_enable_log_forwarding.html",
"https://support.icompaas.com/support/solutions/articles/62000233528--ensure-directory-service-monitoring-with-cloudwatch-logs"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"NativeIaC": "```yaml\n# CloudFormation: enable Directory Service log forwarding to CloudWatch Logs\nResources:\n LogGroup:\n Type: AWS::Logs::LogGroup\n Properties:\n LogGroupName: /aws/directoryservice/<example_resource_id>\n\n LogsResourcePolicy:\n Type: AWS::Logs::ResourcePolicy\n Properties:\n PolicyName: DSLogSubscription\n PolicyDocument: |\n {\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"ds.amazonaws.com\"},\"Action\":[\"logs:CreateLogStream\",\"logs:PutLogEvents\",\"logs:DescribeLogStreams\"],\"Resource\":\"arn:aws:logs:*:*:log-group:/aws/directoryservice/*\"}]}\n\n DirectoryLogSubscription:\n Type: AWS::DirectoryService::LogSubscription\n Properties:\n DirectoryId: <example_resource_id> # CRITICAL: target Directory Service ID to enable log forwarding\n LogGroupName: /aws/directoryservice/<example_resource_id> # CRITICAL: CloudWatch Logs destination\n```",
"Other": "1. In the AWS Console, go to Directory Service > Directories and open your directory\n2. On the Directory details page, select the Networking & security tab\n3. In Log forwarding, click Enable\n4. Choose Create a new CloudWatch log group (or select an existing one)\n5. Click Enable to start forwarding logs",
"Terraform": "```hcl\n# Enable Directory Service log forwarding to CloudWatch Logs\nresource \"aws_cloudwatch_log_group\" \"ds\" {\n name = \"/aws/directoryservice/<example_resource_id>\"\n}\n\nresource \"aws_cloudwatch_log_resource_policy\" \"ds\" {\n policy_name = \"DSLogSubscription\"\n policy_document = jsonencode({\n Version = \"2012-10-17\",\n Statement = [{\n Effect = \"Allow\",\n Principal = { Service = \"ds.amazonaws.com\" },\n Action = [\"logs:CreateLogStream\", \"logs:PutLogEvents\", \"logs:DescribeLogStreams\"],\n Resource = \"arn:aws:logs:*:*:log-group:/aws/directoryservice/*\"\n }]\n })\n}\n\nresource \"aws_directory_service_log_subscription\" \"enable\" {\n directory_id = \"<example_resource_id>\" # CRITICAL: enables log forwarding for this directory\n log_group_name = aws_cloudwatch_log_group.ds.name # CRITICAL: CloudWatch Logs destination\n}\n```"
},
"Recommendation": {
"Text": "It is recommended that that the export of logs is enabled.",
"Url": "https://docs.aws.amazon.com/directoryservice/latest/admin-guide/incident-response.html"
"Text": "Enable and maintain **log forwarding** to CloudWatch Logs.\n\n- Centralize logs in a protected group with strict access and retention\n- Apply least privilege for delivery roles and readers; prevent tampering (immutability)\n- Alert on high-risk events and aggregate across Regions/accounts for defense in depth",
"Url": "https://hub.prowler.com/check/directoryservice_directory_log_forwarding_enabled"
}
},
"Categories": [
@@ -1,29 +1,37 @@
{
"Provider": "aws",
"CheckID": "directoryservice_directory_monitor_notifications",
"CheckTitle": "Directory Service has SNS Notifications enabled.",
"CheckType": [],
"CheckTitle": "Directory Service directory has SNS notifications enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices"
],
"ServiceName": "directoryservice",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:codeartifact:region:account-id:directory/directory-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "Other",
"Description": "Directory Service has SNS Notifications enabled.",
"Risk": "As a best practice, monitor status of Directory Service. This helps to avoid late actions to fix Directory Service issues.",
"RelatedUrl": "https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_enable_notifications.html",
"Description": "**AWS Directory Service** directories are associated with **Amazon SNS topics** to send status change notifications (e.g., `Active` `Impaired`).\n\nThe evaluation looks for directories that have SNS event topics configured for monitoring alerts.",
"Risk": "Missing directory notifications reduces visibility into health changes, causing delayed response to `Impaired` states. This threatens availability of authentication, Kerberos/LDAP lookups, and domain joins; increases MTTR; and can enable silent replication or trust failures that impact integrity across dependent workloads.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_enable_notifications.html",
"https://support.icompaas.com/support/solutions/articles/62000233533-ensure-directory-service-has-sns-notifications-enabled"
],
"Remediation": {
"Code": {
"CLI": "",
"CLI": "aws ds register-event-topic --directory-id <DIRECTORY_ID> --topic-name <SNS_TOPIC_NAME>",
"NativeIaC": "",
"Other": "",
"Other": "1. Open AWS Console > Directory Service > Directories and select your directory\n2. Go to the Maintenance or Monitoring/Notifications section\n3. Click Actions > Create notification (or Set up notifications)\n4. Select an existing SNS topic (or create one) and Save",
"Terraform": ""
},
"Recommendation": {
"Text": "It is recommended set up SNS messaging to send email or text messages when the status of your directory changes.",
"Url": "https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_enable_notifications.html"
"Text": "Configure **AWS Directory Service** to publish directory status changes to an **SNS topic**, and subscribe your operations channels for timely alerts.\n\nApply **least privilege** on topic permissions, integrate alerts with incident response, and use **defense in depth** by pairing notifications with logs and dashboards.",
"Url": "https://hub.prowler.com/check/directoryservice_directory_monitor_notifications"
}
},
"Categories": [],
"Categories": [
"logging"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,29 +1,38 @@
{
"Provider": "aws",
"CheckID": "directoryservice_directory_snapshots_limit",
"CheckTitle": "Directory Service Manual Snapshots limit reached.",
"CheckType": [],
"CheckTitle": "Directory Service directory has adequate remaining manual snapshot quota",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Resource Consumption"
],
"ServiceName": "directoryservice",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:codeartifact:region:account-id:directory/directory-id",
"ResourceIdTemplate": "",
"Severity": "low",
"ResourceType": "Other",
"Description": "Directory Service Manual Snapshots limit reached.",
"Risk": "A limit reached can bring unwanted results. The maximum number of manual snapshots is a hard limit.",
"RelatedUrl": "https://docs.aws.amazon.com/general/latest/gr/ds_region.html",
"Description": "**AWS Directory Service** directories with **manual snapshot capacity** fully consumed or nearly exhausted, based on current snapshot count relative to the directory's maximum allowed.",
"Risk": "With no remaining snapshot capacity, you cannot create new recovery points:\n- Reduced availability during outages or ransomware\n- Higher RPO from failed scheduled backups\n- Greater change risk (schema/OS updates) without a safe rollback",
"RelatedUrl": "",
"AdditionalURLs": [
"https://support.icompaas.com/support/solutions/articles/62000233531--ensure-directory-service-manual-snapshots-limit-reached",
"https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_limits.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Other": "1. In the AWS Console, go to Directory Service > Directories and open <example_resource_id>\n2. Click Snapshots\n3. Select older snapshots with Type = Manual and click Delete snapshot, confirm\n4. Repeat until the number of manual snapshots is less than (manual limit - 2). For the default limit of 5, keep at most 2 manual snapshots\n5. Verify Remaining manual snapshots > 2 on the Snapshots page",
"Terraform": ""
},
"Recommendation": {
"Text": "Monitor manual snapshots limit to ensure capacity when you need it.",
"Url": "https://docs.aws.amazon.com/general/latest/gr/ds_region.html"
"Text": "Adopt a **snapshot lifecycle policy**: rotate/expire old manual snapshots after verifying restores, and alert on low headroom. Prefer **automated backups** for cadence and retention. Enforce **least privilege** for snapshot creation. Design operations within the *hard per-directory cap* to prevent capacity exhaustion.",
"Url": "https://hub.prowler.com/check/directoryservice_directory_snapshots_limit"
}
},
"Categories": [],
"Categories": [
"resilience"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,29 +1,38 @@
{
"Provider": "aws",
"CheckID": "directoryservice_ldap_certificate_expiration",
"CheckTitle": "Directory Service LDAP Certificates expiration.",
"CheckType": [],
"CheckTitle": "Directory Service LDAP certificate expires in more than 90 days",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices"
],
"ServiceName": "directoryservice",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:codeartifact:region:account-id:directory/directory-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "Other",
"Description": "Directory Service Manual Snapshots limit reached.",
"Risk": "Expired certificates can impact service availability.",
"RelatedUrl": "https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_ldap.html",
"Description": "**AWS Directory Service** Secure LDAP (LDAPS) certificates are assessed for upcoming expiration by comparing each directory's certificate expiration to the current time and identifying those with `<= 90` days remaining.",
"Risk": "Expired LDAPS certificates cause TLS handshakes to fail, blocking directory binds and queries and disrupting authentication and app integrations (availability). If clients fall back to plain LDAP, credentials and directory data can be intercepted or altered (confidentiality and integrity).",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_ldap.html",
"https://support.icompaas.com/support/solutions/articles/62000229587-ensure-to-monitor-directory-service-ldap-certificates-expiration"
],
"Remediation": {
"Code": {
"CLI": "",
"CLI": "aws ds register-certificate --directory-id <DIRECTORY_ID> --certificate-data file://certificate.pem",
"NativeIaC": "",
"Other": "",
"Other": "1. In the AWS Console, open Directory Service and select your AWS Managed Microsoft AD (<example_resource_id>)\n2. Go to Networking & security > Secure LDAP\n3. Click Edit (Manage certificate)\n4. Choose Replace certificate (or Upload certificate)\n5. Upload a new LDAPS server certificate with private key from a trusted CA (valid for >90 days); enter the password if using a .pfx\n6. Save and wait until the certificate status is Active",
"Terraform": ""
},
"Recommendation": {
"Text": "Monitor certificate expiration and take automated action to alarm responsible team for taking care of the replacement or remove.",
"Url": "https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_ldap.html"
"Text": "Adopt certificate lifecycle management: inventory LDAPS certificates, alert well before expiry, and automate renewal with staged rollout and overlap. Enforce TLS-only LDAP and disable plaintext fallback. Apply **least privilege** and **separation of duties** to certificate issuance and deployment.",
"Url": "https://hub.prowler.com/check/directoryservice_ldap_certificate_expiration"
}
},
"Categories": [],
"Categories": [
"encryption"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,29 +1,40 @@
{
"Provider": "aws",
"CheckID": "directoryservice_radius_server_security_protocol",
"CheckTitle": "Ensure Radius server in DS is using the recommended security protocol.",
"CheckType": [],
"CheckTitle": "Directory Service directory RADIUS server uses MS-CHAPv2",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"TTPs/Credential Access"
],
"ServiceName": "directoryservice",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:codeartifact:region:account-id:directory/directory-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "Other",
"Description": "Ensure Radius server in DS is using the recommended security protocol.",
"Risk": "As a best practice, you might need to configure the authentication protocol between the Microsoft AD DCs and the RADIUS/MFA server. Supported protocols are PAP, CHAP MS-CHAPv1, and MS-CHAPv2. MS-CHAPv2 is recommended because it provides the strongest security of the three options.",
"RelatedUrl": "https://aws.amazon.com/blogs/security/how-to-enable-multi-factor-authentication-for-amazon-workspaces-and-amazon-quicksight-by-using-microsoft-ad-and-on-premises-credentials/",
"Description": "AWS Directory Service RADIUS configuration uses the **authentication protocol** defined for MFA integration. The finding evaluates whether directories with RADIUS enabled are set to `MS-CHAPv2` instead of weaker options like `PAP`, `CHAP`, or `MS-CHAPv1`.",
"Risk": "Using `PAP`, `CHAP`, or `MS-CHAPv1` weakens RADIUS-based MFA.\n\n`PAP` exposes cleartext credentials, while legacy CHAP variants permit offline cracking and replay, enabling unauthorized access to AD-integrated services and lateral movement, degrading confidentiality and integrity.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.secureauth.com/0903/en/ms-chapv2-and-radius--sp-initiated--for-cisco-and-netscaler-configuration-guide.html",
"https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_mfa.html",
"https://www.freeradius.org/documentation/freeradius-server/4.0~alpha1/raddb/mods-available/mschap.html"
],
"Remediation": {
"Code": {
"CLI": "",
"CLI": "aws ds update-radius --directory-id <example_resource_id> --radius-settings AuthenticationProtocol=MS-CHAPv2",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"Other": "1. In the AWS Console, open Directory Service and select your directory\n2. Open the Networking & security tab (Multi-factor authentication section)\n3. Click Actions > Edit (or Enable)\n4. Set Protocol to MS-CHAPv2\n5. Click Save (or Enable) to apply",
"Terraform": "```hcl\nresource \"aws_directory_service_radius_settings\" \"<example_resource_name>\" {\n directory_id = \"<example_resource_id>\"\n radius_servers = [\"<RADIUS_SERVER_IP>\"]\n shared_secret = \"<SHARED_SECRET>\"\n\n authentication_protocol = \"MS-CHAPv2\" # Critical: sets the RADIUS auth protocol to MS-CHAPv2 to pass the check\n}\n```"
},
"Recommendation": {
"Text": "MS-CHAPv2 provides the strongest security of the options supported, and is therefore recommended.",
"Url": "https://aws.amazon.com/blogs/security/how-to-enable-multi-factor-authentication-for-amazon-workspaces-and-amazon-quicksight-by-using-microsoft-ad-and-on-premises-credentials/"
"Text": "Standardize on `MS-CHAPv2` for RADIUS authentication to MFA providers. Disable `PAP`, `CHAP`, and `MS-CHAPv1` to prevent downgrades. Apply least privilege and defense in depth: use strong shared secrets, restrict network access to RADIUS endpoints, and monitor authentication logs for anomalies.",
"Url": "https://hub.prowler.com/check/directoryservice_radius_server_security_protocol"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,29 +1,40 @@
{
"Provider": "aws",
"CheckID": "directoryservice_supported_mfa_radius_enabled",
"CheckTitle": "Ensure Multi-Factor Authentication (MFA) using Radius Server is enabled in DS.",
"CheckType": [],
"CheckTitle": "AWS Directory Service directory has RADIUS-based MFA enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"TTPs/Initial Access",
"TTPs/Credential Access"
],
"ServiceName": "directoryservice",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:codeartifact:region:account-id:directory/directory-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "Other",
"Description": "Ensure Multi-Factor Authentication (MFA) using Radius Server is enabled in DS.",
"Risk": "Multi-Factor Authentication (MFA) adds an extra layer of authentication assurance beyond traditional username and password.",
"RelatedUrl": "https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_mfa.html",
"Description": "**AWS Directory Service directories** are evaluated for **RADIUS-backed multi-factor authentication**, confirming that MFA is configured and the RADIUS integration is active.",
"Risk": "Without **RADIUS MFA**, directory-based sign-ins to AWS-integrated services rely on a single factor, enabling credential stuffing and phishing to succeed. Compromised passwords can grant unauthorized access, drive data exfiltration, and enable privilege escalation, undermining confidentiality and integrity.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_mfa.html",
"https://support.icompaas.com/support/solutions/articles/62000233537-ensure-multi-factor-authentication-mfa-using-a-radius-server-is-enabled-in-directory-service"
],
"Remediation": {
"Code": {
"CLI": "",
"CLI": "aws ds enable-radius --directory-id <example_resource_id> --radius-settings '{\"RadiusServers\":[\"<RADIUS_IP_OR_DNS>\"],\"SharedSecret\":\"<SHARED_SECRET>\"}'",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"Other": "1. Sign in to the AWS Console and open Directory Service\n2. Select your directory and open it\n3. Go to the Networking & security tab\n4. In Multi-factor authentication, click Actions > Enable\n5. Enter RADIUS server IP(s) and the Shared secret, then click Enable\n6. Wait until the RADIUS status shows Completed",
"Terraform": "```hcl\nresource \"aws_directory_service_radius_settings\" \"<example_resource_name>\" {\n directory_id = \"<example_resource_id>\" # Directory to enable RADIUS MFA on\n radius_servers = [\"<RADIUS_IP_OR_DNS>\"] # Critical: RADIUS server endpoint(s)\n shared_secret = \"<SHARED_SECRET>\" # Critical: Shared secret for RADIUS\n}\n```"
},
"Recommendation": {
"Text": "Enabling MFA provides increased security to a user name and password as it requires the user to possess a solution that displays a time-sensitive authentication code.",
"Url": "https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_mfa.html"
"Text": "Enable and enforce **RADIUS-based MFA** for all Directory Service authentications. Apply **least privilege**, harden and monitor the RADIUS infrastructure, rotate shared secrets, and restrict network access (e.g., `UDP/1812`). Use **defense in depth** with segmentation and session controls to limit lateral movement and reduce blast radius.",
"Url": "https://hub.prowler.com/check/directoryservice_supported_mfa_radius_enabled"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,28 +1,34 @@
{
"Provider": "aws",
"CheckID": "dlm_ebs_snapshot_lifecycle_policy_exists",
"CheckTitle": "Ensure EBS Snapshot lifecycle policies are defined.",
"CheckTitle": "Region with EBS snapshots has at least one EBS snapshot lifecycle policy defined",
"CheckType": [
"Data Protection"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices"
],
"ServiceName": "dlm",
"SubServiceName": "ebs",
"ResourceIdTemplate": "arn:aws:iam::account-id:resource-id",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "Other",
"Description": "Ensure EBS Snapshot lifecycle policies are defined.",
"Risk": "With AWS DLM service, you can manage the lifecycle of your EBS volume snapshots. By automating the EBS volume backup management using lifecycle policies, you can protect your EBS data by enforcing a regular backup schedule, retain backups as required by auditors or internal compliance.",
"RelatedUrl": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html#dlm-elements",
"Description": "**EBS snapshots** are expected to be governed by **Data Lifecycle Manager (DLM) policies** in each Region where snapshots exist.\n\nThe evaluation looks for lifecycle policies that automate snapshot creation, retention, and cleanup for those snapshots.",
"Risk": "Without **automated lifecycle policies**, backups become inconsistent and error-prone, reducing availability and weakening recovery objectives. Missing retention rules cause premature deletion or snapshot sprawl, increasing cost and exposing stale data. Lack of cross-Region/account copies limits resilience to regional outages and malicious deletion.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/DLM/ebs-snapshot-automation.html",
"https://repost.aws/articles/ARmYgZmA8MRQi89pWd9D7eFw/how-to-create-a-automate-backup-aws-data-lifecycle-management-using-snapshots",
"https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html#dlm-elements"
],
"Remediation": {
"Code": {
"CLI": "aws dlm create-lifecycle-policy --region <region> --execution-role-arn <execution-role-arn> --description <description> --state ENABLED --policy-details file://lifecycle-policy-config.json",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/DLM/ebs-snapshot-automation.html",
"Terraform": ""
"CLI": "aws dlm create-lifecycle-policy --region <region> --execution-role-arn <execution-role-arn> --description \"<description>\" --state ENABLED --policy-details '{\"PolicyType\":\"EBS_SNAPSHOT_MANAGEMENT\",\"ResourceTypes\":[\"VOLUME\"],\"TargetTags\":[{\"Key\":\"<tag_key>\",\"Value\":\"<tag_value>\"}],\"Schedules\":[{\"CreateRule\":{\"Interval\":24,\"IntervalUnit\":\"HOURS\"},\"RetainRule\":{\"Count\":1}}]}'",
"NativeIaC": "```yaml\n# CloudFormation: minimal EBS snapshot lifecycle policy\nResources:\n <example_resource_name>:\n Type: AWS::DLM::LifecyclePolicy\n Properties:\n Description: \"<description>\"\n ExecutionRoleArn: \"<example_resource_arn>\"\n State: ENABLED # Critical: enables the policy so it is counted by the check\n PolicyDetails:\n PolicyType: EBS_SNAPSHOT_MANAGEMENT # Critical: creates an EBS snapshot lifecycle policy\n ResourceTypes: [VOLUME]\n TargetTags:\n - Key: \"<tag_key>\" # Critical: selects target volumes by tag\n Value: \"<tag_value>\"\n Schedules:\n - CreateRule:\n Interval: 24\n IntervalUnit: HOURS\n RetainRule:\n Count: 1\n```",
"Other": "1. In the AWS console, switch to the Region that has EBS snapshots\n2. Open EC2 > Lifecycle Manager (DLM) > Create lifecycle policy\n3. Select EBS snapshot policy; Target resource: Volumes\n4. Add Target tags: Key = <tag_key>, Value = <tag_value>\n5. Set Schedule: Create every 24 hours; Retain 1 snapshot\n6. Ensure State is Enabled and click Create policy",
"Terraform": "```hcl\n# Terraform: minimal EBS snapshot lifecycle policy\nresource \"aws_dlm_lifecycle_policy\" \"<example_resource_name>\" {\n description = \"<description>\"\n execution_role_arn = \"<example_resource_arn>\"\n state = \"ENABLED\" # Critical: enables the policy so it is counted by the check\n\n policy_details {\n policy_type = \"EBS_SNAPSHOT_MANAGEMENT\" # Critical: creates an EBS snapshot lifecycle policy\n resource_types = [\"VOLUME\"]\n target_tags = {\n \"<tag_key>\" = \"<tag_value>\" # Critical: selects target volumes by tag\n }\n schedule {\n create_rule {\n interval = 24\n interval_unit = \"HOURS\"\n }\n retain_rule {\n count = 1\n }\n }\n }\n}\n```"
},
"Recommendation": {
"Text": "To use Amazon Data Lifecycle Manager (DLM) service to manage the lifecycle of your EBS volume snapshots, you have to tag your AWS EBS volumes and create data lifecycle policies via Amazon DLM.",
"Url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html#dlm-elements"
"Text": "Implement **DLM lifecycle policies** for all volumes that require backup.\n\n- Schedule creations to meet RPO/RTO\n- Define retention to prevent sprawl and enforce least data exposure\n- Use **least privilege** roles and separation of duties\n- Copy snapshots to another Region/account for **defense in depth**\n- Monitor policy health and coverage with tags",
"Url": "https://hub.prowler.com/check/dlm_ebs_snapshot_lifecycle_policy_exists"
}
},
"Categories": [
@@ -1,31 +1,38 @@
{
"Provider": "aws",
"CheckID": "dms_endpoint_mongodb_authentication_enabled",
"CheckTitle": "Check if DMS endpoints for MongoDB have an authentication mechanism enabled.",
"CheckTitle": "DMS MongoDB endpoint has an authentication mechanism enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices"
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure"
],
"ServiceName": "dms",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:dms:region:account-id:endpoint/endpoint-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsDmsEndpoint",
"Description": "This control checks whether an AWS DMS endpoint for MongoDB is configured with an authentication mechanism. The control fails if an authentication type isn't set for the endpoint.",
"Risk": "Without an authentication mechanism enabled, unauthorized users may gain access to sensitive data during migration, increasing the risk of data breaches and security incidents.",
"RelatedUrl": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html",
"Description": "**AWS DMS MongoDB endpoints** use an authentication mechanism. Configuration expects `AuthType` not `no` (e.g., `password`) with an `authMechanism` such as `scram_sha_1` or `mongodb_cr`.",
"Risk": "Without authentication, unauthenticated connections can access the source, degrading **confidentiality** and **integrity**. Adversaries could read or modify migrated documents, hijack CDC, inject data, or exfiltrate records during replication.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html",
"https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-11"
],
"Remediation": {
"Code": {
"CLI": "aws dms modify-endpoint --endpoint-arn <endpoint-arn> --username <username> --password <password> --authentication-type <authentication-type>",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-11",
"Terraform": ""
"CLI": "aws dms modify-endpoint --endpoint-arn <endpoint-arn> --mongodb-settings '{\"AuthType\":\"password\"}' --username <username> --password <password>",
"NativeIaC": "```yaml\n# CloudFormation: enable authentication on a MongoDB DMS endpoint\nResources:\n <example_resource_name>:\n Type: AWS::DMS::Endpoint\n Properties:\n EndpointIdentifier: <example_resource_name>\n EndpointType: source\n EngineName: mongodb\n MongoDbSettings:\n AuthType: password # CRITICAL: sets authentication mode to 'password' so auth is enabled\n```",
"Other": "1. In the AWS Console, go to Database Migration Service > Endpoints\n2. Select the MongoDB endpoint and click Modify\n3. Under MongoDB settings, set Authentication mode to Password\n4. Enter Username and Password\n5. Click Save changes",
"Terraform": "```hcl\n# Terraform: enable authentication on a MongoDB DMS endpoint\nresource \"aws_dms_endpoint\" \"<example_resource_name>\" {\n endpoint_id = \"<example_resource_name>\"\n endpoint_type = \"source\"\n engine_name = \"mongodb\"\n\n mongodb_settings {\n auth_type = \"password\" # CRITICAL: enables authentication for the MongoDB endpoint\n }\n}\n```"
},
"Recommendation": {
"Text": "Enable an authentication mechanism on DMS endpoints for MongoDB to ensure secure access control during migration.",
"Url": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html"
"Text": "Enforce **strong authentication** on MongoDB endpoints: set `AuthType` to `password` and use `authMechanism` like `scram_sha_1`. Apply **least privilege** database accounts, store secrets in **Secrets Manager**, and pair with **TLS** for defense in depth.",
"Url": "https://hub.prowler.com/check/dms_endpoint_mongodb_authentication_enabled"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,31 +1,38 @@
{
"Provider": "aws",
"CheckID": "dms_endpoint_neptune_iam_authorization_enabled",
"CheckTitle": "Check if DMS endpoints for Neptune databases have IAM authorization enabled.",
"CheckTitle": "DMS endpoint for Neptune has IAM authorization enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices"
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure"
],
"ServiceName": "dms",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:dms:region:account-id:endpoint/endpoint-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsDmsEndpoint",
"Description": "This control checks whether an AWS DMS endpoint for an Amazon Neptune database is configured with IAM authorization. The control fails if the DMS endpoint doesn't have IAM authorization enabled.",
"Risk": "Without IAM authorization, DMS endpoints for Neptune databases may lack granular access control, increasing the risk of unauthorized access to sensitive data.",
"RelatedUrl": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html",
"Description": "**DMS Neptune endpoints** have **IAM authorization** enabled via the endpoint setting `IamAuthEnabled`.",
"Risk": "Without **IAM authorization**, migration components can interact with Neptune using broad trust, enabling unauthorized data loads, reads, or alterations.\n\nThis degrades **confidentiality** and **integrity** and increases the chance of privilege abuse and data exfiltration.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html",
"https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-10"
],
"Remediation": {
"Code": {
"CLI": "aws dms modify-endpoint --endpoint-arn <endpoint-arn> --service-access-role-arn <iam-role-arn>",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-10",
"CLI": "aws dms modify-endpoint --endpoint-arn <endpoint-arn> --neptune-settings '{\"IamAuthEnabled\":true}'",
"NativeIaC": "```yaml\n# CloudFormation: Enable IAM authorization on a DMS Neptune endpoint\nResources:\n <example_resource_name>:\n Type: AWS::DMS::Endpoint\n Properties:\n EndpointType: target\n EngineName: neptune\n NeptuneSettings:\n ServiceAccessRoleArn: <example_resource_arn>\n S3BucketName: <example_resource_name>\n S3BucketFolder: <example_resource_name>\n IamAuthEnabled: true # Critical: enables IAM authorization for the Neptune endpoint\n```",
"Other": "1. In the AWS Console, go to Database Migration Service > Endpoints\n2. Select the Neptune endpoint and click Modify\n3. Expand Endpoint settings (Neptune settings) and set IAM authorization to Enabled\n4. Ensure Service access role ARN is set, then click Save",
"Terraform": ""
},
"Recommendation": {
"Text": "Enable IAM authorization on DMS endpoints for Neptune databases by specifying a service role in the ServiceAccessRoleARN parameter.",
"Url": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html"
"Text": "Enable **IAM authorization** on Neptune endpoints (`IamAuthEnabled=true`) and use a **least privilege** service role limited to minimal Neptune and S3 permissions.\n\nApply **defense in depth**: restrict network paths, separate duties for migration roles, and monitor access with logs and alerts.",
"Url": "https://hub.prowler.com/check/dms_endpoint_neptune_iam_authorization_enabled"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,31 +1,41 @@
{
"Provider": "aws",
"CheckID": "dms_endpoint_redis_in_transit_encryption_enabled",
"CheckTitle": "Check if DMS endpoints for Redis OSS are encrypted in transit.",
"CheckTitle": "DMS endpoint for Redis OSS is encrypted in transit",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices"
"Software and Configuration Checks/AWS Security Best Practices/Encryption in Transit",
"Software and Configuration Checks/Industry and Regulatory Standards/NIST 800-53 Controls (USA)",
"Software and Configuration Checks/Industry and Regulatory Standards/PCI-DSS",
"Software and Configuration Checks/Industry and Regulatory Standards/ISO 27001 Controls"
],
"ServiceName": "dms",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:dms:region:account-id:endpoint/endpoint-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsDmsEndpoint",
"Description": "This control checks whether an AWS DMS endpoint for Redis OSS is configured with a TLS connection. The control fails if the endpoint doesn't have TLS enabled.",
"Risk": "Without TLS, data transmitted between databases may be vulnerable to interception or eavesdropping, increasing the risk of data breaches and other security incidents.",
"RelatedUrl": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Redis.html",
"Description": "**DMS Redis OSS endpoints** are assessed for the presence of **TLS** in their endpoint settings, such as `ssl-encryption`, indicating encrypted connections between the DMS replication instance and Redis.",
"Risk": "Without **TLS**, traffic between DMS and Redis can be intercepted or altered, compromising **confidentiality** and **integrity**.\n\nAttackers can perform **man-in-the-middle** interception, steal auth tokens, and inject or corrupt migrated data.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-12",
"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redis.html#CHAP_Target.Redis.EndpointSettings",
"https://support.icompaas.com/support/solutions/articles/62000233450-ensure-encryption-in-transit-for-dms-endpoints-for-redis-oss"
],
"Remediation": {
"Code": {
"CLI": "aws dms modify-endpoint --endpoint-arn <endpoint-arn> --redis-settings '{'SslSecurityProtocol': 'ssl-encryption'}'",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-12",
"Terraform": ""
"CLI": "",
"NativeIaC": "```yaml\n# CloudFormation: Enable TLS for Redis OSS DMS endpoint\nResources:\n <example_resource_name>:\n Type: AWS::DMS::Endpoint\n Properties:\n EndpointIdentifier: <example_resource_name>\n EndpointType: target\n EngineName: redis\n RedisSettings:\n ServerName: <example_resource_name>\n Port: 6379\n AuthType: none\n SslSecurityProtocol: ssl-encryption # Critical: enables TLS for in-transit encryption\n```",
"Other": "1. In the AWS Console, go to Database Migration Service > Endpoints\n2. Select the Redis OSS endpoint and click Modify\n3. Set SSL security protocol (Encryption in transit) to \"SSL encryption\"\n4. Save changes",
"Terraform": "```hcl\n# Enable TLS for Redis OSS DMS endpoint\nresource \"aws_dms_endpoint\" \"<example_resource_name>\" {\n endpoint_id = \"<example_resource_id>\"\n endpoint_type = \"target\"\n engine_name = \"redis\"\n\n redis_settings {\n server_name = \"<example_resource_name>\"\n port = 6379\n auth_type = \"none\"\n ssl_security_protocol = \"ssl-encryption\" # Critical: enables TLS for in-transit encryption\n }\n}\n```"
},
"Recommendation": {
"Text": "Enable TLS for DMS endpoints for Redis OSS to ensure encrypted communication during data migration.",
"Url": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redis.html#CHAP_Target.Redis.EndpointSettings"
"Text": "Enable **TLS** on Redis OSS endpoints (e.g., `ssl-encryption`) and require server certificate validation. Prohibit plaintext connections, prefer private networking, and enforce **least privilege** for DMS roles to strengthen **defense in depth**.",
"Url": "https://hub.prowler.com/check/dms_endpoint_redis_in_transit_encryption_enabled"
}
},
"Categories": [],
"Categories": [
"encryption"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,32 +1,40 @@
{
"Provider": "aws",
"CheckID": "dms_endpoint_ssl_enabled",
"CheckTitle": "Ensure SSL mode is enabled in DMS endpoint",
"CheckType": ["Effects", "Data Exposure"],
"CheckTitle": "DMS endpoint has SSL enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure"
],
"ServiceName": "dms",
"SubServiceName": "endpoint",
"ResourceIdTemplate": "arn:partition:dms:region:account-id:endpoint:resource-id",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "AwsDmsEndpoint",
"Description": "This check ensures that SSL mode is enabled for all AWS Database Migration Service (DMS) endpoints. Enabling SSL provides encryption in transit for data transferred through these endpoints.",
"Risk": "Without SSL enabled, data transferred through DMS endpoints is not encrypted, potentially exposing sensitive information to unauthorized access or interception during transit.",
"RelatedUrl": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.SSL.html",
"Description": "**AWS DMS endpoints** have their SSL/TLS mode inspected; any value other than `none` denotes encrypted connections between the replication instance and databases.\n\nSupported modes include `require`, `verify-ca`, and `verify-full`.",
"Risk": "Without TLS, data in transit can be read or altered, affecting:\n- **Confidentiality** via packet sniffing and credential leakage\n- **Integrity** through **MITM** tampering of migration streams\n- **Availability** from session hijack or task disruption",
"RelatedUrl": "",
"AdditionalURLs": [
"https://aws.amazon.com/blogs/database/configuring-ssl-encryption-on-oracle-and-postgresql-endpoints-in-aws-dms/",
"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.SSL.html",
"https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-9"
],
"Remediation": {
"Code": {
"CLI": "aws dms modify-endpoint --endpoint-arn <endpoint_arn> --ssl-mode require",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-9",
"Terraform": ""
},
"Recommendation": {
"Text": "Enable SSL mode for all DMS endpoints. Use 'require' as the minimum SSL mode, and consider using 'verify-ca' or 'verify-full' for higher security.",
"Url": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.SSL.html"
}
"Code": {
"CLI": "aws dms modify-endpoint --endpoint-arn <endpoint-arn> --ssl-mode require",
"NativeIaC": "```yaml\n# CloudFormation: Set SSL on a DMS endpoint\nResources:\n <example_resource_name>:\n Type: AWS::DMS::Endpoint\n Properties:\n EndpointIdentifier: <example_resource_name>\n EndpointType: source\n EngineName: sqlserver\n ServerName: <server_name>\n Port: 1433\n Username: <username>\n Password: <password>\n SslMode: require # CRITICAL: enables SSL (not \"none\"), fixing the finding\n```",
"Other": "1. In the AWS DMS console, go to Endpoints\n2. Select the non-compliant endpoint and choose Modify\n3. Set SSL mode to Require (or Verify-ca/Verify-full if required by your engine and certificate is available)\n4. If Verify-ca/Verify-full is selected, choose the appropriate CA certificate\n5. Save changes, then Test connection to confirm",
"Terraform": "```hcl\n# Terraform: Set SSL on a DMS endpoint\nresource \"aws_dms_endpoint\" \"<example_resource_name>\" {\n endpoint_id = \"<example_resource_name>\"\n endpoint_type = \"source\"\n engine_name = \"sqlserver\"\n server_name = \"<server_name>\"\n port = 1433\n username = \"<username>\"\n password = \"<password>\"\n\n ssl_mode = \"require\" # CRITICAL: enables SSL (not \"none\"), fixing the finding\n}\n```"
},
"Recommendation": {
"Text": "Configure endpoints to use SSL/TLS at least `require`; prefer `verify-ca` or `verify-full` where supported. Manage trusted CA material and rotate regularly. Apply **defense in depth** with private connectivity and strict IAM, and enforce this posture via policy-as-code and continuous validation.",
"Url": "https://hub.prowler.com/check/dms_endpoint_ssl_enabled"
}
},
"Categories": [
"encryption"
"encryption"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
}
}
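A minimal boto3 sketch pairing detection with the CLI remediation above (assumed logic; the `modify_endpoint` call is commented out so the sketch is read-only by default):

```python
import boto3

dms = boto3.client("dms")

# Note: DMS raises ResourceNotFoundFault if the account has no endpoints.
for ep in dms.describe_endpoints()["Endpoints"]:
    if ep.get("SslMode", "none") == "none":
        print(f"FAIL: {ep['EndpointIdentifier']} has SslMode=none")
        # boto3 equivalent of the CLI remediation above; uncomment to apply:
        # dms.modify_endpoint(EndpointArn=ep["EndpointArn"], SslMode="require")
```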
@@ -1,29 +1,39 @@
{
"Provider": "aws",
"CheckID": "dms_instance_minor_version_upgrade_enabled",
"CheckTitle": "Ensure DMS instances have auto minor version upgrade enabled.",
"CheckType": [],
"CheckTitle": "DMS replication instance has auto minor version upgrade enabled",
"CheckType": [
"Software and Configuration Checks/Patch Management",
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices"
],
"ServiceName": "dms",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:rdmsds:region:account-id:rep",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsDmsReplicationInstance",
"Description": "Ensure DMS instances have auto minor version upgrade enabled.",
"Risk": "Ensure that your Amazon Database Migration Service (DMS) replication instances have the Auto Minor Version Upgrade feature enabled in order to receive automatically minor engine upgrades.",
"RelatedUrl": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-6",
"Description": "**AWS DMS replication instances** are evaluated for the `auto_minor_version_upgrade` setting to confirm **automatic minor engine updates** are enabled during the maintenance window.",
"Risk": "Without **automatic minor upgrades**, DMS engines can miss security patches and fixes, enabling exploitation of known flaws and instability.\n- Confidentiality: exposure via unpatched components\n- Integrity: replication errors or data drift\n- Availability: outages during migration or CDC",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-6",
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/DMS/auto-minor-version-upgrade.html"
],
"Remediation": {
"Code": {
"CLI": "aws dms modify-replication-instance --region <REGION> --replication-instance-arn arn:aws:dms:<REGION>:<ACCOUNT_ID>:rep:<REPLICATION_ID> --auto-minor-version-upgrade --apply-immediately",
"NativeIaC": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/auto-minor-version-upgrade.html#",
"Other": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/auto-minor-version-upgrade.html#",
"Terraform": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/auto-minor-version-upgrade.html#"
"NativeIaC": "```yaml\n# CloudFormation: Enable auto minor version upgrade on a DMS replication instance\nResources:\n <example_resource_name>:\n Type: AWS::DMS::ReplicationInstance\n Properties:\n ReplicationInstanceIdentifier: <example_resource_id>\n ReplicationInstanceClass: dms.t3.micro\n AutoMinorVersionUpgrade: true # CRITICAL: turns on automatic minor version upgrades\n```",
"Other": "1. Open the AWS Console and go to Database Migration Service (DMS)\n2. Click Replication instances and select your instance\n3. Choose Actions > Modify\n4. Check Auto minor version upgrade\n5. Select Apply immediately\n6. Click Modify to save",
"Terraform": "```hcl\n# Terraform: Enable auto minor version upgrade on a DMS replication instance\nresource \"aws_dms_replication_instance\" \"<example_resource_name>\" {\n replication_instance_id = \"<example_resource_id>\"\n replication_instance_class = \"dms.t3.micro\"\n auto_minor_version_upgrade = true # CRITICAL: turns on automatic minor version upgrades\n}\n```"
},
"Recommendation": {
"Text": "Enable auto minor version upgrade for all DMS replication instances.",
"Url": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-6"
"Text": "Enable `auto_minor_version_upgrade` on all replication instances to maintain **continuous patching**.\n- Set a maintenance window and validate in non-prod\n- Monitor release notes and health metrics\n- Enforce **least privilege** for change control\n- Keep **backups** for rollback",
"Url": "https://hub.prowler.com/check/dms_instance_minor_version_upgrade_enabled"
}
},
"Categories": [],
"Categories": [
"vulnerabilities"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,30 +1,37 @@
{
"Provider": "aws",
"CheckID": "dms_instance_multi_az_enabled",
"CheckTitle": "Ensure DMS instances have multi az enabled.",
"CheckType": [],
"CheckTitle": "DMS replication instance has Multi-AZ enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Denial of Service"
],
"ServiceName": "dms",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:rdmsds:region:account-id:rep",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsDmsReplicationInstance",
"Description": "Ensure DMS instances have multi az enabled.",
"Risk": "Ensure that your Amazon Database Migration Service (DMS) replication instances are using Multi-AZ deployment configurations to provide High Availability (HA) through automatic failover to standby replicas in the event of a failure such as an Availability Zone (AZ) outage, an internal hardware or network outage, a software failure or in case of a planned maintenance session.",
"RelatedUrl": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/multi-az.html#",
"Description": "**AWS DMS replication instances** are evaluated for **Multi-AZ** configuration. Instances with `multi_az` enabled are treated as having a cross-AZ standby; those without it are identified as single-AZ.",
"Risk": "Without **Multi-AZ**, a single-AZ failure or maintenance event can halt migrations, causing extended downtime (**availability**) and replication gaps or rollbacks (**integrity**). Tasks may stall, increase cutover risk, and require manual recovery when the replication instance is unavailable.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.html",
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/DMS/multi-az.html"
],
"Remediation": {
"Code": {
"CLI": "aws dms modify-replication-instance --region <REGION> --replication-instance-arn arn:aws:dms:<REGION>:<ACCOUNT_ID>:rep:<REPLICATION_ID> --multi-az --apply-immediately",
"NativeIaC": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/multi-az.html#",
"Other": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/multi-az.html#",
"Terraform": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/multi-az.html#"
"CLI": "aws dms modify-replication-instance --replication-instance-arn arn:aws:dms:<REGION>:<ACCOUNT_ID>:rep:<REPLICATION_ID> --multi-az --apply-immediately",
"NativeIaC": "```yaml\n# CloudFormation: enable Multi-AZ on a DMS replication instance\nResources:\n <example_resource_name>:\n Type: AWS::DMS::ReplicationInstance\n Properties:\n ReplicationInstanceClass: dms.t3.micro\n MultiAZ: true # Critical: enables Multi-AZ to pass the check\n```",
"Other": "1. Open the AWS DMS console\n2. Go to Replication instances and select your instance\n3. Click Modify\n4. Check Multi-AZ\n5. Check Apply immediately\n6. Click Modify to save",
"Terraform": "```hcl\n# Enable Multi-AZ on a DMS replication instance\nresource \"aws_dms_replication_instance\" \"<example_resource_name>\" {\n replication_instance_id = \"<example_resource_name>\"\n replication_instance_class = \"dms.t3.micro\"\n multi_az = true # Critical: enables Multi-AZ to pass the check\n}\n```"
},
"Recommendation": {
"Text": "Enable multi az for all DMS replication instances.",
"Url": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/multi-az.html#"
"Text": "Enable **Multi-AZ** (set `multi_az` to `true`) on DMS replication instances that handle production or time-sensitive migrations to ensure redundancy and automatic failover.\n\nApply HA principles: distribute across AZs, test failover, monitor health, and plan maintenance to minimize impact.",
"Url": "https://hub.prowler.com/check/dms_instance_multi_az_enabled"
}
},
"Categories": [
"redundancy"
"resilience"
],
"DependsOn": [],
"RelatedTo": [],

Some files were not shown because too many files have changed in this diff