Compare commits


29 Commits

Author SHA1 Message Date
Andoni A. 0cb4784187 docs: add AWS Orgs tip in bulk provisioning tutorial 2025-10-22 12:46:55 +02:00
Andoni A. 124676e893 docs: remove how to configure aws creds -- too much detailed 2025-10-22 12:40:27 +02:00
Andoni A. e08c2f2605 docs: review with docs styleguide 2025-10-22 12:28:13 +02:00
Andoni A. 9758fc36df chore: use Prowler API key instead of temporal token 2025-10-22 12:22:28 +02:00
Andoni A. 56bb5e92cc docs: include AWS Orgs bulk importer tutorial 2025-10-22 12:12:36 +02:00
Andoni A. 2ef750d133 Merge branch 'master' into DEVREL-99-provision-all-aws-accounts-in-an-organization 2025-10-22 11:49:05 +02:00
César Arroba 18f3bc098c chore(github): trigger only if repository is prowler (#8974) 2025-10-22 09:27:33 +02:00
César Arroba 67b1983d85 chore(github): fix action (#8973) 2025-10-22 09:10:47 +02:00
César Arroba a3db23af7d chore(github): improve conventional commits action (#8969) 2025-10-21 17:57:29 +02:00
César Arroba 3eaa21f06f chore(github): improve backport label action (#8970) 2025-10-21 17:57:04 +02:00
Rubén De la Torre Vico 5d5c109067 chore(aws): enhance metadata for dlm service (#8860)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-21 17:40:19 +02:00
César Arroba c6cb4e4814 chore(github): improve backport action (#8968) 2025-10-21 17:14:40 +02:00
César Arroba ab06a09173 chore(api): improve pull request action (#8963) 2025-10-21 17:10:48 +02:00
Rubén De la Torre Vico 9c6c007f73 fix(mcp): add missing argument to health check (#8967) 2025-10-21 16:45:05 +02:00
Rubén De la Torre Vico 206f23b5a5 chore(aws): enhance metadata for dms service (#8861)
Co-authored-by: Daniel Barranquero <danielbo2001@gmail.com>
2025-10-21 16:31:18 +02:00
Andoni Alonso 5c9e9bc86a docs: fix security heading (#8965) 2025-10-21 16:13:55 +02:00
Rubén De la Torre Vico 34554d6123 feat(mcp): add support for production deployment with uvicorn (#8958) 2025-10-21 16:03:24 +02:00
Pepe Fagoaga 000cb93157 chore: remove security template as it's already there (#8964) 2025-10-21 19:34:42 +05:45
Adrián Jesús Peña Rodríguez 524209bdf2 feat(api): add provider_id__in filter for ScanSummary queries (#8951) 2025-10-21 15:24:09 +02:00
César Arroba c4a0da8204 chore(github): review and update issue templates (#8961) 2025-10-21 13:40:25 +02:00
César Arroba f0cba0321c chore(codeql): improve API CodeQL action and settings (#8962) 2025-10-21 13:40:07 +02:00
dependabot[bot] 79888c9312 chore(deps): bump playwright and @playwright/test in /ui (#8956)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-21 13:22:21 +02:00
Rubén De la Torre Vico a79910a694 chore(aws): enhance metadata for cloudtrail service (#8831)
Co-authored-by: HugoPBrito <hugopbrit@gmail.com>
2025-10-21 12:45:31 +02:00
César Arroba 4cadee7bb1 chore(github): update codeowners file (#8960) 2025-10-21 11:48:21 +02:00
Pedro Martín 756d436a2f feat(compliance): improve CCC catalogs (#8944) 2025-10-21 03:16:05 +02:00
Alejandro Bailo 5e85ef5835 feat(ui): new card components and derivates for overview (#8921)
Co-authored-by: Alan Buscaglia <gentlemanprogramming@gmail.com>
2025-10-20 16:49:09 +02:00
Prowler Bot 0fa9e2da6c chore(regions_update): Changes in regions for AWS services (#8946)
Co-authored-by: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
2025-10-20 09:20:29 -04:00
Andoni Alonso ce7510db28 docs: remove anchors from redirects (#8953) 2025-10-20 14:58:53 +02:00
Andoni A. f734b249a4 chore: create script to generate AWS accounts list from AWS Org for bulk provisioning 2025-10-13 16:07:33 +02:00
99 changed files with 5096 additions and 11148 deletions
+27 -5
View File
@@ -1,6 +1,28 @@
# SDK
/* @prowler-cloud/sdk
/.github/ @prowler-cloud/sdk
prowler @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
tests @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
api @prowler-cloud/api
ui @prowler-cloud/ui
/prowler/ @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
/tests/ @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
/dashboard/ @prowler-cloud/sdk
/docs/ @prowler-cloud/sdk
/examples/ @prowler-cloud/sdk
/util/ @prowler-cloud/sdk
/contrib/ @prowler-cloud/sdk
/permissions/ @prowler-cloud/sdk
/codecov.yml @prowler-cloud/sdk @prowler-cloud/api
# API
/api/ @prowler-cloud/api
# UI
/ui/ @prowler-cloud/ui
# AI
/mcp_server/ @prowler-cloud/ai
# Platform
/.github/ @prowler-cloud/platform
/Makefile @prowler-cloud/platform
/kubernetes/ @prowler-cloud/platform
**/Dockerfile* @prowler-cloud/platform
**/docker-compose*.yml @prowler-cloud/platform
**/docker-compose*.yaml @prowler-cloud/platform
+44
View File
@@ -3,6 +3,41 @@ description: Create a report to help us improve
labels: ["bug", "status/needs-triage"]
body:
- type: checkboxes
id: search
attributes:
label: Issue search
options:
- label: I have searched the existing issues and this bug has not been reported yet
required: true
- type: dropdown
id: component
attributes:
label: Which component is affected?
multiple: true
options:
- Prowler CLI/SDK
- Prowler API
- Prowler UI
- Prowler Dashboard
- Prowler MCP Server
- Documentation
- Other
validations:
required: true
- type: dropdown
id: provider
attributes:
label: Cloud Provider (if applicable)
multiple: true
options:
- AWS
- Azure
- GCP
- Kubernetes
- GitHub
- Microsoft 365
- Not applicable
- type: textarea
id: reproduce
attributes:
@@ -78,6 +113,15 @@ body:
prowler --version
validations:
required: true
- type: input
id: python-version
attributes:
label: Python version
description: Which Python version are you using?
placeholder: |-
python --version
validations:
required: true
- type: input
id: pip-version
attributes:
+10
View File
@@ -1 +1,11 @@
blank_issues_enabled: false
contact_links:
- name: 📖 Documentation
url: https://docs.prowler.com
about: Check our comprehensive documentation for guides and tutorials
- name: 💬 GitHub Discussions
url: https://github.com/prowler-cloud/prowler/discussions
about: Ask questions and discuss with the community
- name: 🌟 Prowler Community
url: https://goto.prowler.com/slack
about: Join our community for support and updates
@@ -3,6 +3,42 @@ description: Suggest an idea for this project
labels: ["feature-request", "status/needs-triage"]
body:
- type: checkboxes
id: search
attributes:
label: Feature search
options:
- label: I have searched the existing issues and this feature has not been requested yet
required: true
- type: dropdown
id: component
attributes:
label: Which component would this feature affect?
multiple: true
options:
- Prowler CLI/SDK
- Prowler API
- Prowler UI
- Prowler Dashboard
- Prowler MCP Server
- Documentation
- New component/Integration
validations:
required: true
- type: dropdown
id: provider
attributes:
label: Related to specific cloud provider?
multiple: true
options:
- AWS
- Azure
- GCP
- Kubernetes
- GitHub
- Microsoft 365
- All providers
- Not provider-specific
- type: textarea
id: Problem
attributes:
@@ -19,6 +55,14 @@ body:
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: use-case
attributes:
label: Use case and benefits
description: Who would benefit from this feature and how?
placeholder: This would help security teams by...
validations:
required: true
- type: textarea
id: Alternatives
attributes:
@@ -0,0 +1,71 @@
name: 'Setup Python with Poetry'
description: 'Setup Python environment with Poetry and install dependencies'
author: 'Prowler'
inputs:
python-version:
description: 'Python version to use'
required: true
working-directory:
description: 'Working directory for Poetry'
required: false
default: '.'
poetry-version:
description: 'Poetry version to install'
required: false
default: '2.1.1'
install-dependencies:
description: 'Install Python dependencies with Poetry'
required: false
default: 'true'
runs:
using: 'composite'
steps:
- name: Replace @master with current branch in pyproject.toml
if: github.event_name == 'pull_request' && github.base_ref == 'master'
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
BRANCH_NAME="${GITHUB_HEAD_REF:-${GITHUB_REF_NAME}}"
echo "Using branch: $BRANCH_NAME"
sed -i "s|@master|@$BRANCH_NAME|g" pyproject.toml
- name: Install poetry
shell: bash
run: |
python -m pip install --upgrade pip
pipx install poetry==${{ inputs.poetry-version }}
- name: Update SDK resolved_reference to latest commit
if: github.event_name == 'push' && github.ref == 'refs/heads/master'
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
LATEST_COMMIT=$(curl -s "https://api.github.com/repos/prowler-cloud/prowler/commits/master" | jq -r '.sha')
echo "Latest commit hash: $LATEST_COMMIT"
sed -i '/url = "https:\/\/github\.com\/prowler-cloud\/prowler\.git"/,/resolved_reference = / {
s/resolved_reference = "[a-f0-9]\{40\}"/resolved_reference = "'"$LATEST_COMMIT"'"/
}' poetry.lock
echo "Updated resolved_reference:"
grep -A2 -B2 "resolved_reference" poetry.lock
- name: Update poetry.lock
shell: bash
working-directory: ${{ inputs.working-directory }}
run: poetry lock
- name: Set up Python ${{ inputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ inputs.python-version }}
cache: 'poetry'
cache-dependency-path: ${{ inputs.working-directory }}/poetry.lock
- name: Install Python dependencies
if: inputs.install-dependencies == 'true'
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --no-root
poetry run pip list
+152
View File
@@ -0,0 +1,152 @@
name: 'Container Security Scan with Trivy'
description: 'Scans container images for vulnerabilities using Trivy and reports results'
author: 'Prowler'
inputs:
image-name:
description: 'Container image name to scan'
required: true
image-tag:
description: 'Container image tag to scan'
required: true
default: ${{ github.sha }}
severity:
description: 'Severities to scan for (comma-separated)'
required: false
default: 'CRITICAL,HIGH,MEDIUM,LOW'
fail-on-critical:
description: 'Fail the build if critical vulnerabilities are found'
required: false
default: 'false'
upload-sarif:
description: 'Upload results to GitHub Security tab'
required: false
default: 'true'
create-pr-comment:
description: 'Create a comment on the PR with scan results'
required: false
default: 'true'
artifact-retention-days:
description: 'Days to retain the Trivy report artifact'
required: false
default: '2'
outputs:
critical-count:
description: 'Number of critical vulnerabilities found'
value: ${{ steps.security-check.outputs.critical }}
high-count:
description: 'Number of high vulnerabilities found'
value: ${{ steps.security-check.outputs.high }}
total-count:
description: 'Total number of vulnerabilities found'
value: ${{ steps.security-check.outputs.total }}
runs:
using: 'composite'
steps:
- name: Run Trivy vulnerability scan (SARIF)
if: inputs.upload-sarif == 'true'
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # v0.33.1
with:
image-ref: ${{ inputs.image-name }}:${{ inputs.image-tag }}
format: 'sarif'
output: 'trivy-results.sarif'
severity: 'CRITICAL,HIGH'
exit-code: '0'
- name: Upload Trivy results to GitHub Security tab
if: inputs.upload-sarif == 'true'
uses: github/codeql-action/upload-sarif@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
sarif_file: 'trivy-results.sarif'
category: 'trivy-container'
- name: Run Trivy vulnerability scan (JSON)
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # v0.33.1
with:
image-ref: ${{ inputs.image-name }}:${{ inputs.image-tag }}
format: 'json'
output: 'trivy-report.json'
severity: ${{ inputs.severity }}
exit-code: '0'
- name: Upload Trivy report artifact
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: always()
with:
name: trivy-scan-report-${{ inputs.image-name }}
path: trivy-report.json
retention-days: ${{ inputs.artifact-retention-days }}
- name: Generate security summary
id: security-check
shell: bash
run: |
CRITICAL=$(jq '[.Results[]?.Vulnerabilities[]? | select(.Severity=="CRITICAL")] | length' trivy-report.json)
HIGH=$(jq '[.Results[]?.Vulnerabilities[]? | select(.Severity=="HIGH")] | length' trivy-report.json)
TOTAL=$(jq '[.Results[]?.Vulnerabilities[]?] | length' trivy-report.json)
echo "critical=$CRITICAL" >> $GITHUB_OUTPUT
echo "high=$HIGH" >> $GITHUB_OUTPUT
echo "total=$TOTAL" >> $GITHUB_OUTPUT
echo "### 🔒 Container Security Scan" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Image:** \`${{ inputs.image-name }}:${{ inputs.image-tag }}\`" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "- 🔴 Critical: $CRITICAL" >> $GITHUB_STEP_SUMMARY
echo "- 🟠 High: $HIGH" >> $GITHUB_STEP_SUMMARY
echo "- **Total**: $TOTAL" >> $GITHUB_STEP_SUMMARY
- name: Comment scan results on PR
if: inputs.create-pr-comment == 'true' && github.event_name == 'pull_request'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
env:
IMAGE_NAME: ${{ inputs.image-name }}
GITHUB_SHA: ${{ inputs.image-tag }}
SEVERITY: ${{ inputs.severity }}
with:
script: |
const comment = require('./.github/scripts/trivy-pr-comment.js');
// Unique identifier to find our comment
const marker = '<!-- trivy-scan-comment:${{ inputs.image-name }} -->';
const body = marker + '\n' + comment;
// Find existing comment
const { data: comments } = await github.rest.issues.listComments({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
});
const existingComment = comments.find(c => c.body?.includes(marker));
if (existingComment) {
// Update existing comment
await github.rest.issues.updateComment({
owner: context.repo.owner,
repo: context.repo.repo,
comment_id: existingComment.id,
body: body
});
console.log('✅ Updated existing Trivy scan comment');
} else {
// Create new comment
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: body
});
console.log('✅ Created new Trivy scan comment');
}
- name: Check for critical vulnerabilities
if: inputs.fail-on-critical == 'true' && steps.security-check.outputs.critical != '0'
shell: bash
run: |
echo "::error::Found ${{ steps.security-check.outputs.critical }} critical vulnerabilities"
echo "::warning::Please update packages or use a different base image"
exit 1
+11 -2
View File
@@ -1,3 +1,12 @@
name: "API - CodeQL Config"
name: 'API: CodeQL Config'
paths:
- "api/"
- 'api/'
paths-ignore:
- 'api/tests/**'
- 'api/**/__pycache__/**'
- 'api/**/migrations/**'
- 'api/**/*.md'
queries:
- uses: security-and-quality
+102
View File
@@ -0,0 +1,102 @@
const fs = require('fs');
// Configuration from environment variables
const REPORT_FILE = process.env.TRIVY_REPORT_FILE || 'trivy-report.json';
const IMAGE_NAME = process.env.IMAGE_NAME || 'container-image';
const GITHUB_SHA = process.env.GITHUB_SHA || 'unknown';
const GITHUB_REPOSITORY = process.env.GITHUB_REPOSITORY || '';
const GITHUB_RUN_ID = process.env.GITHUB_RUN_ID || '';
const SEVERITY = process.env.SEVERITY || 'CRITICAL,HIGH,MEDIUM,LOW';
// Parse severities to scan
const scannedSeverities = SEVERITY.split(',').map(s => s.trim());
// Read and parse the Trivy report
const report = JSON.parse(fs.readFileSync(REPORT_FILE, 'utf-8'));
let vulnCount = 0;
let vulnsByType = { CRITICAL: 0, HIGH: 0, MEDIUM: 0, LOW: 0 };
let affectedPackages = new Set();
if (report.Results && Array.isArray(report.Results)) {
for (const result of report.Results) {
if (result.Vulnerabilities && Array.isArray(result.Vulnerabilities)) {
for (const vuln of result.Vulnerabilities) {
vulnCount++;
if (vulnsByType[vuln.Severity] !== undefined) {
vulnsByType[vuln.Severity]++;
}
if (vuln.PkgName) {
affectedPackages.add(vuln.PkgName);
}
}
}
}
}
const shortSha = GITHUB_SHA.substring(0, 7);
const timestamp = new Date().toISOString().replace('T', ' ').substring(0, 19) + ' UTC';
// Severity icons and labels
const severityConfig = {
CRITICAL: { icon: '🔴', label: 'Critical' },
HIGH: { icon: '🟠', label: 'High' },
MEDIUM: { icon: '🟡', label: 'Medium' },
LOW: { icon: '🔵', label: 'Low' }
};
let comment = '## 🔒 Container Security Scan\n\n';
comment += `**Image:** \`${IMAGE_NAME}:${shortSha}\`\n`;
comment += `**Last scan:** ${timestamp}\n\n`;
if (vulnCount === 0) {
comment += '### ✅ No Vulnerabilities Detected\n\n';
comment += 'The container image passed all security checks. No known CVEs were found.\n';
} else {
comment += '### 📊 Vulnerability Summary\n\n';
comment += '| Severity | Count |\n';
comment += '|----------|-------|\n';
// Only show severities that were scanned
for (const severity of scannedSeverities) {
const config = severityConfig[severity];
const count = vulnsByType[severity] || 0;
const isBold = (severity === 'CRITICAL' || severity === 'HIGH') && count > 0;
const countDisplay = isBold ? `**${count}**` : count;
comment += `| ${config.icon} ${config.label} | ${countDisplay} |\n`;
}
comment += `| **Total** | **${vulnCount}** |\n\n`;
if (affectedPackages.size > 0) {
comment += `**${affectedPackages.size}** package(s) affected\n\n`;
}
if (vulnsByType.CRITICAL > 0) {
comment += '### ⚠️ Action Required\n\n';
comment += '**Critical severity vulnerabilities detected.** These should be addressed before merging:\n';
comment += '- Review the detailed scan results\n';
comment += '- Update affected packages to patched versions\n';
comment += '- Consider using a different base image if updates are unavailable\n\n';
} else if (vulnsByType.HIGH > 0) {
comment += '### ⚠️ Attention Needed\n\n';
comment += '**High severity vulnerabilities found.** Please review and plan remediation:\n';
comment += '- Assess the risk and exploitability\n';
comment += '- Prioritize updates in the next maintenance cycle\n\n';
} else {
comment += '### ℹ️ Review Recommended\n\n';
comment += 'Medium/Low severity vulnerabilities found. Consider addressing during regular maintenance.\n\n';
}
}
comment += '---\n';
comment += '📋 **Resources:**\n';
if (GITHUB_REPOSITORY && GITHUB_RUN_ID) {
comment += `- [Download full report](https://github.com/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}) (see artifacts)\n`;
}
comment += '- [View in Security tab](https://github.com/' + (GITHUB_REPOSITORY || 'repository') + '/security/code-scanning)\n';
comment += '- Scanned with [Trivy](https://github.com/aquasecurity/trivy)\n';
module.exports = comment;
+30 -33
View File
@@ -1,36 +1,34 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: API - CodeQL
name: 'API: CodeQL'
on:
push:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
paths:
- "api/**"
- 'api/**'
- '.github/workflows/api-codeql.yml'
- '.github/codeql/api-codeql-config.yml'
pull_request:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
paths:
- "api/**"
- 'api/**'
- '.github/workflows/api-codeql.yml'
- '.github/codeql/api-codeql-config.yml'
schedule:
- cron: '00 12 * * *'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
analyze:
name: Analyze
name: CodeQL Security Analysis
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
actions: read
contents: read
@@ -39,21 +37,20 @@ jobs:
strategy:
fail-fast: false
matrix:
language: [ 'python' ]
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
language:
- 'python'
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/api-codeql-config.yml
- name: Initialize CodeQL
uses: github/codeql-action/init@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/api-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
category: "/language:${{matrix.language}}"
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.30.5
with:
category: '/language:${{ matrix.language }}'
+148 -153
View File
@@ -1,20 +1,30 @@
name: API - Pull Request
name: 'API: Pull Request'
on:
push:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
paths:
- ".github/workflows/api-pull-request.yml"
- "api/**"
- '.github/workflows/api-pull-request.yml'
- 'api/**'
- '!api/docs/**'
- '!api/README.md'
- '!api/CHANGELOG.md'
pull_request:
branches:
- "master"
- "v5.*"
- 'master'
- 'v5.*'
paths:
- ".github/workflows/api-pull-request.yml"
- "api/**"
- '.github/workflows/api-pull-request.yml'
- 'api/**'
- '!api/docs/**'
- '!api/README.md'
- '!api/CHANGELOG.md'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
POSTGRES_HOST: localhost
@@ -29,21 +39,94 @@ env:
VALKEY_DB: 0
API_WORKING_DIR: ./api
IMAGE_NAME: prowler-api
IGNORE_FILES: |
api/docs/**
api/README.md
api/CHANGELOG.md
jobs:
test:
code-quality:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
strategy:
matrix:
python-version: ["3.12"]
python-version:
- '3.12'
defaults:
run:
working-directory: ./api
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup Python with Poetry
uses: ./.github/actions/setup-python-poetry
with:
python-version: ${{ matrix.python-version }}
working-directory: ./api
- name: Poetry check
run: poetry check --lock
- name: Ruff lint
run: poetry run ruff check . --exclude contrib
- name: Ruff format
run: poetry run ruff format --check . --exclude contrib
- name: Pylint
run: poetry run pylint --disable=W,C,R,E -j 0 -rn -sn src/
security-scans:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
strategy:
matrix:
python-version:
- '3.12'
defaults:
run:
working-directory: ./api
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup Python with Poetry
uses: ./.github/actions/setup-python-poetry
with:
python-version: ${{ matrix.python-version }}
working-directory: ./api
- name: Bandit
run: poetry run bandit -q -lll -x '*_test.py,./contrib/' -r .
- name: Safety
# 76352, 76353, 77323 come from SDK, but they cannot upgrade it yet. It does not affect API
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
run: poetry run safety check --ignore 70612,66963,74429,76352,76353,77323,77744,77745
- name: Vulture
run: poetry run vulture --exclude "contrib,tests,conftest.py" --min-confidence 100 .
tests:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
strategy:
matrix:
python-version:
- '3.12'
defaults:
run:
working-directory: ./api
# Service containers to run with `test`
services:
# Label used to access the service container
postgres:
image: postgres
env:
@@ -52,7 +135,6 @@ jobs:
POSTGRES_USER: ${{ env.POSTGRES_USER }}
POSTGRES_PASSWORD: ${{ env.POSTGRES_PASSWORD }}
POSTGRES_DB: ${{ env.POSTGRES_DB }}
# Set health checks to wait until postgres has started
ports:
- 5432:5432
options: >-
@@ -66,7 +148,6 @@ jobs:
VALKEY_HOST: ${{ env.VALKEY_HOST }}
VALKEY_PORT: ${{ env.VALKEY_PORT }}
VALKEY_DB: ${{ env.VALKEY_DB }}
# Set health checks to wait until postgres has started
ports:
- 6379:6379
options: >-
@@ -76,158 +157,72 @@ jobs:
--health-retries 5
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files: |
api/**
.github/workflows/api-pull-request.yml
files_ignore: ${{ env.IGNORE_FILES }}
- name: Replace @master with current branch in pyproject.toml - Only for pull requests to `master`
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true' && github.event_name == 'pull_request' && github.base_ref == 'master'
run: |
BRANCH_NAME="${GITHUB_HEAD_REF:-${GITHUB_REF_NAME}}"
echo "Using branch: $BRANCH_NAME"
sed -i "s|@master|@$BRANCH_NAME|g" pyproject.toml
- name: Install poetry
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
python -m pip install --upgrade pip
pipx install poetry==2.1.1
- name: Update SDK's poetry.lock resolved_reference to latest commit - Only for push events to `master`
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true' && github.event_name == 'push' && github.ref == 'refs/heads/master'
run: |
# Get the latest commit hash from the prowler-cloud/prowler repository
LATEST_COMMIT=$(curl -s "https://api.github.com/repos/prowler-cloud/prowler/commits/master" | jq -r '.sha')
echo "Latest commit hash: $LATEST_COMMIT"
# Update the resolved_reference specifically for prowler-cloud/prowler repository
sed -i '/url = "https:\/\/github\.com\/prowler-cloud\/prowler\.git"/,/resolved_reference = / {
s/resolved_reference = "[a-f0-9]\{40\}"/resolved_reference = "'"$LATEST_COMMIT"'"/
}' poetry.lock
# Verify the change was made
echo "Updated resolved_reference:"
grep -A2 -B2 "resolved_reference" poetry.lock
- name: Update poetry.lock
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry lock
- name: Set up Python ${{ matrix.python-version }}
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
- name: Setup Python with Poetry
uses: ./.github/actions/setup-python-poetry
with:
python-version: ${{ matrix.python-version }}
cache: "poetry"
working-directory: ./api
- name: Install dependencies
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry install --no-root
poetry run pip list
VERSION=$(curl --silent "https://api.github.com/repos/hadolint/hadolint/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"v([^"]+)".*/\1/' \
) && curl -L -o /tmp/hadolint "https://github.com/hadolint/hadolint/releases/download/v${VERSION}/hadolint-Linux-x86_64" \
&& chmod +x /tmp/hadolint
- name: Poetry check
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry check --lock
- name: Lint with ruff
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run ruff check . --exclude contrib
- name: Check Format with ruff
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run ruff format --check . --exclude contrib
- name: Lint with pylint
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run pylint --disable=W,C,R,E -j 0 -rn -sn src/
- name: Bandit
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run bandit -q -lll -x '*_test.py,./contrib/' -r .
- name: Safety
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
# 76352, 76353, 77323 come from SDK, but they cannot upgrade it yet. It does not affect API
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
run: |
poetry run safety check --ignore 70612,66963,74429,76352,76353,77323,77744,77745
- name: Vulture
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run vulture --exclude "contrib,tests,conftest.py" --min-confidence 100 .
- name: Hadolint
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
/tmp/hadolint Dockerfile --ignore=DL3013
- name: Test with pytest
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run pytest --cov=./src/backend --cov-report=xml src/backend
- name: Run tests with pytest
run: poetry run pytest --cov=./src/backend --cov-report=xml src/backend
- name: Upload coverage reports to Codecov
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
with:
flags: api
test-container-build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
dockerfile-lint:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Lint Dockerfile with Hadolint
uses: hadolint/hadolint-action@2332a7b74a6de0dda2e2221d575162eba76ba5e5 # v3.3.0
with:
files: api/**
files_ignore: ${{ env.IGNORE_FILES }}
dockerfile: api/Dockerfile
ignore: DL3013
container-build-and-scan:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
security-events: write
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Docker Buildx
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build Container
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
- name: Build container
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.API_WORKING_DIR }}
push: false
tags: ${{ env.IMAGE_NAME }}:latest
outputs: type=docker
load: true
tags: ${{ env.IMAGE_NAME }}:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Scan container with Trivy
uses: ./.github/actions/trivy-scan
with:
image-name: ${{ env.IMAGE_NAME }}
image-tag: ${{ github.sha }}
fail-on-critical: 'false'
severity: 'CRITICAL'
+22 -15
View File
@@ -1,28 +1,35 @@
name: Prowler - Automatic Backport
name: 'Tools: Backport'
on:
pull_request_target:
branches: ['master']
types: ['labeled', 'closed']
branches:
- 'master'
types:
- 'labeled'
- 'closed'
paths:
- '.github/workflows/backport.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: false
env:
# The prefix of the label that triggers the backport must not contain the branch name
# so, for example, if the branch is 'master', the label should be 'backport-to-<branch>'
BACKPORT_LABEL_PREFIX: backport-to-
BACKPORT_LABEL_IGNORE: was-backported
jobs:
backport:
name: Backport PR
if: github.event.pull_request.merged == true && !(contains(github.event.pull_request.labels.*.name, 'backport')) && !(contains(github.event.pull_request.labels.*.name, 'was-backported'))
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
id-token: write
pull-requests: write
contents: write
pull-requests: write
steps:
- name: Check labels
id: preview_label_check
id: label_check
uses: agilepathway/label-checker@c3d16ad512e7cea5961df85ff2486bb774caf3c5 # v1.6.65
with:
allow_failure: true
@@ -31,17 +38,17 @@ jobs:
none_of: ${{ env.BACKPORT_LABEL_IGNORE }}
repo_token: ${{ secrets.GITHUB_TOKEN }}
- name: Backport Action
if: steps.preview_label_check.outputs.label_check == 'success'
- name: Backport PR
if: steps.label_check.outputs.label_check == 'success'
uses: sorenlouv/backport-github-action@ad888e978060bc1b2798690dd9d03c4036560947 # v9.5.1
with:
github_token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
auto_backport_label_prefix: ${{ env.BACKPORT_LABEL_PREFIX }}
- name: Info log
if: ${{ success() && steps.preview_label_check.outputs.label_check == 'success' }}
- name: Display backport info log
if: success() && steps.label_check.outputs.label_check == 'success'
run: cat ~/.backport/backport.info.log
- name: Debug log
if: ${{ failure() && steps.preview_label_check.outputs.label_check == 'success' }}
- name: Display backport debug log
if: failure() && steps.label_check.outputs.label_check == 'success'
run: cat ~/.backport/backport.debug.log
+20 -13
View File
@@ -1,24 +1,31 @@
name: Prowler - Conventional Commit
name: 'Tools: Conventional Commit'
on:
pull_request:
types:
- "opened"
- "edited"
- "synchronize"
branches:
- "master"
- "v3"
- "v4.*"
- "v5.*"
- 'master'
- 'v3'
- 'v4.*'
- 'v5.*'
types:
- 'opened'
- 'edited'
- 'synchronize'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
conventional-commit-check:
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: read
steps:
- name: conventional-commit-check
id: conventional-commit-check
- name: Check PR title format
uses: agenthunt/conventional-commit-checker-action@9e552d650d0e205553ec7792d447929fc78e012b # v2.0.0
with:
pr-title-regex: '^(feat|fix|docs|style|refactor|perf|test|chore|build|ci|revert)(\([^)]+\))?!?: .+'
pr-title-regex: '^(feat|fix|docs|style|refactor|perf|test|chore|build|ci|revert)(\([^)]+\))?!?: .+'
+49 -46
View File
@@ -1,67 +1,70 @@
name: Prowler - Create Backport Label
name: 'Tools: Backport Label'
on:
release:
types: [published]
types:
- 'published'
concurrency:
group: ${{ github.workflow }}-${{ github.event.release.tag_name }}
cancel-in-progress: false
env:
BACKPORT_LABEL_PREFIX: backport-to-
BACKPORT_LABEL_COLOR: B60205
jobs:
create_label:
create-label:
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: write
contents: read
issues: write
steps:
- name: Create backport label
- name: Create backport label for minor releases
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
RELEASE_TAG: ${{ github.event.release.tag_name }}
OWNER_REPO: ${{ github.repository }}
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
VERSION_ONLY=${RELEASE_TAG#v} # Remove 'v' prefix if present (e.g., v3.2.0 -> 3.2.0)
RELEASE_TAG="${{ github.event.release.tag_name }}"
if [ -z "$RELEASE_TAG" ]; then
echo "Error: No release tag provided"
exit 1
fi
echo "Processing release tag: $RELEASE_TAG"
# Remove 'v' prefix if present (e.g., v3.2.0 -> 3.2.0)
VERSION_ONLY="${RELEASE_TAG#v}"
# Check if it's a minor version (X.Y.0)
if [[ "$VERSION_ONLY" =~ ^[0-9]+\.[0-9]+\.0$ ]]; then
echo "Release ${RELEASE_TAG} (version ${VERSION_ONLY}) is a minor version. Proceeding to create backport label."
if [[ "$VERSION_ONLY" =~ ^([0-9]+)\.([0-9]+)\.0$ ]]; then
echo "Release $RELEASE_TAG (version $VERSION_ONLY) is a minor version. Proceeding to create backport label."
TWO_DIGIT_VERSION=${VERSION_ONLY%.0} # Extract X.Y from X.Y.0 (e.g., 5.6 from 5.6.0)
# Extract X.Y from X.Y.0 (e.g., 5.6 from 5.6.0)
MAJOR="${BASH_REMATCH[1]}"
MINOR="${BASH_REMATCH[2]}"
TWO_DIGIT_VERSION="${MAJOR}.${MINOR}"
FINAL_LABEL_NAME="backport-to-v${TWO_DIGIT_VERSION}"
FINAL_DESCRIPTION="Backport PR to the v${TWO_DIGIT_VERSION} branch"
LABEL_NAME="${BACKPORT_LABEL_PREFIX}v${TWO_DIGIT_VERSION}"
LABEL_DESC="Backport PR to the v${TWO_DIGIT_VERSION} branch"
LABEL_COLOR="$BACKPORT_LABEL_COLOR"
echo "Effective label name will be: ${FINAL_LABEL_NAME}"
echo "Effective description will be: ${FINAL_DESCRIPTION}"
echo "Label name: $LABEL_NAME"
echo "Label description: $LABEL_DESC"
# Check if the label already exists
STATUS_CODE=$(curl -s -o /dev/null -w "%{http_code}" -H "Authorization: token ${GITHUB_TOKEN}" "https://api.github.com/repos/${OWNER_REPO}/labels/${FINAL_LABEL_NAME}")
if [ "${STATUS_CODE}" -eq 200 ]; then
echo "Label '${FINAL_LABEL_NAME}' already exists."
elif [ "${STATUS_CODE}" -eq 404 ]; then
echo "Label '${FINAL_LABEL_NAME}' does not exist. Creating it..."
# Prepare JSON data payload
JSON_DATA=$(printf '{"name":"%s","description":"%s","color":"B60205"}' "${FINAL_LABEL_NAME}" "${FINAL_DESCRIPTION}")
CREATE_STATUS_CODE=$(curl -s -o /tmp/curl_create_response.json -w "%{http_code}" -X POST \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token ${GITHUB_TOKEN}" \
--data "${JSON_DATA}" \
"https://api.github.com/repos/${OWNER_REPO}/labels")
CREATE_RESPONSE_BODY=$(cat /tmp/curl_create_response.json)
rm -f /tmp/curl_create_response.json
if [ "$CREATE_STATUS_CODE" -eq 201 ]; then
echo "Label '${FINAL_LABEL_NAME}' created successfully."
else
echo "Error creating label '${FINAL_LABEL_NAME}'. Status: $CREATE_STATUS_CODE"
echo "Response: $CREATE_RESPONSE_BODY"
exit 1
fi
# Check if label already exists
if gh label list --repo ${{ github.repository }} --limit 1000 | grep -q "^${LABEL_NAME}[[:space:]]"; then
echo "Label '$LABEL_NAME' already exists."
else
echo "Error checking for label '${FINAL_LABEL_NAME}'. HTTP Status: ${STATUS_CODE}"
exit 1
echo "Label '$LABEL_NAME' does not exist. Creating it..."
gh label create "$LABEL_NAME" \
--description "$LABEL_DESC" \
--color "$LABEL_COLOR" \
--repo ${{ github.repository }}
echo "Label '$LABEL_NAME' created successfully."
fi
else
echo "Release ${RELEASE_TAG} (version ${VERSION_ONLY}) is not a minor version. Skipping backport label creation."
exit 0
echo "Release $RELEASE_TAG (version $VERSION_ONLY) is not a minor version. Skipping backport label creation."
fi
+1 -1
View File
@@ -1,4 +1,4 @@
name: Prowler - Find secrets
name: 'Tools: TruffleHog'
on: pull_request
+1 -44
View File
@@ -10,7 +10,6 @@ on:
- 'ui/**'
jobs:
e2e-tests:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
@@ -19,51 +18,9 @@ jobs:
AUTH_TRUST_HOST: true
NEXTAUTH_URL: 'http://localhost:3000'
NEXT_PUBLIC_API_BASE_URL: 'http://localhost:8080/api/v1'
E2E_ADMIN_USER: ${{ secrets.E2E_ADMIN_USER }}
E2E_ADMIN_PASSWORD: ${{ secrets.E2E_ADMIN_PASSWORD }}
E2E_AWS_PROVIDER_ACCOUNT_ID: ${{ secrets.E2E_AWS_PROVIDER_ACCOUNT_ID }}
E2E_AWS_PROVIDER_ACCESS_KEY: ${{ secrets.E2E_AWS_PROVIDER_ACCESS_KEY }}
E2E_AWS_PROVIDER_SECRET_KEY: ${{ secrets.E2E_AWS_PROVIDER_SECRET_KEY }}
E2E_AWS_PROVIDER_ROLE_ARN: ${{ secrets.E2E_AWS_PROVIDER_ROLE_ARN }}
E2E_AZURE_SUBSCRIPTION_ID: ${{ secrets.E2E_AZURE_SUBSCRIPTION_ID }}
E2E_AZURE_CLIENT_ID: ${{ secrets.E2E_AZURE_CLIENT_ID }}
E2E_AZURE_SECRET_ID: ${{ secrets.E2E_AZURE_SECRET_ID }}
E2E_AZURE_TENANT_ID: ${{ secrets.E2E_AZURE_TENANT_ID }}
E2E_M365_DOMAIN_ID: ${{ secrets.E2E_M365_DOMAIN_ID }}
E2E_M365_CLIENT_ID: ${{ secrets.E2E_M365_CLIENT_ID }}
E2E_M365_SECRET_ID: ${{ secrets.E2E_M365_SECRET_ID }}
E2E_M365_TENANT_ID: ${{ secrets.E2E_M365_TENANT_ID }}
E2E_M365_CERTIFICATE_CONTENT: ${{ secrets.E2E_M365_CERTIFICATE_CONTENT }}
E2E_KUBERNETES_CONTEXT: 'kind-kind'
E2E_KUBERNETES_KUBECONFIG_PATH: /home/runner/.kube/config
E2E_GITHUB_APP_ID: ${{ secrets.E2E_GITHUB_APP_ID }}
E2E_GITHUB_BASE64_APP_PRIVATE_KEY: ${{ secrets.E2E_GITHUB_BASE64_APP_PRIVATE_KEY }}
E2E_GITHUB_USERNAME: ${{ secrets.E2E_GITHUB_USERNAME }}
E2E_GITHUB_PERSONAL_ACCESS_TOKEN: ${{ secrets.E2E_GITHUB_PERSONAL_ACCESS_TOKEN }}
E2E_GITHUB_ORGANIZATION: ${{ secrets.E2E_GITHUB_ORGANIZATION }}
E2E_GITHUB_ORGANIZATION_ACCESS_TOKEN: ${{ secrets.E2E_GITHUB_ORGANIZATION_ACCESS_TOKEN }}
E2E_ORGANIZATION_ID: ${{ secrets.E2E_ORGANIZATION_ID }}
E2E_NEW_USER_PASSWORD: ${{ secrets.E2E_NEW_USER_PASSWORD }}
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Create k8s Kind Cluster
uses: helm/kind-action@v1
with:
cluster_name: kind
- name: Modify kubeconfig
run: |
# Modify the kubeconfig to use the kind cluster server to https://kind-control-plane:6443
# from worker service into docker-compose.yml
kubectl config set-cluster kind-kind --server=https://kind-control-plane:6443
kubectl config view
- name: Add network kind to docker compose
run: |
# Add the network kind to the docker compose to interconnect to kind cluster
yq -i '.networks.kind.external = true' docker-compose.yml
# Add network kind to worker service and default network too
yq -i '.services.worker.networks = ["kind","default"]' docker-compose.yml
- name: Fix API data directory permissions
run: docker run --rm -v $(pwd)/_data/api:/data alpine chown -R 1000:1000 /data
- name: Start API services
@@ -140,4 +97,4 @@ jobs:
run: |
echo "Shutting down services..."
docker compose down -v || true
echo "Cleanup completed"
echo "Cleanup completed"
+1
View File
@@ -14,6 +14,7 @@ All notable changes to the **Prowler API** are documented in this file.
- Support for `passed_findings` and `total_findings` fields in compliance requirement overview for accurate Prowler ThreatScore calculation [(#8582)](https://github.com/prowler-cloud/prowler/pull/8582)
- Database read replica support [(#8869)](https://github.com/prowler-cloud/prowler/pull/8869)
- Support Common Cloud Controls for AWS, Azure and GCP [(#8000)](https://github.com/prowler-cloud/prowler/pull/8000)
- Add `provider_id__in` filter support to findings and findings severity overview endpoints [(#8951)](https://github.com/prowler-cloud/prowler/pull/8951)
### Changed
- Now the MANAGE_ACCOUNT permission is required to modify or read user permissions instead of MANAGE_USERS [(#8281)](https://github.com/prowler-cloud/prowler/pull/8281)
+1
View File
@@ -765,6 +765,7 @@ class ComplianceOverviewFilter(FilterSet):
class ScanSummaryFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
provider_id = UUIDFilter(field_name="scan__provider__id", lookup_expr="exact")
provider_id__in = UUIDInFilter(field_name="scan__provider__id", lookup_expr="in")
provider_type = ChoiceFilter(
field_name="scan__provider__provider", choices=Provider.ProviderChoices.choices
)
+30
View File
@@ -3611,6 +3611,16 @@ paths:
schema:
type: string
format: uuid
- in: query
name: filter[provider_id__in]
schema:
type: array
items:
type: string
format: uuid
description: Multiple values may be separated by commas.
explode: false
style: form
- in: query
name: filter[provider_type]
schema:
@@ -3778,6 +3788,16 @@ paths:
schema:
type: string
format: uuid
- in: query
name: filter[provider_id__in]
schema:
type: array
items:
type: string
format: uuid
description: Multiple values may be separated by commas.
explode: false
style: form
- in: query
name: filter[provider_type]
schema:
@@ -3980,6 +4000,16 @@ paths:
schema:
type: string
format: uuid
- in: query
name: filter[provider_id__in]
schema:
type: array
items:
type: string
format: uuid
description: Multiple values may be separated by commas.
explode: false
style: form
- in: query
name: filter[provider_type]
schema:
+166
View File
@@ -46,6 +46,7 @@ from api.models import (
SAMLConfiguration,
SAMLToken,
Scan,
ScanSummary,
StateChoices,
Task,
TenantAPIKey,
@@ -5766,6 +5767,171 @@ class TestOverviewViewSet:
assert service1_data["attributes"]["muted"] == 1
assert service2_data["attributes"]["muted"] == 0
def test_overview_findings_provider_id_in_filter(
self, authenticated_client, tenants_fixture, providers_fixture
):
tenant = tenants_fixture[0]
provider1, provider2, *_ = providers_fixture
scan1 = Scan.objects.create(
name="scan-one",
provider=provider1,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant=tenant,
)
scan2 = Scan.objects.create(
name="scan-two",
provider=provider2,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant=tenant,
)
ScanSummary.objects.create(
tenant=tenant,
scan=scan1,
check_id="check-provider-one",
service="service-a",
severity="high",
region="region-a",
_pass=5,
fail=1,
muted=2,
total=8,
new=5,
changed=2,
unchanged=1,
fail_new=1,
fail_changed=0,
pass_new=3,
pass_changed=2,
muted_new=1,
muted_changed=1,
)
ScanSummary.objects.create(
tenant=tenant,
scan=scan2,
check_id="check-provider-two",
service="service-b",
severity="medium",
region="region-b",
_pass=2,
fail=3,
muted=1,
total=6,
new=3,
changed=2,
unchanged=1,
fail_new=2,
fail_changed=1,
pass_new=1,
pass_changed=1,
muted_new=1,
muted_changed=0,
)
single_response = authenticated_client.get(
reverse("overview-findings"),
{"filter[provider_id__in]": str(provider1.id)},
)
assert single_response.status_code == status.HTTP_200_OK
single_attributes = single_response.json()["data"]["attributes"]
assert single_attributes["pass"] == 5
assert single_attributes["fail"] == 1
assert single_attributes["muted"] == 2
assert single_attributes["total"] == 8
combined_response = authenticated_client.get(
reverse("overview-findings"),
{"filter[provider_id__in]": f"{provider1.id},{provider2.id}"},
)
assert combined_response.status_code == status.HTTP_200_OK
combined_attributes = combined_response.json()["data"]["attributes"]
assert combined_attributes["pass"] == 7
assert combined_attributes["fail"] == 4
assert combined_attributes["muted"] == 3
assert combined_attributes["total"] == 14
def test_overview_findings_severity_provider_id_in_filter(
self, authenticated_client, tenants_fixture, providers_fixture
):
tenant = tenants_fixture[0]
provider1, provider2, *_ = providers_fixture
scan1 = Scan.objects.create(
name="severity-scan-one",
provider=provider1,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant=tenant,
)
scan2 = Scan.objects.create(
name="severity-scan-two",
provider=provider2,
trigger=Scan.TriggerChoices.MANUAL,
state=StateChoices.COMPLETED,
tenant=tenant,
)
ScanSummary.objects.create(
tenant=tenant,
scan=scan1,
check_id="severity-check-one",
service="service-a",
severity="high",
region="region-a",
_pass=4,
fail=4,
muted=0,
total=8,
)
ScanSummary.objects.create(
tenant=tenant,
scan=scan1,
check_id="severity-check-two",
service="service-a",
severity="medium",
region="region-b",
_pass=2,
fail=2,
muted=0,
total=4,
)
ScanSummary.objects.create(
tenant=tenant,
scan=scan2,
check_id="severity-check-three",
service="service-b",
severity="critical",
region="region-c",
_pass=1,
fail=2,
muted=0,
total=3,
)
single_response = authenticated_client.get(
reverse("overview-findings_severity"),
{"filter[provider_id__in]": str(provider1.id)},
)
assert single_response.status_code == status.HTTP_200_OK
single_attributes = single_response.json()["data"]["attributes"]
assert single_attributes["high"] == 8
assert single_attributes["medium"] == 4
assert single_attributes["critical"] == 0
combined_response = authenticated_client.get(
reverse("overview-findings_severity"),
{"filter[provider_id__in]": f"{provider1.id},{provider2.id}"},
)
assert combined_response.status_code == status.HTTP_200_OK
combined_attributes = combined_response.json()["data"]["attributes"]
assert combined_attributes["high"] == 8
assert combined_attributes["medium"] == 4
assert combined_attributes["critical"] == 3
@pytest.mark.django_db
class TestScheduleViewSet:
+2 -9
View File
@@ -114,7 +114,8 @@
"group": "Tutorials",
"pages": [
"user-guide/tutorials/prowler-app-sso-entra",
"user-guide/tutorials/bulk-provider-provisioning"
"user-guide/tutorials/bulk-provider-provisioning",
"user-guide/tutorials/aws-organizations-bulk-provisioning"
]
}
]
@@ -404,14 +405,6 @@
"source": "/projects/prowler-open-source/en/latest/tutorials/gcp/getting-started-gcp",
"destination": "/user-guide/providers/gcp/getting-started-gcp"
},
{
"source": "/projects/prowler-open-source/en/latest/tutorials/prowler-app",
"destination": "/user-guide/tutorials/prowler-app#step-4-4%3A-kubernetes-credentials%3A"
},
{
"source": "/projects/prowler-open-source/en/latest/tutorials/prowler-app/#step-3-add-a-provider",
"destination": "/user-guide/tutorials/prowler-app#step-3-add-a-provider"
},
{
"source": "/projects/prowler-open-source/en/latest/tutorials/microsoft365/getting-started-m365",
"destination": "/user-guide/providers/microsoft365/getting-started-m365"
@@ -182,19 +182,19 @@ Configure the server using environment variables:
|----------|-------------|----------|---------|
| `PROWLER_APP_API_KEY` | Prowler API key | Only for STDIO mode | - |
| `PROWLER_API_BASE_URL` | Custom Prowler API endpoint | No | `https://api.prowler.com` |
| `PROWLER_MCP_MODE` | Default transport mode (overwritten by `--transport` argument) | No | `stdio` |
| `PROWLER_MCP_TRANSPORT_MODE` | Default transport mode (overwritten by `--transport` argument) | No | `stdio` |
<CodeGroup>
```bash macOS/Linux
export PROWLER_APP_API_KEY="pk_your_api_key_here"
export PROWLER_API_BASE_URL="https://api.prowler.com"
export PROWLER_MCP_MODE="http"
export PROWLER_MCP_TRANSPORT_MODE="http"
```
```bash Windows PowerShell
$env:PROWLER_APP_API_KEY="pk_your_api_key_here"
$env:PROWLER_API_BASE_URL="https://api.prowler.com"
$env:PROWLER_MCP_MODE="http"
$env:PROWLER_MCP_TRANSPORT_MODE="http"
```
</CodeGroup>
@@ -209,7 +209,7 @@ For convenience, create a `.env` file in the `mcp_server` directory:
```bash .env
PROWLER_APP_API_KEY=pk_your_api_key_here
PROWLER_API_BASE_URL=https://api.prowler.com
PROWLER_MCP_MODE=stdio
PROWLER_MCP_TRANSPORT_MODE=stdio
```
When using Docker, pass the environment file:
@@ -228,6 +228,35 @@ uvx /path/to/prowler/mcp_server/
This is particularly useful when configuring MCP clients that need to launch the server from a specific path.
## Production Deployment
For production deployments that require customization, it is recommended to use the ASGI application that can be found in `prowler_mcp_server.server`. This can be run with uvicorn:
```bash
uvicorn prowler_mcp_server.server:app --host 0.0.0.0 --port 8000
```
For more details on production deployment options, see the [FastMCP production deployment guide](https://gofastmcp.com/deployment/http#production-deployment) and [uvicorn settings](https://www.uvicorn.org/settings/).
### Entrypoint Script
The source tree includes `entrypoint.sh` to simplify switching between the
standard CLI runner and the ASGI app. The first argument selects the mode and
any additional flags are passed straight through:
```bash
# Default CLI experience (prowler-mcp console script)
./entrypoint.sh main --transport http --host 0.0.0.0
# ASGI app via uvicorn
./entrypoint.sh uvicorn --host 0.0.0.0 --port 9000
```
Omitting the mode defaults to `main`, matching the `prowler-mcp` console script.
When `uvicorn` mode is selected, the script exports `PROWLER_MCP_TRANSPORT_MODE=http` automatically.
This is the default entrypoint for the Docker container.
## Next Steps
Now that you have the Prowler MCP Server installed, proceed to configure your MCP client:
+1 -1
View File
@@ -16,7 +16,7 @@ We use encryption everywhere possible. The data and communications used by **Pro
Prowler Cloud is GDPR compliant in regards to personal data and the ["right to be forgotten"](https://gdpr.eu/right-to-be-forgotten/). When a user deletes their account their user information will be deleted from Prowler Cloud online and backup systems within 10 calendar days.
## Software Security
## Software Security
We follow a **security-by-design approach** throughout our software development lifecycle. All changes go through automated checks at every stage, from local development to production deployment.
@@ -0,0 +1,491 @@
---
title: 'AWS Organizations Bulk Provisioning in Prowler'
---
Prowler offers an automated tool to discover and provision all AWS accounts within an AWS Organization. This streamlines onboarding for organizations managing multiple AWS accounts by automatically generating the configuration needed for bulk provisioning.
The tool, `aws_org_generator.py`, complements the [Bulk Provider Provisioning](./bulk-provider-provisioning) tool and is available in the Prowler repository at: [util/prowler-bulk-provisioning](https://github.com/prowler-cloud/prowler/tree/master/util/prowler-bulk-provisioning)
<Note>
Native support for bulk provisioning AWS Organizations and similar multi-account structures directly in the Prowler UI/API is on the official roadmap.
Track progress and vote for this feature at: [Bulk Provisioning in the UI/API for AWS Organizations](https://roadmap.prowler.com/p/builk-provisioning-in-the-uiapi-for-aws-organizations-and-alike)
</Note>
{/* TODO: Add screenshot of the tool in action */}
## Overview
The AWS Organizations Bulk Provisioning tool simplifies multi-account onboarding by:
* Automatically discovering all active accounts in an AWS Organization
* Generating YAML configuration files for bulk provisioning
* Supporting account filtering and custom role configurations
* Eliminating manual entry of account IDs and role ARNs
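To make the discovery step concrete, the following minimal sketch (not the actual `aws_org_generator.py` source) shows how an organization's active accounts can be listed with boto3 and turned into entries in the bulk-provisioning format used later in this guide. The role name, external ID, and PyYAML dependency are illustrative assumptions.
```python
# Illustrative sketch only -- not the actual aws_org_generator.py source.
# Assumes AWS credentials with organizations:ListAccounts and PyYAML installed.
import boto3
import yaml
ROLE_NAME = "ProwlerRole"            # default role name used throughout this guide
EXTERNAL_ID = "prowler-ext-id-2024"  # example external ID used throughout this guide
def discover_active_accounts():
    """Yield every ACTIVE account in the AWS Organization."""
    org = boto3.client("organizations")
    for page in org.get_paginator("list_accounts").paginate():
        for account in page["Accounts"]:
            if account["Status"] == "ACTIVE":
                yield account
def to_provider_entry(account):
    """Build one bulk-provisioning entry in the YAML format shown later in this guide."""
    return {
        "provider": "aws",
        "uid": account["Id"],
        "alias": account["Name"],
        "auth_method": "role",
        "credentials": {
            "role_arn": f"arn:aws:iam::{account['Id']}:role/{ROLE_NAME}",
            "external_id": EXTERNAL_ID,
        },
    }
if __name__ == "__main__":
    entries = [to_provider_entry(account) for account in discover_active_accounts()]
    print(yaml.safe_dump(entries, sort_keys=False))
```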
## Prerequisites
### Requirements
* Python 3.7 or higher
* AWS credentials with Organizations read access
* ProwlerRole (or custom role) deployed across all target accounts
* Prowler API key (from Prowler Cloud or self-hosted Prowler App)
* For self-hosted Prowler App, remember to [point to your API base URL](./bulk-provider-provisioning#custom-api-endpoints)
* Learn how to create API keys: [Prowler App API Keys](../providers/prowler-app-api-keys)
### Deploying ProwlerRole Across AWS Organizations
Before using the AWS Organizations generator, deploy the ProwlerRole across all accounts in the organization using CloudFormation StackSets.
<Note>
**Follow the official documentation:**
[Deploying Prowler IAM Roles Across AWS Organizations](../providers/aws/organizations#deploying-prowler-iam-roles-across-aws-organizations)
**Key points:**
* Use CloudFormation StackSets from the management account
* Deploy to all organizational units (OUs) or specific OUs
* Use an external ID for enhanced security
* Ensure the role has necessary permissions for Prowler scans
</Note>
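For orientation only, the hedged boto3 sketch below shows roughly what an organization-wide (service-managed) StackSet deployment of the role involves when run from the management account. The stack set name, the `ExternalId` parameter name, and the root/OU ID are assumptions; defer to the linked documentation and the template itself for the supported procedure.
```python
# Rough sketch only -- follow the official documentation linked above.
# The parameter name "ExternalId", the stack set name, and the root/OU ID are assumptions.
import boto3
cfn = boto3.client("cloudformation")
# Template downloaded locally from permissions/templates/cloudformation/ in the Prowler repo.
with open("prowler-scan-role.yml") as template_file:
    template_body = template_file.read()
cfn.create_stack_set(
    StackSetName="prowler-scan-role",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "ExternalId", "ParameterValue": "prowler-org-2024-abc123"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],
    PermissionModel="SERVICE_MANAGED",  # org-managed permissions from the management account
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)
# One stack instance per account under the chosen root or organizational units.
cfn.create_stack_instances(
    StackSetName="prowler-scan-role",
    DeploymentTargets={"OrganizationalUnitIds": ["r-abcd"]},  # hypothetical root ID
    Regions=["us-east-1"],  # IAM roles are global, so a single region is enough
)
```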
### Installation
Clone the repository and install required dependencies:
```bash
git clone https://github.com/prowler-cloud/prowler.git
cd prowler/util/prowler-bulk-provisioning
pip install -r requirements-aws-org.txt
```
### AWS Credentials Setup
Configure AWS credentials with Organizations read access:
* **Management account credentials**, or
* **Delegated administrator account** with `organizations:ListAccounts` permission
Required IAM permissions:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"organizations:ListAccounts",
"organizations:DescribeOrganization"
],
"Resource": "*"
}
]
}
```
### Prowler API Key Setup
Configure your Prowler API key:
```bash
export PROWLER_API_KEY="pk_example-api-key"
```
To create an API key:
1. Log in to Prowler Cloud or Prowler App
2. Click **Profile** → **Account**
3. Click **Create API Key**
4. Provide a descriptive name and optionally set an expiration date
5. Copy the generated API key (it will only be shown once)
For detailed instructions, see: [Prowler App API Keys](../providers/prowler-app-api-keys)
## Basic Usage
### Generate Configuration for All Accounts
To generate a YAML configuration file for all active accounts in the organization:
```bash
python aws_org_generator.py -o aws-accounts.yaml --external-id prowler-ext-id-2024
```
This command:
1. Lists all ACTIVE accounts in the organization
2. Generates YAML entries for each account
3. Saves the configuration to `aws-accounts.yaml`
**Output:**
```
Fetching accounts from AWS Organizations...
Found 47 active accounts in organization
Generated configuration for 47 accounts
Configuration written to: aws-accounts.yaml
Next steps:
1. Review the generated file: cat aws-accounts.yaml | head -n 20
2. Run bulk provisioning: python prowler_bulk_provisioning.py aws-accounts.yaml
```
### Review Generated Configuration
Review the generated YAML configuration:
```bash
head -n 20 aws-accounts.yaml
```
**Example output:**
```yaml
- provider: aws
uid: '111111111111'
alias: Production-Account
auth_method: role
credentials:
role_arn: arn:aws:iam::111111111111:role/ProwlerRole
external_id: prowler-ext-id-2024
- provider: aws
uid: '222222222222'
alias: Development-Account
auth_method: role
credentials:
role_arn: arn:aws:iam::222222222222:role/ProwlerRole
external_id: prowler-ext-id-2024
```
### Dry Run Mode
Test the configuration without writing a file:
```bash
python aws_org_generator.py \
--external-id prowler-ext-id-2024 \
--dry-run
```
## Advanced Configuration
### Using a Specific AWS Profile
Specify an AWS profile when multiple profiles are configured:
```bash
python aws_org_generator.py \
-o aws-accounts.yaml \
--profile org-management-admin \
--external-id prowler-ext-id-2024
```
### Excluding Specific Accounts
Exclude the management account or other accounts from provisioning:
```bash
python aws_org_generator.py \
  -o aws-accounts.yaml \
  --external-id prowler-ext-id-2024 \
  --exclude 123456789012,210987654321
```
Common exclusion scenarios:
* Management account (requires different permissions)
* Break-glass accounts (emergency access)
* Suspended or archived accounts
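The management account listed above can be looked up directly if its ID is not known offhand, for example:
```bash
# Print the management account ID so it can be passed to --exclude
aws organizations describe-organization \
  --query 'Organization.MasterAccountId' \
  --output text
```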
### Including Only Specific Accounts
Generate configuration for specific accounts only:
```bash
python aws_org_generator.py \
  -o aws-accounts.yaml \
  --external-id prowler-ext-id-2024 \
  --include 111111111111,222222222222,333333333333
```
### Custom Role Name
Specify a custom role name if not using the default `ProwlerRole`:
```bash
python aws_org_generator.py \
  -o aws-accounts.yaml \
  --role-name ProwlerExecutionRole \
  --external-id prowler-ext-id-2024
```
### Custom Alias Format
Customize account aliases using template variables:
```bash
# Use account name and ID
python aws_org_generator.py \
  -o aws-accounts.yaml \
  --alias-format "{name}-{id}" \
  --external-id prowler-ext-id-2024

# Use account email
python aws_org_generator.py \
  -o aws-accounts.yaml \
  --alias-format "{email}" \
  --external-id prowler-ext-id-2024
```
Available template variables:
* `{name}` - Account name
* `{id}` - Account ID
* `{email}` - Account email
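After generating with `--alias-format "{name}-{id}"`, the resulting aliases can be spot-checked. The values below are illustrative, assuming the two example accounts shown earlier:
```bash
grep "alias:" aws-accounts.yaml | head -2
# Illustrative output:
#   alias: Production-Account-111111111111
#   alias: Development-Account-222222222222
```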
### Additional Role Assumption Options
Configure optional role assumption parameters:
```bash
python aws_org_generator.py \
  -o aws-accounts.yaml \
  --role-name ProwlerRole \
  --external-id prowler-ext-id-2024 \
  --session-name prowler-scan-session \
  --duration-seconds 3600
```
## Complete Workflow Example
<Steps>
<Step title="Deploy ProwlerRole Using StackSets">
1. Log in to the AWS management account
2. Open CloudFormation → StackSets
3. Create a new StackSet using the [Prowler role template](https://github.com/prowler-cloud/prowler/blob/master/permissions/templates/cloudformation/prowler-scan-role.yml)
4. Deploy to all organizational units
5. Use a unique external ID (e.g., `prowler-org-2024-abc123`)
{/* TODO: Add screenshot of CloudFormation StackSets deployment */}
</Step>
<Step title="Generate YAML Configuration">
Configure AWS credentials and generate the YAML file:
```bash
# Using management account credentials
export AWS_PROFILE=org-management
# Generate configuration
python aws_org_generator.py \
  -o aws-org-accounts.yaml \
  --external-id prowler-org-2024-abc123 \
  --exclude 123456789012
```
**Output:**
```
Fetching accounts from AWS Organizations...
Using AWS profile: org-management
Found 47 active accounts in organization
Generated configuration for 46 accounts
Configuration written to: aws-org-accounts.yaml
Next steps:
1. Review the generated file: cat aws-org-accounts.yaml | head -n 20
2. Run bulk provisioning: python prowler_bulk_provisioning.py aws-org-accounts.yaml
```
</Step>
<Step title="Review Generated Configuration">
Verify the generated YAML configuration:
```bash
# View first 20 lines
head -n 20 aws-org-accounts.yaml
# Check for unexpected accounts
grep "uid:" aws-org-accounts.yaml
# Verify role ARNs
grep "role_arn:" aws-org-accounts.yaml | head -5
# Count accounts
grep "provider: aws" aws-org-accounts.yaml | wc -l
```
</Step>
<Step title="Run Bulk Provisioning">
Provision all accounts to Prowler Cloud or Prowler App:
```bash
# Set Prowler API key
export PROWLER_API_KEY="pk_example-api-key"
# Run bulk provisioning with connection testing
python prowler_bulk_provisioning.py aws-org-accounts.yaml
```
**With custom options:**
```bash
python prowler_bulk_provisioning.py aws-org-accounts.yaml \
  --concurrency 10 \
  --timeout 120
```
**Successful output:**
```
[1] ✅ Created provider (id=db9a8985-f9ec-4dd8-b5a0-e05ab3880bed)
[1] ✅ Created secret (id=466f76c6-5878-4602-a4bc-13f9522c1fd2)
[1] ✅ Connection test: Connected
[2] ✅ Created provider (id=7a99f789-0cf5-4329-8279-2d443a962676)
[2] ✅ Created secret (id=c5702180-f7c4-40fd-be0e-f6433479b126)
[2] ✅ Connection test: Connected
Done. Success: 46 Failures: 0
```
{/* TODO: Add screenshot of successful bulk provisioning output */}
</Step>
</Steps>
## Command Reference
### Full Command-Line Options
```bash
python aws_org_generator.py \
  -o OUTPUT_FILE \
  --role-name ROLE_NAME \
  --external-id EXTERNAL_ID \
  --session-name SESSION_NAME \
  --duration-seconds SECONDS \
  --alias-format FORMAT \
  --exclude ACCOUNT_IDS \
  --include ACCOUNT_IDS \
  --profile AWS_PROFILE \
  --region AWS_REGION \
  --dry-run
```
## Troubleshooting
### Error: "No AWS credentials found"
**Solution:** Configure AWS credentials using one of these methods:
```bash
# Method 1: AWS CLI configure
aws configure
# Method 2: Environment variables
export AWS_ACCESS_KEY_ID=your-key-id
export AWS_SECRET_ACCESS_KEY=your-secret-key
# Method 3: Use AWS profile
export AWS_PROFILE=org-management
```
### Error: "Access denied to AWS Organizations API"
**Cause:** Current credentials don't have permission to list organization accounts.
**Solution:**
* Ensure management account credentials are used
* Verify IAM permissions include `organizations:ListAccounts`
* Check IAM policies for Organizations access
### Error: "AWS Organizations is not enabled"
**Cause:** The account is not part of an organization.
**Solution:** This tool requires an AWS Organization. Create one in the AWS Organizations console or use standard bulk provisioning for standalone accounts.
### No Accounts Generated After Filters
**Cause:** All accounts were filtered out by `--exclude` or `--include` options.
**Solution:** Review filter options and verify account IDs are correct:
```bash
# List all accounts in organization
aws organizations list-accounts --query "Accounts[?Status=='ACTIVE'].[Id,Name]" --output table
```
### Connection Test Failures During Bulk Provisioning
**Cause:** ProwlerRole may not be deployed correctly or credentials are invalid.
**Solution:**
* Verify StackSet deployment status in CloudFormation
* Check role trust policy includes correct external ID
* Test role assumption manually:
```bash
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/ProwlerRole \
  --role-session-name test \
  --external-id prowler-ext-id-2024
```
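The StackSet deployment status mentioned above can also be checked from the CLI. The sketch below lists any stack instances that are not in the `CURRENT` state, assuming the StackSet was named `ProwlerRole`:
```bash
aws cloudformation list-stack-instances \
  --stack-set-name ProwlerRole \
  --query 'Summaries[?Status!=`CURRENT`].[Account,Status,StatusReason]' \
  --output table
```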
## Security Best Practices
### Use External ID
Always use an external ID when assuming cross-account roles:
```bash
python aws_org_generator.py \
  -o aws-accounts.yaml \
  --external-id $(uuidgen | tr '[:upper:]' '[:lower:]')
```
The external ID must match the one configured in the ProwlerRole trust policy across all accounts.
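To verify that a given member account's role carries the expected external ID in its trust policy, the role can be inspected with credentials in that account. The command below assumes the default role name and should print the `Condition` block, including the `sts:ExternalId` value:
```bash
aws iam get-role \
  --role-name ProwlerRole \
  --query 'Role.AssumeRolePolicyDocument.Statement[].Condition' \
  --output json
```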
### Exclude Sensitive Accounts
Exclude accounts that shouldn't be scanned or that require special handling:
```bash
python aws_org_generator.py \
  -o aws-accounts.yaml \
  --external-id prowler-ext-id \
  --exclude 123456789012,111111111111 # management, break-glass accounts
```
### Review Generated Configuration
Always review the generated YAML before provisioning:
```bash
# Check for unexpected accounts
grep "uid:" aws-org-accounts.yaml
# Verify role ARNs
grep "role_arn:" aws-org-accounts.yaml | head -5
# Count accounts
grep "provider: aws" aws-org-accounts.yaml | wc -l
```
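As an extra sanity check, the file can be parsed as YAML and the provider entries counted. This assumes PyYAML is available in the environment (it is commonly pulled in by the tool's requirements, but verify before relying on it):
```bash
# Parse the generated file and count provider entries
python -c "import yaml; print(len(yaml.safe_load(open('aws-org-accounts.yaml'))), 'provider entries')"
```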
## Next Steps
<Columns cols={2}>
<Card title="Bulk Provider Provisioning" icon="terminal" href="/user-guide/tutorials/bulk-provider-provisioning">
Learn how to bulk provision providers in Prowler.
</Card>
<Card title="Prowler App" icon="pen-to-square" href="/user-guide/tutorials/prowler-app">
Detailed instructions on how to use Prowler.
</Card>
</Columns>
@@ -17,14 +17,18 @@ The Bulk Provider Provisioning tool automates the creation of cloud providers in
* Testing connections to verify successful authentication
* Processing multiple providers concurrently for efficiency
<Tip>
**Using AWS Organizations?** For organizations with many AWS accounts, use the automated [AWS Organizations Bulk Provisioning](./aws-organizations-bulk-provisioning) tool to automatically discover and generate configuration for all accounts in your organization.
</Tip>
## Prerequisites
### Requirements
* Python 3.7 or higher
* Prowler API token (from Prowler Cloud or self-hosted Prowler App)
* Prowler API key (from Prowler Cloud or self-hosted Prowler App)
* For self-hosted Prowler App, remember to [point to your API base URL](#custom-api-endpoints)
* Learn how to create API keys: [Prowler App API Keys](../providers/prowler-app-api-keys)
* Authentication credentials for target cloud providers
### Installation
@@ -39,28 +43,21 @@ pip install -r requirements.txt
### Authentication Setup
Configure your Prowler API token:
Configure your Prowler API key:
```bash
export PROWLER_API_TOKEN="your-prowler-api-token"
export PROWLER_API_KEY="pk_example-api-key"
```
To obtain an API token programmatically:
To create an API key:
```bash
export PROWLER_API_TOKEN=$(curl --location 'https://api.prowler.com/api/v1/tokens' \
--header 'Content-Type: application/vnd.api+json' \
--header 'Accept: application/vnd.api+json' \
--data-raw '{
"data": {
"type": "tokens",
"attributes": {
"email": "your@email.com",
"password": "your-password"
}
}
}' | jq -r .data.attributes.access)
```
1. Log in to Prowler Cloud or Prowler App
2. Click **Profile** → **Account**
3. Click **Create API Key**
4. Provide a descriptive name and optionally set an expiration date
5. Copy the generated API key (it will only be shown once)
For detailed instructions, see: [Prowler App API Keys](../providers/prowler-app-api-keys)
## Configuration File Structure
@@ -340,11 +337,11 @@ Done. Success: 2 Failures: 0
## Troubleshooting
### Invalid API Token
### Invalid API Key
```
Error: 401 Unauthorized
Solution: Verify your PROWLER_API_TOKEN or --token parameter
Solution: Verify your PROWLER_API_KEY environment variable or --api-key parameter
```
### Network Timeouts
+1 -1
View File
@@ -1,3 +1,3 @@
PROWLER_APP_API_KEY="pk_your_api_key_here"
PROWLER_API_BASE_URL="https://api.prowler.com"
PROWLER_MCP_MODE="stdio"
PROWLER_MCP_TRANSPORT_MODE="stdio"
+2 -1
View File
@@ -13,4 +13,5 @@ All notable changes to the **Prowler MCP Server** are documented in this file.
- Add new MCP Server for Prowler Documentation [(#8795)](https://github.com/prowler-cloud/prowler/pull/8795)
- API key support for STDIO mode and enhanced HTTP mode authentication [(#8823)](https://github.com/prowler-cloud/prowler/pull/8823)
- Add health check endpoint [(#8905)](https://github.com/prowler-cloud/prowler/pull/8905)
- Update Prowler Documentation MCP Server to use Mintlify API [(#8915)](https://github.com/prowler-cloud/prowler/pull/8915)
- Update Prowler Documentation MCP Server to use Mintlify API [(#8916)](https://github.com/prowler-cloud/prowler/pull/8916)
- Add custom production deployment using uvicorn [(#8958)](https://github.com/prowler-cloud/prowler/pull/8958)
+6 -7
View File
@@ -47,13 +47,12 @@ COPY --from=builder --chown=prowler /app/prowler_mcp_server /app/prowler_mcp_ser
# 3. Project metadata file (may be needed by some packages at runtime)
COPY --from=builder --chown=prowler /app/pyproject.toml /app/pyproject.toml
# 4. Entrypoint helper script for selecting runtime mode
COPY --from=builder --chown=prowler /app/entrypoint.sh /app/entrypoint.sh
# Add virtual environment to PATH so prowler-mcp command is available
ENV PATH="/app/.venv/bin:$PATH"
# Entry point for the MCP server
# Default to stdio mode, but allow overriding via command arguments
# Examples:
# docker run -p 8000:8000 prowler-mcp --transport http --host 0.0.0.0 --port 8000
# docker run prowler-mcp --transport stdio
ENTRYPOINT ["prowler-mcp"]
CMD ["--transport", "stdio"]
# Entrypoint wrapper defaults to CLI mode; override with `uvicorn` to run ASGI app
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["main"]
+17 -3
View File
@@ -144,11 +144,11 @@ uv run prowler-mcp --transport http
uv run prowler-mcp --transport http --host 0.0.0.0 --port 8080
```
For self-deployed MCP remote server, you can use also configure the server to use a custom API base URL with the environment variable `PROWLER_API_BASE_URL`; and the transport mode with the environment variable `PROWLER_MCP_MODE`.
For a self-deployed MCP remote server, you can also configure the server to use a custom API base URL with the environment variable `PROWLER_API_BASE_URL`, and the transport mode with the environment variable `PROWLER_MCP_TRANSPORT_MODE`.
```bash
export PROWLER_API_BASE_URL="https://api.prowler.com"
export PROWLER_MCP_MODE="http"
export PROWLER_MCP_TRANSPORT_MODE="http"
```
### Using uv directly
@@ -190,6 +190,16 @@ docker run --rm --env-file ./.env -p 8000:8000 -it prowler-mcp --transport http
docker run --rm --env-file ./.env -p 8080:8080 -it prowler-mcp --transport http --host 0.0.0.0 --port 8080
```
## Production Deployment
For production deployments that require customization, it is recommended to use the ASGI application that can be found in `prowler_mcp_server.server`. This can be run with uvicorn:
```bash
uvicorn prowler_mcp_server.server:app --host 0.0.0.0 --port 8000
```
For more details on production deployment options, see the [FastMCP production deployment guide](https://gofastmcp.com/deployment/http#production-deployment) and [uvicorn settings](https://www.uvicorn.org/settings/).
## Command Line Arguments
The Prowler MCP server supports the following command line arguments:
@@ -482,6 +492,10 @@ If you want to have it globally available, add the example server to Cursor's co
If you want to have it only for the current project, add the example server to the project's root in a new `.cursor/mcp.json` file.
## Documentation
For detailed documentation about the Prowler MCP Server, including guides, tutorials, and use cases, visit the [official Prowler documentation](https://docs.prowler.com).
## License
This project follows the repositorys main license. See the [LICENSE](../LICENSE) file at the repository root.
This project follows the repository's main license. See the [LICENSE](../LICENSE) file at the repository root.
+50
View File
@@ -0,0 +1,50 @@
#!/bin/sh
set -eu
usage() {
cat <<'EOF'
Usage: ./entrypoint.sh [main|uvicorn] [args...]
Modes:
main (default) Run prowler-mcp
uvicorn Run uvicorn prowler_mcp_server.server:app
All additional arguments are forwarded to the selected command.
EOF
}
mode="main"
if [ "$#" -gt 0 ]; then
case "$1" in
main|cli)
mode="main"
shift
;;
uvicorn|asgi)
mode="uvicorn"
shift
;;
-h|--help)
usage
exit 0
;;
*)
mode="main"
;;
esac
fi
case "$mode" in
main)
exec prowler-mcp "$@"
;;
uvicorn)
export PROWLER_MCP_TRANSPORT_MODE="http"
exec uvicorn prowler_mcp_server.server:app "$@"
;;
*)
usage
exit 1
;;
esac
+18 -7
View File
@@ -1,10 +1,8 @@
import argparse
import asyncio
import os
import sys
from prowler_mcp_server.lib.logger import logger
from prowler_mcp_server.server import setup_main_server
def parse_arguments():
@@ -13,7 +11,7 @@ def parse_arguments():
parser.add_argument(
"--transport",
choices=["stdio", "http"],
default=os.getenv("PROWLER_MCP_MODE", "stdio"),
default=None,
help="Transport method (default: stdio)",
)
parser.add_argument(
@@ -35,13 +33,26 @@ def main():
try:
args = parse_arguments()
# Set up server with configuration
prowler_mcp_server = asyncio.run(setup_main_server(transport=args.transport))
print(f"args.transport: {args.transport}")
if args.transport is None:
args.transport = os.getenv("PROWLER_MCP_TRANSPORT_MODE", "stdio")
else:
os.environ["PROWLER_MCP_TRANSPORT_MODE"] = args.transport
from prowler_mcp_server.server import prowler_mcp_server
if args.transport == "stdio":
prowler_mcp_server.run(transport="stdio")
prowler_mcp_server.run(transport=args.transport, show_banner=False)
elif args.transport == "http":
prowler_mcp_server.run(transport="http", host=args.host, port=args.port)
prowler_mcp_server.run(
transport=args.transport,
host=args.host,
port=args.port,
show_banner=False,
)
else:
logger.error(f"Invalid transport: {args.transport}")
except KeyboardInterrupt:
logger.info("Shutting down Prowler MCP server...")
@@ -14,7 +14,7 @@ class ProwlerAppAuth:
def __init__(
self,
mode: str = os.getenv("PROWLER_MCP_MODE", "stdio"),
mode: str = os.getenv("PROWLER_MCP_TRANSPORT_MODE", "stdio"),
base_url: str = os.getenv("PROWLER_API_BASE_URL", "https://api.prowler.com"),
):
self.base_url = base_url.rstrip("/")
@@ -33,7 +33,14 @@ class ProwlerAppAuth:
raise ValueError("Prowler App API key format is incorrect")
def _parse_jwt(self, token: str) -> Optional[Dict]:
"""Parse JWT token and return payload, similar to JS parseJwt function."""
"""Parse JWT token and return payload
Args:
token: JWT token to parse
Returns:
Parsed JWT payload, or None if parsing fails
"""
if not token:
return None
+26 -13
View File
@@ -1,16 +1,16 @@
import asyncio
import os
from fastmcp import FastMCP
from prowler_mcp_server import __version__
from prowler_mcp_server.lib.logger import logger
from starlette.responses import JSONResponse
prowler_mcp_server = FastMCP("prowler-mcp-server")
async def setup_main_server(transport: str) -> FastMCP:
async def setup_main_server():
"""Set up the main Prowler MCP server with all available integrations."""
# Initialize main Prowler MCP server
prowler_mcp_server = FastMCP("prowler-mcp-server")
# Import Prowler Hub tools with prowler_hub_ prefix
try:
logger.info("Importing Prowler Hub server...")
@@ -21,12 +21,10 @@ async def setup_main_server(transport: str) -> FastMCP:
except Exception as e:
logger.error(f"Failed to import Prowler Hub server: {e}")
# Import Prowler App tools with prowler_app_ prefix
try:
logger.info("Importing Prowler App server...")
if os.getenv("PROWLER_MCP_MODE", None) is None:
os.environ["PROWLER_MCP_MODE"] = transport
if not os.path.exists(
os.path.join(os.path.dirname(__file__), "prowler_app", "server.py")
):
@@ -44,6 +42,7 @@ async def setup_main_server(transport: str) -> FastMCP:
except Exception as e:
logger.error(f"Failed to import Prowler App server: {e}")
# Import Prowler Documentation tools with prowler_docs_ prefix
try:
logger.info("Importing Prowler Documentation server...")
from prowler_mcp_server.prowler_documentation.server import docs_mcp_server
@@ -53,9 +52,23 @@ async def setup_main_server(transport: str) -> FastMCP:
except Exception as e:
logger.error(f"Failed to import Prowler Documentation server: {e}")
# Add health check endpoint
@prowler_mcp_server.custom_route("/health", methods=["GET"])
async def health_check(request):
return JSONResponse({"status": "healthy", "service": "prowler-mcp-server"})
return prowler_mcp_server
# Add health check endpoint
@prowler_mcp_server.custom_route("/health", methods=["GET"])
async def health_check(request) -> JSONResponse:
"""Health check endpoint."""
return JSONResponse(
{"status": "healthy", "service": "prowler-mcp-server", "version": __version__}
)
# Get or create the event loop
try:
loop = asyncio.get_running_loop()
# If we have a running loop, schedule the setup as a task
loop.create_task(setup_main_server())
except RuntimeError:
# No running loop, use asyncio.run (for standalone execution)
asyncio.run(setup_main_server())
app = prowler_mcp_server.http_app()
+3
View File
@@ -32,10 +32,13 @@ All notable changes to the **Prowler SDK** are documented in this file.
- Update AWS AppStream service metadata to new format [(#8789)](https://github.com/prowler-cloud/prowler/pull/8789)
- Update AWS API Gateway service metadata to new format [(#8788)](https://github.com/prowler-cloud/prowler/pull/8788)
- Update AWS Athena service metadata to new format [(#8790)](https://github.com/prowler-cloud/prowler/pull/8790)
- Update AWS CloudTrail service metadata to new format [(#8831)](https://github.com/prowler-cloud/prowler/pull/8831)
- Update AWS Auto Scaling service metadata to new format [(#8824)](https://github.com/prowler-cloud/prowler/pull/8824)
- Update AWS Backup service metadata to new format [(#8826)](https://github.com/prowler-cloud/prowler/pull/8826)
- Update AWS CloudFormation service metadata to new format [(#8828)](https://github.com/prowler-cloud/prowler/pull/8828)
- Update AWS Lambda service metadata to new format [(#8825)](https://github.com/prowler-cloud/prowler/pull/8825)
- Update AWS DLM service metadata to new format [(#8860)](https://github.com/prowler-cloud/prowler/pull/8860)
- Update AWS DMS service metadata to new format [(#8861)](https://github.com/prowler-cloud/prowler/pull/8861)
- Update AWS Directory Service service metadata to new format [(#8859)](https://github.com/prowler-cloud/prowler/pull/8859)
- Update AWS CloudFront service metadata to new format [(#8829)](https://github.com/prowler-cloud/prowler/pull/8829)
- Deprecate user authentication for M365 provider [(#8865)](https://github.com/prowler-cloud/prowler/pull/8865)
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -819,18 +819,6 @@
"aws-us-gov": []
}
},
"apptest": {
"regions": {
"aws": [
"ap-southeast-2",
"eu-central-1",
"sa-east-1",
"us-east-1"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"aps": {
"regions": {
"aws": [
@@ -8723,6 +8711,7 @@
"ap-southeast-5",
"ca-central-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-2",
"eu-west-1",
@@ -9207,11 +9196,13 @@
"ap-east-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-5",
"ap-southeast-7",
"ca-central-1",
"eu-central-1",
@@ -12436,7 +12427,12 @@
"workspaces-instances": {
"regions": {
"aws": [
"ap-northeast-2"
"ap-east-1",
"ap-northeast-2",
"ap-southeast-5",
"eu-south-2",
"me-central-1",
"us-east-2"
],
"aws-cn": [],
"aws-us-gov": []
@@ -1,33 +1,38 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_bucket_requires_mfa_delete",
"CheckTitle": "Ensure the S3 bucket CloudTrail bucket requires MFA delete",
"CheckTitle": "CloudTrail trail S3 bucket has MFA delete enabled",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure the S3 bucket CloudTrail bucket requires MFA",
"Risk": "If the S3 bucket CloudTrail bucket does not require MFA, it can be deleted by an attacker.",
"Description": "**CloudTrail log buckets** for actively logging trails are evaluated for **MFA Delete** on the associated S3 bucket. The assessment determines whether `MFA Delete` is configured on the in-account log bucket; *if the bucket resides in another account, its configuration should be verified separately*.",
"Risk": "Without **MFA Delete**, stolen or over-privileged credentials can permanently delete log versions or change versioning, compromising log **integrity** and **availability**. This enables attacker cover-ups, hinders **forensics**, and weakens evidence for investigations.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiFactorAuthenticationDelete.html",
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/CloudTrail/cloudtrail-bucket-mfa-delete-enabled.html"
],
"Remediation": {
"Code": {
"CLI": "aws s3api put-bucket-versioning --bucket DOC-EXAMPLE-BUCKET1 --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa \"SERIAL 123456\"",
"CLI": "aws s3api put-bucket-versioning --bucket <CLOUDTRAIL_BUCKET_NAME> --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa \"<MFA_SERIAL> <MFA_CODE>\"",
"NativeIaC": "",
"Other": "",
"Other": "1. Sign in to the AWS Management Console as the root user with MFA enabled\n2. Open AWS CloudShell (from the top navigation bar)\n3. Run:\n ```bash\n aws s3api put-bucket-versioning --bucket <CLOUDTRAIL_BUCKET_NAME> --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa \"<MFA_SERIAL> <MFA_CODE>\"\n ```",
"Terraform": ""
},
"Recommendation": {
"Text": "Configure MFA Delete for the S3 bucket CloudTrail bucket",
"Url": "https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiFactorAuthenticationDelete.html"
"Text": "Enable `MFA Delete` on the CloudTrail log bucket with versioning enabled. Enforce **least privilege** so only tightly controlled identities can delete or alter logs, and require MFA for such actions. Apply **defense in depth** using a dedicated logging account and log file integrity validation.",
"Url": "https://hub.prowler.com/check/cloudtrail_bucket_requires_mfa_delete"
}
},
"Categories": [
"identity-access",
"forensics-ready"
],
"DependsOn": [],
@@ -1,35 +1,39 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_cloudwatch_logging_enabled",
"CheckTitle": "Ensure CloudTrail trails are integrated with CloudWatch Logs",
"CheckTitle": "CloudTrail trail has delivered logs to CloudWatch Logs in the last 24 hours",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "low",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail trails are integrated with CloudWatch Logs",
"Risk": "Sending CloudTrail logs to CloudWatch Logs will facilitate real-time and historic activity logging based on user, API, resource, and IP address, and provides opportunity to establish alarms and notifications for anomalous or sensitivity account activity.",
"Description": "**CloudTrail trails** are configured to send events to **CloudWatch Logs**, and show recent delivery within the last `24h`. Trails without integration or without recent CloudWatch delivery are identified, across single-Region and multi-Region trails.",
"Risk": "Missing or stale CloudWatch delivery weakens visibility and delays detection, impacting confidentiality and integrity. Adversaries can:\n- Hide **privilege escalation**\n- Perform unauthorized **resource changes**\n- Exfiltrate data via API misuse",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.prowler.com/checks/aws/logging-policies/logging_4#aws-console",
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail update-trail --name <trail_name> --cloudwatch-logs-log-group- arn <cloudtrail_log_group_arn> --cloudwatch-logs-role-arn <cloudtrail_cloudwatchLogs_role_arn>",
"NativeIaC": "",
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_4#aws-console",
"Terraform": ""
"CLI": "aws cloudtrail update-trail --name <trail_name> --cloud-watch-logs-log-group-arn <cloudwatch_log_group_arn> --cloud-watch-logs-role-arn <cloudwatch_logs_role_arn>",
"NativeIaC": "```yaml\n# CloudFormation: enable CloudTrail delivery to CloudWatch Logs\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n S3BucketName: \"<example_resource_name>\"\n CloudWatchLogsLogGroupArn: \"<cloudwatch_log_group_arn>\" # CRITICAL: sends CloudTrail events to CloudWatch Logs\n CloudWatchLogsRoleArn: \"<cloudwatch_logs_role_arn>\" # CRITICAL: role CloudTrail assumes to deliver events\n```",
"Other": "1. In AWS Console, go to CloudTrail > Trails and select the trail\n2. In the CloudWatch Logs section, click Edit\n3. Set CloudWatch Logs to Enabled\n4. Choose an existing Log group (or create new) and select an IAM role with permissions for CreateLogStream/PutLogEvents\n5. Click Save changes\n6. After a few minutes, verify events appear in the chosen CloudWatch Logs log group",
"Terraform": "```hcl\n# Terraform: enable CloudTrail delivery to CloudWatch Logs\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n cloud_watch_logs_group_arn = \"<cloudwatch_log_group_arn>\" # CRITICAL: sends CloudTrail events to CloudWatch Logs\n cloud_watch_logs_role_arn = \"<cloudwatch_logs_role_arn>\" # CRITICAL: role CloudTrail assumes to deliver events\n}\n```"
},
"Recommendation": {
"Text": "Validate that the trails in CloudTrail have an arn set in the CloudWatchLogsLogGroupArn property.",
"Url": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html"
"Text": "Integrate every trail with **CloudWatch Logs** and maintain continuous, near-real-time delivery. Enforce **least privilege** on the delivery role, prefer **multi-Region** coverage, and implement **metric filters and alerts** for sensitive actions. Centralize retention to support **defense in depth**.",
"Url": "https://hub.prowler.com/check/cloudtrail_cloudwatch_logging_enabled"
}
},
"Categories": [
"forensics-ready",
"logging"
"logging",
"forensics-ready"
],
"DependsOn": [],
"RelatedTo": [],
@@ -1,34 +1,39 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_insights_exist",
"CheckTitle": "Ensure CloudTrail Insight is enabled",
"CheckTitle": "CloudTrail trail has Insights enabled",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices/Runtime Behavior Analysis",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "low",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail Insight is enabled",
"Risk": "CloudTrail Insights provides a powerful way to search and analyze CloudTrail log data using pre-built queries and machine learning algorithms. This can help you to identify potential security threats and suspicious activity in near real-time, such as unauthorized access attempts, policy changes, or resource modifications.",
"Description": "**CloudTrail trails** that are logging are evaluated for **Insights** via `insight selectors`, which enable anomaly detection on management-event patterns (API call and error rates). The finding pinpoints logging trails where these selectors are missing.",
"Risk": "Without **Insights**, abnormal API call or error rates can go unnoticed, delaying detection of credential abuse, privilege escalation, or runaway automation. Attackers may rapidly alter policies, delete resources, or exfiltrate data before response, impacting confidentiality and availability.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-insights-events-with-cloudtrail.html",
"https://awscli.amazonaws.com/v2/documentation/api/2.18.18/reference/cloudtrail/put-insight-selectors.html",
"https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudtrail"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cloudtrail put-insight-selectors --trail-name <TRAIL_NAME> --insight-selectors '[{\"InsightType\":\"ApiCallRateInsight\"}]'",
"NativeIaC": "```yaml\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n TrailName: <example_resource_name>\n S3BucketName: <example_resource_name>\n IsLogging: true\n InsightSelectors:\n - InsightType: ApiCallRateInsight # Critical fix: enables CloudTrail Insights on the trail\n```",
"Other": "1. In the AWS Console, go to CloudTrail > Trails\n2. Select the trail that is logging\n3. Click Edit on the CloudTrail Insights section\n4. Enable Insights and select API call rate (or Error rate)\n5. Save changes",
"Terraform": "```hcl\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n enable_logging = true\n\n insight_selector {\n insight_type = \"ApiCallRateInsight\" # Critical fix: enables CloudTrail Insights on the trail\n }\n}\n```"
},
"Recommendation": {
"Text": "Enable CloudTrail Insight",
"Url": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-insights-events-with-cloudtrail.html"
"Text": "Enable **CloudTrail Insights** on all logging trails (ideally all-Region or organization trails). Activate both `ApiCallRateInsight` and `ApiErrorRateInsight`. Integrate alerts with monitoring and review anomalies regularly. Apply **defense in depth** and least privilege to reduce potential blast radius.",
"Url": "https://hub.prowler.com/check/cloudtrail_insights_exist"
}
},
"Categories": [
"forensics-ready"
"threat-detection"
],
"DependsOn": [],
"RelatedTo": [],
@@ -1,34 +1,39 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_kms_encryption_enabled",
"CheckTitle": "Ensure CloudTrail logs are encrypted at rest using KMS CMKs",
"CheckTitle": "CloudTrail trail logs are encrypted at rest with a KMS key",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail logs are encrypted at rest using KMS CMKs",
"Risk": "By default, the log files delivered by CloudTrail to your bucket are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3). To provide a security layer that is directly manageable, you can instead use server-side encryption with AWS KMSmanaged keys (SSE-KMS) for your CloudTrail log files.",
"Description": "**AWS CloudTrail trails** are evaluated for use of **SSE-KMS** with a customer-managed KMS key to encrypt delivered log files at rest in S3. Trails without a configured KMS key are identified. *Applies to single-Region and multi-Region trails.*",
"Risk": "Absent a **customer-managed KMS key**, log protection relies only on storage permissions. Bucket misconfigurations or stolen credentials can expose audit data, aiding evasion and lateral movement. Missing key-level controls, rotation, and usage audit weaken **confidentiality** and **forensic integrity**.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/encrypting-cloudtrail-log-files-with-aws-kms.html",
"https://trendmicro.com/cloudoneconformity/knowledge-base/aws/CloudTrail/cloudtrail-logs-encrypted.html",
"https://www.stream.security/rules/ensure-cloudtrail-logs-are-encrypted-at-rest",
"https://www.clouddefense.ai/compliance-rules/cis-v130/logging/cis-v130-3-7"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail update-trail --name <trail_name> --kms-id <cloudtrail_kms_key> aws kms put-key-policy --key-id <cloudtrail_kms_key> --policy <cloudtrail_kms_key_policy>",
"NativeIaC": "https://docs.prowler.com/checks/aws/logging-policies/logging_7#fix---buildtime",
"Other": "",
"Terraform": ""
"CLI": "aws cloudtrail update-trail --name <trail_name> --kms-key-id <kms_key_arn_or_id>",
"NativeIaC": "```yaml\n# CloudFormation: enable KMS encryption for an existing/new CloudTrail\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n S3BucketName: <example_resource_name>\n KmsKeyId: <example_resource_id> # Critical: sets the KMS key to encrypt CloudTrail logs at rest\n```",
"Other": "1. In the AWS Console, go to CloudTrail > Trails\n2. Select the trail <trail_name>, click Edit\n3. Under Log file encryption, choose Use a KMS key and select <cloudtrail_kms_key>\n4. Click Save changes",
"Terraform": "```hcl\n# Enable KMS encryption for CloudTrail\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n kms_key_id = \"<example_resource_id>\" # Critical: uses this KMS key to encrypt CloudTrail logs\n}\n```"
},
"Recommendation": {
"Text": "This approach has the following advantages: You can create and manage the CMK encryption keys yourself. You can use a single CMK to encrypt and decrypt log files for multiple accounts across all regions. You have control over who can use your key for encrypting and decrypting CloudTrail log files. You can assign permissions for the key to the users. You have enhanced security.",
"Url": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/encrypting-cloudtrail-log-files-with-aws-kms.html"
"Text": "Enable **SSE-KMS** on every trail using a **customer-managed KMS key**. Apply **least privilege** so only authorized roles can `Decrypt`, and enforce **separation of duties** between key admins and log readers. Rotate keys and monitor key usage to provide **defense in depth** for CloudTrail data.",
"Url": "https://hub.prowler.com/check/cloudtrail_kms_encryption_enabled"
}
},
"Categories": [
"forensics-ready",
"encryption"
],
"DependsOn": [],
@@ -1,33 +1,40 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_log_file_validation_enabled",
"CheckTitle": "Ensure CloudTrail log file validation is enabled",
"CheckTitle": "CloudTrail trail has log file validation enabled",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail log file validation is enabled",
"Risk": "Enabling log file validation will provide additional integrity checking of CloudTrail logs. ",
"Description": "**AWS CloudTrail trails** are evaluated for **log file integrity validation** being enabled (`LogFileValidationEnabled`).\n\nWhen enabled, CloudTrail generates signed digest files to verify that S3-delivered log files remain unchanged.",
"Risk": "Without validation, adversaries can alter, forge, or delete audit entries without detection, compromising log **integrity** and non-repudiation.\n\nThis impairs investigations, enables alert evasion, and obscures unauthorized changes across regions or accounts.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html",
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-enabling.html",
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/CloudTrail/cloudtrail-log-file-integrity-validation.html",
"https://deepwiki.com/acantril/learn-cantrill-io-labs/7.1-cloudtrail-log-file-integrity"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail update-trail --name <trail_name> --enable-log-file-validation",
"NativeIaC": "https://docs.prowler.com/checks/aws/logging-policies/logging_2#cloudformation",
"Other": "",
"Terraform": "https://docs.prowler.com/checks/aws/logging-policies/logging_2#terraform"
"CLI": "aws cloudtrail update-trail --name <trail_name> --enable-log-file-validation",
"NativeIaC": "```yaml\n# CloudFormation: Enable log file validation on a CloudTrail trail\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n S3BucketName: <example_resource_name>\n EnableLogFileValidation: true # Critical: enables integrity validation for delivered log files\n```",
"Other": "1. Open the AWS Console and go to CloudTrail\n2. Click Trails and select <trail_name>\n3. Click Edit\n4. In Additional/Advanced settings, check Enable log file validation\n5. Click Save changes",
"Terraform": "```hcl\n# Enable log file validation on a CloudTrail trail\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n enable_log_file_validation = true # Critical: ensures CloudTrail writes signed digests to detect tampering\n}\n```"
},
"Recommendation": {
"Text": "Ensure LogFileValidationEnabled is set to true for each trail.",
"Url": "http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-filevalidation-enabling.html"
"Text": "Enable **log file integrity validation** on all trails (`LogFileValidationEnabled=true`).\n\nEnforce **least privilege** on the logs bucket, retain and protect digest files (e.g., S3 Object Lock/MFA Delete), and monitor validation results to support **defense in depth**.",
"Url": "https://hub.prowler.com/check/cloudtrail_log_file_validation_enabled"
}
},
"Categories": [
"logging",
"forensics-ready"
],
"DependsOn": [],
@@ -1,33 +1,38 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_logs_s3_bucket_access_logging_enabled",
"CheckTitle": "Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket",
"CheckTitle": "CloudTrail trail destination S3 bucket has access logging enabled",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket",
"Risk": "Server access logs can assist you in security and access audits, help you learn about your customer base, and understand your Amazon S3 bill.",
"Description": "CloudTrail trails deliver logs to an S3 bucket; this evaluates whether that bucket has **S3 server access logging** enabled to record requests against it.\n\n*If the destination bucket is outside the account or audit scope, a manual review is indicated.*",
"Risk": "Without access logging on the CloudTrail logs bucket, access and changes to log files lack an independent audit trail. Attackers could read, delete, or replace logs without attribution, undermining **log confidentiality** and **integrity**, and slowing **incident response**.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/securityhub/latest/userguide/cloudtrail-controls.html",
"https://docs.aws.amazon.com/AmazonS3/latest/dev/security-best-practices.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_6#aws-console",
"Terraform": ""
"CLI": "aws s3api put-bucket-logging --bucket <CLOUDTRAIL_BUCKET_NAME> --bucket-logging-status \"{\\\"LoggingEnabled\\\":{\\\"TargetBucket\\\":\\\"<TARGET_BUCKET_NAME>\\\"}}\"",
"NativeIaC": "```yaml\n# CloudFormation: enable S3 access logging on the CloudTrail destination bucket\nResources:\n <example_log_bucket_name>:\n Type: AWS::S3::Bucket\n\n <example_cloudtrail_bucket>:\n Type: AWS::S3::Bucket\n Properties:\n LoggingConfiguration:\n DestinationBucketName: !Ref <example_log_bucket_name> # Critical: turns on server access logging to this destination bucket\n # This enables access logging so the check passes\n```",
"Other": "1. In the AWS Console, go to S3 and open the bucket used by your CloudTrail trail\n2. Select the Properties tab\n3. In Server access logging, click Edit\n4. Enable logging and choose a different destination S3 bucket for the logs\n5. Click Save changes",
"Terraform": "```hcl\n# Enable access logging on the CloudTrail S3 bucket\nresource \"aws_s3_bucket\" \"<example_log_bucket_name>\" {\n bucket = \"<example_log_bucket_name>\"\n}\n\nresource \"aws_s3_bucket\" \"<example_bucket_name>\" {\n bucket = \"<example_bucket_name>\"\n}\n\nresource \"aws_s3_bucket_logging\" \"<example_resource_name>\" {\n bucket = aws_s3_bucket.<example_bucket_name>.id\n target_bucket = aws_s3_bucket.<example_log_bucket_name>.id # Critical: enables server access logging to the target bucket\n}\n```"
},
"Recommendation": {
"Text": "Ensure that S3 buckets have Logging enabled. CloudTrail data events can be used in place of S3 bucket logging. If that is the case, this finding can be considered a false positive.",
"Url": "https://docs.aws.amazon.com/AmazonS3/latest/dev/security-best-practices.html"
"Text": "Enable **S3 server access logging** on the CloudTrail logs bucket and write logs to a separate, tightly controlled bucket. Apply **least privilege**, enable **versioning**, and consider **Object Lock** to deter tampering. Centralize monitoring to support defense-in-depth and rapid investigation.",
"Url": "https://hub.prowler.com/check/cloudtrail_logs_s3_bucket_access_logging_enabled"
}
},
"Categories": [
"logging",
"forensics-ready"
],
"DependsOn": [],
@@ -1,37 +1,45 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_logs_s3_bucket_is_not_publicly_accessible",
"CheckTitle": "Ensure the S3 bucket CloudTrail logs is not publicly accessible",
"CheckTitle": "CloudTrail trail S3 bucket is not publicly accessible",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices/Network Reachability",
"Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Industry and Regulatory Standards/CIS AWS Foundations Benchmark",
"Effects/Data Exposure"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "critical",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure the S3 bucket CloudTrail logs to is not publicly accessible",
"Risk": "Allowing public access to CloudTrail log content may aid an adversary in identifying weaknesses in the affected accounts use or configuration.",
"ResourceType": "AwsS3Bucket",
"Description": "CloudTrail log destination **S3 buckets** are inspected for ACL grants that expose data to the public `AllUsers` group.\n\nBuckets hosted in other accounts are flagged for out-of-scope review.",
"Risk": "Exposed CloudTrail logs erode **confidentiality** and **integrity**.\n\nAdversaries can harvest API activity to map accounts, roles, and keys, enabling **reconnaissance** and evasion. If write is allowed, logs can be **poisoned** or deleted, thwarting investigations and compromising incident timelines.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/CloudTrail/cloudtrail-bucket-publicly-accessible.html",
"https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html",
"https://docs.aws.amazon.com/config/latest/developerguide/cloudtrail-s3-bucket-public-access-prohibited.html",
"https://docs.panther.com/alerts/alert-runbooks/built-in-policies/aws-cloudtrail-logs-s3-bucket-not-publicly-accessible"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_3#aws-console",
"Terraform": ""
"CLI": "aws s3api put-bucket-acl --bucket <example_resource_name> --acl private",
"NativeIaC": "```yaml\n# CloudFormation: ensure the CloudTrail S3 bucket ACL is not public\nResources:\n CloudTrailLogsBucket:\n Type: AWS::S3::Bucket\n Properties:\n BucketName: <example_resource_name>\n AccessControl: Private # CRITICAL: sets bucket ACL to private, removing any AllUsers (public) grants\n```",
"Other": "1. Open the AWS S3 Console\n2. Select the bucket used by CloudTrail\n3. Go to Permissions > Access control list (ACL)\n4. Click Edit under Public access, remove any grants to \"Everyone (public access)\" (uncheck Read/Write)\n5. Save changes",
"Terraform": "```hcl\n# Ensure the CloudTrail S3 bucket ACL is private\nresource \"aws_s3_bucket_acl\" \"fix_cloudtrail_logs_bucket\" {\n bucket = \"<example_resource_name>\"\n acl = \"private\" # CRITICAL: removes any public (AllUsers) ACL grants\n}\n```"
},
"Recommendation": {
"Text": "Analyze Bucket policy to validate appropriate permissions. Ensure the AllUsers principal is not granted privileges. Ensure the AuthenticatedUsers principal is not granted privileges.",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html"
"Text": "Apply **least privilege** to the log bucket:\n- Enable S3 `Block Public Access` (account and bucket)\n- Remove `AllUsers`/`AuthenticatedUsers` ACLs; avoid wildcard principals\n- Permit only CloudTrail and constrain with `aws:SourceArn`\n\nUse a dedicated private bucket and monitor for permission changes.",
"Url": "https://hub.prowler.com/check/cloudtrail_logs_s3_bucket_is_not_publicly_accessible"
}
},
"Categories": [
"forensics-ready",
"internet-exposed"
],
"DependsOn": [],
"DependsOn": [
"s3_bucket_public_access"
],
"RelatedTo": [],
"Notes": ""
}
@@ -1,33 +1,37 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_multi_region_enabled",
"CheckTitle": "Ensure CloudTrail is enabled in all regions",
"CheckTitle": "Region has at least one CloudTrail trail logging",
"CheckType": [
"Software and Configuration Checks",
"Industry and Regulatory Standards",
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail is enabled in all regions",
"Risk": "AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service.",
"Description": "**AWS CloudTrail** has at least one trail with `logging` enabled in every region. A **multi-region trail** or a regional trail counts for coverage in that region.",
"Risk": "Missing coverage in any region creates **visibility gaps**.\n\nAttackers can use lesser-monitored regions to run API actions, hide **unauthorized changes**, and exfiltrate data without audit trails, weakening **detective controls**, hindering **forensics**, and delaying response (confidentiality and integrity).",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrailconcepts.html#cloudtrail-concepts-management-events"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail create-trail --name <trail_name> --bucket-name <s3_bucket_for_cloudtrail> --is-multi-region-trail aws cloudtrail update-trail --name <trail_name> --is-multi-region-trail ",
"NativeIaC": "https://docs.prowler.com/checks/aws/logging-policies/logging_1#cloudformation",
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_1#aws-console",
"Terraform": "https://docs.prowler.com/checks/aws/logging-policies/logging_1#terraform"
"CLI": "",
"NativeIaC": "```yaml\n# CloudFormation: Create a multi-region CloudTrail and start logging\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n TrailName: <example_resource_name>\n S3BucketName: <example_resource_name>\n IsMultiRegionTrail: true # Critical: applies the trail to all regions\n IsLogging: true # Critical: ensures the trail is logging\n```",
"Other": "1. In the AWS Console, go to CloudTrail > Trails\n2. If no trail exists: Click Create trail, enter a name, choose an S3 bucket, set Apply trail to all regions = Yes, then Create (logging starts)\n3. If a trail exists: Select it, click Edit, set Apply trail to all regions = Yes, Save\n4. If Status shows Not logging, click Start logging",
"Terraform": "```hcl\n# Terraform: Multi-region CloudTrail with logging enabled\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n\n is_multi_region_trail = true # Critical: applies the trail to all regions\n enable_logging = true # Critical: ensures the trail is logging\n}\n```"
},
"Recommendation": {
"Text": "Ensure Logging is set to ON on all regions (even if they are not being used at the moment.",
"Url": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrailconcepts.html#cloudtrail-concepts-management-events"
"Text": "Use a **multi-region CloudTrail trail** or per-region trails so `logging` is active in every region, including unused ones.\n\nCentralize logs, enforce **least privilege** to log stores, and add **defense-in-depth** with encryption, integrity validation, and retention. Continuously monitor trail health to catch gaps.",
"Url": "https://hub.prowler.com/check/cloudtrail_multi_region_enabled"
}
},
"Categories": [
"logging",
"forensics-ready"
],
"DependsOn": [],
@@ -1,31 +1,38 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_multi_region_enabled_logging_management_events",
"CheckTitle": "Ensure CloudTrail logging management events in All Regions",
"CheckTitle": "CloudTrail trail logs management events for read and write operations",
"CheckType": [
"CIS AWS Foundations Benchmark"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "low",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail logging management events in All Regions",
"Risk": "AWS CloudTrail enables governance, compliance, operational auditing, and risk auditing of your AWS account. To meet FTR requirements, you must have management events enabled for all AWS accounts and in all regions and aggregate these logs into an Amazon Simple Storage Service (Amazon S3) bucket owned by a separate AWS account.",
"RelatedUrl": "https://docs.prowler.com/checks/aws/logging-policies/logging_14",
"Description": "**CloudTrail trails** record **management events** (`read` and `write`) in every AWS region and are actively logging, using a multi-region trail or per-region coverage.",
"Risk": "Without region-wide management event logging, changes to identities, networking, and audit settings can go untracked.\n\nAdversaries can operate in overlooked regions to create resources, modify permissions, or disable logging, undermining **integrity**, **confidentiality**, and incident response.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.prowler.com/checks/aws/logging-policies/logging_14#terraform",
"https://docs.prowler.com/checks/aws/logging-policies/logging_14"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail update-trail --name <trail_name> --is-multi-region-trail",
"NativeIaC": "",
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_14",
"Terraform": "https://docs.prowler.com/checks/aws/logging-policies/logging_14#terraform"
"CLI": "",
"NativeIaC": "```yaml\n# CloudFormation: enable multi-region and log management events (read & write)\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n S3BucketName: <example_resource_name>\n IsMultiRegionTrail: true # CRITICAL: apply the trail to all regions\n EventSelectors:\n - IncludeManagementEvents: true # CRITICAL: log management events\n ReadWriteType: All # CRITICAL: log both read and write\n```",
"Other": "1. In the AWS Console, go to CloudTrail > Trails and select your trail\n2. Click Edit\n3. Set Apply trail to all regions to Yes\n4. Under Management events, set Read/write events to All\n5. Click Save changes\n6. If Logging is Off, click Start logging",
"Terraform": "```hcl\n# Terraform: enable multi-region and log management events (read & write)\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n\n is_multi_region_trail = true # CRITICAL: apply the trail to all regions\n\n event_selector {\n include_management_events = true # CRITICAL: log management events\n read_write_type = \"All\" # CRITICAL: log both read & write\n }\n}\n```"
},
"Recommendation": {
"Text": "Enable CloudTrail logging management events in All Regions",
"Url": "https://docs.prowler.com/checks/aws/logging-policies/logging_14"
"Text": "Enable a **multi-region CloudTrail** that logs **management events** for `read` and `write` in all regions.\n\nCentralize logs in a separate, locked-down account; apply **least privilege**, encryption, retention, and integrity validation; and protect trails and storage with tamper-evident, deny-delete controls for **defense-in-depth**.",
"Url": "https://hub.prowler.com/check/cloudtrail_multi_region_enabled_logging_management_events"
}
},
"Categories": [
"logging",
"forensics-ready"
],
"DependsOn": [],
@@ -1,31 +1,41 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_s3_dataevents_read_enabled",
"CheckTitle": "Check if S3 buckets have Object-level logging for read events is enabled in CloudTrail.",
"CheckTitle": "CloudTrail trail records S3 object-level read events for all S3 buckets",
"CheckType": [
"Logging and Monitoring"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "low",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure that all your AWS CloudTrail trails are configured to log Data events in order to record S3 object-level API operations, such as GetObject, DeleteObject and PutObject, for individual S3 buckets or for all current and future S3 buckets provisioned in your AWS account.",
"Risk": "If logs are not enabled, monitoring of service use and threat analysis is not possible.",
"Description": "**CloudTrail trails** log **S3 object-level read data events** for all buckets, capturing object access (for example `GetObject`) via selectors targeting `AWS::S3::Object`",
"Risk": "Without **object-level read logging**, S3 access is opaque. Attackers or insiders can exfiltrate data via `GetObject` without audit trails, eroding **confidentiality** and hindering **forensics**, anomaly detection, and incident response.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://awswala.medium.com/enable-cloudtrail-data-events-logging-for-objects-in-an-s3-bucket-33cade51ae2b",
"https://docs.aws.amazon.com/securityhub/latest/userguide/s3-controls.html#s3-23",
"https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-cloudtrail-logging-for-s3.html",
"https://www.plerion.com/cloud-knowledge-base/ensure-object-level-logging-for-read-events-enabled-for-s3-bucket"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail put-event-selectors --trail-name <YOUR_TRAIL_NAME_HERE> --event-selectors '[{ 'ReadWriteType': 'ReadOnly', 'IncludeManagementEvents':true, 'DataResources': [{ 'Type': 'AWS::S3::Object', 'Values': ['arn:aws:s3'] }] }]'",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/s3-controls.html#s3-23",
"Terraform": ""
"CLI": "aws cloudtrail put-event-selectors --trail-name <example_resource_name> --event-selectors '[{\"ReadWriteType\":\"ReadOnly\",\"DataResources\":[{\"Type\":\"AWS::S3::Object\",\"Values\":[\"arn:aws:s3\"]}]}]'",
"NativeIaC": "```yaml\n# CloudFormation: enable S3 object-level READ data events for all buckets on a trail\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n S3BucketName: <example_resource_name>\n EventSelectors:\n - ReadWriteType: ReadOnly # CRITICAL: log read-only data events\n DataResources:\n - Type: AWS::S3::Object # CRITICAL: target S3 object-level events\n Values:\n - arn:aws:s3 # CRITICAL: applies to all S3 buckets/objects\n```",
"Other": "1. In the AWS Console, open CloudTrail and select Trails\n2. Open your trail and go to the Data events section\n3. Add data event for S3 and choose All current and future S3 buckets\n4. Select only Read events (or All if Read-only is unavailable)\n5. Save changes",
"Terraform": "```hcl\n# Terraform: enable S3 object-level READ data events for all buckets on a trail\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n\n event_selector {\n read_write_type = \"ReadOnly\" # CRITICAL: log read-only data events\n data_resource {\n type = \"AWS::S3::Object\" # CRITICAL: target S3 object-level events\n values = [\"arn:aws:s3\"] # CRITICAL: apply to all S3 buckets/objects\n }\n }\n}\n```"
},
"Recommendation": {
"Text": "Enable logs. Create an S3 lifecycle policy. Define use cases, metrics and automated responses where applicable.",
"Url": "https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-cloudtrail-logging-for-s3.html"
"Text": "Enable CloudTrail **data events** for S3 objects with `ReadOnly` (or `All`) across all current and future buckets. Use a multi-Region trail, centralize logs in an encrypted bucket with lifecycle retention, and integrate monitoring/alerts to support **defense in depth** and accountable access.",
"Url": "https://hub.prowler.com/check/cloudtrail_s3_dataevents_read_enabled"
}
},
"Categories": [],
"Categories": [
"logging",
"forensics-ready"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,31 +1,41 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_s3_dataevents_write_enabled",
"CheckTitle": "Check if S3 buckets have Object-level logging for write events is enabled in CloudTrail.",
"CheckTitle": "CloudTrail trail records all S3 object-level API operations for all buckets",
"CheckType": [
"Logging and Monitoring"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "low",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure that all your AWS CloudTrail trails are configured to log Data events in order to record S3 object-level API operations, such as GetObject, DeleteObject and PutObject, for individual S3 buckets or for all current and future S3 buckets provisioned in your AWS account.",
"Risk": "If logs are not enabled, monitoring of service use and threat analysis is not possible.",
"Description": "**CloudTrail trails** include **S3 object-level data events** for **write (or all) operations** across **all current and future buckets**, via classic or advanced selectors. This records actions like `PutObject`, `DeleteObject`, and multipart uploads at the object level.",
"Risk": "Without object-level write logging, unauthorized or accidental changes and deletions can go unobserved, undermining data **integrity** and **availability**. Forensics lose visibility into who modified or removed objects, hindering detection of ransomware, rogue automation, or insider tampering.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html",
"https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-cloudtrail-logging-for-s3.html",
"https://www.go2share.net/article/s3-bucket-logging",
"https://docs.amazonaws.cn/en_us/AmazonS3/latest/userguide/cloudtrail-logging-s3-info.html",
"https://docs.aws.amazon.com/securityhub/latest/userguide/s3-controls.html#s3-22"
],
"Remediation": {
"Code": {
"CLI": "aws cloudtrail put-event-selectors --trail-name <YOUR_TRAIL_NAME_HERE> --event-selectors '[{ 'ReadWriteType': 'WriteOnly', 'IncludeManagementEvents':true, 'DataResources': [{ 'Type': 'AWS::S3::Object', 'Values': ['arn:aws:s3'] }] }]'",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/s3-controls.html#s3-22",
"Terraform": ""
"CLI": "aws cloudtrail put-event-selectors --trail-name <example_resource_name> --event-selectors '[{\"ReadWriteType\":\"WriteOnly\",\"DataResources\":[{\"Type\":\"AWS::S3::Object\",\"Values\":[\"arn:aws:s3\"]}]}]'",
"NativeIaC": "```yaml\nResources:\n <example_resource_name>:\n Type: AWS::CloudTrail::Trail\n Properties:\n TrailName: <example_resource_name>\n S3BucketName: <example_resource_name>\n EventSelectors:\n - ReadWriteType: WriteOnly\n DataResources:\n - Type: AWS::S3::Object\n Values:\n - arn:aws:s3 # Critical: enables S3 object-level write data events for all buckets, fixing the check\n```",
"Other": "1. In the AWS Console, open CloudTrail and go to Trails\n2. Select <your trail> and click Edit under Data events\n3. For Data event source, choose S3\n4. Select All current and future S3 buckets\n5. Check Write events (or All events)\n6. Click Save changes",
"Terraform": "```hcl\nresource \"aws_cloudtrail\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n s3_bucket_name = \"<example_resource_name>\"\n\n event_selector {\n read_write_type = \"WriteOnly\"\n data_resource {\n type = \"AWS::S3::Object\"\n values = [\"arn:aws:s3\"] # Critical: logs S3 object-level write events for all buckets to pass the check\n }\n }\n}\n```"
},
"Recommendation": {
"Text": "Enable logs. Create an S3 lifecycle policy. Define use cases, metrics and automated responses where applicable.",
"Url": "https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-cloudtrail-logging-for-s3.html"
"Text": "Enable **CloudTrail S3 data events** for object-level **write** (and *optionally* read) across all buckets on a multi-Region trail. Apply **least privilege** to log storage, set **lifecycle** retention, and integrate alerts. Use **advanced selectors** to target sensitive buckets/operations for cost control and **defense in depth**.",
"Url": "https://hub.prowler.com/check/cloudtrail_s3_dataevents_write_enabled"
}
},
"Categories": [
"logging",
"forensics-ready"
],
"DependsOn": [],
@@ -1,26 +1,37 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_threat_detection_enumeration",
"CheckTitle": "Ensure there are no potential enumeration threats in CloudTrail",
"CheckType": [],
"CheckTitle": "CloudTrail logs show no potential enumeration activity",
"CheckType": [
"TTPs/Discovery",
"Software and Configuration Checks/AWS Security Best Practices/Runtime Behavior Analysis",
"Unusual Behaviors/User"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "critical",
"ResourceType": "AwsCloudTrailTrail",
"Description": "This check ensures that there are no potential enumeration threats in CloudTrail.",
"Risk": "Potential enumeration threats in CloudTrail can lead to unauthorized access to resources.",
"Description": "**CloudTrail activity** is analyzed for AWS identities executing a broad mix of discovery APIs like `List*`, `Describe*`, and `Get*` within a recent time window.\n\nAn identity exceeding a configurable ratio of these actions indicates potential enumeration behavior by that principal.",
"Risk": "Concentrated discovery activity signals **reconnaissance** with valid credentials. Adversaries can map assets and policies to enable **privilege escalation**, target data stores for **exfiltration** (confidentiality), and identify services to disrupt (availability), supporting stealthy lateral movement.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://medium.com/falconforce/falconfriday-detecting-enumeration-in-aws-0xff25-orangecon-25-edition-4aee83651088",
"https://www.elastic.co/guide/en/security/8.19/aws-discovery-api-calls-via-cli-from-a-single-resource.html",
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-logging-data-events",
"https://aws.plainenglish.io/aws-cloudtrail-event-cheatsheet-a-detection-engineers-guide-to-critical-api-calls-part-1-04fb1588556f",
"https://support.icompaas.com/support/solutions/articles/62000233455-ensure-there-are-no-potential-enumeration-threats-in-cloudtrail-"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws iam update-access-key --user-name <USER_NAME> --access-key-id <ACCESS_KEY_ID> --status Inactive",
"NativeIaC": "```yaml\n# CloudFormation: deny common enumeration APIs for a specific IAM user\nResources:\n DenyEnumerationPolicy:\n Type: AWS::IAM::Policy\n Properties:\n PolicyName: deny-enumeration\n PolicyDocument:\n Version: \"2012-10-17\"\n Statement:\n - Effect: Deny # CRITICAL: blocks typical enumeration calls\n Action:\n - ec2:Describe* # CRITICAL: deny EC2 describe APIs\n - iam:List* # CRITICAL: deny IAM list APIs\n - s3:List* # CRITICAL: deny S3 list APIs\n - s3:Get* # CRITICAL: deny S3 get APIs (e.g., GetBucketAcl)\n Resource: \"*\"\n Users:\n - \"<example_resource_name>\" # CRITICAL: target the enumerating user\n```",
"Other": "1. In AWS Console, go to IAM > Users and open the user shown in the alert (ARN in the finding)\n2. Select the Security credentials tab\n3. For each active Access key, click Deactivate to set status to Inactive\n4. If the activity came from an EC2 instance role: go to EC2 > Instances > select the instance > Security > IAM role > Detach IAM role\n5. Re-run the check to confirm no new enumeration events occur",
"Terraform": "```hcl\n# Deny common enumeration APIs for a specific IAM user\nresource \"aws_iam_user_policy\" \"<example_resource_name>\" {\n name = \"deny-enumeration\"\n user = \"<example_user_name>\"\n\n policy = jsonencode({\n Version = \"2012-10-17\",\n Statement = [{\n Effect = \"Deny\", # CRITICAL: blocks typical enumeration calls\n Action = [\n \"ec2:Describe*\", # CRITICAL\n \"iam:List*\", # CRITICAL\n \"s3:List*\", # CRITICAL\n \"s3:Get*\" # CRITICAL\n ],\n Resource = \"*\"\n }]\n })\n}\n```"
},
"Recommendation": {
"Text": "To remediate this issue, ensure that there are no potential enumeration threats in CloudTrail.",
"Url": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-logging-data-events"
"Text": "Apply **least privilege** to limit `List*`/`Describe*`/`Get*` to necessary resources and roles; use **separation of duties**.\n- Enforce MFA and short-lived sessions\n- Use **SCPs** to curb unnecessary discovery\n- Baseline expected reads and alert on spikes as **defense in depth**",
"Url": "https://hub.prowler.com/check/cloudtrail_threat_detection_enumeration"
}
},
"Categories": [
@@ -1,30 +1,43 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_threat_detection_llm_jacking",
"CheckTitle": "Ensure there are no potential LLM Jacking threats in CloudTrail.",
"CheckType": [],
"CheckTitle": "No potential LLM jacking activity detected in CloudTrail",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices/Runtime Behavior Analysis",
"TTPs/Discovery",
"TTPs/Execution",
"TTPs/Defense Evasion",
"Effects/Resource Consumption",
"Unusual Behaviors/User"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "critical",
"ResourceType": "AwsCloudTrailTrail",
"Description": "This check ensures that there are no potential LLM Jacking threats in CloudTrail. LLM Jacking attacks involve unauthorized access to cloud-hosted large language model (LLM) services, such as AWS Bedrock, by exploiting exposed credentials or vulnerabilities. These attacks can lead to resource hijacking, unauthorized model invocations, and high operational costs for the victim organization.",
"Risk": "Potential LLM Jacking threats in CloudTrail can lead to unauthorized access to sensitive AI models, stolen credentials, resource hijacking, or running costly workloads. Attackers may use reverse proxies or malicious credentials to sell access to models, exfiltrate sensitive data, or disrupt business operations.",
"RelatedUrl": "https://sysdig.com/blog/llmjacking-stolen-cloud-credentials-used-in-new-ai-attack/",
"Description": "**CloudTrail Bedrock activity** is analyzed per identity for a high diversity of LLM-related API calls (e.g., `InvokeModel`, `InvokeModelWithResponseStream`, `GetFoundationModelAvailability`). *If an identity's share of these actions exceeds a configured threshold over a recent window*, it is surfaced as potential **LLM-jacking** behavior.",
"Risk": "Such patterns suggest **stolen credential** abuse to drive LLM usage.\n- Availability: cost exhaustion and service disruption\n- Confidentiality: leakage of prompts/outputs and model settings\n- Integrity: misuse of permissions for broader access\nAttackers may use reverse proxies to resell access and obfuscate sources.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://furkangungor.medium.com/automating-anomaly-detection-in-aws-cloudtrail-logs-4efb2ad9b958",
"https://help.sumologic.com/docs/integrations/amazon-aws/amazon-bedrock/",
"https://dzone.com/articles/ai-powered-aws-cloudtrail-analysis-strands-agent-bedrock"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"NativeIaC": "```yaml\n# CloudFormation SCP that blocks all Amazon Bedrock actions to stop LLM jacking\nResources:\n <example_resource_name>:\n Type: AWS::Organizations::Policy\n Properties:\n Name: <example_resource_name>\n Type: SERVICE_CONTROL_POLICY\n TargetIds:\n - \"<example_resource_id>\" # CRITICAL: Attach SCP to the root/OU/account to enforce the deny\n Content:\n Version: \"2012-10-17\"\n Statement:\n - Sid: DenyBedrock\n Effect: Deny\n Action: \"bedrock:*\" # CRITICAL: Denies all Bedrock APIs (Invoke/Converse/list/entitlements/etc.)\n Resource: \"*\" # CRITICAL: Apply deny to all resources\n```",
"Other": "1. In the AWS Console, go to Organizations > Policies > Service control policies\n2. Click Create policy\n3. Set Name to <example_resource_name>\n4. In Policy, paste a deny for Bedrock:\n {\n \"Version\": \"2012-10-17\",\n \"Statement\": [{\"Sid\":\"DenyBedrock\",\"Effect\":\"Deny\",\"Action\":\"bedrock:*\",\"Resource\":\"*\"}]\n }\n5. Save the policy and click Attach\n6. Select the target (Root, OU, or the affected account ID <example_resource_id>) and attach the policy\n7. Wait for propagation; no further Bedrock calls will occur, and the finding will clear after the detection window elapses",
"Terraform": "```hcl\n# SCP denying all Amazon Bedrock actions; attach it to the root/OU/account to halt LLM jacking\nresource \"aws_organizations_policy\" \"main\" {\n name = \"<example_resource_name>\"\n type = \"SERVICE_CONTROL_POLICY\"\n\n content = jsonencode({\n Version = \"2012-10-17\"\n Statement = [{\n Sid = \"DenyBedrock\"\n Effect = \"Deny\"\n Action = \"bedrock:*\" // CRITICAL: blocks all Bedrock APIs (prevents further suspicious activity)\n Resource = \"*\" // CRITICAL: deny across all resources\n }]\n })\n}\n\nresource \"aws_organizations_policy_attachment\" \"attach\" {\n policy_id = aws_organizations_policy.main.id\n target_id = \"<example_resource_id>\" // CRITICAL: attach to the affected account/OU/root to enforce the deny\n}\n```"
},
"Recommendation": {
"Text": "To remediate this issue, enable detailed CloudTrail logging for Bedrock API calls, monitor suspicious activities, and secure sensitive credentials. Enable logging of model invocation inputs and outputs, and restrict access using IAM policies. Review CloudTrail logs regularly for suspicious `InvokeModel` actions or unauthorized access to models.",
"Url": "https://permiso.io/blog/exploiting-hosted-models"
"Text": "Apply **least privilege** to Bedrock; restrict `Invoke*` only to required roles and deny broadly via **SCPs** where unused. Enforce **MFA** and short-lived creds; rotate/remove exposed keys. Enable **model invocation logging** and budgets/quotas. Continuously monitor for Bedrock enumeration plus invoke bursts. Use **defense in depth** across identities and networks.",
"Url": "https://hub.prowler.com/check/cloudtrail_threat_detection_llm_jacking"
}
},
"Categories": [
"threat-detection"
"threat-detection",
"gen-ai"
],
"DependsOn": [],
"RelatedTo": [],
@@ -1,26 +1,34 @@
{
"Provider": "aws",
"CheckID": "cloudtrail_threat_detection_privilege_escalation",
"CheckTitle": "Ensure there are no potential privilege escalation threats in CloudTrail",
"CheckType": [],
"CheckTitle": "No potential privilege escalation activity detected in CloudTrail",
"CheckType": [
"TTPs/Privilege Escalation",
"Software and Configuration Checks/AWS Security Best Practices/Runtime Behavior Analysis"
],
"ServiceName": "cloudtrail",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"ResourceIdTemplate": "",
"Severity": "critical",
"ResourceType": "AwsCloudTrailTrail",
"Description": "This check ensures that there are no potential privilege escalation threats in CloudTrail.",
"Risk": "Potential privilege escalation threats in CloudTrail can lead to unauthorized access to resources.",
"Description": "**CloudTrail** activity is analyzed for **identities** executing high-risk actions linked to **privilege escalation** (e.g., `Attach*Policy`, `PassRole`, `AssumeRole`, `CreateAccessKey`). Identities exceeding a configurable share of such events within a *recent time window* are highlighted for investigation.",
"Risk": "Escalation patterns can grant elevated entitlements, enabling:\n- Confidentiality loss via unauthorized data/secret access\n- Integrity compromise by changing IAM policies/roles\n- Availability impact by tampering with logging or resources\nThis also facilitates lateral movement and persistence.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/",
"https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-logging-data-events",
"https://signmycode.com/blog/what-is-privilege-escalation-in-aws-recommendations-to-prevent-it"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"NativeIaC": "```yaml\n# CloudFormation: Organization SCP to block common IAM privilege-escalation actions\nResources:\n <example_resource_name>:\n Type: AWS::Organizations::Policy\n Properties:\n Name: deny-iam-privesc\n Type: SERVICE_CONTROL_POLICY\n # Critical: This SCP denies risky IAM actions often used for privilege escalation\n # Explanation: Denying these actions organization-wide prevents future privesc activity detected by CloudTrail\n Content: |\n {\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Deny\",\n \"Action\": [\n \"iam:AttachUserPolicy\",\n \"iam:AttachRolePolicy\",\n \"iam:PutUserPolicy\",\n \"iam:PutRolePolicy\",\n \"iam:PutGroupPolicy\",\n \"iam:AddUserToGroup\",\n \"iam:CreateAccessKey\",\n \"iam:CreateLoginProfile\",\n \"iam:UpdateLoginProfile\",\n \"iam:UpdateAssumeRolePolicy\",\n \"iam:CreatePolicyVersion\",\n \"iam:SetDefaultPolicyVersion\",\n \"iam:PassRole\"\n ],\n \"Resource\": \"*\"\n }\n ]\n }\n <example_resource_name>Attachment:\n Type: AWS::Organizations::PolicyAttachment\n Properties:\n # Critical: Attach the SCP so it is enforced\n PolicyId: !Ref <example_resource_name>\n TargetId: <example_resource_id> # OU, Root, or Account ID\n```",
"Other": "1. In AWS Console, open IAM and identify the AWS identity shown in the Prowler finding (user or role ARN)\n2. If it is an IAM user:\n - Go to Security credentials > Access keys, set active keys to Inactive\n - Go to Permissions, detach all managed policies and delete inline policies\n - Go to Groups, remove the user from privileged groups\n - Go to Console password, delete the login profile\n3. If it is an IAM role:\n - Go to Permissions, detach managed policies and delete inline policies\n - Go to Trust relationships, remove principals that should not assume the role and save\n4. Re-run the scan after the detection window elapses to confirm no further privilege-escalation activity is detected",
"Terraform": "```hcl\n# SCP to block common IAM privilege-escalation actions\nresource \"aws_organizations_policy\" \"<example_resource_name>\" {\n name = \"deny-iam-privesc\"\n type = \"SERVICE_CONTROL_POLICY\"\n\n # Critical: Deny risky IAM actions to prevent future privesc\n # Explanation: Blocks escalation techniques commonly seen in CloudTrail\n content = jsonencode({\n Version = \"2012-10-17\",\n Statement = [\n {\n Effect = \"Deny\",\n Action = [\n \"iam:AttachUserPolicy\",\n \"iam:AttachRolePolicy\",\n \"iam:PutUserPolicy\",\n \"iam:PutRolePolicy\",\n \"iam:PutGroupPolicy\",\n \"iam:AddUserToGroup\",\n \"iam:CreateAccessKey\",\n \"iam:CreateLoginProfile\",\n \"iam:UpdateLoginProfile\",\n \"iam:UpdateAssumeRolePolicy\",\n \"iam:CreatePolicyVersion\",\n \"iam:SetDefaultPolicyVersion\",\n \"iam:PassRole\"\n ],\n Resource = \"*\"\n }\n ]\n })\n}\n\nresource \"aws_organizations_policy_attachment\" \"<example_resource_name>_attach\" {\n # Critical: Attach the SCP so it takes effect\n policy_id = aws_organizations_policy.<example_resource_name>.id\n target_id = \"<example_resource_id>\" # OU, Root, or Account ID\n}\n```"
},
"Recommendation": {
"Text": "To remediate this issue, ensure that there are no potential privilege escalation threats in CloudTrail.",
"Url": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-logging-data-events"
"Text": "Apply **least privilege** and **defense in depth**:\n- Restrict `PassRole`, `Attach*Policy`, `UpdateAssumeRolePolicy`, `CreateAccessKey`\n- Enforce permission boundaries and SCPs\n- Require MFA and change approvals\n- Use multi-Region CloudTrail, immutable retention, and alerting on anomalous sequences",
"Url": "https://hub.prowler.com/check/cloudtrail_threat_detection_privilege_escalation"
}
},
"Categories": [
@@ -1,28 +1,34 @@
{
"Provider": "aws",
"CheckID": "dlm_ebs_snapshot_lifecycle_policy_exists",
"CheckTitle": "Ensure EBS Snapshot lifecycle policies are defined.",
"CheckTitle": "Region with EBS snapshots has at least one EBS snapshot lifecycle policy defined",
"CheckType": [
"Data Protection"
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices"
],
"ServiceName": "dlm",
"SubServiceName": "ebs",
"ResourceIdTemplate": "arn:aws:iam::account-id:resource-id",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "Other",
"Description": "Ensure EBS Snapshot lifecycle policies are defined.",
"Risk": "With AWS DLM service, you can manage the lifecycle of your EBS volume snapshots. By automating the EBS volume backup management using lifecycle policies, you can protect your EBS data by enforcing a regular backup schedule, retain backups as required by auditors or internal compliance.",
"RelatedUrl": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html#dlm-elements",
"Description": "**EBS snapshots** are expected to be governed by **Data Lifecycle Manager (DLM) policies** in each Region where snapshots exist.\n\nThe evaluation looks for lifecycle policies that automate snapshot creation, retention, and cleanup for those snapshots.",
"Risk": "Without **automated lifecycle policies**, backups become inconsistent and error-prone, reducing availability and weakening recovery objectives. Missing retention rules cause premature deletion or snapshot sprawl, increasing cost and exposing stale data. Lack of cross-Region/account copies limits resilience to regional outages and malicious deletion.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/DLM/ebs-snapshot-automation.html",
"https://repost.aws/articles/ARmYgZmA8MRQi89pWd9D7eFw/how-to-create-a-automate-backup-aws-data-lifecycle-management-using-snapshots",
"https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html#dlm-elements"
],
"Remediation": {
"Code": {
"CLI": "aws dlm create-lifecycle-policy --region <region> --execution-role-arn <execution-role-arn> --description <description> --state ENABLED --policy-details file://lifecycle-policy-config.json",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/DLM/ebs-snapshot-automation.html",
"Terraform": ""
"CLI": "aws dlm create-lifecycle-policy --region <region> --execution-role-arn <execution-role-arn> --description \"<description>\" --state ENABLED --policy-details '{\"PolicyType\":\"EBS_SNAPSHOT_MANAGEMENT\",\"ResourceTypes\":[\"VOLUME\"],\"TargetTags\":[{\"Key\":\"<tag_key>\",\"Value\":\"<tag_value>\"}],\"Schedules\":[{\"CreateRule\":{\"Interval\":24,\"IntervalUnit\":\"HOURS\"},\"RetainRule\":{\"Count\":1}}]}'",
"NativeIaC": "```yaml\n# CloudFormation: minimal EBS snapshot lifecycle policy\nResources:\n <example_resource_name>:\n Type: AWS::DLM::LifecyclePolicy\n Properties:\n Description: \"<description>\"\n ExecutionRoleArn: \"<example_resource_arn>\"\n State: ENABLED # Critical: enables the policy so it is counted by the check\n PolicyDetails:\n PolicyType: EBS_SNAPSHOT_MANAGEMENT # Critical: creates an EBS snapshot lifecycle policy\n ResourceTypes: [VOLUME]\n TargetTags:\n - Key: \"<tag_key>\" # Critical: selects target volumes by tag\n Value: \"<tag_value>\"\n Schedules:\n - CreateRule:\n Interval: 24\n IntervalUnit: HOURS\n RetainRule:\n Count: 1\n```",
"Other": "1. In the AWS console, switch to the Region that has EBS snapshots\n2. Open EC2 > Lifecycle Manager (DLM) > Create lifecycle policy\n3. Select EBS snapshot policy; Target resource: Volumes\n4. Add Target tags: Key = <tag_key>, Value = <tag_value>\n5. Set Schedule: Create every 24 hours; Retain 1 snapshot\n6. Ensure State is Enabled and click Create policy",
"Terraform": "```hcl\n# Terraform: minimal EBS snapshot lifecycle policy\nresource \"aws_dlm_lifecycle_policy\" \"<example_resource_name>\" {\n description = \"<description>\"\n execution_role_arn = \"<example_resource_arn>\"\n state = \"ENABLED\" # Critical: enables the policy so it is counted by the check\n\n policy_details {\n policy_type = \"EBS_SNAPSHOT_MANAGEMENT\" # Critical: creates an EBS snapshot lifecycle policy\n resource_types = [\"VOLUME\"]\n target_tags = {\n \"<tag_key>\" = \"<tag_value>\" # Critical: selects target volumes by tag\n }\n schedule {\n create_rule {\n interval = 24\n interval_unit = \"HOURS\"\n }\n retain_rule {\n count = 1\n }\n }\n }\n}\n```"
},
"Recommendation": {
"Text": "To use Amazon Data Lifecycle Manager (DLM) service to manage the lifecycle of your EBS volume snapshots, you have to tag your AWS EBS volumes and create data lifecycle policies via Amazon DLM.",
"Url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html#dlm-elements"
"Text": "Implement **DLM lifecycle policies** for all volumes that require backup.\n\n- Schedule creations to meet RPO/RTO\n- Define retention to prevent sprawl and enforce least data exposure\n- Use **least privilege** roles and separation of duties\n- Copy snapshots to another Region/account for **defense in depth**\n- Monitor policy health and coverage with tags",
"Url": "https://hub.prowler.com/check/dlm_ebs_snapshot_lifecycle_policy_exists"
}
},
"Categories": [
@@ -1,31 +1,38 @@
{
"Provider": "aws",
"CheckID": "dms_endpoint_mongodb_authentication_enabled",
"CheckTitle": "Check if DMS endpoints for MongoDB have an authentication mechanism enabled.",
"CheckTitle": "DMS MongoDB endpoint has an authentication mechanism enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices"
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure"
],
"ServiceName": "dms",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:dms:region:account-id:endpoint/endpoint-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsDmsEndpoint",
"Description": "This control checks whether an AWS DMS endpoint for MongoDB is configured with an authentication mechanism. The control fails if an authentication type isn't set for the endpoint.",
"Risk": "Without an authentication mechanism enabled, unauthorized users may gain access to sensitive data during migration, increasing the risk of data breaches and security incidents.",
"RelatedUrl": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html",
"Description": "**AWS DMS MongoDB endpoints** use an authentication mechanism. Configuration expects `AuthType` not `no` (e.g., `password`) with an `authMechanism` such as `scram_sha_1` or `mongodb_cr`.",
"Risk": "Without authentication, unauthenticated connections can access the source, degrading **confidentiality** and **integrity**. Adversaries could read or modify migrated documents, hijack CDC, inject data, or exfiltrate records during replication.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html",
"https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-11"
],
"Remediation": {
"Code": {
"CLI": "aws dms modify-endpoint --endpoint-arn <endpoint-arn> --username <username> --password <password> --authentication-type <authentication-type>",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-11",
"Terraform": ""
"CLI": "aws dms modify-endpoint --endpoint-arn <endpoint-arn> --mongodb-settings '{\"AuthType\":\"password\"}' --username <username> --password <password>",
"NativeIaC": "```yaml\n# CloudFormation: enable authentication on a MongoDB DMS endpoint\nResources:\n <example_resource_name>:\n Type: AWS::DMS::Endpoint\n Properties:\n EndpointIdentifier: <example_resource_name>\n EndpointType: source\n EngineName: mongodb\n MongoDbSettings:\n AuthType: password # CRITICAL: sets authentication mode to 'password' so auth is enabled\n```",
"Other": "1. In the AWS Console, go to Database Migration Service > Endpoints\n2. Select the MongoDB endpoint and click Modify\n3. Under MongoDB settings, set Authentication mode to Password\n4. Enter Username and Password\n5. Click Save changes",
"Terraform": "```hcl\n# Terraform: enable authentication on a MongoDB DMS endpoint\nresource \"aws_dms_endpoint\" \"<example_resource_name>\" {\n endpoint_id = \"<example_resource_name>\"\n endpoint_type = \"source\"\n engine_name = \"mongodb\"\n\n mongodb_settings {\n auth_type = \"password\" # CRITICAL: enables authentication for the MongoDB endpoint\n }\n}\n```"
},
"Recommendation": {
"Text": "Enable an authentication mechanism on DMS endpoints for MongoDB to ensure secure access control during migration.",
"Url": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html"
"Text": "Enforce **strong authentication** on MongoDB endpoints: set `AuthType` to `password` and use `authMechanism` like `scram_sha_1`. Apply **least privilege** database accounts, store secrets in **Secrets Manager**, and pair with **TLS** for defense in depth.",
"Url": "https://hub.prowler.com/check/dms_endpoint_mongodb_authentication_enabled"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,31 +1,38 @@
{
"Provider": "aws",
"CheckID": "dms_endpoint_neptune_iam_authorization_enabled",
"CheckTitle": "Check if DMS endpoints for Neptune databases have IAM authorization enabled.",
"CheckTitle": "DMS endpoint for Neptune has IAM authorization enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices"
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure"
],
"ServiceName": "dms",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:dms:region:account-id:endpoint/endpoint-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsDmsEndpoint",
"Description": "This control checks whether an AWS DMS endpoint for an Amazon Neptune database is configured with IAM authorization. The control fails if the DMS endpoint doesn't have IAM authorization enabled.",
"Risk": "Without IAM authorization, DMS endpoints for Neptune databases may lack granular access control, increasing the risk of unauthorized access to sensitive data.",
"RelatedUrl": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html",
"Description": "**DMS Neptune endpoints** have **IAM authorization** enabled via the endpoint setting `IamAuthEnabled`.",
"Risk": "Without **IAM authorization**, migration components can interact with Neptune using broad trust, enabling unauthorized data loads, reads, or alterations.\n\nThis degrades **confidentiality** and **integrity** and increases the chance of privilege abuse and data exfiltration.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html",
"https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-10"
],
"Remediation": {
"Code": {
"CLI": "aws dms modify-endpoint --endpoint-arn <endpoint-arn> --service-access-role-arn <iam-role-arn>",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-10",
"CLI": "aws dms modify-endpoint --endpoint-arn <endpoint-arn> --neptune-settings '{\"IamAuthEnabled\":true}'",
"NativeIaC": "```yaml\n# CloudFormation: Enable IAM authorization on a DMS Neptune endpoint\nResources:\n <example_resource_name>:\n Type: AWS::DMS::Endpoint\n Properties:\n EndpointType: target\n EngineName: neptune\n NeptuneSettings:\n ServiceAccessRoleArn: <example_resource_arn>\n S3BucketName: <example_resource_name>\n S3BucketFolder: <example_resource_name>\n IamAuthEnabled: true # Critical: enables IAM authorization for the Neptune endpoint\n```",
"Other": "1. In the AWS Console, go to Database Migration Service > Endpoints\n2. Select the Neptune endpoint and click Modify\n3. Expand Endpoint settings (Neptune settings) and set IAM authorization to Enabled\n4. Ensure Service access role ARN is set, then click Save",
"Terraform": ""
},
"Recommendation": {
"Text": "Enable IAM authorization on DMS endpoints for Neptune databases by specifying a service role in the ServiceAccessRoleARN parameter.",
"Url": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html"
"Text": "Enable **IAM authorization** on Neptune endpoints (`IamAuthEnabled=true`) and use a **least privilege** service role limited to minimal Neptune and S3 permissions.\n\nApply **defense in depth**: restrict network paths, separate duties for migration roles, and monitor access with logs and alerts.",
"Url": "https://hub.prowler.com/check/dms_endpoint_neptune_iam_authorization_enabled"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,31 +1,41 @@
{
"Provider": "aws",
"CheckID": "dms_endpoint_redis_in_transit_encryption_enabled",
"CheckTitle": "Check if DMS endpoints for Redis OSS are encrypted in transit.",
"CheckTitle": "DMS endpoint for Redis OSS is encrypted in transit",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices"
"Software and Configuration Checks/AWS Security Best Practices/Encryption in Transit",
"Software and Configuration Checks/Industry and Regulatory Standards/NIST 800-53 Controls (USA)",
"Software and Configuration Checks/Industry and Regulatory Standards/PCI-DSS",
"Software and Configuration Checks/Industry and Regulatory Standards/ISO 27001 Controls"
],
"ServiceName": "dms",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:dms:region:account-id:endpoint/endpoint-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsDmsEndpoint",
"Description": "This control checks whether an AWS DMS endpoint for Redis OSS is configured with a TLS connection. The control fails if the endpoint doesn't have TLS enabled.",
"Risk": "Without TLS, data transmitted between databases may be vulnerable to interception or eavesdropping, increasing the risk of data breaches and other security incidents.",
"RelatedUrl": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Redis.html",
"Description": "**DMS Redis OSS endpoints** are assessed for the presence of **TLS** in their endpoint settings, such as `ssl-encryption`, indicating encrypted connections between the DMS replication instance and Redis.",
"Risk": "Without **TLS**, traffic between DMS and Redis can be intercepted or altered, compromising **confidentiality** and **integrity**.\n\nAttackers can perform **man-in-the-middle** interception, steal auth tokens, and inject or corrupt migrated data.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-12",
"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redis.html#CHAP_Target.Redis.EndpointSettings",
"https://support.icompaas.com/support/solutions/articles/62000233450-ensure-encryption-in-transit-for-dms-endpoints-for-redis-oss"
],
"Remediation": {
"Code": {
"CLI": "aws dms modify-endpoint --endpoint-arn <endpoint-arn> --redis-settings '{'SslSecurityProtocol': 'ssl-encryption'}'",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-12",
"Terraform": ""
"CLI": "",
"NativeIaC": "```yaml\n# CloudFormation: Enable TLS for Redis OSS DMS endpoint\nResources:\n <example_resource_name>:\n Type: AWS::DMS::Endpoint\n Properties:\n EndpointIdentifier: <example_resource_name>\n EndpointType: target\n EngineName: redis\n RedisSettings:\n ServerName: <example_resource_name>\n Port: 6379\n AuthType: none\n SslSecurityProtocol: ssl-encryption # Critical: enables TLS for in-transit encryption\n```",
"Other": "1. In the AWS Console, go to Database Migration Service > Endpoints\n2. Select the Redis OSS endpoint and click Modify\n3. Set SSL security protocol (Encryption in transit) to \"SSL encryption\"\n4. Save changes",
"Terraform": "```hcl\n# Enable TLS for Redis OSS DMS endpoint\nresource \"aws_dms_endpoint\" \"<example_resource_name>\" {\n endpoint_id = \"<example_resource_id>\"\n endpoint_type = \"target\"\n engine_name = \"redis\"\n\n redis_settings {\n server_name = \"<example_resource_name>\"\n port = 6379\n auth_type = \"none\"\n ssl_security_protocol = \"ssl-encryption\" # Critical: enables TLS for in-transit encryption\n }\n}\n```"
},
"Recommendation": {
"Text": "Enable TLS for DMS endpoints for Redis OSS to ensure encrypted communication during data migration.",
"Url": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redis.html#CHAP_Target.Redis.EndpointSettings"
"Text": "Enable **TLS** on Redis OSS endpoints (e.g., `ssl-encryption`) and require server certificate validation. Prohibit plaintext connections, prefer private networking, and enforce **least privilege** for DMS roles to strengthen **defense in depth**.",
"Url": "https://hub.prowler.com/check/dms_endpoint_redis_in_transit_encryption_enabled"
}
},
"Categories": [],
"Categories": [
"encryption"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,32 +1,40 @@
{
"Provider": "aws",
"CheckID": "dms_endpoint_ssl_enabled",
"CheckTitle": "Ensure SSL mode is enabled in DMS endpoint",
"CheckType": ["Effects", "Data Exposure"],
"CheckTitle": "DMS endpoint has SSL enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure"
],
"ServiceName": "dms",
"SubServiceName": "endpoint",
"ResourceIdTemplate": "arn:partition:dms:region:account-id:endpoint:resource-id",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "AwsDmsEndpoint",
"Description": "This check ensures that SSL mode is enabled for all AWS Database Migration Service (DMS) endpoints. Enabling SSL provides encryption in transit for data transferred through these endpoints.",
"Risk": "Without SSL enabled, data transferred through DMS endpoints is not encrypted, potentially exposing sensitive information to unauthorized access or interception during transit.",
"RelatedUrl": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.SSL.html",
"Description": "**AWS DMS endpoints** have their SSL/TLS mode inspected; any value other than `none` denotes encrypted connections between the replication instance and databases.\n\nSupported modes include `require`, `verify-ca`, and `verify-full`.",
"Risk": "Without TLS, data in transit can be read or altered, affecting:\n- **Confidentiality** via packet sniffing and credential leakage\n- **Integrity** through **MITM** tampering of migration streams\n- **Availability** from session hijack or task disruption",
"RelatedUrl": "",
"AdditionalURLs": [
"https://aws.amazon.com/blogs/database/configuring-ssl-encryption-on-oracle-and-postgresql-endpoints-in-aws-dms/",
"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.SSL.html",
"https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-9"
],
"Remediation": {
"Code": {
"CLI": "aws dms modify-endpoint --endpoint-arn <endpoint_arn> --ssl-mode require",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-9",
"Terraform": ""
},
"Recommendation": {
"Text": "Enable SSL mode for all DMS endpoints. Use 'require' as the minimum SSL mode, and consider using 'verify-ca' or 'verify-full' for higher security.",
"Url": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.SSL.html"
}
"Code": {
"CLI": "aws dms modify-endpoint --endpoint-arn <endpoint-arn> --ssl-mode require",
"NativeIaC": "```yaml\n# CloudFormation: Set SSL on a DMS endpoint\nResources:\n <example_resource_name>:\n Type: AWS::DMS::Endpoint\n Properties:\n EndpointIdentifier: <example_resource_name>\n EndpointType: source\n EngineName: sqlserver\n ServerName: <server_name>\n Port: 1433\n Username: <username>\n Password: <password>\n SslMode: require # CRITICAL: enables SSL (not \"none\"), fixing the finding\n```",
"Other": "1. In the AWS DMS console, go to Endpoints\n2. Select the non-compliant endpoint and choose Modify\n3. Set SSL mode to Require (or Verify-ca/Verify-full if required by your engine and certificate is available)\n4. If Verify-ca/Verify-full is selected, choose the appropriate CA certificate\n5. Save changes, then Test connection to confirm",
"Terraform": "```hcl\n# Terraform: Set SSL on a DMS endpoint\nresource \"aws_dms_endpoint\" \"<example_resource_name>\" {\n endpoint_id = \"<example_resource_name>\"\n endpoint_type = \"source\"\n engine_name = \"sqlserver\"\n server_name = \"<server_name>\"\n port = 1433\n username = \"<username>\"\n password = \"<password>\"\n\n ssl_mode = \"require\" # CRITICAL: enables SSL (not \"none\"), fixing the finding\n}\n```"
},
"Recommendation": {
"Text": "Configure endpoints to use SSL/TLS at least `require`; prefer `verify-ca` or `verify-full` where supported. Manage trusted CA material and rotate regularly. Apply **defense in depth** with private connectivity and strict IAM, and enforce this posture via policy-as-code and continuous validation.",
"Url": "https://hub.prowler.com/check/dms_endpoint_ssl_enabled"
}
},
"Categories": [
"encryption"
"encryption"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
}
}
@@ -1,29 +1,39 @@
{
"Provider": "aws",
"CheckID": "dms_instance_minor_version_upgrade_enabled",
"CheckTitle": "Ensure DMS instances have auto minor version upgrade enabled.",
"CheckType": [],
"CheckTitle": "DMS replication instance has auto minor version upgrade enabled",
"CheckType": [
"Software and Configuration Checks/Patch Management",
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices"
],
"ServiceName": "dms",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:rdmsds:region:account-id:rep",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsDmsReplicationInstance",
"Description": "Ensure DMS instances have auto minor version upgrade enabled.",
"Risk": "Ensure that your Amazon Database Migration Service (DMS) replication instances have the Auto Minor Version Upgrade feature enabled in order to receive automatically minor engine upgrades.",
"RelatedUrl": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-6",
"Description": "**AWS DMS replication instances** are evaluated for the `auto_minor_version_upgrade` setting to confirm **automatic minor engine updates** are enabled during the maintenance window.",
"Risk": "Without **automatic minor upgrades**, DMS engines can miss security patches and fixes, enabling exploitation of known flaws and instability.\n- Confidentiality: exposure via unpatched components\n- Integrity: replication errors or data drift\n- Availability: outages during migration or CDC",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-6",
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/DMS/auto-minor-version-upgrade.html"
],
"Remediation": {
"Code": {
"CLI": "aws dms modify-replication-instance --region <REGION> --replication-instance-arn arn:aws:dms:<REGION>:<ACCOUNT_ID>:rep:<REPLICATION_ID> --auto-minor-version-upgrade --apply-immediately",
"NativeIaC": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/auto-minor-version-upgrade.html#",
"Other": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/auto-minor-version-upgrade.html#",
"Terraform": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/auto-minor-version-upgrade.html#"
"NativeIaC": "```yaml\n# CloudFormation: Enable auto minor version upgrade on a DMS replication instance\nResources:\n <example_resource_name>:\n Type: AWS::DMS::ReplicationInstance\n Properties:\n ReplicationInstanceIdentifier: <example_resource_id>\n ReplicationInstanceClass: dms.t3.micro\n AutoMinorVersionUpgrade: true # CRITICAL: turns on automatic minor version upgrades\n```",
"Other": "1. Open the AWS Console and go to Database Migration Service (DMS)\n2. Click Replication instances and select your instance\n3. Choose Actions > Modify\n4. Check Auto minor version upgrade\n5. Select Apply immediately\n6. Click Modify to save",
"Terraform": "```hcl\n# Terraform: Enable auto minor version upgrade on a DMS replication instance\nresource \"aws_dms_replication_instance\" \"<example_resource_name>\" {\n replication_instance_id = \"<example_resource_id>\"\n replication_instance_class = \"dms.t3.micro\"\n auto_minor_version_upgrade = true # CRITICAL: turns on automatic minor version upgrades\n}\n```"
},
"Recommendation": {
"Text": "Enable auto minor version upgrade for all DMS replication instances.",
"Url": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-6"
"Text": "Enable `auto_minor_version_upgrade` on all replication instances to maintain **continuous patching**.\n- Set a maintenance window and validate in non-prod\n- Monitor release notes and health metrics\n- Enforce **least privilege** for change control\n- Keep **backups** for rollback",
"Url": "https://hub.prowler.com/check/dms_instance_minor_version_upgrade_enabled"
}
},
"Categories": [],
"Categories": [
"vulnerabilities"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,30 +1,37 @@
{
"Provider": "aws",
"CheckID": "dms_instance_multi_az_enabled",
"CheckTitle": "Ensure DMS instances have multi az enabled.",
"CheckType": [],
"CheckTitle": "DMS replication instance has Multi-AZ enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Denial of Service"
],
"ServiceName": "dms",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:rdmsds:region:account-id:rep",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsDmsReplicationInstance",
"Description": "Ensure DMS instances have multi az enabled.",
"Risk": "Ensure that your Amazon Database Migration Service (DMS) replication instances are using Multi-AZ deployment configurations to provide High Availability (HA) through automatic failover to standby replicas in the event of a failure such as an Availability Zone (AZ) outage, an internal hardware or network outage, a software failure or in case of a planned maintenance session.",
"RelatedUrl": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/multi-az.html#",
"Description": "**AWS DMS replication instances** are evaluated for **Multi-AZ** configuration. Instances with `multi_az` enabled are treated as having a cross-AZ standby; those without it are identified as single-AZ.",
"Risk": "Without **Multi-AZ**, a single-AZ failure or maintenance event can halt migrations, causing extended downtime (**availability**) and replication gaps or rollbacks (**integrity**). Tasks may stall, increase cutover risk, and require manual recovery when the replication instance is unavailable.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.html",
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/DMS/multi-az.html"
],
"Remediation": {
"Code": {
"CLI": "aws dms modify-replication-instance --region <REGION> --replication-instance-arn arn:aws:dms:<REGION>:<ACCOUNT_ID>:rep:<REPLICATION_ID> --multi-az --apply-immediately",
"NativeIaC": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/multi-az.html#",
"Other": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/multi-az.html#",
"Terraform": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/multi-az.html#"
"CLI": "aws dms modify-replication-instance --replication-instance-arn arn:aws:dms:<REGION>:<ACCOUNT_ID>:rep:<REPLICATION_ID> --multi-az --apply-immediately",
"NativeIaC": "```yaml\n# CloudFormation: enable Multi-AZ on a DMS replication instance\nResources:\n <example_resource_name>:\n Type: AWS::DMS::ReplicationInstance\n Properties:\n ReplicationInstanceClass: dms.t3.micro\n MultiAZ: true # Critical: enables Multi-AZ to pass the check\n```",
"Other": "1. Open the AWS DMS console\n2. Go to Replication instances and select your instance\n3. Click Modify\n4. Check Multi-AZ\n5. Check Apply immediately\n6. Click Modify to save",
"Terraform": "```hcl\n# Enable Multi-AZ on a DMS replication instance\nresource \"aws_dms_replication_instance\" \"<example_resource_name>\" {\n replication_instance_id = \"<example_resource_name>\"\n replication_instance_class = \"dms.t3.micro\"\n multi_az = true # Critical: enables Multi-AZ to pass the check\n}\n```"
},
"Recommendation": {
"Text": "Enable multi az for all DMS replication instances.",
"Url": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/multi-az.html#"
"Text": "Enable **Multi-AZ** (set `multi_az` to `true`) on DMS replication instances that handle production or time-sensitive migrations to ensure redundancy and automatic failover.\n\nApply HA principles: distribute across AZs, test failover, monitor health, and plan maintenance to minimize impact.",
"Url": "https://hub.prowler.com/check/dms_instance_multi_az_enabled"
}
},
"Categories": [
"redundancy"
"resilience"
],
"DependsOn": [],
"RelatedTo": [],
@@ -1,26 +1,37 @@
{
"Provider": "aws",
"CheckID": "dms_instance_no_public_access",
"CheckTitle": "Ensure DMS instances are not publicly accessible.",
"CheckType": [],
"CheckTitle": "DMS replication instance is not publicly exposed to the Internet",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices/Network Reachability",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark",
"TTPs/Initial Access"
],
"ServiceName": "dms",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:rdmsds:region:account-id:rep",
"ResourceIdTemplate": "",
"Severity": "critical",
"ResourceType": "AwsDmsReplicationInstance",
"Description": "Ensure DMS instances are not publicly accessible.",
"Risk": "Ensure that your Amazon Database Migration Service (DMS) are not publicly accessible from the Internet in order to avoid exposing private data and minimize security risks. A DMS replication instance should have a private IP address and the Publicly Accessible feature disabled when both the source and the target databases are in the same network that is connected to the instance's VPC through a VPN, VPC peering connection, or using an AWS Direct Connect dedicated connection.",
"RelatedUrl": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-1",
"Description": "**AWS DMS replication instances** are evaluated for **public exposure**. Exposure is identified when `PubliclyAccessible` is enabled and an attached security group allows inbound traffic from any address. Private or allowlisted instances are not considered exposed.",
"Risk": "Publicly reachable replication instances threaten:\n- Confidentiality: migration data and credentials can be intercepted or exfiltrated.\n- Integrity: attackers may alter tasks or inject records.\n- Availability: abuse or DDoS can stall replication and delay cutovers.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-1",
"https://docs.aws.amazon.com/amazonq/detector-library/terraform/restrict-public-access-dms-terraform/",
"https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/DMS/publicly-accessible.html",
"https://support.icompaas.com/support/solutions/articles/62000233448-ensure-dms-instances-are-not-publicly-accessible"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/publicly-accessible.html#",
"Other": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/publicly-accessible.html#",
"Terraform": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/aws/DMS/publicly-accessible.html#"
"NativeIaC": "```yaml\n# CloudFormation: DMS instance not publicly accessible\nResources:\n <example_resource_name>:\n Type: AWS::DMS::ReplicationInstance\n Properties:\n ReplicationInstanceClass: dms.t3.micro\n PubliclyAccessible: false # Critical: disables public access to prevent Internet exposure\n```",
"Other": "1. In the AWS Console, open Database Migration Service > Replication instances and select the instance\n2. In Details > Networking, click each attached Security Group ID to open it in the EC2 console\n3. In Inbound rules, delete any rule with Source 0.0.0.0/0 or ::/0\n4. Save rules for each security group",
"Terraform": "```hcl\n# DMS instance not publicly accessible\nresource \"aws_dms_replication_instance\" \"<example_resource_name>\" {\n replication_instance_id = \"<example_resource_id>\"\n replication_instance_class = \"dms.t3.micro\"\n publicly_accessible = false # Critical: disables public access to prevent Internet exposure\n}\n```"
},
"Recommendation": {
"Text": "Restrict DMS Replication instances security groups to only required IPs, or re-create these instances that is only accessible privately.",
"Url": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-1"
"Text": "Adopt a **private-only** design:\n- Disable `PubliclyAccessible`; place instances in private subnets.\n- Enforce **least privilege** security groups (no `0.0.0.0/0`); allow only required sources/ports.\n- Provide access via **VPN**, peering, or Direct Connect.\n- Layer controls (ACLs, monitoring) and restrict IAM to necessary actions.",
"Url": "https://hub.prowler.com/check/dms_instance_no_public_access"
}
},
"Categories": [
@@ -1,31 +1,39 @@
{
"Provider": "aws",
"CheckID": "dms_replication_task_source_logging_enabled",
"CheckTitle": "Check if DMS replication tasks for the source database have logging enabled.",
"CheckTitle": "DMS replication task has logging enabled and SOURCE_CAPTURE and SOURCE_UNLOAD components set to at least Default severity",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices"
"Software and Configuration Checks/AWS Security Best Practices",
"TTPs/Defense Evasion"
],
"ServiceName": "dms",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:dms:region:account-id:task/task-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsDmsReplicationTask",
"Description": "This control checks whether logging is enabled with the minimum severity level of LOGGER_SEVERITY_DEFAULT for DMS replication tasks SOURCE_CAPTURE and SOURCE_UNLOAD. The control fails if logging isn't enabled for these tasks or if the minimum severity level is less than LOGGER_SEVERITY_DEFAULT.",
"Risk": "Without logging enabled, issues in data migration may go undetected, affecting the integrity and compliance of replicated data.",
"RelatedUrl": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Monitoring.html#CHAP_Monitoring.ManagingLogs",
"Description": "**AWS DMS replication tasks** have **logging enabled** and configure `SOURCE_CAPTURE` and `SOURCE_UNLOAD` with severity at least `LOGGER_SEVERITY_DEFAULT` (or higher: `LOGGER_SEVERITY_DEBUG`, `LOGGER_SEVERITY_DETAILED_DEBUG`).",
"Risk": "Missing or low-severity source logs hinder visibility into **CDC** and full-load activity, risking undetected errors, stalls, or tampering. This can cause silent **data drift**, broken lineage, and failed recoveries, undermining **integrity** and **availability** and weakening auditability during investigations.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Monitoring.html",
"https://repost.aws/knowledge-center/dms-debug-logging",
"https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-8"
],
"Remediation": {
"Code": {
"CLI": "aws dms modify-replication-task --replication-task-arn <task-arn> --task-settings '{\"Logging\":{\"EnableLogging\":true,\"LogComponents\":[{\"Id\":\"SOURCE_CAPTURE\",\"Severity\":\"LOGGER_SEVERITY_DEFAULT\"},{\"Id\":\"SOURCE_UNLOAD\",\"Severity\":\"LOGGER_SEVERITY_DEFAULT\"}]}}'",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-8",
"Terraform": ""
"CLI": "aws dms modify-replication-task --replication-task-arn <example_resource_arn> --replication-task-settings '{\"Logging\":{\"EnableLogging\":true,\"LogComponents\":[{\"Id\":\"SOURCE_CAPTURE\",\"Severity\":\"LOGGER_SEVERITY_DEFAULT\"},{\"Id\":\"SOURCE_UNLOAD\",\"Severity\":\"LOGGER_SEVERITY_DEFAULT\"}]}}'",
"NativeIaC": "```yaml\n# CloudFormation: enable DMS source logging at minimum DEFAULT severity\nResources:\n <example_resource_name>:\n Type: AWS::DMS::ReplicationTask\n Properties:\n ReplicationInstanceArn: <example_resource_arn>\n SourceEndpointArn: <example_resource_arn>\n TargetEndpointArn: <example_resource_arn>\n MigrationType: full-load\n TableMappings: '{\"rules\":[]}'\n # Critical: Enables logging and sets SOURCE components to at least DEFAULT\n ReplicationTaskSettings: |\n {\n \"Logging\": {\n \"EnableLogging\": true,\n \"LogComponents\": [\n {\"Id\": \"SOURCE_CAPTURE\", \"Severity\": \"LOGGER_SEVERITY_DEFAULT\"},\n {\"Id\": \"SOURCE_UNLOAD\", \"Severity\": \"LOGGER_SEVERITY_DEFAULT\"}\n ]\n }\n }\n```",
"Other": "1. In the AWS console, go to Database Migration Service > Database migration tasks\n2. Select the task and choose Modify\n3. Click Modify task logging\n4. Turn on Enable logging\n5. For SOURCE_CAPTURE and SOURCE_UNLOAD, set Severity to Default (or higher)\n6. Save/Modify to apply",
"Terraform": "```hcl\n# Enable DMS source logging at minimum DEFAULT severity\nresource \"aws_dms_replication_task\" \"<example_resource_name>\" {\n replication_instance_arn = \"<example_resource_arn>\"\n source_endpoint_arn = \"<example_resource_arn>\"\n target_endpoint_arn = \"<example_resource_arn>\"\n migration_type = \"full-load\"\n table_mappings = \"{\\\"rules\\\":[]}\"\n\n # Critical: Enables logging and sets SOURCE components to at least DEFAULT\n replication_task_settings = <<JSON\n{\n \"Logging\": {\n \"EnableLogging\": true,\n \"LogComponents\": [\n {\"Id\": \"SOURCE_CAPTURE\", \"Severity\": \"LOGGER_SEVERITY_DEFAULT\"},\n {\"Id\": \"SOURCE_UNLOAD\", \"Severity\": \"LOGGER_SEVERITY_DEFAULT\"}\n ]\n }\n}\nJSON\n}\n```"
},
"Recommendation": {
"Text": "Enable logging for source database DMS replication tasks with a minimum severity level of LOGGER_SEVERITY_DEFAULT.",
"Url": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.Logging.html"
"Text": "Enable and standardize **task logging** for `SOURCE_CAPTURE` and `SOURCE_UNLOAD` at `LOGGER_SEVERITY_DEFAULT` or higher.\n- Centralize logs and alert on anomalies\n- Enforce **least privilege** for log access\n- Set retention to support audits\n- Avoid prolonged `DEBUG` levels, *except during troubleshooting*, to balance visibility and cost",
"Url": "https://hub.prowler.com/check/dms_replication_task_source_logging_enabled"
}
},
"Categories": [],
"Categories": [
"logging"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
@@ -1,31 +1,40 @@
{
"Provider": "aws",
"CheckID": "dms_replication_task_target_logging_enabled",
"CheckTitle": "Check if DMS replication tasks for the target database have logging enabled.",
"CheckTitle": "DMS replication task has TARGET_APPLY and TARGET_LOAD logging enabled with at least default severity",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices"
"Software and Configuration Checks/AWS Security Best Practices",
"TTPs/Defense Evasion"
],
"ServiceName": "dms",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:dms:region:account-id:task/task-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsDmsReplicationTask",
"Description": "This control checks whether logging is enabled with the minimum severity level of LOGGER_SEVERITY_DEFAULT for DMS replication tasks TARGET_APPLY and TARGET_LOAD. The control fails if logging isn't enabled for these tasks or if the minimum severity level is less than LOGGER_SEVERITY_DEFAULT.",
"Risk": "Without logging enabled, issues in data migration may go undetected, affecting the integrity and compliance of replicated data.",
"RelatedUrl": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Monitoring.html#CHAP_Monitoring.ManagingLogs",
"Description": "**AWS DMS replication tasks** have target logging enabled, including `TARGET_APPLY` and `TARGET_LOAD`, each set to at least `LOGGER_SEVERITY_DEFAULT`.",
"Risk": "Insufficient target logging limits visibility into load/apply activity, masking failures and anomalies. This risks **data integrity** (silent drift, partial loads) and **availability** (longer incident resolution), and reduces **auditability** of migration events.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://repost.aws/knowledge-center/dms-debug-logging",
"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.Logging.html",
"https://stackoverflow.com/questions/46913913/aws-dms-with-cloudformation-enabling-logging-needs-a-log-group",
"https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-7"
],
"Remediation": {
"Code": {
"CLI": "aws dms modify-replication-task --replication-task-arn <task-arn> --task-settings '{\"Logging\":{\"EnableLogging\":true,\"LogComponents\":[{\"Id\":\"TARGET_APPLY\",\"Severity\":\"LOGGER_SEVERITY_DEFAULT\"},{\"Id\":\"TARGET_LOAD\",\"Severity\":\"LOGGER_SEVERITY_DEFAULT\"}]}}'",
"NativeIaC": "",
"Other": "https://docs.aws.amazon.com/securityhub/latest/userguide/dms-controls.html#dms-7",
"Terraform": ""
"CLI": "aws dms modify-replication-task --replication-task-arn <task-arn> --replication-task-settings '{\"Logging\":{\"EnableLogging\":true,\"LogComponents\":[{\"Id\":\"TARGET_APPLY\",\"Severity\":\"LOGGER_SEVERITY_DEFAULT\"},{\"Id\":\"TARGET_LOAD\",\"Severity\":\"LOGGER_SEVERITY_DEFAULT\"}]}}'",
"NativeIaC": "```yaml\n# CloudFormation: enable DMS task logging for target components\nResources:\n <example_resource_name>:\n Type: AWS::DMS::ReplicationTask\n Properties:\n ReplicationInstanceArn: <example_resource_arn>\n SourceEndpointArn: <example_resource_arn>\n TargetEndpointArn: <example_resource_arn>\n MigrationType: full-load\n TableMappings: |\n {\"rules\":[{\"rule-type\":\"selection\",\"rule-id\":\"1\",\"rule-name\":\"1\",\"object-locator\":{\"schema-name\":\"%\",\"table-name\":\"%\"},\"rule-action\":\"include\"}]}\n ReplicationTaskSettings: |\n {\"Logging\":{\"EnableLogging\":true, \"LogComponents\":[\n {\"Id\":\"TARGET_APPLY\",\"Severity\":\"LOGGER_SEVERITY_DEFAULT\"}, # Critical: ensure TARGET_APPLY logging at default\n {\"Id\":\"TARGET_LOAD\",\"Severity\":\"LOGGER_SEVERITY_DEFAULT\"} # Critical: ensure TARGET_LOAD logging at default\n ]}}\n```",
"Other": "1. Open the AWS DMS console and go to Database migration tasks\n2. Select the replication task and choose Modify\n3. Expand Task settings (JSON) or Logging\n4. Enable CloudWatch logs (EnableLogging = true)\n5. Set log components:\n - TARGET_APPLY severity: DEFAULT\n - TARGET_LOAD severity: DEFAULT\n6. Save changes (Modify task), then rerun the task if required",
"Terraform": "```hcl\n# Enable DMS task logging for target components\nresource \"aws_dms_replication_task\" \"<example_resource_name>\" {\n replication_task_id = \"<example_resource_id>\"\n replication_instance_arn = \"<example_resource_arn>\"\n source_endpoint_arn = \"<example_resource_arn>\"\n target_endpoint_arn = \"<example_resource_arn>\"\n migration_type = \"full-load\"\n table_mappings = jsonencode({ rules = [{\n \"rule-type\" : \"selection\", \"rule-id\" : \"1\", \"rule-name\" : \"1\",\n \"object-locator\" : { \"schema-name\" : \"%\", \"table-name\" : \"%\" },\n \"rule-action\" : \"include\"\n }]} )\n\n # Critical: enables logging and sets TARGET_APPLY and TARGET_LOAD to minimum required severity\n replication_task_settings = jsonencode({\n Logging = {\n EnableLogging = true\n LogComponents = [\n { Id = \"TARGET_APPLY\", Severity = \"LOGGER_SEVERITY_DEFAULT\" },\n { Id = \"TARGET_LOAD\", Severity = \"LOGGER_SEVERITY_DEFAULT\" }\n ]\n }\n })\n}\n```"
},
"Recommendation": {
"Text": "Enable logging for target database DMS replication tasks with a minimum severity level of LOGGER_SEVERITY_DEFAULT.",
"Url": "https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.Logging.html"
"Text": "Enable and maintain **CloudWatch logging** at `LOGGER_SEVERITY_DEFAULT` or higher for target components:\n- Configure `TARGET_APPLY` and `TARGET_LOAD`\n- Enforce least-privilege log access\n- Monitor logs/alerts for anomalies\n- Standardize task settings and validate data for **defense in depth**",
"Url": "https://hub.prowler.com/check/dms_replication_task_target_logging_enabled"
}
},
"Categories": [],
"Categories": [
"logging"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
+21
@@ -0,0 +1,21 @@
{
"$schema": "https://ui.shadcn.com/schema.json",
"style": "default",
"rsc": true,
"tsx": true,
"tailwind": {
"config": "",
"css": "styles/globals.css",
"baseColor": "neutral",
"cssVariables": true,
"prefix": ""
},
"aliases": {
"components": "@/components",
"utils": "@/lib/utils",
"ui": "@/components/shadcn",
"lib": "@/lib",
"hooks": "@/hooks"
},
"iconLibrary": "lucide"
}
+57
@@ -0,0 +1,57 @@
# shadcn Components
This directory contains all shadcn/ui-based components for the Prowler application.
## Directory Structure
```
shadcn/
├── card.tsx # shadcn Card component
├── resource-stats-card/ # Custom ResourceStatsCard built on shadcn
│ ├── resource-stats-card.tsx
│ ├── resource-stats-card.example.tsx
│ └── index.ts
├── index.ts # Barrel exports
└── README.md
```
## Usage
All shadcn components can be imported from `@/components/shadcn`:
```tsx
import { Card, CardHeader, CardContent } from "@/components/shadcn";
import { ResourceStatsCard } from "@/components/shadcn";
```
## Adding New shadcn Components
When adding new shadcn components using the CLI:
```bash
npx shadcn@latest add [component-name]
```
The component will be automatically added to this directory due to the configuration in `components.json`:
```json
{
"aliases": {
"ui": "@/components/shadcn"
}
}
```
## Component Guidelines
1. **shadcn base components** - Use as-is from shadcn/ui (e.g., `card.tsx`)
2. **Custom components built on shadcn** - Create in subdirectories (e.g., `resource-stats-card/`)
3. **CVA variants** - Use Class Variance Authority for type-safe variants (see the sketch below)
4. **Theme support** - Include `dark:` classes for dark/light theme compatibility
5. **TypeScript** - Always export types and use proper typing
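For instance, a minimal sketch of a hypothetical `StatusPill` component that follows guidelines 3–5; the component name, variants, and colors are purely illustrative and not part of the existing component set:
```tsx
import * as React from "react";
import { cva, type VariantProps } from "class-variance-authority";
import { cn } from "@/lib/utils";

// Type-safe variants via CVA (guideline 3); dark: classes for theme support (guideline 4)
const statusPillVariants = cva(
  "inline-flex items-center rounded-full px-2 py-0.5 text-xs font-medium",
  {
    variants: {
      tone: {
        pass: "bg-green-100 text-green-800 dark:bg-green-900 dark:text-green-200",
        fail: "bg-red-100 text-red-800 dark:bg-red-900 dark:text-red-200",
      },
    },
    defaultVariants: { tone: "pass" },
  },
);

// Export the props type alongside the component (guideline 5)
export interface StatusPillProps
  extends React.HTMLAttributes<HTMLSpanElement>,
    VariantProps<typeof statusPillVariants> {}

export const StatusPill = ({ className, tone, ...props }: StatusPillProps) => (
  <span className={cn(statusPillVariants({ tone }), className)} {...props} />
);
```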
## Resources
- [shadcn/ui Documentation](https://ui.shadcn.com)
- [CVA Documentation](https://cva.style/docs)
- [Tailwind CSS Documentation](https://tailwindcss.com/docs)
+92
@@ -0,0 +1,92 @@
import * as React from "react";
import { cn } from "@/lib/utils";
function Card({ className, ...props }: React.ComponentProps<"div">) {
return (
<div
data-slot="card"
className={cn(
"bg-card text-card-foreground flex flex-col gap-6 rounded-xl border py-6 shadow-sm",
className,
)}
{...props}
/>
);
}
function CardHeader({ className, ...props }: React.ComponentProps<"div">) {
return (
<div
data-slot="card-header"
className={cn(
"@container/card-header grid auto-rows-min grid-rows-[auto_auto] items-start gap-2 px-6 has-data-[slot=card-action]:grid-cols-[1fr_auto] [.border-b]:pb-6",
className,
)}
{...props}
/>
);
}
function CardTitle({ className, ...props }: React.ComponentProps<"div">) {
return (
<div
data-slot="card-title"
className={cn("leading-none font-semibold", className)}
{...props}
/>
);
}
function CardDescription({ className, ...props }: React.ComponentProps<"div">) {
return (
<div
data-slot="card-description"
className={cn("text-muted-foreground text-sm", className)}
{...props}
/>
);
}
function CardAction({ className, ...props }: React.ComponentProps<"div">) {
return (
<div
data-slot="card-action"
className={cn(
"col-start-2 row-span-2 row-start-1 self-start justify-self-end",
className,
)}
{...props}
/>
);
}
function CardContent({ className, ...props }: React.ComponentProps<"div">) {
return (
<div
data-slot="card-content"
className={cn("px-6", className)}
{...props}
/>
);
}
function CardFooter({ className, ...props }: React.ComponentProps<"div">) {
return (
<div
data-slot="card-footer"
className={cn("flex items-center px-6 [.border-t]:pt-6", className)}
{...props}
/>
);
}
export {
Card,
CardAction,
CardContent,
CardDescription,
CardFooter,
CardHeader,
CardTitle,
};
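These primitives are meant to be composed rather than used individually. A minimal, illustrative usage sketch follows; the headings and copy are placeholders, and the import path assumes the `@/components/shadcn` barrel described in the README:
```tsx
import {
  Card,
  CardContent,
  CardDescription,
  CardHeader,
  CardTitle,
} from "@/components/shadcn";

// Illustrative composition only: each component renders a div carrying the
// corresponding data-slot attribute defined above.
export function ExampleFindingsCard() {
  return (
    <Card>
      <CardHeader>
        <CardTitle>Findings</CardTitle>
        <CardDescription>Latest scan results</CardDescription>
      </CardHeader>
      <CardContent>No failed checks.</CardContent>
    </Card>
  );
}
```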
+21
@@ -0,0 +1,21 @@
export {
Card,
CardContent,
CardDescription,
CardFooter,
CardHeader,
CardTitle,
} from "./card";
export {
ResourceStatsCard,
ResourceStatsCardContainer,
type ResourceStatsCardContainerProps,
ResourceStatsCardContent,
type ResourceStatsCardContentProps,
ResourceStatsCardDivider,
type ResourceStatsCardDividerProps,
ResourceStatsCardHeader,
type ResourceStatsCardHeaderProps,
type ResourceStatsCardProps,
type StatItem,
} from "./resource-stats-card";
@@ -0,0 +1,13 @@
export type { ResourceStatsCardProps } from "./resource-stats-card";
export { ResourceStatsCard } from "./resource-stats-card";
export type { ResourceStatsCardContainerProps } from "./resource-stats-card-container";
export { ResourceStatsCardContainer } from "./resource-stats-card-container";
export type {
ResourceStatsCardContentProps,
StatItem,
} from "./resource-stats-card-content";
export { ResourceStatsCardContent } from "./resource-stats-card-content";
export type { ResourceStatsCardDividerProps } from "./resource-stats-card-divider";
export { ResourceStatsCardDivider } from "./resource-stats-card-divider";
export type { ResourceStatsCardHeaderProps } from "./resource-stats-card-header";
export { ResourceStatsCardHeader } from "./resource-stats-card-header";
@@ -0,0 +1,55 @@
import { cva, type VariantProps } from "class-variance-authority";
import { cn } from "@/lib/utils";
const containerVariants = cva(
[
"flex",
"rounded-[12px]",
"border",
"backdrop-blur-[46px]",
"border-[rgba(38,38,38,0.70)]",
"bg-[rgba(23,23,23,0.50)]",
"dark:border-[rgba(38,38,38,0.70)]",
"dark:bg-[rgba(23,23,23,0.50)]",
],
{
variants: {
padding: {
sm: "px-3 py-2",
md: "px-[19px] py-[9px]",
lg: "px-6 py-3",
none: "p-0",
},
},
defaultVariants: {
padding: "md",
},
},
);
export interface ResourceStatsCardContainerProps
extends React.HTMLAttributes<HTMLDivElement>,
VariantProps<typeof containerVariants> {
ref?: React.Ref<HTMLDivElement>;
}
export const ResourceStatsCardContainer = ({
className,
children,
padding,
ref,
...props
}: ResourceStatsCardContainerProps) => {
return (
<div
ref={ref}
className={cn(containerVariants({ padding }), className)}
{...props}
>
{children}
</div>
);
};
ResourceStatsCardContainer.displayName = "ResourceStatsCardContainer";
@@ -0,0 +1,204 @@
import { cva } from "class-variance-authority";
import { LucideIcon } from "lucide-react";
import { cn } from "@/lib/utils";
export interface StatItem {
icon: LucideIcon;
label: string;
}
export const CardVariant = {
default: "default",
fail: "fail",
pass: "pass",
warning: "warning",
info: "info",
} as const;
export type CardVariant = (typeof CardVariant)[keyof typeof CardVariant];
const variantColors = {
default: "#868994",
fail: "#f54280",
pass: "#4ade80",
warning: "#fbbf24",
info: "#60a5fa",
} as const;
type BadgeVariant = keyof typeof variantColors;
const badgeVariants = cva(
["flex", "items-center", "justify-center", "gap-0.5", "rounded-full"],
{
variants: {
variant: {
[CardVariant.default]: "bg-[#535359]",
[CardVariant.fail]: "bg-[#432232]",
[CardVariant.pass]: "bg-[#204237]",
[CardVariant.warning]: "bg-[#3d3520]",
[CardVariant.info]: "bg-[#1e3a5f]",
},
size: {
sm: "px-1 text-xs",
md: "px-1.5 text-sm",
lg: "px-2 text-base",
},
},
defaultVariants: {
variant: CardVariant.fail,
size: "md",
},
},
);
const badgeIconVariants = cva("", {
variants: {
size: {
sm: "h-2.5 w-2.5",
md: "h-3 w-3",
lg: "h-4 w-4",
},
},
defaultVariants: {
size: "md",
},
});
const labelTextVariants = cva(
"leading-6 font-semibold text-zinc-300 dark:text-zinc-300",
{
variants: {
size: {
sm: "text-xs",
md: "text-sm",
lg: "text-base",
},
},
defaultVariants: {
size: "md",
},
},
);
const statIconVariants = cva("text-zinc-300 dark:text-zinc-300", {
variants: {
size: {
sm: "h-2.5 w-2.5",
md: "h-3 w-3",
lg: "h-3.5 w-3.5",
},
},
defaultVariants: {
size: "md",
},
});
const statLabelVariants = cva(
"leading-5 font-medium text-zinc-300 dark:text-zinc-300",
{
variants: {
size: {
sm: "text-xs",
md: "text-sm",
lg: "text-base",
},
},
defaultVariants: {
size: "md",
},
},
);
export interface ResourceStatsCardContentProps
extends React.HTMLAttributes<HTMLDivElement> {
badge: {
icon: LucideIcon;
count: number | string;
variant?: CardVariant;
};
label: string;
stats?: StatItem[];
accentColor?: string;
size?: "sm" | "md" | "lg";
ref?: React.Ref<HTMLDivElement>;
}
export const ResourceStatsCardContent = ({
badge,
label,
stats = [],
accentColor,
size = "md",
className,
ref,
...props
}: ResourceStatsCardContentProps) => {
const BadgeIcon = badge.icon;
const badgeVariant: BadgeVariant = badge.variant || "fail";
// Determine accent line color
const lineColor = accentColor || variantColors[badgeVariant] || "#d4d4d8";
return (
<div
ref={ref}
className={cn("flex flex-col gap-[5px]", className)}
{...props}
>
{/* Badge and Label Row */}
<div className="flex w-full items-center gap-1">
{/* Badge */}
<div className={cn(badgeVariants({ variant: badgeVariant, size }))}>
<BadgeIcon
className={badgeIconVariants({ size })}
strokeWidth={2.5}
style={{ color: variantColors[badgeVariant] }}
/>
<span
className="leading-6 font-bold"
style={{ color: variantColors[badgeVariant] }}
>
{badge.count}
</span>
</div>
{/* Label */}
<span className={labelTextVariants({ size })}>{label}</span>
</div>
{/* Stats Section */}
{stats.length > 0 && (
<div className="flex w-full items-stretch gap-0">
{/* Vertical Accent Line */}
<div className="flex items-stretch px-3 py-1">
<div
className="w-px rounded-full"
style={{ backgroundColor: lineColor }}
/>
</div>
{/* Stats List */}
<div className="flex flex-1 flex-col gap-0.5">
{stats.map((stat, index) => {
const StatIcon = stat.icon;
return (
<div key={index} className="flex items-center gap-1">
<StatIcon
className={statIconVariants({ size })}
strokeWidth={2}
/>
<span className={statLabelVariants({ size })}>
{stat.label}
</span>
</div>
);
})}
</div>
</div>
)}
</div>
);
};
ResourceStatsCardContent.displayName = "ResourceStatsCardContent";
@@ -0,0 +1,59 @@
import { cva, type VariantProps } from "class-variance-authority";
import { cn } from "@/lib/utils";
const dividerVariants = cva("flex items-center justify-center", {
variants: {
spacing: {
sm: "px-2",
md: "px-[23px]",
lg: "px-8",
},
orientation: {
vertical: "h-full",
horizontal: "w-full",
},
},
defaultVariants: {
spacing: "md",
orientation: "vertical",
},
});
const lineVariants = cva("bg-[rgba(39,39,42,1)]", {
variants: {
orientation: {
vertical: "h-full w-px",
horizontal: "w-full h-px",
},
},
defaultVariants: {
orientation: "vertical",
},
});
export interface ResourceStatsCardDividerProps
extends React.HTMLAttributes<HTMLDivElement>,
VariantProps<typeof dividerVariants> {
ref?: React.Ref<HTMLDivElement>;
}
export const ResourceStatsCardDivider = ({
className,
spacing,
orientation,
ref,
...props
}: ResourceStatsCardDividerProps) => {
return (
<div
ref={ref}
className={cn(dividerVariants({ spacing, orientation }), className)}
{...props}
>
<div className={lineVariants({ orientation })} />
</div>
);
};
ResourceStatsCardDivider.displayName = "ResourceStatsCardDivider";
@@ -0,0 +1,103 @@
import { cva, type VariantProps } from "class-variance-authority";
import { LucideIcon } from "lucide-react";
import { cn } from "@/lib/utils";
const headerVariants = cva("flex w-full items-center gap-1", {
variants: {
size: {
sm: "",
md: "",
lg: "",
},
},
defaultVariants: {
size: "md",
},
});
const iconVariants = cva("text-zinc-300 dark:text-zinc-300", {
variants: {
size: {
sm: "h-3.5 w-3.5",
md: "h-4 w-4",
lg: "h-5 w-5",
},
},
defaultVariants: {
size: "md",
},
});
const titleVariants = cva(
"leading-7 font-semibold text-zinc-300 dark:text-zinc-300",
{
variants: {
size: {
sm: "text-sm",
md: "text-base",
lg: "text-lg",
},
},
defaultVariants: {
size: "md",
},
},
);
const countVariants = cva(
"leading-4 font-normal text-zinc-300 dark:text-zinc-300",
{
variants: {
size: {
sm: "text-[9px]",
md: "text-[10px]",
lg: "text-xs",
},
},
defaultVariants: {
size: "md",
},
},
);
export interface ResourceStatsCardHeaderProps
extends React.HTMLAttributes<HTMLDivElement>,
VariantProps<typeof headerVariants> {
icon: LucideIcon;
title: string;
resourceCount?: number | string;
ref?: React.Ref<HTMLDivElement>;
}
export const ResourceStatsCardHeader = ({
icon: Icon,
title,
resourceCount,
size = "md",
className,
ref,
...props
}: ResourceStatsCardHeaderProps) => {
return (
<div
ref={ref}
className={cn(headerVariants({ size }), className)}
{...props}
>
<div className="flex flex-1 items-center gap-1">
<Icon className={iconVariants({ size })} strokeWidth={2} />
<span className={titleVariants({ size })}>{title}</span>
</div>
{resourceCount !== undefined && (
<span className={countVariants({ size })}>
{typeof resourceCount === "number"
? `${resourceCount} Resources`
: resourceCount}
</span>
)}
</div>
);
};
ResourceStatsCardHeader.displayName = "ResourceStatsCardHeader";
@@ -0,0 +1,164 @@
import { cva, type VariantProps } from "class-variance-authority";
import { LucideIcon } from "lucide-react";
import { cn } from "@/lib/utils";
import { ResourceStatsCardContainer } from "./resource-stats-card-container";
import type { StatItem } from "./resource-stats-card-content";
import {
CardVariant,
ResourceStatsCardContent,
} from "./resource-stats-card-content";
import { ResourceStatsCardHeader } from "./resource-stats-card-header";
export type { StatItem };
// Todo: when the design system is ready, we must use the colors from the design system (semantic colors)
// Variant styles using CVA for type safety and consistency
// Colors are exact HEX values from Figma design system
const cardVariants = cva("", {
variants: {
variant: {
[CardVariant.default]: "",
// Fail variant - rgba(67,34,50) from Figma
[CardVariant.fail]:
"border-[rgba(67,34,50,0.5)] bg-[rgba(67,34,50,0.2)] dark:border-[rgba(67,34,50,0.7)] dark:bg-[rgba(67,34,50,0.3)]",
// Pass variant - rgba(32,66,55) from Figma
[CardVariant.pass]:
"border-[rgba(32,66,55,0.5)] bg-[rgba(32,66,55,0.2)] dark:border-[rgba(32,66,55,0.7)] dark:bg-[rgba(32,66,55,0.3)]",
// Warning variant - rgba(61,53,32) from Figma
[CardVariant.warning]:
"border-[rgba(61,53,32,0.5)] bg-[rgba(61,53,32,0.2)] dark:border-[rgba(61,53,32,0.7)] dark:bg-[rgba(61,53,32,0.3)]",
// Info variant - rgba(30,58,95) from Figma
[CardVariant.info]:
"border-[rgba(30,58,95,0.5)] bg-[rgba(30,58,95,0.2)] dark:border-[rgba(30,58,95,0.7)] dark:bg-[rgba(30,58,95,0.3)]",
},
size: {
sm: "px-2 py-1.5 gap-1",
md: "px-3 py-2 gap-2",
lg: "px-4 py-3 gap-3",
},
},
defaultVariants: {
variant: CardVariant.default,
size: "md",
},
});
export interface ResourceStatsCardProps
extends Omit<React.HTMLAttributes<HTMLDivElement>, "color">,
VariantProps<typeof cardVariants> {
// Optional header (icon + title + resource count)
header?: {
icon: LucideIcon;
title: string;
resourceCount?: number | string;
};
// Empty state message (when there's no data to display)
emptyState?: {
message: string;
};
// Main badge (top section) - optional when using empty state
badge?: {
icon: LucideIcon;
count: number | string;
variant?: CardVariant;
};
// Main label - optional when using empty state
label?: string;
// Vertical accent line color (optional, auto-determined from variant)
accentColor?: string;
// Sub-statistics array (flexible items)
stats?: StatItem[];
// Render without container (no border, background, padding) - useful for composing multiple cards in a custom container
containerless?: boolean;
// Ref for the root element
ref?: React.Ref<HTMLDivElement>;
}
export const ResourceStatsCard = ({
header,
emptyState,
badge,
label,
accentColor,
stats = [],
variant = CardVariant.default,
size = "md",
containerless = false,
className,
ref,
...props
}: ResourceStatsCardProps) => {
// Resolve size to ensure it's not null (CVA can return null but we need a defined value)
const resolvedSize = size || "md";
// If containerless, render without outer wrapper
if (containerless) {
return (
<div
ref={ref}
className={cn("flex flex-col gap-[5px]", className)}
{...props}
>
{header && <ResourceStatsCardHeader {...header} size={resolvedSize} />}
{emptyState ? (
<div className="flex h-[51px] w-full flex-col items-center justify-center">
<p className="text-center text-sm leading-5 font-medium text-zinc-300 dark:text-zinc-300">
{emptyState.message}
</p>
</div>
) : (
badge &&
label && (
<ResourceStatsCardContent
badge={badge}
label={label}
stats={stats}
accentColor={accentColor}
size={resolvedSize}
/>
)
)}
</div>
);
}
// Otherwise, render with container
return (
<ResourceStatsCardContainer
ref={ref}
className={cn(cardVariants({ variant, size }), "flex-col", className)}
{...props}
>
{header && <ResourceStatsCardHeader {...header} size={resolvedSize} />}
{emptyState ? (
<div className="flex h-[51px] w-full flex-col items-center justify-center">
<p className="text-center text-sm leading-5 font-medium text-zinc-300 dark:text-zinc-300">
{emptyState.message}
</p>
</div>
) : (
badge &&
label && (
<ResourceStatsCardContent
badge={badge}
label={label}
stats={stats}
accentColor={accentColor}
size={resolvedSize}
/>
)
)}
</ResourceStatsCardContainer>
);
};
ResourceStatsCard.displayName = "ResourceStatsCard";
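For reference, a minimal usage sketch based on the `ResourceStatsCardProps` interface above; the icons, counts, and labels are placeholder values, and the import path assumes the `@/components/shadcn` barrel export:
```tsx
import { Server, ShieldAlert, ShieldCheck } from "lucide-react";
import { ResourceStatsCard } from "@/components/shadcn";

// Placeholder data: exercises the header, badge, label, and stats props.
export function ExampleResourceStats() {
  return (
    <ResourceStatsCard
      variant="fail"
      header={{ icon: Server, title: "EC2", resourceCount: 42 }}
      badge={{ icon: ShieldAlert, count: 7, variant: "fail" }}
      label="Failed checks"
      stats={[
        { icon: ShieldAlert, label: "3 critical" },
        { icon: ShieldCheck, label: "4 resolved" },
      ]}
    />
  );
}
```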
+8
@@ -399,6 +399,14 @@
"strategy": "installed",
"generatedAt": "2025-09-10T11:50:17.548Z"
},
{
"section": "dependencies",
"name": "tw-animate-css",
"from": "1.4.0",
"to": "1.4.0",
"strategy": "installed",
"generatedAt": "2025-10-15T07:57:13.225Z"
},
{
"section": "dependencies",
"name": "uuid",
+394 -18
File diff suppressed because it is too large.
+7 -6
@@ -15,10 +15,10 @@
"format:check": "./node_modules/.bin/prettier --check ./app",
"format:write": "./node_modules/.bin/prettier --config .prettierrc.json --write ./app",
"prepare": "husky",
"test:e2e": "playwright test --project=chromium --project=sign-up --project=providers --project=invitations",
"test:e2e:ui": "playwright test --project=chromium --project=sign-up --project=providers --project=invitations --ui",
"test:e2e:debug": "playwright test --project=chromium --project=sign-up --project=providers --project=invitations --debug",
"test:e2e:headed": "playwright test --project=chromium --project=sign-up --project=providers --project=invitations --headed",
"test:e2e": "playwright test --project=chromium",
"test:e2e:ui": "playwright test --project=chromium --ui",
"test:e2e:debug": "playwright test --project=chromium --debug",
"test:e2e:headed": "playwright test --project=chromium --headed",
"test:e2e:report": "playwright show-report",
"test:e2e:install": "playwright install"
},
@@ -69,17 +69,17 @@
"recharts": "2.15.4",
"rss-parser": "3.13.0",
"server-only": "0.0.1",
"shadcn": "3.2.1",
"sharp": "0.33.5",
"tailwind-merge": "3.3.1",
"tailwindcss-animate": "1.0.7",
"tw-animate-css": "1.4.0",
"uuid": "11.1.0",
"zod": "4.1.11",
"zustand": "5.0.8"
},
"devDependencies": {
"@iconify/react": "5.2.1",
"@playwright/test": "1.53.2",
"@playwright/test": "1.56.1",
"@types/node": "20.5.7",
"@types/react": "19.1.13",
"@types/react-dom": "19.1.9",
@@ -105,6 +105,7 @@
"postcss": "8.4.38",
"prettier": "3.6.2",
"prettier-plugin-tailwindcss": "0.6.14",
"shadcn": "3.4.1",
"tailwind-variants": "0.1.20",
"tailwindcss": "4.1.13",
"typescript": "5.5.4"
-18
@@ -89,24 +89,6 @@ export default defineConfig({
{
name: "chromium",
use: { ...devices["Desktop Chrome"] },
testMatch: "auth-login.spec.ts",
},
// This project runs the sign-up test suite
{
name: "sign-up",
testMatch: "sign-up.spec.ts",
},
// This project runs the providers test suite
{
name: "providers",
testMatch: "providers.spec.ts",
dependencies: ["admin.auth.setup"],
},
// This project runs the invitations test suite
{
name: "invitations",
testMatch: "invitations.spec.ts",
dependencies: ["admin.auth.setup"],
},
],
@@ -20,7 +20,7 @@ import {
ERROR_MESSAGES,
URLS,
verifyLoadingState,
} from "../helpers";
} from "./helpers";
test.describe("Login Flow", () => {
test.beforeEach(async ({ page }) => {
-159
@@ -1,159 +0,0 @@
import { Page, Locator, expect } from "@playwright/test";
/**
* Base page object class containing common functionality
* that can be shared across all page objects
*/
export abstract class BasePage {
readonly page: Page;
// Common UI elements that appear on most pages
readonly title: Locator;
readonly loadingIndicator: Locator;
readonly themeToggle: Locator;
constructor(page: Page) {
this.page = page;
// Common locators that most pages share
this.title = page.locator("h1, h2, [role='heading']").first();
this.loadingIndicator = page.getByRole("status", { name: "Loading" });
this.themeToggle = page.getByRole("button", { name: "Toggle theme" });
}
// Common navigation methods
async goto(url: string): Promise<void> {
await this.page.goto(url);
await this.waitForPageLoad();
}
async waitForPageLoad(): Promise<void> {
await this.page.waitForLoadState("networkidle");
}
async refresh(): Promise<void> {
await this.page.reload();
await this.waitForPageLoad();
}
async goBack(): Promise<void> {
await this.page.goBack();
await this.waitForPageLoad();
}
// Common verification methods
async verifyPageTitle(expectedTitle: string | RegExp): Promise<void> {
await expect(this.page).toHaveTitle(expectedTitle);
}
async verifyLoadingState(): Promise<void> {
await expect(this.loadingIndicator).toBeVisible();
}
async verifyNoLoadingState(): Promise<void> {
await expect(this.loadingIndicator).not.toBeVisible();
}
// Common form interaction methods
async clearInput(input: Locator): Promise<void> {
await input.clear();
}
async fillInput(input: Locator, value: string): Promise<void> {
await input.fill(value);
}
async clickButton(button: Locator): Promise<void> {
await button.click();
}
// Common validation methods
async verifyElementVisible(element: Locator): Promise<void> {
await expect(element).toBeVisible();
}
async verifyElementNotVisible(element: Locator): Promise<void> {
await expect(element).not.toBeVisible();
}
async verifyElementText(element: Locator, expectedText: string): Promise<void> {
await expect(element).toHaveText(expectedText);
}
async verifyElementContainsText(element: Locator, expectedText: string): Promise<void> {
await expect(element).toContainText(expectedText);
}
// Common accessibility methods
async verifyKeyboardNavigation(elements: Locator[]): Promise<void> {
for (const element of elements) {
await this.page.keyboard.press("Tab");
await expect(element).toBeFocused();
}
}
async verifyAriaLabels(elements: { locator: Locator; expectedLabel: string }[]): Promise<void> {
for (const { locator, expectedLabel } of elements) {
await expect(locator).toHaveAttribute("aria-label", expectedLabel);
}
}
// Common utility methods
async getElementText(element: Locator): Promise<string> {
return await element.textContent() || "";
}
async getElementValue(element: Locator): Promise<string> {
return await element.inputValue();
}
async isElementVisible(element: Locator): Promise<boolean> {
return await element.isVisible();
}
async isElementEnabled(element: Locator): Promise<boolean> {
return await element.isEnabled();
}
// Common error handling methods
async getFormErrors(): Promise<string[]> {
const errorElements = await this.page.locator('[role="alert"], .error-message, [data-testid="error"]').all();
const errors: string[] = [];
for (const element of errorElements) {
const text = await element.textContent();
if (text) {
errors.push(text.trim());
}
}
return errors;
}
async verifyNoErrors(): Promise<void> {
const errors = await this.getFormErrors();
expect(errors).toHaveLength(0);
}
// Common wait methods
async waitForElement(element: Locator, timeout: number = 5000): Promise<void> {
await element.waitFor({ timeout });
}
async waitForElementToDisappear(element: Locator, timeout: number = 5000): Promise<void> {
await element.waitFor({ state: "hidden", timeout });
}
async waitForUrl(expectedUrl: string | RegExp, timeout: number = 5000): Promise<void> {
await this.page.waitForURL(expectedUrl, { timeout });
}
// Common screenshot methods
async takeScreenshot(name: string): Promise<void> {
await this.page.screenshot({ path: `screenshots/${name}.png` });
}
async takeElementScreenshot(element: Locator, name: string): Promise<void> {
await element.screenshot({ path: `screenshots/${name}.png` });
}
}
+2 -17
@@ -1,6 +1,5 @@
import { Page, expect } from "@playwright/test";
import { SignInPage, SignInCredentials } from "./sign-in/sign-in-page";
import { ProvidersPage } from "./providers/providers-page";
import { SignInPage, SignInCredentials } from "./page-objects/sign-in-page";
export const ERROR_MESSAGES = {
INVALID_CREDENTIALS: "Invalid email or password",
@@ -147,9 +146,7 @@ export async function authenticateAndSaveState(
storagePath: string,
) {
if (!email || !password) {
throw new Error(
"Email and password are required for authentication and save state",
);
throw new Error('Email and password are required for authentication and save state');
}
// Create SignInPage instance
@@ -165,18 +162,6 @@ export async function authenticateAndSaveState(
await page.context().storageState({ path: storagePath });
}
/**
* Generate a random base36 suffix of specified length
* Used for creating unique test data to avoid conflicts
*/
export function makeSuffix(len: number): string {
let s = "";
while (s.length < len) {
s += Math.random().toString(36).slice(2);
}
return s.slice(0, len);
}
export async function getSession(page: Page) {
const response = await page.request.get("/api/auth/session");
return response.json();
-113
@@ -1,113 +0,0 @@
import { Page, Locator, expect } from "@playwright/test";
import { BasePage } from "../base-page";
export class InvitationsPage extends BasePage {
// Page heading
readonly pageHeadingSendInvitation: Locator;
readonly pageHeadingInvitations: Locator;
// UI elements
readonly sendInviteButton: Locator;
readonly emailInput: Locator;
readonly roleSelect: Locator;
// Invitation details
readonly invitationDetails: Locator;
readonly shareUrl: Locator;
constructor(page: Page) {
super(page);
// Page heading
this.pageHeadingInvitations = page.getByRole("heading", { name: "Invitations" });
this.pageHeadingSendInvitation = page.getByRole("heading", { name: "Send Invitation" });
// Button to invite a new user
this.sendInviteButton = page.getByRole("button", { name: "Send Invitation", exact: true });
// Form inputs
this.emailInput = page.getByRole("textbox", { name: "Email" });
// Form select
this.roleSelect = page.getByRole("button", { name: /Role|Select a role/i });
// Form details
this.invitationDetails = page.getByRole('heading', { name: 'Invitation details' });
// Multiple strategies to find the share URL
this.shareUrl = page.locator('a[href*="/sign-up?invitation_token="], [data-testid="share-url"], .share-url, code, pre').first();
}
async goto(): Promise<void> {
// Navigate to the invitations page
await super.goto("/invitations");
}
async clickSendInviteButton(): Promise<void> {
// Click the send invitation button
await this.sendInviteButton.click();
await this.waitForPageLoad();
}
async verifyPageLoaded(): Promise<void> {
// Verify the invitations page is loaded
await expect(this.pageHeadingInvitations).toBeVisible();
await this.waitForPageLoad();
}
async verifyInvitePageLoaded(): Promise<void> {
// Verify the invite page is loaded
await expect(this.emailInput).toBeVisible();
await expect(this.sendInviteButton).toBeVisible();
await this.waitForPageLoad();
}
async fillEmail(email: string): Promise<void> {
// Fill the email input
await this.emailInput.fill(email);
}
async selectRole(role: string): Promise<void> {
// Select the role option
// Open the role dropdown
await this.roleSelect.click();
// Prefer ARIA role option inside listbox
const option = this.page.getByRole("option", { name: new RegExp(`^${role}$`, "i") });
if (await option.count()) {
await option.first().click();
} else {
throw new Error(`Role option ${role} not found`);
}
// Ensure the combobox now shows the chosen value
await expect(this.roleSelect).toContainText(new RegExp(role, "i"));
}
async verifyInviteDataPageLoaded(): Promise<void> {
// Verify the invite data page is loaded
await expect(this.invitationDetails).toBeVisible();
await this.waitForPageLoad();
}
async getShareUrl(): Promise<string> {
// Get the share url
// Get the share url text content
const text = await this.shareUrl.textContent();
if (!text) {
throw new Error("Share url not found");
}
return text;
}
}
-66
@@ -1,66 +0,0 @@
### E2E Tests: Invitations Management
**Suite ID:** `INVITATION-E2E`
**Feature:** User Invitations.
---
## Test Case: `INVITATION-E2E-001` - Invite New User and Complete Sign-Up
**Priority:** `critical`
**Tags:**
- type → @e2e
- feature → @invitations
- id → @INVITATION-E2E-001
**Description/Objective:** Validates the full flow to invite a new user from the admin session, consume the invitation link, sign up as the invited user, authenticate, and verify the user is associated to the expected organization.
**Preconditions:**
- Admin authentication state available: `playwright/.auth/admin_user.json` (admin.auth.setup)
- Environment variables configured:
- `E2E_NEW_USER_PASSWORD` (password for the invited user)
- `E2E_ORGANIZATION_ID` (expected organization for membership verification)
- Application running with accessible UI/API endpoints
### Flow Steps:
1. Navigate to invitations page
2. Click "Send Invitation" button
3. Fill unique email address for the invite
4. Select role `e2e_admin`
5. Click "Send Invitation" to generate invitation
6. Read the generated share URL from the invitation details
7. Open a new browser context (no admin cookies) and navigate to the share URL
8. Complete sign-up with provided password and accept terms
9. Verify sign-up success (no errors) and redirect to login page
10. Log in with the newly created credentials in the new context
11. Verify successful login
12. Navigate to user profile and verify `organizationId` matches `E2E_ORGANIZATION_ID`
### Expected Result:
- Invitation is created and a valid share URL is provided
- Invited user can sign up successfully using the invitation link
- User is redirected to the login page after sign-up (OSS flow)
- Login succeeds with the new credentials
- User profile shows membership in the expected organization
### Key verification points:
- Invitations page loads and displays the heading
- Send Invitation form is visible (email + role select)
- Invitation details page shows share URL
- Sign-up page loads from invitation link and submits without errors
- Post sign-up, redirect to login is performed
- Login with the new account succeeds
- Profile page shows the expected organization id
### Notes:
- Test uses a fresh browser context for the invitee to avoid admin session leakage
- Email should be unique per run (the test uses a random suffix)
- Ensure `E2E_NEW_USER_PASSWORD` and `E2E_ORGANIZATION_ID` are set before execution
- The role `e2e_admin` must be available in the environment
-105
@@ -1,105 +0,0 @@
import { test } from "@playwright/test";
import { InvitationsPage } from "./invitations-page";
import { makeSuffix } from "../helpers";
import { SignUpPage } from "../sign-up/sign-up-page";
import { SignInPage } from "../sign-in/sign-in-page";
import { UserProfilePage } from "../profile/profile-page";
test.describe("New user invitation", () => {
// Invitations page object
let invitationsPage: InvitationsPage;
// Setup before each test
test.beforeEach(async ({ page }) => {
invitationsPage = new InvitationsPage(page);
});
// Use admin authentication for invitations management
test.use({ storageState: "playwright/.auth/admin_user.json" });
test(
"should invite a new user",
{
tag: ["@critical", "@e2e", "@invitations", "@INVITATION-E2E-001"],
},
async ({ page, browser }) => {
// Test data from environment variables
const password = process.env.E2E_NEW_USER_PASSWORD;
const organizationId = process.env.E2E_ORGANIZATION_ID;
// Validate required environment variables
if (!password || !organizationId) {
throw new Error(
"E2E_NEW_USER_PASSWORD or E2E_ORGANIZATION_ID environment variable is not set",
);
}
// Generate unique test data
const suffix = makeSuffix(10);
const uniqueEmail = `e2e+${suffix}@prowler.com`;
// Navigate to providers page
await invitationsPage.goto();
await invitationsPage.verifyPageLoaded();
// Press the invite button
await invitationsPage.clickSendInviteButton();
await invitationsPage.verifyInvitePageLoaded();
// Fill the email
await invitationsPage.fillEmail(uniqueEmail);
// Select the role option
await invitationsPage.selectRole("e2e_admin");
// Press the send invitation button
await invitationsPage.clickSendInviteButton();
await invitationsPage.verifyInviteDataPageLoaded();
// Get the share url
const shareUrl = await invitationsPage.getShareUrl();
// Navigate to the share url with a new context to avoid cookies from the admin context
const inviteContext = await browser.newContext({ storageState: { cookies: [], origins: [] } });
const signUpPage = new SignUpPage(await inviteContext.newPage());
// Navigate to the share url
await signUpPage.gotoInvite(shareUrl);
// Fill and submit the sign-up form
await signUpPage.signup({
name: `E2E User ${suffix}`,
email: uniqueEmail,
password: password,
confirmPassword: password,
acceptTerms: true,
});
// Verify no errors occurred during sign-up
await signUpPage.verifyNoErrors();
// Verify redirect to login page (OSS environment)
await signUpPage.verifyRedirectToLogin();
// Verify the newly created user can log in successfully with the new context
const signInPage = new SignInPage(await inviteContext.newPage());
await signInPage.goto();
await signInPage.login({
email: uniqueEmail,
password: password,
});
await signInPage.verifySuccessfulLogin();
// Navigate to the user profile page
const userProfilePage = new UserProfilePage(await inviteContext.newPage());
await userProfilePage.goto();
// Verify if user is added to the organization
await userProfilePage.verifyOrganizationId(organizationId);
// Close the invite context
await inviteContext.close();
},
);
});
@@ -1,7 +1,7 @@
import { Page, Locator, expect } from "@playwright/test";
import { BasePage } from "../base-page";
export class HomePage extends BasePage {
export class HomePage {
readonly page: Page;
// Main content elements
readonly mainContent: Locator;
@@ -18,14 +18,15 @@ export class HomePage extends BasePage {
readonly overviewSection: Locator;
// UI elements
readonly themeToggle: Locator;
readonly logo: Locator;
constructor(page: Page) {
super(page);
this.page = page;
// Main content elements
this.mainContent = page.locator("main");
this.breadcrumbs = page.getByRole("navigation", { name: "Breadcrumbs" });
this.breadcrumbs = page.getByLabel("Breadcrumbs");
this.overviewHeading = page.getByRole("heading", { name: "Overview", exact: true });
// Navigation elements
@@ -38,12 +39,18 @@ export class HomePage extends BasePage {
this.overviewSection = page.locator('[data-testid="overview-section"]');
// UI elements
this.themeToggle = page.getByLabel("Toggle theme");
this.logo = page.locator('svg[width="300"]');
}
// Navigation methods
async goto(): Promise<void> {
await super.goto("/");
await this.page.goto("/");
await this.waitForPageLoad();
}
async waitForPageLoad(): Promise<void> {
await this.page.waitForLoadState("networkidle");
}
// Verification methods
@@ -51,6 +58,7 @@ export class HomePage extends BasePage {
await expect(this.page).toHaveURL("/");
await expect(this.mainContent).toBeVisible();
await expect(this.overviewHeading).toBeVisible();
await this.waitForPageLoad();
}
async verifyBreadcrumbs(): Promise<void> {
@@ -86,6 +94,15 @@ export class HomePage extends BasePage {
}
// Utility methods
async refresh(): Promise<void> {
await this.page.reload();
await this.waitForPageLoad();
}
async goBack(): Promise<void> {
await this.page.goBack();
await this.waitForPageLoad();
}
// Accessibility methods
async verifyKeyboardNavigation(): Promise<void> {
@@ -94,6 +111,11 @@ export class HomePage extends BasePage {
await expect(this.themeToggle).toBeFocused();
}
// Wait methods
async waitForRedirect(expectedUrl: string): Promise<void> {
await this.page.waitForURL(expectedUrl);
}
async waitForContentLoad(): Promise<void> {
await this.page.waitForFunction(() => {
const main = document.querySelector("main");
@@ -1,6 +1,5 @@
import { Page, Locator, expect } from "@playwright/test";
import { BasePage } from "../base-page";
import { HomePage } from "../home/home-page";
import { HomePage } from "./home-page";
export interface SignInCredentials {
email: string;
@@ -12,7 +11,8 @@ export interface SocialAuthConfig {
githubEnabled: boolean;
}
export class SignInPage extends BasePage {
export class SignInPage {
readonly page: Page;
readonly homePage: HomePage;
// Form elements
@@ -31,48 +31,59 @@ export class SignInPage extends BasePage {
readonly backButton: Locator;
// UI elements
readonly title: Locator;
readonly logo: Locator;
readonly themeToggle: Locator;
// Error messages
readonly errorMessages: Locator;
readonly loadingIndicator: Locator;
// SAML specific elements
readonly samlModeTitle: Locator;
readonly samlEmailInput: Locator;
constructor(page: Page) {
super(page);
this.page = page;
this.homePage = new HomePage(page);
// Form elements
this.emailInput = page.getByRole("textbox", { name: "Email" });
this.passwordInput = page.getByRole("textbox", { name: "Password" });
this.emailInput = page.getByLabel("Email");
this.passwordInput = page.getByLabel("Password");
this.loginButton = page.getByRole("button", { name: "Log in" });
this.form = page.locator("form");
// Social authentication buttons
this.googleButton = page.getByRole("button", { name: "Continue with Google" });
this.githubButton = page.getByRole("button", { name: "Continue with Github" });
this.samlButton = page.getByRole("button", { name: "Continue with SAML SSO" });
this.googleButton = page.getByText("Continue with Google");
this.githubButton = page.getByText("Continue with Github");
this.samlButton = page.getByText("Continue with SAML SSO");
// Navigation elements
this.signUpLink = page.getByRole("link", { name: "Sign up" });
this.backButton = page.getByRole("button", { name: "Back" });
this.backButton = page.getByText("Back");
// UI elements
this.title = page.getByText("Sign in", { exact: true });
this.logo = page.locator('svg[width="300"]');
this.themeToggle = page.getByLabel("Toggle theme");
// Error messages
this.errorMessages = page.locator('[role="alert"], .error-message, [data-testid="error"]');
this.loadingIndicator = page.getByText("Loading");
// SAML specific elements
this.samlModeTitle = page.getByRole("heading", { name: "Sign in with SAML SSO" });
this.samlEmailInput = page.getByRole("textbox", { name: "Email" });
this.samlModeTitle = page.getByText("Sign in with SAML SSO");
this.samlEmailInput = page.getByLabel("Email");
}
// Navigation methods
async goto(): Promise<void> {
await super.goto("/sign-in");
await this.page.goto("/sign-in");
await this.waitForPageLoad();
}
async waitForPageLoad(): Promise<void> {
await this.page.waitForLoadState("networkidle");
}
// Form interaction methods
@@ -137,7 +148,8 @@ export class SignInPage extends BasePage {
async verifyPageLoaded(): Promise<void> {
await expect(this.page).toHaveTitle(/Prowler/);
await expect(this.logo).toBeVisible();
await expect(this.page.getByRole("heading", { name: "Sign in", exact: true })).toBeVisible();
await expect(this.title).toBeVisible();
await this.waitForPageLoad();
}
async verifyFormElements(): Promise<void> {
@@ -157,7 +169,7 @@ export class SignInPage extends BasePage {
}
async verifyNavigationLinks(): Promise<void> {
await expect(this.page.getByRole('link', { name: /Need to create an account\?/i })).toBeVisible();
await expect(this.page.getByText("Need to create an account?")).toBeVisible();
await expect(this.signUpLink).toBeVisible();
}
@@ -166,7 +178,7 @@ export class SignInPage extends BasePage {
}
async verifyLoginError(errorMessage: string = "Invalid email or password"): Promise<void> {
await expect(this.page.getByRole("alert", { name: errorMessage })).toBeVisible();
await expect(this.page.getByText(errorMessage).first()).toBeVisible();
await expect(this.page).toHaveURL("/sign-in");
}
@@ -177,19 +189,19 @@ export class SignInPage extends BasePage {
}
async verifyNormalModeActive(): Promise<void> {
await expect(this.page.getByRole("heading", { name: "Sign in", exact: true })).toBeVisible();
await expect(this.title).toBeVisible();
await expect(this.passwordInput).toBeVisible();
}
async verifyLoadingState(): Promise<void> {
await expect(this.loginButton).toHaveAttribute("aria-disabled", "true");
await super.verifyLoadingState();
await expect(this.loadingIndicator).toBeVisible();
}
async verifyFormValidation(): Promise<void> {
// Check for common validation messages
const emailError = this.page.getByRole("alert", { name: "Please enter a valid email address." });
const passwordError = this.page.getByRole("alert", { name: "Password is required." });
const emailError = this.page.getByText("Please enter a valid email address.");
const passwordError = this.page.getByText("Password is required.");
// At least one validation error should be visible
await expect(emailError.or(passwordError)).toBeVisible();
@@ -228,7 +240,30 @@ export class SignInPage extends BasePage {
return emailValue.length > 0 && passwordValue.length > 0;
}
async getFormErrors(): Promise<string[]> {
const errorElements = await this.errorMessages.all();
const errors: string[] = [];
for (const element of errorElements) {
const text = await element.textContent();
if (text) {
errors.push(text.trim());
}
}
return errors;
}
// Browser interaction methods
async refresh(): Promise<void> {
await this.page.reload();
await this.waitForPageLoad();
}
async goBack(): Promise<void> {
await this.page.goBack();
await this.waitForPageLoad();
}
// Session management methods
async logout(): Promise<void> {
@@ -237,7 +272,7 @@ export class SignInPage extends BasePage {
async verifyLogoutSuccess(): Promise<void> {
await expect(this.page).toHaveURL("/sign-in");
await expect(this.page.getByRole("heading", { name: "Sign in", exact: true })).toBeVisible();
await expect(this.title).toBeVisible();
}
// Advanced interaction methods
@@ -260,7 +295,7 @@ export class SignInPage extends BasePage {
// Error handling methods
async handleSamlError(): Promise<void> {
const samlError = this.page.getByRole("alert", { name: "SAML Authentication Error" });
const samlError = this.page.getByText("SAML Authentication Error");
if (await samlError.isVisible()) {
// Handle SAML error if present
console.log("SAML authentication error detected");
-32
@@ -1,32 +0,0 @@
import { Page, Locator, expect } from "@playwright/test";
import { BasePage } from "../base-page";
export class UserProfilePage extends BasePage {
// Page heading
readonly pageHeadingUserProfile: Locator;
constructor(page: Page) {
super(page);
// Page heading
this.pageHeadingUserProfile = page.getByRole("heading", { name: "User Profile" });
}
async goto(): Promise<void> {
// Navigate to the user profile page
await super.goto("/profile");
}
async verifyOrganizationId(organizationId: string): Promise<void> {
// Verify the organization ID is visible
await expect(this.page.getByText(organizationId)).toBeVisible();
}
}
@@ -1,957 +0,0 @@
import { Page, Locator, expect } from "@playwright/test";
import { BasePage } from "../base-page";
// AWS provider data
export interface AWSProviderData {
accountId: string;
alias?: string;
}
// AZURE provider data
export interface AZUREProviderData {
subscriptionId: string;
alias?: string;
}
// M365 provider data
export interface M365ProviderData {
domainId: string;
alias?: string;
}
// Kubernetes provider data
export interface KubernetesProviderData {
context: string;
alias?: string;
}
// GCP provider data
export interface GCPProviderData {
projectId: string;
alias?: string;
}
// GitHub provider data
export interface GitHubProviderData {
username: string;
alias?: string;
}
// AWS credential options
export const AWS_CREDENTIAL_OPTIONS = {
AWS_ROLE_ARN: "role",
AWS_CREDENTIALS: "credentials"
} as const;
// AWS credential type
type AWSCredentialType = (typeof AWS_CREDENTIAL_OPTIONS)[keyof typeof AWS_CREDENTIAL_OPTIONS];
// AWS provider credential
export interface AWSProviderCredential {
type: AWSCredentialType;
roleArn?: string;
externalId?: string;
accessKeyId?: string;
secretAccessKey?: string;
}
// AZURE credential options
export const AZURE_CREDENTIAL_OPTIONS = {
AZURE_CREDENTIALS: "credentials"
} as const;
// AZURE credential type
type AZURECredentialType = (typeof AZURE_CREDENTIAL_OPTIONS)[keyof typeof AZURE_CREDENTIAL_OPTIONS];
// AZURE provider credential
export interface AZUREProviderCredential {
type: AZURECredentialType;
clientId: string;
clientSecret: string;
tenantId: string;
}
// M365 credential options
export const M365_CREDENTIAL_OPTIONS = {
M365_CREDENTIALS: "credentials",
M365_CERTIFICATE_CREDENTIALS: "certificate"
} as const;
// M365 credential type
type M365CredentialType = (typeof M365_CREDENTIAL_OPTIONS)[keyof typeof M365_CREDENTIAL_OPTIONS];
// M365 provider credential
export interface M365ProviderCredential {
type: M365CredentialType;
clientId: string;
clientSecret?: string;
tenantId: string;
certificateContent?: string;
}
// Kubernetes credential options
export const KUBERNETES_CREDENTIAL_OPTIONS = {
KUBECONFIG_CONTENT: "kubeconfig"
} as const;
// Kubernetes credential type
type KubernetesCredentialType = (typeof KUBERNETES_CREDENTIAL_OPTIONS)[keyof typeof KUBERNETES_CREDENTIAL_OPTIONS];
// Kubernetes provider credential
export interface KubernetesProviderCredential {
type: KubernetesCredentialType;
kubeconfigContent: string;
}
// GCP credential options
export const GCP_CREDENTIAL_OPTIONS = {
GCP_SERVICE_ACCOUNT: "service_account"
} as const;
// GCP credential type
type GCPCredentialType = (typeof GCP_CREDENTIAL_OPTIONS)[keyof typeof GCP_CREDENTIAL_OPTIONS];
// GCP provider credential
export interface GCPProviderCredential {
type: GCPCredentialType;
serviceAccountKey: string;
}
// GitHub credential options
export const GITHUB_CREDENTIAL_OPTIONS = {
GITHUB_PERSONAL_ACCESS_TOKEN: "personal_access_token",
GITHUB_ORGANIZATION_ACCESS_TOKEN: "organization_access_token",
GITHUB_APP: "github_app"
} as const;
// GitHub credential type
type GitHubCredentialType = (typeof GITHUB_CREDENTIAL_OPTIONS)[keyof typeof GITHUB_CREDENTIAL_OPTIONS];
// GitHub provider personal access token credential
export interface GitHubProviderCredential {
type: GitHubCredentialType;
personalAccessToken?: string;
githubAppId?: string;
githubAppPrivateKey?: string;
}
// Providers page
export class ProvidersPage extends BasePage {
// Button to add a new cloud provider
readonly addProviderButton: Locator;
readonly providersTable: Locator;
// Provider selection elements
readonly awsProviderRadio: Locator;
readonly gcpProviderRadio: Locator;
readonly azureProviderRadio: Locator;
readonly m365ProviderRadio: Locator;
readonly kubernetesProviderRadio: Locator;
readonly githubProviderRadio: Locator;
// AWS provider form elements
readonly accountIdInput: Locator;
readonly aliasInput: Locator;
readonly nextButton: Locator;
readonly backButton: Locator;
readonly saveButton: Locator;
readonly launchScanButton: Locator;
// AWS credentials type selection
readonly roleCredentialsRadio: Locator;
readonly staticCredentialsRadio: Locator;
// M365 credentials type selection
readonly m365StaticCredentialsRadio: Locator;
readonly m365CertificateCredentialsRadio: Locator;
// GitHub credentials type selection
readonly githubPersonalAccessTokenRadio: Locator;
readonly githubAppCredentialsRadio: Locator;
// AWS role credentials form
readonly roleArnInput: Locator;
readonly externalIdInput: Locator;
// AWS static credentials form
readonly accessKeyIdInput: Locator;
readonly secretAccessKeyInput: Locator;
// AZURE provider form elements
readonly azureSubscriptionIdInput: Locator;
readonly azureClientIdInput: Locator;
readonly azureClientSecretInput: Locator;
readonly azureTenantIdInput: Locator;
// M365 provider form elements
readonly m365domainIdInput: Locator;
readonly m365ClientIdInput: Locator;
readonly m365ClientSecretInput: Locator;
readonly m365TenantIdInput: Locator;
readonly m365CertificateContentInput: Locator;
// Kubernetes provider form elements
readonly kubernetesContextInput: Locator;
readonly kubernetesKubeconfigContentInput: Locator;
// GCP provider form elements
readonly gcpProjectIdInput: Locator;
readonly gcpServiceAccountKeyInput: Locator;
readonly gcpServiceAccountRadio: Locator;
// GitHub provider form elements
readonly githubUsernameInput: Locator;
readonly githubAppIdInput: Locator;
readonly githubAppPrivateKeyInput: Locator;
readonly githubPersonalAccessTokenInput: Locator;
// Delete button
readonly deleteProviderConfirmationButton: Locator;
constructor(page: Page) {
super(page);
// Button to add a new cloud provider
this.addProviderButton = page.getByRole("button", { name: "Add Cloud Provider", exact: true });
// Table displaying existing providers
this.providersTable = page.getByRole("table");
// Radio buttons to select the type of cloud provider
this.awsProviderRadio = page.getByRole("radio", {
name: /Amazon Web Services/i,
});
this.gcpProviderRadio = page.getByRole("radio", {
name: /Google Cloud Platform/i,
});
this.azureProviderRadio = page.getByRole("radio", {
name: /Microsoft Azure/i,
});
this.m365ProviderRadio = page.getByRole("radio", {
name: /Microsoft 365/i,
});
this.kubernetesProviderRadio = page.getByRole("radio", {
name: /Kubernetes/i,
});
this.githubProviderRadio = page.getByRole("radio", { name: /GitHub/i });
// AWS provider form inputs
this.accountIdInput = page.getByRole("textbox", { name: "Account ID" });
// AZURE provider form inputs
this.azureSubscriptionIdInput = page.getByRole("textbox", { name: "Subscription ID" });
this.azureClientIdInput = page.getByRole("textbox", { name: "Client ID" });
this.azureClientSecretInput = page.getByRole("textbox", { name: "Client Secret" });
this.azureTenantIdInput = page.getByRole("textbox", { name: "Tenant ID" });
// M365 provider form inputs
this.m365domainIdInput = page.getByRole("textbox", { name: "Domain ID" });
this.m365ClientIdInput = page.getByRole("textbox", { name: "Client ID" });
this.m365ClientSecretInput = page.getByRole("textbox", { name: "Client Secret" });
this.m365TenantIdInput = page.getByRole("textbox", { name: "Tenant ID" });
this.m365CertificateContentInput = page.getByRole("textbox", { name: "Certificate Content" });
// Kubernetes provider form inputs
this.kubernetesContextInput = page.getByRole("textbox", { name: "Context" });
this.kubernetesKubeconfigContentInput = page.getByRole("textbox", { name: "Kubeconfig Content" });
// GCP provider form inputs
this.gcpProjectIdInput = page.getByRole("textbox", { name: "Project ID" });
this.gcpServiceAccountKeyInput = page.getByRole("textbox", { name: "Service Account Key" });
// GitHub provider form inputs
this.githubUsernameInput = page.getByRole("textbox", { name: "Username" });
this.githubPersonalAccessTokenInput = page.getByRole("textbox", { name: "Personal Access Token" });
this.githubAppIdInput = page.getByRole("textbox", { name: "GitHub App ID" });
this.githubAppPrivateKeyInput = page.getByRole("textbox", { name: "GitHub App Private Key" });
// Alias input
this.aliasInput = page.getByRole("textbox", { name: "Provider alias (optional)" });
// Navigation buttons in the form (next and back)
this.nextButton = page
.locator("form")
.getByRole("button", { name: "Next", exact: true });
this.backButton = page.getByRole("button", { name: "Back" });
// Button to save the form
this.saveButton = page.getByRole("button", { name: "Save", exact: true });
// Button to launch a scan
this.launchScanButton = page.getByRole("button", {
name: "Launch scan",
exact: true,
});
// Radios for selecting AWS credentials method
this.roleCredentialsRadio = page.getByRole("radio", {
name: /Connect assuming IAM Role/i,
});
this.staticCredentialsRadio = page.getByRole("radio", {
name: /Connect via Credentials/i,
});
// Radios for selecting M365 credentials method
this.m365StaticCredentialsRadio = page.getByRole("radio", {
name: /App Client Secret Credentials/i,
});
this.m365CertificateCredentialsRadio = page.getByRole("radio", {
name: /App Certificate Credentials/i,
});
// Radios for selecting GCP credentials method
this.gcpServiceAccountRadio = page.getByRole("radio", {
name: /Service Account Key/i,
});
// Radios for selecting GitHub credentials method
this.githubPersonalAccessTokenRadio = page.getByRole("radio", {
name: /Personal Access Token/i,
});
this.githubAppCredentialsRadio = page.getByRole("radio", {
name: /GitHub App/i,
});
// Inputs for IAM Role credentials
this.roleArnInput = page.getByRole("textbox", { name: "Role ARN" });
this.externalIdInput = page.getByRole("textbox", { name: "External ID" });
// Inputs for static credentials
this.accessKeyIdInput = page.getByRole("textbox", { name: "Access Key ID" });
this.secretAccessKeyInput = page.getByRole("textbox", { name: "Secret Access Key" });
// Delete button in confirmation modal
this.deleteProviderConfirmationButton = page.getByRole("button", {
name: "Delete",
exact: true,
});
}
async goto(): Promise<void> {
// Go to the providers page
await super.goto("/providers");
}
async clickAddProvider(): Promise<void> {
// Click the add provider button
await this.addProviderButton.click();
await this.waitForPageLoad();
}
async selectAWSProvider(): Promise<void> {
// Prefer label-based click for radios, force if overlay intercepts
await this.awsProviderRadio.click({ force: true });
await this.waitForPageLoad();
}
async selectAZUREProvider(): Promise<void> {
// Prefer label-based click for radios, force if overlay intercepts
await this.azureProviderRadio.click({ force: true });
await this.waitForPageLoad();
}
async selectM365Provider(): Promise<void> {
// Select the M365 provider
await this.m365ProviderRadio.click({ force: true });
await this.waitForPageLoad();
}
async selectKubernetesProvider(): Promise<void> {
// Select the Kubernetes provider
await this.kubernetesProviderRadio.click({ force: true });
await this.waitForPageLoad();
}
async selectGCPProvider(): Promise<void> {
// Select the GCP provider
await this.gcpProviderRadio.click({ force: true });
await this.waitForPageLoad();
}
async selectGitHubProvider(): Promise<void> {
// Select the GitHub provider
await this.githubProviderRadio.click({ force: true });
await this.waitForPageLoad();
}
async fillAWSProviderDetails(data: AWSProviderData): Promise<void> {
// Fill the AWS provider details
await this.accountIdInput.fill(data.accountId);
if (data.alias) {
await this.aliasInput.fill(data.alias);
}
}
async fillAZUREProviderDetails(data: AZUREProviderData): Promise<void> {
// Fill the Azure provider details
await this.azureSubscriptionIdInput.fill(data.subscriptionId);
if (data.alias) {
await this.aliasInput.fill(data.alias);
}
}
async fillM365ProviderDetails(data: M365ProviderData): Promise<void> {
// Fill the M365 provider details
await this.m365domainIdInput.fill(data.domainId);
if (data.alias) {
await this.aliasInput.fill(data.alias);
}
}
async fillKubernetesProviderDetails(data: KubernetesProviderData): Promise<void> {
// Fill the Kubernetes provider details
await this.kubernetesContextInput.fill(data.context);
if (data.alias) {
await this.aliasInput.fill(data.alias);
}
}
async fillGCPProviderDetails(data: GCPProviderData): Promise<void> {
// Fill the GCP provider details
await this.gcpProjectIdInput.fill(data.projectId);
if (data.alias) {
await this.aliasInput.fill(data.alias);
}
}
async fillGitHubProviderDetails(data: GitHubProviderData): Promise<void> {
// Fill the GitHub provider details
await this.githubUsernameInput.fill(data.username);
if (data.alias) {
await this.aliasInput.fill(data.alias);
}
}
async clickNext(): Promise<void> {
// The wizard interface may use different labels for its primary action button on each step.
// This function determines which button to click depending on the current URL and page content.
// Get the current page URL
const url = this.page.url();
// If on the "connect-account" step, click the "Next" button
if (/\/providers\/connect-account/.test(url)) {
await this.nextButton.click();
await this.waitForPageLoad();
return;
}
// If on the "add-credentials" step, check for "Save" and "Next" buttons
if (/\/providers\/add-credentials/.test(url)) {
// Some UI implementations use "Save" instead of "Next" for primary action
const saveBtn = this.saveButton;
if (await saveBtn.count()) {
await saveBtn.click();
await this.waitForPageLoad();
return;
}
// If "Save" is not present, try clicking the "Next" button
if (await this.nextButton.count()) {
await this.nextButton.click();
await this.waitForPageLoad();
return;
}
}
// If on the "test-connection" step, click the "Launch scan" button
if (/\/providers\/test-connection/.test(url)) {
const buttonByText = this.page
.locator("button")
.filter({ hasText: "Launch scan" });
await buttonByText.click();
await this.waitForPageLoad();
// Wait for either success (redirect to scans) or error message to appear
// The error container has multiple p.text-danger elements, we want the first one with the technical error
const errorMessage = this.page.locator("p.text-danger").first();
try {
// Wait up to 15 seconds for either the error message or redirect
await Promise.race([
// Wait for error message to appear
errorMessage.waitFor({ state: "visible", timeout: 15000 }),
// Wait for redirect to scans page (success case)
this.page.waitForURL(/\/scans/, { timeout: 15000 }),
]);
// If we're still on test-connection page, check for error
if (/\/providers\/test-connection/.test(this.page.url())) {
const isErrorVisible = await errorMessage.isVisible().catch(() => false);
if (isErrorVisible) {
const errorText = await errorMessage.textContent();
throw new Error(
`Test connection failed with error: ${errorText?.trim() || "Unknown error"}`,
);
}
}
} catch (error) {
// If timeout or other error, check if error message is present
const isErrorVisible = await errorMessage.isVisible().catch(() => false);
if (isErrorVisible) {
const errorText = await errorMessage.textContent();
throw new Error(
`Test connection failed with error: ${errorText?.trim() || "Unknown error"}`,
);
}
// Re-throw original error if no error message found
throw error;
}
return;
}
// Fallback logic: try finding any common primary action buttons in expected order
const candidates = [
{ name: "Next" }, // Try the "Next" button
{ name: "Save" }, // Try the "Save" button
{ name: "Launch scan" }, // Try the "Launch scan" button
{ name: /Continue|Proceed/i }, // Try "Continue" or "Proceed" (case-insensitive)
] as const;
// Try each candidate name and click it if found
for (const candidate of candidates) {
const btn = this.page.getByRole("button", {
name: candidate.name as any,
});
if (await btn.count()) {
await btn.click();
await this.waitForPageLoad();
return;
}
}
// If none of the expected action buttons are present, throw an error
throw new Error(
"Could not find an actionable Next/Save/Launch scan button on this step",
);
}
async selectCredentialsType(type: AWSCredentialType): Promise<void> {
// Ensure we are on the add-credentials page where the selector exists
await expect(this.page).toHaveURL(/\/providers\/add-credentials/);
if (type === AWS_CREDENTIAL_OPTIONS.AWS_ROLE_ARN) {
await this.roleCredentialsRadio.click({ force: true });
} else if (type === AWS_CREDENTIAL_OPTIONS.AWS_CREDENTIALS) {
await this.staticCredentialsRadio.click({ force: true });
} else {
throw new Error(`Invalid AWS credential type: ${type}`);
}
// Wait for the page to load
await this.waitForPageLoad();
}
async selectM365CredentialsType(type: M365CredentialType): Promise<void> {
// Ensure we are on the add-credentials page where the selector exists
await expect(this.page).toHaveURL(/\/providers\/add-credentials/);
if (type === M365_CREDENTIAL_OPTIONS.M365_CREDENTIALS) {
await this.m365StaticCredentialsRadio.click({ force: true });
} else if (type === M365_CREDENTIAL_OPTIONS.M365_CERTIFICATE_CREDENTIALS) {
await this.m365CertificateCredentialsRadio.click({ force: true });
} else {
throw new Error(`Invalid M365 credential type: ${type}`);
}
// Wait for the page to load
await this.waitForPageLoad();
}
async selectGCPCredentialsType(type: GCPCredentialType): Promise<void> {
// Ensure we are on the add-credentials page where the selector exists
await expect(this.page).toHaveURL(/\/providers\/add-credentials/);
if (type === GCP_CREDENTIAL_OPTIONS.GCP_SERVICE_ACCOUNT) {
await this.gcpServiceAccountRadio.click({ force: true });
} else {
throw new Error(`Invalid GCP credential type: ${type}`);
}
// Wait for the page to load
await this.waitForPageLoad();
}
async selectGitHubCredentialsType(type: GitHubCredentialType): Promise<void> {
// Ensure we are on the add-credentials page where the selector exists
await expect(this.page).toHaveURL(/\/providers\/add-credentials/);
if (type === GITHUB_CREDENTIAL_OPTIONS.GITHUB_PERSONAL_ACCESS_TOKEN) {
await this.githubPersonalAccessTokenRadio.click({ force: true });
} else if (type === GITHUB_CREDENTIAL_OPTIONS.GITHUB_APP) {
await this.githubAppCredentialsRadio.click({ force: true });
} else {
throw new Error(`Invalid GitHub credential type: ${type}`);
}
// Wait for the page to load
await this.waitForPageLoad();
}
async fillRoleCredentials(credentials: AWSProviderCredential): Promise<void> {
// Fill the role credentials form
if (credentials.accessKeyId) {
await this.accessKeyIdInput.fill(credentials.accessKeyId);
}
if (credentials.secretAccessKey) {
await this.secretAccessKeyInput.fill(credentials.secretAccessKey);
}
if (credentials.roleArn) {
await this.roleArnInput.fill(credentials.roleArn);
}
if (credentials.externalId) {
// External ID may be prefilled and disabled; only fill if enabled
if (await this.externalIdInput.isEnabled()) {
await this.externalIdInput.fill(credentials.externalId);
}
}
}
async fillStaticCredentials(credentials: AWSProviderCredential): Promise<void> {
// Fill the static credentials form
if (credentials.accessKeyId) {
await this.accessKeyIdInput.fill(credentials.accessKeyId);
}
if (credentials.secretAccessKey) {
await this.secretAccessKeyInput.fill(credentials.secretAccessKey);
}
}
async fillAZURECredentials(credentials: AZUREProviderCredential): Promise<void> {
// Fill the azure credentials form
if (credentials.clientId) {
await this.azureClientIdInput.fill(credentials.clientId);
}
if (credentials.clientSecret) {
await this.azureClientSecretInput.fill(credentials.clientSecret);
}
if (credentials.tenantId) {
await this.azureTenantIdInput.fill(credentials.tenantId);
}
}
async fillM365Credentials(credentials: M365ProviderCredential): Promise<void> {
// Fill the m365 credentials form
if (credentials.clientId) {
await this.m365ClientIdInput.fill(credentials.clientId);
}
if (credentials.clientSecret) {
await this.m365ClientSecretInput.fill(credentials.clientSecret);
}
if (credentials.tenantId) {
await this.m365TenantIdInput.fill(credentials.tenantId);
}
}
async fillM365CertificateCredentials(credentials: M365ProviderCredential): Promise<void> {
// Fill the m365 certificate credentials form
if (credentials.clientId) {
await this.m365ClientIdInput.fill(credentials.clientId);
}
if (credentials.certificateContent) {
await this.m365CertificateContentInput.fill(credentials.certificateContent);
}
if (credentials.tenantId) {
await this.m365TenantIdInput.fill(credentials.tenantId);
}
}
async fillKubernetesCredentials(credentials: KubernetesProviderCredential): Promise<void> {
// Fill the Kubernetes credentials form
if (credentials.kubeconfigContent) {
await this.kubernetesKubeconfigContentInput.fill(credentials.kubeconfigContent);
}
}
async fillGCPServiceAccountKeyCredentials(credentials: GCPProviderCredential): Promise<void> {
// Fill the GCP credentials form
if (credentials.serviceAccountKey) {
await this.gcpServiceAccountKeyInput.fill(credentials.serviceAccountKey);
}
}
async fillGitHubPersonalAccessTokenCredentials(credentials: GitHubProviderCredential): Promise<void> {
// Fill the GitHub personal access token credentials form
if (credentials.personalAccessToken) {
await this.githubPersonalAccessTokenInput.fill(credentials.personalAccessToken);
}
}
async fillGitHubAppCredentials(credentials: GitHubProviderCredential): Promise<void> {
// Fill the GitHub app credentials form
if (credentials.githubAppId) {
await this.githubAppIdInput.fill(credentials.githubAppId);
}
if (credentials.githubAppPrivateKey) {
await this.githubAppPrivateKeyInput.fill(credentials.githubAppPrivateKey);
}
}
async verifyPageLoaded(): Promise<void> {
// Verify the providers page is loaded
await expect(this.page).toHaveTitle(/Prowler/);
await expect(this.addProviderButton).toBeVisible();
await this.page.waitForLoadState("networkidle");
}
async verifyConnectAccountPageLoaded(): Promise<void> {
// Verify the connect account page is loaded
await expect(this.page).toHaveTitle(/Prowler/);
await expect(this.awsProviderRadio).toBeVisible();
}
async verifyCredentialsPageLoaded(): Promise<void> {
// Verify the credentials page is loaded
await expect(this.page).toHaveTitle(/Prowler/);
await expect(this.roleCredentialsRadio).toBeVisible();
}
async verifyM365CredentialsPageLoaded(): Promise<void> {
// Verify the M365 credentials page is loaded
await expect(this.page).toHaveTitle(/Prowler/);
await expect(this.m365ClientIdInput).toBeVisible();
await expect(this.m365ClientSecretInput).toBeVisible();
await expect(this.m365TenantIdInput).toBeVisible();
}
async verifyM365CertificateCredentialsPageLoaded(): Promise<void> {
// Verify the M365 certificate credentials page is loaded
await expect(this.page).toHaveTitle(/Prowler/);
await expect(this.m365ClientIdInput).toBeVisible();
await expect(this.m365TenantIdInput).toBeVisible();
await expect(this.m365CertificateContentInput).toBeVisible();
}
async verifyKubernetesCredentialsPageLoaded(): Promise<void> {
// Verify the Kubernetes credentials page is loaded
await expect(this.page).toHaveTitle(/Prowler/);
await expect(this.kubernetesContextInput).toBeVisible();
}
async verifyGCPServiceAccountPageLoaded(): Promise<void> {
// Verify the GCP service account page is loaded
await expect(this.page).toHaveTitle(/Prowler/);
await expect(this.gcpServiceAccountKeyInput).toBeVisible();
}
async verifyGitHubPersonalAccessTokenPageLoaded(): Promise<void> {
// Verify the GitHub personal access token page is loaded
await expect(this.page).toHaveTitle(/Prowler/);
await expect(this.githubPersonalAccessTokenInput).toBeVisible();
}
async verifyGitHubAppPageLoaded(): Promise<void> {
// Verify the GitHub app page is loaded
await expect(this.page).toHaveTitle(/Prowler/);
await expect(this.githubAppIdInput).toBeVisible();
await expect(this.githubAppPrivateKeyInput).toBeVisible();
}
async verifyLaunchScanPageLoaded(): Promise<void> {
// Verify the launch scan page is loaded
await expect(this.page).toHaveTitle(/Prowler/);
await expect(this.page).toHaveURL(/\/providers\/test-connection/);
// Verify the Launch scan button is visible
const launchScanButton = this.page
.locator("button")
.filter({ hasText: "Launch scan" });
await expect(launchScanButton).toBeVisible();
}
async verifyLoadProviderPageAfterNewProvider(): Promise<void> {
// Verify the provider page is loaded
await this.waitForPageLoad();
await expect(this.page).toHaveTitle(/Prowler/);
await expect(this.providersTable).toBeVisible();
}
async verifySingleRowForProviderUID(providerUID: string): Promise<boolean> {
// Verify if table has 1 row and that row contains providerUID
await expect(this.providersTable).toBeVisible();
// Get the matching rows
const matchingRows = this.providersTable.locator("tbody tr", {
hasText: providerUID,
});
// Verify the number of matching rows is 1
return (await matchingRows.count()) === 1;
}
async deleteProviderIfExists(providerUID: string): Promise<void> {
// Delete the provider if it exists
// Navigate to providers page
await this.goto();
await expect(this.providersTable).toBeVisible({ timeout: 10000 });
// Find and use the search input to filter the table
const searchInput = this.page.getByPlaceholder(/search|filter/i);
await expect(searchInput).toBeVisible({ timeout: 5000 });
// Clear and search for the specific provider
await searchInput.clear();
await searchInput.fill(providerUID);
await searchInput.press("Enter");
// Wait for the table to finish loading/filtering
await this.waitForPageLoad();
// Additional wait for React table to re-render with the server-filtered data
// The filtering happens on the server, but the table component needs time
// to process the response and update the DOM after network idle
await this.page.waitForTimeout(1500);
// Get all rows from the table
const allRows = this.providersTable.locator("tbody tr");
// Helper function to check if a row is the "No results" row
const isNoResultsRow = async (row: Locator): Promise<boolean> => {
const text = await row.textContent();
return text?.includes("No results") || text?.includes("No data") || false;
};
// Helper function to find the row with the specific UID
const findProviderRow = async (): Promise<Locator | null> => {
const count = await allRows.count();
for (let i = 0; i < count; i++) {
const row = allRows.nth(i);
// Skip "No results" rows
if (await isNoResultsRow(row)) {
continue;
}
// Check if this row contains the UID in the UID column (column 3)
const uidCell = row.locator("td").nth(3);
const uidText = await uidCell.textContent();
if (uidText?.includes(providerUID)) {
return row;
}
}
return null;
};
// Wait for filtering to complete (max 0 or 1 data rows)
await expect(async () => {
const count = await allRows.count();
// Count only real data rows (not "No results")
let dataRowCount = 0;
for (let i = 0; i < count; i++) {
if (!(await isNoResultsRow(allRows.nth(i)))) {
dataRowCount++;
}
}
// Should have 0 or 1 data row
expect(dataRowCount).toBeLessThanOrEqual(1);
}).toPass({ timeout: 20000 });
// Find the provider row
const targetRow = await findProviderRow();
if (!targetRow) {
// Provider not found, nothing to delete
// Navigate back to providers page to ensure clean state
await this.goto();
await expect(this.providersTable).toBeVisible({ timeout: 10000 });
return;
}
// Find and click the action button (last cell = actions column)
const actionButton = targetRow.locator("td").last().locator("button").first();
await expect(actionButton).toBeVisible({ timeout: 5000 });
await actionButton.click();
// Wait for dropdown menu to appear and find delete option
const deleteMenuItem = this.page.getByRole("menuitem", {
name: /delete.*provider/i,
});
await expect(deleteMenuItem).toBeVisible({ timeout: 5000 });
await deleteMenuItem.click();
// Wait for confirmation modal to appear
const modal = this.page.locator('[role="dialog"], .modal, [data-testid*="modal"]').first();
await expect(modal).toBeVisible({ timeout: 10000 });
// Find and click the delete confirmation button
await expect(this.deleteProviderConfirmationButton).toBeVisible({ timeout: 5000 });
await this.deleteProviderConfirmationButton.click();
// Wait for modal to close (this indicates deletion was initiated)
await expect(modal).not.toBeVisible({ timeout: 10000 });
// Wait for page to reload
await this.waitForPageLoad();
// Navigate back to providers page to ensure clean state
await this.goto();
await expect(this.providersTable).toBeVisible({ timeout: 10000 });
}
}
@@ -1,552 +0,0 @@
### E2E Tests: Cloud Provider Management
**Suite ID:** `PROVIDER-E2E`
**Feature:** Cloud Provider Management (AWS, Azure, M365, Kubernetes, GCP, GitHub).
---
## Test Case: `PROVIDER-E2E-001` - Add AWS Provider with Static Credentials
**Priority:** `critical`
**Tags:**
- type → @e2e, @serial
- feature → @providers
- provider → @aws
**Description/Objective:** Validates the complete flow of adding a new AWS provider using static access key credentials
**Preconditions:**
- Admin user authentication required (admin.auth.setup setup)
- Environment variables configured: E2E_AWS_PROVIDER_ACCOUNT_ID, E2E_AWS_PROVIDER_ACCESS_KEY and E2E_AWS_PROVIDER_SECRET_KEY
- Remove any existing provider with the same Account ID before starting the test
- This test must be run serially and never in parallel with other tests, as it requires the Account ID not to be already registered beforehand.
### Flow Steps:
1. Navigate to providers page
2. Click "Add Provider" button
3. Select AWS provider type
4. Fill provider details (account ID and alias)
5. Select "credentials" authentication type
6. Fill static credentials (access key and secret key)
7. Launch initial scan
8. Verify redirect to provider management page
### Expected Result:
- AWS provider successfully added with static credentials
- Initial scan launched successfully
- User redirected to provider details page
### Key verification points:
- Provider page loads correctly
- Connect account page displays AWS option
- Credentials form accepts static credentials
- Launch scan page appears
- Successful redirect to provider page after scan launch
### Notes:
- Test uses environment variables for AWS credentials
- Provider cleanup performed before each test to ensure clean state
- Requires valid AWS account with appropriate permissions
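In page-object terms, the flow above condenses to roughly the following sketch (method names come from `ProvidersPage`; the tags, `beforeEach` cleanup, and final `ScansPage` verification from the full serial spec are omitted):

```typescript
import { test } from "@playwright/test";
import { ProvidersPage, AWS_CREDENTIAL_OPTIONS } from "./providers-page";

test("add AWS provider with static credentials (sketch)", async ({ page }) => {
  const providersPage = new ProvidersPage(page);

  await providersPage.goto();
  await providersPage.clickAddProvider();
  await providersPage.selectAWSProvider();
  await providersPage.fillAWSProviderDetails({
    accountId: process.env.E2E_AWS_PROVIDER_ACCOUNT_ID!,
    alias: "Test E2E AWS Account - Credentials",
  });
  await providersPage.clickNext();
  await providersPage.selectCredentialsType(AWS_CREDENTIAL_OPTIONS.AWS_CREDENTIALS);
  await providersPage.fillStaticCredentials({
    type: AWS_CREDENTIAL_OPTIONS.AWS_CREDENTIALS,
    accessKeyId: process.env.E2E_AWS_PROVIDER_ACCESS_KEY!,
    secretAccessKey: process.env.E2E_AWS_PROVIDER_SECRET_KEY!,
  });
  await providersPage.clickNext();
  await providersPage.verifyLaunchScanPageLoaded();
  await providersPage.clickNext(); // triggers "Launch scan" on the test-connection step
});
```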
---
## Test Case: `PROVIDER-E2E-002` - Add AWS Provider with Assume Role Credentials (Access Key and Secret Key)
**Priority:** `critical`
**Tags:**
- type → @e2e, @serial
- feature → @providers
- provider → @aws
**Description/Objective:** Validates the complete flow of adding a new AWS provider using role-based authentication with Access Key and Secret Key
**Preconditions:**
- Admin user authentication required (admin.auth.setup setup)
- Environment variables configured: E2E_AWS_PROVIDER_ACCOUNT_ID, E2E_AWS_PROVIDER_ACCESS_KEY, E2E_AWS_PROVIDER_SECRET_KEY, E2E_AWS_PROVIDER_ROLE_ARN
- Remove any existing provider with the same Account ID before starting the test
- This test must be run serially and never in parallel with other tests, as it requires the Account ID not to be already registered beforehand.
### Flow Steps:
1. Navigate to providers page
2. Click "Add Provider" button
3. Select AWS provider type
4. Fill provider details (account ID and alias)
5. Select "role" authentication type
6. Fill role credentials (access key, secret key, and role ARN)
7. Launch initial scan
8. Verify redirect to provider management page
### Expected Result:
- AWS provider successfully added with role credentials
- Initial scan launched successfully
- User redirected to provider details page
### Key verification points:
- Provider page loads correctly
- Connect account page displays AWS option
- Role credentials form accepts all required fields
- Launch scan page appears
- Successful redirect to provider page after scan launch
### Notes:
- Test uses environment variables for AWS credentials and role ARN
- Provider cleanup performed before each test to ensure clean state
- Requires valid AWS account with role assumption permissions
- Role ARN must be properly configured
---
## Test Case: `PROVIDER-E2E-003` - Add Azure Provider with Static Credentials
**Priority:** `critical`
**Tags:**
- type → @e2e, @serial
- feature → @providers
- provider → @azure
**Description/Objective:** Validates the complete flow of adding a new Azure provider using static client credentials (Client ID, Client Secret, Tenant ID)
**Preconditions:**
- Admin user authentication required (admin.auth.setup setup)
- Environment variables configured: E2E_AZURE_SUBSCRIPTION_ID, E2E_AZURE_CLIENT_ID, E2E_AZURE_SECRET_ID, E2E_AZURE_TENANT_ID
- Remove any existing provider with the same Subscription ID before starting the test
- This test must be run serially and never in parallel with other tests, as it requires the Subscription ID not to be already registered beforehand.
### Flow Steps:
1. Navigate to providers page
2. Click "Add Provider" button
3. Select Azure provider type
4. Fill provider details (subscription ID and alias)
5. Fill Azure credentials (client ID, client secret, tenant ID)
6. Launch initial scan
7. Verify redirect to provider management page
### Expected Result:
- Azure provider successfully added with static credentials
- Initial scan launched successfully
- User redirected to provider details page
### Key verification points:
- Provider page loads correctly
- Connect account page displays Azure option
- Azure credentials form accepts all required fields
- Launch scan page appears
- Successful redirect to provider page after scan launch
### Notes:
- Test uses environment variables for Azure credentials
- Provider cleanup performed before each test to ensure clean state
- Requires valid Azure subscription with appropriate permissions
- Client credentials must have sufficient permissions for security scanning
---
## Test Case: `PROVIDER-E2E-004` - Add M365 Provider with Static Credentials
**Priority:** `critical`
**Tags:**
- type → @e2e, @serial
- feature → @providers
- provider → @m365
**Description/Objective:** Validates the complete flow of adding a new Microsoft 365 provider using static client credentials (Client ID, Client Secret, Tenant ID) tied to a Domain ID.
**Preconditions:**
- Admin user authentication required (admin.auth.setup setup)
- Environment variables configured: E2E_M365_DOMAIN_ID, E2E_M365_CLIENT_ID, E2E_M365_SECRET_ID, E2E_M365_TENANT_ID
- Remove any existing provider with the same Domain ID before starting the test
- This test must be run serially and never in parallel with other tests, as it requires the Domain ID not to be already registered beforehand.
### Flow Steps:
1. Navigate to providers page
2. Click "Add Provider" button
3. Select M365 provider type
4. Fill provider details (domain ID and alias)
5. Select static credentials type
6. Fill M365 credentials (client ID, client secret, tenant ID)
7. Launch initial scan
8. Verify redirect to provider management page
### Expected Result:
- M365 provider successfully added with static credentials
- Initial scan launched successfully
- User redirected to provider details page
### Key verification points:
- Provider page loads correctly
- Connect account page displays M365 option
- M365 credentials form accepts all required fields
- Launch scan page appears
- Successful redirect to provider page after scan launch
### Notes:
- Test uses environment variables for M365 credentials
- Provider cleanup performed before each test to ensure clean state
- Requires valid Microsoft 365 tenant with appropriate permissions
- Client credentials must have sufficient permissions for security scanning
---
## Test Case: `PROVIDER-E2E-005` - Add M365 Provider with Certificate Credentials
**Priority:** `critical`
**Tags:**
- type → @e2e, @serial
- feature → @providers
- provider → @m365
**Description/Objective:** Validates the complete flow of adding a new Microsoft 365 provider using certificate-based authentication (Client ID, Tenant ID, Certificate Content) tied to a Domain ID.
**Preconditions:**
- Admin user authentication required (admin.auth.setup setup)
- Environment variables configured: E2E_M365_DOMAIN_ID, E2E_M365_CLIENT_ID, E2E_M365_TENANT_ID, E2E_M365_CERTIFICATE_CONTENT
- Remove any existing provider with the same Domain ID before starting the test
- This test must be run serially and never in parallel with other tests, as it requires the Domain ID not to be already registered beforehand.
### Flow Steps:
1. Navigate to providers page
2. Click "Add Provider" button
3. Select M365 provider type
4. Fill provider details (domain ID and alias)
5. Select certificate credentials type
6. Fill M365 certificate credentials (client ID, tenant ID, certificate content)
7. Launch initial scan
8. Verify redirect to provider management page
### Expected Result:
- M365 provider successfully added with certificate credentials
- Initial scan launched successfully
- User redirected to provider details page
### Key verification points:
- Provider page loads correctly
- Connect account page displays M365 option
- Certificate credentials form accepts all required fields
- Launch scan page appears
- Successful redirect to provider page after scan launch
### Notes:
- Test uses environment variables for M365 certificate credentials
- Provider cleanup performed before each test to ensure clean state
- Requires valid Microsoft 365 tenant with certificate-based authentication
- Certificate must be properly configured and have sufficient permissions for security scanning
---
## Test Case: `PROVIDER-E2E-006` - Add Kubernetes Provider with Kubeconfig Content
**Priority:** `critical`
**Tags:**
- type → @e2e, @serial
- feature → @providers
- provider → @kubernetes
**Description/Objective:** Validates the complete flow of adding a new Kubernetes provider using kubeconfig content authentication
**Preconditions:**
- Admin user authentication required (admin.auth.setup setup)
- Environment variables configured: E2E_KUBERNETES_CONTEXT, E2E_KUBERNETES_KUBECONFIG_PATH
- Kubeconfig file must exist at the specified path
- Remove any existing provider with the same Context before starting the test
- This test must be run serially and never in parallel with other tests, as it requires the Context not to be already registered beforehand.
### Flow Steps:
1. Navigate to providers page
2. Click "Add Provider" button
3. Select Kubernetes provider type
4. Fill provider details (context and alias)
5. Verify credentials page is loaded
6. Fill Kubernetes credentials (kubeconfig content)
7. Launch initial scan
8. Verify redirect to provider management page
### Expected Result:
- Kubernetes provider successfully added with kubeconfig content
- Initial scan launched successfully
- User redirected to provider details page
### Key verification points:
- Provider page loads correctly
- Connect account page displays Kubernetes option
- Provider details form accepts context and alias
- Credentials page loads with kubeconfig content field
- Kubeconfig content is properly filled in the correct field
- Launch scan page appears
- Successful redirect to provider page after scan launch
### Notes:
- Test uses environment variables for Kubernetes context and kubeconfig file path
- Kubeconfig content is read from file and used for authentication
- Provider cleanup performed before each test to ensure clean state
- Requires valid Kubernetes cluster with accessible kubeconfig
- Kubeconfig must have sufficient permissions for security scanning
- Test validates that kubeconfig content goes to the correct field (not the context field)
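A minimal sketch of the credential-filling step, assuming `E2E_KUBERNETES_KUBECONFIG_PATH` points to a readable kubeconfig file whose content is loaded with Node's `fs` module:

```typescript
import { test } from "@playwright/test";
import fs from "fs";
import { ProvidersPage, KUBERNETES_CREDENTIAL_OPTIONS } from "./providers-page";

test("fill kubeconfig credentials (sketch)", async ({ page }) => {
  const providersPage = new ProvidersPage(page);
  // Read the kubeconfig content from the file referenced by the environment variable.
  const kubeconfigContent = fs.readFileSync(
    process.env.E2E_KUBERNETES_KUBECONFIG_PATH!,
    "utf-8",
  );
  // ...walk the wizard (add provider, select Kubernetes, fill context) as above, then:
  await providersPage.fillKubernetesCredentials({
    type: KUBERNETES_CREDENTIAL_OPTIONS.KUBECONFIG_CONTENT,
    kubeconfigContent,
  });
});
```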
---
## Test Case: `PROVIDER-E2E-007` - Add GCP Provider with Service Account Key
**Priority:** `critical`
**Tags:**
- type → @e2e, @serial
- feature → @providers
- provider → @gcp
**Description/Objective:** Validates the complete flow of adding a new GCP provider using service account key authentication
**Preconditions:**
- Admin user authentication required (admin.auth.setup setup)
- Environment variables configured: E2E_GCP_PROJECT_ID, E2E_GCP_BASE64_SERVICE_ACCOUNT_KEY
- Remove any existing provider with the same Project ID before starting the test
- This test must be run serially and never in parallel with other tests, as it requires the Project ID not to be already registered beforehand.
### Flow Steps:
1. Navigate to providers page
2. Click "Add Provider" button
3. Select GCP provider type
4. Fill provider details (project ID and alias)
5. Select service account credentials type
6. Fill GCP service account key credentials
7. Launch initial scan
8. Verify redirect to provider management page
### Expected Result:
- GCP provider successfully added with service account key
- Initial scan launched successfully
- User redirected to provider details page
### Key verification points:
- Provider page loads correctly
- Connect account page displays GCP option
- Provider details form accepts project ID and alias
- Service account credentials page loads with service account key field
- Service account key is properly filled in the correct field
- Launch scan page appears
- Successful redirect to provider page after scan launch
### Notes:
- Test uses environment variables for GCP project ID and service account key
- Service account key is provided as base64 encoded JSON content
- Provider cleanup performed before each test to ensure clean state
- Requires valid GCP project with service account having appropriate permissions
- Service account must have sufficient permissions for security scanning
- Test validates that service account key goes to the correct field
- Test uses base64 encoded environment variables for GCP service account key
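A minimal sketch of the decoding step, assuming `E2E_GCP_BASE64_SERVICE_ACCOUNT_KEY` holds the base64-encoded JSON key; the decoded string is what `fillGCPServiceAccountKeyCredentials` expects:

```typescript
// Decode the base64-encoded service account JSON before filling the
// "Service Account Key" field (Buffer is a Node.js global).
const serviceAccountKey = Buffer.from(
  process.env.E2E_GCP_BASE64_SERVICE_ACCOUNT_KEY ?? "",
  "base64",
).toString("utf-8");
```

The same pattern applies to the GitHub App private key in `PROVIDER-E2E-009`, where `E2E_GITHUB_BASE64_APP_PRIVATE_KEY` is decoded before filling the private key field.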
---
## Test Case: `PROVIDER-E2E-008` - Add GitHub Provider with Personal Access Token
**Priority:** `critical`
**Tags:**
- type → @e2e, @serial
- feature → @providers
- provider → @github
**Description/Objective:** Validates the complete flow of adding a new GitHub provider using personal access token authentication for a user account
**Preconditions:**
- Admin user authentication required (admin.auth.setup setup)
- Environment variables configured: E2E_GITHUB_USERNAME, E2E_GITHUB_PERSONAL_ACCESS_TOKEN
- Remove any existing provider with the same Username before starting the test
- This test must be run serially and never in parallel with other tests, as it requires the Username not to be already registered beforehand.
### Flow Steps:
1. Navigate to providers page
2. Click "Add Provider" button
3. Select GitHub provider type
4. Fill provider details (username and alias)
5. Select personal access token credentials type
6. Fill GitHub personal access token credentials
7. Launch initial scan
8. Verify redirect to provider management page
### Expected Result:
- GitHub provider successfully added with personal access token
- Initial scan launched successfully
- User redirected to provider details page
### Key verification points:
- Provider page loads correctly
- Connect account page displays GitHub option
- Provider details form accepts username and alias
- Personal access token credentials page loads with token field
- Personal access token is properly filled in the correct field
- Launch scan page appears
- Successful redirect to provider page after scan launch
### Notes:
- Test uses environment variables for GitHub username and personal access token
- Provider cleanup performed before each test to ensure clean state
- Requires valid GitHub account with personal access token
- Personal access token must have sufficient permissions for security scanning
- Test validates that personal access token goes to the correct field
---
## Test Case: `PROVIDER-E2E-009` - Add GitHub Provider with GitHub App
**Priority:** `critical`
**Tags:**
- type → @e2e, @serial
- feature → @providers
- provider → @github
**Description/Objective:** Validates the complete flow of adding a new GitHub provider using GitHub App authentication for a user account
**Preconditions:**
- Admin user authentication required (admin.auth.setup setup)
- Environment variables configured: E2E_GITHUB_USERNAME, E2E_GITHUB_APP_ID, E2E_GITHUB_BASE64_APP_PRIVATE_KEY
- Remove any existing provider with the same Username before starting the test
- This test must be run serially and never in parallel with other tests, as it requires the Username not to be already registered beforehand.
### Flow Steps:
1. Navigate to providers page
2. Click "Add Provider" button
3. Select GitHub provider type
4. Fill provider details (username and alias)
5. Select GitHub App credentials type
6. Fill GitHub App credentials (App ID and private key)
7. Launch initial scan
8. Verify redirect to provider management page
### Expected Result:
- GitHub provider successfully added with GitHub App credentials
- Initial scan launched successfully
- User redirected to provider details page
### Key verification points:
- Provider page loads correctly
- Connect account page displays GitHub option
- Provider details form accepts username and alias
- GitHub App credentials page loads with App ID and private key fields
- GitHub App credentials are properly filled in the correct fields
- Launch scan page appears
- Successful redirect to provider page after scan launch
### Notes:
- Test uses environment variables for GitHub username, App ID, and base64 encoded private key
- Private key is base64 encoded and must be decoded before use
- Provider cleanup performed before each test to ensure clean state
- Requires valid GitHub App with App ID and private key
- GitHub App must have sufficient permissions for security scanning
- Test validates that GitHub App credentials go to the correct fields
---
## Test Case: `PROVIDER-E2E-010` - Add GitHub Provider with Organization Personal Access Token
**Priority:** `critical`
**Tags:**
- type → @e2e, @serial
- feature → @providers
- provider → @github
**Description/Objective:** Validates the complete flow of adding a new GitHub provider using organization personal access token authentication
**Preconditions:**
- Admin user authentication required (admin.auth.setup setup)
- Environment variables configured: E2E_GITHUB_ORGANIZATION, E2E_GITHUB_ORGANIZATION_ACCESS_TOKEN
- Remove any existing provider with the same Organization name before starting the test
- This test must be run serially and never in parallel with other tests, as it requires the Organization name not to be already registered beforehand.
### Flow Steps:
1. Navigate to providers page
2. Click "Add Provider" button
3. Select GitHub provider type
4. Fill provider details (organization name and alias)
5. Select personal access token credentials type
6. Fill GitHub organization personal access token credentials
7. Launch initial scan
8. Verify redirect to provider management page
### Expected Result:
- GitHub provider successfully added with organization personal access token
- Initial scan launched successfully
- User redirected to provider details page
### Key verification points:
- Provider page loads correctly
- Connect account page displays GitHub option
- Provider details form accepts organization name and alias
- Personal access token credentials page loads with token field
- Organization personal access token is properly filled in the correct field
- Launch scan page appears
- Successful redirect to provider page after scan launch
### Notes:
- Test uses environment variables for GitHub organization name and organization access token
- Provider cleanup performed before each test to ensure clean state
- Requires valid GitHub organization with organization access token
- Organization access token must have sufficient permissions for security scanning
- Test validates that organization personal access token goes to the correct field
@@ -1,936 +0,0 @@
import { test } from "@playwright/test";
import * as helpers from "../helpers";
import {
ProvidersPage,
AWSProviderData,
AWSProviderCredential,
AWS_CREDENTIAL_OPTIONS,
AZUREProviderData,
AZUREProviderCredential,
AZURE_CREDENTIAL_OPTIONS,
M365ProviderData,
M365ProviderCredential,
M365_CREDENTIAL_OPTIONS,
KubernetesProviderData,
KubernetesProviderCredential,
KUBERNETES_CREDENTIAL_OPTIONS,
GCPProviderData,
GCPProviderCredential,
GCP_CREDENTIAL_OPTIONS,
GitHubProviderData,
GitHubProviderCredential,
GITHUB_CREDENTIAL_OPTIONS,
} from "./providers-page";
import { ScansPage } from "../scans/scans-page";
import fs from "fs";
test.describe("Add Provider", () => {
test.describe.serial("Add AWS Provider", () => {
// Providers page object
let providersPage: ProvidersPage;
let scansPage: ScansPage;
// Test data from environment variables
const accountId = process.env.E2E_AWS_PROVIDER_ACCOUNT_ID;
const accessKey = process.env.E2E_AWS_PROVIDER_ACCESS_KEY;
const secretKey = process.env.E2E_AWS_PROVIDER_SECRET_KEY;
const roleArn = process.env.E2E_AWS_PROVIDER_ROLE_ARN;
// Validate required environment variables
if (!accountId) {
throw new Error(
"E2E_AWS_PROVIDER_ACCOUNT_ID environment variable is not set",
);
}
// Setup before each test
test.beforeEach(async ({ page }) => {
providersPage = new ProvidersPage(page);
// Clean up existing provider to ensure clean test state
await providersPage.deleteProviderIfExists(accountId);
});
// Use admin authentication for provider management
test.use({ storageState: "playwright/.auth/admin_user.json" });
test(
"should add a new AWS provider with static credentials",
{
tag: [
"@critical",
"@e2e",
"@providers",
"@aws",
"@serial",
"@PROVIDER-E2E-001",
],
},
async ({ page }) => {
// Validate required environment variables
if (!accountId || !accessKey || !secretKey) {
throw new Error(
"E2E_AWS_PROVIDER_ACCOUNT_ID, E2E_AWS_PROVIDER_ACCESS_KEY, and E2E_AWS_PROVIDER_SECRET_KEY environment variables are not set",
);
}
// Prepare test data for AWS provider
const awsProviderData: AWSProviderData = {
accountId: accountId,
alias: "Test E2E AWS Account - Credentials",
};
// Prepare static credentials
const staticCredentials: AWSProviderCredential = {
type: AWS_CREDENTIAL_OPTIONS.AWS_CREDENTIALS,
accessKeyId: accessKey,
secretAccessKey: secretKey,
};
// Navigate to providers page
await providersPage.goto();
await providersPage.verifyPageLoaded();
// Start adding new provider
await providersPage.clickAddProvider();
await providersPage.verifyConnectAccountPageLoaded();
// Select AWS provider
await providersPage.selectAWSProvider();
// Fill provider details
await providersPage.fillAWSProviderDetails(awsProviderData);
await providersPage.clickNext();
// Select static credentials type
await providersPage.selectCredentialsType(
AWS_CREDENTIAL_OPTIONS.AWS_CREDENTIALS,
);
await providersPage.verifyCredentialsPageLoaded();
// Fill static credentials
await providersPage.fillStaticCredentials(staticCredentials);
await providersPage.clickNext();
// Launch scan
await providersPage.verifyLaunchScanPageLoaded();
await providersPage.clickNext();
// Wait for redirect to provider page
scansPage = new ScansPage(page);
await scansPage.verifyPageLoaded();
},
);
test(
"should add a new AWS provider with assume role credentials with Access Key and Secret Key",
{
tag: [
"@critical",
"@e2e",
"@providers",
"@aws",
"@serial",
"@PROVIDER-E2E-002",
],
},
async ({ page }) => {
// Validate required environment variables
if (!accountId || !accessKey || !secretKey || !roleArn) {
throw new Error(
"E2E_AWS_PROVIDER_ACCOUNT_ID, E2E_AWS_PROVIDER_ACCESS_KEY, E2E_AWS_PROVIDER_SECRET_KEY, and E2E_AWS_PROVIDER_ROLE_ARN environment variables are not set",
);
}
// Prepare test data for AWS provider
const awsProviderData: AWSProviderData = {
accountId: accountId,
alias: "Test E2E AWS Account - Credentials",
};
// Prepare role-based credentials
const roleCredentials: AWSProviderCredential = {
type: AWS_CREDENTIAL_OPTIONS.AWS_ROLE_ARN,
accessKeyId: accessKey,
secretAccessKey: secretKey,
roleArn: roleArn,
};
// Navigate to providers page
await providersPage.goto();
await providersPage.verifyPageLoaded();
// Start adding new provider
await providersPage.clickAddProvider();
await providersPage.verifyConnectAccountPageLoaded();
// Select AWS provider
await providersPage.selectAWSProvider();
// Fill provider details
await providersPage.fillAWSProviderDetails(awsProviderData);
await providersPage.clickNext();
// Select role credentials type
await providersPage.selectCredentialsType(
AWS_CREDENTIAL_OPTIONS.AWS_ROLE_ARN,
);
await providersPage.verifyCredentialsPageLoaded();
// Fill role credentials
await providersPage.fillRoleCredentials(roleCredentials);
await providersPage.clickNext();
// Launch scan
await providersPage.verifyLaunchScanPageLoaded();
await providersPage.clickNext();
// Wait for redirect to provider page
scansPage = new ScansPage(page);
await scansPage.verifyPageLoaded();
},
);
});
test.describe.serial("Add AZURE Provider", () => {
// Providers page object
let providersPage: ProvidersPage;
let scansPage: ScansPage;
// Test data from environment variables
const subscriptionId = process.env.E2E_AZURE_SUBSCRIPTION_ID;
const clientId = process.env.E2E_AZURE_CLIENT_ID;
const clientSecret = process.env.E2E_AZURE_SECRET_ID;
const tenantId = process.env.E2E_AZURE_TENANT_ID;
// Validate required environment variables
if (!subscriptionId || !clientId || !clientSecret || !tenantId) {
throw new Error(
"E2E_AZURE_SUBSCRIPTION_ID, E2E_AZURE_CLIENT_ID, E2E_AZURE_SECRET_ID, and E2E_AZURE_TENANT_ID environment variables are not set",
);
}
// Setup before each test
test.beforeEach(async ({ page }) => {
providersPage = new ProvidersPage(page);
// Clean up existing provider to ensure clean test state
await providersPage.deleteProviderIfExists(subscriptionId);
});
// Use admin authentication for provider management
test.use({ storageState: "playwright/.auth/admin_user.json" });
test(
"should add a new Azure provider with static credentials",
{
tag: [
"@critical",
"@e2e",
"@providers",
"@azure",
"@serial",
"@PROVIDER-E2E-003",
],
},
async ({ page }) => {
// Prepare test data for AZURE provider
const azureProviderData: AZUREProviderData = {
subscriptionId: subscriptionId,
alias: "Test E2E AZURE Account - Credentials",
};
// Prepare static credentials
const azureCredentials: AZUREProviderCredential = {
type: AZURE_CREDENTIAL_OPTIONS.AZURE_CREDENTIALS,
clientId: clientId,
clientSecret: clientSecret,
tenantId: tenantId,
};
// Navigate to providers page
await providersPage.goto();
await providersPage.verifyPageLoaded();
// Start adding new provider
await providersPage.clickAddProvider();
await providersPage.verifyConnectAccountPageLoaded();
// Select AZURE provider
await providersPage.selectAZUREProvider();
// Fill provider details
await providersPage.fillAZUREProviderDetails(azureProviderData);
await providersPage.clickNext();
// Fill static credentials details
await providersPage.fillAZURECredentials(azureCredentials);
await providersPage.clickNext();
// Launch scan
await providersPage.verifyLaunchScanPageLoaded();
await providersPage.clickNext();
// Wait for redirect to scan page
scansPage = new ScansPage(page);
await scansPage.verifyPageLoaded();
},
);
});
test.describe.serial("Add M365 Provider", () => {
// Providers page object
let providersPage: ProvidersPage;
let scansPage: ScansPage;
// Test data from environment variables
const domainId = process.env.E2E_M365_DOMAIN_ID;
const clientId = process.env.E2E_M365_CLIENT_ID;
const tenantId = process.env.E2E_M365_TENANT_ID;
// Validate required environment variables
if (!domainId || !clientId || !tenantId) {
throw new Error(
"E2E_M365_DOMAIN_ID, E2E_M365_CLIENT_ID, and E2E_M365_TENANT_ID environment variables are not set",
);
}
// Setup before each test
test.beforeEach(async ({ page }) => {
providersPage = new ProvidersPage(page);
// Clean up existing provider to ensure clean test state
await providersPage.deleteProviderIfExists(domainId);
});
// Use admin authentication for provider management
test.use({ storageState: "playwright/.auth/admin_user.json" });
test(
"should add a new M365 provider with static credentials",
{
tag: [
"@critical",
"@e2e",
"@providers",
"@m365",
"@serial",
"@PROVIDER-E2E-004",
],
},
async ({ page }) => {
// Validate required environment variables
const clientSecret = process.env.E2E_M365_SECRET_ID;
if (!clientSecret) {
throw new Error("E2E_M365_SECRET_ID environment variable is not set");
}
// Prepare test data for M365 provider
const m365ProviderData: M365ProviderData = {
domainId: domainId,
alias: "Test E2E M365 Account - Credentials",
};
// Prepare static credentials
const m365Credentials: M365ProviderCredential = {
type: M365_CREDENTIAL_OPTIONS.M365_CREDENTIALS,
clientId: clientId,
clientSecret: clientSecret,
tenantId: tenantId,
};
// Navigate to providers page
await providersPage.goto();
await providersPage.verifyPageLoaded();
// Start adding new provider
await providersPage.clickAddProvider();
await providersPage.verifyConnectAccountPageLoaded();
// Select M365 provider
await providersPage.selectM365Provider();
// Fill provider details
await providersPage.fillM365ProviderDetails(m365ProviderData);
await providersPage.clickNext();
// Select static credentials type
await providersPage.selectM365CredentialsType(
M365_CREDENTIAL_OPTIONS.M365_CREDENTIALS,
);
// Verify M365 credentials page is loaded
await providersPage.verifyM365CredentialsPageLoaded();
// Fill static credentials details
await providersPage.fillM365Credentials(m365Credentials);
await providersPage.clickNext();
// Launch scan
await providersPage.verifyLaunchScanPageLoaded();
await providersPage.clickNext();
// Wait for redirect to scan page
scansPage = new ScansPage(page);
await scansPage.verifyPageLoaded();
},
);
test(
"should add a new M365 provider with certificate",
{
tag: [
"@critical",
"@e2e",
"@providers",
"@m365",
"@serial",
"@PROVIDER-E2E-005",
],
},
async ({ page }) => {
// Validate required environment variables
const certificateContent = process.env.E2E_M365_CERTIFICATE_CONTENT;
if (!certificateContent) {
throw new Error(
"E2E_M365_CERTIFICATE_CONTENT environment variable is not set",
);
}
// Prepare test data for M365 provider
const m365ProviderData: M365ProviderData = {
domainId: domainId,
alias: "Test E2E M365 Account - Certificate",
};
// Prepare static credentials
const m365Credentials: M365ProviderCredential = {
type: M365_CREDENTIAL_OPTIONS.M365_CERTIFICATE_CREDENTIALS,
clientId: clientId,
tenantId: tenantId,
certificateContent: certificateContent,
};
// Navigate to providers page
await providersPage.goto();
await providersPage.verifyPageLoaded();
// Start adding new provider
await providersPage.clickAddProvider();
await providersPage.verifyConnectAccountPageLoaded();
// Select M365 provider
await providersPage.selectM365Provider();
// Fill provider details
await providersPage.fillM365ProviderDetails(m365ProviderData);
await providersPage.clickNext();
// Select static credentials type
await providersPage.selectM365CredentialsType(
M365_CREDENTIAL_OPTIONS.M365_CERTIFICATE_CREDENTIALS,
);
// Verify M365 certificate credentials page is loaded
await providersPage.verifyM365CertificateCredentialsPageLoaded();
// Fill static credentials details
await providersPage.fillM365CertificateCredentials(m365Credentials);
await providersPage.clickNext();
// Launch scan
await providersPage.verifyLaunchScanPageLoaded();
await providersPage.clickNext();
// Wait for redirect to scan page
scansPage = new ScansPage(page);
await scansPage.verifyPageLoaded();
},
);
});
test.describe.serial("Add Kubernetes Provider", () => {
// Providers page object
let providersPage: ProvidersPage;
let scansPage: ScansPage;
// Test data from environment variables
const context = process.env.E2E_KUBERNETES_CONTEXT;
const kubeconfigPath = process.env.E2E_KUBERNETES_KUBECONFIG_PATH;
// Validate required environment variables
if (!context || !kubeconfigPath) {
throw new Error(
"E2E_KUBERNETES_CONTEXT and E2E_KUBERNETES_KUBECONFIG_PATH environment variables are not set",
);
}
// Setup before each test
test.beforeEach(async ({ page }) => {
providersPage = new ProvidersPage(page);
// Clean up existing provider to ensure clean test state
await providersPage.deleteProviderIfExists(context);
});
// Use admin authentication for provider management
test.use({ storageState: "playwright/.auth/admin_user.json" });
test(
"should add a new Kubernetes provider with kubeconfig context",
{
tag: [
"@critical",
"@e2e",
"@providers",
"@kubernetes",
"@serial",
"@PROVIDER-E2E-006",
],
},
async ({ page }) => {
// Verify kubeconfig file exists
if (!fs.existsSync(kubeconfigPath)) {
throw new Error(`Kubeconfig file not found at ${kubeconfigPath}`);
}
// Read kubeconfig content from file
let kubeconfigContent: string;
try {
kubeconfigContent = fs.readFileSync(kubeconfigPath, "utf8");
} catch (error) {
throw new Error(
`Failed to read kubeconfig file at ${kubeconfigPath}: ${error}`,
);
}
// Prepare test data for Kubernetes provider
const kubernetesProviderData: KubernetesProviderData = {
context: context,
alias: "Test E2E Kubernetes Account - Kubeconfig Context",
};
// Prepare static credentials
const kubernetesCredentials: KubernetesProviderCredential = {
type: KUBERNETES_CREDENTIAL_OPTIONS.KUBECONFIG_CONTENT,
kubeconfigContent: kubeconfigContent,
};
// Navigate to providers page
await providersPage.goto();
await providersPage.verifyPageLoaded();
// Start adding new provider
await providersPage.clickAddProvider();
await providersPage.verifyConnectAccountPageLoaded();
// Select Kubernetes provider
await providersPage.selectKubernetesProvider();
// Fill provider details
await providersPage.fillKubernetesProviderDetails(
kubernetesProviderData,
);
await providersPage.clickNext();
// Verify credentials page is loaded
await providersPage.verifyKubernetesCredentialsPageLoaded();
// Fill static credentials details
await providersPage.fillKubernetesCredentials(kubernetesCredentials);
await providersPage.clickNext();
// Launch scan
await providersPage.verifyLaunchScanPageLoaded();
await providersPage.clickNext();
// Wait for redirect to scan page
scansPage = new ScansPage(page);
await scansPage.verifyPageLoaded();
},
);
});
test.describe.serial("Add GCP Provider", () => {
// Providers page object
let providersPage: ProvidersPage;
let scansPage: ScansPage;
// Test data from environment variables
const projectId = process.env.E2E_GCP_PROJECT_ID;
// Validate required environment variables
if (!projectId) {
throw new Error("E2E_GCP_PROJECT_ID environment variable is not set");
}
// Setup before each test
test.beforeEach(async ({ page }) => {
providersPage = new ProvidersPage(page);
// Clean up existing provider to ensure clean test state
await providersPage.deleteProviderIfExists(projectId);
});
// Use admin authentication for provider management
test.use({ storageState: "playwright/.auth/admin_user.json" });
test(
"should add a new GCP provider with service account key",
{
tag: [
"@critical",
"@e2e",
"@providers",
"@gcp",
"@serial",
"@PROVIDER-E2E-007",
],
},
async ({ page }) => {
// Validate required environment variables
const serviceAccountKeyB64 =
process.env.E2E_GCP_BASE64_SERVICE_ACCOUNT_KEY;
// Verify service account key is set in environment variables
if (!serviceAccountKeyB64) {
throw new Error(
"E2E_GCP_BASE64_SERVICE_ACCOUNT_KEY environment variable is not set",
);
}
// Decode service account key from base64
const serviceAccountKey = Buffer.from(
serviceAccountKeyB64,
"base64",
).toString("utf8");
// Verify service account key is valid JSON
try {
JSON.parse(serviceAccountKey);
} catch {
throw new Error("Invalid service account key format");
}
// Prepare test data for GCP provider
const gcpProviderData: GCPProviderData = {
projectId: projectId,
alias: "Test E2E GCP Account - Service Account Key",
};
// Prepare static credentials
const gcpCredentials: GCPProviderCredential = {
type: GCP_CREDENTIAL_OPTIONS.GCP_SERVICE_ACCOUNT,
serviceAccountKey: serviceAccountKey,
};
// Navigate to providers page
await providersPage.goto();
await providersPage.verifyPageLoaded();
// Start adding new provider
await providersPage.clickAddProvider();
await providersPage.verifyConnectAccountPageLoaded();
// Select GCP provider
await providersPage.selectGCPProvider();
// Fill provider details
await providersPage.fillGCPProviderDetails(gcpProviderData);
await providersPage.clickNext();
// Select static credentials type
await providersPage.selectGCPCredentialsType(
GCP_CREDENTIAL_OPTIONS.GCP_SERVICE_ACCOUNT,
);
// Verify GCP service account page is loaded
await providersPage.verifyGCPServiceAccountPageLoaded();
// Fill static service account key details
await providersPage.fillGCPServiceAccountKeyCredentials(gcpCredentials);
await providersPage.clickNext();
// Launch scan
await providersPage.verifyLaunchScanPageLoaded();
await providersPage.clickNext();
// Wait for redirect to scan page
scansPage = new ScansPage(page);
await scansPage.verifyPageLoaded();
},
);
});
test.describe.serial("Add GitHub Provider", () => {
// Providers page object
let providersPage: ProvidersPage;
let scansPage: ScansPage;
test.describe("Add GitHub provider with username", () => {
// Test data from environment variables
const username = process.env.E2E_GITHUB_USERNAME;
// Validate required environment variables
if (!username) {
throw new Error("E2E_GITHUB_USERNAME environment variable is not set");
}
// Setup before each test
test.beforeEach(async ({ page }) => {
providersPage = new ProvidersPage(page);
// Clean up existing provider to ensure clean test state
await providersPage.deleteProviderIfExists(username);
});
// Use admin authentication for provider management
test.use({ storageState: "playwright/.auth/admin_user.json" });
test(
"should add a new GitHub provider with personal access token",
{
tag: [
"@critical",
"@e2e",
"@providers",
"@github",
"@serial",
"@PROVIDER-E2E-008",
],
},
async ({ page }) => {
// Validate required environment variables
const personalAccessToken =
process.env.E2E_GITHUB_PERSONAL_ACCESS_TOKEN;
// Verify personal access token is set in environment variables
if (!personalAccessToken) {
throw new Error(
"E2E_GITHUB_PERSONAL_ACCESS_TOKEN environment variable is not set",
);
}
// Prepare test data for GitHub provider
const githubProviderData: GitHubProviderData = {
username: username,
alias: "Test E2E GitHub Account - Personal Access Token",
};
// Prepare personal access token credentials
const githubCredentials: GitHubProviderCredential = {
type: GITHUB_CREDENTIAL_OPTIONS.GITHUB_PERSONAL_ACCESS_TOKEN,
personalAccessToken: personalAccessToken,
};
// Navigate to providers page
await providersPage.goto();
await providersPage.verifyPageLoaded();
// Start adding new provider
await providersPage.clickAddProvider();
await providersPage.verifyConnectAccountPageLoaded();
// Select GitHub provider
await providersPage.selectGitHubProvider();
// Fill provider details
await providersPage.fillGitHubProviderDetails(githubProviderData);
await providersPage.clickNext();
// Select GitHub personal access token credentials type
await providersPage.selectGitHubCredentialsType(
GITHUB_CREDENTIAL_OPTIONS.GITHUB_PERSONAL_ACCESS_TOKEN,
);
// Verify GitHub personal access token page is loaded
await providersPage.verifyGitHubPersonalAccessTokenPageLoaded();
// Fill static personal access token details
await providersPage.fillGitHubPersonalAccessTokenCredentials(
githubCredentials,
);
await providersPage.clickNext();
// Launch scan
await providersPage.verifyLaunchScanPageLoaded();
await providersPage.clickNext();
// Wait for redirect to scan page
scansPage = new ScansPage(page);
await scansPage.verifyPageLoaded();
},
);
test(
"should add a new GitHub provider with github app",
{
tag: [
"@critical",
"@e2e",
"@providers",
"@github",
"@serial",
"@PROVIDER-E2E-009",
],
},
async ({ page }) => {
// Validate required environment variables
const githubAppId = process.env.E2E_GITHUB_APP_ID;
const githubAppPrivateKeyB64 =
process.env.E2E_GITHUB_BASE64_APP_PRIVATE_KEY;
// Verify github app id and private key are set in environment variables
if (!githubAppId || !githubAppPrivateKeyB64) {
throw new Error(
"E2E_GITHUB_APP_ID and E2E_GITHUB_APP_PRIVATE_KEY environment variables are not set",
);
}
// Decode github app private key from base64
const githubAppPrivateKey = Buffer.from(
githubAppPrivateKeyB64,
"base64",
).toString("utf8");
// Prepare test data for GitHub provider
const githubProviderData: GitHubProviderData = {
username: username,
alias: "Test E2E GitHub Account - GitHub App",
};
// Prepare github app credentials
const githubCredentials: GitHubProviderCredential = {
type: GITHUB_CREDENTIAL_OPTIONS.GITHUB_APP,
githubAppId: githubAppId,
githubAppPrivateKey: githubAppPrivateKey,
};
// Navigate to providers page
await providersPage.goto();
await providersPage.verifyPageLoaded();
// Start adding new provider
await providersPage.clickAddProvider();
await providersPage.verifyConnectAccountPageLoaded();
// Select GitHub provider
await providersPage.selectGitHubProvider();
// Fill provider details
await providersPage.fillGitHubProviderDetails(githubProviderData);
await providersPage.clickNext();
// Select GitHub App credentials type
await providersPage.selectGitHubCredentialsType(
GITHUB_CREDENTIAL_OPTIONS.GITHUB_APP,
);
// Verify GitHub App credentials page is loaded
await providersPage.verifyGitHubAppPageLoaded();
// Fill GitHub App credentials details
await providersPage.fillGitHubAppCredentials(
githubCredentials,
);
await providersPage.clickNext();
// Launch scan
await providersPage.verifyLaunchScanPageLoaded();
await providersPage.clickNext();
// Wait for redirect to scan page
scansPage = new ScansPage(page);
await scansPage.verifyPageLoaded();
},
);
});
test.describe("Add GitHub provider with organization", () => {
// Test data from environment variables
const organization = process.env.E2E_GITHUB_ORGANIZATION;
// Validate required environment variables
if (!organization) {
throw new Error(
"E2E_GITHUB_ORGANIZATION environment variable is not set",
);
}
// Setup before each test
test.beforeEach(async ({ page }) => {
providersPage = new ProvidersPage(page);
// Clean up existing provider to ensure clean test state
await providersPage.deleteProviderIfExists(organization);
});
// Use admin authentication for provider management
test.use({ storageState: "playwright/.auth/admin_user.json" });
test(
"should add a new GitHub provider with organization personal access token",
{
tag: [
"@critical",
"@e2e",
"@providers",
"@github",
"@serial",
"@PROVIDER-E2E-010",
],
},
async ({ page }) => {
// Validate required environment variables
const organizationAccessToken =
process.env.E2E_GITHUB_ORGANIZATION_ACCESS_TOKEN;
// Verify organization access token is set in environment variables
if (!organizationAccessToken) {
throw new Error(
"E2E_GITHUB_ORGANIZATION_ACCESS_TOKEN environment variable is not set",
);
}
// Prepare test data for GitHub provider
const githubProviderData: GitHubProviderData = {
username: organization,
alias: "Test E2E GitHub Account - Organization Access Token",
};
// Prepare personal access token credentials
const githubCredentials: GitHubProviderCredential = {
type: GITHUB_CREDENTIAL_OPTIONS.GITHUB_PERSONAL_ACCESS_TOKEN,
personalAccessToken: organizationAccessToken,
};
// Navigate to providers page
await providersPage.goto();
await providersPage.verifyPageLoaded();
// Start adding new provider
await providersPage.clickAddProvider();
await providersPage.verifyConnectAccountPageLoaded();
// Select GitHub provider
await providersPage.selectGitHubProvider();
// Fill provider details
await providersPage.fillGitHubProviderDetails(githubProviderData);
await providersPage.clickNext();
// Select GitHub organization personal access token credentials type
await providersPage.selectGitHubCredentialsType(
GITHUB_CREDENTIAL_OPTIONS.GITHUB_PERSONAL_ACCESS_TOKEN,
);
// Verify GitHub personal access token page is loaded
await providersPage.verifyGitHubPersonalAccessTokenPageLoaded();
// Fill static personal access token details
await providersPage.fillGitHubPersonalAccessTokenCredentials(
githubCredentials,
);
await providersPage.clickNext();
// Launch scan
await providersPage.verifyLaunchScanPageLoaded();
await providersPage.clickNext();
// Wait for redirect to scan page
scansPage = new ScansPage(page);
await scansPage.verifyPageLoaded();
},
);
});
});
});
-28
@@ -1,28 +0,0 @@
import { Page, Locator, expect } from "@playwright/test";
import { BasePage } from "../base-page";
// Scan page
export class ScansPage extends BasePage {
// Main content elements
readonly scanTable: Locator;
constructor(page: Page) {
super(page);
// Main content elements
this.scanTable = page.locator("table");
}
// Navigation methods
async goto(): Promise<void> {
await super.goto("/scans");
}
// Verification methods
async verifyPageLoaded(): Promise<void> {
await expect(this.page).toHaveTitle(/Prowler/);
await expect(this.scanTable).toBeVisible();
await this.waitForPageLoad();
}
}
-1
@@ -4,7 +4,6 @@ import { authenticateAndSaveState } from '@/tests/helpers';
const adminUserFile = 'playwright/.auth/admin_user.json';
authAdminSetup('authenticate as admin e2e user', async ({ page }) => {
const adminEmail = process.env.E2E_ADMIN_USER;
const adminPassword = process.env.E2E_ADMIN_PASSWORD;
@@ -6,7 +6,7 @@ const manageCloudProvidersUserFile = 'playwright/.auth/manage_cloud_providers_us
authManageCloudProvidersSetup('authenticate as manage cloud providers e2e user', async ({ page }) => {
const cloudProvidersEmail = process.env.E2E_MANAGE_CLOUD_PROVIDERS_USER;
const cloudProvidersPassword = process.env.E2E_MANAGE_CLOUD_PROVIDERS_PASSWORD;
if (!cloudProvidersEmail || !cloudProvidersPassword) {
throw new Error('E2E_MANAGE_CLOUD_PROVIDERS_USER and E2E_MANAGE_CLOUD_PROVIDERS_PASSWORD environment variables are required');
}
-146
@@ -1,146 +0,0 @@
import { Page, Locator, expect } from "@playwright/test";
import { BasePage } from "../base-page";
export interface SignUpData {
name: string;
email: string;
password: string;
confirmPassword: string;
company?: string;
invitationToken?: string | null;
acceptTerms?: boolean;
}
export class SignUpPage extends BasePage {
// Form inputs
readonly nameInput: Locator;
readonly companyInput: Locator;
readonly emailInput: Locator;
readonly passwordInput: Locator;
readonly confirmPasswordInput: Locator;
readonly invitationTokenInput: Locator;
// UI elements
readonly submitButton: Locator;
readonly loginLink: Locator;
readonly termsCheckbox: Locator;
constructor(page: Page) {
super(page);
// Prefer stable name attributes to avoid label ambiguity in composed inputs
this.nameInput = page.locator('input[name="name"]');
this.companyInput = page.locator('input[name="company"]');
this.emailInput = page.getByRole("textbox", { name: "Email" });
this.passwordInput = page.locator('input[name="password"]');
this.confirmPasswordInput = page.locator('input[name="confirmPassword"]');
this.invitationTokenInput = page.locator('input[name="invitationToken"]');
this.submitButton = page.getByRole("button", { name: "Sign up" });
this.loginLink = page.getByRole("link", { name: "Log in" });
this.termsCheckbox = page.getByRole("checkbox", { name: /I agree with the/i });
}
async goto(): Promise<void> {
// Navigate to the sign up page
await super.goto("/sign-up");
}
async gotoInvite(shareUrl: string): Promise<void> {
// Navigate to the share url
await super.goto(shareUrl);
}
async verifyPageLoaded(): Promise<void> {
// Verify the sign up page is loaded
await expect(this.page.getByRole("heading", { name: "Sign up" })).toBeVisible();
await expect(this.emailInput).toBeVisible();
await expect(this.submitButton).toBeVisible();
await this.waitForPageLoad();
}
async fillName(name: string): Promise<void> {
// Fill the name input
await this.nameInput.fill(name);
}
async fillCompany(company?: string): Promise<void> {
// Fill the company input
if (company) {
await this.companyInput.fill(company);
}
}
async fillEmail(email: string): Promise<void> {
// Fill the email input
await this.emailInput.fill(email);
}
async fillPassword(password: string): Promise<void> {
// Fill the password input
await this.passwordInput.fill(password);
}
async fillConfirmPassword(confirmPassword: string): Promise<void> {
// Fill the confirm password input
await this.confirmPasswordInput.fill(confirmPassword);
}
async fillInvitationToken(token?: string | null): Promise<void> {
// Fill the invitation token input
if (token) {
await this.invitationTokenInput.fill(token);
}
}
async acceptTermsIfPresent(accept: boolean = true): Promise<void> {
// Accept the terms and conditions if present
if (await this.termsCheckbox.isVisible()) {
if (accept) {
await this.termsCheckbox.click();
}
}
}
async submit(): Promise<void> {
// Submit the sign up form
await this.submitButton.click();
}
async signup(data: SignUpData): Promise<void> {
// Fill the sign up form
await this.fillName(data.name);
await this.fillCompany(data.company ?? undefined);
await this.fillEmail(data.email);
await this.fillPassword(data.password);
await this.fillConfirmPassword(data.confirmPassword);
await this.acceptTermsIfPresent(data.acceptTerms ?? true);
await this.submit();
}
async verifyRedirectToLogin(): Promise<void> {
// Verify redirect to login page
await expect(this.page).toHaveURL("/sign-in");
}
async verifyRedirectToEmailVerification(): Promise<void> {
// Verify redirect to email verification page
await expect(this.page).toHaveURL("/email-verification");
}
}
-43
@@ -1,43 +0,0 @@
### E2E Tests: User Sign-Up
**Suite ID:** `SIGNUP-E2E`
**Feature:** New user registration.
---
## Test Case: `SIGNUP-E2E-001` - Successful new user registration and login
**Priority:** `critical`
**Tags:**
- type → @e2e
- feature → @signup
**Description/Objective:** Registers a new user with valid data, verifies redirect to Login (OSS), and confirms the user can authenticate.
**Preconditions:**
- Application is running, and the email domain and password are acceptable for sign-up.
- No existing data in Prowler is required; the test can run on a clean state.
- `E2E_NEW_USER_PASSWORD` environment variable must be set with a valid password for the test.
### Flow Steps:
1. Navigate to the Sign up page.
2. Fill the form with valid data (unique email, valid password, terms accepted).
3. Submit the form.
4. Verify redirect to the Login page.
5. Log in with the newly created credentials.
### Expected Result:
- Sign-up succeeds and redirects to Login.
- User can log in successfully using the created credentials and reach the home page.
### Key verification points:
- After submitting sign-up, the URL changes to `/sign-in`.
- The newly created credentials can be used to sign in successfully.
- After login, the user lands on the home (`/`) and main content is visible.
### Notes:
- Test data uses a random base36 suffix in the email to avoid collisions.
- The test requires the `E2E_NEW_USER_PASSWORD` environment variable to be set before running.
-49
@@ -1,49 +0,0 @@
import { test } from "@playwright/test";
import { SignUpPage } from "./sign-up-page";
import { SignInPage } from "../sign-in/sign-in-page";
import { makeSuffix } from "../helpers";
test.describe("Sign Up Flow", () => {
test(
"should register a new user successfully",
{ tag: ["@critical", "@e2e", "@signup", "@SIGNUP-E2E-001"] },
async ({ page }) => {
const password = process.env.E2E_NEW_USER_PASSWORD;
if (!password) {
throw new Error("E2E_NEW_USER_PASSWORD environment variable is not set");
}
const signUpPage = new SignUpPage(page);
await signUpPage.goto();
// Generate unique test data
const suffix = makeSuffix(10);
const uniqueEmail = `e2e+${suffix}@prowler.com`;
// Fill and submit the sign-up form
await signUpPage.signup({
name: `E2E User ${suffix}`,
company: `Test E2E Co ${suffix}`,
email: uniqueEmail,
password: password,
confirmPassword: password,
acceptTerms: true,
});
// Verify no errors occurred during sign-up
await signUpPage.verifyNoErrors();
// Verify redirect to login page (OSS environment)
await signUpPage.verifyRedirectToLogin();
// Verify the newly created user can log in successfully
const signInPage = new SignInPage(page);
await signInPage.login({
email: uniqueEmail,
password: password,
});
await signInPage.verifySuccessfulLogin();
},
);
});
+95 -21
@@ -19,6 +19,7 @@ A Python script to bulk-provision cloud providers in Prowler Cloud/App via REST
- **Flexible Authentication:** Supports various authentication methods per provider
- **Error Handling:** Comprehensive error reporting and validation
- **Connection Testing:** Built-in provider connection verification
- **AWS Organizations Support:** Automated YAML generation for all accounts in an AWS Organization
## How It Works
@@ -48,32 +49,105 @@ This two-step approach follows the Prowler API design where providers and their
pip install -r requirements.txt
```
3. Get your Prowler API token:
- **Prowler Cloud:** Generate token at https://api.prowler.com
- **Self-hosted Prowler App:** Generate token in your local instance
3. Get your Prowler API key:
- **Prowler Cloud:** Create an API key at https://api.prowler.com
- **Self-hosted Prowler App:** Create an API key in your local instance
- Click **Profile** → **Account** → **Create API Key**
```bash
export PROWLER_API_TOKEN=$(curl --location 'https://api.prowler.com/api/v1/tokens' \
--header 'Content-Type: application/vnd.api+json' \
--header 'Accept: application/vnd.api+json' \
--data-raw '{
"data": {
"type": "tokens",
"attributes": {
"email": "your@email.com",
"password": "your-password"
}
}
}' | jq -r .data.attributes.access)
export PROWLER_API_KEY="pk_example-api-key"
```
For detailed instructions on creating API keys, see: https://docs.prowler.com/user-guide/providers/prowler-app-api-keys
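As a quick sanity check before running the scripts, the key can be exercised directly against the API. The snippet below is only a sketch: it assumes the `Api-Key` authorization scheme and the `/providers` endpoint used by the provisioning scripts in this repository, plus the default Prowler Cloud base URL.
```bash
# Sketch: confirm the API key is accepted before bulk provisioning.
# Assumes the Api-Key scheme and /providers endpoint used by the scripts here.
curl --silent --fail \
  --header "Authorization: Api-Key ${PROWLER_API_KEY}" \
  --header "Accept: application/vnd.api+json" \
  "${PROWLER_API_BASE:-https://api.prowler.com/api/v1}/providers" > /dev/null \
  && echo "API key OK" || echo "API key rejected"
```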
## AWS Organizations Integration
For organizations with many AWS accounts, use the included `aws_org_generator.py` script to automatically generate configuration for all accounts in your AWS Organization.
**📖 Full Guide:** See [AWS Organizations Bulk Provisioning Tutorial](https://docs.prowler.com/user-guide/tutorials/aws-organizations-bulk-provisioning) for complete documentation, examples, and troubleshooting.
### Prerequisites
Before using the AWS Organizations generator, deploy the ProwlerRole across all accounts using CloudFormation StackSets:
**Documentation:** [Deploying Prowler IAM Roles Across AWS Organizations](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/aws/organizations/#deploying-prowler-iam-roles-across-aws-organizations)
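If the role has not been rolled out yet, a service-managed StackSet created from the management account is one way to deploy it to every member account. The following is a sketch only: the template URL, parameter names, and target root/OU ID are placeholders and must match the template referenced in the documentation above.
```bash
# Sketch: org-wide ProwlerRole deployment via CloudFormation StackSets.
# TEMPLATE_URL, the ExternalId parameter, and the root/OU ID are placeholders.
aws cloudformation create-stack-set \
  --stack-set-name ProwlerRole \
  --template-url "$TEMPLATE_URL" \
  --parameters ParameterKey=ExternalId,ParameterValue=prowler-ext-id-12345 \
  --capabilities CAPABILITY_NAMED_IAM \
  --permission-model SERVICE_MANAGED \
  --auto-deployment Enabled=true,RetainStacksOnAccountRemoval=false

aws cloudformation create-stack-instances \
  --stack-set-name ProwlerRole \
  --deployment-targets OrganizationalUnitIds=r-examplerootid \
  --regions us-east-1
```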
### Quick Start
1. Install additional dependencies:
```bash
pip install -r requirements-aws-org.txt
```
2. Generate YAML configuration for all organization accounts:
```bash
python aws_org_generator.py -o aws-accounts.yaml --external-id example-external-id
```
3. Run bulk provisioning:
```bash
python prowler_bulk_provisioning.py aws-accounts.yaml
```
### AWS Organizations Generator Options
```bash
python aws_org_generator.py -o aws-accounts.yaml \
--role-name ProwlerRole \
--external-id my-external-id \
--exclude 123456789012 \
--profile org-management
```
| Option | Description | Default |
|--------|-------------|---------|
| `-o, --output` | Output YAML file path | `aws-org-accounts.yaml` |
| `--role-name` | IAM role name across accounts | `ProwlerRole` |
| `--external-id` | External ID for role assumption | None (recommended) |
| `--session-name` | Session name for role assumption | None |
| `--duration-seconds` | Session duration in seconds | None |
| `--alias-format` | Alias template: `{name}`, `{id}`, `{email}` | `{name}` |
| `--exclude` | Comma-separated account IDs to exclude | None |
| `--include` | Comma-separated account IDs to include | None |
| `--profile` | AWS CLI profile name | Default credentials |
| `--region` | AWS region | `us-east-1` |
| `--dry-run` | Print to stdout without writing | `False` |
### Examples
**Generate config for all accounts with custom external ID:**
```bash
python aws_org_generator.py -o aws-accounts.yaml --external-id prowler-2024-abc123
```
**Exclude management account:**
```bash
python aws_org_generator.py -o aws-accounts.yaml \
--external-id prowler-ext-id \
--exclude 123456789012
```
**Use specific AWS profile:**
```bash
python aws_org_generator.py -o aws-accounts.yaml \
--profile org-admin \
--external-id prowler-ext-id
```
**Custom alias format:**
```bash
python aws_org_generator.py -o aws-accounts.yaml \
--alias-format "{name}-{id}" \
--external-id prowler-ext-id
```
## Configuration
### Environment Variables
```bash
export PROWLER_API_TOKEN="your-prowler-token"
export PROWLER_API_KEY="pk_example-api-key"
export PROWLER_API_BASE="https://api.prowler.com/api/v1" # Optional, defaults to Prowler Cloud
```
@@ -168,7 +242,7 @@ python prowler_bulk_provisioning.py providers.yaml \
|--------|-------------|---------|
| `input_file` | YAML file with provider entries | Required |
| `--base-url` | API base URL | `https://api.prowler.com/api/v1` |
| `--token` | Bearer token | `PROWLER_API_TOKEN` env var |
| `--api-key` | Prowler API key | `PROWLER_API_KEY` env var |
| `--providers-endpoint` | Providers API endpoint | `/providers` |
| `--concurrency` | Number of concurrent requests | `5` |
| `--timeout` | Per-request timeout in seconds | `60` |
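For reference, the options above can also be passed explicitly instead of through environment variables; the input file name below is illustrative.
```bash
# Sketch: explicit flags instead of environment variables (file name is illustrative).
python prowler_bulk_provisioning.py providers.yaml \
  --base-url https://api.prowler.com/api/v1 \
  --api-key "$PROWLER_API_KEY" \
  --concurrency 5 \
  --timeout 60
```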
@@ -241,8 +315,8 @@ The Prowler API supports the following authentication methods for GCP:
# OR inline:
# inline_json:
# type: "service_account"
# project_id: "your-project"
# private_key_id: "key-id"
# project_id: "example-project"
# private_key_id: "example-key-id"
# private_key: "-----BEGIN PRIVATE KEY-----\n..."
# client_email: "service-account@project.iam.gserviceaccount.com"
# client_id: "1234567890"
@@ -379,10 +453,10 @@ python prowler_bulk_provisioning.py providers.yaml --dry-run
### Common Issues
1. **Invalid API Token**
1. **Invalid API Key**
```
Error: 401 Unauthorized
Solution: Check your PROWLER_API_TOKEN or --token parameter
Solution: Check your PROWLER_API_KEY environment variable or --api-key parameter
```
2. **Network Timeouts**
+333
@@ -0,0 +1,333 @@
#!/usr/bin/env python3
"""
AWS Organizations Account Generator for Prowler Bulk Provisioning
Generates YAML configuration for all accounts in an AWS Organization,
ready to be used with prowler_bulk_provisioning.py.
Prerequisites:
- ProwlerRole (or custom role) must be deployed across all accounts
- AWS credentials with Organizations read access (typically management account)
- See: https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/aws/organizations/#deploying-prowler-iam-roles-across-aws-organizations
"""
from __future__ import annotations
import argparse
import sys
from typing import Any, Dict, List, Optional
try:
import boto3
from botocore.exceptions import ClientError, NoCredentialsError
except ImportError:
sys.exit(
"boto3 is required. Install with: pip install boto3\n"
"Or install all dependencies: pip install -r requirements-aws-org.txt"
)
try:
import yaml
except ImportError:
sys.exit("PyYAML is required. Install with: pip install pyyaml")
def get_org_accounts(
profile: Optional[str] = None, region: Optional[str] = None
) -> List[Dict[str, Any]]:
"""
Retrieve all accounts from AWS Organizations.
Args:
profile: AWS CLI profile name
region: AWS region (defaults to us-east-1 for Organizations)
Returns:
List of account dictionaries with id, name, email, and status
"""
try:
session = boto3.Session(profile_name=profile, region_name=region or "us-east-1")
client = session.client("organizations")
accounts = []
paginator = client.get_paginator("list_accounts")
for page in paginator.paginate():
for account in page["Accounts"]:
# Only include ACTIVE accounts
if account["Status"] == "ACTIVE":
accounts.append(
{
"id": account["Id"],
"name": account["Name"],
"email": account["Email"],
"status": account["Status"],
}
)
return accounts
except NoCredentialsError:
sys.exit(
"No AWS credentials found. Configure credentials using:\n"
" - AWS CLI: aws configure\n"
" - Environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY\n"
" - IAM role if running on EC2/ECS/Lambda"
)
except ClientError as e:
error_code = e.response["Error"]["Code"]
if error_code == "AccessDeniedException":
sys.exit(
"Access denied to AWS Organizations API.\n"
"Ensure you are using credentials from the management account\n"
"with permissions to call organizations:ListAccounts"
)
elif error_code == "AWSOrganizationsNotInUseException":
sys.exit(
"AWS Organizations is not enabled for this account.\n"
"This script requires an AWS Organization to be set up."
)
else:
sys.exit(f"AWS API error: {e}")
except Exception as e:
sys.exit(f"Unexpected error listing accounts: {e}")
def generate_yaml_config(
accounts: List[Dict[str, Any]],
role_name: str = "ProwlerRole",
external_id: Optional[str] = None,
session_name: Optional[str] = None,
duration_seconds: Optional[int] = None,
alias_format: str = "{name}",
exclude_accounts: Optional[List[str]] = None,
include_accounts: Optional[List[str]] = None,
) -> List[Dict[str, Any]]:
"""
Generate YAML configuration for Prowler bulk provisioning.
Args:
accounts: List of account dictionaries from get_org_accounts
role_name: IAM role name (default: ProwlerRole)
external_id: External ID for role assumption (optional but recommended)
session_name: Session name for role assumption (optional)
duration_seconds: Session duration in seconds (optional)
alias_format: Format string for alias (supports {name}, {id}, {email})
exclude_accounts: List of account IDs to exclude
include_accounts: List of account IDs to include (if set, only these are included)
Returns:
List of provider configurations ready for YAML export
"""
exclude_accounts = exclude_accounts or []
include_accounts = include_accounts or []
providers = []
for account in accounts:
account_id = account["id"]
# Apply filters
if include_accounts and account_id not in include_accounts:
continue
if account_id in exclude_accounts:
continue
# Format alias using template
alias = alias_format.format(
name=account["name"], id=account_id, email=account["email"]
)
# Build role ARN
role_arn = f"arn:aws:iam::{account_id}:role/{role_name}"
# Build credentials section
credentials: Dict[str, Any] = {"role_arn": role_arn}
if external_id:
credentials["external_id"] = external_id
if session_name:
credentials["session_name"] = session_name
if duration_seconds:
credentials["duration_seconds"] = duration_seconds
# Build provider entry
provider = {
"provider": "aws",
"uid": account_id,
"alias": alias,
"auth_method": "role",
"credentials": credentials,
}
providers.append(provider)
return providers
def main():
"""Main function to generate AWS Organizations YAML configuration."""
parser = argparse.ArgumentParser(
description="Generate Prowler bulk provisioning YAML from AWS Organizations",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Basic usage - generate YAML for all accounts
python aws_org_generator.py -o aws-accounts.yaml
# Use custom role name and external ID
python aws_org_generator.py -o aws-accounts.yaml \\
--role-name ProwlerExecutionRole \\
--external-id my-external-id-12345
# Use specific AWS profile
python aws_org_generator.py -o aws-accounts.yaml \\
--profile org-management
# Exclude specific accounts (e.g., management account)
python aws_org_generator.py -o aws-accounts.yaml \\
--exclude 123456789012,210987654321
# Include only specific accounts
python aws_org_generator.py -o aws-accounts.yaml \\
--include 111111111111,222222222222
# Custom alias format
python aws_org_generator.py -o aws-accounts.yaml \\
--alias-format "{name}-{id}"
Prerequisites:
1. Deploy ProwlerRole across all accounts using CloudFormation StackSets:
https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/aws/organizations/#deploying-prowler-iam-roles-across-aws-organizations
2. Ensure AWS credentials have Organizations read access:
- organizations:ListAccounts
- organizations:DescribeOrganization (optional)
""",
)
parser.add_argument(
"-o",
"--output",
default="aws-org-accounts.yaml",
help="Output YAML file path (default: aws-org-accounts.yaml)",
)
parser.add_argument(
"--role-name",
default="ProwlerRole",
help="IAM role name deployed across accounts (default: ProwlerRole)",
)
parser.add_argument(
"--external-id",
help="External ID for role assumption (recommended for security)",
)
parser.add_argument(
"--session-name", help="Session name for role assumption (optional)"
)
parser.add_argument(
"--duration-seconds",
type=int,
help="Session duration in seconds (optional, default: 3600)",
)
parser.add_argument(
"--alias-format",
default="{name}",
help="Alias format template. Available: {name}, {id}, {email} (default: {name})",
)
parser.add_argument(
"--exclude",
help="Comma-separated list of account IDs to exclude",
)
parser.add_argument(
"--include",
help="Comma-separated list of account IDs to include (if set, only these are processed)",
)
parser.add_argument(
"--profile",
help="AWS CLI profile name (uses default credentials if not specified)",
)
parser.add_argument(
"--region",
help="AWS region (default: us-east-1, Organizations is global but needs a region)",
)
parser.add_argument(
"--dry-run",
action="store_true",
help="Print configuration to stdout without writing file",
)
args = parser.parse_args()
# Parse exclude/include lists
exclude_accounts = (
[acc.strip() for acc in args.exclude.split(",")] if args.exclude else []
)
include_accounts = (
[acc.strip() for acc in args.include.split(",")] if args.include else []
)
print("Fetching accounts from AWS Organizations...")
if args.profile:
print(f"Using AWS profile: {args.profile}")
# Get accounts from Organizations
accounts = get_org_accounts(profile=args.profile, region=args.region)
if not accounts:
print("No active accounts found in organization.")
return
print(f"Found {len(accounts)} active accounts in organization")
# Generate YAML configuration
providers = generate_yaml_config(
accounts=accounts,
role_name=args.role_name,
external_id=args.external_id,
session_name=args.session_name,
duration_seconds=args.duration_seconds,
alias_format=args.alias_format,
exclude_accounts=exclude_accounts,
include_accounts=include_accounts,
)
if not providers:
print("No providers generated after applying filters.")
return
print(f"Generated configuration for {len(providers)} accounts")
# Output YAML
yaml_content = yaml.dump(
providers, default_flow_style=False, sort_keys=False, allow_unicode=True
)
if args.dry_run:
print("\n--- Generated YAML Configuration ---\n")
print(yaml_content)
else:
with open(args.output, "w", encoding="utf-8") as f:
f.write(yaml_content)
print(f"\nConfiguration written to: {args.output}")
print("\nNext steps:")
print(f" 1. Review the generated file: cat {args.output} | head -n 20")
print(
f" 2. Run bulk provisioning: python prowler_bulk_provisioning.py {args.output}"
)
if __name__ == "__main__":
main()
@@ -0,0 +1,49 @@
# Example AWS Organizations Output
#
# This is an example of what aws_org_generator.py produces when run against
# an AWS Organization. This file can be directly used with prowler_bulk_provisioning.py
#
# Generated with:
# python aws_org_generator.py -o aws-org-accounts.yaml \
# --role-name ProwlerRole \
# --external-id prowler-ext-id-12345
- provider: aws
uid: '111111111111'
alias: Production-Account
auth_method: role
credentials:
role_arn: arn:aws:iam::111111111111:role/ProwlerRole
external_id: prowler-ext-id-12345
- provider: aws
uid: '222222222222'
alias: Development-Account
auth_method: role
credentials:
role_arn: arn:aws:iam::222222222222:role/ProwlerRole
external_id: prowler-ext-id-12345
- provider: aws
uid: '333333333333'
alias: Staging-Account
auth_method: role
credentials:
role_arn: arn:aws:iam::333333333333:role/ProwlerRole
external_id: prowler-ext-id-12345
- provider: aws
uid: '444444444444'
alias: Security-Account
auth_method: role
credentials:
role_arn: arn:aws:iam::444444444444:role/ProwlerRole
external_id: prowler-ext-id-12345
- provider: aws
uid: '555555555555'
alias: Logging-Account
auth_method: role
credentials:
role_arn: arn:aws:iam::555555555555:role/ProwlerRole
external_id: prowler-ext-id-12345
@@ -7,7 +7,7 @@ Use with extreme caution. There is no undo.
Environment:
PROWLER_API_BASE (default: https://api.prowler.com/api/v1)
PROWLER_API_TOKEN (required unless --token is provided)
PROWLER_API_KEY (required unless --api-key is provided)
Usage:
python nuke_providers.py --confirm
@@ -39,12 +39,14 @@ import requests
# ----------------------------- CLI / Utils --------------------------------- #
def env_or_arg(token_arg: Optional[str]) -> str:
"""Get API token from argument or environment variable."""
token = token_arg or os.getenv("PROWLER_API_TOKEN")
if not token:
sys.exit("Missing API token. Set --token or PROWLER_API_TOKEN.")
return token
def env_or_arg(api_key_arg: Optional[str]) -> str:
"""Get API key from argument or environment variable."""
api_key = api_key_arg or os.getenv("PROWLER_API_KEY")
if not api_key:
sys.exit(
"Missing API key. Set --api-key or PROWLER_API_KEY environment variable."
)
return api_key
def normalize_base_url(url: str) -> str:
@@ -63,14 +65,14 @@ class ApiClient:
"""HTTP client for Prowler API."""
base_url: str
token: str
api_key: str
verify_ssl: bool = True
timeout: int = 60
def _headers(self) -> Dict[str, str]:
"""Generate HTTP headers for API requests."""
return {
"Authorization": f"Bearer {self.token}",
"Authorization": f"Api-Key {self.api_key}",
"Content-Type": "application/vnd.api+json",
"Accept": "application/vnd.api+json",
}
@@ -266,7 +268,9 @@ def main():
help="API base URL (default: env PROWLER_API_BASE or Prowler Cloud SaaS)",
)
parser.add_argument(
"--token", default=None, help="Bearer token (default: PROWLER_API_TOKEN)"
"--api-key",
default=None,
help="Prowler API key (default: PROWLER_API_KEY env variable)",
)
parser.add_argument(
"--filter-provider",
@@ -307,12 +311,12 @@ def main():
args = parser.parse_args()
token = env_or_arg(args.token)
api_key = env_or_arg(args.api_key)
base_url = normalize_base_url(args.base_url)
client = ApiClient(
base_url=base_url,
token=token,
api_key=api_key,
verify_ssl=not args.insecure,
timeout=args.timeout,
)
@@ -132,12 +132,14 @@ def load_items(path: Path) -> List[Dict[str, Any]]:
sys.exit(f"Unsupported input file type: {ext}")
def env_or_arg(token_arg: Optional[str]) -> str:
"""Get API token from argument or environment variable."""
token = token_arg or os.getenv("PROWLER_API_TOKEN")
if not token:
sys.exit("Missing API token. Set --token or PROWLER_API_TOKEN.")
return token
def env_or_arg(api_key_arg: Optional[str]) -> str:
"""Get API key from argument or environment variable."""
api_key = api_key_arg or os.getenv("PROWLER_API_KEY")
if not api_key:
sys.exit(
"Missing API key. Set --api-key or PROWLER_API_KEY environment variable."
)
return api_key
def normalize_base_url(url: str) -> str:
@@ -395,14 +397,14 @@ class ApiClient:
"""HTTP client for Prowler API."""
base_url: str
token: str
api_key: str
verify_ssl: bool = True
timeout: int = 60
def _headers(self) -> Dict[str, str]:
"""Generate HTTP headers for API requests."""
return {
"Authorization": f"Bearer {self.token}",
"Authorization": f"Api-Key {self.api_key}",
"Content-Type": "application/vnd.api+json",
"Accept": "application/vnd.api+json",
}
@@ -564,7 +566,9 @@ def main():
help="API base URL (default: env PROWLER_API_BASE or Prowler Cloud SaaS).",
)
parser.add_argument(
"--token", default=None, help="Bearer token (default: PROWLER_API_TOKEN)."
"--api-key",
default=None,
help="Prowler API key (default: PROWLER_API_KEY env variable).",
)
parser.add_argument(
"--providers-endpoint",
@@ -600,7 +604,7 @@ def main():
)
args = parser.parse_args()
token = env_or_arg(args.token)
api_key = env_or_arg(args.api_key)
base_url = normalize_base_url(args.base_url)
items = load_items(Path(args.input_file))
@@ -610,7 +614,7 @@ def main():
client = ApiClient(
base_url=base_url,
token=token,
api_key=api_key,
verify_ssl=not args.insecure,
timeout=args.timeout,
)
@@ -0,0 +1,11 @@
# AWS Organizations Generator Dependencies
#
# This extends the base requirements.txt for the AWS Organizations generator
# Include base dependencies
-r requirements.txt
# AWS SDK for Python
boto3>=1.26.0
# Note: PyYAML is already included in requirements.txt