Merge branch 'master' of https://github.com/prowler-cloud/prowler into cloudflare-pr2-tls-email-checks

This commit is contained in:
HugoPBrito
2026-01-14 11:02:28 +01:00
117 changed files with 4775 additions and 731 deletions

View File

@@ -46,6 +46,7 @@ jobs:
api/docs/**
api/README.md
api/CHANGELOG.md
api/AGENTS.md
- name: Setup Python with Poetry
if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -74,6 +74,7 @@ jobs:
api/docs/**
api/README.md
api/CHANGELOG.md
api/AGENTS.md
- name: Set up Docker Buildx
if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -46,6 +46,7 @@ jobs:
api/docs/**
api/README.md
api/CHANGELOG.md
api/AGENTS.md
- name: Setup Python with Poetry
if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -86,6 +86,7 @@ jobs:
api/docs/**
api/README.md
api/CHANGELOG.md
api/AGENTS.md
- name: Setup Python with Poetry
if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -47,6 +47,7 @@ jobs:
ui/**
dashboard/**
mcp_server/**
skills/**
README.md
mkdocs.yml
.backportrc.json
@@ -55,6 +56,7 @@ jobs:
examples/**
.gitignore
contrib/**
**/AGENTS.md
- name: Install Poetry
if: steps.check-changes.outputs.any_changed == 'true'
@@ -83,7 +85,7 @@ jobs:
- name: Check format with black
if: steps.check-changes.outputs.any_changed == 'true'
run: poetry run black --exclude api ui skills --check .
run: poetry run black --exclude "api|ui|skills" --check .
- name: Lint with pylint
if: steps.check-changes.outputs.any_changed == 'true'
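The workflow change above reflects how black's `--exclude` flag works: it takes a single regular expression, not a list of directories, so `--exclude api ui skills` would be parsed as the regex `api` plus two extra positional path arguments. A single alternation pattern covers all three trees. A minimal Python sketch of the matching behavior (simplified; black actually matches against normalized relative paths):

```python
import re

# black's --exclude is one regex; "api|ui|skills" matches any of the three
# directory names anywhere in the path (simplified vs. black's real matching).
EXCLUDE = re.compile(r"api|ui|skills")

def is_excluded(path: str) -> bool:
    """Mimic a per-path exclusion test against the alternation regex."""
    return EXCLUDE.search(path) is not None

for path in ["api/main.py", "ui/app.tsx", "skills/setup.sh", "prowler/lib/check.py"]:
    print(path, is_excluded(path))
```

With the old space-separated form, only `api` would have been treated as the exclusion regex, so `ui/` and `skills/` would still be formatted (or treated as targets).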

View File

@@ -78,6 +78,7 @@ jobs:
ui/**
dashboard/**
mcp_server/**
skills/**
README.md
mkdocs.yml
.backportrc.json
@@ -86,6 +87,7 @@ jobs:
examples/**
.gitignore
contrib/**
**/AGENTS.md
- name: Set up Docker Buildx
if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -42,6 +42,7 @@ jobs:
ui/**
dashboard/**
mcp_server/**
skills/**
README.md
mkdocs.yml
.backportrc.json
@@ -50,6 +51,7 @@ jobs:
examples/**
.gitignore
contrib/**
**/AGENTS.md
- name: Install Poetry
if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -47,6 +47,7 @@ jobs:
ui/**
dashboard/**
mcp_server/**
skills/**
README.md
mkdocs.yml
.backportrc.json
@@ -55,6 +56,7 @@ jobs:
examples/**
.gitignore
contrib/**
**/AGENTS.md
- name: Install Poetry
if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -73,6 +73,7 @@ jobs:
files_ignore: |
ui/CHANGELOG.md
ui/README.md
ui/AGENTS.md
- name: Set up Docker Buildx
if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -42,6 +42,7 @@ jobs:
files_ignore: |
ui/CHANGELOG.md
ui/README.md
ui/AGENTS.md
- name: Setup Node.js ${{ env.NODE_VERSION }}
if: steps.check-changes.outputs.any_changed == 'true'

.gitignore vendored
View File

@@ -150,8 +150,10 @@ node_modules
# Persistent data
_data/
# Claude
# AI Instructions (generated by skills/setup.sh from AGENTS.md)
CLAUDE.md
GEMINI.md
.github/copilot-instructions.md
# Compliance report
*.pdf

View File

@@ -36,11 +36,63 @@ Use these skills for detailed patterns on-demand:
| `prowler-test-api` | API testing (pytest-django + RLS) | [SKILL.md](skills/prowler-test-api/SKILL.md) |
| `prowler-test-ui` | E2E testing (Playwright) | [SKILL.md](skills/prowler-test-ui/SKILL.md) |
| `prowler-compliance` | Compliance framework structure | [SKILL.md](skills/prowler-compliance/SKILL.md) |
| `prowler-compliance-review` | Review compliance framework PRs | [SKILL.md](skills/prowler-compliance-review/SKILL.md) |
| `prowler-provider` | Add new cloud providers | [SKILL.md](skills/prowler-provider/SKILL.md) |
| `prowler-ci` | CI checks and PR gates (GitHub Actions) | [SKILL.md](skills/prowler-ci/SKILL.md) |
| `prowler-pr` | Pull request conventions | [SKILL.md](skills/prowler-pr/SKILL.md) |
| `prowler-docs` | Documentation style guide | [SKILL.md](skills/prowler-docs/SKILL.md) |
| `skill-creator` | Create new AI agent skills | [SKILL.md](skills/skill-creator/SKILL.md) |
### Auto-invoke Skills
When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Action | Skill |
|--------|-------|
| Adding new providers | `prowler-provider` |
| Adding services to existing providers | `prowler-provider` |
| After creating/modifying a skill | `skill-sync` |
| App Router / Server Actions | `nextjs-15` |
| Building AI chat features | `ai-sdk-5` |
| Create a PR with gh pr create | `prowler-pr` |
| Creating Zod schemas | `zod-4` |
| Creating new checks | `prowler-sdk-check` |
| Creating new skills | `skill-creator` |
| Creating/modifying Prowler UI components | `prowler-ui` |
| Creating/modifying models, views, serializers | `prowler-api` |
| Creating/updating compliance frameworks | `prowler-compliance` |
| Debug why a GitHub Actions job is failing | `prowler-ci` |
| Fill .github/pull_request_template.md (Context/Description/Steps to review/Checklist) | `prowler-pr` |
| General Prowler development questions | `prowler` |
| Generic DRF patterns | `django-drf` |
| Inspect PR CI checks and gates (.github/workflows/*) | `prowler-ci` |
| Inspect PR CI workflows (.github/workflows/*): conventional-commit, pr-check-changelog, pr-conflict-checker, labeler | `prowler-pr` |
| Mapping checks to compliance controls | `prowler-compliance` |
| Mocking AWS with moto in tests | `prowler-test-sdk` |
| Regenerate AGENTS.md Auto-invoke tables (sync.sh) | `skill-sync` |
| Review PR requirements: template, title conventions, changelog gate | `prowler-pr` |
| Reviewing compliance framework PRs | `prowler-compliance-review` |
| Testing RLS tenant isolation | `prowler-test-api` |
| Troubleshoot why a skill is missing from AGENTS.md auto-invoke | `skill-sync` |
| Understand CODEOWNERS/labeler-based automation | `prowler-ci` |
| Understand PR title conventional-commit validation | `prowler-ci` |
| Understand changelog gate and no-changelog label behavior | `prowler-ci` |
| Understand review ownership with CODEOWNERS | `prowler-pr` |
| Updating existing checks and metadata | `prowler-sdk-check` |
| Using Zustand stores | `zustand-5` |
| Working on MCP server tools | `prowler-mcp` |
| Working on Prowler UI structure (actions/adapters/types/hooks) | `prowler-ui` |
| Working with Prowler UI test helpers/pages | `prowler-test-ui` |
| Working with Tailwind classes | `tailwind-4` |
| Writing Playwright E2E tests | `playwright` |
| Writing Prowler API tests | `prowler-test-api` |
| Writing Prowler SDK tests | `prowler-test-sdk` |
| Writing Prowler UI E2E tests | `prowler-test-ui` |
| Writing Python tests with pytest | `pytest` |
| Writing React components | `react-19` |
| Writing TypeScript types/interfaces | `typescript` |
| Writing documentation | `prowler-docs` |
---
## Project Overview

View File

@@ -6,6 +6,20 @@
> - [`django-drf`](../skills/django-drf/SKILL.md) - Generic DRF patterns
> - [`pytest`](../skills/pytest/SKILL.md) - Generic pytest patterns
### Auto-invoke Skills
When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Action | Skill |
|--------|-------|
| Creating/modifying models, views, serializers | `prowler-api` |
| Generic DRF patterns | `django-drf` |
| Testing RLS tenant isolation | `prowler-test-api` |
| Writing Prowler API tests | `prowler-test-api` |
| Writing Python tests with pytest | `pytest` |
---
## CRITICAL RULES - NON-NEGOTIABLE
### Models

View File

@@ -128,8 +128,10 @@ flowchart TB
P5["prowler-mcp"]
P6["prowler-provider"]
P7["prowler-compliance"]
P8["prowler-docs"]
P9["prowler-pr"]
P8["prowler-compliance-review"]
P9["prowler-docs"]
P10["prowler-pr"]
P11["prowler-ci"]
end
subgraph TESTING["Testing Skills"]
@@ -140,6 +142,7 @@ flowchart TB
subgraph META["Meta Skills"]
M1["skill-creator"]
M2["skill-sync"]
end
end
@@ -189,9 +192,9 @@ flowchart TB
| Type | Skills |
|------|--------|
| **Generic** | typescript, react-19, nextjs-15, tailwind-4, pytest, playwright, django-drf, zod-4, zustand-5, ai-sdk-5 |
| **Prowler** | prowler, prowler-sdk-check, prowler-api, prowler-ui, prowler-mcp, prowler-provider, prowler-compliance, prowler-docs, prowler-pr |
| **Prowler** | prowler, prowler-sdk-check, prowler-api, prowler-ui, prowler-mcp, prowler-provider, prowler-compliance, prowler-compliance-review, prowler-docs, prowler-pr, prowler-ci |
| **Testing** | prowler-test-sdk, prowler-test-api, prowler-test-ui |
| **Meta** | skill-creator |
| **Meta** | skill-creator, skill-sync |
## Skill Structure

View File

@@ -7,6 +7,25 @@
> - [`prowler-compliance`](../skills/prowler-compliance/SKILL.md) - Compliance framework structure
> - [`pytest`](../skills/pytest/SKILL.md) - Generic pytest patterns
### Auto-invoke Skills
When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Action | Skill |
|--------|-------|
| Adding new providers | `prowler-provider` |
| Adding services to existing providers | `prowler-provider` |
| Creating new checks | `prowler-sdk-check` |
| Creating/updating compliance frameworks | `prowler-compliance` |
| Mapping checks to compliance controls | `prowler-compliance` |
| Mocking AWS with moto in tests | `prowler-test-sdk` |
| Reviewing compliance framework PRs | `prowler-compliance-review` |
| Updating existing checks and metadata | `prowler-sdk-check` |
| Writing Prowler SDK tests | `prowler-test-sdk` |
| Writing Python tests with pytest | `pytest` |
---
## Project Overview
The Prowler SDK is the core Python engine powering cloud security assessments across AWS, Azure, GCP, Kubernetes, GitHub, M365, and more. It includes 1000+ security checks and 30+ compliance frameworks.

View File

@@ -49,6 +49,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
- Update AWS RDS service metadata to new format [(#9551)](https://github.com/prowler-cloud/prowler/pull/9551)
- Update AWS Bedrock service metadata to new format [(#8827)](https://github.com/prowler-cloud/prowler/pull/8827)
- Update AWS IAM service metadata to new format [(#9550)](https://github.com/prowler-cloud/prowler/pull/9550)
- Update AWS Cognito service metadata to new format [(#8853)](https://github.com/prowler-cloud/prowler/pull/8853)
---

View File

@@ -10,7 +10,7 @@ Mutelist:
Accounts:
"example-account-id":
Checks:
"zones_dnssec_enabled":
"zone_dnssec_enabled":
Regions:
- "*"
Resources:
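The mutelist fix above matters because check IDs are used as exact dictionary keys: a typo like `zones_dnssec_enabled` silently mutes nothing. An illustrative sketch of the nested layout shown (Accounts → Checks → Regions/Resources); this is not Prowler's actual matching code, just a minimal lookup demonstrating the exact-key requirement:

```python
from fnmatch import fnmatch

# Hypothetical mutelist mirroring the YAML structure above.
mutelist = {
    "Accounts": {
        "example-account-id": {
            "Checks": {
                "zone_dnssec_enabled": {  # must match the real check ID exactly
                    "Regions": ["*"],
                    "Resources": ["*"],
                }
            }
        }
    }
}

def is_muted(account: str, check_id: str, region: str, resource: str) -> bool:
    checks = mutelist["Accounts"].get(account, {}).get("Checks", {})
    entry = checks.get(check_id)  # exact key lookup; a typo matches nothing
    if entry is None:
        return False
    return any(fnmatch(region, pat) for pat in entry["Regions"]) and any(
        fnmatch(resource, pat) for pat in entry["Resources"]
    )

print(is_muted("example-account-id", "zone_dnssec_enabled", "us-east-1", "my-zone"))   # True
print(is_muted("example-account-id", "zones_dnssec_enabled", "us-east-1", "my-zone"))  # False
```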

View File

@@ -1,30 +1,39 @@
{
"Provider": "aws",
"CheckID": "cognito_identity_pool_guest_access_disabled",
"CheckTitle": "Ensure Cognito Identity Pool has guest access disabled",
"CheckType": [],
"CheckTitle": "Cognito identity pool has guest access disabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"TTPs/Initial Access"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:identitypool/identitypool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "Guest access allows unauthenticated users to access your identity pool. This is useful for public websites that allow users to sign in with a social identity provider, but it can also be a security risk. If you don't need guest access, you should disable it.",
"Risk": "If guest access is enabled, unauthenticated users can access your identity pool. This can be a security risk if you don't need guest access.",
"RelatedUrl": "https://docs.aws.amazon.com/location/latest/developerguide/authenticating-using-cognito.html",
"Description": "**Amazon Cognito identity pools** are evaluated for **guest access** to unauthenticated identities. The assessment considers the `allow_unauthenticated_identities` setting and whether an unauthenticated role can be assumed by guests.",
"Risk": "With **guest access**, unauthenticated users receive temporary credentials, reducing **confidentiality** and **integrity** controls. Overly permissive unauthenticated roles enable data reads/writes, API abuse, and resource consumption, risking **data exposure**, unauthorized changes, and **cost amplification**.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/location/latest/developerguide/authenticating-using-cognito.html",
"https://support.icompaas.com/support/solutions/articles/62000233674-ensure-cognito-identity-pool-has-guest-access-disabled"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cognito-identity update-identity-pool --identity-pool-id <example_resource_id> --identity-pool-name <example_resource_name> --no-allow-unauthenticated-identities",
"NativeIaC": "```yaml\n# CloudFormation: Disable guest (unauthenticated) access in an Identity Pool\nResources:\n <example_resource_name>:\n Type: AWS::Cognito::IdentityPool\n Properties:\n AllowUnauthenticatedIdentities: false # Critical: disables unauthenticated (guest) identities\n```",
"Other": "1. Open the Amazon Cognito console and go to Identity pools\n2. Select the identity pool <example_resource_name>\n3. Click Edit (or Settings) for Authentication settings\n4. Turn off/clear \"Enable access to unauthenticated identities\"\n5. Save changes",
"Terraform": "```hcl\n# Disable guest (unauthenticated) access in an Identity Pool\nresource \"aws_cognito_identity_pool\" \"<example_resource_name>\" {\n identity_pool_name = \"<example_resource_name>\"\n allow_unauthenticated_identities = false # Critical: disables guest access\n}\n```"
},
"Recommendation": {
"Text": "Gues access should be disabled for Cognito Identity Pool. To disable guest access, follow the steps in the Amazon Cognito documentation.",
"Url": "https://docs.aws.amazon.com/location/latest/developerguide/authenticating-using-cognito.html"
"Text": "Disable guest access by setting `allow_unauthenticated_identities` to `false` unless strictly required.\n\nIf needed:\n- Enforce **least privilege** with tight resource scopes and conditions\n- Shorten session lifetimes and rate-limit usage\n- Prefer authenticated flows (user pools or federated IdPs)\n- Monitor access for **defense in depth**",
"Url": "https://hub.prowler.com/check/cognito_identity_pool_guest_access_disabled"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
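The metadata rewrite above fills in fields (`CheckType`, `Categories`, `AdditionalURLs`, remediation code) that were previously empty. A small sanity-check sketch for that shape — field names are taken from the diff itself, not from an official Prowler schema:

```python
import json

# Fields visible in the enriched metadata above (an assumption drawn from the
# diff, not an authoritative schema).
REQUIRED = ["Provider", "CheckID", "CheckTitle", "CheckType", "Severity",
            "Description", "Risk", "AdditionalURLs", "Remediation", "Categories"]

def validate_metadata(raw: str) -> list:
    """Return a list of problems; an empty list means the metadata looks complete."""
    meta = json.loads(raw)
    problems = [f"missing field: {f}" for f in REQUIRED if f not in meta]
    if not meta.get("CheckType"):
        problems.append("CheckType should not be empty")
    if not meta.get("Categories"):
        problems.append("Categories should not be empty")
    return problems

sample = json.dumps({
    "Provider": "aws",
    "CheckID": "cognito_identity_pool_guest_access_disabled",
    "CheckTitle": "Cognito identity pool has guest access disabled",
    "CheckType": ["TTPs/Initial Access"],
    "Severity": "medium",
    "Description": "...",
    "Risk": "...",
    "AdditionalURLs": ["https://docs.aws.amazon.com/"],
    "Remediation": {"Code": {}, "Recommendation": {}},
    "Categories": ["identity-access"],
})
print(validate_metadata(sample))  # []
```

Before this commit, `CheckType: []` and `Categories: []` would both be flagged by a check like this.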

View File

@@ -1,30 +1,40 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_advanced_security_enabled",
"CheckTitle": "Ensure cognito user pools has advanced security enabled with full-function",
"CheckType": [],
"CheckTitle": "Cognito user pool has advanced security enforced with full-function mode",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPool",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "Advanced security features for Amazon Cognito User Pools provide additional security for your user pool. These features include compromised credentials protection, phone number verification, and account takeover protection.",
"Risk": "If advanced security features are not enabled, your user pool is more vulnerable to unauthorized access.",
"RelatedUrl": "https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-advanced-security.html",
"Description": "**Amazon Cognito user pools** are evaluated for **Threat protection (advanced security)** mode: `ENFORCED` (full-function) vs `AUDIT` or disabled. This indicates whether adaptive risk responses and compromised-credential checks are applied during authentication.",
"Risk": "Without enforced threat protection, risky sign-ins aren't blocked, only logged, enabling credential stuffing, brute force, and account takeover. This threatens confidentiality and integrity via unauthorized access and token misuse, and can degrade availability through automated abuse.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-advanced-security.html",
"https://support.icompaas.com/support/solutions/articles/62000233667-ensure-cognito-user-pools-has-advanced-security-enabled-with-full-function"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cognito-idp update-user-pool --user-pool-id <example_resource_id> --user-pool-add-ons AdvancedSecurityMode=ENFORCED",
"NativeIaC": "```yaml\nResources:\n <example_resource_name>:\n Type: AWS::Cognito::UserPool\n Properties:\n # Critical: Enables full-function threat protection (advanced security)\n UserPoolAddOns:\n AdvancedSecurityMode: ENFORCED # Sets advanced security to ENFORCED\n```",
"Other": "1. In the AWS Console, go to Cognito > User pools and select your pool\n2. Open Threat protection\n3. Click Activate (enable Plus feature plan if prompted)\n4. Set Enforcement mode to Full function (ENFORCED)\n5. Click Save changes",
"Terraform": "```hcl\nresource \"aws_cognito_user_pool\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n\n # Critical: Enables full-function threat protection (advanced security)\n user_pool_add_ons {\n advanced_security_mode = \"ENFORCED\" # Set to ENFORCED to pass the check\n }\n}\n```"
},
"Recommendation": {
"Text": "To enable advanced security features for an Amazon Cognito User Pool, follow the instructions in the Amazon Cognito documentation.",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-advanced-security.html"
"Text": "Set Threat protection to `ENFORCED` to apply automatic mitigations.\n- Require step-up **MFA** on risky events\n- Block compromised credentials\n- Use IP allow/deny lists and export logs for monitoring\n*Baseline in* `AUDIT`, then enforce. Apply **defense in depth** and **least privilege** across apps and clients.",
"Url": "https://hub.prowler.com/check/cognito_user_pool_advanced_security_enabled"
}
},
"Categories": [],
"Categories": [
"identity-access",
"threat-detection"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""

View File

@@ -1,30 +1,41 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_blocks_compromised_credentials_sign_in_attempts",
"CheckTitle": "Ensure that advanced security features are enabled for Amazon Cognito User Pools to block sign-in by users with suspected compromised credentials",
"CheckType": [],
"CheckTitle": "Cognito user pool blocks sign-in attempts with suspected compromised credentials",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices/Runtime Behavior Analysis",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"TTPs/Initial Access"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPool",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "Amazon Cognito User Pools can be configured to block sign-in by users with suspected compromised credentials. This feature uses Amazon Cognito advanced security features to detect anomalous sign-in attempts and block them. When enabled, Amazon Cognito User Pools will block sign-in by users with suspected compromised credentials. This helps protect your users from unauthorized access to their accounts.",
"Risk": "If advanced security features are not enabled for an Amazon Cognito User Pool, users with compromised credentials may be able to sign in to their accounts. This could lead to unauthorized access to user data and other resources.",
"RelatedUrl": "https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-advanced-security.html",
"Description": "Amazon Cognito user pool threat protection **blocks sign-ins** when **compromised credentials** are detected. Advanced security is `ENFORCED`, and the compromised-credentials policy applies a `BLOCK` action to sign-in events.",
"Risk": "Allowing sign-in with leaked or reused passwords enables **account takeover**, exposing tokens and profile data (**confidentiality**), permitting unauthorized changes (**integrity**), and enabling abuse of linked APIs and sessions (**availability** impacts via misuse or lockout).",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-advanced-security.html",
"https://support.icompaas.com/support/solutions/articles/62000233676-ensure-that-your-amazon-cognito-user-pool-blocks-potential-malicious-sign-in-attempts"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"NativeIaC": "```yaml\n# Enable threat protection and block compromised credentials on sign-in\nResources:\n UserPool:\n Type: AWS::Cognito::UserPool\n Properties:\n UserPoolName: <example_resource_name>\n UserPoolAddOns:\n AdvancedSecurityMode: ENFORCED # Critical: enables full threat protection required for blocking actions\n\n RiskConfig:\n Type: AWS::Cognito::UserPoolRiskConfigurationAttachment\n Properties:\n UserPoolId: !Ref UserPool\n CompromisedCredentialsRiskConfiguration:\n Actions:\n EventAction: BLOCK # Critical: block sign-in with suspected compromised credentials\n EventFilter:\n - SIGN_IN # Critical: apply the block action to sign-in events\n```",
"Other": "1. In the AWS Console, go to Amazon Cognito > User pools and select <example_resource_name>\n2. Open Threat protection and click Activate (if not already active)\n3. Set Enforcement mode to Full function (this sets Advanced security to ENFORCED)\n4. Under Compromised credentials, ensure Event detection includes Sign-in and set Action to Block sign-in\n5. Click Save changes",
"Terraform": "```hcl\n# Enable threat protection and block compromised credentials on sign-in\nresource \"aws_cognito_user_pool\" \"example\" {\n name = \"<example_resource_name>\"\n user_pool_add_ons {\n advanced_security_mode = \"ENFORCED\" # Critical: enables full threat protection required for blocking actions\n }\n}\n\nresource \"aws_cognito_risk_configuration\" \"example\" {\n user_pool_id = aws_cognito_user_pool.example.id\n compromised_credentials_risk_configuration {\n actions {\n event_action = \"BLOCK\" # Critical: block sign-in with suspected compromised credentials\n }\n event_filter = [\"SIGN_IN\"] # Critical: apply the block action to sign-in events\n }\n}\n```"
},
"Recommendation": {
"Text": "To enable advanced security features for an Amazon Cognito User Pool, follow the steps below:",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-advanced-security.html"
"Text": "Enable threat protection with advanced security `ENFORCED` and set compromised-credential responses to `BLOCK` for sign-ins. Combine with **adaptive authentication** and **MFA** for higher assurance, monitor risk logs, and enforce strong password policies to prevent reuse, applying **defense in depth**.",
"Url": "https://hub.prowler.com/check/cognito_user_pool_blocks_compromised_credentials_sign_in_attempts"
}
},
"Categories": [],
"Categories": [
"identity-access",
"threat-detection"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""

View File

@@ -1,30 +1,41 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_blocks_potential_malicious_sign_in_attempts",
"CheckTitle": "Ensure that your Amazon Cognito user pool blocks potential malicious sign-in attempts",
"CheckType": [],
"CheckTitle": "Amazon Cognito user pool blocks all potential malicious sign-in attempts",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"TTPs/Initial Access"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPool",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "Amazon Cognito provides adaptive authentication, which helps protect your applications from malicious actors and compromised credentials by evaluating the risk associated with each user login and providing the appropriate level of security to mitigate that risk. Adaptive authentication is a feature of advanced security that you can enable for your user pool. When adaptive authentication is enabled, Amazon Cognito evaluates the risk associated with each user login and provides the appropriate level of security to mitigate that risk. You can configure adaptive authentication to block sign-in attempts that are likely to be malicious.",
"Risk": "If adaptive authentication with automatic risk response as block sign-in is not enabled, your user pool may not be able to block sign-in attempts that are likely to be malicious.",
"RelatedUrl": "https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-advanced-security.html",
"Description": "**Amazon Cognito user pool** with **threat protection** in `ENFORCED` mode and **adaptive authentication** actions set to `BLOCK` for `low`, `medium`, and `high` account-takeover risk levels.\n\nEvaluates the user pool's risk configuration to confirm risky sign-in attempts are blocked across all severities.",
"Risk": "Permitting risky sign-ins degrades **confidentiality** and **integrity**. Attackers with **stolen or guessed credentials** can achieve **account takeover**, access data, change credentials, and escalate privileges, enabling lateral movement and persistence.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-advanced-security.html",
"https://support.icompaas.com/support/solutions/articles/62000233676-ensure-that-your-amazon-cognito-user-pool-blocks-potential-malicious-sign-in-attempts"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"NativeIaC": "```yaml\n# CloudFormation: Enforce threat protection and block all risk levels\nResources:\n UserPool:\n Type: AWS::Cognito::UserPool\n Properties:\n UserPoolAddOns:\n AdvancedSecurityMode: ENFORCED # Critical: Enables Full function threat protection (required for PASS)\n\n RiskConfig:\n Type: AWS::Cognito::UserPoolRiskConfigurationAttachment\n Properties:\n UserPoolId: !Ref UserPool\n AccountTakeoverRiskConfiguration:\n Actions:\n LowAction:\n EventAction: BLOCK # Critical: Block low-risk sign-ins\n Notify: false\n MediumAction:\n EventAction: BLOCK # Critical: Block medium-risk sign-ins\n Notify: false\n HighAction:\n EventAction: BLOCK # Critical: Block high-risk sign-ins\n Notify: false\n```",
"Other": "1. In the AWS Console, go to Cognito > User pools and select <example_resource_name>\n2. Open Threat protection\n3. Set Enforcement mode to Full function and Save (enables Advanced security)\n4. In Account takeover risk configuration, set Low, Medium, and High to Block sign-in\n5. Save changes",
"Terraform": "```hcl\n# Enforce threat protection and block all risk levels\nresource \"aws_cognito_user_pool\" \"<example_resource_name>\" {\n user_pool_add_ons {\n advanced_security_mode = \"ENFORCED\" # Critical: Enables Full function threat protection (required for PASS)\n }\n}\n\nresource \"aws_cognito_risk_configuration\" \"<example_resource_name>\" {\n user_pool_id = aws_cognito_user_pool.<example_resource_name>.id\n\n account_takeover_risk_configuration {\n actions {\n low_action {\n event_action = \"BLOCK\" # Critical: Block low-risk sign-ins\n notify = false\n }\n medium_action {\n event_action = \"BLOCK\" # Critical: Block medium-risk sign-ins\n notify = false\n }\n high_action {\n event_action = \"BLOCK\" # Critical: Block high-risk sign-ins\n notify = false\n }\n }\n }\n}\n```"
},
"Recommendation": {
"Text": "To enable adaptive authentication with automatic risk response as block sign-in, perform the following actions:",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-advanced-security.html"
"Text": "Enable **threat protection** in `ENFORCED` mode and configure **adaptive authentication** to `BLOCK` at all risk levels.\n\nApply **least privilege** and **defense in depth**: require MFA, avoid broad Always-allow IPs, and monitor user event logs to tune responses and exceptions.",
"Url": "https://hub.prowler.com/check/cognito_user_pool_blocks_potential_malicious_sign_in_attempts"
}
},
"Categories": [],
"Categories": [
"threat-detection",
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
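The description above states the pass condition: advanced security `ENFORCED` and adaptive-authentication actions set to `BLOCK` for all three account-takeover risk levels. A sketch of that logic — the dict shape mirrors the CloudFormation snippet in the remediation, while Prowler's real implementation reads this configuration from the Cognito APIs:

```python
# Hypothetical evaluator for the pass condition described in the metadata above.
def check_blocks_all_risk_levels(advanced_security_mode: str, actions: dict) -> str:
    if advanced_security_mode != "ENFORCED":
        return "FAIL"  # AUDIT or disabled only logs risky sign-ins
    levels = ("LowAction", "MediumAction", "HighAction")
    if all(actions.get(level, {}).get("EventAction") == "BLOCK" for level in levels):
        return "PASS"
    return "FAIL"  # at least one risk level is not blocked

actions = {
    "LowAction": {"EventAction": "BLOCK"},
    "MediumAction": {"EventAction": "BLOCK"},
    "HighAction": {"EventAction": "BLOCK"},
}
print(check_blocks_all_risk_levels("ENFORCED", actions))  # PASS
print(check_blocks_all_risk_levels("AUDIT", actions))     # FAIL
```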

View File

@@ -1,30 +1,43 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_client_prevent_user_existence_errors",
"CheckTitle": "Amazon Cognito User Pool should prevent user existence errors",
"CheckType": [],
"CheckTitle": "Amazon Cognito user pool client has Prevent User Existence Errors enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"TTPs/Discovery",
"Effects/Data Exposure"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPoolClient",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "Amazon Cognito User Pool should be configured to prevent user existence errors. This setting prevents user existence errors by requiring the user to enter a username and password to sign in. If the user does not exist, the user will receive an error message.",
"Risk": "Revealing user existence errors can be a security risk as it can allow an attacker to determine if a user exists in the system. This can be used to perform user enumeration attacks.",
"RelatedUrl": "https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-managing-errors.html",
"Description": "Amazon Cognito app clients use `PreventUserExistenceErrors` to suppress **user-existence disclosures**, keeping authentication, confirmation, and recovery responses generic rather than indicating whether a username exists.",
"Risk": "If responses reveal user existence, adversaries can **enumerate accounts**, enabling targeted **credential stuffing**, **brute force**, and **password-reset abuse**. This facilitates **account takeover**, leaks PII, and can degrade availability through automated lockouts.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://repost.aws/knowledge-center/cognito-prevent-user-existence-errors",
"https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-managing-errors.html",
"https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-client-apps.html",
"https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-resource-cognito-userpoolclient.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cognito-idp update-user-pool-client --user-pool-id <USER_POOL_ID> --client-id <APP_CLIENT_ID> --prevent-user-existence-errors ENABLED",
"NativeIaC": "```yaml\nResources:\n <example_resource_name>:\n Type: AWS::Cognito::UserPoolClient\n Properties:\n UserPoolId: <example_resource_id>\n PreventUserExistenceErrors: ENABLED # Critical: enables suppression of user existence errors to pass the check\n ClientName: <example_resource_name>\n```",
"Other": "1. Open the Amazon Cognito console and go to User pools\n2. Select your user pool, then go to App integration > App clients\n3. Choose the target app client and click Edit\n4. Set Prevent user existence errors to Enabled\n5. Click Save changes",
"Terraform": "```hcl\nresource \"aws_cognito_user_pool_client\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n user_pool_id = \"<example_resource_id>\"\n\n prevent_user_existence_errors = \"ENABLED\" # Critical: prevents revealing if a user exists\n}\n```"
},
"Recommendation": {
"Text": "To prevent user existence errors, you should configure the Amazon Cognito User Pool to require a username and password to sign in. If the user does not exist, the user will receive an error message.",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-managing-errors.html"
"Text": "Enable **user-existence suppression** on all app clients (`PreventUserExistenceErrors=ENABLED`). Apply **least disclosure** with generic messages across all auth flows and aliases. Strengthen with **MFA**, **rate limiting**, and **anomalous login detection** for **defense in depth**.",
"Url": "https://hub.prowler.com/check/cognito_user_pool_client_prevent_user_existence_errors"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""


@@ -1,30 +1,40 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_client_token_revocation_enabled",
"CheckTitle": "Ensure that token revocation is enabled for Amazon Cognito User Pools",
"CheckType": [],
"CheckTitle": "Amazon Cognito user pool client has token revocation enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"TTPs/Persistence"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPoolClient",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "Token revocation is a security feature that allows you to revoke tokens and end sessions for users. When you enable token revocation, Amazon Cognito automatically revokes tokens for users who sign out or are deleted. This helps protect your users' data and prevent unauthorized access to your resources.",
"Risk": "If token revocation is not enabled, users' tokens will not be revoked when they sign out or are deleted. This can lead to unauthorized access to your resources.",
"RelatedUrl": "https://docs.aws.amazon.com/cognito/latest/developerguide/token-revocation.html",
"Description": "**Amazon Cognito user pool app clients** are evaluated for **token revocation** being enabled via `EnableTokenRevocation`.\n\nThis identifies whether each client can invalidate refresh tokens and the access/ID tokens derived from them to end user sessions.",
"Risk": "Without **token revocation**, stolen or residual refresh tokens remain valid until expiry, enabling continued access after sign-out or account disablement. This undermines **confidentiality** and **integrity** by permitting unauthorized API calls, data exfiltration, and session hijacking.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://repost.aws/knowledge-center/cognito-revoke-refresh-tokens",
"https://docs.aws.amazon.com/cognito/latest/developerguide/token-revocation.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cognito-idp update-user-pool-client --user-pool-id <USER_POOL_ID> --client-id <USER_POOL_CLIENT_ID> --enable-token-revocation",
"NativeIaC": "```yaml\nResources:\n <example_resource_name>:\n Type: AWS::Cognito::UserPoolClient\n Properties:\n UserPoolId: \"<example_resource_id>\"\n EnableTokenRevocation: true # Critical: Enables token revocation so the client passes the check\n```",
"Other": "1. In the AWS Console, go to Amazon Cognito > User pools\n2. Select your user pool, then open App integration > App clients\n3. Click the target app client and choose Edit\n4. Under Advanced configuration, enable Token revocation\n5. Click Save changes",
"Terraform": "```hcl\nresource \"aws_cognito_user_pool_client\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n user_pool_id = \"<example_resource_id>\"\n enable_token_revocation = true # Critical: Enables token revocation so the client passes the check\n}\n```"
},
"Recommendation": {
"Text": "To enable token revocation for an Amazon Cognito User Pool, use the Amazon Cognito console or the AWS CLI. For more information, see the Amazon Cognito documentation.",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/token-revocation.html"
"Text": "Enable `EnableTokenRevocation: true` on all app clients.\n\nAlso:\n- Use refresh token rotation\n- Shorten token lifetimes\n- Apply least privilege to scopes\n- Enforce user/admin sign-out to terminate sessions\n- Monitor for anomalous token reuse",
"Url": "https://hub.prowler.com/check/cognito_user_pool_client_token_revocation_enabled"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""


@@ -1,30 +1,41 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_deletion_protection_enabled",
"CheckTitle": "Ensure cognito user pools deletion protection enabled to prevent accidental deletion",
"CheckType": [],
"CheckTitle": "Cognito user pool has deletion protection enabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Effects/Data Destruction"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPool",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "Deletion protection is a feature that allows you to lock a user pool to prevent it from being deleted. When deletion protection is enabled, you cannot delete the user pool. By default, deletion protection is disabled",
"Risk": "If deletion protection is not enabled, the user pool can be deleted by any user with the necessary permissions. This can lead to loss of data and service disruption",
"RelatedUrl": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-deletion-protection.html",
"Description": "**Amazon Cognito user pools** have **deletion protection** set to `ACTIVE`. The evaluation inspects each user pool's deletion protection status.",
"Risk": "Without **deletion protection**, any principal with delete rights can remove a user pool in one action, causing immediate **authentication outages**. Identities and configurations are lost, breaking sign-ins and tokens, harming **availability** and **integrity**, and prolonging recovery if exports/backups are stale.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-deletion-protection.html",
"https://repost.aws/questions/QUDX0aXegdThit0uD5kB_Fjw/cognito-user-pool-cannot-be-deleted-from-aws-console",
"https://support.icompaas.com/support/solutions/articles/62000233677-ensure-cognito-user-pools-deletion-protection-enabled-to-prevent-accidental-deletion"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cognito-idp update-user-pool --user-pool-id <example_resource_id> --deletion-protection ACTIVE",
"NativeIaC": "```yaml\nResources:\n <example_resource_name>:\n Type: AWS::Cognito::UserPool\n Properties:\n DeletionProtection: ACTIVE # Critical: Enables deletion protection to prevent accidental pool deletion\n```",
"Other": "1. Open the AWS Management Console and go to Amazon Cognito\n2. Click User pools and select your pool\n3. Go to Settings > Deletion protection\n4. Click Activate (or toggle On) and Save",
"Terraform": "```hcl\nresource \"aws_cognito_user_pool\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n deletion_protection = \"ACTIVE\" # Critical: Enables deletion protection to prevent accidental pool deletion\n}\n```"
},
"Recommendation": {
"Text": "Deletion protection should be enabled for the user pool to prevent accidental deletion",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-deletion-protection.html"
"Text": "Enable **deletion protection** (`ACTIVE`) on all production user pools.\n- Enforce **least privilege** by restricting delete permissions\n- Require **change control** and multi-party approval to deactivate protection\n- Add **monitoring and alerts** for status changes as **defense in depth**",
"Url": "https://hub.prowler.com/check/cognito_user_pool_deletion_protection_enabled"
}
},
"Categories": [],
"Categories": [
"resilience"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""


@@ -1,30 +1,39 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_mfa_enabled",
"CheckTitle": "Ensure Multi-Factor Authentication (MFA) is enabled for Amazon Cognito User Pools",
"CheckType": [],
"CheckTitle": "Amazon Cognito user pool requires Multi-Factor Authentication (MFA)",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"TTPs/Initial Access/Unauthorized Access"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPool",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "Checks whether Multi-Factor Authentication (MFA) is enabled for Amazon Cognito User Pools.",
"Risk": "If MFA is not enabled, unauthorized users could gain access to the user pool and potentially compromise the security of the application.",
"RelatedUrl": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-mfa.html",
"Description": "**Amazon Cognito user pools** with **MFA** set to `ON`, indicating an additional factor is enforced during authentication",
"Risk": "Without **MFA**, password-only sign-in increases **account takeover** via phishing, brute force, and credential stuffing. Compromised accounts yield valid tokens to access data and APIs, alter configurations, and move laterally, eroding **confidentiality** and **integrity**, and potentially affecting **availability** through abuse.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-mfa.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cognito-idp set-user-pool-mfa-config --user-pool-id <example_resource_id> --software-token-mfa-configuration Enabled=true --mfa-configuration ON",
"NativeIaC": "```yaml\n# CloudFormation: Require MFA and enable TOTP\nResources:\n <example_resource_name>:\n Type: AWS::Cognito::UserPool\n Properties:\n MfaConfiguration: ON # Critical: sets MFA to required\n SoftwareTokenMfaConfiguration:\n Enabled: true # Critical: enables TOTP so ON is valid\n```",
"Other": "1. In AWS Console, go to Amazon Cognito > User pools\n2. Select your user pool\n3. Open Sign-in > Multi-factor authentication > Edit\n4. Set MFA enforcement to Require MFA\n5. Enable Authenticator app (TOTP) under MFA methods\n6. Click Save changes",
"Terraform": "```hcl\n# Terraform: Require MFA and enable TOTP\nresource \"aws_cognito_user_pool\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n mfa_configuration = \"ON\" # Critical: sets MFA to required\n\n software_token_mfa_configuration {\n enabled = true # Critical: enables TOTP so ON is valid\n }\n}\n```"
},
"Recommendation": {
"Text": "To enable MFA for an Amazon Cognito User Pool, follow the instructions in the Amazon Cognito documentation.",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-mfa.html"
"Text": "Enable **MFA** at the user pool level (`Required` or risk-based) as a **defense-in-depth** control. Prefer **TOTP** or phishing-resistant methods over SMS. Require factor enrollment during onboarding, and enforce **least privilege** on downstream permissions. Complement with anomaly detection and session hardening to prevent and contain ATO.",
"Url": "https://hub.prowler.com/check/cognito_user_pool_mfa_enabled"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""


@@ -1,30 +1,40 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_password_policy_lowercase",
"CheckTitle": "Ensure Cognito User Pool has password policy to require at least one lowercase letter",
"CheckType": [],
"CheckTitle": "Cognito user pool password policy requires at least one lowercase letter",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"TTPs/Initial Access"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPool",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "User pool password policy should require at least one lowercase letter.",
"Risk": "If the password policy does not require at least one lowercase letter, it may be easier for an attacker to crack the password.",
"RelatedUrl": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html",
"Description": "**Amazon Cognito user pools** are assessed for a password policy that includes a **lowercase character requirement**. Pools with `require_lowercase` set are distinguished from those without a policy, which inherently lack this requirement.",
"Risk": "Absent a **lowercase requirement** reduces password complexity and the overall **keyspace**, making **brute-force** and credential stuffing more feasible. Successful guessing enables account takeover, exposing user data and tokens and permitting profile changes, harming **confidentiality** and **integrity**.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/cognito/latest/developerguide/managing-users-passwords.html",
"https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cognito-idp update-user-pool --user-pool-id <example_resource_id> --policies \"PasswordPolicy={RequireLowercase=true}\"",
"NativeIaC": "```yaml\nResources:\n UserPool:\n Type: AWS::Cognito::UserPool\n Properties:\n Policies:\n PasswordPolicy:\n RequireLowercase: true # Critical: requires at least one lowercase letter in passwords\n```",
"Other": "1. Open the Amazon Cognito console and go to User pools\n2. Select your user pool\n3. Navigate to Authentication (or Authentication methods) > Password policy\n4. Enable Require lowercase (Lowercase letters)\n5. Click Save changes",
"Terraform": "```hcl\nresource \"aws_cognito_user_pool\" \"pool\" {\n name = \"<example_resource_name>\"\n\n password_policy {\n require_lowercase = true # Critical: enforces at least one lowercase character\n }\n}\n```"
},
"Recommendation": {
"Text": "To require at least one lowercase letter in the password, update the password policy for the user pool.",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html"
"Text": "Enforce a strong password policy with `require_lowercase: true`, adequate length, and mixed character types. Complement with **defense in depth**: enable **MFA**, apply rate limiting or lockout for failed attempts, and block common passwords. Review regularly to match business risk and user population.",
"Url": "https://hub.prowler.com/check/cognito_user_pool_password_policy_lowercase"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""


@@ -1,30 +1,40 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_password_policy_minimum_length_14",
"CheckTitle": "Ensure that the password policy for your user pools require a minimum length of 14 or greater",
"CheckType": [],
"CheckTitle": "Cognito user pool has a password policy with a minimum length of 14 characters or more",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"TTPs/Initial Access",
"TTPs/Credential Access"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPool",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "User pools allow you to configure a password policy for your user pool to specify complexity requirements for user passwords. The password policy for your user pools should require a minimum length of 14 or greater.",
"Risk": "If the password policy for your user pools does not require a minimum length of 14 or greater, it may be easier for attackers to guess or brute force user passwords.",
"RelatedUrl": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html",
"Description": "**Amazon Cognito user pools** should have a **password policy** requiring a **minimum length** of `14`.\n\nThis evaluation detects pools without a policy or with `minimum_length` below `14`.",
"Risk": "Low or missing password minimums enable weak credentials, increasing successful **brute force**, **password spraying**, and **credential stuffing** against sign-in endpoints.\n\nResulting **account takeover** threatens confidentiality (data exposure) and integrity/availability (unauthorized changes and abuse).",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html",
"https://docs.aws.amazon.com/cognito/latest/developerguide/managing-users-passwords.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cognito-idp update-user-pool --user-pool-id <example_resource_id> --policies \"PasswordPolicy={MinimumLength=14}\"",
"NativeIaC": "```yaml\nResources:\n <example_resource_name>:\n Type: AWS::Cognito::UserPool\n Properties:\n Policies:\n PasswordPolicy:\n MinimumLength: 14 # Critical: sets minimum password length to >=14 to pass the check\n```",
"Other": "1. Open the Amazon Cognito console and go to User pools\n2. Select your user pool\n3. Go to Authentication (or Authentication methods) > Password policy\n4. Set Minimum password length to 14\n5. Click Save changes",
"Terraform": "```hcl\nresource \"aws_cognito_user_pool\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n\n password_policy {\n minimum_length = 14 # Critical: enforce min length >=14 to pass the check\n }\n}\n```"
},
"Recommendation": {
"Text": "To require a minimum length of 14 or greater for user passwords in your user pools, you can update the password policy for your user pool using the AWS Management Console, AWS CLI, or SDK.",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html"
"Text": "Adopt a strong **password policy** with `minimum_length` `14`, favoring long passphrases.\n- Require mixed character types and block common passwords\n- Enforce password history where appropriate\n- Pair with **MFA** and adaptive risk controls for defense in depth",
"Url": "https://hub.prowler.com/check/cognito_user_pool_password_policy_minimum_length_14"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""


@@ -1,30 +1,42 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_password_policy_number",
"CheckTitle": "Ensure that the password policy for your user pool requires a number",
"CheckType": [],
"CheckTitle": "Cognito user pool password policy requires at least one number",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"TTPs/Credential Access"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPool",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "Checks whether the password policy for your user pool requires a number.",
"Risk": "If the password policy for your user pool does not require a number, the user pool is less secure and more vulnerable to attacks.",
"RelatedUrl": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html",
"Description": "Amazon Cognito user pools are evaluated for a password policy that **requires at least one number**. The assessment checks whether the policy enforces a numeric character via `RequireNumbers` and also identifies pools with no password policy configured.",
"Risk": "Absent a numeric requirement-or any password policy-reduces password entropy, enabling **brute force** and **credential stuffing**. Successful account takeover grants valid tokens to protected APIs, risking data **confidentiality**, unauthorized actions affecting **integrity**, and resource abuse impacting **availability**.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html",
"https://docs.aws.amazon.com/cognito/latest/developerguide/managing-users-passwords.html",
"https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-cognito-userpool-passwordpolicy.html",
"https://support.icompaas.com/support/solutions/articles/62000233673-ensure-that-the-password-policy-for-your-user-pool-requires-a-number"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cognito-idp update-user-pool --user-pool-id <USER_POOL_ID> --policies '{\"PasswordPolicy\":{\"RequireNumbers\":true}}'",
"NativeIaC": "```yaml\n# CloudFormation: Set password policy to require at least one number\nResources:\n <example_resource_name>:\n Type: AWS::Cognito::UserPool\n Properties:\n Policies:\n PasswordPolicy:\n RequireNumbers: true # Critical: enforces at least one numeric character in passwords\n```",
"Other": "1. In the AWS Console, go to Amazon Cognito > User pools\n2. Select your user pool\n3. Open Authentication (or Password policy) settings\n4. Enable Requires at least one number (Require numbers)\n5. Save changes",
"Terraform": "```hcl\n# Terraform: Enable number requirement in Cognito password policy\nresource \"aws_cognito_user_pool\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n\n password_policy {\n require_numbers = true # Critical: enforces at least one numeric character in passwords\n }\n}\n```"
},
"Recommendation": {
"Text": "To require a number in the password policy for your user pool, perform the following actions:",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html"
"Text": "Enforce a strong password policy: require numbers (`RequireNumbers=true`), adequate length (e.g., `>=8`), and mixed case/symbols. Complement with **MFA**, login throttling/lockout, and password reuse limits for **defense in depth**. Apply **least privilege** to applications using tokens and monitor authentication activity.",
"Url": "https://hub.prowler.com/check/cognito_user_pool_password_policy_number"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""


@@ -1,30 +1,40 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_password_policy_symbol",
"CheckTitle": "Ensure that the password policy for your Amazon Cognito user pool requires at least one symbol.",
"CheckType": [],
"CheckTitle": "Cognito user pool password policy requires at least one symbol",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"TTPs/Credential Access"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPool",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "Check whether the password policy for your Amazon Cognito user pool requires at least one symbol.",
"Risk": "If the password policy for your Amazon Cognito user pool does not require at least one symbol, it can be easier for attackers to crack passwords.",
"RelatedUrl": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html",
"Description": "**Amazon Cognito user pool** password policy includes a **symbol requirement** for user passwords.\n\nAssesses the presence of a policy and whether `require_symbols` is configured.",
"Risk": "Absent a **symbol requirement**, passwords have lower entropy, increasing success of **brute force** and **credential stuffing**.\n\nCompromised accounts enable unauthorized token issuance, data access, and profile changes, impacting **confidentiality** and **integrity** across apps relying on the pool.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html",
"https://docs.aws.amazon.com/cognito/latest/developerguide/managing-users-passwords.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cognito-idp update-user-pool --user-pool-id <example_resource_id> --policies \"PasswordPolicy={RequireSymbols=true}\"",
"NativeIaC": "```yaml\n# CloudFormation: ensure Cognito User Pool requires at least one symbol in passwords\nResources:\n <example_resource_name>:\n Type: AWS::Cognito::UserPool\n Properties:\n Policies:\n PasswordPolicy:\n RequireSymbols: true # Critical: enforce at least one symbol to pass the check\n```",
"Other": "1. Open the Amazon Cognito console and go to User pools\n2. Select the target user pool\n3. Go to Authentication (or Sign-in experience) > Password policy\n4. Enable Require special characters (Require symbols)\n5. Click Save changes",
"Terraform": "```hcl\n# Terraform: ensure Cognito User Pool requires at least one symbol in passwords\nresource \"aws_cognito_user_pool\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n\n password_policy {\n require_symbols = true # Critical: enforce at least one symbol to pass the check\n }\n}\n```"
},
"Recommendation": {
"Text": "To require at least one symbol in the password policy for your Amazon Cognito user pool, you can use the AWS Management Console or the AWS CLI.",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html"
"Text": "Enforce a strong **password complexity** policy with `require_symbols=true`, adequate length, and mixed character sets. Combine with **MFA**, throttling or lockout, and credential hygiene to reduce takeover risk. Apply **defense in depth** and **least privilege** to limit blast radius if an account is compromised.",
"Url": "https://hub.prowler.com/check/cognito_user_pool_password_policy_symbol"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""


@@ -1,30 +1,42 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_password_policy_uppercase",
"CheckTitle": "Ensure that the password policy for your user pool requires at least one uppercase letter",
"CheckType": [],
"CheckTitle": "Cognito user pool password policy requires at least one uppercase letter",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/NIST 800-53 Controls (USA)",
"Software and Configuration Checks/Industry and Regulatory Standards/NIST CSF Controls (USA)",
"Software and Configuration Checks/Industry and Regulatory Standards/PCI-DSS",
"TTPs/Initial Access"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPool",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "User pools allow you to configure a password policy for your user pool to specify requirements for user passwords. You can require that passwords have a minimum length, contain at least one uppercase letter, and contain at least one number. You can also require that passwords have at least one special character. You can also set the password policy to require that passwords be case-sensitive.",
"Risk": "If the password policy for your user pool does not require at least one uppercase letter, it may be easier for an attacker to guess or crack user passwords.",
"RelatedUrl": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html",
"Description": "Amazon Cognito user pool password policy is evaluated for an uppercase character requirement (`require_uppercase`). The check also identifies user pools that have no password policy configured.",
"Risk": "Missing an **uppercase requirement** lowers password entropy, easing **password spraying**, **brute force**, and offline cracking. Account takeover risks user data (**confidentiality**), enables unauthorized changes (**integrity**), and may disrupt services through abuse or lockouts (**availability**).",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cognito-idp update-user-pool --user-pool-id <USER_POOL_ID> --policies PasswordPolicy={RequireUppercase=true}",
"NativeIaC": "```yaml\n# CloudFormation to require uppercase in Cognito User Pool password policy\nResources:\n <example_resource_name>:\n Type: AWS::Cognito::UserPool\n Properties:\n Policies:\n PasswordPolicy:\n RequireUppercase: true # Critical: enforce at least one uppercase letter\n```",
"Other": "1. Open the Amazon Cognito console and go to User pools\n2. Select your user pool\n3. Go to Authentication methods (or Sign-in experience) > Password policy\n4. Check Requires at least one uppercase letter\n5. Click Save changes",
"Terraform": "```hcl\n# Require uppercase in Cognito User Pool password policy\nresource \"aws_cognito_user_pool\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n\n password_policy {\n require_uppercase = true # Critical: enforce at least one uppercase letter\n }\n}\n```"
},
"Recommendation": {
"Text": "To require that the password policy for your user pool requires at least one uppercase letter, you can use the AWS Management Console or the AWS CLI. For more information, see the documentation on user pool settings and policies.",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html"
"Text": "Enforce a **strong password policy** requiring **uppercase characters**, sufficient `minimum_length`, and diverse character sets. Layer defenses: **MFA**, **rate limiting/lockout**, and **password reuse history**. *Where feasible*, prefer long passphrases and monitor authentication events to prevent account takeover.",
"Url": "https://hub.prowler.com/check/cognito_user_pool_password_policy_uppercase"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""


@@ -1,30 +1,41 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_self_registration_disabled",
"CheckTitle": "Ensure self registration is disabled for Amazon Cognito User Pools",
"CheckType": [],
"CheckTitle": "Amazon Cognito user pool has self registration disabled",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Software and Configuration Checks/Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"TTPs/Initial Access"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPool",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "Checks whether self registration is disabled for the Amazon Cognito User Pool. Self registration allows users to sign up for an account in the user pool. If self registration is enabled, users can sign up for an account in the user pool without any intervention from the administrator. This can lead to unauthorized access to the application.",
"Risk": "If self registration is enabled, users can sign up for an account in the user pool without any intervention from the administrator. This can lead to unauthorized access to the application.",
"RelatedUrl": "https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_SignUp.html",
"Description": "**Amazon Cognito user pools** are evaluated for **self-service sign-up**. The expected configuration is `AllowAdminCreateUserOnly=true` so only administrators create accounts.\n\n*When self sign-up is allowed*, the check also highlights any linked identity pools and the authenticated role(s) that new users could assume.",
"Risk": "Open sign-up lets untrusted users gain **authenticated identities**, potentially assuming **identity pool roles**. This can expose data (**confidentiality**), enable unauthorized actions (**integrity**), and drive abuse or cost via resource use (**availability**). Mass registrations and token harvesting increase the chance of lateral access.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/cognito/latest/developerguide/signing-up-users-in-your-app.html",
"https://docs.amazonaws.cn/en_us/cognito/latest/developerguide/signing-up-users-in-your-app.html",
"https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-admin-create-user-policy.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cognito-idp update-user-pool --user-pool-id <USER_POOL_ID> --admin-create-user-config AllowAdminCreateUserOnly=true",
"NativeIaC": "```yaml\n# CloudFormation: Disable self-registration in a Cognito User Pool\nResources:\n <example_resource_name>:\n Type: AWS::Cognito::UserPool\n Properties:\n AdminCreateUserConfig:\n AllowAdminCreateUserOnly: true # Critical: disables self sign-up; only admins can create users\n```",
"Other": "1. Open the AWS Console and go to Amazon Cognito > User pools\n2. Select the user pool\n3. Go to the Sign-up tab\n4. In Self-service sign-up, click Edit and disable (uncheck) Enable self-registration\n5. Click Save changes",
"Terraform": "```hcl\n# Terraform: Disable self-registration in a Cognito User Pool\nresource \"aws_cognito_user_pool\" \"<example_resource_name>\" {\n admin_create_user_config {\n allow_admin_create_user_only = true # Critical: disables self sign-up; only admins can create users\n }\n}\n```"
},
"Recommendation": {
"Text": "To disable self registration for the Amazon Cognito User Pool, perform the following actions:",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/signing-up-users-in-your-app.html"
"Text": "Enforce **admin-only user creation**. If self sign-up is necessary, require **verification**, **MFA**, and bot protections; restrict app clients. Apply **least privilege** to any roles for authenticated users and minimize scopes. Use approval/invite flows, add **rate limits**, monitor sign-ups, and audit access for **defense in depth**.",
"Url": "https://hub.prowler.com/check/cognito_user_pool_self_registration_disabled"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""

View File

@@ -1,30 +1,39 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_temporary_password_expiration",
"CheckTitle": "Ensure that the user pool has a temporary password expiration period of 7 days or less",
"CheckType": [],
"CheckTitle": "Cognito user pool has temporary password expiration set to 7 days or less",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"TTPs/Initial Access",
"TTPs/Credential Access"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPool",
"ResourceType": "Other",
"ResourceGroup": "IAM",
"Description": "Temporary passwords are set by the administrator and are used to allow users to sign in and change their password. Temporary passwords are valid for a limited period of time, after which they expire. Temporary passwords are used when an administrator creates a new user account or resets a user password. The temporary password expiration period is the length of time that the temporary password is valid. The default value is 7 days. You can set the expiration period to a value between 0 and 365 days.",
"Risk": "If the temporary password expiration period is too long, it increases the risk of unauthorized access to the user account. If the temporary password expiration period is too short, it increases the risk of users being unable to sign in and change their password.",
"RelatedUrl": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html",
"Description": "**Amazon Cognito user pools** use **administrator-issued temporary passwords**. This evaluates whether a user pool defines a **password policy** and sets the temporary password validity to `7 days` or fewer.",
"Risk": "**Long-lived temporary passwords** or an **absent policy** expand the window for credential reuse or interception. An attacker who obtains a temp password can complete first sign-in and set a new secret, enabling account takeover, unauthorized data access, and changes that impact confidentiality and integrity.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws cognito-idp update-user-pool --user-pool-id <example_resource_id> --policies \"PasswordPolicy={TemporaryPasswordValidityDays=7}\"",
"NativeIaC": "```yaml\n# CloudFormation: Set Cognito temporary password expiration to 7 days or less\nResources:\n <example_resource_name>:\n Type: AWS::Cognito::UserPool\n Properties:\n Policies:\n PasswordPolicy:\n TemporaryPasswordValidityDays: 7 # Critical: ensures temp passwords expire in 7 days (PASS)\n```",
"Other": "1. Open the Amazon Cognito console and select **User pools**\n2. Choose your user pool\n3. Go to **Authentication** (or **Authentication methods**) > **Password policy**\n4. Set **Temporary passwords set by administrators expire in** to **7** (or fewer) days\n5. Click **Save changes**",
"Terraform": "```hcl\n# Terraform: Set Cognito temporary password expiration to 7 days or less\nresource \"aws_cognito_user_pool\" \"<example_resource_name>\" {\n name = \"<example_resource_name>\"\n\n password_policy {\n temporary_password_validity_days = 7 # Critical: 7 or less to pass the check\n }\n}\n```"
},
"Recommendation": {
"Text": "Set the temporary password expiration period to 7 days or less.",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html"
"Text": "Define a **password policy** with temporary password validity `<= 7 days` (use the shortest practical). Require change on first sign-in, enable **MFA** during enrollment, and deliver secrets via secure channels. Apply **least privilege** and revoke or reissue unused temporary credentials promptly.",
"Url": "https://hub.prowler.com/check/cognito_user_pool_temporary_password_expiration"
}
},
"Categories": [],
"Categories": [
"identity-access"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""

View File

@@ -1,30 +1,40 @@
{
"Provider": "aws",
"CheckID": "cognito_user_pool_waf_acl_attached",
"CheckTitle": "Ensure that Amazon Cognito User Pool is associated with a WAF Web ACL",
"CheckType": [],
"CheckTitle": "Amazon Cognito user pool is associated with a WAF Web ACL",
"CheckType": [
"Software and Configuration Checks/AWS Security Best Practices",
"Industry and Regulatory Standards/AWS Foundational Security Best Practices",
"Effects/Denial of Service"
],
"ServiceName": "cognito",
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:cognito-idp:region:account:userpool/userpool-id",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "AwsCognitoUserPool",
"ResourceType": "AwsWafv2WebAcl",
"ResourceGroup": "IAM",
"Description": "Web ACLs are used to control access to your content. You can use a Web ACL to control who can access your content. You can also use a Web ACL to block requests based on IP address, HTTP headers, HTTP body, URI, or URI query string parameters. You can associate a Web ACL with a Cognito User Pool to control access to your content.",
"Risk": "If a Web ACL is not associated with a Cognito User Pool, then the content is not protected by the Web ACL. This could lead to unauthorized access to your content.",
"RelatedUrl": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-waf.html",
"Description": "Amazon Cognito user pools are evaluated for an association with an **AWS WAFv2 web ACL** that filters and controls requests to the hosted UI and public user pool API endpoints.",
"Risk": "Without a web ACL, Cognito endpoints lack layer-7 filtering, enabling:\n- Credential stuffing and account enumeration\n- Bot abuse and high-rate requests degrading service\n- Malicious payload probes\n\nThis threatens **availability**, risks unauthorized access to user data (**confidentiality**), and undermines session **integrity**.",
"RelatedUrl": "",
"AdditionalURLs": [
"https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-waf.html"
],
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
"CLI": "aws wafv2 associate-web-acl --web-acl-arn <WEB_ACL_ARN> --resource-arn <COGNITO_USER_POOL_ARN>",
"NativeIaC": "```yaml\nResources:\n <example_resource_name>:\n Type: AWS::WAFv2::WebACLAssociation\n Properties:\n ResourceArn: <example_resource_arn> # Critical: Cognito User Pool ARN to protect\n WebACLArn: <example_web_acl_arn> # Critical: WAF Web ACL ARN to associate\n```",
"Other": "1. Open the AWS Console and go to Cognito > User pools\n2. Select the user pool\n3. In Security, open the AWS WAF tab and click Edit\n4. Check Use AWS WAF with your user pool\n5. Select the existing regional Web ACL\n6. Click Save changes",
"Terraform": "```hcl\nresource \"aws_wafv2_web_acl_association\" \"<example_resource_name>\" {\n resource_arn = \"<example_resource_arn>\" # Critical: Cognito User Pool ARN\n web_acl_arn = \"<example_web_acl_arn>\" # Critical: WAF Web ACL ARN\n}\n```"
},
"Recommendation": {
"Text": "The Web ACL should be associated with the Cognito User Pool. To associate a Web ACL with a Cognito User Pool, use the AWS Management Console.",
"Url": "https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-waf.html"
"Text": "Associate an **AWS WAFv2 web ACL** with each user pool to enforce layer-7 controls. Use defense-in-depth: managed rule groups, `rate-based` limits, IP reputation, and bot mitigation. Enable request logging and continuously tune rules to reduce false positives. *Avoid rule sets incompatible with Cognito endpoints.*",
"Url": "https://hub.prowler.com/check/cognito_user_pool_waf_acl_attached"
}
},
"Categories": [],
"Categories": [
"threat-detection",
"internet-exposed"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""

View File

@@ -35,12 +35,12 @@ class CloudflareProvider(Provider):
_audit_config: dict
_fixer_config: dict
_mutelist: CloudflareMutelist
_filter_zones: set[str] | None
_filter_zone: set[str] | None
audit_metadata: Audit_Metadata
def __init__(
self,
filter_zones: Iterable[str] | None = None,
filter_zone: Iterable[str] | None = None,
config_path: str = None,
config_content: dict | None = None,
fixer_config: dict = {},
@@ -72,7 +72,7 @@ class CloudflareProvider(Provider):
self._mutelist = CloudflareMutelist(mutelist_path=mutelist_path)
# Store zone filter for filtering resources across services
self._filter_zones = set(filter_zones) if filter_zones else None
self._filter_zone = set(filter_zone) if filter_zone else None
Provider.set_global_provider(self)
@@ -101,9 +101,9 @@ class CloudflareProvider(Provider):
return self._mutelist
@property
def filter_zones(self) -> set[str] | None:
def filter_zone(self) -> set[str] | None:
"""Zone filter from --region argument to filter resources."""
return self._filter_zones
return self._filter_zone
@property
def accounts(self) -> list[CloudflareAccount]:

View File

@@ -30,7 +30,7 @@ class CloudflareIdentityInfo(BaseModel):
email: Optional[str] = None
accounts: list[CloudflareAccount] = Field(default_factory=list)
audited_accounts: list[str] = Field(default_factory=list)
audited_zones: list[str] = Field(default_factory=list)
audited_zone: list[str] = Field(default_factory=list)
class CloudflareOutputOptions(ProviderOutputOptions):

View File

@@ -0,0 +1,4 @@
from prowler.providers.cloudflare.services.zone.zone_service import Zone
from prowler.providers.common.provider import Provider
zone_client = Zone(Provider.get_global_provider())

View File

@@ -1,9 +1,9 @@
{
"Provider": "cloudflare",
"CheckID": "zones_dnssec_enabled",
"CheckID": "zone_dnssec_enabled",
"CheckTitle": "DNSSEC is enabled",
"CheckType": [],
"ServiceName": "zones",
"ServiceName": "zone",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
@@ -23,7 +23,7 @@
},
"Recommendation": {
"Text": "Enable **DNSSEC** and ensure **DS records** are properly configured at your domain registrar.\n- DNSSEC provides cryptographic authenticity for DNS responses\n- After enabling in Cloudflare, you must add the DS record at your registrar\n- Use online DNSSEC validators to verify correct configuration",
"Url": "https://hub.prowler.com/checks/cloudflare/zones_dnssec_enabled"
"Url": "https://hub.prowler.com/checks/cloudflare/zone_dnssec_enabled"
}
},
"Categories": [

View File

@@ -1,8 +1,8 @@
from prowler.lib.check.models import Check, CheckReportCloudflare
from prowler.providers.cloudflare.services.zones.zones_client import zones_client
from prowler.providers.cloudflare.services.zone.zone_client import zone_client
class zones_dnssec_enabled(Check):
class zone_dnssec_enabled(Check):
"""Ensure that DNSSEC is enabled for Cloudflare zones.
DNSSEC (Domain Name System Security Extensions) adds cryptographic signatures
@@ -23,7 +23,7 @@ class zones_dnssec_enabled(Check):
is active, or FAIL status if DNSSEC is not enabled for the zone.
"""
findings = []
for zone in zones_client.zones.values():
for zone in zone_client.zones.values():
report = CheckReportCloudflare(
metadata=self.metadata(),
resource=zone,

View File

@@ -1,9 +1,9 @@
{
"Provider": "cloudflare",
"CheckID": "zones_hsts_enabled",
"CheckID": "zone_hsts_enabled",
"CheckTitle": "HSTS is enabled with recommended max-age and includes subdomains",
"CheckType": [],
"ServiceName": "zones",
"ServiceName": "zone",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
@@ -23,7 +23,7 @@
},
"Recommendation": {
"Text": "Enable **HSTS** with at least a **6-month max-age** (12 months recommended).\n- Verify all resources work over HTTPS before enabling\n- Enable **include_subdomains** to protect all subdomains\n- Consider **HSTS preloading** for maximum protection against SSL stripping attacks\n- Test thoroughly as HSTS cannot be easily disabled once deployed",
"Url": "https://hub.prowler.com/checks/cloudflare/zones_hsts_enabled"
"Url": "https://hub.prowler.com/checks/cloudflare/zone_hsts_enabled"
}
},
"Categories": [

View File

@@ -1,8 +1,8 @@
from prowler.lib.check.models import Check, CheckReportCloudflare
from prowler.providers.cloudflare.services.zones.zones_client import zones_client
from prowler.providers.cloudflare.services.zone.zone_client import zone_client
class zones_hsts_enabled(Check):
class zone_hsts_enabled(Check):
"""Ensure that HSTS is enabled with secure settings for Cloudflare zones.
HTTP Strict Transport Security (HSTS) forces browsers to only connect via
@@ -29,7 +29,7 @@ class zones_hsts_enabled(Check):
# Recommended minimum max-age is 6 months (15768000 seconds)
recommended_max_age = 15768000
for zone in zones_client.zones.values():
for zone in zone_client.zones.values():
report = CheckReportCloudflare(
metadata=self.metadata(),
resource=zone,

View File

@@ -1,9 +1,9 @@
{
"Provider": "cloudflare",
"CheckID": "zones_https_redirect_enabled",
"CheckID": "zone_https_redirect_enabled",
"CheckTitle": "Always Use HTTPS is enabled",
"CheckType": [],
"ServiceName": "zones",
"ServiceName": "zone",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
@@ -23,7 +23,7 @@
},
"Recommendation": {
"Text": "Enable **Always Use HTTPS** to enforce encrypted connections for all visitors.\n- Combine with **HSTS** to prevent SSL stripping attacks\n- Ensure all resources (images, scripts, stylesheets) are served over HTTPS\n- Test for mixed content warnings before enabling",
"Url": "https://hub.prowler.com/checks/cloudflare/zones_https_redirect_enabled"
"Url": "https://hub.prowler.com/checks/cloudflare/zone_https_redirect_enabled"
}
},
"Categories": [

View File

@@ -1,8 +1,8 @@
from prowler.lib.check.models import Check, CheckReportCloudflare
from prowler.providers.cloudflare.services.zones.zones_client import zones_client
from prowler.providers.cloudflare.services.zone.zone_client import zone_client
class zones_https_redirect_enabled(Check):
class zone_https_redirect_enabled(Check):
"""Ensure that Always Use HTTPS redirect is enabled for Cloudflare zones.
The Always Use HTTPS setting automatically redirects all HTTP requests to
@@ -24,7 +24,7 @@ class zones_https_redirect_enabled(Check):
setting is disabled for the zone.
"""
findings = []
for zone in zones_client.zones.values():
for zone in zone_client.zones.values():
report = CheckReportCloudflare(
metadata=self.metadata(),
resource=zone,

View File

@@ -1,9 +1,9 @@
{
"Provider": "cloudflare",
"CheckID": "zones_min_tls_version_secure",
"CheckID": "zone_min_tls_version_secure",
"CheckTitle": "Minimum TLS version is set to 1.2 or higher",
"CheckType": [],
"ServiceName": "zones",
"ServiceName": "zone",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
@@ -24,7 +24,7 @@
},
"Recommendation": {
"Text": "Set **minimum TLS version** to `1.2` or higher.\n- **TLS 1.0 and 1.1** are deprecated by all major browsers and contain known vulnerabilities\n- Consider setting to `TLS 1.3` for environments with modern client requirements\n- Test client compatibility before upgrading minimum version",
"Url": "https://hub.prowler.com/checks/cloudflare/zones_min_tls_version_secure"
"Url": "https://hub.prowler.com/checks/cloudflare/zone_min_tls_version_secure"
}
},
"Categories": [

View File

@@ -1,8 +1,8 @@
from prowler.lib.check.models import Check, CheckReportCloudflare
from prowler.providers.cloudflare.services.zones.zones_client import zones_client
from prowler.providers.cloudflare.services.zone.zone_client import zone_client
class zones_min_tls_version_secure(Check):
class zone_min_tls_version_secure(Check):
"""Ensure that minimum TLS version is set to 1.2 or higher for Cloudflare zones.
TLS 1.0 and 1.1 have known vulnerabilities (BEAST, POODLE) and are deprecated.
@@ -26,7 +26,7 @@ class zones_min_tls_version_secure(Check):
"""
findings = []
for zone in zones_client.zones.values():
for zone in zone_client.zones.values():
report = CheckReportCloudflare(
metadata=self.metadata(),
resource=zone,

View File

@@ -7,7 +7,7 @@ from prowler.providers.cloudflare.lib.service.service import CloudflareService
from prowler.providers.cloudflare.models import CloudflareAccount
class Zones(CloudflareService):
class Zone(CloudflareService):
"""Retrieve Cloudflare zones with security-relevant settings."""
def __init__(self, provider):
@@ -20,9 +20,9 @@ class Zones(CloudflareService):
def _list_zones(self) -> None:
"""List all Cloudflare zones with their basic information."""
logger.info("Zones - Listing zones...")
logger.info("Zone - Listing zones...")
audited_accounts = self.provider.identity.audited_accounts
filter_zones = self.provider.filter_zones
filter_zone = self.provider.filter_zone
seen_zone_ids: set[str] = set()
try:
@@ -44,9 +44,9 @@ class Zones(CloudflareService):
# Apply zone filter if specified via --region
if (
filter_zones
and zone_id not in filter_zones
and zone_name not in filter_zones
filter_zone
and zone_id not in filter_zone
and zone_name not in filter_zone
):
continue
@@ -87,7 +87,7 @@ class Zones(CloudflareService):
def _get_zones_settings(self) -> None:
"""Get settings for all zones."""
logger.info("Zones - Getting zone settings...")
logger.info("Zone - Getting zone settings...")
for zone in self.zones.values():
try:
zone.settings = self._get_zone_settings(zone.id)
@@ -98,7 +98,7 @@ class Zones(CloudflareService):
def _get_zones_dnssec(self) -> None:
"""Get DNSSEC status for all zones."""
logger.info("Zones - Getting DNSSEC status...")
logger.info("Zone - Getting DNSSEC status...")
for zone in self.zones.values():
try:
dnssec = self.client.dns.dnssec.get(zone_id=zone.id)

View File

@@ -1,9 +1,9 @@
{
"Provider": "cloudflare",
"CheckID": "zones_ssl_strict",
"CheckID": "zone_ssl_strict",
"CheckTitle": "SSL/TLS encryption mode is set to Full (Strict)",
"CheckType": [],
"ServiceName": "zones",
"ServiceName": "zone",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
@@ -23,7 +23,7 @@
},
"Recommendation": {
"Text": "Configure **SSL/TLS mode** to `Full (Strict)` and install a valid certificate on your origin server.\n- Use **Cloudflare Origin CA certificates** for seamless integration\n- Ensure origin server presents a valid certificate matching your domain\n- Enable **Authenticated Origin Pulls** for additional security",
"Url": "https://hub.prowler.com/checks/cloudflare/zones_ssl_strict"
"Url": "https://hub.prowler.com/checks/cloudflare/zone_ssl_strict"
}
},
"Categories": [

View File

@@ -1,8 +1,8 @@
from prowler.lib.check.models import Check, CheckReportCloudflare
from prowler.providers.cloudflare.services.zones.zones_client import zones_client
from prowler.providers.cloudflare.services.zone.zone_client import zone_client
class zones_ssl_strict(Check):
class zone_ssl_strict(Check):
"""Ensure that SSL/TLS encryption mode is set to Full (Strict) for Cloudflare zones.
The SSL/TLS encryption mode determines how Cloudflare connects to the origin
@@ -26,7 +26,7 @@ class zones_ssl_strict(Check):
less secure modes like 'off', 'flexible', or 'full'.
"""
findings = []
for zone in zones_client.zones.values():
for zone in zone_client.zones.values():
report = CheckReportCloudflare(
metadata=self.metadata(),
resource=zone,

View File

@@ -1,4 +0,0 @@
from prowler.providers.cloudflare.services.zones.zones_service import Zones
from prowler.providers.common.provider import Provider
zones_client = Zones(Provider.get_global_provider())

View File

@@ -83,6 +83,7 @@ Patterns tailored for Prowler development:
| Skill | Description |
|-------|-------------|
| `skill-creator` | Create new AI agent skills |
| `skill-sync` | Sync skill metadata to AGENTS.md Auto-invoke sections |
## Directory Structure
@@ -96,6 +97,20 @@ skills/
└── README.md # This file
```
## Why Auto-invoke Sections?
**Problem**: AI assistants (Claude, Gemini, etc.) don't reliably auto-invoke skills even when the `Trigger:` in the skill description matches the user's request. They treat skill suggestions as "background noise" and barrel ahead with their default approach.
**Solution**: The `AGENTS.md` files in each directory contain an **Auto-invoke Skills** section that explicitly commands the AI: "When performing X action, ALWAYS invoke Y skill FIRST." This is a [known workaround](https://scottspence.com/posts/claude-code-skills-dont-auto-activate) that forces the AI to load skills.
**Automation**: Instead of manually maintaining these sections, run `skill-sync` after creating or modifying a skill:
```bash
./skills/skill-sync/assets/sync.sh
```
This reads `metadata.scope` and `metadata.auto_invoke` from each `SKILL.md` and generates the Auto-invoke tables in the corresponding `AGENTS.md` files.
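The fields the sync step consumes can be sketched in a few lines. This is an illustrative parser only, assuming the standard frontmatter layout shown in the SKILL.md files in this repository; the actual logic lives in `skills/skill-sync/assets/sync.sh`, and the sample document and `read_sync_fields` helper here are hypothetical:

```python
# Minimal sketch of what a sync step could read from a SKILL.md
# frontmatter: the metadata.scope list and metadata.auto_invoke value.
# Illustrative only -- the real logic lives in skills/skill-sync/assets/sync.sh.
import re

SAMPLE_SKILL_MD = """\
---
name: example-skill
metadata:
  author: prowler-cloud
  version: "1.0"
  scope: [root, api]
  auto_invoke: "Doing example work"
---
# Body of the skill...
"""


def read_sync_fields(text: str) -> dict:
    """Extract scope and auto_invoke from a SKILL.md frontmatter block."""
    frontmatter = text.split("---")[1]  # content between the first two '---' fences
    scope_match = re.search(r"scope:\s*\[([^\]]*)\]", frontmatter)
    invoke_match = re.search(r'auto_invoke:\s*"([^"]*)"', frontmatter)
    return {
        "scope": [s.strip() for s in scope_match.group(1).split(",")] if scope_match else [],
        "auto_invoke": invoke_match.group(1) if invoke_match else None,
    }


print(read_sync_fields(SAMPLE_SKILL_MD))
```

In practice the script walks every `skills/*/SKILL.md`, collects these two fields, and regenerates the Auto-invoke tables in each scoped `AGENTS.md`.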
## Creating New Skills
Use the `skill-creator` skill for guidance:
@@ -108,9 +123,11 @@ Read skills/skill-creator/SKILL.md
1. Create directory: `skills/{skill-name}/`
2. Add `SKILL.md` with required frontmatter
3. Keep content concise (under 500 lines)
4. Reference existing docs instead of duplicating
5. Add to `AGENTS.md` skills table
3. Add `metadata.scope` and `metadata.auto_invoke` fields
4. Keep content concise (under 500 lines)
5. Reference existing docs instead of duplicating
6. Run `./skills/skill-sync/assets/sync.sh` to update AGENTS.md
7. Add to `AGENTS.md` skills table (if not auto-generated)
## Design Principles

View File

@@ -2,11 +2,13 @@
name: ai-sdk-5
description: >
Vercel AI SDK 5 patterns.
Trigger: When building AI chat features - breaking changes from v4.
Trigger: When building AI features with AI SDK v5 (chat, streaming, tools/function calling, UIMessage parts), including migration from v4.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke: "Building AI chat features"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,13 @@
name: django-drf
description: >
Django REST Framework patterns.
Trigger: When building REST APIs with Django - ViewSets, Serializers, Filters.
Trigger: When implementing generic DRF APIs (ViewSets, serializers, routers, permissions, filtersets). For Prowler API specifics (RLS/JSON:API), also use prowler-api.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, api]
auto_invoke: "Generic DRF patterns"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,13 @@
name: nextjs-15
description: >
Next.js 15 App Router patterns.
Trigger: When working with Next.js - routing, Server Actions, data fetching.
Trigger: When working in Next.js App Router (app/), Server Components vs Client Components, Server Actions, Route Handlers, caching/revalidation, and streaming/Suspense.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke: "App Router / Server Actions"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,13 @@
name: playwright
description: >
Playwright E2E testing patterns.
Trigger: When writing E2E tests - Page Objects, selectors, MCP workflow.
Trigger: When writing Playwright E2E tests (Page Object Model, selectors, MCP exploration workflow). For Prowler-specific UI conventions under ui/tests, also use prowler-test-ui.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke: "Writing Playwright E2E tests"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -1,12 +1,14 @@
---
name: prowler-api
description: >
Prowler API patterns: RLS, RBAC, providers, Celery tasks.
Trigger: When working on api/ - models, serializers, views, filters, tasks.
Prowler API patterns: JSON:API, RLS, RBAC, providers, Celery tasks.
Trigger: When working in api/ on models/serializers/viewsets/filters/tasks involving tenant isolation (RLS), RBAC, JSON:API, or provider lifecycle.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, api]
auto_invoke: "Creating/modifying models, views, serializers"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -0,0 +1,52 @@
---
name: prowler-ci
description: >
Helps with Prowler repository CI and PR gates (GitHub Actions workflows).
Trigger: When investigating CI checks failing on a PR, PR title validation, changelog gate/no-changelog label,
conflict marker checks, secret scanning, CODEOWNERS/labeler automation, or anything under .github/workflows.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root]
auto_invoke:
- "Inspect PR CI checks and gates (.github/workflows/*)"
- "Debug why a GitHub Actions job is failing"
- "Understand changelog gate and no-changelog label behavior"
- "Understand PR title conventional-commit validation"
- "Understand CODEOWNERS/labeler-based automation"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash
---
## What this skill covers
Use this skill whenever you are:
- Reading or changing GitHub Actions workflows under `.github/workflows/`
- Explaining why a PR fails checks (title, changelog, conflict markers, secret scanning)
- Figuring out which workflows run for UI/API/SDK changes and why
- Diagnosing path-filtering behavior (why a workflow did/didn't run)
## Quick map (where to look)
- PR template: `.github/pull_request_template.md`
- PR title validation: `.github/workflows/conventional-commit.yml`
- Changelog gate: `.github/workflows/pr-check-changelog.yml`
- Conflict markers check: `.github/workflows/pr-conflict-checker.yml`
- Secret scanning: `.github/workflows/find-secrets.yml`
- Auto labels: `.github/workflows/labeler.yml` and `.github/labeler.yml`
- Review ownership: `.github/CODEOWNERS`
## Debug checklist (PR failing checks)
1. Identify which workflow/job is failing (name + file under `.github/workflows/`).
2. Check path filters: is the workflow supposed to run for your changed files?
3. If it's a title check: verify PR title matches Conventional Commits.
4. If it's changelog: verify the right `CHANGELOG.md` is updated OR apply `no-changelog` label.
5. If it's conflict checker: remove `<<<<<<<`, `=======`, `>>>>>>>` markers.
6. If it's secrets: remove credentials and rotate anything leaked.
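The conflict-marker gate (step 5) amounts to a scan for lines that start with the three marker strings. A minimal sketch, as a hypothetical stand-in for what `.github/workflows/pr-conflict-checker.yml` enforces rather than its actual implementation:

```python
# Hypothetical sketch of a conflict-marker scan (step 5 above); the real
# check is implemented in .github/workflows/pr-conflict-checker.yml.
CONFLICT_MARKERS = ("<<<<<<<", "=======", ">>>>>>>")


def find_conflict_markers(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs for leftover merge conflict markers."""
    return [
        (lineno, line)
        for lineno, line in enumerate(text.splitlines(), start=1)
        if line.startswith(CONFLICT_MARKERS)
    ]


sample = "ok\n<<<<<<< HEAD\nours\n=======\ntheirs\n>>>>>>> feature\nok\n"
for lineno, line in find_conflict_markers(sample):
    print(f"line {lineno}: {line}")
```

Note that a bare `=======` line can also be legitimate content (e.g. an RST heading underline), so a real gate may need to pair markers rather than flag each line in isolation.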
## Notes
- Keep `prowler-pr` focused on *creating* PRs and filling the template.
- Use `prowler-ci` for *CI policies and gates* that apply to PRs.

View File

@@ -0,0 +1,189 @@
---
name: prowler-compliance-review
description: >
Reviews Pull Requests that add or modify compliance frameworks.
Trigger: When reviewing PRs with compliance framework changes, CIS/NIST/PCI-DSS additions, or compliance JSON files.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, sdk]
auto_invoke: "Reviewing compliance framework PRs"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---
## When to Use
- Reviewing PRs that add new compliance frameworks
- Reviewing PRs that modify existing compliance frameworks
- Validating compliance framework JSON structure before merge
---
## Review Checklist (Critical)
| Check | Command/Method | Pass Criteria |
|-------|----------------|---------------|
| JSON Valid | `python3 -m json.tool file.json` | No syntax errors |
| All Checks Exist | Run validation script | 0 missing checks |
| No Duplicate IDs | Run validation script | 0 duplicate requirement IDs |
| CHANGELOG Entry | Manual review | Present under correct version |
| Dashboard File | Compare with existing | Follows established pattern |
| Framework Metadata | Manual review | All required fields populated |
---
## Commands
```bash
# 1. Validate JSON syntax
python3 -m json.tool prowler/compliance/{provider}/{framework}.json > /dev/null \
&& echo "Valid JSON" || echo "INVALID JSON"
# 2. Run full validation script
python3 skills/prowler-compliance-review/assets/validate_compliance.py \
prowler/compliance/{provider}/{framework}.json
# 3. Compare dashboard with existing (find similar framework)
diff dashboard/compliance/{new_framework}.py \
dashboard/compliance/{existing_framework}.py
```
---
## Decision Tree
```
JSON Valid?
├── No → FAIL: Fix JSON syntax errors
└── Yes ↓
All Checks Exist in Codebase?
├── Missing checks → FAIL: Add missing checks or remove from framework
└── All exist ↓
Duplicate Requirement IDs?
├── Yes → FAIL: Fix duplicate IDs
└── No ↓
CHANGELOG Entry Present?
├── No → REQUEST CHANGES: Add CHANGELOG entry
└── Yes ↓
Dashboard File Follows Pattern?
├── No → REQUEST CHANGES: Fix dashboard pattern
└── Yes ↓
Framework Metadata Complete?
├── No → REQUEST CHANGES: Add missing metadata
└── Yes → APPROVE
```
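The gate order above can be sketched as a small helper. This is a hedged illustration of the review flow, not part of Prowler itself; the flag names are invented for clarity:

```python
def review_verdict(
    json_valid: bool,
    all_checks_exist: bool,
    no_duplicate_ids: bool,
    changelog_present: bool,
    dashboard_ok: bool,
    metadata_complete: bool,
) -> str:
    """Map the decision tree above to a final verdict string."""
    # Hard failures: structural problems that block any further review.
    if not (json_valid and all_checks_exist and no_duplicate_ids):
        return "FAIL"
    # Soft failures: fixable findings that need another revision.
    if not (changelog_present and dashboard_ok and metadata_complete):
        return "REQUEST CHANGES"
    return "APPROVE"
```

Note that the three FAIL gates come first: there is no point reviewing CHANGELOG or dashboard details while the framework JSON itself is broken.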
---
## Framework Structure Reference
Compliance frameworks are JSON files in: `prowler/compliance/{provider}/{framework}.json`
```json
{
"Framework": "CIS",
"Name": "CIS Provider Benchmark vX.Y.Z",
"Version": "X.Y",
"Provider": "AWS|Azure|GCP|...",
"Description": "Framework description...",
"Requirements": [
{
"Id": "1.1",
"Description": "Requirement description",
"Checks": ["check_name_1", "check_name_2"],
"Attributes": [
{
"Section": "1 Section Name",
"SubSection": "1.1 Subsection (optional)",
"Profile": "Level 1|Level 2",
"AssessmentStatus": "Automated|Manual",
"Description": "...",
"RationaleStatement": "...",
"ImpactStatement": "...",
"RemediationProcedure": "...",
"AuditProcedure": "...",
"AdditionalInformation": "...",
"References": "...",
"DefaultValue": "..."
}
]
}
]
}
```
---
## Common Issues
| Issue | How to Detect | Resolution |
|-------|---------------|------------|
| Missing checks | Validation script reports missing | Add check implementation or remove from Checks array |
| Duplicate IDs | Validation script reports duplicates | Ensure each requirement has unique ID |
| Empty Checks for Automated | AssessmentStatus is Automated but Checks is empty | Add checks or change to Manual |
| Wrong file location | Framework not in `prowler/compliance/{provider}/` | Move to correct directory |
| Missing dashboard file | No corresponding `dashboard/compliance/{framework}.py` | Create dashboard file following pattern |
| CHANGELOG missing | Not under correct version section | Add entry to prowler/CHANGELOG.md |
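For a quick one-off triage of the first two issues without running the full validation script, a few lines of stdlib Python suffice. This is an illustrative sketch; the `triage` helper and its path argument are placeholders, not Prowler APIs:

```python
import json
from collections import Counter


def triage(framework_path: str, existing_checks: set) -> dict:
    """Report duplicate requirement IDs and checks missing from the codebase."""
    with open(framework_path) as f:
        reqs = json.load(f)["Requirements"]
    # Count every requirement ID; anything seen more than once is a duplicate.
    id_counts = Counter(r["Id"] for r in reqs)
    # Collect every check name referenced anywhere in the framework.
    referenced = {c for r in reqs for c in r.get("Checks", [])}
    return {
        "duplicate_ids": sorted(i for i, n in id_counts.items() if n > 1),
        "missing_checks": sorted(referenced - existing_checks),
    }
```

Feed `existing_checks` from the same directory walk the validation script uses (`prowler/providers/{provider}/services/*/*/`).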
---
## Dashboard File Pattern
Dashboard files must be in `dashboard/compliance/` and follow this exact pattern:
```python
import warnings
from dashboard.common_methods import get_section_containers_cis
warnings.filterwarnings("ignore")
def get_table(data):
aux = data[
[
"REQUIREMENTS_ID",
"REQUIREMENTS_DESCRIPTION",
"REQUIREMENTS_ATTRIBUTES_SECTION",
"CHECKID",
"STATUS",
"REGION",
"ACCOUNTID",
"RESOURCEID",
]
].copy()
return get_section_containers_cis(
aux, "REQUIREMENTS_ID", "REQUIREMENTS_ATTRIBUTES_SECTION"
)
```
---
## Testing the Compliance Framework
After validation passes, test the framework with Prowler:
```bash
# Verify framework is detected
poetry run python prowler-cli.py {provider} --list-compliance | grep {framework}
# Run a quick test with a single check from the framework
poetry run python prowler-cli.py {provider} --compliance {framework} --check {check_name}
# Run full compliance scan (dry-run with limited checks)
poetry run python prowler-cli.py {provider} --compliance {framework} --checks-limit 5
# Generate compliance report in multiple formats
poetry run python prowler-cli.py {provider} --compliance {framework} -M csv json html
```
---
## Resources
- **Validation Script**: See [assets/validate_compliance.py](assets/validate_compliance.py)
- **Related Skills**: See [prowler-compliance](../prowler-compliance/SKILL.md) for creating frameworks
- **Documentation**: See [references/review-checklist.md](references/review-checklist.md)

View File

@@ -0,0 +1,236 @@
#!/usr/bin/env python3
"""
Prowler Compliance Framework Validator
Validates compliance framework JSON files for:
- JSON syntax validity
- Check existence in codebase
- Duplicate requirement IDs
- Required field completeness
- Assessment status consistency
Usage:
python validate_compliance.py <path_to_compliance_json>
Example:
python validate_compliance.py prowler/compliance/azure/cis_5.0_azure.json
"""
import json
import os
import sys
from pathlib import Path
def find_project_root():
"""Find the Prowler project root directory."""
current = Path(__file__).resolve()
for parent in current.parents:
if (parent / "prowler" / "providers").exists():
return parent
return None
def get_existing_checks(project_root: Path, provider: str) -> set:
"""Find all existing checks for a provider in the codebase."""
checks = set()
services_path = (
project_root / "prowler" / "providers" / provider.lower() / "services"
)
if not services_path.exists():
return checks
for service_dir in services_path.iterdir():
if service_dir.is_dir() and not service_dir.name.startswith("__"):
for check_dir in service_dir.iterdir():
if check_dir.is_dir() and not check_dir.name.startswith("__"):
check_file = check_dir / f"{check_dir.name}.py"
if check_file.exists():
checks.add(check_dir.name)
return checks
def validate_compliance_framework(json_path: str) -> dict:
"""Validate a compliance framework JSON file."""
results = {"valid": True, "errors": [], "warnings": [], "stats": {}}
# 1. Check file exists
if not os.path.exists(json_path):
results["valid"] = False
results["errors"].append(f"File not found: {json_path}")
return results
# 2. Validate JSON syntax
try:
with open(json_path, "r") as f:
data = json.load(f)
except json.JSONDecodeError as e:
results["valid"] = False
results["errors"].append(f"Invalid JSON syntax: {e}")
return results
# 3. Check required top-level fields
required_fields = [
"Framework",
"Name",
"Version",
"Provider",
"Description",
"Requirements",
]
for field in required_fields:
if field not in data:
results["valid"] = False
results["errors"].append(f"Missing required field: {field}")
if not results["valid"]:
return results
# 4. Extract provider
provider = data.get("Provider", "").lower()
# 5. Find project root and existing checks
project_root = find_project_root()
if project_root:
existing_checks = get_existing_checks(project_root, provider)
else:
existing_checks = set()
results["warnings"].append(
"Could not find project root - skipping check existence validation"
)
# 6. Validate requirements
requirements = data.get("Requirements", [])
all_checks = set()
requirement_ids = []
automated_count = 0
manual_count = 0
empty_automated = []
for req in requirements:
req_id = req.get("Id", "UNKNOWN")
requirement_ids.append(req_id)
# Collect checks
checks = req.get("Checks", [])
all_checks.update(checks)
# Check assessment status
attributes = req.get("Attributes", [{}])
if attributes:
status = attributes[0].get("AssessmentStatus", "Unknown")
if status == "Automated":
automated_count += 1
if not checks:
empty_automated.append(req_id)
elif status == "Manual":
manual_count += 1
# 7. Check for duplicate IDs
seen_ids = set()
duplicates = []
for req_id in requirement_ids:
if req_id in seen_ids:
duplicates.append(req_id)
seen_ids.add(req_id)
if duplicates:
results["valid"] = False
results["errors"].append(f"Duplicate requirement IDs: {duplicates}")
# 8. Check for missing checks
if existing_checks:
missing_checks = all_checks - existing_checks
if missing_checks:
results["valid"] = False
results["errors"].append(
f"Missing checks in codebase ({len(missing_checks)}): {sorted(missing_checks)}"
)
# 9. Warn about empty automated
if empty_automated:
results["warnings"].append(
f"Automated requirements with no checks: {empty_automated}"
)
# 10. Compile statistics
results["stats"] = {
"framework": data.get("Framework"),
"name": data.get("Name"),
"version": data.get("Version"),
"provider": data.get("Provider"),
"total_requirements": len(requirements),
"automated_requirements": automated_count,
"manual_requirements": manual_count,
"unique_checks_referenced": len(all_checks),
"checks_found_in_codebase": (
len(all_checks - (all_checks - existing_checks))
if existing_checks
else "N/A"
),
"missing_checks": (
len(all_checks - existing_checks) if existing_checks else "N/A"
),
}
return results
def print_report(results: dict):
"""Print a formatted validation report."""
print("\n" + "=" * 60)
print("PROWLER COMPLIANCE FRAMEWORK VALIDATION REPORT")
print("=" * 60)
stats = results.get("stats", {})
if stats:
print(f"\nFramework: {stats.get('name', 'N/A')}")
print(f"Provider: {stats.get('provider', 'N/A')}")
print(f"Version: {stats.get('version', 'N/A')}")
print("-" * 40)
print(f"Total Requirements: {stats.get('total_requirements', 0)}")
print(f" - Automated: {stats.get('automated_requirements', 0)}")
print(f" - Manual: {stats.get('manual_requirements', 0)}")
print(f"Unique Checks: {stats.get('unique_checks_referenced', 0)}")
print(f"Checks in Codebase: {stats.get('checks_found_in_codebase', 'N/A')}")
print(f"Missing Checks: {stats.get('missing_checks', 'N/A')}")
print("\n" + "-" * 40)
if results["errors"]:
print("\nERRORS:")
for error in results["errors"]:
print(f" [X] {error}")
if results["warnings"]:
print("\nWARNINGS:")
for warning in results["warnings"]:
print(f" [!] {warning}")
print("\n" + "-" * 40)
if results["valid"]:
print("RESULT: PASS - Framework is valid")
else:
print("RESULT: FAIL - Framework has errors")
print("=" * 60 + "\n")
def main():
if len(sys.argv) < 2:
print("Usage: python validate_compliance.py <path_to_compliance_json>")
print(
"Example: python validate_compliance.py prowler/compliance/azure/cis_5.0_azure.json"
)
sys.exit(1)
json_path = sys.argv[1]
results = validate_compliance_framework(json_path)
print_report(results)
sys.exit(0 if results["valid"] else 1)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,57 @@
# Compliance PR Review References
## Related Skills
- [prowler-compliance](../../prowler-compliance/SKILL.md) - Creating compliance frameworks
- [prowler-pr](../../prowler-pr/SKILL.md) - PR conventions and checklist
## Documentation
- [Prowler Developer Guide](https://docs.prowler.com/developer-guide/introduction)
- [Compliance Framework Structure](https://docs.prowler.com/developer-guide/compliance)
## File Locations
| File Type | Location |
|-----------|----------|
| Compliance JSON | `prowler/compliance/{provider}/{framework}.json` |
| Dashboard | `dashboard/compliance/{framework}_{provider}.py` |
| CHANGELOG | `prowler/CHANGELOG.md` |
| Checks | `prowler/providers/{provider}/services/{service}/{check}/` |
## Validation Script
Run the validation script from the project root:
```bash
python3 skills/prowler-compliance-review/assets/validate_compliance.py \
prowler/compliance/{provider}/{framework}.json
```
## PR Review Summary Template
When completing a compliance framework review, use this summary format:
```markdown
## Compliance Framework Review Summary
| Check | Result |
|-------|--------|
| JSON Valid | PASS/FAIL |
| All Checks Exist | PASS/FAIL (N missing) |
| No Duplicate IDs | PASS/FAIL |
| CHANGELOG Entry | PASS/FAIL |
| Dashboard File | PASS/FAIL |
### Statistics
- Total Requirements: N
- Automated: N
- Manual: N
- Unique Checks: N
### Recommendation
APPROVE / REQUEST CHANGES / FAIL
### Issues Found
1. ...
```

View File

@@ -2,11 +2,15 @@
name: prowler-compliance
description: >
Creates and manages Prowler compliance frameworks.
Trigger: When working with compliance frameworks (CIS, NIST, PCI-DSS, SOC2, GDPR).
Trigger: When working with compliance frameworks (CIS, NIST, PCI-DSS, SOC2, GDPR, ISO27001, ENS, MITRE ATT&CK).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
version: "1.1"
scope: [root, sdk]
auto_invoke:
- "Creating/updating compliance frameworks"
- "Mapping checks to compliance controls"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---
@@ -16,98 +20,472 @@ Use this skill when:
- Creating a new compliance framework for any provider
- Adding requirements to existing frameworks
- Mapping checks to compliance controls
- Understanding compliance framework structures and attributes
## Compliance Framework Structure
## Compliance Framework Location
Frameworks are JSON files in: `prowler/compliance/{provider}/{framework}.json`
Frameworks are JSON files located in: `prowler/compliance/{provider}/{framework_name}_{provider}.json`
**Supported Providers:**
- `aws` - Amazon Web Services
- `azure` - Microsoft Azure
- `gcp` - Google Cloud Platform
- `kubernetes` - Kubernetes
- `github` - GitHub
- `m365` - Microsoft 365
- `alibabacloud` - Alibaba Cloud
- `oraclecloud` - Oracle Cloud
- `oci` - Oracle Cloud Infrastructure
- `nhn` - NHN Cloud
- `mongodbatlas` - MongoDB Atlas
- `iac` - Infrastructure as Code
- `llm` - Large Language Models
## Base Framework Structure
All compliance frameworks share this base structure:
```json
{
"Framework": "CIS",
"Name": "CIS Amazon Web Services Foundations Benchmark v2.0.0",
"Version": "2.0",
"Provider": "AWS",
"Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance...",
"Framework": "FRAMEWORK_NAME",
"Name": "Full Framework Name with Version",
"Version": "X.X",
"Provider": "PROVIDER",
"Description": "Framework description...",
"Requirements": [
{
"Id": "1.1",
"Name": "Requirement name",
"Description": "Detailed description of the requirement",
"Attributes": [
{
"Section": "1. Identity and Access Management",
"Profile": "Level 1",
"AssessmentStatus": "Automated",
"Description": "Attribute description"
}
],
"Id": "requirement_id",
"Description": "Requirement description",
"Name": "Optional requirement name",
"Attributes": [...],
"Checks": ["check_name_1", "check_name_2"]
}
]
}
```
## Supported Frameworks
## Framework-Specific Attribute Structures
**Industry standards:**
- CIS (Center for Internet Security)
- NIST 800-53, NIST CSF
- CISA
Each framework type has its own attribute model. Below are the exact structures used by Prowler:
**Regulatory compliance:**
- PCI-DSS
- HIPAA
- GDPR
- FedRAMP
- SOC2
### CIS (Center for Internet Security)
**Cloud-specific:**
- AWS Well-Architected Framework (Security Pillar)
- AWS Foundational Technical Review (FTR)
- Azure Security Benchmark
- GCP Security Best Practices
## Framework Requirement Mapping
Each requirement maps to one or more checks:
**Framework ID format:** `cis_{version}_{provider}` (e.g., `cis_5.0_aws`)
```json
{
"Id": "2.1.1",
"Name": "Ensure MFA is enabled for all IAM users",
"Description": "Multi-Factor Authentication adds an extra layer of protection...",
"Checks": [
"iam_user_mfa_enabled",
"iam_root_mfa_enabled",
"iam_user_hardware_mfa_enabled"
"Id": "1.1",
"Description": "Maintain current contact details",
"Checks": ["account_maintain_current_contact_details"],
"Attributes": [
{
"Section": "1 Identity and Access Management",
"SubSection": "Optional subsection",
"Profile": "Level 1",
"AssessmentStatus": "Automated",
"Description": "Detailed attribute description",
"RationaleStatement": "Why this control matters",
"ImpactStatement": "Impact of implementing this control",
"RemediationProcedure": "Steps to fix the issue",
"AuditProcedure": "Steps to verify compliance",
"AdditionalInformation": "Extra notes",
"DefaultValue": "Default configuration value",
"References": "https://docs.example.com/reference"
}
]
}
```
**Profile values:** `Level 1`, `Level 2`, `E3 Level 1`, `E3 Level 2`, `E5 Level 1`, `E5 Level 2`
**AssessmentStatus values:** `Automated`, `Manual`
---
### ISO 27001
**Framework ID format:** `iso27001_{year}_{provider}` (e.g., `iso27001_2022_aws`)
```json
{
"Id": "A.5.1",
"Description": "Policies for information security should be defined...",
"Name": "Policies for information security",
"Checks": ["securityhub_enabled"],
"Attributes": [
{
"Category": "A.5 Organizational controls",
"Objetive_ID": "A.5.1",
"Objetive_Name": "Policies for information security",
"Check_Summary": "Summary of what is being checked"
}
]
}
```
**Note:** `Objetive_ID` and `Objetive_Name` use this exact spelling (not "Objective").
---
### ENS (Esquema Nacional de Seguridad - Spain)
**Framework ID format:** `ens_rd2022_{provider}` (e.g., `ens_rd2022_aws`)
```json
{
"Id": "op.acc.1.aws.iam.2",
"Description": "Proveedor de identidad centralizado",
"Checks": ["iam_check_saml_providers_sts"],
"Attributes": [
{
"IdGrupoControl": "op.acc.1",
"Marco": "operacional",
"Categoria": "control de acceso",
"DescripcionControl": "Detailed control description in Spanish",
"Nivel": "alto",
"Tipo": "requisito",
"Dimensiones": ["trazabilidad", "autenticidad"],
"ModoEjecucion": "automatico",
"Dependencias": []
}
]
}
```
**Nivel values:** `opcional`, `bajo`, `medio`, `alto`
**Tipo values:** `refuerzo`, `requisito`, `recomendacion`, `medida`
**Dimensiones values:** `confidencialidad`, `integridad`, `trazabilidad`, `autenticidad`, `disponibilidad`
---
### MITRE ATT&CK
**Framework ID format:** `mitre_attack_{provider}` (e.g., `mitre_attack_aws`)
MITRE uses a different requirement structure:
```json
{
"Name": "Exploit Public-Facing Application",
"Id": "T1190",
"Tactics": ["Initial Access"],
"SubTechniques": [],
"Platforms": ["Containers", "IaaS", "Linux", "Network", "Windows", "macOS"],
"Description": "Adversaries may attempt to exploit a weakness...",
"TechniqueURL": "https://attack.mitre.org/techniques/T1190/",
"Checks": ["guardduty_is_enabled", "inspector2_is_enabled"],
"Attributes": [
{
"AWSService": "Amazon GuardDuty",
"Category": "Detect",
"Value": "Minimal",
"Comment": "Explanation of how this service helps..."
}
]
}
```
**For Azure:** Use `AzureService` instead of `AWSService`
**For GCP:** Use `GCPService` instead of `AWSService`
**Category values:** `Detect`, `Protect`, `Respond`
**Value values:** `Minimal`, `Partial`, `Significant`
---
### NIST 800-53
**Framework ID format:** `nist_800_53_revision_{version}_{provider}` (e.g., `nist_800_53_revision_5_aws`)
```json
{
"Id": "ac_2_1",
"Name": "AC-2(1) Automated System Account Management",
"Description": "Support the management of system accounts...",
"Checks": ["iam_password_policy_minimum_length_14"],
"Attributes": [
{
"ItemId": "ac_2_1",
"Section": "Access Control (AC)",
"SubSection": "Account Management (AC-2)",
"SubGroup": "AC-2(3) Disable Accounts",
"Service": "iam"
}
]
}
```
---
### Generic Compliance (Fallback)
For frameworks without specific attribute models:
```json
{
"Id": "requirement_id",
"Description": "Requirement description",
"Name": "Optional name",
"Checks": ["check_name"],
"Attributes": [
{
"ItemId": "item_id",
"Section": "Section name",
"SubSection": "Subsection name",
"SubGroup": "Subgroup name",
"Service": "service_name",
"Type": "type"
}
]
}
```
---
### AWS Well-Architected Framework
**Framework ID format:** `aws_well_architected_framework_{pillar}_pillar_aws`
```json
{
"Id": "SEC01-BP01",
"Description": "Establish common guardrails...",
"Name": "Establish common guardrails",
"Checks": ["account_part_of_organizations"],
"Attributes": [
{
"Name": "Establish common guardrails",
"WellArchitectedQuestionId": "securely-operate",
"WellArchitectedPracticeId": "sec_securely_operate_multi_accounts",
"Section": "Security",
"SubSection": "Security foundations",
"LevelOfRisk": "High",
"AssessmentMethod": "Automated",
"Description": "Detailed description",
"ImplementationGuidanceUrl": "https://docs.aws.amazon.com/..."
}
]
}
```
---
### KISA ISMS-P (Korea)
**Framework ID format:** `kisa_isms_p_{year}_{provider}` (e.g., `kisa_isms_p_2023_aws`)
```json
{
"Id": "1.1.1",
"Description": "Requirement description",
"Name": "Requirement name",
"Checks": ["check_name"],
"Attributes": [
{
"Domain": "1. Management System",
"Subdomain": "1.1 Management System Establishment",
"Section": "1.1.1 Section Name",
"AuditChecklist": ["Checklist item 1", "Checklist item 2"],
"RelatedRegulations": ["Regulation 1"],
"AuditEvidence": ["Evidence type 1"],
"NonComplianceCases": ["Non-compliance example"]
}
]
}
```
---
### C5 (Germany Cloud Computing Compliance Criteria Catalogue)
**Framework ID format:** `c5_{provider}` (e.g., `c5_aws`)
```json
{
"Id": "BCM-01",
"Description": "Requirement description",
"Name": "Requirement name",
"Checks": ["check_name"],
"Attributes": [
{
"Section": "BCM Business Continuity Management",
"SubSection": "BCM-01",
"Type": "Basic Criteria",
"AboutCriteria": "Description of criteria",
"ComplementaryCriteria": "Additional criteria"
}
]
}
```
---
### CCC (Cloud Computing Compliance)
**Framework ID format:** `ccc_{provider}` (e.g., `ccc_aws`)
```json
{
"Id": "CCC.C01",
"Description": "Requirement description",
"Name": "Requirement name",
"Checks": ["check_name"],
"Attributes": [
{
"FamilyName": "Cryptography & Key Management",
"FamilyDescription": "Family description",
"Section": "CCC.C01",
"SubSection": "Key Management",
"SubSectionObjective": "Objective description",
"Applicability": ["IaaS", "PaaS", "SaaS"],
"Recommendation": "Recommended action",
"SectionThreatMappings": [{"threat": "T1190"}],
"SectionGuidelineMappings": [{"guideline": "NIST"}]
}
]
}
```
---
### Prowler ThreatScore
**Framework ID format:** `prowler_threatscore_{provider}` (e.g., `prowler_threatscore_aws`)
Prowler ThreatScore is a custom security scoring framework developed by Prowler that evaluates account security based on **four main pillars**:
| Pillar | Description |
|--------|-------------|
| **1. IAM** | Identity and Access Management controls (authentication, authorization, credentials) |
| **2. Attack Surface** | Network exposure, public resources, security group rules |
| **3. Logging and Monitoring** | Audit logging, threat detection, forensic readiness |
| **4. Encryption** | Data at rest and in transit encryption |
**Scoring System:**
- **LevelOfRisk** (1-5): Severity of the security issue
- `5` = Critical (e.g., root MFA, public S3 buckets)
- `4` = High (e.g., user MFA, public EC2)
- `3` = Medium (e.g., password policies, encryption)
- `2` = Low
- `1` = Informational
- **Weight**: Impact multiplier for score calculation
- `1000` = Critical controls (root security, public exposure)
- `100` = High-impact controls (user authentication, monitoring)
- `10` = Standard controls (password policies, encryption)
- `1` = Low-impact controls (best practices)
```json
{
"Id": "1.1.1",
"Description": "Ensure MFA is enabled for the 'root' user account",
"Checks": ["iam_root_mfa_enabled"],
"Attributes": [
{
"Title": "MFA enabled for 'root'",
"Section": "1. IAM",
"SubSection": "1.1 Authentication",
"AttributeDescription": "The root user account holds the highest level of privileges within an AWS account. Enabling MFA enhances security by adding an additional layer of protection.",
"AdditionalInformation": "Enabling MFA enhances console security by requiring the authenticating user to both possess a time-sensitive key-generating device and have knowledge of their credentials.",
"LevelOfRisk": 5,
"Weight": 1000
}
]
}
```
**Available for providers:** AWS, Kubernetes, M365
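The exact score aggregation lives in Prowler's scoring code; as a rough illustration only (an assumed formula, not the shipped one), a weighted pass ratio over requirements would look like:

```python
def weighted_score(results: list) -> float:
    """Illustrative weighted pass ratio over scored requirements.

    Each result dict carries the requirement's Weight attribute and a
    passed flag. NOTE: this formula is an assumption for illustration,
    not Prowler's actual ThreatScore computation.
    """
    total = sum(r["Weight"] for r in results)
    passed = sum(r["Weight"] for r in results if r["passed"])
    return 100.0 * passed / total if total else 0.0
```

Whatever the real formula, the Weight tiers above mean a single failed `1000`-weight control (e.g. root MFA) dominates dozens of passing `10`-weight controls.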
---
## Available Compliance Frameworks
### AWS (41 frameworks)
| Framework | File Name |
|-----------|-----------|
| CIS 1.4, 1.5, 2.0, 3.0, 4.0, 5.0 | `cis_{version}_aws.json` |
| ISO 27001:2013, 2022 | `iso27001_{year}_aws.json` |
| NIST 800-53 Rev 4, 5 | `nist_800_53_revision_{version}_aws.json` |
| NIST 800-171 Rev 2 | `nist_800_171_revision_2_aws.json` |
| NIST CSF 1.1, 2.0 | `nist_csf_{version}_aws.json` |
| PCI DSS 3.2.1, 4.0 | `pci_{version}_aws.json` |
| HIPAA | `hipaa_aws.json` |
| GDPR | `gdpr_aws.json` |
| SOC 2 | `soc2_aws.json` |
| FedRAMP Low/Moderate | `fedramp_{level}_revision_4_aws.json` |
| ENS RD2022 | `ens_rd2022_aws.json` |
| MITRE ATT&CK | `mitre_attack_aws.json` |
| C5 Germany | `c5_aws.json` |
| CISA | `cisa_aws.json` |
| FFIEC | `ffiec_aws.json` |
| RBI Cyber Security | `rbi_cyber_security_framework_aws.json` |
| AWS Well-Architected | `aws_well_architected_framework_{pillar}_pillar_aws.json` |
| AWS FTR | `aws_foundational_technical_review_aws.json` |
| GxP 21 CFR Part 11, EU Annex 11 | `gxp_{standard}_aws.json` |
| KISA ISMS-P 2023 | `kisa_isms_p_2023_aws.json` |
| NIS2 | `nis2_aws.json` |
### Azure (15+ frameworks)
| Framework | File Name |
|-----------|-----------|
| CIS 2.0, 2.1, 3.0, 4.0 | `cis_{version}_azure.json` |
| ISO 27001:2022 | `iso27001_2022_azure.json` |
| ENS RD2022 | `ens_rd2022_azure.json` |
| MITRE ATT&CK | `mitre_attack_azure.json` |
| PCI DSS 4.0 | `pci_4.0_azure.json` |
| NIST CSF 2.0 | `nist_csf_2.0_azure.json` |
### GCP (15+ frameworks)
| Framework | File Name |
|-----------|-----------|
| CIS 2.0, 3.0, 4.0 | `cis_{version}_gcp.json` |
| ISO 27001:2022 | `iso27001_2022_gcp.json` |
| HIPAA | `hipaa_gcp.json` |
| MITRE ATT&CK | `mitre_attack_gcp.json` |
| PCI DSS 4.0 | `pci_4.0_gcp.json` |
| NIST CSF 2.0 | `nist_csf_2.0_gcp.json` |
### Kubernetes (6 frameworks)
| Framework | File Name |
|-----------|-----------|
| CIS 1.8, 1.10, 1.11 | `cis_{version}_kubernetes.json` |
| ISO 27001:2022 | `iso27001_2022_kubernetes.json` |
| PCI DSS 4.0 | `pci_4.0_kubernetes.json` |
### Other Providers
- **GitHub:** `cis_1.0_github.json`
- **M365:** `cis_4.0_m365.json`, `iso27001_2022_m365.json`
- **NHN:** `iso27001_2022_nhn.json`
## Best Practices
1. **Requirement IDs**: Follow the original framework numbering (e.g., "1.1", "2.3.4")
2. **Check Mapping**: Map to existing checks when possible, create new checks only if needed
3. **Completeness**: Include all framework requirements, even if no check exists (document as manual)
4. **Version Control**: Include framework version in the name and file
1. **Requirement IDs**: Follow the original framework numbering exactly (e.g., "1.1", "A.5.1", "T1190", "ac_2_1")
2. **Check Mapping**: Map to existing checks when possible. Use `Checks: []` for manual-only requirements
3. **Completeness**: Include all framework requirements, even those without automated checks
4. **Version Control**: Include framework version in `Name` and `Version` fields
5. **File Naming**: Use format `{framework}_{version}_{provider}.json`
6. **Validation**: Prowler validates frameworks against Pydantic models at startup - a structurally invalid JSON file raises a validation error before any scan runs
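A minimal stdlib stand-in for that startup validation, useful before committing (the real Pydantic models are in `prowler/lib/check/compliance_models.py`; this sketch only mirrors the base required fields documented above):

```python
import json

# Top-level fields required by the base framework structure.
REQUIRED_FIELDS = (
    "Framework", "Name", "Version", "Provider", "Description", "Requirements",
)


def missing_fields(framework_path: str) -> list:
    """Return base-structure fields absent from the framework JSON."""
    with open(framework_path) as f:
        data = json.load(f)
    return [field for field in REQUIRED_FIELDS if field not in data]
```

An empty return list only means the base fields exist; framework-specific attribute models (CIS, ENS, MITRE, etc.) impose further constraints that only the real models enforce.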
## Commands
```bash
# List available frameworks for a provider
poetry run python prowler-cli.py {provider} --list-compliance
prowler {provider} --list-compliance
# Run scan with specific compliance framework
poetry run python prowler-cli.py {provider} --compliance {framework}
prowler aws --compliance cis_5.0_aws
# Run scan with multiple frameworks
poetry run python prowler-cli.py {provider} --compliance cis_aws_benchmark_v2 pci_dss_3.2.1
prowler aws --compliance cis_5.0_aws pci_4.0_aws
# Output compliance report
poetry run python prowler-cli.py {provider} --compliance {framework} -M csv json html
# Output compliance report in multiple formats
prowler aws --compliance cis_5.0_aws -M csv json html
```
## Code References
- **Compliance Models:** `prowler/lib/check/compliance_models.py`
- **Compliance Processing:** `prowler/lib/check/compliance.py`
- **Compliance Output:** `prowler/lib/outputs/compliance/`
## Resources
- **Templates**: See [assets/](assets/) for complete CIS framework JSON template
- **Documentation**: See [references/compliance-docs.md](references/compliance-docs.md) for official Prowler Developer Guide links
- **Templates:** See [assets/](assets/) for framework JSON templates
- **Documentation:** See [references/compliance-docs.md](references/compliance-docs.md) for additional resources

View File

@@ -3,7 +3,7 @@
"Name": "CIS Amazon Web Services Foundations Benchmark v5.0.0",
"Version": "5.0",
"Provider": "AWS",
"Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance for configuring security options for a subset of Amazon Web Services.",
"Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance for configuring security options for a subset of Amazon Web Services with an emphasis on foundational, testable, and architecture agnostic settings.",
"Requirements": [
{
"Id": "1.1",
@@ -17,13 +17,35 @@
"Profile": "Level 1",
"AssessmentStatus": "Manual",
"Description": "Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization.",
"RationaleStatement": "If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed.",
"RationaleStatement": "If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior is not corrected then AWS may suspend the account.",
"ImpactStatement": "",
"RemediationProcedure": "This activity can only be performed via the AWS Console. Navigate to Account Settings and update contact information.",
"AuditProcedure": "This activity can only be performed via the AWS Console. Navigate to Account Settings and verify contact information is current.",
"AdditionalInformation": "",
"References": "https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact.html",
"DefaultValue": ""
"DefaultValue": "",
"References": "https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact.html"
}
]
},
{
"Id": "1.2",
"Description": "Ensure security contact information is registered",
"Checks": [
"account_security_contact_information_is_registered"
],
"Attributes": [
{
"Section": "1 Identity and Access Management",
"Profile": "Level 1",
"AssessmentStatus": "Automated",
"Description": "AWS provides customers with the option to specify the contact information for the account's security team. It is recommended that this information be provided.",
"RationaleStatement": "Specifying security-specific contact information will help ensure that security advisories sent by AWS reach the team in your organization that is best equipped to respond to them.",
"ImpactStatement": "",
"RemediationProcedure": "Navigate to AWS Console > Account > Alternate Contacts and add security contact information.",
"AuditProcedure": "Run: aws account get-alternate-contact --alternate-contact-type SECURITY",
"AdditionalInformation": "",
"DefaultValue": "By default, no security contact is registered.",
"References": "https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact-alternate.html"
}
]
},
@@ -38,37 +60,81 @@
"Section": "1 Identity and Access Management",
"Profile": "Level 1",
"AssessmentStatus": "Automated",
"Description": "The 'root' user account is the most privileged user in an AWS account. AWS Access Keys provide programmatic access to a given AWS account.",
"RationaleStatement": "Deleting access keys associated with the 'root' user account limits vectors by which the account can be compromised.",
"Description": "The 'root' user account is the most privileged user in an AWS account. AWS Access Keys provide programmatic access to a given AWS account. It is recommended that all access keys associated with the 'root' user account be deleted.",
"RationaleStatement": "Deleting access keys associated with the 'root' user account limits vectors by which the account can be compromised. Additionally, deleting the root access keys encourages the creation and use of role based accounts that are least privileged.",
"ImpactStatement": "",
"RemediationProcedure": "Navigate to IAM console, select root user, Security credentials tab, and delete any access keys.",
"AuditProcedure": "Run: aws iam get-account-summary | grep 'AccountAccessKeysPresent'",
"AdditionalInformation": "IAM User account root for us-gov cloud regions is not enabled by default.",
"References": "https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html",
"DefaultValue": ""
"DefaultValue": "By default, no root access keys exist.",
"References": "https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html"
}
]
},
{
"Id": "1.11",
"Description": "Ensure credentials unused for 45 days or more are disabled",
"Id": "1.4",
"Description": "Ensure MFA is enabled for the 'root' user account",
"Checks": [
"iam_user_accesskey_unused",
"iam_user_console_access_unused"
"iam_root_mfa_enabled"
],
"Attributes": [
{
"Section": "1 Identity and Access Management",
"Profile": "Level 1",
"AssessmentStatus": "Automated",
"Description": "The 'root' user account is the most privileged user in an AWS account. Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their username and password as well as for an authentication code from their AWS MFA device.",
"RationaleStatement": "Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that emits a time-sensitive key and have knowledge of a credential.",
"ImpactStatement": "",
"RemediationProcedure": "Using IAM console, navigate to Dashboard and choose Activate MFA on your root account.",
"AuditProcedure": "Run: aws iam get-account-summary | grep 'AccountMFAEnabled'. Ensure the value is 1.",
"AdditionalInformation": "",
"DefaultValue": "MFA is not enabled by default.",
"References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#id_root-user_manage_mfa"
}
]
},
{
"Id": "1.5",
"Description": "Ensure hardware MFA is enabled for the 'root' user account",
"Checks": [
"iam_root_hardware_mfa_enabled"
],
"Attributes": [
{
"Section": "1 Identity and Access Management",
"Profile": "Level 2",
"AssessmentStatus": "Automated",
"Description": "The 'root' user account is the most privileged user in an AWS account. MFA adds an extra layer of protection on top of a user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password as well as for an authentication code from their AWS MFA device. For Level 2, it is recommended that the root user account be protected with a hardware MFA.",
"RationaleStatement": "A hardware MFA has a smaller attack surface than a virtual MFA. For example, a hardware MFA does not suffer from the attack surface introduced by the mobile smartphone on which a virtual MFA resides.",
"ImpactStatement": "Using a hardware MFA device instead of a virtual MFA may result in additional hardware costs.",
"RemediationProcedure": "Using IAM console, navigate to Dashboard, select root user, and configure hardware MFA device.",
"AuditProcedure": "Run: aws iam list-virtual-mfa-devices and verify the root account is not using a virtual MFA.",
"AdditionalInformation": "For recommendations on protecting hardware MFA devices, refer to https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_lost-or-broken.html",
"DefaultValue": "MFA is not enabled by default.",
"References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_physical.html"
}
]
},
{
"Id": "2.1.1",
"Description": "Ensure S3 Bucket Policy is set to deny HTTP requests",
"Checks": [
"s3_bucket_secure_transport_policy"
],
"Attributes": [
{
"Section": "2 Storage",
"SubSection": "2.1 Simple Storage Service (S3)",
"Profile": "Level 2",
"AssessmentStatus": "Automated",
"Description": "At the Amazon S3 bucket level, you can configure permissions through a bucket policy making the objects accessible only through HTTPS.",
"RationaleStatement": "By default, Amazon S3 allows both HTTP and HTTPS requests. To achieve only allowing access to Amazon S3 objects through HTTPS you also have to explicitly deny access to HTTP requests. Bucket policies that allow HTTPS requests without explicitly denying HTTP requests will not comply with this recommendation.",
"ImpactStatement": "Enabling this setting will result in rejection of requests that do not use HTTPS for S3 bucket operations.",
"RemediationProcedure": "Add a bucket policy with condition aws:SecureTransport: false that denies all s3 actions.",
"AuditProcedure": "Review bucket policies for Deny statements with aws:SecureTransport: false condition.",
"AdditionalInformation": "",
"DefaultValue": "By default, S3 buckets allow both HTTP and HTTPS requests.",
"References": "https://aws.amazon.com/blogs/security/how-to-use-bucket-policies-and-apply-defense-in-depth-to-help-secure-your-amazon-s3-data/"
}
]
}

View File

@@ -0,0 +1,128 @@
{
"Framework": "ENS",
"Name": "ENS RD 311/2022 - Categoria Alta",
"Version": "RD2022",
"Provider": "AWS",
"Description": "The accreditation scheme of the ENS (Esquema Nacional de Seguridad - National Security Scheme of Spain) has been developed by the Ministry of Finance and Public Administrations and the CCN (National Cryptological Center). This includes the basic principles and minimum requirements necessary for the adequate protection of information.",
"Requirements": [
{
"Id": "op.acc.1.aws.iam.2",
"Description": "Proveedor de identidad centralizado",
"Attributes": [
{
"IdGrupoControl": "op.acc.1",
"Marco": "operacional",
"Categoria": "control de acceso",
"DescripcionControl": "Es muy recomendable la utilizacion de un proveedor de identidades que permita administrar las identidades en un lugar centralizado, en vez de utilizar IAM para ello.",
"Nivel": "alto",
"Tipo": "requisito",
"Dimensiones": [
"trazabilidad",
"autenticidad"
],
"ModoEjecucion": "automatico",
"Dependencias": []
}
],
"Checks": [
"iam_check_saml_providers_sts"
]
},
{
"Id": "op.acc.2.aws.iam.4",
"Description": "Requisitos de acceso",
"Attributes": [
{
"IdGrupoControl": "op.acc.2",
"Marco": "operacional",
"Categoria": "control de acceso",
"DescripcionControl": "Se debera delegar en cuentas administradoras la administracion de la organizacion, dejando la cuenta maestra sin uso y con las medidas de seguridad pertinentes.",
"Nivel": "alto",
"Tipo": "requisito",
"Dimensiones": [
"confidencialidad",
"integridad",
"trazabilidad",
"autenticidad"
],
"ModoEjecucion": "automatico",
"Dependencias": []
}
],
"Checks": [
"iam_avoid_root_usage"
]
},
{
"Id": "op.acc.3.r1.aws.iam.1",
"Description": "Segregacion rigurosa",
"Attributes": [
{
"IdGrupoControl": "op.acc.3.r1",
"Marco": "operacional",
"Categoria": "control de acceso",
"DescripcionControl": "En caso de ser de aplicacion, la segregacion debera tener en cuenta la separacion de las funciones de configuracion y mantenimiento y de auditoria de cualquier otra.",
"Nivel": "alto",
"Tipo": "refuerzo",
"Dimensiones": [
"confidencialidad",
"integridad",
"trazabilidad",
"autenticidad"
],
"ModoEjecucion": "automatico",
"Dependencias": []
}
],
"Checks": [
"iam_support_role_created"
]
},
{
"Id": "op.exp.8.aws.cloudwatch.1",
"Description": "Registro de la actividad",
"Attributes": [
{
"IdGrupoControl": "op.exp.8",
"Marco": "operacional",
"Categoria": "explotacion",
"DescripcionControl": "Se registraran las actividades de los usuarios en el sistema, de forma que se pueda identificar que acciones ha realizado cada usuario.",
"Nivel": "medio",
"Tipo": "requisito",
"Dimensiones": [
"trazabilidad"
],
"ModoEjecucion": "automatico",
"Dependencias": []
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudwatch_log_group_retention_policy_specific_days_enabled"
]
},
{
"Id": "mp.info.3.aws.s3.1",
"Description": "Cifrado de la informacion",
"Attributes": [
{
"IdGrupoControl": "mp.info.3",
"Marco": "medidas de proteccion",
"Categoria": "proteccion de la informacion",
"DescripcionControl": "La informacion con un nivel de clasificacion CONFIDENCIAL o superior debera ser cifrada.",
"Nivel": "bajo",
"Tipo": "medida",
"Dimensiones": [
"confidencialidad"
],
"ModoEjecucion": "automatico",
"Dependencias": []
}
],
"Checks": [
"s3_bucket_default_encryption",
"s3_bucket_kms_encryption"
]
}
]
}

View File

@@ -0,0 +1,103 @@
{
"Framework": "CUSTOM-FRAMEWORK",
"Name": "Custom Security Framework Example v1.0",
"Version": "1.0",
"Provider": "AWS",
"Description": "This is a template for creating custom compliance frameworks using the generic attribute model. Use this when creating frameworks that don't match existing attribute types (CIS, ISO, ENS, MITRE, etc.).",
"Requirements": [
{
"Id": "SEC-001",
"Description": "Ensure all storage resources are encrypted at rest",
"Name": "Storage Encryption",
"Attributes": [
{
"ItemId": "SEC-001",
"Section": "Data Protection",
"SubSection": "Encryption",
"SubGroup": "Storage",
"Service": "s3",
"Type": "Automated"
}
],
"Checks": [
"s3_bucket_default_encryption",
"rds_instance_storage_encrypted",
"ec2_ebs_volume_encryption"
]
},
{
"Id": "SEC-002",
"Description": "Ensure all network traffic is encrypted in transit",
"Name": "Network Encryption",
"Attributes": [
{
"ItemId": "SEC-002",
"Section": "Data Protection",
"SubSection": "Encryption",
"SubGroup": "Network",
"Service": "multiple",
"Type": "Automated"
}
],
"Checks": [
"s3_bucket_secure_transport_policy",
"elb_ssl_listeners",
"cloudfront_distributions_https_enabled"
]
},
{
"Id": "IAM-001",
"Description": "Ensure MFA is enabled for all privileged accounts",
"Name": "Multi-Factor Authentication",
"Attributes": [
{
"ItemId": "IAM-001",
"Section": "Identity and Access Management",
"SubSection": "Authentication",
"SubGroup": "MFA",
"Service": "iam",
"Type": "Automated"
}
],
"Checks": [
"iam_root_mfa_enabled",
"iam_user_mfa_enabled_console_access"
]
},
{
"Id": "LOG-001",
"Description": "Ensure logging is enabled for all critical services",
"Name": "Centralized Logging",
"Attributes": [
{
"ItemId": "LOG-001",
"Section": "Logging and Monitoring",
"SubSection": "Audit Logs",
"SubGroup": "CloudTrail",
"Service": "cloudtrail",
"Type": "Automated"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled"
]
},
{
"Id": "MANUAL-001",
"Description": "Ensure security policies are reviewed annually",
"Name": "Policy Review",
"Attributes": [
{
"ItemId": "MANUAL-001",
"Section": "Governance",
"SubSection": "Policy Management",
"Service": "manual",
"Type": "Manual"
}
],
"Checks": []
}
]
}

View File

@@ -0,0 +1,91 @@
{
"Framework": "ISO27001",
"Name": "ISO/IEC 27001 Information Security Management Standard 2022",
"Version": "2022",
"Provider": "AWS",
"Description": "ISO (the International Organization for Standardization) and IEC (the International Electrotechnical Commission) form the specialized system for worldwide standardization. This framework maps AWS security controls to ISO 27001:2022 requirements.",
"Requirements": [
{
"Id": "A.5.1",
"Description": "Information security policy and topic-specific policies should be defined, approved by management, published, communicated to and acknowledged by relevant personnel and relevant interested parties, and reviewed at planned intervals and if significant changes occur.",
"Name": "Policies for information security",
"Attributes": [
{
"Category": "A.5 Organizational controls",
"Objetive_ID": "A.5.1",
"Objetive_Name": "Policies for information security",
"Check_Summary": "Verify that information security policies are defined and implemented through security monitoring services."
}
],
"Checks": [
"securityhub_enabled",
"wellarchitected_workload_no_high_or_medium_risks"
]
},
{
"Id": "A.5.2",
"Description": "Information security roles and responsibilities should be defined and allocated according to the organisation needs.",
"Name": "Roles and Responsibilities",
"Attributes": [
{
"Category": "A.5 Organizational controls",
"Objetive_ID": "A.5.2",
"Objetive_Name": "Roles and Responsibilities",
"Check_Summary": "Verify that IAM roles and responsibilities are properly defined."
}
],
"Checks": []
},
{
"Id": "A.5.3",
"Description": "Conflicting duties and conflicting areas of responsibility should be segregated.",
"Name": "Segregation of Duties",
"Attributes": [
{
"Category": "A.5 Organizational controls",
"Objetive_ID": "A.5.3",
"Objetive_Name": "Segregation of Duties",
"Check_Summary": "Verify that duties are segregated through separate IAM roles."
}
],
"Checks": [
"iam_securityaudit_role_created"
]
},
{
"Id": "A.8.1",
"Description": "User end point devices should be protected.",
"Name": "User End Point Devices",
"Attributes": [
{
"Category": "A.8 Technological controls",
"Objetive_ID": "A.8.1",
"Objetive_Name": "User End Point Devices",
"Check_Summary": "Verify that endpoint protection and monitoring are enabled."
}
],
"Checks": [
"guardduty_is_enabled",
"ssm_managed_compliant_patching"
]
},
{
"Id": "A.8.24",
"Description": "Rules for the effective use of cryptography, including cryptographic key management, should be defined and implemented.",
"Name": "Use of Cryptography",
"Attributes": [
{
"Category": "A.8 Technological controls",
"Objetive_ID": "A.8.24",
"Objetive_Name": "Use of Cryptography",
"Check_Summary": "Verify that encryption is enabled for data at rest and in transit."
}
],
"Checks": [
"s3_bucket_default_encryption",
"rds_instance_storage_encrypted",
"ec2_ebs_volume_encryption"
]
}
]
}

View File

@@ -0,0 +1,142 @@
{
"Framework": "MITRE-ATTACK",
"Name": "MITRE ATT&CK compliance framework",
"Version": "",
"Provider": "AWS",
"Description": "MITRE ATT&CK is a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations. The ATT&CK knowledge base is used as a foundation for the development of specific threat models and methodologies in the private sector, in government, and in the cybersecurity product and service community.",
"Requirements": [
{
"Name": "Exploit Public-Facing Application",
"Id": "T1190",
"Tactics": [
"Initial Access"
],
"SubTechniques": [],
"Platforms": [
"Containers",
"IaaS",
"Linux",
"Network",
"Windows",
"macOS"
],
"Description": "Adversaries may attempt to exploit a weakness in an Internet-facing host or system to initially access a network. The weakness in the system can be a software bug, a temporary glitch, or a misconfiguration.",
"TechniqueURL": "https://attack.mitre.org/techniques/T1190/",
"Checks": [
"guardduty_is_enabled",
"inspector2_is_enabled",
"securityhub_enabled",
"elbv2_waf_acl_attached",
"awslambda_function_not_publicly_accessible",
"ec2_instance_public_ip"
],
"Attributes": [
{
"AWSService": "Amazon GuardDuty",
"Category": "Detect",
"Value": "Minimal",
"Comment": "GuardDuty can detect when vulnerable publicly facing resources are leveraged to capture data not intended to be viewable."
},
{
"AWSService": "AWS Web Application Firewall",
"Category": "Protect",
"Value": "Significant",
"Comment": "AWS WAF protects public-facing applications against vulnerabilities including OWASP Top 10 via managed rule sets."
},
{
"AWSService": "Amazon Inspector",
"Category": "Protect",
"Value": "Partial",
"Comment": "Amazon Inspector can detect known vulnerabilities on various Windows and Linux endpoints."
}
]
},
{
"Name": "Valid Accounts",
"Id": "T1078",
"Tactics": [
"Defense Evasion",
"Persistence",
"Privilege Escalation",
"Initial Access"
],
"SubTechniques": [
"T1078.001",
"T1078.002",
"T1078.003",
"T1078.004"
],
"Platforms": [
"Azure AD",
"Containers",
"Google Workspace",
"IaaS",
"Linux",
"Network",
"Office 365",
"SaaS",
"Windows",
"macOS"
],
"Description": "Adversaries may obtain and abuse credentials of existing accounts as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion.",
"TechniqueURL": "https://attack.mitre.org/techniques/T1078/",
"Checks": [
"iam_root_mfa_enabled",
"iam_user_mfa_enabled_console_access",
"iam_no_root_access_key",
"iam_rotate_access_key_90_days",
"iam_user_accesskey_unused",
"cloudtrail_multi_region_enabled"
],
"Attributes": [
{
"AWSService": "AWS IAM",
"Category": "Protect",
"Value": "Significant",
"Comment": "IAM MFA and access key rotation help prevent unauthorized access with valid credentials."
},
{
"AWSService": "AWS CloudTrail",
"Category": "Detect",
"Value": "Significant",
"Comment": "CloudTrail logs all API calls, enabling detection of unauthorized account usage."
}
]
},
{
"Name": "Data from Cloud Storage",
"Id": "T1530",
"Tactics": [
"Collection"
],
"SubTechniques": [],
"Platforms": [
"IaaS",
"SaaS"
],
"Description": "Adversaries may access data from improperly secured cloud storage. Many cloud service providers offer solutions for online data object storage.",
"TechniqueURL": "https://attack.mitre.org/techniques/T1530/",
"Checks": [
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_bucket_acl_prohibited",
"s3_bucket_default_encryption",
"macie_is_enabled"
],
"Attributes": [
{
"AWSService": "Amazon S3",
"Category": "Protect",
"Value": "Significant",
"Comment": "S3 bucket policies and ACLs can prevent public access to sensitive data."
},
{
"AWSService": "Amazon Macie",
"Category": "Detect",
"Value": "Significant",
"Comment": "Macie can detect and alert on sensitive data exposure in S3 buckets."
}
]
}
]
}

View File

@@ -0,0 +1,189 @@
{
"Framework": "ProwlerThreatScore",
"Name": "Prowler ThreatScore Compliance Framework for AWS",
"Version": "1.0",
"Provider": "AWS",
"Description": "Prowler ThreatScore Compliance Framework for AWS ensures that the AWS account is compliant taking into account four main pillars: Identity and Access Management, Attack Surface, Logging and Monitoring, and Encryption. Each check has a LevelOfRisk (1-5) and Weight that contribute to calculating the overall threat score.",
"Requirements": [
{
"Id": "1.1.1",
"Description": "Ensure MFA is enabled for the 'root' user account",
"Checks": [
"iam_root_mfa_enabled"
],
"Attributes": [
{
"Title": "MFA enabled for 'root'",
"Section": "1. IAM",
"SubSection": "1.1 Authentication",
"AttributeDescription": "The root user account holds the highest level of privileges within an AWS account. Enabling Multi-Factor Authentication (MFA) enhances security by adding an additional layer of protection beyond just a username and password.",
"AdditionalInformation": "Enabling MFA enhances console security by requiring the authenticating user to both possess a time-sensitive key-generating device and have knowledge of their credentials.",
"LevelOfRisk": 5,
"Weight": 1000
}
]
},
{
"Id": "1.1.2",
"Description": "Ensure hardware MFA is enabled for the 'root' user account",
"Checks": [
"iam_root_hardware_mfa_enabled"
],
"Attributes": [
{
"Title": "Hardware MFA enabled for 'root'",
"Section": "1. IAM",
"SubSection": "1.1 Authentication",
"AttributeDescription": "The root user account in AWS has the highest level of privileges. A hardware MFA has a smaller attack surface compared to a virtual MFA.",
"AdditionalInformation": "Unlike a virtual MFA, which relies on a mobile device that may be vulnerable to malware, a hardware MFA operates independently, reducing exposure to potential security threats.",
"LevelOfRisk": 5,
"Weight": 1000
}
]
},
{
"Id": "1.1.13",
"Description": "Ensure no root account access key exists",
"Checks": [
"iam_no_root_access_key"
],
"Attributes": [
{
"Title": "No root access key",
"Section": "1. IAM",
"SubSection": "1.1 Authentication",
"AttributeDescription": "The root account in AWS has unrestricted administrative privileges. It is recommended that no access keys be associated with the root account.",
"AdditionalInformation": "Eliminating root access keys reduces the risk of unauthorized access and enforces the use of role-based IAM accounts with least privilege.",
"LevelOfRisk": 5,
"Weight": 1000
}
]
},
{
"Id": "2.1.1",
"Description": "Ensure EC2 instances do not have public IP addresses",
"Checks": [
"ec2_instance_public_ip"
],
"Attributes": [
{
"Title": "EC2 without public IP",
"Section": "2. Attack Surface",
"SubSection": "2.1 Network Exposure",
"AttributeDescription": "EC2 instances with public IP addresses are directly accessible from the internet, increasing the attack surface.",
"AdditionalInformation": "Use private subnets and NAT gateways or VPC endpoints for internet access when needed.",
"LevelOfRisk": 4,
"Weight": 100
}
]
},
{
"Id": "2.2.1",
"Description": "Ensure S3 buckets are not publicly accessible",
"Checks": [
"s3_bucket_public_access"
],
"Attributes": [
{
"Title": "S3 bucket not public",
"Section": "2. Attack Surface",
"SubSection": "2.2 Storage Exposure",
"AttributeDescription": "Publicly accessible S3 buckets can lead to data breaches and unauthorized access to sensitive information.",
"AdditionalInformation": "Enable S3 Block Public Access settings at the account and bucket level.",
"LevelOfRisk": 5,
"Weight": 1000
}
]
},
{
"Id": "3.1.1",
"Description": "Ensure CloudTrail is enabled in all regions",
"Checks": [
"cloudtrail_multi_region_enabled"
],
"Attributes": [
{
"Title": "CloudTrail multi-region enabled",
"Section": "3. Logging and Monitoring",
"SubSection": "3.1 Audit Logging",
"AttributeDescription": "CloudTrail provides a record of API calls made in your AWS account. Multi-region trails ensure all activity is captured.",
"AdditionalInformation": "Without comprehensive logging, security incidents may go undetected and forensic analysis becomes impossible.",
"LevelOfRisk": 5,
"Weight": 1000
}
]
},
{
"Id": "3.2.1",
"Description": "Ensure GuardDuty is enabled",
"Checks": [
"guardduty_is_enabled"
],
"Attributes": [
{
"Title": "GuardDuty enabled",
"Section": "3. Logging and Monitoring",
"SubSection": "3.2 Threat Detection",
"AttributeDescription": "Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior.",
"AdditionalInformation": "GuardDuty analyzes CloudTrail, VPC Flow Logs, and DNS logs to identify threats.",
"LevelOfRisk": 4,
"Weight": 100
}
]
},
{
"Id": "4.1.1",
"Description": "Ensure S3 buckets have default encryption enabled",
"Checks": [
"s3_bucket_default_encryption"
],
"Attributes": [
{
"Title": "S3 default encryption",
"Section": "4. Encryption",
"SubSection": "4.1 Data at Rest",
"AttributeDescription": "Enabling default encryption on S3 buckets ensures all objects are encrypted when stored.",
"AdditionalInformation": "Use SSE-S3, SSE-KMS, or SSE-C depending on your key management requirements.",
"LevelOfRisk": 3,
"Weight": 10
}
]
},
{
"Id": "4.1.2",
"Description": "Ensure EBS volumes are encrypted",
"Checks": [
"ec2_ebs_volume_encryption"
],
"Attributes": [
{
"Title": "EBS volume encryption",
"Section": "4. Encryption",
"SubSection": "4.1 Data at Rest",
"AttributeDescription": "EBS volume encryption protects data at rest on EC2 instance storage.",
"AdditionalInformation": "Enable default EBS encryption at the account level to ensure all new volumes are encrypted.",
"LevelOfRisk": 3,
"Weight": 10
}
]
},
{
"Id": "4.2.1",
"Description": "Ensure data in transit is encrypted using TLS",
"Checks": [
"s3_bucket_secure_transport_policy"
],
"Attributes": [
{
"Title": "S3 secure transport",
"Section": "4. Encryption",
"SubSection": "4.2 Data in Transit",
"AttributeDescription": "Requiring HTTPS for S3 bucket access ensures data is encrypted during transmission.",
"AdditionalInformation": "Use bucket policies to deny requests that do not use TLS.",
"LevelOfRisk": 3,
"Weight": 10
}
]
}
]
}

View File

@@ -1,15 +1,137 @@
# Compliance Framework Documentation
## Code References
Key files for understanding and modifying compliance frameworks:
| File | Purpose |
|------|---------|
| `prowler/lib/check/compliance_models.py` | Pydantic models defining attribute structures for each framework type |
| `prowler/lib/check/compliance.py` | Core compliance processing logic |
| `prowler/lib/check/utils.py` | Utility functions including `list_compliance_modules()` |
| `prowler/lib/outputs/compliance/` | Framework-specific output generators |
| `prowler/compliance/{provider}/` | JSON compliance framework definitions |
## Attribute Model Classes
Each framework type has a specific Pydantic model in `compliance_models.py`:
| Framework | Model Class |
|-----------|-------------|
| CIS | `CIS_Requirement_Attribute` |
| ISO 27001 | `ISO27001_2013_Requirement_Attribute` |
| ENS | `ENS_Requirement_Attribute` |
| MITRE ATT&CK | `Mitre_Requirement` (uses different structure) |
| AWS Well-Architected | `AWS_Well_Architected_Requirement_Attribute` |
| KISA ISMS-P | `KISA_ISMSP_Requirement_Attribute` |
| Prowler ThreatScore | `Prowler_ThreatScore_Requirement_Attribute` |
| CCC | `CCC_Requirement_Attribute` |
| C5 Germany | `C5Germany_Requirement_Attribute` |
| Generic/Fallback | `Generic_Compliance_Requirement_Attribute` |
## How Compliance Frameworks are Loaded
1. `Compliance.get_bulk(provider)` is called at startup
2. Scans `prowler/compliance/{provider}/` for `.json` files
3. Each file is parsed using `load_compliance_framework()`
4. Pydantic validates against `Compliance` model
5. Framework is stored in a dictionary with the filename (without `.json`) as the key
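The load sequence above can be sketched roughly as follows. This is a hypothetical simplification, not Prowler's actual implementation; `load_frameworks` and its signature are invented for illustration:

```python
# Illustrative sketch of the framework-loading flow (hypothetical helper,
# not Prowler's real code). The dictionary key is the filename without
# its ".json" suffix, e.g. "cis_5.0_aws".
import json
from pathlib import Path


def load_frameworks(provider: str, base_dir: str = "prowler/compliance") -> dict:
    frameworks = {}
    for path in Path(base_dir, provider).glob("*.json"):
        with open(path) as f:
            data = json.load(f)  # in Prowler, a validation failure here aborts startup
        frameworks[path.stem] = data
    return frameworks
```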
## How Checks Map to Compliance
1. After loading, `update_checks_metadata_with_compliance()` is called
2. For each check, it finds all compliance requirements that reference it
3. Compliance info is attached to `CheckMetadata.Compliance` list
4. During output, `get_check_compliance()` retrieves mappings per finding
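The mapping step can be pictured as inverting each framework's requirements into a per-check index. This is an illustrative sketch under that assumption; `map_checks_to_requirements` is an invented name, not Prowler's actual function:

```python
# Hypothetical sketch: invert framework requirements into a
# check -> [requirement IDs] index, mirroring how compliance info
# gets attached to each check's metadata.
def map_checks_to_requirements(framework: dict) -> dict:
    check_index: dict = {}
    for requirement in framework.get("Requirements", []):
        for check in requirement.get("Checks", []):
            check_index.setdefault(check, []).append(requirement["Id"])
    return check_index
```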
## File Naming Convention
```
{framework}_{version}_{provider}.json
```
Examples:
- `cis_5.0_aws.json`
- `iso27001_2022_azure.json`
- `mitre_attack_gcp.json`
- `ens_rd2022_aws.json`
- `nist_800_53_revision_5_aws.json`
## Validation
Prowler validates compliance JSON at startup. Invalid files cause:
- `ValidationError` logged with details
- Application exit with error code
Common validation errors:
- Missing required fields (`Id`, `Description`, `Checks`, `Attributes`)
- Invalid enum values (e.g., `Profile` must be "Level 1" or "Level 2" for CIS)
- Type mismatches (e.g., `Checks` must be array of strings)
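The failure modes listed above can be illustrated with a minimal stdlib validator. Prowler itself uses Pydantic models for this; `validate_requirement` here is an invented helper for illustration only:

```python
# Hypothetical stdlib-only validator mirroring the common errors above
# (missing fields, wrong Checks type); not Prowler's real validation.
REQUIRED_FIELDS = ("Id", "Description", "Checks", "Attributes")


def validate_requirement(req: dict) -> list[str]:
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in req]
    checks = req.get("Checks")
    if checks is not None and not (
        isinstance(checks, list) and all(isinstance(c, str) for c in checks)
    ):
        errors.append("Checks must be an array of strings")
    return errors
```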
## Adding a New Framework
1. Create JSON file in `prowler/compliance/{provider}/`
2. Use appropriate attribute model (see table above)
3. Map existing checks to requirements via `Checks` array
4. Use empty `Checks: []` for manual-only requirements
5. Test with `prowler {provider} --list-compliance` to verify loading
6. Run `prowler {provider} --compliance {framework_name}` to test execution
## Templates
See `assets/` directory for example templates:
- `cis_framework.json` - CIS Benchmark template
- `iso27001_framework.json` - ISO 27001 template
- `ens_framework.json` - ENS (Spain) template
- `mitre_attack_framework.json` - MITRE ATT&CK template
- `prowler_threatscore_framework.json` - Prowler ThreatScore template
- `generic_framework.json` - Generic/custom framework template
## Prowler ThreatScore Details
Prowler ThreatScore is a custom security scoring framework that calculates an overall security posture score based on:
### Four Pillars
1. **IAM (Identity and Access Management)**
- SubSections: Authentication, Authorization, Credentials Management
2. **Attack Surface**
- SubSections: Network Exposure, Storage Exposure, Service Exposure
3. **Logging and Monitoring**
- SubSections: Audit Logging, Threat Detection, Alerting
4. **Encryption**
- SubSections: Data at Rest, Data in Transit
### Scoring Algorithm
The ThreatScore uses `LevelOfRisk` and `Weight` to calculate severity:
| LevelOfRisk | Weight | Example Controls |
|-------------|--------|------------------|
| 5 (Critical) | 1000 | Root MFA, No root access keys, Public S3 buckets |
| 4 (High) | 100 | User MFA, Public EC2, GuardDuty enabled |
| 3 (Medium) | 10 | Password policies, EBS encryption, CloudTrail |
| 2 (Low) | 1-10 | Best practice recommendations |
| 1 (Info) | 1 | Informational controls |
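A weighted aggregation consistent with the table above might look like the sketch below. `threat_score` and its input shape are assumptions for illustration; Prowler's actual scoring formula may differ:

```python
# Hedged sketch of a LevelOfRisk/Weight-based score; higher is better.
# The findings shape and the formula are illustrative assumptions.
def threat_score(findings: list[dict]) -> float:
    """findings: [{"weight": int, "level_of_risk": int, "passed": bool}]"""
    total = sum(f["weight"] * f["level_of_risk"] for f in findings)
    if total == 0:
        return 100.0  # no applicable findings
    achieved = sum(
        f["weight"] * f["level_of_risk"] for f in findings if f["passed"]
    )
    return round(100.0 * achieved / total, 2)
```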
### ID Numbering Convention
- `1.x.x` - IAM controls
- `2.x.x` - Attack Surface controls
- `3.x.x` - Logging and Monitoring controls
- `4.x.x` - Encryption controls
## External Resources
### Official Framework Documentation
- [CIS Benchmarks](https://www.cisecurity.org/cis-benchmarks)
- [ISO 27001:2022](https://www.iso.org/standard/27001)
- [NIST 800-53](https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final)
- [NIST CSF](https://www.nist.gov/cyberframework)
- [PCI DSS](https://www.pcisecuritystandards.org/)
- [MITRE ATT&CK](https://attack.mitre.org/)
- [ENS (Spain)](https://www.ccn-cert.cni.es/es/ens.html)
### Prowler Documentation
- [Prowler Docs - Compliance](https://docs.prowler.com/projects/prowler-open-source/en/latest/)
- [Prowler GitHub](https://github.com/prowler-cloud/prowler)

View File

@@ -7,6 +7,8 @@ license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root]
auto_invoke: "Writing documentation"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,14 @@
name: prowler-mcp
description: >
Creates MCP tools for Prowler MCP Server. Covers BaseTool pattern, model design,
and API client usage.
Trigger: When working in mcp_server/ on tools (BaseTool), models (MinimalSerializerMixin/from_api_response), or API client patterns.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root]
auto_invoke: "Working on MCP server tools"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,18 @@
name: prowler-pr
description: >
Creates Pull Requests for Prowler following the project template and conventions.
Trigger: When working on pull request requirements or creation (PR template sections, PR title Conventional Commits check, changelog gate/no-changelog label), or when inspecting PR-related GitHub workflows like conventional-commit.yml, pr-check-changelog.yml, pr-conflict-checker.yml, labeler.yml, or CODEOWNERS.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root]
auto_invoke:
- "Create a PR with gh pr create"
- "Review PR requirements: template, title conventions, changelog gate"
- "Fill .github/pull_request_template.md (Context/Description/Steps to review/Checklist)"
- "Inspect PR CI workflows (.github/workflows/*): conventional-commit, pr-check-changelog, pr-conflict-checker, labeler"
- "Understand review ownership with CODEOWNERS"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,15 @@
name: prowler-provider
description: >
Creates new Prowler cloud providers or adds services to existing providers.
Trigger: When extending Prowler SDK provider architecture (adding a new provider or a new service to an existing provider).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, sdk]
auto_invoke:
- "Adding new providers"
- "Adding services to existing providers"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,15 @@
name: prowler-sdk-check
description: >
Creates Prowler security checks following SDK architecture patterns.
Trigger: When creating or updating a Prowler SDK security check (implementation + metadata) for any provider (AWS, Azure, GCP, K8s, GitHub, etc.).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, sdk]
auto_invoke:
- "Creating new checks"
- "Updating existing checks and metadata"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -1,12 +1,16 @@
---
name: prowler-test-api
description: >
Testing patterns for Prowler API: JSON:API, Celery tasks, RLS isolation, RBAC.
Trigger: When writing tests for api/ (JSON:API requests/assertions, cross-tenant isolation, RBAC, Celery tasks, viewsets/serializers).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, api]
auto_invoke:
- "Writing Prowler API tests"
- "Testing RLS tenant isolation"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,15 @@
name: prowler-test-sdk
description: >
Testing patterns for Prowler SDK (Python).
Trigger: When writing tests for checks, services, or providers.
Trigger: When writing tests for the Prowler SDK (checks/services/providers), including provider-specific mocking rules (moto for AWS only).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, sdk]
auto_invoke:
- "Writing Prowler SDK tests"
- "Mocking AWS with moto in tests"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,15 @@
name: prowler-test-ui
description: >
E2E testing patterns for Prowler UI (Playwright).
Trigger: When writing E2E tests for the Next.js frontend.
Trigger: When writing Playwright E2E tests under ui/tests in the Prowler UI (Prowler-specific base page/helpers, tags, flows).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke:
- "Writing Prowler UI E2E tests"
- "Working with Prowler UI test helpers/pages"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,15 @@
name: prowler-ui
description: >
Prowler UI-specific patterns. For generic patterns, see: typescript, react-19, nextjs-15, tailwind-4.
Trigger: When working on ui/ directory - components, pages, actions, hooks.
Trigger: When working inside ui/ on Prowler-specific conventions (shadcn vs HeroUI legacy, folder placement, actions/adapters, shared types/hooks/lib).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke:
- "Creating/modifying Prowler UI components"
- "Working on Prowler UI structure (actions/adapters/types/hooks)"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,13 @@
name: prowler
description: >
Main entry point for Prowler development - quick reference for all components.
Trigger: General Prowler development questions, project overview, component navigation.
Trigger: General Prowler development questions, project overview, component navigation (NOT PR CI gates or GitHub Actions workflows).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root]
auto_invoke: "General Prowler development questions"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,13 @@
name: pytest
description: >
Pytest testing patterns for Python.
Trigger: When writing Python tests - fixtures, mocking, markers.
Trigger: When writing or refactoring pytest tests (fixtures, mocking, parametrize, markers). For Prowler-specific API/SDK testing conventions, also use prowler-test-api or prowler-test-sdk.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, sdk, api]
auto_invoke: "Writing Python tests with pytest"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,13 @@
name: react-19
description: >
React 19 patterns with React Compiler.
Trigger: When writing React components - no useMemo/useCallback needed.
Trigger: When writing React 19 components/hooks in .tsx (React Compiler rules, hook patterns, refs as props). If using Next.js App Router/Server Actions, also use nextjs-15.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke: "Writing React components"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -1,10 +1,16 @@
#!/bin/bash
# Setup AI Skills for Prowler development
# Configures AI coding assistants that follow agentskills.io standard:
# - Claude Code: .claude/skills/ symlink (auto-discovery)
# - Gemini CLI: .gemini/skills/ symlink (auto-discovery)
# - Codex (OpenAI): .codex/skills/ symlink + AGENTS.md
# - GitHub Copilot: reads AGENTS.md from repo root (no symlink needed)
# - Claude Code: .claude/skills/ symlink + CLAUDE.md copies
# - Gemini CLI: .gemini/skills/ symlink + GEMINI.md copies
# - Codex (OpenAI): .codex/skills/ symlink + AGENTS.md (native)
# - GitHub Copilot: .github/copilot-instructions.md copy
#
# Usage:
# ./setup.sh # Interactive mode (select AI assistants)
# ./setup.sh --all # Configure all AI assistants
# ./setup.sh --claude # Configure only Claude Code
# ./setup.sh --claude --codex # Configure multiple
set -e
@@ -12,23 +18,224 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(dirname "$SCRIPT_DIR")"
SKILLS_SOURCE="$SCRIPT_DIR"
# Target locations
CLAUDE_SKILLS_TARGET="$REPO_ROOT/.claude/skills"
CODEX_SKILLS_TARGET="$REPO_ROOT/.codex/skills"
GEMINI_SKILLS_TARGET="$REPO_ROOT/.gemini/skills"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
BOLD='\033[1m'
NC='\033[0m' # No Color
# Selection flags
SETUP_CLAUDE=false
SETUP_GEMINI=false
SETUP_CODEX=false
SETUP_COPILOT=false
# =============================================================================
# HELPER FUNCTIONS
# =============================================================================
show_help() {
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Configure AI coding assistants for Prowler development."
echo ""
echo "Options:"
echo " --all Configure all AI assistants"
echo " --claude Configure Claude Code"
echo " --gemini Configure Gemini CLI"
echo " --codex Configure Codex (OpenAI)"
echo " --copilot Configure GitHub Copilot"
echo " --help Show this help message"
echo ""
echo "If no options provided, runs in interactive mode."
echo ""
echo "Examples:"
echo " $0 # Interactive selection"
echo " $0 --all # All AI assistants"
echo " $0 --claude --codex # Only Claude and Codex"
}
show_menu() {
echo -e "${BOLD}Which AI assistants do you use?${NC}"
echo -e "${CYAN}(Use numbers to toggle, Enter to confirm)${NC}"
echo ""
local options=("Claude Code" "Gemini CLI" "Codex (OpenAI)" "GitHub Copilot")
local selected=(true false false false) # Claude selected by default
while true; do
for i in "${!options[@]}"; do
if [ "${selected[$i]}" = true ]; then
echo -e " ${GREEN}[x]${NC} $((i+1)). ${options[$i]}"
else
echo -e " [ ] $((i+1)). ${options[$i]}"
fi
done
echo ""
echo -e " ${YELLOW}a${NC}. Select all"
echo -e " ${YELLOW}n${NC}. Select none"
echo ""
echo -n "Toggle (1-4, a, n) or Enter to confirm: "
read -r choice
case $choice in
1) selected[0]=$([ "${selected[0]}" = true ] && echo false || echo true) ;;
2) selected[1]=$([ "${selected[1]}" = true ] && echo false || echo true) ;;
3) selected[2]=$([ "${selected[2]}" = true ] && echo false || echo true) ;;
4) selected[3]=$([ "${selected[3]}" = true ] && echo false || echo true) ;;
a|A) selected=(true true true true) ;;
n|N) selected=(false false false false) ;;
"") break ;;
*) echo -e "${RED}Invalid option${NC}" ;;
esac
# Move cursor up to redraw menu
echo -en "\033[10A\033[J"
done
SETUP_CLAUDE=${selected[0]}
SETUP_GEMINI=${selected[1]}
SETUP_CODEX=${selected[2]}
SETUP_COPILOT=${selected[3]}
}
setup_claude() {
local target="$REPO_ROOT/.claude/skills"
if [ ! -d "$REPO_ROOT/.claude" ]; then
mkdir -p "$REPO_ROOT/.claude"
fi
if [ -L "$target" ]; then
rm "$target"
elif [ -d "$target" ]; then
mv "$target" "$REPO_ROOT/.claude/skills.backup.$(date +%s)"
fi
ln -s "$SKILLS_SOURCE" "$target"
echo -e "${GREEN} ✓ .claude/skills -> skills/${NC}"
# Copy AGENTS.md to CLAUDE.md
copy_agents_md "CLAUDE.md"
}
setup_gemini() {
local target="$REPO_ROOT/.gemini/skills"
if [ ! -d "$REPO_ROOT/.gemini" ]; then
mkdir -p "$REPO_ROOT/.gemini"
fi
if [ -L "$target" ]; then
rm "$target"
elif [ -d "$target" ]; then
mv "$target" "$REPO_ROOT/.gemini/skills.backup.$(date +%s)"
fi
ln -s "$SKILLS_SOURCE" "$target"
echo -e "${GREEN} ✓ .gemini/skills -> skills/${NC}"
# Copy AGENTS.md to GEMINI.md
copy_agents_md "GEMINI.md"
}
setup_codex() {
local target="$REPO_ROOT/.codex/skills"
if [ ! -d "$REPO_ROOT/.codex" ]; then
mkdir -p "$REPO_ROOT/.codex"
fi
if [ -L "$target" ]; then
rm "$target"
elif [ -d "$target" ]; then
mv "$target" "$REPO_ROOT/.codex/skills.backup.$(date +%s)"
fi
ln -s "$SKILLS_SOURCE" "$target"
echo -e "${GREEN} ✓ .codex/skills -> skills/${NC}"
echo -e "${GREEN} ✓ Codex uses AGENTS.md natively${NC}"
}
setup_copilot() {
if [ -f "$REPO_ROOT/AGENTS.md" ]; then
mkdir -p "$REPO_ROOT/.github"
cp "$REPO_ROOT/AGENTS.md" "$REPO_ROOT/.github/copilot-instructions.md"
echo -e "${GREEN} ✓ AGENTS.md -> .github/copilot-instructions.md${NC}"
fi
}
copy_agents_md() {
local target_name="$1"
local agents_files
local count=0
agents_files=$(find "$REPO_ROOT" -name "AGENTS.md" -not -path "*/node_modules/*" -not -path "*/.git/*" 2>/dev/null)
while IFS= read -r agents_file; do
[ -n "$agents_file" ] || continue
local agents_dir
agents_dir=$(dirname "$agents_file")
cp "$agents_file" "$agents_dir/$target_name"
count=$((count + 1))
done <<< "$agents_files"
echo -e "${GREEN} ✓ Copied $count AGENTS.md -> $target_name${NC}"
}
# =============================================================================
# PARSE ARGUMENTS
# =============================================================================
while [[ $# -gt 0 ]]; do
case $1 in
--all)
SETUP_CLAUDE=true
SETUP_GEMINI=true
SETUP_CODEX=true
SETUP_COPILOT=true
shift
;;
--claude)
SETUP_CLAUDE=true
shift
;;
--gemini)
SETUP_GEMINI=true
shift
;;
--codex)
SETUP_CODEX=true
shift
;;
--copilot)
SETUP_COPILOT=true
shift
;;
--help|-h)
show_help
exit 0
;;
*)
echo -e "${RED}Unknown option: $1${NC}"
show_help
exit 1
;;
esac
done
# =============================================================================
# MAIN
# =============================================================================
echo "🤖 Prowler AI Skills Setup"
echo "=========================="
echo ""
# Count skills (directories with SKILL.md)
# Count skills
SKILL_COUNT=$(find "$SKILLS_SOURCE" -maxdepth 2 -name "SKILL.md" | wc -l | tr -d ' ')
if [ "$SKILL_COUNT" -eq 0 ]; then
@@ -39,81 +246,60 @@ fi
echo -e "${BLUE}Found $SKILL_COUNT skills to configure${NC}"
echo ""
# =============================================================================
# CLAUDE CODE SETUP (.claude/skills symlink - auto-discovery)
# =============================================================================
echo -e "${YELLOW}[1/3] Setting up Claude Code...${NC}"
if [ ! -d "$REPO_ROOT/.claude" ]; then
mkdir -p "$REPO_ROOT/.claude"
# Interactive mode if no flags provided
if [ "$SETUP_CLAUDE" = false ] && [ "$SETUP_GEMINI" = false ] && [ "$SETUP_CODEX" = false ] && [ "$SETUP_COPILOT" = false ]; then
show_menu
echo ""
fi
if [ -L "$CLAUDE_SKILLS_TARGET" ]; then
rm "$CLAUDE_SKILLS_TARGET"
elif [ -d "$CLAUDE_SKILLS_TARGET" ]; then
mv "$CLAUDE_SKILLS_TARGET" "$REPO_ROOT/.claude/skills.backup.$(date +%s)"
# Check if at least one selected
if [ "$SETUP_CLAUDE" = false ] && [ "$SETUP_GEMINI" = false ] && [ "$SETUP_CODEX" = false ] && [ "$SETUP_COPILOT" = false ]; then
echo -e "${YELLOW}No AI assistants selected. Nothing to do.${NC}"
exit 0
fi
ln -s "$SKILLS_SOURCE" "$CLAUDE_SKILLS_TARGET"
echo -e "${GREEN} ✓ .claude/skills -> skills/${NC}"
# Run selected setups
STEP=1
TOTAL=0
[ "$SETUP_CLAUDE" = true ] && TOTAL=$((TOTAL + 1))
[ "$SETUP_GEMINI" = true ] && TOTAL=$((TOTAL + 1))
[ "$SETUP_CODEX" = true ] && TOTAL=$((TOTAL + 1))
[ "$SETUP_COPILOT" = true ] && TOTAL=$((TOTAL + 1))
# =============================================================================
# CODEX (OPENAI) SETUP (.codex/skills symlink)
# =============================================================================
echo -e "${YELLOW}[2/3] Setting up Codex (OpenAI)...${NC}"
if [ ! -d "$REPO_ROOT/.codex" ]; then
mkdir -p "$REPO_ROOT/.codex"
if [ "$SETUP_CLAUDE" = true ]; then
echo -e "${YELLOW}[$STEP/$TOTAL] Setting up Claude Code...${NC}"
setup_claude
STEP=$((STEP + 1))
fi
if [ -L "$CODEX_SKILLS_TARGET" ]; then
rm "$CODEX_SKILLS_TARGET"
elif [ -d "$CODEX_SKILLS_TARGET" ]; then
mv "$CODEX_SKILLS_TARGET" "$REPO_ROOT/.codex/skills.backup.$(date +%s)"
if [ "$SETUP_GEMINI" = true ]; then
echo -e "${YELLOW}[$STEP/$TOTAL] Setting up Gemini CLI...${NC}"
setup_gemini
STEP=$((STEP + 1))
fi
ln -s "$SKILLS_SOURCE" "$CODEX_SKILLS_TARGET"
echo -e "${GREEN} ✓ .codex/skills -> skills/${NC}"
# =============================================================================
# GEMINI CLI SETUP (.gemini/skills symlink - auto-discovery)
# =============================================================================
echo -e "${YELLOW}[3/3] Setting up Gemini CLI...${NC}"
if [ ! -d "$REPO_ROOT/.gemini" ]; then
mkdir -p "$REPO_ROOT/.gemini"
if [ "$SETUP_CODEX" = true ]; then
echo -e "${YELLOW}[$STEP/$TOTAL] Setting up Codex (OpenAI)...${NC}"
setup_codex
STEP=$((STEP + 1))
fi
if [ -L "$GEMINI_SKILLS_TARGET" ]; then
rm "$GEMINI_SKILLS_TARGET"
elif [ -d "$GEMINI_SKILLS_TARGET" ]; then
mv "$GEMINI_SKILLS_TARGET" "$REPO_ROOT/.gemini/skills.backup.$(date +%s)"
if [ "$SETUP_COPILOT" = true ]; then
echo -e "${YELLOW}[$STEP/$TOTAL] Setting up GitHub Copilot...${NC}"
setup_copilot
fi
ln -s "$SKILLS_SOURCE" "$GEMINI_SKILLS_TARGET"
echo -e "${GREEN} ✓ .gemini/skills -> skills/${NC}"
# =============================================================================
# SUMMARY
# =============================================================================
echo ""
echo -e "${GREEN}✅ Successfully configured $SKILL_COUNT AI skills!${NC}"
echo ""
echo "Configuration created:"
echo " • Claude Code: .claude/skills/ (symlink, auto-discovery)"
echo " • Codex (OpenAI): .codex/skills/ (symlink, reads AGENTS.md)"
echo " • Gemini CLI: .gemini/skills/ (symlink, auto-discovery)"
echo " • GitHub Copilot: reads AGENTS.md from repo root (no setup needed)"
echo "Configured:"
[ "$SETUP_CLAUDE" = true ] && echo " • Claude Code: .claude/skills/ + CLAUDE.md"
[ "$SETUP_CODEX" = true ] && echo " • Codex (OpenAI): .codex/skills/ + AGENTS.md (native)"
[ "$SETUP_GEMINI" = true ] && echo " • Gemini CLI: .gemini/skills/ + GEMINI.md"
[ "$SETUP_COPILOT" = true ] && echo " • GitHub Copilot: .github/copilot-instructions.md"
echo ""
echo "Available skills:"
echo " Generic: typescript, react-19, nextjs-15, playwright, pytest,"
echo " django-drf, zod-4, zustand-5, tailwind-4, ai-sdk-5"
echo ""
echo " Prowler: prowler, prowler-api, prowler-ui, prowler-mcp,"
echo " prowler-sdk-check, prowler-test-ui, prowler-test-api,"
echo " prowler-test-sdk, prowler-compliance, prowler-docs,"
echo " prowler-provider, prowler-pr"
echo ""
echo -e "${BLUE}Note: Restart your AI coding assistant to load the skills.${NC}"
echo -e "${BLUE} Claude/Gemini auto-discover skills from SKILL.md descriptions.${NC}"
echo -e "${BLUE} Codex/Copilot use AGENTS.md instructions to reference skills.${NC}"
echo -e "${BLUE}Note: Restart your AI assistant to load the skills.${NC}"
echo -e "${BLUE} AGENTS.md is the source of truth - edit it, then re-run this script.${NC}"

skills/setup_test.sh Executable file
View File

@@ -0,0 +1,340 @@
#!/bin/bash
# Unit tests for setup.sh
# Run: ./skills/setup_test.sh
#
# shellcheck disable=SC2317
# Reason: Test functions are discovered and called dynamically via declare -F
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SETUP_SCRIPT="$SCRIPT_DIR/setup.sh"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Test counters
TESTS_RUN=0
TESTS_PASSED=0
TESTS_FAILED=0
# Test environment
TEST_DIR=""
# =============================================================================
# TEST FRAMEWORK
# =============================================================================
setup_test_env() {
TEST_DIR=$(mktemp -d)
# Create mock repo structure
mkdir -p "$TEST_DIR/skills/typescript"
mkdir -p "$TEST_DIR/skills/react-19"
mkdir -p "$TEST_DIR/api"
mkdir -p "$TEST_DIR/ui"
mkdir -p "$TEST_DIR/.github"
# Create mock SKILL.md files
echo "# TypeScript Skill" > "$TEST_DIR/skills/typescript/SKILL.md"
echo "# React 19 Skill" > "$TEST_DIR/skills/react-19/SKILL.md"
# Create mock AGENTS.md files
echo "# Root AGENTS" > "$TEST_DIR/AGENTS.md"
echo "# API AGENTS" > "$TEST_DIR/api/AGENTS.md"
echo "# UI AGENTS" > "$TEST_DIR/ui/AGENTS.md"
# Copy setup.sh to test dir
cp "$SETUP_SCRIPT" "$TEST_DIR/skills/setup.sh"
}
teardown_test_env() {
if [ -n "$TEST_DIR" ] && [ -d "$TEST_DIR" ]; then
rm -rf "$TEST_DIR"
fi
}
run_setup() {
(cd "$TEST_DIR/skills" && bash setup.sh "$@" 2>&1)
}
# Assertions return 0 on success, 1 on failure
assert_equals() {
local expected="$1" actual="$2" message="$3"
if [ "$expected" = "$actual" ]; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " Expected: $expected"
echo " Actual: $actual"
return 1
}
assert_contains() {
local haystack="$1" needle="$2" message="$3"
if echo "$haystack" | grep -q -F -- "$needle"; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " String not found: $needle"
return 1
}
assert_file_exists() {
local file="$1" message="$2"
if [ -f "$file" ]; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " File not found: $file"
return 1
}
assert_file_not_exists() {
local file="$1" message="$2"
if [ ! -f "$file" ]; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " File should not exist: $file"
return 1
}
assert_symlink_exists() {
local link="$1" message="$2"
if [ -L "$link" ]; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " Symlink not found: $link"
return 1
}
assert_symlink_not_exists() {
local link="$1" message="$2"
if [ ! -L "$link" ]; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " Symlink should not exist: $link"
return 1
}
assert_dir_exists() {
local dir="$1" message="$2"
if [ -d "$dir" ]; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " Directory not found: $dir"
return 1
}
# =============================================================================
# TESTS: FLAG PARSING
# =============================================================================
test_flag_help_shows_usage() {
local output
output=$(run_setup --help)
assert_contains "$output" "Usage:" "Help should show usage" && \
assert_contains "$output" "--all" "Help should mention --all flag" && \
assert_contains "$output" "--claude" "Help should mention --claude flag"
}
test_flag_unknown_reports_error() {
local output
output=$(run_setup --unknown 2>&1) || true
assert_contains "$output" "Unknown option" "Should report unknown option"
}
test_flag_all_configures_everything() {
local output
output=$(run_setup --all)
assert_contains "$output" "Claude Code" "Should setup Claude" && \
assert_contains "$output" "Gemini CLI" "Should setup Gemini" && \
assert_contains "$output" "Codex" "Should setup Codex" && \
assert_contains "$output" "Copilot" "Should setup Copilot"
}
test_flag_single_claude() {
local output
output=$(run_setup --claude)
assert_contains "$output" "Claude Code" "Should setup Claude" && \
assert_contains "$output" "[1/1]" "Should show 1/1 steps"
}
test_flag_multiple_combined() {
local output
output=$(run_setup --claude --codex)
assert_contains "$output" "[1/2]" "Should show step 1/2" && \
assert_contains "$output" "[2/2]" "Should show step 2/2"
}
# =============================================================================
# TESTS: SYMLINK CREATION
# =============================================================================
test_symlink_claude_created() {
run_setup --claude > /dev/null
assert_symlink_exists "$TEST_DIR/.claude/skills" "Claude skills symlink should exist"
}
test_symlink_gemini_created() {
run_setup --gemini > /dev/null
assert_symlink_exists "$TEST_DIR/.gemini/skills" "Gemini skills symlink should exist"
}
test_symlink_codex_created() {
run_setup --codex > /dev/null
assert_symlink_exists "$TEST_DIR/.codex/skills" "Codex skills symlink should exist"
}
test_symlink_not_created_without_flag() {
run_setup --copilot > /dev/null
assert_symlink_not_exists "$TEST_DIR/.claude/skills" "Claude symlink should not exist" && \
assert_symlink_not_exists "$TEST_DIR/.gemini/skills" "Gemini symlink should not exist" && \
assert_symlink_not_exists "$TEST_DIR/.codex/skills" "Codex symlink should not exist"
}
# =============================================================================
# TESTS: AGENTS.md COPYING
# =============================================================================
test_copy_claude_agents_md() {
run_setup --claude > /dev/null
assert_file_exists "$TEST_DIR/CLAUDE.md" "Root CLAUDE.md should exist" && \
assert_file_exists "$TEST_DIR/api/CLAUDE.md" "api/CLAUDE.md should exist" && \
assert_file_exists "$TEST_DIR/ui/CLAUDE.md" "ui/CLAUDE.md should exist"
}
test_copy_gemini_agents_md() {
run_setup --gemini > /dev/null
assert_file_exists "$TEST_DIR/GEMINI.md" "Root GEMINI.md should exist" && \
assert_file_exists "$TEST_DIR/api/GEMINI.md" "api/GEMINI.md should exist" && \
assert_file_exists "$TEST_DIR/ui/GEMINI.md" "ui/GEMINI.md should exist"
}
test_copy_copilot_to_github() {
run_setup --copilot > /dev/null
assert_file_exists "$TEST_DIR/.github/copilot-instructions.md" "Copilot instructions should exist"
}
test_copy_codex_no_extra_files() {
run_setup --codex > /dev/null
assert_file_not_exists "$TEST_DIR/CODEX.md" "CODEX.md should not be created"
}
test_copy_not_created_without_flag() {
run_setup --codex > /dev/null
assert_file_not_exists "$TEST_DIR/CLAUDE.md" "CLAUDE.md should not exist" && \
assert_file_not_exists "$TEST_DIR/GEMINI.md" "GEMINI.md should not exist"
}
test_copy_content_matches_source() {
run_setup --claude > /dev/null
local source_content target_content
source_content=$(cat "$TEST_DIR/AGENTS.md")
target_content=$(cat "$TEST_DIR/CLAUDE.md")
assert_equals "$source_content" "$target_content" "CLAUDE.md content should match AGENTS.md"
}
# =============================================================================
# TESTS: DIRECTORY CREATION
# =============================================================================
test_dir_claude_created() {
rm -rf "$TEST_DIR/.claude"
run_setup --claude > /dev/null
assert_dir_exists "$TEST_DIR/.claude" ".claude directory should be created"
}
test_dir_gemini_created() {
rm -rf "$TEST_DIR/.gemini"
run_setup --gemini > /dev/null
assert_dir_exists "$TEST_DIR/.gemini" ".gemini directory should be created"
}
test_dir_codex_created() {
rm -rf "$TEST_DIR/.codex"
run_setup --codex > /dev/null
assert_dir_exists "$TEST_DIR/.codex" ".codex directory should be created"
}
# =============================================================================
# TESTS: IDEMPOTENCY
# =============================================================================
test_idempotent_multiple_runs() {
run_setup --claude > /dev/null
run_setup --claude > /dev/null
assert_symlink_exists "$TEST_DIR/.claude/skills" "Symlink should still exist after second run" && \
assert_file_exists "$TEST_DIR/CLAUDE.md" "CLAUDE.md should still exist after second run"
}
# =============================================================================
# TEST RUNNER (autodiscovery)
# =============================================================================
run_all_tests() {
local test_functions current_section=""
# Discover all test_* functions
test_functions=$(declare -F | awk '{print $3}' | grep '^test_' | sort)
for test_func in $test_functions; do
# Extract section from function name (e.g., test_flag_* -> "Flag")
local section
section=$(echo "$test_func" | sed 's/^test_//' | cut -d'_' -f1)
section="$(echo "${section:0:1}" | tr '[:lower:]' '[:upper:]')${section:1}"
# Print section header if changed
if [ "$section" != "$current_section" ]; then
[ -n "$current_section" ] && echo ""
echo -e "${YELLOW}${section} tests:${NC}"
current_section="$section"
fi
# Convert function name to readable test name
local test_name
test_name=$(echo "$test_func" | sed 's/^test_//' | tr '_' ' ')
TESTS_RUN=$((TESTS_RUN + 1))
echo -n " $test_name... "
setup_test_env
if $test_func; then
echo -e "${GREEN}PASS${NC}"
TESTS_PASSED=$((TESTS_PASSED + 1))
else
TESTS_FAILED=$((TESTS_FAILED + 1))
fi
teardown_test_env
done
}
# =============================================================================
# MAIN
# =============================================================================
echo ""
echo "🧪 Running setup.sh unit tests"
echo "==============================="
echo ""
run_all_tests
echo ""
echo "==============================="
if [ $TESTS_FAILED -eq 0 ]; then
echo -e "${GREEN}✅ All $TESTS_RUN tests passed!${NC}"
exit 0
else
echo -e "${RED}$TESTS_FAILED of $TESTS_RUN tests failed${NC}"
exit 1
fi

View File

@@ -7,6 +7,8 @@ license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root]
auto_invoke: "Creating new skills"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

skills/skill-sync/SKILL.md Normal file
View File

@@ -0,0 +1,120 @@
---
name: skill-sync
description: >
Syncs skill metadata to AGENTS.md Auto-invoke sections.
Trigger: When updating skill metadata (metadata.scope/metadata.auto_invoke), regenerating Auto-invoke tables, or running ./skills/skill-sync/assets/sync.sh (including --dry-run/--scope).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root]
auto_invoke:
- "After creating/modifying a skill"
- "Regenerate AGENTS.md Auto-invoke tables (sync.sh)"
- "Troubleshoot why a skill is missing from AGENTS.md auto-invoke"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash
---
## Purpose
Keeps AGENTS.md Auto-invoke sections in sync with skill metadata. When you create or modify a skill, run the sync script to automatically update all affected AGENTS.md files.
## Required Skill Metadata
Each skill that should appear in Auto-invoke sections needs these fields in `metadata`.
`auto_invoke` can be either a single string **or** a list of actions:
```yaml
metadata:
author: prowler-cloud
version: "1.0"
scope: [ui] # Which AGENTS.md: ui, api, sdk, root
# Option A: single action
auto_invoke: "Creating/modifying components"
# Option B: multiple actions
# auto_invoke:
# - "Creating/modifying components"
# - "Refactoring component folder placement"
```
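To get a quick feel for how a list-form `auto_invoke` is flattened, here is a self-contained sketch. It is not the production parser in sync.sh, just the same idea of joining list items with `|`; the skill name and actions are illustrative.

```shell
#!/usr/bin/env bash
# Stand-alone sketch: pull the "auto_invoke:" list out of a SKILL.md-style
# frontmatter and join the items with '|', mirroring what sync.sh produces.
set -eu

sample=$(mktemp)
cat > "$sample" <<'EOF'
---
name: demo-skill
metadata:
  scope: [ui]
  auto_invoke:
    - "Creating components"
    - "Refactoring folders"
---
EOF

result=$(awk '
  /auto_invoke:/ { grab = 1; next }
  grab && /^[[:space:]]+-[[:space:]]+/ {
    line = $0
    sub(/^[[:space:]]+-[[:space:]]+"?/, "", line)   # strip "- " and opening quote
    sub(/"?[[:space:]]*$/, "", line)                # strip closing quote
    out = (out == "" ? line : out "|" line)
    next
  }
  grab { grab = 0 }
  END { print out }
' "$sample")
rm -f "$sample"

echo "$result"   # -> Creating components|Refactoring folders
```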
### Scope Values
| Scope | Updates |
|-------|---------|
| `root` | `AGENTS.md` (repo root) |
| `ui` | `ui/AGENTS.md` |
| `api` | `api/AGENTS.md` |
| `sdk` | `prowler/AGENTS.md` |
Skills can have multiple scopes: `scope: [ui, api]`
---
## Usage
### After Creating/Modifying a Skill
```bash
./skills/skill-sync/assets/sync.sh
```
### What It Does
1. Reads all `skills/*/SKILL.md` files
2. Extracts `metadata.scope` and `metadata.auto_invoke`
3. Generates Auto-invoke tables for each AGENTS.md
4. Updates the `### Auto-invoke Skills` section in each file
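Step 3 (table generation) can be sketched in isolation. The skill and action names below are illustrative, not read from real SKILL.md files:

```shell
#!/usr/bin/env bash
# Sketch of step 3: turn tab-separated "action<TAB>skill" rows into the
# markdown table that ends up under "### Auto-invoke Skills".
set -eu

rows=$'Creating components\tprowler-ui\nWriting API tests\tprowler-test-api'

table='| Action | Skill |
|--------|-------|'
while IFS=$'\t' read -r action skill; do
  table="$table
| $action | \`$skill\` |"
done <<< "$rows"

printf '%s\n' "$table"
```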
---
## Example
Given this skill metadata:
```yaml
# skills/prowler-ui/SKILL.md
metadata:
author: prowler-cloud
version: "1.0"
scope: [ui]
auto_invoke: "Creating/modifying React components"
```
The sync script generates in `ui/AGENTS.md`:
```markdown
### Auto-invoke Skills
When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Action | Skill |
|--------|-------|
| Creating/modifying React components | `prowler-ui` |
```
---
## Commands
```bash
# Sync all AGENTS.md files
./skills/skill-sync/assets/sync.sh
# Dry run (show what would change)
./skills/skill-sync/assets/sync.sh --dry-run
# Sync specific scope only
./skills/skill-sync/assets/sync.sh --scope ui
```
---
## Checklist After Modifying Skills
- [ ] Added `metadata.scope` to new/modified skill
- [ ] Added `metadata.auto_invoke` with action description
- [ ] Ran `./skills/skill-sync/assets/sync.sh`
- [ ] Verified AGENTS.md files updated correctly

skills/skill-sync/assets/sync.sh Executable file
View File

@@ -0,0 +1,325 @@
#!/usr/bin/env bash
# Sync skill metadata to AGENTS.md Auto-invoke sections
# Usage: ./sync.sh [--dry-run] [--scope <scope>]
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(dirname "$(dirname "$(dirname "$SCRIPT_DIR")")")"
SKILLS_DIR="$REPO_ROOT/skills"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Options
DRY_RUN=false
FILTER_SCOPE=""
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
--dry-run)
DRY_RUN=true
shift
;;
--scope)
FILTER_SCOPE="$2"
shift 2
;;
--help|-h)
echo "Usage: $0 [--dry-run] [--scope <scope>]"
echo ""
echo "Options:"
echo " --dry-run Show what would change without modifying files"
echo " --scope Only sync specific scope (root, ui, api, sdk)"
exit 0
;;
*)
echo -e "${RED}Unknown option: $1${NC}"
exit 1
;;
esac
done
# Map scope to AGENTS.md path
get_agents_path() {
local scope="$1"
case "$scope" in
root) echo "$REPO_ROOT/AGENTS.md" ;;
ui) echo "$REPO_ROOT/ui/AGENTS.md" ;;
api) echo "$REPO_ROOT/api/AGENTS.md" ;;
sdk) echo "$REPO_ROOT/prowler/AGENTS.md" ;;
*) echo "" ;;
esac
}
# Extract YAML frontmatter field using awk
extract_field() {
local file="$1"
local field="$2"
awk -v field="$field" '
/^---$/ { in_frontmatter = !in_frontmatter; next }
in_frontmatter && $1 == field":" {
# Handle single line value
sub(/^[^:]+:[[:space:]]*/, "")
if ($0 != "" && $0 != ">") {
gsub(/^["'\'']|["'\'']$/, "") # Remove quotes
print
exit
}
# Handle multi-line value
getline
while (/^[[:space:]]/ && !/^---$/) {
sub(/^[[:space:]]+/, "")
printf "%s ", $0
if (!getline) break
}
print ""
exit
}
' "$file" | sed 's/[[:space:]]*$//'
}
# Extract nested metadata field
#
# Supports either:
# auto_invoke: "Single Action"
# or:
# auto_invoke:
# - "Action A"
# - "Action B"
#
# For list values, this returns a pipe-delimited string: "Action A|Action B"
extract_metadata() {
local file="$1"
local field="$2"
awk -v field="$field" '
function trim(s) {
sub(/^[[:space:]]+/, "", s)
sub(/[[:space:]]+$/, "", s)
return s
}
/^---$/ { in_frontmatter = !in_frontmatter; next }
in_frontmatter && /^metadata:/ { in_metadata = 1; next }
in_frontmatter && in_metadata && /^[a-z]/ && !/^[[:space:]]/ { in_metadata = 0 }
in_frontmatter && in_metadata && $1 == field":" {
# Remove "field:" prefix
sub(/^[^:]+:[[:space:]]*/, "")
# Single-line scalar: auto_invoke: "Action"
if ($0 != "") {
v = $0
gsub(/^["'\'']|["'\'']$/, "", v)
gsub(/^\[|\]$/, "", v) # legacy: allow inline [a, b]
print trim(v)
exit
}
# Multi-line list:
# auto_invoke:
# - "Action A"
# - "Action B"
out = ""
while (getline) {
# Stop when leaving metadata block
if (!in_frontmatter) break
if (!in_metadata) break
if ($0 ~ /^[a-z]/ && $0 !~ /^[[:space:]]/) break
# On multi-line list, only accept "- item" lines. Anything else ends the list.
line = $0
if (line ~ /^[[:space:]]*-[[:space:]]*/) {
sub(/^[[:space:]]*-[[:space:]]*/, "", line)
line = trim(line)
gsub(/^["'\'']|["'\'']$/, "", line)
if (line != "") {
if (out == "") out = line
else out = out "|" line
}
} else {
break
}
}
if (out != "") print out
exit
}
' "$file"
}
echo -e "${BLUE}Skill Sync - Updating AGENTS.md Auto-invoke sections${NC}"
echo "========================================================"
echo ""
# Collect skills by scope
declare -A SCOPE_SKILLS # scope -> "skill1:action1|skill2:action2|..."
# Deterministic iteration order (stable diffs)
# Note: macOS ships BSD find; avoid GNU-only flags.
while IFS= read -r skill_file; do
[ -f "$skill_file" ] || continue
skill_name=$(extract_field "$skill_file" "name")
scope_raw=$(extract_metadata "$skill_file" "scope")
auto_invoke_raw=$(extract_metadata "$skill_file" "auto_invoke")
# extract_metadata() returns:
# - single action: "Action"
# - multiple actions: "Action A|Action B" (pipe-delimited)
# But SCOPE_SKILLS also uses '|' to separate entries, so we protect it.
auto_invoke=${auto_invoke_raw//|/;;}
# Skip if no scope or auto_invoke defined
[ -z "$scope_raw" ] || [ -z "$auto_invoke" ] && continue
# Parse scope (can be comma-separated or space-separated)
IFS=', ' read -ra scopes <<< "$scope_raw"
for scope in "${scopes[@]}"; do
scope=$(echo "$scope" | tr -d '[:space:]')
[ -z "$scope" ] && continue
# Filter by scope if specified
[ -n "$FILTER_SCOPE" ] && [ "$scope" != "$FILTER_SCOPE" ] && continue
# Append to scope's skill list
if [ -z "${SCOPE_SKILLS[$scope]}" ]; then
SCOPE_SKILLS[$scope]="$skill_name:$auto_invoke"
else
SCOPE_SKILLS[$scope]="${SCOPE_SKILLS[$scope]}|$skill_name:$auto_invoke"
fi
done
done < <(find "$SKILLS_DIR" -mindepth 2 -maxdepth 2 -name SKILL.md -print | sort)
# Generate Auto-invoke section for each scope
# Deterministic scope order (stable diffs)
scopes_sorted=()
while IFS= read -r scope; do
scopes_sorted+=("$scope")
done < <(printf "%s\n" "${!SCOPE_SKILLS[@]}" | sort)
for scope in "${scopes_sorted[@]}"; do
agents_path=$(get_agents_path "$scope")
if [ -z "$agents_path" ] || [ ! -f "$agents_path" ]; then
echo -e "${YELLOW}Warning: No AGENTS.md found for scope '$scope'${NC}"
continue
fi
echo -e "${BLUE}Processing: $scope -> $(basename "$(dirname "$agents_path")")/AGENTS.md${NC}"
# Build the Auto-invoke table
auto_invoke_section="### Auto-invoke Skills
When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Action | Skill |
|--------|-------|"
# Expand into sortable rows: "action<TAB>skill"
rows=()
IFS='|' read -ra skill_entries <<< "${SCOPE_SKILLS[$scope]}"
for entry in "${skill_entries[@]}"; do
skill_name="${entry%%:*}"
actions_raw="${entry#*:}"
actions_raw=${actions_raw//;;/|}
IFS='|' read -ra actions <<< "$actions_raw"
for action in "${actions[@]}"; do
action="$(echo "$action" | sed 's/^[[:space:]]*//; s/[[:space:]]*$//')"
[ -z "$action" ] && continue
rows+=("$action"$'\t'"$skill_name")  # tab-delimited: matches sort -t $'\t' and IFS=$'\t' read
done
done
# Deterministic row order: Action then Skill
while IFS=$'\t' read -r action skill_name; do
[ -z "$action" ] && continue
auto_invoke_section="$auto_invoke_section
| $action | \`$skill_name\` |"
done < <(printf "%s\n" "${rows[@]}" | LC_ALL=C sort -t $'\t' -k1,1 -k2,2)
if $DRY_RUN; then
echo -e "${YELLOW}[DRY RUN] Would update $agents_path with:${NC}"
echo "$auto_invoke_section"
echo ""
else
# Write new section to temp file (avoids awk multi-line string issues on macOS)
section_file=$(mktemp)
echo "$auto_invoke_section" > "$section_file"
# Check if Auto-invoke section exists
if grep -q "### Auto-invoke Skills" "$agents_path"; then
# Replace existing section (up to next --- or ## heading)
awk '
/^### Auto-invoke Skills/ {
while ((getline line < "'"$section_file"'") > 0) print line
close("'"$section_file"'")
skip = 1
next
}
skip && /^(---|## )/ {
skip = 0
print ""
}
!skip { print }
' "$agents_path" > "$agents_path.tmp"
mv "$agents_path.tmp" "$agents_path"
echo -e "${GREEN} ✓ Updated Auto-invoke section${NC}"
else
# Insert after Skills Reference blockquote
awk '
/^>.*SKILL\.md\)$/ && !inserted {
print
getline
if (/^$/) {
print ""
while ((getline line < "'"$section_file"'") > 0) print line
close("'"$section_file"'")
print ""
inserted = 1
next
}
}
{ print }
' "$agents_path" > "$agents_path.tmp"
mv "$agents_path.tmp" "$agents_path"
echo -e "${GREEN} ✓ Inserted Auto-invoke section${NC}"
fi
rm -f "$section_file"
fi
done
echo ""
echo -e "${GREEN}Done!${NC}"
# Show skills without metadata
echo ""
echo -e "${BLUE}Skills missing sync metadata:${NC}"
missing=0
while IFS= read -r skill_file; do
[ -f "$skill_file" ] || continue
skill_name=$(extract_field "$skill_file" "name")
scope_raw=$(extract_metadata "$skill_file" "scope")
auto_invoke_raw=$(extract_metadata "$skill_file" "auto_invoke")
auto_invoke=${auto_invoke_raw//|/;;}
if [ -z "$scope_raw" ] || [ -z "$auto_invoke" ]; then
missing_fields=""
[ -z "$scope_raw" ] && missing_fields="scope"
[ -z "$auto_invoke" ] && missing_fields="${missing_fields:+$missing_fields, }auto_invoke"
echo -e " ${YELLOW}$skill_name${NC} - missing: $missing_fields"
missing=$((missing + 1))
fi
done < <(find "$SKILLS_DIR" -mindepth 2 -maxdepth 2 -name SKILL.md -print | sort)
if [ $missing -eq 0 ]; then
echo -e " ${GREEN}All skills have sync metadata${NC}"
fi

View File

@@ -0,0 +1,604 @@
#!/bin/bash
# Unit tests for sync.sh
# Run: ./skills/skill-sync/assets/sync_test.sh
#
# shellcheck disable=SC2317
# Reason: Test functions are discovered and called dynamically via declare -F
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SYNC_SCRIPT="$SCRIPT_DIR/sync.sh"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Test counters
TESTS_RUN=0
TESTS_PASSED=0
TESTS_FAILED=0
# Test environment
TEST_DIR=""
# =============================================================================
# TEST FRAMEWORK
# =============================================================================
setup_test_env() {
TEST_DIR=$(mktemp -d)
# Create mock repo structure
mkdir -p "$TEST_DIR/skills/mock-ui-skill"
mkdir -p "$TEST_DIR/skills/mock-api-skill"
mkdir -p "$TEST_DIR/skills/mock-sdk-skill"
mkdir -p "$TEST_DIR/skills/mock-root-skill"
mkdir -p "$TEST_DIR/skills/mock-no-metadata"
mkdir -p "$TEST_DIR/skills/skill-sync/assets"
mkdir -p "$TEST_DIR/ui"
mkdir -p "$TEST_DIR/api"
mkdir -p "$TEST_DIR/prowler"
# Create mock SKILL.md files with metadata
cat > "$TEST_DIR/skills/mock-ui-skill/SKILL.md" << 'EOF'
---
name: mock-ui-skill
description: >
Mock UI skill for testing.
Trigger: When testing UI.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [ui]
auto_invoke: "Testing UI components"
allowed-tools: Read
---
# Mock UI Skill
EOF
cat > "$TEST_DIR/skills/mock-api-skill/SKILL.md" << 'EOF'
---
name: mock-api-skill
description: >
Mock API skill for testing.
Trigger: When testing API.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [api]
auto_invoke: "Testing API endpoints"
allowed-tools: Read
---
# Mock API Skill
EOF
cat > "$TEST_DIR/skills/mock-sdk-skill/SKILL.md" << 'EOF'
---
name: mock-sdk-skill
description: >
Mock SDK skill for testing.
Trigger: When testing SDK.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [sdk]
auto_invoke: "Testing SDK checks"
allowed-tools: Read
---
# Mock SDK Skill
EOF
cat > "$TEST_DIR/skills/mock-root-skill/SKILL.md" << 'EOF'
---
name: mock-root-skill
description: >
Mock root skill for testing.
Trigger: When testing root.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [root]
auto_invoke: "Testing root actions"
allowed-tools: Read
---
# Mock Root Skill
EOF
# Skill without sync metadata
cat > "$TEST_DIR/skills/mock-no-metadata/SKILL.md" << 'EOF'
---
name: mock-no-metadata
description: >
Skill without sync metadata.
license: Apache-2.0
metadata:
author: test
version: "1.0"
allowed-tools: Read
---
# No Metadata Skill
EOF
# Create mock AGENTS.md files with Skills Reference section
cat > "$TEST_DIR/AGENTS.md" << 'EOF'
# Root AGENTS
> **Skills Reference**: For detailed patterns, use these skills:
> - [`mock-root-skill`](skills/mock-root-skill/SKILL.md)
## Project Overview
This is the root agents file.
EOF
cat > "$TEST_DIR/ui/AGENTS.md" << 'EOF'
# UI AGENTS
> **Skills Reference**: For detailed patterns, use these skills:
> - [`mock-ui-skill`](../skills/mock-ui-skill/SKILL.md)
## CRITICAL RULES
UI rules here.
EOF
cat > "$TEST_DIR/api/AGENTS.md" << 'EOF'
# API AGENTS
> **Skills Reference**: For detailed patterns, use these skills:
> - [`mock-api-skill`](../skills/mock-api-skill/SKILL.md)
## CRITICAL RULES
API rules here.
EOF
cat > "$TEST_DIR/prowler/AGENTS.md" << 'EOF'
# SDK AGENTS
> **Skills Reference**: For detailed patterns, use these skills:
> - [`mock-sdk-skill`](../skills/mock-sdk-skill/SKILL.md)
## Project Overview
SDK overview here.
EOF
# Copy sync.sh to test dir
cp "$SYNC_SCRIPT" "$TEST_DIR/skills/skill-sync/assets/sync.sh"
chmod +x "$TEST_DIR/skills/skill-sync/assets/sync.sh"
}
teardown_test_env() {
if [ -n "$TEST_DIR" ] && [ -d "$TEST_DIR" ]; then
rm -rf "$TEST_DIR"
fi
}
run_sync() {
(cd "$TEST_DIR/skills/skill-sync/assets" && bash sync.sh "$@" 2>&1)
}
# Assertions
assert_equals() {
local expected="$1" actual="$2" message="$3"
if [ "$expected" = "$actual" ]; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " Expected: $expected"
echo " Actual: $actual"
return 1
}
assert_contains() {
local haystack="$1" needle="$2" message="$3"
if echo "$haystack" | grep -q -F -- "$needle"; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " String not found: $needle"
return 1
}
assert_not_contains() {
local haystack="$1" needle="$2" message="$3"
if ! echo "$haystack" | grep -q -F -- "$needle"; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " String should not be found: $needle"
return 1
}
assert_file_contains() {
local file="$1" needle="$2" message="$3"
if grep -q -F -- "$needle" "$file" 2>/dev/null; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " File: $file"
echo " String not found: $needle"
return 1
}
assert_file_not_contains() {
local file="$1" needle="$2" message="$3"
if ! grep -q -F -- "$needle" "$file" 2>/dev/null; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " File: $file"
echo " String should not be found: $needle"
return 1
}
# =============================================================================
# TESTS: FLAG PARSING
# =============================================================================
test_flag_help_shows_usage() {
local output
output=$(run_sync --help)
assert_contains "$output" "Usage:" "Help should show usage" && \
assert_contains "$output" "--dry-run" "Help should mention --dry-run" && \
assert_contains "$output" "--scope" "Help should mention --scope"
}
test_flag_unknown_reports_error() {
local output
output=$(run_sync --unknown 2>&1) || true
assert_contains "$output" "Unknown option" "Should report unknown option"
}
test_flag_dryrun_shows_changes() {
local output
output=$(run_sync --dry-run)
assert_contains "$output" "[DRY RUN]" "Should show dry run marker" && \
assert_contains "$output" "Would update" "Should say would update"
}
test_flag_dryrun_no_file_changes() {
run_sync --dry-run > /dev/null
assert_file_not_contains "$TEST_DIR/ui/AGENTS.md" "### Auto-invoke Skills" \
"AGENTS.md should not be modified in dry run"
}
test_flag_scope_filters_correctly() {
local output
output=$(run_sync --scope ui)
assert_contains "$output" "Processing: ui" "Should process ui scope" && \
assert_not_contains "$output" "Processing: api" "Should not process api scope"
}
# =============================================================================
# TESTS: METADATA EXTRACTION
# =============================================================================
test_metadata_extracts_scope() {
local output
output=$(run_sync --dry-run)
assert_contains "$output" "Processing: ui" "Should detect ui scope" && \
assert_contains "$output" "Processing: api" "Should detect api scope" && \
assert_contains "$output" "Processing: sdk" "Should detect sdk scope" && \
assert_contains "$output" "Processing: root" "Should detect root scope"
}
test_metadata_extracts_auto_invoke() {
local output
output=$(run_sync --dry-run)
assert_contains "$output" "Testing UI components" "Should extract UI auto_invoke" && \
assert_contains "$output" "Testing API endpoints" "Should extract API auto_invoke" && \
assert_contains "$output" "Testing SDK checks" "Should extract SDK auto_invoke"
}
test_metadata_missing_reports_skills() {
local output
output=$(run_sync --dry-run)
assert_contains "$output" "Skills missing sync metadata" "Should report missing metadata section" && \
assert_contains "$output" "mock-no-metadata" "Should list skill without metadata"
}
test_metadata_skips_without_scope_in_processing() {
local output
output=$(run_sync --dry-run)
# Should not appear in "Processing:" lines, only in "missing metadata" section
local processing_lines
processing_lines=$(echo "$output" | grep "Processing:")
assert_not_contains "$processing_lines" "mock-no-metadata" "Should not process skill without scope"
}
# =============================================================================
# TESTS: AUTO-INVOKE GENERATION
# =============================================================================
test_generate_creates_table() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "### Auto-invoke Skills" \
"Should create Auto-invoke section" && \
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "| Action | Skill |" \
"Should create table header"
}
test_generate_correct_skill_in_ui() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "mock-ui-skill" \
"UI AGENTS should contain mock-ui-skill" && \
assert_file_not_contains "$TEST_DIR/ui/AGENTS.md" "mock-api-skill" \
"UI AGENTS should not contain mock-api-skill"
}
test_generate_correct_skill_in_api() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/api/AGENTS.md" "mock-api-skill" \
"API AGENTS should contain mock-api-skill" && \
assert_file_not_contains "$TEST_DIR/api/AGENTS.md" "mock-ui-skill" \
"API AGENTS should not contain mock-ui-skill"
}
test_generate_correct_skill_in_sdk() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/prowler/AGENTS.md" "mock-sdk-skill" \
"SDK AGENTS should contain mock-sdk-skill" && \
assert_file_not_contains "$TEST_DIR/prowler/AGENTS.md" "mock-ui-skill" \
"SDK AGENTS should not contain mock-ui-skill"
}
test_generate_correct_skill_in_root() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/AGENTS.md" "mock-root-skill" \
"Root AGENTS should contain mock-root-skill" && \
assert_file_not_contains "$TEST_DIR/AGENTS.md" "mock-ui-skill" \
"Root AGENTS should not contain mock-ui-skill"
}
test_generate_includes_action_text() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "Testing UI components" \
"Should include auto_invoke action text"
}
test_generate_splits_multi_action_auto_invoke_list() {
# Change UI skill to use list auto_invoke (two actions)
cat > "$TEST_DIR/skills/mock-ui-skill/SKILL.md" << 'EOF'
---
name: mock-ui-skill
description: Mock UI skill with multi-action auto_invoke list.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [ui]
auto_invoke:
- "Action B"
- "Action A"
allowed-tools: Read
---
EOF
run_sync > /dev/null
# Both actions should produce rows
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "| Action A | \`mock-ui-skill\` |" \
"Should create row for Action A" && \
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "| Action B | \`mock-ui-skill\` |" \
"Should create row for Action B"
}
test_generate_orders_rows_by_action_then_skill() {
# Two skills, intentionally out-of-order actions, same scope
cat > "$TEST_DIR/skills/mock-ui-skill/SKILL.md" << 'EOF'
---
name: mock-ui-skill
description: Mock UI skill.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [ui]
auto_invoke:
- "Z action"
- "A action"
allowed-tools: Read
---
EOF
mkdir -p "$TEST_DIR/skills/mock-ui-skill-2"
cat > "$TEST_DIR/skills/mock-ui-skill-2/SKILL.md" << 'EOF'
---
name: mock-ui-skill-2
description: Second UI skill.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [ui]
auto_invoke: "A action"
allowed-tools: Read
---
EOF
run_sync > /dev/null
# Verify order within the table is: "A action" rows first, then "Z action"
local table_segment
table_segment=$(awk '
/^\| Action \| Skill \|/ { in_table=1 }
in_table && /^---$/ { next }
in_table && /^\|/ { print }
in_table && !/^\|/ { exit }
' "$TEST_DIR/ui/AGENTS.md")
local first_a_index first_z_index
first_a_index=$(echo "$table_segment" | awk '/\| A action \|/ { print NR; exit }')
first_z_index=$(echo "$table_segment" | awk '/\| Z action \|/ { print NR; exit }')
# Both must exist and A must come before Z
[ -n "$first_a_index" ] && [ -n "$first_z_index" ] && [ "$first_a_index" -lt "$first_z_index" ]
}
# =============================================================================
# TESTS: AGENTS.MD UPDATE
# =============================================================================
test_update_preserves_header() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "# UI AGENTS" \
"Should preserve original header"
}
test_update_preserves_skills_reference() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "Skills Reference" \
"Should preserve Skills Reference section"
}
test_update_preserves_content_after() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "## CRITICAL RULES" \
"Should preserve content after Auto-invoke section"
}
test_update_replaces_existing_section() {
# First run creates section
run_sync > /dev/null
# Modify a skill's auto_invoke (portable across BSD and GNU sed)
# BSD sed requires a suffix argument with -i; an attached suffix (-i.bak) works on both.
sed -i.bak 's/Testing UI components/Modified UI action/' "$TEST_DIR/skills/mock-ui-skill/SKILL.md"
rm -f "$TEST_DIR/skills/mock-ui-skill/SKILL.md.bak"
# Second run should replace
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "Modified UI action" \
"Should update with new auto_invoke text" && \
assert_file_not_contains "$TEST_DIR/ui/AGENTS.md" "Testing UI components" \
"Should remove old auto_invoke text"
}
# =============================================================================
# TESTS: IDEMPOTENCY
# =============================================================================
test_idempotent_multiple_runs() {
run_sync > /dev/null
local first_content
first_content=$(cat "$TEST_DIR/ui/AGENTS.md")
run_sync > /dev/null
local second_content
second_content=$(cat "$TEST_DIR/ui/AGENTS.md")
assert_equals "$first_content" "$second_content" \
"Multiple runs should produce identical output"
}
test_idempotent_no_duplicate_sections() {
run_sync > /dev/null
run_sync > /dev/null
run_sync > /dev/null
local count
count=$(grep -c "### Auto-invoke Skills" "$TEST_DIR/ui/AGENTS.md")
assert_equals "1" "$count" "Should have exactly one Auto-invoke section"
}
# =============================================================================
# TESTS: MULTI-SCOPE SKILLS
# =============================================================================
test_multiscope_skill_appears_in_multiple() {
# Create a skill with multiple scopes
cat > "$TEST_DIR/skills/mock-ui-skill/SKILL.md" << 'EOF'
---
name: mock-ui-skill
description: Mock skill with multiple scopes.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [ui, api]
auto_invoke: "Multi-scope action"
allowed-tools: Read
---
EOF
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "mock-ui-skill" \
"Multi-scope skill should appear in UI" && \
assert_file_contains "$TEST_DIR/api/AGENTS.md" "mock-ui-skill" \
"Multi-scope skill should appear in API"
}
# =============================================================================
# TEST RUNNER
# =============================================================================
run_all_tests() {
local test_functions current_section=""
test_functions=$(declare -F | awk '{print $3}' | grep '^test_' | sort)
for test_func in $test_functions; do
local section
section=$(echo "$test_func" | sed 's/^test_//' | cut -d'_' -f1)
section="$(echo "${section:0:1}" | tr '[:lower:]' '[:upper:]')${section:1}"
if [ "$section" != "$current_section" ]; then
[ -n "$current_section" ] && echo ""
echo -e "${YELLOW}${section} tests:${NC}"
current_section="$section"
fi
local test_name
test_name=$(echo "$test_func" | sed 's/^test_//' | tr '_' ' ')
TESTS_RUN=$((TESTS_RUN + 1))
echo -n " $test_name... "
setup_test_env
if $test_func; then
echo -e "${GREEN}PASS${NC}"
TESTS_PASSED=$((TESTS_PASSED + 1))
else
TESTS_FAILED=$((TESTS_FAILED + 1))
fi
teardown_test_env
done
}
# =============================================================================
# MAIN
# =============================================================================
echo ""
echo "🧪 Running sync.sh unit tests"
echo "=============================="
echo ""
run_all_tests
echo ""
echo "=============================="
if [ $TESTS_FAILED -eq 0 ]; then
echo -e "${GREEN}✅ All $TESTS_RUN tests passed!${NC}"
exit 0
else
echo -e "${RED}$TESTS_FAILED of $TESTS_RUN tests failed${NC}"
exit 1
fi
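The runner above discovers tests dynamically via `declare -F` instead of maintaining a manual list, so new `test_*` functions are picked up automatically. A minimal sketch of that pattern (function names are hypothetical):

```shell
#!/bin/bash
# Discover functions whose names start with "test_" and run them in sorted order.
test_alpha() { echo "alpha"; }
test_beta()  { echo "beta"; }
helper()     { echo "not a test"; }   # ignored: no test_ prefix

# declare -F prints "declare -f NAME" per function; field 3 is the name.
discovered=$(declare -F | awk '{print $3}' | grep '^test_' | sort | xargs)
for fn in $discovered; do
  "$fn"   # prints "alpha", then "beta"
done
```

Sorting the discovered names keeps the execution (and output) order deterministic regardless of definition order.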

View File

@@ -2,11 +2,13 @@
name: tailwind-4
description: >
Tailwind CSS 4 patterns and best practices.
Trigger: When styling with Tailwind - cn(), theme variables, no var() in className.
Trigger: When styling with Tailwind (className, variants, cn()), especially when dynamic styling or CSS variables are involved (no var() in className).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke: "Working with Tailwind classes"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,13 @@
name: typescript
description: >
TypeScript strict patterns and best practices.
Trigger: When writing TypeScript code - types, interfaces, generics.
Trigger: When implementing or refactoring TypeScript in .ts/.tsx (types, interfaces, generics, const maps, type guards, removing any, tightening unknown).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke: "Writing TypeScript types/interfaces"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,13 @@
name: zod-4
description: >
Zod 4 schema validation patterns.
Trigger: When using Zod for validation - breaking changes from v3.
Trigger: When creating or updating Zod v4 schemas for validation/parsing (forms, request payloads, adapters), including v3 -> v4 migration patterns.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke: "Creating Zod schemas"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,13 @@
name: zustand-5
description: >
Zustand 5 state management patterns.
Trigger: When managing React state with Zustand.
Trigger: When implementing client-side state with Zustand (stores, selectors, persist middleware, slices).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke: "Using Zustand stores"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -169,7 +169,7 @@ class TestCloudflareProvider:
with pytest.raises(CloudflareCredentialsError):
CloudflareProvider()
def test_cloudflare_provider_with_filter_zones(self):
def test_cloudflare_provider_with_filter_zone(self):
with (
patch(
"prowler.providers.cloudflare.cloudflare_provider.CloudflareProvider.setup_session",
@@ -196,10 +196,10 @@ class TestCloudflareProvider:
),
),
):
filter_zones = ["zone1", "zone2"]
provider = CloudflareProvider(filter_zones=filter_zones)
filter_zone = ["zone1", "zone2"]
provider = CloudflareProvider(filter_zone=filter_zone)
assert provider.filter_zones == set(filter_zones)
assert provider.filter_zone == set(filter_zone)
def test_cloudflare_provider_properties(self):
with (

View File

@@ -45,7 +45,7 @@ class TestCloudflareMutelist:
"Accounts": {
"test-account-id": {
"Checks": {
"zones_dnssec_enabled": {
"zone_dnssec_enabled": {
"Regions": ["*"],
"Resources": ["test-zone-id"],
}
@@ -58,7 +58,7 @@ class TestCloudflareMutelist:
finding = MagicMock()
finding.check_metadata = MagicMock()
finding.check_metadata.CheckID = "zones_dnssec_enabled"
finding.check_metadata.CheckID = "zone_dnssec_enabled"
finding.status = "FAIL"
finding.resource_id = "test-zone-id"
finding.resource_name = "example.com"
@@ -71,7 +71,7 @@ class TestCloudflareMutelist:
"Accounts": {
"test-account-id": {
"Checks": {
"zones_dnssec_enabled": {
"zone_dnssec_enabled": {
"Regions": ["*"],
"Resources": ["other-zone-id"],
}
@@ -84,7 +84,7 @@ class TestCloudflareMutelist:
finding = MagicMock()
finding.check_metadata = MagicMock()
finding.check_metadata.CheckID = "zones_dnssec_enabled"
finding.check_metadata.CheckID = "zone_dnssec_enabled"
finding.status = "FAIL"
finding.resource_id = "test-zone-id"
finding.resource_name = "example.com"

View File

@@ -2,7 +2,7 @@ Mutelist:
Accounts:
"test-account-id":
Checks:
"zones_dnssec_enabled":
"zone_dnssec_enabled":
Regions:
- "*"
Resources:

View File

@@ -1,6 +1,6 @@
from unittest import mock
from prowler.providers.cloudflare.services.zones.zones_service import (
from prowler.providers.cloudflare.services.zone.zone_service import (
CloudflareZone,
CloudflareZoneSettings,
)
@@ -11,10 +11,10 @@ from tests.providers.cloudflare.cloudflare_fixtures import (
)
class Test_zones_dnssec_enabled:
class Test_zone_dnssec_enabled:
def test_no_zones(self):
zones_client = mock.MagicMock
zones_client.zones = {}
zone_client = mock.MagicMock
zone_client.zones = {}
with (
mock.patch(
@@ -22,21 +22,21 @@ class Test_zones_dnssec_enabled:
return_value=set_mocked_cloudflare_provider(),
),
mock.patch(
"prowler.providers.cloudflare.services.zones.zones_dnssec_enabled.zones_dnssec_enabled.zones_client",
new=zones_client,
"prowler.providers.cloudflare.services.zone.zone_dnssec_enabled.zone_dnssec_enabled.zone_client",
new=zone_client,
),
):
from prowler.providers.cloudflare.services.zones.zones_dnssec_enabled.zones_dnssec_enabled import (
zones_dnssec_enabled,
from prowler.providers.cloudflare.services.zone.zone_dnssec_enabled.zone_dnssec_enabled import (
zone_dnssec_enabled,
)
check = zones_dnssec_enabled()
check = zone_dnssec_enabled()
result = check.execute()
assert len(result) == 0
def test_zone_dnssec_enabled(self):
zones_client = mock.MagicMock
zones_client.zones = {
zone_client = mock.MagicMock
zone_client.zones = {
ZONE_ID: CloudflareZone(
id=ZONE_ID,
name=ZONE_NAME,
@@ -53,15 +53,15 @@ class Test_zones_dnssec_enabled:
return_value=set_mocked_cloudflare_provider(),
),
mock.patch(
"prowler.providers.cloudflare.services.zones.zones_dnssec_enabled.zones_dnssec_enabled.zones_client",
new=zones_client,
"prowler.providers.cloudflare.services.zone.zone_dnssec_enabled.zone_dnssec_enabled.zone_client",
new=zone_client,
),
):
from prowler.providers.cloudflare.services.zones.zones_dnssec_enabled.zones_dnssec_enabled import (
zones_dnssec_enabled,
from prowler.providers.cloudflare.services.zone.zone_dnssec_enabled.zone_dnssec_enabled import (
zone_dnssec_enabled,
)
check = zones_dnssec_enabled()
check = zone_dnssec_enabled()
result = check.execute()
assert len(result) == 1
assert result[0].resource_id == ZONE_ID
@@ -72,8 +72,8 @@ class Test_zones_dnssec_enabled:
)
def test_zone_dnssec_disabled(self):
zones_client = mock.MagicMock
zones_client.zones = {
zone_client = mock.MagicMock
zone_client.zones = {
ZONE_ID: CloudflareZone(
id=ZONE_ID,
name=ZONE_NAME,
@@ -90,15 +90,15 @@ class Test_zones_dnssec_enabled:
return_value=set_mocked_cloudflare_provider(),
),
mock.patch(
"prowler.providers.cloudflare.services.zones.zones_dnssec_enabled.zones_dnssec_enabled.zones_client",
new=zones_client,
"prowler.providers.cloudflare.services.zone.zone_dnssec_enabled.zone_dnssec_enabled.zone_client",
new=zone_client,
),
):
from prowler.providers.cloudflare.services.zones.zones_dnssec_enabled.zones_dnssec_enabled import (
zones_dnssec_enabled,
from prowler.providers.cloudflare.services.zone.zone_dnssec_enabled.zone_dnssec_enabled import (
zone_dnssec_enabled,
)
check = zones_dnssec_enabled()
check = zone_dnssec_enabled()
result = check.execute()
assert len(result) == 1
assert result[0].resource_id == ZONE_ID
@@ -110,8 +110,8 @@ class Test_zones_dnssec_enabled:
)
def test_zone_dnssec_pending(self):
zones_client = mock.MagicMock
zones_client.zones = {
zone_client = mock.MagicMock
zone_client.zones = {
ZONE_ID: CloudflareZone(
id=ZONE_ID,
name=ZONE_NAME,
@@ -128,15 +128,15 @@ class Test_zones_dnssec_enabled:
return_value=set_mocked_cloudflare_provider(),
),
mock.patch(
"prowler.providers.cloudflare.services.zones.zones_dnssec_enabled.zones_dnssec_enabled.zones_client",
new=zones_client,
"prowler.providers.cloudflare.services.zone.zone_dnssec_enabled.zone_dnssec_enabled.zone_client",
new=zone_client,
),
):
from prowler.providers.cloudflare.services.zones.zones_dnssec_enabled.zones_dnssec_enabled import (
zones_dnssec_enabled,
from prowler.providers.cloudflare.services.zone.zone_dnssec_enabled.zone_dnssec_enabled import (
zone_dnssec_enabled,
)
check = zones_dnssec_enabled()
check = zone_dnssec_enabled()
result = check.execute()
assert len(result) == 1
assert result[0].status == "FAIL"

View File

@@ -1,6 +1,6 @@
from unittest import mock
from prowler.providers.cloudflare.services.zones.zones_service import (
from prowler.providers.cloudflare.services.zone.zone_service import (
CloudflareZone,
CloudflareZoneSettings,
StrictTransportSecurity,
@@ -12,10 +12,10 @@ from tests.providers.cloudflare.cloudflare_fixtures import (
)
class Test_zones_hsts_enabled:
class Test_zone_hsts_enabled:
def test_no_zones(self):
zones_client = mock.MagicMock
zones_client.zones = {}
zone_client = mock.MagicMock
zone_client.zones = {}
with (
mock.patch(
@@ -23,21 +23,21 @@ class Test_zones_hsts_enabled:
return_value=set_mocked_cloudflare_provider(),
),
mock.patch(
"prowler.providers.cloudflare.services.zones.zones_hsts_enabled.zones_hsts_enabled.zones_client",
new=zones_client,
"prowler.providers.cloudflare.services.zone.zone_hsts_enabled.zone_hsts_enabled.zone_client",
new=zone_client,
),
):
from prowler.providers.cloudflare.services.zones.zones_hsts_enabled.zones_hsts_enabled import (
zones_hsts_enabled,
from prowler.providers.cloudflare.services.zone.zone_hsts_enabled.zone_hsts_enabled import (
zone_hsts_enabled,
)
check = zones_hsts_enabled()
check = zone_hsts_enabled()
result = check.execute()
assert len(result) == 0
def test_zone_hsts_enabled_properly_configured(self):
zones_client = mock.MagicMock
zones_client.zones = {
zone_client = mock.MagicMock
zone_client.zones = {
ZONE_ID: CloudflareZone(
id=ZONE_ID,
name=ZONE_NAME,
@@ -60,15 +60,15 @@ class Test_zones_hsts_enabled:
return_value=set_mocked_cloudflare_provider(),
),
mock.patch(
"prowler.providers.cloudflare.services.zones.zones_hsts_enabled.zones_hsts_enabled.zones_client",
new=zones_client,
"prowler.providers.cloudflare.services.zone.zone_hsts_enabled.zone_hsts_enabled.zone_client",
new=zone_client,
),
):
from prowler.providers.cloudflare.services.zones.zones_hsts_enabled.zones_hsts_enabled import (
zones_hsts_enabled,
from prowler.providers.cloudflare.services.zone.zone_hsts_enabled.zone_hsts_enabled import (
zone_hsts_enabled,
)
check = zones_hsts_enabled()
check = zone_hsts_enabled()
result = check.execute()
assert len(result) == 1
assert result[0].resource_id == ZONE_ID
@@ -77,8 +77,8 @@ class Test_zones_hsts_enabled:
            assert "HSTS is enabled" in result[0].status_extended

    def test_zone_hsts_disabled(self):
-        zones_client = mock.MagicMock
-        zones_client.zones = {
+        zone_client = mock.MagicMock
+        zone_client.zones = {
            ZONE_ID: CloudflareZone(
                id=ZONE_ID,
                name=ZONE_NAME,
@@ -98,23 +98,23 @@ class Test_zones_hsts_enabled:
                return_value=set_mocked_cloudflare_provider(),
            ),
            mock.patch(
-                "prowler.providers.cloudflare.services.zones.zones_hsts_enabled.zones_hsts_enabled.zones_client",
-                new=zones_client,
+                "prowler.providers.cloudflare.services.zone.zone_hsts_enabled.zone_hsts_enabled.zone_client",
+                new=zone_client,
            ),
        ):
-            from prowler.providers.cloudflare.services.zones.zones_hsts_enabled.zones_hsts_enabled import (
-                zones_hsts_enabled,
+            from prowler.providers.cloudflare.services.zone.zone_hsts_enabled.zone_hsts_enabled import (
+                zone_hsts_enabled,
            )

-            check = zones_hsts_enabled()
+            check = zone_hsts_enabled()
            result = check.execute()

            assert len(result) == 1
            assert result[0].status == "FAIL"
            assert "HSTS is not enabled" in result[0].status_extended

    def test_zone_hsts_enabled_no_subdomains(self):
-        zones_client = mock.MagicMock
-        zones_client.zones = {
+        zone_client = mock.MagicMock
+        zone_client.zones = {
            ZONE_ID: CloudflareZone(
                id=ZONE_ID,
                name=ZONE_NAME,
@@ -136,23 +136,23 @@ class Test_zones_hsts_enabled:
                return_value=set_mocked_cloudflare_provider(),
            ),
            mock.patch(
-                "prowler.providers.cloudflare.services.zones.zones_hsts_enabled.zones_hsts_enabled.zones_client",
-                new=zones_client,
+                "prowler.providers.cloudflare.services.zone.zone_hsts_enabled.zone_hsts_enabled.zone_client",
+                new=zone_client,
            ),
        ):
-            from prowler.providers.cloudflare.services.zones.zones_hsts_enabled.zones_hsts_enabled import (
-                zones_hsts_enabled,
+            from prowler.providers.cloudflare.services.zone.zone_hsts_enabled.zone_hsts_enabled import (
+                zone_hsts_enabled,
            )

-            check = zones_hsts_enabled()
+            check = zone_hsts_enabled()
            result = check.execute()

            assert len(result) == 1
            assert result[0].status == "FAIL"
            assert "does not include subdomains" in result[0].status_extended

    def test_zone_hsts_enabled_low_max_age(self):
-        zones_client = mock.MagicMock
-        zones_client.zones = {
+        zone_client = mock.MagicMock
+        zone_client.zones = {
            ZONE_ID: CloudflareZone(
                id=ZONE_ID,
                name=ZONE_NAME,
@@ -174,15 +174,15 @@ class Test_zones_hsts_enabled:
                return_value=set_mocked_cloudflare_provider(),
            ),
            mock.patch(
-                "prowler.providers.cloudflare.services.zones.zones_hsts_enabled.zones_hsts_enabled.zones_client",
-                new=zones_client,
+                "prowler.providers.cloudflare.services.zone.zone_hsts_enabled.zone_hsts_enabled.zone_client",
+                new=zone_client,
            ),
        ):
-            from prowler.providers.cloudflare.services.zones.zones_hsts_enabled.zones_hsts_enabled import (
-                zones_hsts_enabled,
+            from prowler.providers.cloudflare.services.zone.zone_hsts_enabled.zone_hsts_enabled import (
+                zone_hsts_enabled,
            )

-            check = zones_hsts_enabled()
+            check = zone_hsts_enabled()
            result = check.execute()

            assert len(result) == 1
            assert result[0].status == "FAIL"

View File

@@ -1,6 +1,6 @@
from unittest import mock

-from prowler.providers.cloudflare.services.zones.zones_service import (
+from prowler.providers.cloudflare.services.zone.zone_service import (
    CloudflareZone,
    CloudflareZoneSettings,
)
@@ -11,10 +11,10 @@ from tests.providers.cloudflare.cloudflare_fixtures import (
)


-class Test_zones_https_redirect_enabled:
+class Test_zone_https_redirect_enabled:
    def test_no_zones(self):
-        zones_client = mock.MagicMock
-        zones_client.zones = {}
+        zone_client = mock.MagicMock
+        zone_client.zones = {}

        with (
            mock.patch(
@@ -22,21 +22,21 @@ class Test_zones_https_redirect_enabled:
                return_value=set_mocked_cloudflare_provider(),
            ),
            mock.patch(
-                "prowler.providers.cloudflare.services.zones.zones_https_redirect_enabled.zones_https_redirect_enabled.zones_client",
-                new=zones_client,
+                "prowler.providers.cloudflare.services.zone.zone_https_redirect_enabled.zone_https_redirect_enabled.zone_client",
+                new=zone_client,
            ),
        ):
-            from prowler.providers.cloudflare.services.zones.zones_https_redirect_enabled.zones_https_redirect_enabled import (
-                zones_https_redirect_enabled,
+            from prowler.providers.cloudflare.services.zone.zone_https_redirect_enabled.zone_https_redirect_enabled import (
+                zone_https_redirect_enabled,
            )

-            check = zones_https_redirect_enabled()
+            check = zone_https_redirect_enabled()
            result = check.execute()

            assert len(result) == 0

    def test_zone_https_redirect_enabled(self):
-        zones_client = mock.MagicMock
-        zones_client.zones = {
+        zone_client = mock.MagicMock
+        zone_client.zones = {
            ZONE_ID: CloudflareZone(
                id=ZONE_ID,
                name=ZONE_NAME,
@@ -54,15 +54,15 @@ class Test_zones_https_redirect_enabled:
                return_value=set_mocked_cloudflare_provider(),
            ),
            mock.patch(
-                "prowler.providers.cloudflare.services.zones.zones_https_redirect_enabled.zones_https_redirect_enabled.zones_client",
-                new=zones_client,
+                "prowler.providers.cloudflare.services.zone.zone_https_redirect_enabled.zone_https_redirect_enabled.zone_client",
+                new=zone_client,
            ),
        ):
-            from prowler.providers.cloudflare.services.zones.zones_https_redirect_enabled.zones_https_redirect_enabled import (
-                zones_https_redirect_enabled,
+            from prowler.providers.cloudflare.services.zone.zone_https_redirect_enabled.zone_https_redirect_enabled import (
+                zone_https_redirect_enabled,
            )

-            check = zones_https_redirect_enabled()
+            check = zone_https_redirect_enabled()
            result = check.execute()

            assert len(result) == 1
            assert result[0].resource_id == ZONE_ID
@@ -71,8 +71,8 @@ class Test_zones_https_redirect_enabled:
            assert "Always Use HTTPS is enabled" in result[0].status_extended

    def test_zone_https_redirect_disabled(self):
-        zones_client = mock.MagicMock
-        zones_client.zones = {
+        zone_client = mock.MagicMock
+        zone_client.zones = {
            ZONE_ID: CloudflareZone(
                id=ZONE_ID,
                name=ZONE_NAME,
@@ -90,15 +90,15 @@ class Test_zones_https_redirect_enabled:
                return_value=set_mocked_cloudflare_provider(),
            ),
            mock.patch(
-                "prowler.providers.cloudflare.services.zones.zones_https_redirect_enabled.zones_https_redirect_enabled.zones_client",
-                new=zones_client,
+                "prowler.providers.cloudflare.services.zone.zone_https_redirect_enabled.zone_https_redirect_enabled.zone_client",
+                new=zone_client,
            ),
        ):
-            from prowler.providers.cloudflare.services.zones.zones_https_redirect_enabled.zones_https_redirect_enabled import (
-                zones_https_redirect_enabled,
+            from prowler.providers.cloudflare.services.zone.zone_https_redirect_enabled.zone_https_redirect_enabled import (
+                zone_https_redirect_enabled,
            )

-            check = zones_https_redirect_enabled()
+            check = zone_https_redirect_enabled()
            result = check.execute()

            assert len(result) == 1
            assert result[0].resource_id == ZONE_ID
@@ -107,8 +107,8 @@ class Test_zones_https_redirect_enabled:
            assert "Always Use HTTPS is not enabled" in result[0].status_extended

    def test_zone_https_redirect_none(self):
-        zones_client = mock.MagicMock
-        zones_client.zones = {
+        zone_client = mock.MagicMock
+        zone_client.zones = {
            ZONE_ID: CloudflareZone(
                id=ZONE_ID,
                name=ZONE_NAME,
@@ -126,15 +126,15 @@ class Test_zones_https_redirect_enabled:
                return_value=set_mocked_cloudflare_provider(),
            ),
            mock.patch(
-                "prowler.providers.cloudflare.services.zones.zones_https_redirect_enabled.zones_https_redirect_enabled.zones_client",
-                new=zones_client,
+                "prowler.providers.cloudflare.services.zone.zone_https_redirect_enabled.zone_https_redirect_enabled.zone_client",
+                new=zone_client,
            ),
        ):
-            from prowler.providers.cloudflare.services.zones.zones_https_redirect_enabled.zones_https_redirect_enabled import (
-                zones_https_redirect_enabled,
+            from prowler.providers.cloudflare.services.zone.zone_https_redirect_enabled.zone_https_redirect_enabled import (
+                zone_https_redirect_enabled,
            )

-            check = zones_https_redirect_enabled()
+            check = zone_https_redirect_enabled()
            result = check.execute()

            assert len(result) == 1
            assert result[0].status == "FAIL"
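The renames above preserve the testing pattern these files rely on: each check module holds a module-level client, so the test swaps it via `mock.patch(..., new=...)` and only then imports and runs the check. A minimal, self-contained sketch of that pattern follows; the module names `demo_zone_service` and `demo_zone_check` are hypothetical stand-ins, not real Prowler modules:

```python
import sys
import types
from types import SimpleNamespace
from unittest import mock

# Hypothetical service module exposing a module-level client,
# mirroring how zone_service exposes zone_client.
svc = types.ModuleType("demo_zone_service")
svc.zone_client = SimpleNamespace(zones={})
sys.modules["demo_zone_service"] = svc

# Hypothetical check module that reads the client at execute() time.
chk = types.ModuleType("demo_zone_check")
chk.zone_client = svc.zone_client

def _execute():
    # Report the zone IDs seen through the (possibly patched) client.
    return sorted(chk.zone_client.zones)

chk.execute = _execute
sys.modules["demo_zone_check"] = chk

# The test replaces the client *where the check looks it up* before
# running it, just like mock.patch("...zone_hsts_enabled.zone_client",
# new=zone_client) in the diffs above.
fake = SimpleNamespace(zones={"zone-123": "example.com"})
with mock.patch("demo_zone_check.zone_client", new=fake):
    result = chk.execute()

print(result)  # → ['zone-123']
```

The key detail is that the patch target is the name as seen by the check module, which is why every patched path in the diff had to change from `zones_...zones_client` to `zone_...zone_client` along with the import paths.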

Some files were not shown because too many files have changed in this diff.