feat(skills): sync AGENTS.md to AI-specific formats (#9751)

Co-authored-by: Alan-TheGentleman <alan@thegentleman.dev>
Co-authored-by: pedrooot <pedromarting3@gmail.com>
Co-authored-by: Andoni A. <14891798+andoniaf@users.noreply.github.com>
Alan Buscaglia authored on 2026-01-13 11:44:44 +01:00, committed by GitHub
parent b0eea61468, commit c8fab497fd
56 changed files with 3714 additions and 184 deletions

View File

@@ -46,6 +46,7 @@ jobs:
            api/docs/**
            api/README.md
            api/CHANGELOG.md
+           api/AGENTS.md
      - name: Setup Python with Poetry
        if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -74,6 +74,7 @@ jobs:
            api/docs/**
            api/README.md
            api/CHANGELOG.md
+           api/AGENTS.md
      - name: Set up Docker Buildx
        if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -46,6 +46,7 @@ jobs:
            api/docs/**
            api/README.md
            api/CHANGELOG.md
+           api/AGENTS.md
      - name: Setup Python with Poetry
        if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -86,6 +86,7 @@ jobs:
            api/docs/**
            api/README.md
            api/CHANGELOG.md
+           api/AGENTS.md
      - name: Setup Python with Poetry
        if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -47,6 +47,7 @@ jobs:
            ui/**
            dashboard/**
            mcp_server/**
+           skills/**
            README.md
            mkdocs.yml
            .backportrc.json
@@ -55,6 +56,7 @@
            examples/**
            .gitignore
            contrib/**
+           **/AGENTS.md
      - name: Install Poetry
        if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -78,6 +78,7 @@ jobs:
            ui/**
            dashboard/**
            mcp_server/**
+           skills/**
            README.md
            mkdocs.yml
            .backportrc.json
@@ -86,6 +87,7 @@
            examples/**
            .gitignore
            contrib/**
+           **/AGENTS.md
      - name: Set up Docker Buildx
        if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -42,6 +42,7 @@ jobs:
            ui/**
            dashboard/**
            mcp_server/**
+           skills/**
            README.md
            mkdocs.yml
            .backportrc.json
@@ -50,6 +51,7 @@
            examples/**
            .gitignore
            contrib/**
+           **/AGENTS.md
      - name: Install Poetry
        if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -47,6 +47,7 @@ jobs:
            ui/**
            dashboard/**
            mcp_server/**
+           skills/**
            README.md
            mkdocs.yml
            .backportrc.json
@@ -55,6 +56,7 @@
            examples/**
            .gitignore
            contrib/**
+           **/AGENTS.md
      - name: Install Poetry
        if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -73,6 +73,7 @@ jobs:
          files_ignore: |
            ui/CHANGELOG.md
            ui/README.md
+           ui/AGENTS.md
      - name: Set up Docker Buildx
        if: steps.check-changes.outputs.any_changed == 'true'

View File

@@ -42,6 +42,7 @@ jobs:
          files_ignore: |
            ui/CHANGELOG.md
            ui/README.md
+           ui/AGENTS.md
      - name: Setup Node.js ${{ env.NODE_VERSION }}
        if: steps.check-changes.outputs.any_changed == 'true'

.gitignore (vendored, 4 changes)

View File

@@ -150,8 +150,10 @@ node_modules
  # Persistent data
  _data/
- # Claude
+ # AI Instructions (generated by skills/setup.sh from AGENTS.md)
  CLAUDE.md
+ GEMINI.md
+ .github/copilot-instructions.md
  # Compliance report
  *.pdf
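These ignore entries line up with the commit title: per the comment in this hunk, `skills/setup.sh` materializes `AGENTS.md` into assistant-specific instruction files. The script itself isn't shown in this diff, so the following Python sketch only illustrates the assumed copy step:

```python
# Illustrative sketch only -- the real generator is skills/setup.sh (a shell
# script not shown in this diff); the target paths are taken from the
# .gitignore entries above.
import shutil
from pathlib import Path

TARGETS = ["CLAUDE.md", "GEMINI.md", ".github/copilot-instructions.md"]

for target in TARGETS:
    Path(target).parent.mkdir(parents=True, exist_ok=True)  # ensure .github/ exists
    shutil.copyfile("AGENTS.md", target)  # mirror the canonical AGENTS.md
    print(f"synced AGENTS.md -> {target}")
```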

View File

@@ -36,11 +36,63 @@ Use these skills for detailed patterns on-demand:
  | `prowler-test-api` | API testing (pytest-django + RLS) | [SKILL.md](skills/prowler-test-api/SKILL.md) |
  | `prowler-test-ui` | E2E testing (Playwright) | [SKILL.md](skills/prowler-test-ui/SKILL.md) |
  | `prowler-compliance` | Compliance framework structure | [SKILL.md](skills/prowler-compliance/SKILL.md) |
+ | `prowler-compliance-review` | Review compliance framework PRs | [SKILL.md](skills/prowler-compliance-review/SKILL.md) |
  | `prowler-provider` | Add new cloud providers | [SKILL.md](skills/prowler-provider/SKILL.md) |
+ | `prowler-ci` | CI checks and PR gates (GitHub Actions) | [SKILL.md](skills/prowler-ci/SKILL.md) |
  | `prowler-pr` | Pull request conventions | [SKILL.md](skills/prowler-pr/SKILL.md) |
  | `prowler-docs` | Documentation style guide | [SKILL.md](skills/prowler-docs/SKILL.md) |
  | `skill-creator` | Create new AI agent skills | [SKILL.md](skills/skill-creator/SKILL.md) |
+ ### Auto-invoke Skills
+ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
+ | Action | Skill |
+ |--------|-------|
+ | Adding new providers | `prowler-provider` |
+ | Adding services to existing providers | `prowler-provider` |
+ | After creating/modifying a skill | `skill-sync` |
+ | App Router / Server Actions | `nextjs-15` |
+ | Building AI chat features | `ai-sdk-5` |
+ | Create a PR with gh pr create | `prowler-pr` |
+ | Creating Zod schemas | `zod-4` |
+ | Creating new checks | `prowler-sdk-check` |
+ | Creating new skills | `skill-creator` |
+ | Creating/modifying Prowler UI components | `prowler-ui` |
+ | Creating/modifying models, views, serializers | `prowler-api` |
+ | Creating/updating compliance frameworks | `prowler-compliance` |
+ | Debug why a GitHub Actions job is failing | `prowler-ci` |
+ | Fill .github/pull_request_template.md (Context/Description/Steps to review/Checklist) | `prowler-pr` |
+ | General Prowler development questions | `prowler` |
+ | Generic DRF patterns | `django-drf` |
+ | Inspect PR CI checks and gates (.github/workflows/*) | `prowler-ci` |
+ | Inspect PR CI workflows (.github/workflows/*): conventional-commit, pr-check-changelog, pr-conflict-checker, labeler | `prowler-pr` |
+ | Mapping checks to compliance controls | `prowler-compliance` |
+ | Mocking AWS with moto in tests | `prowler-test-sdk` |
+ | Regenerate AGENTS.md Auto-invoke tables (sync.sh) | `skill-sync` |
+ | Review PR requirements: template, title conventions, changelog gate | `prowler-pr` |
+ | Reviewing compliance framework PRs | `prowler-compliance-review` |
+ | Testing RLS tenant isolation | `prowler-test-api` |
+ | Troubleshoot why a skill is missing from AGENTS.md auto-invoke | `skill-sync` |
+ | Understand CODEOWNERS/labeler-based automation | `prowler-ci` |
+ | Understand PR title conventional-commit validation | `prowler-ci` |
+ | Understand changelog gate and no-changelog label behavior | `prowler-ci` |
+ | Understand review ownership with CODEOWNERS | `prowler-pr` |
+ | Updating existing checks and metadata | `prowler-sdk-check` |
+ | Using Zustand stores | `zustand-5` |
+ | Working on MCP server tools | `prowler-mcp` |
+ | Working on Prowler UI structure (actions/adapters/types/hooks) | `prowler-ui` |
+ | Working with Prowler UI test helpers/pages | `prowler-test-ui` |
+ | Working with Tailwind classes | `tailwind-4` |
+ | Writing Playwright E2E tests | `playwright` |
+ | Writing Prowler API tests | `prowler-test-api` |
+ | Writing Prowler SDK tests | `prowler-test-sdk` |
+ | Writing Prowler UI E2E tests | `prowler-test-ui` |
+ | Writing Python tests with pytest | `pytest` |
+ | Writing React components | `react-19` |
+ | Writing TypeScript types/interfaces | `typescript` |
+ | Writing documentation | `prowler-docs` |
  ---
  ## Project Overview

View File

@@ -6,6 +6,20 @@
  > - [`django-drf`](../skills/django-drf/SKILL.md) - Generic DRF patterns
  > - [`pytest`](../skills/pytest/SKILL.md) - Generic pytest patterns
+ ### Auto-invoke Skills
+ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
+ | Action | Skill |
+ |--------|-------|
+ | Creating/modifying models, views, serializers | `prowler-api` |
+ | Generic DRF patterns | `django-drf` |
+ | Testing RLS tenant isolation | `prowler-test-api` |
+ | Writing Prowler API tests | `prowler-test-api` |
+ | Writing Python tests with pytest | `pytest` |
+ ---
  ## CRITICAL RULES - NON-NEGOTIABLE
  ### Models

View File

@@ -128,8 +128,10 @@ flowchart TB
P5["prowler-mcp"] P5["prowler-mcp"]
P6["prowler-provider"] P6["prowler-provider"]
P7["prowler-compliance"] P7["prowler-compliance"]
P8["prowler-docs"] P8["prowler-compliance-review"]
P9["prowler-pr"] P9["prowler-docs"]
P10["prowler-pr"]
P11["prowler-ci"]
end end
subgraph TESTING["Testing Skills"] subgraph TESTING["Testing Skills"]
@@ -140,6 +142,7 @@ flowchart TB
subgraph META["Meta Skills"] subgraph META["Meta Skills"]
M1["skill-creator"] M1["skill-creator"]
M2["skill-sync"]
end end
end end
@@ -189,9 +192,9 @@ flowchart TB
| Type | Skills | | Type | Skills |
|------|--------| |------|--------|
| **Generic** | typescript, react-19, nextjs-15, tailwind-4, pytest, playwright, django-drf, zod-4, zustand-5, ai-sdk-5 | | **Generic** | typescript, react-19, nextjs-15, tailwind-4, pytest, playwright, django-drf, zod-4, zustand-5, ai-sdk-5 |
| **Prowler** | prowler, prowler-sdk-check, prowler-api, prowler-ui, prowler-mcp, prowler-provider, prowler-compliance, prowler-docs, prowler-pr | | **Prowler** | prowler, prowler-sdk-check, prowler-api, prowler-ui, prowler-mcp, prowler-provider, prowler-compliance, prowler-compliance-review, prowler-docs, prowler-pr, prowler-ci |
| **Testing** | prowler-test-sdk, prowler-test-api, prowler-test-ui | | **Testing** | prowler-test-sdk, prowler-test-api, prowler-test-ui |
| **Meta** | skill-creator | | **Meta** | skill-creator, skill-sync |
## Skill Structure ## Skill Structure

View File

@@ -7,6 +7,25 @@
  > - [`prowler-compliance`](../skills/prowler-compliance/SKILL.md) - Compliance framework structure
  > - [`pytest`](../skills/pytest/SKILL.md) - Generic pytest patterns
+ ### Auto-invoke Skills
+ When performing these actions, ALWAYS invoke the corresponding skill FIRST:
+ | Action | Skill |
+ |--------|-------|
+ | Adding new providers | `prowler-provider` |
+ | Adding services to existing providers | `prowler-provider` |
+ | Creating new checks | `prowler-sdk-check` |
+ | Creating/updating compliance frameworks | `prowler-compliance` |
+ | Mapping checks to compliance controls | `prowler-compliance` |
+ | Mocking AWS with moto in tests | `prowler-test-sdk` |
+ | Reviewing compliance framework PRs | `prowler-compliance-review` |
+ | Updating existing checks and metadata | `prowler-sdk-check` |
+ | Writing Prowler SDK tests | `prowler-test-sdk` |
+ | Writing Python tests with pytest | `pytest` |
+ ---
  ## Project Overview
  The Prowler SDK is the core Python engine powering cloud security assessments across AWS, Azure, GCP, Kubernetes, GitHub, M365, and more. It includes 1000+ security checks and 30+ compliance frameworks.

View File

@@ -83,6 +83,7 @@ Patterns tailored for Prowler development:
  | Skill | Description |
  |-------|-------------|
  | `skill-creator` | Create new AI agent skills |
+ | `skill-sync` | Sync skill metadata to AGENTS.md Auto-invoke sections |
  ## Directory Structure
@@ -96,6 +97,20 @@ skills/
  └── README.md    # This file
  ```
+ ## Why Auto-invoke Sections?
+ **Problem**: AI assistants (Claude, Gemini, etc.) don't reliably auto-invoke skills even when the `Trigger:` in the skill description matches the user's request. They treat skill suggestions as "background noise" and barrel ahead with their default approach.
+ **Solution**: The `AGENTS.md` files in each directory contain an **Auto-invoke Skills** section that explicitly commands the AI: "When performing X action, ALWAYS invoke Y skill FIRST." This is a [known workaround](https://scottspence.com/posts/claude-code-skills-dont-auto-activate) that forces the AI to load skills.
+ **Automation**: Instead of manually maintaining these sections, run `skill-sync` after creating or modifying a skill:
+ ```bash
+ ./skills/skill-sync/assets/sync.sh
+ ```
+ This reads `metadata.scope` and `metadata.auto_invoke` from each `SKILL.md` and generates the Auto-invoke tables in the corresponding `AGENTS.md` files.
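The actual generator is the `sync.sh` shell script referenced above; as a rough Python sketch of the same idea (assuming standard YAML frontmatter delimited by `---` and the `metadata.scope`/`metadata.auto_invoke` fields shown in the SKILL.md diffs below):

```python
# Rough sketch of what sync.sh automates -- not the real implementation.
# Assumes each SKILL.md opens with a YAML frontmatter block delimited by '---'.
import glob
import yaml  # pip install pyyaml

def frontmatter(path):
    """Return the parsed YAML frontmatter of a SKILL.md file."""
    text = open(path, encoding="utf-8").read()
    _, block, _ = text.split("---", 2)  # '', frontmatter, body
    return yaml.safe_load(block) or {}

rows = []
for path in sorted(glob.glob("skills/*/SKILL.md")):
    fm = frontmatter(path)
    meta = fm.get("metadata", {})
    if "root" not in meta.get("scope", []):
        continue  # only skills scoped to the root AGENTS.md
    auto = meta.get("auto_invoke")
    actions = auto if isinstance(auto, list) else [auto] if auto else []
    rows += [(action, fm["name"]) for action in actions]

print("| Action | Skill |")
print("|--------|-------|")
for action, skill in sorted(rows):
    print(f"| {action} | `{skill}` |")
```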
  ## Creating New Skills
  Use the `skill-creator` skill for guidance:
@@ -108,9 +123,11 @@ Read skills/skill-creator/SKILL.md
  1. Create directory: `skills/{skill-name}/`
  2. Add `SKILL.md` with required frontmatter
- 3. Keep content concise (under 500 lines)
- 4. Reference existing docs instead of duplicating
- 5. Add to `AGENTS.md` skills table
+ 3. Add `metadata.scope` and `metadata.auto_invoke` fields
+ 4. Keep content concise (under 500 lines)
+ 5. Reference existing docs instead of duplicating
+ 6. Run `./skills/skill-sync/assets/sync.sh` to update AGENTS.md
+ 7. Add to `AGENTS.md` skills table (if not auto-generated)
  ## Design Principles

View File

@@ -2,11 +2,13 @@
  name: ai-sdk-5
  description: >
    Vercel AI SDK 5 patterns.
-   Trigger: When building AI chat features - breaking changes from v4.
+   Trigger: When building AI features with AI SDK v5 (chat, streaming, tools/function calling, UIMessage parts), including migration from v4.
  license: Apache-2.0
  metadata:
    author: prowler-cloud
    version: "1.0"
+   scope: [root, ui]
+   auto_invoke: "Building AI chat features"
  allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
  ---

View File

@@ -2,11 +2,13 @@
  name: django-drf
  description: >
    Django REST Framework patterns.
-   Trigger: When building REST APIs with Django - ViewSets, Serializers, Filters.
+   Trigger: When implementing generic DRF APIs (ViewSets, serializers, routers, permissions, filtersets). For Prowler API specifics (RLS/JSON:API), also use prowler-api.
  license: Apache-2.0
  metadata:
    author: prowler-cloud
    version: "1.0"
+   scope: [root, api]
+   auto_invoke: "Generic DRF patterns"
  allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
  ---

View File

@@ -2,11 +2,13 @@
  name: nextjs-15
  description: >
    Next.js 15 App Router patterns.
-   Trigger: When working with Next.js - routing, Server Actions, data fetching.
+   Trigger: When working in Next.js App Router (app/), Server Components vs Client Components, Server Actions, Route Handlers, caching/revalidation, and streaming/Suspense.
  license: Apache-2.0
  metadata:
    author: prowler-cloud
    version: "1.0"
+   scope: [root, ui]
+   auto_invoke: "App Router / Server Actions"
  allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
  ---

View File

@@ -2,11 +2,13 @@
  name: playwright
  description: >
    Playwright E2E testing patterns.
-   Trigger: When writing E2E tests - Page Objects, selectors, MCP workflow.
+   Trigger: When writing Playwright E2E tests (Page Object Model, selectors, MCP exploration workflow). For Prowler-specific UI conventions under ui/tests, also use prowler-test-ui.
  license: Apache-2.0
  metadata:
    author: prowler-cloud
    version: "1.0"
+   scope: [root, ui]
+   auto_invoke: "Writing Playwright E2E tests"
  allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
  ---

View File

@@ -1,12 +1,14 @@
  ---
  name: prowler-api
  description: >
-   Prowler API patterns: RLS, RBAC, providers, Celery tasks.
+   Prowler API patterns: JSON:API, RLS, RBAC, providers, Celery tasks.
-   Trigger: When working on api/ - models, serializers, views, filters, tasks.
+   Trigger: When working in api/ on models/serializers/viewsets/filters/tasks involving tenant isolation (RLS), RBAC, JSON:API, or provider lifecycle.
  license: Apache-2.0
  metadata:
    author: prowler-cloud
    version: "1.0"
+   scope: [root, api]
+   auto_invoke: "Creating/modifying models, views, serializers"
  allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
  ---

View File

@@ -0,0 +1,52 @@
---
name: prowler-ci
description: >
Helps with Prowler repository CI and PR gates (GitHub Actions workflows).
Trigger: When investigating CI checks failing on a PR, PR title validation, changelog gate/no-changelog label,
conflict marker checks, secret scanning, CODEOWNERS/labeler automation, or anything under .github/workflows.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root]
auto_invoke:
- "Inspect PR CI checks and gates (.github/workflows/*)"
- "Debug why a GitHub Actions job is failing"
- "Understand changelog gate and no-changelog label behavior"
- "Understand PR title conventional-commit validation"
- "Understand CODEOWNERS/labeler-based automation"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash
---
## What this skill covers
Use this skill whenever you are:
- Reading or changing GitHub Actions workflows under `.github/workflows/`
- Explaining why a PR fails checks (title, changelog, conflict markers, secret scanning)
- Figuring out which workflows run for UI/API/SDK changes and why
- Diagnosing path-filtering behavior (why a workflow did/didn't run)
## Quick map (where to look)
- PR template: `.github/pull_request_template.md`
- PR title validation: `.github/workflows/conventional-commit.yml`
- Changelog gate: `.github/workflows/pr-check-changelog.yml`
- Conflict markers check: `.github/workflows/pr-conflict-checker.yml`
- Secret scanning: `.github/workflows/find-secrets.yml`
- Auto labels: `.github/workflows/labeler.yml` and `.github/labeler.yml`
- Review ownership: `.github/CODEOWNERS`
## Debug checklist (PR failing checks)
1. Identify which workflow/job is failing (name + file under `.github/workflows/`).
2. Check path filters: is the workflow supposed to run for your changed files? (see the sketch after this checklist)
3. If it's a title check: verify PR title matches Conventional Commits.
4. If it's changelog: verify the right `CHANGELOG.md` is updated OR apply `no-changelog` label.
5. If it's conflict checker: remove `<<<<<<<`, `=======`, `>>>>>>>` markers.
6. If it's secrets: remove credentials and rotate anything leaked.
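For step 2, a throwaway helper (hypothetical, not part of the repo) can dump the `paths`/`paths-ignore` filters a workflow declares in its `on:` triggers, which follow the standard GitHub Actions schema:

```python
# Hypothetical helper: print the path filters declared by a workflow's
# `on:` triggers. Usage: python show_paths.py .github/workflows/build.yml
import sys
import yaml  # pip install pyyaml

with open(sys.argv[1], encoding="utf-8") as f:
    workflow = yaml.safe_load(f)

# PyYAML parses the bare key `on` as the boolean True, so check both.
triggers = workflow.get("on") or workflow.get(True) or {}
if not isinstance(triggers, dict):
    sys.exit(f"no mapping-style triggers in {sys.argv[1]}")

for event, config in triggers.items():
    if isinstance(config, dict):
        for key in ("paths", "paths-ignore"):
            if key in config:
                print(f"{event}.{key}:")
                for pattern in config[key]:
                    print(f"  {pattern}")
```

Note that several workflows in this commit filter at the step level instead (the `files:`/`files_ignore:` inputs shown in the hunks above), so check both places.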
## Notes
- Keep `prowler-pr` focused on *creating* PRs and filling the template.
- Use `prowler-ci` for *CI policies and gates* that apply to PRs.

View File

@@ -0,0 +1,189 @@
---
name: prowler-compliance-review
description: >
Reviews Pull Requests that add or modify compliance frameworks.
Trigger: When reviewing PRs with compliance framework changes, CIS/NIST/PCI-DSS additions, or compliance JSON files.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, sdk]
auto_invoke: "Reviewing compliance framework PRs"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---
## When to Use
- Reviewing PRs that add new compliance frameworks
- Reviewing PRs that modify existing compliance frameworks
- Validating compliance framework JSON structure before merge
---
## Review Checklist (Critical)
| Check | Command/Method | Pass Criteria |
|-------|----------------|---------------|
| JSON Valid | `python3 -m json.tool file.json` | No syntax errors |
| All Checks Exist | Run validation script | 0 missing checks |
| No Duplicate IDs | Run validation script | 0 duplicate requirement IDs |
| CHANGELOG Entry | Manual review | Present under correct version |
| Dashboard File | Compare with existing | Follows established pattern |
| Framework Metadata | Manual review | All required fields populated |
---
## Commands
```bash
# 1. Validate JSON syntax
python3 -m json.tool prowler/compliance/{provider}/{framework}.json > /dev/null \
&& echo "Valid JSON" || echo "INVALID JSON"
# 2. Run full validation script
python3 skills/prowler-compliance-review/assets/validate_compliance.py \
prowler/compliance/{provider}/{framework}.json
# 3. Compare dashboard with existing (find similar framework)
diff dashboard/compliance/{new_framework}.py \
dashboard/compliance/{existing_framework}.py
```
---
## Decision Tree
```
JSON Valid?
├── No → FAIL: Fix JSON syntax errors
└── Yes ↓
All Checks Exist in Codebase?
├── Missing checks → FAIL: Add missing checks or remove from framework
└── All exist ↓
Duplicate Requirement IDs?
├── Yes → FAIL: Fix duplicate IDs
└── No ↓
CHANGELOG Entry Present?
├── No → REQUEST CHANGES: Add CHANGELOG entry
└── Yes ↓
Dashboard File Follows Pattern?
├── No → REQUEST CHANGES: Fix dashboard pattern
└── Yes ↓
Framework Metadata Complete?
├── No → REQUEST CHANGES: Add missing metadata
└── Yes → APPROVE
```
---
## Framework Structure Reference
Compliance frameworks are JSON files in: `prowler/compliance/{provider}/{framework}.json`
```json
{
"Framework": "CIS",
"Name": "CIS Provider Benchmark vX.Y.Z",
"Version": "X.Y",
"Provider": "AWS|Azure|GCP|...",
"Description": "Framework description...",
"Requirements": [
{
"Id": "1.1",
"Description": "Requirement description",
"Checks": ["check_name_1", "check_name_2"],
"Attributes": [
{
"Section": "1 Section Name",
"SubSection": "1.1 Subsection (optional)",
"Profile": "Level 1|Level 2",
"AssessmentStatus": "Automated|Manual",
"Description": "...",
"RationaleStatement": "...",
"ImpactStatement": "...",
"RemediationProcedure": "...",
"AuditProcedure": "...",
"AdditionalInformation": "...",
"References": "...",
"DefaultValue": "..."
}
]
}
]
}
```
---
## Common Issues
| Issue | How to Detect | Resolution |
|-------|---------------|------------|
| Missing checks | Validation script reports missing | Add check implementation or remove from Checks array |
| Duplicate IDs | Validation script reports duplicates | Ensure each requirement has unique ID |
| Empty Checks for Automated | AssessmentStatus is Automated but Checks is empty | Add checks or change to Manual |
| Wrong file location | Framework not in `prowler/compliance/{provider}/` | Move to correct directory |
| Missing dashboard file | No corresponding `dashboard/compliance/{framework}.py` | Create dashboard file following pattern |
| CHANGELOG missing | Not under correct version section | Add entry to prowler/CHANGELOG.md |
---
## Dashboard File Pattern
Dashboard files must be in `dashboard/compliance/` and follow this exact pattern:
```python
import warnings
from dashboard.common_methods import get_section_containers_cis
warnings.filterwarnings("ignore")
def get_table(data):
aux = data[
[
"REQUIREMENTS_ID",
"REQUIREMENTS_DESCRIPTION",
"REQUIREMENTS_ATTRIBUTES_SECTION",
"CHECKID",
"STATUS",
"REGION",
"ACCOUNTID",
"RESOURCEID",
]
].copy()
return get_section_containers_cis(
aux, "REQUIREMENTS_ID", "REQUIREMENTS_ATTRIBUTES_SECTION"
)
```
---
## Testing the Compliance Framework
After validation passes, test the framework with Prowler:
```bash
# Verify framework is detected
poetry run python prowler-cli.py {provider} --list-compliance | grep {framework}
# Run a quick test with a single check from the framework
poetry run python prowler-cli.py {provider} --compliance {framework} --check {check_name}
# Run full compliance scan (dry-run with limited checks)
poetry run python prowler-cli.py {provider} --compliance {framework} --checks-limit 5
# Generate compliance report in multiple formats
poetry run python prowler-cli.py {provider} --compliance {framework} -M csv json html
```
---
## Resources
- **Validation Script**: See [assets/validate_compliance.py](assets/validate_compliance.py)
- **Related Skills**: See [prowler-compliance](../prowler-compliance/SKILL.md) for creating frameworks
- **Documentation**: See [references/review-checklist.md](references/review-checklist.md)

View File

@@ -0,0 +1,234 @@
#!/usr/bin/env python3
"""
Prowler Compliance Framework Validator
Validates compliance framework JSON files for:
- JSON syntax validity
- Check existence in codebase
- Duplicate requirement IDs
- Required field completeness
- Assessment status consistency
Usage:
python validate_compliance.py <path_to_compliance_json>
Example:
python validate_compliance.py prowler/compliance/azure/cis_5.0_azure.json
"""
import json
import os
import sys
from pathlib import Path
def find_project_root():
"""Find the Prowler project root directory."""
current = Path(__file__).resolve()
for parent in current.parents:
if (parent / "prowler" / "providers").exists():
return parent
return None
def get_existing_checks(project_root: Path, provider: str) -> set:
"""Find all existing checks for a provider in the codebase."""
checks = set()
services_path = (
project_root / "prowler" / "providers" / provider.lower() / "services"
)
if not services_path.exists():
return checks
for service_dir in services_path.iterdir():
if service_dir.is_dir() and not service_dir.name.startswith("__"):
for check_dir in service_dir.iterdir():
if check_dir.is_dir() and not check_dir.name.startswith("__"):
check_file = check_dir / f"{check_dir.name}.py"
if check_file.exists():
checks.add(check_dir.name)
return checks
def validate_compliance_framework(json_path: str) -> dict:
"""Validate a compliance framework JSON file."""
results = {"valid": True, "errors": [], "warnings": [], "stats": {}}
# 1. Check file exists
if not os.path.exists(json_path):
results["valid"] = False
results["errors"].append(f"File not found: {json_path}")
return results
# 2. Validate JSON syntax
try:
with open(json_path, "r") as f:
data = json.load(f)
except json.JSONDecodeError as e:
results["valid"] = False
results["errors"].append(f"Invalid JSON syntax: {e}")
return results
# 3. Check required top-level fields
required_fields = [
"Framework",
"Name",
"Version",
"Provider",
"Description",
"Requirements",
]
for field in required_fields:
if field not in data:
results["valid"] = False
results["errors"].append(f"Missing required field: {field}")
if not results["valid"]:
return results
# 4. Extract provider
provider = data.get("Provider", "").lower()
# 5. Find project root and existing checks
project_root = find_project_root()
if project_root:
existing_checks = get_existing_checks(project_root, provider)
else:
existing_checks = set()
results["warnings"].append(
"Could not find project root - skipping check existence validation"
)
# 6. Validate requirements
requirements = data.get("Requirements", [])
all_checks = set()
requirement_ids = []
automated_count = 0
manual_count = 0
empty_automated = []
for req in requirements:
req_id = req.get("Id", "UNKNOWN")
requirement_ids.append(req_id)
# Collect checks
checks = req.get("Checks", [])
all_checks.update(checks)
# Check assessment status
attributes = req.get("Attributes", [{}])
if attributes:
status = attributes[0].get("AssessmentStatus", "Unknown")
if status == "Automated":
automated_count += 1
if not checks:
empty_automated.append(req_id)
elif status == "Manual":
manual_count += 1
# 7. Check for duplicate IDs
seen_ids = set()
duplicates = []
for req_id in requirement_ids:
if req_id in seen_ids:
duplicates.append(req_id)
seen_ids.add(req_id)
if duplicates:
results["valid"] = False
results["errors"].append(f"Duplicate requirement IDs: {duplicates}")
# 8. Check for missing checks
if existing_checks:
missing_checks = all_checks - existing_checks
if missing_checks:
results["valid"] = False
results["errors"].append(
f"Missing checks in codebase ({len(missing_checks)}): {sorted(missing_checks)}"
)
# 9. Warn about empty automated
if empty_automated:
results["warnings"].append(
f"Automated requirements with no checks: {empty_automated}"
)
# 10. Compile statistics
results["stats"] = {
"framework": data.get("Framework"),
"name": data.get("Name"),
"version": data.get("Version"),
"provider": data.get("Provider"),
"total_requirements": len(requirements),
"automated_requirements": automated_count,
"manual_requirements": manual_count,
"unique_checks_referenced": len(all_checks),
"checks_found_in_codebase": len(all_checks - (all_checks - existing_checks))
if existing_checks
else "N/A",
"missing_checks": len(all_checks - existing_checks)
if existing_checks
else "N/A",
}
return results
def print_report(results: dict):
"""Print a formatted validation report."""
print("\n" + "=" * 60)
print("PROWLER COMPLIANCE FRAMEWORK VALIDATION REPORT")
print("=" * 60)
stats = results.get("stats", {})
if stats:
print(f"\nFramework: {stats.get('name', 'N/A')}")
print(f"Provider: {stats.get('provider', 'N/A')}")
print(f"Version: {stats.get('version', 'N/A')}")
print("-" * 40)
print(f"Total Requirements: {stats.get('total_requirements', 0)}")
print(f" - Automated: {stats.get('automated_requirements', 0)}")
print(f" - Manual: {stats.get('manual_requirements', 0)}")
print(f"Unique Checks: {stats.get('unique_checks_referenced', 0)}")
print(f"Checks in Codebase: {stats.get('checks_found_in_codebase', 'N/A')}")
print(f"Missing Checks: {stats.get('missing_checks', 'N/A')}")
print("\n" + "-" * 40)
if results["errors"]:
print("\nERRORS:")
for error in results["errors"]:
print(f" [X] {error}")
if results["warnings"]:
print("\nWARNINGS:")
for warning in results["warnings"]:
print(f" [!] {warning}")
print("\n" + "-" * 40)
if results["valid"]:
print("RESULT: PASS - Framework is valid")
else:
print("RESULT: FAIL - Framework has errors")
print("=" * 60 + "\n")
def main():
if len(sys.argv) < 2:
print("Usage: python validate_compliance.py <path_to_compliance_json>")
print(
"Example: python validate_compliance.py prowler/compliance/azure/cis_5.0_azure.json"
)
sys.exit(1)
json_path = sys.argv[1]
results = validate_compliance_framework(json_path)
print_report(results)
sys.exit(0 if results["valid"] else 1)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,57 @@
# Compliance PR Review References
## Related Skills
- [prowler-compliance](../../prowler-compliance/SKILL.md) - Creating compliance frameworks
- [prowler-pr](../../prowler-pr/SKILL.md) - PR conventions and checklist
## Documentation
- [Prowler Developer Guide](https://docs.prowler.com/developer-guide/introduction)
- [Compliance Framework Structure](https://docs.prowler.com/developer-guide/compliance)
## File Locations
| File Type | Location |
|-----------|----------|
| Compliance JSON | `prowler/compliance/{provider}/{framework}.json` |
| Dashboard | `dashboard/compliance/{framework}_{provider}.py` |
| CHANGELOG | `prowler/CHANGELOG.md` |
| Checks | `prowler/providers/{provider}/services/{service}/{check}/` |
## Validation Script
Run the validation script from the project root:
```bash
python3 skills/prowler-compliance-review/assets/validate_compliance.py \
prowler/compliance/{provider}/{framework}.json
```
## PR Review Summary Template
When completing a compliance framework review, use this summary format:
```markdown
## Compliance Framework Review Summary
| Check | Result |
|-------|--------|
| JSON Valid | PASS/FAIL |
| All Checks Exist | PASS/FAIL (N missing) |
| No Duplicate IDs | PASS/FAIL |
| CHANGELOG Entry | PASS/FAIL |
| Dashboard File | PASS/FAIL |
### Statistics
- Total Requirements: N
- Automated: N
- Manual: N
- Unique Checks: N
### Recommendation
APPROVE / REQUEST CHANGES / FAIL
### Issues Found
1. ...
```

View File

@@ -2,11 +2,15 @@
  name: prowler-compliance
  description: >
    Creates and manages Prowler compliance frameworks.
-   Trigger: When working with compliance frameworks (CIS, NIST, PCI-DSS, SOC2, GDPR).
+   Trigger: When working with compliance frameworks (CIS, NIST, PCI-DSS, SOC2, GDPR, ISO27001, ENS, MITRE ATT&CK).
  license: Apache-2.0
  metadata:
    author: prowler-cloud
-   version: "1.0"
+   version: "1.1"
+   scope: [root, sdk]
+   auto_invoke:
+     - "Creating/updating compliance frameworks"
+     - "Mapping checks to compliance controls"
  allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
  ---
@@ -16,98 +20,472 @@ Use this skill when:
  - Creating a new compliance framework for any provider
  - Adding requirements to existing frameworks
  - Mapping checks to compliance controls
+ - Understanding compliance framework structures and attributes
- ## Compliance Framework Structure
+ ## Compliance Framework Location
- Frameworks are JSON files in: `prowler/compliance/{provider}/{framework}.json`
+ Frameworks are JSON files located in: `prowler/compliance/{provider}/{framework_name}_{provider}.json`
+ **Supported Providers:**
+ - `aws` - Amazon Web Services
+ - `azure` - Microsoft Azure
+ - `gcp` - Google Cloud Platform
+ - `kubernetes` - Kubernetes
+ - `github` - GitHub
+ - `m365` - Microsoft 365
+ - `alibabacloud` - Alibaba Cloud
+ - `oraclecloud` - Oracle Cloud
+ - `oci` - Oracle Cloud Infrastructure
+ - `nhn` - NHN Cloud
+ - `mongodbatlas` - MongoDB Atlas
+ - `iac` - Infrastructure as Code
+ - `llm` - Large Language Models
+ ## Base Framework Structure
+ All compliance frameworks share this base structure:
  ```json
  {
-   "Framework": "CIS",
+   "Framework": "FRAMEWORK_NAME",
-   "Name": "CIS Amazon Web Services Foundations Benchmark v2.0.0",
+   "Name": "Full Framework Name with Version",
-   "Version": "2.0",
+   "Version": "X.X",
-   "Provider": "AWS",
+   "Provider": "PROVIDER",
-   "Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance...",
+   "Description": "Framework description...",
    "Requirements": [
      {
-       "Id": "1.1",
+       "Id": "requirement_id",
-       "Name": "Requirement name",
+       "Description": "Requirement description",
-       "Description": "Detailed description of the requirement",
+       "Name": "Optional requirement name",
-       "Attributes": [
-         {
-           "Section": "1. Identity and Access Management",
-           "Profile": "Level 1",
-           "AssessmentStatus": "Automated",
-           "Description": "Attribute description"
-         }
-       ],
+       "Attributes": [...],
        "Checks": ["check_name_1", "check_name_2"]
      }
    ]
  }
  ```
- ## Supported Frameworks
+ ## Framework-Specific Attribute Structures
- **Industry standards:**
+ Each framework type has its own attribute model. Below are the exact structures used by Prowler:
- - CIS (Center for Internet Security)
- - NIST 800-53, NIST CSF
- - CISA
- **Regulatory compliance:**
+ ### CIS (Center for Internet Security)
- - PCI-DSS
- - HIPAA
- - GDPR
- - FedRAMP
- - SOC2
- **Cloud-specific:**
+ **Framework ID format:** `cis_{version}_{provider}` (e.g., `cis_5.0_aws`)
- - AWS Well-Architected Framework (Security Pillar)
- - AWS Foundational Technical Review (FTR)
- - Azure Security Benchmark
- - GCP Security Best Practices
- ## Framework Requirement Mapping
- Each requirement maps to one or more checks:
  ```json
  {
-   "Id": "2.1.1",
+   "Id": "1.1",
-   "Name": "Ensure MFA is enabled for all IAM users",
+   "Description": "Maintain current contact details",
-   "Description": "Multi-Factor Authentication adds an extra layer of protection...",
+   "Checks": ["account_maintain_current_contact_details"],
-   "Checks": [
+   "Attributes": [
-     "iam_user_mfa_enabled",
+     {
-     "iam_root_mfa_enabled",
+       "Section": "1 Identity and Access Management",
-     "iam_user_hardware_mfa_enabled"
+       "SubSection": "Optional subsection",
+       "Profile": "Level 1",
+       "AssessmentStatus": "Automated",
+       "Description": "Detailed attribute description",
+       "RationaleStatement": "Why this control matters",
+       "ImpactStatement": "Impact of implementing this control",
+       "RemediationProcedure": "Steps to fix the issue",
+       "AuditProcedure": "Steps to verify compliance",
+       "AdditionalInformation": "Extra notes",
+       "DefaultValue": "Default configuration value",
+       "References": "https://docs.example.com/reference"
+     }
    ]
  }
  ```
**Profile values:** `Level 1`, `Level 2`, `E3 Level 1`, `E3 Level 2`, `E5 Level 1`, `E5 Level 2`
**AssessmentStatus values:** `Automated`, `Manual`
---
### ISO 27001
**Framework ID format:** `iso27001_{year}_{provider}` (e.g., `iso27001_2022_aws`)
```json
{
"Id": "A.5.1",
"Description": "Policies for information security should be defined...",
"Name": "Policies for information security",
"Checks": ["securityhub_enabled"],
"Attributes": [
{
"Category": "A.5 Organizational controls",
"Objetive_ID": "A.5.1",
"Objetive_Name": "Policies for information security",
"Check_Summary": "Summary of what is being checked"
}
]
}
```
**Note:** `Objetive_ID` and `Objetive_Name` use this exact spelling (not "Objective").
---
### ENS (Esquema Nacional de Seguridad - Spain)
**Framework ID format:** `ens_rd2022_{provider}` (e.g., `ens_rd2022_aws`)
```json
{
"Id": "op.acc.1.aws.iam.2",
"Description": "Proveedor de identidad centralizado",
"Checks": ["iam_check_saml_providers_sts"],
"Attributes": [
{
"IdGrupoControl": "op.acc.1",
"Marco": "operacional",
"Categoria": "control de acceso",
"DescripcionControl": "Detailed control description in Spanish",
"Nivel": "alto",
"Tipo": "requisito",
"Dimensiones": ["trazabilidad", "autenticidad"],
"ModoEjecucion": "automatico",
"Dependencias": []
}
]
}
```
**Nivel values:** `opcional`, `bajo`, `medio`, `alto`
**Tipo values:** `refuerzo`, `requisito`, `recomendacion`, `medida`
**Dimensiones values:** `confidencialidad`, `integridad`, `trazabilidad`, `autenticidad`, `disponibilidad`
---
### MITRE ATT&CK
**Framework ID format:** `mitre_attack_{provider}` (e.g., `mitre_attack_aws`)
MITRE uses a different requirement structure:
```json
{
"Name": "Exploit Public-Facing Application",
"Id": "T1190",
"Tactics": ["Initial Access"],
"SubTechniques": [],
"Platforms": ["Containers", "IaaS", "Linux", "Network", "Windows", "macOS"],
"Description": "Adversaries may attempt to exploit a weakness...",
"TechniqueURL": "https://attack.mitre.org/techniques/T1190/",
"Checks": ["guardduty_is_enabled", "inspector2_is_enabled"],
"Attributes": [
{
"AWSService": "Amazon GuardDuty",
"Category": "Detect",
"Value": "Minimal",
"Comment": "Explanation of how this service helps..."
}
]
}
```
**For Azure:** Use `AzureService` instead of `AWSService`
**For GCP:** Use `GCPService` instead of `AWSService`
**Category values:** `Detect`, `Protect`, `Respond`
**Value values:** `Minimal`, `Partial`, `Significant`
---
### NIST 800-53
**Framework ID format:** `nist_800_53_revision_{version}_{provider}` (e.g., `nist_800_53_revision_5_aws`)
```json
{
"Id": "ac_2_1",
"Name": "AC-2(1) Automated System Account Management",
"Description": "Support the management of system accounts...",
"Checks": ["iam_password_policy_minimum_length_14"],
"Attributes": [
{
"ItemId": "ac_2_1",
"Section": "Access Control (AC)",
"SubSection": "Account Management (AC-2)",
"SubGroup": "AC-2(3) Disable Accounts",
"Service": "iam"
}
]
}
```
---
### Generic Compliance (Fallback)
For frameworks without specific attribute models:
```json
{
"Id": "requirement_id",
"Description": "Requirement description",
"Name": "Optional name",
"Checks": ["check_name"],
"Attributes": [
{
"ItemId": "item_id",
"Section": "Section name",
"SubSection": "Subsection name",
"SubGroup": "Subgroup name",
"Service": "service_name",
"Type": "type"
}
]
}
```
---
### AWS Well-Architected Framework
**Framework ID format:** `aws_well_architected_framework_{pillar}_pillar_aws`
```json
{
"Id": "SEC01-BP01",
"Description": "Establish common guardrails...",
"Name": "Establish common guardrails",
"Checks": ["account_part_of_organizations"],
"Attributes": [
{
"Name": "Establish common guardrails",
"WellArchitectedQuestionId": "securely-operate",
"WellArchitectedPracticeId": "sec_securely_operate_multi_accounts",
"Section": "Security",
"SubSection": "Security foundations",
"LevelOfRisk": "High",
"AssessmentMethod": "Automated",
"Description": "Detailed description",
"ImplementationGuidanceUrl": "https://docs.aws.amazon.com/..."
}
]
}
```
---
### KISA ISMS-P (Korea)
**Framework ID format:** `kisa_isms_p_{year}_{provider}` (e.g., `kisa_isms_p_2023_aws`)
```json
{
"Id": "1.1.1",
"Description": "Requirement description",
"Name": "Requirement name",
"Checks": ["check_name"],
"Attributes": [
{
"Domain": "1. Management System",
"Subdomain": "1.1 Management System Establishment",
"Section": "1.1.1 Section Name",
"AuditChecklist": ["Checklist item 1", "Checklist item 2"],
"RelatedRegulations": ["Regulation 1"],
"AuditEvidence": ["Evidence type 1"],
"NonComplianceCases": ["Non-compliance example"]
}
]
}
```
---
### C5 (Germany Cloud Computing Compliance Criteria Catalogue)
**Framework ID format:** `c5_{provider}` (e.g., `c5_aws`)
```json
{
"Id": "BCM-01",
"Description": "Requirement description",
"Name": "Requirement name",
"Checks": ["check_name"],
"Attributes": [
{
"Section": "BCM Business Continuity Management",
"SubSection": "BCM-01",
"Type": "Basic Criteria",
"AboutCriteria": "Description of criteria",
"ComplementaryCriteria": "Additional criteria"
}
]
}
```
---
### CCC (Cloud Computing Compliance)
**Framework ID format:** `ccc_{provider}` (e.g., `ccc_aws`)
```json
{
"Id": "CCC.C01",
"Description": "Requirement description",
"Name": "Requirement name",
"Checks": ["check_name"],
"Attributes": [
{
"FamilyName": "Cryptography & Key Management",
"FamilyDescription": "Family description",
"Section": "CCC.C01",
"SubSection": "Key Management",
"SubSectionObjective": "Objective description",
"Applicability": ["IaaS", "PaaS", "SaaS"],
"Recommendation": "Recommended action",
"SectionThreatMappings": [{"threat": "T1190"}],
"SectionGuidelineMappings": [{"guideline": "NIST"}]
}
]
}
```
---
### Prowler ThreatScore
**Framework ID format:** `prowler_threatscore_{provider}` (e.g., `prowler_threatscore_aws`)
Prowler ThreatScore is a custom security scoring framework developed by Prowler that evaluates account security based on **four main pillars**:
| Pillar | Description |
|--------|-------------|
| **1. IAM** | Identity and Access Management controls (authentication, authorization, credentials) |
| **2. Attack Surface** | Network exposure, public resources, security group rules |
| **3. Logging and Monitoring** | Audit logging, threat detection, forensic readiness |
| **4. Encryption** | Data at rest and in transit encryption |
**Scoring System:**
- **LevelOfRisk** (1-5): Severity of the security issue
- `5` = Critical (e.g., root MFA, public S3 buckets)
- `4` = High (e.g., user MFA, public EC2)
- `3` = Medium (e.g., password policies, encryption)
- `2` = Low
- `1` = Informational
- **Weight**: Impact multiplier for score calculation (see the toy aggregation sketch after the JSON example below)
- `1000` = Critical controls (root security, public exposure)
- `100` = High-impact controls (user authentication, monitoring)
- `10` = Standard controls (password policies, encryption)
- `1` = Low-impact controls (best practices)
```json
{
"Id": "1.1.1",
"Description": "Ensure MFA is enabled for the 'root' user account",
"Checks": ["iam_root_mfa_enabled"],
"Attributes": [
{
"Title": "MFA enabled for 'root'",
"Section": "1. IAM",
"SubSection": "1.1 Authentication",
"AttributeDescription": "The root user account holds the highest level of privileges within an AWS account. Enabling MFA enhances security by adding an additional layer of protection.",
"AdditionalInformation": "Enabling MFA enhances console security by requiring the authenticating user to both possess a time-sensitive key-generating device and have knowledge of their credentials.",
"LevelOfRisk": 5,
"Weight": 1000
}
]
}
```
**Available for providers:** AWS, Kubernetes, M365
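The exact aggregation formula isn't documented in this skill, so the following is only a toy illustration of how `Weight` and `LevelOfRisk` could roll findings up into a 0-100 score (all names and numbers here are hypothetical):

```python
# Toy illustration only: the real ThreatScore aggregation is not specified in
# this SKILL.md. Each finding carries the Weight and LevelOfRisk of its
# requirement, plus whether the underlying check passed.
def weighted_score(findings):
    total = sum(f["weight"] * f["level_of_risk"] for f in findings)
    passed = sum(f["weight"] * f["level_of_risk"] for f in findings if f["passed"])
    return 100.0 * passed / total if total else 100.0

findings = [
    {"weight": 1000, "level_of_risk": 5, "passed": False},  # root MFA missing
    {"weight": 100,  "level_of_risk": 4, "passed": True},   # user MFA enabled
    {"weight": 10,   "level_of_risk": 3, "passed": True},   # password policy ok
]
print(f"{weighted_score(findings):.1f}")  # 7.9 -- the critical failure dominates
```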
---
## Available Compliance Frameworks
### AWS (41 frameworks)
| Framework | File Name |
|-----------|-----------|
| CIS 1.4, 1.5, 2.0, 3.0, 4.0, 5.0 | `cis_{version}_aws.json` |
| ISO 27001:2013, 2022 | `iso27001_{year}_aws.json` |
| NIST 800-53 Rev 4, 5 | `nist_800_53_revision_{version}_aws.json` |
| NIST 800-171 Rev 2 | `nist_800_171_revision_2_aws.json` |
| NIST CSF 1.1, 2.0 | `nist_csf_{version}_aws.json` |
| PCI DSS 3.2.1, 4.0 | `pci_{version}_aws.json` |
| HIPAA | `hipaa_aws.json` |
| GDPR | `gdpr_aws.json` |
| SOC 2 | `soc2_aws.json` |
| FedRAMP Low/Moderate | `fedramp_{level}_revision_4_aws.json` |
| ENS RD2022 | `ens_rd2022_aws.json` |
| MITRE ATT&CK | `mitre_attack_aws.json` |
| C5 Germany | `c5_aws.json` |
| CISA | `cisa_aws.json` |
| FFIEC | `ffiec_aws.json` |
| RBI Cyber Security | `rbi_cyber_security_framework_aws.json` |
| AWS Well-Architected | `aws_well_architected_framework_{pillar}_pillar_aws.json` |
| AWS FTR | `aws_foundational_technical_review_aws.json` |
| GxP 21 CFR Part 11, EU Annex 11 | `gxp_{standard}_aws.json` |
| KISA ISMS-P 2023 | `kisa_isms_p_2023_aws.json` |
| NIS2 | `nis2_aws.json` |
### Azure (15+ frameworks)
| Framework | File Name |
|-----------|-----------|
| CIS 2.0, 2.1, 3.0, 4.0 | `cis_{version}_azure.json` |
| ISO 27001:2022 | `iso27001_2022_azure.json` |
| ENS RD2022 | `ens_rd2022_azure.json` |
| MITRE ATT&CK | `mitre_attack_azure.json` |
| PCI DSS 4.0 | `pci_4.0_azure.json` |
| NIST CSF 2.0 | `nist_csf_2.0_azure.json` |
### GCP (15+ frameworks)
| Framework | File Name |
|-----------|-----------|
| CIS 2.0, 3.0, 4.0 | `cis_{version}_gcp.json` |
| ISO 27001:2022 | `iso27001_2022_gcp.json` |
| HIPAA | `hipaa_gcp.json` |
| MITRE ATT&CK | `mitre_attack_gcp.json` |
| PCI DSS 4.0 | `pci_4.0_gcp.json` |
| NIST CSF 2.0 | `nist_csf_2.0_gcp.json` |
### Kubernetes (6 frameworks)
| Framework | File Name |
|-----------|-----------|
| CIS 1.8, 1.10, 1.11 | `cis_{version}_kubernetes.json` |
| ISO 27001:2022 | `iso27001_2022_kubernetes.json` |
| PCI DSS 4.0 | `pci_4.0_kubernetes.json` |
### Other Providers
- **GitHub:** `cis_1.0_github.json`
- **M365:** `cis_4.0_m365.json`, `iso27001_2022_m365.json`
- **NHN:** `iso27001_2022_nhn.json`
  ## Best Practices
- 1. **Requirement IDs**: Follow the original framework numbering (e.g., "1.1", "2.3.4")
- 2. **Check Mapping**: Map to existing checks when possible, create new checks only if needed
- 3. **Completeness**: Include all framework requirements, even if no check exists (document as manual)
- 4. **Version Control**: Include framework version in the name and file
+ 1. **Requirement IDs**: Follow the original framework numbering exactly (e.g., "1.1", "A.5.1", "T1190", "ac_2_1")
+ 2. **Check Mapping**: Map to existing checks when possible. Use `Checks: []` for manual-only requirements
+ 3. **Completeness**: Include all framework requirements, even those without automated checks
+ 4. **Version Control**: Include framework version in `Name` and `Version` fields
+ 5. **File Naming**: Use format `{framework}_{version}_{provider}.json`
+ 6. **Validation**: Prowler validates JSON against Pydantic models at startup - invalid JSON will cause errors (a minimal model sketch follows this list)
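Regarding item 6, the real models live in `prowler/lib/check/compliance_models.py` (see Code References below); the following is only a minimal sketch of the idea, with field names taken from the base structure above rather than from the actual models:

```python
# Minimal sketch, not the actual Prowler models (those are in
# prowler/lib/check/compliance_models.py). Field names mirror the base
# framework structure shown earlier in this skill.
import json
from pydantic import BaseModel

class Requirement(BaseModel):
    Id: str
    Description: str
    Checks: list[str] = []

class Framework(BaseModel):
    Framework: str
    Name: str
    Version: str
    Provider: str
    Description: str
    Requirements: list[Requirement]

with open("prowler/compliance/aws/cis_5.0_aws.json", encoding="utf-8") as f:
    framework = Framework(**json.load(f))  # raises ValidationError if malformed
print(f"{framework.Name}: {len(framework.Requirements)} requirements")
```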
  ## Commands
  ```bash
  # List available frameworks for a provider
- poetry run python prowler-cli.py {provider} --list-compliance
+ prowler {provider} --list-compliance
  # Run scan with specific compliance framework
- poetry run python prowler-cli.py {provider} --compliance {framework}
+ prowler aws --compliance cis_5.0_aws
  # Run scan with multiple frameworks
- poetry run python prowler-cli.py {provider} --compliance cis_aws_benchmark_v2 pci_dss_3.2.1
+ prowler aws --compliance cis_5.0_aws pci_4.0_aws
- # Output compliance report
- poetry run python prowler-cli.py {provider} --compliance {framework} -M csv json html
+ # Output compliance report in multiple formats
+ prowler aws --compliance cis_5.0_aws -M csv json html
  ```
+ ## Code References
+ - **Compliance Models:** `prowler/lib/check/compliance_models.py`
+ - **Compliance Processing:** `prowler/lib/check/compliance.py`
+ - **Compliance Output:** `prowler/lib/outputs/compliance/`
  ## Resources
- - **Templates**: See [assets/](assets/) for complete CIS framework JSON template
- - **Documentation**: See [references/compliance-docs.md](references/compliance-docs.md) for official Prowler Developer Guide links
+ - **Templates:** See [assets/](assets/) for framework JSON templates
+ - **Documentation:** See [references/compliance-docs.md](references/compliance-docs.md) for additional resources

View File

@@ -3,7 +3,7 @@
"Name": "CIS Amazon Web Services Foundations Benchmark v5.0.0", "Name": "CIS Amazon Web Services Foundations Benchmark v5.0.0",
"Version": "5.0", "Version": "5.0",
"Provider": "AWS", "Provider": "AWS",
"Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance for configuring security options for a subset of Amazon Web Services.", "Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance for configuring security options for a subset of Amazon Web Services with an emphasis on foundational, testable, and architecture agnostic settings.",
"Requirements": [ "Requirements": [
{ {
"Id": "1.1", "Id": "1.1",
@@ -17,13 +17,35 @@
"Profile": "Level 1", "Profile": "Level 1",
"AssessmentStatus": "Manual", "AssessmentStatus": "Manual",
"Description": "Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization.", "Description": "Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization.",
"RationaleStatement": "If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed.", "RationaleStatement": "If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior is not corrected then AWS may suspend the account.",
"ImpactStatement": "", "ImpactStatement": "",
"RemediationProcedure": "This activity can only be performed via the AWS Console. Navigate to Account Settings and update contact information.", "RemediationProcedure": "This activity can only be performed via the AWS Console. Navigate to Account Settings and update contact information.",
"AuditProcedure": "This activity can only be performed via the AWS Console. Navigate to Account Settings and verify contact information is current.", "AuditProcedure": "This activity can only be performed via the AWS Console. Navigate to Account Settings and verify contact information is current.",
"AdditionalInformation": "", "AdditionalInformation": "",
"References": "https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact.html", "DefaultValue": "",
"DefaultValue": "" "References": "https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact.html"
}
]
},
{
"Id": "1.2",
"Description": "Ensure security contact information is registered",
"Checks": [
"account_security_contact_information_is_registered"
],
"Attributes": [
{
"Section": "1 Identity and Access Management",
"Profile": "Level 1",
"AssessmentStatus": "Automated",
"Description": "AWS provides customers with the option to specify the contact information for the account's security team. It is recommended that this information be provided.",
"RationaleStatement": "Specifying security-specific contact information will help ensure that security advisories sent by AWS reach the team in your organization that is best equipped to respond to them.",
"ImpactStatement": "",
"RemediationProcedure": "Navigate to AWS Console > Account > Alternate Contacts and add security contact information.",
"AuditProcedure": "Run: aws account get-alternate-contact --alternate-contact-type SECURITY",
"AdditionalInformation": "",
"DefaultValue": "By default, no security contact is registered.",
"References": "https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact-alternate.html"
} }
] ]
}, },
@@ -38,37 +60,81 @@
"Section": "1 Identity and Access Management", "Section": "1 Identity and Access Management",
"Profile": "Level 1", "Profile": "Level 1",
"AssessmentStatus": "Automated", "AssessmentStatus": "Automated",
"Description": "The 'root' user account is the most privileged user in an AWS account. AWS Access Keys provide programmatic access to a given AWS account.", "Description": "The 'root' user account is the most privileged user in an AWS account. AWS Access Keys provide programmatic access to a given AWS account. It is recommended that all access keys associated with the 'root' user account be deleted.",
"RationaleStatement": "Deleting access keys associated with the 'root' user account limits vectors by which the account can be compromised.", "RationaleStatement": "Deleting access keys associated with the 'root' user account limits vectors by which the account can be compromised. Additionally, deleting the root access keys encourages the creation and use of role based accounts that are least privileged.",
"ImpactStatement": "", "ImpactStatement": "",
"RemediationProcedure": "Navigate to IAM console, select root user, Security credentials tab, and delete any access keys.", "RemediationProcedure": "Navigate to IAM console, select root user, Security credentials tab, and delete any access keys.",
"AuditProcedure": "Run: aws iam get-account-summary | grep 'AccountAccessKeysPresent'", "AuditProcedure": "Run: aws iam get-account-summary | grep 'AccountAccessKeysPresent'",
"AdditionalInformation": "IAM User account root for us-gov cloud regions is not enabled by default.", "AdditionalInformation": "IAM User account root for us-gov cloud regions is not enabled by default.",
"References": "https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html", "DefaultValue": "By default, no root access keys exist.",
"DefaultValue": "" "References": "https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html"
} }
] ]
}, },
{
"Id": "1.4",
"Description": "Ensure MFA is enabled for the 'root' user account",
"Checks": [
"iam_root_mfa_enabled"
],
"Attributes": [
{
"Section": "1 Identity and Access Management",
"Profile": "Level 1",
"AssessmentStatus": "Automated",
"Description": "The 'root' user account is the most privileged user in an AWS account. Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their username and password as well as for an authentication code from their AWS MFA device.",
"RationaleStatement": "Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that emits a time-sensitive key and have knowledge of a credential.",
"ImpactStatement": "",
"RemediationProcedure": "Using IAM console, navigate to Dashboard and choose Activate MFA on your root account.",
"AuditProcedure": "Run: aws iam get-account-summary | grep 'AccountMFAEnabled'. Ensure the value is 1.",
"AdditionalInformation": "",
"DefaultValue": "MFA is not enabled by default.",
"References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#id_root-user_manage_mfa"
}
]
},
{
"Id": "1.5",
"Description": "Ensure hardware MFA is enabled for the 'root' user account",
"Checks": [
"iam_root_hardware_mfa_enabled"
],
"Attributes": [
{
"Section": "1 Identity and Access Management",
"Profile": "Level 2",
"AssessmentStatus": "Automated",
"Description": "The 'root' user account is the most privileged user in an AWS account. MFA adds an extra layer of protection on top of a user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password as well as for an authentication code from their AWS MFA device. For Level 2, it is recommended that the root user account be protected with a hardware MFA.",
"RationaleStatement": "A hardware MFA has a smaller attack surface than a virtual MFA. For example, a hardware MFA does not suffer from the attack surface introduced by the mobile smartphone on which a virtual MFA resides.",
"ImpactStatement": "Using a hardware MFA device instead of a virtual MFA may result in additional hardware costs.",
"RemediationProcedure": "Using IAM console, navigate to Dashboard, select root user, and configure hardware MFA device.",
"AuditProcedure": "Run: aws iam list-virtual-mfa-devices and verify the root account is not using a virtual MFA.",
"AdditionalInformation": "For recommendations on protecting hardware MFA devices, refer to https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_lost-or-broken.html",
"DefaultValue": "MFA is not enabled by default.",
"References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_physical.html"
}
]
},
{
"Id": "2.1.1",
"Description": "Ensure S3 Bucket Policy is set to deny HTTP requests",
"Checks": [
"s3_bucket_secure_transport_policy"
],
"Attributes": [
{
"Section": "2 Storage",
"SubSection": "2.1 Simple Storage Service (S3)",
"Profile": "Level 2",
"AssessmentStatus": "Automated",
"Description": "At the Amazon S3 bucket level, you can configure permissions through a bucket policy making the objects accessible only through HTTPS.",
"RationaleStatement": "By default, Amazon S3 allows both HTTP and HTTPS requests. To achieve only allowing access to Amazon S3 objects through HTTPS you also have to explicitly deny access to HTTP requests. Bucket policies that allow HTTPS requests without explicitly denying HTTP requests will not comply with this recommendation.",
"ImpactStatement": "Enabling this setting will result in rejection of requests that do not use HTTPS for S3 bucket operations.",
"RemediationProcedure": "Add a bucket policy with condition aws:SecureTransport: false that denies all s3 actions.",
"AuditProcedure": "Review bucket policies for Deny statements with aws:SecureTransport: false condition.",
"AdditionalInformation": "",
"DefaultValue": "By default, S3 buckets allow both HTTP and HTTPS requests.",
"References": "https://aws.amazon.com/blogs/security/how-to-use-bucket-policies-and-apply-defense-in-depth-to-help-secure-your-amazon-s3-data/"
}
]
}

View File

@@ -0,0 +1,128 @@
{
"Framework": "ENS",
"Name": "ENS RD 311/2022 - Categoria Alta",
"Version": "RD2022",
"Provider": "AWS",
"Description": "The accreditation scheme of the ENS (Esquema Nacional de Seguridad - National Security Scheme of Spain) has been developed by the Ministry of Finance and Public Administrations and the CCN (National Cryptological Center). This includes the basic principles and minimum requirements necessary for the adequate protection of information.",
"Requirements": [
{
"Id": "op.acc.1.aws.iam.2",
"Description": "Proveedor de identidad centralizado",
"Attributes": [
{
"IdGrupoControl": "op.acc.1",
"Marco": "operacional",
"Categoria": "control de acceso",
"DescripcionControl": "Es muy recomendable la utilizacion de un proveedor de identidades que permita administrar las identidades en un lugar centralizado, en vez de utilizar IAM para ello.",
"Nivel": "alto",
"Tipo": "requisito",
"Dimensiones": [
"trazabilidad",
"autenticidad"
],
"ModoEjecucion": "automatico",
"Dependencias": []
}
],
"Checks": [
"iam_check_saml_providers_sts"
]
},
{
"Id": "op.acc.2.aws.iam.4",
"Description": "Requisitos de acceso",
"Attributes": [
{
"IdGrupoControl": "op.acc.2",
"Marco": "operacional",
"Categoria": "control de acceso",
"DescripcionControl": "Se debera delegar en cuentas administradoras la administracion de la organizacion, dejando la cuenta maestra sin uso y con las medidas de seguridad pertinentes.",
"Nivel": "alto",
"Tipo": "requisito",
"Dimensiones": [
"confidencialidad",
"integridad",
"trazabilidad",
"autenticidad"
],
"ModoEjecucion": "automatico",
"Dependencias": []
}
],
"Checks": [
"iam_avoid_root_usage"
]
},
{
"Id": "op.acc.3.r1.aws.iam.1",
"Description": "Segregacion rigurosa",
"Attributes": [
{
"IdGrupoControl": "op.acc.3.r1",
"Marco": "operacional",
"Categoria": "control de acceso",
"DescripcionControl": "En caso de ser de aplicacion, la segregacion debera tener en cuenta la separacion de las funciones de configuracion y mantenimiento y de auditoria de cualquier otra.",
"Nivel": "alto",
"Tipo": "refuerzo",
"Dimensiones": [
"confidencialidad",
"integridad",
"trazabilidad",
"autenticidad"
],
"ModoEjecucion": "automatico",
"Dependencias": []
}
],
"Checks": [
"iam_support_role_created"
]
},
{
"Id": "op.exp.8.aws.cloudwatch.1",
"Description": "Registro de la actividad",
"Attributes": [
{
"IdGrupoControl": "op.exp.8",
"Marco": "operacional",
"Categoria": "explotacion",
"DescripcionControl": "Se registraran las actividades de los usuarios en el sistema, de forma que se pueda identificar que acciones ha realizado cada usuario.",
"Nivel": "medio",
"Tipo": "requisito",
"Dimensiones": [
"trazabilidad"
],
"ModoEjecucion": "automatico",
"Dependencias": []
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudwatch_log_group_retention_policy_specific_days_enabled"
]
},
{
"Id": "mp.info.3.aws.s3.1",
"Description": "Cifrado de la informacion",
"Attributes": [
{
"IdGrupoControl": "mp.info.3",
"Marco": "medidas de proteccion",
"Categoria": "proteccion de la informacion",
"DescripcionControl": "La informacion con un nivel de clasificacion CONFIDENCIAL o superior debera ser cifrada.",
"Nivel": "bajo",
"Tipo": "medida",
"Dimensiones": [
"confidencialidad"
],
"ModoEjecucion": "automatico",
"Dependencias": []
}
],
"Checks": [
"s3_bucket_default_encryption",
"s3_bucket_kms_encryption"
]
}
]
}

View File

@@ -0,0 +1,103 @@
{
"Framework": "CUSTOM-FRAMEWORK",
"Name": "Custom Security Framework Example v1.0",
"Version": "1.0",
"Provider": "AWS",
"Description": "This is a template for creating custom compliance frameworks using the generic attribute model. Use this when creating frameworks that don't match existing attribute types (CIS, ISO, ENS, MITRE, etc.).",
"Requirements": [
{
"Id": "SEC-001",
"Description": "Ensure all storage resources are encrypted at rest",
"Name": "Storage Encryption",
"Attributes": [
{
"ItemId": "SEC-001",
"Section": "Data Protection",
"SubSection": "Encryption",
"SubGroup": "Storage",
"Service": "s3",
"Type": "Automated"
}
],
"Checks": [
"s3_bucket_default_encryption",
"rds_instance_storage_encrypted",
"ec2_ebs_volume_encryption"
]
},
{
"Id": "SEC-002",
"Description": "Ensure all network traffic is encrypted in transit",
"Name": "Network Encryption",
"Attributes": [
{
"ItemId": "SEC-002",
"Section": "Data Protection",
"SubSection": "Encryption",
"SubGroup": "Network",
"Service": "multiple",
"Type": "Automated"
}
],
"Checks": [
"s3_bucket_secure_transport_policy",
"elb_ssl_listeners",
"cloudfront_distributions_https_enabled"
]
},
{
"Id": "IAM-001",
"Description": "Ensure MFA is enabled for all privileged accounts",
"Name": "Multi-Factor Authentication",
"Attributes": [
{
"ItemId": "IAM-001",
"Section": "Identity and Access Management",
"SubSection": "Authentication",
"SubGroup": "MFA",
"Service": "iam",
"Type": "Automated"
}
],
"Checks": [
"iam_root_mfa_enabled",
"iam_user_mfa_enabled_console_access"
]
},
{
"Id": "LOG-001",
"Description": "Ensure logging is enabled for all critical services",
"Name": "Centralized Logging",
"Attributes": [
{
"ItemId": "LOG-001",
"Section": "Logging and Monitoring",
"SubSection": "Audit Logs",
"SubGroup": "CloudTrail",
"Service": "cloudtrail",
"Type": "Automated"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled"
]
},
{
"Id": "MANUAL-001",
"Description": "Ensure security policies are reviewed annually",
"Name": "Policy Review",
"Attributes": [
{
"ItemId": "MANUAL-001",
"Section": "Governance",
"SubSection": "Policy Management",
"Service": "manual",
"Type": "Manual"
}
],
"Checks": []
}
]
}

View File

@@ -0,0 +1,91 @@
{
"Framework": "ISO27001",
"Name": "ISO/IEC 27001 Information Security Management Standard 2022",
"Version": "2022",
"Provider": "AWS",
"Description": "ISO (the International Organization for Standardization) and IEC (the International Electrotechnical Commission) form the specialized system for worldwide standardization. This framework maps AWS security controls to ISO 27001:2022 requirements.",
"Requirements": [
{
"Id": "A.5.1",
"Description": "Information security policy and topic-specific policies should be defined, approved by management, published, communicated to and acknowledged by relevant personnel and relevant interested parties, and reviewed at planned intervals and if significant changes occur.",
"Name": "Policies for information security",
"Attributes": [
{
"Category": "A.5 Organizational controls",
"Objetive_ID": "A.5.1",
"Objetive_Name": "Policies for information security",
"Check_Summary": "Verify that information security policies are defined and implemented through security monitoring services."
}
],
"Checks": [
"securityhub_enabled",
"wellarchitected_workload_no_high_or_medium_risks"
]
},
{
"Id": "A.5.2",
"Description": "Information security roles and responsibilities should be defined and allocated according to the organisation needs.",
"Name": "Roles and Responsibilities",
"Attributes": [
{
"Category": "A.5 Organizational controls",
"Objetive_ID": "A.5.2",
"Objetive_Name": "Roles and Responsibilities",
"Check_Summary": "Verify that IAM roles and responsibilities are properly defined."
}
],
"Checks": []
},
{
"Id": "A.5.3",
"Description": "Conflicting duties and conflicting areas of responsibility should be segregated.",
"Name": "Segregation of Duties",
"Attributes": [
{
"Category": "A.5 Organizational controls",
"Objetive_ID": "A.5.3",
"Objetive_Name": "Segregation of Duties",
"Check_Summary": "Verify that duties are segregated through separate IAM roles."
}
],
"Checks": [
"iam_securityaudit_role_created"
]
},
{
"Id": "A.8.1",
"Description": "User end point devices should be protected.",
"Name": "User End Point Devices",
"Attributes": [
{
"Category": "A.8 Technological controls",
"Objetive_ID": "A.8.1",
"Objetive_Name": "User End Point Devices",
"Check_Summary": "Verify that endpoint protection and monitoring are enabled."
}
],
"Checks": [
"guardduty_is_enabled",
"ssm_managed_compliant_patching"
]
},
{
"Id": "A.8.24",
"Description": "Rules for the effective use of cryptography, including cryptographic key management, should be defined and implemented.",
"Name": "Use of Cryptography",
"Attributes": [
{
"Category": "A.8 Technological controls",
"Objetive_ID": "A.8.24",
"Objetive_Name": "Use of Cryptography",
"Check_Summary": "Verify that encryption is enabled for data at rest and in transit."
}
],
"Checks": [
"s3_bucket_default_encryption",
"rds_instance_storage_encrypted",
"ec2_ebs_volume_encryption"
]
}
]
}

View File

@@ -0,0 +1,142 @@
{
"Framework": "MITRE-ATTACK",
"Name": "MITRE ATT&CK compliance framework",
"Version": "",
"Provider": "AWS",
"Description": "MITRE ATT&CK is a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations. The ATT&CK knowledge base is used as a foundation for the development of specific threat models and methodologies in the private sector, in government, and in the cybersecurity product and service community.",
"Requirements": [
{
"Name": "Exploit Public-Facing Application",
"Id": "T1190",
"Tactics": [
"Initial Access"
],
"SubTechniques": [],
"Platforms": [
"Containers",
"IaaS",
"Linux",
"Network",
"Windows",
"macOS"
],
"Description": "Adversaries may attempt to exploit a weakness in an Internet-facing host or system to initially access a network. The weakness in the system can be a software bug, a temporary glitch, or a misconfiguration.",
"TechniqueURL": "https://attack.mitre.org/techniques/T1190/",
"Checks": [
"guardduty_is_enabled",
"inspector2_is_enabled",
"securityhub_enabled",
"elbv2_waf_acl_attached",
"awslambda_function_not_publicly_accessible",
"ec2_instance_public_ip"
],
"Attributes": [
{
"AWSService": "Amazon GuardDuty",
"Category": "Detect",
"Value": "Minimal",
"Comment": "GuardDuty can detect when vulnerable publicly facing resources are leveraged to capture data not intended to be viewable."
},
{
"AWSService": "AWS Web Application Firewall",
"Category": "Protect",
"Value": "Significant",
"Comment": "AWS WAF protects public-facing applications against vulnerabilities including OWASP Top 10 via managed rule sets."
},
{
"AWSService": "Amazon Inspector",
"Category": "Protect",
"Value": "Partial",
"Comment": "Amazon Inspector can detect known vulnerabilities on various Windows and Linux endpoints."
}
]
},
{
"Name": "Valid Accounts",
"Id": "T1078",
"Tactics": [
"Defense Evasion",
"Persistence",
"Privilege Escalation",
"Initial Access"
],
"SubTechniques": [
"T1078.001",
"T1078.002",
"T1078.003",
"T1078.004"
],
"Platforms": [
"Azure AD",
"Containers",
"Google Workspace",
"IaaS",
"Linux",
"Network",
"Office 365",
"SaaS",
"Windows",
"macOS"
],
"Description": "Adversaries may obtain and abuse credentials of existing accounts as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion.",
"TechniqueURL": "https://attack.mitre.org/techniques/T1078/",
"Checks": [
"iam_root_mfa_enabled",
"iam_user_mfa_enabled_console_access",
"iam_no_root_access_key",
"iam_rotate_access_key_90_days",
"iam_user_accesskey_unused",
"cloudtrail_multi_region_enabled"
],
"Attributes": [
{
"AWSService": "AWS IAM",
"Category": "Protect",
"Value": "Significant",
"Comment": "IAM MFA and access key rotation help prevent unauthorized access with valid credentials."
},
{
"AWSService": "AWS CloudTrail",
"Category": "Detect",
"Value": "Significant",
"Comment": "CloudTrail logs all API calls, enabling detection of unauthorized account usage."
}
]
},
{
"Name": "Data from Cloud Storage",
"Id": "T1530",
"Tactics": [
"Collection"
],
"SubTechniques": [],
"Platforms": [
"IaaS",
"SaaS"
],
"Description": "Adversaries may access data from improperly secured cloud storage. Many cloud service providers offer solutions for online data object storage.",
"TechniqueURL": "https://attack.mitre.org/techniques/T1530/",
"Checks": [
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_bucket_acl_prohibited",
"s3_bucket_default_encryption",
"macie_is_enabled"
],
"Attributes": [
{
"AWSService": "Amazon S3",
"Category": "Protect",
"Value": "Significant",
"Comment": "S3 bucket policies and ACLs can prevent public access to sensitive data."
},
{
"AWSService": "Amazon Macie",
"Category": "Detect",
"Value": "Significant",
"Comment": "Macie can detect and alert on sensitive data exposure in S3 buckets."
}
]
}
]
}

View File

@@ -0,0 +1,189 @@
{
"Framework": "ProwlerThreatScore",
"Name": "Prowler ThreatScore Compliance Framework for AWS",
"Version": "1.0",
"Provider": "AWS",
"Description": "Prowler ThreatScore Compliance Framework for AWS ensures that the AWS account is compliant taking into account four main pillars: Identity and Access Management, Attack Surface, Logging and Monitoring, and Encryption. Each check has a LevelOfRisk (1-5) and Weight that contribute to calculating the overall threat score.",
"Requirements": [
{
"Id": "1.1.1",
"Description": "Ensure MFA is enabled for the 'root' user account",
"Checks": [
"iam_root_mfa_enabled"
],
"Attributes": [
{
"Title": "MFA enabled for 'root'",
"Section": "1. IAM",
"SubSection": "1.1 Authentication",
"AttributeDescription": "The root user account holds the highest level of privileges within an AWS account. Enabling Multi-Factor Authentication (MFA) enhances security by adding an additional layer of protection beyond just a username and password.",
"AdditionalInformation": "Enabling MFA enhances console security by requiring the authenticating user to both possess a time-sensitive key-generating device and have knowledge of their credentials.",
"LevelOfRisk": 5,
"Weight": 1000
}
]
},
{
"Id": "1.1.2",
"Description": "Ensure hardware MFA is enabled for the 'root' user account",
"Checks": [
"iam_root_hardware_mfa_enabled"
],
"Attributes": [
{
"Title": "Hardware MFA enabled for 'root'",
"Section": "1. IAM",
"SubSection": "1.1 Authentication",
"AttributeDescription": "The root user account in AWS has the highest level of privileges. A hardware MFA has a smaller attack surface compared to a virtual MFA.",
"AdditionalInformation": "Unlike a virtual MFA, which relies on a mobile device that may be vulnerable to malware, a hardware MFA operates independently, reducing exposure to potential security threats.",
"LevelOfRisk": 5,
"Weight": 1000
}
]
},
{
"Id": "1.1.13",
"Description": "Ensure no root account access key exists",
"Checks": [
"iam_no_root_access_key"
],
"Attributes": [
{
"Title": "No root access key",
"Section": "1. IAM",
"SubSection": "1.1 Authentication",
"AttributeDescription": "The root account in AWS has unrestricted administrative privileges. It is recommended that no access keys be associated with the root account.",
"AdditionalInformation": "Eliminating root access keys reduces the risk of unauthorized access and enforces the use of role-based IAM accounts with least privilege.",
"LevelOfRisk": 5,
"Weight": 1000
}
]
},
{
"Id": "2.1.1",
"Description": "Ensure EC2 instances do not have public IP addresses",
"Checks": [
"ec2_instance_public_ip"
],
"Attributes": [
{
"Title": "EC2 without public IP",
"Section": "2. Attack Surface",
"SubSection": "2.1 Network Exposure",
"AttributeDescription": "EC2 instances with public IP addresses are directly accessible from the internet, increasing the attack surface.",
"AdditionalInformation": "Use private subnets and NAT gateways or VPC endpoints for internet access when needed.",
"LevelOfRisk": 4,
"Weight": 100
}
]
},
{
"Id": "2.2.1",
"Description": "Ensure S3 buckets are not publicly accessible",
"Checks": [
"s3_bucket_public_access"
],
"Attributes": [
{
"Title": "S3 bucket not public",
"Section": "2. Attack Surface",
"SubSection": "2.2 Storage Exposure",
"AttributeDescription": "Publicly accessible S3 buckets can lead to data breaches and unauthorized access to sensitive information.",
"AdditionalInformation": "Enable S3 Block Public Access settings at the account and bucket level.",
"LevelOfRisk": 5,
"Weight": 1000
}
]
},
{
"Id": "3.1.1",
"Description": "Ensure CloudTrail is enabled in all regions",
"Checks": [
"cloudtrail_multi_region_enabled"
],
"Attributes": [
{
"Title": "CloudTrail multi-region enabled",
"Section": "3. Logging and Monitoring",
"SubSection": "3.1 Audit Logging",
"AttributeDescription": "CloudTrail provides a record of API calls made in your AWS account. Multi-region trails ensure all activity is captured.",
"AdditionalInformation": "Without comprehensive logging, security incidents may go undetected and forensic analysis becomes impossible.",
"LevelOfRisk": 5,
"Weight": 1000
}
]
},
{
"Id": "3.2.1",
"Description": "Ensure GuardDuty is enabled",
"Checks": [
"guardduty_is_enabled"
],
"Attributes": [
{
"Title": "GuardDuty enabled",
"Section": "3. Logging and Monitoring",
"SubSection": "3.2 Threat Detection",
"AttributeDescription": "Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior.",
"AdditionalInformation": "GuardDuty analyzes CloudTrail, VPC Flow Logs, and DNS logs to identify threats.",
"LevelOfRisk": 4,
"Weight": 100
}
]
},
{
"Id": "4.1.1",
"Description": "Ensure S3 buckets have default encryption enabled",
"Checks": [
"s3_bucket_default_encryption"
],
"Attributes": [
{
"Title": "S3 default encryption",
"Section": "4. Encryption",
"SubSection": "4.1 Data at Rest",
"AttributeDescription": "Enabling default encryption on S3 buckets ensures all objects are encrypted when stored.",
"AdditionalInformation": "Use SSE-S3, SSE-KMS, or SSE-C depending on your key management requirements.",
"LevelOfRisk": 3,
"Weight": 10
}
]
},
{
"Id": "4.1.2",
"Description": "Ensure EBS volumes are encrypted",
"Checks": [
"ec2_ebs_volume_encryption"
],
"Attributes": [
{
"Title": "EBS volume encryption",
"Section": "4. Encryption",
"SubSection": "4.1 Data at Rest",
"AttributeDescription": "EBS volume encryption protects data at rest on EC2 instance storage.",
"AdditionalInformation": "Enable default EBS encryption at the account level to ensure all new volumes are encrypted.",
"LevelOfRisk": 3,
"Weight": 10
}
]
},
{
"Id": "4.2.1",
"Description": "Ensure data in transit is encrypted using TLS",
"Checks": [
"s3_bucket_secure_transport_policy"
],
"Attributes": [
{
"Title": "S3 secure transport",
"Section": "4. Encryption",
"SubSection": "4.2 Data in Transit",
"AttributeDescription": "Requiring HTTPS for S3 bucket access ensures data is encrypted during transmission.",
"AdditionalInformation": "Use bucket policies to deny requests that do not use TLS.",
"LevelOfRisk": 3,
"Weight": 10
}
]
}
]
}

View File

@@ -1,15 +1,137 @@
# Compliance Framework Documentation
## Code References
Key files for understanding and modifying compliance frameworks:
| File | Purpose |
|------|---------|
| `prowler/lib/check/compliance_models.py` | Pydantic models defining attribute structures for each framework type |
| `prowler/lib/check/compliance.py` | Core compliance processing logic |
| `prowler/lib/check/utils.py` | Utility functions including `list_compliance_modules()` |
| `prowler/lib/outputs/compliance/` | Framework-specific output generators |
| `prowler/compliance/{provider}/` | JSON compliance framework definitions |
## Attribute Model Classes
Each framework type has a specific Pydantic model in `compliance_models.py`:
| Framework | Model Class |
|-----------|-------------|
| CIS | `CIS_Requirement_Attribute` |
| ISO 27001 | `ISO27001_2013_Requirement_Attribute` |
| ENS | `ENS_Requirement_Attribute` |
| MITRE ATT&CK | `Mitre_Requirement` (uses different structure) |
| AWS Well-Architected | `AWS_Well_Architected_Requirement_Attribute` |
| KISA ISMS-P | `KISA_ISMSP_Requirement_Attribute` |
| Prowler ThreatScore | `Prowler_ThreatScore_Requirement_Attribute` |
| CCC | `CCC_Requirement_Attribute` |
| C5 Germany | `C5Germany_Requirement_Attribute` |
| Generic/Fallback | `Generic_Compliance_Requirement_Attribute` |
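For reference, a minimal sketch of what one of these models can look like, inferred from the CIS JSON fields shown earlier in this document; the real class in `compliance_models.py` may declare fields and enums differently:
```
from typing import Optional

from pydantic import BaseModel


# Hypothetical sketch inferred from the CIS JSON above; the actual
# CIS_Requirement_Attribute in compliance_models.py may differ.
class CIS_Requirement_Attribute(BaseModel):
    Section: str
    SubSection: Optional[str] = None  # only present for some requirements (e.g. 2.1.1)
    Profile: str  # "Level 1" or "Level 2"
    AssessmentStatus: str  # "Automated" or "Manual"
    Description: str
    RationaleStatement: str
    ImpactStatement: str
    RemediationProcedure: str
    AuditProcedure: str
    AdditionalInformation: str
    DefaultValue: str
    References: str
```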
## How Compliance Frameworks are Loaded
1. `Compliance.get_bulk(provider)` is called at startup
2. Scans `prowler/compliance/{provider}/` for `.json` files
3. Each file is parsed using `load_compliance_framework()`
4. Pydantic validates against `Compliance` model
5. Framework is stored in dictionary with filename (without `.json`) as key
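A hedged sketch of this flow (names are taken from the steps above; the actual implementation may differ):
```
import glob
import json
import os

# Assumed import path, based on the Code References table above.
from prowler.lib.check.compliance_models import Compliance


def load_frameworks(provider: str) -> dict:
    """Sketch of Compliance.get_bulk(provider): scan, parse, validate, index."""
    bulk = {}
    for path in sorted(glob.glob(f"prowler/compliance/{provider}/*.json")):
        with open(path) as f:
            data = json.load(f)
        framework = Compliance(**data)  # Pydantic validates the whole file
        key = os.path.splitext(os.path.basename(path))[0]  # filename without .json
        bulk[key] = framework
    return bulk
```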
## How Checks Map to Compliance
1. After loading, `update_checks_metadata_with_compliance()` is called
2. For each check, it finds all compliance requirements that reference it
3. Compliance info is attached to `CheckMetadata.Compliance` list
4. During output, `get_check_compliance()` retrieves mappings per finding
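Conceptually this is an inverted index from check name to requirements; an illustrative sketch using the attribute names from the JSON structure above:
```
from collections import defaultdict


def index_checks(bulk_compliance: dict) -> dict:
    """Map each check name to the (framework, requirement Id) pairs that cite it."""
    check_index = defaultdict(list)
    for framework_key, framework in bulk_compliance.items():
        for requirement in framework.Requirements:
            for check_name in requirement.Checks:
                check_index[check_name].append((framework_key, requirement.Id))
    return check_index


# e.g. "iam_root_mfa_enabled" appears in both the CIS requirement 1.4 and
# the Prowler ThreatScore requirement 1.1.1 shown earlier in this diff.
```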
## File Naming Convention
```
{framework}_{version}_{provider}.json
```
Examples:
- `cis_5.0_aws.json`
- `iso27001_2022_azure.json`
- `mitre_attack_gcp.json`
- `ens_rd2022_aws.json`
- `nist_800_53_revision_5_aws.json`
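Versions can themselves contain underscores (see the NIST example), so only the trailing provider token can be split off reliably. A small illustrative helper:
```
def provider_from_filename(filename: str) -> str:
    # The provider is always the last underscore-separated token.
    stem = filename.removesuffix(".json")
    return stem.rsplit("_", 1)[1]


assert provider_from_filename("cis_5.0_aws.json") == "aws"
assert provider_from_filename("nist_800_53_revision_5_aws.json") == "aws"
```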
## Validation
Prowler validates compliance JSON at startup. Invalid files cause:
- `ValidationError` logged with details
- Application exit with error code
Common validation errors:
- Missing required fields (`Id`, `Description`, `Checks`, `Attributes`)
- Invalid enum values (e.g., `Profile` must be "Level 1" or "Level 2" for CIS)
- Type mismatches (e.g., `Checks` must be array of strings)
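A minimal sketch of this fail-fast behaviour, assuming `load_compliance_framework()` raises pydantic's `ValidationError` on bad input (import path and signature are assumptions):
```
import logging
import sys

from pydantic import ValidationError

# Assumed import path; the docs above only state the function name.
from prowler.lib.check.compliance import load_compliance_framework


def load_framework_or_exit(path: str):
    try:
        return load_compliance_framework(path)  # parse + validate (signature assumed)
    except ValidationError as error:
        logging.error(f"Invalid compliance file {path}: {error}")
        sys.exit(1)  # invalid files abort startup with an error code
```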
## Adding a New Framework
1. Create JSON file in `prowler/compliance/{provider}/`
2. Use appropriate attribute model (see table above)
3. Map existing checks to requirements via `Checks` array
4. Use empty `Checks: []` for manual-only requirements
5. Test with `prowler {provider} --list-compliance` to verify loading
6. Run `prowler {provider} --compliance {framework_name}` to test execution
## Templates
See `assets/` directory for example templates:
- `cis_framework.json` - CIS Benchmark template
- `iso27001_framework.json` - ISO 27001 template
- `ens_framework.json` - ENS (Spain) template
- `mitre_attack_framework.json` - MITRE ATT&CK template
- `prowler_threatscore_framework.json` - Prowler ThreatScore template
- `generic_framework.json` - Generic/custom framework template
## Prowler ThreatScore Details
Prowler ThreatScore is a custom security scoring framework that calculates an overall security posture score based on:
### Four Pillars
1. **IAM (Identity and Access Management)**
- SubSections: Authentication, Authorization, Credentials Management
2. **Attack Surface**
- SubSections: Network Exposure, Storage Exposure, Service Exposure
3. **Logging and Monitoring**
- SubSections: Audit Logging, Threat Detection, Alerting
4. **Encryption**
- SubSections: Data at Rest, Data in Transit
### Scoring Algorithm
The ThreatScore uses `LevelOfRisk` and `Weight` to calculate severity:
| LevelOfRisk | Weight | Example Controls |
|-------------|--------|------------------|
| 5 (Critical) | 1000 | Root MFA, No root access keys, Public S3 buckets |
| 4 (High) | 100 | User MFA, Public EC2, GuardDuty enabled |
| 3 (Medium) | 10 | Password policies, EBS encryption, CloudTrail |
| 2 (Low) | 1-10 | Best practice recommendations |
| 1 (Info) | 1 | Informational controls |
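The exact formula is not documented here; purely as an illustration of how `LevelOfRisk` and `Weight` can combine into a normalised score:
```
def threat_score(findings: list) -> float:
    """Illustrative only - not Prowler's actual formula."""
    total = sum(f["LevelOfRisk"] * f["Weight"] for f in findings)
    passed = sum(f["LevelOfRisk"] * f["Weight"] for f in findings if f["Status"] == "PASS")
    return 100.0 * passed / total if total else 100.0


findings = [
    {"Status": "PASS", "LevelOfRisk": 5, "Weight": 1000},  # e.g. root MFA enabled
    {"Status": "FAIL", "LevelOfRisk": 3, "Weight": 10},    # e.g. EBS volume unencrypted
]
print(f"{threat_score(findings):.1f}")  # 99.4
```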
### ID Numbering Convention
- `1.x.x` - IAM controls
- `2.x.x` - Attack Surface controls
- `3.x.x` - Logging and Monitoring controls
- `4.x.x` - Encryption controls
## External Resources
### Official Framework Documentation
- [CIS Benchmarks](https://www.cisecurity.org/cis-benchmarks)
- [ISO 27001:2022](https://www.iso.org/standard/27001)
- [NIST 800-53](https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final)
- [NIST CSF](https://www.nist.gov/cyberframework)
- [PCI DSS](https://www.pcisecuritystandards.org/)
- [MITRE ATT&CK](https://attack.mitre.org/)
- [ENS (Spain)](https://www.ccn-cert.cni.es/es/ens.html)
### Prowler Documentation
- [Prowler Docs - Compliance](https://docs.prowler.com/projects/prowler-open-source/en/latest/)
- [Prowler GitHub](https://github.com/prowler-cloud/prowler)

View File

@@ -7,6 +7,8 @@ license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root]
auto_invoke: "Writing documentation"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,14 @@
name: prowler-mcp
description: >
Creates MCP tools for Prowler MCP Server. Covers BaseTool pattern, model design,
and API client usage.
Trigger: When working in mcp_server/ on tools (BaseTool), models (MinimalSerializerMixin/from_api_response), or API client patterns.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root]
auto_invoke: "Working on MCP server tools"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,18 @@
name: prowler-pr
description: >
Creates Pull Requests for Prowler following the project template and conventions.
Trigger: When working on pull request requirements or creation (PR template sections, PR title Conventional Commits check, changelog gate/no-changelog label), or when inspecting PR-related GitHub workflows like conventional-commit.yml, pr-check-changelog.yml, pr-conflict-checker.yml, labeler.yml, or CODEOWNERS.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root]
auto_invoke:
- "Create a PR with gh pr create"
- "Review PR requirements: template, title conventions, changelog gate"
- "Fill .github/pull_request_template.md (Context/Description/Steps to review/Checklist)"
- "Inspect PR CI workflows (.github/workflows/*): conventional-commit, pr-check-changelog, pr-conflict-checker, labeler"
- "Understand review ownership with CODEOWNERS"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,15 @@
name: prowler-provider
description: >
Creates new Prowler cloud providers or adds services to existing providers.
Trigger: When extending Prowler SDK provider architecture (adding a new provider or a new service to an existing provider).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, sdk]
auto_invoke:
- "Adding new providers"
- "Adding services to existing providers"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,15 @@
name: prowler-sdk-check
description: >
Creates Prowler security checks following SDK architecture patterns.
Trigger: When creating or updating a Prowler SDK security check (implementation + metadata) for any provider (AWS, Azure, GCP, K8s, GitHub, etc.).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, sdk]
auto_invoke:
- "Creating new checks"
- "Updating existing checks and metadata"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -1,12 +1,16 @@
---
name: prowler-test-api
description: >
Testing patterns for Prowler API: JSON:API, Celery tasks, RLS isolation, RBAC.
Trigger: When writing tests for api/ (JSON:API requests/assertions, cross-tenant isolation, RBAC, Celery tasks, viewsets/serializers).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, api]
auto_invoke:
- "Writing Prowler API tests"
- "Testing RLS tenant isolation"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,15 @@
name: prowler-test-sdk
description: >
Testing patterns for Prowler SDK (Python).
Trigger: When writing tests for the Prowler SDK (checks/services/providers), including provider-specific mocking rules (moto for AWS only).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, sdk]
auto_invoke:
- "Writing Prowler SDK tests"
- "Mocking AWS with moto in tests"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,15 @@
name: prowler-test-ui
description: >
E2E testing patterns for Prowler UI (Playwright).
Trigger: When writing Playwright E2E tests under ui/tests in the Prowler UI (Prowler-specific base page/helpers, tags, flows).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke:
- "Writing Prowler UI E2E tests"
- "Working with Prowler UI test helpers/pages"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,15 @@
name: prowler-ui
description: >
Prowler UI-specific patterns. For generic patterns, see: typescript, react-19, nextjs-15, tailwind-4.
Trigger: When working inside ui/ on Prowler-specific conventions (shadcn vs HeroUI legacy, folder placement, actions/adapters, shared types/hooks/lib).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke:
- "Creating/modifying Prowler UI components"
- "Working on Prowler UI structure (actions/adapters/types/hooks)"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,13 @@
name: prowler
description: >
Main entry point for Prowler development - quick reference for all components.
Trigger: General Prowler development questions, project overview, component navigation (NOT PR CI gates or GitHub Actions workflows).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root]
auto_invoke: "General Prowler development questions"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,13 @@
name: pytest
description: >
Pytest testing patterns for Python.
Trigger: When writing or refactoring pytest tests (fixtures, mocking, parametrize, markers). For Prowler-specific API/SDK testing conventions, also use prowler-test-api or prowler-test-sdk.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, sdk, api]
auto_invoke: "Writing Python tests with pytest"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -2,11 +2,13 @@
name: react-19
description: >
React 19 patterns with React Compiler.
Trigger: When writing React 19 components/hooks in .tsx (React Compiler rules, hook patterns, refs as props). If using Next.js App Router/Server Actions, also use nextjs-15.
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root, ui]
auto_invoke: "Writing React components"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
---

View File

@@ -1,10 +1,16 @@
#!/bin/bash
# Setup AI Skills for Prowler development
# Configures AI coding assistants that follow agentskills.io standard:
# - Claude Code: .claude/skills/ symlink + CLAUDE.md copies
# - Gemini CLI: .gemini/skills/ symlink + GEMINI.md copies
# - Codex (OpenAI): .codex/skills/ symlink + AGENTS.md (native)
# - GitHub Copilot: .github/copilot-instructions.md copy
#
# Usage:
# ./setup.sh # Interactive mode (select AI assistants)
# ./setup.sh --all # Configure all AI assistants
# ./setup.sh --claude # Configure only Claude Code
# ./setup.sh --claude --codex # Configure multiple
set -e
@@ -12,23 +18,224 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(dirname "$SCRIPT_DIR")"
SKILLS_SOURCE="$SCRIPT_DIR"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
BOLD='\033[1m'
NC='\033[0m' # No Color
# Selection flags
SETUP_CLAUDE=false
SETUP_GEMINI=false
SETUP_CODEX=false
SETUP_COPILOT=false
# =============================================================================
# HELPER FUNCTIONS
# =============================================================================
show_help() {
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Configure AI coding assistants for Prowler development."
echo ""
echo "Options:"
echo " --all Configure all AI assistants"
echo " --claude Configure Claude Code"
echo " --gemini Configure Gemini CLI"
echo " --codex Configure Codex (OpenAI)"
echo " --copilot Configure GitHub Copilot"
echo " --help Show this help message"
echo ""
echo "If no options provided, runs in interactive mode."
echo ""
echo "Examples:"
echo " $0 # Interactive selection"
echo " $0 --all # All AI assistants"
echo " $0 --claude --codex # Only Claude and Codex"
}
show_menu() {
echo -e "${BOLD}Which AI assistants do you use?${NC}"
echo -e "${CYAN}(Use numbers to toggle, Enter to confirm)${NC}"
echo ""
local options=("Claude Code" "Gemini CLI" "Codex (OpenAI)" "GitHub Copilot")
local selected=(true false false false) # Claude selected by default
while true; do
for i in "${!options[@]}"; do
if [ "${selected[$i]}" = true ]; then
echo -e " ${GREEN}[x]${NC} $((i+1)). ${options[$i]}"
else
echo -e " [ ] $((i+1)). ${options[$i]}"
fi
done
echo ""
echo -e " ${YELLOW}a${NC}. Select all"
echo -e " ${YELLOW}n${NC}. Select none"
echo ""
echo -n "Toggle (1-4, a, n) or Enter to confirm: "
read -r choice
case $choice in
1) selected[0]=$([ "${selected[0]}" = true ] && echo false || echo true) ;;
2) selected[1]=$([ "${selected[1]}" = true ] && echo false || echo true) ;;
3) selected[2]=$([ "${selected[2]}" = true ] && echo false || echo true) ;;
4) selected[3]=$([ "${selected[3]}" = true ] && echo false || echo true) ;;
a|A) selected=(true true true true) ;;
n|N) selected=(false false false false) ;;
"") break ;;
*) echo -e "${RED}Invalid option${NC}" ;;
esac
# Move cursor up to redraw menu
echo -en "\033[10A\033[J"
done
SETUP_CLAUDE=${selected[0]}
SETUP_GEMINI=${selected[1]}
SETUP_CODEX=${selected[2]}
SETUP_COPILOT=${selected[3]}
}
setup_claude() {
local target="$REPO_ROOT/.claude/skills"
if [ ! -d "$REPO_ROOT/.claude" ]; then
mkdir -p "$REPO_ROOT/.claude"
fi
if [ -L "$target" ]; then
rm "$target"
elif [ -d "$target" ]; then
mv "$target" "$REPO_ROOT/.claude/skills.backup.$(date +%s)"
fi
ln -s "$SKILLS_SOURCE" "$target"
echo -e "${GREEN} ✓ .claude/skills -> skills/${NC}"
# Copy AGENTS.md to CLAUDE.md
copy_agents_md "CLAUDE.md"
}
setup_gemini() {
local target="$REPO_ROOT/.gemini/skills"
if [ ! -d "$REPO_ROOT/.gemini" ]; then
mkdir -p "$REPO_ROOT/.gemini"
fi
if [ -L "$target" ]; then
rm "$target"
elif [ -d "$target" ]; then
mv "$target" "$REPO_ROOT/.gemini/skills.backup.$(date +%s)"
fi
ln -s "$SKILLS_SOURCE" "$target"
echo -e "${GREEN} ✓ .gemini/skills -> skills/${NC}"
# Copy AGENTS.md to GEMINI.md
copy_agents_md "GEMINI.md"
}
setup_codex() {
local target="$REPO_ROOT/.codex/skills"
if [ ! -d "$REPO_ROOT/.codex" ]; then
mkdir -p "$REPO_ROOT/.codex"
fi
if [ -L "$target" ]; then
rm "$target"
elif [ -d "$target" ]; then
mv "$target" "$REPO_ROOT/.codex/skills.backup.$(date +%s)"
fi
ln -s "$SKILLS_SOURCE" "$target"
echo -e "${GREEN} ✓ .codex/skills -> skills/${NC}"
echo -e "${GREEN} ✓ Codex uses AGENTS.md natively${NC}"
}
setup_copilot() {
if [ -f "$REPO_ROOT/AGENTS.md" ]; then
mkdir -p "$REPO_ROOT/.github"
cp "$REPO_ROOT/AGENTS.md" "$REPO_ROOT/.github/copilot-instructions.md"
echo -e "${GREEN} ✓ AGENTS.md -> .github/copilot-instructions.md${NC}"
fi
}
copy_agents_md() {
local target_name="$1"
local agents_files
local count=0
agents_files=$(find "$REPO_ROOT" -name "AGENTS.md" -not -path "*/node_modules/*" -not -path "*/.git/*" 2>/dev/null)
for agents_file in $agents_files; do
local agents_dir
agents_dir=$(dirname "$agents_file")
cp "$agents_file" "$agents_dir/$target_name"
count=$((count + 1))
done
echo -e "${GREEN} ✓ Copied $count AGENTS.md -> $target_name${NC}"
}
# =============================================================================
# PARSE ARGUMENTS
# =============================================================================
while [[ $# -gt 0 ]]; do
case $1 in
--all)
SETUP_CLAUDE=true
SETUP_GEMINI=true
SETUP_CODEX=true
SETUP_COPILOT=true
shift
;;
--claude)
SETUP_CLAUDE=true
shift
;;
--gemini)
SETUP_GEMINI=true
shift
;;
--codex)
SETUP_CODEX=true
shift
;;
--copilot)
SETUP_COPILOT=true
shift
;;
--help|-h)
show_help
exit 0
;;
*)
echo -e "${RED}Unknown option: $1${NC}"
show_help
exit 1
;;
esac
done
# =============================================================================
# MAIN
# =============================================================================
echo "🤖 Prowler AI Skills Setup" echo "🤖 Prowler AI Skills Setup"
echo "==========================" echo "=========================="
echo "" echo ""
# Count skills (directories with SKILL.md) # Count skills
SKILL_COUNT=$(find "$SKILLS_SOURCE" -maxdepth 2 -name "SKILL.md" | wc -l | tr -d ' ') SKILL_COUNT=$(find "$SKILLS_SOURCE" -maxdepth 2 -name "SKILL.md" | wc -l | tr -d ' ')
if [ "$SKILL_COUNT" -eq 0 ]; then if [ "$SKILL_COUNT" -eq 0 ]; then
@@ -39,81 +246,60 @@ fi
echo -e "${BLUE}Found $SKILL_COUNT skills to configure${NC}" echo -e "${BLUE}Found $SKILL_COUNT skills to configure${NC}"
echo "" echo ""
# ============================================================================= # Interactive mode if no flags provided
# CLAUDE CODE SETUP (.claude/skills symlink - auto-discovery) if [ "$SETUP_CLAUDE" = false ] && [ "$SETUP_GEMINI" = false ] && [ "$SETUP_CODEX" = false ] && [ "$SETUP_COPILOT" = false ]; then
# ============================================================================= show_menu
echo -e "${YELLOW}[1/3] Setting up Claude Code...${NC}" echo ""
if [ ! -d "$REPO_ROOT/.claude" ]; then
mkdir -p "$REPO_ROOT/.claude"
fi fi
if [ -L "$CLAUDE_SKILLS_TARGET" ]; then # Check if at least one selected
rm "$CLAUDE_SKILLS_TARGET" if [ "$SETUP_CLAUDE" = false ] && [ "$SETUP_GEMINI" = false ] && [ "$SETUP_CODEX" = false ] && [ "$SETUP_COPILOT" = false ]; then
elif [ -d "$CLAUDE_SKILLS_TARGET" ]; then echo -e "${YELLOW}No AI assistants selected. Nothing to do.${NC}"
mv "$CLAUDE_SKILLS_TARGET" "$REPO_ROOT/.claude/skills.backup.$(date +%s)" exit 0
fi fi
ln -s "$SKILLS_SOURCE" "$CLAUDE_SKILLS_TARGET" # Run selected setups
echo -e "${GREEN} ✓ .claude/skills -> skills/${NC}" STEP=1
TOTAL=0
[ "$SETUP_CLAUDE" = true ] && TOTAL=$((TOTAL + 1))
[ "$SETUP_GEMINI" = true ] && TOTAL=$((TOTAL + 1))
[ "$SETUP_CODEX" = true ] && TOTAL=$((TOTAL + 1))
[ "$SETUP_COPILOT" = true ] && TOTAL=$((TOTAL + 1))
# ============================================================================= if [ "$SETUP_CLAUDE" = true ]; then
# CODEX (OPENAI) SETUP (.codex/skills symlink) echo -e "${YELLOW}[$STEP/$TOTAL] Setting up Claude Code...${NC}"
# ============================================================================= setup_claude
echo -e "${YELLOW}[2/3] Setting up Codex (OpenAI)...${NC}" STEP=$((STEP + 1))
if [ ! -d "$REPO_ROOT/.codex" ]; then
mkdir -p "$REPO_ROOT/.codex"
fi fi
if [ -L "$CODEX_SKILLS_TARGET" ]; then if [ "$SETUP_GEMINI" = true ]; then
rm "$CODEX_SKILLS_TARGET" echo -e "${YELLOW}[$STEP/$TOTAL] Setting up Gemini CLI...${NC}"
elif [ -d "$CODEX_SKILLS_TARGET" ]; then setup_gemini
mv "$CODEX_SKILLS_TARGET" "$REPO_ROOT/.codex/skills.backup.$(date +%s)" STEP=$((STEP + 1))
fi fi
ln -s "$SKILLS_SOURCE" "$CODEX_SKILLS_TARGET" if [ "$SETUP_CODEX" = true ]; then
echo -e "${GREEN} ✓ .codex/skills -> skills/${NC}" echo -e "${YELLOW}[$STEP/$TOTAL] Setting up Codex (OpenAI)...${NC}"
setup_codex
# ============================================================================= STEP=$((STEP + 1))
# GEMINI CLI SETUP (.gemini/skills symlink - auto-discovery)
# =============================================================================
echo -e "${YELLOW}[3/3] Setting up Gemini CLI...${NC}"
if [ ! -d "$REPO_ROOT/.gemini" ]; then
mkdir -p "$REPO_ROOT/.gemini"
fi fi
if [ -L "$GEMINI_SKILLS_TARGET" ]; then if [ "$SETUP_COPILOT" = true ]; then
rm "$GEMINI_SKILLS_TARGET" echo -e "${YELLOW}[$STEP/$TOTAL] Setting up GitHub Copilot...${NC}"
elif [ -d "$GEMINI_SKILLS_TARGET" ]; then setup_copilot
mv "$GEMINI_SKILLS_TARGET" "$REPO_ROOT/.gemini/skills.backup.$(date +%s)"
fi fi
ln -s "$SKILLS_SOURCE" "$GEMINI_SKILLS_TARGET"
echo -e "${GREEN} ✓ .gemini/skills -> skills/${NC}"
# ============================================================================= # =============================================================================
# SUMMARY # SUMMARY
# ============================================================================= # =============================================================================
echo "" echo ""
echo -e "${GREEN}✅ Successfully configured $SKILL_COUNT AI skills!${NC}" echo -e "${GREEN}✅ Successfully configured $SKILL_COUNT AI skills!${NC}"
echo "" echo ""
echo "Configuration created:" echo "Configured:"
echo " • Claude Code: .claude/skills/ (symlink, auto-discovery)" [ "$SETUP_CLAUDE" = true ] && echo " • Claude Code: .claude/skills/ + CLAUDE.md"
echo " • Codex (OpenAI): .codex/skills/ (symlink, reads AGENTS.md)" [ "$SETUP_CODEX" = true ] && echo " • Codex (OpenAI): .codex/skills/ + AGENTS.md (native)"
echo " • Gemini CLI: .gemini/skills/ (symlink, auto-discovery)" [ "$SETUP_GEMINI" = true ] && echo " • Gemini CLI: .gemini/skills/ + GEMINI.md"
echo " • GitHub Copilot: reads AGENTS.md from repo root (no setup needed)" [ "$SETUP_COPILOT" = true ] && echo " • GitHub Copilot: .github/copilot-instructions.md"
echo "" echo ""
echo "Available skills:" echo -e "${BLUE}Note: Restart your AI assistant to load the skills.${NC}"
echo " Generic: typescript, react-19, nextjs-15, playwright, pytest," echo -e "${BLUE} AGENTS.md is the source of truth - edit it, then re-run this script.${NC}"
echo " django-drf, zod-4, zustand-5, tailwind-4, ai-sdk-5"
echo ""
echo " Prowler: prowler, prowler-api, prowler-ui, prowler-mcp,"
echo " prowler-sdk-check, prowler-test-ui, prowler-test-api,"
echo " prowler-test-sdk, prowler-compliance, prowler-docs,"
echo " prowler-provider, prowler-pr"
echo ""
echo -e "${BLUE}Note: Restart your AI coding assistant to load the skills.${NC}"
echo -e "${BLUE} Claude/Gemini auto-discover skills from SKILL.md descriptions.${NC}"
echo -e "${BLUE} Codex/Copilot use AGENTS.md instructions to reference skills.${NC}"

skills/setup_test.sh Executable file
View File

@@ -0,0 +1,340 @@
#!/bin/bash
# Unit tests for setup.sh
# Run: ./skills/setup_test.sh
#
# shellcheck disable=SC2317
# Reason: Test functions are discovered and called dynamically via declare -F
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SETUP_SCRIPT="$SCRIPT_DIR/setup.sh"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Test counters
TESTS_RUN=0
TESTS_PASSED=0
TESTS_FAILED=0
# Test environment
TEST_DIR=""
# =============================================================================
# TEST FRAMEWORK
# =============================================================================
setup_test_env() {
TEST_DIR=$(mktemp -d)
# Create mock repo structure
mkdir -p "$TEST_DIR/skills/typescript"
mkdir -p "$TEST_DIR/skills/react-19"
mkdir -p "$TEST_DIR/api"
mkdir -p "$TEST_DIR/ui"
mkdir -p "$TEST_DIR/.github"
# Create mock SKILL.md files
echo "# TypeScript Skill" > "$TEST_DIR/skills/typescript/SKILL.md"
echo "# React 19 Skill" > "$TEST_DIR/skills/react-19/SKILL.md"
# Create mock AGENTS.md files
echo "# Root AGENTS" > "$TEST_DIR/AGENTS.md"
echo "# API AGENTS" > "$TEST_DIR/api/AGENTS.md"
echo "# UI AGENTS" > "$TEST_DIR/ui/AGENTS.md"
# Copy setup.sh to test dir
cp "$SETUP_SCRIPT" "$TEST_DIR/skills/setup.sh"
}
teardown_test_env() {
if [ -n "$TEST_DIR" ] && [ -d "$TEST_DIR" ]; then
rm -rf "$TEST_DIR"
fi
}
run_setup() {
(cd "$TEST_DIR/skills" && bash setup.sh "$@" 2>&1)
}
# Assertions return 0 on success, 1 on failure
assert_equals() {
local expected="$1" actual="$2" message="$3"
if [ "$expected" = "$actual" ]; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " Expected: $expected"
echo " Actual: $actual"
return 1
}
assert_contains() {
local haystack="$1" needle="$2" message="$3"
if echo "$haystack" | grep -q -F -- "$needle"; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " String not found: $needle"
return 1
}
assert_file_exists() {
local file="$1" message="$2"
if [ -f "$file" ]; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " File not found: $file"
return 1
}
assert_file_not_exists() {
local file="$1" message="$2"
if [ ! -f "$file" ]; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " File should not exist: $file"
return 1
}
assert_symlink_exists() {
local link="$1" message="$2"
if [ -L "$link" ]; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " Symlink not found: $link"
return 1
}
assert_symlink_not_exists() {
local link="$1" message="$2"
if [ ! -L "$link" ]; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " Symlink should not exist: $link"
return 1
}
assert_dir_exists() {
local dir="$1" message="$2"
if [ -d "$dir" ]; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " Directory not found: $dir"
return 1
}
# =============================================================================
# TESTS: FLAG PARSING
# =============================================================================
test_flag_help_shows_usage() {
local output
output=$(run_setup --help)
assert_contains "$output" "Usage:" "Help should show usage" && \
assert_contains "$output" "--all" "Help should mention --all flag" && \
assert_contains "$output" "--claude" "Help should mention --claude flag"
}
test_flag_unknown_reports_error() {
local output
output=$(run_setup --unknown 2>&1) || true
assert_contains "$output" "Unknown option" "Should report unknown option"
}
test_flag_all_configures_everything() {
local output
output=$(run_setup --all)
assert_contains "$output" "Claude Code" "Should setup Claude" && \
assert_contains "$output" "Gemini CLI" "Should setup Gemini" && \
assert_contains "$output" "Codex" "Should setup Codex" && \
assert_contains "$output" "Copilot" "Should setup Copilot"
}
test_flag_single_claude() {
local output
output=$(run_setup --claude)
assert_contains "$output" "Claude Code" "Should setup Claude" && \
assert_contains "$output" "[1/1]" "Should show 1/1 steps"
}
test_flag_multiple_combined() {
local output
output=$(run_setup --claude --codex)
assert_contains "$output" "[1/2]" "Should show step 1/2" && \
assert_contains "$output" "[2/2]" "Should show step 2/2"
}
# =============================================================================
# TESTS: SYMLINK CREATION
# =============================================================================
test_symlink_claude_created() {
run_setup --claude > /dev/null
assert_symlink_exists "$TEST_DIR/.claude/skills" "Claude skills symlink should exist"
}
test_symlink_gemini_created() {
run_setup --gemini > /dev/null
assert_symlink_exists "$TEST_DIR/.gemini/skills" "Gemini skills symlink should exist"
}
test_symlink_codex_created() {
run_setup --codex > /dev/null
assert_symlink_exists "$TEST_DIR/.codex/skills" "Codex skills symlink should exist"
}
test_symlink_not_created_without_flag() {
run_setup --copilot > /dev/null
assert_symlink_not_exists "$TEST_DIR/.claude/skills" "Claude symlink should not exist" && \
assert_symlink_not_exists "$TEST_DIR/.gemini/skills" "Gemini symlink should not exist" && \
assert_symlink_not_exists "$TEST_DIR/.codex/skills" "Codex symlink should not exist"
}
# =============================================================================
# TESTS: AGENTS.md COPYING
# =============================================================================
test_copy_claude_agents_md() {
run_setup --claude > /dev/null
assert_file_exists "$TEST_DIR/CLAUDE.md" "Root CLAUDE.md should exist" && \
assert_file_exists "$TEST_DIR/api/CLAUDE.md" "api/CLAUDE.md should exist" && \
assert_file_exists "$TEST_DIR/ui/CLAUDE.md" "ui/CLAUDE.md should exist"
}
test_copy_gemini_agents_md() {
run_setup --gemini > /dev/null
assert_file_exists "$TEST_DIR/GEMINI.md" "Root GEMINI.md should exist" && \
assert_file_exists "$TEST_DIR/api/GEMINI.md" "api/GEMINI.md should exist" && \
assert_file_exists "$TEST_DIR/ui/GEMINI.md" "ui/GEMINI.md should exist"
}
test_copy_copilot_to_github() {
run_setup --copilot > /dev/null
assert_file_exists "$TEST_DIR/.github/copilot-instructions.md" "Copilot instructions should exist"
}
test_copy_codex_no_extra_files() {
run_setup --codex > /dev/null
assert_file_not_exists "$TEST_DIR/CODEX.md" "CODEX.md should not be created"
}
test_copy_not_created_without_flag() {
run_setup --codex > /dev/null
assert_file_not_exists "$TEST_DIR/CLAUDE.md" "CLAUDE.md should not exist" && \
assert_file_not_exists "$TEST_DIR/GEMINI.md" "GEMINI.md should not exist"
}
test_copy_content_matches_source() {
run_setup --claude > /dev/null
local source_content target_content
source_content=$(cat "$TEST_DIR/AGENTS.md")
target_content=$(cat "$TEST_DIR/CLAUDE.md")
assert_equals "$source_content" "$target_content" "CLAUDE.md content should match AGENTS.md"
}
# =============================================================================
# TESTS: DIRECTORY CREATION
# =============================================================================
test_dir_claude_created() {
rm -rf "$TEST_DIR/.claude"
run_setup --claude > /dev/null
assert_dir_exists "$TEST_DIR/.claude" ".claude directory should be created"
}
test_dir_gemini_created() {
rm -rf "$TEST_DIR/.gemini"
run_setup --gemini > /dev/null
assert_dir_exists "$TEST_DIR/.gemini" ".gemini directory should be created"
}
test_dir_codex_created() {
rm -rf "$TEST_DIR/.codex"
run_setup --codex > /dev/null
assert_dir_exists "$TEST_DIR/.codex" ".codex directory should be created"
}
# =============================================================================
# TESTS: IDEMPOTENCY
# =============================================================================
test_idempotent_multiple_runs() {
run_setup --claude > /dev/null
run_setup --claude > /dev/null
assert_symlink_exists "$TEST_DIR/.claude/skills" "Symlink should still exist after second run" && \
assert_file_exists "$TEST_DIR/CLAUDE.md" "CLAUDE.md should still exist after second run"
}
# =============================================================================
# TEST RUNNER (autodiscovery)
# =============================================================================
run_all_tests() {
local test_functions current_section=""
# Discover all test_* functions
test_functions=$(declare -F | awk '{print $3}' | grep '^test_' | sort)
for test_func in $test_functions; do
# Extract section from function name (e.g., test_flag_* -> "Flag")
local section
section=$(echo "$test_func" | sed 's/^test_//' | cut -d'_' -f1)
section="$(echo "${section:0:1}" | tr '[:lower:]' '[:upper:]')${section:1}"
# Print section header if changed
if [ "$section" != "$current_section" ]; then
[ -n "$current_section" ] && echo ""
echo -e "${YELLOW}${section} tests:${NC}"
current_section="$section"
fi
# Convert function name to readable test name
local test_name
test_name=$(echo "$test_func" | sed 's/^test_//' | tr '_' ' ')
TESTS_RUN=$((TESTS_RUN + 1))
echo -n " $test_name... "
setup_test_env
if $test_func; then
echo -e "${GREEN}PASS${NC}"
TESTS_PASSED=$((TESTS_PASSED + 1))
else
TESTS_FAILED=$((TESTS_FAILED + 1))
fi
teardown_test_env
done
}
# =============================================================================
# MAIN
# =============================================================================
echo ""
echo "🧪 Running setup.sh unit tests"
echo "==============================="
echo ""
run_all_tests
echo ""
echo "==============================="
if [ $TESTS_FAILED -eq 0 ]; then
echo -e "${GREEN}✅ All $TESTS_RUN tests passed!${NC}"
exit 0
else
echo -e "${RED}$TESTS_FAILED of $TESTS_RUN tests failed${NC}"
exit 1
fi

View File

@@ -7,6 +7,8 @@ license: Apache-2.0
 metadata:
   author: prowler-cloud
   version: "1.0"
+  scope: [root]
+  auto_invoke: "Creating new skills"
 allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
 ---

120
skills/skill-sync/SKILL.md Normal file
View File

@@ -0,0 +1,120 @@
---
name: skill-sync
description: >
Syncs skill metadata to AGENTS.md Auto-invoke sections.
Trigger: When updating skill metadata (metadata.scope/metadata.auto_invoke), regenerating Auto-invoke tables, or running ./skills/skill-sync/assets/sync.sh (including --dry-run/--scope).
license: Apache-2.0
metadata:
author: prowler-cloud
version: "1.0"
scope: [root]
auto_invoke:
- "After creating/modifying a skill"
- "Regenerate AGENTS.md Auto-invoke tables (sync.sh)"
- "Troubleshoot why a skill is missing from AGENTS.md auto-invoke"
allowed-tools: Read, Edit, Write, Glob, Grep, Bash
---
## Purpose
Keeps AGENTS.md Auto-invoke sections in sync with skill metadata. When you create or modify a skill, run the sync script to automatically update all affected AGENTS.md files.
## Required Skill Metadata
Each skill that should appear in Auto-invoke sections needs these fields in `metadata`.
`auto_invoke` can be either a single string **or** a list of actions:
```yaml
metadata:
author: prowler-cloud
version: "1.0"
scope: [ui] # Which AGENTS.md: ui, api, sdk, root
# Option A: single action
auto_invoke: "Creating/modifying components"
# Option B: multiple actions
# auto_invoke:
# - "Creating/modifying components"
# - "Refactoring component folder placement"
```
### Scope Values
| Scope | Updates |
|-------|---------|
| `root` | `AGENTS.md` (repo root) |
| `ui` | `ui/AGENTS.md` |
| `api` | `api/AGENTS.md` |
| `sdk` | `prowler/AGENTS.md` |
Skills can have multiple scopes: `scope: [ui, api]`
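For instance, a skill that should surface in both the UI and API guides might declare (illustrative name and action, not a real skill):
```yaml
# skills/example-shared-skill/SKILL.md (hypothetical)
metadata:
  author: prowler-cloud
  version: "1.0"
  scope: [ui, api] # synced into ui/AGENTS.md and api/AGENTS.md
  auto_invoke: "Working with shared UI/API contracts"
```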
---
## Usage
### After Creating/Modifying a Skill
```bash
./skills/skill-sync/assets/sync.sh
```
### What It Does
1. Reads all `skills/*/SKILL.md` files
2. Extracts `metadata.scope` and `metadata.auto_invoke`
3. Generates Auto-invoke tables for each AGENTS.md
4. Updates the `### Auto-invoke Skills` section in each file
---
## Example
Given this skill metadata:
```yaml
# skills/prowler-ui/SKILL.md
metadata:
author: prowler-cloud
version: "1.0"
scope: [ui]
auto_invoke: "Creating/modifying React components"
```
The sync script generates in `ui/AGENTS.md`:
```markdown
### Auto-invoke Skills
When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Action | Skill |
|--------|-------|
| Creating/modifying React components | `prowler-ui` |
```
---
## Commands
```bash
# Sync all AGENTS.md files
./skills/skill-sync/assets/sync.sh
# Dry run (show what would change)
./skills/skill-sync/assets/sync.sh --dry-run
# Sync specific scope only
./skills/skill-sync/assets/sync.sh --scope ui
```
---
## Checklist After Modifying Skills
- [ ] Added `metadata.scope` to new/modified skill
- [ ] Added `metadata.auto_invoke` with action description
- [ ] Ran `./skills/skill-sync/assets/sync.sh`
- [ ] Verified AGENTS.md files updated correctly

325
skills/skill-sync/assets/sync.sh Executable file
View File

@@ -0,0 +1,325 @@
#!/usr/bin/env bash
# Sync skill metadata to AGENTS.md Auto-invoke sections
# Usage: ./sync.sh [--dry-run] [--scope <scope>]
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(dirname "$(dirname "$(dirname "$SCRIPT_DIR")")")"
SKILLS_DIR="$REPO_ROOT/skills"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Options
DRY_RUN=false
FILTER_SCOPE=""
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
--dry-run)
DRY_RUN=true
shift
;;
--scope)
FILTER_SCOPE="$2"
shift 2
;;
--help|-h)
echo "Usage: $0 [--dry-run] [--scope <scope>]"
echo ""
echo "Options:"
echo " --dry-run Show what would change without modifying files"
echo " --scope Only sync specific scope (root, ui, api, sdk)"
exit 0
;;
*)
echo -e "${RED}Unknown option: $1${NC}"
exit 1
;;
esac
done
# Map scope to AGENTS.md path
get_agents_path() {
local scope="$1"
case "$scope" in
root) echo "$REPO_ROOT/AGENTS.md" ;;
ui) echo "$REPO_ROOT/ui/AGENTS.md" ;;
api) echo "$REPO_ROOT/api/AGENTS.md" ;;
sdk) echo "$REPO_ROOT/prowler/AGENTS.md" ;;
*) echo "" ;;
esac
}
# Extract YAML frontmatter field using awk
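# Example (given the frontmatter format of this repo's SKILL.md files):
#   extract_field skills/skill-sync/SKILL.md name -> "skill-sync"
# Folded values (description: >) are joined onto one space-separated line.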
extract_field() {
local file="$1"
local field="$2"
awk -v field="$field" '
/^---$/ { in_frontmatter = !in_frontmatter; next }
in_frontmatter && $1 == field":" {
# Handle single line value
sub(/^[^:]+:[[:space:]]*/, "")
if ($0 != "" && $0 != ">") {
gsub(/^["'\'']|["'\'']$/, "") # Remove quotes
print
exit
}
# Handle multi-line value
getline
while (/^[[:space:]]/ && !/^---$/) {
sub(/^[[:space:]]+/, "")
printf "%s ", $0
if (!getline) break
}
print ""
exit
}
' "$file" | sed 's/[[:space:]]*$//'
}
# Extract nested metadata field
#
# Supports either:
# auto_invoke: "Single Action"
# or:
# auto_invoke:
# - "Action A"
# - "Action B"
#
# For list values, this returns a pipe-delimited string: "Action A|Action B"
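#
# Illustrative call (path is an example):
#   extract_metadata skills/foo/SKILL.md auto_invoke -> "Action A|Action B"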
extract_metadata() {
local file="$1"
local field="$2"
awk -v field="$field" '
function trim(s) {
sub(/^[[:space:]]+/, "", s)
sub(/[[:space:]]+$/, "", s)
return s
}
/^---$/ { in_frontmatter = !in_frontmatter; next }
in_frontmatter && /^metadata:/ { in_metadata = 1; next }
in_frontmatter && in_metadata && /^[a-z]/ && !/^[[:space:]]/ { in_metadata = 0 }
in_frontmatter && in_metadata && $1 == field":" {
# Remove "field:" prefix
sub(/^[^:]+:[[:space:]]*/, "")
# Single-line scalar: auto_invoke: "Action"
if ($0 != "") {
v = $0
gsub(/^["'\'']|["'\'']$/, "", v)
gsub(/^\[|\]$/, "", v) # legacy: allow inline [a, b]
print trim(v)
exit
}
# Multi-line list:
# auto_invoke:
# - "Action A"
# - "Action B"
out = ""
while (getline) {
# Stop when leaving metadata block
if (!in_frontmatter) break
if (!in_metadata) break
if ($0 ~ /^[a-z]/ && $0 !~ /^[[:space:]]/) break
# On multi-line list, only accept "- item" lines. Anything else ends the list.
line = $0
if (line ~ /^[[:space:]]*-[[:space:]]*/) {
sub(/^[[:space:]]*-[[:space:]]*/, "", line)
line = trim(line)
gsub(/^["'\'']|["'\'']$/, "", line)
if (line != "") {
if (out == "") out = line
else out = out "|" line
}
} else {
break
}
}
if (out != "") print out
exit
}
' "$file"
}
echo -e "${BLUE}Skill Sync - Updating AGENTS.md Auto-invoke sections${NC}"
echo "========================================================"
echo ""
# Collect skills by scope
declare -A SCOPE_SKILLS # scope -> "skill1:action1|skill2:action2|..."
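# Note: associative arrays require bash 4+. macOS's stock /bin/bash is 3.2,
# so on macOS run this script with a newer bash (e.g. Homebrew's).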
# Deterministic iteration order (stable diffs)
# Note: macOS ships BSD find; avoid GNU-only flags.
while IFS= read -r skill_file; do
[ -f "$skill_file" ] || continue
skill_name=$(extract_field "$skill_file" "name")
scope_raw=$(extract_metadata "$skill_file" "scope")
auto_invoke_raw=$(extract_metadata "$skill_file" "auto_invoke")
# extract_metadata() returns:
# - single action: "Action"
# - multiple actions: "Action A|Action B" (pipe-delimited)
# But SCOPE_SKILLS also uses '|' to separate entries, so we protect it.
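# e.g. "Action A|Action B" -> "Action A;;Action B" (expanded back to '|'
# when the table rows are built below)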
auto_invoke=${auto_invoke_raw//|/;;}
# Skip if no scope or auto_invoke defined
[ -z "$scope_raw" ] || [ -z "$auto_invoke" ] && continue
# Parse scope (can be comma-separated or space-separated)
IFS=', ' read -ra scopes <<< "$scope_raw"
for scope in "${scopes[@]}"; do
scope=$(echo "$scope" | tr -d '[:space:]')
[ -z "$scope" ] && continue
# Filter by scope if specified
[ -n "$FILTER_SCOPE" ] && [ "$scope" != "$FILTER_SCOPE" ] && continue
# Append to scope's skill list
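# Illustrative shape: SCOPE_SKILLS[ui]="tailwind-4:Working with Tailwind classes|typescript:Writing TypeScript types/interfaces"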
if [ -z "${SCOPE_SKILLS[$scope]}" ]; then
SCOPE_SKILLS[$scope]="$skill_name:$auto_invoke"
else
SCOPE_SKILLS[$scope]="${SCOPE_SKILLS[$scope]}|$skill_name:$auto_invoke"
fi
done
done < <(find "$SKILLS_DIR" -mindepth 2 -maxdepth 2 -name SKILL.md -print | sort)
# Generate Auto-invoke section for each scope
# Deterministic scope order (stable diffs)
scopes_sorted=()
while IFS= read -r scope; do
scopes_sorted+=("$scope")
done < <(printf "%s\n" "${!SCOPE_SKILLS[@]}" | sort)
for scope in "${scopes_sorted[@]}"; do
agents_path=$(get_agents_path "$scope")
if [ -z "$agents_path" ] || [ ! -f "$agents_path" ]; then
echo -e "${YELLOW}Warning: No AGENTS.md found for scope '$scope'${NC}"
continue
fi
echo -e "${BLUE}Processing: $scope -> $(basename "$(dirname "$agents_path")")/AGENTS.md${NC}"
# Build the Auto-invoke table
auto_invoke_section="### Auto-invoke Skills
When performing these actions, ALWAYS invoke the corresponding skill FIRST:
| Action | Skill |
|--------|-------|"
# Expand into sortable rows: "action<TAB>skill"
rows=()
IFS='|' read -ra skill_entries <<< "${SCOPE_SKILLS[$scope]}"
for entry in "${skill_entries[@]}"; do
skill_name="${entry%%:*}"
actions_raw="${entry#*:}"
actions_raw=${actions_raw//;;/|}
IFS='|' read -ra actions <<< "$actions_raw"
for action in "${actions[@]}"; do
action="$(echo "$action" | sed 's/^[[:space:]]*//; s/[[:space:]]*$//')"
[ -z "$action" ] && continue
      # Join with an explicit tab so the tab-delimited sort/read below split reliably
      rows+=("${action}"$'\t'"${skill_name}")
done
done
# Deterministic row order: Action then Skill
while IFS=$'\t' read -r action skill_name; do
[ -z "$action" ] && continue
auto_invoke_section="$auto_invoke_section
| $action | \`$skill_name\` |"
done < <(printf "%s\n" "${rows[@]}" | LC_ALL=C sort -t $'\t' -k1,1 -k2,2)
if $DRY_RUN; then
echo -e "${YELLOW}[DRY RUN] Would update $agents_path with:${NC}"
echo "$auto_invoke_section"
echo ""
else
# Write new section to temp file (avoids awk multi-line string issues on macOS)
section_file=$(mktemp)
echo "$auto_invoke_section" > "$section_file"
# Check if Auto-invoke section exists
if grep -q "### Auto-invoke Skills" "$agents_path"; then
# Replace existing section (up to next --- or ## heading)
awk '
/^### Auto-invoke Skills/ {
while ((getline line < "'"$section_file"'") > 0) print line
close("'"$section_file"'")
skip = 1
next
}
skip && /^(---|## )/ {
skip = 0
print ""
}
!skip { print }
' "$agents_path" > "$agents_path.tmp"
mv "$agents_path.tmp" "$agents_path"
echo -e "${GREEN} ✓ Updated Auto-invoke section${NC}"
else
# Insert after Skills Reference blockquote
awk '
/^>.*SKILL\.md\)$/ && !inserted {
print
getline
if (/^$/) {
print ""
while ((getline line < "'"$section_file"'") > 0) print line
close("'"$section_file"'")
print ""
inserted = 1
next
}
}
{ print }
' "$agents_path" > "$agents_path.tmp"
mv "$agents_path.tmp" "$agents_path"
echo -e "${GREEN} ✓ Inserted Auto-invoke section${NC}"
fi
rm -f "$section_file"
fi
done
echo ""
echo -e "${GREEN}Done!${NC}"
# Show skills without metadata
echo ""
echo -e "${BLUE}Skills missing sync metadata:${NC}"
missing=0
while IFS= read -r skill_file; do
[ -f "$skill_file" ] || continue
skill_name=$(extract_field "$skill_file" "name")
scope_raw=$(extract_metadata "$skill_file" "scope")
auto_invoke_raw=$(extract_metadata "$skill_file" "auto_invoke")
auto_invoke=${auto_invoke_raw//|/;;}
if [ -z "$scope_raw" ] || [ -z "$auto_invoke" ]; then
echo -e " ${YELLOW}$skill_name${NC} - missing: ${scope_raw:+}${scope_raw:-scope} ${auto_invoke:+}${auto_invoke:-auto_invoke}"
missing=$((missing + 1))
fi
done < <(find "$SKILLS_DIR" -mindepth 2 -maxdepth 2 -name SKILL.md -print | sort)
if [ $missing -eq 0 ]; then
echo -e " ${GREEN}All skills have sync metadata${NC}"
fi

View File

@@ -0,0 +1,604 @@
#!/bin/bash
# Unit tests for sync.sh
# Run: ./skills/skill-sync/assets/sync_test.sh
#
# shellcheck disable=SC2317
# Reason: Test functions are discovered and called dynamically via declare -F
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SYNC_SCRIPT="$SCRIPT_DIR/sync.sh"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Test counters
TESTS_RUN=0
TESTS_PASSED=0
TESTS_FAILED=0
# Test environment
TEST_DIR=""
# =============================================================================
# TEST FRAMEWORK
# =============================================================================
setup_test_env() {
TEST_DIR=$(mktemp -d)
# Create mock repo structure
mkdir -p "$TEST_DIR/skills/mock-ui-skill"
mkdir -p "$TEST_DIR/skills/mock-api-skill"
mkdir -p "$TEST_DIR/skills/mock-sdk-skill"
mkdir -p "$TEST_DIR/skills/mock-root-skill"
mkdir -p "$TEST_DIR/skills/mock-no-metadata"
mkdir -p "$TEST_DIR/skills/skill-sync/assets"
mkdir -p "$TEST_DIR/ui"
mkdir -p "$TEST_DIR/api"
mkdir -p "$TEST_DIR/prowler"
# Create mock SKILL.md files with metadata
cat > "$TEST_DIR/skills/mock-ui-skill/SKILL.md" << 'EOF'
---
name: mock-ui-skill
description: >
Mock UI skill for testing.
Trigger: When testing UI.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [ui]
auto_invoke: "Testing UI components"
allowed-tools: Read
---
# Mock UI Skill
EOF
cat > "$TEST_DIR/skills/mock-api-skill/SKILL.md" << 'EOF'
---
name: mock-api-skill
description: >
Mock API skill for testing.
Trigger: When testing API.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [api]
auto_invoke: "Testing API endpoints"
allowed-tools: Read
---
# Mock API Skill
EOF
cat > "$TEST_DIR/skills/mock-sdk-skill/SKILL.md" << 'EOF'
---
name: mock-sdk-skill
description: >
Mock SDK skill for testing.
Trigger: When testing SDK.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [sdk]
auto_invoke: "Testing SDK checks"
allowed-tools: Read
---
# Mock SDK Skill
EOF
cat > "$TEST_DIR/skills/mock-root-skill/SKILL.md" << 'EOF'
---
name: mock-root-skill
description: >
Mock root skill for testing.
Trigger: When testing root.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [root]
auto_invoke: "Testing root actions"
allowed-tools: Read
---
# Mock Root Skill
EOF
# Skill without sync metadata
cat > "$TEST_DIR/skills/mock-no-metadata/SKILL.md" << 'EOF'
---
name: mock-no-metadata
description: >
Skill without sync metadata.
license: Apache-2.0
metadata:
author: test
version: "1.0"
allowed-tools: Read
---
# No Metadata Skill
EOF
# Create mock AGENTS.md files with Skills Reference section
cat > "$TEST_DIR/AGENTS.md" << 'EOF'
# Root AGENTS
> **Skills Reference**: For detailed patterns, use these skills:
> - [`mock-root-skill`](skills/mock-root-skill/SKILL.md)
## Project Overview
This is the root agents file.
EOF
cat > "$TEST_DIR/ui/AGENTS.md" << 'EOF'
# UI AGENTS
> **Skills Reference**: For detailed patterns, use these skills:
> - [`mock-ui-skill`](../skills/mock-ui-skill/SKILL.md)
## CRITICAL RULES
UI rules here.
EOF
cat > "$TEST_DIR/api/AGENTS.md" << 'EOF'
# API AGENTS
> **Skills Reference**: For detailed patterns, use these skills:
> - [`mock-api-skill`](../skills/mock-api-skill/SKILL.md)
## CRITICAL RULES
API rules here.
EOF
cat > "$TEST_DIR/prowler/AGENTS.md" << 'EOF'
# SDK AGENTS
> **Skills Reference**: For detailed patterns, use these skills:
> - [`mock-sdk-skill`](../skills/mock-sdk-skill/SKILL.md)
## Project Overview
SDK overview here.
EOF
# Copy sync.sh to test dir
cp "$SYNC_SCRIPT" "$TEST_DIR/skills/skill-sync/assets/sync.sh"
chmod +x "$TEST_DIR/skills/skill-sync/assets/sync.sh"
}
teardown_test_env() {
if [ -n "$TEST_DIR" ] && [ -d "$TEST_DIR" ]; then
rm -rf "$TEST_DIR"
fi
}
run_sync() {
(cd "$TEST_DIR/skills/skill-sync/assets" && bash sync.sh "$@" 2>&1)
}
# Assertions
assert_equals() {
local expected="$1" actual="$2" message="$3"
if [ "$expected" = "$actual" ]; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " Expected: $expected"
echo " Actual: $actual"
return 1
}
assert_contains() {
local haystack="$1" needle="$2" message="$3"
if echo "$haystack" | grep -q -F -- "$needle"; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " String not found: $needle"
return 1
}
assert_not_contains() {
local haystack="$1" needle="$2" message="$3"
if ! echo "$haystack" | grep -q -F -- "$needle"; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " String should not be found: $needle"
return 1
}
assert_file_contains() {
local file="$1" needle="$2" message="$3"
if grep -q -F -- "$needle" "$file" 2>/dev/null; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " File: $file"
echo " String not found: $needle"
return 1
}
assert_file_not_contains() {
local file="$1" needle="$2" message="$3"
if ! grep -q -F -- "$needle" "$file" 2>/dev/null; then
return 0
fi
echo -e "${RED} FAIL: $message${NC}"
echo " File: $file"
echo " String should not be found: $needle"
return 1
}
# =============================================================================
# TESTS: FLAG PARSING
# =============================================================================
test_flag_help_shows_usage() {
local output
output=$(run_sync --help)
assert_contains "$output" "Usage:" "Help should show usage" && \
assert_contains "$output" "--dry-run" "Help should mention --dry-run" && \
assert_contains "$output" "--scope" "Help should mention --scope"
}
test_flag_unknown_reports_error() {
local output
output=$(run_sync --unknown 2>&1) || true
assert_contains "$output" "Unknown option" "Should report unknown option"
}
test_flag_dryrun_shows_changes() {
local output
output=$(run_sync --dry-run)
assert_contains "$output" "[DRY RUN]" "Should show dry run marker" && \
assert_contains "$output" "Would update" "Should say would update"
}
test_flag_dryrun_no_file_changes() {
run_sync --dry-run > /dev/null
assert_file_not_contains "$TEST_DIR/ui/AGENTS.md" "### Auto-invoke Skills" \
"AGENTS.md should not be modified in dry run"
}
test_flag_scope_filters_correctly() {
local output
output=$(run_sync --scope ui)
assert_contains "$output" "Processing: ui" "Should process ui scope" && \
assert_not_contains "$output" "Processing: api" "Should not process api scope"
}
# =============================================================================
# TESTS: METADATA EXTRACTION
# =============================================================================
test_metadata_extracts_scope() {
local output
output=$(run_sync --dry-run)
assert_contains "$output" "Processing: ui" "Should detect ui scope" && \
assert_contains "$output" "Processing: api" "Should detect api scope" && \
assert_contains "$output" "Processing: sdk" "Should detect sdk scope" && \
assert_contains "$output" "Processing: root" "Should detect root scope"
}
test_metadata_extracts_auto_invoke() {
local output
output=$(run_sync --dry-run)
assert_contains "$output" "Testing UI components" "Should extract UI auto_invoke" && \
assert_contains "$output" "Testing API endpoints" "Should extract API auto_invoke" && \
assert_contains "$output" "Testing SDK checks" "Should extract SDK auto_invoke"
}
test_metadata_missing_reports_skills() {
local output
output=$(run_sync --dry-run)
assert_contains "$output" "Skills missing sync metadata" "Should report missing metadata section" && \
assert_contains "$output" "mock-no-metadata" "Should list skill without metadata"
}
test_metadata_skips_without_scope_in_processing() {
local output
output=$(run_sync --dry-run)
# Should not appear in "Processing:" lines, only in "missing metadata" section
local processing_lines
processing_lines=$(echo "$output" | grep "Processing:")
assert_not_contains "$processing_lines" "mock-no-metadata" "Should not process skill without scope"
}
# =============================================================================
# TESTS: AUTO-INVOKE GENERATION
# =============================================================================
test_generate_creates_table() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "### Auto-invoke Skills" \
"Should create Auto-invoke section" && \
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "| Action | Skill |" \
"Should create table header"
}
test_generate_correct_skill_in_ui() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "mock-ui-skill" \
"UI AGENTS should contain mock-ui-skill" && \
assert_file_not_contains "$TEST_DIR/ui/AGENTS.md" "mock-api-skill" \
"UI AGENTS should not contain mock-api-skill"
}
test_generate_correct_skill_in_api() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/api/AGENTS.md" "mock-api-skill" \
"API AGENTS should contain mock-api-skill" && \
assert_file_not_contains "$TEST_DIR/api/AGENTS.md" "mock-ui-skill" \
"API AGENTS should not contain mock-ui-skill"
}
test_generate_correct_skill_in_sdk() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/prowler/AGENTS.md" "mock-sdk-skill" \
"SDK AGENTS should contain mock-sdk-skill" && \
assert_file_not_contains "$TEST_DIR/prowler/AGENTS.md" "mock-ui-skill" \
"SDK AGENTS should not contain mock-ui-skill"
}
test_generate_correct_skill_in_root() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/AGENTS.md" "mock-root-skill" \
"Root AGENTS should contain mock-root-skill" && \
assert_file_not_contains "$TEST_DIR/AGENTS.md" "mock-ui-skill" \
"Root AGENTS should not contain mock-ui-skill"
}
test_generate_includes_action_text() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "Testing UI components" \
"Should include auto_invoke action text"
}
test_generate_splits_multi_action_auto_invoke_list() {
# Change UI skill to use list auto_invoke (two actions)
cat > "$TEST_DIR/skills/mock-ui-skill/SKILL.md" << 'EOF'
---
name: mock-ui-skill
description: Mock UI skill with multi-action auto_invoke list.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [ui]
auto_invoke:
- "Action B"
- "Action A"
allowed-tools: Read
---
EOF
run_sync > /dev/null
# Both actions should produce rows
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "| Action A | \`mock-ui-skill\` |" \
"Should create row for Action A" && \
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "| Action B | \`mock-ui-skill\` |" \
"Should create row for Action B"
}
test_generate_orders_rows_by_action_then_skill() {
# Two skills, intentionally out-of-order actions, same scope
cat > "$TEST_DIR/skills/mock-ui-skill/SKILL.md" << 'EOF'
---
name: mock-ui-skill
description: Mock UI skill.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [ui]
auto_invoke:
- "Z action"
- "A action"
allowed-tools: Read
---
EOF
mkdir -p "$TEST_DIR/skills/mock-ui-skill-2"
cat > "$TEST_DIR/skills/mock-ui-skill-2/SKILL.md" << 'EOF'
---
name: mock-ui-skill-2
description: Second UI skill.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [ui]
auto_invoke: "A action"
allowed-tools: Read
---
EOF
run_sync > /dev/null
# Verify order within the table is: "A action" rows first, then "Z action"
local table_segment
table_segment=$(awk '
/^\| Action \| Skill \|/ { in_table=1 }
in_table && /^---$/ { next }
in_table && /^\|/ { print }
in_table && !/^\|/ { exit }
' "$TEST_DIR/ui/AGENTS.md")
local first_a_index first_z_index
first_a_index=$(echo "$table_segment" | awk '/\| A action \|/ { print NR; exit }')
first_z_index=$(echo "$table_segment" | awk '/\| Z action \|/ { print NR; exit }')
# Both must exist and A must come before Z
[ -n "$first_a_index" ] && [ -n "$first_z_index" ] && [ "$first_a_index" -lt "$first_z_index" ]
}
# =============================================================================
# TESTS: AGENTS.MD UPDATE
# =============================================================================
test_update_preserves_header() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "# UI AGENTS" \
"Should preserve original header"
}
test_update_preserves_skills_reference() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "Skills Reference" \
"Should preserve Skills Reference section"
}
test_update_preserves_content_after() {
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "## CRITICAL RULES" \
"Should preserve content after Auto-invoke section"
}
test_update_replaces_existing_section() {
# First run creates section
run_sync > /dev/null
    # Modify a skill's auto_invoke (portable across BSD and GNU sed:
    # GNU sed rejects a separate '' argument after -i and BSD sed rejects a
    # bare -i, so use an attached backup suffix and remove the backup)
    sed -i.bak 's/Testing UI components/Modified UI action/' "$TEST_DIR/skills/mock-ui-skill/SKILL.md"
    rm -f "$TEST_DIR/skills/mock-ui-skill/SKILL.md.bak"
# Second run should replace
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "Modified UI action" \
"Should update with new auto_invoke text" && \
assert_file_not_contains "$TEST_DIR/ui/AGENTS.md" "Testing UI components" \
"Should remove old auto_invoke text"
}
# =============================================================================
# TESTS: IDEMPOTENCY
# =============================================================================
test_idempotent_multiple_runs() {
run_sync > /dev/null
local first_content
first_content=$(cat "$TEST_DIR/ui/AGENTS.md")
run_sync > /dev/null
local second_content
second_content=$(cat "$TEST_DIR/ui/AGENTS.md")
assert_equals "$first_content" "$second_content" \
"Multiple runs should produce identical output"
}
test_idempotent_no_duplicate_sections() {
run_sync > /dev/null
run_sync > /dev/null
run_sync > /dev/null
local count
count=$(grep -c "### Auto-invoke Skills" "$TEST_DIR/ui/AGENTS.md")
assert_equals "1" "$count" "Should have exactly one Auto-invoke section"
}
# =============================================================================
# TESTS: MULTI-SCOPE SKILLS
# =============================================================================
test_multiscope_skill_appears_in_multiple() {
# Create a skill with multiple scopes
cat > "$TEST_DIR/skills/mock-ui-skill/SKILL.md" << 'EOF'
---
name: mock-ui-skill
description: Mock skill with multiple scopes.
license: Apache-2.0
metadata:
author: test
version: "1.0"
scope: [ui, api]
auto_invoke: "Multi-scope action"
allowed-tools: Read
---
EOF
run_sync > /dev/null
assert_file_contains "$TEST_DIR/ui/AGENTS.md" "mock-ui-skill" \
"Multi-scope skill should appear in UI" && \
assert_file_contains "$TEST_DIR/api/AGENTS.md" "mock-ui-skill" \
"Multi-scope skill should appear in API"
}
# =============================================================================
# TEST RUNNER
# =============================================================================
run_all_tests() {
local test_functions current_section=""
test_functions=$(declare -F | awk '{print $3}' | grep '^test_' | sort)
for test_func in $test_functions; do
local section
section=$(echo "$test_func" | sed 's/^test_//' | cut -d'_' -f1)
section="$(echo "${section:0:1}" | tr '[:lower:]' '[:upper:]')${section:1}"
if [ "$section" != "$current_section" ]; then
[ -n "$current_section" ] && echo ""
echo -e "${YELLOW}${section} tests:${NC}"
current_section="$section"
fi
local test_name
test_name=$(echo "$test_func" | sed 's/^test_//' | tr '_' ' ')
TESTS_RUN=$((TESTS_RUN + 1))
echo -n " $test_name... "
setup_test_env
if $test_func; then
echo -e "${GREEN}PASS${NC}"
TESTS_PASSED=$((TESTS_PASSED + 1))
else
TESTS_FAILED=$((TESTS_FAILED + 1))
fi
teardown_test_env
done
}
# =============================================================================
# MAIN
# =============================================================================
echo ""
echo "🧪 Running sync.sh unit tests"
echo "=============================="
echo ""
run_all_tests
echo ""
echo "=============================="
if [ $TESTS_FAILED -eq 0 ]; then
echo -e "${GREEN}✅ All $TESTS_RUN tests passed!${NC}"
exit 0
else
echo -e "${RED}$TESTS_FAILED of $TESTS_RUN tests failed${NC}"
exit 1
fi

View File

@@ -2,11 +2,13 @@
 name: tailwind-4
 description: >
   Tailwind CSS 4 patterns and best practices.
-  Trigger: When styling with Tailwind - cn(), theme variables, no var() in className.
+  Trigger: When styling with Tailwind (className, variants, cn()), especially when dynamic styling or CSS variables are involved (no var() in className).
 license: Apache-2.0
 metadata:
   author: prowler-cloud
   version: "1.0"
+  scope: [root, ui]
+  auto_invoke: "Working with Tailwind classes"
 allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
 ---

View File

@@ -2,11 +2,13 @@
 name: typescript
 description: >
   TypeScript strict patterns and best practices.
-  Trigger: When writing TypeScript code - types, interfaces, generics.
+  Trigger: When implementing or refactoring TypeScript in .ts/.tsx (types, interfaces, generics, const maps, type guards, removing any, tightening unknown).
 license: Apache-2.0
 metadata:
   author: prowler-cloud
   version: "1.0"
+  scope: [root, ui]
+  auto_invoke: "Writing TypeScript types/interfaces"
 allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
 ---

View File

@@ -2,11 +2,13 @@
 name: zod-4
 description: >
   Zod 4 schema validation patterns.
-  Trigger: When using Zod for validation - breaking changes from v3.
+  Trigger: When creating or updating Zod v4 schemas for validation/parsing (forms, request payloads, adapters), including v3 -> v4 migration patterns.
 license: Apache-2.0
 metadata:
   author: prowler-cloud
   version: "1.0"
+  scope: [root, ui]
+  auto_invoke: "Creating Zod schemas"
 allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
 ---

View File

@@ -2,11 +2,13 @@
 name: zustand-5
 description: >
   Zustand 5 state management patterns.
-  Trigger: When managing React state with Zustand.
+  Trigger: When implementing client-side state with Zustand (stores, selectors, persist middleware, slices).
 license: Apache-2.0
 metadata:
   author: prowler-cloud
   version: "1.0"
+  scope: [root, ui]
+  auto_invoke: "Using Zustand stores"
 allowed-tools: Read, Edit, Write, Glob, Grep, Bash, WebFetch, WebSearch, Task
 ---

View File

@@ -12,6 +12,27 @@
 > - [`ai-sdk-5`](../skills/ai-sdk-5/SKILL.md) - UIMessage, sendMessage
 > - [`playwright`](../skills/playwright/SKILL.md) - Page Object Model, selectors
+
+### Auto-invoke Skills
+
+When performing these actions, ALWAYS invoke the corresponding skill FIRST:
+
+| Action | Skill |
+|--------|-------|
+| App Router / Server Actions | `nextjs-15` |
+| Building AI chat features | `ai-sdk-5` |
+| Creating Zod schemas | `zod-4` |
+| Creating/modifying Prowler UI components | `prowler-ui` |
+| Using Zustand stores | `zustand-5` |
+| Working on Prowler UI structure (actions/adapters/types/hooks) | `prowler-ui` |
+| Working with Prowler UI test helpers/pages | `prowler-test-ui` |
+| Working with Tailwind classes | `tailwind-4` |
+| Writing Playwright E2E tests | `playwright` |
+| Writing Prowler UI E2E tests | `prowler-test-ui` |
+| Writing React components | `react-19` |
+| Writing TypeScript types/interfaces | `typescript` |
+
+---
+
 ## CRITICAL RULES - NON-NEGOTIABLE
 
 ### React