mirror of
https://github.com/prowler-cloud/prowler.git
synced 2026-04-03 05:55:54 +00:00
Compare commits
14 Commits
add-search
...
PROWLER-48
| Author | SHA1 | Date |
|---|---|---|
|  | 924f6089d9 |  |
|  | e2f30e0987 |  |
|  | c80710adfc |  |
|  | 1410fe2ff1 |  |
|  | 284910d402 |  |
|  | 04f795bd49 |  |
|  | 8b5e00163e |  |
|  | 57d7f77c81 |  |
|  | 16b1052ff1 |  |
|  | 978e2c82af |  |
|  | 0c3ba0b737 |  |
|  | 4addfcc848 |  |
|  | 8588cc03f4 |  |
|  | 7507fea24b |  |
README.md (40 lines changed)
```diff
@@ -6,7 +6,7 @@
 <b><i>Prowler</b> is the Open Cloud Security platform trusted by thousands to automate security and compliance in any cloud environment. With hundreds of ready-to-use checks and compliance frameworks, Prowler delivers real-time, customizable monitoring and seamless integrations, making cloud security simple, scalable, and cost-effective for organizations of any size.
 </p>
 <p align="center">
-<b>Learn more at <a href="https://prowler.com">prowler.com</i></b>
+<b>Secure ANY cloud at AI Speed at <a href="https://prowler.com">prowler.com</i></b>
 </p>

 <p align="center">
@@ -35,28 +35,32 @@
 </p>
 <hr>
 <p align="center">
-<img align="center" src="/docs/img/prowler-cli-quick.gif" width="100%" height="100%">
+<img align="center" src="/docs/img/prowler-cloud.gif" width="100%" height="100%">
 </p>

 # Description

-**Prowler** is an open-source security tool designed to assess and enforce security best practices across AWS, Azure, Google Cloud, and Kubernetes. It supports tasks such as security audits, incident response, continuous monitoring, system hardening, forensic readiness, and remediation processes.
+**Prowler** is the world’s most widely used _open-source cloud security platform_ that automates security and compliance across **any cloud environment**. With hundreds of ready-to-use security checks, remediation guidance, and compliance frameworks, Prowler is built to _“Secure ANY cloud at AI Speed”_. Prowler delivers **AI-driven**, **customizable**, and **easy-to-use** assessments, dashboards, reports, and integrations, making cloud security **simple**, **scalable**, and **cost-effective** for organizations of any size.

 Prowler includes hundreds of built-in controls to ensure compliance with standards and frameworks, including:

-- **Industry Standards:** CIS, NIST 800, NIST CSF, and CISA
-- **Regulatory Compliance and Governance:** RBI, FedRAMP, and PCI-DSS
+- **Prowler ThreatScore:** Weighted risk prioritization scoring that helps you focus on the most critical security findings first
+- **Industry Standards:** CIS, NIST 800, NIST CSF, CISA, and MITRE ATT&CK
+- **Regulatory Compliance and Governance:** RBI, FedRAMP, PCI-DSS, and NIS2
 - **Frameworks for Sensitive Data and Privacy:** GDPR, HIPAA, and FFIEC
-- **Frameworks for Organizational Governance and Quality Control:** SOC2 and GXP
-- **AWS-Specific Frameworks:** AWS Foundational Technical Review (FTR) and AWS Well-Architected Framework (Security Pillar)
-- **National Security Standards:** ENS (Spanish National Security Scheme)
+- **Frameworks for Organizational Governance and Quality Control:** SOC2, GXP, and ISO 27001
+- **Cloud-Specific Frameworks:** AWS Foundational Technical Review (FTR), AWS Well-Architected Framework, and BSI C5
+- **National Security Standards:** ENS (Spanish National Security Scheme) and KISA ISMS-P (Korean)
 - **Custom Security Frameworks:** Tailored to your needs

-## Prowler App
+## Prowler App / Prowler Cloud

-Prowler App is a web-based application that simplifies running Prowler across your cloud provider accounts. It provides a user-friendly interface to visualize the results and streamline your security assessments.
+Prowler App / [Prowler Cloud](https://cloud.prowler.com/) is a web-based application that simplifies running Prowler across your cloud provider accounts. It provides a user-friendly interface to visualize the results and streamline your security assessments.

-
+
+

 >For more details, refer to the [Prowler App Documentation](https://docs.prowler.com/projects/prowler-open-source/en/latest/#prowler-app-installation)

@@ -82,16 +86,16 @@ prowler dashboard

 | Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) | Support | Interface |
 |---|---|---|---|---|---|---|
-| AWS | 576 | 82 | 39 | 10 | Official | UI, API, CLI |
-| GCP | 79 | 13 | 13 | 3 | Official | UI, API, CLI |
-| Azure | 162 | 19 | 13 | 4 | Official | UI, API, CLI |
-| Kubernetes | 83 | 7 | 5 | 7 | Official | UI, API, CLI |
-| GitHub | 17 | 2 | 1 | 0 | Official | Stable | UI, API, CLI |
+| AWS | 584 | 85 | 40 | 17 | Official | UI, API, CLI |
+| GCP | 89 | 17 | 14 | 5 | Official | UI, API, CLI |
+| Azure | 169 | 22 | 15 | 8 | Official | UI, API, CLI |
+| Kubernetes | 84 | 7 | 6 | 9 | Official | UI, API, CLI |
+| GitHub | 20 | 2 | 1 | 2 | Official | UI, API, CLI |
 | M365 | 70 | 7 | 3 | 2 | Official | UI, API, CLI |
-| OCI | 51 | 13 | 1 | 10 | Official | UI, API, CLI |
-| Alibaba Cloud | 61 | 9 | 1 | 9 | Official | CLI |
+| OCI | 52 | 15 | 1 | 12 | Official | UI, API, CLI |
+| Alibaba Cloud | 63 | 10 | 1 | 9 | Official | CLI |
 | IaC | [See `trivy` docs.](https://trivy.dev/latest/docs/coverage/iac/) | N/A | N/A | N/A | Official | UI, API, CLI |
-| MongoDB Atlas | 10 | 3 | 0 | 0 | Official | UI, API, CLI |
+| MongoDB Atlas | 10 | 4 | 0 | 3 | Official | UI, API, CLI |
 | LLM | [See `promptfoo` docs.](https://www.promptfoo.dev/docs/red-team/plugins/) | N/A | N/A | N/A | Official | CLI |
 | NHN | 6 | 2 | 1 | 0 | Unofficial | CLI |
```
```diff
@@ -2,7 +2,7 @@

 All notable changes to the **Prowler API** are documented in this file.

-## [1.16.0] (Unreleased)
+## [1.16.0] (Prowler v5.15.0)

 ### Added
 - New endpoint to retrieve an overview of the attack surfaces [(#9309)](https://github.com/prowler-cloud/prowler/pull/9309)
```
New file (30 lines):

```python
# Generated by Django 5.1.14 on 2025-12-10
from django.db import migrations
from tasks.tasks import backfill_daily_severity_summaries_task

from api.db_router import MainRouter
from api.rls import Tenant


def trigger_backfill_task(apps, schema_editor):
    """
    Trigger the backfill task for all tenants.

    This dispatches backfill_daily_severity_summaries_task for each tenant
    in the system to populate DailySeveritySummary records from historical scans.
    """
    tenant_ids = Tenant.objects.using(MainRouter.admin_db).values_list("id", flat=True)

    for tenant_id in tenant_ids:
        backfill_daily_severity_summaries_task.delay(tenant_id=str(tenant_id), days=90)


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0061_daily_severity_summary"),
    ]

    operations = [
        migrations.RunPython(trigger_backfill_task, migrations.RunPython.noop),
    ]
```
```diff
@@ -1,6 +1,8 @@
 from collections import defaultdict
+from datetime import timedelta

 from django.db.models import Sum
+from django.utils import timezone

 from api.db_router import READ_REPLICA_ALIAS
 from api.db_utils import rls_transaction
@@ -186,10 +188,6 @@ def backfill_daily_severity_summaries(tenant_id: str, days: int = None):
     Backfill DailySeveritySummary from completed scans.
     Groups by provider+date, keeps latest scan per day.
     """
-    from datetime import timedelta
-
-    from django.utils import timezone
-
     created_count = 0
     updated_count = 0
```
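The backfill above groups completed scans by provider and date and keeps only the latest scan per day. A minimal plain-Python sketch of that grouping logic (the `scans` dict shape and field names are illustrative, not the actual ORM queryset):

```python
from datetime import datetime


def latest_scan_per_day(scans: list[dict]) -> dict:
    """Group scans by (provider_id, date), keeping only the latest per day."""
    latest: dict = {}
    for scan in scans:
        completed = datetime.fromisoformat(scan["completed_at"])
        key = (scan["provider_id"], completed.date())
        # Replace the stored scan if this one finished later on the same day
        if key not in latest or completed > datetime.fromisoformat(
            latest[key]["completed_at"]
        ):
            latest[key] = scan
    return latest
```

Each surviving entry is then what a summary row would be built from, one per provider per day.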
Binary files (not shown):

- Modified: Before 420 KiB, After 743 KiB
- docs/images/products/risk-pipeline.png (new file, 690 KiB)
- docs/images/products/threat-map.png (new file, 872 KiB)
- Removed: Before 552 KiB
- docs/img/prowler-cloud.gif (new file, 1.4 MiB)
```diff
@@ -2,11 +2,15 @@

 All notable changes to the **Prowler MCP Server** are documented in this file.

-## [0.2.0] (Prowler UNRELEASED)
+## [0.2.0] (Prowler v5.15.0)

 ### Added

 - Remove all Prowler App MCP tools; and add new MCP Server tools for Prowler Findings and Compliance [(#9300)](https://github.com/prowler-cloud/prowler/pull/9300)
 - Add new MCP Server tools for Prowler Providers Management [(#9350)](https://github.com/prowler-cloud/prowler/pull/9350)
+- Add new MCP Server tools for Prowler Resources Management [(#9380)](https://github.com/prowler-cloud/prowler/pull/9380)
+- Add new MCP Server tools for Prowler Scans Management [(#9509)](https://github.com/prowler-cloud/prowler/pull/9509)
+- Add new MCP Server tools for Prowler Muting Management [(#9510)](https://github.com/prowler-cloud/prowler/pull/9510)

 ---
```
```diff
@@ -1,7 +1,6 @@
 """Pydantic models for Prowler App MCP Server."""

 from prowler_mcp_server.prowler_app.models.base import MinimalSerializerMixin
-
 from prowler_mcp_server.prowler_app.models.findings import (
     CheckMetadata,
     CheckRemediation,
@@ -10,6 +9,12 @@ from prowler_mcp_server.prowler_app.models.findings import (
     FindingsOverview,
     SimplifiedFinding,
 )
+from prowler_mcp_server.prowler_app.models.muting import (
+    DetailedMuteRule,
+    MutelistResponse,
+    MuteRulesListResponse,
+    SimplifiedMuteRule,
+)

 __all__ = [
     # Base models
@@ -21,4 +26,9 @@ __all__ = [
     "FindingsListResponse",
     "FindingsOverview",
     "SimplifiedFinding",
+    # Muting models
+    "DetailedMuteRule",
+    "MutelistResponse",
+    "MuteRulesListResponse",
+    "SimplifiedMuteRule",
 ]
```
```diff
@@ -27,18 +27,19 @@ class MinimalSerializerMixin(BaseModel):
         Dictionary with non-empty values only
         """
         data = handler(self)
-        return {k: v for k, v in data.items() if not self._should_exclude(v)}
+        return {k: v for k, v in data.items() if not self._should_exclude(k, v)}

-    def _should_exclude(self, value: Any) -> bool:
-        """Determine if a value should be excluded from serialization.
+    def _should_exclude(self, key: str, value: Any) -> bool:
+        """Determine if a key-value pair should be excluded from serialization.

         Override this method in subclasses for custom exclusion logic.

         Args:
+            key: Field name
             value: Field value

         Returns:
-            True if the value should be excluded, False otherwise
+            True if the field should be excluded, False otherwise
         """
         # None values
         if value is None:
```
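The key-aware `_should_exclude` hook introduced above is what lets subclasses pin specific fields into the serialized output. A dependency-free sketch of the same pattern, using plain dicts instead of pydantic serialization (both class names here are illustrative, not part of the MCP server):

```python
from typing import Any


class MinimalSerializer:
    """Drop None/empty values from a dict, with a per-field override hook."""

    def _should_exclude(self, key: str, value: Any) -> bool:
        return value is None or value in ("", [], {})

    def minimal(self, data: dict[str, Any]) -> dict[str, Any]:
        return {k: v for k, v in data.items() if not self._should_exclude(k, v)}


class ProviderSerializer(MinimalSerializer):
    # Mirrors the SimplifiedProvider override: None carries semantic
    # meaning for these keys, so they are never dropped.
    ALWAYS_KEEP = {"connected", "secret_type"}

    def _should_exclude(self, key: str, value: Any) -> bool:
        if key in self.ALWAYS_KEEP:
            return False
        return super()._should_exclude(key, value)
```

Passing the key alongside the value is the whole point of the signature change: the base class alone cannot distinguish a meaningless `None` from a meaningful one.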
mcp_server/prowler_mcp_server/prowler_app/models/muting.py (new file, 196 lines):

```python
"""Pydantic models for simplified muting responses."""

from typing import Any

from prowler_mcp_server.prowler_app.models.base import MinimalSerializerMixin
from pydantic import BaseModel, ConfigDict, Field


class MutelistResponse(MinimalSerializerMixin, BaseModel):
    """Simplified mutelist response with Prowler configuration.

    Represents a mutelist configuration that defines which findings
    should be automatically muted based on account patterns, check IDs, regions,
    resources, tags, and exceptions.
    """

    model_config = ConfigDict(frozen=True)

    id: str = Field(
        description="Unique UUIDv4 identifier for this mutelist in Prowler database"
    )
    configuration: dict[str, Any] = Field(
        description="Mutelist configuration following Prowler format with nested structure: Mutelist → Accounts → Checks → Regions/Resources/Tags/Exceptions"
    )
    inserted_at: str | None = Field(
        default=None,
        description="ISO 8601 timestamp when this mutelist was created",
    )
    updated_at: str | None = Field(
        default=None,
        description="ISO 8601 timestamp when this mutelist was last modified",
    )

    @classmethod
    def from_api_response(cls, data: dict[str, Any]) -> "MutelistResponse":
        """Transform JSON:API processor response to simplified format.

        The configuration structure follows the Prowler mutelist format:
        {
            "Mutelist": {
                "Accounts": {
                    "<account-pattern>": {
                        "Checks": {
                            "<check-id>": {
                                "Regions": [...],
                                "Resources": [...],
                                "Tags": [...],
                                "Exceptions": {...}
                            }
                        }
                    }
                }
            }
        }
        """
        attributes = data.get("attributes", {})

        return cls(
            id=data["id"],
            configuration=attributes.get("configuration", {}),
            inserted_at=attributes.get("inserted_at"),
            updated_at=attributes.get("updated_at"),
        )


class SimplifiedMuteRule(MinimalSerializerMixin, BaseModel):
    """Simplified mute rule for list/search operations.

    Provides lightweight mute rule information without the full list of finding UIDs.
    Use this for listing and searching operations where you need basic rule information
    but don't need the complete list of affected findings.
    """

    model_config = ConfigDict(frozen=True)

    id: str = Field(
        description="Unique UUIDv4 identifier for this mute rule in Prowler database"
    )
    name: str = Field(description="Human-readable name for this mute rule")
    reason: str = Field(description="Documented reason for muting these findings")
    enabled: bool = Field(
        description="Whether this mute rule is currently active and applying muting to findings"
    )
    finding_count: int = Field(
        description="Number of findings currently muted by this rule", ge=0
    )
    inserted_at: str | None = Field(
        default=None,
        description="ISO 8601 timestamp when this mute rule was created",
    )
    updated_at: str | None = Field(
        default=None,
        description="ISO 8601 timestamp when this mute rule was last modified",
    )

    @classmethod
    def from_api_response(cls, data: dict[str, Any]) -> "SimplifiedMuteRule":
        """Transform JSON:API mute rule response to simplified format."""
        attributes = data.get("attributes", {})

        # Calculate finding count from finding_uids list length
        finding_uids = attributes.get("finding_uids", [])

        return cls(
            id=data["id"],
            name=attributes["name"],
            reason=attributes["reason"],
            enabled=attributes["enabled"],
            finding_count=len(finding_uids),
            inserted_at=attributes.get("inserted_at"),
            updated_at=attributes.get("updated_at"),
        )


class DetailedMuteRule(SimplifiedMuteRule):
    """Detailed mute rule with complete information including finding UIDs.

    Extends SimplifiedMuteRule with the full list of finding UIDs being muted and
    creator information (user/service account that created the rule).
    Use this when you need complete context about a specific mute rule, including
    all affected findings and audit trail information.
    """

    finding_uids: list[str] = Field(
        description="List of finding UIDs that are muted by this rule"
    )
    user_creator_id: str | None = Field(
        default=None,
        description="UUIDv4 identifier of the Prowler user from the tenant that created this rule",
    )

    @classmethod
    def from_api_response(cls, data: dict[str, Any]) -> "DetailedMuteRule":
        """Transform JSON:API mute rule response to detailed format."""
        attributes = data.get("attributes", {})
        relationships = data.get("relationships", {})

        # Extract creator information
        user_creator_id = None
        creator_data = relationships.get("created_by", {}).get("data")
        if creator_data:
            user_creator_id = creator_data.get("id")

        finding_uids = attributes.get("finding_uids", [])

        return cls(
            id=data["id"],
            name=attributes["name"],
            reason=attributes["reason"],
            enabled=attributes["enabled"],
            finding_count=len(finding_uids),
            finding_uids=finding_uids,
            inserted_at=attributes.get("inserted_at"),
            updated_at=attributes.get("updated_at"),
            user_creator_id=user_creator_id,
        )


class MuteRulesListResponse(BaseModel):
    """Simplified response for mute rules list queries with pagination.

    Contains a list of simplified mute rules and pagination metadata.
    Use this for paginated list/search operations to get multiple rules efficiently.
    """

    model_config = ConfigDict(frozen=True)

    mute_rules: list[SimplifiedMuteRule] = Field(
        description="List of simplified mute rules matching the query filters"
    )
    total_num_mute_rules: int = Field(
        description="Total number of mute rules matching the query across all pages",
        ge=0,
    )
    total_num_pages: int = Field(
        description="Total number of pages available for the query results", ge=0
    )
    current_page: int = Field(
        description="Current page number in the paginated results (1-indexed)", ge=1
    )

    @classmethod
    def from_api_response(cls, response: dict[str, Any]) -> "MuteRulesListResponse":
        """Transform JSON:API response to simplified format."""
        data = response.get("data", [])
        meta = response.get("meta", {})
        pagination = meta.get("pagination", {})

        mute_rules = [SimplifiedMuteRule.from_api_response(item) for item in data]

        return cls(
            mute_rules=mute_rules,
            total_num_mute_rules=pagination.get("count", 0),
            total_num_pages=pagination.get("pages", 1),
            current_page=pagination.get("page", 1),
        )
```
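The nested `Mutelist → Accounts → Checks` structure documented in `MutelistResponse.from_api_response` can be walked with a short helper; a sketch (the helper function is ours, for illustration, not part of the MCP server API):

```python
def muted_check_ids(configuration: dict) -> set[str]:
    """Collect every check ID referenced by a Prowler-format mutelist."""
    checks: set[str] = set()
    # Accounts map account patterns (e.g. "*" or an account ID) to check maps
    accounts = configuration.get("Mutelist", {}).get("Accounts", {})
    for account in accounts.values():
        checks.update(account.get("Checks", {}).keys())
    return checks
```

The same traversal pattern extends naturally to collecting the `Regions`, `Resources`, or `Tags` lists one level deeper.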
mcp_server/prowler_mcp_server/prowler_app/models/providers.py (new file, 134 lines):

```python
"""Pydantic models for simplified provider responses."""

from typing import Any, Literal

from prowler_mcp_server.prowler_app.models.base import MinimalSerializerMixin
from pydantic import BaseModel


class SimplifiedProvider(MinimalSerializerMixin, BaseModel):
    """Simplified provider for list/search operations."""

    id: str
    uid: str
    alias: str | None = None
    provider: str
    connected: bool | None = None
    secret_type: Literal["role", "service_account", "static"] | None = None

    def _should_exclude(self, key: str, value: Any) -> bool:
        """Override to always include connected and secret_type fields even when None."""
        # Always include these fields regardless of value (None has semantic meaning)
        if key == "connected" or key == "secret_type":
            return False
        # Use parent class logic for other fields
        return super()._should_exclude(key, value)

    @classmethod
    def from_api_response(cls, data: dict[str, Any]) -> "SimplifiedProvider":
        """Transform JSON:API provider response to simplified format."""
        attributes = data["attributes"]
        connection_data = attributes.get("connection", {})

        return cls(
            id=data["id"],
            uid=attributes["uid"],
            alias=attributes.get("alias"),
            provider=attributes["provider"],
            connected=connection_data.get("connected"),
            secret_type=None,  # Will be populated separately via secret endpoint
        )


class DetailedProvider(SimplifiedProvider):
    """Detailed provider with complete information for deep analysis.

    Extends SimplifiedProvider with temporal metadata and relationships.
    Use this when you need complete context about a specific provider.
    """

    inserted_at: str | None = None
    updated_at: str | None = None
    last_checked_at: str | None = None
    provider_group_ids: list[str] | None = None

    @classmethod
    def from_api_response(cls, data: dict[str, Any]) -> "DetailedProvider":
        """Transform JSON:API provider response to detailed format."""
        attributes = data["attributes"]
        connection_data = attributes.get("connection", {})
        relationships = data.get("relationships", {})

        # Extract provider groups relationship
        provider_group_ids = None
        groups_data = relationships.get("provider_groups", {}).get("data", [])
        if groups_data:
            provider_group_ids = [group["id"] for group in groups_data]

        return cls(
            id=data["id"],
            uid=attributes["uid"],
            alias=attributes.get("alias"),
            provider=attributes["provider"],
            connected=connection_data.get("connected"),
            inserted_at=attributes.get("inserted_at"),
            updated_at=attributes.get("updated_at"),
            last_checked_at=connection_data.get("last_checked_at"),
            provider_group_ids=provider_group_ids,
        )


class ProvidersListResponse(BaseModel):
    """Simplified response for providers list queries."""

    providers: list[SimplifiedProvider]
    total_num_providers: int
    total_num_pages: int
    current_page: int

    @classmethod
    def from_api_response(cls, response: dict[str, Any]) -> "ProvidersListResponse":
        """Transform JSON:API response to simplified format."""
        data = response["data"]
        meta = response["meta"]
        pagination = meta["pagination"]

        providers = [SimplifiedProvider.from_api_response(item) for item in data]

        return cls(
            providers=providers,
            total_num_providers=pagination["count"],
            total_num_pages=pagination["pages"],
            current_page=pagination["page"],
        )


class ProviderConnectionStatus(MinimalSerializerMixin, BaseModel):
    """Result of provider connection operation."""

    provider: DetailedProvider
    connected: Literal["connected", "failed", "not_tested"]
    error: str | None = None

    @classmethod
    def create(
        cls,
        provider_data: dict[str, Any],
        connection_status: dict[str, Any],
    ) -> "ProviderConnectionStatus":
        """Create connection status from provider data and connection test result."""

        connected: str | None = connection_status.get("connected", None)

        if connected is None:
            connected = "not_tested"
        elif connected:
            connected = "connected"
        else:
            connected = "failed"

        return cls(
            provider=DetailedProvider.from_api_response(provider_data),
            connected=connected,
            error=connection_status.get("error", None),
        )
```
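`ProviderConnectionStatus.create` folds the API's optional boolean into a three-valued literal, since a missing flag (never tested) must stay distinct from an explicit failure. That mapping in isolation, as a standalone sketch:

```python
def connection_state(connection_status: dict) -> str:
    """Map an optional boolean 'connected' flag to a tri-state string."""
    connected = connection_status.get("connected")
    if connected is None:
        return "not_tested"  # connection test was never attempted
    return "connected" if connected else "failed"
```

This is why the check is `is None` rather than a truthiness test: `False` and absent must take different branches.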
mcp_server/prowler_mcp_server/prowler_app/models/resources.py (new file, 137 lines):

```python
"""Pydantic models for simplified resources responses."""

from prowler_mcp_server.prowler_app.models.base import MinimalSerializerMixin
from pydantic import BaseModel


class SimplifiedResource(MinimalSerializerMixin, BaseModel):
    """Simplified resource with only LLM-relevant information for list operations."""

    id: str
    uid: str
    name: str
    region: str
    service: str
    type: str
    failed_findings_count: int
    tags: dict[str, str] | None = None
    provider_id: str | None = None

    @classmethod
    def from_api_response(cls, data: dict) -> "SimplifiedResource":
        """Transform JSON:API resource response to simplified format."""
        attributes = data["attributes"]
        relationships = data.get("relationships", {})

        # Extract provider information from relationships if available
        provider_id = None
        provider_data = relationships.get("provider", {}).get("data", {})
        if provider_data:
            provider_id = provider_data["id"]

        return cls(
            id=data["id"],
            uid=attributes["uid"],
            name=attributes["name"],
            region=attributes["region"],
            service=attributes["service"],
            type=attributes["type"],
            failed_findings_count=attributes["failed_findings_count"],
            tags=attributes["tags"],
            provider_id=provider_id,
        )


class DetailedResource(SimplifiedResource):
    """Detailed resource with comprehensive information for deep analysis.

    Extends SimplifiedResource with tags, metadata, configuration details,
    temporal information, and relationships.
    Use this when you need complete context about a specific resource.
    """

    metadata: str | None = None
    partition: str | None = None
    inserted_at: str
    updated_at: str
    finding_ids: list[str] | None = None

    @classmethod
    def from_api_response(cls, data: dict) -> "DetailedResource":
        """Transform JSON:API resource response to detailed format."""
        attributes = data["attributes"]
        relationships = data.get("relationships", {})

        # Parse findings relationship
        finding_ids = None
        findings_data = relationships.get("findings", {}).get("data", [])
        if findings_data:
            finding_ids = [f["id"] for f in findings_data]

        # Extract provider information from relationships if available
        provider_id = None
        provider_data = relationships.get("provider", {}).get("data", {})
        if provider_data:
            provider_id = provider_data["id"]

        return cls(
            id=data["id"],
            uid=attributes["uid"],
            name=attributes["name"],
            region=attributes["region"],
            service=attributes["service"],
            type=attributes["type"],
            failed_findings_count=attributes["failed_findings_count"],
            tags=attributes["tags"],
            metadata=attributes["metadata"],
            partition=attributes["partition"],
            inserted_at=attributes["inserted_at"],
            updated_at=attributes["updated_at"],
            finding_ids=finding_ids,
            provider_id=provider_id,
        )


class ResourcesListResponse(BaseModel):
    """Simplified response for resources list queries."""

    resources: list[SimplifiedResource]
    total_num_resources: int
    total_num_pages: int
    current_page: int

    @classmethod
    def from_api_response(cls, response: dict) -> "ResourcesListResponse":
        """Transform JSON:API response to simplified format."""
        data = response["data"]
        meta = response["meta"]
        pagination = meta["pagination"]

        resources = [SimplifiedResource.from_api_response(item) for item in data]

        return cls(
            resources=resources,
            total_num_resources=pagination["count"],
            total_num_pages=pagination["pages"],
            current_page=pagination["page"],
        )


class ResourcesMetadataResponse(BaseModel):
    """Metadata response with unique filter values for resource discovery."""

    services: list[str] | None = None
    regions: list[str] | None = None
    types: list[str] | None = None

    @classmethod
    def from_api_response(cls, response: dict) -> "ResourcesMetadataResponse":
        """Transform JSON:API metadata response to simplified format."""
        data = response["data"]
        attributes = data["attributes"]

        return cls(
            services=attributes.get("services"),
            regions=attributes.get("regions"),
            types=attributes.get("types"),
        )
```
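All of the `*ListResponse.from_api_response` methods above share one shape: flatten the JSON:API `data` array and lift `meta.pagination` to the top level. A dependency-free sketch of that transform (the function itself is illustrative; the `count`/`pages`/`page` keys mirror the models):

```python
def flatten_list_response(response: dict) -> dict:
    """Flatten a JSON:API list payload into items plus pagination fields."""
    pagination = response.get("meta", {}).get("pagination", {})
    return {
        # Each JSON:API item carries its payload under "attributes"
        "items": [item.get("attributes", {}) for item in response.get("data", [])],
        "count": pagination.get("count", 0),
        "pages": pagination.get("pages", 1),
        "page": pagination.get("page", 1),
    }
```

The pydantic models do the same lifting but additionally pull IDs and relationships out of each item before validation.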
222
mcp_server/prowler_mcp_server/prowler_app/models/scans.py
Normal file
222
mcp_server/prowler_mcp_server/prowler_app/models/scans.py
Normal file
@@ -0,0 +1,222 @@
|
||||
"""Data models for Prowler scans.
|
||||
|
||||
This module provides Pydantic models for representing Prowler security scans
|
||||
with two-tier complexity:
|
||||
- SimplifiedScan: For list operations with essential fields
|
||||
- DetailedScan: Extends simplified with additional operational fields
|
||||
|
||||
All models inherit from MinimalSerializerMixin to exclude None/empty values
|
||||
for optimal LLM token usage.
|
||||
"""
|
||||
|
||||
from typing import Any, Literal
|
||||
|
||||
from prowler_mcp_server.prowler_app.models.base import MinimalSerializerMixin
|
||||
from pydantic import BaseModel, ConfigDict, Field
|
||||
|
||||
|
||||
class SimplifiedScan(MinimalSerializerMixin, BaseModel):
    """Simplified scan representation for list operations.

    Includes core scan fields for efficient overview.
    Used by list_scans() tool.
    """

    model_config = ConfigDict(frozen=True)

    id: str = Field(
        description="Unique UUIDv4 identifier for this scan in Prowler database"
    )
    name: str | None = Field(
        default=None,
        description="Optional custom name for the scan to help identify it",
    )
    trigger: Literal["manual", "scheduled"] = Field(
        description="How the scan was initiated: 'manual' (user-triggered) or 'scheduled' (automated)"
    )
    state: Literal[
        "available", "scheduled", "executing", "completed", "failed", "cancelled"
    ] = Field(
        description="Current state of the scan: available, scheduled, executing, completed, failed, or cancelled"
    )
    started_at: str | None = Field(
        default=None, description="ISO 8601 timestamp when the scan started execution"
    )
    completed_at: str | None = Field(
        default=None,
        description="ISO 8601 timestamp when the scan finished (completed or failed)",
    )
    provider_id: str = Field(
        description="UUIDv4 identifier of the provider this scan is associated with"
    )

    @classmethod
    def from_api_response(cls, data: dict[str, Any]) -> "SimplifiedScan":
        """Transform JSON:API scan response to simplified model.

        Args:
            data: Scan data from API response['data'] (single item or list item)

        Returns:
            SimplifiedScan instance
        """
        attributes = data["attributes"]
        relationships = data.get("relationships", {})

        # "data" may be present but null, so guard with `or {}` and default to ""
        provider_id = (relationships.get("provider", {}).get("data") or {}).get(
            "id", ""
        )

        return cls(
            id=data["id"],
            name=attributes.get("name"),
            trigger=attributes["trigger"],
            state=attributes["state"],
            started_at=attributes.get("started_at"),
            completed_at=attributes.get("completed_at"),
            provider_id=provider_id,
        )

class DetailedScan(SimplifiedScan):
    """Detailed scan representation with full operational data.

    Extends SimplifiedScan with progress, duration, resources, and relationships.
    Used by get_scan() and create_scan() tools.
    """

    model_config = ConfigDict(frozen=True)

    progress: int | None = Field(
        default=None, description="Scan completion progress as percentage (0-100)"
    )
    duration: int | None = Field(
        default=None,
        description="Total scan duration in seconds from start to completion",
    )
    unique_resource_count: int | None = Field(
        default=None,
        description="Number of unique cloud resources discovered during the scan",
    )
    inserted_at: str | None = Field(
        default=None,
        description="ISO 8601 timestamp when the scan was created in the database",
    )
    scheduled_at: str | None = Field(
        default=None,
        description="ISO 8601 timestamp when the scan was scheduled to run",
    )
    next_scan_at: str | None = Field(
        default=None,
        description="ISO 8601 timestamp for the next scheduled scan (for recurring scans)",
    )
    task_id: str | None = Field(
        default=None,
        description="UUIDv4 identifier of the task executing this scan, if any",
    )
    processor_id: str | None = Field(
        default=None,
        description="UUIDv4 identifier of the processor associated with this scan, if any",
    )

    @classmethod
    def from_api_response(cls, data: dict[str, Any]) -> "DetailedScan":
        """Transform JSON:API scan response to detailed model.

        Args:
            data: Scan data from API response['data']

        Returns:
            DetailedScan instance with all fields populated
        """
        attributes = data["attributes"]
        relationships = data.get("relationships", {})

        # Extract provider ID from relationship
        provider_rel = relationships.get("provider", {}).get("data") or {}
        provider_id = provider_rel.get("id", "")

        # Extract task relationship
        task_rel = relationships.get("task", {}).get("data")
        task_id = task_rel.get("id") if task_rel else None

        # Extract processor relationship
        processor_rel = relationships.get("processor", {}).get("data")
        processor_id = processor_rel.get("id") if processor_rel else None

        return cls(
            id=data["id"],
            name=attributes.get("name"),
            trigger=attributes["trigger"],
            state=attributes["state"],
            started_at=attributes.get("started_at"),
            completed_at=attributes.get("completed_at"),
            provider_id=provider_id,
            progress=attributes.get("progress"),
            duration=attributes.get("duration"),
            unique_resource_count=attributes.get("unique_resource_count"),
            inserted_at=attributes.get("inserted_at"),
            scheduled_at=attributes.get("scheduled_at"),
            next_scan_at=attributes.get("next_scan_at"),
            task_id=task_id,
            processor_id=processor_id,
        )

class ScansListResponse(BaseModel):
    """Response model for list_scans() with pagination metadata.

    Follows established pattern from FindingsListResponse and ProvidersListResponse.
    """

    scans: list[SimplifiedScan]
    total_num_scans: int
    total_num_pages: int
    current_page: int

    @classmethod
    def from_api_response(cls, response: dict[str, Any]) -> "ScansListResponse":
        """Transform JSON:API list response to scans list with pagination.

        Args:
            response: Full API response with data and meta

        Returns:
            ScansListResponse with simplified scans and pagination metadata
        """
        data = response.get("data", [])
        meta = response.get("meta", {})
        pagination = meta.get("pagination", {})

        # Transform each scan
        scans = [SimplifiedScan.from_api_response(item) for item in data]

        return cls(
            scans=scans,
            total_num_scans=pagination.get("count", 0),
            total_num_pages=pagination.get("pages", 0),
            current_page=pagination.get("page", 1),
        )

class ScanCreationResult(MinimalSerializerMixin, BaseModel):
    """Result of scan creation operation.

    Used by trigger_scan() to communicate the outcome of scan creation.
    Status indicates whether scan was created successfully or failed.
    """

    scan: DetailedScan | None = Field(
        default=None,
        description="Detailed scan information if creation succeeded, None otherwise",
    )
    status: Literal["success", "failed"] = Field(
        description="Outcome of scan creation: success (scan created successfully) or failed (error)"
    )
    message: str = Field(
        description="Human-readable message describing the scan creation result"
    )

class ScheduleCreationResult(MinimalSerializerMixin, BaseModel):
    """Result of async schedule creation operation.

    Used by schedule_daily_scan() to communicate scheduling outcome.
    """

    scheduled: bool = Field(
        description="Whether the daily scan schedule was created successfully"
    )
    message: str = Field(
        description="Human-readable message describing the scheduling result"
    )
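As a quick illustration of the flattening these scan models perform, here is a self-contained sketch using plain dicts (no Prowler imports; the payload values are made up for demonstration):

```python
def simplify_scan(item: dict) -> dict:
    """Flatten one JSON:API scan resource the way SimplifiedScan.from_api_response does."""
    attrs = item["attributes"]
    # "data" may be present but null, so guard with `or {}`
    provider = item.get("relationships", {}).get("provider", {}).get("data") or {}
    return {
        "id": item["id"],
        "name": attrs.get("name"),
        "trigger": attrs["trigger"],
        "state": attrs["state"],
        "started_at": attrs.get("started_at"),
        "completed_at": attrs.get("completed_at"),
        "provider_id": provider.get("id", ""),
    }


# Illustrative JSON:API list payload like the one /api/v1/scans returns
sample = {
    "data": [
        {
            "id": "3f0e9d7c-0000-4000-8000-000000000001",
            "attributes": {
                "name": "nightly",
                "trigger": "scheduled",
                "state": "completed",
                "started_at": "2024-01-01T00:00:00Z",
                "completed_at": "2024-01-01T00:10:00Z",
            },
            "relationships": {"provider": {"data": {"id": "prov-1"}}},
        }
    ],
    "meta": {"pagination": {"count": 1, "pages": 1, "page": 1}},
}

scans = [simplify_scan(item) for item in sample["data"]]
```

The pagination block in `meta` is what `ScansListResponse.from_api_response` reads for `total_num_scans`, `total_num_pages`, and `current_page`.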
477	mcp_server/prowler_mcp_server/prowler_app/tools/muting.py	Normal file
@@ -0,0 +1,477 @@
"""Muting tools for Prowler App MCP Server.
|
||||
|
||||
This module provides tools for managing finding muting in Prowler, including:
|
||||
- Mutelist management (pattern-based bulk muting)
|
||||
- Mute rules management (finding-specific muting)
|
||||
"""
|
||||
|
||||
import json
|
||||
from typing import Any
|
||||
|
||||
from prowler_mcp_server.prowler_app.models.muting import (
|
||||
DetailedMuteRule,
|
||||
MutelistResponse,
|
||||
MuteRulesListResponse,
|
||||
)
|
||||
from prowler_mcp_server.prowler_app.tools.base import BaseTool
|
||||
from pydantic import Field
|
||||
|
||||
|
||||
class MutingTools(BaseTool):
|
||||
"""Tools for muting operations.
|
||||
|
||||
Provides tools for:
|
||||
- Managing mutelist (pattern-based bulk muting)
|
||||
- Managing mute rules (finding-specific muting)
|
||||
"""
|
||||
|
||||
# ===== MUTELIST TOOLS =====
|
||||
|
||||
async def get_mutelist(self) -> dict[str, Any]:
|
||||
"""Retrieve the current mutelist configuration for the tenant.
|
||||
|
||||
IMPORTANT: Only one mutelist can exist per tenant. Returns an error message if no mutelist exists.
|
||||
For detailed information about mutelist structure and configuration, search Prowler documentation
|
||||
using prowler_docs_search tool available in this MCP Server.
|
||||
|
||||
The mutelist includes:
|
||||
- Core identification: id (UUID for processor operations)
|
||||
- Configuration: Nested structure with Accounts → Checks → Regions/Resources/Tags/Exceptions patterns
|
||||
- Temporal data: inserted_at, updated_at timestamps
|
||||
|
||||
Workflow:
|
||||
1. Use this tool to check if a mutelist is configured
|
||||
2. Examine current muting patterns before making updates
|
||||
3. Use prowler_app_set_mutelist to create or update the configuration
|
||||
"""
|
||||
self.logger.info("Retrieving mutelist configuration...")
|
||||
|
||||
# Query processors filtered by type=mutelist
|
||||
params = {
|
||||
"filter[processor_type]": "mutelist",
|
||||
"fields[processors]": "processor_type,configuration,inserted_at,updated_at",
|
||||
}
|
||||
|
||||
clean_params = self.api_client.build_filter_params(params)
|
||||
api_response = await self.api_client.get(
|
||||
"/api/v1/processors", params=clean_params
|
||||
)
|
||||
|
||||
data = api_response.get("data", [])
|
||||
|
||||
if len(data) == 0:
|
||||
return {
|
||||
"error": "No mutelist found",
|
||||
"message": "No mutelist configuration exists for this tenant. Use prowler_app_set_mutelist to create one.",
|
||||
}
|
||||
|
||||
# Return the first (and only) mutelist
|
||||
mutelist = MutelistResponse.from_api_response(data[0])
|
||||
return mutelist.model_dump()
|
||||
|
||||
    async def set_mutelist(
        self,
        configuration: dict[str, Any] | str = Field(
            description="""Mutelist configuration object following the Accounts/Checks/Regions/Resources/Tags/Exceptions structure.
Accepts either a dictionary or JSON string. The configuration replaces the entire mutelist (not merged with existing).

Structure:
{
    "Mutelist": {
        "Accounts": {
            "<account-pattern>": {  // "*" for all accounts, or specific account ID
                "Checks": {
                    "<check-id>": {  // Prowler check ID
                        "Regions": ["us-east-1", "eu-west-1"],  // Optional
                        "Resources": ["arn:aws:s3:::my-bucket"],  // Optional
                        "Tags": ["Environment:dev"],  // Optional
                        "Exceptions": {  // Optional
                            "Accounts": ["123456789012"],
                            "Regions": ["us-west-2"],
                            "Resources": ["arn:aws:s3:::critical-bucket"]
                        }
                    }
                }
            }
        }
    }
}"""
        ),
    ) -> dict[str, Any]:
        """Create or update the mutelist configuration for pattern-based bulk muting.

        IMPORTANT: Automatically creates a new mutelist or updates the existing one (only one mutelist per tenant).
        The configuration completely replaces any existing mutelist (not merged).
        For detailed information about mutelist structure and configuration, search Prowler documentation
        using the prowler_docs_search tool available in this MCP Server.

        Default behavior:
        - Creates new mutelist if none exists
        - Updates existing mutelist with complete replacement
        - Applies to findings from future scans

        The mutelist supports:
        - Account patterns: Specific account IDs or "*" for all
        - Check-based muting: Per-check ID configuration
        - Scope filtering: Regions, Resources, Tags
        - Exceptions: Accounts, Regions, Resources to exclude from muting

        Workflow:
        1. Use prowler_app_get_mutelist to check existing configuration
        2. Build configuration object following Prowler mutelist format
        3. Use this tool to create or update the mutelist
        4. Verify with prowler_app_get_mutelist
        """
        self.logger.info("Setting mutelist configuration...")

        # Parse configuration if it's a string
        if isinstance(configuration, str):
            configuration = json.loads(configuration)

        # Check if mutelist already exists
        existing_mutelist = await self.get_mutelist()

        if "error" in existing_mutelist:
            # Create new mutelist
            self.logger.info("Creating new mutelist...")
            create_body = {
                "data": {
                    "type": "processors",
                    "attributes": {
                        "processor_type": "mutelist",
                        "configuration": configuration,
                    },
                }
            }

            api_response = await self.api_client.post(
                "/api/v1/processors", json_data=create_body
            )
            mutelist = MutelistResponse.from_api_response(api_response.get("data", {}))
            return mutelist.model_dump()
        else:
            # Update existing mutelist
            self.logger.info(f"Updating existing mutelist {existing_mutelist['id']}...")
            update_body = {
                "data": {
                    "type": "processors",
                    "id": existing_mutelist["id"],
                    "attributes": {
                        "configuration": configuration,
                    },
                }
            }

            api_response = await self.api_client.patch(
                f"/api/v1/processors/{existing_mutelist['id']}", json_data=update_body
            )
            mutelist = MutelistResponse.from_api_response(api_response.get("data", {}))
            return mutelist.model_dump()

    async def delete_mutelist(self) -> dict[str, Any]:
        """Remove the mutelist configuration from the tenant.

        WARNING: This is a destructive operation that cannot be undone.
        - The mutelist will need to be re-created with prowler_app_set_mutelist
        - New findings from future scans will NOT be muted by the deleted mutelist
        - Previously muted findings remain muted (deletion doesn't un-mute them)

        Workflow:
        1. Use prowler_app_get_mutelist to confirm what will be deleted
        2. Use this tool to permanently remove the mutelist
        3. New scans will no longer apply mutelist-based muting
        """
        self.logger.info("Deleting mutelist configuration...")

        # Get existing mutelist
        existing_mutelist = await self.get_mutelist()

        if "error" in existing_mutelist:
            return {
                "success": False,
                "message": "No mutelist found to delete",
            }

        # Delete the mutelist
        mutelist_id = existing_mutelist["id"]
        await self.api_client.delete(f"/api/v1/processors/{mutelist_id}")

        return {
            "success": True,
            "message": "Mutelist deleted successfully",
        }

    # ===== MUTE RULES TOOLS =====

    async def list_mute_rules(
        self,
        name: str | None = Field(
            default=None,
            description="Filter by exact rule name",
        ),
        enabled: (
            bool | str | None
        ) = Field(  # `str` is accepted because some MCP clients send booleans as strings
            default=None,
            description="Filter by enabled status. True for enabled rules only, False for disabled rules only. If not specified, returns both enabled and disabled rules. Strings 'true' and 'false' are also accepted.",
        ),
        search: str | None = Field(
            default=None,
            description="Free-text search term across multiple fields (name, reason). Use this for general keyword search.",
        ),
        page_size: int = Field(
            default=50, description="Number of results to return per page."
        ),
        page_number: int = Field(
            default=1,
            description="Page number to retrieve (1-indexed)",
        ),
    ) -> dict[str, Any]:
        """Search and filter mute rules with pagination support.

        IMPORTANT: This tool returns LIGHTWEIGHT mute rules without the full list of finding UIDs.
        Use prowler_app_get_mute_rule to get complete details including all finding UIDs and creator information.

        Default behavior:
        - Returns all mute rules (both enabled and disabled)
        - Returns 50 rules per page
        - Includes basic rule information without full finding UID lists

        Each mute rule includes:
        - Core identification: id (UUID for prowler_app_get_mute_rule), name
        - Contextual information: reason, enabled status
        - State tracking: finding_count (number of findings currently muted)
        - Temporal data: inserted_at, updated_at timestamps

        Workflow:
        1. Use this tool to search and filter mute rules by name, enabled status, or keywords
        2. Use prowler_app_get_mute_rule with the mute rule 'id' to get complete details including all finding UIDs
        3. Use prowler_app_update_mute_rule or prowler_app_delete_mute_rule to modify rules
        """
        self.logger.info("Listing mute rules...")
        self.api_client.validate_page_size(page_size)

        params = {
            "fields[mute-rules]": "name,reason,enabled,finding_uids,inserted_at,updated_at",
            "page[size]": page_size,
            "page[number]": page_number,
        }

        # Build filter parameters
        if name:
            params["filter[name]"] = name
        if enabled is not None:
            if isinstance(enabled, bool):
                params["filter[enabled]"] = enabled
            else:
                if enabled.lower() == "true":
                    params["filter[enabled]"] = True
                elif enabled.lower() == "false":
                    params["filter[enabled]"] = False
                else:
                    raise ValueError(
                        f"Invalid enabled value: {enabled}. Valid values are True, False, 'true', 'false' or None."
                    )
        if search:
            params["filter[search]"] = search

        clean_params = self.api_client.build_filter_params(params)
        api_response = await self.api_client.get(
            "/api/v1/mute-rules", params=clean_params
        )

        simplified_response = MuteRulesListResponse.from_api_response(api_response)
        return simplified_response.model_dump()

    async def get_mute_rule(
        self,
        rule_id: str = Field(
            description="UUID of the mute rule to retrieve. Must be a valid UUID format (e.g., '019ac0d6-90d5-73e9-9acf-c22e256f1bac')."
        ),
    ) -> dict[str, Any]:
        """Retrieve comprehensive details about a specific mute rule by its ID.

        IMPORTANT: This tool returns COMPLETE mute rule details including the full list of finding UIDs.
        Use this after finding a rule via prowler_app_list_mute_rules.

        This tool provides ALL information that prowler_app_list_mute_rules returns PLUS:
        - finding_uids: Complete list of finding UIDs that are muted by this rule
        - user_creator_id: UUID of the user who created the rule (audit trail)

        Workflow:
        1. Use prowler_app_list_mute_rules to find rules by name or filter criteria
        2. Use this tool with the rule 'id' to get complete details
        3. Examine finding_uids list to understand which findings are muted
        4. Use prowler_app_update_mute_rule or prowler_app_delete_mute_rule to modify if needed
        """
        self.logger.info(f"Retrieving mute rule {rule_id}...")

        params = {
            "include": "created_by",
        }

        api_response = await self.api_client.get(
            f"/api/v1/mute-rules/{rule_id}", params=params
        )

        detailed_rule = DetailedMuteRule.from_api_response(api_response.get("data", {}))
        return detailed_rule.model_dump()

    async def create_mute_rule(
        self,
        name: str = Field(
            description="Name for the mute rule. Should be descriptive and meaningful (e.g., 'Dev S3 Public Access', 'Test Environment IMDSv1')."
        ),
        reason: str = Field(
            description="Reason for muting these findings. Document why this security issue is acceptable or intentional (e.g., 'Development environment with controlled access', 'Legacy application requires IMDSv1')."
        ),
        finding_ids: list[str] = Field(
            description="List of finding IDs (UUIDs) to mute. Get these from the prowler_app_search_security_findings tool. Must provide at least 1 finding ID."
        ),
    ) -> dict[str, Any]:
        """Create a new mute rule to mute specific findings with documentation and audit trail.

        IMPORTANT: This immediately mutes the specified findings AND all previous findings with matching UIDs (this could take some time to complete).
        The rule is enabled by default. Muting is permanent.

        Default behavior:
        - Rule is created in enabled state
        - Applies to current and previous findings with matching UIDs
        - Records creator for audit trail

        The mute rule includes:
        - Core identification: id (UUID for prowler_app_get_mute_rule), name, reason
        - Configuration: enabled status, finding_uids list
        - Audit trail: user_creator_id (UUID of the Prowler user from the tenant that created the rule), timestamps when the rule was created and last modified

        Workflow:
        1. Use prowler_app_search_security_findings to identify findings to mute
        2. Use this tool with finding IDs, descriptive name, and documented reason
        3. Verify with prowler_app_get_mute_rule to confirm rule creation
        4. Check findings are muted with prowler_app_search_security_findings (filter by muted=true)
        """
        self.logger.info(f"Creating mute rule '{name}'...")

        create_body = {
            "data": {
                "type": "mute-rules",
                "attributes": {
                    "name": name,
                    "reason": reason,
                    "finding_ids": finding_ids,
                },
            }
        }

        api_response = await self.api_client.post(
            "/api/v1/mute-rules", json_data=create_body
        )

        detailed_rule = DetailedMuteRule.from_api_response(api_response.get("data", {}))
        return detailed_rule.model_dump()

    async def update_mute_rule(
        self,
        rule_id: str = Field(
            description="UUID of the mute rule to update. Must be a valid UUID format."
        ),
        name: str | None = Field(
            default=None,
            description="New name for the rule. If not specified, name remains unchanged.",
        ),
        reason: str | None = Field(
            default=None,
            description="New reason for the rule. If not specified, reason remains unchanged.",
        ),
        enabled: bool | None = Field(
            default=None,
            description="Enable (True) or disable (False) the rule. If not specified, enabled status remains unchanged. IMPORTANT: Disabling a rule does not un-mute findings - they remain muted.",
        ),
    ) -> dict[str, Any]:
        """Update a mute rule's name, reason, or enabled status.

        IMPORTANT: Cannot change which findings are muted (finding_uids are immutable).
        Disabling a rule does NOT un-mute findings - they remain muted permanently.

        Default behavior:
        - Only specified fields are updated
        - Unspecified fields remain unchanged
        - If no parameters provided, returns current rule state

        Updatable fields:
        - name: Change rule name for better organization
        - reason: Update documentation/justification
        - enabled: Toggle rule active status (doesn't affect already-muted findings)

        Workflow:
        1. Use prowler_app_get_mute_rule to see current rule state
        2. Use this tool to update name, reason, or enabled status
        3. Verify changes with prowler_app_get_mute_rule
        """
        self.logger.info(f"Updating mute rule {rule_id}...")

        # Build update body with only provided fields
        attributes = {}
        if name is not None:
            attributes["name"] = name
        if reason is not None:
            attributes["reason"] = reason
        if enabled is not None:
            attributes["enabled"] = enabled

        if not attributes:
            # No updates provided, just return current state
            return await self.get_mute_rule(rule_id)

        update_body = {
            "data": {
                "type": "mute-rules",
                "id": rule_id,
                "attributes": attributes,
            }
        }

        api_response = await self.api_client.patch(
            f"/api/v1/mute-rules/{rule_id}", json_data=update_body
        )

        self.logger.info(f"API response: {api_response}")
        detailed_rule = DetailedMuteRule.from_api_response(api_response.get("data", {}))
        return detailed_rule.model_dump()

    async def delete_mute_rule(
        self,
        rule_id: str = Field(
            description="UUID of the mute rule to delete. Must be a valid UUID format."
        ),
    ) -> dict[str, Any]:
        """Delete a mute rule from the system.

        WARNING: Findings that were muted by this rule REMAIN MUTED after deletion.
        This only removes the rule itself from management, not the muting effect on findings.
        The muted findings will stay muted permanently.

        Deletion behavior:
        - Rule is permanently removed from the system
        - Muted findings remain muted (deletion doesn't un-mute them)
        - Cannot be undone - rule must be recreated to restore

        Workflow:
        1. Use prowler_app_get_mute_rule to review what will be deleted
        2. Use this tool to permanently remove the rule
        3. Verify deletion with prowler_app_list_mute_rules (rule should no longer appear)
        """
        self.logger.info(f"Deleting mute rule {rule_id}...")

        result = await self.api_client.delete(f"/api/v1/mute-rules/{rule_id}")

        if result.get("success"):
            return {
                "success": True,
                "message": "Mute rule deleted successfully",
            }
        else:
            return {
                "success": False,
                "message": "Failed to delete mute rule",
            }
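As a companion to the `set_mutelist` docstring above, here is an illustrative configuration dict matching the documented Accounts/Checks structure (the check ID and bucket ARNs are made-up examples), showing both accepted input forms:

```python
import json

# Mute a hypothetical S3 public-access check in two regions for all accounts,
# but keep the finding active for one critical bucket.
mutelist_config = {
    "Mutelist": {
        "Accounts": {
            "*": {  # "*" applies to all accounts
                "Checks": {
                    "s3_bucket_public_access": {  # hypothetical check ID
                        "Regions": ["us-east-1", "eu-west-1"],
                        "Resources": ["arn:aws:s3:::dev-bucket"],
                        "Exceptions": {
                            "Resources": ["arn:aws:s3:::critical-bucket"],
                        },
                    }
                }
            }
        }
    }
}

# set_mutelist parses a str argument with json.loads, so the JSON-string
# form is equivalent to passing the dict directly.
payload = json.dumps(mutelist_config)
```

Remember that the configuration replaces the whole mutelist; fetch the current one with `get_mutelist` first if you need to merge patterns yourself.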
623	mcp_server/prowler_mcp_server/prowler_app/tools/providers.py	Normal file
@@ -0,0 +1,623 @@
"""Provider Management tools for Prowler App MCP Server.
|
||||
|
||||
This module provides tools for managing provider connections,
|
||||
including searching, connecting, and deleting providers.
|
||||
"""
|
||||
|
||||
from typing import Any
|
||||
|
||||
from prowler_mcp_server.prowler_app.models.providers import (
|
||||
ProviderConnectionStatus,
|
||||
ProvidersListResponse,
|
||||
)
|
||||
from prowler_mcp_server.prowler_app.tools.base import BaseTool
|
||||
from pydantic import Field
|
||||
|
||||
|
||||
class ProvidersTools(BaseTool):
|
||||
"""Tools for provider management operations
|
||||
|
||||
Provides tools for:
|
||||
- prowler_app_search_providers: Search and view configured providers with their connection status
|
||||
- prowler_app_connect_provider: Connect or register a provider for security scanning in Prowler
|
||||
- prowler_app_delete_provider: Permanently remove a provider from Prowler
|
||||
"""
|
||||
|
||||
    async def search_providers(
        self,
        provider_id: list[str] = Field(
            default=[],
            description="Filter by Prowler's internal UUID(s) (v4) for the provider(s), generated when the provider is registered in the system.",
        ),
        provider_uid: list[str] = Field(
            default=[],
            description="Filter by the provider's unique identifier(s); this ID is the one assigned by the provider itself. Format varies by provider type: AWS Account ID (12 digits), Azure Subscription ID (UUID), GCP Project ID (string), Kubernetes namespace, GitHub username/organization, M365 domain ID, etc. All supported provider types are listed in the Prowler Hub/Prowler Documentation, also available in the form of tools in this MCP Server",
        ),
        provider_type: list[str] = Field(
            default=[],
            description="Filter by provider type. Valid values include: 'aws', 'azure', 'gcp', 'kubernetes'... For more valid values, please refer to the Prowler Hub/Prowler Documentation, also available in the form of tools in this MCP Server.",
        ),
        alias: str | None = Field(
            default=None,
            description="Search by provider alias/friendly name. Partial match supported (case-insensitive). Use this to find providers by their human-readable name (e.g., 'Production', 'Dev', 'AWS Main')",
        ),
        connected: (
            bool | str | None
        ) = Field(  # `str` is accepted because some MCP clients send booleans as strings
            default=None,
            description="Filter by connection status. True returns only successfully connected providers (credentials work), False returns only providers with failed connections (credentials invalid). If not specified, returns all connected, failed and not tested providers. Strings 'true' and 'false' are also accepted.",
        ),
        page_size: int = Field(
            default=50, description="Number of results to return per page"
        ),
        page_number: int = Field(
            default=1,
            description="Page number to retrieve (1-indexed)",
        ),
    ) -> dict[str, Any]:
        """Search and view configured providers to be scanned with Prowler.

        This tool returns a unified view of all providers configured in Prowler.

        For more details about which provider types can be scanned with Prowler, or
        which UIDs are accepted for each provider type, please refer to the Prowler Hub/Prowler
        Documentation, also available in the form of tools in this MCP Server.

        Each provider includes:
        - Provider identification: Prowler Internal ID, External Provider UID, Provider Alias
        - Provider context: Provider Type
        - Connection status: Connected (true), Failed (false), Not Tested (null)
        """
        self.api_client.validate_page_size(page_size)

        params = {
            "fields[providers]": "uid,alias,provider,connection,secret",
            "page[number]": page_number,
            "page[size]": page_size,
        }

        # Build filter parameters
        if provider_id:
            params["filter[id__in]"] = provider_id
        if provider_uid:
            params["filter[uid__in]"] = provider_uid
        if provider_type:
            params["filter[provider__in]"] = provider_type
        if alias:
            params["filter[alias__icontains]"] = alias
        if connected is not None:
            if isinstance(connected, bool):
                params["filter[connected]"] = connected
            else:
                if connected.lower() == "true":
                    params["filter[connected]"] = True
                elif connected.lower() == "false":
                    params["filter[connected]"] = False
                else:
                    raise ValueError(
                        f"Invalid connected value: {connected}. Valid values are True, False, 'true', 'false' or None."
                    )

        clean_params = self.api_client.build_filter_params(params)

        api_response = await self.api_client.get(
            "/api/v1/providers", params=clean_params
        )
        simplified_response = ProvidersListResponse.from_api_response(api_response)

        # Fetch secret_type for each provider that has a secret
        for provider in simplified_response.providers:
            # Get the provider data from the API response to access relationships
            provider_data = next(
                (
                    provider_api_response
                    for provider_api_response in api_response["data"]
                    if provider_api_response["id"] == provider.id
                ),
                None,
            )
            if provider_data:
                secret_relationship = provider_data.get("relationships", {}).get(
                    "secret", {}
                )
                secret_data = secret_relationship.get("data")
                if secret_data:
                    secret_id = secret_data["id"]
                    provider.secret_type = await self._get_secret_type(secret_id)

        return simplified_response.model_dump()

    async def connect_provider(
        self,
        provider_uid: str = Field(
            description="Provider's unique identifier. For supported UID provider formats, please refer to Prowler Hub/Prowler Documentation that you can also find in form of tools in this MCP Server"
        ),
        provider_type: str = Field(
            description="Type of provider to be scanned with Prowler. Valid values include: 'aws', 'azure', 'gcp', 'kubernetes'... For more valid values, please refer to Prowler Hub/Prowler Documentation that you can also find in form of tools in this MCP Server."
        ),
        alias: str | None = Field(
            default=None,
            description="Human-friendly name for this provider. Optional but recommended for easy identification. Use descriptive names to distinguish multiple accounts of the same type.",
        ),
        credentials: dict[str, Any] | None = Field(
            default=None,
            description="Provider-specific credentials for authentication. Optional - if not provided, provider is created but not connected. Structure varies by provider type. For supported provider types, please refer to Prowler Hub/Prowler Documentation that you can also find in form of tools in this MCP Server",
        ),
    ) -> dict[str, Any]:
        """Register a provider to be scanned with Prowler.

        This tool will register a provider in Prowler App, even if the UID is wrong.
        If the provider is already registered, it will be updated with the newly provided alias or credentials.
        If credentials are provided, they will be added to the indicated provider; if the provider does not exist, it will be created and the credentials will be added to it.
        If the connection test is successful, the provider will be connected.
        If the connection test fails, the provider will be created but not connected.
        The tool always returns the provider details after its registration or update.

        Example Input:
        - AWS Static Credentials:
        ```json
        {
            "provider_uid": "123456789012",
            "provider_type": "aws",
            "alias": "production-aws-account",
            "credentials": {
                "aws_access_key_id": "AKIA...",
                "aws_secret_access_key": "...",
                "aws_session_token": "..."
            }
        }
        ```
        - AWS Assume Role:
        ```json
        {
            "provider_uid": "987654321098",
            "provider_type": "aws",
            "alias": "staging-aws-account",
            "credentials": {
                "role_arn": "arn:aws:iam::987654321098:role/ProwlerScanRole",
                "external_id": "...",
                "aws_access_key_id": "AKIA...",
                "aws_secret_access_key": "...",
                "aws_session_token": "...",
                "session_duration": 3600,
                "role_session_name": "..."
            }
        }
        ```
        Note: in the assume-role example, all keys after "external_id" are optional.
        - Azure/M365 Static Credentials:
        ```json
        {
            "provider_uid": "a1b2c3d4-e5f6-4a5b-8c9d-0e1f2a3b4c5d",
            "provider_type": "azure",
            "alias": "production-azure-subscription",
            "credentials": {
                "client_id": "...",
                "client_secret": "...",
                "tenant_id": "..."
            }
        }
        ```
        - GCP Service Account Key:
        ```json
        {
            "provider_uid": "my-gcp-project-prod",
            "provider_type": "gcp",
            "alias": "production-gcp-project",
            "credentials": {
                "service_account_key": {
                    "type": "service_account",
                    "project_id": "...",
                    "private_key_id": "...",
                    "private_key": "...",
                    "client_email": "..."
                }
            }
        }
        ```
        - Kubernetes Static Credentials:
        ```json
        {
            "provider_uid": "prod-k8s-cluster",
            "provider_type": "kubernetes",
            "alias": "production-kubernetes-cluster",
            "credentials": {
                "kubeconfig_content": "..."
            }
        }
        ```
        - GitHub OAuth App Token:
        ```json
        {
            "provider_uid": "my-organization",
            "provider_type": "github",
            "alias": "my-github-organization",
            "credentials": {
                "oauth_app_token": "..."
            }
        }
        ```

        NOTE: THERE ARE MORE PROVIDER TYPES AND CREDENTIAL TYPES AVAILABLE, PLEASE REFER TO THE Prowler Hub/Prowler Documentation that you can also find in form of tools in this MCP Server.
        """
        # Step 1: Check if provider already exists
        prowler_provider_id = await self._check_provider_exists(provider_uid)

        # Step 2: Create or update provider
        if prowler_provider_id is None:
            prowler_provider_id = await self._create_provider(
                provider_uid, provider_type, alias
            )
        elif alias:
            await self._update_provider_alias(prowler_provider_id, alias)

        # Step 3: Handle credentials if provided and capture secret response
        secret_response = None
        if credentials:
            secret_response = await self._store_credentials(
                prowler_provider_id, credentials
            )

        # Step 4: Test connection
        connection_status = await self._test_connection(prowler_provider_id)

        # Step 5: Get final provider state with relationships
        final_provider = await self._get_final_provider_state(prowler_provider_id)

        # Transform to structured response using model
        connection_result = ProviderConnectionStatus.create(
            provider_data=final_provider["data"],
            connection_status=connection_status,
        )

        if secret_response:
            # We just stored credentials, use the secret_type from the response
            connection_result.provider.secret_type = (
                secret_response.get("data", {}).get("attributes", {}).get("secret_type")
            )
        else:
            # No new credentials provided, check if provider has an existing secret
            secret_data = (
                final_provider.get("data", {})
                .get("relationships", {})
                .get("secret", {})
                .get("data")
            )
            if secret_data:
                # Provider has existing secret, fetch its type
                secret_id = secret_data["id"]
                connection_result.provider.secret_type = await self._get_secret_type(
                    secret_id
                )

        return connection_result.model_dump()

    async def delete_provider(
        self,
        provider_id: str = Field(
            description="Prowler's internal UUID (v4) for the provider to permanently remove, generated when the provider was registered in the system. Use `prowler_app_search_providers` tool to find the provider_id if you only know the alias or the provider's own identifier (provider_uid)"
        ),
    ) -> dict[str, Any]:
        """Permanently remove a registered provider from Prowler.

        WARNING: This is a destructive operation that cannot be undone. The provider will need to be
        re-added with prowler_app_connect_provider if you want to scan it again.

        The tool always returns the deletion status and message.
        """
        self.logger.info(f"Deleting provider {provider_id}...")
        try:
            # Initiate the deletion task
            task_response = await self.api_client.delete(
                f"/api/v1/providers/{provider_id}"
            )
            task_id = task_response.get("data", {}).get("id")

            # Poll until task completes (with 60 second timeout)
            await self.api_client.poll_task_until_complete(
                task_id=task_id, timeout=60, poll_interval=1.0
            )

            # If we reach here, the task completed successfully
            return {
                "deleted": True,
                "message": f"Provider {provider_id} deleted successfully",
            }
        except Exception as e:
            self.logger.error(f"Provider deletion failed: {e}")
            return {
                "deleted": False,
                "message": f"Provider {provider_id} deletion failed: {str(e)}",
            }

    # Private helper methods

    async def _check_provider_exists(self, provider_uid: str) -> str | None:
        """Check if a provider already exists by its UID.

        Args:
            provider_uid: The provider's unique identifier (e.g., AWS account ID)

        Returns:
            The Prowler-generated provider ID if it exists, None otherwise

        Raises:
            Exception: If multiple providers with the same UID are found (data integrity issue)
            Exception: If the API request fails
        """
        self.logger.info(f"Checking if provider {provider_uid} exists...")
        response = await self.api_client.get(
            "/api/v1/providers", params={"filter[uid]": provider_uid}
        )
        providers = response.get("data", [])

        if len(providers) == 0:
            self.logger.info(f"Provider {provider_uid} does not exist")
            return None
        elif len(providers) == 1:
            prowler_provider_id = providers[0].get("id")
            self.logger.info(
                f"Provider {provider_uid} exists with ID {prowler_provider_id}"
            )
            return prowler_provider_id
        else:
            # Multiple providers with the same UID is a data integrity issue
            raise Exception(
                f"Data integrity error: Found {len(providers)} providers with UID '{provider_uid}'. "
                f"Each provider UID should be unique. Please contact support or manually clean up duplicate providers."
            )

    async def _create_provider(
        self, provider_uid: str, provider_type: str, alias: str | None
    ) -> str:
        """Create a new provider.

        Args:
            provider_uid: The provider's unique identifier
            provider_type: Type of provider to be scanned with Prowler (aws, azure, gcp, etc.)
            alias: Optional human-friendly name for the provider

        Returns:
            The Prowler-generated provider ID
        """
        self.logger.info(f"Creating provider {provider_uid} (type: {provider_type})...")
        provider_body = {
            "data": {
                "type": "providers",
                "attributes": {
                    "uid": provider_uid,
                    "provider": provider_type,
                },
            }
        }
        if alias:
            provider_body["data"]["attributes"]["alias"] = alias

        await self.api_client.post("/api/v1/providers", json_data=provider_body)

        provider_id = await self._check_provider_exists(provider_uid)
        if provider_id is None:
            raise Exception(f"Provider {provider_uid} creation failed")
        return provider_id

    async def _update_provider_alias(
        self, prowler_provider_id: str, alias: str
    ) -> None:
        """Update the alias of an existing provider.

        Args:
            prowler_provider_id: The Prowler-generated provider ID
            alias: New human-friendly name for the provider
        """
        self.logger.info(f"Updating provider {prowler_provider_id} alias...")
        update_body = {
            "data": {
                "type": "providers",
                "id": prowler_provider_id,
                "attributes": {
                    "alias": alias,
                },
            }
        }
        result = await self.api_client.patch(
            f"/api/v1/providers/{prowler_provider_id}", json_data=update_body
        )
        if result.get("data", {}).get("attributes", {}).get("alias") != alias:
            raise Exception(f"Provider {prowler_provider_id} alias update failed")

    def _determine_secret_type(self, credentials: dict[str, Any]) -> str:
        """Determine the secret type from the credentials structure.

        Args:
            credentials: The credentials dictionary

        Returns:
            Secret type: "role", "service_account", or "static"
        """
        if "role_arn" in credentials:
            return "role"
        elif "service_account_key" in credentials:
            return "service_account"
        else:
            return "static"

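The key-presence rule above can be exercised on its own. A minimal standalone sketch (the module-level `determine_secret_type` name is ours, for illustration only):

```python
from typing import Any


def determine_secret_type(credentials: dict[str, Any]) -> str:
    """Standalone mirror of the _determine_secret_type rule."""
    if "role_arn" in credentials:
        return "role"  # AWS assume-role credentials
    if "service_account_key" in credentials:
        return "service_account"  # GCP service account key
    return "static"  # Static keys/tokens (AWS, Azure, K8s, GitHub, ...)


assume_role = {"role_arn": "arn:aws:iam::987654321098:role/ProwlerScanRole"}
print(determine_secret_type(assume_role))  # → role
```

Because the check is purely structural, a credentials dict that carries both `role_arn` and a static key pair is classified as `role` first.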
    async def _get_provider_secret_id(self, prowler_provider_id: str) -> str | None:
        """Get the secret ID for a provider if it exists.

        Args:
            prowler_provider_id: The Prowler-generated provider ID

        Returns:
            The secret ID if it exists, None otherwise
        """
        try:
            response = await self.api_client.get(
                "/api/v1/providers/secrets",
                params={"filter[provider]": prowler_provider_id},
            )
            secrets = response.get("data", [])

            if len(secrets) > 0:
                secret_id = secrets[0].get("id")
                self.logger.info(
                    f"Found existing secret {secret_id} for provider {prowler_provider_id}"
                )
                return secret_id
            else:
                self.logger.info(
                    f"No existing secret found for provider {prowler_provider_id}"
                )
                return None
        except Exception as e:
            self.logger.error(f"Error checking for existing secret: {e}")
            return None

    async def _get_secret_type(self, secret_id: str) -> str | None:
        """Get the secret type for a given secret ID.

        Args:
            secret_id: The secret ID from provider relationships

        Returns:
            The secret type ("role", "service_account", or "static") if found, None otherwise
        """
        try:
            response = await self.api_client.get(
                f"/api/v1/providers/secrets/{secret_id}",
                params={"fields[provider-secrets]": "secret_type"},
            )
            secret_type = (
                response.get("data", {}).get("attributes", {}).get("secret_type")
            )
            return secret_type
        except Exception as e:
            self.logger.error(f"Error fetching secret type for {secret_id}: {e}")
            return None

    async def _store_credentials(
        self, prowler_provider_id: str, credentials: dict[str, Any]
    ) -> dict[str, Any]:
        """Store or update credentials for a provider.

        Args:
            prowler_provider_id: The Prowler-generated provider ID
            credentials: The credentials to store

        Returns:
            The API response with the secret data
        """
        self.logger.info(
            f"Adding/updating credentials for provider {prowler_provider_id}..."
        )

        secret_type = self._determine_secret_type(credentials)

        # Check if a secret already exists for this provider
        existing_secret_id = await self._get_provider_secret_id(prowler_provider_id)

        if existing_secret_id:
            # Update existing secret
            self.logger.info(f"Updating existing secret {existing_secret_id}...")
            update_body = {
                "data": {
                    "type": "provider-secrets",
                    "id": existing_secret_id,
                    "attributes": {
                        "secret_type": secret_type,
                        "secret": credentials,
                    },
                    "relationships": {
                        "provider": {
                            "data": {
                                "type": "providers",
                                "id": prowler_provider_id,
                            }
                        }
                    },
                }
            }
            try:
                response = await self.api_client.patch(
                    f"/api/v1/providers/secrets/{existing_secret_id}",
                    json_data=update_body,
                )
                self.logger.info("Credentials updated successfully")
                return response
            except Exception as e:
                self.logger.error(f"Error updating credentials: {e}")
                raise
        else:
            # Create new secret
            self.logger.info("Creating new secret...")
            secret_body = {
                "data": {
                    "type": "provider-secrets",
                    "attributes": {
                        "secret_type": secret_type,
                        "secret": credentials,
                    },
                    "relationships": {
                        "provider": {
                            "data": {
                                "type": "providers",
                                "id": prowler_provider_id,
                            }
                        }
                    },
                }
            }

            try:
                response = await self.api_client.post(
                    "/api/v1/providers/secrets", json_data=secret_body
                )
                self.logger.info("Credentials added successfully")
                return response
            except Exception as e:
                self.logger.error(f"Error adding credentials: {e}")
                raise

    async def _test_connection(self, prowler_provider_id: str) -> dict[str, Any]:
        """Test the connection to a provider.

        Args:
            prowler_provider_id: The Prowler-generated provider ID

        Returns:
            Connection status dictionary with a 'connected' boolean and optional 'error' message
        """
        self.logger.info(f"Testing connection for provider {prowler_provider_id}...")
        try:
            # Initiate the connection test task
            task_response = await self.api_client.post(
                f"/api/v1/providers/{prowler_provider_id}/connection", json_data={}
            )
            task_id = task_response.get("data", {}).get("id")

            # Poll until task completes (with 60 second timeout)
            completed_task = await self.api_client.poll_task_until_complete(
                task_id=task_id, timeout=60, poll_interval=1.0
            )

            # Extract the result from the completed task
            task_result = (
                completed_task.get("data", {}).get("attributes", {}).get("result", {})
            )

            return task_result
        except Exception as e:
            self.logger.error(f"Connection test failed: {e}")
            return {"connected": False, "error": str(e)}

    async def _get_final_provider_state(
        self, prowler_provider_id: str
    ) -> dict[str, Any]:
        """Get the final provider state with relationships.

        Args:
            prowler_provider_id: The Prowler-generated provider ID

        Returns:
            Provider data dictionary
        """
        return await self.api_client.get(
            f"/api/v1/providers/{prowler_provider_id}",
        )
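Both `delete_provider` and `_test_connection` lean on the client's `poll_task_until_complete(task_id=..., timeout=60, poll_interval=1.0)` helper. A minimal sketch of how such a helper might work, assuming a JSON:API task document with a `state` attribute (the `get_task` callable and the state names here are illustrative assumptions, not Prowler's actual implementation):

```python
import asyncio
from typing import Any, Callable, Coroutine


async def poll_task_until_complete(
    get_task: Callable[[str], Coroutine[Any, Any, dict[str, Any]]],
    task_id: str,
    timeout: float = 60,
    poll_interval: float = 1.0,
) -> dict[str, Any]:
    """Poll get_task(task_id) until the task reaches a terminal state.

    Assumption: the task document carries data.attributes.state with
    terminal values like 'completed', 'failed', or 'cancelled'.
    """
    deadline = asyncio.get_running_loop().time() + timeout
    while True:
        task = await get_task(task_id)
        state = task.get("data", {}).get("attributes", {}).get("state")
        if state in ("completed", "failed", "cancelled"):
            return task  # terminal state reached
        if asyncio.get_running_loop().time() >= deadline:
            raise TimeoutError(f"Task {task_id} did not finish within {timeout}s")
        await asyncio.sleep(poll_interval)
```

Returning the full task document (rather than just the state) lets callers such as `_test_connection` extract `data.attributes.result` afterwards.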
345 mcp_server/prowler_mcp_server/prowler_app/tools/resources.py Normal file
@@ -0,0 +1,345 @@
"""Cloud Resources tools for Prowler App MCP Server.
|
||||
|
||||
This module provides tools for searching, viewing, and analyzing cloud resources
|
||||
across all providers.
|
||||
"""
|
||||
|
||||
from typing import Any
|
||||
|
||||
from prowler_mcp_server.prowler_app.models.resources import (
|
||||
DetailedResource,
|
||||
ResourcesListResponse,
|
||||
ResourcesMetadataResponse,
|
||||
)
|
||||
from prowler_mcp_server.prowler_app.tools.base import BaseTool
|
||||
from pydantic import Field
|
||||
|
||||
|
||||
class ResourcesTools(BaseTool):
|
||||
"""Tools for cloud resources operations.
|
||||
|
||||
Provides tools for:
|
||||
- Searching and filtering cloud resources
|
||||
- Getting detailed resource information
|
||||
- Viewing resources overview with statistics
|
||||
"""
|
||||
|
||||
    async def list_resources(
        self,
        provider_type: list[str] = Field(
            default=[],
            description="Filter by provider type. Multiple values allowed. If empty, all providers are returned. For valid values, please refer to Prowler Hub/Prowler Documentation that you can also find in form of tools in this MCP Server.",
        ),
        provider_alias: str | None = Field(
            default=None,
            description="Filter by specific provider alias/name (partial match supported). Useful for finding resources in specific accounts like 'production' or 'dev'.",
        ),
        provider_uid: str | None = Field(
            default=None,
            description="Filter by provider's native ID (e.g., AWS account ID, Azure subscription ID, GCP project ID). All supported provider types are listed in the Prowler Hub/Prowler Documentation that you can also find in form of tools in this MCP Server",
        ),
        region: list[str] = Field(
            default=[],
            description="Filter by regions. Multiple values allowed (e.g., us-east-1, westus2, europe-west1); format may vary depending on the provider. If empty, all regions are returned.",
        ),
        service: list[str] = Field(
            default=[],
            description="Filter by service. Multiple values allowed (e.g., s3, ec2, iam, keyvault). If empty, all services are returned.",
        ),
        resource_type: list[str] = Field(
            default=[],
            description="Filter by resource type. Format may vary depending on the provider. If empty, all resource types are returned.",
        ),
        resource_name: str | None = Field(
            default=None,
            description="Filter by resource name (partial match supported). Useful for finding specific resources like 'prod-db' or 'test-bucket'.",
        ),
        tag_key: str | None = Field(
            default=None,
            description="Filter resources by tag key (e.g., 'Environment', 'CostCenter', 'Owner').",
        ),
        tag_value: str | None = Field(
            default=None,
            description="Filter resources by tag value (e.g., 'production', 'staging', 'development').",
        ),
        date_from: str | None = Field(
            default=None,
            description="Start date for range query in ISO 8601 format (YYYY-MM-DD, e.g., '2025-01-15'). Full date required. IMPORTANT: Maximum date range is 2 days. If only date_from is provided, date_to is automatically set to 2 days later.",
        ),
        date_to: str | None = Field(
            default=None,
            description="End date for range query in ISO 8601 format (YYYY-MM-DD, e.g., '2025-01-15'). Full date required. If only date_to is provided, date_from is automatically set to 2 days earlier.",
        ),
        search: str | None = Field(
            default=None, description="Free-text search term across resource details"
        ),
        page_size: int = Field(
            default=50, description="Number of results to return per page (max 1000)"
        ),
        page_number: int = Field(
            default=1, description="Page number to retrieve (1-indexed)"
        ),
    ) -> dict[str, Any]:
        """List and filter all resources scanned by Prowler.

        IMPORTANT: This tool returns LIGHTWEIGHT resource information. Use this for fast searching
        and filtering across many resources. For complete configuration details, metadata, and finding
        relationships, use prowler_app_get_resource on specific resources of interest.

        This is the primary tool for browsing resources with rich filtering capabilities.
        Returns current state by default (latest scan per provider). Specify dates to query
        historical data (2-day maximum window).

        Default behavior:
        - Returns latest resources from most recent scans (no date parameters needed)
        - Returns 50 results per page
        - Sorted by service, region, and name for logical grouping

        Date filtering:
        - Without dates: queries resources from the most recent completed scan per provider (most efficient)
        - With dates: queries historical resource state (2-day maximum range between date_from and date_to)

        Each resource includes:
        - Core identification: id (UUID for prowler_app_get_resource), uid, name
        - Location context: region, service, type
        - Security context: failed_findings_count (number of active security issues)
        - Tags: tags associated with the resource

        Useful Workflow:
        1. Use this tool to search and filter resources by provider, region, service, tags, etc.
        2. Use prowler_app_get_resource with the resource 'id' to get complete configuration and metadata
        3. Use prowler_app_search_security_findings to find security issues for specific resources
        4. Use prowler_app_get_finding_details to get details about the security issues for specific resources
        """
        # Validate page_size parameter
        self.api_client.validate_page_size(page_size)

        # Determine endpoint based on date parameters
        date_range = self.api_client.normalize_date_range(
            date_from, date_to, max_days=2
        )

        if date_range is None:
            # No dates provided - use latest resources endpoint
            endpoint = "/api/v1/resources/latest"
            params = {}
        else:
            # Dates provided - use historical resources endpoint
            endpoint = "/api/v1/resources"
            params = {
                "filter[updated_at__gte]": date_range[0],
                "filter[updated_at__lte]": date_range[1],
            }

        # Build filter parameters
        if provider_type:
            params["filter[provider_type__in]"] = provider_type
        if provider_alias:
            params["filter[provider_alias__icontains]"] = provider_alias
        if provider_uid:
            params["filter[provider_uid__icontains]"] = provider_uid
        if region:
            params["filter[region__in]"] = region
        if service:
            params["filter[service__in]"] = service
        if resource_type:
            params["filter[type__in]"] = resource_type
        if resource_name:
            params["filter[name__icontains]"] = resource_name
        if tag_key:
            params["filter[tag_key]"] = tag_key
        if tag_value:
            params["filter[tag_value]"] = tag_value
        if search:
            params["filter[search]"] = search

        # Pagination
        params["page[size]"] = page_size
        params["page[number]"] = page_number

        # Return only LLM-relevant fields
        params["fields[resources]"] = (
            "uid,name,region,service,type,failed_findings_count,tags"
        )
        params["sort"] = "service,region,name"

        # Convert lists to comma-separated strings
        clean_params = self.api_client.build_filter_params(params)

        # Get API response and transform to simplified format
        api_response = await self.api_client.get(endpoint, params=clean_params)
        simplified_response = ResourcesListResponse.from_api_response(api_response)

        return simplified_response.model_dump()

    async def get_resource(
        self,
        resource_id: str = Field(
            description="Prowler's internal UUID (v4) for the resource to retrieve, generated when the resource was discovered in the system. Use `prowler_app_list_resources` tool to find the right ID"
        ),
    ) -> dict[str, Any]:
        """Retrieve comprehensive details about a specific resource by its ID.

        IMPORTANT: This tool provides COMPLETE resource details with all available information.
        Use this after finding a specific resource via prowler_app_list_resources.

        This tool provides ALL information that prowler_app_list_resources returns PLUS:

        1. Configuration Details:
        - metadata: Provider-specific configuration (tags, policies, encryption settings, network rules)
        - partition: Provider-specific partition/region grouping (e.g., aws, aws-cn, aws-us-gov for AWS)

        2. Temporal Tracking:
        - inserted_at: When Prowler first discovered this resource
        - updated_at: When the resource configuration last changed

        3. Security Relationships:
        - finding_ids: Prowler's internal UUIDs (v4) of all security findings associated with this resource
        - Use prowler_app_get_finding_details on these IDs to get remediation guidance

        Useful Workflow:
        1. Use prowler_app_list_resources to browse and filter across many resources
        2. Use this tool to drill down into specific resources of interest
        3. Use prowler_app_get_finding_details to get details about the security issues for specific resources
        """
        params = {}

        # Get API response and transform to detailed format
        api_response = await self.api_client.get(
            f"/api/v1/resources/{resource_id}", params=params
        )
        self.logger.info(f"API response: {api_response}")
        detailed_resource = DetailedResource.from_api_response(
            api_response.get("data", {})
        )

        return detailed_resource.model_dump()

    async def get_resources_overview(
        self,
        provider_type: list[str] = Field(
            default=[],
            description="Filter by provider type. Multiple values allowed. If empty, all providers are returned. For valid values, please refer to Prowler Hub/Prowler Documentation that you can also find in form of tools in this MCP Server.",
        ),
        provider_alias: str | None = Field(
            default=None,
            description="Filter by specific provider alias/name (partial match supported).",
        ),
        provider_uid: str | None = Field(
            default=None,
            description="Filter by provider's native ID (e.g., AWS account ID, Azure subscription ID).",
        ),
        date_from: str | None = Field(
            default=None,
            description="Start date for range query in ISO 8601 format (YYYY-MM-DD). Maximum 2-day range.",
        ),
        date_to: str | None = Field(
            default=None,
            description="End date for range query in ISO 8601 format (YYYY-MM-DD).",
        ),
    ) -> dict[str, Any]:
        """Generate a markdown overview of your resources with statistics and insights.

        IMPORTANT: This tool provides HIGH-LEVEL STATISTICS without returning individual resources.
        Use this when you need a summary view before drilling into details.

        The report includes:
        - Total number of resources
        - Available services across your providers
        - Regions where resources are deployed
        - Resource types present in your providers

        Output format: Markdown-formatted report ready to present to users or include in documentation.

        Use cases:
        - Understanding infrastructure footprint
        - Identifying resource concentration (which regions, services)
        - Multi-provider deployment auditing
        - Resource inventory reporting
        - Tags planning (by provider, service, region)
        """
        # Determine endpoint based on date parameters
        date_range = self.api_client.normalize_date_range(
            date_from, date_to, max_days=2
        )

        if date_range is None:
            # No dates provided - use latest metadata endpoint
            metadata_endpoint = "/api/v1/resources/metadata/latest"
            list_endpoint = "/api/v1/resources/latest"
            params = {}
        else:
            # Dates provided - use historical endpoints
            metadata_endpoint = "/api/v1/resources/metadata"
            list_endpoint = "/api/v1/resources"
            params = {
                "filter[updated_at__gte]": date_range[0],
                "filter[updated_at__lte]": date_range[1],
            }

        # Build common filter parameters
        if provider_type:
            params["filter[provider_type__in]"] = provider_type
        if provider_alias:
            params["filter[provider_alias__icontains]"] = provider_alias
        if provider_uid:
            params["filter[provider_uid__icontains]"] = provider_uid

        # Convert lists to comma-separated strings
        clean_params = self.api_client.build_filter_params(params)

        # Get metadata (services, regions, types)
        metadata_params = clean_params.copy()
        metadata_params["fields[resources-metadata]"] = "services,regions,types"
        metadata_response = await self.api_client.get(
            metadata_endpoint, params=metadata_params
        )
        metadata = ResourcesMetadataResponse.from_api_response(metadata_response)

        # Get total count (using page_size=1 for efficiency)
        count_params = clean_params.copy()
        count_params["page[size]"] = 1
        count_params["page[number]"] = 1
        count_response = await self.api_client.get(list_endpoint, params=count_params)
        total_resources = (
            count_response.get("meta", {}).get("pagination", {}).get("count", 0)
        )

        # Build markdown report
        report_lines = ["# Cloud Resources Overview", ""]

        # Total resources
        report_lines.append(f"**Total Resources**: {total_resources:,} resources")
        report_lines.append("")

        # Services
        if metadata.services:
            report_lines.append("## Services")
            report_lines.append(f"**{len(metadata.services)}** unique services found")
            report_lines.append("")
            for i, service in enumerate(metadata.services, 1):
                report_lines.append(f"{i}. {service}")
            report_lines.append("")

        # Regions
        if metadata.regions:
            report_lines.append("## Regions")
            report_lines.append(f"**{len(metadata.regions)}** unique regions found")
            report_lines.append("")
            for i, region in enumerate(metadata.regions, 1):
                report_lines.append(f"{i}. {region}")
            report_lines.append("")

        # Resource types
        if metadata.types:
            report_lines.append("## Resource Types")
            report_lines.append(
                f"**{len(metadata.types)}** unique resource types found"
            )
            report_lines.append("")
            for i, rtype in enumerate(metadata.types, 1):
                report_lines.append(f"{i}. {rtype}")
            report_lines.append("")

        report = "\n".join(report_lines)
        return {"report": report}
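Several of the tools above rely on the client's `build_filter_params` helper, documented only by the inline comment "Convert lists to comma-separated strings". Under that assumption, a minimal version might look like this (our sketch, not the real `api_client` implementation):

```python
from typing import Any


def build_filter_params(params: dict[str, Any]) -> dict[str, Any]:
    """Flatten list-valued filters into the comma-separated form the API expects."""
    clean: dict[str, Any] = {}
    for key, value in params.items():
        if isinstance(value, list):
            # e.g. ["us-east-1", "eu-west-1"] -> "us-east-1,eu-west-1"
            clean[key] = ",".join(str(v) for v in value)
        else:
            clean[key] = value
    return clean


print(build_filter_params({"filter[region__in]": ["us-east-1", "eu-west-1"]}))
# → {'filter[region__in]': 'us-east-1,eu-west-1'}
```

This matches the `filter[...__in]` query-string convention used throughout the resource and scan filters.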
330 mcp_server/prowler_mcp_server/prowler_app/tools/scans.py Normal file
@@ -0,0 +1,330 @@
"""Security Scans tools for Prowler App MCP Server.
|
||||
|
||||
This module provides tools for managing and monitoring Prowler security scans.
|
||||
"""
|
||||
|
||||
from typing import Any, Literal
|
||||
|
||||
from prowler_mcp_server.prowler_app.models.scans import (
|
||||
DetailedScan,
|
||||
ScanCreationResult,
|
||||
ScansListResponse,
|
||||
ScheduleCreationResult,
|
||||
)
|
||||
from prowler_mcp_server.prowler_app.tools.base import BaseTool
|
||||
from pydantic import Field
|
||||
|
||||
|
||||
class ScansTools(BaseTool):
|
||||
"""Tools for security scan operations.
|
||||
|
||||
Provides tools for:
|
||||
- prowler_app_list_scans: Search and filter scans with rich filtering capabilities
|
||||
- prowler_app_get_scan: Get comprehensive details about a specific scan
|
||||
- prowler_app_trigger_scan: Trigger manual security scans for providers
|
||||
- prowler_app_schedule_daily_scan: Schedule automated daily scans for continuous monitoring
|
||||
- prowler_app_update_scan: Update scan names for better organization
|
||||
"""
|
||||
|
||||
async def list_scans(
|
||||
self,
|
||||
provider_id: list[str] = Field(
|
||||
default=[],
|
||||
description="Filter by Prowler's internal UUID(s) (v4) for specific provider(s), generated when the provider was registered. Use `prowler_app_search_providers` tool to find provider IDs",
|
||||
),
|
||||
provider_type: list[str] = Field(
|
||||
default=[],
|
||||
description="Filter by cloud provider type. For all valid values, please refer to Prowler Hub/Prowler Documentation that you can also find in form of tools in this MCP Server",
|
||||
),
|
||||
provider_alias: str | None = Field(
|
||||
default=None,
|
||||
description="Filter by provider alias/friendly name. Partial match supported (case-insensitive)",
|
||||
),
|
||||
state: list[
|
||||
Literal[
|
||||
"available",
|
||||
"scheduled",
|
||||
"executing",
|
||||
"completed",
|
||||
"failed",
|
||||
"cancelled",
|
||||
]
|
||||
] = Field(
|
||||
default=[],
|
||||
description="Filter by scan execution state.",
|
||||
),
|
||||
trigger: Literal["manual", "scheduled"] | None = Field(
|
||||
default=None,
|
||||
description="Filter by how the scan was initiated. Options: 'manual' (user-initiated via prowler_app_trigger_scan), 'scheduled' (automated via prowler_app_schedule_daily_scan)",
|
||||
),
|
||||
name: str | None = Field(
|
||||
default=None,
|
||||
description="Filter by scan name. Partial match supported (case-insensitive)",
|
||||
),
|
||||
page_size: int = Field(
|
||||
default=50,
|
||||
description="Number of results to return per page",
|
||||
),
|
||||
page_number: int = Field(
|
||||
default=1,
|
||||
description="Page number to retrieve (1-indexed)",
|
||||
),
|
||||
) -> dict[str, Any]:
|
||||
"""List and filter security scans across all providers with rich filtering capabilities.
|
||||
|
||||
IMPORTANT: This tool returns LIGHTWEIGHT scan information. Use this for fast searching and filtering
|
||||
across many scans. For complete scan details including progress, duration, and resource counts,
|
||||
use prowler_app_get_scan on specific scans of interest.
|
||||
|
||||
Default behavior:
|
||||
- Returns all scans
|
||||
- Returns 50 scans per page
|
||||
- Includes all scan states (available, scheduled, executing, completed, failed, cancelled)
|
||||
|
||||
Each scan includes:
|
||||
- Core identification: id (UUID for prowler_app_get_scan), name
|
||||
- Execution context: state, trigger (manual/scheduled)
|
||||
- Temporal data: started_at, completed_at
|
||||
- Provider relationship: provider_id
|
||||
|
||||
Workflow:
|
||||
1. Use this tool to search and filter scans by provider, state, or date range
|
||||
2. Use prowler_app_get_scan with the scan 'id' to get progress, duration, and resource counts
|
||||
3. Use prowler_app_search_security_findings filtered by scan dates to analyze scan results
|
||||
"""
|
||||
# Validate pagination
|
||||
self.api_client.validate_page_size(page_size)
|
||||
|
||||
# Build query parameters
|
||||
params: dict[str, Any] = {
|
||||
"page[size]": page_size,
|
||||
"page[number]": page_number,
|
||||
}
|
||||
|
||||
# Apply provider filters
|
||||
if provider_id:
|
||||
params["filter[provider__in]"] = provider_id
|
||||
if provider_type:
|
||||
params["filter[provider_type__in]"] = provider_type
|
||||
if provider_alias:
|
||||
params["filter[provider_alias__icontains]"] = provider_alias
|
||||
|
||||
# Apply scan filters
|
||||
if state:
|
||||
params["filter[state__in]"] = state
|
||||
if trigger:
|
||||
params["filter[trigger]"] = trigger
|
||||
if name:
|
||||
params["filter[name__icontains]"] = name
|
||||
|
||||
clean_params = self.api_client.build_filter_params(params)
|
||||
|
||||
api_response = await self.api_client.get("/api/v1/scans", params=clean_params)
|
||||
simplified_response = ScansListResponse.from_api_response(api_response)
|
||||
|
||||
return simplified_response.model_dump()
|
||||
|
||||
async def get_scan(
|
||||
self,
|
||||
scan_id: str = Field(
|
||||
description="Prowler's internal UUID (v4) for the scan to retrieve, generated when the scan was created (e.g., '123e4567-e89b-12d3-a456-426614174000'). Use `prowler_app_list_scans` tool to find scan IDs"
|
||||
),
|
||||
) -> dict[str, Any]:
|
||||
"""Retrieve comprehensive details about a specific scan by its ID.
|
||||
|
||||
IMPORTANT: This tool returns COMPLETE scan details.
|
||||
Use this after finding a specific scan via prowler_app_list_scans.
|
||||
|
||||
This tool provides ALL information that prowler_app_list_scans returns PLUS:
|
||||
|
||||
1. Execution Details:
|
||||
- progress: Scan completion progress as percentage (0-100%)
|
||||
- duration: Total scan duration in seconds from start to completion
|
||||
- unique_resource_count: Number of unique cloud resources discovered during the scan
|
||||
|
||||
2. Temporal Metadata:
|
||||
- inserted_at: When the scan was created in the database
|
||||
- scheduled_at: When the scan was scheduled to run (for scheduled scans)
|
||||
- next_scan_at: When the next scan will run (for recurring daily scans)
|
||||
|
||||
Useful for:
|
||||
- Monitoring scan progress during execution (via progress field)
|
||||
- Viewing scan results and metrics after completion
|
||||
- Debugging failed scans with detailed state information
|
||||
- Understanding scan scheduling patterns
|
||||
|
||||
Workflow:
|
||||
1. Use prowler_app_list_scans to browse and filter scans
|
||||
2. Use this tool with the scan 'id' to monitor progress or view detailed results
|
||||
3. For completed scans, use prowler_app_search_security_findings filtered by date to analyze findings
|
||||
"""
|
||||
# Fetch scan with all fields
|
||||
params = {
|
||||
"fields[scans]": "name,trigger,state,progress,duration,unique_resource_count,started_at,completed_at,scheduled_at,next_scan_at,inserted_at"
|
||||
}
|
||||
|
||||
api_response = await self.api_client.get(
|
||||
f"/api/v1/scans/{scan_id}", params=params
|
||||
)
|
||||
detailed_scan = DetailedScan.from_api_response(api_response["data"])
|
||||
|
||||
return detailed_scan.model_dump()
|
||||
|
||||
async def trigger_scan(
|
||||
self,
|
||||
provider_id: str = Field(
|
||||
description="Prowler's internal UUID (v4) for the provider to scan, generated when the provider was registered in the system (e.g., '4d0e2614-6385-4fa7-bf0b-c2e2f75c6877'). Use `prowler_app_search_providers` tool to find the provider ID"
|
||||
),
|
||||
name: str | None = Field(
|
||||
default=None,
|
||||
description="Optional human-friendly name for the scan. Use descriptive names to identify scan purpose or context, e.g., 'Weekly Production Security Audit', 'Pre-Deployment Validation', 'Compliance Check Q4 2025'",
|
||||
),
|
||||
) -> dict[str, Any]:
|
||||
"""Trigger a manual security scan for a provider.
|
||||
|
||||
IMPORTANT: This tool returns immediately once the scan is created.
|
||||
The scan will continue running in the background. Use `prowler_app_get_scan`
|
||||
with the returned scan ID to monitor progress and check when it completes.
|
||||
|
||||
Example Useful Workflow:
|
||||
1. Use `prowler_app_search_providers` to find the provider_id you want to scan
|
||||
2. Use this tool to trigger the scan
|
||||
3. Use `prowler_app_get_scan` with the returned scan 'id' to monitor progress
|
||||
4. Once completed, use `prowler_app_search_security_findings` to analyze results
|
||||
"""
|
||||
try:
|
||||
# Build request data
|
||||
request_data: dict[str, Any] = {
|
||||
"data": {
|
||||
"type": "scans",
|
||||
"attributes": {},
|
||||
"relationships": {
|
||||
"provider": {
|
||||
"data": {
|
||||
"type": "providers",
|
||||
"id": provider_id,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
if name:
|
||||
request_data["data"]["attributes"]["name"] = name
|
||||
|
||||
# Create scan (returns Task)
|
||||
self.logger.info(f"Creating scan for provider {provider_id}")
|
||||
task_response = await self.api_client.post(
|
||||
"/api/v1/scans", json_data=request_data
|
||||
)
|
||||
|
||||
scan_id = (
|
||||
task_response.get("data", {})
|
||||
.get("attributes", {})
|
||||
.get("task_args", {})
|
||||
.get("scan_id", None)
|
||||
)
|
||||
|
||||
if not scan_id:
|
||||
raise Exception("No scan_id returned from scan creation")
|
||||
|
||||
self.logger.info(f"Scan created successfully: {scan_id}")
|
||||
scan_response = await self.api_client.get(f"/api/v1/scans/{scan_id}")
|
||||
scan_info = DetailedScan.from_api_response(scan_response["data"])
|
||||
|
||||
return ScanCreationResult(
|
||||
scan=scan_info,
|
||||
status="success",
|
||||
message=f"Scan {scan_id} created successfully. The scan may take some time to complete. Use prowler_app_get_scan tool with this ID to monitor progress.",
|
||||
).model_dump()
|
||||
|
||||
except Exception as e:
|
||||
self.logger.error(f"Scan creation failed: {e}")
|
||||
return ScanCreationResult(
|
||||
scan=None,
|
||||
status="failed",
|
||||
message=f"Scan creation failed: {str(e)}",
|
||||
).model_dump()
|
||||
|
||||
async def schedule_daily_scan(
|
||||
self,
|
||||
provider_id: str = Field(
|
||||
description="Prowler's internal UUID (v4) for the provider to scan, generated when the provider was registered in the system (e.g., '4d0e2614-6385-4fa7-bf0b-c2e2f75c6877'). Use `prowler_app_search_providers` tool to find the provider ID"
|
||||
),
|
||||
) -> dict[str, Any]:
|
||||
"""Schedule automated daily scans for a provider for continuous security monitoring.
|
||||
|
||||
Creates a recurring daily scan schedule that will automatically trigger
|
||||
scans every 24 hours (starting from the moment the schedule is created).
|
||||
The schedule persists until manually removed and will execute even when
|
||||
you're not actively using the system.
|
||||
|
||||
IMPORTANT: This tool returns immediately once the daily schedule is created.
|
||||
The schedule will be set up in the background. Use `prowler_app_list_scans`
|
||||
filtered by provider_id and trigger='scheduled' to view scheduled scans.
|
||||
|
||||
IMPORTANT: This creates a PERSISTENT schedule. The provider will be scanned
|
||||
automatically every 24 hours until the provider is deleted.
|
||||
|
||||
Example Useful Workflow:
|
||||
1. Use `prowler_app_search_providers` to find the provider_id you want to monitor
|
||||
2. Use this tool to create the daily schedule
|
||||
3. Use `prowler_app_list_scans` filtered by provider_id to view scheduled and completed scans
|
||||
4. Monitor findings over time with `prowler_app_search_security_findings`
|
||||
"""
|
||||
self.logger.info(f"Creating daily schedule for provider {provider_id}")
|
||||
task_response = await self.api_client.post(
|
||||
"/api/v1/schedules/daily",
|
||||
json_data={
|
||||
"data": {
|
||||
"type": "daily-schedules",
|
||||
"attributes": {
|
||||
"provider_id": provider_id,
|
||||
},
|
||||
},
|
||||
},
|
||||
)
|
||||
task_state = (
|
||||
task_response.get("data", {}).get("attributes", {}).get("state", None)
|
||||
)
|
||||
|
||||
if task_state == "available":
|
||||
return_message = "Daily schedule created successfully. The schedule is being set up in the background. Use prowler_app_list_scans with provider_id filter to view scheduled scans."
|
||||
else:
|
||||
return_message = "Daily schedule creation failed. Please try again later."
|
||||
|
||||
return ScheduleCreationResult(
|
||||
scheduled=(task_state == "available"),
|
||||
message=return_message,
|
||||
).model_dump()
|
||||
|
||||
async def update_scan(
|
||||
self,
|
||||
scan_id: str = Field(
|
||||
description="Prowler's internal UUID (v4) for the scan to update, generated when the scan was created (e.g., '123e4567-e89b-12d3-a456-426614174000'). Use `prowler_app_list_scans` tool to find the scan ID if you only know the provider or scan name. Returns an error if the scan ID is invalid or not found."
|
||||
),
|
||||
name: str = Field(
|
||||
description="New human-friendly name for the scan (3-100 characters). Use descriptive names to improve organization and tracking, e.g., 'Production Security Audit - Q4 2025', 'Post-Deployment Compliance Check'. IMPORTANT: Only the scan name can be updated - other attributes (state, progress, duration) are read-only and managed by the system."
|
||||
),
|
||||
) -> dict[str, Any]:
|
||||
"""Update a scan's name for better organization and tracking.
|
||||
|
||||
IMPORTANT: Only the scan name can be updated. Other scan attributes
|
||||
(state, progress, duration, etc.) are read-only and managed by the system.
|
||||
|
||||
Example Useful Workflow:
|
||||
1. Use `prowler_app_list_scans` to find the scan you want to rename
|
||||
2. Use this tool with the scan 'id' and new name
|
||||
"""
|
||||
api_response = await self.api_client.patch(
|
||||
f"/api/v1/scans/{scan_id}",
|
||||
json_data={
|
||||
"data": {
|
||||
"type": "scans",
|
||||
"id": scan_id,
|
||||
"attributes": {"name": name},
|
||||
},
|
||||
},
|
||||
)
|
||||
detailed_scan = DetailedScan.from_api_response(api_response["data"])
|
||||
|
||||
return detailed_scan.model_dump()
|
||||
```diff
@@ -1,5 +1,6 @@
 """Shared API client utilities for Prowler App tools."""
 
+import asyncio
 from datetime import datetime, timedelta
 from enum import Enum
 from typing import Any, Dict
@@ -83,7 +84,13 @@ class ProwlerAPIClient(metaclass=SingletonMeta):
             )
             response.raise_for_status()
 
-            return response.json()
+            if not response.content:
+                return {
+                    "success": True,
+                    "status_code": response.status_code,
+                }
+            else:
+                return response.json()
         except httpx.HTTPStatusError as e:
             logger.error(f"HTTP error during {method.value} {path}: {e}")
             error_detail: str = ""
@@ -180,6 +187,68 @@ class ProwlerAPIClient(metaclass=SingletonMeta):
         """
         return await self._make_request(HTTPMethod.DELETE, path, params=params)
 
+    async def poll_task_until_complete(
+        self,
+        task_id: str,
+        timeout: int = 60,
+        poll_interval: float = 1.0,
+    ) -> dict[str, Any]:
+        """Poll a task until it reaches a terminal state.
+
+        This method polls the task endpoint at regular intervals until the task
+        completes, fails, or times out. It's designed for async operations like
+        provider connection tests and deletions that return task IDs.
+
+        Args:
+            task_id: The UUID of the task to poll (UUID object or string)
+            timeout: Maximum time to wait in seconds (default: 60)
+            poll_interval: Time between polls in seconds (default: 1.0)
+
+        Returns:
+            The complete task response when terminal state is reached
+
+        Raises:
+            Exception: If task fails, is cancelled, or timeout is exceeded
+        """
+        terminal_states = {"completed", "failed", "cancelled"}
+        start_time = asyncio.get_event_loop().time()
+        max_time = start_time + timeout
+
+        logger.info(
+            f"Polling task {task_id} (timeout: {timeout}s, interval: {poll_interval}s)"
+        )
+
+        while True:
+            # Check if we've exceeded the timeout
+            current_time = asyncio.get_event_loop().time()
+            if current_time >= max_time:
+                raise Exception(
+                    f"Task {task_id} polling timed out after {timeout} seconds. "
+                    f"The task may still be running. Try increasing the timeout or check task status manually."
+                )
+
+            # Fetch current task state
+            response = await self.get(f"/api/v1/tasks/{task_id}")
+            task_data = response.get("data", {})
+            task_attrs = task_data.get("attributes", {})
+            state = task_attrs.get("state")
+
+            logger.debug(f"Task {task_id} state: {state}")
+
+            # Check if we've reached a terminal state
+            if state in terminal_states:
+                if state == "completed":
+                    logger.info(f"Task {task_id} completed successfully")
+                    return response
+                elif state == "failed":
+                    error_msg = task_attrs.get("error", "Unknown error")
+                    raise Exception(f"Task {task_id} failed: {error_msg}")
+                elif state == "cancelled":
+                    raise Exception(f"Task {task_id} was cancelled")
+
+            # Wait before next poll
+            await asyncio.sleep(poll_interval)
+
     def _validate_date_format(self, date_str: str, param_name: str) -> datetime:
         """Validate date string format.
 
@@ -253,6 +322,14 @@ class ProwlerAPIClient(metaclass=SingletonMeta):
         elif to_date and not from_date:
             from_date = to_date - timedelta(days=max_days - 1)
 
+        # Validate that date_from is before or equal to date_to
+        if from_date > to_date:
+            raise ValueError(
+                f"Invalid date range: date_from must be before or equal to date_to. "
+                f"Got date_from='{from_date.date()}' and date_to='{to_date.date()}'. "
+                f"Please swap the dates or use the correct order."
+            )
+
         # Validate range doesn't exceed max_days
         delta: int = (to_date - from_date).days + 1
         if delta > max_days:
```
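The `poll_task_until_complete` method added above follows a standard poll-until-terminal-state pattern. A self-contained sketch of that pattern with a stub client (the names and endpoint here are illustrative, not the real `ProwlerAPIClient`):

```python
import asyncio


class StubClient:
    """Stand-in for an async API client whose task finishes after a few polls."""

    def __init__(self, finish_after: int = 3):
        self.calls = 0
        self.finish_after = finish_after

    async def get(self, path: str) -> dict:
        self.calls += 1
        state = "completed" if self.calls >= self.finish_after else "executing"
        return {"data": {"attributes": {"state": state}}}


async def poll_until_complete(
    client, task_id: str, timeout: float = 5.0, poll_interval: float = 0.01
) -> dict:
    """Poll the task endpoint until a terminal state or the deadline is reached."""
    terminal = {"completed", "failed", "cancelled"}
    deadline = asyncio.get_event_loop().time() + timeout
    while True:
        if asyncio.get_event_loop().time() >= deadline:
            raise TimeoutError(f"Task {task_id} polling timed out")
        response = await client.get(f"/api/v1/tasks/{task_id}")
        state = response["data"]["attributes"].get("state")
        if state in terminal:
            if state == "completed":
                return response
            raise RuntimeError(f"Task {task_id} ended in state {state}")
        # Back off between polls instead of busy-waiting
        await asyncio.sleep(poll_interval)


client = StubClient()
result = asyncio.run(poll_until_complete(client, "task-1"))
print(result["data"]["attributes"]["state"], client.calls)  # → completed 3
```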
```diff
@@ -2,7 +2,7 @@
 
 All notable changes to the **Prowler SDK** are documented in this file.
 
-## [v5.15.0] (Prowler UNRELEASED)
+## [5.15.0] (Prowler v5.15.0)
 
 ### Added
 - `cloudstorage_uses_vpc_service_controls` check for GCP provider [(#9256)](https://github.com/prowler-cloud/prowler/pull/9256)
@@ -16,7 +16,6 @@ All notable changes to the **Prowler SDK** are documented in this file.
 - Update SOC2 - AWS with Processing Integrity requirements [(#9462)](https://github.com/prowler-cloud/prowler/pull/9462)
 - RBI Cyber Security Framework compliance for Azure provider [(#8822)](https://github.com/prowler-cloud/prowler/pull/8822)
 
-
 ### Changed
 - Update AWS Macie service metadata to new format [(#9265)](https://github.com/prowler-cloud/prowler/pull/9265)
 - Update AWS Lightsail service metadata to new format [(#9264)](https://github.com/prowler-cloud/prowler/pull/9264)
@@ -27,19 +26,13 @@ All notable changes to the **Prowler SDK** are documented in this file.
 - Update AWS Lightsail service metadata to new format [(#9264)](https://github.com/prowler-cloud/prowler/pull/9264)
 
 ### Fixed
+- Fix duplicate requirement IDs in ISO 27001:2013 AWS compliance framework by adding unique letter suffixes
+- Removed incorrect threat-detection category from checks metadata [(#9489)](https://github.com/prowler-cloud/prowler/pull/9489)
 - GCP `cloudstorage_uses_vpc_service_controls` check to handle VPC Service Controls blocked API access [(#9478)](https://github.com/prowler-cloud/prowler/pull/9478)
 
 ---
 
-## [v5.14.3] (Prowler UNRELEASED)
-
-### Fixed
-- Fix duplicate requirement IDs in ISO 27001:2013 AWS compliance framework by adding unique letter suffixes
-- Removed incorrect threat-detection category from checks metadata [(#9489)](https://github.com/prowler-cloud/prowler/pull/9489)
-
----
-
-## [v5.14.2] (Prowler 5.14.2)
+## [5.14.2] (Prowler v5.14.2)
 
 ### Fixed
 - Custom check folder metadata validation [(#9335)](https://github.com/prowler-cloud/prowler/pull/9335)
@@ -47,7 +40,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.14.1] (Prowler v5.14.1)
+## [5.14.1] (Prowler v5.14.1)
 
 ### Fixed
 - `sharepoint_external_sharing_managed` check to handle external sharing disabled at organization level [(#9298)](https://github.com/prowler-cloud/prowler/pull/9298)
@@ -55,7 +48,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.14.0] (Prowler v5.14.0)
+## [5.14.0] (Prowler v5.14.0)
 
 ### Added
 - GitHub provider check `organization_default_repository_permission_strict` [(#8785)](https://github.com/prowler-cloud/prowler/pull/8785)
@@ -133,7 +126,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.13.1] (Prowler v5.13.1)
+## [5.13.1] (Prowler v5.13.1)
 
 ### Fixed
 - Add `resource_name` for checks under `logging` for the GCP provider [(#9023)](https://github.com/prowler-cloud/prowler/pull/9023)
@@ -149,7 +142,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.13.0] (Prowler v5.13.0)
+## [5.13.0] (Prowler v5.13.0)
 
 ### Added
 - Support for AdditionalURLs in outputs [(#8651)](https://github.com/prowler-cloud/prowler/pull/8651)
@@ -207,7 +200,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.12.1] (Prowler v5.12.1)
+## [5.12.1] (Prowler v5.12.1)
 
 ### Fixed
 - Replaced old check id with new ones for compliance files [(#8682)](https://github.com/prowler-cloud/prowler/pull/8682)
@@ -216,7 +209,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.12.0] (Prowler v5.12.0)
+## [5.12.0] (Prowler v5.12.0)
 
 ### Added
 - Add more fields for the Jira ticket and handle custom fields errors [(#8601)](https://github.com/prowler-cloud/prowler/pull/8601)
@@ -252,7 +245,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.11.0] (Prowler v5.11.0)
+## [5.11.0] (Prowler v5.11.0)
 
 ### Added
 - Certificate authentication for M365 provider [(#8404)](https://github.com/prowler-cloud/prowler/pull/8404)
@@ -283,7 +276,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.10.2] (Prowler v5.10.2)
+## [5.10.2] (Prowler v5.10.2)
 
 ### Fixed
 - Order requirements by ID in Prowler ThreatScore AWS compliance framework [(#8495)](https://github.com/prowler-cloud/prowler/pull/8495)
@@ -297,14 +290,14 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.10.1] (Prowler v5.10.1)
+## [5.10.1] (Prowler v5.10.1)
 
 ### Fixed
 - Remove invalid requirements from CIS 1.0 for GitHub provider [(#8472)](https://github.com/prowler-cloud/prowler/pull/8472)
 
 ---
 
-## [v5.10.0] (Prowler v5.10.0)
+## [5.10.0] (Prowler v5.10.0)
 
 ### Added
 - `bedrock_api_key_no_administrative_privileges` check for AWS provider [(#8321)](https://github.com/prowler-cloud/prowler/pull/8321)
@@ -344,14 +337,14 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.9.2] (Prowler v5.9.2)
+## [5.9.2] (Prowler v5.9.2)
 
 ### Fixed
 - Use the correct resource name in `defender_domain_dkim_enabled` check [(#8334)](https://github.com/prowler-cloud/prowler/pull/8334)
 
 ---
 
-## [v5.9.0] (Prowler v5.9.0)
+## [5.9.0] (Prowler v5.9.0)
 
 ### Added
 - `storage_smb_channel_encryption_with_secure_algorithm` check for Azure provider [(#8123)](https://github.com/prowler-cloud/prowler/pull/8123)
@@ -385,7 +378,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.8.1] (Prowler 5.8.1)
+## [5.8.1] (Prowler v5.8.1)
 
 ### Fixed
 - Detect wildcarded ARNs in sts:AssumeRole policy resources [(#8164)](https://github.com/prowler-cloud/prowler/pull/8164)
@@ -395,7 +388,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.8.0] (Prowler v5.8.0)
+## [5.8.0] (Prowler v5.8.0)
 
 ### Added
 
@@ -457,7 +450,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.7.5] (Prowler v5.7.5)
+## [5.7.5] (Prowler v5.7.5)
 
 ### Fixed
 - Use unified timestamp for all requirements [(#8059)](https://github.com/prowler-cloud/prowler/pull/8059)
@@ -475,7 +468,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.7.3] (Prowler v5.7.3)
+## [5.7.3] (Prowler v5.7.3)
 
 ### Fixed
 - Automatically encrypt password in Microsoft365 provider [(#7784)](https://github.com/prowler-cloud/prowler/pull/7784)
@@ -483,7 +476,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.7.2] (Prowler v5.7.2)
+## [5.7.2] (Prowler v5.7.2)
 
 ### Fixed
 - `m365_powershell test_credentials` to use sanitized credentials [(#7761)](https://github.com/prowler-cloud/prowler/pull/7761)
@@ -495,7 +488,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.7.0] (Prowler v5.7.0)
+## [5.7.0] (Prowler v5.7.0)
 
 ### Added
 - Update the compliance list supported for each provider from docs [(#7694)](https://github.com/prowler-cloud/prowler/pull/7694)
@@ -523,7 +516,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.6.0] (Prowler v5.6.0)
+## [5.6.0] (Prowler v5.6.0)
 
 ### Added
 - SOC2 compliance framework to Azure [(#7489)](https://github.com/prowler-cloud/prowler/pull/7489)
@@ -592,7 +585,7 @@ All notable changes to the **Prowler SDK** are documented in this file.
 
 ---
 
-## [v5.5.1] (Prowler v5.5.1)
+## [5.5.1] (Prowler v5.5.1)
 
 ### Fixed
 - Default name to contacts in Azure Defender [(#7483)](https://github.com/prowler-cloud/prowler/pull/7483)
```
```diff
@@ -391,6 +391,8 @@ class IacProvider(Provider):
             "--parallel",
             "0",
             "--include-non-failures",
+            "--timeout",
+            "300m",
         ]
         if exclude_path:
            trivy_command.extend(["--skip-dirs", ",".join(exclude_path)])
```
```diff
@@ -2,13 +2,13 @@
 
 All notable changes to the **Prowler UI** are documented in this file.
 
-## [1.15.0] (Prowler Unreleased)
+## [1.15.0] (Prowler v5.15.0)
 
 ### 🚀 Added
 
 - Risk Plot component with interactive legend and severity navigation to Overview page [(#9469)](https://github.com/prowler-cloud/prowler/pull/9469)
 - Navigation progress bar for page transitions using Next.js `onRouterTransitionStart` [(#9465)](https://github.com/prowler-cloud/prowler/pull/9465)
-- Finding Severity Over Time chart component to Overview page [(#9405)](https://github.com/prowler-cloud/prowler/pull/9405)
+- Findings Severity Over Time chart component to Overview page [(#9405)](https://github.com/prowler-cloud/prowler/pull/9405)
 - Attack Surface component to Overview page [(#9412)](https://github.com/prowler-cloud/prowler/pull/9412)
 
 ### 🔄 Changed
@@ -22,11 +22,8 @@ All notable changes to the **Prowler UI** are documented in this file.
 - MongoDB Atlas provider support [(#9253)](https://github.com/prowler-cloud/prowler/pull/9253)
 - Lighthouse AI support for Amazon Bedrock API key [(#9343)](https://github.com/prowler-cloud/prowler/pull/9343)
 
 ---
 
-## [1.14.3] (Prowler Unreleased)
-
-### 🐞 Fixed
-
 - Show top failed requirements in compliance specific view for compliance without sections [(#9471)](https://github.com/prowler-cloud/prowler/pull/9471)
 
 ---
 
```
```diff
@@ -37,11 +37,13 @@ export const FindingSeverityOverTime = ({
   dataKey?: string;
 }) => {
   const params = new URLSearchParams();
   params.set("filter[inserted_at]", point.date);
 
   // Always filter by FAIL status since this chart shows failed findings
   params.set("filter[status__in]", "FAIL");
 
+  // Exclude muted findings
+  params.set("filter[muted]", "false");
+
   // Add scan_ids filter
   if (
     point.scan_ids &&
```

```diff
@@ -13,7 +13,7 @@ const EmptyState = ({ message }: { message: string }) => (
   <Card variant="base" className="flex h-full min-h-[405px] flex-1 flex-col">
     <CardHeader className="flex flex-col gap-4">
       <div className="flex items-center justify-between">
-        <CardTitle>Finding Severity Over Time</CardTitle>
+        <CardTitle>Findings Severity Over Time</CardTitle>
       </div>
     </CardHeader>
     <CardContent className="flex flex-1 items-center justify-center">
@@ -44,7 +44,7 @@ export const FindingSeverityOverTimeSSR = async ({
   <Card variant="base" className="flex h-full flex-1 flex-col">
     <CardHeader className="flex flex-col gap-4">
       <div className="flex items-center justify-between">
-        <CardTitle>Finding Severity Over Time</CardTitle>
+        <CardTitle>Findings Severity Over Time</CardTitle>
       </div>
     </CardHeader>
```
```diff
@@ -1,3 +1,5 @@
+"use client";
+
 /**
  * Client-side Sentry instrumentation
  *
@@ -8,13 +10,12 @@
  * For runtime-specific configs, see: sentry/sentry.server.config.ts and sentry/sentry.edge.config.ts
  */
 
-import { browserTracingIntegration } from "@sentry/browser";
 import * as Sentry from "@sentry/nextjs";
 
 const SENTRY_DSN = process.env.NEXT_PUBLIC_SENTRY_DSN;
 
-// Only initialize Sentry if DSN is configured
-if (SENTRY_DSN) {
+// Only initialize Sentry in the browser (not during SSR)
+if (typeof window !== "undefined" && SENTRY_DSN) {
   const isDevelopment = process.env.NEXT_PUBLIC_SENTRY_ENVIRONMENT === "local";
 
   /**
@@ -43,12 +44,12 @@ if (SENTRY_DSN) {
     tracesSampleRate: isDevelopment ? 1.0 : 0.5,
     profilesSampleRate: isDevelopment ? 1.0 : 0.5,
 
-    // 🔌 Integrations
+    // 🔌 Integrations - browserTracingIntegration is client-only
     integrations: [
       // 📊 Performance Monitoring: Core Web Vitals + RUM
       // Tracks LCP, FID, CLS, INP
       // Real User Monitoring captures actual user experience, not synthetic tests
-      browserTracingIntegration({
+      Sentry.browserTracingIntegration({
        enableLongTask: true, // Detect tasks that block UI (>50ms)
        enableInp: true, // Interaction to Next Paint (Core Web Vital)
      }),
```
@@ -15,6 +15,14 @@
     "strategy": "installed",
     "generatedAt": "2025-10-22T12:36:37.962Z"
   },
+  {
+    "section": "dependencies",
+    "name": "@aws-sdk/client-bedrock-runtime",
+    "from": "3.943.0",
+    "to": "3.943.0",
+    "strategy": "installed",
+    "generatedAt": "2025-12-10T11:34:11.122Z"
+  },
   {
     "section": "dependencies",
     "name": "@heroui/react",
@@ -31,6 +39,14 @@
     "strategy": "installed",
     "generatedAt": "2025-10-22T12:36:37.962Z"
   },
+  {
+    "section": "dependencies",
+    "name": "@internationalized/date",
+    "from": "3.10.0",
+    "to": "3.10.0",
+    "strategy": "installed",
+    "generatedAt": "2025-12-10T11:34:11.122Z"
+  },
   {
     "section": "dependencies",
     "name": "@langchain/aws",
@@ -42,10 +58,10 @@
   {
     "section": "dependencies",
     "name": "@langchain/core",
-    "from": "0.3.77",
-    "to": "0.3.78",
+    "from": "0.3.78",
+    "to": "0.3.77",
     "strategy": "installed",
-    "generatedAt": "2025-11-03T07:43:34.628Z"
+    "generatedAt": "2025-12-10T11:34:11.122Z"
   },
   {
     "section": "dependencies",
@@ -87,6 +103,22 @@
     "strategy": "installed",
     "generatedAt": "2025-10-22T12:36:37.962Z"
   },
+  {
+    "section": "dependencies",
+    "name": "@radix-ui/react-avatar",
+    "from": "1.1.11",
+    "to": "1.1.11",
+    "strategy": "installed",
+    "generatedAt": "2025-12-10T11:34:11.122Z"
+  },
+  {
+    "section": "dependencies",
+    "name": "@radix-ui/react-collapsible",
+    "from": "1.1.12",
+    "to": "1.1.12",
+    "strategy": "installed",
+    "generatedAt": "2025-12-10T11:34:11.122Z"
+  },
   {
     "section": "dependencies",
     "name": "@radix-ui/react-dialog",
@@ -127,6 +159,14 @@
     "strategy": "installed",
     "generatedAt": "2025-11-20T08:20:16.313Z"
   },
+  {
+    "section": "dependencies",
+    "name": "@radix-ui/react-scroll-area",
+    "from": "1.2.10",
+    "to": "1.2.10",
+    "strategy": "installed",
+    "generatedAt": "2025-12-10T11:34:11.122Z"
+  },
   {
     "section": "dependencies",
     "name": "@radix-ui/react-select",
@@ -151,6 +191,14 @@
     "strategy": "installed",
     "generatedAt": "2025-10-22T12:36:37.962Z"
   },
+  {
+    "section": "dependencies",
+    "name": "@radix-ui/react-tabs",
+    "from": "1.1.13",
+    "to": "1.1.13",
+    "strategy": "installed",
+    "generatedAt": "2025-12-10T11:34:11.122Z"
+  },
   {
     "section": "dependencies",
     "name": "@radix-ui/react-toast",
@@ -159,6 +207,22 @@
     "strategy": "installed",
     "generatedAt": "2025-10-22T12:36:37.962Z"
   },
+  {
+    "section": "dependencies",
+    "name": "@radix-ui/react-tooltip",
+    "from": "1.2.8",
+    "to": "1.2.8",
+    "strategy": "installed",
+    "generatedAt": "2025-12-10T11:34:11.122Z"
+  },
+  {
+    "section": "dependencies",
+    "name": "@react-aria/i18n",
+    "from": "3.12.13",
+    "to": "3.12.13",
+    "strategy": "installed",
+    "generatedAt": "2025-12-10T11:34:11.122Z"
+  },
   {
     "section": "dependencies",
     "name": "@react-aria/ssr",
@@ -175,13 +239,37 @@
     "strategy": "installed",
     "generatedAt": "2025-10-22T12:36:37.962Z"
   },
+  {
+    "section": "dependencies",
+    "name": "@react-stately/utils",
+    "from": "3.10.8",
+    "to": "3.10.8",
+    "strategy": "installed",
+    "generatedAt": "2025-12-10T11:34:11.122Z"
+  },
+  {
+    "section": "dependencies",
+    "name": "@react-types/datepicker",
+    "from": "3.13.2",
+    "to": "3.13.2",
+    "strategy": "installed",
+    "generatedAt": "2025-12-10T11:34:11.122Z"
+  },
+  {
+    "section": "dependencies",
+    "name": "@react-types/shared",
+    "from": "3.26.0",
+    "to": "3.26.0",
+    "strategy": "installed",
+    "generatedAt": "2025-12-10T11:34:11.122Z"
+  },
   {
     "section": "dependencies",
     "name": "@sentry/nextjs",
     "from": "10.11.0",
-    "to": "10.11.0",
+    "to": "10.27.0",
     "strategy": "installed",
-    "generatedAt": "2025-10-22T15:52:15.849Z"
+    "generatedAt": "2025-12-01T10:01:42.332Z"
   },
   {
     "section": "dependencies",
@@ -299,9 +387,9 @@
     "section": "dependencies",
     "name": "js-yaml",
     "from": "4.1.0",
-    "to": "4.1.0",
+    "to": "4.1.1",
     "strategy": "installed",
-    "generatedAt": "2025-10-22T12:36:37.962Z"
+    "generatedAt": "2025-12-01T10:01:42.332Z"
   },
   {
     "section": "dependencies",
@@ -327,6 +415,14 @@
     "strategy": "installed",
     "generatedAt": "2025-10-22T12:36:37.962Z"
   },
+  {
+    "section": "dependencies",
+    "name": "nanoid",
+    "from": "5.1.6",
+    "to": "5.1.6",
+    "strategy": "installed",
+    "generatedAt": "2025-12-10T11:34:11.122Z"
+  },
   {
     "section": "dependencies",
     "name": "next",
@@ -339,9 +435,9 @@
     "section": "dependencies",
     "name": "next-auth",
     "from": "5.0.0-beta.29",
-    "to": "5.0.0-beta.29",
+    "to": "5.0.0-beta.30",
     "strategy": "installed",
-    "generatedAt": "2025-10-22T12:36:37.962Z"
+    "generatedAt": "2025-12-01T10:01:42.332Z"
   },
   {
     "section": "dependencies",
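Each manifest entry above records a "from"/"to" version pair. Most of the added entries are same-version pins, but a few differ: js-yaml moves 4.1.0 to 4.1.1 (an upgrade), while @langchain/core moves 0.3.78 to 0.3.77 (a downgrade). A small sketch of how such entries could be classified (the `classify` helper and `Entry` type are hypothetical, not part of the repo; plain numeric dotted versions only, prerelease tags like `-beta.N` are not handled):

```typescript
// Hypothetical classifier for manifest entries: compares the dot-separated
// "from"/"to" fields component by component, numerically.
type Entry = { name: string; from: string; to: string };

function classify(e: Entry): "upgrade" | "downgrade" | "pinned" {
  const a = e.from.split(".").map(Number);
  const b = e.to.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] ?? 0;
    const y = b[i] ?? 0;
    if (x < y) return "upgrade";
    if (x > y) return "downgrade";
  }
  return "pinned";
}

console.log(classify({ name: "js-yaml", from: "4.1.0", to: "4.1.1" }));           // upgrade
console.log(classify({ name: "@langchain/core", from: "0.3.78", to: "0.3.77" })); // downgrade
console.log(classify({ name: "nanoid", from: "5.1.6", to: "5.1.6" }));            // pinned
```

Flagging downgrades like the @langchain/core entry separately is useful in review, since a rollback usually deserves a different level of scrutiny than a routine bump.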
@@ -20,7 +20,8 @@
     "test:e2e:debug": "playwright test --project=chromium --project=sign-up --project=providers --project=invitations --project=scans --debug",
     "test:e2e:headed": "playwright test --project=chromium --project=sign-up --project=providers --project=invitations --project=scans --headed",
     "test:e2e:report": "playwright show-report",
-    "test:e2e:install": "playwright install"
+    "test:e2e:install": "playwright install",
+    "audit:fix": "pnpm audit fix"
   },
   "dependencies": {
     "@ai-sdk/langchain": "1.0.59",
@@ -56,8 +57,7 @@
     "@react-stately/utils": "3.10.8",
     "@react-types/datepicker": "3.13.2",
     "@react-types/shared": "3.26.0",
-    "@sentry/browser": "10.11.0",
-    "@sentry/nextjs": "10.11.0",
+    "@sentry/nextjs": "10.27.0",
     "@tailwindcss/postcss": "4.1.13",
     "@tailwindcss/typography": "0.5.16",
     "@tanstack/react-table": "8.21.3",
@@ -72,13 +72,13 @@
     "framer-motion": "11.18.2",
     "intl-messageformat": "10.7.16",
     "jose": "5.10.0",
-    "js-yaml": "4.1.0",
+    "js-yaml": "4.1.1",
     "jwt-decode": "4.0.0",
     "lucide-react": "0.543.0",
     "marked": "15.0.12",
     "nanoid": "5.1.6",
     "next": "15.5.7",
-    "next-auth": "5.0.0-beta.29",
+    "next-auth": "5.0.0-beta.30",
     "next-themes": "0.2.1",
     "radix-ui": "1.4.2",
     "react": "19.2.1",
588  ui/pnpm-lock.yaml (generated)
File diff suppressed because it is too large