Compare commits

..

3 Commits

Author SHA1 Message Date
Alan Buscaglia
acee366e82 docs(api): add Silk profiling and improve test data guidance 2025-12-16 15:19:06 +01:00
Alan Buscaglia
47d66c9c4c docs(api): update guide with project-specific patterns
- Add rls_transaction usage examples
- Add DJANGO_LOGGING_LEVEL=DEBUG option
- Add RLS context for raw SQL (set_config)
- Add section on partitioned tables index creation
- Improve index design guidance for RLS
2025-12-09 12:47:48 +01:00
Alan Buscaglia
8d41941d22 docs(api): add query performance guide and PR checklist item
- Add query-performance-guide.md with EXPLAIN ANALYZE best practices
- Add checklist item in PR template for query performance validation
- Document common issues, fixes, and minimum test data requirements
2025-12-09 12:43:00 +01:00
12 changed files with 591 additions and 264 deletions

View File

@@ -32,6 +32,7 @@ Please add a detailed description of how to review this PR.
#### API
- [ ] Verify if API specs need to be regenerated.
- [ ] Check if version updates are required (e.g., specs, Poetry, etc.).
- [ ] Query performance validated with `EXPLAIN ANALYZE` for new/modified endpoints. See [Query Performance Guide](https://github.com/prowler-cloud/prowler/blob/master/api/docs/query-performance-guide.md).
- [ ] Ensure new entries are added to [CHANGELOG.md](https://github.com/prowler-cloud/prowler/blob/master/api/CHANGELOG.md), if applicable.
### License

View File

@@ -0,0 +1,457 @@
# Query Performance Guide
## Overview
This guide explains how to validate query performance when developing new endpoints or modifying existing ones. **This is part of the development process**, not a separate task, just like writing unit tests.
The goal is simple: ensure PostgreSQL uses indexes correctly for the queries your code generates.
## When to Validate
You **must** validate query performance when:
- Creating a new endpoint that queries the database
- Modifying an existing query (adding filters, joins, or sorting)
- Adding new indexes
- Working on performance-critical endpoints (overviews, findings, resources)
## Profiling with Django Silk (Recommended)
[Django Silk](https://github.com/jazzband/django-silk) is the recommended way to profile queries because it captures the actual SQL generated by your code during real HTTP requests. This gives you the most accurate picture of what happens in production.
### Enabling Silk
Silk is installed as a dev dependency but disabled by default. To enable it temporarily for profiling:
#### 1. Add Silk to your local settings
In `api/src/backend/config/django/devel.py`, add at the end of the file:
```python
# Silk profiler (temporary - remove after profiling)
INSTALLED_APPS += ["silk"] # noqa: F405
MIDDLEWARE += ["silk.middleware.SilkyMiddleware"] # noqa: F405
```
#### 2. Add Silk URLs
In `api/src/backend/api/v1/urls.py`, add at the end:
```python
from django.conf import settings
from django.urls import include, path

if settings.DEBUG:
    urlpatterns += [path("silk/", include("silk.urls", namespace="silk"))]
```
#### 3. Run Silk migrations
```bash
cd api/src/backend
poetry run python manage.py migrate --database admin
```
#### 4. Access Silk
Start the development server and navigate to `http://localhost:8000/api/v1/silk/`
### Using Silk
1. Make requests to the endpoint you want to profile
2. Open Silk UI and find your request
3. Click on the request to see all SQL queries executed
4. For each query, you can see:
- Execution time
- Number of similar queries (N+1 detection)
- The actual SQL with parameters
- **EXPLAIN output** (click on a query to see it)
### Disabling Silk
After profiling, **remove the changes** you made to `devel.py` and `urls.py`. Don't commit Silk configuration to the repository.
## Manual Query Analysis with EXPLAIN ANALYZE
For quick checks or when you need more control, you can run `EXPLAIN ANALYZE` directly.
### 1. Get Your Query
#### Option A: Using Django Shell with RLS
This approach mirrors how queries run in production with Row Level Security enabled:
```bash
cd api/src/backend
poetry run python manage.py shell
```
```python
from django.db import connection
from api.db_utils import rls_transaction
from api.models import Finding

tenant_id = "your-tenant-uuid"

with rls_transaction(tenant_id):
    # Build your queryset
    qs = Finding.objects.filter(status="FAIL").order_by("-inserted_at")[:25]
    # Force evaluation
    list(qs)
    # Get the SQL (connection.queries is populated only when DEBUG=True)
    print(connection.queries[-1]["sql"])
```
#### Option B: Print Query Without Execution
```python
from api.models import Finding
queryset = Finding.objects.filter(status="FAIL")
print(queryset.query)
```
> **Note:** This won't include RLS filters, so the actual production query will differ.
#### Option C: Enable SQL Logging
Set `DJANGO_LOGGING_LEVEL=DEBUG` in your environment:
```bash
DJANGO_LOGGING_LEVEL=DEBUG poetry run python manage.py runserver
```
### 2. Run EXPLAIN ANALYZE
Connect to PostgreSQL and run:
```sql
EXPLAIN ANALYZE <your_query>;
```
Or with more details:
```sql
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT) <your_query>;
```
#### Running EXPLAIN with RLS Context
To test with RLS enabled (as it runs in production), set the tenant context first:
```sql
-- Set tenant context
SELECT set_config('api.tenant_id', 'your-tenant-uuid', TRUE);
-- Then run your EXPLAIN ANALYZE
EXPLAIN ANALYZE SELECT * FROM findings WHERE status = 'FAIL' LIMIT 25;
```
### 3. Interpret the Results
#### Good Signs (Index is being used)
```
Index Scan using findings_tenant_status_idx on findings
Index Cond: ((tenant_id = '...'::uuid) AND (status = 'FAIL'))
Rows Removed by Filter: 0
Actual Rows: 150
Planning Time: 0.5 ms
Execution Time: 2.3 ms
```
#### Bad Signs (Sequential scan - no index)
```
Seq Scan on findings
Filter: ((tenant_id = '...'::uuid) AND (status = 'FAIL'))
Rows Removed by Filter: 999850
Actual Rows: 150
Planning Time: 0.3 ms
Execution Time: 450.2 ms
```
## Quick Reference: What to Look For
| What You See | Meaning | Action |
|--------------|---------|--------|
| `Index Scan` | Index is being used | Good, no action needed |
| `Index Only Scan` | Even better - data comes from index only | Good, no action needed |
| `Bitmap Index Scan` | Index used, results combined | Usually fine |
| `Seq Scan` on large tables | Full table scan, no index | **Needs investigation** |
| `Rows Removed by Filter: <high number>` | Fetching too many rows | **Query or index issue** |
| High `Execution Time` | Query is slow | **Needs optimization** |
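The red flags in this table can be partially automated. The helper below is an illustrative sketch (not part of the codebase) that scans the text output of `EXPLAIN ANALYZE` for the warning signs listed above; the threshold is an assumed default:

```python
import re


def check_plan(plan_text: str, max_rows_removed: int = 10_000) -> list[str]:
    """Return warnings for the red flags listed in the table above."""
    warnings = []
    if "Seq Scan" in plan_text:
        warnings.append("Seq Scan detected: verify the table is small or add an index")
    for match in re.finditer(r"Rows Removed by Filter: (\d+)", plan_text):
        if int(match.group(1)) > max_rows_removed:
            warnings.append(f"High Rows Removed by Filter: {match.group(1)}")
    return warnings


bad_plan = """Seq Scan on findings
  Filter: ((tenant_id = '...'::uuid) AND (status = 'FAIL'))
  Rows Removed by Filter: 999850"""
print(check_plan(bad_plan))  # two warnings: Seq Scan + high rows removed
```

High `Execution Time` still needs human judgment, since what counts as "slow" depends on the endpoint.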
## Common Issues and Fixes
### 1. Missing Index
**Problem:** `Seq Scan` on a filtered column
```sql
-- Bad: No index on status
EXPLAIN ANALYZE SELECT * FROM findings WHERE status = 'FAIL';
-- Shows: Seq Scan on findings
```
**Fix:** Add an index
```python
# In your model
class Meta:
    indexes = [
        models.Index(fields=['status'], name='findings_status_idx'),
    ]
```
### 2. Index Not Used Due to Type Mismatch
**Problem:** Index exists but PostgreSQL doesn't use it
```sql
-- If tenant_id is UUID but you're passing a string without cast
WHERE tenant_id = 'some-uuid-string'
```
**Fix:** Ensure proper type casting in your queries
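In Python code, one defensive option is to normalize identifiers to `uuid.UUID` before they reach the query layer. This is a sketch; whether it is strictly needed depends on your driver and field types:

```python
import uuid


def as_uuid(value: "str | uuid.UUID") -> uuid.UUID:
    """Normalize an identifier to uuid.UUID.

    Binding a real uuid parameter avoids implicit text comparisons that
    can prevent PostgreSQL from using a uuid-typed index.
    """
    return value if isinstance(value, uuid.UUID) else uuid.UUID(value)


tenant_id = as_uuid("12345678-1234-5678-1234-567812345678")
# e.g. Finding.objects.filter(tenant_id=tenant_id)  # hypothetical usage
print(tenant_id)
```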
### 3. Index Not Used Due to Function Call
**Problem:** Wrapping column in a function prevents index usage
```sql
-- Bad: Index on inserted_at won't be used
WHERE DATE(inserted_at) = '2024-01-01'
-- Good: Use range instead
WHERE inserted_at >= '2024-01-01' AND inserted_at < '2024-01-02'
```
### 4. Wrong Index for Sorting
**Problem:** Query is sorted but index doesn't match sort order
```sql
-- If you have ORDER BY inserted_at DESC
-- You need an index with DESC or PostgreSQL will sort in memory
```
**Fix:** Create index with matching sort order
```python
class Meta:
    indexes = [
        models.Index(fields=['-inserted_at'], name='findings_inserted_desc_idx'),
    ]
```
### 5. Composite Index Column Order
**Problem:** Index exists but columns are in wrong order
```sql
-- Index on (tenant_id, scan_id)
-- This query WON'T use the index efficiently:
WHERE scan_id = '...'
-- This query WILL use the index:
WHERE tenant_id = '...' AND scan_id = '...'
```
**Rule:** The leftmost columns in a composite index must be in your WHERE clause.
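The leftmost-prefix rule can be expressed as a small predicate. This is a deliberately simplified model; the real planner also weighs statistics, operators, and selectivity:

```python
def uses_leftmost_prefix(index_columns: list, where_columns: set) -> bool:
    """Simplified model of PostgreSQL's leftmost-prefix rule for B-tree indexes.

    The index helps only when its leading column is constrained in the
    WHERE clause; trailing columns alone are not enough.
    """
    return bool(index_columns) and index_columns[0] in set(where_columns)


idx = ["tenant_id", "scan_id"]
print(uses_leftmost_prefix(idx, {"scan_id"}))               # False
print(uses_leftmost_prefix(idx, {"tenant_id", "scan_id"}))  # True
```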
## RLS (Row Level Security) Considerations
Prowler uses Row Level Security via PostgreSQL's `set_config`. When analyzing queries, remember:
1. RLS policies add implicit `WHERE tenant_id = current_tenant()` to queries
2. Always test with RLS enabled (how it runs in production)
3. Ensure `tenant_id` is the **first column** in composite indexes
### Using rls_transaction in Code
The `rls_transaction` context manager from `api.db_utils` sets the tenant context for all queries within its scope:
```python
from api.db_utils import rls_transaction
from api.models import Finding
with rls_transaction(tenant_id):
    # All queries here will have RLS applied
    qs = Finding.objects.filter(status="FAIL")
    list(qs)  # Execute
```
### Using RLS in Raw SQL (psql)
```sql
-- Set tenant context for the transaction
SELECT set_config('api.tenant_id', 'your-tenant-uuid', TRUE);
-- Now RLS policies will filter by this tenant
EXPLAIN ANALYZE SELECT * FROM findings WHERE status = 'FAIL';
```
### Index Design for RLS
Since every query includes `tenant_id` via RLS, your composite indexes should **always start with `tenant_id`**:
```python
class Meta:
    indexes = [
        # Good: tenant_id first
        models.Index(fields=['tenant_id', 'status', 'severity']),
        # Bad: tenant_id not first - RLS queries won't use this efficiently
        models.Index(fields=['status', 'tenant_id']),
    ]
```
## Test Data Requirements
The amount of test data you need depends on what you're testing. PostgreSQL's query planner considers table statistics, index definitions, and data distribution when choosing execution plans.
### Important Considerations
1. **Small datasets may not use indexes**: PostgreSQL may choose a sequential scan over an index scan if the table is small enough that scanning it directly is faster. This is expected behavior.
2. **Data must exist in the tables you're querying**: If your endpoint queries `findings`, `resources`, `scans`, or other tables, ensure those tables have data. Use the `findings` management command to generate test data:
```bash
cd api/src/backend
poetry run python manage.py findings \
  --tenant <TENANT_ID> \
  --findings 1000 \
  --resources 500 \
  --batch 500
```
3. **Update table statistics**: After inserting test data, run `ANALYZE` to update PostgreSQL's statistics:
```sql
ANALYZE findings;
ANALYZE resources;
ANALYZE scans;
-- Add other tables as needed
```
4. **Test with realistic data distribution**: If your query filters by a specific value (e.g., `status='FAIL'`), ensure your test data includes a realistic mix of values.
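One way to generate such a mix yourself is weighted sampling. The 70/25/5 split below is an illustrative assumption, not a measured production ratio:

```python
import random

random.seed(0)  # reproducible test data

# Hypothetical distribution: mostly PASS, a meaningful share of FAIL, few MANUAL
statuses = random.choices(["PASS", "FAIL", "MANUAL"], weights=[70, 25, 5], k=1000)
counts = {s: statuses.count(s) for s in ("PASS", "FAIL", "MANUAL")}
print(counts)
```

Skewing the mix matters because a filter on a rare value (e.g. `MANUAL`) is selective enough to favor an index, while a filter on the dominant value may legitimately choose a sequential scan.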
### When Index Usage Matters Most
Focus on validating index usage when:
- The table will have thousands or millions of rows in production
- The query is called frequently (list endpoints, dashboards)
- The query has multiple filters or joins
For small lookup tables or infrequently-called endpoints, sequential scans may be acceptable.
## Performance Checklist for PRs
Before submitting a PR that adds or modifies database queries:
- [ ] Profiled queries with Silk or `EXPLAIN ANALYZE`
- [ ] Verified indexes are being used (no unexpected `Seq Scan` on large tables)
- [ ] Checked `Rows Removed by Filter` is reasonable
- [ ] Tested with RLS enabled
- [ ] For critical endpoints: documented the query plan in the PR
## Useful Commands
### Update Table Statistics
```sql
ANALYZE findings;
ANALYZE resources;
```
### See Existing Indexes
```sql
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'findings';
```
### See Index Usage Stats
```sql
SELECT
    schemaname,
    relname AS table_name,
    indexrelname AS index_name,
    idx_scan,
    idx_tup_read,
    idx_tup_fetch
FROM pg_stat_user_indexes
WHERE relname = 'findings'
ORDER BY idx_scan DESC;
```
### Check Table Size
```sql
SELECT
    relname AS table_name,
    pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
WHERE relname IN ('findings', 'resources', 'scans')
ORDER BY pg_total_relation_size(relid) DESC;
```
## Working with Partitioned Tables
The `findings` and `resource_finding_mappings` tables are partitioned. When adding indexes, use the helper functions from `api.db_utils`:
### Adding Indexes to Partitions
```python
# In a migration file
from functools import partial

from django.db import migrations

from api.db_utils import create_index_on_partitions, drop_index_on_partitions


class Migration(migrations.Migration):
    atomic = False  # Required for CONCURRENTLY

    dependencies = [
        ("api", "XXXX_previous_migration"),
    ]

    operations = [
        migrations.RunPython(
            partial(
                create_index_on_partitions,
                parent_table="findings",
                index_name="my_new_idx",
                columns="tenant_id, status, severity",
                all_partitions=False,  # Only current/future partitions
            ),
            reverse_code=partial(
                drop_index_on_partitions,
                parent_table="findings",
                index_name="my_new_idx",
            ),
        ),
    ]
```
### Parameters
- `all_partitions=False` (default): Only creates indexes on current and future partitions. Use this for new indexes to avoid maintenance overhead on old data.
- `all_partitions=True`: Creates indexes on all partitions. Use when migrating critical existing indexes.
See [Partitions Documentation](./partitions.md) for more details on partitioning strategy.
## Further Reading
- [Django Silk Documentation](https://github.com/jazzband/django-silk)
- [PostgreSQL EXPLAIN Documentation](https://www.postgresql.org/docs/current/sql-explain.html)
- [Using EXPLAIN](https://www.postgresql.org/docs/current/using-explain.html)
- [Index Types in PostgreSQL](https://www.postgresql.org/docs/current/indexes-types.html)
- [Prowler Partitions Documentation](./partitions.md)

View File

@@ -19,17 +19,12 @@ class CheckRemediation(MinimalSerializerMixin, BaseModel):
default=None,
description="Terraform code snippet with best practices for remediation",
)
nativeiac: str | None = Field(
default=None,
description="Native Infrastructure as Code code snippet with best practices for remediation",
recommendation_text: str | None = Field(
default=None, description="Text description with best practices"
)
other: str | None = Field(
recommendation_url: str | None = Field(
default=None,
description="Other remediation code snippet with best practices for remediation, usually used for web interfaces or other tools",
)
recommendation: str | None = Field(
default=None,
description="Text description with general best recommended practices to avoid the issue",
description="URL to external remediation documentation",
)
@@ -38,6 +33,9 @@ class CheckMetadata(MinimalSerializerMixin, BaseModel):
model_config = ConfigDict(frozen=True)
check_id: str = Field(
description="Unique provider identifier for the security check (e.g., 's3_bucket_public_access')",
)
title: str = Field(
description="Human-readable title of the security check",
)
@@ -61,9 +59,9 @@ class CheckMetadata(MinimalSerializerMixin, BaseModel):
default=None,
description="Remediation guidance including CLI commands and recommendations",
)
additional_urls: list[str] = Field(
default_factory=list,
description="List of additional URLs related to the check",
related_url: str | None = Field(
default=None,
description="URL to additional documentation or references",
)
categories: list[str] = Field(
default_factory=list,
@@ -81,23 +79,23 @@ class CheckMetadata(MinimalSerializerMixin, BaseModel):
recommendation = remediation_data.get("recommendation", {})
remediation = CheckRemediation(
cli=code["cli"],
terraform=code["terraform"],
nativeiac=code["nativeiac"],
other=code["other"],
recommendation=recommendation["text"],
cli=code.get("cli"),
terraform=code.get("terraform"),
recommendation_text=recommendation.get("text"),
recommendation_url=recommendation.get("url"),
)
return cls(
check_id=data["checkid"],
title=data["checktitle"],
description=data["description"],
provider=data["provider"],
risk=data["risk"],
risk=data.get("risk"),
service=data["servicename"],
resource_type=data["resourcetype"],
remediation=remediation,
additional_urls=data["additionalurls"],
categories=data["categories"],
related_url=data.get("relatedurl"),
categories=data.get("categories", []),
)
@@ -118,36 +116,35 @@ class SimplifiedFinding(MinimalSerializerMixin, BaseModel):
severity: Literal["critical", "high", "medium", "low", "informational"] = Field(
description="Severity level of the finding",
)
check_id: str = Field(
description="ID of the security check that generated this finding",
check_metadata: CheckMetadata = Field(
description="Metadata about the security check that generated this finding",
)
status_extended: str = Field(
description="Extended status information providing additional context",
)
delta: Literal["new", "changed"] | None = Field(
default=None,
delta: Literal["new", "changed"] = Field(
description="Change status: 'new' (not seen before), 'changed' (modified since last scan), or None (unchanged)",
)
muted: bool | None = Field(
default=None,
muted: bool = Field(
description="Whether this finding has been muted/suppressed by the user",
)
muted_reason: str | None = Field(
muted_reason: str = Field(
default=None,
description="Reason provided when muting this finding",
description="Reason provided when muting this finding (3-500 chars if muted)",
)
@classmethod
def from_api_response(cls, data: dict) -> "SimplifiedFinding":
"""Transform JSON:API finding response to simplified format."""
attributes = data["attributes"]
check_metadata = attributes["check_metadata"]
return cls(
id=data["id"],
uid=attributes["uid"],
status=attributes["status"],
severity=attributes["severity"],
check_id=attributes["check_metadata"]["checkid"],
check_metadata=CheckMetadata.from_api_response(check_metadata),
status_extended=attributes["status_extended"],
delta=attributes["delta"],
muted=attributes["muted"],
@@ -182,9 +179,6 @@ class DetailedFinding(SimplifiedFinding):
default_factory=list,
description="List of UUIDs for cloud resources associated with this finding",
)
check_metadata: CheckMetadata = Field(
description="Metadata about the security check that generated this finding",
)
@classmethod
def from_api_response(cls, data: dict) -> "DetailedFinding":
@@ -210,7 +204,6 @@ class DetailedFinding(SimplifiedFinding):
uid=attributes["uid"],
status=attributes["status"],
severity=attributes["severity"],
check_id=check_metadata["checkid"],
check_metadata=CheckMetadata.from_api_response(check_metadata),
status_extended=attributes.get("status_extended"),
delta=attributes.get("delta"),

View File

@@ -19,9 +19,9 @@ class FindingsTools(BaseTool):
"""Tools for security findings operations.
Provides tools for:
- search_security_findings: Fast and lightweight searching across findings
- get_finding_details: Get complete details for a specific finding
- get_findings_overview: Get aggregate statistics and trends across all findings
- Searching and filtering security findings
- Getting detailed finding information
- Viewing findings overview/statistics
"""
async def search_security_findings(
@@ -90,27 +90,27 @@ class FindingsTools(BaseTool):
) -> dict[str, Any]:
"""Search and filter security findings across all cloud providers with rich filtering capabilities.
IMPORTANT: This tool returns LIGHTWEIGHT findings. Use this for fast searching and filtering across many findings.
For complete details use prowler_app_get_finding_details on specific findings.
This is the primary tool for browsing and filtering security findings. Returns lightweight findings
optimized for searching across large result sets. For detailed information about a specific finding,
use get_finding_details.
Default behavior:
- Returns latest findings from most recent scans (no date parameters needed)
- Filters to FAIL status only (security issues found)
- Returns 50 results per page
- Returns 100 results per page
Date filtering:
- Without dates: queries findings from the most recent completed scan across all providers (most efficient)
- With dates: queries historical findings (2-day maximum range between date_from and date_to)
- Without dates: queries findings from the most recent completed scan across all providers (most efficient). This returns the latest snapshot of findings, not a time-based query.
- With dates: queries historical findings (2-day maximum range)
Each finding includes:
- Core identification: id (UUID for get_finding_details), uid, check_id
- Security context: status (FAIL/PASS/MANUAL), severity (critical/high/medium/low/informational)
- State tracking: delta (new/changed/unchanged), muted (boolean), muted_reason
- Extended details: status_extended with additional context
- Core identification: id, uid, check_id
- Security context: status, severity, check_metadata (title, description, remediation)
- State tracking: delta (new/changed), muted status
- Extended details: status_extended for additional context
Workflow:
1. Use this tool to search and filter findings by severity, status, provider, service, region, etc.
2. Use prowler_app_get_finding_details with the finding 'id' to get complete information about the finding
Returns:
Paginated list of simplified findings with total count and pagination metadata
"""
# Validate page_size parameter
self.api_client.validate_page_size(page_size)
@@ -185,39 +185,21 @@ class FindingsTools(BaseTool):
) -> dict[str, Any]:
"""Retrieve comprehensive details about a specific security finding by its ID.
IMPORTANT: This tool returns COMPLETE finding details.
Use this after finding a specific finding via prowler_app_search_security_findings
This tool provides MORE detailed information than search_security_findings. Use this when you need
to deeply analyze a specific finding or understand its complete context and history.
This tool provides ALL information that prowler_app_search_security_findings returns PLUS:
1. Check Metadata (information about the check script that generated the finding):
- title: Human-readable phrase used to summarize the check
- description: Detailed explanation of what the check validates and why it is important
- risk: What could happen if this check fails
- remediation: Complete remediation guidance including step-by-step instructions and code snippets with best practices to fix the issue:
* cli: Command-line commands to fix the issue
* terraform: Terraform code snippets with best practices
* nativeiac: Provider native Infrastructure as Code code snippets with best practices to fix the issue
* other: Other remediation code snippets with best practices, usually used for web interfaces or other tools
* recommendation: Text description with general best recommended practices to avoid the issue
- provider: Cloud provider (aws/azure/gcp/etc)
- service: Service name (s3/ec2/keyvault/etc)
- resource_type: Resource type being evaluated
- categories: Security categories this check belongs to
- additional_urls: List of additional URLs related to the check
2. Temporal Metadata:
- inserted_at: When this finding was first inserted into database
- updated_at: When this finding was last updated
- first_seen_at: When this finding was first detected across all scans
3. Relationships:
- scan_id: UUID of the scan that generated this finding
- resource_ids: List of UUIDs for cloud resources associated with this finding
Additional information compared to search_security_findings:
- Temporal metadata: when the finding was first seen, inserted, and last updated
- Scan relationship: ID of the scan that generated this finding
- Resource relationships: IDs of all cloud resources associated with this finding
Workflow:
1. Use prowler_app_search_security_findings to browse and filter findings
2. Use this tool with the finding 'id' to get remediation guidance and complete context
1. Use search_security_findings to browse and filter across many findings
2. Use get_finding_details to drill down into specific findings of interest
Returns:
dict containing detailed finding with comprehensive security metadata, temporal information,
and relationships to scans and resources
"""
params = {
# Return comprehensive fields including temporal metadata
@@ -243,31 +225,26 @@ class FindingsTools(BaseTool):
description="Filter statistics by cloud provider. Multiple values allowed. If empty, all providers are returned. For valid values, please refer to Prowler Hub/Prowler Documentation that you can also find in form of tools in this MCP Server.",
),
) -> dict[str, Any]:
"""Get aggregate statistics and trends about security findings as a markdown report.
"""Get high-level statistics about security findings formatted as a human-readable markdown report.
This tool provides a HIGH-LEVEL OVERVIEW without retrieving individual findings. Use this when you
need to understand the overall security posture, trends, or remediation progress across all findings.
Use this tool to get a quick overview of your security posture without retrieving individual findings.
Perfect for understanding trends, identifying areas of concern, and tracking improvements over time.
The markdown report includes:
The report includes:
- Summary statistics: total findings, fail/pass/muted counts with percentages
- Delta analysis: breakdown of new vs changed findings
- Trending information: how findings are evolving over time
1. Summary Statistics:
- Total number of findings
- Failed checks (security issues) with percentage
- Passed checks (no issues) with percentage
- Muted findings (user-suppressed) with percentage
Output format: Markdown-formatted report ready to present to users or include in documentation.
2. Delta Analysis (Change Tracking):
- New findings: never seen before in previous scans
* Broken down by: new failures, new passes, new muted
- Changed findings: status changed since last scan
* Broken down by: changed to fail, changed to pass, changed to muted
- Unchanged findings: same status as previous scan
Use cases:
- Quick security posture assessment
- Tracking remediation progress over time
- Identifying which providers have most issues
- Understanding finding trends (improving or degrading)
This helps answer questions like:
- "What's my overall security posture?"
- "How many critical security issues do I have?"
- "Are we improving or getting worse over time?"
- "How many new security issues appeared since last scan?"
Returns:
Dictionary with 'report' key containing markdown-formatted summary statistics
"""
params = {
# Return only LLM-relevant aggregate statistics

View File

@@ -267,10 +267,6 @@ export function RiskPlotClient({ data }: RiskPlotClientProps) {
<h3 className="text-text-neutral-primary text-lg font-semibold">
Risk Plot
</h3>
<p className="text-text-neutral-tertiary mt-1 text-xs">
Threat Score is severity-weighted, not quantity-based. Higher
severity findings have greater impact on the score.
</p>
</div>
<div className="relative min-h-[400px] w-full flex-1">
@@ -302,9 +298,9 @@ export function RiskPlotClient({ data }: RiskPlotClientProps) {
<YAxis
type="number"
dataKey="y"
name="Fail Findings"
name="Failed Findings"
label={{
value: "Fail Findings",
value: "Failed Findings",
angle: -90,
position: "left",
offset: 10,
@@ -342,7 +338,7 @@ export function RiskPlotClient({ data }: RiskPlotClientProps) {
{/* Interactive Legend - below chart */}
<div className="mt-4 flex flex-col items-start gap-2">
<p className="text-text-neutral-tertiary pl-2 text-xs">
Click to filter by provider
Click to filter by provider.
</p>
<ChartLegend
items={providers.map((p) => ({
@@ -367,7 +363,7 @@ export function RiskPlotClient({ data }: RiskPlotClientProps) {
{selectedPoint.name}
</h4>
<p className="text-text-neutral-tertiary text-xs">
Threat Score: {selectedPoint.x}% | Fail Findings:{" "}
Threat Score: {selectedPoint.x}% | Failed Findings:{" "}
{selectedPoint.y}
</p>
</div>

View File

@@ -7,6 +7,7 @@ import { getSeverityTrendsByTimeRange } from "@/actions/overview/severity-trends
import { LineChart } from "@/components/graphs/line-chart";
import { LineConfig, LineDataPoint } from "@/components/graphs/types";
import {
MUTED_COLOR,
SEVERITY_LEVELS,
SEVERITY_LINE_CONFIGS,
SeverityLevel,
@@ -39,9 +40,6 @@ export const FindingSeverityOverTime = ({
const params = new URLSearchParams();
params.set("filter[inserted_at]", point.date);
// Always filter by FAIL status since this chart shows failed findings
params.set("filter[status__in]", "FAIL");
// Add scan_ids filter
if (
point.scan_ids &&
@@ -99,6 +97,15 @@ export const FindingSeverityOverTime = ({
// Build line configurations from shared severity configs
const lines: LineConfig[] = [...SEVERITY_LINE_CONFIGS];
// Only add muted line if data contains it
if (data.some((item) => item.muted !== undefined)) {
lines.push({
dataKey: "muted",
color: MUTED_COLOR,
label: "Muted",
});
}
// Calculate x-axis interval based on data length to show all labels without overlap
const getXAxisInterval = (): number => {
const dataLength = data.length;

View File

@@ -0,0 +1,35 @@
"use client";
import { X } from "lucide-react";
import { useSearchParams } from "next/navigation";
import { Badge } from "@/components/shadcn";
import { useUrlFilters } from "@/hooks/use-url-filters";
export const ActiveCheckIdFilter = () => {
const searchParams = useSearchParams();
const { clearFilter } = useUrlFilters();
const checkIdFilter = searchParams.get("filter[check_id__in]");
if (!checkIdFilter) {
return null;
}
const checkIds = checkIdFilter.split(",");
const displayText =
checkIds.length > 1
? `${checkIds.length} Check IDs filtered`
: `Check ID: ${checkIds[0]}`;
return (
<Badge
variant="outline"
className="flex cursor-pointer items-center gap-1 px-3 py-1.5"
onClick={() => clearFilter("check_id__in")}
>
<span className="max-w-[200px] truncate text-sm">{displayText}</span>
<X className="size-3.5 shrink-0" />
</Badge>
);
};

View File

@@ -1,90 +0,0 @@
"use client";
import { X } from "lucide-react";
import { useSearchParams } from "next/navigation";
import { Badge } from "@/components/shadcn";
import { useUrlFilters } from "@/hooks/use-url-filters";
export interface ActiveFilterBadgeProps {
/**
* The filter key without the "filter[]" wrapper.
* Example: "scan__in", "check_id__in", "provider__in"
*/
filterKey: string;
/**
* Label to display before the value.
* Example: "Scan", "Check ID", "Provider"
*/
label: string;
/**
* Optional function to format a single value for display.
* Useful for truncating UUIDs, etc.
* Default: shows value as-is
*/
formatValue?: (value: string) => string;
/**
* Optional function to format the display when multiple values are selected.
* Default: "{count} {label}s filtered"
*/
formatMultiple?: (count: number, label: string) => string;
}
export const ActiveFilterBadge = ({
filterKey,
label,
formatValue = (v) => v,
formatMultiple = (count, lbl) => `${count} ${lbl}s filtered`,
}: ActiveFilterBadgeProps) => {
const searchParams = useSearchParams();
const { clearFilter } = useUrlFilters();
const fullKey = filterKey.startsWith("filter[")
? filterKey
: `filter[${filterKey}]`;
const filterValue = searchParams.get(fullKey);
if (!filterValue) {
return null;
}
const values = filterValue.split(",");
const displayText =
values.length > 1
? formatMultiple(values.length, label)
: `${label}: ${formatValue(values[0])}`;
return (
<Badge
variant="outline"
className="flex cursor-pointer items-center gap-1 px-3 py-1.5"
onClick={() => clearFilter(filterKey)}
>
<span className="max-w-[200px] truncate text-sm">{displayText}</span>
<X className="size-3.5 shrink-0" />
</Badge>
);
};
/**
* Pre-configured filter badges for common use cases
*/
export const ScanFilterBadge = () => (
<ActiveFilterBadge
filterKey="scan__in"
label="Scan"
formatValue={(id) => `${id.slice(0, 8)}...`}
/>
);
export const CheckIdFilterBadge = () => (
<ActiveFilterBadge
filterKey="check_id__in"
label="Check ID"
formatMultiple={(count) => `${count} Check IDs filtered`}
/>
);

View File

@@ -1,4 +1,4 @@
export * from "./active-filter-badge";
export * from "./active-check-id-filter";
export * from "./clear-filters-button";
export * from "./custom-account-selection";
export * from "./custom-checkbox-muted-findings";

View File

@@ -68,31 +68,10 @@ const CustomLineTooltip = ({
 const typedPayload = payload as unknown as TooltipPayloadItem[];
 // Filter payload if a line is selected or hovered
-const filteredPayload = filterLine
+const displayPayload = filterLine
   ? typedPayload.filter((item) => item.dataKey === filterLine)
   : typedPayload;
-// Sort by severity order: critical, high, medium, low, informational
-const severityOrder = [
-  "critical",
-  "high",
-  "medium",
-  "low",
-  "informational",
-] as const;
-const displayPayload = [...filteredPayload].sort((a, b) => {
-  const aIndex = severityOrder.indexOf(
-    a.dataKey as (typeof severityOrder)[number],
-  );
-  const bIndex = severityOrder.indexOf(
-    b.dataKey as (typeof severityOrder)[number],
-  );
-  // Items not in severityOrder go to the end
-  if (aIndex === -1) return 1;
-  if (bIndex === -1) return -1;
-  return aIndex - bIndex;
-});
 if (displayPayload.length === 0) {
   return null;
 }
@@ -117,17 +96,12 @@ const CustomLineTooltip = ({
 return (
   <div key={item.dataKey} className="space-y-1">
     <div className="flex items-center justify-between gap-4">
-      <div className="flex items-center gap-2">
-        <div
-          className="h-2 w-2 rounded-full"
-          style={{ backgroundColor: item.stroke }}
-        />
-        <span className="text-text-neutral-secondary text-sm">
-          {item.name}
-        </span>
-      </div>
-      <span className="text-text-neutral-primary text-sm font-medium">
+      <div className="flex items-center gap-2">
+        <div
+          className="h-2 w-2 rounded-full"
+          style={{ backgroundColor: item.stroke }}
+        />
+        <span className="text-text-neutral-primary text-sm">
         {item.value}
       </span>
     </div>
@@ -286,7 +260,7 @@ export function LineChart({
 <div className="mt-4 flex flex-col items-start gap-2">
   <p className="text-text-neutral-tertiary pl-2 text-xs">
-    Click to filter by severity
+    Click to filter by severity.
   </p>
   <ChartLegend
     items={legendItems}

View File

@@ -1,11 +1,9 @@
 "use client";

-import { ColumnDef, Row } from "@tanstack/react-table";
-import Link from "next/link";
+import { ColumnDef } from "@tanstack/react-table";
 import { useRouter, useSearchParams } from "next/navigation";

 import { InfoIcon } from "@/components/icons";
-import { Button } from "@/components/shadcn";
 import { TableLink } from "@/components/ui/custom";
 import { DateWithTime, EntityInfo } from "@/components/ui/entities";
 import { TriggerSheet } from "@/components/ui/sheet";
@@ -21,7 +19,7 @@ const getScanData = (row: { original: ScanProps }) => {
   return row.original;
 };

-const ScanDetailsCell = ({ row }: { row: Row<ScanProps> }) => {
+const ScanDetailsCell = ({ row }: { row: any }) => {
   const router = useRouter();
   const searchParams = useSearchParams();
   const scanId = searchParams.get("scanId");
@@ -194,28 +192,11 @@ export const ColumnGetScans: ColumnDef<ScanProps>[] = [
   ),
   cell: ({ row }) => {
     const {
-      id,
-      attributes: { unique_resource_count, state },
+      attributes: { unique_resource_count },
     } = getScanData(row);
-
-    const isCompleted = state === "completed";
-
-    if (!isCompleted) {
-      return (
-        <div className="flex w-fit items-center justify-center">
-          <span className="text-default-500 text-xs font-medium">
-            {unique_resource_count ?? "-"}
-          </span>
-        </div>
-      );
-    }
     return (
       <div className="flex w-fit items-center justify-center">
-        <Button asChild variant="link" size="sm" className="text-xs">
-          <Link href={`/resources?filter[scan__in]=${id}`}>
-            {unique_resource_count}
-          </Link>
-        </Button>
+        <span className="text-xs font-medium">{unique_resource_count}</span>
       </div>
     );
   },

View File

@@ -3,10 +3,7 @@
 import { useSearchParams } from "next/navigation";

 import { ComplianceScanInfo } from "@/components/compliance/compliance-header/compliance-scan-info";
-import {
-  CheckIdFilterBadge,
-  ScanFilterBadge,
-} from "@/components/filters/active-filter-badge";
+import { ActiveCheckIdFilter } from "@/components/filters/active-check-id-filter";
 import { ClearFiltersButton } from "@/components/filters/clear-filters-button";
 import {
   MultiSelect,
MultiSelect,
@@ -168,9 +165,8 @@ export const DataTableFilterCustom = ({
       </MultiSelect>
     );
   })}
-  <div className="flex flex-wrap items-center justify-start gap-2">
-    <ScanFilterBadge />
-    <CheckIdFilterBadge />
+  <div className="flex items-center justify-start gap-2">
+    <ActiveCheckIdFilter />
     <ClearFiltersButton />
   </div>
 </div>
</div>