Compare commits

20 Commits

| Author | SHA1 | Date |
|---|---|---|
|  | 43da3fb29e |  |
|  | e7a67eff5d |  |
|  | e33825747f |  |
|  | d919d979dd |  |
|  | 6534faf678 |  |
|  | 1aa91cf60f |  |
|  | dad84f0ee2 |  |
|  | 0d7c5f6ac5 |  |
|  | 431776bcfd |  |
|  | 0e8080f09c |  |
|  | e4b2950436 |  |
|  | 63174caf98 |  |
|  | 4e508b69c9 |  |
|  | 18cfb191f5 |  |
|  | b898f257f1 |  |
|  | cccb3a4b94 |  |
|  | ca50b24d77 |  |
|  | 7eb204fff0 |  |
|  | 56c370d3a4 |  |
|  | b0d8534907 |  |
```diff
@@ -77,9 +77,10 @@ jobs:
       - name: Safety
         if: steps.check-changes.outputs.any_changed == 'true'
-        run: poetry run safety check --ignore 79023,79027,86217
+        run: poetry run safety check --ignore 79023,79027,86217,71600
         # TODO: 79023 & 79027 knack ReDoS until `azure-cli-core` (via `cartography`) allows `knack` >=0.13.0
         # TODO: 86217 because `alibabacloud-tea-openapi == 0.4.3` don't let us upgrade `cryptography >= 46.0.0`
+        # TODO: 71600 CVE-2024-1135 false positive - fixed in gunicorn 22.0.0, project uses 23.0.0

       - name: Vulture
         if: steps.check-changes.outputs.any_changed == 'true'
```
```diff
@@ -128,7 +128,8 @@ repos:
         # TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
         # TODO: 79023 & 79027 knack ReDoS until `azure-cli-core` (via `cartography`) allows `knack` >=0.13.0
         # TODO: 86217 because `alibabacloud-tea-openapi == 0.4.3` don't let us upgrade `cryptography >= 46.0.0`
-        entry: bash -c 'safety check --ignore 70612,66963,74429,76352,76353,77744,77745,79023,79027,86217'
+        # TODO: 71600 CVE-2024-1135 false positive - fixed in gunicorn 22.0.0, project uses 23.0.0
+        entry: bash -c 'safety check --ignore 70612,66963,74429,76352,76353,77744,77745,79023,79027,86217,71600'
         language: system

       - id: vulture
```
```diff
@@ -2,24 +2,26 @@

 All notable changes to the **Prowler API** are documented in this file.

-## [1.24.0] (Prowler UNRELEASED)
+## [1.24.0] (Prowler v5.23.0)

 ### 🚀 Added

 - Pin all unpinned dependencies to exact versions to prevent supply chain attacks and ensure reproducible builds [(#10469)](https://github.com/prowler-cloud/prowler/pull/10469)
-- Filter RBAC role lookup by `tenant_id` to prevent cross-tenant privilege leak [(#10491)](https://github.com/prowler-cloud/prowler/pull/10491)
+- RBAC role lookup filtered by `tenant_id` to prevent cross-tenant privilege leak [(#10491)](https://github.com/prowler-cloud/prowler/pull/10491)
 - `VALKEY_SCHEME`, `VALKEY_USERNAME`, and `VALKEY_PASSWORD` environment variables to configure Celery broker TLS/auth connection details for Valkey/ElastiCache [(#10420)](https://github.com/prowler-cloud/prowler/pull/10420)
 - `Vercel` provider support [(#10190)](https://github.com/prowler-cloud/prowler/pull/10190)
 - Finding groups list and latest endpoints support `sort=delta`, ordering by `new_count` then `changed_count` so groups with the most new findings rank highest [(#10606)](https://github.com/prowler-cloud/prowler/pull/10606)
 - Finding groups list and latest endpoints support `sort=status`, ordering by aggregated status with the FAIL > PASS > MUTED priority [(#10628)](https://github.com/prowler-cloud/prowler/pull/10628)
 - Finding group resources endpoints (`/finding-groups/{check_id}/resources` and `/finding-groups/latest/{check_id}/resources`) now expose `finding_id` per row, pointing to the most recent matching Finding for each resource. UUIDv7 ordering guarantees `Max(finding__id)` resolves to the latest snapshot [(#10630)](https://github.com/prowler-cloud/prowler/pull/10630)
 - Handle CIS and CISA SCuBA compliance framework from google workspace [(#10629)](https://github.com/prowler-cloud/prowler/pull/10629)

 ### 🔄 Changed

 - Finding groups list/latest/resources now expose `status` ∈ `{FAIL, PASS, MANUAL}` and `muted: bool` as orthogonal fields. The aggregated `status` reflects the underlying check outcome regardless of mute state, and `muted=true` signals that every finding in the group/resource is muted. New `manual_count` is exposed alongside `pass_count`/`fail_count`, plus `pass_muted_count`/`fail_muted_count`/`manual_muted_count` siblings so clients can isolate the muted half of each status. The `new_*`/`changed_*` deltas are now broken down by status and mute state via 12 new counters (`new_fail_count`, `new_fail_muted_count`, `new_pass_count`, `new_pass_muted_count`, `new_manual_count`, `new_manual_muted_count` and the matching `changed_*` set). New `filter[muted]=true|false` and `sort=status` (FAIL > PASS > MANUAL) / `sort=muted` are supported. `filter[status]=MUTED` is no longer accepted [(#10630)](https://github.com/prowler-cloud/prowler/pull/10630)
 - Attack Paths: Periodic cleanup of stale scans with dead-worker detection via Celery inspect, marking orphaned `EXECUTING` scans as `FAILED` and recovering `graph_data_ready` [(#10387)](https://github.com/prowler-cloud/prowler/pull/10387)
 - Attack Paths: Replace `_provider_id` property with `_Provider_{uuid}` label for provider isolation, add regex-based label injection for custom queries [(#10402)](https://github.com/prowler-cloud/prowler/pull/10402)

 ### 🐞 Fixed

 - `reaggregate_all_finding_group_summaries_task` now refreshes finding group daily summaries for every `(provider, day)` combination instead of only the latest scan per provider, matching the unbounded scope of `mute_historical_findings_task`. Mute rule operations no longer leave older daily summaries drifting from the underlying muted findings [(#10630)](https://github.com/prowler-cloud/prowler/pull/10630)
 - Finding groups list/latest now apply computed status/severity filters and finding-level prefilters (delta, region, service, category, resource group, scan, resource type), plus `check_title` support for sort/filter consistency [(#10428)](https://github.com/prowler-cloud/prowler/pull/10428)
 - Populate compliance data inside `check_metadata` for findings, which was always returned as `null` [(#10449)](https://github.com/prowler-cloud/prowler/pull/10449)
 - 403 error for admin users listing tenants due to roles query not using the admin database connection [(#10460)](https://github.com/prowler-cloud/prowler/pull/10460)
```
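The Changed entry above packs a lot of API surface into one paragraph. A minimal sketch of the resulting attributes (field names from the entry itself, values hypothetical) shows how the orthogonal `status`/`muted` split and the per-status muted counters fit together:

```python
# Illustrative finding-group payload after this change (values hypothetical).
# `status` reflects the check outcome; `muted` is orthogonal to it.
attrs = {
    "status": "FAIL",           # FAIL > PASS > MANUAL, never "MUTED"
    "muted": True,              # every finding in the group is muted
    "pass_count": 0, "fail_count": 2, "manual_count": 0,
    "pass_muted_count": 0, "fail_muted_count": 2, "manual_muted_count": 0,
    "muted_count": 2,
    # ...plus new_count/changed_count and the 12 new delta counters...
}

# Invariant implied by the entry: the per-status muted counters partition
# the overall muted count.
assert attrs["muted_count"] == (
    attrs["pass_muted_count"]
    + attrs["fail_muted_count"]
    + attrs["manual_muted_count"]
)
# Actionable (non-muted) findings per status fall out by subtraction.
assert attrs["fail_count"] - attrs["fail_muted_count"] == 0
```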
```diff
@@ -25,7 +25,7 @@ dependencies = [
     "defusedxml==0.7.1",
     "gunicorn==23.0.0",
     "lxml==5.3.2",
-    "prowler @ git+https://github.com/prowler-cloud/prowler.git@master",
+    "prowler @ git+https://github.com/prowler-cloud/prowler.git@v5.23",
     "psycopg2-binary==2.9.9",
     "pytest-celery[redis] (==1.3.0)",
     "sentry-sdk[django] (==2.56.0)",
```
```diff
@@ -1115,13 +1115,14 @@ class FindingGroupAggregatedComputedFilter(FilterSet):
     STATUS_CHOICES = (
         ("FAIL", "Fail"),
         ("PASS", "Pass"),
-        ("MUTED", "Muted"),
+        ("MANUAL", "Manual"),
     )

     status = ChoiceFilter(method="filter_status", choices=STATUS_CHOICES)
     status__in = CharInFilter(method="filter_status_in", lookup_expr="in")
     severity = ChoiceFilter(method="filter_severity", choices=SeverityChoices)
     severity__in = CharInFilter(method="filter_severity_in", lookup_expr="in")
+    muted = BooleanFilter(field_name="muted")
     include_muted = BooleanFilter(method="filter_include_muted")

     def filter_status(self, queryset, name, value):
@@ -1198,7 +1199,7 @@ class FindingGroupAggregatedComputedFilter(FilterSet):
         if value is True:
             return queryset
         # include_muted=false: exclude fully-muted groups
-        return queryset.exclude(fail_count=0, pass_count=0, muted_count__gt=0)
+        return queryset.exclude(muted=True)


 class ProviderSecretFilter(FilterSet):
```
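With `MUTED` gone from the choices, three mute-related query params now coexist. A simplified sketch (not the actual view code) of how they map onto queryset operations, based on the filter diff above:

```python
# Rough mapping of the mute-related query params to queryset operations.
# `muted` is the boolean column added in this PR; `include_muted` only
# disables the default exclusion.
def apply_muted_params(queryset, params):
    if params.get("filter[muted]") == "true":
        return queryset.filter(muted=True)    # only fully-muted groups
    if params.get("filter[muted]") == "false":
        return queryset.filter(muted=False)   # only actionable groups
    if params.get("filter[include_muted]") == "true":
        return queryset                       # keep everything
    # Default: hide fully-muted groups.
    return queryset.exclude(muted=True)
```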
```diff
@@ -0,0 +1,95 @@
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+    dependencies = [
+        ("api", "0087_vercel_provider"),
+    ]
+
+    operations = [
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="manual_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="pass_muted_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="fail_muted_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="manual_muted_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="muted",
+            field=models.BooleanField(default=False),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="new_fail_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="new_fail_muted_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="new_pass_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="new_pass_muted_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="new_manual_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="new_manual_muted_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="changed_fail_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="changed_fail_muted_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="changed_pass_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="changed_pass_muted_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="changed_manual_count",
+            field=models.IntegerField(default=0),
+        ),
+        migrations.AddField(
+            model_name="findinggroupdailysummary",
+            name="changed_manual_muted_count",
+            field=models.IntegerField(default=0),
+        ),
+    ]
```
```diff
@@ -0,0 +1,31 @@
+from django.db import migrations
+from tasks.tasks import backfill_finding_group_summaries_task
+
+from api.db_router import MainRouter
+from api.rls import Tenant
+
+
+def trigger_backfill_task(apps, schema_editor):
+    """
+    Re-dispatch the finding-group backfill task for every tenant so the new
+    `manual_count` and `muted` columns added in 0088 get populated from the
+    last 10 days of completed scans.
+
+    The aggregator (`aggregate_finding_group_summaries`) recomputes every
+    column on each call, so it back-populates the new fields without touching
+    the existing ones beyond a normal upsert.
+    """
+    tenant_ids = Tenant.objects.using(MainRouter.admin_db).values_list("id", flat=True)
+
+    for tenant_id in tenant_ids:
+        backfill_finding_group_summaries_task.delay(tenant_id=str(tenant_id), days=10)
+
+
+class Migration(migrations.Migration):
+    dependencies = [
+        ("api", "0088_finding_group_status_muted_fields"),
+    ]
+
+    operations = [
+        migrations.RunPython(trigger_backfill_task, migrations.RunPython.noop),
+    ]
```
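One detail worth flagging here: the reverse operation is `migrations.RunPython.noop`, presumably because already-dispatched Celery tasks cannot be recalled and the columns they populate are dropped anyway when 0088 is reversed. This is the standard pattern for irreversible-but-safe data migrations; a minimal generic sketch:

```python
# Sketch of a data migration whose reverse is deliberately a no-op:
# forward dispatches work, rollback has nothing meaningful to undo.
from django.db import migrations


def forward(apps, schema_editor):
    """Dispatch backfill work; nothing to undo on rollback."""


class Migration(migrations.Migration):
    dependencies = [("api", "0088_finding_group_status_muted_fields")]

    operations = [
        migrations.RunPython(forward, migrations.RunPython.noop),
    ]
```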
```diff
@@ -1748,15 +1748,45 @@ class FindingGroupDailySummary(RowLevelSecurityProtectedModel):
     # Severity stored as integer for MAX aggregation (5=critical, 4=high, etc.)
     severity_order = models.SmallIntegerField(default=1)

-    # Finding counts
+    # Finding counts (inclusive of muted findings; use the `muted` flag to
+    # tell whether the group has any actionable findings).
     pass_count = models.IntegerField(default=0)
     fail_count = models.IntegerField(default=0)
+    manual_count = models.IntegerField(default=0)
     muted_count = models.IntegerField(default=0)

-    # Delta counts
+    # Status counts restricted to muted findings, so clients can isolate the
+    # muted half of each status (e.g. `pass_count - pass_muted_count` gives the
+    # actionable PASS findings).
+    pass_muted_count = models.IntegerField(default=0)
+    fail_muted_count = models.IntegerField(default=0)
+    manual_muted_count = models.IntegerField(default=0)
+
+    # Whether every finding for this (provider, check, day) is muted.
+    muted = models.BooleanField(default=False)
+
+    # Delta counts (non-muted, kept for convenience and as a "total" view).
     new_count = models.IntegerField(default=0)
     changed_count = models.IntegerField(default=0)
+
+    # Delta breakdown by (status, muted) so clients can answer questions like
+    # "how many new failing findings appeared in this scan?" without scanning
+    # the underlying findings table. Mirrors the existing pass/fail/manual
+    # naming, with `_muted_count` siblings tracking the muted half of each
+    # bucket explicitly.
+    new_fail_count = models.IntegerField(default=0)
+    new_fail_muted_count = models.IntegerField(default=0)
+    new_pass_count = models.IntegerField(default=0)
+    new_pass_muted_count = models.IntegerField(default=0)
+    new_manual_count = models.IntegerField(default=0)
+    new_manual_muted_count = models.IntegerField(default=0)
+    changed_fail_count = models.IntegerField(default=0)
+    changed_fail_muted_count = models.IntegerField(default=0)
+    changed_pass_count = models.IntegerField(default=0)
+    changed_pass_muted_count = models.IntegerField(default=0)
+    changed_manual_count = models.IntegerField(default=0)
+    changed_manual_muted_count = models.IntegerField(default=0)

     # Resource counts
     resources_fail = models.IntegerField(default=0)
     resources_total = models.IntegerField(default=0)
```
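The model comments describe how the legacy `new_count`/`changed_count` totals relate to the twelve new delta columns; a worked example with hypothetical numbers makes the relationship concrete:

```python
# Hypothetical delta breakdown for one summary row. The legacy `new_count`
# stays a non-muted total, while the new columns expose the full
# (status x muted) grid.
row = dict(
    new_count=2,
    new_fail_count=1, new_fail_muted_count=1,
    new_pass_count=1, new_pass_muted_count=0,
    new_manual_count=0, new_manual_muted_count=0,
)

# Non-muted buckets sum to the legacy total...
assert (
    row["new_fail_count"] + row["new_pass_count"] + row["new_manual_count"]
    == row["new_count"]
)
# ...and the full grid can only add muted findings on top of it.
full_grid = sum(v for k, v in row.items() if k != "new_count")
assert full_grid >= row["new_count"]  # here: 3 >= 2, one muted new FAIL
```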
```diff
@@ -15445,10 +15445,16 @@ class TestFindingGroupViewSet:
         # iam_password_policy has only PASS findings
         assert data[0]["attributes"]["status"] == "PASS"

-    def test_finding_groups_status_muted_all(
+    def test_finding_groups_fully_muted_group_reflects_underlying_status(
         self, authenticated_client, finding_groups_fixture
     ):
-        """Test that MUTED status returned when all findings are muted."""
+        """A fully-muted group still surfaces its underlying status (no MUTED).
+
+        rds_encryption has 2 muted FAIL findings, so the group must report
+        status=FAIL (the orthogonal `muted` boolean signals it isn't actionable).
+        The status×muted breakdown lets clients answer 'how many failing
+        findings are muted in this group'.
+        """
         response = authenticated_client.get(
             reverse("finding-group-list"),
             {"filter[inserted_at]": TODAY, "filter[check_id]": "rds_encryption"},
@@ -15456,8 +15462,21 @@ class TestFindingGroupViewSet:
         assert response.status_code == status.HTTP_200_OK
         data = response.json()["data"]
         assert len(data) == 1
-        # rds_encryption has all muted findings
-        assert data[0]["attributes"]["status"] == "MUTED"
+        attrs = data[0]["attributes"]
+        assert attrs["status"] == "FAIL"
+        assert attrs["muted"] is True
+        assert attrs["fail_count"] == 2
+        assert attrs["fail_muted_count"] == 2
+        assert attrs["pass_muted_count"] == 0
+        assert attrs["manual_muted_count"] == 0
+        assert attrs["muted_count"] == 2
+        # Sanity: the per-status muted counts must add up to muted_count.
+        assert (
+            attrs["pass_muted_count"]
+            + attrs["fail_muted_count"]
+            + attrs["manual_muted_count"]
+            == attrs["muted_count"]
+        )

     def test_finding_groups_status_filter(
         self, authenticated_client, finding_groups_fixture
```
```diff
@@ -15949,7 +15968,7 @@ class TestFindingGroupViewSet:
         "extra_filters",
         [
             {},
-            {"filter[muted]": "include"},
+            {"filter[delta]": "new"},
         ],
         ids=["summary_path", "finding_level_path"],
     )
@@ -15967,7 +15986,8 @@ class TestFindingGroupViewSet:

         Parametrized to cover both aggregation paths:
         - summary_path: default, uses _CheckTitleToCheckIdMixin on summaries
-        - finding_level_path: filter[muted]=include forces CommonFindingFilters
+        - finding_level_path: filter[delta]=new forces _aggregate_findings via
+          CommonFindingFilters (delta is finding-level, not summary-level)
         """
         params = {
             "filter[inserted_at]": TODAY,
```
```diff
@@ -16872,68 +16892,6 @@ class TestFindingGroupViewSet:
         asc_keys = [delta_key(item) for item in response.json()["data"]]
         assert asc_keys == sorted(asc_keys)

-    @pytest.mark.parametrize(
-        "endpoint_name", ["finding-group-list", "finding-group-latest"]
-    )
-    def test_finding_groups_sort_by_status(
-        self,
-        authenticated_client,
-        finding_groups_fixture,
-        endpoint_name,
-    ):
-        """Sort by status orders by aggregated status (FAIL > PASS > MUTED)."""
-        status_order = {"FAIL": 3, "PASS": 2, "MUTED": 1}
-
-        # Descending: FAIL groups first, then PASS
-        params = {"sort": "-status"}
-        if endpoint_name == "finding-group-list":
-            params["filter[inserted_at]"] = TODAY
-
-        response = authenticated_client.get(reverse(endpoint_name), params)
-        assert response.status_code == status.HTTP_200_OK
-        data = response.json()["data"]
-        assert len(data) > 0
-
-        desc_statuses = [item["attributes"]["status"] for item in data]
-        desc_keys = [status_order[s] for s in desc_statuses]
-        assert desc_keys == sorted(desc_keys, reverse=True)
-
-        # Ascending: PASS groups first, then FAIL
-        params["sort"] = "status"
-        response = authenticated_client.get(reverse(endpoint_name), params)
-        assert response.status_code == status.HTTP_200_OK
-        asc_statuses = [
-            item["attributes"]["status"] for item in response.json()["data"]
-        ]
-        asc_keys = [status_order[s] for s in asc_statuses]
-        assert asc_keys == sorted(asc_keys)
-
-    @pytest.mark.parametrize(
-        "endpoint_name", ["finding-group-list", "finding-group-latest"]
-    )
-    def test_finding_groups_sort_by_status_includes_muted(
-        self,
-        authenticated_client,
-        finding_groups_fixture,
-        endpoint_name,
-    ):
-        """When include_muted is set, MUTED groups participate in status sort."""
-        status_order = {"FAIL": 3, "PASS": 2, "MUTED": 1}
-
-        params = {"sort": "status", "filter[include_muted]": "true"}
-        if endpoint_name == "finding-group-list":
-            params["filter[inserted_at]"] = TODAY
-
-        response = authenticated_client.get(reverse(endpoint_name), params)
-        assert response.status_code == status.HTTP_200_OK
-        data = response.json()["data"]
-
-        statuses = [item["attributes"]["status"] for item in data]
-        assert "MUTED" in statuses
-        assert statuses[0] == "MUTED"
-        keys = [status_order[s] for s in statuses]
-        assert keys == sorted(keys)
-
     def test_finding_groups_latest_ignores_date_filters(
         self, authenticated_client, finding_groups_fixture
     ):
```
```diff
@@ -16947,3 +16905,287 @@ class TestFindingGroupViewSet:
         data = response.json()["data"]
         # Should still return data, not filtered by the old date
         assert len(data) == 5
+
+    def test_finding_groups_status_choices_no_muted(
+        self, authenticated_client, finding_groups_fixture
+    ):
+        """Every returned group must have status ∈ {FAIL, PASS, MANUAL}."""
+        response = authenticated_client.get(
+            reverse("finding-group-list"),
+            {"filter[inserted_at]": TODAY},
+        )
+        assert response.status_code == status.HTTP_200_OK
+        statuses = {item["attributes"]["status"] for item in response.json()["data"]}
+        assert statuses, "fixture should produce at least one group"
+        assert statuses <= {"FAIL", "PASS", "MANUAL"}
+        assert "MUTED" not in statuses
+
+    def test_finding_groups_serializer_exposes_muted_and_manual_count(
+        self, authenticated_client, finding_groups_fixture
+    ):
+        """The /finding-groups payload must expose `muted`, `manual_count` and
+        the per-status muted siblings (`pass_muted_count`/`fail_muted_count`/
+        `manual_muted_count`)."""
+        response = authenticated_client.get(
+            reverse("finding-group-list"),
+            {"filter[inserted_at]": TODAY, "filter[check_id]": "iam_password_policy"},
+        )
+        assert response.status_code == status.HTTP_200_OK
+        attrs = response.json()["data"][0]["attributes"]
+        assert "muted" in attrs and isinstance(attrs["muted"], bool)
+        assert "manual_count" in attrs and isinstance(attrs["manual_count"], int)
+        assert attrs["muted"] is False  # iam_password_policy has only non-muted PASS
+        assert attrs["manual_count"] == 0
+        assert attrs["pass_muted_count"] == 0
+        assert attrs["fail_muted_count"] == 0
+        assert attrs["manual_muted_count"] == 0
+
+    @pytest.mark.parametrize(
+        "endpoint_name", ["finding-group-list", "finding-group-latest"]
+    )
+    def test_finding_groups_filter_status_muted_is_rejected(
+        self, authenticated_client, finding_groups_fixture, endpoint_name
+    ):
+        """`filter[status]=MUTED` is no longer a valid status value."""
+        params = {"filter[status]": "MUTED"}
+        if endpoint_name == "finding-group-list":
+            params["filter[inserted_at]"] = TODAY
+
+        response = authenticated_client.get(reverse(endpoint_name), params)
+        assert response.status_code == status.HTTP_400_BAD_REQUEST
+
+    @pytest.mark.parametrize(
+        "endpoint_name", ["finding-group-list", "finding-group-latest"]
+    )
+    def test_finding_groups_filter_muted_true(
+        self, authenticated_client, finding_groups_fixture, endpoint_name
+    ):
+        """`filter[muted]=true` returns only fully-muted groups."""
+        params = {"filter[muted]": "true"}
+        if endpoint_name == "finding-group-list":
+            params["filter[inserted_at]"] = TODAY
+
+        response = authenticated_client.get(reverse(endpoint_name), params)
+        assert response.status_code == status.HTTP_200_OK
+        data = response.json()["data"]
+        check_ids = {item["id"] for item in data}
+        # Only rds_encryption is fully muted in the fixture
+        assert check_ids == {"rds_encryption"}
+        assert all(item["attributes"]["muted"] is True for item in data)
+
+    @pytest.mark.parametrize(
+        "endpoint_name", ["finding-group-list", "finding-group-latest"]
+    )
+    def test_finding_groups_filter_muted_false(
+        self, authenticated_client, finding_groups_fixture, endpoint_name
+    ):
+        """`filter[muted]=false` returns only groups with actionable findings."""
+        params = {"filter[muted]": "false"}
+        if endpoint_name == "finding-group-list":
+            params["filter[inserted_at]"] = TODAY
+
+        response = authenticated_client.get(reverse(endpoint_name), params)
+        assert response.status_code == status.HTTP_200_OK
+        data = response.json()["data"]
+        check_ids = {item["id"] for item in data}
+        assert "rds_encryption" not in check_ids
+        assert check_ids == {
+            "s3_bucket_public_access",
+            "ec2_instance_public_ip",
+            "iam_password_policy",
+            "cloudtrail_enabled",
+        }
+        assert all(item["attributes"]["muted"] is False for item in data)
+
+    @pytest.mark.parametrize(
+        "endpoint_name", ["finding-group-list", "finding-group-latest"]
+    )
+    def test_finding_groups_sort_by_status(
+        self, authenticated_client, finding_groups_fixture, endpoint_name
+    ):
+        """sort=status orders by aggregated status (FAIL > PASS > MANUAL)."""
+        priority = {"FAIL": 3, "PASS": 2, "MANUAL": 1}
+        params = {"sort": "-status"}
+        if endpoint_name == "finding-group-list":
+            params["filter[inserted_at]"] = TODAY
+
+        response = authenticated_client.get(reverse(endpoint_name), params)
+        assert response.status_code == status.HTTP_200_OK
+        data = response.json()["data"]
+        assert data, "fixture should produce groups"
+
+        desc_keys = [priority[item["attributes"]["status"]] for item in data]
+        assert desc_keys == sorted(desc_keys, reverse=True)
+
+        params["sort"] = "status"
+        response = authenticated_client.get(reverse(endpoint_name), params)
+        assert response.status_code == status.HTTP_200_OK
+        asc_keys = [
+            priority[item["attributes"]["status"]] for item in response.json()["data"]
+        ]
+        assert asc_keys == sorted(asc_keys)
+
+    @pytest.mark.parametrize(
+        "endpoint_name", ["finding-group-list", "finding-group-latest"]
+    )
+    def test_finding_groups_sort_by_muted(
+        self, authenticated_client, finding_groups_fixture, endpoint_name
+    ):
+        """sort=muted orders by the boolean muted attribute."""
+        # Need include_muted=true so the fully-muted group is part of the result
+        params = {"sort": "-muted", "filter[include_muted]": "true"}
+        if endpoint_name == "finding-group-list":
+            params["filter[inserted_at]"] = TODAY
+
+        response = authenticated_client.get(reverse(endpoint_name), params)
+        assert response.status_code == status.HTTP_200_OK
+        data = response.json()["data"]
+        assert data, "fixture should produce groups"
+
+        muted_values = [item["attributes"]["muted"] for item in data]
+        # Descending boolean: True (1) before False (0)
+        assert muted_values == sorted(muted_values, reverse=True)
+
+    @pytest.mark.parametrize(
+        "endpoint_name", ["finding-group-list", "finding-group-latest"]
+    )
+    def test_finding_groups_delta_status_breakdown(
+        self, authenticated_client, finding_groups_fixture, endpoint_name
+    ):
+        """`new_*` and `changed_*` counters split by status and mute state.
+
+        s3_bucket_public_access has 1 new FAIL and 1 changed FAIL (both
+        non-muted) so the breakdown must reflect exactly that and the totals
+        must equal the sum of the buckets.
+        """
+        params = {"filter[check_id]": "s3_bucket_public_access"}
+        if endpoint_name == "finding-group-list":
+            params["filter[inserted_at]"] = TODAY
+
+        response = authenticated_client.get(reverse(endpoint_name), params)
+        assert response.status_code == status.HTTP_200_OK
+        data = response.json()["data"]
+        assert len(data) == 1
+        attrs = data[0]["attributes"]
+
+        assert attrs["new_fail_count"] == 1
+        assert attrs["new_fail_muted_count"] == 0
+        assert attrs["new_pass_count"] == 0
+        assert attrs["new_pass_muted_count"] == 0
+        assert attrs["new_manual_count"] == 0
+        assert attrs["new_manual_muted_count"] == 0
+        assert attrs["changed_fail_count"] == 1
+        assert attrs["changed_fail_muted_count"] == 0
+        assert attrs["changed_pass_count"] == 0
+        assert attrs["changed_pass_muted_count"] == 0
+        assert attrs["changed_manual_count"] == 0
+        assert attrs["changed_manual_muted_count"] == 0
+
+        new_total = (
+            attrs["new_fail_count"]
+            + attrs["new_fail_muted_count"]
+            + attrs["new_pass_count"]
+            + attrs["new_pass_muted_count"]
+            + attrs["new_manual_count"]
+            + attrs["new_manual_muted_count"]
+        )
+        changed_total = (
+            attrs["changed_fail_count"]
+            + attrs["changed_fail_muted_count"]
+            + attrs["changed_pass_count"]
+            + attrs["changed_pass_muted_count"]
+            + attrs["changed_manual_count"]
+            + attrs["changed_manual_muted_count"]
+        )
+        # The non-muted variants of the breakdown must sum to the legacy
+        # totals (new_count/changed_count are stored as non-muted).
+        assert (
+            attrs["new_fail_count"]
+            + attrs["new_pass_count"]
+            + attrs["new_manual_count"]
+            == attrs["new_count"]
+        )
+        assert (
+            attrs["changed_fail_count"]
+            + attrs["changed_pass_count"]
+            + attrs["changed_manual_count"]
+            == attrs["changed_count"]
+        )
+        # And the *full* breakdown (including the muted halves) is exposed
+        # so clients can also count muted-only deltas without losing data.
+        assert new_total >= attrs["new_count"]
+        assert changed_total >= attrs["changed_count"]
+
+    def test_finding_groups_resources_serializer_exposes_muted(
+        self, authenticated_client, finding_groups_fixture
+    ):
+        """The /finding-groups/<id>/resources payload must expose `muted`."""
+        response = authenticated_client.get(
+            reverse(
+                "finding-group-resources",
+                kwargs={"pk": "rds_encryption"},
+            ),
+            {"filter[inserted_at]": TODAY},
+        )
+        assert response.status_code == status.HTTP_200_OK
+        data = response.json()["data"]
+        assert data, "rds_encryption should expose its resources"
+        for item in data:
+            attrs = item["attributes"]
+            assert "muted" in attrs and isinstance(attrs["muted"], bool)
+            # rds_encryption has all muted findings
+            assert attrs["muted"] is True
+            # Status reflects the underlying check outcome (FAIL), not MUTED
+            assert attrs["status"] == "FAIL"
+
+    def test_finding_groups_resources_exposes_finding_id(
+        self, authenticated_client, finding_groups_fixture
+    ):
+        """The /resources payload exposes the most recent matching finding_id.
+
+        rds_encryption has 2 findings, one per resource. Each resource row must
+        report the UUID of its corresponding Finding (UUIDv7 ordering means
+        Max(finding__id) resolves to the latest snapshot in time).
+        """
+        response = authenticated_client.get(
+            reverse(
+                "finding-group-resources",
+                kwargs={"pk": "rds_encryption"},
+            ),
+            {"filter[inserted_at]": TODAY},
+        )
+        assert response.status_code == status.HTTP_200_OK
+        data = response.json()["data"]
+        assert data, "rds_encryption should expose its resources"
+
+        rds_finding_ids = {
+            str(f.id) for f in finding_groups_fixture if f.check_id == "rds_encryption"
+        }
+        assert rds_finding_ids, "fixture sanity"
+
+        for item in data:
+            attrs = item["attributes"]
+            assert "finding_id" in attrs
+            assert attrs["finding_id"] in rds_finding_ids
+
+    def test_finding_groups_latest_resources_exposes_finding_id(
+        self, authenticated_client, finding_groups_fixture
+    ):
+        """The /latest/.../resources payload also exposes finding_id."""
+        response = authenticated_client.get(
+            reverse(
+                "finding-group-latest_resources",
+                kwargs={"check_id": "rds_encryption"},
+            ),
+        )
+        assert response.status_code == status.HTTP_200_OK
+        data = response.json()["data"]
+        assert data, "rds_encryption should expose its resources via /latest"
+
+        rds_finding_ids = {
+            str(f.id) for f in finding_groups_fixture if f.check_id == "rds_encryption"
+        }
+        for item in data:
+            attrs = item["attributes"]
+            assert "finding_id" in attrs
+            assert attrs["finding_id"] in rds_finding_ids
```
```diff
@@ -4185,6 +4185,7 @@ class FindingGroupSerializer(BaseSerializerV1):
     check_description = serializers.CharField(required=False, allow_null=True)
     severity = serializers.CharField()
     status = serializers.CharField()
+    muted = serializers.BooleanField()
     impacted_providers = serializers.ListField(
         child=serializers.CharField(), required=False
     )
@@ -4192,9 +4193,25 @@ class FindingGroupSerializer(BaseSerializerV1):
     resources_total = serializers.IntegerField()
     pass_count = serializers.IntegerField()
     fail_count = serializers.IntegerField()
+    manual_count = serializers.IntegerField()
+    pass_muted_count = serializers.IntegerField()
+    fail_muted_count = serializers.IntegerField()
+    manual_muted_count = serializers.IntegerField()
     muted_count = serializers.IntegerField()
     new_count = serializers.IntegerField()
     changed_count = serializers.IntegerField()
+    new_fail_count = serializers.IntegerField()
+    new_fail_muted_count = serializers.IntegerField()
+    new_pass_count = serializers.IntegerField()
+    new_pass_muted_count = serializers.IntegerField()
+    new_manual_count = serializers.IntegerField()
+    new_manual_muted_count = serializers.IntegerField()
+    changed_fail_count = serializers.IntegerField()
+    changed_fail_muted_count = serializers.IntegerField()
+    changed_pass_count = serializers.IntegerField()
+    changed_pass_muted_count = serializers.IntegerField()
+    changed_manual_count = serializers.IntegerField()
+    changed_manual_muted_count = serializers.IntegerField()
     first_seen_at = serializers.DateTimeField(required=False, allow_null=True)
     last_seen_at = serializers.DateTimeField(required=False, allow_null=True)
     failing_since = serializers.DateTimeField(required=False, allow_null=True)
@@ -4214,8 +4231,10 @@ class FindingGroupResourceSerializer(BaseSerializerV1):
     id = serializers.UUIDField(source="resource_id")
     resource = serializers.SerializerMethodField()
     provider = serializers.SerializerMethodField()
+    finding_id = serializers.UUIDField()
     status = serializers.CharField()
     severity = serializers.CharField()
+    muted = serializers.BooleanField()
     delta = serializers.CharField(required=False, allow_null=True)
     first_seen_at = serializers.DateTimeField(required=False, allow_null=True)
     last_seen_at = serializers.DateTimeField(required=False, allow_null=True)
```
```diff
@@ -26,10 +26,11 @@ from config.settings.social_login import (
 )
 from dj_rest_auth.registration.views import SocialLoginView
 from django.conf import settings as django_settings
-from django.contrib.postgres.aggregates import ArrayAgg, StringAgg
+from django.contrib.postgres.aggregates import ArrayAgg, BoolAnd, StringAgg
 from django.contrib.postgres.search import SearchQuery
 from django.db import transaction
 from django.db.models import (
     BooleanField,
     Case,
     CharField,
     Count,
```
```diff
@@ -7076,9 +7077,29 @@ class FindingGroupViewSet(BaseRLSViewSet):
             severity_order=Max("severity_order"),
             pass_count=Sum("pass_count"),
             fail_count=Sum("fail_count"),
+            manual_count=Sum("manual_count"),
+            pass_muted_count=Sum("pass_muted_count"),
+            fail_muted_count=Sum("fail_muted_count"),
+            manual_muted_count=Sum("manual_muted_count"),
             muted_count=Sum("muted_count"),
+            # The group is muted only if every contributing daily summary is
+            # itself fully muted. BoolAnd returns False as soon as one row has
+            # at least one actionable finding.
+            muted=BoolAnd("muted"),
             new_count=Sum("new_count"),
             changed_count=Sum("changed_count"),
+            new_fail_count=Sum("new_fail_count"),
+            new_fail_muted_count=Sum("new_fail_muted_count"),
+            new_pass_count=Sum("new_pass_count"),
+            new_pass_muted_count=Sum("new_pass_muted_count"),
+            new_manual_count=Sum("new_manual_count"),
+            new_manual_muted_count=Sum("new_manual_muted_count"),
+            changed_fail_count=Sum("changed_fail_count"),
+            changed_fail_muted_count=Sum("changed_fail_muted_count"),
+            changed_pass_count=Sum("changed_pass_count"),
+            changed_pass_muted_count=Sum("changed_pass_muted_count"),
+            changed_manual_count=Sum("changed_manual_count"),
+            changed_manual_muted_count=Sum("changed_manual_muted_count"),
             resources_total=Sum("resources_total"),
             resources_fail=Sum("resources_fail"),
             impacted_providers_str=StringAgg(
```
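`BoolAnd` here maps to PostgreSQL's `BOOL_AND` aggregate: the group-level flag is the conjunction of the per-day flags. In plain Python terms, over hypothetical rows:

```python
# Plain-Python equivalent of BoolAnd("muted") over the daily summary rows
# that contribute to one check_id (hypothetical values):
daily_muted_flags = [True, True, False]
group_muted = all(daily_muted_flags)  # False: one day had actionable findings
```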
```diff
@@ -7104,39 +7125,95 @@ class FindingGroupViewSet(BaseRLSViewSet):
             output_field=IntegerField(),
         )

-        return queryset.values("check_id").annotate(
-            severity_order=Max(severity_case),
-            pass_count=Count("id", filter=Q(status="PASS", muted=False)),
-            fail_count=Count("id", filter=Q(status="FAIL", muted=False)),
-            muted_count=Count("id", filter=Q(muted=True)),
-            new_count=Count("id", filter=Q(delta="new", muted=False)),
-            changed_count=Count("id", filter=Q(delta="changed", muted=False)),
-            resources_total=Count("resources__id", distinct=True),
-            resources_fail=Count(
-                "resources__id",
-                distinct=True,
-                filter=Q(status="FAIL", muted=False),
-            ),
-            impacted_providers_str=StringAgg(
-                Cast("scan__provider__provider", CharField()),
-                delimiter=",",
-                distinct=True,
-                default="",
-            ),
-            agg_first_seen_at=Min("first_seen_at"),
-            agg_last_seen_at=Max("inserted_at"),
-            agg_failing_since=Min(
-                "first_seen_at", filter=Q(status="FAIL", muted=False)
-            ),
-            check_title=Coalesce(
-                Max(KeyTextTransform("checktitle", "check_metadata")),
-                Max(KeyTextTransform("CheckTitle", "check_metadata")),
-                Max(KeyTextTransform("Checktitle", "check_metadata")),
-            ),
-            check_description=Coalesce(
-                Max(KeyTextTransform("description", "check_metadata")),
-                Max(KeyTextTransform("Description", "check_metadata")),
-            ),
-        )
+        # `pass_count`, `fail_count` and `manual_count` count *every* finding
+        # for the check (muted or not) so the aggregated `status` reflects the
+        # underlying check outcome regardless of mute state. Whether the group
+        # is actionable is signalled by the orthogonal `muted` flag below.
+        return (
+            queryset.values("check_id")
+            .annotate(
+                severity_order=Max(severity_case),
+                pass_count=Count("id", filter=Q(status="PASS")),
+                fail_count=Count("id", filter=Q(status="FAIL")),
+                manual_count=Count("id", filter=Q(status="MANUAL")),
+                pass_muted_count=Count("id", filter=Q(status="PASS", muted=True)),
+                fail_muted_count=Count("id", filter=Q(status="FAIL", muted=True)),
+                manual_muted_count=Count("id", filter=Q(status="MANUAL", muted=True)),
+                muted_count=Count("id", filter=Q(muted=True)),
+                nonmuted_count=Count("id", filter=Q(muted=False)),
+                new_count=Count("id", filter=Q(delta="new", muted=False)),
+                changed_count=Count("id", filter=Q(delta="changed", muted=False)),
+                new_fail_count=Count(
+                    "id", filter=Q(delta="new", status="FAIL", muted=False)
+                ),
+                new_fail_muted_count=Count(
+                    "id", filter=Q(delta="new", status="FAIL", muted=True)
+                ),
+                new_pass_count=Count(
+                    "id", filter=Q(delta="new", status="PASS", muted=False)
+                ),
+                new_pass_muted_count=Count(
+                    "id", filter=Q(delta="new", status="PASS", muted=True)
+                ),
+                new_manual_count=Count(
+                    "id", filter=Q(delta="new", status="MANUAL", muted=False)
+                ),
+                new_manual_muted_count=Count(
+                    "id", filter=Q(delta="new", status="MANUAL", muted=True)
+                ),
+                changed_fail_count=Count(
+                    "id", filter=Q(delta="changed", status="FAIL", muted=False)
+                ),
+                changed_fail_muted_count=Count(
+                    "id", filter=Q(delta="changed", status="FAIL", muted=True)
+                ),
+                changed_pass_count=Count(
+                    "id", filter=Q(delta="changed", status="PASS", muted=False)
+                ),
+                changed_pass_muted_count=Count(
+                    "id", filter=Q(delta="changed", status="PASS", muted=True)
+                ),
+                changed_manual_count=Count(
+                    "id", filter=Q(delta="changed", status="MANUAL", muted=False)
+                ),
+                changed_manual_muted_count=Count(
+                    "id", filter=Q(delta="changed", status="MANUAL", muted=True)
+                ),
+                resources_total=Count("resources__id", distinct=True),
+                resources_fail=Count(
+                    "resources__id",
+                    distinct=True,
+                    filter=Q(status="FAIL", muted=False),
+                ),
+                impacted_providers_str=StringAgg(
+                    Cast("scan__provider__provider", CharField()),
+                    delimiter=",",
+                    distinct=True,
+                    default="",
+                ),
+                agg_first_seen_at=Min("first_seen_at"),
+                agg_last_seen_at=Max("inserted_at"),
+                agg_failing_since=Min(
+                    "first_seen_at", filter=Q(status="FAIL", muted=False)
+                ),
+                check_title=Coalesce(
+                    Max(KeyTextTransform("checktitle", "check_metadata")),
+                    Max(KeyTextTransform("CheckTitle", "check_metadata")),
+                    Max(KeyTextTransform("Checktitle", "check_metadata")),
+                ),
+                check_description=Coalesce(
+                    Max(KeyTextTransform("description", "check_metadata")),
+                    Max(KeyTextTransform("Description", "check_metadata")),
+                ),
+            )
+            .annotate(
+                # Group is muted only if it has zero non-muted findings.
+                muted=Case(
+                    When(nonmuted_count=0, then=Value(True)),
+                    default=Value(False),
+                    output_field=BooleanField(),
+                ),
+            )
+        )

     def _split_computed_aggregate_filters(
```
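Note the two-step `annotate()` at the end: the finding-level path counts non-muted rows first, then derives the boolean in a second, chained `annotate()` where the `Case()` can reference the `nonmuted_count` alias computed in the first. A minimal standalone sketch of the pattern (using the `Finding` model from this diff):

```python
# Minimal sketch: derive a boolean flag from an aggregate computed in a
# previous annotate() call by chaining a second annotate().
from django.db.models import BooleanField, Case, Count, Q, Value, When

qs = (
    Finding.objects.values("check_id")
    .annotate(nonmuted_count=Count("id", filter=Q(muted=False)))
    .annotate(
        muted=Case(
            When(nonmuted_count=0, then=Value(True)),   # nothing actionable
            default=Value(False),
            output_field=BooleanField(),
        )
    )
)
```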
```diff
@@ -7148,6 +7225,7 @@ class FindingGroupViewSet(BaseRLSViewSet):
             "status__in",
             "severity",
             "severity__in",
+            "muted",
             "include_muted",
         }
         finding_params = QueryDict(mutable=True)
```
```diff
@@ -7179,7 +7257,8 @@ class FindingGroupViewSet(BaseRLSViewSet):
         Post-process aggregation results to add computed fields.

         - Converts severity integer back to string
-        - Computes aggregated status (FAIL > PASS > MUTED)
+        - Computes aggregated status (FAIL > PASS > MANUAL); the orthogonal
+          ``muted`` boolean is already on the row from the SQL aggregation
         - Converts provider string to list
         """
         results = []
@@ -7197,13 +7276,19 @@ class FindingGroupViewSet(BaseRLSViewSet):
             if "agg_failing_since" in row:
                 row["failing_since"] = row.pop("agg_failing_since")

-            # Compute aggregated status
+            # Drop the helper count we use to derive `muted` in the
+            # finding-level aggregation path.
+            row.pop("nonmuted_count", None)
+
+            # Compute aggregated status. Counts are inclusive of muted findings,
+            # so the underlying check outcome surfaces even when the group is
+            # fully muted.
             if row.get("fail_count", 0) > 0:
                 row["status"] = "FAIL"
             elif row.get("pass_count", 0) > 0:
                 row["status"] = "PASS"
             else:
-                row["status"] = "MUTED"
+                row["status"] = "MANUAL"

             # Convert provider string to list
             providers_str = row.pop("impacted_providers_str", "") or ""
```
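The FAIL > PASS > MANUAL priority applied per row above is simple enough to state as a standalone function (sketch; the view applies this logic inline on each aggregated row):

```python
# The aggregated-status priority in isolation: any FAIL wins, then PASS,
# and a group with only MANUAL findings falls through to MANUAL.
def aggregated_status(fail_count: int, pass_count: int) -> str:
    if fail_count > 0:
        return "FAIL"
    if pass_count > 0:
        return "PASS"
    return "MANUAL"

assert aggregated_status(2, 5) == "FAIL"
assert aggregated_status(0, 5) == "PASS"
assert aggregated_status(0, 0) == "MANUAL"
```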
```diff
@@ -7220,9 +7305,11 @@ class FindingGroupViewSet(BaseRLSViewSet):
         "check_title": "check_title",
         "severity": "severity_order",
         "status": "status_order",
+        "muted": "muted",
         "delta": "delta_order",
         "fail_count": "fail_count",
         "pass_count": "pass_count",
+        "manual_count": "manual_count",
         "muted_count": "muted_count",
         "new_count": "new_count",
         "changed_count": "changed_count",
```
```diff
@@ -7277,7 +7364,7 @@ class FindingGroupViewSet(BaseRLSViewSet):
         return ordering

     def _apply_aggregated_computed_filters(self, queryset, computed_params: QueryDict):
-        """Apply computed filters (status/severity) on aggregated finding-group rows."""
+        """Apply computed filters (status/severity/muted) on aggregated finding-group rows."""
         if not computed_params:
             return queryset

@@ -7286,14 +7373,16 @@ class FindingGroupViewSet(BaseRLSViewSet):
             aggregated_status=Case(
                 When(fail_count__gt=0, then=Value("FAIL")),
                 When(pass_count__gt=0, then=Value("PASS")),
-                default=Value("MUTED"),
+                default=Value("MANUAL"),
                 output_field=CharField(),
             )
         )

-        # Exclude fully-muted groups by default unless include_muted is set
-        if "include_muted" not in computed_params:
-            queryset = queryset.exclude(fail_count=0, pass_count=0, muted_count__gt=0)
+        # Exclude fully-muted groups by default unless the caller has opted in
+        # via either `include_muted` or an explicit `muted` filter (the latter
+        # gives the caller direct control over the column).
+        if "include_muted" not in computed_params and "muted" not in computed_params:
+            queryset = queryset.exclude(muted=True)

         filterset = FindingGroupAggregatedComputedFilter(
             computed_params, queryset=queryset
```
```diff
@@ -7348,18 +7437,14 @@ class FindingGroupViewSet(BaseRLSViewSet):
                 provider_type=Max("resource__provider__provider"),
                 provider_uid=Max("resource__provider__uid"),
                 provider_alias=Max("resource__provider__alias"),
+                # status_order considers ALL findings (muted or not) so it
+                # surfaces FAIL/PASS/MANUAL based on the underlying check
+                # outcome. Whether the resource is actionable is signalled by
+                # the orthogonal `muted` flag below.
                 status_order=Max(
                     Case(
-                        When(
-                            finding__status="FAIL",
-                            finding__muted=False,
-                            then=Value(3),
-                        ),
-                        When(
-                            finding__status="PASS",
-                            finding__muted=False,
-                            then=Value(2),
-                        ),
+                        When(finding__status="FAIL", then=Value(3)),
+                        When(finding__status="PASS", then=Value(2)),
                         default=Value(1),
                         output_field=IntegerField(),
                     )
```
```diff
@@ -7391,6 +7476,8 @@ class FindingGroupViewSet(BaseRLSViewSet):
                 ),
                 first_seen_at=Min("finding__first_seen_at"),
                 last_seen_at=Max("finding__inserted_at"),
+                # True only if every finding for this resource+check is muted.
+                muted=BoolAnd("finding__muted"),
                 # Max() on muted_reason / check_metadata is safe because
                 # all findings for the same resource+check share identical
                 # values (mute rules and metadata are applied per-check).
@@ -7398,6 +7485,12 @@ class FindingGroupViewSet(BaseRLSViewSet):
                 resource_group=Max(
                     KeyTextTransform("resourcegroup", "finding__check_metadata")
                 ),
+                # Most recent matching Finding for this (resource, check):
+                # Finding.id is a UUIDv7 (time-ordered in its high 48 bits).
+                # Cast to text first because PostgreSQL has no built-in
+                # `max(uuid)` aggregate; on the canonical lowercase form a
+                # lexicographic Max() still resolves to the latest snapshot.
+                finding_id=Max(Cast("finding__id", output_field=CharField())),
             )
             .filter(resource_id__isnull=False)
         )
```
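The UUIDv7/lexicographic-Max argument in that comment can be sanity-checked in isolation. The `uuid7_like` helper below is hypothetical and only fills the 48-bit timestamp prefix plus the version nibble; a real UUIDv7 also fills the random bits, which sit below the timestamp and don't affect the ordering claim:

```python
import uuid

def uuid7_like(ms: int) -> uuid.UUID:
    # Place the 48-bit millisecond timestamp in the high bits, version 7.
    # (Illustrative only; real UUIDv7 also fills the remaining random bits.)
    value = ((ms & 0xFFFFFFFFFFFF) << 80) | (0x7 << 76)
    return uuid.UUID(int=value)

older = uuid7_like(1_700_000_000_000)
newer = uuid7_like(1_700_000_050_000)

# str(UUID) is the canonical lowercase form, so for equal-length hex strings
# lexicographic order tracks the numeric (i.e. time) order.
assert max(str(older), str(newer)) == str(newer)
```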
```diff
@@ -7406,8 +7499,8 @@ class FindingGroupViewSet(BaseRLSViewSet):
     _RESOURCE_SORT_ANNOTATIONS = {
         "status_order": lambda: Max(
             Case(
-                When(finding__status="FAIL", finding__muted=False, then=Value(3)),
-                When(finding__status="PASS", finding__muted=False, then=Value(2)),
+                When(finding__status="FAIL", then=Value(3)),
+                When(finding__status="PASS", then=Value(2)),
                 default=Value(1),
                 output_field=IntegerField(),
             )
```
```diff
@@ -7481,7 +7574,7 @@ class FindingGroupViewSet(BaseRLSViewSet):
             elif status_order == 2:
                 status = "PASS"
             else:
-                status = "MUTED"
+                status = "MANUAL"

             delta_order = row.get("delta_order", 0)
             if delta_order == 2:
@@ -7509,8 +7602,12 @@ class FindingGroupViewSet(BaseRLSViewSet):
                     "delta": delta,
                     "first_seen_at": row["first_seen_at"],
                     "last_seen_at": row["last_seen_at"],
+                    "muted": bool(row.get("muted", False)),
                     "muted_reason": row.get("muted_reason"),
                     "resource_group": row.get("resource_group", ""),
+                    "finding_id": (
+                        str(row["finding_id"]) if row.get("finding_id") else None
+                    ),
                 }
             )
```
```diff
@@ -7571,9 +7668,11 @@ class FindingGroupViewSet(BaseRLSViewSet):
                 sort_param, self._FINDING_GROUP_SORT_MAP
             )
             if ordering:
-                # status_order is annotated on demand so groups can be sorted
-                # by their aggregated status (FAIL > PASS > MUTED), mirroring
-                # the priority used in _post_process_aggregation.
+                # status_order is annotated on demand so groups can be sorted by
+                # their aggregated status (FAIL > PASS > MANUAL), mirroring the
+                # priority used in _post_process_aggregation. Counts are
+                # inclusive of muted findings, so the underlying check outcome
+                # surfaces even for fully muted groups.
                 if any(field.lstrip("-") == "status_order" for field in ordering):
                     aggregated_queryset = aggregated_queryset.annotate(
                         status_order=Case(
```
```diff
@@ -32,9 +32,13 @@ from prowler.lib.outputs.compliance.cis.cis_aws import AWSCIS
 from prowler.lib.outputs.compliance.cis.cis_azure import AzureCIS
 from prowler.lib.outputs.compliance.cis.cis_gcp import GCPCIS
 from prowler.lib.outputs.compliance.cis.cis_github import GithubCIS
+from prowler.lib.outputs.compliance.cis.cis_googleworkspace import GoogleWorkspaceCIS
 from prowler.lib.outputs.compliance.cis.cis_kubernetes import KubernetesCIS
 from prowler.lib.outputs.compliance.cis.cis_m365 import M365CIS
 from prowler.lib.outputs.compliance.cis.cis_oraclecloud import OracleCloudCIS
+from prowler.lib.outputs.compliance.cisa_scuba.cisa_scuba_googleworkspace import (
+    GoogleWorkspaceCISASCuBA,
+)
 from prowler.lib.outputs.compliance.csa.csa_alibabacloud import AlibabaCloudCSA
 from prowler.lib.outputs.compliance.csa.csa_aws import AWSCSA
 from prowler.lib.outputs.compliance.csa.csa_azure import AzureCSA
@@ -93,7 +97,7 @@ COMPLIANCE_CLASS_MAP = {
         (lambda name: name.startswith("iso27001_"), AWSISO27001),
         (lambda name: name.startswith("kisa"), AWSKISAISMSP),
         (lambda name: name == "prowler_threatscore_aws", ProwlerThreatScoreAWS),
-        (lambda name: name == "ccc_aws", CCC_AWS),
+        (lambda name: name.startswith("ccc_"), CCC_AWS),
         (lambda name: name.startswith("c5_"), AWSC5),
         (lambda name: name.startswith("csa_"), AWSCSA),
     ],
@@ -102,7 +106,7 @@ COMPLIANCE_CLASS_MAP = {
         (lambda name: name == "mitre_attack_azure", AzureMitreAttack),
         (lambda name: name.startswith("ens_"), AzureENS),
         (lambda name: name.startswith("iso27001_"), AzureISO27001),
-        (lambda name: name == "ccc_azure", CCC_Azure),
+        (lambda name: name.startswith("ccc_"), CCC_Azure),
         (lambda name: name == "prowler_threatscore_azure", ProwlerThreatScoreAzure),
         (lambda name: name == "c5_azure", AzureC5),
         (lambda name: name.startswith("csa_"), AzureCSA),
@@ -113,7 +117,7 @@ COMPLIANCE_CLASS_MAP = {
         (lambda name: name.startswith("ens_"), GCPENS),
         (lambda name: name.startswith("iso27001_"), GCPISO27001),
         (lambda name: name == "prowler_threatscore_gcp", ProwlerThreatScoreGCP),
-        (lambda name: name == "ccc_gcp", CCC_GCP),
+        (lambda name: name.startswith("ccc_"), CCC_GCP),
         (lambda name: name == "c5_gcp", GCPC5),
         (lambda name: name.startswith("csa_"), GCPCSA),
     ],
@@ -133,6 +137,10 @@ COMPLIANCE_CLASS_MAP = {
     "github": [
         (lambda name: name.startswith("cis_"), GithubCIS),
     ],
+    "googleworkspace": [
+        (lambda name: name.startswith("cis_"), GoogleWorkspaceCIS),
+        (lambda name: name.startswith("cisa_scuba_"), GoogleWorkspaceCISASCuBA),
+    ],
     "iac": [
         # IaC provider doesn't have specific compliance frameworks yet
         # Trivy handles its own compliance checks
```
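The `==` to `startswith` switch in the CCC entries is small but behavioral: the map now resolves any framework name under the `ccc_` prefix rather than one exact name. A quick check (the versioned variant is hypothetical, just to show what the prefix match newly admits):

```python
# Before: only the exact name "ccc_aws" matched. After: any "ccc_" name does.
matcher = lambda name: name.startswith("ccc_")

assert matcher("ccc_aws")
assert matcher("ccc_aws_v1")   # hypothetical future variant now matches too
assert not matcher("csa_aws")  # other prefixes still fall through
```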
```diff
@@ -1803,7 +1803,12 @@ def aggregate_finding_group_summaries(tenant_id: str, scan_id: str):
         output_field=IntegerField(),
     )

-    # Aggregate findings by check_id for this scan
+    # Aggregate findings by check_id for this scan.
+    # `pass_count`, `fail_count` and `manual_count` count *every* finding
+    # in this group, regardless of mute state, so the aggregated `status`
+    # always reflects the underlying check outcome (FAIL > PASS > MANUAL)
+    # even when the group is fully muted. The orthogonal `muted` flag is
+    # what tells whether the group has any actionable (non-muted) findings.
     aggregated = (
         Finding.objects.filter(
             tenant_id=tenant_id,
@@ -1812,11 +1817,52 @@ def aggregate_finding_group_summaries(tenant_id: str, scan_id: str):
         .values("check_id")
         .annotate(
             severity_order=Max(severity_case),
-            pass_count=Count("id", filter=Q(status="PASS", muted=False)),
-            fail_count=Count("id", filter=Q(status="FAIL", muted=False)),
+            pass_count=Count("id", filter=Q(status="PASS")),
+            fail_count=Count("id", filter=Q(status="FAIL")),
+            manual_count=Count("id", filter=Q(status="MANUAL")),
+            pass_muted_count=Count("id", filter=Q(status="PASS", muted=True)),
+            fail_muted_count=Count("id", filter=Q(status="FAIL", muted=True)),
+            manual_muted_count=Count("id", filter=Q(status="MANUAL", muted=True)),
             muted_count=Count("id", filter=Q(muted=True)),
+            nonmuted_count=Count("id", filter=Q(muted=False)),
             new_count=Count("id", filter=Q(delta="new", muted=False)),
             changed_count=Count("id", filter=Q(delta="changed", muted=False)),
+            new_fail_count=Count(
+                "id", filter=Q(delta="new", status="FAIL", muted=False)
+            ),
+            new_fail_muted_count=Count(
+                "id", filter=Q(delta="new", status="FAIL", muted=True)
+            ),
+            new_pass_count=Count(
+                "id", filter=Q(delta="new", status="PASS", muted=False)
+            ),
+            new_pass_muted_count=Count(
+                "id", filter=Q(delta="new", status="PASS", muted=True)
+            ),
+            new_manual_count=Count(
+                "id", filter=Q(delta="new", status="MANUAL", muted=False)
+            ),
+            new_manual_muted_count=Count(
+                "id", filter=Q(delta="new", status="MANUAL", muted=True)
+            ),
+            changed_fail_count=Count(
+                "id", filter=Q(delta="changed", status="FAIL", muted=False)
+            ),
+            changed_fail_muted_count=Count(
+                "id", filter=Q(delta="changed", status="FAIL", muted=True)
+            ),
+            changed_pass_count=Count(
+                "id", filter=Q(delta="changed", status="PASS", muted=False)
+            ),
+            changed_pass_muted_count=Count(
+                "id", filter=Q(delta="changed", status="PASS", muted=True)
+            ),
+            changed_manual_count=Count(
+                "id", filter=Q(delta="changed", status="MANUAL", muted=False)
+            ),
+            changed_manual_muted_count=Count(
+                "id", filter=Q(delta="changed", status="MANUAL", muted=True)
+            ),
             resources_total=Count("resources__id", distinct=True),
             resources_fail=Count(
                 "resources__id",
```
@@ -1895,9 +1941,26 @@ def aggregate_finding_group_summaries(tenant_id: str, scan_id: str):
            severity_order=row["severity_order"] or 1,
            pass_count=row["pass_count"],
            fail_count=row["fail_count"],
            manual_count=row["manual_count"],
            pass_muted_count=row["pass_muted_count"],
            fail_muted_count=row["fail_muted_count"],
            manual_muted_count=row["manual_muted_count"],
            muted_count=row["muted_count"],
            muted=row["nonmuted_count"] == 0,
            new_count=row["new_count"],
            changed_count=row["changed_count"],
            new_fail_count=row["new_fail_count"],
            new_fail_muted_count=row["new_fail_muted_count"],
            new_pass_count=row["new_pass_count"],
            new_pass_muted_count=row["new_pass_muted_count"],
            new_manual_count=row["new_manual_count"],
            new_manual_muted_count=row["new_manual_muted_count"],
            changed_fail_count=row["changed_fail_count"],
            changed_fail_muted_count=row["changed_fail_muted_count"],
            changed_pass_count=row["changed_pass_count"],
            changed_pass_muted_count=row["changed_pass_muted_count"],
            changed_manual_count=row["changed_manual_count"],
            changed_manual_muted_count=row["changed_manual_muted_count"],
            resources_total=row["resources_total"],
            resources_fail=row["resources_fail"],
            first_seen_at=row["agg_first_seen_at"],
@@ -1917,9 +1980,26 @@ def aggregate_finding_group_summaries(tenant_id: str, scan_id: str):
            "severity_order",
            "pass_count",
            "fail_count",
            "manual_count",
            "pass_muted_count",
            "fail_muted_count",
            "manual_muted_count",
            "muted_count",
            "muted",
            "new_count",
            "changed_count",
            "new_fail_count",
            "new_fail_muted_count",
            "new_pass_count",
            "new_pass_muted_count",
            "new_manual_count",
            "new_manual_muted_count",
            "changed_fail_count",
            "changed_fail_muted_count",
            "changed_pass_count",
            "changed_pass_muted_count",
            "changed_manual_count",
            "changed_manual_muted_count",
            "resources_total",
            "resources_fail",
            "first_seen_at",
@@ -771,26 +771,49 @@ def aggregate_finding_group_summaries_task(tenant_id: str, scan_id: str):
    )
@set_tenant(keep_tenant=True)
def reaggregate_all_finding_group_summaries_task(tenant_id: str):
    """Reaggregate finding group summaries for all providers' latest completed scans."""
    latest_scan_ids = list(
        Scan.objects.filter(tenant_id=tenant_id, state=StateChoices.COMPLETED)
        .order_by("provider_id", "-completed_at", "-inserted_at")
        .distinct("provider_id")
        .values_list("id", flat=True)
    """Reaggregate finding group summaries for every (provider, day) combination.

    Mirrors the unbounded scope of `mute_historical_findings_task`: that task
    rewrites every Finding row whose UID matches a mute rule, with no time
    limit. To keep the daily summaries consistent with that update, this task
    re-runs the aggregator on the latest completed scan of every (provider,
    day) pair that exists in the database. Tasks are dispatched in parallel
    via a Celery group so the wallclock scales with the worker pool, not with
    the number of pairs.
    """
    completed_scans = list(
        Scan.objects.filter(
            tenant_id=tenant_id,
            state=StateChoices.COMPLETED,
            completed_at__isnull=False,
        )
        .order_by("-completed_at")
        .values("id", "completed_at", "provider_id")
    )
    if latest_scan_ids:

    # Keep the latest scan per (provider, day) pair so the daily summary row
    # the aggregator writes is the most recent snapshot of that day for that
    # provider. Iterating from most recent to oldest means the first scan we
    # see for a given key wins.
    latest_scans: dict[tuple, str] = {}
    for scan in completed_scans:
        key = (scan["provider_id"], scan["completed_at"].date())
        if key not in latest_scans:
            latest_scans[key] = str(scan["id"])

    scan_ids = list(latest_scans.values())
    if scan_ids:
        logger.info(
            "Reaggregating finding group summaries for %d scans: %s",
            len(latest_scan_ids),
            latest_scan_ids,
            "Reaggregating finding group summaries for %d scans (provider x day)",
            len(scan_ids),
        )
        group(
            aggregate_finding_group_summaries_task.si(
                tenant_id=tenant_id, scan_id=str(scan_id)
                tenant_id=tenant_id, scan_id=scan_id
            )
            for scan_id in latest_scan_ids
            for scan_id in scan_ids
        ).apply_async()
    return {"scans_reaggregated": len(latest_scan_ids)}
    return {"scans_reaggregated": len(scan_ids)}


@shared_task(base=RLSTask, name="lighthouse-connection-check")
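The comment in the new task states the dedup invariant: because the queryset is ordered newest-first, the first scan seen for a `(provider_id, completed_at.date())` key is the latest of that day, so a plain first-wins dictionary fill suffices. A self-contained sketch with made-up scan rows (the data is hypothetical; only the dedup logic mirrors the task above):

```python
from datetime import datetime, timezone

# Rows sorted newest-first, as `.order_by("-completed_at")` would return them.
scans = [
    {"id": "scan-3", "provider_id": "p1",
     "completed_at": datetime(2026, 3, 2, 18, 0, tzinfo=timezone.utc)},
    {"id": "scan-2", "provider_id": "p1",
     "completed_at": datetime(2026, 3, 2, 9, 0, tzinfo=timezone.utc)},
    {"id": "scan-1", "provider_id": "p1",
     "completed_at": datetime(2026, 3, 1, 12, 0, tzinfo=timezone.utc)},
]

latest_per_day: dict[tuple, str] = {}
for scan in scans:
    key = (scan["provider_id"], scan["completed_at"].date())
    # First scan seen for a key is the newest of that day, so it wins.
    latest_per_day.setdefault(key, scan["id"])

# scan-2 is dropped: scan-3 already covers (p1, 2026-03-02).
assert sorted(latest_per_day.values()) == ["scan-1", "scan-3"]
```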
@@ -1,6 +1,6 @@
import uuid
from contextlib import contextmanager
from datetime import datetime, timezone
from datetime import datetime, timedelta, timezone
from unittest.mock import MagicMock, patch

import openai
@@ -2362,35 +2362,96 @@ class TestReaggregateAllFindingGroupSummaries:
    @patch("tasks.tasks.group")
    @patch("tasks.tasks.aggregate_finding_group_summaries_task")
    @patch("tasks.tasks.Scan.objects.filter")
    def test_dispatches_subtasks_for_each_provider(
    def test_dispatches_subtasks_for_each_provider_per_day(
        self, mock_scan_filter, mock_agg_task, mock_group
    ):
        scan_id_1 = uuid.uuid4()
        scan_id_2 = uuid.uuid4()
        provider_id_1 = uuid.uuid4()
        provider_id_2 = uuid.uuid4()
        scan_id_today_p1 = uuid.uuid4()
        scan_id_yesterday_p1 = uuid.uuid4()
        scan_id_today_p2 = uuid.uuid4()
        today = datetime.now(tz=timezone.utc)
        yesterday = today - timedelta(days=1)

        mock_group_result = MagicMock()
        mock_group.side_effect = lambda gen: (list(gen), mock_group_result)[1]

        mock_scan_filter.return_value.order_by.return_value.distinct.return_value.values_list.return_value = [
            scan_id_1,
            scan_id_2,
        mock_scan_filter.return_value.order_by.return_value.values.return_value = [
            {
                "id": scan_id_today_p1,
                "completed_at": today,
                "provider_id": provider_id_1,
            },
            {
                "id": scan_id_today_p2,
                "completed_at": today,
                "provider_id": provider_id_2,
            },
            {
                "id": scan_id_yesterday_p1,
                "completed_at": yesterday,
                "provider_id": provider_id_1,
            },
        ]

        result = reaggregate_all_finding_group_summaries_task(tenant_id=self.tenant_id)

        assert result == {"scans_reaggregated": 2}
        assert mock_agg_task.si.call_count == 2
        assert result == {"scans_reaggregated": 3}
        assert mock_agg_task.si.call_count == 3
        mock_agg_task.si.assert_any_call(
            tenant_id=self.tenant_id, scan_id=str(scan_id_1)
            tenant_id=self.tenant_id, scan_id=str(scan_id_today_p1)
        )
        mock_agg_task.si.assert_any_call(
            tenant_id=self.tenant_id, scan_id=str(scan_id_2)
            tenant_id=self.tenant_id, scan_id=str(scan_id_today_p2)
        )
        mock_agg_task.si.assert_any_call(
            tenant_id=self.tenant_id, scan_id=str(scan_id_yesterday_p1)
        )
        mock_group_result.apply_async.assert_called_once()

    @patch("tasks.tasks.group")
    @patch("tasks.tasks.aggregate_finding_group_summaries_task")
    @patch("tasks.tasks.Scan.objects.filter")
    def test_dedupes_scans_to_latest_per_provider_per_day(
        self, mock_scan_filter, mock_agg_task, mock_group
    ):
        """When several scans run on the same day for the same provider, only
        the latest one is dispatched (matching the daily summary unique key)."""
        provider_id = uuid.uuid4()
        latest_scan_today = uuid.uuid4()
        earlier_scan_today = uuid.uuid4()
        today_late = datetime.now(tz=timezone.utc)
        today_early = today_late - timedelta(hours=4)

        mock_group_result = MagicMock()
        mock_group.side_effect = lambda gen: (list(gen), mock_group_result)[1]

        # Returned ordered by `-completed_at`, so the most recent comes first.
        mock_scan_filter.return_value.order_by.return_value.values.return_value = [
            {
                "id": latest_scan_today,
                "completed_at": today_late,
                "provider_id": provider_id,
            },
            {
                "id": earlier_scan_today,
                "completed_at": today_early,
                "provider_id": provider_id,
            },
        ]

        result = reaggregate_all_finding_group_summaries_task(tenant_id=self.tenant_id)

        assert result == {"scans_reaggregated": 1}
        mock_agg_task.si.assert_called_once_with(
            tenant_id=self.tenant_id, scan_id=str(latest_scan_today)
        )
        mock_group_result.apply_async.assert_called_once()

    @patch("tasks.tasks.group")
    @patch("tasks.tasks.Scan.objects.filter")
    def test_no_completed_scans_skips_dispatch(self, mock_scan_filter, mock_group):
        mock_scan_filter.return_value.order_by.return_value.distinct.return_value.values_list.return_value = []
        mock_scan_filter.return_value.order_by.return_value.values.return_value = []

        result = reaggregate_all_finding_group_summaries_task(tenant_id=self.tenant_id)
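One trick in these tests is worth spelling out. `group()` receives a generator expression; if the mock simply returned a value, the generator would never be iterated and no `.si(...)` calls would be recorded. The `side_effect` lambda builds the throwaway tuple `(list(gen), mock_group_result)` so that `list(gen)` drains the generator first, then indexes `[1]` to hand back the mocked group. A stripped-down, runnable illustration (names here are made up for the demo):

```python
from unittest.mock import MagicMock

fake_group_result = MagicMock()
mock_group = MagicMock()
# list(gen) forces the generator to run (recording each .si() call on the
# task mock); [1] then discards that list and returns the fake group.
mock_group.side_effect = lambda gen: (list(gen), fake_group_result)[1]

task = MagicMock()
mock_group(task.si(scan_id=i) for i in range(3))
assert task.si.call_count == 3  # the generator really ran
```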
@@ -163,6 +163,8 @@ These resources help ensure that AI-assisted contributions maintain consistency

All dependencies are listed in the `pyproject.toml` file.

The SDK keeps direct dependencies pinned to exact versions, while `poetry.lock` records the full resolved dependency tree and the artifact hashes for every package. Use `poetry install` from the lock file instead of ad-hoc `pip` installs when you need a reproducible environment.

For proper code documentation, refer to the following and follow the code documentation practices presented there: [Google Python Style Guide - Comments and Docstrings](https://github.com/google/styleguide/blob/gh-pages/pyguide.md#38-comments-and-docstrings).

<Note>
@@ -98,6 +98,7 @@
            ]
          },
          "user-guide/tutorials/prowler-app-rbac",
          "user-guide/tutorials/prowler-app-multi-tenant",
          "user-guide/tutorials/prowler-app-api-keys",
          "user-guide/tutorials/prowler-app-import-findings",
          {
[Binary diff: 12 new documentation images added, 13 KiB to 148 KiB each]
@@ -21,29 +21,57 @@

## Supported Providers

The supported providers right now are:
Prowler supports a wide range of providers organized by category:

| Provider | Support | Audit Scope/Entities | Interface |
| -------- | ------- | -------------------- | --------- |
| [AWS](/user-guide/providers/aws/getting-started-aws) | Official | Accounts | UI, API, CLI |
| [Azure](/user-guide/providers/azure/getting-started-azure) | Official | Subscriptions | UI, API, CLI |
| [Google Cloud](/user-guide/providers/gcp/getting-started-gcp) | Official | Projects | UI, API, CLI |
| [Kubernetes](/user-guide/providers/kubernetes/getting-started-k8s) | Official | Clusters | UI, API, CLI |
| [M365](/user-guide/providers/microsoft365/getting-started-m365) | Official | Tenants | UI, API, CLI |
| [Github](/user-guide/providers/github/getting-started-github) | Official | Organizations / Repositories | UI, API, CLI |
| [Oracle Cloud](/user-guide/providers/oci/getting-started-oci) | Official | Tenancies / Compartments | UI, API, CLI |
| [Alibaba Cloud](/user-guide/providers/alibabacloud/getting-started-alibabacloud) | Official | Accounts | UI, API, CLI |
| [Cloudflare](/user-guide/providers/cloudflare/getting-started-cloudflare) | Official | Accounts | UI, API, CLI |
| [Infra as Code](/user-guide/providers/iac/getting-started-iac) | Official | Repositories | UI, API, CLI |
| [MongoDB Atlas](/user-guide/providers/mongodbatlas/getting-started-mongodbatlas) | Official | Organizations | UI, API, CLI |
| [OpenStack](/user-guide/providers/openstack/getting-started-openstack) | Official | Projects | UI, API, CLI |
| [Vercel](/user-guide/providers/vercel/getting-started-vercel) | Official | Teams / Projects | CLI |
| [LLM](/user-guide/providers/llm/getting-started-llm) | Official | Models | CLI |
| [Image](/user-guide/providers/image/getting-started-image) | Official | Container Images | CLI, API |
| [Google Workspace](/user-guide/providers/googleworkspace/getting-started-googleworkspace) | Official | Domains | CLI |
| **NHN** | Unofficial | Tenants | CLI |
### Cloud Service Providers (Infrastructure)

For more information about the checks and compliance of each provider visit [Prowler Hub](https://hub.prowler.com).
| Provider | Support | Audit Scope/Entities | Interface |
| -------- | ------- | -------------------- | --------- |
| [Alibaba Cloud](/user-guide/providers/alibabacloud/getting-started-alibabacloud) | Official | Accounts | UI, API, CLI |
| [AWS](/user-guide/providers/aws/getting-started-aws) | Official | Accounts | UI, API, CLI |
| [Azure](/user-guide/providers/azure/getting-started-azure) | Official | Subscriptions | UI, API, CLI |
| [Cloudflare](/user-guide/providers/cloudflare/getting-started-cloudflare) | Official | Accounts | UI, API, CLI |
| [Google Cloud](/user-guide/providers/gcp/getting-started-gcp) | Official | Projects | UI, API, CLI |
| **NHN** | Unofficial | Tenants | CLI |
| [OpenStack](/user-guide/providers/openstack/getting-started-openstack) | Official | Projects | UI, API, CLI |
| [Oracle Cloud](/user-guide/providers/oci/getting-started-oci) | Official | Tenancies / Compartments | UI, API, CLI |

### Infrastructure as Code Providers

| Provider | Support | Audit Scope/Entities | Interface |
| -------- | ------- | -------------------- | --------- |
| [Infra as Code](/user-guide/providers/iac/getting-started-iac) | Official | Repositories | UI, API, CLI |

### Software as a Service (SaaS) Providers

| Provider | Support | Audit Scope/Entities | Interface |
| -------- | ------- | -------------------- | --------- |
| [GitHub](/user-guide/providers/github/getting-started-github) | Official | Organizations / Repositories | UI, API, CLI |
| [Google Workspace](/user-guide/providers/googleworkspace/getting-started-googleworkspace) | Official | Domains | CLI |
| [LLM](/user-guide/providers/llm/getting-started-llm) | Official | Models | CLI |
| [M365](/user-guide/providers/microsoft365/getting-started-m365) | Official | Tenants | UI, API, CLI |
| [MongoDB Atlas](/user-guide/providers/mongodbatlas/getting-started-mongodbatlas) | Official | Organizations | UI, API, CLI |
| [Vercel](/user-guide/providers/vercel/getting-started-vercel) | Official | Teams / Projects | CLI |

### Kubernetes

| Provider | Support | Audit Scope/Entities | Interface |
| -------- | ------- | -------------------- | --------- |
| [Kubernetes](/user-guide/providers/kubernetes/getting-started-k8s) | Official | Clusters | UI, API, CLI |

### Containers

| Provider | Support | Audit Scope/Entities | Interface |
| -------- | ------- | -------------------- | --------- |
| [Image](/user-guide/providers/image/getting-started-image) | Official | Container Images / Registries | CLI, API |

### Custom Providers (Prowler Cloud Enterprise Only)

| Provider | Support | Audit Scope/Entities | Interface |
| -------- | ------- | -------------------- | --------- |
| VMware/Broadcom VCF | Official | Infrastructure | CLI |

For more information about the checks and compliance of each provider, visit [Prowler Hub](https://hub.prowler.com).

## Where to go next?

@@ -37,7 +37,7 @@ Before you begin, make sure you have:



### Step 2: Access Prowler Cloud or Prowler App
### Step 2: Access Prowler Cloud

1. Navigate to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](/user-guide/tutorials/prowler-app)
2. Go to "Configuration" > "Cloud Providers"
@@ -2,7 +2,7 @@
title: 'Getting Started With AWS on Prowler'
---

## Prowler App
## Prowler Cloud

<iframe width="560" height="380" src="https://www.youtube-nocookie.com/embed/RPgIWOCERzY" title="Prowler Cloud Onboarding AWS" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="1"></iframe>
@@ -16,7 +16,7 @@ title: 'Getting Started With AWS on Prowler'



### Step 2: Access Prowler Cloud or Prowler App
### Step 2: Access Prowler Cloud

1. Navigate to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](/user-guide/tutorials/prowler-app)
2. Go to "Configuration" > "Cloud Providers"
@@ -2,7 +2,7 @@
title: 'Getting Started With Azure on Prowler'
---

## Prowler App
## Prowler Cloud

<iframe width="560" height="380" src="https://www.youtube-nocookie.com/embed/v1as8vTFlMg" title="Prowler Cloud Onboarding Azure" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="1"></iframe>
> Walkthrough video onboarding an Azure Subscription using Service Principal.
@@ -32,7 +32,7 @@ For detailed instructions on how to create the Service Principal and configure p

---

### Step 2: Access Prowler App
### Step 2: Access Prowler Cloud

1. Navigate to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](/user-guide/tutorials/prowler-app)
2. Navigate to `Configuration` > `Cloud Providers`
@@ -51,7 +51,7 @@ For detailed instructions on how to create the Service Principal and configure p



### Step 3: Add Credentials to Prowler App
### Step 3: Add Credentials to Prowler Cloud

For Azure, Prowler App uses a service principal application to authenticate. For more information about the process of creating and adding permissions to a service principal refer to this [section](/user-guide/providers/azure/authentication). When you finish creating and adding the [Entra](/user-guide/providers/azure/create-prowler-service-principal#assigning-proper-permissions) and [Subscription](/user-guide/providers/azure/subscriptions) scope permissions to the service principal, enter the `Tenant ID`, `Client ID` and `Client Secret` of the service principal application.
@@ -2,7 +2,7 @@
title: 'Getting Started With GCP on Prowler'
---

## Prowler App
## Prowler Cloud

### Step 1: Get the GCP Project ID
@@ -11,7 +11,7 @@ title: 'Getting Started With GCP on Prowler'



### Step 2: Access Prowler Cloud or Prowler App
### Step 2: Access Prowler Cloud

1. Navigate to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](/user-guide/tutorials/prowler-app)
2. Go to "Configuration" > "Cloud Providers"
@@ -31,7 +31,7 @@ Prowler IaC provider scans the following Infrastructure as Code configurations f
- Mutelist logic ([filtering](https://trivy.dev/latest/docs/configuration/filtering/)) is handled by Trivy, not Prowler.
- Results are output in the same formats as other Prowler providers (CSV, JSON, HTML, etc.).

## Prowler App
## Prowler Cloud

<VersionBadge version="5.14.0" />
@@ -2,7 +2,7 @@
title: 'Getting Started with Kubernetes'
---

## Prowler App
## Prowler Cloud

### Step 1: Access Prowler Cloud/App
@@ -14,7 +14,7 @@ The following steps apply to Prowler Cloud and the self-hosted Prowler App.
3. Generate or locate the API key fingerprint and private key for that user. Follow the [Config File Authentication steps](/user-guide/providers/oci/authentication#config-file-authentication-manual-api-key-setup) to create or rotate the key pair and copy the fingerprint.
4. Note the **Region** identifier to scan (for example, `us-ashburn-1`).

### Step 2: Access Prowler Cloud or Prowler App
### Step 2: Access Prowler Cloud
1. Navigate to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](/user-guide/tutorials/prowler-app).
2. Go to **Configuration** → **Cloud Providers** and click **Add Cloud Provider**.

@@ -13,9 +13,63 @@ Set up authentication for Vercel with the [Vercel Authentication](/user-guide/pr
- Create a Vercel API Token with access to the target team
- Identify the Team ID (optional, required to scope the scan to a single team)

<CardGroup cols={2}>
  <Card title="Prowler Cloud" icon="cloud" href="#prowler-cloud">
    Onboard Vercel using Prowler Cloud
  </Card>
  <Card title="Prowler CLI" icon="terminal" href="#prowler-cli">
    Onboard Vercel using Prowler CLI
  </Card>
</CardGroup>

## Prowler Cloud

<VersionBadge version="5.23.0" />

### Step 1: Add the Provider

1. Go to [Prowler Cloud](https://cloud.prowler.com/) or launch [Prowler App](/user-guide/tutorials/prowler-app).
2. Navigate to "Configuration" > "Cloud Providers".

3. Click "Add Cloud Provider".

4. Select "Vercel".

5. Enter the **Team ID** and an optional alias, then click "Next".

<Note>
The Team ID can be found in the Vercel Dashboard under "Settings" > "General". It follows the format `team_xxxxxxxxxxxxxxxxxxxx`. For detailed instructions, see the [Authentication guide](/user-guide/providers/vercel/authentication).
</Note>

### Step 2: Provide Credentials

1. Enter the **API Token** created in the Vercel Dashboard.

For the complete token creation workflow, follow the [Authentication guide](/user-guide/providers/vercel/authentication#api-token).

### Step 3: Launch the Scan

1. Review the connection summary.
2. Choose the scan schedule: run a single scan or set up daily scans (every 24 hours).
3. Click **Launch Scan** to start auditing Vercel.

---

## Prowler CLI

<VersionBadge version="5.22.0" />
<VersionBadge version="5.23.0" />

### Step 1: Set Up Authentication
@@ -105,6 +105,102 @@ If a query requires no parameters, the form displays a message confirming that t
  width="700"
/>

## Writing Custom openCypher Queries

In addition to the built-in queries, Attack Paths supports custom read-only [openCypher](https://opencypher.org/) queries. Custom queries provide direct access to the underlying graph so security teams can answer ad-hoc questions, prototype detections, or extend coverage beyond the built-in catalogue.

To write a custom query, select **Custom openCypher query** from the query dropdown. A code editor with syntax highlighting and line numbers appears, ready to receive the query.

### Constraints and Safety Limits

Custom queries are sandboxed to keep the graph database safe and responsive:

- **Read-only:** Only read operations are allowed. Statements that mutate the graph (`CREATE`, `MERGE`, `SET`, `DELETE`, `REMOVE`, `DROP`, `LOAD CSV`, `CALL { ... }` writes, etc.) are rejected before execution (a rough illustration follows this list).
- **Length limit:** Each query is capped at **10,000 characters**.
- **Scoped to the selected scan:** Results are automatically scoped to the provider and scan selected on the left panel. There is no need to filter by tenant or scan identifier in the query body.
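To make the read-only gate concrete, here is an illustrative Python sketch only; Prowler's actual server-side validation is not shown in this diff, and a real validator would parse the query rather than scan its text. The helper name `looks_read_only` is hypothetical:

```python
import re

# Illustrative deny-list gate mirroring the documented constraints. Naive on
# purpose: keywords like SET can also appear inside string literals, which a
# parser-based validator would handle correctly.
WRITE_CLAUSES = re.compile(
    r"\b(CREATE|MERGE|SET|DELETE|REMOVE|DROP|LOAD\s+CSV)\b", re.IGNORECASE
)
MAX_QUERY_LENGTH = 10_000  # documented cap


def looks_read_only(query: str) -> bool:
    return len(query) <= MAX_QUERY_LENGTH and not WRITE_CLAUSES.search(query)


assert looks_read_only("MATCH (n:EC2Instance) RETURN n.instanceid LIMIT 5")
assert not looks_read_only("MATCH (n) SET n.owned = true RETURN n")
```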
### Example Queries

The following examples are read-only and can be pasted directly into the editor. Each one demonstrates a different graph traversal pattern.

**Internet-exposed EC2 instances with their security group rules:**

```cypher
MATCH (i:EC2Instance)--(sg:EC2SecurityGroup)--(rule:IpPermissionInbound)
WHERE i.exposed_internet = true
RETURN i.instanceid AS instance, sg.name AS security_group,
       rule.fromport AS from_port, rule.toport AS to_port
LIMIT 25
```

**EC2 instances that can assume IAM roles:**

```cypher
MATCH (i:EC2Instance)-[:STS_ASSUMEROLE_ALLOW]->(r:AWSRole)
WHERE i.exposed_internet = true
RETURN i.instanceid AS instance, r.name AS role_name, r.arn AS role_arn
LIMIT 25
```

**IAM principals with wildcard Allow statements:**

```cypher
MATCH (principal:AWSPrincipal)--(policy:AWSPolicy)--(stmt:AWSPolicyStatement)
WHERE stmt.effect = 'Allow'
  AND ANY(action IN stmt.action WHERE action = '*')
RETURN principal.arn AS principal, policy.arn AS policy,
       stmt.action AS actions, stmt.resource AS resources
LIMIT 25
```

**Critical findings on internet-exposed resources:**

```cypher
MATCH (i:EC2Instance)-[:HAS_FINDING]->(f:ProwlerFinding)
WHERE i.exposed_internet = true AND f.status = 'FAIL'
  AND f.severity IN ['critical', 'high']
RETURN i.instanceid AS instance, f.check_id AS check,
       f.severity AS severity, f.status AS status
LIMIT 50
```

**Roles trusting an AWS service (building block for PassRole escalation):**

```cypher
MATCH (r:AWSRole)-[:TRUSTS_AWS_PRINCIPAL]->(p:AWSPrincipal)
WHERE p.arn ENDS WITH '.amazonaws.com'
RETURN r.name AS role_name, r.arn AS role_arn, p.arn AS trusted_service
LIMIT 25
```

### Tips for Writing Queries

- Start small with `LIMIT` to inspect the shape of the data before broadening the pattern.
- Use `RETURN` projections (`RETURN n.name, n.region`) instead of returning whole nodes to keep responses compact.
- Combine resource nodes with `ProwlerFinding` nodes via `HAS_FINDING` to correlate misconfigurations with the affected resources.
- When a query times out or returns no rows, simplify the pattern step by step until the first variant runs successfully, then add constraints back.

### Cartography Schema Reference

Attack Paths graphs are populated by [Cartography](https://github.com/cartography-cncf/cartography), an open-source graph ingestion framework. The node labels, relationship types, and properties available in custom queries follow the upstream Cartography schema for the corresponding provider.

For the complete catalogue of node labels and relationships available in custom queries, refer to the official Cartography schema documentation:

- **AWS:** [Cartography AWS Schema](https://cartography-cncf.github.io/cartography/modules/aws/schema.html)

In addition to the upstream schema, Prowler enriches the graph with:

- **`ProwlerFinding`** nodes representing Prowler check results, linked to affected resources via `HAS_FINDING` relationships.
- **`Internet`** nodes used to model exposure paths from the public internet to internal resources.

<Note>
AI assistants connected through Prowler MCP Server can fetch the exact Cartography schema for the active scan via the `prowler_app_get_attack_paths_cartography_schema` tool. This guarantees that generated queries match the schema version pinned by the running Prowler release.
</Note>

## Executing a Query

To run the selected query against the scan data, click **Execute Query**. The button displays a loading state while the query processes.
@@ -0,0 +1,151 @@
---
title: 'Managing Organizations (Multi-Tenant)'
---

import { VersionBadge } from "/snippets/version-badge.mdx"

<VersionBadge version="5.23.0" />

Prowler App supports multi-tenancy through **Organizations**, allowing users to belong to multiple isolated environments within a single account. Each organization maintains its own providers, scans, findings, and user memberships, ensuring complete data separation between teams or business units.

## Key Concepts

* **Organization (Tenant):** An isolated workspace containing its own providers, scans, findings, roles, and users. Every Prowler account operates within at least one organization.
* **Membership:** The association between a user and an organization, including the membership role (`owner` or `member`).
* **Active Organization:** The organization currently in use for the session. All actions (scans, findings, provider management) apply to the active organization.

<Note>
When a new account is created without an invitation, a default organization is automatically provisioned. Accounts created through an invitation join the inviter's organization instead.
</Note>

## Viewing Organizations

To view all organizations associated with an account, navigate to the **Profile** page. The **Organizations** card displays every organization the user belongs to, including the role, name, join date, and whether it is the currently active organization.

<img src="/images/prowler-app/multi-tenant/organizations-card.png" alt="Organizations card in profile page" width="700" />

## Creating an Organization

To create a new organization:

1. Navigate to the **Profile** page.

2. In the **Organizations** card, click the **Create organization** button.

<img src="/images/prowler-app/multi-tenant/create-organization-button.png" alt="Create organization button" width="700" />

3. Enter a name for the new organization (maximum 100 characters).

<img src="/images/prowler-app/multi-tenant/create-organization-modal.png" alt="Create organization modal" width="700" />

4. Click **Create**. The session automatically switches to the newly created organization.

<Note>
Creating an organization requires being authenticated. Any user can create a new organization regardless of their current role.
</Note>

## Switching Between Organizations

To switch the active organization:

1. Navigate to the **Profile** page.

2. In the **Organizations** card, locate the organization to switch to.

3. Click the **Switch** button next to the desired organization.

4. Confirm the switch in the dialog. The page reloads with the new organization's context, and all subsequent actions apply to it.

<img src="/images/prowler-app/multi-tenant/switch-organization-modal.png" alt="Switch organization confirmation modal" width="700" />

<Note>
The currently active organization is indicated by an **Active** badge. Switching updates the session tokens, so the page will reload automatically.
</Note>

## Editing an Organization Name

Organization owners with the **Manage Account** permission can rename an organization:

1. Navigate to the **Profile** page.

2. In the **Organizations** card, click the **Edit** button next to the organization.

3. Update the name and save the changes.

<img src="/images/prowler-app/multi-tenant/edit-organization-modal.png" alt="Edit organization name modal" width="700" />

## Deleting an Organization

Organization owners with the **Manage Account** permission can delete an organization, provided they belong to at least two organizations (the last remaining organization cannot be deleted).

### Deleting a Non-Active Organization

1. Navigate to the **Profile** page.

2. Click the **Delete** button next to the organization to remove.

3. Type the organization name to confirm deletion.

<img src="/images/prowler-app/multi-tenant/delete-organization-modal.png" alt="Delete organization confirmation modal" width="700" />

4. Click **Delete**. The organization and all its associated data (providers, scans, findings) are permanently removed.

### Deleting the Active Organization

When deleting the currently active organization, an additional step is required:

1. Navigate to the **Profile** page.

2. Click the **Delete** button next to the active organization.

3. Select which organization to switch to after deletion.

4. Type the organization name to confirm.

<img src="/images/prowler-app/multi-tenant/delete-active-organization-modal.png" alt="Delete active organization modal with target selection" width="700" />

5. Click **Delete**. The session switches to the selected organization, and the deleted organization's data is permanently removed.

<Warning>
Deleting an organization is irreversible. All providers, scans, findings, and configuration data within the organization are permanently deleted. Users who belong only to the deleted organization will lose access to Prowler.
</Warning>

## Accepting an Invitation to an Organization

When invited to join an organization, the invited user receives a link to accept the invitation. The flow adapts depending on whether the user already has a Prowler account:

### Existing Users

1. Open the invitation link.

2. If already authenticated, the invitation is accepted automatically and the user is redirected to Prowler App.

3. If not authenticated, choose **I have an account -- Sign in**, authenticate with existing credentials, and the invitation is accepted upon sign-in.

<img src="/images/prowler-app/multi-tenant/sign-in-invitation.png" alt="Sign in screen after choosing I have an account from invitation" width="700" />

### New Users

1. Open the invitation link.

2. Choose **I'm new -- Create an account**.

3. Complete the sign-up process. Upon account creation, the invitation is accepted and the user joins the inviter's organization.

<Note>
Invitations expire after 7 days. If an invitation has expired, contact the organization administrator to send a new one. For more details on invitation management, see [Managing Users and Role-Based Access Control (RBAC)](/user-guide/tutorials/prowler-app-rbac#invitations).
</Note>

## Permissions Reference

| Action | Required Conditions |
|--------|---------------------|
| View organizations | Any authenticated user |
| Create an organization | Any authenticated user |
| Switch organizations | Any authenticated user |
| Edit organization name | Organization owner with **Manage Account** permission |
| Delete an organization | Organization owner with **Manage Account** permission; must belong to more than one organization |
@@ -2,12 +2,16 @@

All notable changes to the **Prowler MCP Server** are documented in this file.

## [0.6.0] (Prowler UNRELEASED)
## [0.6.0] (Prowler v5.23.0)

### 🚀 Added

- Resource events tool to get timeline for a resource (who, what, when) [(#10412)](https://github.com/prowler-cloud/prowler/pull/10412)

### 🔄 Changed

- Pin `httpx` dependency to exact version for reproducible installs [(#10593)](https://github.com/prowler-cloud/prowler/pull/10593)

### 🔐 Security

- `authlib` bumped from 1.6.5 to 1.6.9 to fix CVE-2026-28802 (JWT `alg: none` validation bypass) [(#10579)](https://github.com/prowler-cloud/prowler/pull/10579)
@@ -5,7 +5,7 @@ requires = ["setuptools>=61.0", "wheel"]
[project]
dependencies = [
    "fastmcp==2.14.0",
    "httpx>=0.28.0"
    "httpx==0.28.1"
]
description = "MCP server for Prowler ecosystem"
name = "prowler-mcp"
@@ -727,7 +727,7 @@ dependencies = [

[package.metadata]
requires-dist = [
    { name = "fastmcp", specifier = "==2.14.0" },
    { name = "httpx", specifier = ">=0.28.0" },
    { name = "httpx", specifier = "==0.28.1" },
]

[[package]]
@@ -888,11 +888,11 @@ wheels = [

[[package]]
name = "pyjwt"
version = "2.10.1"
version = "2.12.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e7/46/bd74733ff231675599650d3e47f361794b22ef3e3770998dda30d3b63726/pyjwt-2.10.1.tar.gz", hash = "sha256:3cc5772eb20009233caf06e9d8a0577824723b44e6648ee0a2aedb6cf9381953", size = 87785, upload-time = "2024-11-28T03:43:29.933Z" }
sdist = { url = "https://files.pythonhosted.org/packages/c2/27/a3b6e5bf6ff856d2509292e95c8f57f0df7017cf5394921fc4e4ef40308a/pyjwt-2.12.1.tar.gz", hash = "sha256:c74a7a2adf861c04d002db713dd85f84beb242228e671280bf709d765b03672b", size = 102564, upload-time = "2026-03-13T19:27:37.25Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/61/ad/689f02752eeec26aed679477e80e632ef1b682313be70793d798c1d5fc8f/PyJWT-2.10.1-py3-none-any.whl", hash = "sha256:dcdd193e30abefd5debf142f9adfcdd2b58004e644f25406ffaebd50bd98dacb", size = 22997, upload-time = "2024-11-28T03:43:27.893Z" },
    { url = "https://files.pythonhosted.org/packages/e5/7a/8dd906bd22e79e47397a61742927f6747fe93242ef86645ee9092e610244/pyjwt-2.12.1-py3-none-any.whl", hash = "sha256:28ca37c070cad8ba8cd9790cd940535d40274d22f80ab87f3ac6a713e6e8454c", size = 29726, upload-time = "2026-03-13T19:27:35.677Z" },
]

[package.optional-dependencies]
@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.2.1 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.3.2 and should not be changed by hand.

[[package]]
name = "about-time"
@@ -1888,6 +1888,7 @@ files = [
    {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
    {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
]
markers = {dev = "platform_system == \"Windows\" or sys_platform == \"win32\""}

[[package]]
name = "contextlib2"
@@ -3083,7 +3084,7 @@ files = [

[package.dependencies]
attrs = ">=22.2.0"
jsonschema-specifications = ">=2023.03.6"
jsonschema-specifications = ">=2023.3.6"
referencing = ">=0.28.4"
rpds-py = ">=0.7.1"
@@ -3163,7 +3164,7 @@ files = [
]

[package.dependencies]
certifi = ">=14.05.14"
certifi = ">=14.5.14"
durationpy = ">=0.7"
google-auth = ">=1.0.1"
oauthlib = ">=3.2.2"
@@ -4944,24 +4945,25 @@ windows-terminal = ["colorama (>=0.4.6)"]

[[package]]
name = "pyjwt"
version = "2.10.1"
version = "2.12.1"
description = "JSON Web Token implementation in Python"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "PyJWT-2.10.1-py3-none-any.whl", hash = "sha256:dcdd193e30abefd5debf142f9adfcdd2b58004e644f25406ffaebd50bd98dacb"},
    {file = "pyjwt-2.10.1.tar.gz", hash = "sha256:3cc5772eb20009233caf06e9d8a0577824723b44e6648ee0a2aedb6cf9381953"},
    {file = "pyjwt-2.12.1-py3-none-any.whl", hash = "sha256:28ca37c070cad8ba8cd9790cd940535d40274d22f80ab87f3ac6a713e6e8454c"},
    {file = "pyjwt-2.12.1.tar.gz", hash = "sha256:c74a7a2adf861c04d002db713dd85f84beb242228e671280bf709d765b03672b"},
]

[package.dependencies]
cryptography = {version = ">=3.4.0", optional = true, markers = "extra == \"crypto\""}
typing_extensions = {version = ">=4.0", markers = "python_version < \"3.11\""}

[package.extras]
crypto = ["cryptography (>=3.4.0)"]
dev = ["coverage[toml] (==5.0.4)", "cryptography (>=3.4.0)", "pre-commit", "pytest (>=6.0.0,<7.0.0)", "sphinx", "sphinx-rtd-theme", "zope.interface"]
dev = ["coverage[toml] (==7.10.7)", "cryptography (>=3.4.0)", "pre-commit", "pytest (>=8.4.2,<9.0.0)", "sphinx", "sphinx-rtd-theme", "zope.interface"]
docs = ["sphinx", "sphinx-rtd-theme", "zope.interface"]
tests = ["coverage[toml] (==5.0.4)", "pytest (>=6.0.0,<7.0.0)"]
tests = ["coverage[toml] (==7.10.7)", "pytest (>=8.4.2,<9.0.0)"]

[[package]]
name = "pylint"
@@ -4976,7 +4978,7 @@ files = [
]

[package.dependencies]
astroid = ">=3.3.8,<=3.4.0-dev0"
astroid = ">=3.3.8,<=3.4.0.dev0"
colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
dill = [
    {version = ">=0.2", markers = "python_version < \"3.11\""},
@@ -5822,10 +5824,10 @@ files = [
]

[package.dependencies]
botocore = ">=1.37.4,<2.0a.0"
botocore = ">=1.37.4,<2.0a0"

[package.extras]
crt = ["botocore[crt] (>=1.37.4,<2.0a.0)"]
crt = ["botocore[crt] (>=1.37.4,<2.0a0)"]

[[package]]
name = "safety"
@@ -6743,4 +6745,4 @@
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<3.13"
content-hash = "91739ee5e383337160f9f08b76944ab4e8629c94084c8a9d115246862557f7c5"
content-hash = "4050d3a95f5bc5448576ca0361fd899b35aa04de28d379cdfd3c2b0db67848ad"
@@ -2,7 +2,7 @@

All notable changes to the **Prowler SDK** are documented in this file.

## [5.23.0] (Prowler UNRELEASED)
## [5.23.0] (Prowler v5.23.0)

### 🚀 Added
@@ -18,19 +18,23 @@ All notable changes to the **Prowler SDK** are documented in this file.
- CISA SCuBA Google Workspace Baselines compliance [(#10466)](https://github.com/prowler-cloud/prowler/pull/10466)
- CIS Google Workspace Foundations Benchmark v1.3.0 compliance [(#10462)](https://github.com/prowler-cloud/prowler/pull/10462)
- `calendar_external_sharing_primary_calendar`, `calendar_external_sharing_secondary_calendar`, and `calendar_external_invitations_warning` checks for Google Workspace provider using the Cloud Identity Policy API [(#10597)](https://github.com/prowler-cloud/prowler/pull/10597)
- 11 Drive and Docs checks for Google Workspace provider (`drive_external_sharing_warn_users`, `drive_publishing_files_disabled`, `drive_sharing_allowlisted_domains`, `drive_warn_sharing_with_allowlisted_domains`, `drive_access_checker_recipients_only`, `drive_internal_users_distribute_content`, `drive_shared_drive_creation_allowed`, `drive_shared_drive_managers_cannot_override`, `drive_shared_drive_members_only_access`, `drive_shared_drive_disable_download_print_copy`, `drive_desktop_access_disabled`) using the Cloud Identity Policy API [(#10648)](https://github.com/prowler-cloud/prowler/pull/10648)
- `entra_conditional_access_policy_device_registration_mfa_required` check and `entra_intune_enrollment_sign_in_frequency_every_time` enhancement for M365 provider [(#10222)](https://github.com/prowler-cloud/prowler/pull/10222)
- `entra_conditional_access_policy_block_elevated_insider_risk` check for M365 provider [(#10234)](https://github.com/prowler-cloud/prowler/pull/10234)
- `Vercel` provider support with 30 checks [(#10189)](https://github.com/prowler-cloud/prowler/pull/10189)
- `internet-exposed` category for 13 AWS checks (CloudFront, CodeArtifact, EC2, EFS, RDS, SageMaker, Shield, VPC) [(#10502)](https://github.com/prowler-cloud/prowler/pull/10502)
- `stepfunctions_statemachine_no_secrets_in_definition` check for hardcoded secrets in AWS Step Functions state machine definitions [(#10570)](https://github.com/prowler-cloud/prowler/pull/10570)
- CCC improvements with the latest checks and new mappings [(#10625)](https://github.com/prowler-cloud/prowler/pull/10625)

### 🔄 Changed

- Added `internet-exposed` category to 13 AWS checks (CloudFront, CodeArtifact, EC2, EFS, RDS, SageMaker, Shield, VPC) [(#10502)](https://github.com/prowler-cloud/prowler/pull/10502)
- Minimum Python version from 3.9 to 3.10 and updated classifiers to reflect supported versions (3.10, 3.11, 3.12) [(#10464)](https://github.com/prowler-cloud/prowler/pull/10464)
- Pin direct SDK dependencies to exact versions and rely on `poetry.lock` artifact hashes for reproducible installs [(#10593)](https://github.com/prowler-cloud/prowler/pull/10593)
- Sensitive CLI flags now warn when values are passed directly, recommending environment variables instead [(#10532)](https://github.com/prowler-cloud/prowler/pull/10532)

### 🐞 Fixed

- OCI mutelist support: pass `tenancy_id` to `is_finding_muted` and update `oraclecloud_mutelist_example.yaml` to use `Accounts` key [(#10565)](https://github.com/prowler-cloud/prowler/issues/10565)
- OCI mutelist support: pass `tenancy_id` to `is_finding_muted` and update `oraclecloud_mutelist_example.yaml` to use `Accounts` key [(#10566)](https://github.com/prowler-cloud/prowler/pull/10566)
- `return` statements in `finally` blocks replaced across IAM, Organizations, GCP provider, and custom checks metadata to stop silently swallowing exceptions [(#10102)](https://github.com/prowler-cloud/prowler/pull/10102)
- `JiraConnection` now includes issue types per project fetched during `test_connection`, fixing `JiraInvalidIssueTypeError` on non-English Jira instances [(#10534)](https://github.com/prowler-cloud/prowler/pull/10534)
- `--list-checks` and `--list-checks-json` now include `threat-detection` category checks in their output [(#10578)](https://github.com/prowler-cloud/prowler/pull/10578)
@@ -39,6 +43,15 @@ All notable changes to the **Prowler SDK** are documented in this file.
- `is_policy_public` now recognizes `kms:CallerAccount`, `kms:ViaService`, `aws:CalledVia`, `aws:CalledViaFirst`, and `aws:CalledViaLast` as restrictive condition keys, fixing false positives in `kms_key_policy_is_not_public` and other checks that use `is_condition_block_restrictive` [(#10600)](https://github.com/prowler-cloud/prowler/pull/10600)
- `_enabled_regions` empty-set bug in `AwsProvider.generate_regional_clients` creating boto3 clients for all 36 AWS regions instead of the audited ones, causing random CI timeouts and slow test runs [(#10598)](https://github.com/prowler-cloud/prowler/pull/10598)
- Retrieve only the latest version from a package in AWS CodeArtifact [(#10243)](https://github.com/prowler-cloud/prowler/pull/10243)
- AWS global services (CloudFront, Route53, Shield, FMS) now use the partition's global region instead of the profile's default region [(#10458)](https://github.com/prowler-cloud/prowler/pull/10458)
- Oracle Cloud `events_rule_idp_group_mapping_changes` now recognizes the CIS 3.1 `add/remove` event names to avoid false positives [(#10416)](https://github.com/prowler-cloud/prowler/pull/10416)
- Oracle Cloud password policy checks now exclude immutable system-managed policies (`SimplePasswordPolicy`, `StandardPasswordPolicy`) to avoid false positives [(#10453)](https://github.com/prowler-cloud/prowler/pull/10453)
- Oracle Cloud `kms_key_rotation_enabled` now checks current key version age to avoid false positives on vaults without auto-rotation support [(#10450)](https://github.com/prowler-cloud/prowler/pull/10450)
- OCI filestorage, blockstorage, KMS, and compute services now honor `--region` for scanning outside the tenancy home region [(#10472)](https://github.com/prowler-cloud/prowler/pull/10472)
- OCI provider now supports multi-region filtering via `--region` [(#10473)](https://github.com/prowler-cloud/prowler/pull/10473)
- `prowler image --registry` failing with `ImageNoImagesProvidedError` due to registry arguments not being forwarded to `ImageProvider` in `init_global_provider` [(#10470)](https://github.com/prowler-cloud/prowler/pull/10470)
- OCI multi-region support for identity client configuration in blockstorage, identity, and filestorage services [(#10520)](https://github.com/prowler-cloud/prowler/pull/10520)
- Google Workspace Calendar checks now filter for customer-level policies only, skipping OU and group overrides that could produce incorrect audit results [(#10658)](https://github.com/prowler-cloud/prowler/pull/10658)

### 🔐 Security
@@ -49,21 +62,6 @@ All notable changes to the **Prowler SDK** are documented in this file.

---

## [5.22.1] (Prowler UNRELEASED)

### 🐞 Fixed

- AWS global services (CloudFront, Route53, Shield, FMS) now use the partition's global region instead of the profile's default region [(#10458)](https://github.com/prowler-cloud/prowler/issues/10458)
- Oracle Cloud `events_rule_idp_group_mapping_changes` now recognizes the CIS 3.1 `add/remove` event names to avoid false positives [(#10416)](https://github.com/prowler-cloud/prowler/pull/10416)
- Oracle Cloud password policy checks now exclude immutable system-managed policies (`SimplePasswordPolicy`, `StandardPasswordPolicy`) to avoid false positives [(#10453)](https://github.com/prowler-cloud/prowler/pull/10453)
- Oracle Cloud `kms_key_rotation_enabled` now checks current key version age to avoid false positives on vaults without auto-rotation support [(#10450)](https://github.com/prowler-cloud/prowler/pull/10450)
- Oracle Cloud patch for filestorage, blockstorage, kms, and compute services in OCI to allow for region scanning outside home [(#10455)](https://github.com/prowler-cloud/prowler/pull/10472)
- Oracle cloud provider now supports multi-region filtering [(#10435)](https://github.com/prowler-cloud/prowler/pull/10473)
- `prowler image --registry` failing with `ImageNoImagesProvidedError` due to registry arguments not being forwarded to `ImageProvider` in `init_global_provider` [(#10457)](https://github.com/prowler-cloud/prowler/issues/10457)
- Oracle Cloud multi-region support for identity client configuration in blockstorage, identity, and filestorage services [(#10519)](https://github.com/prowler-cloud/prowler/pull/10520)

---

## [5.22.0] (Prowler v5.22.0)

### 🐞 Fixed
@@ -230,7 +230,9 @@
    {
      "Id": "3.1.2.1.1.1",
      "Description": "Ensure users are warned when they share a file outside their domain",
      "Checks": [],
      "Checks": [
        "drive_external_sharing_warn_users"
      ],
      "Attributes": [
        {
          "Section": "3 Apps",
@@ -251,7 +253,9 @@
    {
      "Id": "3.1.2.1.1.2",
      "Description": "Ensure users cannot publish files to the web or make visible to the world as public or unlisted",
      "Checks": [],
      "Checks": [
        "drive_publishing_files_disabled"
      ],
      "Attributes": [
        {
          "Section": "3 Apps",
@@ -272,7 +276,9 @@
    {
      "Id": "3.1.2.1.1.3",
      "Description": "Ensure document sharing is being controlled by domain with allowlists",
      "Checks": [],
      "Checks": [
        "drive_sharing_allowlisted_domains"
      ],
      "Attributes": [
        {
          "Section": "3 Apps",
@@ -293,7 +299,9 @@
    {
      "Id": "3.1.2.1.1.4",
      "Description": "Ensure users are warned when they share a file with users in an allowlisted domain",
      "Checks": [],
      "Checks": [
        "drive_warn_sharing_with_allowlisted_domains"
      ],
      "Attributes": [
        {
          "Section": "3 Apps",
@@ -314,7 +322,9 @@
    {
      "Id": "3.1.2.1.1.5",
      "Description": "Ensure Access Checker is configured to limit file access",
      "Checks": [],
      "Checks": [
        "drive_access_checker_recipients_only"
      ],
      "Attributes": [
        {
          "Section": "3 Apps",
@@ -335,7 +345,9 @@
    {
      "Id": "3.1.2.1.1.6",
      "Description": "Ensure only users inside your organization can distribute content externally",
      "Checks": [],
      "Checks": [
        "drive_internal_users_distribute_content"
      ],
      "Attributes": [
        {
          "Section": "3 Apps",
@@ -356,7 +368,9 @@
    {
      "Id": "3.1.2.1.2.1",
      "Description": "Ensure users can create new shared drives",
      "Checks": [],
      "Checks": [
        "drive_shared_drive_creation_allowed"
      ],
      "Attributes": [
        {
          "Section": "3 Apps",
@@ -377,7 +391,9 @@
    {
      "Id": "3.1.2.1.2.2",
      "Description": "Ensure manager access members cannot modify shared drive settings",
      "Checks": [],
      "Checks": [
        "drive_shared_drive_managers_cannot_override"
      ],
      "Attributes": [
        {
          "Section": "3 Apps",
@@ -398,7 +414,9 @@
    {
      "Id": "3.1.2.1.2.3",
      "Description": "Ensure shared drive file access is restricted to members only",
      "Checks": [],
      "Checks": [
        "drive_shared_drive_members_only_access"
      ],
      "Attributes": [
        {
          "Section": "3 Apps",
@@ -419,7 +437,9 @@
    {
      "Id": "3.1.2.1.2.4",
      "Description": "Ensure viewers and commenters ability to download, print, and copy files is disabled",
      "Checks": [],
      "Checks": [
        "drive_shared_drive_disable_download_print_copy"
      ],
      "Attributes": [
        {
          "Section": "3 Apps",
@@ -461,7 +481,9 @@
    {
      "Id": "3.1.2.2.2",
      "Description": "Ensure desktop access to Drive is disabled",
      "Checks": [],
      "Checks": [
        "drive_desktop_access_disabled"
      ],
      "Attributes": [
        {
          "Section": "3 Apps",
@@ -1089,7 +1089,9 @@
    {
      "Id": "GWS.DRIVEDOCS.1.1",
      "Description": "External sharing SHALL be restricted to allowlisted domains",
      "Checks": [],
      "Checks": [
        "drive_sharing_allowlisted_domains"
      ],
      "Attributes": [
        {
          "Section": "Drive and Docs",
@@ -1115,7 +1117,9 @@
    {
      "Id": "GWS.DRIVEDOCS.1.3",
      "Description": "Warnings SHALL be enabled when a user is attempting to share with someone not in allowlisted domains",
      "Checks": [],
      "Checks": [
        "drive_warn_sharing_with_allowlisted_domains"
      ],
      "Attributes": [
        {
          "Section": "Drive and Docs",
@@ -1141,7 +1145,9 @@
    {
      "Id": "GWS.DRIVEDOCS.1.5",
      "Description": "Any OUs that do allow external sharing SHOULD disable making content available to anyone with the link",
      "Checks": [],
      "Checks": [
        "drive_publishing_files_disabled"
      ],
      "Attributes": [
        {
          "Section": "Drive and Docs",
@@ -1154,7 +1160,9 @@
    {
      "Id": "GWS.DRIVEDOCS.1.6",
      "Description": "Agencies SHALL set access checking to recipients only",
      "Checks": [],
      "Checks": [
        "drive_access_checker_recipients_only"
      ],
      "Attributes": [
        {
          "Section": "Drive and Docs",
@@ -1193,7 +1201,9 @@
    {
      "Id": "GWS.DRIVEDOCS.1.9",
      "Description": "Out-of-Domain file-level warnings SHALL be enabled",
      "Checks": [],
      "Checks": [
        "drive_external_sharing_warn_users"
      ],
      "Attributes": [
        {
          "Section": "Drive and Docs",
@@ -1232,7 +1242,9 @@
    {
      "Id": "GWS.DRIVEDOCS.2.1",
      "Description": "Agencies SHOULD NOT allow members with manager access to override shared Google Drive creation settings",
      "Checks": [],
      "Checks": [
        "drive_shared_drive_managers_cannot_override"
      ],
      "Attributes": [
        {
          "Section": "Drive and Docs",
@@ -0,0 +1,98 @@
from colorama import Fore, Style
from tabulate import tabulate

from prowler.config.config import orange_color


def get_ccc_table(
    findings: list,
    bulk_checks_metadata: dict,
    compliance_framework: str,
    output_filename: str,
    output_directory: str,
    compliance_overview: bool,
):
    section_table = {
        "Provider": [],
        "Section": [],
        "Status": [],
        "Muted": [],
    }
    pass_count = []
    fail_count = []
    muted_count = []
    sections = {}
    for index, finding in enumerate(findings):
        check = bulk_checks_metadata[finding.check_metadata.CheckID]
        check_compliances = check.Compliance
        for compliance in check_compliances:
            if compliance.Framework == "CCC":
                for requirement in compliance.Requirements:
                    for attribute in requirement.Attributes:
                        section = attribute.Section

                        if section not in sections:
                            sections[section] = {"FAIL": 0, "PASS": 0, "Muted": 0}

                        if finding.muted:
                            if index not in muted_count:
                                muted_count.append(index)
                            sections[section]["Muted"] += 1
                        else:
                            if finding.status == "FAIL" and index not in fail_count:
                                fail_count.append(index)
                                sections[section]["FAIL"] += 1
                            elif finding.status == "PASS" and index not in pass_count:
                                pass_count.append(index)
                                sections[section]["PASS"] += 1

    sections = dict(sorted(sections.items()))
    for section in sections:
        section_table["Provider"].append(compliance.Provider)
        section_table["Section"].append(section)
        if sections[section]["FAIL"] > 0:
            section_table["Status"].append(
                f"{Fore.RED}FAIL({sections[section]['FAIL']}){Style.RESET_ALL}"
            )
        else:
            if sections[section]["PASS"] > 0:
                section_table["Status"].append(
                    f"{Fore.GREEN}PASS({sections[section]['PASS']}){Style.RESET_ALL}"
                )
            else:
                section_table["Status"].append(f"{Fore.GREEN}PASS{Style.RESET_ALL}")
        section_table["Muted"].append(
            f"{orange_color}{sections[section]['Muted']}{Style.RESET_ALL}"
        )

    if (
        len(fail_count) + len(pass_count) + len(muted_count) > 1
    ):  # If there are no resources, don't print the compliance table
        print(
            f"\nCompliance Status of {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Framework:"
        )
        total_findings_count = len(fail_count) + len(pass_count) + len(muted_count)
        overview_table = [
            [
                f"{Fore.RED}{round(len(fail_count) / total_findings_count * 100, 2)}% ({len(fail_count)}) FAIL{Style.RESET_ALL}",
                f"{Fore.GREEN}{round(len(pass_count) / total_findings_count * 100, 2)}% ({len(pass_count)}) PASS{Style.RESET_ALL}",
                f"{orange_color}{round(len(muted_count) / total_findings_count * 100, 2)}% ({len(muted_count)}) MUTED{Style.RESET_ALL}",
            ]
        ]
        print(tabulate(overview_table, tablefmt="rounded_grid"))
        if not compliance_overview:
            if len(fail_count) > 0 and len(section_table["Section"]) > 0:
                print(
                    f"\nFramework {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Results:"
                )
                print(
                    tabulate(
                        section_table,
                        tablefmt="rounded_grid",
                        headers="keys",
                    )
                )
                print(f"\nDetailed results of {compliance_framework.upper()} are in:")
                print(
                    f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework}.csv\n"
                )

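A minimal sketch of the input shape `get_ccc_table` consumes, useful when reading the nested loop above. The finding and metadata objects here are hypothetical stand-ins built with `SimpleNamespace` rather than Prowler's real `Check_Report` and metadata models, and the framework/filename arguments are illustrative; it assumes Prowler is installed so the import resolves. Note the `> 1` guard: with fewer than two counted findings, nothing is printed, so the sketch uses two.

```python
from types import SimpleNamespace

from prowler.lib.outputs.compliance.ccc.ccc import get_ccc_table

# Stand-ins mirroring the attributes the function reads:
# finding.check_metadata.CheckID, finding.status, finding.muted, and
# check.Compliance[n].Framework / .Requirements[n].Attributes[n].Section.
attribute = SimpleNamespace(Section="3 Apps")
requirement = SimpleNamespace(Attributes=[attribute])
compliance = SimpleNamespace(
    Framework="CCC", Provider="googleworkspace", Requirements=[requirement]
)
bulk_checks_metadata = {
    "drive_desktop_access_disabled": SimpleNamespace(Compliance=[compliance])
}
findings = [
    SimpleNamespace(
        check_metadata=SimpleNamespace(CheckID="drive_desktop_access_disabled"),
        status="FAIL",
        muted=False,
    ),
    SimpleNamespace(
        check_metadata=SimpleNamespace(CheckID="drive_desktop_access_disabled"),
        status="PASS",
        muted=False,
    ),
]

get_ccc_table(
    findings,
    bulk_checks_metadata,
    "ccc_googleworkspace",  # assumed framework name for illustration
    "output",
    "/tmp",
    compliance_overview=False,
)
```
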
@@ -3,6 +3,7 @@ import sys
from prowler.lib.check.models import Check_Report
from prowler.lib.logger import logger
from prowler.lib.outputs.compliance.c5.c5 import get_c5_table
from prowler.lib.outputs.compliance.ccc.ccc import get_ccc_table
from prowler.lib.outputs.compliance.cis.cis import get_cis_table
from prowler.lib.outputs.compliance.csa.csa import get_csa_table
from prowler.lib.outputs.compliance.ens.ens import get_ens_table
@@ -104,6 +105,15 @@ def display_compliance_table(
                output_directory,
                compliance_overview,
            )
        elif compliance_framework.startswith("ccc_"):
            get_ccc_table(
                findings,
                bulk_checks_metadata,
                compliance_framework,
                output_filename,
                output_directory,
                compliance_overview,
            )
        else:
            get_generic_compliance_table(
                findings,

@@ -0,0 +1,44 @@
{
  "Provider": "aws",
  "CheckID": "stepfunctions_statemachine_no_secrets_in_definition",
  "CheckTitle": "Step Functions state machine has no sensitive credentials in its definition",
  "CheckType": [
    "Software and Configuration Checks/AWS Security Best Practices",
    "TTPs/Credential Access",
    "Effects/Data Exposure",
    "Sensitive Data Identifications/Security"
  ],
  "ServiceName": "stepfunctions",
  "SubServiceName": "",
  "ResourceIdTemplate": "",
  "Severity": "critical",
  "ResourceType": "AwsStepFunctionStateMachine",
  "ResourceGroup": "serverless",
  "Description": "**AWS Step Functions state machines** are inspected for **hardcoded secrets** (keys, tokens, passwords) embedded directly in the state machine **definition** (Amazon States Language JSON).\n\nSuch values indicate sensitive data is stored directly in task parameters instead of being sourced securely.",
  "Risk": "Plaintext secrets in state machine definitions reduce confidentiality: values can be viewed in the AWS Console, CLI, and may leak into execution logs or public outputs. Compromised credentials enable unauthorized AWS actions, lateral movement, and data exfiltration.",
  "RelatedUrl": "",
  "AdditionalURLs": [
    "https://docs.aws.amazon.com/step-functions/latest/dg/concepts-amazon-states-language.html",
    "https://docs.aws.amazon.com/step-functions/latest/dg/security-best-practices.html",
    "https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_how-services-use-secrets_step-functions.html",
    "https://docs.aws.amazon.com/systems-manager/latest/userguide/integration-ps-secretsmanager.html"
  ],
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "```yaml\nResources:\n  <example_resource_name>:\n    Type: AWS::StepFunctions::StateMachine\n    Properties:\n      StateMachineName: <example_resource_name>\n      RoleArn: <example_resource_arn>\n      DefinitionString: |\n        {\n          \"Comment\": \"Example state machine\",\n          \"StartAt\": \"MyTask\",\n          \"States\": {\n            \"MyTask\": {\n              \"Type\": \"Task\",\n              \"Resource\": \"arn:aws:states:::aws-sdk:secretsmanager:getSecretValue\",\n              \"Parameters\": {\n                \"SecretId\": \"<example_secret_name>\"\n              },\n              \"End\": true\n            }\n          }\n        }\n```",
      "Other": "1. In AWS Console, go to Step Functions and open your state machine\n2. Click Edit\n3. Remove any hardcoded secrets from the definition\n4. Use AWS Secrets Manager or Parameter Store to retrieve secrets at runtime\n5. Grant the state machine IAM role permission to access the secret\n6. Save the updated definition",
      "Terraform": "```hcl\nresource \"aws_sfn_state_machine\" \"<example_resource_name>\" {\n  name     = \"<example_resource_name>\"\n  role_arn = \"<example_resource_arn>\"\n\n  definition = jsonencode({\n    Comment = \"Example state machine\"\n    StartAt = \"MyTask\"\n    States = {\n      MyTask = {\n        Type     = \"Task\"\n        Resource = \"arn:aws:states:::aws-sdk:secretsmanager:getSecretValue\"\n        Parameters = {\n          SecretId = \"<example_secret_name>\" # Reference secret by name, never hardcode value\n        }\n        End = true\n      }\n    }\n  })\n}\n```"
    },
    "Recommendation": {
      "Text": "Store secrets outside the state machine definition and retrieve them securely at runtime using **AWS Secrets Manager** or **AWS Systems Manager Parameter Store**.\n- Use the `aws-sdk:secretsmanager:getSecretValue` integration to fetch secrets dynamically\n- Enforce **least privilege** on the state machine IAM role\n- Rotate secrets regularly and never embed them in the definition",
      "Url": "https://hub.prowler.com/check/stepfunctions_statemachine_no_secrets_in_definition"
    }
  },
  "Categories": [
    "secrets"
  ],
  "DependsOn": [],
  "RelatedTo": [],
  "Notes": ""
}

@@ -0,0 +1,45 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.lib.utils.utils import detect_secrets_scan
from prowler.providers.aws.services.stepfunctions.stepfunctions_client import (
    stepfunctions_client,
)


class stepfunctions_statemachine_no_secrets_in_definition(Check):
    """Check that AWS Step Functions state machine definitions contain no hardcoded secrets."""

    def execute(self) -> list[Check_Report_AWS]:
        findings = []
        secrets_ignore_patterns = stepfunctions_client.audit_config.get(
            "secrets_ignore_patterns", []
        )
        for state_machine in stepfunctions_client.state_machines.values():
            report = Check_Report_AWS(metadata=self.metadata(), resource=state_machine)
            report.status = "PASS"
            report.status_extended = f"No secrets found in Step Functions state machine {state_machine.name} definition."

            if state_machine.definition:
                detect_secrets_output = detect_secrets_scan(
                    data=state_machine.definition,
                    excluded_secrets=secrets_ignore_patterns,
                    detect_secrets_plugins=stepfunctions_client.audit_config.get(
                        "detect_secrets_plugins",
                    ),
                )

                if detect_secrets_output:
                    secrets_string = ", ".join(
                        [
                            f"{secret['type']} on line {secret['line_number']}"
                            for secret in detect_secrets_output
                        ]
                    )
                    report.status = "FAIL"
                    report.status_extended = (
                        f"Potential {'secrets' if len(detect_secrets_output) > 1 else 'secret'} "
                        f"found in Step Functions state machine {state_machine.name} definition "
                        f"-> {secrets_string}."
                    )

            findings.append(report)
        return findings

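For context, a sketch of a definition this scan should flag. The function name, state names, and the password value are fabricated for illustration; `detect_secrets_scan` is called with the same keyword arguments the check uses, and the entries it returns carry at least the `type` and `line_number` keys that the check joins into `status_extended`.

```python
from prowler.lib.utils.utils import detect_secrets_scan

# Hypothetical Amazon States Language definition with a hardcoded password
# embedded in task parameters -- the pattern this check exists to catch.
definition = """
{
  "StartAt": "ConnectToDb",
  "States": {
    "ConnectToDb": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:connect",
      "Parameters": {
        "username": "admin",
        "password": "hunter2-example-password"
      },
      "End": true
    }
  }
}
"""

secrets = detect_secrets_scan(data=definition, excluded_secrets=[])
# Returns a list of dicts (or None when nothing is found).
for secret in secrets or []:
    print(secret["type"], "on line", secret["line_number"])
```
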
@@ -41,6 +41,22 @@ class GoogleWorkspaceService:
        )
        return None

    @staticmethod
    def _is_customer_level_policy(policy: dict) -> bool:
        """Check if a policy applies at the customer (domain-wide) level.

        The Cloud Identity Policy API returns policies at multiple
        organizational levels (customer, OU, group). Customer-level
        policies have no group targeting and no sub-OU targeting in
        their policyQuery.
        """
        policy_query = policy.get("policyQuery", {})
        if policy_query.get("group"):
            return False
        if policy_query.get("orgUnit"):
            return False
        return True

    def _handle_api_error(self, error, context: str, resource_name: str = ""):
        """
        Centralized Google Workspace API error handling.

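A quick illustration of how that predicate classifies `policyQuery` payloads. The dict shapes are abbreviated from real Policy API responses, and the customer, group, and OU identifiers are invented:

```python
from prowler.providers.googleworkspace.lib.service.service import GoogleWorkspaceService

# Customer-wide policy: no group or orgUnit targeting -> customer level.
customer_policy = {"policyQuery": {"customer": "customers/C0example"}}

# Policy scoped to a group -> not customer level.
group_policy = {"policyQuery": {"group": "groups/engineering-example"}}

# Policy scoped to an organizational unit -> not customer level.
ou_policy = {"policyQuery": {"orgUnit": "orgUnits/03ph8a2zexample"}}

assert GoogleWorkspaceService._is_customer_level_policy(customer_policy)
assert not GoogleWorkspaceService._is_customer_level_policy(group_policy)
assert not GoogleWorkspaceService._is_customer_level_policy(ou_policy)
```
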
@@ -9,7 +9,7 @@
  "Severity": "medium",
  "ResourceType": "NotDefined",
  "ResourceGroup": "collaboration",
  "Description": "Google Calendar **warns users** when they invite guests from outside the organization to an event. This prompt gives users a chance to reconsider before sharing meeting details with external parties, reducing the likelihood of **accidental information disclosure** through calendar invitations.",
  "Description": "The domain-wide Google Calendar configuration **warns users** when they invite guests from outside the organization to an event. This prompt gives users a chance to reconsider before sharing meeting details with external parties, reducing the likelihood of **accidental information disclosure** through calendar invitations.",
  "Risk": "Without external invitation warnings, users may unintentionally include **external guests** in internal meetings, exposing **confidential meeting details**, agendas, and internal attendee lists to unauthorized parties. This is a common vector for inadvertent data leakage through everyday calendar actions.",
  "RelatedUrl": "",
  "AdditionalURLs": [

@@ -9,7 +9,7 @@
  "Severity": "medium",
  "ResourceType": "NotDefined",
  "ResourceGroup": "collaboration",
  "Description": "Primary calendars in the Google Workspace domain share **only free/busy information** with external users. When external sharing is set to share full event details, sensitive information such as meeting titles, attendees, locations, and descriptions is exposed to users outside the organization.",
  "Description": "The domain-wide default for primary calendars shares **only free/busy information** with external users. When external sharing is set to share full event details, sensitive information such as meeting titles, attendees, locations, and descriptions is exposed to users outside the organization.",
  "Risk": "Overly permissive external sharing of primary calendars exposes **sensitive meeting metadata** — titles, attendees, locations, and descriptions — to users outside the organization. This increases the risk of **information disclosure**, **social engineering**, and **targeted phishing** based on insights into organizational activities.",
  "RelatedUrl": "",
  "AdditionalURLs": [

@@ -9,7 +9,7 @@
  "Severity": "medium",
  "ResourceType": "NotDefined",
  "ResourceGroup": "collaboration",
  "Description": "Secondary calendars in the Google Workspace domain share **only free/busy information** with external users. Secondary calendars are additional calendars users create beyond their primary calendar (e.g., for projects, teams, or personal events), and are commonly used to organize sensitive or focused activities that should not be visible to external parties.",
  "Description": "The domain-wide default for secondary calendars shares **only free/busy information** with external users. Secondary calendars are additional calendars users create beyond their primary calendar (e.g., for projects, teams, or personal events), and are commonly used to organize sensitive or focused activities that should not be visible to external parties.",
  "Risk": "Overly permissive external sharing of secondary calendars exposes **project-specific or team-specific event details** to users outside the organization. Because secondary calendars often hold more targeted activities (e.g., product launches, internal reviews), unrestricted external sharing increases the risk of **information disclosure** and **competitive intelligence leakage**.",
  "RelatedUrl": "",
  "AdditionalURLs": [

@@ -38,6 +38,8 @@ class Calendar(GoogleWorkspaceService):
                    response = request.execute()

                    for policy in response.get("policies", []):
                        if not self._is_customer_level_policy(policy):
                            continue
                        setting = policy.get("setting", {})
                        setting_type = setting.get("type", "").removeprefix("settings/")
                        value = setting.get("value", {})

@@ -0,0 +1,40 @@
{
  "Provider": "googleworkspace",
  "CheckID": "drive_access_checker_recipients_only",
  "CheckTitle": "Drive Access Checker is configured to recipients only",
  "CheckType": [],
  "ServiceName": "drive",
  "SubServiceName": "",
  "ResourceIdTemplate": "",
  "Severity": "medium",
  "ResourceType": "NotDefined",
  "ResourceGroup": "collaboration",
  "Description": "The domain-wide Access Checker configuration ensures that when a user shares a Drive file via a Google product other than Drive itself (e.g. by pasting a link in Gmail), the suggestions never expand sharing to a wider audience or to anyone with the link. Access Checker is set to **recipients only**.",
  "Risk": "If Access Checker suggests broader audiences or public visibility, users may **inadvertently widen access** to a file beyond the people they intended to share with. This is a common cause of unintentional internal or external over-sharing.",
  "RelatedUrl": "",
  "AdditionalURLs": [
    "https://support.google.com/a/answer/60781",
    "https://cloud.google.com/identity/docs/concepts/supported-policy-api-settings"
  ],
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "",
      "Other": "1. Sign in to the Google **Admin console** at https://admin.google.com\n2. Navigate to **Apps** > **Google Workspace** > **Drive and Docs**\n3. Click **Sharing settings** > **Sharing options**\n4. Under **Access Checker**, select **Recipients only**\n5. Click **Save**",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Configure the Drive Access Checker to suggest sharing only with the explicit recipients of a link. This prevents accidental over-sharing through Gmail and other Google integrations.",
      "Url": "https://hub.prowler.com/check/drive_access_checker_recipients_only"
    }
  },
  "Categories": [
    "internet-exposed"
  ],
  "DependsOn": [],
  "RelatedTo": [
    "drive_external_sharing_warn_users",
    "drive_publishing_files_disabled"
  ],
  "Notes": ""
}

@@ -0,0 +1,55 @@
from typing import List

from prowler.lib.check.models import Check, CheckReportGoogleWorkspace
from prowler.providers.googleworkspace.services.drive.drive_client import drive_client


class drive_access_checker_recipients_only(Check):
    """Check that Access Checker is configured to recipients only

    This check verifies that the domain-level Drive and Docs Access Checker
    setting suggests granting access only to the explicit recipients of a
    shared link, rather than expanding access to wider audiences or making
    files publicly accessible.
    """

    def execute(self) -> List[CheckReportGoogleWorkspace]:
        findings = []

        if drive_client.policies_fetched:
            report = CheckReportGoogleWorkspace(
                metadata=self.metadata(),
                resource=drive_client.provider.identity,
                resource_name=drive_client.provider.identity.domain,
                resource_id=drive_client.provider.identity.customer_id,
                customer_id=drive_client.provider.identity.customer_id,
                location="global",
            )

            access_checker = drive_client.policies.access_checker_suggestions

            if access_checker == "RECIPIENTS_ONLY":
                report.status = "PASS"
                report.status_extended = (
                    f"Drive and Docs Access Checker in domain "
                    f"{drive_client.provider.identity.domain} is restricted to "
                    f"recipients only."
                )
            else:
                report.status = "FAIL"
                if access_checker is None:
                    report.status_extended = (
                        f"Drive and Docs Access Checker is not explicitly "
                        f"configured in domain {drive_client.provider.identity.domain}. "
                        f"Access Checker should be set to recipients only."
                    )
                else:
                    report.status_extended = (
                        f"Drive and Docs Access Checker in domain "
                        f"{drive_client.provider.identity.domain} is set to "
                        f"{access_checker}. Access Checker should be set to recipients only."
                    )

            findings.append(report)

        return findings

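All of the Drive checks below repeat this tri-state status logic: a compliant policy value passes, a value of `None` (the Policy API returned no customer-level value) fails with a "not explicitly configured" message, and any other concrete value fails with the observed setting. A condensed sketch of the pattern, using a hypothetical `evaluate` helper and an assumed non-compliant enum value:

```python
from typing import Optional, Tuple


def evaluate(access_checker: Optional[str]) -> Tuple[str, str]:
    """Hypothetical condensation of the PASS/FAIL logic shared by the checks."""
    if access_checker == "RECIPIENTS_ONLY":
        return "PASS", "restricted to recipients only"
    if access_checker is None:
        # The Policy API returned no customer-level value for this setting.
        return "FAIL", "not explicitly configured"
    return "FAIL", f"set to {access_checker}"


assert evaluate("RECIPIENTS_ONLY")[0] == "PASS"
assert evaluate(None) == ("FAIL", "not explicitly configured")
assert evaluate("RECIPIENTS_OR_AUDIENCE")[0] == "FAIL"  # assumed enum value
```
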
@@ -0,0 +1,4 @@
from prowler.providers.common.provider import Provider
from prowler.providers.googleworkspace.services.drive.drive_service import Drive

drive_client = Drive(Provider.get_global_provider())

@@ -0,0 +1,35 @@
{
  "Provider": "googleworkspace",
  "CheckID": "drive_desktop_access_disabled",
  "CheckTitle": "Google Drive for desktop is disabled",
  "CheckType": [],
  "ServiceName": "drive",
  "SubServiceName": "",
  "ResourceIdTemplate": "",
  "Severity": "medium",
  "ResourceType": "NotDefined",
  "ResourceGroup": "collaboration",
  "Description": "The domain-wide default **disables Google Drive for desktop** for the organization. The Drive for desktop client synchronizes Drive content to local devices and uses its own \"offline\" mechanism that does not respect the central offline-access device policy, so disabling it closes a synchronization channel that would otherwise place organizational content on potentially unmanaged endpoints.",
  "Risk": "When Drive for desktop is enabled, organizational files are **synchronized to local devices** and remain accessible if the device is lost, stolen, or compromised. Because Drive for desktop bypasses the central offline-access controls, this channel is a frequently overlooked path for sensitive data to leave organization-managed environments.",
  "RelatedUrl": "",
  "AdditionalURLs": [
    "https://support.google.com/a/answer/7491144",
    "https://cloud.google.com/identity/docs/concepts/supported-policy-api-settings"
  ],
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "",
      "Other": "1. Sign in to the Google **Admin console** at https://admin.google.com\n2. Navigate to **Apps** > **Google Workspace** > **Drive and Docs**\n3. Click **Features and Applications** > **Google Drive for desktop**\n4. **Uncheck** *Allow Google Drive for desktop in your organization*\n5. Click **Save**",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Disable Google Drive for desktop to prevent local synchronization of organizational content. This reduces the risk of data loss when devices are lost or stolen and closes a channel that bypasses central offline-access controls.",
      "Url": "https://hub.prowler.com/check/drive_desktop_access_disabled"
    }
  },
  "Categories": [],
  "DependsOn": [],
  "RelatedTo": [],
  "Notes": ""
}

@@ -0,0 +1,57 @@
from typing import List

from prowler.lib.check.models import Check, CheckReportGoogleWorkspace
from prowler.providers.googleworkspace.services.drive.drive_client import drive_client


class drive_desktop_access_disabled(Check):
    """Check that Google Drive for desktop is disabled

    This check verifies that the domain-level Drive and Docs policy disables
    Google Drive for desktop. The desktop client synchronizes Drive content
    to local devices and bypasses the standard offline access controls,
    so disabling it reduces the risk of organizational data being lost or
    stolen along with an end-user device.
    """

    def execute(self) -> List[CheckReportGoogleWorkspace]:
        findings = []

        if drive_client.policies_fetched:
            report = CheckReportGoogleWorkspace(
                metadata=self.metadata(),
                resource=drive_client.provider.identity,
                resource_name=drive_client.provider.identity.domain,
                resource_id=drive_client.provider.identity.customer_id,
                customer_id=drive_client.provider.identity.customer_id,
                location="global",
            )

            allow_desktop = drive_client.policies.allow_drive_for_desktop

            if allow_desktop is False:
                report.status = "PASS"
                report.status_extended = (
                    f"Google Drive for desktop is disabled in domain "
                    f"{drive_client.provider.identity.domain}."
                )
            else:
                report.status = "FAIL"
                if allow_desktop is None:
                    report.status_extended = (
                        f"Google Drive for desktop is not explicitly configured "
                        f"in domain {drive_client.provider.identity.domain}. "
                        f"Drive for desktop should be disabled to prevent local "
                        f"synchronization of organizational content."
                    )
                else:
                    report.status_extended = (
                        f"Google Drive for desktop is enabled in domain "
                        f"{drive_client.provider.identity.domain}. "
                        f"Drive for desktop should be disabled to prevent local "
                        f"synchronization of organizational content."
                    )

            findings.append(report)

        return findings

@@ -0,0 +1,43 @@
{
  "Provider": "googleworkspace",
  "CheckID": "drive_external_sharing_warn_users",
  "CheckTitle": "Users are warned when sharing files outside the domain",
  "CheckType": [],
  "ServiceName": "drive",
  "SubServiceName": "",
  "ResourceIdTemplate": "",
  "Severity": "medium",
  "ResourceType": "NotDefined",
  "ResourceGroup": "collaboration",
  "Description": "The domain-wide Drive and Docs configuration **warns users** when they attempt to share a file with users outside the organization. This prompt gives users an opportunity to reconsider before exposing organizational content to external parties, reducing the likelihood of **accidental data disclosure** through everyday sharing actions.",
  "Risk": "Without external sharing warnings, users may unintentionally share **sensitive documents** with external recipients who are not entitled to the data. This is a common vector for inadvertent leakage of intellectual property, personally identifiable information, and confidential business data through routine Drive sharing.",
  "RelatedUrl": "",
  "AdditionalURLs": [
    "https://support.google.com/a/answer/60781",
    "https://cloud.google.com/identity/docs/concepts/supported-policy-api-settings"
  ],
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "",
      "Other": "1. Sign in to the Google **Admin console** at https://admin.google.com\n2. Navigate to **Apps** > **Google Workspace** > **Drive and Docs**\n3. Click **Sharing settings** > **Sharing options**\n4. Under **Sharing outside of <Company>**, ensure sharing outside the domain is allowed and check **For files owned by users in <Company> warn when sharing outside of <Company>**\n5. Click **Save**",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Enable external sharing warnings so users are notified whenever they attempt to share a file outside the organization. This simple prompt helps prevent accidental disclosure of sensitive content to unintended recipients.",
      "Url": "https://hub.prowler.com/check/drive_external_sharing_warn_users"
    }
  },
  "Categories": [
    "internet-exposed"
  ],
  "DependsOn": [],
  "RelatedTo": [
    "drive_publishing_files_disabled",
    "drive_sharing_allowlisted_domains",
    "drive_warn_sharing_with_allowlisted_domains",
    "drive_access_checker_recipients_only",
    "drive_internal_users_distribute_content"
  ],
  "Notes": ""
}

@@ -0,0 +1,54 @@
from typing import List

from prowler.lib.check.models import Check, CheckReportGoogleWorkspace
from prowler.providers.googleworkspace.services.drive.drive_client import drive_client


class drive_external_sharing_warn_users(Check):
    """Check that users are warned when sharing files outside the domain

    This check verifies that the domain-level Drive and Docs policy warns
    users when they attempt to share a file with someone outside the
    organization, reducing the risk of accidental information disclosure.
    """

    def execute(self) -> List[CheckReportGoogleWorkspace]:
        findings = []

        if drive_client.policies_fetched:
            report = CheckReportGoogleWorkspace(
                metadata=self.metadata(),
                resource=drive_client.provider.identity,
                resource_name=drive_client.provider.identity.domain,
                resource_id=drive_client.provider.identity.customer_id,
                customer_id=drive_client.provider.identity.customer_id,
                location="global",
            )

            warning_enabled = drive_client.policies.warn_for_external_sharing

            if warning_enabled is True:
                report.status = "PASS"
                report.status_extended = (
                    f"External sharing warnings for Drive and Docs are enabled "
                    f"in domain {drive_client.provider.identity.domain}."
                )
            else:
                report.status = "FAIL"
                if warning_enabled is None:
                    report.status_extended = (
                        f"External sharing warnings for Drive and Docs are not "
                        f"explicitly configured in domain "
                        f"{drive_client.provider.identity.domain}. "
                        f"Users should be warned when sharing files outside the organization."
                    )
                else:
                    report.status_extended = (
                        f"External sharing warnings for Drive and Docs are disabled "
                        f"in domain {drive_client.provider.identity.domain}. "
                        f"Users should be warned when sharing files outside the organization."
                    )

            findings.append(report)

        return findings

@@ -0,0 +1,40 @@
{
  "Provider": "googleworkspace",
  "CheckID": "drive_internal_users_distribute_content",
  "CheckTitle": "Only internal users can distribute content outside the organization",
  "CheckType": [],
  "ServiceName": "drive",
  "SubServiceName": "",
  "ResourceIdTemplate": "",
  "Severity": "medium",
  "ResourceType": "NotDefined",
  "ResourceGroup": "collaboration",
  "Description": "The domain-wide default restricts distributing organizational content to shared drives owned by **another organization** to eligible **internal users** only. This prevents external collaborators with manager access to internal shared drives from moving content out of the organization.",
  "Risk": "If external users can move files from internal shared drives into shared drives owned by another organization, the organization **loses authoritative control** over its own data. This is a frequently overlooked path for unintentional or malicious data exfiltration through shared drive collaboration.",
  "RelatedUrl": "",
  "AdditionalURLs": [
    "https://support.google.com/a/answer/60781",
    "https://cloud.google.com/identity/docs/concepts/supported-policy-api-settings"
  ],
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "",
      "Other": "1. Sign in to the Google **Admin console** at https://admin.google.com\n2. Navigate to **Apps** > **Google Workspace** > **Drive and Docs**\n3. Click **Sharing settings** > **Sharing options**\n4. Under **Distributing content outside of <Company>**, select **Only users in <Company>**\n5. Click **Save**",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Restrict the ability to distribute content to shared drives owned by another organization to internal users only. This preserves authoritative control over organizational data and closes a common shared-drive exfiltration path.",
      "Url": "https://hub.prowler.com/check/drive_internal_users_distribute_content"
    }
  },
  "Categories": [
    "internet-exposed"
  ],
  "DependsOn": [],
  "RelatedTo": [
    "drive_external_sharing_warn_users",
    "drive_publishing_files_disabled"
  ],
  "Notes": ""
}

@@ -0,0 +1,56 @@
from typing import List

from prowler.lib.check.models import Check, CheckReportGoogleWorkspace
from prowler.providers.googleworkspace.services.drive.drive_client import drive_client


class drive_internal_users_distribute_content(Check):
    """Check that only internal users can distribute content externally

    This check verifies that the domain-level Drive and Docs policy restricts
    distributing content to shared drives owned by another organization to
    eligible internal users only, preventing external collaborators from
    moving organizational content out of the domain.
    """

    def execute(self) -> List[CheckReportGoogleWorkspace]:
        findings = []

        if drive_client.policies_fetched:
            report = CheckReportGoogleWorkspace(
                metadata=self.metadata(),
                resource=drive_client.provider.identity,
                resource_name=drive_client.provider.identity.domain,
                resource_id=drive_client.provider.identity.customer_id,
                customer_id=drive_client.provider.identity.customer_id,
                location="global",
            )

            allowed = drive_client.policies.allowed_parties_for_distributing_content

            if allowed in ("ELIGIBLE_INTERNAL_USERS", "NONE"):
                report.status = "PASS"
                report.status_extended = (
                    f"Distributing content outside the organization in domain "
                    f"{drive_client.provider.identity.domain} is restricted to "
                    f"{allowed}."
                )
            else:
                report.status = "FAIL"
                if allowed is None:
                    report.status_extended = (
                        f"Allowed parties for distributing content externally is not "
                        f"explicitly configured in domain "
                        f"{drive_client.provider.identity.domain}. "
                        f"Only internal users should be allowed to distribute content externally."
                    )
                else:
                    report.status_extended = (
                        f"Distributing content outside the organization in domain "
                        f"{drive_client.provider.identity.domain} is set to {allowed}. "
                        f"Only internal users should be allowed to distribute content externally."
                    )

            findings.append(report)

        return findings

@@ -0,0 +1,41 @@
{
  "Provider": "googleworkspace",
  "CheckID": "drive_publishing_files_disabled",
  "CheckTitle": "Publishing Drive files to the web is disabled",
  "CheckType": [],
  "ServiceName": "drive",
  "SubServiceName": "",
  "ResourceIdTemplate": "",
  "Severity": "medium",
  "ResourceType": "NotDefined",
  "ResourceGroup": "collaboration",
  "Description": "The domain-wide Drive and Docs default **prevents users from publishing files to the web** or making them visible to the world as public or unlisted. Publishing a file to the web exposes its content to anyone on the internet, often without any audit trail, making it one of the highest-impact misconfigurations available to end users.",
  "Risk": "Allowing users to publish Drive files to the web creates a path for **unbounded data exposure**. Sensitive documents, intellectual property, customer data, or internal communications can be made publicly accessible — and indexed by search engines — with a single click, often unintentionally.",
  "RelatedUrl": "",
  "AdditionalURLs": [
    "https://support.google.com/a/answer/60781",
    "https://cloud.google.com/identity/docs/concepts/supported-policy-api-settings"
  ],
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "",
      "Other": "1. Sign in to the Google **Admin console** at https://admin.google.com\n2. Navigate to **Apps** > **Google Workspace** > **Drive and Docs**\n3. Click **Sharing settings** > **Sharing options**\n4. Under **Sharing outside of <Company>**, **uncheck** *When sharing outside of <Company> is allowed, users in <Company> can make files and published web content visible to anyone with the link*\n5. Click **Save**",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Disable the ability for users to publish Drive files to the web or make them visible to anyone with the link. This eliminates the most direct path to unintentional public data exposure through Drive.",
      "Url": "https://hub.prowler.com/check/drive_publishing_files_disabled"
    }
  },
  "Categories": [
    "internet-exposed"
  ],
  "DependsOn": [],
  "RelatedTo": [
    "drive_external_sharing_warn_users",
    "drive_sharing_allowlisted_domains",
    "drive_internal_users_distribute_content"
  ],
  "Notes": ""
}

@@ -0,0 +1,54 @@
from typing import List

from prowler.lib.check.models import Check, CheckReportGoogleWorkspace
from prowler.providers.googleworkspace.services.drive.drive_client import drive_client


class drive_publishing_files_disabled(Check):
    """Check that publishing Drive files to the web is disabled

    This check verifies that the domain-level Drive and Docs policy prevents
    users from publishing files to the web or making them visible to anyone
    with the link, blocking unintended public exposure of organizational
    content.
    """

    def execute(self) -> List[CheckReportGoogleWorkspace]:
        findings = []

        if drive_client.policies_fetched:
            report = CheckReportGoogleWorkspace(
                metadata=self.metadata(),
                resource=drive_client.provider.identity,
                resource_name=drive_client.provider.identity.domain,
                resource_id=drive_client.provider.identity.customer_id,
                customer_id=drive_client.provider.identity.customer_id,
                location="global",
            )

            allow_publishing = drive_client.policies.allow_publishing_files

            if allow_publishing is False:
                report.status = "PASS"
                report.status_extended = (
                    f"Publishing files to the web is disabled in domain "
                    f"{drive_client.provider.identity.domain}."
                )
            else:
                report.status = "FAIL"
                if allow_publishing is None:
                    report.status_extended = (
                        f"Publishing files to the web is not explicitly configured "
                        f"in domain {drive_client.provider.identity.domain}. "
                        f"Users should not be able to publish files to the web or make them public."
                    )
                else:
                    report.status_extended = (
                        f"Publishing files to the web is enabled in domain "
                        f"{drive_client.provider.identity.domain}. "
                        f"Users should not be able to publish files to the web or make them public."
                    )

            findings.append(report)

        return findings

@@ -0,0 +1,150 @@
from typing import Optional

from pydantic import BaseModel

from prowler.lib.logger import logger
from prowler.providers.googleworkspace.lib.service.service import GoogleWorkspaceService


class Drive(GoogleWorkspaceService):
    """Google Workspace Drive and Docs service for auditing domain-level Drive policies.

    Uses the Cloud Identity Policy API v1 to read Drive and Docs sharing,
    shared drive creation, and Drive for desktop settings configured in the
    Admin Console.
    """

    def __init__(self, provider):
        super().__init__(provider)
        self.policies = DrivePolicies()
        self.policies_fetched = False
        self._fetch_drive_policies()

    def _fetch_drive_policies(self):
        """Fetch Drive and Docs policies from the Cloud Identity Policy API v1."""
        logger.info("Drive - Fetching Drive and Docs policies...")

        try:
            service = self._build_service("cloudidentity", "v1")

            if not service:
                logger.error("Failed to build Cloud Identity service")
                return

            request = service.policies().list(pageSize=100)
            fetch_succeeded = True

            while request is not None:
                try:
                    response = request.execute()

                    for policy in response.get("policies", []):
                        if not self._is_customer_level_policy(policy):
                            continue

                        setting = policy.get("setting", {})
                        setting_type = setting.get("type", "").removeprefix("settings/")
                        value = setting.get("value", {})

                        if setting_type == "drive_and_docs.external_sharing":
                            self.policies.external_sharing_mode = value.get(
                                "externalSharingMode"
                            )
                            self.policies.warn_for_external_sharing = value.get(
                                "warnForExternalSharing"
                            )
                            self.policies.warn_for_sharing_outside_allowlisted_domains = value.get(
                                "warnForSharingOutsideAllowlistedDomains"
                            )
                            self.policies.allow_publishing_files = value.get(
                                "allowPublishingFiles"
                            )
                            self.policies.access_checker_suggestions = value.get(
                                "accessCheckerSuggestions"
                            )
                            self.policies.allowed_parties_for_distributing_content = (
                                value.get("allowedPartiesForDistributingContent")
                            )
                            logger.debug(
                                "Drive external sharing settings fetched: "
                                f"mode={self.policies.external_sharing_mode}, "
                                f"warn={self.policies.warn_for_external_sharing}, "
                                f"publish={self.policies.allow_publishing_files}"
                            )

                        elif setting_type == "drive_and_docs.shared_drive_creation":
                            self.policies.allow_shared_drive_creation = value.get(
                                "allowSharedDriveCreation"
                            )
                            self.policies.allow_managers_to_override_settings = (
                                value.get("allowManagersToOverrideSettings")
                            )
                            self.policies.allow_non_member_access = value.get(
                                "allowNonMemberAccess"
                            )
                            self.policies.allowed_parties_for_download_print_copy = (
                                value.get("allowedPartiesForDownloadPrintCopy")
                            )
                            logger.debug(
                                "Drive shared drive creation settings fetched: "
                                f"creation={self.policies.allow_shared_drive_creation}, "
                                f"managers_override={self.policies.allow_managers_to_override_settings}"
                            )

                        elif setting_type == "drive_and_docs.drive_for_desktop":
                            self.policies.allow_drive_for_desktop = value.get(
                                "allowDriveForDesktop"
                            )
                            logger.debug(
                                "Drive for desktop setting fetched: "
                                f"{self.policies.allow_drive_for_desktop}"
                            )

                    request = service.policies().list_next(request, response)

                except Exception as error:
                    self._handle_api_error(
                        error,
                        "fetching Drive and Docs policies",
                        self.provider.identity.customer_id,
                    )
                    fetch_succeeded = False
                    break

            self.policies_fetched = fetch_succeeded

            logger.info(
                f"Drive and Docs policies fetched - "
                f"External sharing mode: {self.policies.external_sharing_mode}, "
                f"Shared drive creation: {self.policies.allow_shared_drive_creation}, "
                f"Drive for desktop: {self.policies.allow_drive_for_desktop}"
            )

        except Exception as error:
            self._handle_api_error(
                error,
                "fetching Drive and Docs policies",
                self.provider.identity.customer_id,
            )
            self.policies_fetched = False


class DrivePolicies(BaseModel):
    """Model for domain-level Drive and Docs policy settings."""

    # drive_and_docs.external_sharing
    external_sharing_mode: Optional[str] = None
    warn_for_external_sharing: Optional[bool] = None
    warn_for_sharing_outside_allowlisted_domains: Optional[bool] = None
    allow_publishing_files: Optional[bool] = None
    access_checker_suggestions: Optional[str] = None
    allowed_parties_for_distributing_content: Optional[str] = None

    # drive_and_docs.shared_drive_creation
    allow_shared_drive_creation: Optional[bool] = None
    allow_managers_to_override_settings: Optional[bool] = None
    allow_non_member_access: Optional[bool] = None
    allowed_parties_for_download_print_copy: Optional[str] = None

    # drive_and_docs.drive_for_desktop
    allow_drive_for_desktop: Optional[bool] = None

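For reference, a sketch of the `policies.list()` response page the parser above walks. The setting `type` strings and `value` keys match what the code reads; the policy names, customer ID, OU ID, and concrete values are invented for illustration:

```python
# Illustrative Cloud Identity Policy API response page.
response = {
    "policies": [
        {
            "name": "policies/example-customer-policy",
            # No group/orgUnit targeting -> treated as customer level.
            "policyQuery": {"customer": "customers/C0example"},
            "setting": {
                "type": "settings/drive_and_docs.external_sharing",
                "value": {
                    "externalSharingMode": "ALLOWED",
                    "warnForExternalSharing": True,
                    "allowPublishingFiles": False,
                    "accessCheckerSuggestions": "RECIPIENTS_ONLY",
                },
            },
        },
        {
            "name": "policies/example-ou-policy",
            # Scoped to an OU, so _is_customer_level_policy skips it.
            "policyQuery": {"orgUnit": "orgUnits/03ph8a2zexample"},
            "setting": {
                "type": "settings/drive_and_docs.drive_for_desktop",
                "value": {"allowDriveForDesktop": True},
            },
        },
    ]
}
```
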
@@ -0,0 +1,39 @@
{
  "Provider": "googleworkspace",
  "CheckID": "drive_shared_drive_creation_allowed",
  "CheckTitle": "Users are allowed to create new shared drives",
  "CheckType": [],
  "ServiceName": "drive",
  "SubServiceName": "",
  "ResourceIdTemplate": "",
  "Severity": "medium",
  "ResourceType": "NotDefined",
  "ResourceGroup": "collaboration",
  "Description": "The domain-wide default **allows users to create new shared drives**. Shared drives are owned by the organization (not the individual user), so content stored in them survives the deletion of the original creator's account, supporting data continuity and reducing the risk of accidental data loss.",
  "Risk": "When users cannot create shared drives, they store collaborative content in their personal **My Drive** instead. When that user account is deleted, the data is also deleted, leading to **unintentional data loss** of organizationally significant information. Allowing shared drive creation makes data survivable across account lifecycle events.",
  "RelatedUrl": "",
  "AdditionalURLs": [
    "https://support.google.com/a/answer/7212025",
    "https://cloud.google.com/identity/docs/concepts/supported-policy-api-settings"
  ],
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "",
      "Other": "1. Sign in to the Google **Admin console** at https://admin.google.com\n2. Navigate to **Apps** > **Google Workspace** > **Drive and Docs**\n3. Click **Sharing settings** > **Shared drive creation**\n4. **Uncheck** *Prevent users in <Company> from creating new shared drives*\n5. Click **Save**",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Allow users to create new shared drives. This protects the organization from data loss when user accounts are deleted by ensuring collaborative content lives in organization-owned shared drives instead of personal My Drive folders.",
      "Url": "https://hub.prowler.com/check/drive_shared_drive_creation_allowed"
    }
  },
  "Categories": [],
  "DependsOn": [],
  "RelatedTo": [
    "drive_shared_drive_managers_cannot_override",
    "drive_shared_drive_members_only_access",
    "drive_shared_drive_disable_download_print_copy"
  ],
  "Notes": ""
}

@@ -0,0 +1,57 @@
from typing import List

from prowler.lib.check.models import Check, CheckReportGoogleWorkspace
from prowler.providers.googleworkspace.services.drive.drive_client import drive_client


class drive_shared_drive_creation_allowed(Check):
    """Check that users are allowed to create new shared drives

    This check verifies that the domain-level Drive and Docs policy permits
    users to create new shared drives. Allowing shared drive creation helps
    prevent data loss when individual user accounts are deleted, since
    content lives in shared drives owned by the organization rather than
    in personal My Drive folders.
    """

    def execute(self) -> List[CheckReportGoogleWorkspace]:
        findings = []

        if drive_client.policies_fetched:
            report = CheckReportGoogleWorkspace(
                metadata=self.metadata(),
                resource=drive_client.provider.identity,
                resource_name=drive_client.provider.identity.domain,
                resource_id=drive_client.provider.identity.customer_id,
                customer_id=drive_client.provider.identity.customer_id,
                location="global",
            )

            allow_creation = drive_client.policies.allow_shared_drive_creation

            if allow_creation is True:
                report.status = "PASS"
                report.status_extended = (
                    f"Users in domain {drive_client.provider.identity.domain} "
                    f"are allowed to create new shared drives."
                )
            else:
                report.status = "FAIL"
                if allow_creation is None:
                    report.status_extended = (
                        f"Shared drive creation is not explicitly configured in "
                        f"domain {drive_client.provider.identity.domain}. "
                        f"Users should be allowed to create new shared drives to avoid "
                        f"data loss when accounts are deleted."
                    )
                else:
                    report.status_extended = (
                        f"Users in domain {drive_client.provider.identity.domain} "
                        f"are prevented from creating new shared drives. "
                        f"Users should be allowed to create new shared drives to avoid "
                        f"data loss when accounts are deleted."
                    )

            findings.append(report)

        return findings

@@ -0,0 +1,39 @@
{
  "Provider": "googleworkspace",
  "CheckID": "drive_shared_drive_disable_download_print_copy",
  "CheckTitle": "Viewers and commenters cannot download, print, or copy shared drive files",
  "CheckType": [],
  "ServiceName": "drive",
  "SubServiceName": "",
  "ResourceIdTemplate": "",
  "Severity": "medium",
  "ResourceType": "NotDefined",
  "ResourceGroup": "collaboration",
  "Description": "The domain-wide default prevents viewers and commenters of files stored in shared drives from **downloading, printing, or copying** the file contents. They can only read and comment on the existing content, preventing bulk extraction of sensitive material from shared drives.",
  "Risk": "When viewers and commenters can download, print, or copy shared drive files, they can **bulk-extract sensitive content** — including intellectual property, personally identifiable information, and confidential business documents — using nothing more than read access. This is one of the most direct paths to data exfiltration through Drive.",
  "RelatedUrl": "",
  "AdditionalURLs": [
    "https://support.google.com/a/answer/7662202",
    "https://cloud.google.com/identity/docs/concepts/supported-policy-api-settings"
  ],
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "",
      "Other": "1. Sign in to the Google **Admin console** at https://admin.google.com\n2. Navigate to **Apps** > **Google Workspace** > **Drive and Docs**\n3. Click **Sharing settings** > **Shared drive creation**\n4. **Uncheck** *Allow viewers and commenters to download, print, and copy files*\n5. Click **Save**",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Restrict download, print, and copy actions in shared drives to editors or managers only. This prevents bulk data exfiltration by users who only need read or comment access to the underlying content.",
      "Url": "https://hub.prowler.com/check/drive_shared_drive_disable_download_print_copy"
    }
  },
  "Categories": [],
  "DependsOn": [],
  "RelatedTo": [
    "drive_shared_drive_creation_allowed",
    "drive_shared_drive_managers_cannot_override",
    "drive_shared_drive_members_only_access"
  ],
  "Notes": ""
}

@@ -0,0 +1,56 @@
from typing import List

from prowler.lib.check.models import Check, CheckReportGoogleWorkspace
from prowler.providers.googleworkspace.services.drive.drive_client import drive_client


class drive_shared_drive_disable_download_print_copy(Check):
    """Check that download/print/copy is disabled for viewers and commenters

    This check verifies that the domain-level Drive and Docs policy prevents
    viewers and commenters of shared drive files from downloading, printing,
    or copying their contents — limiting them to read and comment actions
    only and reducing the risk of bulk data exfiltration.
    """

    def execute(self) -> List[CheckReportGoogleWorkspace]:
        findings = []

        if drive_client.policies_fetched:
            report = CheckReportGoogleWorkspace(
                metadata=self.metadata(),
                resource=drive_client.provider.identity,
                resource_name=drive_client.provider.identity.domain,
                resource_id=drive_client.provider.identity.customer_id,
                customer_id=drive_client.provider.identity.customer_id,
                location="global",
            )

            allowed = drive_client.policies.allowed_parties_for_download_print_copy

            if allowed in ("EDITORS_ONLY", "MANAGERS_ONLY"):
                report.status = "PASS"
                report.status_extended = (
                    f"Download, print, and copy in shared drives in domain "
                    f"{drive_client.provider.identity.domain} is restricted to "
                    f"{allowed}."
                )
            else:
                report.status = "FAIL"
                if allowed is None:
                    report.status_extended = (
                        f"Download, print, and copy restrictions for shared drive "
                        f"viewers and commenters are not explicitly configured in "
                        f"domain {drive_client.provider.identity.domain}. "
                        f"These actions should be restricted to editors or managers only."
                    )
                else:
                    report.status_extended = (
                        f"Download, print, and copy in shared drives in domain "
                        f"{drive_client.provider.identity.domain} is set to {allowed}. "
                        f"These actions should be restricted to editors or managers only."
                    )

            findings.append(report)

        return findings

@@ -0,0 +1,39 @@
{
  "Provider": "googleworkspace",
  "CheckID": "drive_shared_drive_managers_cannot_override",
  "CheckTitle": "Shared drive managers cannot override shared drive settings",
  "CheckType": [],
  "ServiceName": "drive",
  "SubServiceName": "",
  "ResourceIdTemplate": "",
  "Severity": "medium",
  "ResourceType": "NotDefined",
  "ResourceGroup": "collaboration",
  "Description": "The domain-wide default prevents members with **manager access** to a shared drive from overriding the shared drive settings established by administrators. This ensures that security controls — such as external access, member-only access, and download restrictions — cannot be relaxed at the individual shared drive level.",
  "Risk": "If shared drive managers can override organizational defaults, **unauthorized data exposure** can occur when a manager intentionally or accidentally weakens a shared drive's security posture (for example, allowing external members or enabling download for viewers).",
  "RelatedUrl": "",
  "AdditionalURLs": [
    "https://support.google.com/a/answer/7662202",
    "https://cloud.google.com/identity/docs/concepts/supported-policy-api-settings"
  ],
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "",
      "Other": "1. Sign in to the Google **Admin console** at https://admin.google.com\n2. Navigate to **Apps** > **Google Workspace** > **Drive and Docs**\n3. Click **Sharing settings** > **Shared drive creation**\n4. **Uncheck** *Allow members with manager access to override the settings below*\n5. Click **Save**",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Prevent shared drive managers from overriding organizationally established shared drive settings. This ensures that security controls remain consistent across all shared drives and cannot be relaxed by non-administrators.",
      "Url": "https://hub.prowler.com/check/drive_shared_drive_managers_cannot_override"
    }
  },
  "Categories": [],
  "DependsOn": [],
  "RelatedTo": [
    "drive_shared_drive_creation_allowed",
    "drive_shared_drive_members_only_access",
    "drive_shared_drive_disable_download_print_copy"
  ],
  "Notes": ""
}


@@ -0,0 +1,57 @@
from typing import List

from prowler.lib.check.models import Check, CheckReportGoogleWorkspace
from prowler.providers.googleworkspace.services.drive.drive_client import drive_client


class drive_shared_drive_managers_cannot_override(Check):
    """Check that shared drive managers cannot override shared drive settings

    This check verifies that the domain-level Drive and Docs policy prevents
    members with manager access from overriding the shared drive settings
    configured by administrators, ensuring that security controls cannot be
    relaxed at the shared drive level.
    """

    def execute(self) -> List[CheckReportGoogleWorkspace]:
        findings = []

        if drive_client.policies_fetched:
            report = CheckReportGoogleWorkspace(
                metadata=self.metadata(),
                resource=drive_client.provider.identity,
                resource_name=drive_client.provider.identity.domain,
                resource_id=drive_client.provider.identity.customer_id,
                customer_id=drive_client.provider.identity.customer_id,
                location="global",
            )

            allow_override = drive_client.policies.allow_managers_to_override_settings

            if allow_override is False:
                report.status = "PASS"
                report.status_extended = (
                    f"Shared drive managers in domain "
                    f"{drive_client.provider.identity.domain} cannot override "
                    f"shared drive settings."
                )
            else:
                report.status = "FAIL"
                if allow_override is None:
                    report.status_extended = (
                        f"Manager override of shared drive settings is not "
                        f"explicitly configured in domain "
                        f"{drive_client.provider.identity.domain}. "
                        f"Managers should not be allowed to override shared drive settings."
                    )
                else:
                    report.status_extended = (
                        f"Shared drive managers in domain "
                        f"{drive_client.provider.identity.domain} are allowed to "
                        f"override shared drive settings. "
                        f"Managers should not be allowed to override shared drive settings."
                    )

            findings.append(report)

        return findings
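The branching above distinguishes False (compliant), True (non-compliant), and None (no value returned by the Policy API), so the model consumed through `drive_client.policies` has to preserve all three states. A hedged sketch of that contract; the real model added by this PR lives in the drive service module and may differ, but the field names below are the ones the four checks in this diff read:

# Hedged sketch of the policy model contract the Drive checks rely on.
# Every field is Optional: None means the Policy API returned no value,
# which each check reports as FAIL with a "not explicitly configured"
# message rather than treating the missing policy as compliant.
from typing import Optional

from pydantic import BaseModel


class DrivePolicies(BaseModel):
    allow_managers_to_override_settings: Optional[bool] = None
    allow_non_member_access: Optional[bool] = None
    external_sharing_mode: Optional[str] = None
    warn_for_sharing_outside_allowlisted_domains: Optional[bool] = None

Modeling the unset state explicitly is what lets each check fail closed instead of silently passing on a tenant whose policy was never configured.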

@@ -0,0 +1,39 @@
{
  "Provider": "googleworkspace",
  "CheckID": "drive_shared_drive_members_only_access",
  "CheckTitle": "Shared drive file access is restricted to members only",
  "CheckType": [],
  "ServiceName": "drive",
  "SubServiceName": "",
  "ResourceIdTemplate": "",
  "Severity": "medium",
  "ResourceType": "NotDefined",
  "ResourceGroup": "collaboration",
  "Description": "The domain-wide default restricts shared drive file access to **explicit members** only. Non-members cannot be added to individual files inside the drive, preserving the shared drive's membership boundary as the authoritative access control surface.",
  "Risk": "If non-members can be added to files inside a shared drive, the **drive's membership becomes meaningless** as a security control. Sensitive content scoped to a specific team can be silently extended to users who were never granted access to the drive itself, leading to unintended information disclosure.",
  "RelatedUrl": "",
  "AdditionalURLs": [
    "https://support.google.com/a/answer/7662202",
    "https://cloud.google.com/identity/docs/concepts/supported-policy-api-settings"
  ],
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "",
      "Other": "1. Sign in to the Google **Admin console** at https://admin.google.com\n2. Navigate to **Apps** > **Google Workspace** > **Drive and Docs**\n3. Click **Sharing settings** > **Shared drive creation**\n4. **Uncheck** *Allow people who aren't shared drive members to be added to files*\n5. Click **Save**",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Restrict shared drive file access to explicit shared drive members. This preserves the drive membership as the authoritative access boundary and prevents silent expansion of access to non-members.",
      "Url": "https://hub.prowler.com/check/drive_shared_drive_members_only_access"
    }
  },
  "Categories": [],
  "DependsOn": [],
  "RelatedTo": [
    "drive_shared_drive_creation_allowed",
    "drive_shared_drive_managers_cannot_override",
    "drive_shared_drive_disable_download_print_copy"
  ],
  "Notes": ""
}
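The domain default above can also be mirrored drive by drive: the Drive v3 `drives.update` method exposes a `restrictions.driveMembersOnly` flag. A hedged remediation sketch (credentials, admin address, and drive ID are placeholders) that hardens a single shared drive in addition to, not instead of, the domain policy this check audits:

# Hedged sketch: enforce members-only file access on one shared drive with
# the Drive v3 API. Complements the domain-wide default; placeholders below.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder key file
    scopes=["https://www.googleapis.com/auth/drive"],
).with_subject("admin@example.com")  # placeholder super admin

drive = build("drive", "v3", credentials=creds)

drive.drives().update(
    driveId="0ABCdefGHIjklMN",  # placeholder shared drive ID
    useDomainAdminAccess=True,
    body={"restrictions": {"driveMembersOnly": True}},
).execute()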

@@ -0,0 +1,56 @@
from typing import List

from prowler.lib.check.models import Check, CheckReportGoogleWorkspace
from prowler.providers.googleworkspace.services.drive.drive_client import drive_client


class drive_shared_drive_members_only_access(Check):
    """Check that shared drive file access is restricted to members only

    This check verifies that the domain-level Drive and Docs policy prevents
    people who are not shared drive members from being added to files within
    a shared drive, restricting file access to that drive's explicit
    membership.
    """

    def execute(self) -> List[CheckReportGoogleWorkspace]:
        findings = []

        if drive_client.policies_fetched:
            report = CheckReportGoogleWorkspace(
                metadata=self.metadata(),
                resource=drive_client.provider.identity,
                resource_name=drive_client.provider.identity.domain,
                resource_id=drive_client.provider.identity.customer_id,
                customer_id=drive_client.provider.identity.customer_id,
                location="global",
            )

            allow_non_member = drive_client.policies.allow_non_member_access

            if allow_non_member is False:
                report.status = "PASS"
                report.status_extended = (
                    f"Shared drive file access in domain "
                    f"{drive_client.provider.identity.domain} is restricted to "
                    f"shared drive members only."
                )
            else:
                report.status = "FAIL"
                if allow_non_member is None:
                    report.status_extended = (
                        f"Shared drive non-member access is not explicitly "
                        f"configured in domain {drive_client.provider.identity.domain}. "
                        f"Shared drive file access should be restricted to members only."
                    )
                else:
                    report.status_extended = (
                        f"Shared drive file access in domain "
                        f"{drive_client.provider.identity.domain} allows non-members "
                        f"to be added to files. "
                        f"Shared drive file access should be restricted to members only."
                    )

            findings.append(report)

        return findings

@@ -0,0 +1,41 @@
{
  "Provider": "googleworkspace",
  "CheckID": "drive_sharing_allowlisted_domains",
  "CheckTitle": "Document sharing is restricted to allowlisted domains",
  "CheckType": [],
  "ServiceName": "drive",
  "SubServiceName": "",
  "ResourceIdTemplate": "",
  "Severity": "medium",
  "ResourceType": "NotDefined",
  "ResourceGroup": "collaboration",
  "Description": "The domain-wide default restricts external sharing of Drive and Docs files to **a curated list of allowlisted domains**, rather than allowing sharing with arbitrary external recipients. This converts external sharing from an open default into a controlled allow-list that aligns with documented business relationships.",
  "Risk": "When external sharing is unrestricted, users can share organizational content with **any external Google account**, including untrusted or unknown parties. Restricting sharing to allowlisted domains drastically reduces the surface area for accidental and malicious data exfiltration through Drive.",
  "RelatedUrl": "",
  "AdditionalURLs": [
    "https://support.google.com/a/answer/60781",
    "https://cloud.google.com/identity/docs/concepts/supported-policy-api-settings"
  ],
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "",
      "Other": "1. Sign in to the Google **Admin console** at https://admin.google.com\n2. Navigate to **Apps** > **Google Workspace** > **Drive and Docs**\n3. Click **Sharing settings** > **Sharing options**\n4. Under **Sharing outside of <Company>**, select **ALLOWLISTED DOMAINS - Files owned by users in <Company> can be shared with Google Accounts in compatible allowlisted domains**\n5. Configure the allowlisted domains list as appropriate for your organization\n6. Click **Save**",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Restrict Drive and Docs external sharing to allowlisted domains. This converts external sharing from an open default into a controlled allow-list aligned with documented business relationships and reduces the risk of accidental or malicious data exposure.",
      "Url": "https://hub.prowler.com/check/drive_sharing_allowlisted_domains"
    }
  },
  "Categories": [
    "internet-exposed"
  ],
  "DependsOn": [],
  "RelatedTo": [
    "drive_external_sharing_warn_users",
    "drive_warn_sharing_with_allowlisted_domains",
    "drive_publishing_files_disabled"
  ],
  "Notes": ""
}

@@ -0,0 +1,54 @@
from typing import List

from prowler.lib.check.models import Check, CheckReportGoogleWorkspace
from prowler.providers.googleworkspace.services.drive.drive_client import drive_client


class drive_sharing_allowlisted_domains(Check):
    """Check that document sharing is restricted to allowlisted domains

    This check verifies that the domain-level Drive and Docs policy restricts
    external sharing to a list of explicitly allowlisted domains, blocking
    sharing with arbitrary external recipients.
    """

    def execute(self) -> List[CheckReportGoogleWorkspace]:
        findings = []

        if drive_client.policies_fetched:
            report = CheckReportGoogleWorkspace(
                metadata=self.metadata(),
                resource=drive_client.provider.identity,
                resource_name=drive_client.provider.identity.domain,
                resource_id=drive_client.provider.identity.customer_id,
                customer_id=drive_client.provider.identity.customer_id,
                location="global",
            )

            mode = drive_client.policies.external_sharing_mode

            if mode == "ALLOWLISTED_DOMAINS":
                report.status = "PASS"
                report.status_extended = (
                    f"Drive and Docs external sharing in domain "
                    f"{drive_client.provider.identity.domain} is restricted to "
                    f"allowlisted domains."
                )
            else:
                report.status = "FAIL"
                if mode is None:
                    report.status_extended = (
                        f"Drive and Docs external sharing mode is not explicitly "
                        f"configured in domain {drive_client.provider.identity.domain}. "
                        f"Sharing should be restricted to allowlisted domains."
                    )
                else:
                    report.status_extended = (
                        f"Drive and Docs external sharing in domain "
                        f"{drive_client.provider.identity.domain} is set to {mode}. "
                        f"Sharing should be restricted to allowlisted domains."
                    )

            findings.append(report)

        return findings
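Exercising the mode comparison above does not need a live tenant: patching the module-level `drive_client` is enough. A hedged pytest-style sketch; the module path follows Prowler's usual `<provider>/services/<service>/<check_id>/<check_id>.py` layout, and the MagicMock identity shape is an assumption (real tests may build typed fixtures instead):

# Hedged test sketch for drive_sharing_allowlisted_domains; the paths and
# mocked identity shape are assumptions, not taken from this PR's test suite.
from unittest import mock

CHECK_MODULE = (
    "prowler.providers.googleworkspace.services.drive."
    "drive_sharing_allowlisted_domains.drive_sharing_allowlisted_domains"
)


def test_allowlisted_domains_mode_passes():
    drive_client_mock = mock.MagicMock()
    drive_client_mock.policies_fetched = True
    drive_client_mock.policies.external_sharing_mode = "ALLOWLISTED_DOMAINS"
    drive_client_mock.provider.identity.domain = "example.com"  # placeholder
    drive_client_mock.provider.identity.customer_id = "C0123456"  # placeholder

    with mock.patch(f"{CHECK_MODULE}.drive_client", new=drive_client_mock):
        from prowler.providers.googleworkspace.services.drive.drive_sharing_allowlisted_domains.drive_sharing_allowlisted_domains import (
            drive_sharing_allowlisted_domains,
        )

        findings = drive_sharing_allowlisted_domains().execute()

    assert len(findings) == 1
    assert findings[0].status == "PASS"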

@@ -0,0 +1,40 @@
{
  "Provider": "googleworkspace",
  "CheckID": "drive_warn_sharing_with_allowlisted_domains",
  "CheckTitle": "Users are warned when sharing files with allowlisted domains",
  "CheckType": [],
  "ServiceName": "drive",
  "SubServiceName": "",
  "ResourceIdTemplate": "",
  "Severity": "medium",
  "ResourceType": "NotDefined",
  "ResourceGroup": "collaboration",
  "Description": "The domain-wide default makes Google Drive **warn users** before they share a file with a user in an allowlisted domain, even when external sharing is restricted to those domains. This second-step prompt helps users recognize that they are crossing the organizational boundary, even within permitted destinations.",
  "Risk": "Allowlisted domains are still external. Users may not realize that even an allowlisted recipient is outside the organization, leading to **unintentional disclosure of sensitive content** to legitimate but external collaborators. A warning prompt at share time mitigates that without preventing the sharing itself.",
  "RelatedUrl": "",
  "AdditionalURLs": [
    "https://support.google.com/a/answer/60781",
    "https://cloud.google.com/identity/docs/concepts/supported-policy-api-settings"
  ],
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "",
      "Other": "1. Sign in to the Google **Admin console** at https://admin.google.com\n2. Navigate to **Apps** > **Google Workspace** > **Drive and Docs**\n3. Click **Sharing settings** > **Sharing options**\n4. Under **Sharing outside of <Company>**, ensure **ALLOWLISTED DOMAINS** is selected\n5. Check **Warn when files owned by users or shared drives in <Company> are shared with users in allowlisted domains**\n6. Click **Save**",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Enable warnings for sharing with allowlisted domains so users are reminded that they are sharing externally, even when the destination is permitted. This preserves the convenience of allowlisted sharing while reducing accidental disclosure.",
      "Url": "https://hub.prowler.com/check/drive_warn_sharing_with_allowlisted_domains"
    }
  },
  "Categories": [
    "internet-exposed"
  ],
  "DependsOn": [],
  "RelatedTo": [
    "drive_external_sharing_warn_users",
    "drive_sharing_allowlisted_domains"
  ],
  "Notes": "This check is meaningful only when external sharing is restricted to allowlisted domains. See the related check drive_sharing_allowlisted_domains."
}

@@ -0,0 +1,57 @@
from typing import List

from prowler.lib.check.models import Check, CheckReportGoogleWorkspace
from prowler.providers.googleworkspace.services.drive.drive_client import drive_client


class drive_warn_sharing_with_allowlisted_domains(Check):
    """Check that users are warned when sharing with allowlisted domains

    This check verifies that the domain-level Drive and Docs policy warns
    users when they share files with users in allowlisted domains, providing
    an opportunity to reconsider before sharing externally even within
    permitted domains.
    """

    def execute(self) -> List[CheckReportGoogleWorkspace]:
        findings = []

        if drive_client.policies_fetched:
            report = CheckReportGoogleWorkspace(
                metadata=self.metadata(),
                resource=drive_client.provider.identity,
                resource_name=drive_client.provider.identity.domain,
                resource_id=drive_client.provider.identity.customer_id,
                customer_id=drive_client.provider.identity.customer_id,
                location="global",
            )

            warn_enabled = (
                drive_client.policies.warn_for_sharing_outside_allowlisted_domains
            )

            if warn_enabled is True:
                report.status = "PASS"
                report.status_extended = (
                    f"Users are warned when sharing files with allowlisted "
                    f"domains in domain {drive_client.provider.identity.domain}."
                )
            else:
                report.status = "FAIL"
                if warn_enabled is None:
                    report.status_extended = (
                        f"Warning when sharing with allowlisted domains is not "
                        f"explicitly configured in domain "
                        f"{drive_client.provider.identity.domain}. "
                        f"Users should be warned when sharing files with users in allowlisted domains."
                    )
                else:
                    report.status_extended = (
                        f"Warning when sharing with allowlisted domains is disabled "
                        f"in domain {drive_client.provider.identity.domain}. "
                        f"Users should be warned when sharing files with users in allowlisted domains."
                    )

            findings.append(report)

        return findings
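All four checks in this diff repeat the same three-way branching, so the shape could be factored out. A hypothetical helper (not part of this PR, purely illustrative) that captures it:

# Hypothetical consolidation of the PASS / FAIL / not-configured branching
# repeated across the Drive checks above.
from typing import Optional, Tuple


def evaluate_policy(
    value: Optional[object],
    compliant: object,
    passed_msg: str,
    not_configured_msg: str,
    failed_msg: str,
) -> Tuple[str, str]:
    """Map a policy value to (status, status_extended)."""
    if value is None:
        # Unset policies fail closed with a distinct message.
        return "FAIL", not_configured_msg
    if value == compliant:
        return "PASS", passed_msg
    return "FAIL", failed_msg

For example, the managers-cannot-override check reduces to one call with `compliant=False` and its three messages; failing closed on `None` keeps an unconfigured tenant from silently passing.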

@@ -24,7 +24,7 @@
    },
    "Recommendation": {
      "Text": "Regularly audit API tokens and revoke any that have not been used within 90 days. Implement a token lifecycle management process that includes periodic reviews, automatic expiration dates, and documentation of each token's purpose and owner.",
      "Url": "https://hub.prowler.com/checks/vercel/authentication_no_stale_tokens"
      "Url": "https://hub.prowler.com/check/authentication_no_stale_tokens"
    }
  },
  "Categories": [

@@ -24,7 +24,7 @@
    },
    "Recommendation": {
      "Text": "Remove expired tokens and create new ones with appropriate expiration dates. Implement a token rotation schedule to ensure tokens are refreshed before they expire. Update all integrations and automation that depend on the replaced tokens.",
      "Url": "https://hub.prowler.com/checks/vercel/authentication_token_not_expired"
      "Url": "https://hub.prowler.com/check/authentication_token_not_expired"
    }
  },
  "Categories": [

@@ -24,7 +24,7 @@
    },
    "Recommendation": {
      "Text": "Configure the production branch to main or master and ensure all production deployments go through the standard merge workflow. Use branch protection rules in your Git provider to prevent direct pushes to the production branch.",
      "Url": "https://hub.prowler.com/checks/vercel/deployment_production_uses_stable_target"
      "Url": "https://hub.prowler.com/check/deployment_production_uses_stable_target"
    }
  },
  "Categories": [