Compare commits

..

58 Commits

Author SHA1 Message Date
Nacho Rivera
722554ad3f chore(mitre azure): add mapping to mitre for azure provider (#3857)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2024-04-30 17:34:10 +02:00
Sergio Garcia
484cf6f49d fix(metadata): remove semicolons from metadata texts (#3830) 2024-04-30 14:02:43 +02:00
tianzedavid
e4154ed4a2 chore: fix some comments (#3900) 2024-04-30 13:43:55 +02:00
Sergio Garcia
86cb9f5838 fix(vpc): solve AWS principal key error (#3903) 2024-04-30 13:29:58 +02:00
Sergio Garcia
1622d0aa35 fix(vpc): solve subnet route key error (#3902) 2024-04-30 13:09:31 +02:00
Sergio Garcia
b54ecb50bf fix(efs): check all public conditions (#3872) 2024-04-30 13:08:05 +02:00
dependabot[bot]
f16857fdf1 chore(deps): bump boto3 from 1.34.84 to 1.34.94 (#3894)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 12:50:07 +02:00
Rubén De la Torre Vico
ab109c935c docs(unit-testing): Add GCP services documentation (#3901) 2024-04-30 12:49:51 +02:00
dependabot[bot]
8e7e456431 chore(deps-dev): bump black from 24.4.0 to 24.4.2 (#3883)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 12:14:58 +02:00
dependabot[bot]
46114cd5f4 chore(deps-dev): bump moto from 5.0.5 to 5.0.6 (#3882)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 11:22:46 +02:00
dependabot[bot]
275e509c8d chore(deps): bump azure-mgmt-compute from 30.6.0 to 31.0.0 (#3880)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 10:37:48 +02:00
dependabot[bot]
12f135669f chore(deps-dev): bump coverage from 7.4.4 to 7.5.0 (#3879)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 10:11:56 +02:00
dependabot[bot]
f004df673d chore(deps-dev): bump pytest from 8.1.1 to 8.2.0 (#3878)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 09:46:18 +02:00
dependabot[bot]
3ed24b5d7a chore(deps-dev): bump pytest-xdist from 3.5.0 to 3.6.1 (#3877)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 09:07:12 +02:00
dependabot[bot]
77eade01a2 chore(deps): bump botocore from 1.34.89 to 1.34.94 (#3876)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 08:19:05 +02:00
dependabot[bot]
a2158983f7 chore(deps): bump trufflesecurity/trufflehog from 3.73.0 to 3.74.0 (#3874)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 07:50:48 +02:00
dependabot[bot]
c0d57c9498 chore(deps-dev): bump freezegun from 1.4.0 to 1.5.0 (#3875)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 07:49:41 +02:00
Sergio Garcia
35c8ea5e3f fix(aws): not show findings when AccessDenieds (#3803) 2024-04-29 17:42:44 +02:00
Sergio Garcia
b36152484d chore(docs): update BridgeCrew links in metadata to our local docs link (#3858)
Co-authored-by: puchy22 <rubendltv22@gmail.com>
2024-04-29 17:39:04 +02:00
Rubén De la Torre Vico
768ca3f0ce test(gcp): Add new services tests to GCP (#3796)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-04-29 12:24:44 +02:00
Kay Agahd
bedd05c075 fix(aws): Extend opensearch_service_domains_use_cognito_authentication_for_kibana with SAML (#3864) 2024-04-29 12:08:03 +02:00
Sergio Garcia
721f73fdbe chore(gcp): handle list projects API call errors (#3849)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2024-04-29 11:32:21 +02:00
Sergio Garcia
34c2128d88 chore(docs): solve some issues (#3868) 2024-04-29 10:19:37 +02:00
Pedro Martín
14de3acdaa docs(audit_info): update docs about audit info and new testing (#3831)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2024-04-29 09:40:18 +02:00
Matt Merchant
899b2f8eb6 chore(get_tagged_resources): Add return value type hint (#3860)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-04-26 15:23:16 +02:00
Nacho Rivera
27bb05fedc chore(regions_update): Changes in regions for AWS services. (#3862)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-04-26 11:57:32 +02:00
Pedro Martín
e1909b8ad9 fix(s3-integration): Store compliance outputs in their folder (#3859) 2024-04-26 08:22:36 +02:00
Pedro Martín
0ed7a247b6 fix(KeyError): handle CacheSubnetGroupName keyError (#3856) 2024-04-26 08:17:30 +02:00
Pedro Martín
ee46bf3809 feat(json-ocsf): Add new fields for py-ocsf 0.1.0 (#3853) 2024-04-25 12:47:28 +02:00
Nacho Rivera
469254094b chore(regions_update): Changes in regions for AWS services. (#3855)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-04-25 12:09:23 +02:00
Pedro Martín
acac3fc693 feat(ec2): Add 2 new checks + fixers related with EC2 service (#3827)
Co-authored-by: Sergio <sergio@prowler.com>
2024-04-24 11:43:19 +02:00
Nacho Rivera
022b7ef756 chore(regions_update): Changes in regions for AWS services. (#3848)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-04-24 11:29:26 +02:00
dependabot[bot]
69d4f55734 chore(deps): bump google-api-python-client from 2.125.0 to 2.127.0 (#3844)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-24 10:12:49 +02:00
dependabot[bot]
a0bff4b859 chore(deps): bump botocore from 1.34.84 to 1.34.89 (#3836)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-24 09:38:20 +02:00
Nacho Rivera
23df599a03 chore(regions_update): Changes in regions for AWS services. (#3842)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-04-23 17:48:34 +02:00
dependabot[bot]
c8d74ca350 chore(deps): bump azure-mgmt-containerservice from 29.1.0 to 30.0.0 (#3835)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-23 17:48:15 +02:00
dependabot[bot]
8d6ba43ad0 chore(deps): bump msgraph-sdk from 1.2.0 to 1.3.0 (#3834)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-23 08:29:03 +02:00
Nacho Rivera
44ca2f7a66 chore(regions_update): Changes in regions for AWS services. (#3826)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-04-22 12:48:42 +02:00
Pepe Fagoaga
ec0be1c7fe chore(check): global_provider is not needed here (#3828) 2024-04-22 12:05:41 +02:00
Pepe Fagoaga
fd732db91b fix(mutelist): Be called whatever the provider (#3811) 2024-04-22 11:16:21 +02:00
Pepe Fagoaga
67f45b7767 chore(release): 4.1.0 (#3817) 2024-04-22 09:40:37 +02:00
Nacho Rivera
396e6a1c36 chore(regions_update): Changes in regions for AWS services. (#3824)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-04-22 09:39:04 +02:00
Jakob Rieck
326c46defd fix(aws): Corrects privilege escalation vectors (#3823) 2024-04-19 13:42:51 +02:00
Jakob Rieck
7a1762be51 fix(aws): Include record names for dangling IPs (#3821)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2024-04-19 12:47:03 +02:00
Nacho Rivera
b466b476a3 chore(regions_update): Changes in regions for AWS services. (#3822)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-04-19 11:32:22 +02:00
Pepe Fagoaga
e4652d4339 fix(ocsf): Add resource details to data (#3819) 2024-04-19 08:35:26 +02:00
Pepe Fagoaga
f1e4cd3938 docs(ocsf): Add missing fields to the example (#3816) 2024-04-19 08:09:36 +02:00
dependabot[bot]
e192a98079 chore(deps): bump aiohttp from 3.9.3 to 3.9.4 (#3818)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-19 07:50:48 +02:00
Pedro Martín
833dc83922 fix(dashboard): fix error in windows for csvreader (#3806) 2024-04-18 15:27:20 +02:00
Pedro Martín
ab1751c595 fix(overview-table): change font in overview table (#3815) 2024-04-18 14:53:32 +02:00
Sergio Garcia
fff06f971e chore(vpc): improve public subnet logic (#3814) 2024-04-18 13:58:42 +02:00
Pepe Fagoaga
a138d2964e fix(execute_check): Handle ModuleNotFoundError (#3812) 2024-04-18 12:36:15 +02:00
Pedro Martín
e6d7965453 fix(network_azure): handle capitalized protocols in security group rules (#3808) 2024-04-18 08:11:29 +02:00
Sergio Garcia
ab714f0fc7 chore(fixer): add more fixers (#3772)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2024-04-18 08:09:03 +02:00
Sergio Garcia
465b0f6a16 fix(utils): import libraries when needed (#3805) 2024-04-17 16:35:04 +02:00
Pedro Martín
bd87351ea7 chore(aws): Add CloudTrail Threat Detection tests (#3804)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2024-04-17 14:01:39 +02:00
Sergio Garcia
d79ec44e4c chore(ec2): improve handling of ENIs (#3798) 2024-04-17 13:12:31 +02:00
Matt Merchant
a2f84a12ea docs(developer guide): fix broken link (#3799) 2024-04-17 10:56:35 +02:00
550 changed files with 10563 additions and 3929 deletions

View File

@@ -11,7 +11,7 @@ jobs:
with:
fetch-depth: 0
- name: TruffleHog OSS
uses: trufflesecurity/trufflehog@v3.73.0
uses: trufflesecurity/trufflehog@v3.74.0
with:
path: ./
base: ${{ github.event.repository.default_branch }}

View File

@@ -10,4 +10,4 @@
Want some swag as appreciation for your contribution?
# Prowler Developer Guide
https://docs.prowler.cloud/en/latest/tutorials/developer-guide/
https://docs.prowler.com/projects/prowler-open-source/en/latest/developer-guide/introduction/

View File

@@ -0,0 +1,23 @@
import warnings
from dashboard.common_methods import get_section_containers_format2
warnings.filterwarnings("ignore")
def get_table(data):
aux = data[
[
"REQUIREMENTS_ID",
"REQUIREMENTS_SUBTECHNIQUES",
"CHECKID",
"STATUS",
"REGION",
"ACCOUNTID",
"RESOURCEID",
]
].copy()
return get_section_containers_format2(
aux, "REQUIREMENTS_ID", "REQUIREMENTS_SUBTECHNIQUES"
)

View File

@@ -27,3 +27,6 @@ informational_color = "#3274d9"
# Folder output path
folder_path_overview = os.getcwd() + "/output"
folder_path_compliance = os.getcwd() + "/output/compliance"
# Encoding
encoding_format = "utf-8"
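The new `encoding_format` constant is consumed wherever the dashboard opens CSV files. A minimal sketch of the pattern, with `has_data_rows` as an illustrative helper name (not part of the codebase):

```python
import csv

encoding_format = "utf-8"  # the constant added to dashboard/config.py

def has_data_rows(path: str) -> bool:
    # Open with an explicit encoding so non-ASCII findings parse the same
    # way on every platform instead of relying on the locale default.
    with open(path, "r", newline="", encoding=encoding_format) as csvfile:
        reader = csv.reader(csvfile)
        num_rows = sum(1 for _ in reader)
    # Only files with at least one data row beyond the header are kept.
    return num_rows > 1
```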

View File

@@ -15,6 +15,7 @@ from dash.dependencies import Input, Output
# Config import
from dashboard.config import (
encoding_format,
fail_color,
folder_path_compliance,
info_color,
@@ -37,7 +38,7 @@ warnings.filterwarnings("ignore")
csv_files = []
for file in glob.glob(os.path.join(folder_path_compliance, "*.csv")):
with open(file, "r", newline="") as csvfile:
with open(file, "r", newline="", encoding=encoding_format) as csvfile:
reader = csv.reader(csvfile)
num_rows = sum(1 for row in reader)
if num_rows > 1:
@@ -266,6 +267,7 @@ def display_data(
# Rename the column SUBSCRIPTIONID to ACCOUNTID for Azure
if data.columns.str.contains("SUBSCRIPTIONID").any():
data.rename(columns={"SUBSCRIPTIONID": "ACCOUNTID"}, inplace=True)
data["REGION"] = "-"
# Handle v3 azure cis compliance
if data.columns.str.contains("SUBSCRIPTION").any():
data.rename(columns={"SUBSCRIPTION": "ACCOUNTID"}, inplace=True)
@@ -432,6 +434,12 @@ def display_data(
):
pie_2 = get_bar_graph(df, "REQUIREMENTS_ATTRIBUTES_SERVICE")
current_filter = "services"
elif (
"REQUIREMENTS_ID" in df.columns
and not df["REQUIREMENTS_ID"].isnull().values.any()
):
pie_2 = get_bar_graph(df, "REQUIREMENTS_ID")
current_filter = "techniques"
else:
fig = px.pie()
fig.update_layout(
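The `SUBSCRIPTIONID` handling shown in this hunk is plain pandas column normalization; a standalone sketch with invented sample data:

```python
import pandas as pd

# Azure compliance CSVs expose SUBSCRIPTIONID instead of ACCOUNTID and
# carry no region, so the dashboard renames the column and fills a
# placeholder REGION, roughly like this:
data = pd.DataFrame({"SUBSCRIPTIONID": ["sub-1"], "STATUS": ["PASS"]})

if data.columns.str.contains("SUBSCRIPTIONID").any():
    data.rename(columns={"SUBSCRIPTIONID": "ACCOUNTID"}, inplace=True)
    data["REGION"] = "-"
```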

View File

@@ -17,6 +17,7 @@ from dash.dependencies import Input, Output
# Config import
from dashboard.config import (
critical_color,
encoding_format,
fail_color,
folder_path_overview,
high_color,
@@ -50,7 +51,7 @@ warnings.filterwarnings("ignore")
csv_files = []
for file in glob.glob(os.path.join(folder_path_overview, "*.csv")):
with open(file, "r", newline="") as csvfile:
with open(file, "r", newline="", encoding=encoding_format) as csvfile:
reader = csv.reader(csvfile)
num_rows = sum(1 for row in reader)
if num_rows > 1:
@@ -221,7 +222,7 @@ else:
# Handle the case where there is location column
if "LOCATION" in data.columns:
data["REGION"] = data["LOCATION"]
# Hande the case where there is no region column
# Handle the case where there is no region column
if "REGION" not in data.columns:
data["REGION"] = "-"
# Handle the case where the region is null
@@ -684,7 +685,7 @@ def filter_data(
########################################################
"""Line PLOT 1"""
########################################################
# Formating date columns
# Formatting date columns
filtered_data_sp["TIMESTAMP_formatted"] = pd.to_datetime(
filtered_data_sp["TIMESTAMP"]
).dt.strftime("%Y-%m-%d")
@@ -936,7 +937,12 @@ def filter_data(
table = dash_table.DataTable(
data=table_data.to_dict("records"),
style_data={"whiteSpace": "normal", "height": "auto", "color": "black"},
style_data={
"whiteSpace": "normal",
"height": "auto",
"color": "black",
"fontFamily": "sans-serif",
},
columns=[
{
"name": "Check ID - Resource UID",
@@ -960,6 +966,7 @@ def filter_data(
"fontWeight": "bold",
"layout": "fixed",
"backgroundColor": "rgb(41,37,36)",
"fontFamily": "sans-serif",
},
page_size=table_row_values,
style_data_conditional=[
@@ -1063,7 +1070,11 @@ def filter_data(
style_as_list_view=True,
filter_action="native",
filter_options={"placeholder_text": "🔍"},
style_filter={"background-color": "#3e403f", "color": "white"},
style_filter={
"background-color": "#3e403f",
"color": "white",
"fontFamily": "sans-serif",
},
)
# Status Graphic
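The `TIMESTAMP_formatted` column used by the line plot is a standard pandas conversion; a standalone sketch with an invented timestamp:

```python
import pandas as pd

# Findings carry full ISO-8601 timestamps; the line plot only needs the day.
df = pd.DataFrame({"TIMESTAMP": ["2024-04-30 17:34:10+02:00"]})
df["TIMESTAMP_formatted"] = pd.to_datetime(df["TIMESTAMP"]).dt.strftime("%Y-%m-%d")
```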

View File

@@ -249,11 +249,11 @@ Each Prowler check has metadata associated which is stored at the same level of
# Code holds different methods to remediate the FAIL finding
"Code": {
# CLI holds the command in the provider native CLI to remediate it
"CLI": "https://docs.bridgecrew.io/docs/public_8#cli-command",
"CLI": "https://docs.prowler.com/checks/public_8#cli-command",
# NativeIaC holds the native IaC code to remediate it, use "https://docs.bridgecrew.io/docs"
"NativeIaC": "",
# Other holds the other commands, scripts or code to remediate it, use "https://www.trendmicro.com/cloudoneconformity"
"Other": "https://docs.bridgecrew.io/docs/public_8#aws-console",
"Other": "https://docs.prowler.com/checks/public_8#aws-console",
# Terraform holds the Terraform code to remediate it, use "https://docs.bridgecrew.io/docs"
"Terraform": ""
},

View File

@@ -58,12 +58,12 @@ from prowler.providers.<provider>.lib.service.service import ServiceParentClass
# Create a class for the Service
################## <Service>
class <Service>(ServiceParentClass):
def __init__(self, audit_info):
def __init__(self, provider):
# Call Service Parent Class __init__
# We use the __class__.__name__ to get it automatically
# from the Service Class name but you can pass a custom
# string if the provider's API service name is different
super().__init__(__class__.__name__, audit_info)
super().__init__(__class__.__name__, provider)
# Create an empty dictionary of items to be gathered,
# using the unique ID as the dictionary key
@@ -178,6 +178,8 @@ class <Service>(ServiceParentClass):
f"{<item>.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
```
???+note
To avoid false findings, when Prowler cannot retrieve the items because of an Access Denied or similar error, we set those items' value to `None`.
#### Service Models
@@ -223,10 +225,10 @@ Each Prowler service requires a service client to use the service in the checks.
The following is the `<new_service_name>_client.py` containing the initialization of the service's class we have just created so the service's checks can use them:
```python
from prowler.providers.<provider>.lib.audit_info.audit_info import audit_info
from prowler.providers.common.common import get_global_provider
from prowler.providers.<provider>.services.<new_service_name>.<new_service_name>_service import <Service>
<new_service_name>_client = <Service>(audit_info)
<new_service_name>_client = <Service>(get_global_provider())
```
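A hypothetical sketch of what the `get_global_provider` pattern enables: one provider object, set once at startup and shared by every service client, so the provider never has to be threaded through each import (names below are illustrative, not Prowler's actual implementation):

```python
# Module-level singleton standing in for the authenticated provider.
_global_provider = None

def set_global_provider(provider) -> None:
    global _global_provider
    _global_provider = provider

def get_global_provider():
    return _global_provider

class Service:
    """Stand-in for a provider service class (e.g. IAM, VPC)."""
    def __init__(self, provider):
        self.provider = provider

# At startup the real provider would be registered once; service client
# modules then build their clients from the shared instance.
set_global_provider({"account": "123456789012"})
service_client = Service(get_global_provider())
```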
## Permissions

View File

@@ -62,50 +62,6 @@ For the AWS provider we have ways to test a Prowler check based on the following
In the following section we are going to explain all of the above scenarios with examples. The main difference between these scenarios comes from whether the [Moto](https://github.com/getmoto/moto) library covers the AWS API calls made by the service. You can check the covered API calls [here](https://github.com/getmoto/moto/blob/master/IMPLEMENTATION_COVERAGE.md).
An important point for the AWS testing is that in each check we MUST have a unique `audit_info` which is the key object during the AWS execution to isolate the test execution.
Check the [Audit Info](./audit-info.md) section to get more details.
```python
# We need to import the AWS_Audit_Info and the Audit_Metadata
# to set the audit_info to call AWS APIs
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
AWS_ACCOUNT_NUMBER = "123456789012"
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audit_config=None,
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=["us-east-1", "eu-west-1"],
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
```
### Checks
For the AWS test examples we are going to use the tests for the `iam_password_policy_uppercase` check.
@@ -148,29 +104,29 @@ class Test_iam_password_policy_uppercase:
# policy we want to set to False the RequireUppercaseCharacters
iam_client.update_account_password_policy(RequireUppercaseCharacters=False)
# We set a mocked audit_info for AWS not to share the same audit state
# between checks
current_audit_info = self.set_mocked_audit_info()
# The aws_provider is mocked using set_mocked_aws_provider so it can be used as the return value of the get_global_provider function.
# This mocked provider is defined in the fixtures file.
aws_provider = set_mocked_aws_provider([AWS_REGION_US_EAST_1])
# The Prowler service import MUST be made within the decorated
# code not to make real API calls to the AWS service.
from prowler.providers.aws.services.iam.iam_service import IAM
# Prowler for AWS uses a shared object called `current_audit_info` where it stores
# the audit's state, credentials and configuration.
# Prowler for AWS uses a shared object called aws_provider where it stores
# the info related to the provider
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=current_audit_info,
"prowler.providers.common.common.get_global_provider",
return_value=aws_provider,
),
# We have to mock also the iam_client from the check to enforce that the iam_client used is the one
# created within this check because patch != import, and if you execute tests in parallel some objects
# can be already initialised hence the check won't be isolated
mock.patch(
"prowler.providers.aws.services.iam.iam_password_policy_uppercase.iam_password_policy_uppercase.iam_client",
new=IAM(current_audit_info),
new=IAM(aws_provider),
):
# We import the check within the two mocks not to initialise the iam_client with some shared information from
# the current_audit_info or the IAM service.
# the aws_provider or the IAM service.
from prowler.providers.aws.services.iam.iam_password_policy_uppercase.iam_password_policy_uppercase import (
iam_password_policy_uppercase,
)
@@ -235,9 +191,8 @@ class Test_iam_password_policy_uppercase:
expiration=True,
)
# We set a mocked audit_info for AWS not to share the same audit state
# between checks
current_audit_info = self.set_mocked_audit_info()
# We set a mocked aws_provider to unify providers; this isolates each test so it does not step on other tests' configuration
aws_provider = set_mocked_aws_provider([AWS_REGION_US_EAST_1])
# In this scenario we have to mock also the IAM service and the iam_client from the check to enforce
# that the iam_client used is the one created within this check because patch != import, and if you
# execute tests in parallel some objects can be already initialised hence the check won't be isolated.
# In this case we don't use the Moto decorator, we use the mocked IAM client for both objects
@@ -249,7 +204,7 @@ class Test_iam_password_policy_uppercase:
new=mocked_iam_client,
):
# We import the check within the two mocks not to initialise the iam_client with some shared information from
# the current_audit_info or the IAM service.
# the aws_provider or the IAM service.
from prowler.providers.aws.services.iam.iam_password_policy_uppercase.iam_password_policy_uppercase import (
iam_password_policy_uppercase,
)
@@ -333,19 +288,48 @@ Note that this does not use Moto, to keep it simple, but if you use any `moto`-d
#### Mocking more than one service
Since we are mocking the provider, it can be customized by setting multiple attributes on it:
```python
def set_mocked_aws_provider(
audited_regions: list[str] = [],
audited_account: str = AWS_ACCOUNT_NUMBER,
audited_account_arn: str = AWS_ACCOUNT_ARN,
audited_partition: str = AWS_COMMERCIAL_PARTITION,
expected_checks: list[str] = [],
profile_region: str = None,
audit_config: dict = {},
fixer_config: dict = {},
scan_unused_services: bool = True,
audit_session: session.Session = session.Session(
profile_name=None,
botocore_session=None,
),
original_session: session.Session = None,
enabled_regions: set = None,
arguments: Namespace = Namespace(),
create_default_organization: bool = True,
) -> AwsProvider:
```
If the test you are creating belongs to a check that uses more than one provider service, you should mock each of the services used. For example, the check `cloudtrail_logs_s3_bucket_access_logging_enabled` requires the CloudTrail and the S3 clients, so the service-mocking part of the test is as follows:
```python
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=mock_audit_info,
"prowler.providers.common.common.get_global_provider",
return_value=set_mocked_aws_provider(
[AWS_REGION_US_EAST_1, AWS_REGION_EU_WEST_1]
),
), mock.patch(
"prowler.providers.aws.services.cloudtrail.cloudtrail_logs_s3_bucket_access_logging_enabled.cloudtrail_logs_s3_bucket_access_logging_enabled.cloudtrail_client",
new=Cloudtrail(mock_audit_info),
new=Cloudtrail(
set_mocked_aws_provider([AWS_REGION_US_EAST_1, AWS_REGION_EU_WEST_1])
),
), mock.patch(
"prowler.providers.aws.services.cloudtrail.cloudtrail_logs_s3_bucket_access_logging_enabled.cloudtrail_logs_s3_bucket_access_logging_enabled.s3_client",
new=S3(mock_audit_info),
new=S3(
set_mocked_aws_provider([AWS_REGION_US_EAST_1, AWS_REGION_EU_WEST_1])
),
):
```
@@ -363,10 +347,10 @@ from prowler.providers.<provider>.services.<service>.<service>_client import <se
```
2. `<service>_client.py`:
```python
from prowler.providers.<provider>.lib.audit_info.audit_info import audit_info
from prowler.providers.common.common import get_global_provider
from prowler.providers.<provider>.services.<service>.<service>_service import <SERVICE>
<service>_client = <SERVICE>(audit_info)
<service>_client = <SERVICE>(mocked_provider)
```
Because of the above import path, it is not the same to patch the following objects: if you run a bunch of tests, either in parallel or not, some clients can already be instantiated by another check, and your test execution would then use another test's service instance:
@@ -384,19 +368,20 @@ A useful read about this topic can be found in the following article: https://st
Mocking a service client using the following code ...
Once the needed attributes are set for the mocked provider, you can use the mocked provider:
```python title="Mocking the service_client"
with mock.patch(
"prowler.providers.<provider>.lib.audit_info.audit_info.audit_info",
new=audit_info,
"prowler.providers.common.common.get_global_provider",
new=set_mocked_aws_provider([<region>]),
), mock.patch(
"prowler.providers.<provider>.services.<service>.<check>.<check>.<service>_client",
new=<SERVICE>(audit_info),
new=<SERVICE>(set_mocked_aws_provider([<region>])),
):
```
will cause the service to be initialised twice:
1. When the `<SERVICE>(audit_info)` is mocked out using `mock.patch` to have the object ready for the patching.
2. At the `<service>_client.py` when we are patching it since the `mock.patch` needs to go to that object an initialise it, hence the `<SERVICE>(audit_info)` will be called again.
1. When the `<SERVICE>(set_mocked_aws_provider([<region>]))` is mocked out using `mock.patch` to have the object ready for the patching.
2. At the `<service>_client.py`, when we are patching it, since `mock.patch` needs to go to that object and initialise it; hence the `<SERVICE>(set_mocked_aws_provider([<region>]))` will be called again.
Then, when we import `<service>_client.py` in `<check>.py`, since we are mocking where the object is used, Python will use the mocked one.
@@ -408,24 +393,24 @@ Mocking a service client using the following code ...
```python title="Mocking the service and the service_client"
with mock.patch(
"prowler.providers.<provider>.lib.audit_info.audit_info.audit_info",
new=audit_info,
"prowler.providers.common.common.get_global_provider",
new=set_mocked_aws_provider([<region>]),
), mock.patch(
"prowler.providers.<provider>.services.<service>.<SERVICE>",
new=<SERVICE>(audit_info),
new=<SERVICE>(set_mocked_aws_provider([<region>])),
) as service_client, mock.patch(
"prowler.providers.<provider>.services.<service>.<service>_client.<service>_client",
new=service_client,
):
```
will cause that the service will be initialised once, just when the `<SERVICE>(audit_info)` is mocked out using `mock.patch`.
will cause the service to be initialised only once, when the `set_mocked_aws_provider([<region>])` is mocked out using `mock.patch`.
Then, at the check level, when Python tries to import the client with `from prowler.providers.<provider>.services.<service>.<service>_client`, since it is already mocked out, the execution will continue using the `service_client` without getting into `<service>_client.py`.
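The "patch where the object is used" rule can be demonstrated with two tiny in-memory modules standing in for `<service>_client.py` and `<check>.py` (a sketch, not Prowler code):

```python
import sys
import types
from unittest import mock

# <service>_client.py equivalent: the module that defines the client.
service_client_mod = types.ModuleType("service_client_mod")
service_client_mod.client = "real-client"
sys.modules["service_client_mod"] = service_client_mod

# <check>.py equivalent: `from ... import client` binds its OWN name.
check_mod = types.ModuleType("check_mod")
exec("from service_client_mod import client", check_mod.__dict__)
sys.modules["check_mod"] = check_mod

# Patching the defining module does NOT touch the check's copy...
with mock.patch("service_client_mod.client", new="mocked"):
    seen_when_patching_definition = check_mod.client  # still "real-client"

# ...while patching where the object is used swaps what the check sees.
with mock.patch("check_mod.client", new="mocked"):
    seen_when_patching_usage = check_mod.client  # "mocked"
```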
### Services
For testing the AWS services we have to follow the same logic as with the AWS checks, we have to check if the AWS API calls made by the service are covered by Moto and we have to test the service `__init__` to verifiy that the information is being correctly retrieved.
For testing the AWS services we have to follow the same logic as with the AWS checks, we have to check if the AWS API calls made by the service are covered by Moto and we have to test the service `__init__` to verify that the information is being correctly retrieved.
The service tests could act as *Integration Tests* since we test how the service retrieves the information from the provider, but since Moto or the custom mock objects mock those calls, these tests fall into the *Unit Tests* category.
@@ -532,7 +517,113 @@ class Test_compute_project_os_login_enabled:
### Services
Coming soon ...
For testing Google Cloud services, we have to follow the same logic as with the Google Cloud checks. We still mock all API calls, but in this case every API call used to set up an attribute is defined in the [fixtures file](https://github.com/prowler-cloud/prowler/blob/master/tests/providers/gcp/gcp_fixtures.py), in the `mock_api_client` function. Remember that EVERY method of a service must be tested.
The following code shows a real example of a testing class, but it has more comments than usual for educational purposes.
```python title="BigQuery Service Test"
# We need to import unittest.mock.patch to allow us to patch some objects
# so shared objects are not reused between tests, keeping each test isolated
from unittest.mock import patch
# Import the class needed from the service file
from prowler.providers.gcp.services.bigquery.bigquery_service import BigQuery
# Necessary constants and functions from the fixtures file
from tests.providers.gcp.gcp_fixtures import (
GCP_PROJECT_ID,
mock_api_client,
mock_is_api_active,
set_mocked_gcp_provider,
)
class TestBigQueryService:
# Only method needed to test full service
def test_service(self):
# In this case we are mocking __is_api_active__ to ensure our mocked project is used
# and the whole client so that our mocked API calls are used
with patch(
"prowler.providers.gcp.lib.service.service.GCPService.__is_api_active__",
new=mock_is_api_active,
), patch(
"prowler.providers.gcp.lib.service.service.GCPService.__generate_client__",
new=mock_api_client,
):
# Instantiate an object of the class with the mocked provider
bigquery_client = BigQuery(
set_mocked_gcp_provider(project_ids=[GCP_PROJECT_ID])
)
# Check that all attributes of the tested class are set up according to the API calls mocked in the GCP fixtures file
assert bigquery_client.service == "bigquery"
assert bigquery_client.project_ids == [GCP_PROJECT_ID]
assert len(bigquery_client.datasets) == 2
assert bigquery_client.datasets[0].name == "unique_dataset1_name"
assert bigquery_client.datasets[0].id.__class__.__name__ == "str"
assert bigquery_client.datasets[0].region == "US"
assert bigquery_client.datasets[0].cmk_encryption
assert bigquery_client.datasets[0].public
assert bigquery_client.datasets[0].project_id == GCP_PROJECT_ID
assert bigquery_client.datasets[1].name == "unique_dataset2_name"
assert bigquery_client.datasets[1].id.__class__.__name__ == "str"
assert bigquery_client.datasets[1].region == "EU"
assert not bigquery_client.datasets[1].cmk_encryption
assert not bigquery_client.datasets[1].public
assert bigquery_client.datasets[1].project_id == GCP_PROJECT_ID
assert len(bigquery_client.tables) == 2
assert bigquery_client.tables[0].name == "unique_table1_name"
assert bigquery_client.tables[0].id.__class__.__name__ == "str"
assert bigquery_client.tables[0].region == "US"
assert bigquery_client.tables[0].cmk_encryption
assert bigquery_client.tables[0].project_id == GCP_PROJECT_ID
assert bigquery_client.tables[1].name == "unique_table2_name"
assert bigquery_client.tables[1].id.__class__.__name__ == "str"
assert bigquery_client.tables[1].region == "US"
assert not bigquery_client.tables[1].cmk_encryption
assert bigquery_client.tables[1].project_id == GCP_PROJECT_ID
```
Since it can be confusing where all these values come from, here is an example to make it clearer. First we need to find
the API call used to obtain the datasets. In this case, if we check the service, the call is
`self.client.datasets().list(projectId=project_id)`.
Now, in the fixtures file, we have to mock this call in our `MagicMock` client inside the `mock_api_client` function. The best way to mock it
is to follow the existing format: add one function that receives the client to be modified, named
`mock_api_<endpoint>_calls` (*endpoint* refers to the first attribute accessed after *client*).
In the BigQuery example the function is called `mock_api_dataset_calls`, and inside it we find the assignment
used by the `__get_datasets__` method of the BigQuery class:
```python
# Mocking datasets
dataset1_id = str(uuid4())
dataset2_id = str(uuid4())
client.datasets().list().execute.return_value = {
"datasets": [
{
"datasetReference": {
"datasetId": "unique_dataset1_name",
"projectId": GCP_PROJECT_ID,
},
"id": dataset1_id,
"location": "US",
},
{
"datasetReference": {
"datasetId": "unique_dataset2_name",
"projectId": GCP_PROJECT_ID,
},
"id": dataset2_id,
"location": "EU",
},
]
}
```
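To see why assigning to `client.datasets().list().execute.return_value` also covers the real call `self.client.datasets().list(projectId=project_id).execute()`, note that a `MagicMock` returns the same child mock for a call regardless of its arguments. A minimal, self-contained sketch (the project name here is illustrative, not part of Prowler):

```python
from unittest.mock import MagicMock

client = MagicMock()
# Configure the return value once, calling with no arguments...
client.datasets().list().execute.return_value = {"datasets": [{"id": "ds1"}]}

# ...and the service code's call, with arguments, resolves to the same
# child mock, because call arguments do not change which mock is returned.
result = client.datasets().list(projectId="my-project").execute()
# result == {"datasets": [{"id": "ds1"}]}
```

This is why the fixture functions can configure the chain once and have every call pattern in the service code hit the same mocked response.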
## Azure


@@ -313,7 +313,7 @@ prowler gcp --project-ids <Project ID 1> <Project ID 2> ... <Project ID N>
See more details about GCP Authentication in [Requirements](getting-started/requirements.md#google-cloud)
## Kubernetes
### Kubernetes
Prowler allows you to scan your Kubernetes Cluster either from within the cluster or from outside the cluster.


@@ -27,7 +27,7 @@ Those credentials must be associated to a user or role with proper permissions t
Prowler can use your custom AWS Profile with:
```console
prowler <provider> -p/--profile <profile_name>
prowler aws -p/--profile <profile_name>
```
## Multi-Factor Authentication


@@ -3,13 +3,13 @@
To save your report in an S3 bucket, use `-B`/`--output-bucket`.
```sh
prowler <provider> -B my-bucket
prowler aws -B my-bucket
```
If you want to use a custom folder and/or filename, use `-o`/`--output-directory` and/or `-F`/`--output-filename`.
```sh
prowler <provider> \
prowler aws \
-B my-bucket \
--output-directory test-folder \
--output-filename output-filename
@@ -18,8 +18,11 @@ prowler <provider> \
By default, Prowler sends the HTML, JSON and CSV output formats. If you want to send a custom output format, or just one of the defaults, specify it with the `-M`/`--output-modes` flag.
```sh
prowler <provider> -M csv -B my-bucket
prowler aws -M csv -B my-bucket
```
???+ note
In the case you do not want to use the assumed role credentials but the initial credentials to put the reports into the S3 bucket, use `-D`/`--output-bucket-no-assume` instead of `-B`/`--output-bucket`. Make sure that the used credentials have `s3:PutObject` permissions in the S3 path where the reports are going to be uploaded.
In the case you do not want to use the assumed role credentials but the initial credentials to put the reports into the S3 bucket, use `-D`/`--output-bucket-no-assume` instead of `-B`/`--output-bucket`.
???+ warning
Make sure that the used credentials have `s3:PutObject` permissions in the S3 path where the reports are going to be uploaded.


@@ -101,10 +101,10 @@ For some fixers, you can have configurable parameters depending on your use case
# Fixer configuration file
aws:
# ec2_ebs_default_encryption
# No configuration needed for this check
# No configuration needed for this check
# s3_account_level_public_access_blocks
# No configuration needed for this check
# No configuration needed for this check
# iam_password_policy_* checks:
iam_password_policy:
@@ -117,4 +117,36 @@ aws:
MaxPasswordAge: 90
PasswordReusePrevention: 24
HardExpiry: False
# accessanalyzer_enabled
accessanalyzer_enabled:
AnalyzerName: "DefaultAnalyzer"
AnalyzerType: "ACCOUNT_UNUSED_ACCESS"
# guardduty_is_enabled
# No configuration needed for this check
# securityhub_enabled
securityhub_enabled:
EnableDefaultStandards: True
# cloudtrail_multi_region_enabled
cloudtrail_multi_region_enabled:
TrailName: "DefaultTrail"
S3BucketName: "my-cloudtrail-bucket"
IsMultiRegionTrail: True
EnableLogFileValidation: True
# CloudWatchLogsLogGroupArn: "arn:aws:logs:us-east-1:123456789012:log-group:my-cloudtrail-log-group"
# CloudWatchLogsRoleArn: "arn:aws:iam::123456789012:role/my-cloudtrail-role"
# KmsKeyId: "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
# kms_cmk_rotation_enabled
# No configuration needed for this check
# ec2_ebs_snapshot_account_block_public_access
ec2_ebs_snapshot_account_block_public_access:
State: "block-all-sharing"
# ec2_instance_account_imdsv2_enabled
# No configuration needed for this check
```


@@ -103,10 +103,11 @@ The JSON-OCSF output format implements the [Detection Finding](https://schema.oc
```json
[{
"metadata": {
"event_code": "cloudtrail_multi_region_enabled",
"product": {
"name": "Prowler",
"vendor_name": "Prowler",
"version": "4.0.0"
"version": "4.1.0"
},
"version": "1.1.0"
},
@@ -123,7 +124,7 @@ The JSON-OCSF output format implements the [Detection Finding](https://schema.oc
"desc": "Ensure CloudTrail is enabled in all regions",
"product_uid": "prowler",
"title": "Ensure CloudTrail is enabled in all regions",
"uid": "prowler-aws-cloudtrail_multi_region_enabled-xxxxxxxx-ap-northeast-1-xxxxxxxx"
"uid": "prowler-aws-cloudtrail_multi_region_enabled-123456789012-ap-northeast-1-123456789012"
},
"resources": [
{
@@ -133,9 +134,12 @@ The JSON-OCSF output format implements the [Detection Finding](https://schema.oc
"name": "cloudtrail"
},
"labels": [],
"name": "xxxxxxxx",
"name": "123456789012",
"type": "AwsCloudTrailTrail",
"uid": "arn:aws:cloudtrail:ap-northeast-1:xxxxxxxx:trail"
"uid": "arn:aws:cloudtrail:ap-northeast-1:123456789012:trail",
"data": {
"details": ""
},
}
],
"category_name": "Findings",
@@ -144,10 +148,10 @@ The JSON-OCSF output format implements the [Detection Finding](https://schema.oc
"class_uid": 2004,
"cloud": {
"account": {
"name": "",
"name": "test-account",
"type": "AWS_Account",
"type_id": 10,
"uid": "xxxxxxxx"
"uid": "123456789012"
},
"org": {
"name": "",
@@ -165,7 +169,49 @@ The JSON-OCSF output format implements the [Detection Finding](https://schema.oc
]
},
"type_uid": 200401,
"type_name": "Create"
"type_name": "Create",
"unmapped": {
"check_type": "Software and Configuration Checks,Industry and Regulatory Standards,CIS AWS Foundations Benchmark",
"related_url": "",
"categories": "forensics-ready",
"depends_on": "",
"related_to": "",
"notes": "",
"compliance": {
"CISA": [
"your-systems-3",
"your-data-2"
],
"SOC2": [
"cc_2_1",
"cc_7_2",
"cc_a_1_2"
],
"CIS-1.4": [
"3.1"
],
"CIS-1.5": [
"3.1"
],
"GDPR": [
"article_25",
"article_30"
],
"AWS-Foundational-Security-Best-Practices": [
"cloudtrail"
],
"ISO27001-2013": [
"A.12.4"
],
"HIPAA": [
"164_308_a_1_ii_d",
"164_308_a_3_ii_a",
"164_308_a_6_ii",
"164_312_b",
"164_312_e_2_i"
],
}
},
}]
```
@@ -277,9 +323,9 @@ The following is the mapping between the native JSON and the Detection Finding f
| StatusExtended | status_detail |
| Severity | severity |
| ResourceType | resources.type |
| ResourceDetails | _Not mapped yet_ |
| ResourceDetails | resources.data.details |
| Description | finding_info.desc |
| Risk | risk_details _Available from OCSF 1.2_ |
| Risk | risk_details |
| RelatedUrl | unmapped.related_url |
| Remediation.Recommendation.Text | remediation.desc |
| Remediation.Recommendation.Url | remediation.references |
@@ -298,7 +344,7 @@ The following is the mapping between the native JSON and the Detection Finding f
| OrganizationsInfo.account_email | _Not mapped yet_ |
| OrganizationsInfo.account_arn | _Not mapped yet_ |
| OrganizationsInfo.account_org | cloud.org.name |
| OrganizationsInfo.account_tags | cloud.account.labels _Available from OCSF 1.2_ |
| OrganizationsInfo.account_tags | cloud.account.labels |
| Region | resources.region |
| ResourceId | resources.name |
| ResourceArn | resources.uid |

poetry.lock (generated): diff suppressed because it is too large


@@ -215,7 +215,7 @@ def prowler():
checks_to_execute,
global_provider,
custom_checks_metadata,
getattr(args, "mutelist_file", None),
global_provider.mutelist_file_path,
args.config_file,
)
else:
@@ -345,8 +345,10 @@ def prowler():
global_provider,
global_provider.output_options,
)
# Only display compliance table if there are findings and it is a default execution
if findings and default_execution:
# Only display compliance table if there are findings (not all MANUAL) and it is a default execution
if (
findings and not all(finding.status == "MANUAL" for finding in findings)
) and default_execution:
compliance_overview = False
if not compliance_framework:
compliance_framework = get_available_compliance_frameworks(provider)
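The new guard can be read in isolation: the compliance table is displayed only when there is at least one finding whose status is not `MANUAL`, and the run is a default execution. A minimal sketch of the predicate (the `Finding` stand-in and function name are illustrative):

```python
from collections import namedtuple

Finding = namedtuple("Finding", "status")

def should_display_compliance_table(findings, default_execution=True):
    # Skip the table when there are no findings, or every finding is MANUAL,
    # or this is not a default execution.
    return bool(
        findings
        and not all(finding.status == "MANUAL" for finding in findings)
        and default_execution
    )

assert not should_display_compliance_table([])
assert not should_display_compliance_table([Finding("MANUAL")])
assert should_display_compliance_table([Finding("PASS"), Finding("MANUAL")])
```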

File diff suppressed because it is too large


@@ -12,7 +12,7 @@ from prowler.providers.common.common import get_global_provider
timestamp = datetime.today()
timestamp_utc = datetime.now(timezone.utc).replace(tzinfo=timezone.utc)
prowler_version = "4.0.1"
prowler_version = "4.1.0"
square_logo_img = "https://user-images.githubusercontent.com/38561120/235905862-9ece5bd7-9aa3-4e48-807a-3a9035eb8bfb.png"
aws_logo = "https://user-images.githubusercontent.com/38561120/235953920-3e3fba08-0795-41dc-b480-9bea57db9f2e.png"
azure_logo = "https://user-images.githubusercontent.com/38561120/235927375-b23e2e0f-8932-49ec-b59c-d89f61c8041d.png"
@@ -70,10 +70,11 @@ def get_default_mute_file_path(provider: str):
"""
get_default_mute_file_path returns the default mute file path for the provider
"""
# TODO: crate default mutelist file for kubernetes, azure and gcp
if provider == "aws":
return f"{pathlib.Path(os.path.dirname(os.path.realpath(__file__)))}/{provider}_mutelist.yaml"
return None
# TODO: create default mutelist file for kubernetes, azure and gcp
mutelist_path = f"{pathlib.Path(os.path.dirname(os.path.realpath(__file__)))}/{provider}_mutelist.yaml"
if not os.path.isfile(mutelist_path):
mutelist_path = None
return mutelist_path
def check_current_version():


@@ -1,10 +1,10 @@
# Fixer configuration file
aws:
# ec2_ebs_default_encryption
# No configuration needed for this check
# No configuration needed for this check
# s3_account_level_public_access_blocks
# No configuration needed for this check
# No configuration needed for this check
# iam_password_policy_* checks:
iam_password_policy:
@@ -17,3 +17,35 @@ aws:
MaxPasswordAge: 90
PasswordReusePrevention: 24
HardExpiry: False
# accessanalyzer_enabled
accessanalyzer_enabled:
AnalyzerName: "DefaultAnalyzer"
AnalyzerType: "ACCOUNT_UNUSED_ACCESS"
# guardduty_is_enabled
# No configuration needed for this check
# securityhub_enabled
securityhub_enabled:
EnableDefaultStandards: True
# cloudtrail_multi_region_enabled
cloudtrail_multi_region_enabled:
TrailName: "DefaultTrail"
S3BucketName: "my-cloudtrail-bucket"
IsMultiRegionTrail: True
EnableLogFileValidation: True
# CloudWatchLogsLogGroupArn: "arn:aws:logs:us-east-1:123456789012:log-group:my-cloudtrail-log-group"
# CloudWatchLogsRoleArn: "arn:aws:iam::123456789012:role/my-cloudtrail-role"
# KmsKeyId: "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
# kms_cmk_rotation_enabled
# No configuration needed for this check
  # ec2_ebs_snapshot_account_block_public_access
ec2_ebs_snapshot_account_block_public_access:
State: "block-all-sharing"
  # ec2_instance_account_imdsv2_enabled
# No configuration needed for this check


@@ -22,7 +22,6 @@ from prowler.lib.logger import logger
from prowler.lib.mutelist.mutelist import mutelist_findings
from prowler.lib.outputs.outputs import report
from prowler.lib.utils.utils import open_file, parse_json_file, print_boxes
from prowler.providers.common.common import get_global_provider
from prowler.providers.common.models import Audit_Metadata
@@ -500,12 +499,30 @@ def run_fixer(check_findings: list) -> int:
)
for finding in findings:
if finding.status == "FAIL":
# Check if fixer has region as argument to check if it is a region specific fixer
if "region" in fixer.__code__.co_varnames:
# Check what type of fixer is:
# - If it is a fixer for a specific resource and region
# - If it is a fixer for a specific region
# - If it is a fixer for a specific resource
if (
"region" in fixer.__code__.co_varnames
and "resource_id" in fixer.__code__.co_varnames
):
print(
f"\t{orange_color}FIXING{Style.RESET_ALL} {finding.resource_id} in {finding.region}... "
)
if fixer(
resource_id=finding.resource_id,
region=finding.region,
):
fixed_findings += 1
print(f"\t{Fore.GREEN}DONE{Style.RESET_ALL}")
else:
print(f"\t{Fore.RED}ERROR{Style.RESET_ALL}")
elif "region" in fixer.__code__.co_varnames:
print(
f"\t{orange_color}FIXING{Style.RESET_ALL} {finding.region}... "
)
if fixer(finding.region):
if fixer(region=finding.region):
fixed_findings += 1
print(f"\t{Fore.GREEN}DONE{Style.RESET_ALL}")
else:
@@ -514,7 +531,7 @@ def run_fixer(check_findings: list) -> int:
print(
f"\t{orange_color}FIXING{Style.RESET_ALL} Resource {finding.resource_id}... "
)
if fixer(finding.resource_id):
if fixer(resource_id=finding.resource_id):
fixed_findings += 1
print(f"\t\t{Fore.GREEN}DONE{Style.RESET_ALL}")
else:
@@ -539,9 +556,6 @@ def execute_checks(
services_executed = set()
checks_executed = set()
# TODO: why is this here?
global_provider = get_global_provider()
# Initialize the Audit Metadata
# TODO: this should be done in the provider class
global_provider.audit_metadata = Audit_Metadata(
@@ -717,6 +731,11 @@ def execute(
)
except Exception:
sys.exit(1)
except ModuleNotFoundError:
logger.error(
f"Check '{check_name}' was not found for the {global_provider.type.upper()} provider"
)
check_findings = []
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
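The fixer dispatch above keys off the fixer function's signature, inspected through `__code__.co_varnames`. A condensed sketch of the same idea (stand-in names, not the actual Prowler helpers):

```python
def run_fixer_for(fixer, finding):
    # Inspect the fixer's parameters to decide which arguments to pass:
    # region + resource, region only, or resource only.
    params = fixer.__code__.co_varnames[: fixer.__code__.co_argcount]
    if "region" in params and "resource_id" in params:
        return fixer(resource_id=finding["resource_id"], region=finding["region"])
    if "region" in params:
        return fixer(region=finding["region"])
    return fixer(resource_id=finding["resource_id"])

finding = {"resource_id": "vol-123", "region": "eu-west-1"}
assert run_fixer_for(lambda region: region, finding) == "eu-west-1"
assert run_fixer_for(lambda resource_id: resource_id, finding) == "vol-123"
assert run_fixer_for(
    lambda resource_id, region: (resource_id, region), finding
) == ("vol-123", "eu-west-1")
```

Passing the arguments by keyword, as the diff now does, keeps the call correct regardless of the parameter order each fixer declares.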


@@ -120,7 +120,7 @@ class ISO27001_2013_Requirement_Attribute(BaseModel):
# MITRE Requirement Attribute
class Mitre_Requirement_Attribute(BaseModel):
class Mitre_Requirement_Attribute_AWS(BaseModel):
"""MITRE Requirement Attribute"""
AWSService: str
@@ -129,6 +129,16 @@ class Mitre_Requirement_Attribute(BaseModel):
Comment: str
# MITRE Requirement Attribute
class Mitre_Requirement_Attribute_Azure(BaseModel):
"""MITRE Requirement Attribute"""
AzureService: str
Category: str
Value: str
Comment: str
# MITRE Requirement
class Mitre_Requirement(BaseModel):
"""Mitre_Requirement holds the model for every MITRE requirement"""
@@ -140,7 +150,9 @@ class Mitre_Requirement(BaseModel):
Description: str
Platforms: list[str]
TechniqueURL: str
Attributes: list[Mitre_Requirement_Attribute]
Attributes: Union[
list[Mitre_Requirement_Attribute_AWS], list[Mitre_Requirement_Attribute_Azure]
]
Checks: list[str]


@@ -10,7 +10,6 @@ from prowler.config.config import (
default_fixer_config_file_path,
default_output_directory,
finding_statuses,
get_default_mute_file_path,
valid_severities,
)
from prowler.providers.common.arguments import (
@@ -326,14 +325,11 @@ Detailed documentation at https://docs.prowler.com
def __init_mutelist_parser__(self):
mutelist_subparser = self.common_providers_parser.add_argument_group("Mutelist")
provider = sys.argv[1] if len(sys.argv) > 1 else "aws"
mutelist_subparser.add_argument(
"--mutelist-file",
"-w",
nargs="?",
# TODO(PRWLR-3519): this has to be done in the provider class not here
default=get_default_mute_file_path(provider),
help="Path for mutelist yaml file. See example prowler/config/<provider>_mutelist.yaml for reference and format. For AWS provider, it also accepts AWS DynamoDB Table, Lambda ARNs or S3 URIs, see more in https://docs.prowler.cloud/en/latest/tutorials/mutelist/",
help="Path for mutelist YAML file. See example prowler/config/<provider>_mutelist.yaml for reference and format. For AWS provider, it also accepts AWS DynamoDB Table, Lambda ARNs or S3 URIs, see more in https://docs.prowler.cloud/en/latest/tutorials/mutelist/",
)
def __init_config_parser__(self):


@@ -40,7 +40,7 @@ class FindingOutput(BaseModel):
# Optional since depends on permissions
account_organization_name: Optional[str]
# Optional since depends on permissions
account_tags: Optional[str]
account_tags: Optional[list[str]]
finding_uid: str
provider: str
check_id: str


@@ -17,9 +17,9 @@ from prowler.lib.outputs.compliance.generic import (
from prowler.lib.outputs.compliance.iso27001_2013_aws import (
write_compliance_row_iso27001_2013_aws,
)
from prowler.lib.outputs.compliance.mitre_attack_aws import (
from prowler.lib.outputs.compliance.mitre_attack.mitre_attack import (
get_mitre_attack_table,
write_compliance_row_mitre_attack_aws,
write_compliance_row_mitre_attack,
)
@@ -77,7 +77,6 @@ def get_check_compliance_frameworks_in_input(
)
if compliance_name.replace("-", "_") in input_compliance_frameworks:
check_compliances.append(compliance)
return check_compliances
@@ -125,13 +124,9 @@ def fill_compliance(
file_descriptors, finding, compliance, output_options, provider
)
elif (
compliance.Framework == "MITRE-ATTACK"
and compliance.Version == ""
and compliance.Provider == "AWS"
):
write_compliance_row_mitre_attack_aws(
file_descriptors, finding, compliance, output_options, provider
elif compliance.Framework == "MITRE-ATTACK" and compliance.Version == "":
write_compliance_row_mitre_attack(
file_descriptors, finding, compliance, provider
)
else:


@@ -1,74 +1,95 @@
from csv import DictWriter
from importlib import import_module
from colorama import Fore, Style
from tabulate import tabulate
from prowler.config.config import orange_color, timestamp
from prowler.lib.outputs.compliance.models import Check_Output_MITRE_ATTACK
from prowler.lib.logger import logger
from prowler.lib.outputs.csv.csv import generate_csv_fields
from prowler.lib.outputs.utils import unroll_list
from prowler.lib.utils.utils import outputs_unix_timestamp
def write_compliance_row_mitre_attack_aws(
file_descriptors, finding, compliance, output_options, provider
):
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
def write_compliance_row_mitre_attack(file_descriptors, finding, compliance, provider):
try:
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
compliance_output = compliance_output.lower().replace("-", "_")
csv_header = generate_csv_fields(Check_Output_MITRE_ATTACK)
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
requirement_name = requirement.Name
attributes_aws_services = ", ".join(
attribute.AWSService for attribute in requirement.Attributes
)
attributes_categories = ", ".join(
attribute.Category for attribute in requirement.Attributes
)
attributes_values = ", ".join(
attribute.Value for attribute in requirement.Attributes
)
attributes_comments = ", ".join(
attribute.Comment for attribute in requirement.Attributes
)
compliance_row = Check_Output_MITRE_ATTACK(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=provider.identity.account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Name=requirement_name,
Requirements_Tactics=unroll_list(requirement.Tactics),
Requirements_SubTechniques=unroll_list(requirement.SubTechniques),
Requirements_Platforms=unroll_list(requirement.Platforms),
Requirements_TechniqueURL=requirement.TechniqueURL,
Requirements_Attributes_AWSServices=attributes_aws_services,
Requirements_Attributes_Categories=attributes_categories,
Requirements_Attributes_Values=attributes_values,
Requirements_Attributes_Comments=attributes_comments,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
Muted=finding.muted,
mitre_attack_model_name = "MitreAttack" + compliance.Provider
module = import_module("prowler.lib.outputs.compliance.mitre_attack.models")
mitre_attack_model = getattr(module, mitre_attack_model_name)
compliance_output = compliance_output.lower().replace("-", "_")
csv_header = generate_csv_fields(mitre_attack_model)
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
for requirement in compliance.Requirements:
csv_writer.writerow(compliance_row.__dict__)
if compliance.Provider == "AWS":
attributes_services = ", ".join(
attribute.AWSService for attribute in requirement.Attributes
)
elif compliance.Provider == "Azure":
attributes_services = ", ".join(
attribute.AzureService for attribute in requirement.Attributes
)
requirement_description = requirement.Description
requirement_id = requirement.Id
requirement_name = requirement.Name
attributes_categories = ", ".join(
attribute.Category for attribute in requirement.Attributes
)
attributes_values = ", ".join(
attribute.Value for attribute in requirement.Attributes
)
attributes_comments = ", ".join(
attribute.Comment for attribute in requirement.Attributes
)
common_data = {
"Provider": finding.check_metadata.Provider,
"Description": compliance.Description,
"AssessmentDate": outputs_unix_timestamp(
provider.output_options.unix_timestamp, timestamp
),
"Requirements_Id": requirement_id,
"Requirements_Name": requirement_name,
"Requirements_Description": requirement_description,
"Requirements_Tactics": unroll_list(requirement.Tactics),
"Requirements_SubTechniques": unroll_list(requirement.SubTechniques),
"Requirements_Platforms": unroll_list(requirement.Platforms),
"Requirements_TechniqueURL": requirement.TechniqueURL,
"Requirements_Attributes_Services": attributes_services,
"Requirements_Attributes_Categories": attributes_categories,
"Requirements_Attributes_Values": attributes_values,
"Requirements_Attributes_Comments": attributes_comments,
"Status": finding.status,
"StatusExtended": finding.status_extended,
"ResourceId": finding.resource_id,
"CheckId": finding.check_metadata.CheckID,
"Muted": finding.muted,
}
if compliance.Provider == "AWS":
common_data["AccountId"] = provider.identity.account
common_data["Region"] = finding.region
elif compliance.Provider == "Azure":
common_data["SubscriptionId"] = unroll_list(
provider.identity.subscriptions
)
compliance_row = mitre_attack_model(**common_data)
csv_writer.writerow(compliance_row.__dict__)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def get_mitre_attack_table(
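The provider-specific model is resolved dynamically: the provider string is appended to the `"MitreAttack"` prefix and looked up in the models module with `import_module` plus `getattr`. The pattern in miniature, demonstrated against a stdlib module so the sketch stays self-contained (the helper name is illustrative):

```python
from importlib import import_module

def resolve_provider_model(module_name: str, prefix: str, provider: str):
    # Build the class name at runtime (e.g. "MitreAttack" + "AWS")
    # and fetch it from the target module.
    module = import_module(module_name)
    return getattr(module, prefix + provider)

# Demonstrated with a stdlib class: "Ordered" + "Dict" -> collections.OrderedDict
cls = resolve_provider_model("collections", "Ordered", "Dict")
assert cls.__name__ == "OrderedDict"
```

This avoids a hard-coded `if`/`elif` over every provider model at the import site, at the cost of the class name being invisible to static analysis.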


@@ -0,0 +1,56 @@
from pydantic import BaseModel
class MitreAttackAWS(BaseModel):
"""
MitreAttackAWS generates a finding's output in CSV MITRE ATTACK format for AWS.
"""
Provider: str
Description: str
AccountId: str
Region: str
AssessmentDate: str
Requirements_Id: str
Requirements_Name: str
Requirements_Description: str
Requirements_Tactics: str
Requirements_SubTechniques: str
Requirements_Platforms: str
Requirements_TechniqueURL: str
Requirements_Attributes_Services: str
Requirements_Attributes_Categories: str
Requirements_Attributes_Values: str
Requirements_Attributes_Comments: str
Status: str
StatusExtended: str
ResourceId: str
CheckId: str
Muted: bool
class MitreAttackAzure(BaseModel):
"""
MitreAttackAzure generates a finding's output in CSV MITRE ATTACK format for Azure.
"""
Provider: str
Description: str
SubscriptionId: str
AssessmentDate: str
Requirements_Id: str
Requirements_Name: str
Requirements_Description: str
Requirements_Tactics: str
Requirements_SubTechniques: str
Requirements_Platforms: str
Requirements_TechniqueURL: str
Requirements_Attributes_Services: str
Requirements_Attributes_Categories: str
Requirements_Attributes_Values: str
Requirements_Attributes_Comments: str
Status: str
StatusExtended: str
ResourceId: str
CheckId: str
Muted: bool


@@ -4,34 +4,6 @@ from pydantic import BaseModel
# TODO: move this to outputs/<compliance>/models.py
class Check_Output_MITRE_ATTACK(BaseModel):
"""
Check_Output_MITRE_ATTACK generates a finding's output in CSV MITRE ATTACK format.
"""
Provider: str
Description: str
AccountId: str
Region: str
AssessmentDate: str
Requirements_Id: str
Requirements_Name: str
Requirements_Description: str
Requirements_Tactics: str
Requirements_SubTechniques: str
Requirements_Platforms: str
Requirements_TechniqueURL: str
Requirements_Attributes_AWSServices: str
Requirements_Attributes_Categories: str
Requirements_Attributes_Values: str
Requirements_Attributes_Comments: str
Status: str
StatusExtended: str
ResourceId: str
CheckId: str
Muted: bool
class Check_Output_CSV_ENS_RD2022(BaseModel):
"""
Check_Output_CSV_ENS_RD2022 generates a finding's output in CSV ENS RD2022 format.


@@ -9,6 +9,10 @@ from prowler.config.config import (
)
from prowler.lib.logger import logger
from prowler.lib.outputs.common_models import FindingOutput
from prowler.lib.outputs.compliance.mitre_attack.models import (
MitreAttackAWS,
MitreAttackAzure,
)
from prowler.lib.outputs.compliance.models import (
Check_Output_CSV_AWS_CIS,
Check_Output_CSV_AWS_ISO27001_2013,
@@ -18,7 +22,6 @@ from prowler.lib.outputs.compliance.models import (
Check_Output_CSV_GCP_CIS,
Check_Output_CSV_Generic_Compliance,
Check_Output_CSV_KUBERNETES_CIS,
Check_Output_MITRE_ATTACK,
)
from prowler.lib.outputs.csv.csv import generate_csv_fields
from prowler.lib.utils.utils import file_exists, open_file
@@ -117,6 +120,13 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, provi
Check_Output_CSV_AZURE_CIS,
)
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "mitre_attack_azure":
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
MitreAttackAzure,
)
file_descriptors.update({output_mode: file_descriptor})
else:
file_descriptor = initialize_file_descriptor(
filename,
@@ -170,7 +180,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, provi
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
Check_Output_MITRE_ATTACK,
MitreAttackAWS,
)
file_descriptors.update({output_mode: file_descriptor})


@@ -81,6 +81,7 @@ def fill_json_ocsf(finding_output: FindingOutput) -> DetectionFinding:
status=finding_status.name,
status_code=finding_output.status,
status_detail=finding_output.status_extended,
risk_details=finding_output.risk,
resources=[
ResourceDetails(
# TODO: Check labels for other providers
@@ -96,6 +97,7 @@ def fill_json_ocsf(finding_output: FindingOutput) -> DetectionFinding:
# TODO: this should be included only if using the Cloud profile
cloud_partition=finding_output.partition,
region=finding_output.region,
data={"details": finding_output.resource_details},
)
],
metadata=Metadata(
@@ -135,6 +137,7 @@ def fill_json_ocsf(finding_output: FindingOutput) -> DetectionFinding:
type_id=cloud_account_type.value,
type=cloud_account_type.name,
uid=finding_output.account_uid,
labels=finding_output.account_tags,
),
org=Organization(
uid=finding_output.account_organization_uid,


@@ -20,7 +20,7 @@ from prowler.lib.outputs.csv.csv import generate_csv_fields
from prowler.lib.outputs.file_descriptors import fill_file_descriptors
from prowler.lib.outputs.json_asff.json_asff import fill_json_asff
from prowler.lib.outputs.json_ocsf.json_ocsf import fill_json_ocsf
from prowler.lib.outputs.utils import unroll_dict
from prowler.lib.outputs.utils import unroll_dict, unroll_list
def stdout_report(finding, color, verbose, status, fix):
@@ -88,7 +88,6 @@ def report(check_findings, provider):
available_compliance_frameworks
)
)
fill_compliance(
output_options,
finding,
@@ -145,6 +144,9 @@ def report(check_findings, provider):
finding_output.compliance = unroll_dict(
finding_output.compliance
)
finding_output.account_tags = unroll_list(
finding_output.account_tags, ","
)
csv_writer = DictWriter(
file_descriptors["csv"],
fieldnames=generate_csv_fields(FindingOutput),


@@ -40,11 +40,13 @@ def display_summary_table(
entity_type = "Context"
audited_entities = provider.identity.context
if findings:
# Check if there are findings and that they are not all MANUAL
if findings and not all(finding.status == "MANUAL" for finding in findings):
current = {
"Service": "",
"Provider": "",
"Total": 0,
"Pass": 0,
"Critical": 0,
"High": 0,
"Medium": 0,
@@ -70,9 +72,9 @@ def display_summary_table(
):
add_service_to_table(findings_table, current)
current["Total"] = current["Muted"] = current["Critical"] = current[
"High"
] = current["Medium"] = current["Low"] = 0
current["Total"] = current["Pass"] = current["Muted"] = current[
"Critical"
] = current["High"] = current["Medium"] = current["Low"] = 0
current["Service"] = finding.check_metadata.ServiceName
current["Provider"] = finding.check_metadata.Provider
@@ -83,6 +85,7 @@ def display_summary_table(
current["Muted"] += 1
if finding.status == "PASS":
pass_count += 1
current["Pass"] += 1
elif finding.status == "FAIL":
fail_count += 1
if finding.check_metadata.Severity == "critical":
@@ -155,7 +158,7 @@ def add_service_to_table(findings_table, current):
)
current["Status"] = f"{Fore.RED}FAIL ({total_fails}){Style.RESET_ALL}"
else:
current["Status"] = f"{Fore.GREEN}PASS ({current['Total']}){Style.RESET_ALL}"
current["Status"] = f"{Fore.GREEN}PASS ({current['Pass']}){Style.RESET_ALL}"
findings_table["Provider"].append(current["Provider"])
findings_table["Service"].append(current["Service"])


@@ -1,6 +1,5 @@
def unroll_list(listed_items: list):
def unroll_list(listed_items: list, separator: str = "|"):
unrolled_items = ""
separator = "|"
if listed_items:
for item in listed_items:
if not unrolled_items:
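The hunk above is truncated; the change is that the separator, previously hard-coded to `"|"`, becomes a parameter with that default so callers (e.g. for `account_tags`) can pass `","` instead. A plausible completion, assuming items are joined with a spaced separator (the exact spacing is an assumption, not taken from the diff):

```python
def unroll_list(listed_items: list, separator: str = "|"):
    # Join list items into one string; the separator defaults to "|"
    # but callers can override it (e.g. "," for account tags).
    unrolled_items = ""
    if listed_items:
        for item in listed_items:
            if not unrolled_items:
                unrolled_items = f"{item}"
            else:
                unrolled_items = f"{unrolled_items} {separator} {item}"
    return unrolled_items

assert unroll_list(["a", "b"]) == "a | b"
assert unroll_list(["a", "b"], ",") == "a , b"
assert unroll_list([]) == ""
```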


@@ -1,7 +1,12 @@
import grp
import json
import os
import pwd
try:
import grp
import pwd
except ImportError:
pass
import re
import sys
import tempfile


@@ -17,7 +17,6 @@ from prowler.config.config import (
)
from prowler.lib.check.check import list_modules, recover_checks_from_service
from prowler.lib.logger import logger
from prowler.lib.mutelist.mutelist import parse_mutelist_file
from prowler.lib.utils.utils import open_file, parse_json_file, print_boxes
from prowler.providers.aws.config import (
AWS_STS_GLOBAL_ENDPOINT_REGION,
@@ -54,7 +53,6 @@ class AwsProvider(Provider):
_audit_config: dict
_scan_unused_services: bool = False
_enabled_regions: set = set()
_mutelist: dict = {}
_output_options: AWSOutputOptions
# TODO: this is not optional, enforce for all providers
audit_metadata: Audit_Metadata
@@ -284,20 +282,6 @@ class AwsProvider(Provider):
arguments, bulk_checks_metadata, self._identity
)
@property
def mutelist(self):
return self._mutelist
@mutelist.setter
def mutelist(self, mutelist_path):
if mutelist_path:
mutelist = parse_mutelist_file(
mutelist_path, self._session.current_session, self._identity.account
)
else:
mutelist = {}
self._mutelist = mutelist
@property
def get_output_mapping(self):
return {
@@ -646,7 +630,7 @@ class AwsProvider(Provider):
audited_regions.add(region)
return audited_regions
def get_tagged_resources(self, input_resource_tags: list[str]):
def get_tagged_resources(self, input_resource_tags: list[str]) -> list[str]:
"""
Returns a list of the resources that are going to be scanned based on the given input tags.
@@ -805,18 +789,25 @@ class AwsProvider(Provider):
def get_aws_enabled_regions(self, current_session: Session) -> set:
"""get_aws_enabled_regions returns a set of enabled AWS regions"""
try:
# EC2 Client to check enabled regions
service = "ec2"
default_region = self.get_default_region(service)
ec2_client = current_session.client(service, region_name=default_region)
# EC2 Client to check enabled regions
service = "ec2"
default_region = self.get_default_region(service)
ec2_client = current_session.client(service, region_name=default_region)
enabled_regions = set()
# With AllRegions=False we only get the enabled regions for the account
for region in ec2_client.describe_regions(AllRegions=False).get(
"Regions", []
):
enabled_regions.add(region.get("RegionName"))
enabled_regions = set()
# With AllRegions=False we only get the enabled regions for the account
for region in ec2_client.describe_regions(AllRegions=False).get("Regions", []):
enabled_regions.add(region.get("RegionName"))
return enabled_regions
return enabled_regions
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return set()
# TODO: review this function
# Maybe this should be done within the AwsProvider and not in __main__.py
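The de-indented block above is the core of the lookup: `describe_regions(AllRegions=False)` returns only the regions enabled for the account. A dependency-free sketch with a faked EC2 client (the fake is illustrative; the real code obtains the client from a boto3 session):

```python
def get_enabled_regions(ec2_client) -> set:
    # With AllRegions=False, only regions enabled for the account are returned.
    enabled_regions = set()
    for region in ec2_client.describe_regions(AllRegions=False).get("Regions", []):
        enabled_regions.add(region.get("RegionName"))
    return enabled_regions

class FakeEC2Client:
    def describe_regions(self, AllRegions):
        return {"Regions": [{"RegionName": "us-east-1"}, {"RegionName": "eu-west-1"}]}

assert get_enabled_regions(FakeEC2Client()) == {"us-east-1", "eu-west-1"}
```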


@@ -1278,6 +1278,7 @@
"braket": {
"regions": {
"aws": [
"eu-west-2",
"us-east-1",
"us-west-1",
"us-west-2"
@@ -1292,6 +1293,7 @@
"ap-northeast-1",
"ap-northeast-2",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ca-central-1",
"eu-central-1",
@@ -4220,6 +4222,7 @@
"eu-central-1",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
@@ -4650,6 +4653,7 @@
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"ca-west-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
@@ -6166,7 +6170,10 @@
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
}
},
"lightsail": {
@@ -6518,7 +6525,9 @@
"aws": [
"us-east-1"
],
"aws-cn": [],
"aws-cn": [
"cn-northwest-1"
],
"aws-us-gov": []
}
},
@@ -7191,6 +7200,7 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
@@ -8664,16 +8674,21 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
@@ -9445,6 +9460,7 @@
],
"aws-cn": [],
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
}
@@ -9971,6 +9987,7 @@
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"ca-west-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",

View File

@@ -30,9 +30,9 @@ def get_organizations_metadata(
def parse_organizations_metadata(metadata: dict, tags: dict) -> AWSOrganizationsInfo:
try:
# Convert Tags dictionary to String
account_details_tags = ""
account_details_tags = []
for tag in tags.get("Tags", {}):
account_details_tags += f"{tag['Key']}:{tag['Value']},"
account_details_tags.append(f"{tag['Key']}:{tag['Value']}")
account_details = metadata.get("Account", {})
@@ -46,7 +46,7 @@ def parse_organizations_metadata(metadata: dict, tags: dict) -> AWSOrganizations
organization_account_arn=aws_account_arn.arn,
organization_arn=aws_organization_arn,
organization_id=aws_organization_id,
account_tags=account_details_tags.rstrip(","),
account_tags=account_details_tags,
)
except Exception as error:
logger.warning(
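The tag-handling change above can be exercised in isolation; a sketch with hypothetical tag data:

```python
# Tags are now collected as a list of "Key:Value" strings rather than a
# single comma-joined string, so the trailing-comma rstrip is no longer needed.
def parse_account_tags(tags: dict) -> list:
    account_details_tags = []
    for tag in tags.get("Tags", []):
        account_details_tags.append(f"{tag['Key']}:{tag['Value']}")
    return account_details_tags


print(parse_account_tags({"Tags": [{"Key": "env", "Value": "prod"}]}))  # ['env:prod']
```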

View File

@@ -10,24 +10,25 @@ def send_to_s3_bucket(
output_filename, output_directory, output_mode, output_bucket_name, audit_session
):
try:
filename = ""
# Get only last part of the path
if output_mode == "csv":
filename = f"{output_filename}{csv_file_suffix}"
elif output_mode == "json-asff":
filename = f"{output_filename}{json_asff_file_suffix}"
elif output_mode == "json-ocsf":
filename = f"{output_filename}{json_ocsf_file_suffix}"
else: # Compliance output mode
filename = f"{output_filename}_{output_mode}{csv_file_suffix}"
logger.info(f"Sending output file {filename} to S3 bucket {output_bucket_name}")
# File location
file_name = output_directory + "/" + filename
# S3 Object name
bucket_directory = get_s3_object_path(output_directory)
object_name = bucket_directory + "/" + output_mode + "/" + filename
filename = ""
# Get only last part of the path
if output_mode in ["csv", "json-asff", "json-ocsf"]:
if output_mode == "csv":
filename = f"{output_filename}{csv_file_suffix}"
elif output_mode == "json-asff":
filename = f"{output_filename}{json_asff_file_suffix}"
elif output_mode == "json-ocsf":
filename = f"{output_filename}{json_ocsf_file_suffix}"
file_name = output_directory + "/" + filename
object_name = bucket_directory + "/" + output_mode + "/" + filename
else: # Compliance output mode
filename = f"{output_filename}_{output_mode}{csv_file_suffix}"
file_name = output_directory + "/compliance/" + filename
object_name = bucket_directory + "/compliance/" + filename
logger.info(f"Sending output file {filename} to S3 bucket {output_bucket_name}")
s3_client = audit_session.client("s3")
s3_client.upload_file(file_name, output_bucket_name, object_name)
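The refactor above routes compliance outputs into a dedicated `compliance/` folder both locally and in the bucket. A sketch of the resulting path construction, with the module's suffix constants replaced by assumed literal values (`.csv`, `.asff.json`, `.ocsf.json` are placeholders here; check the real `csv_file_suffix` and friends):

```python
# Assumed stand-ins for csv_file_suffix / json_asff_file_suffix / json_ocsf_file_suffix.
ASSUMED_SUFFIXES = {"csv": ".csv", "json-asff": ".asff.json", "json-ocsf": ".ocsf.json"}


def build_s3_paths(output_filename, output_directory, bucket_directory, output_mode):
    if output_mode in ASSUMED_SUFFIXES:
        filename = f"{output_filename}{ASSUMED_SUFFIXES[output_mode]}"
        file_name = f"{output_directory}/{filename}"
        object_name = f"{bucket_directory}/{output_mode}/{filename}"
    else:  # compliance output modes get their own folder
        filename = f"{output_filename}_{output_mode}.csv"
        file_name = f"{output_directory}/compliance/{filename}"
        object_name = f"{bucket_directory}/compliance/{filename}"
    return file_name, object_name
```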

View File

@@ -15,7 +15,7 @@ MAX_WORKERS = 10
class AWSService:
"""The AWSService class offers a parent class for each AWS Service to generate:
- AWS Regional Clients
- Shared information like the account ID and ARN, the the AWS partition and the checks audited
- Shared information like the account ID and ARN, the AWS partition and the checks audited
- AWS Session
- Thread pool for the __threading_call__
- Also handles if the AWS Service is Global

View File

@@ -16,7 +16,7 @@ class AWSOrganizationsInfo:
organization_account_arn: str
organization_arn: str
organization_id: str
account_tags: str
account_tags: list[str]
@dataclass

View File

@@ -0,0 +1,41 @@
from prowler.lib.logger import logger
from prowler.providers.aws.services.accessanalyzer.accessanalyzer_client import (
accessanalyzer_client,
)
def fixer(region):
"""
Enable Access Analyzer in a region. Requires the access-analyzer:CreateAnalyzer permission:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "access-analyzer:CreateAnalyzer",
"Resource": "*"
}
]
}
Args:
region (str): AWS region
Returns:
bool: True if Access Analyzer is enabled, False otherwise
"""
try:
regional_client = accessanalyzer_client.regional_clients[region]
regional_client.create_analyzer(
analyzerName=accessanalyzer_client.fixer_config.get(
"accessanalyzer_enabled", {}
).get("AnalyzerName", "DefaultAnalyzer"),
type=accessanalyzer_client.fixer_config.get(
"accessanalyzer_enabled", {}
).get("AnalyzerType", "ACCOUNT_UNUSED_ACCESS"),
)
except Exception as error:
logger.error(
f"{region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return False
else:
return True
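The fixer's only tunable parts are the two `fixer_config` lookups passed to `create_analyzer`. A runnable sketch of that defaulting behavior:

```python
def analyzer_settings(fixer_config: dict) -> tuple:
    cfg = fixer_config.get("accessanalyzer_enabled", {})
    # Defaults mirror the ones used in the fixer above.
    name = cfg.get("AnalyzerName", "DefaultAnalyzer")
    analyzer_type = cfg.get("AnalyzerType", "ACCOUNT_UNUSED_ACCESS")
    return name, analyzer_type


print(analyzer_settings({}))  # ('DefaultAnalyzer', 'ACCOUNT_UNUSED_ACCESS')
```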

View File

@@ -11,13 +11,13 @@
"Severity": "medium",
"ResourceType": "Other",
"Description": "Maintain current contact details.",
"Risk": "Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization. An AWS account supports a number of contact details; and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy. If an AWS account is observed to be behaving in a prohibited or suspicious manner; AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior needs urgent mitigation; proactive measures may be taken; including throttling of traffic between the account exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will result in impaired service to and from the account in question.",
"Risk": "Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization. An AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy. If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior needs urgent mitigation, proactive measures may be taken, including throttling of traffic between the account exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will result in impaired service to and from the account in question.",
"RelatedUrl": "",
"Remediation": {
"Code": {
"CLI": "No command available.",
"NativeIaC": "",
"Other": "https://docs.bridgecrew.io/docs/iam_18-maintain-contact-details#aws-console",
"Other": "https://docs.prowler.com/checks/aws/iam-policies/iam_18-maintain-contact-details#aws-console",
"Terraform": ""
},
"Recommendation": {

View File

@@ -11,13 +11,13 @@
"Severity": "medium",
"ResourceType": "Other",
"Description": "Maintain different contact details to security, billing and operations.",
"Risk": "Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization. An AWS account supports a number of contact details; and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy. If an AWS account is observed to be behaving in a prohibited or suspicious manner; AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior needs urgent mitigation; proactive measures may be taken; including throttling of traffic between the account exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will result in impaired service to and from the account in question.",
"Risk": "Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization. An AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy. If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior needs urgent mitigation, proactive measures may be taken, including throttling of traffic between the account exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will result in impaired service to and from the account in question.",
"RelatedUrl": "https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact.html",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://docs.bridgecrew.io/docs/iam_18-maintain-contact-details#aws-console",
"Other": "https://docs.prowler.com/checks/aws/iam-policies/iam_18-maintain-contact-details#aws-console",
"Terraform": ""
},
"Recommendation": {

View File

@@ -6,22 +6,26 @@ class account_maintain_different_contact_details_to_security_billing_and_operati
Check
):
def execute(self):
report = Check_Report_AWS(self.metadata())
report.region = account_client.region
report.resource_id = account_client.audited_account
report.resource_arn = account_client.audited_account_arn
findings = []
if account_client.contact_base:
report = Check_Report_AWS(self.metadata())
report.region = account_client.region
report.resource_id = account_client.audited_account
report.resource_arn = account_client.audited_account_arn
if (
len(account_client.contact_phone_numbers)
== account_client.number_of_contacts
and len(account_client.contact_names) == account_client.number_of_contacts
# This is because the primary contact has no email field
and len(account_client.contact_emails)
== account_client.number_of_contacts - 1
):
report.status = "PASS"
report.status_extended = "SECURITY, BILLING and OPERATIONS contacts found and they are different between each other and between ROOT contact."
else:
report.status = "FAIL"
report.status_extended = "SECURITY, BILLING and OPERATIONS contacts not found or they are not different between each other and between ROOT contact."
return [report]
if (
len(account_client.contact_phone_numbers)
== account_client.number_of_contacts
and len(account_client.contact_names)
== account_client.number_of_contacts
# This is because the primary contact has no email field
and len(account_client.contact_emails)
== account_client.number_of_contacts - 1
):
report.status = "PASS"
report.status_extended = "SECURITY, BILLING and OPERATIONS contacts found and they are different between each other and between ROOT contact."
else:
report.status = "FAIL"
report.status_extended = "SECURITY, BILLING and OPERATIONS contacts not found or they are not different between each other and between ROOT contact."
findings.append(report)
return findings
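The distinctness condition above compares set cardinalities against the contact count. A standalone sketch of just that predicate (the helper name is hypothetical):

```python
def contacts_are_distinct(phones: set, names: set, emails: set, number_of_contacts: int) -> bool:
    # Sets deduplicate, so equal cardinality means every contact is distinct.
    # Emails are compared against number_of_contacts - 1 because the primary
    # contact has no email field.
    return (
        len(phones) == number_of_contacts
        and len(names) == number_of_contacts
        and len(emails) == number_of_contacts - 1
    )


# Four contacts (primary + three alternates), all distinct:
print(contacts_are_distinct({"1", "2", "3", "4"}, {"a", "b", "c", "d"}, {"x", "y", "z"}, 4))  # True
```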

View File

@@ -17,7 +17,7 @@
"Code": {
"CLI": "No command available.",
"NativeIaC": "",
"Other": "https://docs.bridgecrew.io/docs/iam_19#aws-console",
"Other": "https://docs.prowler.com/checks/aws/iam-policies/iam_19#aws-console",
"Terraform": ""
},
"Recommendation": {

View File

@@ -17,7 +17,7 @@
"Code": {
"CLI": "No command available.",
"NativeIaC": "",
"Other": "https://docs.bridgecrew.io/docs/iam_15",
"Other": "https://docs.prowler.com/checks/aws/iam-policies/iam_15",
"Terraform": ""
},
"Recommendation": {

View File

@@ -18,28 +18,29 @@ class Account(AWSService):
self.contacts_security = self.__get_alternate_contact__("SECURITY")
self.contacts_operations = self.__get_alternate_contact__("OPERATIONS")
# Set of contact phone numbers
self.contact_phone_numbers = {
self.contact_base.phone_number,
self.contacts_billing.phone_number,
self.contacts_security.phone_number,
self.contacts_operations.phone_number,
}
if self.contact_base:
# Set of contact phone numbers
self.contact_phone_numbers = {
self.contact_base.phone_number,
self.contacts_billing.phone_number,
self.contacts_security.phone_number,
self.contacts_operations.phone_number,
}
# Set of contact names
self.contact_names = {
self.contact_base.name,
self.contacts_billing.name,
self.contacts_security.name,
self.contacts_operations.name,
}
# Set of contact names
self.contact_names = {
self.contact_base.name,
self.contacts_billing.name,
self.contacts_security.name,
self.contacts_operations.name,
}
# Set of contact emails
self.contact_emails = {
self.contacts_billing.email,
self.contacts_security.email,
self.contacts_operations.email,
}
# Set of contact emails
self.contact_emails = {
self.contacts_billing.email,
self.contacts_security.email,
self.contacts_operations.email,
}
def __get_contact_information__(self):
try:
@@ -53,10 +54,16 @@ class Account(AWSService):
phone_number=primary_account_contact.get("PhoneNumber"),
)
except Exception as error:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return Contact(type="PRIMARY")
if error.response["Error"]["Code"] == "AccessDeniedException":
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return None
else:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return Contact(type="PRIMARY")
def __get_alternate_contact__(self, contact_type: str):
try:
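The new error branch above returns `None` on `AccessDeniedException`, so downstream checks can skip reporting instead of acting on an empty primary contact. A sketch of that branching, with a dict standing in for the `Contact` model (error code passed as a plain string rather than a real `ClientError`):

```python
def handle_contact_error(error_code: str):
    if error_code == "AccessDeniedException":
        return None  # caller treats None as "no data, emit no finding"
    return {"type": "PRIMARY"}  # stand-in for Contact(type="PRIMARY")


print(handle_contact_error("AccessDeniedException"))  # None
```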

View File

@@ -21,7 +21,7 @@
"Terraform": ""
},
"Recommendation": {
"Text": "Monitor certificate expiration and take automated action to renew; replace or remove. Having shorter TTL for any security artifact is a general recommendation; but requires additional automation in place. If not longer required delete certificate. Use AWS config using the managed rule: acm-certificate-expiration-check.",
"Text": "Monitor certificate expiration and take automated action to renew, replace or remove. Having shorter TTL for any security artifact is a general recommendation, but requires additional automation in place. If not longer required delete certificate. Use AWS config using the managed rule: acm-certificate-expiration-check.",
"Url": "https://docs.aws.amazon.com/config/latest/developerguide/acm-certificate-expiration-check.html"
}
},

View File

@@ -19,9 +19,9 @@
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "https://docs.bridgecrew.io/docs/public_6-api-gateway-authorizer-set#cloudformation",
"NativeIaC": "https://docs.prowler.com/checks/aws/public-policies/public_6-api-gateway-authorizer-set#cloudformation",
"Other": "",
"Terraform": "https://docs.bridgecrew.io/docs/public_6-api-gateway-authorizer-set#terraform"
"Terraform": "https://docs.prowler.com/checks/aws/public-policies/public_6-api-gateway-authorizer-set#terraform"
},
"Recommendation": {
"Text": "Implement Amazon Cognito or a Lambda function to control access to your API.",

View File

@@ -21,7 +21,7 @@
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": "https://docs.bridgecrew.io/docs/ensure-api-gateway-stage-have-logging-level-defined-as-appropiate#terraform"
"Terraform": "https://docs.prowler.com/checks/aws/logging-policies/ensure-api-gateway-stage-have-logging-level-defined-as-appropiate#terraform"
},
"Recommendation": {
"Text": "Monitoring is an important part of maintaining the reliability, availability and performance of API Gateway and your AWS solutions. You should collect monitoring data from all of the parts of your AWS solution. CloudTrail provides a record of actions taken by a user, role, or an AWS service in API Gateway. Using the information collected by CloudTrail, you can determine the request that was made to API Gateway, the IP address from which the request was made, who made the request, etc.",

View File

@@ -20,8 +20,8 @@
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://docs.bridgecrew.io/docs/bc_aws_logging_30#aws-console",
"Terraform": "https://docs.bridgecrew.io/docs/bc_aws_logging_30#cloudformation"
"Other": "https://docs.prowler.com/checks/aws/logging-policies/bc_aws_logging_30#aws-console",
"Terraform": "https://docs.prowler.com/checks/aws/logging-policies/bc_aws_logging_30#cloudformation"
},
"Recommendation": {
"Text": "Monitoring is an important part of maintaining the reliability, availability and performance of API Gateway and your AWS solutions. You should collect monitoring data from all of the parts of your AWS solution. CloudTrail provides a record of actions taken by a user, role, or an AWS service in API Gateway. Using the information collected by CloudTrail, you can determine the request that was made to API Gateway, the IP address from which the request was made, who made the request, etc.",

View File

@@ -18,7 +18,7 @@
"CLI": "aws athena update-work-group --region <REGION> --work-group <workgroup_name> --configuration-updates ResultConfigurationUpdates={EncryptionConfiguration={EncryptionOption=SSE_S3|SSE_KMS|CSE_KMS}}",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/Athena/encryption-enabled.html",
"Terraform": "https://docs.bridgecrew.io/docs/ensure-that-athena-workgroup-is-encrypted#terraform"
"Terraform": "https://docs.prowler.com/checks/aws/general-policies/ensure-that-athena-workgroup-is-encrypted#terraform"
},
"Recommendation": {
"Text": "Enable Encryption. Use a CMK where possible. It will provide additional management and privacy benefits.",

View File

@@ -16,9 +16,9 @@
"Remediation": {
"Code": {
"CLI": "aws athena update-work-group --region <REGION> --work-group <workgroup_name> --configuration-updates EnforceWorkGroupConfiguration=True",
"NativeIaC": "https://docs.bridgecrew.io/docs/bc_aws_general_33#cloudformation",
"NativeIaC": "https://docs.prowler.com/checks/aws/general-policies/bc_aws_general_33#cloudformation",
"Other": "",
"Terraform": "https://docs.bridgecrew.io/docs/bc_aws_general_33#terraform"
"Terraform": "https://docs.prowler.com/checks/aws/general-policies/bc_aws_general_33#terraform"
},
"Recommendation": {
"Text": "Ensure that workgroup configuration is enforced so it cannot be overriden by client-side settings.",
@@ -29,4 +29,4 @@
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
}
}

View File

@@ -9,7 +9,7 @@
"Severity": "low",
"ResourceType": "AwsLambdaFunction",
"Description": "Check if Lambda functions invoke API operations are being recorded by CloudTrail.",
"Risk": "If logs are not enabled; monitoring of service use and threat analysis is not possible.",
"Risk": "If logs are not enabled, monitoring of service use and threat analysis is not possible.",
"RelatedUrl": "https://docs.aws.amazon.com/lambda/latest/dg/logging-using-cloudtrail.html",
"Remediation": {
"Code": {

View File

@@ -9,7 +9,7 @@
"Severity": "critical",
"ResourceType": "AwsLambdaFunction",
"Description": "Find secrets in Lambda functions code.",
"Risk": "The use of a hard-coded password increases the possibility of password guessing. If hard-coded passwords are used; it is possible that malicious users gain access through the account in question.",
"Risk": "The use of a hard-coded password increases the possibility of password guessing. If hard-coded passwords are used, it is possible that malicious users gain access through the account in question.",
"RelatedUrl": "https://docs.aws.amazon.com/secretsmanager/latest/userguide/lambda-functions.html",
"Remediation": {
"Code": {

View File

@@ -9,14 +9,14 @@
"Severity": "critical",
"ResourceType": "AwsLambdaFunction",
"Description": "Find secrets in Lambda functions variables.",
"Risk": "The use of a hard-coded password increases the possibility of password guessing. If hard-coded passwords are used; it is possible that malicious users gain access through the account in question.",
"Risk": "The use of a hard-coded password increases the possibility of password guessing. If hard-coded passwords are used, it is possible that malicious users gain access through the account in question.",
"RelatedUrl": "https://docs.aws.amazon.com/secretsmanager/latest/userguide/lambda-functions.html",
"Remediation": {
"Code": {
"CLI": "https://docs.bridgecrew.io/docs/bc_aws_secrets_3#cli-command",
"NativeIaC": "https://docs.bridgecrew.io/docs/bc_aws_secrets_3#cloudformation",
"CLI": "https://docs.prowler.com/checks/aws/secrets-policies/bc_aws_secrets_3#cli-command",
"NativeIaC": "https://docs.prowler.com/checks/aws/secrets-policies/bc_aws_secrets_3#cloudformation",
"Other": "",
"Terraform": "https://docs.bridgecrew.io/docs/bc_aws_secrets_3#terraform"
"Terraform": "https://docs.prowler.com/checks/aws/secrets-policies/bc_aws_secrets_3#terraform"
},
"Recommendation": {
"Text": "Use Secrets Manager to securely provide database credentials to Lambda functions and secure the databases as well as use the credentials to connect and query them without hardcoding the secrets in code or passing them through environmental variables.",

View File

@@ -1,32 +0,0 @@
{
"Provider": "aws",
"CheckID": "awslambda_function_not_directly_publicly_accessible_via_elbv2",
"CheckTitle": "Check if Lambda functions have public application load balancer ahead of them.",
"CheckType": [],
"ServiceName": "lambda",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:lambda:region:account-id:function/function-name",
"Severity": "critical",
"ResourceType": "AwsLambdaFunction",
"Description": "Check if Lambda functions have public application load balancer ahead of them.",
"Risk": "Publicly accessible services could expose sensitive data to bad actors.",
"RelatedUrl": "https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html",
"Remediation": {
"Code": {
"CLI": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/Lambda/function-exposed.html",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "Place security groups around public load balancers",
"Url": "https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html"
}
},
"Categories": [
"internet-exposed"
],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
}

View File

@@ -1,29 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_client
from prowler.providers.aws.services.elbv2.elbv2_client import elbv2_client
class awslambda_function_not_directly_publicly_accessible_via_elbv2(Check):
def execute(self):
findings = []
if awslambda_client.functions:
public_lambda_functions = {}
for target_group in elbv2_client.target_groups:
if target_group.public and target_group.target_type == "lambda":
public_lambda_functions[target_group.target] = target_group.arn
for function in awslambda_client.functions.values():
report = Check_Report_AWS(self.metadata())
report.region = function.region
report.resource_id = function.name
report.resource_arn = function.arn
report.resource_tags = function.tags
report.status = "PASS"
report.status_extended = f"Lambda function {function.name} is not behind an Internet facing Load Balancer."
if function.arn in public_lambda_functions:
report.status = "FAIL"
report.status_extended = f"Lambda function {function.name} is behind an Internet facing Load Balancer through target group {public_lambda_functions[function.arn]}."
findings.append(report)
return findings

View File

@@ -9,7 +9,7 @@
"Severity": "medium",
"ResourceType": "AwsLambdaFunction",
"Description": "Find obsolete Lambda runtimes.",
"Risk": "If you have functions running on a runtime that will be deprecated in the next 60 days; Lambda notifies you by email that you should prepare by migrating your function to a supported runtime. In some cases; such as security issues that require a backwards-incompatible update; or software that does not support a long-term support (LTS) schedule; advance notice might not be possible. After a runtime is deprecated; Lambda might retire it completely at any time by disabling invocation. Deprecated runtimes are not eligible for security updates or technical support.",
"Risk": "If you have functions running on a runtime that will be deprecated in the next 60 days, Lambda notifies you by email that you should prepare by migrating your function to a supported runtime. In some cases, such as security issues that require a backwards-incompatible update, or software that does not support a long-term support (LTS) schedule, advance notice might not be possible. After a runtime is deprecated, Lambda might retire it completely at any time by disabling invocation. Deprecated runtimes are not eligible for security updates or technical support.",
"RelatedUrl": "https://docs.aws.amazon.com/lambda/latest/dg/runtime-support-policy.html",
"Remediation": {
"Code": {

View File

@@ -1,6 +1,7 @@
from datetime import datetime
from typing import Optional
from botocore.client import ClientError
from pydantic import BaseModel
from prowler.lib.logger import logger
@@ -37,6 +38,8 @@ class Backup(AWSService):
self.audit_resources,
)
):
if self.backup_vaults is None:
self.backup_vaults = []
self.backup_vaults.append(
BackupVault(
arn=configuration.get("BackupVaultArn"),
@@ -55,7 +58,13 @@ class Backup(AWSService):
),
)
)
except ClientError as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
if error.response["Error"]["Code"] == "AccessDeniedException":
if not self.backup_vaults:
self.backup_vaults = None
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File

@@ -5,24 +5,24 @@ from prowler.providers.aws.services.backup.backup_client import backup_client
class backup_vaults_encrypted(Check):
def execute(self):
findings = []
for backup_vault in backup_client.backup_vaults:
# By default we assume that the result is fail
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = (
f"Backup Vault {backup_vault.name} is not encrypted."
)
report.resource_arn = backup_vault.arn
report.resource_id = backup_vault.name
report.region = backup_vault.region
# if it is encrypted we only change the status and the status extended
if backup_vault.encryption:
report.status = "PASS"
if backup_client.backup_vaults:
for backup_vault in backup_client.backup_vaults:
# By default we assume that the result is fail
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = (
f"Backup Vault {backup_vault.name} is encrypted."
f"Backup Vault {backup_vault.name} is not encrypted."
)
# then we store the finding
findings.append(report)
report.resource_arn = backup_vault.arn
report.resource_id = backup_vault.name
report.region = backup_vault.region
# if it is encrypted we only change the status and the status extended
if backup_vault.encryption:
report.status = "PASS"
report.status_extended = (
f"Backup Vault {backup_vault.name} is encrypted."
)
# then we store the finding
findings.append(report)
return findings
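The guard added above means a falsy vault list yields no findings, otherwise one PASS/FAIL per vault based on its encryption flag. A condensed sketch, with dicts standing in for the `BackupVault` model:

```python
def vault_encryption_findings(backup_vaults):
    if not backup_vaults:
        return []  # nothing to report when there are no vaults (or listing failed)
    return ["PASS" if vault["encryption"] else "FAIL" for vault in backup_vaults]


print(vault_encryption_findings([{"encryption": True}, {"encryption": False}]))  # ['PASS', 'FAIL']
```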

View File

@@ -5,18 +5,19 @@ from prowler.providers.aws.services.backup.backup_client import backup_client
class backup_vaults_exist(Check):
def execute(self):
findings = []
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = "No Backup Vault exist."
report.resource_arn = backup_client.backup_vault_arn_template
report.resource_id = backup_client.audited_account
report.region = backup_client.region
if backup_client.backup_vaults:
report.status = "PASS"
report.status_extended = f"At least one backup vault exists: {backup_client.backup_vaults[0].name}."
report.resource_arn = backup_client.backup_vaults[0].arn
report.resource_id = backup_client.backup_vaults[0].name
report.region = backup_client.backup_vaults[0].region
if backup_client.backup_vaults is not None:
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = "No Backup Vault exist."
report.resource_arn = backup_client.backup_vault_arn_template
report.resource_id = backup_client.audited_account
report.region = backup_client.region
if backup_client.backup_vaults:
report.status = "PASS"
report.status_extended = f"At least one backup vault exists: {backup_client.backup_vaults[0].name}."
report.resource_arn = backup_client.backup_vaults[0].arn
report.resource_id = backup_client.backup_vaults[0].name
report.region = backup_client.backup_vaults[0].region
findings.append(report)
findings.append(report)
return findings
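The `is not None` guard above distinguishes a `None` vault list (listing failed, e.g. on AccessDenied, so no finding is emitted) from an empty list (no vaults exist, which is a FAIL). A condensed sketch of that three-way outcome:

```python
def vault_exist_findings(backup_vaults):
    if backup_vaults is None:
        return []  # listing failed: stay silent rather than report a false FAIL
    if backup_vaults:
        return ["PASS"]
    return ["FAIL"]


print(vault_exist_findings(None), vault_exist_findings([]), vault_exist_findings([{"name": "v"}]))
# [] ['FAIL'] ['PASS']
```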

View File

@@ -13,7 +13,7 @@
"RelatedUrl": "https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-secretsmanager-secret-generatesecretstring.html",
"Remediation": {
"Code": {
"CLI": "https://docs.bridgecrew.io/docs/bc_aws_secrets_2#cli-command",
"CLI": "https://docs.prowler.com/checks/aws/secrets-policies/bc_aws_secrets_2#cli-command",
"NativeIaC": "",
"Other": "",
"Terraform": ""

View File

@@ -9,7 +9,7 @@
"Severity": "medium",
"ResourceType": "AwsCloudFormationStack",
"Description": "Enable termination protection for Cloudformation Stacks",
"Risk": "Without termination protection enabled; a critical cloudformation stack can be accidently deleted.",
"Risk": "Without termination protection enabled, a critical cloudformation stack can be accidently deleted.",
"RelatedUrl": "https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-protect-stacks.html",
"Remediation": {
"Code": {

View File

@@ -9,7 +9,7 @@
"Severity": "low",
"ResourceType": "AwsCloudFrontDistribution",
"Description": "Check if Geo restrictions are enabled in CloudFront distributions.",
"Risk": "Consider countries where service should not be accessed; by legal or compliance requirements. Additionally if not restricted the attack vector is increased.",
"Risk": "Consider countries where service should not be accessed, by legal or compliance requirements. Additionally if not restricted the attack vector is increased.",
"RelatedUrl": "https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html",
"Remediation": {
"Code": {
@@ -19,7 +19,7 @@
"Terraform": ""
},
"Recommendation": {
"Text": "If possible; define and enable Geo restrictions for this service.",
"Text": "If possible, define and enable Geo restrictions for this service.",
"Url": "https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html"
}
},

View File

@@ -14,9 +14,9 @@
"Remediation": {
"Code": {
"CLI": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/CloudFront/security-policy.html",
"NativeIaC": "https://docs.bridgecrew.io/docs/networking_32#cloudformation",
"NativeIaC": "https://docs.prowler.com/checks/aws/networking-policies/networking_32#cloudformation",
"Other": "",
"Terraform": "https://docs.bridgecrew.io/docs/networking_32#terraform"
"Terraform": "https://docs.prowler.com/checks/aws/networking-policies/networking_32#terraform"
},
"Recommendation": {
"Text": "Use HTTPS everywhere possible. It will enforce privacy and protect against account hijacking and other threats.",

View File

@@ -13,10 +13,10 @@
"RelatedUrl": "https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html",
"Remediation": {
"Code": {
"CLI": "https://docs.bridgecrew.io/docs/logging_20#cli-command",
"NativeIaC": "https://docs.bridgecrew.io/docs/logging_20#cloudformation",
"CLI": "https://docs.prowler.com/checks/aws/logging-policies/logging_20#cli-command",
"NativeIaC": "https://docs.prowler.com/checks/aws/logging-policies/logging_20#cloudformation",
"Other": "",
"Terraform": "https://docs.bridgecrew.io/docs/logging_20#terraform"
"Terraform": "https://docs.prowler.com/checks/aws/logging-policies/logging_20#terraform"
},
"Recommendation": {
"Text": "Real-time monitoring can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Enable logging for services with defined log rotation. These logs are useful for Incident Response and forensics investigation among other use cases.",


@@ -13,9 +13,9 @@
"RelatedUrl": "https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/secure-connections-supported-viewer-protocols-ciphers.html",
"Remediation": {
"Code": {
"CLI": "https://docs.bridgecrew.io/docs/networking_33#cli-command",
"CLI": "https://docs.prowler.com/checks/aws/networking-policies/networking_33#cli-command",
"NativeIaC": "",
"Other": "https://docs.bridgecrew.io/docs/networking_33#aws-cloudfront-console",
"Other": "https://docs.prowler.com/checks/aws/networking-policies/networking_33#aws-cloudfront-console",
"Terraform": ""
},
"Recommendation": {


@@ -11,17 +11,17 @@
"Severity": "medium",
"ResourceType": "AwsCloudFrontDistribution",
"Description": "Check if CloudFront distributions are using WAF.",
"Risk": "Potential attacks and / or abuse of service; more even for even for internet reachable services.",
"Risk": "Potential attacks and / or abuse of service, more even for even for internet reachable services.",
"RelatedUrl": "https://docs.aws.amazon.com/waf/latest/developerguide/cloudfront-features.html",
"Remediation": {
"Code": {
"CLI": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/CloudFront/cloudfront-integrated-with-waf.html",
"NativeIaC": "https://docs.bridgecrew.io/docs/bc_aws_general_27#cloudformation",
"Other": "https://docs.bridgecrew.io/docs/bc_aws_general_27#cloudfront-console",
"Terraform": "https://docs.bridgecrew.io/docs/bc_aws_general_27#terraform"
"NativeIaC": "https://docs.prowler.com/checks/aws/general-policies/bc_aws_general_27#cloudformation",
"Other": "https://docs.prowler.com/checks/aws/general-policies/bc_aws_general_27#cloudfront-console",
"Terraform": "https://docs.prowler.com/checks/aws/general-policies/bc_aws_general_27#terraform"
},
"Recommendation": {
"Text": "Use AWS WAF to protect your service from common web exploits. These could affect availability and performance; compromise security; or consume excessive resources.",
"Text": "Use AWS WAF to protect your service from common web exploits. These could affect availability and performance, compromise security, or consume excessive resources.",
"Url": "https://docs.aws.amazon.com/waf/latest/developerguide/cloudfront-features.html"
}
},


@@ -8,28 +8,29 @@ from prowler.providers.aws.services.s3.s3_client import s3_client
class cloudtrail_bucket_requires_mfa_delete(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
if trail.is_logging:
trail_bucket_is_in_account = False
trail_bucket = trail.s3_bucket
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "FAIL"
report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) does not have MFA delete enabled."
for bucket in s3_client.buckets:
if trail_bucket == bucket.name:
trail_bucket_is_in_account = True
if bucket.mfa_delete:
report.status = "PASS"
report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) has MFA delete enabled."
# check if trail bucket is a cross account bucket
if not trail_bucket_is_in_account:
report.status = "MANUAL"
report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) is a cross-account bucket in another account out of Prowler's permissions scope, please check it manually."
if cloudtrail_client.trails is not None:
for trail in cloudtrail_client.trails.values():
if trail.is_logging:
trail_bucket_is_in_account = False
trail_bucket = trail.s3_bucket
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "FAIL"
report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) does not have MFA delete enabled."
for bucket in s3_client.buckets:
if trail_bucket == bucket.name:
trail_bucket_is_in_account = True
if bucket.mfa_delete:
report.status = "PASS"
report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) has MFA delete enabled."
# check if trail bucket is a cross account bucket
if not trail_bucket_is_in_account:
report.status = "MANUAL"
report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) is a cross-account bucket in another account out of Prowler's permissions scope, please check it manually."
findings.append(report)
findings.append(report)
return findings
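The new `trails is not None` guard matters because the collected mapping can be `None` when the CloudTrail API could not be queried; calling `.values()` on `None` raises `AttributeError`. A minimal stand-alone sketch of the pattern (stand-in dicts, not the real Prowler client):

```python
def collect_findings(trails):
    """Mirror the guard pattern: skip the loop entirely when the
    client could not enumerate trails (trails is None)."""
    findings = []
    if trails is not None:
        for trail in trails.values():
            findings.append(trail["name"])
    return findings

print(collect_findings(None))                        # []
print(collect_findings({"t1": {"name": "trail1"}}))  # ['trail1']
```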


@@ -13,13 +13,13 @@
"Severity": "low",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail trails are integrated with CloudWatch Logs",
"Risk": "Sending CloudTrail logs to CloudWatch Logs will facilitate real-time and historic activity logging based on user; API; resource; and IP address; and provides opportunity to establish alarms and notifications for anomalous or sensitivity account activity.",
"Risk": "Sending CloudTrail logs to CloudWatch Logs will facilitate real-time and historic activity logging based on user, API, resource, and IP address, and provides opportunity to establish alarms and notifications for anomalous or sensitivity account activity.",
"RelatedUrl": "",
"Remediation": {
"Code": {
"CLI": "aws cloudtrail update-trail --name <trail_name> --cloudwatch-logs-log-group- arn <cloudtrail_log_group_arn> --cloudwatch-logs-role-arn <cloudtrail_cloudwatchLogs_role_arn>",
"NativeIaC": "",
"Other": "https://docs.bridgecrew.io/docs/logging_4#aws-console",
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_4#aws-console",
"Terraform": ""
},
"Recommendation": {


@@ -11,37 +11,38 @@ maximum_time_without_logging = 1
class cloudtrail_cloudwatch_logging_enabled(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
if trail.name:
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "PASS"
if trail.is_multiregion:
report.status_extended = (
f"Multiregion trail {trail.name} has been logging the last 24h."
)
else:
report.status_extended = f"Single region trail {trail.name} has been logging the last 24h."
if trail.latest_cloudwatch_delivery_time:
last_log_delivery = (
datetime.now().replace(tzinfo=timezone.utc)
- trail.latest_cloudwatch_delivery_time
)
if last_log_delivery > timedelta(days=maximum_time_without_logging):
if cloudtrail_client.trails is not None:
for trail in cloudtrail_client.trails.values():
if trail.name:
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "PASS"
if trail.is_multiregion:
report.status_extended = f"Multiregion trail {trail.name} has been logging the last 24h."
else:
report.status_extended = f"Single region trail {trail.name} has been logging the last 24h."
if trail.latest_cloudwatch_delivery_time:
last_log_delivery = (
datetime.now().replace(tzinfo=timezone.utc)
- trail.latest_cloudwatch_delivery_time
)
if last_log_delivery > timedelta(
days=maximum_time_without_logging
):
report.status = "FAIL"
if trail.is_multiregion:
report.status_extended = f"Multiregion trail {trail.name} is not logging in the last 24h."
else:
report.status_extended = f"Single region trail {trail.name} is not logging in the last 24h."
else:
report.status = "FAIL"
if trail.is_multiregion:
report.status_extended = f"Multiregion trail {trail.name} is not logging in the last 24h."
report.status_extended = f"Multiregion trail {trail.name} is not logging in the last 24h or not configured to deliver logs."
else:
report.status_extended = f"Single region trail {trail.name} is not logging in the last 24h."
else:
report.status = "FAIL"
if trail.is_multiregion:
report.status_extended = f"Multiregion trail {trail.name} is not logging in the last 24h or not configured to deliver logs."
else:
report.status_extended = f"Single region trail {trail.name} is not logging in the last 24h or not configured to deliver logs."
findings.append(report)
report.status_extended = f"Single region trail {trail.name} is not logging in the last 24h or not configured to deliver logs."
findings.append(report)
return findings
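The reformatted `timedelta` comparison above is the core of the check: a trail fails when its last CloudWatch delivery is older than `maximum_time_without_logging` days, or when it never delivered at all. A minimal sketch (using an aware `datetime.now(timezone.utc)` rather than the check's `replace(tzinfo=...)` idiom):

```python
from datetime import datetime, timedelta, timezone

maximum_time_without_logging = 1  # days, as in the check

def is_stale(latest_cloudwatch_delivery_time):
    # None means the trail never delivered logs to CloudWatch Logs.
    if latest_cloudwatch_delivery_time is None:
        return True
    elapsed = datetime.now(timezone.utc) - latest_cloudwatch_delivery_time
    return elapsed > timedelta(days=maximum_time_without_logging)

print(is_stale(None))                                              # True
print(is_stale(datetime.now(timezone.utc) - timedelta(hours=2)))   # False
print(is_stale(datetime.now(timezone.utc) - timedelta(hours=30)))  # True
```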


@@ -7,19 +7,18 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
class cloudtrail_insights_exist(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
if trail.is_logging:
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "FAIL"
report.status_extended = f"Trail {trail.name} does not have insight selectors and it is logging."
if trail.has_insight_selectors:
report.status = "PASS"
report.status_extended = (
f"Trail {trail.name} has insight selectors and it is logging."
)
findings.append(report)
if cloudtrail_client.trails is not None:
for trail in cloudtrail_client.trails.values():
if trail.is_logging:
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "FAIL"
report.status_extended = f"Trail {trail.name} does not have insight selectors and it is logging."
if trail.has_insight_selectors:
report.status = "PASS"
report.status_extended = f"Trail {trail.name} has insight selectors and it is logging."
findings.append(report)
return findings


@@ -13,12 +13,12 @@
"Severity": "medium",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail logs are encrypted at rest using KMS CMKs",
"Risk": "By default; the log files delivered by CloudTrail to your bucket are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3). To provide a security layer that is directly manageable; you can instead use server-side encryption with AWS KMSmanaged keys (SSE-KMS) for your CloudTrail log files.",
"Risk": "By default, the log files delivered by CloudTrail to your bucket are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3). To provide a security layer that is directly manageable, you can instead use server-side encryption with AWS KMSmanaged keys (SSE-KMS) for your CloudTrail log files.",
"RelatedUrl": "",
"Remediation": {
"Code": {
"CLI": "aws cloudtrail update-trail --name <trail_name> --kms-id <cloudtrail_kms_key> aws kms put-key-policy --key-id <cloudtrail_kms_key> --policy <cloudtrail_kms_key_policy>",
"NativeIaC": "https://docs.bridgecrew.io/docs/logging_7#fix---buildtime",
"NativeIaC": "https://docs.prowler.com/checks/aws/logging-policies/logging_7#fix---buildtime",
"Other": "",
"Terraform": ""
},


@@ -7,32 +7,29 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
class cloudtrail_kms_encryption_enabled(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
if trail.name:
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "FAIL"
if trail.is_multiregion:
report.status_extended = (
f"Multiregion trail {trail.name} has encryption disabled."
)
else:
report.status_extended = (
f"Single region trail {trail.name} has encryption disabled."
)
if trail.kms_key:
report.status = "PASS"
if cloudtrail_client.trails is not None:
for trail in cloudtrail_client.trails.values():
if trail.name:
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "FAIL"
if trail.is_multiregion:
report.status_extended = (
f"Multiregion trail {trail.name} has encryption enabled."
f"Multiregion trail {trail.name} has encryption disabled."
)
else:
report.status_extended = (
f"Single region trail {trail.name} has encryption enabled."
f"Single region trail {trail.name} has encryption disabled."
)
findings.append(report)
if trail.kms_key:
report.status = "PASS"
if trail.is_multiregion:
report.status_extended = f"Multiregion trail {trail.name} has encryption enabled."
else:
report.status_extended = f"Single region trail {trail.name} has encryption enabled."
findings.append(report)
return findings


@@ -18,9 +18,9 @@
"Remediation": {
"Code": {
"CLI": "aws cloudtrail update-trail --name <trail_name> --enable-log-file-validation",
"NativeIaC": "https://docs.bridgecrew.io/docs/logging_2#cloudformation",
"NativeIaC": "https://docs.prowler.com/checks/aws/logging-policies/logging_2#cloudformation",
"Other": "",
"Terraform": "https://docs.bridgecrew.io/docs/logging_2#terraform"
"Terraform": "https://docs.prowler.com/checks/aws/logging-policies/logging_2#terraform"
},
"Recommendation": {
"Text": "Ensure LogFileValidationEnabled is set to true for each trail.",


@@ -7,26 +7,25 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
class cloudtrail_log_file_validation_enabled(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
if trail.name:
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "FAIL"
if trail.is_multiregion:
report.status_extended = (
f"Multiregion trail {trail.name} log file validation disabled."
)
else:
report.status_extended = f"Single region trail {trail.name} log file validation disabled."
if trail.log_file_validation_enabled:
report.status = "PASS"
if cloudtrail_client.trails is not None:
for trail in cloudtrail_client.trails.values():
if trail.name:
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "FAIL"
if trail.is_multiregion:
report.status_extended = f"Multiregion trail {trail.name} log file validation enabled."
report.status_extended = f"Multiregion trail {trail.name} log file validation disabled."
else:
report.status_extended = f"Single region trail {trail.name} log file validation enabled."
findings.append(report)
report.status_extended = f"Single region trail {trail.name} log file validation disabled."
if trail.log_file_validation_enabled:
report.status = "PASS"
if trail.is_multiregion:
report.status_extended = f"Multiregion trail {trail.name} log file validation enabled."
else:
report.status_extended = f"Single region trail {trail.name} log file validation enabled."
findings.append(report)
return findings


@@ -13,17 +13,17 @@
"Severity": "medium",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket",
"Risk": "Server access logs can assist you in security and access audits; help you learn about your customer base; and understand your Amazon S3 bill.",
"Risk": "Server access logs can assist you in security and access audits, help you learn about your customer base, and understand your Amazon S3 bill.",
"RelatedUrl": "",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://docs.bridgecrew.io/docs/logging_6#aws-console",
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_6#aws-console",
"Terraform": ""
},
"Recommendation": {
"Text": "Ensure that S3 buckets have Logging enabled. CloudTrail data events can be used in place of S3 bucket logging. If that is the case; this finding can be considered a false positive.",
"Text": "Ensure that S3 buckets have Logging enabled. CloudTrail data events can be used in place of S3 bucket logging. If that is the case, this finding can be considered a false positive.",
"Url": "https://docs.aws.amazon.com/AmazonS3/latest/dev/security-best-practices.html"
}
},


@@ -8,35 +8,36 @@ from prowler.providers.aws.services.s3.s3_client import s3_client
class cloudtrail_logs_s3_bucket_access_logging_enabled(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
if trail.name:
trail_bucket_is_in_account = False
trail_bucket = trail.s3_bucket
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "FAIL"
if trail.is_multiregion:
report.status_extended = f"Multiregion Trail {trail.name} S3 bucket access logging is not enabled for bucket {trail_bucket}."
else:
report.status_extended = f"Single region Trail {trail.name} S3 bucket access logging is not enabled for bucket {trail_bucket}."
for bucket in s3_client.buckets:
if trail_bucket == bucket.name:
trail_bucket_is_in_account = True
if bucket.logging:
report.status = "PASS"
if trail.is_multiregion:
report.status_extended = f"Multiregion trail {trail.name} S3 bucket access logging is enabled for bucket {trail_bucket}."
else:
report.status_extended = f"Single region trail {trail.name} S3 bucket access logging is enabled for bucket {trail_bucket}."
break
if cloudtrail_client.trails is not None:
for trail in cloudtrail_client.trails.values():
if trail.name:
trail_bucket_is_in_account = False
trail_bucket = trail.s3_bucket
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "FAIL"
if trail.is_multiregion:
report.status_extended = f"Multiregion Trail {trail.name} S3 bucket access logging is not enabled for bucket {trail_bucket}."
else:
report.status_extended = f"Single region Trail {trail.name} S3 bucket access logging is not enabled for bucket {trail_bucket}."
for bucket in s3_client.buckets:
if trail_bucket == bucket.name:
trail_bucket_is_in_account = True
if bucket.logging:
report.status = "PASS"
if trail.is_multiregion:
report.status_extended = f"Multiregion trail {trail.name} S3 bucket access logging is enabled for bucket {trail_bucket}."
else:
report.status_extended = f"Single region trail {trail.name} S3 bucket access logging is enabled for bucket {trail_bucket}."
break
# check if trail is delivering logs in a cross account bucket
if not trail_bucket_is_in_account:
report.status = "MANUAL"
report.status_extended = f"Trail {trail.name} is delivering logs in a cross-account bucket {trail_bucket} in another account out of Prowler's permissions scope, please check it manually."
findings.append(report)
# check if trail is delivering logs in a cross account bucket
if not trail_bucket_is_in_account:
report.status = "MANUAL"
report.status_extended = f"Trail {trail.name} is delivering logs in a cross-account bucket {trail_bucket} in another account out of Prowler's permissions scope, please check it manually."
findings.append(report)
return findings


@@ -19,7 +19,7 @@
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://docs.bridgecrew.io/docs/logging_3#aws-console",
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_3#aws-console",
"Terraform": ""
},
"Recommendation": {


@@ -8,41 +8,42 @@ from prowler.providers.aws.services.s3.s3_client import s3_client
class cloudtrail_logs_s3_bucket_is_not_publicly_accessible(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
if trail.name:
trail_bucket_is_in_account = False
trail_bucket = trail.s3_bucket
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "PASS"
if trail.is_multiregion:
report.status_extended = f"S3 Bucket {trail_bucket} from multiregion trail {trail.name} is not publicly accessible."
else:
report.status_extended = f"S3 Bucket {trail_bucket} from single region trail {trail.name} is not publicly accessible."
for bucket in s3_client.buckets:
                # Here we need to ensure that acl_grantees is filled, since if we don't have permissions to query the API for a concrete region
                # (for example due to an SCP) we would otherwise try to access an attribute of a None type
if trail_bucket == bucket.name:
trail_bucket_is_in_account = True
if bucket.acl_grantees:
for grant in bucket.acl_grantees:
if (
grant.URI
== "http://acs.amazonaws.com/groups/global/AllUsers"
):
report.status = "FAIL"
if trail.is_multiregion:
report.status_extended = f"S3 Bucket {trail_bucket} from multiregion trail {trail.name} is publicly accessible."
else:
report.status_extended = f"S3 Bucket {trail_bucket} from single region trail {trail.name} is publicly accessible."
break
# check if trail bucket is a cross account bucket
if not trail_bucket_is_in_account:
report.status = "MANUAL"
report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) is a cross-account bucket in another account out of Prowler's permissions scope, please check it manually."
findings.append(report)
if cloudtrail_client.trails is not None:
for trail in cloudtrail_client.trails.values():
if trail.name:
trail_bucket_is_in_account = False
trail_bucket = trail.s3_bucket
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "PASS"
if trail.is_multiregion:
report.status_extended = f"S3 Bucket {trail_bucket} from multiregion trail {trail.name} is not publicly accessible."
else:
report.status_extended = f"S3 Bucket {trail_bucket} from single region trail {trail.name} is not publicly accessible."
for bucket in s3_client.buckets:
                    # Here we need to ensure that acl_grantees is filled, since if we don't have permissions to query the API for a concrete region
                    # (for example due to an SCP) we would otherwise try to access an attribute of a None type
if trail_bucket == bucket.name:
trail_bucket_is_in_account = True
if bucket.acl_grantees:
for grant in bucket.acl_grantees:
if (
grant.URI
== "http://acs.amazonaws.com/groups/global/AllUsers"
):
report.status = "FAIL"
if trail.is_multiregion:
report.status_extended = f"S3 Bucket {trail_bucket} from multiregion trail {trail.name} is publicly accessible."
else:
report.status_extended = f"S3 Bucket {trail_bucket} from single region trail {trail.name} is publicly accessible."
break
# check if trail bucket is a cross account bucket
if not trail_bucket_is_in_account:
report.status = "MANUAL"
report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) is a cross-account bucket in another account out of Prowler's permissions scope, please check it manually."
findings.append(report)
return findings
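As the inline comment notes, the public-exposure test scans ACL grantees for the AllUsers group URI and must tolerate an unreadable ACL. Sketched with plain dicts standing in for the grantee objects (the real check reads a `grant.URI` attribute):

```python
ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def bucket_is_public(acl_grantees):
    # acl_grantees may be None or empty when the ACL could not be read
    # (e.g. an SCP blocked the API call in that region).
    if not acl_grantees:
        return False
    return any(grant.get("URI") == ALL_USERS_URI for grant in acl_grantees)

print(bucket_is_public(None))                      # False
print(bucket_is_public([{"URI": ALL_USERS_URI}]))  # True
```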


@@ -13,14 +13,14 @@
"Severity": "high",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail is enabled in all regions",
"Risk": "AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller; the time of the API call; the source IP address of the API caller; the request parameters; and the response elements returned by the AWS service.",
"Risk": "AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service.",
"RelatedUrl": "",
"Remediation": {
"Code": {
"CLI": "aws cloudtrail create-trail --name <trail_name> --bucket-name <s3_bucket_for_cloudtrail> --is-multi-region-trail aws cloudtrail update-trail --name <trail_name> --is-multi-region-trail ",
"NativeIaC": "https://docs.bridgecrew.io/docs/logging_1#cloudformation",
"Other": "https://docs.bridgecrew.io/docs/logging_1#aws-console",
"Terraform": "https://docs.bridgecrew.io/docs/logging_1#terraform"
"NativeIaC": "https://docs.prowler.com/checks/aws/logging-policies/logging_1#cloudformation",
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_1#aws-console",
"Terraform": "https://docs.prowler.com/checks/aws/logging-policies/logging_1#terraform"
},
"Recommendation": {
"Text": "Ensure Logging is set to ON on all regions (even if they are not being used at the moment.",


@@ -7,36 +7,35 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
class cloudtrail_multi_region_enabled(Check):
def execute(self):
findings = []
for region in cloudtrail_client.regional_clients.keys():
report = Check_Report_AWS(self.metadata())
report.region = region
for trail in cloudtrail_client.trails.values():
if trail.region == region or trail.is_multiregion:
if trail.is_logging:
report.status = "PASS"
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
if trail.is_multiregion:
report.status_extended = (
f"Trail {trail.name} is multiregion and it is logging."
)
if cloudtrail_client.trails is not None:
for region in cloudtrail_client.regional_clients.keys():
report = Check_Report_AWS(self.metadata())
report.region = region
for trail in cloudtrail_client.trails.values():
if trail.region == region or trail.is_multiregion:
if trail.is_logging:
report.status = "PASS"
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
if trail.is_multiregion:
report.status_extended = f"Trail {trail.name} is multiregion and it is logging."
else:
report.status_extended = f"Trail {trail.name} is not multiregion and it is logging."
# Since there exists a logging trail in that region there is no point in checking the remaining trails
# Store the finding and exit the loop
findings.append(report)
break
else:
report.status_extended = f"Trail {trail.name} is not multiregion and it is logging."
# Since there exists a logging trail in that region there is no point in checking the remaining trails
# Store the finding and exit the loop
findings.append(report)
break
else:
report.status = "FAIL"
report.status_extended = (
"No CloudTrail trails enabled and logging were found."
)
report.resource_arn = (
cloudtrail_client.__get_trail_arn_template__(region)
)
report.resource_id = cloudtrail_client.audited_account
            # If no trail is logging, the FAIL must be stored once all the trails have been checked
if report.status == "FAIL":
findings.append(report)
report.status = "FAIL"
report.status_extended = (
"No CloudTrail trails enabled and logging were found."
)
report.resource_arn = (
cloudtrail_client.__get_trail_arn_template__(region)
)
report.resource_id = cloudtrail_client.audited_account
                # If no trail is logging, the FAIL must be stored once all the trails have been checked
if report.status == "FAIL":
findings.append(report)
return findings
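The per-region control flow above (break at the first logging trail, otherwise append a single FAIL after all trails were inspected) reduces to a first-match scan; a minimal sketch with stand-in dicts:

```python
def region_status(trails_in_region):
    # PASS at the first trail that is logging; a region with no
    # logging trail (or no trails at all) yields a single FAIL.
    for trail in trails_in_region:
        if trail["is_logging"]:
            return "PASS"
    return "FAIL"

print(region_status([{"is_logging": False}, {"is_logging": True}]))  # PASS
print(region_status([]))                                             # FAIL
```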


@@ -0,0 +1,58 @@
from prowler.lib.logger import logger
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
)
def fixer(region):
"""
NOTE: Define the S3 bucket name in the fixer_config.yaml file.
Enable CloudTrail in a region. Requires the cloudtrail:CreateTrail permission:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "cloudtrail:CreateTrail",
"Resource": "*"
}
]
}
Args:
region (str): AWS region
Returns:
bool: True if CloudTrail is enabled, False otherwise
"""
try:
cloudtrail_fixer_config = cloudtrail_client.fixer_config.get(
"cloudtrail_multi_region_enabled", {}
)
regional_client = cloudtrail_client.regional_clients[region]
args = {
"Name": cloudtrail_fixer_config.get("TrailName", "DefaultTrail"),
"S3BucketName": cloudtrail_fixer_config.get("S3BucketName"),
"IsMultiRegionTrail": cloudtrail_fixer_config.get(
"IsMultiRegionTrail", True
),
"EnableLogFileValidation": cloudtrail_fixer_config.get(
"EnableLogFileValidation", True
),
}
if cloudtrail_fixer_config.get("CloudWatchLogsLogGroupArn"):
args["CloudWatchLogsLogGroupArn"] = cloudtrail_fixer_config.get(
"CloudWatchLogsLogGroupArn"
)
if cloudtrail_fixer_config.get("CloudWatchLogsRoleArn"):
args["CloudWatchLogsRoleArn"] = cloudtrail_fixer_config.get(
"CloudWatchLogsRoleArn"
)
if cloudtrail_fixer_config.get("KmsKeyId"):
args["KmsKeyId"] = cloudtrail_fixer_config.get("KmsKeyId")
regional_client.create_trail(**args)
except Exception as error:
logger.error(
f"{region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return False
else:
return True
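The fixer builds its `create_trail` kwargs from defaults plus optional keys that are only added when configured, so boto3 never receives explicit `None` values for them. A boto3-free sketch of that assembly (the helper name is illustrative; the config keys mirror the ones read above):

```python
def build_create_trail_args(cfg):
    # Defaulted parameters are always present...
    args = {
        "Name": cfg.get("TrailName", "DefaultTrail"),
        "S3BucketName": cfg.get("S3BucketName"),
        "IsMultiRegionTrail": cfg.get("IsMultiRegionTrail", True),
        "EnableLogFileValidation": cfg.get("EnableLogFileValidation", True),
    }
    # ...optional ones only when set in the fixer config.
    for key in ("CloudWatchLogsLogGroupArn", "CloudWatchLogsRoleArn", "KmsKeyId"):
        if cfg.get(key):
            args[key] = cfg[key]
    return args

print(build_create_trail_args({"S3BucketName": "my-trail-bucket"}))
```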


@@ -12,17 +12,17 @@
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure CloudTrail logging management events in All Regions",
"Risk": "AWS CloudTrail enables governance, compliance, operational auditing, and risk auditing of your AWS account. To meet FTR requirements, you must have management events enabled for all AWS accounts and in all regions and aggregate these logs into an Amazon Simple Storage Service (Amazon S3) bucket owned by a separate AWS account.",
"RelatedUrl": "https://docs.bridgecrew.io/docs/logging_14",
"RelatedUrl": "https://docs.prowler.com/checks/aws/logging-policies/logging_14",
"Remediation": {
"Code": {
"CLI": "aws cloudtrail update-trail --name <trail_name> --is-multi-region-trail",
"NativeIaC": "",
"Other": "https://docs.bridgecrew.io/docs/logging_14",
"Terraform": "https://docs.bridgecrew.io/docs/logging_14#terraform"
"Other": "https://docs.prowler.com/checks/aws/logging-policies/logging_14",
"Terraform": "https://docs.prowler.com/checks/aws/logging-policies/logging_14#terraform"
},
"Recommendation": {
"Text": "Enable CloudTrail logging management events in All Regions",
"Url": "https://docs.bridgecrew.io/docs/logging_14"
"Url": "https://docs.prowler.com/checks/aws/logging-policies/logging_14"
}
},
"Categories": [


@@ -7,48 +7,49 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
class cloudtrail_multi_region_enabled_logging_management_events(Check):
def execute(self):
findings = []
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = (
"No trail found with multi-region enabled and logging management events."
)
report.region = cloudtrail_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.trail_arn_template
if cloudtrail_client.trails is not None:
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = "No trail found with multi-region enabled and logging management events."
report.region = cloudtrail_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.trail_arn_template
for trail in cloudtrail_client.trails.values():
if trail.is_logging:
if trail.is_multiregion:
for event in trail.data_events:
# Classic event selectors
if not event.is_advanced:
# Check if trail has IncludeManagementEvents and ReadWriteType is All
if (
event.event_selector["ReadWriteType"] == "All"
and event.event_selector["IncludeManagementEvents"]
):
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "PASS"
report.status_extended = f"Trail {trail.name} from home region {trail.home_region} is multi-region, is logging and have management events enabled."
for trail in cloudtrail_client.trails.values():
if trail.is_logging:
if trail.is_multiregion:
for event in trail.data_events:
# Classic event selectors
if not event.is_advanced:
# Check if trail has IncludeManagementEvents and ReadWriteType is All
if (
event.event_selector["ReadWriteType"] == "All"
and event.event_selector["IncludeManagementEvents"]
):
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "PASS"
report.status_extended = f"Trail {trail.name} from home region {trail.home_region} is multi-region, is logging and has management events enabled."
# Advanced event selectors
elif event.is_advanced:
if event.event_selector.get(
"Name"
) == "Management events selector" and all(
[
field["Field"] != "readOnly"
for field in event.event_selector["FieldSelectors"]
]
):
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "PASS"
report.status_extended = f"Trail {trail.name} from home region {trail.home_region} is multi-region, is logging and has management events enabled."
findings.append(report)
# Advanced event selectors
elif event.is_advanced:
if event.event_selector.get(
"Name"
) == "Management events selector" and all(
[
field["Field"] != "readOnly"
for field in event.event_selector[
"FieldSelectors"
]
]
):
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "PASS"
report.status_extended = f"Trail {trail.name} from home region {trail.home_region} is multi-region, is logging and has management events enabled."
findings.append(report)
return findings
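The change to this check reduces to a small guard pattern: when listing trails failed and the client holds `None`, the check emits no findings at all instead of crashing on `.values()`. A minimal sketch under assumed, simplified data shapes (plain dicts stand in for the real `Check_Report_AWS` objects, which are not reproduced here):

```python
# Hypothetical sketch of the None-guard the diff introduces: trails is None
# when the trail listing itself failed, so the check stays silent.

def evaluate_trails(trails):
    """Return a list of finding dicts, or [] when trails is None."""
    findings = []
    if trails is not None:
        # Start from a failing default finding, as the check above does.
        finding = {
            "status": "FAIL",
            "status_extended": "No trail found with multi-region enabled "
                               "and logging management events.",
        }
        for trail in trails.values():
            if trail.get("is_logging") and trail.get("is_multiregion"):
                finding = {
                    "status": "PASS",
                    "status_extended": f"Trail {trail['name']} is multi-region "
                                       "and logging management events.",
                }
        findings.append(finding)
    return findings

print(evaluate_trails(None))  # [] -- no findings when listing was denied
print(evaluate_trails({"arn": {"name": "t1", "is_logging": True,
                               "is_multiregion": True}})[0]["status"])  # PASS
```

An empty-but-present dict still produces the default FAIL finding; only the `None` sentinel suppresses output entirely.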


@@ -8,23 +8,41 @@ from prowler.providers.aws.services.s3.s3_client import s3_client
class cloudtrail_s3_dataevents_read_enabled(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
for data_event in trail.data_events:
# classic event selectors
if not data_event.is_advanced:
# Check if trail has a data event for all S3 Buckets for read
if (
data_event.event_selector["ReadWriteType"] == "ReadOnly"
or data_event.event_selector["ReadWriteType"] == "All"
):
for resource in data_event.event_selector["DataResources"]:
if "AWS::S3::Object" == resource["Type"] and (
f"arn:{cloudtrail_client.audited_partition}:s3"
in resource["Values"]
or f"arn:{cloudtrail_client.audited_partition}:s3:::"
in resource["Values"]
or f"arn:{cloudtrail_client.audited_partition}:s3:::*/*"
in resource["Values"]
if cloudtrail_client.trails is not None:
for trail in cloudtrail_client.trails.values():
for data_event in trail.data_events:
# classic event selectors
if not data_event.is_advanced:
# Check if trail has a data event for all S3 Buckets for read
if (
data_event.event_selector["ReadWriteType"] == "ReadOnly"
or data_event.event_selector["ReadWriteType"] == "All"
):
for resource in data_event.event_selector["DataResources"]:
if "AWS::S3::Object" == resource["Type"] and (
f"arn:{cloudtrail_client.audited_partition}:s3"
in resource["Values"]
or f"arn:{cloudtrail_client.audited_partition}:s3:::"
in resource["Values"]
or f"arn:{cloudtrail_client.audited_partition}:s3:::*/*"
in resource["Values"]
):
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "PASS"
report.status_extended = f"Trail {trail.name} from home region {trail.home_region} has a classic data event selector to record all S3 object-level API operations."
findings.append(report)
# advanced event selectors
elif data_event.is_advanced:
for field_selector in data_event.event_selector[
"FieldSelectors"
]:
if (
field_selector["Field"] == "resources.type"
and field_selector["Equals"][0] == "AWS::S3::Object"
):
report = Check_Report_AWS(self.metadata())
report.region = trail.region
@@ -32,31 +50,16 @@ class cloudtrail_s3_dataevents_read_enabled(Check):
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "PASS"
report.status_extended = f"Trail {trail.name} from home region {trail.home_region} has a classic data event selector to record all S3 object-level API operations."
report.status_extended = f"Trail {trail.name} from home region {trail.home_region} has an advanced data event selector to record all S3 object-level API operations."
findings.append(report)
# advanced event selectors
elif data_event.is_advanced:
for field_selector in data_event.event_selector["FieldSelectors"]:
if (
field_selector["Field"] == "resources.type"
and field_selector["Equals"][0] == "AWS::S3::Object"
):
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "PASS"
report.status_extended = f"Trail {trail.name} from home region {trail.home_region} has an advanced data event selector to record all S3 object-level API operations."
findings.append(report)
if not findings and (
s3_client.buckets or cloudtrail_client.provider.scan_unused_services
):
report = Check_Report_AWS(self.metadata())
report.region = cloudtrail_client.region
report.resource_arn = cloudtrail_client.trail_arn_template
report.resource_id = cloudtrail_client.audited_account
report.status = "FAIL"
report.status_extended = "No CloudTrail trails have a data event to record all S3 object-level API operations."
findings.append(report)
if not findings and (
s3_client.buckets or cloudtrail_client.provider.scan_unused_services
):
report = Check_Report_AWS(self.metadata())
report.region = cloudtrail_client.region
report.resource_arn = cloudtrail_client.trail_arn_template
report.resource_id = cloudtrail_client.audited_account
report.status = "FAIL"
report.status_extended = "No CloudTrail trails have a data event to record all S3 object-level API operations."
findings.append(report)
return findings


@@ -8,23 +8,41 @@ from prowler.providers.aws.services.s3.s3_client import s3_client
class cloudtrail_s3_dataevents_write_enabled(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
for data_event in trail.data_events:
# Classic event selectors
if not data_event.is_advanced:
# Check if trail has a data event for all S3 Buckets for write
if (
data_event.event_selector["ReadWriteType"] == "All"
or data_event.event_selector["ReadWriteType"] == "WriteOnly"
):
for resource in data_event.event_selector["DataResources"]:
if "AWS::S3::Object" == resource["Type"] and (
f"arn:{cloudtrail_client.audited_partition}:s3"
in resource["Values"]
or f"arn:{cloudtrail_client.audited_partition}:s3:::"
in resource["Values"]
or f"arn:{cloudtrail_client.audited_partition}:s3:::*/*"
in resource["Values"]
if cloudtrail_client.trails is not None:
for trail in cloudtrail_client.trails.values():
for data_event in trail.data_events:
# Classic event selectors
if not data_event.is_advanced:
# Check if trail has a data event for all S3 Buckets for write
if (
data_event.event_selector["ReadWriteType"] == "All"
or data_event.event_selector["ReadWriteType"] == "WriteOnly"
):
for resource in data_event.event_selector["DataResources"]:
if "AWS::S3::Object" == resource["Type"] and (
f"arn:{cloudtrail_client.audited_partition}:s3"
in resource["Values"]
or f"arn:{cloudtrail_client.audited_partition}:s3:::"
in resource["Values"]
or f"arn:{cloudtrail_client.audited_partition}:s3:::*/*"
in resource["Values"]
):
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "PASS"
report.status_extended = f"Trail {trail.name} from home region {trail.home_region} has a classic data event selector to record all S3 object-level API operations."
findings.append(report)
# Advanced event selectors
elif data_event.is_advanced:
for field_selector in data_event.event_selector[
"FieldSelectors"
]:
if (
field_selector["Field"] == "resources.type"
and field_selector["Equals"][0] == "AWS::S3::Object"
):
report = Check_Report_AWS(self.metadata())
report.region = trail.region
@@ -32,31 +50,16 @@ class cloudtrail_s3_dataevents_write_enabled(Check):
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "PASS"
report.status_extended = f"Trail {trail.name} from home region {trail.home_region} has a classic data event selector to record all S3 object-level API operations."
report.status_extended = f"Trail {trail.name} from home region {trail.home_region} has an advanced data event selector to record all S3 object-level API operations."
findings.append(report)
# Advanced event selectors
elif data_event.is_advanced:
for field_selector in data_event.event_selector["FieldSelectors"]:
if (
field_selector["Field"] == "resources.type"
and field_selector["Equals"][0] == "AWS::S3::Object"
):
report = Check_Report_AWS(self.metadata())
report.region = trail.region
report.resource_id = trail.name
report.resource_arn = trail.arn
report.resource_tags = trail.tags
report.status = "PASS"
report.status_extended = f"Trail {trail.name} from home region {trail.home_region} has an advanced data event selector to record all S3 object-level API operations."
findings.append(report)
if not findings and (
s3_client.buckets or cloudtrail_client.provider.scan_unused_services
):
report = Check_Report_AWS(self.metadata())
report.region = cloudtrail_client.region
report.resource_arn = cloudtrail_client.trail_arn_template
report.resource_id = cloudtrail_client.audited_account
report.status = "FAIL"
report.status_extended = "No CloudTrail trails have a data event to record all S3 object-level API operations."
findings.append(report)
if not findings and (
s3_client.buckets or cloudtrail_client.provider.scan_unused_services
):
report = Check_Report_AWS(self.metadata())
report.region = cloudtrail_client.region
report.resource_arn = cloudtrail_client.trail_arn_template
report.resource_id = cloudtrail_client.audited_account
report.status = "FAIL"
report.status_extended = "No CloudTrail trails have a data event to record all S3 object-level API operations."
findings.append(report)
return findings


@@ -17,10 +17,11 @@ class Cloudtrail(AWSService):
self.trail_arn_template = f"arn:{self.audited_partition}:cloudtrail:{self.region}:{self.audited_account}:trail"
self.trails = {}
self.__threading_call__(self.__get_trails__)
self.__get_trail_status__()
self.__get_insight_selectors__()
self.__get_event_selectors__()
self.__list_tags_for_resource__()
if self.trails:
self.__get_trail_status__()
self.__get_insight_selectors__()
self.__get_event_selectors__()
self.__list_tags_for_resource__()
def __get_trail_arn_template__(self, region):
return (
@@ -45,6 +46,8 @@ class Cloudtrail(AWSService):
kms_key_id = trail["KmsKeyId"]
if "CloudWatchLogsLogGroupArn" in trail:
log_group_arn = trail["CloudWatchLogsLogGroupArn"]
if self.trails is None:
self.trails = {}
self.trails[trail["TrailARN"]] = Trail(
name=trail["Name"],
is_multiregion=trail["IsMultiRegionTrail"],
@@ -61,12 +64,24 @@ class Cloudtrail(AWSService):
has_insight_selectors=trail.get("HasInsightSelectors"),
)
if trails_count == 0:
if self.trails is None:
self.trails = {}
self.trails[self.__get_trail_arn_template__(regional_client.region)] = (
Trail(
region=regional_client.region,
)
)
except ClientError as error:
if error.response["Error"]["Code"] == "AccessDeniedException":
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
if not self.trails:
self.trails = None
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"


@@ -5,20 +5,19 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
)
THRESHOLD = cloudtrail_client.audit_config.get(
"threat_detection_enumeration_threshold", 0.1
)
THREAT_DETECTION_MINUTES = cloudtrail_client.audit_config.get(
"threat_detection_enumeration_minutes", 1440
)
ENUMERATION_ACTIONS = cloudtrail_client.audit_config.get(
"threat_detection_enumeration_actions", []
)
class cloudtrail_threat_detection_enumeration(Check):
def execute(self):
findings = []
threshold = cloudtrail_client.audit_config.get(
"threat_detection_enumeration_threshold", 0.1
)
threat_detection_minutes = cloudtrail_client.audit_config.get(
"threat_detection_enumeration_minutes", 1440
)
enumeration_actions = cloudtrail_client.audit_config.get(
"threat_detection_enumeration_actions", []
)
potential_enumeration = {}
found_potential_enumeration = False
multiregion_trail = None
@@ -33,11 +32,11 @@ class cloudtrail_threat_detection_enumeration(Check):
else [multiregion_trail]
)
for trail in trails_to_scan:
for event_name in ENUMERATION_ACTIONS:
for event_name in enumeration_actions:
for event_log in cloudtrail_client.__lookup_events__(
trail=trail,
event_name=event_name,
minutes=THREAT_DETECTION_MINUTES,
minutes=threat_detection_minutes,
):
event_log = json.loads(event_log["CloudTrailEvent"])
if ".amazonaws.com" not in event_log["sourceIPAddress"]:
@@ -47,8 +46,8 @@ class cloudtrail_threat_detection_enumeration(Check):
event_name
)
for source_ip, actions in potential_enumeration.items():
ip_threshold = round(len(actions) / len(ENUMERATION_ACTIONS), 2)
if len(actions) / len(ENUMERATION_ACTIONS) > THRESHOLD:
ip_threshold = round(len(actions) / len(enumeration_actions), 2)
if len(actions) / len(enumeration_actions) > threshold:
found_potential_enumeration = True
report = Check_Report_AWS(self.metadata())
report.region = cloudtrail_client.region


@@ -5,20 +5,20 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
)
THRESHOLD = cloudtrail_client.audit_config.get(
"threat_detection_privilege_escalation_threshold", 0.1
)
THREAT_DETECTION_MINUTES = cloudtrail_client.audit_config.get(
"threat_detection_privilege_escalation_minutes", 1440
)
PRIVILEGE_ESCALATION_ACTIONS = cloudtrail_client.audit_config.get(
"threat_detection_privilege_escalation_actions", []
)
class cloudtrail_threat_detection_privilege_escalation(Check):
def execute(self):
findings = []
threshold = cloudtrail_client.audit_config.get(
"threat_detection_privilege_escalation_threshold", 0.1
)
threat_detection_minutes = cloudtrail_client.audit_config.get(
"threat_detection_privilege_escalation_minutes", 1440
)
privilege_escalation_actions = cloudtrail_client.audit_config.get(
"threat_detection_privilege_escalation_actions", []
)
potential_privilege_escalation = {}
found_potential_privilege_escalation = False
multiregion_trail = None
@@ -33,11 +33,11 @@ class cloudtrail_threat_detection_privilege_escalation(Check):
else [multiregion_trail]
)
for trail in trails_to_scan:
for event_name in PRIVILEGE_ESCALATION_ACTIONS:
for event_name in privilege_escalation_actions:
for event_log in cloudtrail_client.__lookup_events__(
trail=trail,
event_name=event_name,
minutes=THREAT_DETECTION_MINUTES,
minutes=threat_detection_minutes,
):
event_log = json.loads(event_log["CloudTrailEvent"])
if ".amazonaws.com" not in event_log["sourceIPAddress"]:
@@ -52,8 +52,8 @@ class cloudtrail_threat_detection_privilege_escalation(Check):
event_log["sourceIPAddress"]
].add(event_name)
for source_ip, actions in potential_privilege_escalation.items():
ip_threshold = round(len(actions) / len(PRIVILEGE_ESCALATION_ACTIONS), 2)
if len(actions) / len(PRIVILEGE_ESCALATION_ACTIONS) > THRESHOLD:
ip_threshold = round(len(actions) / len(privilege_escalation_actions), 2)
if len(actions) / len(privilege_escalation_actions) > threshold:
found_potential_privilege_escalation = True
report = Check_Report_AWS(self.metadata())
report.region = cloudtrail_client.region


@@ -15,10 +15,10 @@
"RelatedUrl": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html",
"Remediation": {
"Code": {
"CLI": "https://docs.bridgecrew.io/docs/monitoring_11#procedure",
"CLI": "https://docs.prowler.com/checks/aws/monitoring-policies/monitoring_11#procedure",
"NativeIaC": "",
"Other": "",
"Terraform": "https://docs.bridgecrew.io/docs/monitoring_11#fix---buildtime"
"Terraform": "https://docs.prowler.com/checks/aws/monitoring-policies/monitoring_11#fix---buildtime"
},
"Recommendation": {
"Text": "It is recommended that a metric filter and alarm be established for unauthorized requests.",


@@ -15,21 +15,24 @@ class cloudwatch_changes_to_network_acls_alarm_configured(Check):
def execute(self):
pattern = r"\$\.eventName\s*=\s*.?CreateNetworkAcl.+\$\.eventName\s*=\s*.?CreateNetworkAclEntry.+\$\.eventName\s*=\s*.?DeleteNetworkAcl.+\$\.eventName\s*=\s*.?DeleteNetworkAclEntry.+\$\.eventName\s*=\s*.?ReplaceNetworkAclEntry.+\$\.eventName\s*=\s*.?ReplaceNetworkAclAssociation.?"
findings = []
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = (
"No CloudWatch log groups found with metric filters or alarms associated."
)
report.region = logs_client.region
report.resource_id = logs_client.audited_account
report.resource_arn = logs_client.log_group_arn_template
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
if (
cloudtrail_client.trails is not None
and logs_client.metric_filters is not None
and cloudwatch_client.metric_alarms is not None
):
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = "No CloudWatch log groups found with metric filters or alarms associated."
report.region = logs_client.region
report.resource_id = logs_client.audited_account
report.resource_arn = logs_client.log_group_arn_template
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
findings.append(report)
return findings
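The added guard wraps the existing logic so the helper only runs when trails, metric filters, and alarms were all retrievable. The `pattern` string itself is an ordinary regex matched against each CloudWatch metric filter pattern; a toy illustration using a two-event excerpt of the ACL pattern (an assumption shortened for illustration; the real `check_cloudwatch_log_metric_filter` helper does more than a single `re.search`):

```python
import re

# Two-event excerpt of the network-ACL pattern above (shortened for the demo).
pattern = (r"\$\.eventName\s*=\s*.?CreateNetworkAcl.+"
           r"\$\.eventName\s*=\s*.?DeleteNetworkAcl.?")

covering = ('{ ($.eventName = "CreateNetworkAcl") || '
            '($.eventName = "DeleteNetworkAcl") }')
partial = '{ ($.eventName = "CreateNetworkAcl") }'

print(bool(re.search(pattern, covering)))  # True: filter covers both events
print(bool(re.search(pattern, partial)))   # False: DeleteNetworkAcl missing
```

The `.?` fragments absorb optional quoting around each event name, and the `.+` separators let the events appear with arbitrary filter syntax between them.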


@@ -15,10 +15,10 @@
"RelatedUrl": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html",
"Remediation": {
"Code": {
"CLI": "https://docs.bridgecrew.io/docs/monitoring_12#procedure",
"CLI": "https://docs.prowler.com/checks/aws/monitoring-policies/monitoring_12#procedure",
"NativeIaC": "",
"Other": "",
"Terraform": "https://docs.bridgecrew.io/docs/monitoring_12#fix---buildtime"
"Terraform": "https://docs.prowler.com/checks/aws/monitoring-policies/monitoring_12#fix---buildtime"
},
"Recommendation": {
"Text": "It is recommended that a metric filter and alarm be established for unauthorized requests.",


@@ -15,21 +15,24 @@ class cloudwatch_changes_to_network_gateways_alarm_configured(Check):
def execute(self):
pattern = r"\$\.eventName\s*=\s*.?CreateCustomerGateway.+\$\.eventName\s*=\s*.?DeleteCustomerGateway.+\$\.eventName\s*=\s*.?AttachInternetGateway.+\$\.eventName\s*=\s*.?CreateInternetGateway.+\$\.eventName\s*=\s*.?DeleteInternetGateway.+\$\.eventName\s*=\s*.?DetachInternetGateway.?"
findings = []
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = (
"No CloudWatch log groups found with metric filters or alarms associated."
)
report.region = logs_client.region
report.resource_id = logs_client.audited_account
report.resource_arn = logs_client.log_group_arn_template
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
if (
cloudtrail_client.trails is not None
and logs_client.metric_filters is not None
and cloudwatch_client.metric_alarms is not None
):
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = "No CloudWatch log groups found with metric filters or alarms associated."
report.region = logs_client.region
report.resource_id = logs_client.audited_account
report.resource_arn = logs_client.log_group_arn_template
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
findings.append(report)
return findings


@@ -15,10 +15,10 @@
"RelatedUrl": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html",
"Remediation": {
"Code": {
"CLI": "https://docs.bridgecrew.io/docs/monitoring_13#procedure",
"CLI": "https://docs.prowler.com/checks/aws/monitoring-policies/monitoring_13#procedure",
"NativeIaC": "",
"Other": "",
"Terraform": "https://docs.bridgecrew.io/docs/monitoring_13#fix---buildtime"
"Terraform": "https://docs.prowler.com/checks/aws/monitoring-policies/monitoring_13#fix---buildtime"
},
"Recommendation": {
"Text": "If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on the filter pattern provided, which checks for route table changes, and the <cloudtrail_log_group_name> taken from audit step 1. aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name `<route_table_changes_metric>` --metric-transformations metricName=`<route_table_changes_metric>`,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventSource = ec2.amazonaws.com) && (($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable)) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoints> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name `<route_table_changes_alarm>` --metric-name `<route_table_changes_metric>` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn>",


@@ -15,21 +15,24 @@ class cloudwatch_changes_to_network_route_tables_alarm_configured(Check):
def execute(self):
pattern = r"\$\.eventSource\s*=\s*.?ec2.amazonaws.com.+\$\.eventName\s*=\s*.?CreateRoute.+\$\.eventName\s*=\s*.?CreateRouteTable.+\$\.eventName\s*=\s*.?ReplaceRoute.+\$\.eventName\s*=\s*.?ReplaceRouteTableAssociation.+\$\.eventName\s*=\s*.?DeleteRouteTable.+\$\.eventName\s*=\s*.?DeleteRoute.+\$\.eventName\s*=\s*.?DisassociateRouteTable.?"
findings = []
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = (
"No CloudWatch log groups found with metric filters or alarms associated."
)
report.region = logs_client.region
report.resource_id = logs_client.audited_account
report.resource_arn = logs_client.log_group_arn_template
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
if (
cloudtrail_client.trails is not None
and logs_client.metric_filters is not None
and cloudwatch_client.metric_alarms is not None
):
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = "No CloudWatch log groups found with metric filters or alarms associated."
report.region = logs_client.region
report.resource_id = logs_client.audited_account
report.resource_arn = logs_client.log_group_arn_template
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
findings.append(report)
return findings


@@ -15,10 +15,10 @@
"RelatedUrl": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html",
"Remediation": {
"Code": {
"CLI": "https://docs.bridgecrew.io/docs/monitoring_14#procedure",
"CLI": "https://docs.prowler.com/checks/aws/monitoring-policies/monitoring_14#procedure",
"NativeIaC": "",
"Other": "",
"Terraform": "https://docs.bridgecrew.io/docs/monitoring_14#fix---buildtime"
"Terraform": "https://docs.prowler.com/checks/aws/monitoring-policies/monitoring_14#fix---buildtime"
},
"Recommendation": {
"Text": "It is recommended that a metric filter and alarm be established for unauthorized requests.",


@@ -15,21 +15,24 @@ class cloudwatch_changes_to_vpcs_alarm_configured(Check):
def execute(self):
pattern = r"\$\.eventName\s*=\s*.?CreateVpc.+\$\.eventName\s*=\s*.?DeleteVpc.+\$\.eventName\s*=\s*.?ModifyVpcAttribute.+\$\.eventName\s*=\s*.?AcceptVpcPeeringConnection.+\$\.eventName\s*=\s*.?CreateVpcPeeringConnection.+\$\.eventName\s*=\s*.?DeleteVpcPeeringConnection.+\$\.eventName\s*=\s*.?RejectVpcPeeringConnection.+\$\.eventName\s*=\s*.?AttachClassicLinkVpc.+\$\.eventName\s*=\s*.?DetachClassicLinkVpc.+\$\.eventName\s*=\s*.?DisableVpcClassicLink.+\$\.eventName\s*=\s*.?EnableVpcClassicLink.?"
findings = []
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = (
"No CloudWatch log groups found with metric filters or alarms associated."
)
report.region = logs_client.region
report.resource_id = logs_client.audited_account
report.resource_arn = logs_client.log_group_arn_template
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
if (
cloudtrail_client.trails is not None
and logs_client.metric_filters is not None
and cloudwatch_client.metric_alarms is not None
):
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = "No CloudWatch log groups found with metric filters or alarms associated."
report.region = logs_client.region
report.resource_id = logs_client.audited_account
report.resource_arn = logs_client.log_group_arn_template
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
findings.append(report)
return findings


@@ -5,17 +5,20 @@ from prowler.providers.aws.services.iam.iam_client import iam_client
class cloudwatch_cross_account_sharing_disabled(Check):
def execute(self):
findings = []
report = Check_Report_AWS(self.metadata())
report.status = "PASS"
report.status_extended = "CloudWatch doesn't allow cross-account sharing."
report.resource_arn = iam_client.role_arn_template
report.resource_id = iam_client.audited_account
report.region = iam_client.region
for role in iam_client.roles:
if role.name == "CloudWatch-CrossAccountSharingRole":
report.resource_arn = role.arn
report.resource_id = role.name
report.status = "FAIL"
report.status_extended = "CloudWatch has allowed cross-account sharing."
findings.append(report)
if iam_client.roles is not None:
report = Check_Report_AWS(self.metadata())
report.status = "PASS"
report.status_extended = "CloudWatch doesn't allow cross-account sharing."
report.resource_arn = iam_client.role_arn_template
report.resource_id = iam_client.audited_account
report.region = iam_client.region
for role in iam_client.roles:
if role.name == "CloudWatch-CrossAccountSharingRole":
report.resource_arn = role.arn
report.resource_id = role.name
report.status = "FAIL"
report.status_extended = (
"CloudWatch has allowed cross-account sharing."
)
findings.append(report)
return findings

Some files were not shown because too many files have changed in this diff.