Mirror of https://github.com/prowler-cloud/prowler.git (synced 2026-01-25 02:08:11 +00:00)

Compare commits (35 commits)
- 7500da60e2
- 790fff460a
- 9055dbafe3
- 4454d9115e
- 0d74dec446
- 0313dba7b4
- 3fafac75ef
- 6b24b46f3d
- 474e39a4c9
- e652298b6a
- 9340ae43f3
- 552024c53e
- 3aba71ad2f
- ade511df28
- fc650214d4
- 8266fd0c6f
- f4308032c3
- 1e1f445ade
- d41b0332ac
- 7258466572
- 76db92ea14
- ad3cd66e08
- 22f8855ad7
- 36e095c830
- 887cac1264
- 13059e0568
- 9e8023d716
- c54ba5fd8c
- db80e063d4
- b6aa12706a
- c1caf6717d
- 513fd9f532
- bf77f817cb
- e0bfef2ece
- 4a87f908a8
@@ -39,7 +39,7 @@ It contains hundreds of controls covering CIS, NIST 800, NIST CSF, CISA, RBI, Fe

| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.cloud/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.cloud/en/latest/tutorials/misc/#categories) |
|---|---|---|---|---|
| AWS | 285 | 55 -> `prowler aws --list-services` | 25 -> `prowler aws --list-compliance` | 5 -> `prowler aws --list-categories` |
| AWS | 287 | 56 -> `prowler aws --list-services` | 25 -> `prowler aws --list-compliance` | 5 -> `prowler aws --list-categories` |
| GCP | 73 | 11 -> `prowler gcp --list-services` | 1 -> `prowler gcp --list-compliance` | 2 -> `prowler gcp --list-categories` |
| Azure | 23 | 4 -> `prowler azure --list-services` | CIS soon | 1 -> `prowler azure --list-categories` |
| Kubernetes | Planned | - | - | - |
@@ -115,8 +115,8 @@ Make sure you have properly configured your AWS-CLI with a valid Access Key and

Those credentials must be associated with a user or role that has enough permissions to run all checks. To make sure, add the following AWS managed policies to the user or role being used:

- arn:aws:iam::aws:policy/SecurityAudit
- arn:aws:iam::aws:policy/job-function/ViewOnlyAccess
- `arn:aws:iam::aws:policy/SecurityAudit`
- `arn:aws:iam::aws:policy/job-function/ViewOnlyAccess`

> Moreover, several checks need additional read-only permissions, so make sure you also attach the custom policy [prowler-additions-policy.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-additions-policy.json) to the role you are using.

@@ -23,8 +23,8 @@ export AWS_SESSION_TOKEN="XXXXXXXXX"

Those credentials must be associated with a user or role that has enough permissions to run all checks. To make sure, add the following AWS managed policies to the user or role being used:

- arn:aws:iam::aws:policy/SecurityAudit
- arn:aws:iam::aws:policy/job-function/ViewOnlyAccess
- `arn:aws:iam::aws:policy/SecurityAudit`
- `arn:aws:iam::aws:policy/job-function/ViewOnlyAccess`

> Moreover, several checks need additional read-only permissions, so make sure you also attach the custom policy [prowler-additions-policy.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-additions-policy.json) to the role you are using.

@@ -16,8 +16,8 @@ export AWS_SESSION_TOKEN="XXXXXXXXX"

Those credentials must be associated with a user or role that has enough permissions to run all checks. To make sure, add the following AWS managed policies to the user or role being used:

- arn:aws:iam::aws:policy/SecurityAudit
- arn:aws:iam::aws:policy/job-function/ViewOnlyAccess
- `arn:aws:iam::aws:policy/SecurityAudit`
- `arn:aws:iam::aws:policy/job-function/ViewOnlyAccess`

> Moreover, several checks need additional read-only permissions, so make sure you also attach the custom policy [prowler-additions-policy.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-additions-policy.json) to the role you are using.

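For reference, one quick way to attach these two managed policies to an existing role is the AWS CLI; the role name below is only a placeholder for whatever role Prowler will use:

```shell
# Placeholder role name; replace with the role Prowler assumes or uses
ROLE_NAME="prowler-audit-role"

aws iam attach-role-policy --role-name "$ROLE_NAME" \
  --policy-arn arn:aws:iam::aws:policy/SecurityAudit
aws iam attach-role-policy --role-name "$ROLE_NAME" \
  --policy-arn arn:aws:iam::aws:policy/job-function/ViewOnlyAccess
```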
@@ -1,18 +1,19 @@

# AWS Organizations

## Get AWS Account details from your AWS Organization:
## Get AWS Account details from your AWS Organization

Prowler allows you to get additional information about the scanned account in the CSV and JSON outputs. When scanning a single account you get the Account ID as part of the output.

If you have AWS Organizations, Prowler can get your account details such as Account Name, Email, ARN, Organization ID and Tags, and you will have them next to every finding in the CSV and JSON outputs.

- In order to do that you can use the option `-O`/`--organizations-role <organizations_role_arn>`. See the following sample command:
In order to do that you can use the option `-O`/`--organizations-role <organizations_role_arn>`. See the following sample command:

```
prowler aws -O arn:aws:iam::<management_organizations_account_id>:role/<role_name>
```shell
prowler aws \
  -O arn:aws:iam::<management_organizations_account_id>:role/<role_name>
```

> Make sure the role in your AWS Organizations management account has the permissions `organizations:ListAccounts*` and `organizations:ListTagsForResource`.

- In that command Prowler will scan the account and getting the account details from the AWS Organizations management account assuming a role and creating two reports with those details in JSON and CSV.
With that command Prowler will scan the account, get the account details from the AWS Organizations management account by assuming a role, and create two reports (JSON and CSV) with those details.

In the JSON output below (redacted) you can see the tags encoded in base64 to prevent breaking the CSV or JSON format:

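As a side note, those base64-encoded tags can be decoded with any standard tool; the value below is only an illustrative example, not real Prowler output:

```shell
# Illustrative value only; take the real string from the ACCOUNT_DETAILS_TAGS field
echo "ZW52aXJvbm1lbnQ9cHJvZHVjdGlvbg==" | base64 -d
# environment=production
```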
@@ -30,20 +31,28 @@ The additional fields in CSV header output are as follows:

ACCOUNT_DETAILS_EMAIL,ACCOUNT_DETAILS_NAME,ACCOUNT_DETAILS_ARN,ACCOUNT_DETAILS_ORG,ACCOUNT_DETAILS_TAGS
```

## Assume Role and across all accounts in AWS Organizations or just a list of accounts:
## Extra: run Prowler across all accounts in AWS Organizations by assuming roles

If you want to run Prowler across all accounts of AWS Organizations you can do this:

- First get a list of accounts that are not suspended:
1. First get a list of accounts that are not suspended:

```
ACCOUNTS_IN_ORGS=$(aws organizations list-accounts --query Accounts[?Status==`ACTIVE`].Id --output text)
```
```shell
ACCOUNTS_IN_ORGS=$(aws organizations list-accounts \
  --query "Accounts[?Status=='ACTIVE'].Id" \
  --output text \
)
```

- Then run Prowler to assume a role (same in all members) per each account, in this example it is just running one particular check:
2. Then run Prowler to assume a role (same in all members) per each account:

```
for accountId in $ACCOUNTS_IN_ORGS; do prowler aws -O arn:aws:iam::<management_organizations_account_id>:role/<role_name>; done
```
```shell
for accountId in $ACCOUNTS_IN_ORGS;
do
  prowler aws \
    -O arn:aws:iam::<management_organizations_account_id>:role/<role_name> \
    -R arn:aws:iam::"${accountId}":role/<role_name>;
done
```

- Using the same for loop it can be scanned a list of accounts with a variable like `ACCOUNTS_LIST='11111111111 2222222222 333333333'`
> The same for loop can be used to scan a list of accounts by setting a variable like `ACCOUNTS_LIST='11111111111 2222222222 333333333'`

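For this loop to work, the role assumed in every member account (the `-R` role) needs a trust policy that allows the identity running Prowler to call `sts:AssumeRole`. A minimal sketch follows; the trusted principal is an assumption and must be replaced with whatever identity actually runs Prowler in your environment:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account_id_running_prowler>:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```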
@@ -1,11 +1,14 @@

# Configuration File
Several Prowler's checks have user configurable variables that can be modified in a common **configuration file**.
This file can be found in the following path:
Several Prowler checks have user-configurable variables that can be modified in a common **configuration file**. This file can be found in the following [path](https://github.com/prowler-cloud/prowler/blob/master/prowler/config/config.yaml):
```
prowler/config/config.yaml
```

## Configurable Checks
You can also provide a custom configuration file using the `--config-file` argument.

## AWS

### Configurable Checks
The following list includes all the checks with configurable variables that can be changed in the configuration YAML file mentioned above:

1. aws.ec2_elastic_ip_shodan
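For example, a scan that points Prowler at a copied and edited configuration file could look like this (the file name is just an example):

```shell
prowler aws --config-file ./custom_config.yaml
```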
@@ -29,48 +32,73 @@ The following list includes all the checks with configurable variables that can

- aws.awslambda_function_using_supported_runtimes
  - obsolete_lambda_runtimes (List of Strings)

## Config Yaml File
## Azure

# AWS EC2 Configuration
# aws.ec2_elastic_ip_shodan
shodan_api_key: null
# aws.ec2_securitygroup_with_many_ingress_egress_rules --> by default is 50 rules
max_security_group_rules: 50
# aws.ec2_instance_older_than_specific_days --> by default is 6 months (180 days)
max_ec2_instance_age_in_days: 180
## GCP

# AWS VPC Configuration (vpc_endpoint_connections_trust_boundaries, vpc_endpoint_services_allowed_principals_trust_boundaries)
# Single account environment: No action required. The AWS account number will be automatically added by the checks.
# Multi account environment: Any additional trusted account number should be added as a space separated list, e.g.
# trusted_account_ids : ["123456789012", "098765432109", "678901234567"]
trusted_account_ids: []
## Config YAML File Structure
> This is the new Prowler configuration file format. The old one without provider keys is still compatible just for the AWS provider.
```yaml
# AWS Configuration
aws:
# AWS EC2 Configuration
# aws.ec2_elastic_ip_shodan
shodan_api_key: null
# aws.ec2_securitygroup_with_many_ingress_egress_rules --> by default is 50 rules
max_security_group_rules: 50
# aws.ec2_instance_older_than_specific_days --> by default is 6 months (180 days)
max_ec2_instance_age_in_days: 180

# AWS Cloudwatch Configuration
# aws.cloudwatch_log_group_retention_policy_specific_days_enabled --> by default is 365 days
log_group_retention_days: 365
# AWS VPC Configuration (vpc_endpoint_connections_trust_boundaries, vpc_endpoint_services_allowed_principals_trust_boundaries)
# Single account environment: No action required. The AWS account number will be automatically added by the checks.
# Multi account environment: Any additional trusted account number should be added as a space separated list, e.g.
# trusted_account_ids : ["123456789012", "098765432109", "678901234567"]
trusted_account_ids: []

# AWS AppStream Session Configuration
# aws.appstream_fleet_session_idle_disconnect_timeout
max_idle_disconnect_timeout_in_seconds: 600 # 10 Minutes
# aws.appstream_fleet_session_disconnect_timeout
max_disconnect_timeout_in_seconds: 300 # 5 Minutes
# aws.appstream_fleet_maximum_session_duration
max_session_duration_seconds: 36000 # 10 Hours
# AWS Cloudwatch Configuration
# aws.cloudwatch_log_group_retention_policy_specific_days_enabled --> by default is 365 days
log_group_retention_days: 365

# AWS Lambda Configuration
# aws.awslambda_function_using_supported_runtimes
obsolete_lambda_runtimes:
# AWS AppStream Session Configuration
# aws.appstream_fleet_session_idle_disconnect_timeout
max_idle_disconnect_timeout_in_seconds: 600 # 10 Minutes
# aws.appstream_fleet_session_disconnect_timeout
max_disconnect_timeout_in_seconds: 300 # 5 Minutes
# aws.appstream_fleet_maximum_session_duration
max_session_duration_seconds: 36000 # 10 Hours

# AWS Lambda Configuration
# aws.awslambda_function_using_supported_runtimes
obsolete_lambda_runtimes:
  [
    "python3.6",
    "python2.7",
    "nodejs4.3",
    "nodejs4.3-edge",
    "nodejs6.10",
    "nodejs",
    "nodejs8.10",
    "nodejs10.x",
    "dotnetcore1.0",
    "dotnetcore2.0",
    "dotnetcore2.1",
    "ruby2.5",
    "python3.6",
    "python2.7",
    "nodejs4.3",
    "nodejs4.3-edge",
    "nodejs6.10",
    "nodejs",
    "nodejs8.10",
    "nodejs10.x",
    "dotnetcore1.0",
    "dotnetcore2.0",
    "dotnetcore2.1",
    "ruby2.5",
  ]

# AWS Organizations
# organizations_scp_check_deny_regions
# organizations_enabled_regions: [
#   'eu-central-1',
#   'eu-west-1',
#   "us-east-1"
# ]
organizations_enabled_regions: []
organizations_trusted_delegated_administrators: []

# Azure Configuration
azure:

# GCP Configuration
gcp:

```

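As a concrete illustration, a custom configuration file only needs the keys you want to override under the relevant provider; everything else keeps its default. The file name and the value below are just examples:

```yaml
# custom_config.yaml (example): raise the security group rule threshold for AWS
aws:
  max_security_group_rules: 100
```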
@@ -1,281 +0,0 @@

# Developer Guide

You can extend Prowler in many different ways. In most cases you will want to create your own checks and compliance security frameworks; here is where you can learn how to get started with it. We also include how to create custom outputs, integrations and more.

## Get the code and install all dependencies

First of all, you need Python 3.9 or higher and pip installed to be able to install all required dependencies. Once that is satisfied, go ahead and clone the repo:

```
git clone https://github.com/prowler-cloud/prowler
cd prowler
```
For isolation and to avoid conflicts with other environments, we recommend using `poetry`:
```
pip install poetry
```
Then install all dependencies, including the ones for developers:
```
poetry install
poetry shell
```

## Contributing with your code or fixes to Prowler

This repo has git pre-commit hooks managed via the pre-commit tool. Install it however you like, then in the root of this repo run:
```
pre-commit install
```
You should get an output like the following:
```
pre-commit installed at .git/hooks/pre-commit
```

Before we merge any of your pull requests we run checks on the code. We use the following tools and automation to make sure the code is secure and the dependencies are up to date (these should have been already installed if you ran `poetry install` as above):

- `bandit` for code security review.
- `safety` and `dependabot` for dependencies.
- `hadolint` and `dockle` for our container security.
- `snyk` in Docker Hub.
- `clair` in Amazon ECR.
- `vulture`, `flake8`, `black` and `pylint` for formatting and best practices.

You can see all dependencies in the file `Pipfile`.

## Create a new check for a Provider

### If the check you want to create belongs to an existing service

To create a new check, you will need to create a folder inside the specific service, i.e. `prowler/providers/<provider>/services/<service>/<check_name>/`, with the name of the check following the pattern `service_subservice_action`.
Inside that folder, create the following files:

- An empty `__init__.py`: to make Python treat this check folder as a package.
- A `check_name.py` containing the check's logic, for example:
```python
# Import the Check_Report of the specific provider
from prowler.lib.check.models import Check, Check_Report_AWS
# Import the client of the specific service
from prowler.providers.aws.services.ec2.ec2_client import ec2_client


# Create the class for the check
class ec2_ebs_volume_encryption(Check):
    def execute(self):
        findings = []
        # Iterate over the service's assets to be analyzed
        for volume in ec2_client.volumes:
            # Initialize a Check Report for each item and assign the region, resource_id, resource_arn and resource_tags
            report = Check_Report_AWS(self.metadata())
            report.region = volume.region
            report.resource_id = volume.id
            report.resource_arn = volume.arn
            report.resource_tags = volume.tags
            # Apply the check's logic and create a PASS or a FAIL with a status and a status_extended
            if volume.encrypted:
                report.status = "PASS"
                report.status_extended = f"EBS Volume {volume.id} is encrypted."
            else:
                report.status = "FAIL"
                report.status_extended = f"EBS Volume {volume.id} is unencrypted."
            findings.append(report)  # Append a report for each item

        return findings
```
- A `check_name.metadata.json` containing the check's metadata, for example:
```json
{
  "Provider": "aws",
  "CheckID": "ec2_ebs_volume_encryption",
  "CheckTitle": "Ensure there are no EBS Volumes unencrypted.",
  "CheckType": [
    "Data Protection"
  ],
  "ServiceName": "ec2",
  "SubServiceName": "volume",
  "ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
  "Severity": "medium",
  "ResourceType": "AwsEc2Volume",
  "Description": "Ensure there are no EBS Volumes unencrypted.",
  "Risk": "Data encryption at rest prevents data visibility in the event of its unauthorized access or theft.",
  "RelatedUrl": "",
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "",
      "Other": "",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Encrypt all EBS volumes and enable encryption by default. You can configure your AWS account to enforce the encryption of the new EBS volumes and snapshot copies that you create. For example, Amazon EBS encrypts the EBS volumes created when you launch an instance and the snapshots that you copy from an unencrypted snapshot.",
      "Url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html"
    }
  },
  "Categories": [
    "encryption"
  ],
  "DependsOn": [],
  "RelatedTo": [],
  "Notes": ""
}
```

### If the check belongs to a service not yet supported by Prowler, you will need to create the service first

To create a new service, you will need to create a folder inside the specific provider, i.e. `prowler/providers/<provider>/services/<service>/`.
Inside that folder, create the following files:

- An empty `__init__.py`: to make Python treat this service folder as a package.
- A `<service>_service.py`, containing all the service's logic and API calls:
```python
# You must import the following libraries
import threading
from typing import Optional

from pydantic import BaseModel

from prowler.lib.logger import logger
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
from prowler.providers.aws.aws_provider import generate_regional_clients


# Create a class for the Service
################## <Service>
class <Service>:
    def __init__(self, audit_info):
        self.service = "<service>"  # The name of the service boto3 client
        self.session = audit_info.audit_session
        self.audited_account = audit_info.audited_account
        self.audit_resources = audit_info.audit_resources
        self.regional_clients = generate_regional_clients(self.service, audit_info)
        self.<items> = []  # Create an empty list of the items to be gathered, e.g., instances
        self.__threading_call__(self.__describe_<items>__)
        self.__describe_<item>__()  # Optionally you can create another function to retrieve more data about each item

    def __get_session__(self):
        return self.session

    def __threading_call__(self, call):
        threads = []
        for regional_client in self.regional_clients.values():
            threads.append(threading.Thread(target=call, args=(regional_client,)))
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    def __describe_<items>__(self, regional_client):
        """Get ALL <Service> <Items>"""
        logger.info("<Service> - Describing <Items>...")
        try:
            describe_<items>_paginator = regional_client.get_paginator("describe_<items>")  # Paginator to get every item
            for page in describe_<items>_paginator.paginate():
                for <item> in page["<Items>"]:
                    if not self.audit_resources or (
                        is_resource_filtered(<item>["<item_arn>"], self.audit_resources)
                    ):
                        self.<items>.append(
                            <Item>(
                                arn=<item>["<item_arn>"],
                                name=<item>["<item_name>"],
                                tags=<item>.get("Tags", []),
                                region=regional_client.region,
                            )
                        )
        except Exception as error:
            logger.error(
                f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )

    def __describe_<item>__(self):
        """Get Details for a <Service> <Item>"""
        logger.info("<Service> - Describing <Item> to get specific details...")
        try:
            for <item> in self.<items>:
                <item>_details = self.regional_clients[<item>.region].describe_<item>(
                    <Attribute>=<item>.name
                )
                # For example, check if the item is Public
                <item>.public = <item>_details.get("Public", False)

        except Exception as error:
            logger.error(
                f"{<item>.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )


class <Item>(BaseModel):
    """<Item> holds a <Service> <Item>"""

    arn: str
    """<Items>[].Arn"""
    name: str
    """<Items>[].Name"""
    public: bool
    """<Items>[].Public"""
    tags: Optional[list] = []
    region: str

```
- A `<service>_client.py`, containing the initialization of the service's class we have just created so the service's checks can use it:
```python
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
from prowler.providers.aws.services.<service>.<service>_service import <Service>

<service>_client = <Service>(current_audit_info)
```

## Create a new security compliance framework

If you want to contribute your own security frameworks, or add public ones, to Prowler, you first need to make sure the required checks are available; if they are not, you have to create them. Then create a compliance file per provider, like in `prowler/compliance/aws/`, name it `<framework>_<version>_<provider>.json`, and use the following format to create yours.

Each version of a framework file has the following high-level structure so that each framework can be generally identified. A requirement can also be called a control, and one requirement can be linked to multiple Prowler checks:

- `Framework`: string. Distinguishing name of the framework, like CIS.
- `Provider`: string. Provider where the framework applies, such as AWS, Azure, OCI, ...
- `Version`: string. Version of the framework itself, like 1.4 for CIS.
- `Requirements`: array of objects. Includes all requirements or controls with the mapping to Prowler.
    - `Requirements_Id`: string. Unique identifier per requirement in the specific framework.
    - `Requirements_Description`: string. Description as in the framework.
    - `Requirements_Attributes`: array of objects. Includes all needed attributes per requirement, like levels, sections, etc.: whatever helps to create a dedicated report with the result of the findings. Attributes should be taken as closely as possible from the framework's own terminology.
    - `Requirements_Checks`: array. Prowler checks that are needed to prove this requirement. It can be one or multiple checks. If no automation is possible, this can be empty.

```json
{
  "Framework": "<framework>-<provider>",
  "Version": "<version>",
  "Requirements": [
    {
      "Id": "<unique-id>",
      "Description": "Requirement full description",
      "Checks": [
        "Here is the prowler check or checks that are going to be executed"
      ],
      "Attributes": [
        {
          <Add here your custom attributes.>
        }
      ]
    },
    ...
  ]
}
```

Finally, to have a proper output file for your reports, your framework data model has to be created in `prowler/lib/outputs/models.py` and also the CLI table output in `prowler/lib/outputs/compliance.py`.


## Create a custom output format

## Create a new integration

## Contribute with documentation

We use `mkdocs` to build this Prowler documentation site so you can easily contribute back with new docs or improvements to existing ones.

1. Install `mkdocs` with your favorite package manager.
2. Inside the `prowler` repository folder run `mkdocs serve` and point your browser to `http://localhost:8000`; you will see live changes to your local copy of this documentation site.
3. Make all needed changes to the docs or add new documents. To do so, just edit the existing md files inside `prowler/docs`, and if you are adding a new section or file, please make sure you add it to the `mkdocs.yaml` file in the root folder of the Prowler repo.
4. Once you are done with changes, please send a pull request to us for review and merge. Thank you in advance!

## Want some swag as appreciation for your contribution?

If you are like us and you love swag, we are happy to thank you for your contribution with some laptop stickers or whatever other swag we may have at that time. Please tell us more details and your pull request link in our [Slack workspace here](https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog). You can also reach out to Toni de la Fuente on Twitter [here](https://twitter.com/ToniBlyx); his DMs are open.
docs/tutorials/developer-guide/audit-info.md (new file, 9 lines)
@@ -0,0 +1,9 @@

# Audit Info

In each Prowler provider we have a Python object called `audit_info` which is in charge of holding the credentials, the configuration and the state of each audit; it is passed to each service during the `__init__`.

- AWS: https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/lib/audit_info/models.py#L34-L54
- GCP: https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/lib/audit_info/models.py#L7-L30
- Azure: https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/azure/lib/audit_info/models.py#L17-L31

This `audit_info` object is shared during the Prowler execution, and for that reason it is important to mock it in each test to keep tests isolated. See the [testing guide](./unit-testing.md) for more information.
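Because the service clients are built from this shared object at import time, every check transitively depends on it. A minimal illustration of the pattern, mirroring the service client files shown later in this guide (the AWS EC2 service is just an example):

```python
# The module-level client captures the shared audit_info when it is imported,
# which is why tests must mock audit_info (or the client) per test.
from prowler.providers.aws.lib.audit_info.audit_info import audit_info
from prowler.providers.aws.services.ec2.ec2_service import EC2

ec2_client = EC2(audit_info)
```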
docs/tutorials/developer-guide/checks.md (new file, 276 lines)
@@ -0,0 +1,276 @@

# Create a new Check for a Provider

Here you can find how to create new checks for Prowler.

**Creating a check requires a Prowler provider service to exist already, so if the service is not present, or the attribute you want to audit is not retrieved by the service, please refer to the [Service](./service.md) documentation.**

## Introduction
To create a new check for a supported Prowler provider, you will need to create a folder with the check name inside the specific service for the selected provider.

We are going to use the `ec2_ami_public` check from the `AWS` provider as an example, so the folder name will be `prowler/providers/aws/services/ec2/ec2_ami_public` (following the format `prowler/providers/<provider>/services/<service>/<check_name>`), with the name of the check following the pattern `service_subservice/resource_action`.

Inside that folder, we need to create three files:

- An empty `__init__.py`: to make Python treat this check folder as a package.
- A `check_name.py` with the above format containing the check's logic; refer to the [check](./checks.md#check) section.
- A `check_name.metadata.json` containing the check's metadata; refer to the [check metadata](./checks.md#check-metadata) section.

## Check

Prowler's check structure is very simple: if you follow it, there is nothing more to do to include a check in a provider's service, because checks are loaded dynamically based on their paths.

The following is the code for the `ec2_ami_public` check:
```python
# At the top of the file we need to import the following:
# - The Check class, which is in charge of the following:
#   - Retrieve the check metadata and expose the `metadata()` function
#     to return a JSON representation of the metadata,
#     read more at the Check Metadata Model down below.
#   - Enforce that each check implements the `execute()` function
from prowler.lib.check.models import Check, Check_Report_AWS

# Then you have to import the provider service client,
# read more at the Service documentation.
from prowler.providers.aws.services.ec2.ec2_client import ec2_client


# For each check we need to create a Python class named the same as the
# file, which inherits from the Check class.
class ec2_ami_public(Check):
    """ec2_ami_public verifies if an EC2 AMI is publicly shared"""

    # Then, within the check's class we need to create the "execute(self)"
    # function, which is enforced by the "Check" class to implement
    # the Check's interface and let Prowler run this check.
    def execute(self):

        # Inside the execute(self) function we need to create
        # the list of findings initialised to an empty list []
        findings = []

        # Then, using the service client we need to iterate over the resources we
        # want to check, in this case EC2 AMIs stored in the
        # "ec2_client.images" object.
        for image in ec2_client.images:

            # While iterating over the images, we have to initialise
            # the Check_Report_AWS class passing the check's metadata
            # using the "metadata" function explained above.
            report = Check_Report_AWS(self.metadata())

            # For each Prowler check we MUST fill the following
            # Check_Report_AWS fields:
            # - region
            # - resource_id
            # - resource_arn
            # - resource_tags
            # - status
            # - status_extended
            report.region = image.region
            report.resource_id = image.id
            report.resource_arn = image.arn
            # The resource_tags should be filled if the resource has the ability
            # to have tags, please check the service first.
            report.resource_tags = image.tags

            # Then we need to create the business logic for the check,
            # which should always be simple because the Prowler service
            # must do the heavy lifting and the check should be in charge
            # of parsing the data provided
            report.status = "PASS"
            report.status_extended = f"EC2 AMI {image.id} is not public."

            # In this example each "image" object has a boolean attribute
            # called "public" to set if the AMI is publicly shared
            if image.public:
                report.status = "FAIL"
                report.status_extended = (
                    f"EC2 AMI {image.id} is currently public."
                )

            # Then at the same level as the "report"
            # object we need to append it to the findings list.
            findings.append(report)

        # The last thing to do is to return the findings list to Prowler
        return findings
```

### Check Status

All the checks MUST fill the `report.status` and `report.status_extended` with the following criteria:

- Status -- `report.status`
    - `PASS` --> If the check is passing against the configured value.
    - `FAIL` --> If the check is not passing against the configured value.
    - `INFO` --> This value cannot be used unless a manual operation is required in order to determine whether the `report.status` is `PASS` or `FAIL`.
- Status Extended -- `report.status_extended`
    - MUST end in a dot `.`
    - MUST include the service audited with the resource and a brief explanation of the result generated, e.g.: `EC2 AMI ami-0123456789 is not public.`

### Resource ID, Name and ARN
All the checks must fill the `report.resource_id` and `report.resource_arn` with the following criteria:

- AWS
    - Resource ID -- `report.resource_id`
        - AWS Account --> Account Number `123456789012`
        - AWS Resource --> Resource ID / Name
        - Root resource --> `<root_account>`
    - Resource ARN -- `report.resource_arn`
        - AWS Account --> Root ARN `arn:aws:iam::123456789012:root`
        - AWS Resource --> Resource ARN
        - Root resource --> Root ARN `arn:aws:iam::123456789012:root`
- GCP
    - Resource ID -- `report.resource_id`
        - GCP Resource --> Resource ID
    - Resource Name -- `report.resource_name`
        - GCP Resource --> Resource Name
- Azure
    - Resource ID -- `report.resource_id`
        - Azure Resource --> Resource ID
    - Resource Name -- `report.resource_name`
        - Azure Resource --> Resource Name

### Python Model
The following is the Python model for the check's class.

As of August 5th, 2023, the `Check` class can be found [here](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py#L59-L80).

```python
class Check(ABC, Check_Metadata_Model):
    """Prowler Check"""

    def __init__(self, **data):
        """Check's init function. Calls the CheckMetadataModel init."""
        # Parse the Check's metadata file
        metadata_file = (
            os.path.abspath(sys.modules[self.__module__].__file__)[:-3]
            + ".metadata.json"
        )
        # Store it to validate it with Pydantic
        data = Check_Metadata_Model.parse_file(metadata_file).dict()
        # Call the parent's init function
        super().__init__(**data)

    def metadata(self) -> dict:
        """Return the JSON representation of the check's metadata"""
        return self.json()

    @abstractmethod
    def execute(self):
        """Execute the check's logic"""
```

## Check Metadata

Each Prowler check has associated metadata, which is stored at the same level as the check's folder in a file called `check_name.metadata.json`.

> We include comments in this example metadata JSON for explanation, but they cannot appear in a real file because the JSON format does not allow comments.

```json
{
  # Provider holds the Prowler provider which the check belongs to
  "Provider": "aws",
  # CheckID holds the check name
  "CheckID": "ec2_ami_public",
  # CheckTitle holds the title of the check
  "CheckTitle": "Ensure there are no EC2 AMIs set as Public.",
  # CheckType holds Software and Configuration Checks, check more here
  # https://docs.aws.amazon.com/securityhub/latest/userguide/asff-required-attributes.html#Types
  "CheckType": [
    "Infrastructure Security"
  ],
  # ServiceName holds the provider service name
  "ServiceName": "ec2",
  # SubServiceName holds the service's subservice or resource used by the check
  "SubServiceName": "ami",
  # ResourceIdTemplate holds the unique ID for the resource used by the check
  "ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
  # Severity holds the check's severity, always in lowercase (critical, high, medium, low or informational)
  "Severity": "critical",
  # ResourceType, only for AWS, holds the type from here
  # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html
  "ResourceType": "Other",
  # Description holds the title of the check, for now it is the same as CheckTitle
  "Description": "Ensure there are no EC2 AMIs set as Public.",
  # Risk holds the check's risk if the result is FAIL
  "Risk": "When your AMIs are publicly accessible, they are available in the Community AMIs where everyone with an AWS account can use them to launch EC2 instances. Your AMIs could contain snapshots of your applications (including their data), therefore exposing your snapshots in this manner is not advised.",
  # RelatedUrl holds a URL with more information about the check's purpose
  "RelatedUrl": "",
  # Remediation holds the information to help the practitioner fix the issue in case the check raises a FAIL
  "Remediation": {
    # Code holds different methods to remediate the FAIL finding
    "Code": {
      # CLI holds the command in the provider's native CLI to remediate it
      "CLI": "https://docs.bridgecrew.io/docs/public_8#cli-command",
      # NativeIaC holds the native IaC code to remediate it, use "https://docs.bridgecrew.io/docs"
      "NativeIaC": "",
      # Other holds the other commands, scripts or code to remediate it, use "https://www.trendmicro.com/cloudoneconformity"
      "Other": "https://docs.bridgecrew.io/docs/public_8#aws-console",
      # Terraform holds the Terraform code to remediate it, use "https://docs.bridgecrew.io/docs"
      "Terraform": ""
    },
    # Recommendation holds the recommendation for this check with a description and a related URL
    "Recommendation": {
      "Text": "We recommend your EC2 AMIs are not publicly accessible, or generally available in the Community AMIs.",
      "Url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/cancel-sharing-an-AMI.html"
    }
  },
  # Categories holds the category or categories where the check can be included, if applicable
  "Categories": [
    "internet-exposed"
  ],
  # DependsOn is not actively used for the moment but it will hold other
  # checks which this check depends on
  "DependsOn": [],
  # RelatedTo is not actively used for the moment but it will hold other
  # checks which this check is related to
  "RelatedTo": [],
  # Notes holds additional information not covered in this file
  "Notes": ""
}
```

### Remediation Code

For the Remediation Code we use the following knowledge bases to fill it in:

- Official documentation for the provider
- https://docs.bridgecrew.io
- https://www.trendmicro.com/cloudoneconformity
- https://github.com/cloudmatos/matos/tree/master/remediations

### RelatedURL and Recommendation

The RelatedURL field must be filled with a URL from the provider's official documentation, like https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-intro.html

If it is not present, you can also use the Risk and Recommendation texts from the TrendMicro [CloudConformity](https://www.trendmicro.com/cloudoneconformity) guide.


### Python Model
The following is the Python model for the check's metadata. We use Pydantic's [BaseModel](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel) as the parent class.

As of August 5th, 2023, the `Check_Metadata_Model` can be found [here](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py#L34-L56).
```python
class Check_Metadata_Model(BaseModel):
    """Check Metadata Model"""

    Provider: str
    CheckID: str
    CheckTitle: str
    CheckType: list[str]
    ServiceName: str
    SubServiceName: str
    ResourceIdTemplate: str
    Severity: str
    ResourceType: str
    Description: str
    Risk: str
    RelatedUrl: str
    Remediation: Remediation
    Categories: list[str]
    DependsOn: list[str]
    RelatedTo: list[str]
    Notes: str
    # We set the compliance to None to
    # store the compliance later if supplied
    Compliance: list = None
```
docs/tutorials/developer-guide/developer-guide.md (new file, 47 lines)
@@ -0,0 +1,47 @@

# Developer Guide

You can extend Prowler in many different ways. In most cases you will want to create your own checks and compliance security frameworks; here is where you can learn how to get started with it. We also include how to create custom outputs, integrations and more.

## Get the code and install all dependencies

First of all, you need Python 3.9 or higher and pip installed to be able to install all required dependencies. Once that is satisfied, go ahead and clone the repo:

```
git clone https://github.com/prowler-cloud/prowler
cd prowler
```
For isolation and to avoid conflicts with other environments, we recommend using `poetry`:
```
pip install poetry
```
Then install all dependencies, including the ones for developers:
```
poetry install
poetry shell
```

## Contributing with your code or fixes to Prowler

This repo has git pre-commit hooks managed via the [pre-commit](https://pre-commit.com/) tool. [Install](https://pre-commit.com/#install) it however you like, then in the root of this repo run:
```shell
pre-commit install
```
You should get an output like the following:
```shell
pre-commit installed at .git/hooks/pre-commit
```

Before we merge any of your pull requests we run checks on the code. We use the following tools and automation to make sure the code is secure and the dependencies are up to date (these should have been already installed if you ran `poetry install` as above):

- [`bandit`](https://pypi.org/project/bandit/) for code security review.
- [`safety`](https://pypi.org/project/safety/) and [`dependabot`](https://github.com/features/security) for dependencies.
- [`hadolint`](https://github.com/hadolint/hadolint) and [`dockle`](https://github.com/goodwithtech/dockle) for our container security.
- [`Snyk`](https://docs.snyk.io/integrations/snyk-container-integrations/container-security-with-docker-hub-integration) in Docker Hub.
- [`clair`](https://github.com/quay/clair) in Amazon ECR.
- [`vulture`](https://pypi.org/project/vulture/), [`flake8`](https://pypi.org/project/flake8/), [`black`](https://pypi.org/project/black/) and [`pylint`](https://pypi.org/project/pylint/) for formatting and best practices.

You can see all dependencies in the file `pyproject.toml`.

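Once the hooks and dev dependencies are installed, a typical local pass before opening a pull request could look like the following; this is only a suggested sequence, run inside the `poetry shell` environment:

```shell
# Format and lint the code base
black .
flake8 .
# Run the unit tests
pytest tests
```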
## Want some swag as appreciation for your contribution?

If you are like us and you love swag, we are happy to thank you for your contribution with some laptop stickers or whatever other swag we may have at that time. Please tell us more details and your pull request link in our [Slack workspace here](https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog). You can also reach out to Toni de la Fuente on Twitter [here](https://twitter.com/ToniBlyx); his DMs are open.
docs/tutorials/developer-guide/documentation.md (new file, 8 lines)
@@ -0,0 +1,8 @@

## Contribute with documentation

We use `mkdocs` to build this Prowler documentation site so you can easily contribute back with new docs or improvements to existing ones.

1. Install `mkdocs` with your favorite package manager.
2. Inside the `prowler` repository folder run `mkdocs serve` and point your browser to `http://localhost:8000`; you will see live changes to your local copy of this documentation site.
3. Make all needed changes to the docs or add new documents. To do so, just edit the existing md files inside `prowler/docs`, and if you are adding a new section or file, please make sure you add it to the `mkdocs.yaml` file in the root folder of the Prowler repo.
4. Once you are done with changes, please send a pull request to us for review and merge. Thank you in advance!
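For example, one way to get a local preview running, assuming `pip` and that any theme or plugins referenced by `mkdocs.yaml` (such as mkdocs-material) are installed as well:

```shell
pip install mkdocs mkdocs-material
cd prowler
mkdocs serve
# then open http://localhost:8000
```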
docs/tutorials/developer-guide/integration-testing.md (new file, 3 lines)
@@ -0,0 +1,3 @@

# Integration Tests

Coming soon ...

docs/tutorials/developer-guide/integrations.md (new file, 3 lines)
@@ -0,0 +1,3 @@

# Create a new integration

Coming soon ...

docs/tutorials/developer-guide/outputs.md (new file, 3 lines)
@@ -0,0 +1,3 @@

# Create a custom output format

Coming soon ...
@@ -0,0 +1,41 @@

# Create a new security compliance framework


## Introduction
If you want to contribute your own security frameworks, or add public ones, to Prowler, you first need to make sure the required checks are available; if they are not, you have to create them. Then create a compliance file per provider, like in `prowler/compliance/<provider>/`, name it `<framework>_<version>_<provider>.json`, and use the following format to create yours.

## Compliance Framework
Each version of a framework file has the following high-level structure so that each framework can be generally identified. A requirement can also be called a control, and one requirement can be linked to multiple Prowler checks:

- `Framework`: string. Distinguishing name of the framework, like CIS.
- `Provider`: string. Provider where the framework applies, such as AWS, Azure, OCI, ...
- `Version`: string. Version of the framework itself, like 1.4 for CIS.
- `Requirements`: array of objects. Includes all requirements or controls with the mapping to Prowler.
    - `Requirements_Id`: string. Unique identifier per requirement in the specific framework.
    - `Requirements_Description`: string. Description as in the framework.
    - `Requirements_Attributes`: array of objects. Includes all needed attributes per requirement, like levels, sections, etc.: whatever helps to create a dedicated report with the result of the findings. Attributes should be taken as closely as possible from the framework's own terminology.
    - `Requirements_Checks`: array. Prowler checks that are needed to prove this requirement. It can be one or multiple checks. If no automation is possible, this can be empty.

```json
{
  "Framework": "<framework>-<provider>",
  "Version": "<version>",
  "Requirements": [
    {
      "Id": "<unique-id>",
      "Description": "Requirement full description",
      "Checks": [
        "Here is the prowler check or checks that are going to be executed"
      ],
      "Attributes": [
        {
          <Add here your custom attributes.>
        }
      ]
    },
    ...
  ]
}
```

Finally, to have a proper output file for your reports, your framework data model has to be created in `prowler/lib/outputs/models.py` and also the CLI table output in `prowler/lib/outputs/compliance.py`.
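To give an idea of what the last step involves, the output data model mirrors the JSON structure above. The sketch below is only schematic; the real models live in `prowler/lib/outputs/models.py` and follow Prowler's own naming and fields:

```python
from pydantic import BaseModel

# Schematic only: field names mirror the compliance JSON structure above,
# not necessarily the exact classes in prowler/lib/outputs/models.py.
class FrameworkRequirement(BaseModel):
    Id: str
    Description: str
    Checks: list[str]
    Attributes: list[dict]

class ComplianceFramework(BaseModel):
    Framework: str
    Version: str
    Requirements: list[FrameworkRequirement]
```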
docs/tutorials/developer-guide/service.md (new file, 230 lines)
@@ -0,0 +1,230 @@

# Create a new Provider Service

Here you can find how to create a new service, or how to complement an existing one, for a Prowler provider.

## Introduction

To create a new service, you will need to create a folder inside the specific provider, i.e. `prowler/providers/<provider>/services/<service>/`.

Inside that folder, you MUST create the following files:

- An empty `__init__.py`: to make Python treat this service folder as a package.
- A `<service>_service.py`, containing all the service's logic and API calls.
- A `<service>_client.py`, containing the initialization of the service's class we have just created so the service's checks can use it.

## Service

Prowler's service structure is described below; the way to initialise a service is simply to import its service client in a check.

## Service Base Class

All the Prowler provider services inherit from a base class that depends on the provider used.

- [AWS Service Base Class](https://github.com/prowler-cloud/prowler/blob/22f8855ad7dad2e976dabff78611b643e234beaf/prowler/providers/aws/lib/service/service.py)
- [GCP Service Base Class](https://github.com/prowler-cloud/prowler/blob/22f8855ad7dad2e976dabff78611b643e234beaf/prowler/providers/gcp/lib/service/service.py)
- [Azure Service Base Class](https://github.com/prowler-cloud/prowler/blob/22f8855ad7dad2e976dabff78611b643e234beaf/prowler/providers/azure/lib/service/service.py)

Each class is used to initialize the credentials and the API clients to be used in the service. If any threading is used, it must be coded there.

## Service Class

Due to the complexity and differences of each provider API, we are going to use an example service to guide you through how it can be created.

The following is the `<service>_service.py` file:

```python
|
||||
from datetime import datetime
|
||||
from typing import Optional
|
||||
|
||||
# The following is just for the AWS provider
|
||||
from botocore.client import ClientError
|
||||
|
||||
# To use the Pydantic's BaseModel
|
||||
from pydantic import BaseModel
|
||||
|
||||
# Prowler logging library
|
||||
from prowler.lib.logger import logger
|
||||
|
||||
# Prowler resource filter, only for the AWS provider
|
||||
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
|
||||
|
||||
# Provider parent class
|
||||
from prowler.providers.<provider>.lib.service.service import ServiceParentClass
|
||||
|
||||
|
||||
# Create a class for the Service
|
||||
################## <Service>
|
||||
class <Service>(ServiceParentClass):
|
||||
def __init__(self, audit_info):
|
||||
# Call Service Parent Class __init__
|
||||
# We use the __class__.__name__ to get it automatically
|
||||
# from the Service Class name but you can pass a custom
|
||||
# string if the provider's API service name is different
|
||||
super().__init__(__class__.__name__, audit_info)
|
||||
|
||||
# Create an empty dictionary of items to be gathered,
|
||||
# using the unique ID as the dictionary key
|
||||
# e.g., instances
|
||||
self.<items> = {}
|
||||
|
||||
# If you can parallelize by regions or locations
|
||||
# you can use the __threading_call__ function
|
||||
# available in the Service Parent Class
|
||||
self.__threading_call__(self.__describe_<items>__)
|
||||
|
||||
# Optionally you can create another function to retrieve
|
||||
# more data about each item without parallel
|
||||
self.__describe_<item>__()
|
||||
|
||||
def __describe_<items>__(self, regional_client):
|
||||
"""Get ALL <Service> <Items>"""
|
||||
logger.info("<Service> - Describing <Items>...")
|
||||
|
||||
# We MUST include a try/except block in each function
|
||||
try:
|
||||
|
||||
# Call to the provider API to retrieve the data we want
|
||||
describe_<items>_paginator = regional_client.get_paginator("describe_<items>")
|
||||
|
||||
# Paginator to get every item
|
||||
for page in describe_<items>_paginator.paginate():
|
||||
|
||||
# Another try/except within the loop for to continue looping
|
||||
# if something unexpected happens
|
||||
try:
|
||||
|
||||
for <item> in page["<Items>"]:
|
||||
|
||||
# For the AWS provider we MUST include the following lines to retrieve
|
||||
# or not data for the resource passed as argument using the --resource-arn
|
||||
                    if not self.audit_resources or (
                        is_resource_filtered(<item>["<item_arn>"], self.audit_resources)
                    ):
                        # Then we have to include the retrieved resource in the object
                        # previously created
                        self.<items>[<item_unique_id>] = <Item>(
                            arn=stack["<item_arn>"],
                            name=stack["<item_name>"],
                            tags=stack.get("Tags", []),
                            region=regional_client.region,
                        )

        except Exception as error:
            logger.error(
                f"{<provider_specific_field>} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )

        # In the except part we have to use the following code to log the errors
        except Exception as error:
            # Depending on each provider we can use the following fields in the logger:
            # - AWS: regional_client.region or self.region
            # - GCP: project_id and location
            # - Azure: subscription

            logger.error(
                f"{<provider_specific_field>} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )

    def __describe_<item>__(self):
        """Get Details for a <Service> <Item>"""
        logger.info("<Service> - Describing <Item> to get specific details...")

        # We MUST include a try/except block in each function
        try:

            # Loop over the items retrieved in the previous function
            for <item> in self.<items>:

                # When we perform calls to the Provider API within a for loop we have
                # to include another try/except block because in the cloud there are
                # ephemeral resources that can be deleted at the time we are checking them
                try:
                    <item>_details = self.regional_clients[<item>.region].describe_<item>(
                        <Attribute>=<item>.name
                    )

                    # For example, check if the item is Public. It is important that, when
                    # reading values from a dictionary, we use the "dict.get()" function
                    # with a default value in case the key is not present
                    <item>.public = <item>_details.get("Public", False)

                # In this except block, for example for the AWS Provider, we can use
                # the botocore ClientError exception and check for a specific error code
                # to raise a WARNING instead of an ERROR if some resource is not present.
                except ClientError as error:
                    if error.response["Error"]["Code"] == "InvalidInstanceID.NotFound":
                        logger.warning(
                            f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                        )
                    else:
                        logger.error(
                            f"{<provider_specific_field>} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                        )
                    continue

        # In the except part we have to use the following code to log the errors
        except Exception as error:
            # Depending on each provider we can use the following fields in the logger:
            # - AWS: regional_client.region or self.region
            # - GCP: project_id and location
            # - Azure: subscription

            logger.error(
                f"{<item>.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )


# In each service class we have to create some classes using Pydantic's BaseModel for the resources we want to audit.
class <Item>(BaseModel):
    """<Item> holds a <Service> <Item>"""

    arn: str
    """<Items>[].arn"""

    name: str
    """<Items>[].name"""

    region: str
    """<Items>[].region"""

    public: bool
    """<Items>[].public"""

    # We can create Optional attributes set to None by default
    tags: Optional[list] = []
    """<Items>[].tags"""

```

### Service Objects

In the service, each collection of resources should be stored as a Python [dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries): since we perform lookups on these collections all the time, the Python dictionary gives us [O(1) lookups](https://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions).

Example:
```python
self.vpcs = {}
self.vpcs["vpc-01234567890abcdef"] = VPC_Object_Class
```

## Service Client

Each Prowler service requires a service client to use the service in the checks.

The following is the `<service>_client.py` containing the initialization of the service class we have just created, so that the service's checks can use it:

```python
from prowler.providers.<provider>.lib.audit_info.audit_info import audit_info
from prowler.providers.<provider>.services.<service>.<service>_service import <Service>

<service>_client = <Service>(audit_info)
```
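
For reference, the checks then import this client object directly. The following is only an illustrative sketch using the same placeholders (the `Check` base class and the report fields are covered in the Checks page, not here):

```python
# <check>.py -- illustrative sketch using the placeholders above
from prowler.providers.<provider>.services.<service>.<service>_client import <service>_client


class <check>(Check):
    def execute(self):
        findings = []
        # Iterate over the resources stored by the service during its __init__
        for <item> in <service>_client.<items>.values():
            # ... create a report for <item>, set its status and append it to findings ...
            pass
        return findings
```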
|
||||
|
||||
## Permissions

It is really important to check whether the current Prowler permissions for each provider are enough to implement a new service. If more permissions are needed, please refer to the following documentation and update it accordingly:

- AWS: https://docs.prowler.cloud/en/latest/getting-started/requirements/#aws-authentication
- Azure: https://docs.prowler.cloud/en/latest/getting-started/requirements/#permissions
- GCP: https://docs.prowler.cloud/en/latest/getting-started/requirements/#gcp-authentication
|
||||
docs/tutorials/developer-guide/unit-testing.md (new file, 543 lines)
@@ -0,0 +1,543 @@
|
||||
# Unit Tests
|
||||
|
||||
The unit tests for the Prowler checks vary between the supported providers.

Here are some good reads about unit testing and things we have learnt along the way.
|
||||
|
||||
**Python Testing**
|
||||
|
||||
- https://docs.python-guide.org/writing/tests/
|
||||
|
||||
**Where to patch**
|
||||
|
||||
- https://docs.python.org/3/library/unittest.mock.html#where-to-patch
|
||||
- https://stackoverflow.com/questions/893333/multiple-variables-in-a-with-statement
|
||||
- https://docs.python.org/3/reference/compound_stmts.html#the-with-statement
|
||||
|
||||
**Utils to trace mocking and test execution**
|
||||
|
||||
- https://news.ycombinator.com/item?id=36054868
|
||||
- https://docs.python.org/3/library/sys.html#sys.settrace
|
||||
- https://github.com/kunalb/panopticon
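
For instance, a minimal trace function printing every function call while a test runs helps confirm which (mocked) objects are actually being exercised; this is plain `sys.settrace` usage, not Prowler-specific code:

```python
import sys


def tracer(frame, event, arg):
    # Print every function call with its file so you can see whether the
    # mocked client or a shared one is being used during the test
    if event == "call":
        print(f"{frame.f_code.co_filename}:{frame.f_code.co_name}")
    return tracer


sys.settrace(tracer)  # enable tracing; call sys.settrace(None) to disable it
```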
|
||||
|
||||
**Patching vs. Importing**
|
||||
|
||||
This is an important topic within the Prowler checks' unit testing. Due to the dynamic way checks are loaded, the process of importing the service client from a check is the following:
|
||||
|
||||
1. `<check>.py`:
|
||||
```python
|
||||
from prowler.providers.<provider>.services.<service>.<service>_client import <service>_client
|
||||
```
|
||||
2. `<service>_client.py`:
|
||||
```python
|
||||
from prowler.providers.<provider>.lib.audit_info.audit_info import audit_info
|
||||
from prowler.providers.<provider>.services.<service>.<service>_service import <SERVICE>
|
||||
|
||||
<service>_client = <SERVICE>(audit_info)
|
||||
```
|
||||
|
||||
Due to the above import path, it is not the same to patch each of the following objects: if you run a bunch of tests, in parallel or not, some clients may already have been instantiated by another check, and your test execution would then be using another test's service instance:
|
||||
|
||||
- `<service>_client` imported at `<check>.py`
|
||||
- `<service>_client` initialised at `<service>_client.py`
|
||||
- `<SERVICE>` imported at `<service>_client.py`
|
||||
|
||||
A useful read about this topic can be found in the following article: https://stackoverflow.com/questions/8658043/how-to-mock-an-import
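
In short, the safest target is usually the `<service>_client` symbol as it is looked up by the check module, combined with a service instance created inside the test. A minimal sketch with placeholder names (the provider sections below show fully worked examples):

```python
# Illustrative sketch: patch the client where the check looks it up,
# using a service instance built only for this test
with mock.patch(
    "prowler.providers.<provider>.services.<service>.<check>.<check>.<service>_client",
    new=<SERVICE>(audit_info),
):
    # Import the check only after the patch is in place
    from prowler.providers.<provider>.services.<service>.<check>.<check> import <check>
```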
|
||||
|
||||
## General Recommendations
|
||||
|
||||
When creating tests for a provider's checks we follow these guidelines, trying to cover as many test scenarios as possible (a minimal skeleton follows the list):

1. Create a test without resources, to verify that the check generates 0 findings when the service does not contain the resources the check audits.
2. Create tests that generate both a `PASS` and a `FAIL` result.
3. Create tests with more than one resource, to evaluate how the check behaves and whether the number of findings is correct.
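
A test class skeleton covering those scenarios could look like this (names are illustrative; the provider sections below contain fully worked versions):

```python
class Test_<service>_<check_name>:
    def test_no_<items>(self):
        # No resources created -> the check must return 0 findings
        ...

    def test_<item>_compliant(self):
        # One compliant resource -> exactly one PASS finding
        ...

    def test_<item>_non_compliant(self):
        # One non-compliant resource -> exactly one FAIL finding
        ...

    def test_multiple_<items>(self):
        # Several resources -> one finding per audited resource
        ...
```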
|
||||
|
||||
## How to run Prowler tests
|
||||
|
||||
To run the Prowler test suite you need to install the testing dependencies already included in the `pyproject.toml` file. If you haven't installed them yet, please read the developer guide introduction [here](./developer-guide.md#get-the-code-and-install-all-dependencies).
|
||||
|
||||
Then in the project's root path execute `pytest -n auto -vvv -s -x` or use the `Makefile` with `make test`.
|
||||
|
||||
Other commands to run tests:
|
||||
|
||||
- Run tests for a provider: `pytest -n auto -vvv -s -x tests/providers/<provider>/services`
|
||||
- Run tests for a provider service: `pytest -n auto -vvv -s -x tests/providers/<provider>/services/<service>`
|
||||
- Run tests for a provider check: `pytest -n auto -vvv -s -x tests/providers/<provider>/services/<service>/<check>`
|
||||
|
||||
> Refer to the [pytest documentation](https://docs.pytest.org/en/7.1.x/getting-started.html) for more information.
|
||||
|
||||
## AWS
|
||||
|
||||
For the AWS provider we have several ways to test a Prowler check, based on the following criteria:
|
||||
|
||||
> Note: We use and contribute to the [Moto](https://github.com/getmoto/moto) library which allows us to easily mock out tests based on AWS infrastructure. **It's awesome!**
|
||||
|
||||
- AWS API calls covered by [Moto](https://github.com/getmoto/moto):
|
||||
- Service tests with `@mock_<service>`
|
||||
- Checks tests with `@mock_<service>`
|
||||
- AWS API calls not covered by Moto:
|
||||
- Service test with `mock_make_api_call`
|
||||
- Checks tests with [MagicMock](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock)
|
||||
- AWS API calls partially covered by Moto:
|
||||
- Service test with `@mock_<service>` and `mock_make_api_call`
|
||||
- Checks tests with `@mock_<service>` and `mock_make_api_call`
|
||||
|
||||
In the following section we are going to explain all of the above scenarios with examples, based on whether the [Moto](https://github.com/getmoto/moto) library covers the AWS API calls made by the service. You can check the covered API calls [here](https://github.com/getmoto/moto/blob/master/IMPLEMENTATION_COVERAGE.md).

An important point for AWS testing is that each check test MUST have its own `audit_info`, which is the key object during the AWS execution, so that each test execution is isolated.
|
||||
|
||||
Check the [Audit Info](./audit-info.md) section to get more details.
|
||||
|
||||
```python
|
||||
# We need to import the AWS_Audit_Info and the Audit_Metadata
# to set the audit_info used to call AWS APIs, plus the boto3
# session used to build the mocked audit session
from boto3 import session

from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata

AWS_ACCOUNT_NUMBER = "123456789012"
|
||||
|
||||
def set_mocked_audit_info(self):
|
||||
audit_info = AWS_Audit_Info(
|
||||
session_config=None,
|
||||
original_session=None,
|
||||
audit_session=session.Session(
|
||||
profile_name=None,
|
||||
botocore_session=None,
|
||||
),
|
||||
audit_config=None,
|
||||
audited_account=AWS_ACCOUNT_NUMBER,
|
||||
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
|
||||
audited_user_id=None,
|
||||
audited_partition="aws",
|
||||
audited_identity_arn=None,
|
||||
profile=None,
|
||||
profile_region=None,
|
||||
credentials=None,
|
||||
assumed_role_info=None,
|
||||
audited_regions=["us-east-1", "eu-west-1"],
|
||||
organizations_metadata=None,
|
||||
audit_resources=None,
|
||||
mfa_enabled=False,
|
||||
audit_metadata=Audit_Metadata(
|
||||
services_scanned=0,
|
||||
expected_checks=[],
|
||||
completed_checks=0,
|
||||
audit_progress=0,
|
||||
),
|
||||
)
|
||||
|
||||
return audit_info
|
||||
```
|
||||
### Checks
|
||||
|
||||
For the AWS test examples we are going to use the tests of the `iam_password_policy_uppercase` check.
|
||||
|
||||
This section is going to be divided based on the API coverage of the [Moto](https://github.com/getmoto/moto) library.
|
||||
|
||||
#### API calls covered
|
||||
|
||||
If the [Moto](https://github.com/getmoto/moto) library covers the API calls we want to test, we can use the `@mock_<service>` decorator, which will mock out all the API calls made to AWS, keeping the state within the decorated code, in this case the test function.
|
||||
|
||||
```python
|
||||
# We need to import the unittest.mock to allow us to patch some objects
|
||||
# not to use shared ones between test, hence to isolate the test
|
||||
from unittest import mock
|
||||
|
||||
# Boto3 client and session to call the AWS APIs
|
||||
from boto3 import client, session
|
||||
|
||||
# Moto decorator for the IAM service we want to mock
|
||||
from moto import mock_iam
|
||||
|
||||
# Constants used
|
||||
AWS_ACCOUNT_NUMBER = "123456789012"
|
||||
AWS_REGION = "us-east-1"
|
||||
|
||||
|
||||
# We always name the test classes like Test_<check_name>
|
||||
class Test_iam_password_policy_uppercase:
|
||||
|
||||
# We include the Moto decorator for the service we want to use
|
||||
# You can include more than one if two or more services are
|
||||
# involved in test
|
||||
@mock_iam
|
||||
# We name the tests with test_<service>_<check_name>_<test_action>
|
||||
def test_iam_password_policy_no_uppercase_flag(self):
|
||||
# First, we have to create an IAM client
|
||||
iam_client = client("iam", region_name=AWS_REGION)
|
||||
|
||||
# Then, since all the AWS accounts have a password
|
||||
# policy we want to set to False the RequireUppercaseCharacters
|
||||
iam_client.update_account_password_policy(RequireUppercaseCharacters=False)
|
||||
|
||||
# We set a mocked audit_info for AWS not to share the same audit state
|
||||
# between checks
|
||||
current_audit_info = self.set_mocked_audit_info()
|
||||
|
||||
# The Prowler service import MUST be made within the decorated
|
||||
# code not to make real API calls to the AWS service.
|
||||
from prowler.providers.aws.services.iam.iam_service import IAM
|
||||
|
||||
# Prowler for AWS uses a shared object called `current_audit_info` where it stores
|
||||
# the audit's state, credentials and configuration.
|
||||
# We have to mock also the iam_client from the check to enforce that the iam_client used is the one
# created within this test, because patch != import, and if you execute tests in parallel some objects
# can be already initialised hence the check won't be isolated
with mock.patch(
    "prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
    new=current_audit_info,
), mock.patch(
    "prowler.providers.aws.services.iam.iam_password_policy_uppercase.iam_password_policy_uppercase.iam_client",
    new=IAM(current_audit_info),
):
|
||||
# We import the check within the two mocks not to initialise the iam_client with some shared information from
|
||||
# the current_audit_info or the IAM service.
|
||||
from prowler.providers.aws.services.iam.iam_password_policy_uppercase.iam_password_policy_uppercase import (
|
||||
iam_password_policy_uppercase,
|
||||
)
|
||||
|
||||
# Once imported, we only need to instantiate the check's class
|
||||
check = iam_password_policy_uppercase()
|
||||
|
||||
# And then, call the execute() function to run the check
|
||||
# against the IAM client we've set up.
|
||||
result = check.execute()
|
||||
|
||||
# Last but not least, we need to assert all the fields
|
||||
# from the check's results
|
||||
assert len(result) == 1
|
||||
assert result[0].status == "FAIL"
|
||||
assert result[0].status_extended == "IAM password policy does not require at least one uppercase letter."
|
||||
assert result[0].resource_arn == f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"
|
||||
assert result[0].resource_id == AWS_ACCOUNT_NUMBER
|
||||
assert result[0].resource_tags == []
|
||||
assert result[0].region == AWS_REGION
|
||||
```
|
||||
|
||||
#### API calls not covered
|
||||
|
||||
If the IAM service used by the checks we want to test is not covered by Moto, we have to inject the objects into the service client using [MagicMock](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock), because we cannot instantiate the service, since it would make real calls to the AWS APIs.
|
||||
|
||||
> The following example uses the IAM GetAccountPasswordPolicy which is covered by Moto but this is only for demonstration purposes.
|
||||
|
||||
The following code shows how to use MagicMock to create the service objects.
|
||||
|
||||
```python
|
||||
# We need to import the unittest.mock to allow us to patch some objects
|
||||
# not to use shared ones between test, hence to isolate the test
|
||||
from unittest import mock
|
||||
|
||||
# Constants used
|
||||
AWS_ACCOUNT_NUMBER = "123456789012"
|
||||
AWS_REGION = "us-east-1"
|
||||
|
||||
|
||||
# We always name the test classes like Test_<check_name>
|
||||
class Test_iam_password_policy_uppercase:
|
||||
|
||||
# We name the tests with test_<service>_<check_name>_<test_action>
|
||||
def test_iam_password_policy_no_uppercase_flag(self):
|
||||
# Mocked client with MagicMock
|
||||
mocked_iam_client = mock.MagicMock
|
||||
|
||||
# Since the IAM Password Policy has their own model we have to import it
|
||||
from prowler.providers.aws.services.iam.iam_service import PasswordPolicy
|
||||
|
||||
# Create the mock PasswordPolicy object
|
||||
mocked_iam_client.password_policy = PasswordPolicy(
|
||||
length=5,
|
||||
symbols=True,
|
||||
numbers=True,
|
||||
# We set the value to False to test the check
|
||||
uppercase=False,
|
||||
lowercase=True,
|
||||
allow_change=False,
|
||||
expiration=True,
|
||||
)
|
||||
|
||||
# We set a mocked audit_info for AWS not to share the same audit state
|
||||
# between checks
|
||||
current_audit_info = self.set_mocked_audit_info()
|
||||
|
||||
# In this scenario we have to mock also the IAM service and the iam_client from the check to enforce that the iam_client used is the one created within this check because patch != import, and if you execute tests in parallel some objects can be already initialised hence the check won't be isolated.
|
||||
# In this case we don't use the Moto decorator, we use the mocked IAM client for both objects
|
||||
with mock.patch(
|
||||
"prowler.providers.aws.services.iam.iam_service.IAM",
|
||||
new=mocked_iam_client,
|
||||
), mock.patch(
|
||||
"prowler.providers.aws.services.iam.iam_client.iam_client",
|
||||
new=mocked_iam_client,
|
||||
):
|
||||
# We import the check within the two mocks not to initialise the iam_client with some shared information from
|
||||
# the current_audit_info or the IAM service.
|
||||
from prowler.providers.aws.services.iam.iam_password_policy_uppercase.iam_password_policy_uppercase import (
|
||||
iam_password_policy_uppercase,
|
||||
)
|
||||
|
||||
# Once imported, we only need to instantiate the check's class
|
||||
check = iam_password_policy_uppercase()
|
||||
|
||||
# And then, call the execute() function to run the check
|
||||
# against the IAM client we've set up.
|
||||
result = check.execute()
|
||||
|
||||
# Last but not least, we need to assert all the fields
|
||||
# from the check's results
|
||||
assert len(result) == 1
|
||||
assert result[0].status == "FAIL"
|
||||
assert result[0].status_extended == "IAM password policy does not require at least one uppercase letter."
|
||||
assert result[0].resource_arn == f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"
|
||||
assert result[0].resource_id == AWS_ACCOUNT_NUMBER
|
||||
assert result[0].resource_tags == []
|
||||
assert result[0].region == AWS_REGION
|
||||
```
|
||||
|
||||
#### API calls partially covered
|
||||
|
||||
If the API calls we want to use in the service are partially covered by the Moto decorator we have to create our own mocked API calls to use it in combination.
|
||||
|
||||
To do so, you need to mock the `botocore.client.BaseClient._make_api_call` function, which is the Boto3 function in charge of making the real API call to the AWS APIs, using [mock.patch](https://docs.python.org/3/library/unittest.mock.html#patch):
|
||||
|
||||
|
||||
```python
|
||||
|
||||
import boto3
|
||||
import botocore
|
||||
from unittest.mock import patch
|
||||
from moto import mock_iam
|
||||
|
||||
# Original botocore _make_api_call function
|
||||
orig = botocore.client.BaseClient._make_api_call
|
||||
|
||||
# Mocked botocore _make_api_call function
|
||||
def mock_make_api_call(self, operation_name, kwarg):
|
||||
# As you can see the operation_name has the get_account_password_policy snake_case form but
|
||||
# we are using the GetAccountPasswordPolicy form.
|
||||
# Rationale -> https://github.com/boto/botocore/blob/develop/botocore/client.py#L810:L816
|
||||
if operation_name == 'GetAccountPasswordPolicy':
|
||||
return {
|
||||
'PasswordPolicy': {
|
||||
'MinimumPasswordLength': 123,
|
||||
'RequireSymbols': True|False,
|
||||
'RequireNumbers': True|False,
|
||||
'RequireUppercaseCharacters': True|False,
|
||||
'RequireLowercaseCharacters': True|False,
|
||||
'AllowUsersToChangePassword': True|False,
|
||||
'ExpirePasswords': True|False,
|
||||
'MaxPasswordAge': 123,
|
||||
'PasswordReusePrevention': 123,
|
||||
'HardExpiry': True|False
|
||||
}
|
||||
}
|
||||
# If we don't want to patch the API call
|
||||
return orig(self, operation_name, kwarg)
|
||||
|
||||
# We always name the test classes like Test_<check_name>
|
||||
class Test_iam_password_policy_uppercase:
|
||||
|
||||
# We include the custom API call mock decorator for the service we want to use
|
||||
@patch("botocore.client.BaseClient._make_api_call", new=mock_make_api_call)
|
||||
# We include also the IAM Moto decorator for the API calls supported
|
||||
@mock_iam
|
||||
# We name the tests with test_<service>_<check_name>_<test_action>
|
||||
def test_iam_password_policy_no_uppercase_flag(self):
|
||||
# Check the previous section to see the check test, since it is the same
|
||||
```
|
||||
|
||||
Note that the mocked `mock_make_api_call` function itself does not use Moto, to keep it simple, but if you use any `moto` decorators in addition to the patch, the call to `orig(self, operation_name, kwarg)` will be intercepted by Moto.
|
||||
|
||||
> The above code comes from here https://docs.getmoto.org/en/latest/docs/services/patching_other_services.html
|
||||
|
||||
#### Mocking more than one service
|
||||
|
||||
If the test you are creating belongs to a check that uses more than one provider service, you should mock each of the services used. For example, the check `cloudtrail_logs_s3_bucket_access_logging_enabled` requires the CloudTrail and the S3 clients, hence the service-mocking part of the test will be as follows:
|
||||
|
||||
|
||||
```python
|
||||
with mock.patch(
|
||||
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
|
||||
new=mock_audit_info,
|
||||
), mock.patch(
|
||||
"prowler.providers.aws.services.cloudtrail.cloudtrail_logs_s3_bucket_access_logging_enabled.cloudtrail_logs_s3_bucket_access_logging_enabled.cloudtrail_client",
|
||||
new=Cloudtrail(mock_audit_info),
|
||||
), mock.patch(
|
||||
"prowler.providers.aws.services.cloudtrail.cloudtrail_logs_s3_bucket_access_logging_enabled.cloudtrail_logs_s3_bucket_access_logging_enabled.s3_client",
|
||||
new=S3(mock_audit_info),
|
||||
):
|
||||
```
|
||||
|
||||
|
||||
As you can see in the above code, it is required to mock the AWS audit info and both services used.
|
||||
|
||||
### Services
|
||||
|
||||
For testing the AWS services we have to follow the same logic as with the AWS checks: check whether the AWS API calls made by the service are covered by Moto, and test the service `__init__` to verify that the information is being correctly retrieved.

The service tests could act as *Integration Tests*, since we test how the service retrieves the information from the provider; but since Moto or the custom mock objects mock those calls, these tests still fall into the *Unit Tests* category.
|
||||
|
||||
Please refer to the [AWS checks tests](./unit-testing.md#checks) for more information on how to create tests and check the existing services tests [here](https://github.com/prowler-cloud/prowler/tree/master/tests/providers/aws/services).
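
As a rough sketch (placeholder names, and assuming the service API is covered by Moto), a service test creates some infrastructure with Moto and then asserts on what the service `__init__` stored:

```python
from boto3 import client
from moto import mock_<service>

AWS_REGION = "us-east-1"


class Test_<Service>_Service:
    @mock_<service>
    def test_<service>_service(self):
        # Create a mocked resource with Moto so the service has something to retrieve
        <service>_client = client("<service>", region_name=AWS_REGION)
        <service>_client.create_<item>(<Attribute>="test")

        audit_info = self.set_mocked_audit_info()

        # The service import MUST be made within the decorated code
        from prowler.providers.aws.services.<service>.<service>_service import <Service>

        <service> = <Service>(audit_info)

        # Assert that __init__ retrieved and parsed the mocked resource
        assert len(<service>.<items>) == 1
```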
|
||||
|
||||
## GCP
|
||||
|
||||
### Checks
|
||||
|
||||
For the GCP Provider we don't have any library to mock out the API calls we use. So in this scenario we inject the objects in the service client using [MagicMock](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock).
|
||||
|
||||
The following code shows how to use MagicMock to create the service objects for a GCP check test.
|
||||
|
||||
```python
|
||||
# We need to import the unittest.mock to allow us to patch some objects
|
||||
# not to use shared ones between test, hence to isolate the test
|
||||
from unittest import mock
|
||||
|
||||
# GCP Constants
|
||||
GCP_PROJECT_ID = "123456789012"
|
||||
|
||||
# We are going to create a test for the compute_firewall_rdp_access_from_the_internet_allowed check
|
||||
class Test_compute_firewall_rdp_access_from_the_internet_allowed:
|
||||
|
||||
# We name the tests with test_<service>_<check_name>_<test_action>
|
||||
def test_compute_compute_firewall_rdp_access_from_the_internet_allowed_one_compliant_rule_with_valid_port(self):
|
||||
# Mocked client with MagicMock
|
||||
compute_client = mock.MagicMock
|
||||
|
||||
# Assign GCP client configuration
|
||||
compute_client.project_ids = [GCP_PROJECT_ID]
|
||||
compute_client.region = "global"
|
||||
|
||||
# Import the service resource model to create the mocked object
|
||||
from prowler.providers.gcp.services.compute.compute_service import Firewall
|
||||
|
||||
# Create the custom Firewall object to be tested
|
||||
firewall = Firewall(
|
||||
name="test",
|
||||
id="1234567890",
|
||||
source_ranges=["0.0.0.0/0"],
|
||||
direction="INGRESS",
|
||||
allowed_rules=[{"IPProtocol": "tcp", "ports": ["443"]}],
|
||||
project_id=GCP_PROJECT_ID,
|
||||
)
|
||||
compute_client.firewalls = [firewall]
|
||||
|
||||
# In this scenario we have to mock also the Compute service and the compute_client from the check to enforce that the compute_client used is the one created within this check because patch != import, and if you execute tests in parallel some objects can be already initialised hence the check won't be isolated.
|
||||
# In this case we don't use the Moto decorator, we use the mocked Compute client for both objects
|
||||
with mock.patch(
    "prowler.providers.gcp.services.compute.compute_service.Compute",
    new=compute_client,
), mock.patch(
    "prowler.providers.gcp.services.compute.compute_client.compute_client",
    new=compute_client,
):
|
||||
|
||||
# We import the check within the two mocks so we do not initialise the compute_client
# with shared information from another test or the Compute service.
|
||||
from prowler.providers.gcp.services.compute.compute_firewall_rdp_access_from_the_internet_allowed.compute_firewall_rdp_access_from_the_internet_allowed import (
|
||||
compute_firewall_rdp_access_from_the_internet_allowed,
|
||||
)
|
||||
|
||||
# Once imported, we only need to instantiate the check's class
|
||||
check = compute_firewall_rdp_access_from_the_internet_allowed()
|
||||
|
||||
# And then, call the execute() function to run the check
|
||||
# against the Compute client we've set up.
|
||||
result = check.execute()
|
||||
|
||||
# Last but not least, we need to assert all the fields
|
||||
# from the check's results
|
||||
assert len(result) == 1
|
||||
assert result[0].status == "PASS"
|
||||
assert result[0].status_extended == f"Firewall {firewall.name} does not expose port 3389 (RDP) to the internet."
|
||||
assert result[0].resource_name == firewall.name
assert result[0].resource_id == firewall.id
assert result[0].project_id == GCP_PROJECT_ID
assert result[0].location == compute_client.region
|
||||
```
|
||||
|
||||
### Services
|
||||
|
||||
Coming soon ...
|
||||
|
||||
## Azure
|
||||
|
||||
### Checks
|
||||
|
||||
For the Azure Provider we don't have any library to mock out the API calls we use. So in this scenario we inject the objects in the service client using [MagicMock](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock).
|
||||
|
||||
The following code shows how to use MagicMock to create the service objects for an Azure check test.
|
||||
|
||||
```python
|
||||
# We need to import the unittest.mock to allow us to patch some objects
|
||||
# not to use shared ones between test, hence to isolate the test
|
||||
from unittest import mock
|
||||
|
||||
from uuid import uuid4
|
||||
|
||||
# Azure Constants
|
||||
AZURE_SUSCRIPTION = str(uuid4())
|
||||
|
||||
|
||||
|
||||
# We are going to create a test for the Test_defender_ensure_defender_for_arm_is_on check
|
||||
class Test_defender_ensure_defender_for_arm_is_on:
|
||||
|
||||
# We name the tests with test_<service>_<check_name>_<test_action>
|
||||
def test_defender_defender_ensure_defender_for_arm_is_on_arm_pricing_tier_not_standard(self):
|
||||
resource_id = str(uuid4())
|
||||
|
||||
# Mocked client with MagicMock
|
||||
defender_client = mock.MagicMock
|
||||
|
||||
# Import the service resource model to create the mocked object
|
||||
from prowler.providers.azure.services.defender.defender_service import Defender_Pricing
|
||||
|
||||
# Create the custom Defender object to be tested
|
||||
defender_client.pricings = {
|
||||
AZURE_SUSCRIPTION: {
|
||||
"Arm": Defender_Pricing(
|
||||
resource_id=resource_id,
|
||||
pricing_tier="Not Standard",
|
||||
free_trial_remaining_time=0,
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
# In this scenario we have to mock also the Defender service and the defender_client from the check to enforce that the defender_client used is the one created within this check because patch != import, and if you execute tests in parallel some objects can be already initialised hence the check won't be isolated.
|
||||
# In this case we don't use the Moto decorator, we use the mocked Defender client for both objects
|
||||
with mock.patch(
|
||||
"prowler.providers.azure.services.defender.defender_service.Defender",
|
||||
new=defender_client,
|
||||
), mock.patch(
|
||||
"prowler.providers.azure.services.defender.defender_client.defender_client",
|
||||
new=defender_client,
|
||||
):
|
||||
|
||||
# We import the check within the two mocks so we do not initialise the defender_client
# with shared information from another test or the Defender service.
|
||||
from prowler.providers.azure.services.defender.defender_ensure_defender_for_arm_is_on.defender_ensure_defender_for_arm_is_on import (
|
||||
defender_ensure_defender_for_arm_is_on,
|
||||
)
|
||||
|
||||
# Once imported, we only need to instantiate the check's class
|
||||
check = defender_ensure_defender_for_arm_is_on()
|
||||
|
||||
# And then, call the execute() function to run the check
|
||||
# against the Defender client we've set up.
|
||||
result = check.execute()
|
||||
|
||||
# Last but not least, we need to assert all the fields
|
||||
# from the check's results
|
||||
assert len(result) == 1
|
||||
assert result[0].status == "FAIL"
|
||||
assert (
|
||||
result[0].status_extended
|
||||
== f"Defender plan Defender for ARM from subscription {AZURE_SUSCRIPTION} is set to OFF (pricing tier not standard)"
|
||||
)
|
||||
assert result[0].subscription == AZURE_SUSCRIPTION
|
||||
assert result[0].resource_name == "Defender plan ARM"
|
||||
assert result[0].resource_id == resource_id
|
||||
```
|
||||
|
||||
### Services
|
||||
|
||||
Coming soon ...
|
||||
mkdocs.yml
@@ -38,7 +38,7 @@ nav:
|
||||
- Logging: tutorials/logging.md
|
||||
- Allowlist: tutorials/allowlist.md
|
||||
- Pentesting: tutorials/pentesting.md
|
||||
- Developer Guide: tutorials/developer-guide.md
|
||||
- Developer Guide: tutorials/developer-guide/developer-guide.md
|
||||
- AWS:
|
||||
- Authentication: tutorials/aws/authentication.md
|
||||
- Assume Role: tutorials/aws/role-assumption.md
|
||||
@@ -56,7 +56,18 @@ nav:
|
||||
- Subscriptions: tutorials/azure/subscriptions.md
|
||||
- Google Cloud:
|
||||
- Authentication: tutorials/gcp/authentication.md
|
||||
- Developer Guide: tutorials/developer-guide.md
|
||||
- Developer Guide:
|
||||
- Introduction: tutorials/developer-guide/developer-guide.md
|
||||
- Audit Info: tutorials/developer-guide/audit-info.md
|
||||
- Services: tutorials/developer-guide/service.md
|
||||
- Checks: tutorials/developer-guide/checks.md
|
||||
- Documentation: tutorials/developer-guide/documentation.md
|
||||
- Compliance: tutorials/developer-guide/security-compliance-framework.md
|
||||
- Outputs: tutorials/developer-guide/outputs.md
|
||||
- Integrations: tutorials/developer-guide/integrations.md
|
||||
- Testing:
|
||||
- Unit Tests: tutorials/developer-guide/unit-testing.md
|
||||
- Integration Tests: tutorials/developer-guide/integration-testing.md
|
||||
- Security: security.md
|
||||
- Contact Us: contact.md
|
||||
- Troubleshooting: troubleshooting.md
|
||||
|
||||
poetry.lock (generated)
@@ -28,13 +28,13 @@ grapheme = "0.6.0"
|
||||
|
||||
[[package]]
|
||||
name = "astroid"
|
||||
version = "2.15.4"
|
||||
version = "2.15.6"
|
||||
description = "An abstract syntax tree for Python with inference support."
|
||||
optional = false
|
||||
python-versions = ">=3.7.2"
|
||||
files = [
|
||||
{file = "astroid-2.15.4-py3-none-any.whl", hash = "sha256:a1b8543ef9d36ea777194bc9b17f5f8678d2c56ee6a45b2c2f17eec96f242347"},
|
||||
{file = "astroid-2.15.4.tar.gz", hash = "sha256:c81e1c7fbac615037744d067a9bb5f9aeb655edf59b63ee8b59585475d6f80d8"},
|
||||
{file = "astroid-2.15.6-py3-none-any.whl", hash = "sha256:389656ca57b6108f939cf5d2f9a2a825a3be50ba9d589670f393236e0a03b91c"},
|
||||
{file = "astroid-2.15.6.tar.gz", hash = "sha256:903f024859b7c7687d7a7f3a3f73b17301f8e42dfd9cc9df9d4418172d3e2dbd"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
@@ -1316,13 +1316,13 @@ files = [
|
||||
|
||||
[[package]]
|
||||
name = "mkdocs"
|
||||
version = "1.4.3"
|
||||
version = "1.5.2"
|
||||
description = "Project documentation with Markdown."
|
||||
optional = true
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "mkdocs-1.4.3-py3-none-any.whl", hash = "sha256:6ee46d309bda331aac915cd24aab882c179a933bd9e77b80ce7d2eaaa3f689dd"},
|
||||
{file = "mkdocs-1.4.3.tar.gz", hash = "sha256:5955093bbd4dd2e9403c5afaf57324ad8b04f16886512a3ee6ef828956481c57"},
|
||||
{file = "mkdocs-1.5.2-py3-none-any.whl", hash = "sha256:60a62538519c2e96fe8426654a67ee177350451616118a41596ae7c876bb7eac"},
|
||||
{file = "mkdocs-1.5.2.tar.gz", hash = "sha256:70d0da09c26cff288852471be03c23f0f521fc15cf16ac89c7a3bfb9ae8d24f9"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
@@ -1331,16 +1331,19 @@ colorama = {version = ">=0.4", markers = "platform_system == \"Windows\""}
|
||||
ghp-import = ">=1.0"
|
||||
importlib-metadata = {version = ">=4.3", markers = "python_version < \"3.10\""}
|
||||
jinja2 = ">=2.11.1"
|
||||
markdown = ">=3.2.1,<3.4"
|
||||
markdown = ">=3.2.1"
|
||||
markupsafe = ">=2.0.1"
|
||||
mergedeep = ">=1.3.4"
|
||||
packaging = ">=20.5"
|
||||
pathspec = ">=0.11.1"
|
||||
platformdirs = ">=2.2.0"
|
||||
pyyaml = ">=5.1"
|
||||
pyyaml-env-tag = ">=0.1"
|
||||
watchdog = ">=2.0"
|
||||
|
||||
[package.extras]
|
||||
i18n = ["babel (>=2.9.0)"]
|
||||
min-versions = ["babel (==2.9.0)", "click (==7.0)", "colorama (==0.4)", "ghp-import (==1.0)", "importlib-metadata (==4.3)", "jinja2 (==2.11.1)", "markdown (==3.2.1)", "markupsafe (==2.0.1)", "mergedeep (==1.3.4)", "packaging (==20.5)", "pyyaml (==5.1)", "pyyaml-env-tag (==0.1)", "typing-extensions (==3.10)", "watchdog (==2.0)"]
|
||||
min-versions = ["babel (==2.9.0)", "click (==7.0)", "colorama (==0.4)", "ghp-import (==1.0)", "importlib-metadata (==4.3)", "jinja2 (==2.11.1)", "markdown (==3.2.1)", "markupsafe (==2.0.1)", "mergedeep (==1.3.4)", "packaging (==20.5)", "pathspec (==0.11.1)", "platformdirs (==2.2.0)", "pyyaml (==5.1)", "pyyaml-env-tag (==0.1)", "typing-extensions (==3.10)", "watchdog (==2.0)"]
|
||||
|
||||
[[package]]
|
||||
name = "mkdocs-material"
|
||||
@@ -1831,17 +1834,17 @@ tests = ["coverage[toml] (==5.0.4)", "pytest (>=6.0.0,<7.0.0)"]
|
||||
|
||||
[[package]]
|
||||
name = "pylint"
|
||||
version = "2.17.4"
|
||||
version = "2.17.5"
|
||||
description = "python code static checker"
|
||||
optional = false
|
||||
python-versions = ">=3.7.2"
|
||||
files = [
|
||||
{file = "pylint-2.17.4-py3-none-any.whl", hash = "sha256:7a1145fb08c251bdb5cca11739722ce64a63db479283d10ce718b2460e54123c"},
|
||||
{file = "pylint-2.17.4.tar.gz", hash = "sha256:5dcf1d9e19f41f38e4e85d10f511e5b9c35e1aa74251bf95cdd8cb23584e2db1"},
|
||||
{file = "pylint-2.17.5-py3-none-any.whl", hash = "sha256:73995fb8216d3bed149c8d51bba25b2c52a8251a2c8ac846ec668ce38fab5413"},
|
||||
{file = "pylint-2.17.5.tar.gz", hash = "sha256:f7b601cbc06fef7e62a754e2b41294c2aa31f1cb659624b9a85bcba29eaf8252"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
astroid = ">=2.15.4,<=2.17.0-dev0"
|
||||
astroid = ">=2.15.6,<=2.17.0-dev0"
|
||||
colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
|
||||
dill = [
|
||||
{version = ">=0.2", markers = "python_version < \"3.11\""},
|
||||
@@ -2399,7 +2402,8 @@ files = [
|
||||
{file = "ruamel.yaml.clib-0.2.7-cp310-cp310-win32.whl", hash = "sha256:763d65baa3b952479c4e972669f679fe490eee058d5aa85da483ebae2009d231"},
|
||||
{file = "ruamel.yaml.clib-0.2.7-cp310-cp310-win_amd64.whl", hash = "sha256:d000f258cf42fec2b1bbf2863c61d7b8918d31ffee905da62dede869254d3b8a"},
|
||||
{file = "ruamel.yaml.clib-0.2.7-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:045e0626baf1c52e5527bd5db361bc83180faaba2ff586e763d3d5982a876a9e"},
|
||||
{file = "ruamel.yaml.clib-0.2.7-cp311-cp311-macosx_12_6_arm64.whl", hash = "sha256:721bc4ba4525f53f6a611ec0967bdcee61b31df5a56801281027a3a6d1c2daf5"},
|
||||
{file = "ruamel.yaml.clib-0.2.7-cp311-cp311-macosx_13_0_arm64.whl", hash = "sha256:1a6391a7cabb7641c32517539ca42cf84b87b667bad38b78d4d42dd23e957c81"},
|
||||
{file = "ruamel.yaml.clib-0.2.7-cp311-cp311-manylinux2014_aarch64.whl", hash = "sha256:9c7617df90c1365638916b98cdd9be833d31d337dbcd722485597b43c4a215bf"},
|
||||
{file = "ruamel.yaml.clib-0.2.7-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:41d0f1fa4c6830176eef5b276af04c89320ea616655d01327d5ce65e50575c94"},
|
||||
{file = "ruamel.yaml.clib-0.2.7-cp311-cp311-win32.whl", hash = "sha256:f6d3d39611ac2e4f62c3128a9eed45f19a6608670c5a2f4f07f24e8de3441d38"},
|
||||
{file = "ruamel.yaml.clib-0.2.7-cp311-cp311-win_amd64.whl", hash = "sha256:da538167284de58a52109a9b89b8f6a53ff8437dd6dc26d33b57bf6699153122"},
|
||||
@@ -2891,4 +2895,4 @@ docs = ["mkdocs", "mkdocs-material"]
|
||||
[metadata]
|
||||
lock-version = "2.0"
|
||||
python-versions = "^3.9"
|
||||
content-hash = "af1b95fa997c6e4eb7f6764a69c00cb89aac5cfce77c130e1722f6322f495755"
|
||||
content-hash = "17459c4c8a7acf4c4a31253edf406113fbcedf8d81d17042f6b33665c3a6f47d"
|
||||
|
||||
@@ -219,7 +219,11 @@ def prowler():
|
||||
|
||||
# Resolve previous fails of Security Hub
|
||||
if provider == "aws" and args.security_hub and not args.skip_sh_update:
|
||||
resolve_security_hub_previous_findings(args.output_directory, audit_info)
|
||||
resolve_security_hub_previous_findings(
|
||||
audit_output_options.output_directory,
|
||||
audit_output_options.output_filename,
|
||||
audit_info,
|
||||
)
|
||||
|
||||
# Display summary table
|
||||
if not args.only_logs:
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
import os
|
||||
import pathlib
|
||||
import sys
|
||||
from datetime import datetime, timezone
|
||||
from os import getcwd
|
||||
|
||||
@@ -10,7 +11,7 @@ from prowler.lib.logger import logger
|
||||
|
||||
timestamp = datetime.today()
|
||||
timestamp_utc = datetime.now(timezone.utc).replace(tzinfo=timezone.utc)
|
||||
prowler_version = "3.7.2"
|
||||
prowler_version = "3.8.2"
|
||||
html_logo_url = "https://github.com/prowler-cloud/prowler/"
|
||||
html_logo_img = "https://user-images.githubusercontent.com/3985464/113734260-7ba06900-96fb-11eb-82bc-d4f68a1e2710.png"
|
||||
square_logo_img = "https://user-images.githubusercontent.com/38561120/235905862-9ece5bd7-9aa3-4e48-807a-3a9035eb8bfb.png"
|
||||
@@ -23,16 +24,23 @@ banner_color = "\033[1;92m"
|
||||
|
||||
# Compliance
|
||||
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
|
||||
available_compliance_frameworks = []
|
||||
for provider in ["aws", "gcp"]:
|
||||
with os.scandir(f"{actual_directory}/../compliance/{provider}") as files:
|
||||
files = [
|
||||
file.name
|
||||
for file in files
|
||||
if file.is_file()
|
||||
and file.name.endswith(".json")
|
||||
and available_compliance_frameworks.append(file.name.removesuffix(".json"))
|
||||
]
|
||||
|
||||
|
||||
def get_available_compliance_frameworks():
|
||||
available_compliance_frameworks = []
|
||||
for provider in ["aws", "gcp", "azure"]:
|
||||
with os.scandir(f"{actual_directory}/../compliance/{provider}") as files:
|
||||
for file in files:
|
||||
if file.is_file() and file.name.endswith(".json"):
|
||||
available_compliance_frameworks.append(
|
||||
file.name.removesuffix(".json")
|
||||
)
|
||||
return available_compliance_frameworks
|
||||
|
||||
|
||||
available_compliance_frameworks = get_available_compliance_frameworks()
|
||||
|
||||
|
||||
# AWS services-regions matrix json
|
||||
aws_services_json_file = "aws_regions_by_service.json"
|
||||
|
||||
@@ -47,7 +55,9 @@ json_file_suffix = ".json"
|
||||
json_asff_file_suffix = ".asff.json"
|
||||
json_ocsf_file_suffix = ".ocsf.json"
|
||||
html_file_suffix = ".html"
|
||||
config_yaml = f"{pathlib.Path(os.path.dirname(os.path.realpath(__file__)))}/config.yaml"
|
||||
default_config_file_path = (
|
||||
f"{pathlib.Path(os.path.dirname(os.path.realpath(__file__)))}/config.yaml"
|
||||
)
|
||||
|
||||
|
||||
def check_current_version():
|
||||
@@ -62,29 +72,51 @@ def check_current_version():
|
||||
else:
|
||||
return f"{prowler_version_string} (it is the latest version, yay!)"
|
||||
except Exception as error:
|
||||
logger.error(f"{error.__class__.__name__}: {error}")
|
||||
logger.error(
|
||||
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
|
||||
)
|
||||
return f"{prowler_version_string}"
|
||||
|
||||
|
||||
def change_config_var(variable, value):
|
||||
def change_config_var(variable: str, value: str, audit_info):
|
||||
try:
|
||||
with open(config_yaml) as f:
|
||||
doc = yaml.safe_load(f)
|
||||
|
||||
doc[variable] = value
|
||||
|
||||
with open(config_yaml, "w") as f:
|
||||
yaml.dump(doc, f)
|
||||
if (
|
||||
hasattr(audit_info, "audit_config")
|
||||
and audit_info.audit_config is not None
|
||||
and variable in audit_info.audit_config
|
||||
):
|
||||
audit_info.audit_config[variable] = value
|
||||
return audit_info
|
||||
except Exception as error:
|
||||
logger.error(f"{error.__class__.__name__}: {error}")
|
||||
logger.error(
|
||||
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
|
||||
)
|
||||
|
||||
|
||||
def get_config_var(variable):
|
||||
def load_and_validate_config_file(provider: str, config_file_path: str) -> dict:
|
||||
"""
|
||||
load_and_validate_config_file reads the Prowler config file in YAML format from the default location or the file passed with the --config-file flag
|
||||
"""
|
||||
try:
|
||||
with open(config_yaml) as f:
|
||||
doc = yaml.safe_load(f)
|
||||
with open(config_file_path) as f:
|
||||
config = {}
|
||||
config_file = yaml.safe_load(f)
|
||||
|
||||
# Not to introduce a breaking change we have to allow the old format config file without any provider keys
|
||||
# and a new format with a key for each provider to include their configuration values within
|
||||
# Check if the new format is passed
|
||||
if "aws" in config_file or "gcp" in config_file or "azure" in config_file:
|
||||
config = config_file.get(provider, {})
|
||||
else:
|
||||
config = config_file if config_file else {}
|
||||
# Not to break Azure and GCP does not support neither use the old config format
|
||||
if provider in ["azure", "gcp"]:
|
||||
config = {}
|
||||
|
||||
return config
|
||||
|
||||
return doc[variable]
|
||||
except Exception as error:
|
||||
logger.error(f"{error.__class__.__name__}: {error}")
|
||||
return ""
|
||||
logger.critical(
|
||||
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
|
||||
)
|
||||
sys.exit(1)
|
||||
|
||||
@@ -1,57 +1,61 @@
|
||||
# AWS EC2 Configuration
|
||||
# aws.ec2_elastic_ip_shodan
|
||||
shodan_api_key: null
|
||||
# aws.ec2_securitygroup_with_many_ingress_egress_rules --> by default is 50 rules
|
||||
max_security_group_rules: 50
|
||||
# aws.ec2_instance_older_than_specific_days --> by default is 6 months (180 days)
|
||||
max_ec2_instance_age_in_days: 180
|
||||
# AWS Configuration
|
||||
aws:
|
||||
# AWS EC2 Configuration
|
||||
# aws.ec2_elastic_ip_shodan
|
||||
shodan_api_key: null
|
||||
# aws.ec2_securitygroup_with_many_ingress_egress_rules --> by default is 50 rules
|
||||
max_security_group_rules: 50
|
||||
# aws.ec2_instance_older_than_specific_days --> by default is 6 months (180 days)
|
||||
max_ec2_instance_age_in_days: 180
|
||||
|
||||
# AWS VPC Configuration (vpc_endpoint_connections_trust_boundaries, vpc_endpoint_services_allowed_principals_trust_boundaries)
|
||||
# Single account environment: No action required. The AWS account number will be automatically added by the checks.
|
||||
# Multi account environment: Any additional trusted account number should be added as a space separated list, e.g.
|
||||
# trusted_account_ids : ["123456789012", "098765432109", "678901234567"]
|
||||
trusted_account_ids: []
|
||||
# AWS VPC Configuration (vpc_endpoint_connections_trust_boundaries, vpc_endpoint_services_allowed_principals_trust_boundaries)
|
||||
# Single account environment: No action required. The AWS account number will be automatically added by the checks.
|
||||
# Multi account environment: Any additional trusted account number should be added as a space separated list, e.g.
|
||||
# trusted_account_ids : ["123456789012", "098765432109", "678901234567"]
|
||||
trusted_account_ids: []
|
||||
|
||||
# AWS Cloudwatch Configuration
|
||||
# aws.cloudwatch_log_group_retention_policy_specific_days_enabled --> by default is 365 days
|
||||
log_group_retention_days: 365
|
||||
# AWS Cloudwatch Configuration
|
||||
# aws.cloudwatch_log_group_retention_policy_specific_days_enabled --> by default is 365 days
|
||||
log_group_retention_days: 365
|
||||
|
||||
# AWS AppStream Session Configuration
|
||||
# aws.appstream_fleet_session_idle_disconnect_timeout
|
||||
max_idle_disconnect_timeout_in_seconds: 600 # 10 Minutes
|
||||
# aws.appstream_fleet_session_disconnect_timeout
|
||||
max_disconnect_timeout_in_seconds: 300 # 5 Minutes
|
||||
# aws.appstream_fleet_maximum_session_duration
|
||||
max_session_duration_seconds: 36000 # 10 Hours
|
||||
# AWS AppStream Session Configuration
|
||||
# aws.appstream_fleet_session_idle_disconnect_timeout
|
||||
max_idle_disconnect_timeout_in_seconds: 600 # 10 Minutes
|
||||
# aws.appstream_fleet_session_disconnect_timeout
|
||||
max_disconnect_timeout_in_seconds: 300 # 5 Minutes
|
||||
# aws.appstream_fleet_maximum_session_duration
|
||||
max_session_duration_seconds: 36000 # 10 Hours
|
||||
|
||||
# AWS Lambda Configuration
|
||||
# aws.awslambda_function_using_supported_runtimes
|
||||
obsolete_lambda_runtimes:
|
||||
[
|
||||
"python3.6",
|
||||
"python2.7",
|
||||
"nodejs4.3",
|
||||
"nodejs4.3-edge",
|
||||
"nodejs6.10",
|
||||
"nodejs",
|
||||
"nodejs8.10",
|
||||
"nodejs10.x",
|
||||
"dotnetcore1.0",
|
||||
"dotnetcore2.0",
|
||||
"dotnetcore2.1",
|
||||
"ruby2.5",
|
||||
]
|
||||
# AWS Lambda Configuration
|
||||
# aws.awslambda_function_using_supported_runtimes
|
||||
obsolete_lambda_runtimes:
|
||||
[
|
||||
"python3.6",
|
||||
"python2.7",
|
||||
"nodejs4.3",
|
||||
"nodejs4.3-edge",
|
||||
"nodejs6.10",
|
||||
"nodejs",
|
||||
"nodejs8.10",
|
||||
"nodejs10.x",
|
||||
"dotnetcore1.0",
|
||||
"dotnetcore2.0",
|
||||
"dotnetcore2.1",
|
||||
"ruby2.5",
|
||||
]
|
||||
|
||||
# AWS Organizations
|
||||
# organizations_scp_check_deny_regions
|
||||
# organizations_enabled_regions: [
|
||||
# 'eu-central-1',
|
||||
# 'eu-west-1',
|
||||
# "us-east-1"
|
||||
# ]
|
||||
organizations_enabled_regions: []
|
||||
# organizations_delegated_administrators
|
||||
# organizations_trusted_delegated_administrators: [
|
||||
# "12345678901"
|
||||
# ]
|
||||
organizations_trusted_delegated_administrators: []
|
||||
# AWS Organizations
|
||||
# organizations_scp_check_deny_regions
|
||||
# organizations_enabled_regions: [
|
||||
# 'eu-central-1',
|
||||
# 'eu-west-1',
|
||||
# "us-east-1"
|
||||
# ]
|
||||
organizations_enabled_regions: []
|
||||
organizations_trusted_delegated_administrators: []
|
||||
|
||||
# Azure Configuration
|
||||
azure:
|
||||
|
||||
# GCP Configuration
|
||||
gcp:
|
||||
|
||||
@@ -207,42 +207,43 @@ def list_categories(bulk_checks_metadata: dict) -> set():
|
||||
|
||||
def print_categories(categories: set):
|
||||
categories_num = len(categories)
|
||||
plural_string = f"There are {Fore.YELLOW}{categories_num}{Style.RESET_ALL} available categories: \n"
|
||||
singular_string = f"There is {Fore.YELLOW}{categories_num}{Style.RESET_ALL} available category: \n"
|
||||
plural_string = f"\nThere are {Fore.YELLOW}{categories_num}{Style.RESET_ALL} available categories.\n"
|
||||
singular_string = f"\nThere is {Fore.YELLOW}{categories_num}{Style.RESET_ALL} available category.\n"
|
||||
|
||||
message = plural_string if categories_num > 1 else singular_string
|
||||
print(message)
|
||||
for category in categories:
|
||||
print(f"- {category}")
|
||||
|
||||
print(message)
|
||||
|
||||
|
||||
def print_services(service_list: set):
|
||||
services_num = len(service_list)
|
||||
plural_string = (
|
||||
f"There are {Fore.YELLOW}{services_num}{Style.RESET_ALL} available services: \n"
|
||||
)
|
||||
plural_string = f"\nThere are {Fore.YELLOW}{services_num}{Style.RESET_ALL} available services.\n"
|
||||
singular_string = (
|
||||
f"There is {Fore.YELLOW}{services_num}{Style.RESET_ALL} available service: \n"
|
||||
f"\nThere is {Fore.YELLOW}{services_num}{Style.RESET_ALL} available service.\n"
|
||||
)
|
||||
|
||||
message = plural_string if services_num > 1 else singular_string
|
||||
print(message)
|
||||
|
||||
for service in service_list:
|
||||
print(f"- {service}")
|
||||
|
||||
print(message)
|
||||
|
||||
|
||||
def print_compliance_frameworks(
|
||||
bulk_compliance_frameworks: dict,
|
||||
):
|
||||
frameworks_num = len(bulk_compliance_frameworks.keys())
|
||||
plural_string = f"There are {Fore.YELLOW}{frameworks_num}{Style.RESET_ALL} available Compliance Frameworks: \n"
|
||||
singular_string = f"There is {Fore.YELLOW}{frameworks_num}{Style.RESET_ALL} available Compliance Framework: \n"
|
||||
plural_string = f"\nThere are {Fore.YELLOW}{frameworks_num}{Style.RESET_ALL} available Compliance Frameworks.\n"
|
||||
singular_string = f"\nThere is {Fore.YELLOW}{frameworks_num}{Style.RESET_ALL} available Compliance Framework.\n"
|
||||
message = plural_string if frameworks_num > 1 else singular_string
|
||||
|
||||
print(message)
|
||||
for framework in bulk_compliance_frameworks.keys():
|
||||
print(f"\t- {Fore.YELLOW}{framework}{Style.RESET_ALL}")
|
||||
print(f"- {framework}")
|
||||
|
||||
print(message)
|
||||
|
||||
|
||||
def print_compliance_requirements(
|
||||
|
||||
@@ -5,6 +5,7 @@ from argparse import RawTextHelpFormatter
|
||||
from prowler.config.config import (
|
||||
available_compliance_frameworks,
|
||||
check_current_version,
|
||||
default_config_file_path,
|
||||
default_output_directory,
|
||||
)
|
||||
from prowler.providers.aws.aws_provider import get_aws_available_regions
|
||||
@@ -45,6 +46,7 @@ Detailed documentation at https://docs.prowler.cloud
|
||||
self.__init_checks_parser__()
|
||||
self.__init_exclude_checks_parser__()
|
||||
self.__init_list_checks_parser__()
|
||||
self.__init_config_parser__()
|
||||
|
||||
# Init Providers Arguments
|
||||
self.__init_aws_parser__()
|
||||
@@ -260,6 +262,15 @@ Detailed documentation at https://docs.prowler.cloud
|
||||
help="List the available check's categories",
|
||||
)
|
||||
|
||||
def __init_config_parser__(self):
|
||||
config_parser = self.common_providers_parser.add_argument_group("Configuration")
|
||||
config_parser.add_argument(
|
||||
"--config-file",
|
||||
nargs="?",
|
||||
default=default_config_file_path,
|
||||
help="Set configuration file path",
|
||||
)
|
||||
|
||||
def __init_aws_parser__(self):
|
||||
"""Init the AWS Provider CLI parser"""
|
||||
aws_parser = self.subparsers.add_parser(
|
||||
|
||||
@@ -14,7 +14,7 @@ logging_levels = {
|
||||
def set_logging_config(log_level: str, log_file: str = None, only_logs: bool = False):
|
||||
# Logs formatter
|
||||
stream_formatter = logging.Formatter(
|
||||
"%(asctime)s [File: %(filename)s:%(lineno)d] \t[Module: %(module)s]\t %(levelname)s: %(message)s"
|
||||
"\n%(asctime)s [File: %(filename)s:%(lineno)d] \t[Module: %(module)s]\t %(levelname)s: %(message)s"
|
||||
)
|
||||
log_file_formatter = logging.Formatter(
|
||||
'{"timestamp": "%(asctime)s", "filename": "%(filename)s:%(lineno)d", "level": "%(levelname)s", "module": "%(module)s", "message": "%(message)s"}'
|
||||
|
||||
@@ -145,6 +145,7 @@
|
||||
"eu-west-1",
|
||||
"eu-west-2",
|
||||
"eu-west-3",
|
||||
"il-central-1",
|
||||
"me-central-1",
|
||||
"me-south-1",
|
||||
"sa-east-1",
|
||||
@@ -786,7 +787,10 @@
|
||||
"us-west-1",
|
||||
"us-west-2"
|
||||
],
|
||||
"aws-cn": [],
|
||||
"aws-cn": [
|
||||
"cn-north-1",
|
||||
"cn-northwest-1"
|
||||
],
|
||||
"aws-us-gov": []
|
||||
}
|
||||
},
|
||||
@@ -2340,7 +2344,6 @@
|
||||
"aws": [
|
||||
"ap-northeast-1",
|
||||
"ap-northeast-2",
|
||||
"ap-south-1",
|
||||
"ap-southeast-1",
|
||||
"ap-southeast-2",
|
||||
"ca-central-1",
|
||||
@@ -3226,6 +3229,7 @@
|
||||
"eu-west-1",
|
||||
"eu-west-2",
|
||||
"eu-west-3",
|
||||
"il-central-1",
|
||||
"me-central-1",
|
||||
"me-south-1",
|
||||
"sa-east-1",
|
||||
@@ -4547,6 +4551,7 @@
|
||||
"regions": {
|
||||
"aws": [
|
||||
"af-south-1",
|
||||
"ap-east-1",
|
||||
"ap-northeast-1",
|
||||
"ap-northeast-2",
|
||||
"ap-northeast-3",
|
||||
@@ -4561,9 +4566,11 @@
|
||||
"eu-west-1",
|
||||
"eu-west-2",
|
||||
"eu-west-3",
|
||||
"me-south-1",
|
||||
"sa-east-1",
|
||||
"us-east-1",
|
||||
"us-east-2",
|
||||
"us-west-1",
|
||||
"us-west-2"
|
||||
],
|
||||
"aws-cn": [],
|
||||
@@ -5292,6 +5299,7 @@
|
||||
"eu-west-1",
|
||||
"eu-west-2",
|
||||
"eu-west-3",
|
||||
"il-central-1",
|
||||
"me-central-1",
|
||||
"me-south-1",
|
||||
"sa-east-1",
|
||||
@@ -5615,6 +5623,7 @@
|
||||
"eu-west-1",
|
||||
"eu-west-2",
|
||||
"eu-west-3",
|
||||
"il-central-1",
|
||||
"me-central-1",
|
||||
"me-south-1",
|
||||
"sa-east-1",
|
||||
@@ -5627,7 +5636,10 @@
|
||||
"cn-north-1",
|
||||
"cn-northwest-1"
|
||||
],
|
||||
"aws-us-gov": []
|
||||
"aws-us-gov": [
|
||||
"us-gov-east-1",
|
||||
"us-gov-west-1"
|
||||
]
|
||||
}
|
||||
},
|
||||
"license-manager-user-subscriptions": {
|
||||
@@ -6435,6 +6447,7 @@
|
||||
"monitron": {
|
||||
"regions": {
|
||||
"aws": [
|
||||
"ap-southeast-2",
|
||||
"eu-west-1",
|
||||
"us-east-1"
|
||||
],
|
||||
@@ -6534,6 +6547,7 @@
|
||||
"eu-west-1",
|
||||
"eu-west-2",
|
||||
"eu-west-3",
|
||||
"me-central-1",
|
||||
"me-south-1",
|
||||
"sa-east-1",
|
||||
"us-east-1",
|
||||
@@ -7871,6 +7885,7 @@
|
||||
"eu-west-1",
|
||||
"eu-west-2",
|
||||
"eu-west-3",
|
||||
"il-central-1",
|
||||
"me-central-1",
|
||||
"me-south-1",
|
||||
"sa-east-1",
|
||||
@@ -8260,25 +8275,36 @@
|
||||
"schemas": {
|
||||
"regions": {
|
||||
"aws": [
|
||||
"af-south-1",
|
||||
"ap-east-1",
|
||||
"ap-northeast-1",
|
||||
"ap-northeast-2",
|
||||
"ap-northeast-3",
|
||||
"ap-south-1",
|
||||
"ap-southeast-1",
|
||||
"ap-southeast-2",
|
||||
"ap-southeast-3",
|
||||
"ca-central-1",
|
||||
"eu-central-1",
|
||||
"eu-central-2",
|
||||
"eu-north-1",
|
||||
"eu-south-1",
|
||||
"eu-south-2",
|
||||
"eu-west-1",
|
||||
"eu-west-2",
|
||||
"eu-west-3",
|
||||
"me-central-1",
|
||||
"me-south-1",
|
||||
"sa-east-1",
|
||||
"us-east-1",
|
||||
"us-east-2",
|
||||
"us-west-1",
|
||||
"us-west-2"
|
||||
],
|
||||
"aws-cn": [],
|
||||
"aws-cn": [
|
||||
"cn-north-1",
|
||||
"cn-northwest-1"
|
||||
],
|
||||
"aws-us-gov": []
|
||||
}
|
||||
},
|
||||
@@ -8363,6 +8389,7 @@
|
||||
"eu-west-1",
|
||||
"eu-west-2",
|
||||
"eu-west-3",
|
||||
"il-central-1",
|
||||
"me-central-1",
|
||||
"me-south-1",
|
||||
"sa-east-1",
|
||||
@@ -8498,6 +8525,7 @@
|
||||
"eu-west-1",
|
||||
"eu-west-2",
|
||||
"eu-west-3",
|
||||
"il-central-1",
|
||||
"me-central-1",
|
||||
"me-south-1",
|
||||
"sa-east-1",
|
||||
@@ -9743,7 +9771,11 @@
|
||||
"ap-northeast-1",
|
||||
"ap-southeast-1",
|
||||
"ap-southeast-2",
|
||||
"ca-central-1",
|
||||
"eu-central-1",
|
||||
"eu-north-1",
|
||||
"eu-west-1",
|
||||
"eu-west-2",
|
||||
"us-east-1",
|
||||
"us-east-2",
|
||||
"us-west-2"
|
||||
|
||||
@@ -36,4 +36,5 @@ current_audit_info = AWS_Audit_Info(
|
||||
audited_regions=None,
|
||||
organizations_metadata=None,
|
||||
audit_metadata=None,
|
||||
audit_config=None,
|
||||
)
|
||||
|
||||
@@ -51,3 +51,4 @@ class AWS_Audit_Info:
|
||||
audit_resources: list
|
||||
organizations_metadata: AWS_Organizations_Info
|
||||
audit_metadata: Optional[Any] = None
|
||||
audit_config: Optional[dict] = None
|
||||
|
||||
@@ -6,10 +6,19 @@ def is_account_only_allowed_in_condition(
|
||||
valid_condition_options = {
|
||||
"StringEquals": [
|
||||
"aws:SourceAccount",
|
||||
"aws:SourceOwner",
|
||||
"s3:ResourceAccount",
|
||||
"aws:PrincipalAccount",
|
||||
"aws:ResourceAccount",
|
||||
],
|
||||
"StringLike": [
|
||||
"aws:SourceAccount",
|
||||
"aws:SourceOwner",
|
||||
"aws:SourceArn",
|
||||
"aws:PrincipalArn",
|
||||
"aws:ResourceAccount",
|
||||
"aws:PrincipalAccount",
|
||||
],
|
||||
"StringLike": ["aws:SourceArn", "aws:PrincipalArn"],
|
||||
"ArnLike": ["aws:SourceArn", "aws:PrincipalArn"],
|
||||
"ArnEquals": ["aws:SourceArn", "aws:PrincipalArn"],
|
||||
}
|
||||
|
||||
@@ -4,11 +4,7 @@ from operator import itemgetter
|
||||
|
||||
from boto3 import session
|
||||
|
||||
from prowler.config.config import (
|
||||
json_asff_file_suffix,
|
||||
output_file_timestamp,
|
||||
timestamp_utc,
|
||||
)
|
||||
from prowler.config.config import json_asff_file_suffix, timestamp_utc
|
||||
from prowler.lib.logger import logger
|
||||
from prowler.lib.outputs.models import Check_Output_JSON_ASFF
|
||||
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
|
||||
@@ -60,16 +56,14 @@ def send_to_security_hub(
|
||||
|
||||
# Move previous Security Hub check findings to ARCHIVED (as prowler didn't re-detect them)
|
||||
def resolve_security_hub_previous_findings(
|
||||
output_directory: str, audit_info: AWS_Audit_Info
|
||||
output_directory: str, output_filename: str, audit_info: AWS_Audit_Info
|
||||
) -> list:
|
||||
"""
|
||||
resolve_security_hub_previous_findings archives all the findings that does not appear in the current execution
|
||||
"""
|
||||
logger.info("Checking previous findings in Security Hub to archive them.")
|
||||
# Read current findings from json-asff file
|
||||
with open(
|
||||
f"{output_directory}/prowler-output-{audit_info.audited_account}-{output_file_timestamp}{json_asff_file_suffix}"
|
||||
) as f:
|
||||
with open(f"{output_directory}/{output_filename}{json_asff_file_suffix}") as f:
|
||||
json_asff_file = json.load(f)
|
||||
|
||||
# Sort by region
|
||||
|
||||
@@ -4,6 +4,7 @@ from prowler.providers.aws.aws_provider import (
|
||||
generate_regional_clients,
|
||||
get_default_region,
|
||||
)
|
||||
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
|
||||
|
||||
|
||||
class AWSService:
|
||||
@@ -14,7 +15,7 @@ class AWSService:
|
||||
- Also handles if the AWS Service is Global
|
||||
"""
|
||||
|
||||
def __init__(self, service, audit_info, global_service=False):
|
||||
def __init__(self, service: str, audit_info: AWS_Audit_Info, global_service=False):
|
||||
# Audit Information
|
||||
self.audit_info = audit_info
|
||||
self.audited_account = audit_info.audited_account
|
||||
@@ -22,6 +23,7 @@ class AWSService:
|
||||
self.audited_partition = audit_info.audited_partition
|
||||
self.audit_resources = audit_info.audit_resources
|
||||
self.audited_checks = audit_info.audit_metadata.expected_checks
|
||||
self.audit_config = audit_info.audit_config
|
||||
|
||||
# AWS Session
|
||||
self.session = audit_info.audit_session
|
||||
|
||||
@@ -25,6 +25,7 @@ class accessanalyzer_enabled(Check):
|
||||
f"IAM Access Analyzer in account {analyzer.name} is not enabled."
|
||||
)
|
||||
report.resource_id = analyzer.name
|
||||
report.resource_arn = analyzer.arn
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = (
|
||||
|
||||
@@ -12,9 +12,7 @@ class accessanalyzer_enabled_without_findings(Check):
|
||||
report.region = analyzer.region
|
||||
if analyzer.status == "ACTIVE":
|
||||
report.status = "PASS"
|
||||
report.status_extended = (
|
||||
f"IAM Access Analyzer {analyzer.name} does not have active findings"
|
||||
)
|
||||
report.status_extended = f"IAM Access Analyzer {analyzer.name} does not have active findings."
|
||||
report.resource_id = analyzer.name
|
||||
report.resource_arn = analyzer.arn
|
||||
report.resource_tags = analyzer.tags
|
||||
@@ -26,20 +24,21 @@ class accessanalyzer_enabled_without_findings(Check):
|
||||
|
||||
if active_finding_counter > 0:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"IAM Access Analyzer {analyzer.name} has {active_finding_counter} active findings"
|
||||
report.status_extended = f"IAM Access Analyzer {analyzer.name} has {active_finding_counter} active findings."
|
||||
report.resource_id = analyzer.name
|
||||
report.resource_arn = analyzer.arn
|
||||
report.resource_tags = analyzer.tags
|
||||
elif analyzer.status == "NOT_AVAILABLE":
|
||||
report.status = "FAIL"
|
||||
report.status_extended = (
|
||||
f"IAM Access Analyzer in account {analyzer.name} is not enabled"
|
||||
f"IAM Access Analyzer in account {analyzer.name} is not enabled."
|
||||
)
|
||||
report.resource_id = analyzer.name
|
||||
report.resource_arn = analyzer.arn
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = (
|
||||
f"IAM Access Analyzer {analyzer.name} is not active"
|
||||
f"IAM Access Analyzer {analyzer.name} is not active."
|
||||
)
|
||||
report.resource_id = analyzer.name
|
||||
report.resource_arn = analyzer.arn
|
||||
|
||||
@@ -43,7 +43,7 @@ class AccessAnalyzer(AWSService):
|
||||
if analyzer_count == 0:
|
||||
self.analyzers.append(
|
||||
Analyzer(
|
||||
arn="",
|
||||
arn=self.audited_account_arn,
|
||||
name=self.audited_account,
|
||||
status="NOT_AVAILABLE",
|
||||
tags=[],
|
||||
|
||||
@@ -15,11 +15,13 @@ class apigatewayv2_access_logging_enabled(Check):
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"API Gateway V2 {api.name} ID {api.id} in stage {stage.name} has access logging enabled."
|
||||
report.resource_id = api.name
|
||||
report.resource_arn = api.arn
|
||||
report.resource_tags = api.tags
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"API Gateway V2 {api.name} ID {api.id} in stage {stage.name} has access logging disabled."
|
||||
report.resource_id = api.name
|
||||
report.resource_arn = api.arn
|
||||
report.resource_tags = api.tags
|
||||
findings.append(report)
|
||||
|
||||
|
||||
@@ -19,12 +19,12 @@ class appstream_fleet_default_internet_access_disabled(Check):
|
||||
if fleet.enable_default_internet_access:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = (
|
||||
f"Fleet {fleet.name} has default internet access enabled"
|
||||
f"Fleet {fleet.name} has default internet access enabled."
|
||||
)
|
||||
else:
|
||||
report.status = "PASS"
|
||||
report.status_extended = (
|
||||
f"Fleet {fleet.name} has default internet access disabled"
|
||||
f"Fleet {fleet.name} has default internet access disabled."
|
||||
)
|
||||
|
||||
findings.append(report)
|
||||
|
||||
@@ -1,16 +1,18 @@
|
||||
from prowler.config.config import get_config_var
|
||||
from prowler.lib.check.models import Check, Check_Report_AWS
|
||||
from prowler.providers.aws.services.appstream.appstream_client import appstream_client
|
||||
|
||||
max_session_duration_seconds = get_config_var("max_session_duration_seconds")
|
||||
"""max_session_duration_seconds, default: 36000 seconds (10 hours)"""
|
||||
|
||||
|
||||
class appstream_fleet_maximum_session_duration(Check):
|
||||
"""Check if there are AppStream Fleets with the user maximum session duration no longer than 10 hours"""
|
||||
|
||||
def execute(self):
|
||||
"""Execute the appstream_fleet_maximum_session_duration check"""
|
||||
|
||||
# max_session_duration_seconds, default: 36000 seconds (10 hours)
|
||||
max_session_duration_seconds = appstream_client.audit_config.get(
|
||||
"max_session_duration_seconds", 36000
|
||||
)
|
||||
|
||||
findings = []
|
||||
for fleet in appstream_client.fleets:
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
@@ -21,10 +23,10 @@ class appstream_fleet_maximum_session_duration(Check):
|
||||
|
||||
if fleet.max_user_duration_in_seconds < max_session_duration_seconds:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"Fleet {fleet.name} has the maximum session duration configured for less that 10 hours"
|
||||
report.status_extended = f"Fleet {fleet.name} has the maximum session duration configured for less that 10 hours."
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"Fleet {fleet.name} has the maximum session duration configured for more that 10 hours"
|
||||
report.status_extended = f"Fleet {fleet.name} has the maximum session duration configured for more that 10 hours."
|
||||
|
||||
findings.append(report)
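The hunk above is part of a broader pattern in these commits: per-check thresholds move from module-level `get_config_var()` calls to the provider's `audit_config`, read at execution time with an inline default. A minimal sketch (illustrative; the override mechanism shown is an assumption) of why that helps:

```python
# Because the threshold is read from audit_config when the check runs, a test or a
# custom configuration can tune it per execution without patching module globals.
appstream_client.audit_config = {"max_session_duration_seconds": 7200}  # 2 hours
findings = appstream_fleet_maximum_session_duration().execute()
```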
|
||||
|
||||
|
||||
@@ -1,16 +1,18 @@
|
||||
from prowler.config.config import get_config_var
|
||||
from prowler.lib.check.models import Check, Check_Report_AWS
|
||||
from prowler.providers.aws.services.appstream.appstream_client import appstream_client
|
||||
|
||||
max_disconnect_timeout_in_seconds = get_config_var("max_disconnect_timeout_in_seconds")
|
||||
"""max_disconnect_timeout_in_seconds, default: 300 seconds (5 minutes)"""
|
||||
|
||||
|
||||
class appstream_fleet_session_disconnect_timeout(Check):
|
||||
"""Check if there are AppStream Fleets with the session disconnect timeout set to 5 minutes or less"""
|
||||
|
||||
def execute(self):
|
||||
"""Execute the appstream_fleet_maximum_session_duration check"""
|
||||
|
||||
# max_disconnect_timeout_in_seconds, default: 300 seconds (5 minutes)
|
||||
max_disconnect_timeout_in_seconds = appstream_client.audit_config.get(
|
||||
"max_disconnect_timeout_in_seconds", 300
|
||||
)
|
||||
|
||||
findings = []
|
||||
for fleet in appstream_client.fleets:
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
@@ -21,11 +23,11 @@ class appstream_fleet_session_disconnect_timeout(Check):
|
||||
|
||||
if fleet.disconnect_timeout_in_seconds <= max_disconnect_timeout_in_seconds:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"Fleet {fleet.name} has the session disconnect timeout set to less than 5 minutes"
|
||||
report.status_extended = f"Fleet {fleet.name} has the session disconnect timeout set to less than 5 minutes."
|
||||
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"Fleet {fleet.name} has the session disconnect timeout set to more than 5 minutes"
|
||||
report.status_extended = f"Fleet {fleet.name} has the session disconnect timeout set to more than 5 minutes."
|
||||
|
||||
findings.append(report)
|
||||
|
||||
|
||||
@@ -1,18 +1,18 @@
|
||||
from prowler.config.config import get_config_var
|
||||
from prowler.lib.check.models import Check, Check_Report_AWS
|
||||
from prowler.providers.aws.services.appstream.appstream_client import appstream_client
|
||||
|
||||
max_idle_disconnect_timeout_in_seconds = get_config_var(
|
||||
"max_idle_disconnect_timeout_in_seconds"
|
||||
)
|
||||
"""max_idle_disconnect_timeout_in_seconds, default: 600 seconds (10 minutes)"""
|
||||
|
||||
|
||||
class appstream_fleet_session_idle_disconnect_timeout(Check):
|
||||
"""Check if there are AppStream Fleets with the idle disconnect timeout set to 10 minutes or less"""
|
||||
|
||||
def execute(self):
|
||||
"""Execute the appstream_fleet_session_idle_disconnect_timeout check"""
|
||||
|
||||
# max_idle_disconnect_timeout_in_seconds, default: 600 seconds (10 minutes)
|
||||
max_idle_disconnect_timeout_in_seconds = appstream_client.audit_config.get(
|
||||
"max_idle_disconnect_timeout_in_seconds", 600
|
||||
)
|
||||
|
||||
findings = []
|
||||
for fleet in appstream_client.fleets:
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
@@ -27,11 +27,11 @@ class appstream_fleet_session_idle_disconnect_timeout(Check):
|
||||
<= max_idle_disconnect_timeout_in_seconds
|
||||
):
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"Fleet {fleet.name} has the session idle disconnect timeout set to less than 10 minutes"
|
||||
report.status_extended = f"Fleet {fleet.name} has the session idle disconnect timeout set to less than 10 minutes."
|
||||
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"Fleet {fleet.name} has the session idle disconnect timeout set to more than 10 minutes"
|
||||
report.status_extended = f"Fleet {fleet.name} has the session idle disconnect timeout set to more than 10 minutes."
|
||||
|
||||
findings.append(report)
|
||||
|
||||
|
||||
0	prowler/providers/aws/services/athena/__init__.py	Normal file
4	prowler/providers/aws/services/athena/athena_client.py	Normal file
@@ -0,0 +1,4 @@
|
||||
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
|
||||
from prowler.providers.aws.services.athena.athena_service import Athena
|
||||
|
||||
athena_client = Athena(current_audit_info)
|
||||
109	prowler/providers/aws/services/athena/athena_service.py	Normal file
@@ -0,0 +1,109 @@
|
||||
from typing import Optional
|
||||
|
||||
from pydantic import BaseModel
|
||||
|
||||
from prowler.lib.logger import logger
|
||||
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
|
||||
from prowler.providers.aws.lib.service.service import AWSService
|
||||
|
||||
|
||||
################## Athena
|
||||
class Athena(AWSService):
|
||||
def __init__(self, audit_info):
|
||||
# Call AWSService's __init__
|
||||
super().__init__(__class__.__name__, audit_info)
|
||||
self.workgroups = {}
|
||||
self.__threading_call__(self.__list_workgroups__)
|
||||
self.__get_workgroups__()
|
||||
self.__list_tags_for_resource__()
|
||||
|
||||
def __list_workgroups__(self, regional_client):
|
||||
logger.info("Athena - Listing WorkGroups...")
|
||||
try:
|
||||
list_workgroups = regional_client.list_work_groups()
|
||||
for workgroup in list_workgroups["WorkGroups"]:
|
||||
workgroup_name = workgroup["Name"]
|
||||
workgroup_arn = f"arn:{self.audited_partition}:athena:{regional_client.region}:{self.audited_account}:workgroup/{workgroup_name}"
|
||||
if not self.audit_resources or (
|
||||
is_resource_filtered(workgroup_arn, self.audit_resources)
|
||||
):
|
||||
self.workgroups[workgroup_arn] = WorkGroup(
|
||||
arn=workgroup_arn,
|
||||
name=workgroup_name,
|
||||
region=regional_client.region,
|
||||
)
|
||||
|
||||
except Exception as error:
|
||||
logger.error(
|
||||
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
|
||||
)
|
||||
|
||||
def __get_workgroups__(self):
|
||||
logger.info("Athena - Getting WorkGroups...")
|
||||
try:
|
||||
for workgroup in self.workgroups.values():
|
||||
wg = self.regional_clients[workgroup.region].get_work_group(
|
||||
WorkGroup=workgroup.name
|
||||
)
|
||||
|
||||
wg_configuration = wg.get("WorkGroup").get("Configuration")
|
||||
self.workgroups[
|
||||
workgroup.arn
|
||||
].enforce_workgroup_configuration = wg_configuration.get(
|
||||
"EnforceWorkGroupConfiguration", False
|
||||
)
|
||||
|
||||
# We include an empty EncryptionConfiguration to handle if the workgroup does not have encryption configured
|
||||
encryption = (
|
||||
wg_configuration.get(
|
||||
"ResultConfiguration",
|
||||
{"EncryptionConfiguration": {}},
|
||||
)
|
||||
.get(
|
||||
"EncryptionConfiguration",
|
||||
{"EncryptionOption": ""},
|
||||
)
|
||||
.get("EncryptionOption")
|
||||
)
|
||||
|
||||
if encryption in ["SSE_S3", "SSE_KMS", "CSE_KMS"]:
|
||||
encryption_configuration = EncryptionConfiguration(
|
||||
encryption_option=encryption, encrypted=True
|
||||
)
|
||||
self.workgroups[
|
||||
workgroup.arn
|
||||
].encryption_configuration = encryption_configuration
|
||||
|
||||
except Exception as error:
|
||||
logger.error(
|
||||
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
|
||||
)
|
||||
|
||||
def __list_tags_for_resource__(self):
|
||||
logger.info("Athena - Listing Tags...")
|
||||
try:
|
||||
for workgroup in self.workgroups.values():
|
||||
regional_client = self.regional_clients[workgroup.region]
|
||||
workgroup.tags = regional_client.list_tags_for_resource(
|
||||
ResourceARN=workgroup.arn
|
||||
)["Tags"]
|
||||
except Exception as error:
|
||||
logger.error(
|
||||
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
|
||||
)
|
||||
|
||||
|
||||
class EncryptionConfiguration(BaseModel):
|
||||
encryption_option: str
|
||||
encrypted: bool
|
||||
|
||||
|
||||
class WorkGroup(BaseModel):
|
||||
arn: str
|
||||
name: str
|
||||
encryption_configuration: EncryptionConfiguration = EncryptionConfiguration(
|
||||
encryption_option="", encrypted=False
|
||||
)
|
||||
enforce_workgroup_configuration: bool = False
|
||||
region: str
|
||||
tags: Optional[list] = []
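A small worked example (values assumed) of how the nested `.get()` chain in `__get_workgroups__` above degrades when a workgroup has no result configuration at all:

```python
# WorkGroup configuration without a ResultConfiguration block
wg_configuration = {"EnforceWorkGroupConfiguration": True}

encryption = (
    wg_configuration.get("ResultConfiguration", {"EncryptionConfiguration": {}})
    .get("EncryptionConfiguration", {"EncryptionOption": ""})
    .get("EncryptionOption")
)

# encryption is None, which is not one of SSE_S3 / SSE_KMS / CSE_KMS, so the
# WorkGroup keeps its default EncryptionConfiguration(encrypted=False) and the
# athena_workgroup_encryption check reports FAIL for it.
print(encryption)  # None
```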
|
||||
@@ -0,0 +1,34 @@
|
||||
{
|
||||
"Provider": "aws",
|
||||
"CheckID": "athena_workgroup_encryption",
|
||||
"CheckTitle": "Ensure that encryption at rest is enabled for Amazon Athena query results stored in Amazon S3 in order to secure data and meet compliance requirements for data-at-rest encryption.",
|
||||
"CheckType": [
|
||||
"Software and Configuration Checks"
|
||||
],
|
||||
"ServiceName": "athena",
|
||||
"SubServiceName": "",
|
||||
"ResourceIdTemplate": "arn:partition:athena:region:account-id:workgroup/resource-id",
|
||||
"Severity": "high",
|
||||
"ResourceType": "WorkGroup",
|
||||
"Description": "Ensure that encryption at rest is enabled for Amazon Athena query results stored in Amazon S3 in order to secure data and meet compliance requirements for data-at-rest encryption.",
|
||||
"Risk": "If not enabled sensitive information at rest is not protected.",
|
||||
"RelatedUrl": "https://docs.aws.amazon.com/athena/latest/ug/encryption.html",
|
||||
"Remediation": {
|
||||
"Code": {
|
||||
"CLI": "aws athena update-work-group --region <REGION> --work-group <workgroup_name> --configuration-updates ResultConfigurationUpdates={EncryptionConfiguration={EncryptionOption=SSE_S3|SSE_KMS|CSE_KMS}}",
|
||||
"NativeIaC": "",
|
||||
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/Athena/encryption-enabled.html",
|
||||
"Terraform": "https://docs.bridgecrew.io/docs/ensure-that-athena-workgroup-is-encrypted#terraform"
|
||||
},
|
||||
"Recommendation": {
|
||||
"Text": "Enable Encryption. Use a CMK where possible. It will provide additional management and privacy benefits.",
|
||||
"Url": "https://docs.aws.amazon.com/athena/latest/ug/encrypting-query-results-stored-in-s3.html"
|
||||
}
|
||||
},
|
||||
"Categories": [
|
||||
"encryption"
|
||||
],
|
||||
"DependsOn": [],
|
||||
"RelatedTo": [],
|
||||
"Notes": ""
|
||||
}
|
||||
@@ -0,0 +1,27 @@
|
||||
from prowler.lib.check.models import Check, Check_Report_AWS
|
||||
from prowler.providers.aws.services.athena.athena_client import athena_client
|
||||
|
||||
|
||||
class athena_workgroup_encryption(Check):
|
||||
"""Check if there are Athena workgroups not encrypting query results"""
|
||||
|
||||
def execute(self):
|
||||
"""Execute the athena_workgroup_encryption check"""
|
||||
findings = []
|
||||
for workgroup in athena_client.workgroups.values():
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.region = workgroup.region
|
||||
report.resource_id = workgroup.name
|
||||
report.resource_arn = workgroup.arn
|
||||
report.resource_tags = workgroup.tags
|
||||
|
||||
if workgroup.encryption_configuration.encrypted:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"Athena WorkGroup {workgroup.name} encrypts the query results using {workgroup.encryption_configuration.encryption_option}."
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"Athena WorkGroup {workgroup.name} does not encrypt the query results."
|
||||
|
||||
findings.append(report)
|
||||
|
||||
return findings
|
||||
@@ -0,0 +1,32 @@
|
||||
{
|
||||
"Provider": "aws",
|
||||
"CheckID": "athena_workgroup_enforce_configuration",
|
||||
"CheckTitle": "Ensure that workgroup configuration is enforced so it cannot be overriden by client-side settings.",
|
||||
"CheckType": [
|
||||
"Software and Configuration Checks"
|
||||
],
|
||||
"ServiceName": "athena",
|
||||
"SubServiceName": "",
|
||||
"ResourceIdTemplate": "arn:partition:athena:region:account-id:workgroup/resource-id",
|
||||
"Severity": "medium",
|
||||
"ResourceType": "WorkGroup",
|
||||
"Description": "Ensure that workgroup configuration is enforced so it cannot be overriden by client-side settings.",
|
||||
"Risk": "If workgroup configuration is not enforced security settings like encryption can be overriden by client-side settings.",
|
||||
"RelatedUrl": "https://docs.aws.amazon.com/athena/latest/ug/workgroups-settings-override.html",
|
||||
"Remediation": {
|
||||
"Code": {
|
||||
"CLI": "aws athena update-work-group --region <REGION> --work-group <workgroup_name> --configuration-updates EnforceWorkGroupConfiguration=True",
|
||||
"NativeIaC": "https://docs.bridgecrew.io/docs/bc_aws_general_33#cloudformation",
|
||||
"Other": "",
|
||||
"Terraform": "https://docs.bridgecrew.io/docs/bc_aws_general_33#terraform"
|
||||
},
|
||||
"Recommendation": {
|
||||
"Text": "Ensure that workgroup configuration is enforced so it cannot be overriden by client-side settings.",
|
||||
"Url": "https://docs.aws.amazon.com/athena/latest/ug/workgroups-settings-override.html"
|
||||
}
|
||||
},
|
||||
"Categories": [],
|
||||
"DependsOn": [],
|
||||
"RelatedTo": [],
|
||||
"Notes": ""
|
||||
}
|
||||
@@ -0,0 +1,27 @@
|
||||
from prowler.lib.check.models import Check, Check_Report_AWS
|
||||
from prowler.providers.aws.services.athena.athena_client import athena_client
|
||||
|
||||
|
||||
class athena_workgroup_enforce_configuration(Check):
|
||||
"""Check if there are Athena workgroups not encrypting query results"""
|
||||
|
||||
def execute(self):
|
||||
"""Execute the athena_workgroup_enforce_configuration check"""
|
||||
findings = []
|
||||
for workgroup in athena_client.workgroups.values():
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.region = workgroup.region
|
||||
report.resource_id = workgroup.name
|
||||
report.resource_arn = workgroup.arn
|
||||
report.resource_tags = workgroup.tags
|
||||
|
||||
if workgroup.enforce_workgroup_configuration:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"Athena WorkGroup {workgroup.name} enforces the workgroup configuration, so it cannot be overridden by the client-side settings."
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"Athena WorkGroup {workgroup.name} does not enforce the workgroup configuration, so it can be overridden by the client-side settings."
|
||||
|
||||
findings.append(report)
|
||||
|
||||
return findings
|
||||
@@ -17,7 +17,7 @@ class awslambda_function_invoke_api_operations_cloudtrail_logging_enabled(Check)
|
||||
|
||||
report.status = "FAIL"
|
||||
report.status_extended = (
|
||||
f"Lambda function {function.name} is not recorded by CloudTrail"
|
||||
f"Lambda function {function.name} is not recorded by CloudTrail."
|
||||
)
|
||||
lambda_recorded_cloudtrail = False
|
||||
for trail in cloudtrail_client.trails:
|
||||
@@ -46,7 +46,7 @@ class awslambda_function_invoke_api_operations_cloudtrail_logging_enabled(Check)
|
||||
break
|
||||
if lambda_recorded_cloudtrail:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"Lambda function {function.name} is recorded by CloudTrail trail {trail.name}"
|
||||
report.status_extended = f"Lambda function {function.name} is recorded by CloudTrail trail {trail.name}."
|
||||
break
|
||||
findings.append(report)
|
||||
|
||||
|
||||
@@ -21,7 +21,7 @@ class awslambda_function_no_secrets_in_code(Check):
|
||||
|
||||
report.status = "PASS"
|
||||
report.status_extended = (
|
||||
f"No secrets found in Lambda function {function.name} code"
|
||||
f"No secrets found in Lambda function {function.name} code."
|
||||
)
|
||||
with tempfile.TemporaryDirectory() as tmp_dir_name:
|
||||
function.code.code_zip.extractall(tmp_dir_name)
|
||||
@@ -55,11 +55,11 @@ class awslambda_function_no_secrets_in_code(Check):
|
||||
if secrets_findings:
|
||||
final_output_string = "; ".join(secrets_findings)
|
||||
report.status = "FAIL"
|
||||
# report.status_extended = f"Potential {'secrets' if len(secrets_findings)>1 else 'secret'} found in Lambda function {function.name} code. {final_output_string}"
|
||||
# report.status_extended = f"Potential {'secrets' if len(secrets_findings)>1 else 'secret'} found in Lambda function {function.name} code. {final_output_string}."
|
||||
if len(secrets_findings) > 1:
|
||||
report.status_extended = f"Potential secrets found in Lambda function {function.name} code -> {final_output_string}"
|
||||
report.status_extended = f"Potential secrets found in Lambda function {function.name} code -> {final_output_string}."
|
||||
else:
|
||||
report.status_extended = f"Potential secret found in Lambda function {function.name} code -> {final_output_string}"
|
||||
report.status_extended = f"Potential secret found in Lambda function {function.name} code -> {final_output_string}."
|
||||
# break // Don't break as there may be additional findings
|
||||
|
||||
findings.append(report)
|
||||
|
||||
@@ -21,7 +21,7 @@ class awslambda_function_no_secrets_in_variables(Check):
|
||||
|
||||
report.status = "PASS"
|
||||
report.status_extended = (
|
||||
f"No secrets found in Lambda function {function.name} variables"
|
||||
f"No secrets found in Lambda function {function.name} variables."
|
||||
)
|
||||
|
||||
if function.environment:
|
||||
@@ -47,7 +47,7 @@ class awslambda_function_no_secrets_in_variables(Check):
|
||||
]
|
||||
)
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"Potential secret found in Lambda function {function.name} variables -> {secrets_string}"
|
||||
report.status_extended = f"Potential secret found in Lambda function {function.name} variables -> {secrets_string}."
|
||||
|
||||
os.remove(temp_env_data_file.name)
|
||||
|
||||
|
||||
@@ -13,7 +13,7 @@ class awslambda_function_not_publicly_accessible(Check):
|
||||
report.resource_tags = function.tags
|
||||
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"Lambda function {function.name} has a policy resource-based policy not public"
|
||||
report.status_extended = f"Lambda function {function.name} has a policy resource-based policy not public."
|
||||
|
||||
public_access = False
|
||||
if function.policy:
|
||||
@@ -36,7 +36,7 @@ class awslambda_function_not_publicly_accessible(Check):
|
||||
|
||||
if public_access:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"Lambda function {function.name} has a policy resource-based policy with public access"
|
||||
report.status_extended = f"Lambda function {function.name} has a policy resource-based policy with public access."
|
||||
|
||||
findings.append(report)
|
||||
|
||||
|
||||
@@ -1,4 +1,3 @@
|
||||
from prowler.config.config import get_config_var
|
||||
from prowler.lib.check.models import Check, Check_Report_AWS
|
||||
from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_client
|
||||
|
||||
@@ -14,12 +13,14 @@ class awslambda_function_using_supported_runtimes(Check):
|
||||
report.resource_arn = function.arn
|
||||
report.resource_tags = function.tags
|
||||
|
||||
if function.runtime in get_config_var("obsolete_lambda_runtimes"):
|
||||
if function.runtime in awslambda_client.audit_config.get(
|
||||
"obsolete_lambda_runtimes", []
|
||||
):
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"Lambda function {function.name} is using {function.runtime} which is obsolete"
|
||||
report.status_extended = f"Lambda function {function.name} is using {function.runtime} which is obsolete."
|
||||
else:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"Lambda function {function.name} is using {function.runtime} which is supported"
|
||||
report.status_extended = f"Lambda function {function.name} is using {function.runtime} which is supported."
|
||||
|
||||
findings.append(report)
|
||||
|
||||
|
||||
@@ -7,15 +7,13 @@ class backup_plans_exist(Check):
|
||||
findings = []
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.status = "FAIL"
|
||||
report.status_extended = "No Backup Plan Exist"
|
||||
report.status_extended = "No Backup Plan exist."
|
||||
report.resource_arn = backup_client.audited_account_arn
|
||||
report.resource_id = backup_client.audited_account
|
||||
report.region = backup_client.region
|
||||
if backup_client.backup_plans:
|
||||
report.status = "PASS"
|
||||
report.status_extended = (
|
||||
f"At least one backup plan exists: {backup_client.backup_plans[0].name}"
|
||||
)
|
||||
report.status_extended = f"At least one backup plan exists: {backup_client.backup_plans[0].name}."
|
||||
report.resource_arn = backup_client.backup_plans[0].arn
|
||||
report.resource_id = backup_client.backup_plans[0].name
|
||||
report.region = backup_client.backup_plans[0].region
|
||||
|
||||
@@ -9,13 +9,13 @@ class backup_reportplans_exist(Check):
|
||||
if backup_client.backup_plans:
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.status = "FAIL"
|
||||
report.status_extended = "No Backup Report Plan Exist"
|
||||
report.status_extended = "No Backup Report Plan exist."
|
||||
report.resource_arn = backup_client.audited_account_arn
|
||||
report.resource_id = backup_client.audited_account
|
||||
report.region = backup_client.region
|
||||
if backup_client.backup_report_plans:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"At least one backup report plan exists: { backup_client.backup_report_plans[0].name}"
|
||||
report.status_extended = f"At least one backup report plan exists: {backup_client.backup_report_plans[0].name}."
|
||||
report.resource_arn = backup_client.backup_report_plans[0].arn
|
||||
report.resource_id = backup_client.backup_report_plans[0].name
|
||||
report.region = backup_client.backup_report_plans[0].region
|
||||
|
||||
@@ -11,7 +11,7 @@ class backup_vaults_encrypted(Check):
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.status = "FAIL"
|
||||
report.status_extended = (
|
||||
f"Backup Vault {backup_vault.name} is not encrypted"
|
||||
f"Backup Vault {backup_vault.name} is not encrypted."
|
||||
)
|
||||
report.resource_arn = backup_vault.arn
|
||||
report.resource_id = backup_vault.name
|
||||
@@ -20,7 +20,7 @@ class backup_vaults_encrypted(Check):
|
||||
if backup_vault.encryption:
|
||||
report.status = "PASS"
|
||||
report.status_extended = (
|
||||
f"Backup Vault {backup_vault.name} is encrypted"
|
||||
f"Backup Vault {backup_vault.name} is encrypted."
|
||||
)
|
||||
# then we store the finding
|
||||
findings.append(report)
|
||||
|
||||
@@ -7,13 +7,13 @@ class backup_vaults_exist(Check):
|
||||
findings = []
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.status = "FAIL"
|
||||
report.status_extended = "No Backup Vault Exist"
|
||||
report.status_extended = "No Backup Vault exist."
|
||||
report.resource_arn = backup_client.audited_account_arn
|
||||
report.resource_id = backup_client.audited_account
|
||||
report.region = backup_client.region
|
||||
if backup_client.backup_vaults:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"At least one backup vault exists: { backup_client.backup_vaults[0].name}"
|
||||
report.status_extended = f"At least one backup vault exists: {backup_client.backup_vaults[0].name}."
|
||||
report.resource_arn = backup_client.backup_vaults[0].arn
|
||||
report.resource_id = backup_client.backup_vaults[0].name
|
||||
report.region = backup_client.backup_vaults[0].region
|
||||
|
||||
@@ -20,10 +20,10 @@ class cloudformation_stacks_termination_protection_enabled(Check):
|
||||
|
||||
if stack.enable_termination_protection:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"CloudFormation {stack.name} has termination protection enabled"
|
||||
report.status_extended = f"CloudFormation {stack.name} has termination protection enabled."
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"CloudFormation {stack.name} has termination protection disabled"
|
||||
report.status_extended = f"CloudFormation {stack.name} has termination protection disabled."
|
||||
findings.append(report)
|
||||
|
||||
return findings
|
||||
|
||||
@@ -18,10 +18,10 @@ class cloudfront_distributions_field_level_encryption_enabled(Check):
|
||||
and distribution.default_cache_config.field_level_encryption_id
|
||||
):
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} has Field Level Encryption enabled"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} has Field Level Encryption enabled."
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} has Field Level Encryption disabled"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} has Field Level Encryption disabled."
|
||||
|
||||
findings.append(report)
|
||||
|
||||
|
||||
@@ -18,10 +18,10 @@ class cloudfront_distributions_geo_restrictions_enabled(Check):
|
||||
report.resource_tags = distribution.tags
|
||||
if distribution.geo_restriction_type == GeoRestrictionType.none:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} has Geo restrictions disabled"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} has Geo restrictions disabled."
|
||||
else:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} has Geo restrictions enabled"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} has Geo restrictions enabled."
|
||||
|
||||
findings.append(report)
|
||||
|
||||
|
||||
@@ -24,7 +24,7 @@ class cloudfront_distributions_https_enabled(Check):
|
||||
):
|
||||
report.status = "PASS"
|
||||
report.status_extended = (
|
||||
f"CloudFront Distribution {distribution.id} has redirect to HTTPS"
|
||||
f"CloudFront Distribution {distribution.id} has redirect to HTTPS."
|
||||
)
|
||||
elif (
|
||||
distribution.default_cache_config
|
||||
@@ -33,11 +33,11 @@ class cloudfront_distributions_https_enabled(Check):
|
||||
):
|
||||
report.status = "PASS"
|
||||
report.status_extended = (
|
||||
f"CloudFront Distribution {distribution.id} has HTTPS only"
|
||||
f"CloudFront Distribution {distribution.id} has HTTPS only."
|
||||
)
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} viewers can use HTTP or HTTPS"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} viewers can use HTTP or HTTPS."
|
||||
|
||||
findings.append(report)
|
||||
|
||||
|
||||
@@ -19,12 +19,12 @@ class cloudfront_distributions_logging_enabled(Check):
|
||||
):
|
||||
report.status = "PASS"
|
||||
report.status_extended = (
|
||||
f"CloudFront Distribution {distribution.id} has logging enabled"
|
||||
f"CloudFront Distribution {distribution.id} has logging enabled."
|
||||
)
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = (
|
||||
f"CloudFront Distribution {distribution.id} has logging disabled"
|
||||
f"CloudFront Distribution {distribution.id} has logging disabled."
|
||||
)
|
||||
findings.append(report)
|
||||
|
||||
|
||||
@@ -17,7 +17,7 @@ class cloudfront_distributions_using_deprecated_ssl_protocols(Check):
|
||||
report.resource_id = distribution.id
|
||||
report.resource_tags = distribution.tags
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} is not using a deprecated SSL protocol"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} is not using a deprecated SSL protocol."
|
||||
|
||||
bad_ssl_protocol = False
|
||||
for origin in distribution.origins:
|
||||
@@ -34,7 +34,7 @@ class cloudfront_distributions_using_deprecated_ssl_protocols(Check):
|
||||
break
|
||||
if bad_ssl_protocol:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} is using a deprecated SSL protocol"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} is using a deprecated SSL protocol."
|
||||
break
|
||||
|
||||
findings.append(report)
|
||||
|
||||
@@ -15,10 +15,10 @@ class cloudfront_distributions_using_waf(Check):
|
||||
report.resource_tags = distribution.tags
|
||||
if distribution.web_acl_id:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} is using AWS WAF web ACL {distribution.web_acl_id}"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} is using AWS WAF web ACL {distribution.web_acl_id}."
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} is not using AWS WAF web ACL"
|
||||
report.status_extended = f"CloudFront Distribution {distribution.id} is not using AWS WAF web ACL."
|
||||
findings.append(report)
|
||||
|
||||
return findings
|
||||
|
||||
@@ -55,13 +55,11 @@ class CloudFront(AWSService):
|
||||
]["Logging"]["Enabled"]
|
||||
distributions[
|
||||
distribution_id
|
||||
].geo_restriction_type = distribution_config["DistributionConfig"][
|
||||
"Restrictions"
|
||||
][
|
||||
"GeoRestriction"
|
||||
][
|
||||
"RestrictionType"
|
||||
]
|
||||
].geo_restriction_type = GeoRestrictionType(
|
||||
distribution_config["DistributionConfig"]["Restrictions"][
|
||||
"GeoRestriction"
|
||||
]["RestrictionType"]
|
||||
)
|
||||
distributions[distribution_id].web_acl_id = distribution_config[
|
||||
"DistributionConfig"
|
||||
]["WebACLId"]
|
||||
@@ -71,9 +69,11 @@ class CloudFront(AWSService):
|
||||
realtime_log_config_arn=distribution_config["DistributionConfig"][
|
||||
"DefaultCacheBehavior"
|
||||
].get("RealtimeLogConfigArn"),
|
||||
viewer_protocol_policy=distribution_config["DistributionConfig"][
|
||||
"DefaultCacheBehavior"
|
||||
].get("ViewerProtocolPolicy"),
|
||||
viewer_protocol_policy=ViewerProtocolPolicy(
|
||||
distribution_config["DistributionConfig"][
|
||||
"DefaultCacheBehavior"
|
||||
].get("ViewerProtocolPolicy")
|
||||
),
|
||||
field_level_encryption_id=distribution_config["DistributionConfig"][
|
||||
"DefaultCacheBehavior"
|
||||
].get("FieldLevelEncryptionId"),
|
||||
@@ -131,7 +131,7 @@ class DefaultCacheConfigBehaviour(BaseModel):
|
||||
|
||||
|
||||
class Distribution(BaseModel):
|
||||
"""Distribution holds a CloudFront Distribution with the required information to run the rela"""
|
||||
"""Distribution holds a CloudFront Distribution resource"""
|
||||
|
||||
arn: str
|
||||
id: str
|
||||
|
||||
@@ -21,10 +21,10 @@ class cloudtrail_cloudwatch_logging_enabled(Check):
|
||||
report.status = "PASS"
|
||||
if trail.is_multiregion:
|
||||
report.status_extended = (
|
||||
f"Multiregion trail {trail.name} has been logging the last 24h"
|
||||
f"Multiregion trail {trail.name} has been logging the last 24h."
|
||||
)
|
||||
else:
|
||||
report.status_extended = f"Single region trail {trail.name} has been logging the last 24h"
|
||||
report.status_extended = f"Single region trail {trail.name} has been logging the last 24h."
|
||||
if trail.latest_cloudwatch_delivery_time:
|
||||
last_log_delivery = (
|
||||
datetime.now().replace(tzinfo=timezone.utc)
|
||||
@@ -33,15 +33,15 @@ class cloudtrail_cloudwatch_logging_enabled(Check):
|
||||
if last_log_delivery > timedelta(days=maximum_time_without_logging):
|
||||
report.status = "FAIL"
|
||||
if trail.is_multiregion:
|
||||
report.status_extended = f"Multiregion trail {trail.name} is not logging in the last 24h"
|
||||
report.status_extended = f"Multiregion trail {trail.name} is not logging in the last 24h."
|
||||
else:
|
||||
report.status_extended = f"Single region trail {trail.name} is not logging in the last 24h"
|
||||
report.status_extended = f"Single region trail {trail.name} is not logging in the last 24h."
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
if trail.is_multiregion:
|
||||
report.status_extended = f"Multiregion trail {trail.name} is not logging in the last 24h or not configured to deliver logs"
|
||||
report.status_extended = f"Multiregion trail {trail.name} is not logging in the last 24h or not configured to deliver logs."
|
||||
else:
|
||||
report.status_extended = f"Single region trail {trail.name} is not logging in the last 24h or not configured to deliver logs"
|
||||
report.status_extended = f"Single region trail {trail.name} is not logging in the last 24h or not configured to deliver logs."
|
||||
findings.append(report)
|
||||
|
||||
return findings
|
||||
|
||||
@@ -17,21 +17,21 @@ class cloudtrail_kms_encryption_enabled(Check):
|
||||
report.status = "FAIL"
|
||||
if trail.is_multiregion:
|
||||
report.status_extended = (
|
||||
f"Multiregion trail {trail.name} has encryption disabled"
|
||||
f"Multiregion trail {trail.name} has encryption disabled."
|
||||
)
|
||||
else:
|
||||
report.status_extended = (
|
||||
f"Single region trail {trail.name} has encryption disabled"
|
||||
f"Single region trail {trail.name} has encryption disabled."
|
||||
)
|
||||
if trail.kms_key:
|
||||
report.status = "PASS"
|
||||
if trail.is_multiregion:
|
||||
report.status_extended = (
|
||||
f"Multiregion trail {trail.name} has encryption enabled"
|
||||
f"Multiregion trail {trail.name} has encryption enabled."
|
||||
)
|
||||
else:
|
||||
report.status_extended = (
|
||||
f"Single region trail {trail.name} has encryption enabled"
|
||||
f"Single region trail {trail.name} has encryption enabled."
|
||||
)
|
||||
findings.append(report)
|
||||
|
||||
|
||||
@@ -17,18 +17,16 @@ class cloudtrail_log_file_validation_enabled(Check):
|
||||
report.status = "FAIL"
|
||||
if trail.is_multiregion:
|
||||
report.status_extended = (
|
||||
f"Multiregion trail {trail.name} log file validation disabled"
|
||||
f"Multiregion trail {trail.name} log file validation disabled."
|
||||
)
|
||||
else:
|
||||
report.status_extended = (
|
||||
f"Single region trail {trail.name} log file validation disabled"
|
||||
)
|
||||
report.status_extended = f"Single region trail {trail.name} log file validation disabled."
|
||||
if trail.log_file_validation_enabled:
|
||||
report.status = "PASS"
|
||||
if trail.is_multiregion:
|
||||
report.status_extended = f"Multiregion trail {trail.name} log file validation enabled"
|
||||
report.status_extended = f"Multiregion trail {trail.name} log file validation enabled."
|
||||
else:
|
||||
report.status_extended = f"Single region trail {trail.name} log file validation enabled"
|
||||
report.status_extended = f"Single region trail {trail.name} log file validation enabled."
|
||||
findings.append(report)
|
||||
|
||||
return findings
|
||||
|
||||
@@ -19,24 +19,24 @@ class cloudtrail_logs_s3_bucket_access_logging_enabled(Check):
|
||||
report.resource_tags = trail.tags
|
||||
report.status = "FAIL"
|
||||
if trail.is_multiregion:
|
||||
report.status_extended = f"Multiregion Trail {trail.name} S3 bucket access logging is not enabled for bucket {trail_bucket}"
|
||||
report.status_extended = f"Multiregion Trail {trail.name} S3 bucket access logging is not enabled for bucket {trail_bucket}."
|
||||
else:
|
||||
report.status_extended = f"Single region Trail {trail.name} S3 bucket access logging is not enabled for bucket {trail_bucket}"
|
||||
report.status_extended = f"Single region Trail {trail.name} S3 bucket access logging is not enabled for bucket {trail_bucket}."
|
||||
for bucket in s3_client.buckets:
|
||||
if trail_bucket == bucket.name:
|
||||
trail_bucket_is_in_account = True
|
||||
if bucket.logging:
|
||||
report.status = "PASS"
|
||||
if trail.is_multiregion:
|
||||
report.status_extended = f"Multiregion trail {trail.name} S3 bucket access logging is enabled for bucket {trail_bucket}"
|
||||
report.status_extended = f"Multiregion trail {trail.name} S3 bucket access logging is enabled for bucket {trail_bucket}."
|
||||
else:
|
||||
report.status_extended = f"Single region trail {trail.name} S3 bucket access logging is enabled for bucket {trail_bucket}"
|
||||
report.status_extended = f"Single region trail {trail.name} S3 bucket access logging is enabled for bucket {trail_bucket}."
|
||||
break
|
||||
|
||||
# check if trail is delivering logs in a cross account bucket
|
||||
if not trail_bucket_is_in_account:
|
||||
report.status = "INFO"
|
||||
report.status_extended = f"Trail {trail.name} is delivering logs in a cross-account bucket {trail_bucket} in another account out of Prowler's permissions scope, please check it manually"
|
||||
report.status_extended = f"Trail {trail.name} is delivering logs in a cross-account bucket {trail_bucket} in another account out of Prowler's permissions scope, please check it manually."
|
||||
findings.append(report)
|
||||
|
||||
return findings
|
||||
|
||||
@@ -19,9 +19,9 @@ class cloudtrail_logs_s3_bucket_is_not_publicly_accessible(Check):
|
||||
report.resource_tags = trail.tags
|
||||
report.status = "PASS"
|
||||
if trail.is_multiregion:
|
||||
report.status_extended = f"S3 Bucket {trail_bucket} from multiregion trail {trail.name} is not publicly accessible"
|
||||
report.status_extended = f"S3 Bucket {trail_bucket} from multiregion trail {trail.name} is not publicly accessible."
|
||||
else:
|
||||
report.status_extended = f"S3 Bucket {trail_bucket} from single region trail {trail.name} is not publicly accessible"
|
||||
report.status_extended = f"S3 Bucket {trail_bucket} from single region trail {trail.name} is not publicly accessible."
|
||||
for bucket in s3_client.buckets:
|
||||
# Here we need to ensure that acl_grantee is filled since if we don't have permissions to query the api for a concrete region
|
||||
# (for example due to a SCP) we are going to try access an attribute from a None type
|
||||
@@ -35,14 +35,14 @@ class cloudtrail_logs_s3_bucket_is_not_publicly_accessible(Check):
|
||||
):
|
||||
report.status = "FAIL"
|
||||
if trail.is_multiregion:
|
||||
report.status_extended = f"S3 Bucket {trail_bucket} from multiregion trail {trail.name} is publicly accessible"
|
||||
report.status_extended = f"S3 Bucket {trail_bucket} from multiregion trail {trail.name} is publicly accessible."
|
||||
else:
|
||||
report.status_extended = f"S3 Bucket {trail_bucket} from single region trail {trail.name} is publicly accessible"
|
||||
report.status_extended = f"S3 Bucket {trail_bucket} from single region trail {trail.name} is publicly accessible."
|
||||
break
|
||||
# check if trail bucket is a cross account bucket
|
||||
if not trail_bucket_is_in_account:
|
||||
report.status = "INFO"
|
||||
report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) is a cross-account bucket in another account out of Prowler's permissions scope, please check it manually"
|
||||
report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) is a cross-account bucket in another account out of Prowler's permissions scope, please check it manually."
|
||||
findings.append(report)
|
||||
|
||||
return findings
|
||||
|
||||
@@ -19,10 +19,10 @@ class cloudtrail_multi_region_enabled(Check):
|
||||
report.resource_tags = trail.tags
|
||||
if trail.is_multiregion:
|
||||
report.status_extended = (
|
||||
f"Trail {trail.name} is multiregion and it is logging"
|
||||
f"Trail {trail.name} is multiregion and it is logging."
|
||||
)
|
||||
else:
|
||||
report.status_extended = f"Trail {trail.name} is not multiregion and it is logging"
|
||||
report.status_extended = f"Trail {trail.name} is not multiregion and it is logging."
|
||||
# Since there exists a logging trail in that region there is no point in checking the remaining trails
|
||||
# Store the finding and exit the loop
|
||||
findings.append(report)
|
||||
@@ -30,7 +30,7 @@ class cloudtrail_multi_region_enabled(Check):
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = (
|
||||
"No CloudTrail trails enabled and logging were found"
|
||||
"No CloudTrail trails enabled and logging were found."
|
||||
)
|
||||
report.resource_arn = cloudtrail_client.audited_account_arn
|
||||
report.resource_id = cloudtrail_client.audited_account
|
||||
|
||||
@@ -7,7 +7,7 @@ class cloudwatch_cross_account_sharing_disabled(Check):
|
||||
findings = []
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.status = "PASS"
|
||||
report.status_extended = "CloudWatch doesn't allow cross-account sharing"
|
||||
report.status_extended = "CloudWatch doesn't allow cross-account sharing."
|
||||
report.resource_arn = iam_client.audited_account_arn
|
||||
report.resource_id = iam_client.audited_account
|
||||
report.region = iam_client.region
|
||||
|
||||
@@ -81,7 +81,7 @@ class cloudwatch_log_group_no_secrets_in_logs(Check):
|
||||
if log_group_secrets:
|
||||
secrets_string = "; ".join(log_group_secrets)
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"Potential secrets found in log group {log_group.name} {secrets_string}"
|
||||
report.status_extended = f"Potential secrets found in log group {log_group.name} {secrets_string}."
|
||||
findings.append(report)
|
||||
return findings
|
||||
|
||||
|
||||
@@ -1,4 +1,3 @@
|
||||
from prowler.config.config import get_config_var
|
||||
from prowler.lib.check.models import Check, Check_Report_AWS
|
||||
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
|
||||
|
||||
@@ -6,7 +5,11 @@ from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
|
||||
class cloudwatch_log_group_retention_policy_specific_days_enabled(Check):
|
||||
def execute(self):
|
||||
findings = []
|
||||
specific_retention_days = get_config_var("log_group_retention_days")
|
||||
|
||||
# log_group_retention_days, default: 365 days
|
||||
specific_retention_days = logs_client.audit_config.get(
|
||||
"log_group_retention_days", 365
|
||||
)
|
||||
for log_group in logs_client.log_groups:
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.region = log_group.region
|
||||
|
||||
@@ -28,11 +28,11 @@ class codeartifact_packages_external_public_publishing_disabled(Check):
|
||||
== RestrictionValues.ALLOW
|
||||
):
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"Internal package {package.name} is vulnerable to dependency confusion in repository {repository.arn}"
|
||||
report.status_extended = f"Internal package {package.name} is vulnerable to dependency confusion in repository {repository.arn}."
|
||||
else:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"Internal package {package.name} is not vulnerable to dependency confusion in repository {repository.arn}"
|
||||
report.status_extended = f"Internal package {package.name} is not vulnerable to dependency confusion in repository {repository.arn}."
|
||||
|
||||
findings.append(report)
|
||||
findings.append(report)
|
||||
|
||||
return findings
|
||||
|
||||
@@ -13,17 +13,15 @@ class codebuild_project_older_90_days(Check):
|
||||
report.resource_id = project.name
|
||||
report.resource_arn = project.arn
|
||||
report.status = "PASS"
|
||||
report.status_extended = (
|
||||
f"CodeBuild project {project.name} has been invoked in the last 90 days"
|
||||
)
|
||||
report.status_extended = f"CodeBuild project {project.name} has been invoked in the last 90 days."
|
||||
if project.last_invoked_time:
|
||||
if (datetime.now(timezone.utc) - project.last_invoked_time).days > 90:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"CodeBuild project {project.name} has not been invoked in the last 90 days"
|
||||
report.status_extended = f"CodeBuild project {project.name} has not been invoked in the last 90 days."
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = (
|
||||
f"CodeBuild project {project.name} has never been built"
|
||||
f"CodeBuild project {project.name} has never been built."
|
||||
)
|
||||
|
||||
findings.append(report)
|
||||
|
||||
@@ -13,13 +13,13 @@ class codebuild_project_user_controlled_buildspec(Check):
|
||||
report.resource_id = project.name
|
||||
report.resource_arn = project.arn
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"CodeBuild project {project.name} does not use an user controlled buildspec"
|
||||
report.status_extended = f"CodeBuild project {project.name} does not use an user controlled buildspec."
|
||||
if project.buildspec:
|
||||
if search(r".*\.yaml$", project.buildspec) or search(
|
||||
r".*\.yml$", project.buildspec
|
||||
):
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"CodeBuild project {project.name} uses an user controlled buildspec"
|
||||
report.status_extended = f"CodeBuild project {project.name} uses an user controlled buildspec."
|
||||
|
||||
findings.append(report)
|
||||
|
||||
|
||||
@@ -11,13 +11,14 @@ class directoryservice_directory_log_forwarding_enabled(Check):
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.region = directory.region
|
||||
report.resource_id = directory.id
|
||||
report.resource_arn = directory.arn
|
||||
report.resource_tags = directory.tags
|
||||
if directory.log_subscriptions:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"Directory Service {directory.id} have log forwarding to CloudWatch enabled"
|
||||
report.status_extended = f"Directory Service {directory.id} have log forwarding to CloudWatch enabled."
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"Directory Service {directory.id} have log forwarding to CloudWatch disabled"
|
||||
report.status_extended = f"Directory Service {directory.id} have log forwarding to CloudWatch disabled."
|
||||
|
||||
findings.append(report)
|
||||
|
||||
|
||||
@@ -11,16 +11,17 @@ class directoryservice_directory_monitor_notifications(Check):
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.region = directory.region
|
||||
report.resource_id = directory.id
|
||||
report.resource_arn = directory.arn
|
||||
report.resource_tags = directory.tags
|
||||
if directory.event_topics:
|
||||
report.status = "PASS"
|
||||
report.status_extended = (
|
||||
f"Directory Service {directory.id} have SNS messaging enabled"
|
||||
f"Directory Service {directory.id} have SNS messaging enabled."
|
||||
)
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = (
|
||||
f"Directory Service {directory.id} have SNS messaging disabled"
|
||||
f"Directory Service {directory.id} have SNS messaging disabled."
|
||||
)
|
||||
|
||||
findings.append(report)
|
||||
|
||||
@@ -14,11 +14,12 @@ class directoryservice_directory_snapshots_limit(Check):
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.region = directory.region
|
||||
report.resource_id = directory.id
|
||||
report.resource_arn = directory.arn
|
||||
report.resource_tags = directory.tags
|
||||
if directory.snapshots_limits:
|
||||
if directory.snapshots_limits.manual_snapshots_limit_reached:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"Directory Service {directory.id} reached {directory.snapshots_limits.manual_snapshots_limit} Snapshots limit"
|
||||
report.status_extended = f"Directory Service {directory.id} reached {directory.snapshots_limits.manual_snapshots_limit} Snapshots limit."
|
||||
else:
|
||||
limit_remaining = (
|
||||
directory.snapshots_limits.manual_snapshots_limit
|
||||
@@ -26,10 +27,10 @@ class directoryservice_directory_snapshots_limit(Check):
|
||||
)
|
||||
if limit_remaining <= SNAPSHOT_LIMIT_THRESHOLD:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"Directory Service {directory.id} is about to reach {directory.snapshots_limits.manual_snapshots_limit} Snapshots which is the limit"
|
||||
report.status_extended = f"Directory Service {directory.id} is about to reach {directory.snapshots_limits.manual_snapshots_limit} Snapshots which is the limit."
|
||||
else:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"Directory Service {directory.id} is using {directory.snapshots_limits.manual_snapshots_current_count} out of {directory.snapshots_limits.manual_snapshots_limit} from the Snapshots Limit"
|
||||
report.status_extended = f"Directory Service {directory.id} is using {directory.snapshots_limits.manual_snapshots_current_count} out of {directory.snapshots_limits.manual_snapshots_limit} from the Snapshots Limit."
|
||||
findings.append(report)
|
||||
|
||||
return findings
|
||||
|
||||
@@ -17,6 +17,7 @@ class directoryservice_ldap_certificate_expiration(Check):
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.region = directory.region
|
||||
report.resource_id = certificate.id
|
||||
report.resource_arn = directory.arn
|
||||
report.resource_tags = directory.tags
|
||||
|
||||
remaining_days_to_expire = (
|
||||
@@ -30,10 +31,10 @@ class directoryservice_ldap_certificate_expiration(Check):
|
||||
|
||||
if remaining_days_to_expire <= DAYS_TO_EXPIRE_THRESHOLD:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"LDAP Certificate {certificate.id} configured at {directory.id} is about to expire in {remaining_days_to_expire} days"
|
||||
report.status_extended = f"LDAP Certificate {certificate.id} configured at {directory.id} is about to expire in {remaining_days_to_expire} days."
|
||||
else:
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"LDAP Certificate {certificate.id} configured at {directory.id} expires in {remaining_days_to_expire} days"
|
||||
report.status_extended = f"LDAP Certificate {certificate.id} configured at {directory.id} expires in {remaining_days_to_expire} days."
|
||||
|
||||
findings.append(report)
|
||||
|
||||
|
||||
@@ -15,16 +15,17 @@ class directoryservice_radius_server_security_protocol(Check):
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.region = directory.region
|
||||
report.resource_id = directory.id
|
||||
report.resource_arn = directory.arn
|
||||
report.resource_tags = directory.tags
|
||||
if (
|
||||
directory.radius_settings.authentication_protocol
|
||||
== AuthenticationProtocol.MS_CHAPv2
|
||||
):
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"Radius server of Directory {directory.id} have recommended security protocol for the Radius server"
|
||||
report.status_extended = f"Radius server of Directory {directory.id} have recommended security protocol for the Radius server."
|
||||
else:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = f"Radius server of Directory {directory.id} does not have recommended security protocol for the Radius server"
|
||||
report.status_extended = f"Radius server of Directory {directory.id} does not have recommended security protocol for the Radius server."
|
||||
|
||||
findings.append(report)
|
||||
|
||||
|
||||
@@ -37,16 +37,19 @@ class DirectoryService(AWSService):
                 )
             ):
                 directory_id = directory["DirectoryId"]
+                directory_arn = f"arn:{self.audited_partition}:ds:{regional_client.region}:{self.audited_account}:directory/{directory_id}"
                 directory_name = directory["Name"]
                 directory_type = directory["Type"]
                 # Radius Configuration
                 radius_authentication_protocol = (
-                    directory["RadiusSettings"]["AuthenticationProtocol"]
+                    AuthenticationProtocol(
+                        directory["RadiusSettings"]["AuthenticationProtocol"]
+                    )
                     if "RadiusSettings" in directory
                     else None
                 )
                 radius_status = (
-                    directory["RadiusStatus"]
+                    RadiusStatus(directory["RadiusStatus"])
                     if "RadiusStatus" in directory
                     else None
                 )
@@ -54,6 +57,7 @@ class DirectoryService(AWSService):
                 self.directories[directory_id] = Directory(
                     name=directory_name,
                     id=directory_id,
+                    arn=directory_arn,
                     type=directory_type,
                     region=regional_client.region,
                     radius_settings=RadiusSettings(
@@ -295,6 +299,7 @@ class DirectoryType(Enum):
 class Directory(BaseModel):
     name: str
     id: str
+    arn: str
     type: DirectoryType
     log_subscriptions: list[LogSubscriptions] = []
     event_topics: list[EventTopics] = []
@@ -15,16 +15,17 @@ class directoryservice_supported_mfa_radius_enabled(Check):
                 report = Check_Report_AWS(self.metadata())
                 report.region = directory.region
                 report.resource_id = directory.id
                 report.resource_arn = directory.arn
+                report.resource_tags = directory.tags
                 if directory.radius_settings.status == RadiusStatus.Completed:
                     report.status = "PASS"
                     report.status_extended = (
-                        f"Directory {directory.id} have Radius MFA enabled"
+                        f"Directory {directory.id} have Radius MFA enabled."
                     )
                 else:
                     report.status = "FAIL"
                     report.status_extended = (
-                        f"Directory {directory.id} does not have Radius MFA enabled"
+                        f"Directory {directory.id} does not have Radius MFA enabled."
                     )

                 findings.append(report)
@@ -65,18 +65,29 @@ class DynamoDB(AWSService):
         logger.info("DynamoDB - Describing Continuous Backups...")
         try:
             for table in self.tables:
-                regional_client = self.regional_clients[table.region]
-                properties = regional_client.describe_continuous_backups(
-                    TableName=table.name
-                )["ContinuousBackupsDescription"]
-                if "PointInTimeRecoveryDescription" in properties:
-                    if (
-                        properties["PointInTimeRecoveryDescription"][
-                            "PointInTimeRecoveryStatus"
-                        ]
-                        == "ENABLED"
-                    ):
-                        table.pitr = True
+                try:
+                    regional_client = self.regional_clients[table.region]
+                    properties = regional_client.describe_continuous_backups(
+                        TableName=table.name
+                    )["ContinuousBackupsDescription"]
+                    if "PointInTimeRecoveryDescription" in properties:
+                        if (
+                            properties["PointInTimeRecoveryDescription"][
+                                "PointInTimeRecoveryStatus"
+                            ]
+                            == "ENABLED"
+                        ):
+                            table.pitr = True
+                except ClientError as error:
+                    if error.response["Error"]["Code"] == "TableNotFoundException":
+                        logger.warning(
+                            f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
+                        )
+                    else:
+                        logger.error(
+                            f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
+                        )
+                    continue
         except Exception as error:
             logger.error(
                 f"{error.__class__.__name__}:{error.__traceback__.tb_lineno} -- {error}"
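The DynamoDB hunk above narrows the error handling so that a single missing table (for example, one deleted between `list_tables` and `describe_continuous_backups`) is logged and skipped instead of aborting the whole loop. A minimal standalone sketch of the same pattern, assuming a boto3 client and caller-supplied table names; the function name and region default are illustrative, not Prowler code:

```python
import boto3
from botocore.exceptions import ClientError


def tables_with_pitr(table_names, region_name="us-east-1"):
    """Return the table names that have Point-In-Time Recovery enabled."""
    client = boto3.client("dynamodb", region_name=region_name)
    enabled = []
    for name in table_names:
        try:
            description = client.describe_continuous_backups(TableName=name)[
                "ContinuousBackupsDescription"
            ]
            pitr = description.get("PointInTimeRecoveryDescription", {})
            if pitr.get("PointInTimeRecoveryStatus") == "ENABLED":
                enabled.append(name)
        except ClientError as error:
            # A table that disappeared mid-scan is skipped rather than
            # aborting the sweep, mirroring the change above.
            if error.response["Error"]["Code"] == "TableNotFoundException":
                continue
            raise
    return enabled
```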
@@ -1,6 +1,5 @@
 import shodan

-from prowler.config.config import get_config_var
 from prowler.lib.check.models import Check, Check_Report_AWS
 from prowler.lib.logger import logger
 from prowler.providers.aws.services.ec2.ec2_client import ec2_client
@@ -9,7 +8,7 @@ from prowler.providers.aws.services.ec2.ec2_client import ec2_client
 class ec2_elastic_ip_shodan(Check):
     def execute(self):
         findings = []
-        shodan_api_key = get_config_var("shodan_api_key")
+        shodan_api_key = ec2_client.audit_config.get("shodan_api_key")
         if shodan_api_key:
             api = shodan.Shodan(shodan_api_key)
             for eip in ec2_client.elastic_ips:
@@ -21,7 +20,7 @@ class ec2_elastic_ip_shodan(Check):
                 try:
                     shodan_info = api.host(eip.public_ip)
                     report.status = "FAIL"
-                    report.status_extended = f"Elastic IP {eip.public_ip} listed in Shodan with open ports {str(shodan_info['ports'])} and ISP {shodan_info['isp']} in {shodan_info['country_name']}. More info https://www.shodan.io/host/{eip.public_ip}"
+                    report.status_extended = f"Elastic IP {eip.public_ip} listed in Shodan with open ports {str(shodan_info['ports'])} and ISP {shodan_info['isp']} in {shodan_info['country_name']}. More info at https://www.shodan.io/host/{eip.public_ip}."
                     report.resource_id = eip.public_ip
                     findings.append(report)
                 except shodan.APIError as error:
@@ -1,6 +1,5 @@
 from datetime import datetime, timezone

-from prowler.config.config import get_config_var
 from prowler.lib.check.models import Check, Check_Report_AWS
 from prowler.providers.aws.services.ec2.ec2_client import ec2_client

@@ -8,7 +7,11 @@ from prowler.providers.aws.services.ec2.ec2_client import ec2_client
 class ec2_instance_older_than_specific_days(Check):
     def execute(self):
         findings = []
-        max_ec2_instance_age_in_days = get_config_var("max_ec2_instance_age_in_days")
+
+        # max_ec2_instance_age_in_days, default: 180 days
+        max_ec2_instance_age_in_days = ec2_client.audit_config.get(
+            "max_ec2_instance_age_in_days", 180
+        )
         for instance in ec2_client.instances:
             report = Check_Report_AWS(self.metadata())
             report.region = instance.region
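With `get_config_var` gone, the age threshold above comes from `ec2_client.audit_config` with a default of 180 days. The age comparison itself only needs the `datetime`/`timezone` imports kept by the hunk; a rough sketch of that arithmetic, where the helper name is illustrative and `launch_time` is assumed to be a timezone-aware datetime as boto3 returns for EC2 instances:

```python
from datetime import datetime, timedelta, timezone


def is_older_than(launch_time: datetime, max_age_in_days: int = 180) -> bool:
    """Return True when the instance has been running longer than the threshold."""
    age_in_days = (datetime.now(timezone.utc) - launch_time).days
    return age_in_days > max_age_in_days


# Example: an instance launched 200 days ago fails a 180-day threshold.
# is_older_than(datetime.now(timezone.utc) - timedelta(days=200))  -> True
```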
@@ -1,4 +1,3 @@
-from prowler.config.config import get_config_var
 from prowler.lib.check.models import Check, Check_Report_AWS
 from prowler.providers.aws.services.ec2.ec2_client import ec2_client

@@ -6,7 +5,11 @@ from prowler.providers.aws.services.ec2.ec2_client import ec2_client
 class ec2_securitygroup_with_many_ingress_egress_rules(Check):
     def execute(self):
         findings = []
-        max_security_group_rules = get_config_var("max_security_group_rules")
+
+        # max_security_group_rules, default: 50
+        max_security_group_rules = ec2_client.audit_config.get(
+            "max_security_group_rules", 50
+        )
         for security_group in ec2_client.security_groups:
             report = Check_Report_AWS(self.metadata())
             report.region = security_group.region
@@ -15,7 +18,7 @@ class ec2_securitygroup_with_many_ingress_egress_rules(Check):
             report.resource_arn = security_group.arn
             report.resource_tags = security_group.tags
             report.status = "PASS"
-            report.status_extended = f"Security group {security_group.name} ({security_group.id}) has {len(security_group.ingress_rules)} inbound rules and {len(security_group.egress_rules)} outbound rules"
+            report.status_extended = f"Security group {security_group.name} ({security_group.id}) has {len(security_group.ingress_rules)} inbound rules and {len(security_group.egress_rules)} outbound rules."
             if (
                 len(security_group.ingress_rules) > max_security_group_rules
                 or len(security_group.egress_rules) > max_security_group_rules
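The Shodan, instance-age, and security-group hunks all switch from `get_config_var` to `ec2_client.audit_config.get(...)`, so these tunables now travel with the provider's audit configuration. A hedged sketch of the matching YAML entries, using the key names and defaults visible in the diffs; the file name is arbitrary and the exact flag for loading it is an assumption to verify against the Prowler documentation:

```yaml
# custom-config.yaml (illustrative)
shodan_api_key: "YOUR_SHODAN_API_KEY"
max_ec2_instance_age_in_days: 180
max_security_group_rules: 50
```

Run with something like `prowler aws --config-file custom-config.yaml -c ec2_elastic_ip_shodan` so the checks above pick the values up from `audit_config`.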
@@ -33,7 +33,7 @@ def check_security_group(

     @param ports: List of ports to check. (Default: [])

-    @param any_address: If True, only 0.0.0.0/0 will be public and do not search for public addresses. (Default: False)
+    @param any_address: If True, only 0.0.0.0/0 or "::/0" will be public and do not search for public addresses. (Default: False)
     """
     # Check for all traffic ingress rules regardless of the protocol
     if ingress_rule["IpProtocol"] == "-1":
@@ -76,7 +76,7 @@ def check_security_group(

     # IPv6
     for ip_ingress_rule in ingress_rule["Ipv6Ranges"]:
-        if _is_cidr_public(ip_ingress_rule["CidrIpv6"]):
+        if _is_cidr_public(ip_ingress_rule["CidrIpv6"], any_address):
             # If there are input ports to check
             if ports:
                 for port in ports:
@@ -98,13 +98,10 @@ def _is_cidr_public(cidr: str, any_address: bool = False) -> bool:

     @param cidr: CIDR 10.22.33.44/8

-    @param any_address: If True, only 0.0.0.0/0 will be public and do not search for public addresses. (Default: False)
+    @param any_address: If True, only 0.0.0.0/0 or "::/0" will be public and do not search for public addresses. (Default: False)
     """
     public_IPv4 = "0.0.0.0/0"
     public_IPv6 = "::/0"
-    # Workaround until this issue is fixed
-    # PR https://github.com/python/cpython/pull/97733
-    # Issue https://github.com/python/cpython/issues/82836
     if cidr in (public_IPv4, public_IPv6):
         return True
     if not any_address:
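For context on `_is_cidr_public`: the helper short-circuits on the literal `0.0.0.0/0` and `::/0` strings because, per the cpython issue referenced in the removed comments, the `ipaddress` module does not report those catch-all networks as global. A rough standalone sketch of the idea; the `is_global` fallback for the non-`any_address` branch is an assumption for illustration, not a claim about Prowler's exact implementation:

```python
import ipaddress


def is_cidr_public(cidr: str, any_address: bool = False) -> bool:
    """Return True when the CIDR is open to the public internet."""
    # The catch-all networks are matched as strings first; see
    # https://github.com/python/cpython/issues/82836 for why ipaddress
    # alone is not enough here.
    if cidr in ("0.0.0.0/0", "::/0"):
        return True
    if not any_address:
        # Assumed fallback: any globally routable network counts as public.
        return ipaddress.ip_network(cidr, strict=False).is_global
    return False


# Examples: is_cidr_public("8.8.8.0/24") -> True, is_cidr_public("10.0.0.0/8") -> False
```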
@@ -14,17 +14,17 @@ class ecr_registry_scan_images_on_push_enabled(Check):
             # A registry cannot have tags
             report.resource_tags = []
             report.status = "FAIL"
-            report.status_extended = f"ECR registry {registry.id} has {registry.scan_type} scanning without scan on push enabled"
+            report.status_extended = f"ECR registry {registry.id} has {registry.scan_type} scanning without scan on push enabled."
             if registry.rules:
                 report.status = "PASS"
-                report.status_extended = f"ECR registry {registry.id} has {registry.scan_type} scan with scan on push enabled"
+                report.status_extended = f"ECR registry {registry.id} has {registry.scan_type} scan with scan on push enabled."
                 filters = True
                 for rule in registry.rules:
                     if not rule.scan_filters or "'*'" in str(rule.scan_filters):
                         filters = False
                 if filters:
                     report.status = "FAIL"
-                    report.status_extended = f"ECR registry {registry.id} has {registry.scan_type} scanning with scan on push but with repository filters"
+                    report.status_extended = f"ECR registry {registry.id} has {registry.scan_type} scanning with scan on push but with repository filters."

             findings.append(report)
@@ -14,7 +14,7 @@ class ecr_repositories_not_publicly_accessible(Check):
             report.resource_tags = repository.tags
             report.status = "PASS"
             report.status_extended = (
-                f"Repository {repository.name} is not publicly accesible"
+                f"Repository {repository.name} is not publicly accesible."
             )
             if repository.policy:
                 for statement in repository.policy["Statement"]:
@@ -24,7 +24,7 @@ class ecr_repositories_not_publicly_accessible(Check):
                         and "*" in statement["Principal"]["AWS"]
                     ):
                         report.status = "FAIL"
-                        report.status_extended = f"Repository {repository.name} policy may allow anonymous users to perform actions (Principal: '*')"
+                        report.status_extended = f"Repository {repository.name} policy may allow anonymous users to perform actions (Principal: '*')."
                         break

             findings.append(report)
@@ -14,12 +14,12 @@ class ecr_repositories_scan_images_on_push_enabled(Check):
             report.resource_tags = repository.tags
             report.status = "PASS"
             report.status_extended = (
-                f"ECR repository {repository.name} has scan on push enabled"
+                f"ECR repository {repository.name} has scan on push enabled."
             )
             if not repository.scan_on_push:
                 report.status = "FAIL"
                 report.status_extended = (
-                    f"ECR repository {repository.name} has scan on push disabled"
+                    f"ECR repository {repository.name} has scan on push disabled."
                 )

             findings.append(report)
@@ -18,14 +18,14 @@ class ecr_repositories_scan_vulnerabilities_in_latest_image(Check):
                 report.resource_arn = repository.arn
                 report.resource_tags = repository.tags
                 report.status = "PASS"
-                report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} scanned without findings"
+                report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} scanned without findings."
                 if not image.scan_findings_status:
                     report.status = "FAIL"
-                    report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} without a scan"
+                    report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} without a scan."
                 elif image.scan_findings_status == "FAILED":
                     report.status = "FAIL"
                     report.status_extended = (
-                        f"ECR repository {repository.name} with scan status FAILED"
+                        f"ECR repository {repository.name} with scan status FAILED."
                     )
                 elif image.scan_findings_status != "FAILED":
                     if image.scan_findings_severity_count and (
@@ -34,7 +34,7 @@ class ecr_repositories_scan_vulnerabilities_in_latest_image(Check):
                         or image.scan_findings_severity_count.medium
                     ):
                         report.status = "FAIL"
-                        report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}, HIGH->{image.scan_findings_severity_count.high}, MEDIUM->{image.scan_findings_severity_count.medium} "
+                        report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}, HIGH->{image.scan_findings_severity_count.high}, MEDIUM->{image.scan_findings_severity_count.medium}."

                 findings.append(report)
@@ -19,7 +19,7 @@ class ecs_task_definitions_no_environment_secrets(Check):
             report.resource_arn = task_definition.arn
             report.resource_tags = task_definition.tags
             report.status = "PASS"
-            report.status_extended = f"No secrets found in variables of ECS task definition {task_definition.name} with revision {task_definition.revision}"
+            report.status_extended = f"No secrets found in variables of ECS task definition {task_definition.name} with revision {task_definition.revision}."
             if task_definition.environment_variables:
                 dump_env_vars = {}
                 for env_var in task_definition.environment_variables:
@@ -44,7 +44,7 @@ class ecs_task_definitions_no_environment_secrets(Check):
                        ]
                    )
                    report.status = "FAIL"
-                   report.status_extended = f"Potential secret found in variables of ECS task definition {task_definition.name} with revision {task_definition.revision} -> {secrets_string}"
+                   report.status_extended = f"Potential secret found in variables of ECS task definition {task_definition.name} with revision {task_definition.revision} -> {secrets_string}."

                os.remove(temp_env_data_file.name)
@@ -13,11 +13,11 @@ class efs_encryption_at_rest_enabled(Check):
             report.resource_tags = fs.tags
             report.status = "FAIL"
             report.status_extended = (
-                f"EFS {fs.id} does not have encryption at rest enabled"
+                f"EFS {fs.id} does not have encryption at rest enabled."
             )
             if fs.encrypted:
                 report.status = "PASS"
-                report.status_extended = f"EFS {fs.id} has encryption at rest enabled"
+                report.status_extended = f"EFS {fs.id} has encryption at rest enabled."

             findings.append(report)
@@ -13,11 +13,11 @@ class efs_not_publicly_accessible(Check):
             report.resource_tags = fs.tags
             report.status = "PASS"
             report.status_extended = (
-                f"EFS {fs.id} has a policy which does not allow access to everyone"
+                f"EFS {fs.id} has a policy which does not allow access to everyone."
             )
             if not fs.policy:
                 report.status = "FAIL"
-                report.status_extended = f"EFS {fs.id} doesn't have any policy which means it grants full access to any client"
+                report.status_extended = f"EFS {fs.id} doesn't have any policy which means it grants full access to any client."
             else:
                 for statement in fs.policy["Statement"]:
                     if statement["Effect"] == "Allow":
@@ -34,7 +34,7 @@ class efs_not_publicly_accessible(Check):
                         )
                     ):
                         report.status = "FAIL"
-                        report.status_extended = f"EFS {fs.id} has a policy which allows access to everyone"
+                        report.status_extended = f"EFS {fs.id} has a policy which allows access to everyone."
                         break
             findings.append(report)
@@ -13,14 +13,14 @@ class eks_control_plane_endpoint_access_restricted(Check):
             report.resource_tags = cluster.tags
             report.status = "PASS"
             report.status_extended = (
-                f"Cluster endpoint access is private for EKS cluster {cluster.name}"
+                f"Cluster endpoint access is private for EKS cluster {cluster.name}."
             )
             if cluster.endpoint_public_access and not cluster.endpoint_private_access:
                 if "0.0.0.0/0" in cluster.public_access_cidrs:
                     report.status = "FAIL"
-                    report.status_extended = f"Cluster control plane access is not restricted for EKS cluster {cluster.name}"
+                    report.status_extended = f"Cluster control plane access is not restricted for EKS cluster {cluster.name}."
                 else:
-                    report.status_extended = f"Cluster control plane access is restricted for EKS cluster {cluster.name}"
+                    report.status_extended = f"Cluster control plane access is restricted for EKS cluster {cluster.name}."
             findings.append(report)

         return findings
@@ -13,7 +13,7 @@ class eks_control_plane_logging_all_types_enabled(Check):
             report.resource_tags = cluster.tags
             report.status = "FAIL"
             report.status_extended = (
-                f"Control plane logging is not enabled for EKS cluster {cluster.name}"
+                f"Control plane logging is not enabled for EKS cluster {cluster.name}."
             )
             if cluster.logging and cluster.logging.enabled:
                 if all(
@@ -27,9 +27,9 @@ class eks_control_plane_logging_all_types_enabled(Check):
                     ]
                 ):
                     report.status = "PASS"
-                    report.status_extended = f"Control plane logging enabled and correctly configured for EKS cluster {cluster.name}"
+                    report.status_extended = f"Control plane logging enabled and correctly configured for EKS cluster {cluster.name}."
                 else:
-                    report.status_extended = f"Control plane logging enabled but not all log types collected for EKS cluster {cluster.name}"
+                    report.status_extended = f"Control plane logging enabled but not all log types collected for EKS cluster {cluster.name}."
             findings.append(report)

         return findings
@@ -13,12 +13,12 @@ class eks_endpoints_not_publicly_accessible(Check):
             report.resource_tags = cluster.tags
             report.status = "PASS"
             report.status_extended = (
-                f"Cluster endpoint access is private for EKS cluster {cluster.name}"
+                f"Cluster endpoint access is private for EKS cluster {cluster.name}."
             )
             if cluster.endpoint_public_access and not cluster.endpoint_private_access:
                 report.status = "FAIL"
                 report.status_extended = (
-                    f"Cluster endpoint access is public for EKS cluster {cluster.name}"
+                    f"Cluster endpoint access is public for EKS cluster {cluster.name}."
                 )
             findings.append(report)
@@ -17,9 +17,9 @@ class elbv2_desync_mitigation_mode(Check):
             if lb.desync_mitigation_mode == "monitor":
                 if lb.drop_invalid_header_fields == "false":
                     report.status = "FAIL"
-                    report.status_extended = f"ELBv2 ALB {lb.name} does not have desync mitigation mode set as defensive or strictest and is not dropping invalid header fields"
+                    report.status_extended = f"ELBv2 ALB {lb.name} does not have desync mitigation mode set as defensive or strictest and is not dropping invalid header fields."
                 elif lb.drop_invalid_header_fields == "true":
-                    report.status_extended = f"ELBv2 ALB {lb.name} does not have desync mitigation mode set as defensive or strictest but is dropping invalid header fields"
+                    report.status_extended = f"ELBv2 ALB {lb.name} does not have desync mitigation mode set as defensive or strictest but is dropping invalid header fields."
             findings.append(report)

         return findings
@@ -14,10 +14,10 @@ class emr_cluster_account_public_block_enabled(Check):
                 region
             ].block_public_security_group_rules:
                 report.status = "PASS"
-                report.status_extended = "EMR Account has Block Public Access enabled"
+                report.status_extended = "EMR Account has Block Public Access enabled."
             else:
                 report.status = "FAIL"
-                report.status_extended = "EMR Account has Block Public Access disabled"
+                report.status_extended = "EMR Account has Block Public Access disabled."

             findings.append(report)
Some files were not shown because too many files have changed in this diff.