Mirror of https://github.com/prowler-cloud/prowler.git, synced 2026-03-28 11:02:20 +00:00

Comparing commits: `3.16.15` ... `work-on-au` (58 commits)
Commit SHA1s in this range (the author and date columns were not captured): bcfdcbde30, 2f50aaa9c1, 537081a0f6, 2eb774bbc9, 5419117842, e72831d428, 217b8ad250, 09b4548445, 0d96583769, 722fe0a1bc, 445821eceb, c3d129a4b2, 36fc575e40, 24efb34d91, c08e244c95, c2f8980f1f, 028d29b8ff, b976cab926, 197a08ab94, 0d97780ade, f2f922d7e8, 606b4b5a66, 132056f4c1, 4845d6033b, 57550e6984, 040b780af7, abaa7855d7, e9c6b35698, c92740869f, 49003fae08, 01f3c8656c, ba705406ff, d8101acc9c, 0ef85b3dee, 126acc046a, f324f27016, 93a2431211, 5b80082491, 2ca4656ef9, cb4de850e9, 92e0d74055, 578b21f424, 85c44f01c5, fb5d6cfd7e, 1b3f830623, 1fe74937c1, 6ee016e577, f7248dfb1c, 0481435846, 5554e2be1b, e97e2e84fc, 19f38dbb63, 06d9eccebd, 5dfd8460be, f71052bcfe, 7bfdb8c1f3, dedb03cc6e, 856afb3966
````diff
@@ -102,7 +102,7 @@ All the checks MUST fill the `report.status` and `report.status_extended` with the following criteria:
 - Status -- `report.status`
     - `PASS` --> If the check is passing against the configured value.
     - `FAIL` --> If the check is not passing against the configured value.
-    - `INFO` --> This value cannot be used unless a manual operation is required in order to determine if the `report.status` is whether `PASS` or `FAIL`.
+    - `MANUAL` --> This value cannot be used unless a manual operation is required in order to determine whether the `report.status` is `PASS` or `FAIL`.
 - Status Extended -- `report.status_extended`
     - MUST end in a dot `.`
     - MUST include the service audited with the resource and a brief explanation of the result generated, e.g.: `EC2 AMI ami-0123456789 is not public.`
````
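The status conventions above lend themselves to a mechanical check; below is a minimal sketch (the `validate_report` helper is hypothetical, not part of Prowler):

```python
VALID_STATUSES = {"PASS", "FAIL", "MANUAL"}

def validate_report(status: str, status_extended: str) -> list:
    """Return the list of convention violations for a check report."""
    problems = []
    if status not in VALID_STATUSES:
        problems.append(f"invalid status: {status!r}")
    if not status_extended.endswith("."):
        problems.append("status_extended must end in a dot")
    return problems
```

For example, `validate_report("PASS", "EC2 AMI ami-0123456789 is not public.")` returns an empty list, while an out-of-range status with no trailing dot yields two violations.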
````diff
@@ -37,7 +37,3 @@ If your IAM entity enforces MFA you can use `--mfa` and Prowler will ask you to input the following values to get a new session:
 
 - ARN of your MFA device
 - TOTP (Time-Based One-Time Password)
-
-## STS Endpoint Region
-
-If you are using Prowler in AWS regions that are not enabled by default you need to use the argument `--sts-endpoint-region` to point the AWS STS API calls `assume-role` and `get-caller-identity` to the non-default region, e.g.: `prowler aws --sts-endpoint-region eu-south-2`.
````
````diff
@@ -23,14 +23,6 @@ prowler aws -R arn:aws:iam::<account_id>:role/<role_name>
 prowler aws -T/--session-duration <seconds> -I/--external-id <external_id> -R arn:aws:iam::<account_id>:role/<role_name>
 ```
 
-## STS Endpoint Region
-
-If you are using Prowler in AWS regions that are not enabled by default you need to use the argument `--sts-endpoint-region` to point the AWS STS API calls `assume-role` and `get-caller-identity` to the non-default region, e.g.: `prowler aws --sts-endpoint-region eu-south-2`.
-
-> Since v3.11.0, Prowler uses a regional token in STS sessions so it can scan all AWS regions without needing the `--sts-endpoint-region` argument.
-
-> Make sure that you have enabled the AWS Region you want to scan in BOTH AWS Accounts (assumed role account and account from which you assume the role).
-
 ## Role MFA
 
 If your IAM Role has MFA configured you can use `--mfa` along with `-R`/`--role <role_arn>` and Prowler will ask you to input the following values to get a new temporary session for the IAM Role provided:
````
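For automation that wraps Prowler, the role-assumption flags documented above can be assembled programmatically; a minimal sketch (the helper name is hypothetical):

```python
def build_assume_role_cmd(role_arn, session_duration=None, external_id=None, mfa=False):
    """Assemble a `prowler aws` invocation that assumes an IAM Role,
    mirroring the flags above: -R/--role, -T/--session-duration,
    -I/--external-id and --mfa."""
    cmd = ["prowler", "aws", "-R", role_arn]
    if session_duration is not None:
        cmd += ["-T", str(session_duration)]
    if external_id is not None:
        cmd += ["-I", external_id]
    if mfa:
        cmd.append("--mfa")
    return cmd
```

The returned list can be handed directly to `subprocess.run` without shell quoting concerns.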
````diff
@@ -1,5 +1,18 @@
 # Compliance
-Prowler allows you to execute checks based on requirements defined in compliance frameworks.
+Prowler allows you to execute checks based on requirements defined in compliance frameworks. By default, it will execute and give you an overview of the status of each compliance framework:
+
+<img src="../img/compliance.png"/>
+
+> You can find CSVs containing detailed compliance results inside the compliance folder within Prowler's output folder.
+
+## Execute Prowler based on Compliance Frameworks
+Prowler can analyze your environment based on a specific compliance framework and get more details. To do it, use the option `--compliance`:
+```sh
+prowler <provider> --compliance <compliance_framework>
+```
+Standard results will be shown and, additionally, the framework information, as in the sample below for CIS AWS 1.5. For details, a CSV file is generated as well.
+
+<img src="../img/compliance-cis-sample1.png"/>
+
 ## List Available Compliance Frameworks
 In order to see which compliance frameworks are covered by Prowler, you can use the option `--list-compliance`:
````
````diff
@@ -10,9 +23,12 @@ Currently, the available frameworks are:
 
 - `cis_1.4_aws`
 - `cis_1.5_aws`
+- `cis_2.0_aws`
+- `cisa_aws`
 - `ens_rd2022_aws`
 - `aws_audit_manager_control_tower_guardrails_aws`
 - `aws_foundational_security_best_practices_aws`
 - `aws_well_architected_framework_reliability_pillar_aws`
 - `aws_well_architected_framework_security_pillar_aws`
-- `cisa_aws`
 - `fedramp_low_revision_4_aws`
@@ -22,6 +38,9 @@ Currently, the available frameworks are:
 - `gxp_eu_annex_11_aws`
 - `gxp_21_cfr_part_11_aws`
 - `hipaa_aws`
 - `iso27001_2013_aws`
+- `mitre_attack_aws`
 - `nist_800_53_revision_4_aws`
 - `nist_800_53_revision_5_aws`
 - `nist_800_171_revision_2_aws`
````
````diff
@@ -38,7 +57,6 @@ prowler <provider> --list-compliance-requirements <compliance_framework(s)>
 ```
 
 Example for the first requirements of CIS 1.5 for AWS:
 
 ```
 Listing CIS 1.5 AWS Compliance Requirements:
@@ -71,15 +89,6 @@ Requirement Id: 1.5
 
 ```
 
-## Execute Prowler based on Compliance Frameworks
-As we mentioned, Prowler can be execute to analyse you environment based on a specific compliance framework, to do it, you can use option `--compliance`:
-```sh
-prowler <provider> --compliance <compliance_framework>
-```
-Standard results will be shown and additionally the framework information as the sample below for CIS AWS 1.5. For details a CSV file has been generated as well.
-
-<img src="../img/compliance-cis-sample1.png"/>
-
 ## Create and contribute adding other Security Frameworks
 
 This information is part of the Developer Guide and can be found here: https://docs.prowler.cloud/en/latest/tutorials/developer-guide/.
````
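The framework lists above are derived from per-provider JSON files shipped with Prowler; a minimal sketch of that discovery logic (the directory layout and helper name are illustrative):

```python
import os

def available_frameworks(compliance_dir: str) -> list:
    """Discover compliance frameworks by scanning a directory for
    per-framework <framework_name>.json files."""
    frameworks = []
    for entry in os.scandir(compliance_dir):
        if entry.is_file() and entry.name.endswith(".json"):
            frameworks.append(entry.name[: -len(".json")])
    return sorted(frameworks)
```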
````diff
@@ -29,10 +29,10 @@ The following list includes all the AWS checks with configurable variables that can be changed in the configuration yaml file:
 | `organizations_delegated_administrators` | `organizations_trusted_delegated_administrators` | List of Strings |
 | `ecr_repositories_scan_vulnerabilities_in_latest_image` | `ecr_repository_vulnerability_minimum_severity` | String |
 | `trustedadvisor_premium_support_plan_subscribed` | `verify_premium_support_plans` | Boolean |
-| `config_recorder_all_regions_enabled` | `allowlist_non_default_regions` | Boolean |
-| `drs_job_exist` | `allowlist_non_default_regions` | Boolean |
-| `guardduty_is_enabled` | `allowlist_non_default_regions` | Boolean |
-| `securityhub_enabled` | `allowlist_non_default_regions` | Boolean |
+| `config_recorder_all_regions_enabled` | `mute_non_default_regions` | Boolean |
+| `drs_job_exist` | `mute_non_default_regions` | Boolean |
+| `guardduty_is_enabled` | `mute_non_default_regions` | Boolean |
+| `securityhub_enabled` | `mute_non_default_regions` | Boolean |
 
 ## Azure
 
@@ -50,8 +50,8 @@ The following list includes all the AWS checks with configurable variables that can be changed in the configuration yaml file:
 aws:
 
   # AWS Global Configuration
-  # aws.allowlist_non_default_regions --> Allowlist Failed Findings in non-default regions for GuardDuty, SecurityHub, DRS and Config
-  allowlist_non_default_regions: False
+  # aws.mute_non_default_regions --> Mute Failed Findings in non-default regions for GuardDuty, SecurityHub, DRS and Config
+  mute_non_default_regions: False
 
   # AWS IAM Configuration
   # aws.iam_user_accesskey_unused --> CIS recommends 45 days
````
BIN docs/tutorials/img/compliance.png (new file, 93 KiB; binary file not shown)

Two further binary images appear in the diff with unchanged sizes (10 KiB and 94 KiB); their filenames were not captured.
````diff
@@ -8,7 +8,7 @@ There are different log levels depending on the logging information that is desired:
 
 - **DEBUG**: It will show low-level logs from Python.
 - **INFO**: It will show all the API calls that are being invoked by the provider.
-- **WARNING**: It will show all resources that are being **allowlisted**.
+- **WARNING**: It will show all resources that are being **muted**.
 - **ERROR**: It will show any errors, e.g., not authorized actions.
 - **CRITICAL**: The default log level. If a critical log appears, it will **exit** Prowler's execution.
````
````diff
@@ -9,10 +9,10 @@ Execute Prowler in verbose mode (like in Version 2):
 ```console
 prowler <provider> --verbose
 ```
-## Show only Fails
-Prowler can only display the failed findings:
+## Filter findings by status
+Prowler can filter the findings by their status:
 ```console
-prowler <provider> -q/--quiet
+prowler <provider> --status [PASS, FAIL, MANUAL]
 ```
 ## Disable Exit Code 3
 Prowler does not trigger exit code 3 with failed checks:
````
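Conceptually, the new `--status` flag is a simple filter over findings; a minimal sketch (the finding layout shown is illustrative, not Prowler's internal model):

```python
def filter_findings(findings, statuses):
    # Keep only findings whose status is one of the requested values,
    # mirroring `prowler <provider> --status [PASS, FAIL, MANUAL]`.
    allowed = set(statuses)
    return [f for f in findings if f["status"] in allowed]
```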
````diff
@@ -1,19 +1,19 @@
-# Allowlisting
+# Mute Listing
 Sometimes you may find resources that are intentionally configured in a certain way that may be a bad practice but that are acceptable in your case, for example an AWS S3 Bucket open to the internet hosting a web site, or an AWS Security Group with an open port needed in your use case.
 
-Allowlist option works along with other options and adds a `WARNING` instead of `INFO`, `PASS` or `FAIL` to any output format.
+The Mute List option works along with other options and adds a `MUTED` status instead of `MANUAL`, `PASS` or `FAIL` to any output format.
 
-You can use `-w`/`--allowlist-file` with the path of your allowlist yaml file, but first, let's review the syntax.
+You can use `-w`/`--mutelist-file` with the path of your mutelist yaml file, but first, let's review the syntax.
 
-## Allowlist Yaml File Syntax
+## Mute List Yaml File Syntax
 
 ### Account, Check and/or Region can be * to apply for all the cases.
 ### Resources and tags are lists that can have either Regex or Keywords.
 ### Tags is an optional list that matches on tuples of 'key=value' and are "ANDed" together.
 ### Use an alternation Regex to match one of multiple tags with "ORed" logic.
 ### For each check you can except Accounts, Regions, Resources and/or Tags.
-########################### ALLOWLIST EXAMPLE ###########################
-Allowlist:
+########################### MUTE LIST EXAMPLE ###########################
+Mute List:
   Accounts:
     "123456789012":
       Checks:
````
````diff
@@ -79,10 +79,10 @@ You can use `-w`/`--allowlist-file` with the path of your allowlist yaml file, but first, let's review the syntax.
       Tags:
         - "environment=prod" # Will ignore every resource in account 123456789012 except the ones containing the string "test" and with tag environment=prod
 
-## Allowlist specific regions
-If you want to allowlist/mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w allowlist.yaml`:
+## Mute specific regions
+If you want to mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w mutelist.yaml`:
 
-Allowlist:
+Mute List:
   Accounts:
     "*":
       Checks:
````
````diff
@@ -93,50 +93,50 @@ If you want to allowlist/mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w allowlist.yaml`:
       Resources:
         - "*"
````
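The matching rules described above (wildcards for account/check/region, regexes for resources, tags ANDed together) can be sketched in a few lines of Python; the `rule` layout here is illustrative, not Prowler's internal structure:

```python
import re

def is_muted(rule, account, check, region, resource, tags=()):
    """Sketch of mute-list matching: '*' matches anything, Resources
    entries are regexes, and optional Tags entries are ANDed."""
    def matches(value, pattern):
        return pattern == "*" or pattern == value

    if not matches(account, rule["Account"]):
        return False
    if not matches(check, rule["Check"]):
        return False
    if not any(matches(region, r) for r in rule["Regions"]):
        return False
    if not any(re.search(p, resource) for p in rule["Resources"]):
        return False
    # every tag tuple on the rule must be present on the resource
    return all(t in tags for t in rule.get("Tags", ()))
```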
````diff
-## Default AWS Allowlist
-Prowler provides you a Default AWS Allowlist with the AWS Resources that should be allowlisted such as all resources created by AWS Control Tower when setting up a landing zone.
-You can execute Prowler with this allowlist using the following command:
+## Default AWS Mute List
+Prowler provides you a Default AWS Mute List with the AWS Resources that should be muted, such as all resources created by AWS Control Tower when setting up a landing zone.
+You can execute Prowler with this mutelist using the following command:
 ```sh
-prowler aws --allowlist prowler/config/aws_allowlist.yaml
+prowler aws --mutelist prowler/config/aws_mutelist.yaml
 ```
-## Supported Allowlist Locations
+## Supported Mute List Locations
 
-The allowlisting flag supports the following locations:
+The mutelisting flag supports the following locations:
 
 ### Local file
-You will need to pass the local path where your Allowlist YAML file is located:
+You will need to pass the local path where your Mute List YAML file is located:
 ```
-prowler <provider> -w allowlist.yaml
+prowler <provider> -w mutelist.yaml
 ```
````
````diff
 ### AWS S3 URI
-You will need to pass the S3 URI where your Allowlist YAML file was uploaded to your bucket:
+You will need to pass the S3 URI where your Mute List YAML file was uploaded to your bucket:
 ```
-prowler aws -w s3://<bucket>/<prefix>/allowlist.yaml
+prowler aws -w s3://<bucket>/<prefix>/mutelist.yaml
 ```
-> Make sure that the used AWS credentials have s3:GetObject permissions in the S3 path where the allowlist file is located.
+> Make sure that the used AWS credentials have s3:GetObject permissions in the S3 path where the mutelist file is located.
 
 ### AWS DynamoDB Table ARN
 
-You will need to pass the DynamoDB Allowlist Table ARN:
+You will need to pass the DynamoDB Mute List Table ARN:
 
 ```
 prowler aws -w arn:aws:dynamodb:<region_name>:<account_id>:table/<table_name>
 ```
````
````diff
 1. The DynamoDB Table must have the following String keys:
-<img src="../img/allowlist-keys.png"/>
+<img src="../img/mutelist-keys.png"/>
 
-- The Allowlist Table must have the following columns:
-  - Accounts (String): This field can contain either an Account ID or an `*` (which applies to all the accounts that use this table as an allowlist).
+- The Mute List Table must have the following columns:
+  - Accounts (String): This field can contain either an Account ID or an `*` (which applies to all the accounts that use this table as a mutelist).
   - Checks (String): This field can contain either a Prowler Check Name or an `*` (which applies to all the scanned checks).
-  - Regions (List): This field contains a list of regions where this allowlist rule is applied (it can also contains an `*` to apply all scanned regions).
-  - Resources (List): This field contains a list of regex expressions that applies to the resources that are wanted to be allowlisted.
-  - Tags (List): -Optional- This field contains a list of tuples in the form of 'key=value' that applies to the resources tags that are wanted to be allowlisted.
-  - Exceptions (Map): -Optional- This field contains a map of lists of accounts/regions/resources/tags that are wanted to be excepted in the allowlist.
+  - Regions (List): This field contains a list of regions where this mutelist rule is applied (it can also contain an `*` to apply to all scanned regions).
+  - Resources (List): This field contains a list of regex expressions that apply to the resources that should be muted.
+  - Tags (List): -Optional- This field contains a list of tuples in the form of 'key=value' that apply to the tags of the resources that should be muted.
+  - Exceptions (Map): -Optional- This field contains a map of lists of accounts/regions/resources/tags that should be excepted from the mutelist.
 
-The following example will allowlist all resources in all accounts for the EC2 checks in the regions `eu-west-1` and `us-east-1` with the tags `environment=dev` and `environment=prod`, except the resources containing the string `test` in the account `012345678912` and region `eu-west-1` with the tag `environment=prod`:
+The following example will mute all resources in all accounts for the EC2 checks in the regions `eu-west-1` and `us-east-1` with the tags `environment=dev` and `environment=prod`, except the resources containing the string `test` in the account `012345678912` and region `eu-west-1` with the tag `environment=prod`:
 
-<img src="../img/allowlist-row.png"/>
+<img src="../img/mutelist-row.png"/>
 
 > Make sure that the used AWS credentials have `dynamodb:PartiQLSelect` permissions in the table.
````
````diff
@@ -151,7 +151,7 @@ prowler aws -w arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME
 Make sure that the credentials that Prowler uses can invoke the Lambda Function:
 
 ```
-- PolicyName: GetAllowList
+- PolicyName: GetMuteList
   PolicyDocument:
     Version: '2012-10-17'
     Statement:
@@ -160,14 +160,14 @@ Make sure that the credentials that Prowler uses can invoke the Lambda Function:
       Resource: arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME
 ```
 
-The Lambda Function can then generate an Allowlist dynamically. Here is the code of an example Python Lambda Function that generates an Allowlist:
+The Lambda Function can then generate a Mute List dynamically. Here is the code of an example Python Lambda Function that generates a Mute List:
 
 ```
 def handler(event, context):
     checks = {}
     checks["vpc_flow_logs_enabled"] = { "Regions": [ "*" ], "Resources": [ "" ], Optional("Tags"): [ "key:value" ] }
 
-    al = { "Allowlist": { "Accounts": { "*": { "Checks": checks } } } }
+    al = { "Mute List": { "Accounts": { "*": { "Checks": checks } } } }
     return al
 ```
````
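As written, the example handler above uses a schema-style `Optional("Tags")` key and will not run as plain Python; a runnable equivalent, with `Tags` as an ordinary optional key, could look like:

```python
def handler(event, context):
    # Each check maps to the regions/resources (regex) it should mute;
    # "Tags" is optional and its entries are ANDed when present.
    checks = {
        "vpc_flow_logs_enabled": {
            "Regions": ["*"],
            "Resources": [""],
            "Tags": ["key:value"],
        }
    }
    return {"Mute List": {"Accounts": {"*": {"Checks": checks}}}}
```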
````diff
@@ -36,7 +36,7 @@ nav:
     - Slack Integration: tutorials/integrations.md
     - Configuration File: tutorials/configuration_file.md
     - Logging: tutorials/logging.md
-    - Allowlist: tutorials/allowlist.md
+    - Mute List: tutorials/mutelist.md
     - Check Aliases: tutorials/check-aliases.md
     - Custom Metadata: tutorials/custom-checks-metadata.md
     - Ignore Unused Services: tutorials/ignore-unused-services.md
````
poetry.lock (generated, 39 lines changed)
````diff
@@ -1339,6 +1339,32 @@ files = [
 [package.dependencies]
 six = "*"
 
+[[package]]
+name = "kubernetes"
+version = "28.1.0"
+description = "Kubernetes python client"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "kubernetes-28.1.0-py2.py3-none-any.whl", hash = "sha256:10f56f8160dcb73647f15fafda268e7f60cf7dbc9f8e46d52fcd46d3beb0c18d"},
+    {file = "kubernetes-28.1.0.tar.gz", hash = "sha256:1468069a573430fb1cb5ad22876868f57977930f80a6749405da31cd6086a7e9"},
+]
+
+[package.dependencies]
+certifi = ">=14.05.14"
+google-auth = ">=1.0.1"
+oauthlib = ">=3.2.2"
+python-dateutil = ">=2.5.3"
+pyyaml = ">=5.4.1"
+requests = "*"
+requests-oauthlib = "*"
+six = ">=1.9.0"
+urllib3 = ">=1.24.2,<2.0"
+websocket-client = ">=0.32.0,<0.40.0 || >0.40.0,<0.41.dev0 || >=0.43.dev0"
+
+[package.extras]
+adal = ["adal (>=1.0.2)"]
+
 [[package]]
 name = "lazy-object-proxy"
 version = "1.9.0"
````
````diff
@@ -2605,17 +2631,17 @@ six = "*"
 
 [[package]]
 name = "rich"
-version = "13.3.5"
+version = "13.7.0"
 description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
 optional = false
 python-versions = ">=3.7.0"
 files = [
-    {file = "rich-13.3.5-py3-none-any.whl", hash = "sha256:69cdf53799e63f38b95b9bf9c875f8c90e78dd62b2f00c13a911c7a3b9fa4704"},
-    {file = "rich-13.3.5.tar.gz", hash = "sha256:2d11b9b8dd03868f09b4fffadc84a6a8cda574e40dc90821bd845720ebb8e89c"},
+    {file = "rich-13.7.0-py3-none-any.whl", hash = "sha256:6da14c108c4866ee9520bbffa71f6fe3962e193b7da68720583850cd4548e235"},
+    {file = "rich-13.7.0.tar.gz", hash = "sha256:5cb5123b5cf9ee70584244246816e9114227e0b98ad9176eede6ad54bf5403fa"},
 ]
 
 [package.dependencies]
-markdown-it-py = ">=2.2.0,<3.0.0"
+markdown-it-py = ">=2.2.0"
 pygments = ">=2.13.0,<3.0.0"
 
 [package.extras]
````
````diff
@@ -2773,8 +2799,7 @@ files = [
     {file = "ruamel.yaml.clib-0.2.7-cp310-cp310-win32.whl", hash = "sha256:763d65baa3b952479c4e972669f679fe490eee058d5aa85da483ebae2009d231"},
     {file = "ruamel.yaml.clib-0.2.7-cp310-cp310-win_amd64.whl", hash = "sha256:d000f258cf42fec2b1bbf2863c61d7b8918d31ffee905da62dede869254d3b8a"},
-    {file = "ruamel.yaml.clib-0.2.7-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:045e0626baf1c52e5527bd5db361bc83180faaba2ff586e763d3d5982a876a9e"},
-    {file = "ruamel.yaml.clib-0.2.7-cp311-cp311-macosx_13_0_arm64.whl", hash = "sha256:1a6391a7cabb7641c32517539ca42cf84b87b667bad38b78d4d42dd23e957c81"},
     {file = "ruamel.yaml.clib-0.2.7-cp311-cp311-manylinux2014_aarch64.whl", hash = "sha256:9c7617df90c1365638916b98cdd9be833d31d337dbcd722485597b43c4a215bf"},
+    {file = "ruamel.yaml.clib-0.2.7-cp311-cp311-macosx_12_6_arm64.whl", hash = "sha256:721bc4ba4525f53f6a611ec0967bdcee61b31df5a56801281027a3a6d1c2daf5"},
     {file = "ruamel.yaml.clib-0.2.7-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:41d0f1fa4c6830176eef5b276af04c89320ea616655d01327d5ce65e50575c94"},
     {file = "ruamel.yaml.clib-0.2.7-cp311-cp311-win32.whl", hash = "sha256:f6d3d39611ac2e4f62c3128a9eed45f19a6608670c5a2f4f07f24e8de3441d38"},
     {file = "ruamel.yaml.clib-0.2.7-cp311-cp311-win_amd64.whl", hash = "sha256:da538167284de58a52109a9b89b8f6a53ff8437dd6dc26d33b57bf6699153122"},
````
````diff
@@ -3296,4 +3321,4 @@ docs = ["mkdocs", "mkdocs-material"]
 [metadata]
 lock-version = "2.0"
 python-versions = ">=3.9,<3.12"
-content-hash = "1820fff10ec1ca49cf833c4c1dbd838b3f89ee71a0264688518817951844df6c"
+content-hash = "ec078424ecc4e6c85d759cf88c4db94cf4a46021c33e6fe0b4a95072e1aa4c0f"
````
````diff
@@ -6,13 +6,12 @@ import sys
 
 from colorama import Fore, Style
 
-from prowler.lib.banner import print_banner
+from prowler.config.config import get_available_compliance_frameworks
 from prowler.lib.check.check import (
     bulk_load_checks_metadata,
     bulk_load_compliance_frameworks,
     exclude_checks_to_run,
     exclude_services_to_run,
-    execute_checks,
     list_categories,
     list_checks_json,
     list_services,
````
````diff
@@ -30,15 +29,16 @@ from prowler.lib.check.custom_checks_metadata import (
     parse_custom_checks_metadata_file,
     update_checks_metadata,
 )
+from prowler.lib.check.managers import ExecutionManager
 from prowler.lib.cli.parser import ProwlerArgumentParser
 from prowler.lib.logger import logger, set_logging_config
-from prowler.lib.outputs.compliance import display_compliance_table
+from prowler.lib.outputs.compliance.compliance import display_compliance_table
 from prowler.lib.outputs.html import add_html_footer, fill_html_overview_statistics
 from prowler.lib.outputs.json import close_json
 from prowler.lib.outputs.outputs import extract_findings_statistics
 from prowler.lib.outputs.slack import send_slack_message
 from prowler.lib.outputs.summary_table import display_summary_table
-from prowler.providers.aws.aws_provider import get_available_aws_service_regions
+from prowler.lib.ui.live_display import live_display
 from prowler.providers.aws.lib.s3.s3 import send_to_s3_bucket
 from prowler.providers.aws.lib.security_hub.security_hub import (
     batch_send_to_security_hub,
````
````diff
@@ -46,11 +46,16 @@ from prowler.providers.aws.lib.security_hub.security_hub import (
     resolve_security_hub_previous_findings,
     verify_security_hub_integration_enabled_per_region,
 )
-from prowler.providers.common.allowlist import set_provider_allowlist
 from prowler.providers.common.audit_info import (
     set_provider_audit_info,
     set_provider_execution_parameters,
 )
+from prowler.providers.common.clean import clean_provider_local_output_directories
+from prowler.providers.common.common import (
+    get_global_provider,
+    set_global_provider_object,
+)
+from prowler.providers.common.mutelist import set_provider_mutelist
 from prowler.providers.common.outputs import set_provider_output_options
 from prowler.providers.common.quick_inventory import run_provider_quick_inventory
````
````diff
@@ -73,12 +78,17 @@ def prowler():
     compliance_framework = args.compliance
     custom_checks_metadata_file = args.custom_checks_metadata_file
 
-    if not args.no_banner:
-        print_banner(args)
+    live_display.initialize(args)
+
+    # if not args.no_banner:
+    #     print_banner(args)
 
     # We treat the compliance framework as another output format
     if compliance_framework:
         args.output_modes.extend(compliance_framework)
+    # If no input compliance framework, set all
+    else:
+        args.output_modes.extend(get_available_compliance_frameworks(provider))
 
     # Set Logger configuration
     set_logging_config(args.log_level, args.log_file, args.only_logs)
````
````diff
@@ -148,6 +158,7 @@ def prowler():
 
     # Set the audit info based on the selected provider
     audit_info = set_provider_audit_info(provider, args.__dict__)
+    set_global_provider_object(args)
 
     # Import custom checks from folder
     if checks_folder:
````
````diff
@@ -172,12 +183,12 @@ def prowler():
     # Sort final check list
     checks_to_execute = sorted(checks_to_execute)
 
-    # Parse Allowlist
-    allowlist_file = set_provider_allowlist(provider, audit_info, args)
+    # Parse Mute List
+    mutelist_file = set_provider_mutelist(provider, audit_info, args)
 
     # Set output options based on the selected provider
     audit_output_options = set_provider_output_options(
-        provider, args, audit_info, allowlist_file, bulk_checks_metadata
+        provider, args, audit_info, mutelist_file, bulk_checks_metadata
     )
 
     # Run the quick inventory for the provider if available
````
````diff
@@ -187,14 +198,16 @@ def prowler():
     # Execute checks
     findings = []
 
     if len(checks_to_execute):
-        findings = execute_checks(
+        execution_manager = ExecutionManager(
             checks_to_execute,
             provider,
             audit_info,
             audit_output_options,
             custom_checks_metadata,
         )
+        findings = execution_manager.execute_checks()
     else:
         logger.error(
             "There are no checks to execute. Please, check your input arguments"
````
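The refactor above swaps a free `execute_checks(...)` call for an `ExecutionManager` object; the shape of that change is a plain facade that bundles the run parameters at construction time and executes later. A sketch of the pattern (not Prowler's actual class):

```python
class ExecutionManager:
    """Bundle the execution parameters once, run the checks later."""

    def __init__(self, checks, provider, audit_info, output_options, custom_metadata):
        self.checks = checks
        self.provider = provider
        self.audit_info = audit_info
        self.output_options = output_options
        self.custom_metadata = custom_metadata

    def execute_checks(self):
        findings = []
        for check in sorted(self.checks):
            # A real implementation would import and run each check here;
            # this placeholder only records which check would run.
            findings.append({"check": check, "provider": self.provider})
        return findings
```

Holding the parameters on the object lets later features (progress UIs, parallelism) hook into a single place instead of threading extra arguments through a function call.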
````diff
@@ -256,9 +269,10 @@ def prowler():
             f"{Style.BRIGHT}\nSending findings to AWS Security Hub, please wait...{Style.RESET_ALL}"
         )
         # Verify where AWS Security Hub is enabled
+        global_provider = get_global_provider()
         aws_security_enabled_regions = []
         security_hub_regions = (
-            get_available_aws_service_regions("securityhub", audit_info)
+            global_provider.get_available_aws_service_regions("securityhub")
             if not audit_info.audited_regions
             else audit_info.audited_regions
         )
````
````diff
@@ -308,8 +322,12 @@ def prowler():
             provider,
         )
 
-    if compliance_framework and findings:
-        for compliance in compliance_framework:
+    if findings:
+        compliance_overview = False
+        if not compliance_framework:
+            compliance_overview = True
+            compliance_framework = get_available_compliance_frameworks(provider)
+        for compliance in sorted(compliance_framework):
             # Display compliance table
             display_compliance_table(
                 findings,
@@ -317,12 +335,20 @@ def prowler():
                 compliance,
                 audit_output_options.output_filename,
                 audit_output_options.output_directory,
+                compliance_overview,
             )
+            if compliance_overview:
+                print(
+                    f"\nDetailed compliance results are in {Fore.YELLOW}{audit_output_options.output_directory}/compliance/{Style.RESET_ALL}\n"
+                )
 
     # If custom checks were passed, remove the modules
     if checks_folder:
         remove_custom_checks_module(checks_folder, provider)
 
+    # clean local directories
+    clean_provider_local_output_directories(args)
 
     # If there are failed findings exit code 3, except if -z is input
     if not args.ignore_exit_code_3 and stats["total_fail"] > 0:
         sys.exit(3)
````
````diff
@@ -1,4 +1,4 @@
-Allowlist:
+Mute List:
   Accounts:
     "*":
       ########################### AWS CONTROL TOWER ###########################
````
````diff
@@ -3,8 +3,8 @@
 ### Tags is an optional list that matches on tuples of 'key=value' and are "ANDed" together.
 ### Use an alternation Regex to match one of multiple tags with "ORed" logic.
 ### For each check you can except Accounts, Regions, Resources and/or Tags.
-########################### ALLOWLIST EXAMPLE ###########################
-Allowlist:
+########################### MUTE LIST EXAMPLE ###########################
+Mute List:
   Accounts:
     "123456789012":
       Checks:
````
@@ -25,13 +25,19 @@ banner_color = "\033[1;92m"
 # Severities
 valid_severities = ["critical", "high", "medium", "low", "informational"]

+# Statuses
+finding_statuses = ["PASS", "FAIL", "MANUAL"]
+
 # Compliance
 actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))


-def get_available_compliance_frameworks():
+def get_available_compliance_frameworks(provider=None):
     available_compliance_frameworks = []
-    for provider in ["aws", "gcp", "azure"]:
+    providers = ["aws", "gcp", "azure"]
+    if provider:
+        providers = [provider]
+    for provider in providers:
         with os.scandir(f"{actual_directory}/../compliance/{provider}") as files:
             for file in files:
                 if file.is_file() and file.name.endswith(".json"):
@@ -50,7 +56,6 @@ aws_services_json_file = "aws_regions_by_service.json"
# gcp_zones_json_file = "gcp_zones.json"

default_output_directory = getcwd() + "/output"

output_file_timestamp = timestamp.strftime("%Y%m%d%H%M%S")
timestamp_iso = timestamp.isoformat(sep=" ", timespec="seconds")
csv_file_suffix = ".csv"
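The refactor of `get_available_compliance_frameworks` above adds an optional provider filter. A standalone sketch of that behavior, with the directory scan replaced by a fake in-memory listing (the framework file names here are invented for illustration):

```python
# Sketch of the provider-filter refactor above: when `provider` is given,
# only that provider's frameworks are returned; otherwise all providers are scanned.
FAKE_DIRS = {
    "aws": ["cis_1.5_aws.json", "README.md"],
    "gcp": ["cis_2.0_gcp.json"],
    "azure": ["cis_2.0_azure.json"],
}

def get_available_compliance_frameworks(provider=None):
    frameworks = []
    providers = ["aws", "gcp", "azure"]
    if provider:
        providers = [provider]
    for provider in providers:
        for filename in FAKE_DIRS[provider]:
            # Only .json files are compliance framework definitions
            if filename.endswith(".json"):
                frameworks.append(filename[: -len(".json")])
    return frameworks

print(get_available_compliance_frameworks("gcp"))  # ['cis_2.0_gcp']
print(len(get_available_compliance_frameworks()))  # 3
```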
@@ -2,10 +2,10 @@
 aws:

   # AWS Global Configuration
-  # aws.allowlist_non_default_regions --> Set to True to allowlist failed findings in non-default regions for AccessAnalyzer, GuardDuty, SecurityHub, DRS and Config
-  allowlist_non_default_regions: False
-  # If you want to allowlist/mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w allowlist.yaml`:
-  # Allowlist:
+  # aws.mute_non_default_regions --> Set to True to mute failed findings in non-default regions for GuardDuty, SecurityHub, DRS and Config
+  mute_non_default_regions: False
+  # If you want to mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w mutelist.yaml`:
+  # Mute List:
   #   Accounts:
   #     "*":
   #       Checks:
@@ -92,3 +92,6 @@ azure:

 # GCP Configuration
 gcp:
+
+# Kubernetes Configuration
+kubernetes:
@@ -13,3 +13,7 @@ CustomChecksMetadata:
     Checks:
       compute_instance_public_ip:
         Severity: critical
+  kubernetes:
+    Checks:
+      apiserver_anonymous_requests:
+        Severity: low
@@ -15,13 +15,13 @@ def print_banner(args):
     """
     print(banner)

-    if args.verbose or args.quiet:
+    if args.verbose:
         print(
             f"""
Color code for results:
-- {Fore.YELLOW}INFO (Information){Style.RESET_ALL}
+- {Fore.YELLOW}MANUAL (Manual check){Style.RESET_ALL}
 - {Fore.GREEN}PASS (Recommended value){Style.RESET_ALL}
-- {orange_color}WARNING (Ignored by allowlist){Style.RESET_ALL}
+- {orange_color}MUTED (Muted by mute list){Style.RESET_ALL}
 - {Fore.RED}FAIL (Fix required){Style.RESET_ALL}
"""
         )
@@ -10,18 +10,19 @@ from pkgutil import walk_packages
 from types import ModuleType
 from typing import Any

-from alive_progress import alive_bar
 from colorama import Fore, Style

 import prowler
 from prowler.config.config import orange_color
 from prowler.lib.check.compliance_models import load_compliance_framework
 from prowler.lib.check.custom_checks_metadata import update_check_metadata
+from prowler.lib.check.managers import ExecutionManager
 from prowler.lib.check.models import Check, load_check_metadata
 from prowler.lib.logger import logger
 from prowler.lib.outputs.outputs import report
+from prowler.lib.ui.live_display import live_display
 from prowler.lib.utils.utils import open_file, parse_json_file
-from prowler.providers.aws.lib.allowlist.allowlist import allowlist_findings
+from prowler.providers.aws.lib.mutelist.mutelist import mutelist_findings
 from prowler.providers.common.common import get_global_provider
 from prowler.providers.common.models import Audit_Metadata
 from prowler.providers.common.outputs import Provider_Output_Options
@@ -431,8 +432,10 @@ def execute_checks(
     services_executed = set()
     checks_executed = set()

+    global_provider = get_global_provider()
+
     # Initialize the Audit Metadata
-    audit_info.audit_metadata = Audit_Metadata(
+    global_provider.audit_metadata = Audit_Metadata(
         services_scanned=0,
         expected_checks=checks_to_execute,
         completed_checks=0,
@@ -492,47 +495,57 @@ def execute_checks(
         print(
             f"{Style.BRIGHT}Executing {checks_num} {check_noun}, please wait...{Style.RESET_ALL}\n"
         )
-        with alive_bar(
-            total=len(checks_to_execute),
-            ctrl_c=False,
-            bar="blocks",
-            spinner="classic",
-            stats=False,
-            enrich_print=False,
-        ) as bar:
-            for check_name in checks_to_execute:
-                # Recover service from check name
-                service = check_name.split("_")[0]
-                bar.title = (
-                    f"-> Scanning {orange_color}{service}{Style.RESET_ALL} service"
-                )
-                try:
-                    check_findings = execute(
-                        service,
-                        check_name,
-                        provider,
-                        audit_output_options,
-                        audit_info,
-                        services_executed,
-                        checks_executed,
-                        custom_checks_metadata,
-                    )
-                    all_findings.extend(check_findings)
+        execution_manager = ExecutionManager(provider, checks_to_execute)
+        total_checks = execution_manager.total_checks_per_service()
+        completed_checks = {service: 0 for service in total_checks}
+        service_findings = []
+        for service, check_name in execution_manager.execute_checks():
+            try:
+                check_findings = execute(
+                    service,
+                    check_name,
+                    provider,
+                    audit_output_options,
+                    audit_info,
+                    services_executed,
+                    checks_executed,
+                    custom_checks_metadata,
+                )
+                all_findings.extend(check_findings)
+                service_findings.extend(check_findings)
+                # Update the completed checks count
+                completed_checks[service] += 1

-                # If the check does not exist in the provider or is from another provider
-                except ModuleNotFoundError:
-                    logger.error(
-                        f"Check '{check_name}' was not found for the {provider.upper()} provider"
-                    )
-                except Exception as error:
-                    logger.error(
-                        f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
-                    )
-                bar()
-            bar.title = f"-> {Fore.GREEN}Scan completed!{Style.RESET_ALL}"
+                # Check if all checks for the service are completed
+                if completed_checks[service] == total_checks[service]:
+                    # All checks for the service are completed
+                    # Add a summary table or perform other actions
+                    live_display.add_results_for_service(service, service_findings)
+                    # Clear service_findings
+                    service_findings = []
+
+            # If the check does not exist in the provider or is from another provider
+            except ModuleNotFoundError:
+                logger.error(
+                    f"Check '{check_name}' was not found for the {provider.upper()} provider"
+                )
+            except Exception as error:
+                logger.error(
+                    f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
+                )
     return all_findings


 def create_check_service_dict(checks_to_execute):
     output = {}
     for check_name in checks_to_execute:
         service = check_name.split("_")[0]
         if service not in output.keys():
             output[service] = []
         output[service].append(check_name)
     return output


 def execute(
     service: str,
     check_name: str,
@@ -543,6 +556,7 @@ def execute(
     checks_executed: set,
     custom_checks_metadata: Any,
 ):
+    global_provider = get_global_provider()
     # Import check module
     check_module_path = (
         f"prowler.providers.{provider}.services.{service}.{check_name}.{check_name}"
@@ -562,15 +576,15 @@ def execute(
     # Update Audit Status
     services_executed.add(service)
     checks_executed.add(check_name)
-    audit_info.audit_metadata = update_audit_metadata(
-        audit_info.audit_metadata, services_executed, checks_executed
+    global_provider.audit_metadata = update_audit_metadata(
+        global_provider.audit_metadata, services_executed, checks_executed
     )

-    # Allowlist findings
-    if audit_output_options.allowlist_file:
-        check_findings = allowlist_findings(
-            audit_output_options.allowlist_file,
-            audit_info.audited_account,
+    # Mute List findings
+    if audit_output_options.mutelist_file:
+        check_findings = mutelist_findings(
+            audit_output_options.mutelist_file,
+            global_provider.audited_account,
             check_findings,
         )
48  prowler/lib/check/check_to_client_mapper.py  Normal file
@@ -0,0 +1,48 @@
import ast
import os
import pathlib

from prowler.lib.logger import logger


class ImportFinder(ast.NodeVisitor):
    def __init__(self, provider):
        self.imports = set()
        self.provider = provider

    def visit_ImportFrom(self, node):
        if node.module and f"prowler.providers.{self.provider}.services" in node.module:
            for name in node.names:
                if "_client" in name.name:
                    self.imports.add(name.name)
        self.generic_visit(node)


def analyze_check_file(file_path, provider):
    # Parse the check file
    with open(file_path, "r") as file:
        node = ast.parse(file.read(), filename=file_path)

    finder = ImportFinder(provider)
    finder.visit(node)
    return list(finder.imports)


def get_dependencies_for_checks(provider, checks_dict):

    current_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
    prowler_dir = current_directory.parent.parent
    check_dependencies = {}
    for service_name, checks in checks_dict.items():
        check_dependencies[service_name] = {}
        for check_name in checks:
            relative_path = f"providers/{provider}/services/{service_name}/{check_name}/{check_name}.py"
            check_file_path = prowler_dir / relative_path
            if not check_file_path.exists():
                logger.error(
                    f"{check_name} does not exist at {relative_path}! Cannot determine service dependencies"
                )
                continue
            clients = analyze_check_file(str(check_file_path), provider)
            check_dependencies[service_name][check_name] = clients
    return check_dependencies
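The `ImportFinder` visitor above statically discovers which service clients a check imports, without executing the check. The same idea can be demonstrated self-contained by parsing a source string instead of a file; the sample check source below is hypothetical, not a real Prowler check:

```python
import ast

# Hypothetical check source: two service-client imports and one unrelated import.
SAMPLE_CHECK = """
from prowler.providers.aws.services.ec2.ec2_client import ec2_client
from prowler.providers.aws.services.vpc.vpc_client import vpc_client
from prowler.lib.logger import logger
"""

class ClientImportFinder(ast.NodeVisitor):
    """Collects every '*_client' name imported from a provider's services package."""

    def __init__(self, provider):
        self.imports = set()
        self.provider = provider

    def visit_ImportFrom(self, node):
        # Only imports coming from this provider's services tree count
        if node.module and f"prowler.providers.{self.provider}.services" in node.module:
            for name in node.names:
                if "_client" in name.name:
                    self.imports.add(name.name)
        self.generic_visit(node)

finder = ClientImportFinder("aws")
finder.visit(ast.parse(SAMPLE_CHECK))
print(sorted(finder.imports))  # ['ec2_client', 'vpc_client']
```

Because this runs on the AST, it finds the dependencies of every check up front, which is what lets the scheduler below decide which clients to initialize and when they can be released.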
369  prowler/lib/check/managers.py  Normal file
@@ -0,0 +1,369 @@
import importlib
import os
import sys
import traceback
from types import ModuleType
from typing import Any, Set

from colorama import Fore, Style

from prowler.lib.check.check_to_client_mapper import get_dependencies_for_checks
from prowler.lib.check.custom_checks_metadata import update_check_metadata
from prowler.lib.check.models import Check
from prowler.lib.logger import logger
from prowler.lib.outputs.outputs import report
from prowler.lib.ui.live_display import live_display
from prowler.providers.aws.lib.mutelist.mutelist import mutelist_findings
from prowler.providers.common.common import get_global_provider
from prowler.providers.common.models import Audit_Metadata
from prowler.providers.common.outputs import Provider_Output_Options


class ExecutionManager:
    def __init__(
        self,
        checks_to_execute: list,
        provider: str,
        audit_info: Any,
        audit_output_options: Provider_Output_Options,
        custom_checks_metadata: Any,
    ):
        self.checks_to_execute = checks_to_execute
        self.provider = provider
        self.audit_info = audit_info
        self.audit_output_options = audit_output_options
        self.custom_checks_metadata = custom_checks_metadata

        self.live_display = live_display
        self.live_display.start()
        self.loaded_clients = {}  # defaultdict(lambda: False)
        self.check_dict = self.create_check_service_dict(checks_to_execute)
        self.check_dependencies = get_dependencies_for_checks(provider, self.check_dict)
        self.remaining_checks = self.initialize_remaining_checks(
            self.check_dependencies
        )
        self.services_queue = self.initialize_services_queue(self.check_dependencies)

        # For tracking the executed services and checks
        self.services_executed: Set[str] = set()
        self.checks_executed: Set[str] = set()

        # Initialize the Audit Metadata
        self.audit_info.audit_metadata = Audit_Metadata(
            services_scanned=0,
            expected_checks=self.checks_to_execute,
            completed_checks=0,
            audit_progress=0,
        )

    def update_tracking(self, service: str, check: str):
        self.services_executed.add(service)
        self.checks_executed.add(check)

    @staticmethod
    def initialize_remaining_checks(check_dependencies):
        remaining_checks = {}
        for service, checks in check_dependencies.items():
            for check_name, clients in checks.items():
                remaining_checks[(service, check_name)] = clients
        return remaining_checks

    @staticmethod
    def initialize_services_queue(check_dependencies):
        return list(check_dependencies.keys())

    @staticmethod
    def create_check_service_dict(checks_to_execute):
        output = {}
        for check_name in checks_to_execute:
            service = check_name.split("_")[0]
            if service not in output.keys():
                output[service] = []
            output[service].append(check_name)
        return output

    def total_checks_per_service(self):
        """Returns a dictionary with the total number of checks for each service."""
        total_checks = {}
        for service, checks in self.check_dict.items():
            total_checks[service] = len(checks)
        return total_checks

    def find_next_service(self):
        # Prioritize services that use already loaded clients
        for service in self.services_queue:
            checks = self.check_dependencies[service]
            if any(
                client in self.loaded_clients
                for check in checks.values()
                for client in check
            ):
                return service
        return None if not self.services_queue else self.services_queue[0]

    @staticmethod
    def import_check(check_path: str) -> ModuleType:
        """
        Imports an input check using its path

        When importing a module using importlib.import_module, it's loaded and added to the sys.modules cache.
        This means that the module remains in memory and is not garbage collected immediately after use, as it's still referenced in sys.modules.
        This behavior is intentional, as importing modules can be a costly operation, and keeping them in memory allows for faster re-use.
        release_clients deletes this reference if it is no longer required by any of the remaining checks
        """
        lib = importlib.import_module(f"{check_path}")
        return lib

    # Imports service clients, and tracks whether each one has already been imported
    def import_client(self, client_name):
        if not self.loaded_clients.get(client_name):
            # Dynamically import the client
            module_name, _ = client_name.rsplit("_", 1)
            client_module = importlib.import_module(
                f"prowler.providers.{self.provider}.services.{module_name}.{client_name}"
            )
            self.loaded_clients[client_name] = client_module

    def release_clients(self, completed_check_clients):
        for client_name in completed_check_clients:
            # Determine if any of the remaining checks still require the client
            if not any(
                client == client_name
                for check in self.remaining_checks
                for client in self.remaining_checks[check]
            ):
                # Delete the reference to the client for this object
                del self.loaded_clients[client_name]
                module_name, _ = client_name.rsplit("_", 1)
                # Delete the reference to the client in sys.modules
                del sys.modules[
                    f"prowler.providers.aws.services.{module_name}.{client_name}"
                ]

    def generate_checks(self):
        """
        This is a generator function, which will:
        * Determine the next service whose checks will be executed
        * Load all the clients which are required by the checks into memory (init them)
        * Yield the service and check name, 1-by-1, to be used within execute_checks
        * Pass the completed checks to release_clients to determine if the clients that were required by the check are no longer needed, and can be garbage collected
        It will complete the checks for a service before moving onto the next one.
        It uses find_next_service to prioritize the next service based on whether any of that service's checks require a client that has already been loaded.
        """
        while self.remaining_checks:
            current_service = self.find_next_service()
            if not current_service:
                # Execution has completed, return
                break
            # Remove the service from the services_queue
            self.services_queue.remove(current_service)

            checks = self.check_dependencies[current_service]
            clients_for_service = list(
                set(client for client_list in checks.values() for client in client_list)
            )

            for client in clients_for_service:
                self.live_display.add_client_init_section(client)
                self.import_client(client)

            # Add the display component
            total_checks = len(self.check_dict[current_service])
            self.live_display.add_service_section(current_service, total_checks)

            for check_name, clients_for_check in checks.items():

                yield current_service, check_name

                self.live_display.increment_check_progress()
                self.live_display.increment_overall_check_progress()

                del self.remaining_checks[(current_service, check_name)]
                self.release_clients(clients_for_check)

            self.live_display.increment_overall_service_progress()

    def execute_checks(self) -> list:
        # List to store all the check's findings
        all_findings = []
        # Services and checks executed for the Audit Status

        global_provider = get_global_provider()

        # Initialize the Audit Metadata
        global_provider.audit_metadata = Audit_Metadata(
            services_scanned=0,
            expected_checks=self.checks_to_execute,
            completed_checks=0,
            audit_progress=0,
        )
        if os.name != "nt":
            try:
                from resource import RLIMIT_NOFILE, getrlimit

                # Check ulimit for the maximum system open files
                soft, _ = getrlimit(RLIMIT_NOFILE)
                if soft < 4096:
                    logger.warning(
                        f"Your session file descriptors limit ({soft} open files) is below 4096. We recommend to increase it to avoid errors. Solve it running this command `ulimit -n 4096`. For more info visit https://docs.prowler.cloud/en/latest/troubleshooting/"
                    )
            except Exception as error:
                logger.error("Unable to retrieve ulimit default settings")
                logger.error(
                    f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                )

        # Execution with the --only-logs flag
        if self.audit_output_options.only_logs:
            for service, check_name in self.generate_checks():
                try:
                    check_findings = self.execute(service, check_name)
                    all_findings.extend(check_findings)

                # If the check does not exist in the provider or is from another provider
                except ModuleNotFoundError:
                    logger.error(
                        f"Check '{check_name}' was not found for the {self.provider.upper()} provider"
                    )
                except Exception as error:
                    logger.error(
                        f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                    )
        else:
            # Default execution
            total_checks = self.total_checks_per_service()
            self.live_display.add_overall_progress_section(
                total_checks_dict=total_checks
            )
            # For tracking when a service is completed
            completed_checks = {service: 0 for service in total_checks}
            service_findings = []
            for service, check_name in self.generate_checks():
                try:
                    check_findings = self.execute(
                        service,
                        check_name,
                    )
                    all_findings.extend(check_findings)
                    service_findings.extend(check_findings)
                    # Update the completed checks count
                    completed_checks[service] += 1

                    # Check if all checks for the service are completed
                    if completed_checks[service] == total_checks[service]:
                        # All checks for the service are completed
                        # Add a summary table or perform other actions
                        live_display.add_results_for_service(service, service_findings)
                        # Clear service_findings
                        service_findings = []

                # If the check does not exist in the provider or is from another provider
                except ModuleNotFoundError:
                    logger.error(
                        f"Check '{check_name}' was not found for the {self.provider.upper()} provider"
                    )
                except Exception as error:
                    logger.error(
                        f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                    )
            self.live_display.hide_service_section()
        return all_findings

    def execute(
        self,
        service: str,
        check_name: str,
    ):
        try:
            # Import check module
            check_module_path = f"prowler.providers.{self.provider}.services.{service}.{check_name}.{check_name}"
            lib = self.import_check(check_module_path)
            # Recover functions from check
            check_to_execute = getattr(lib, check_name)
            c = check_to_execute()

            # Update check metadata to reflect that in the outputs
            if self.custom_checks_metadata and self.custom_checks_metadata[
                "Checks"
            ].get(c.CheckID):
                c = update_check_metadata(
                    c, self.custom_checks_metadata["Checks"][c.CheckID]
                )

            # Run check
            check_findings = self.run_check(c, self.audit_output_options)

            # Update Audit Status
            self.update_tracking(service, check_name)
            self.update_audit_metadata()

            # Mutelist findings
            if self.audit_output_options.mutelist_file:
                check_findings = mutelist_findings(
                    self.audit_output_options.mutelist_file,
                    self.audit_info.audited_account,
                    check_findings,
                )

            # Report the check's findings
            report(check_findings, self.audit_output_options, self.audit_info)

            if os.environ.get("PROWLER_REPORT_LIB_PATH"):
                try:
                    logger.info("Using custom report interface ...")
                    lib = os.environ["PROWLER_REPORT_LIB_PATH"]
                    outputs_module = importlib.import_module(lib)
                    custom_report_interface = getattr(outputs_module, "report")

                    custom_report_interface(
                        check_findings, self.audit_output_options, self.audit_info
                    )
                except Exception:
                    sys.exit(1)
        except Exception as error:
            logger.error(
                f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )

        return check_findings

    @staticmethod
    def run_check(check: Check, output_options: Provider_Output_Options) -> list:
        findings = []
        if output_options.verbose:
            print(
                f"\nCheck ID: {check.CheckID} - {Fore.MAGENTA}{check.ServiceName}{Fore.YELLOW} [{check.Severity}]{Style.RESET_ALL}"
            )
        logger.debug(f"Executing check: {check.CheckID}")
        try:
            findings = check.execute()
        except Exception as error:
            if not output_options.only_logs:
                print(
                    f"Something went wrong in {check.CheckID}, please use --log-level ERROR"
                )
            logger.error(
                f"{check.CheckID} -- {error.__class__.__name__}[{traceback.extract_tb(error.__traceback__)[-1].lineno}]: {error}"
            )
        finally:
            return findings

    def update_audit_metadata(self):
        """update_audit_metadata returns the audit_metadata updated with the new status

        Updates the given audit_metadata using the length of the services_executed and checks_executed
        """
        try:
            self.audit_info.audit_metadata.services_scanned = len(
                self.services_executed
            )
            self.audit_info.audit_metadata.completed_checks = len(self.checks_executed)
            self.audit_info.audit_metadata.audit_progress = (
                100
                * len(self.checks_executed)
                / len(self.audit_info.audit_metadata.expected_checks)
            )
        except Exception as error:
            logger.error(
                f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )
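The scheduling idea behind `find_next_service` can be shown with a toy version outside the class: among the queued services, prefer one whose checks reuse a client that is already loaded, so expensive client initialization is amortized. The service and client names below are made up for illustration:

```python
# Toy sketch of ExecutionManager.find_next_service: prefer the next service
# whose checks reuse an already-loaded client; fall back to queue order.
check_dependencies = {
    "ec2": {"ec2_check_1": ["ec2_client"], "ec2_check_2": ["ec2_client", "vpc_client"]},
    "iam": {"iam_check_1": ["iam_client"]},
    "vpc": {"vpc_check_1": ["vpc_client"]},
}
loaded_clients = {"vpc_client": object()}  # pretend vpc_client is already imported
services_queue = ["iam", "ec2", "vpc"]

def find_next_service():
    for service in services_queue:
        checks = check_dependencies[service]
        # Any check in this service that needs a client we already have?
        if any(client in loaded_clients for clients in checks.values() for client in clients):
            return service
    return None if not services_queue else services_queue[0]

print(find_next_service())  # 'ec2' (one of its checks reuses the loaded vpc_client)
```

Note that "iam" is first in the queue but is skipped on the first pass because none of its clients are loaded yet; only when no service can reuse a loaded client does the scheduler fall back to plain queue order.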
@@ -2,10 +2,13 @@ import os
 import sys
 from abc import ABC, abstractmethod
 from dataclasses import dataclass
+from functools import wraps

 from pydantic import BaseModel, ValidationError
+from pydantic.main import ModelMetaclass

 from prowler.lib.logger import logger
+from prowler.lib.ui.live_display import live_display


 class Code(BaseModel):
@@ -57,9 +60,29 @@ class Check_Metadata_Model(BaseModel):
     Compliance: list = None


-class Check(ABC, Check_Metadata_Model):
+class CheckMeta(ModelMetaclass):
+    """
+    Dynamically decorates the execute function of all subclasses of the Check class
+
+    By making CheckMeta inherit from ModelMetaclass, it ensures that all features provided by Pydantic's BaseModel (such as data validation, serialization, and so forth) are preserved. CheckMeta just adds additional behavior (decorator application) on top of the existing features.
+    This also works because ModelMetaclass inherits from ABCMeta, as does the ABC class (it's got to do with how metaclasses work when applying them to a class that inherits from other classes that have a metaclass).
+    The primary role of CheckMeta is to automatically apply a decorator to the execute method of subclasses. This behavior does not conflict with the typical responsibilities of ModelMetaclass
+    """
+
+    def __new__(cls, name, bases, dct):
+        if "execute" in dct and not getattr(
+            dct["execute"], "__isabstractmethod__", False
+        ):
+            dct["execute"] = Check.update_title_with_findings_decorator(dct["execute"])
+        return super(CheckMeta, cls).__new__(cls, name, bases, dct)
+
+
+class Check(ABC, Check_Metadata_Model, metaclass=CheckMeta):
     """Prowler Check"""

+    title_bar_task: int = None
+    progress_task: int = None
+
     def __init__(self, **data):
         """Check's init function. Calls the CheckMetadataModel init."""
         # Parse the Check's metadata file
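The metaclass trick above can be demonstrated in isolation. A minimal sketch that uses plain `type` instead of Pydantic's `ModelMetaclass`, wrapping every subclass's `execute` with a post-processing step (the class and attribute names here are invented for the example):

```python
from functools import wraps

def record_findings(func):
    """Decorator applied automatically by the metaclass: runs the check,
    then stores how many findings it produced on the instance."""
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        findings = func(self, *args, **kwargs)
        self.last_count = len(findings)  # post-processing step
        return findings
    return wrapper

class AutoDecorateMeta(type):
    def __new__(cls, name, bases, dct):
        # Wrap `execute` only when the subclass actually defines one
        if "execute" in dct:
            dct["execute"] = record_findings(dct["execute"])
        return super().__new__(cls, name, bases, dct)

class BaseCheck(metaclass=AutoDecorateMeta):
    pass

class MyCheck(BaseCheck):
    def execute(self):
        return ["finding_a", "finding_b"]

check = MyCheck()
print(check.execute())   # ['finding_a', 'finding_b']
print(check.last_count)  # 2
```

The design choice is the same as in `CheckMeta`: subclass authors write a plain `execute` and get the live-display update for free, with no explicit decorator at each check.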
@@ -72,6 +95,43 @@ class Check(ABC, Check_Metadata_Model):
        # Calls parents init function
        super().__init__(**data)

        self.live_display_enabled = False
        service_section = live_display.get_service_section()
        if service_section:
            self.live_display_enabled = True

            self.title_bar_task = service_section.title_bar.add_task(
                f"{self.CheckTitle}...", start=False
            )

    def increment_task_progress(self):
        if self.live_display_enabled:
            current_section = live_display.get_service_section()
            current_section.task_progress.update(self.progress_task, advance=1)

    def start_task(self, message, count):
        if self.live_display_enabled:
            current_section = live_display.get_service_section()
            self.progress_task = current_section.task_progress.add_task(
                description=message, total=count, visible=True
            )

    def update_title_with_findings(self, findings):
        if self.live_display_enabled:
            current_section = live_display.get_service_section()
            # current_section.task_progress.remove_task(self.progress_task)
            total_failed = len(
                [report for report in findings if report.status == "FAIL"]
            )
            total_checked = len(findings)
            if total_failed == 0:
                message = f"{self.CheckTitle} [pass]All resources passed ({total_checked})[/pass]"
            else:
                message = f"{self.CheckTitle} [fail]{total_failed}/{total_checked} failed![/fail]"
            current_section.title_bar.update(
                task_id=self.title_bar_task, description=message
            )

    def metadata(self) -> dict:
        """Return the JSON representation of the check's metadata"""
        return self.json()
@@ -80,6 +140,24 @@ class Check(ABC, Check_Metadata_Model):
    def execute(self):
        """Execute the check's logic"""

    @staticmethod
    def update_title_with_findings_decorator(func):
        """
        Decorator to update the title bar in the live_display with findings after executing a check.
        """

        @wraps(func)
        def wrapper(check_instance, *args, **kwargs):
            # Execute the original check's logic
            findings = func(check_instance, *args, **kwargs)

            # Update the title bar with the findings
            check_instance.update_title_with_findings(findings)

            return findings

        return wrapper


@dataclass
class Check_Report:
@@ -146,6 +224,22 @@ class Check_Report_GCP(Check_Report):
        self.location = ""


@dataclass
class Check_Report_Kubernetes(Check_Report):
    # TODO change class name to CheckReportKubernetes
    """Contains the Kubernetes Check's finding information."""

    resource_name: str
    resource_id: str
    namespace: str

    def __init__(self, metadata):
        super().__init__(metadata)
        self.resource_name = ""
        self.resource_id = ""
        self.namespace = ""


# Testing Pending
def load_check_metadata(metadata_file: str) -> Check_Metadata_Model:
    """load_check_metadata loads and parses a Check's metadata file"""
@@ -8,6 +8,7 @@ from prowler.config.config import (
|
||||
default_config_file_path,
|
||||
default_output_directory,
|
||||
valid_severities,
|
||||
finding_statuses,
|
||||
)
|
||||
from prowler.providers.common.arguments import (
|
||||
init_providers_parser,
|
||||
@@ -116,10 +117,10 @@ Detailed documentation at https://docs.prowler.cloud
|
||||
"Outputs"
|
||||
)
|
||||
common_outputs_parser.add_argument(
|
||||
"-q",
|
||||
"--quiet",
|
||||
action="store_true",
|
||||
help="Store or send only Prowler failed findings",
|
||||
"--status",
|
||||
nargs="+",
|
||||
help=f"Filter by the status of the findings {finding_statuses}",
|
||||
choices=finding_statuses,
|
||||
)
|
||||
common_outputs_parser.add_argument(
|
||||
"-M",
|
||||
|
||||
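The `-q/--quiet` flag above is replaced by a multi-valued `--status` filter. Its argparse behavior can be reproduced standalone; `finding_statuses` mirrors the list added to `config.py`:

```python
import argparse

finding_statuses = ["PASS", "FAIL", "MANUAL"]

parser = argparse.ArgumentParser()
parser.add_argument(
    "--status",
    nargs="+",                  # accept one or more statuses
    choices=finding_statuses,   # reject anything outside the known set
    help=f"Filter by the status of the findings {finding_statuses}",
)

args = parser.parse_args(["--status", "FAIL", "MANUAL"])
print(args.status)  # ['FAIL', 'MANUAL']
```

With `nargs="+"` and `choices`, `--status FAIL MANUAL` parses to a list, while an unknown value such as `--status WARN` exits with a usage error, which is why the old boolean `--quiet` can be expressed as `--status FAIL`.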
0  prowler/lib/outputs/compliance/__init__.py  Normal file
@@ -0,0 +1,55 @@
from csv import DictWriter

from prowler.config.config import timestamp
from prowler.lib.outputs.models import (
    Check_Output_CSV_AWS_Well_Architected,
    generate_csv_fields,
)
from prowler.lib.utils.utils import outputs_unix_timestamp


def write_compliance_row_aws_well_architected_framework(
    file_descriptors, finding, compliance, output_options, audit_info
):
    compliance_output = compliance.Framework
    if compliance.Version != "":
        compliance_output += "_" + compliance.Version
    if compliance.Provider != "":
        compliance_output += "_" + compliance.Provider
    compliance_output = compliance_output.lower().replace("-", "_")
    csv_header = generate_csv_fields(Check_Output_CSV_AWS_Well_Architected)
    csv_writer = DictWriter(
        file_descriptors[compliance_output],
        fieldnames=csv_header,
        delimiter=";",
    )
    for requirement in compliance.Requirements:
        requirement_description = requirement.Description
        requirement_id = requirement.Id
        for attribute in requirement.Attributes:
            compliance_row = Check_Output_CSV_AWS_Well_Architected(
                Provider=finding.check_metadata.Provider,
                Description=compliance.Description,
                AccountId=audit_info.audited_account,
                Region=finding.region,
                AssessmentDate=outputs_unix_timestamp(
                    output_options.unix_timestamp, timestamp
                ),
                Requirements_Id=requirement_id,
                Requirements_Description=requirement_description,
                Requirements_Attributes_Name=attribute.Name,
                Requirements_Attributes_WellArchitectedQuestionId=attribute.WellArchitectedQuestionId,
                Requirements_Attributes_WellArchitectedPracticeId=attribute.WellArchitectedPracticeId,
                Requirements_Attributes_Section=attribute.Section,
                Requirements_Attributes_SubSection=attribute.SubSection,
                Requirements_Attributes_LevelOfRisk=attribute.LevelOfRisk,
                Requirements_Attributes_AssessmentMethod=attribute.AssessmentMethod,
                Requirements_Attributes_Description=attribute.Description,
                Requirements_Attributes_ImplementationGuidanceUrl=attribute.ImplementationGuidanceUrl,
                Status=finding.status,
                StatusExtended=finding.status_extended,
                ResourceId=finding.resource_id,
                CheckId=finding.check_metadata.CheckID,
            )

            csv_writer.writerow(compliance_row.__dict__)
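The writers in this PR derive each per-framework CSV file key from the compliance Framework, Version, and Provider. A minimal standalone sketch of that naming rule (the helper name `compliance_filename` is mine, not Prowler's):

```python
def compliance_filename(framework: str, version: str, provider: str) -> str:
    # Concatenate the non-empty parts, then normalize to a file-key:
    # lowercase with hyphens replaced by underscores.
    out = framework
    if version != "":
        out += "_" + version
    if provider != "":
        out += "_" + provider
    return out.lower().replace("-", "_")


# → "aws_well_architected_framework_security_pillar_aws"
print(
    compliance_filename(
        "AWS-Well-Architected-Framework-Security-Pillar", "", "AWS"
    )
)
```

This key is what indexes `file_descriptors` to pick the open CSV file for each framework.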
36  prowler/lib/outputs/compliance/cis.py  Normal file
@@ -0,0 +1,36 @@
from prowler.lib.outputs.compliance.cis_aws import generate_compliance_row_cis_aws
from prowler.lib.outputs.compliance.cis_gcp import generate_compliance_row_cis_gcp
from prowler.lib.outputs.csv import write_csv


def write_compliance_row_cis(
    file_descriptors,
    finding,
    compliance,
    output_options,
    audit_info,
    input_compliance_frameworks,
):
    compliance_output = "cis_" + compliance.Version + "_" + compliance.Provider.lower()

    # Only with the version of CIS that was selected
    if compliance_output in str(input_compliance_frameworks):
        for requirement in compliance.Requirements:
            for attribute in requirement.Attributes:
                if compliance.Provider == "AWS":
                    (compliance_row, csv_header) = generate_compliance_row_cis_aws(
                        finding,
                        compliance,
                        requirement,
                        attribute,
                        output_options,
                        audit_info,
                    )
                elif compliance.Provider == "GCP":
                    (compliance_row, csv_header) = generate_compliance_row_cis_gcp(
                        finding, compliance, requirement, attribute, output_options
                    )

                write_csv(
                    file_descriptors[compliance_output], csv_header, compliance_row
                )
34  prowler/lib/outputs/compliance/cis_aws.py  Normal file
@@ -0,0 +1,34 @@
from prowler.config.config import timestamp
from prowler.lib.outputs.models import Check_Output_CSV_AWS_CIS, generate_csv_fields
from prowler.lib.utils.utils import outputs_unix_timestamp


def generate_compliance_row_cis_aws(
    finding, compliance, requirement, attribute, output_options, audit_info
):
    compliance_row = Check_Output_CSV_AWS_CIS(
        Provider=finding.check_metadata.Provider,
        Description=compliance.Description,
        AccountId=audit_info.audited_account,
        Region=finding.region,
        AssessmentDate=outputs_unix_timestamp(output_options.unix_timestamp, timestamp),
        Requirements_Id=requirement.Id,
        Requirements_Description=requirement.Description,
        Requirements_Attributes_Section=attribute.Section,
        Requirements_Attributes_Profile=attribute.Profile,
        Requirements_Attributes_AssessmentStatus=attribute.AssessmentStatus,
        Requirements_Attributes_Description=attribute.Description,
        Requirements_Attributes_RationaleStatement=attribute.RationaleStatement,
        Requirements_Attributes_ImpactStatement=attribute.ImpactStatement,
        Requirements_Attributes_RemediationProcedure=attribute.RemediationProcedure,
        Requirements_Attributes_AuditProcedure=attribute.AuditProcedure,
        Requirements_Attributes_AdditionalInformation=attribute.AdditionalInformation,
        Requirements_Attributes_References=attribute.References,
        Status=finding.status,
        StatusExtended=finding.status_extended,
        ResourceId=finding.resource_id,
        CheckId=finding.check_metadata.CheckID,
    )
    csv_header = generate_csv_fields(Check_Output_CSV_AWS_CIS)

    return compliance_row, csv_header
35  prowler/lib/outputs/compliance/cis_gcp.py  Normal file
@@ -0,0 +1,35 @@
from prowler.config.config import timestamp
from prowler.lib.outputs.models import Check_Output_CSV_GCP_CIS, generate_csv_fields
from prowler.lib.utils.utils import outputs_unix_timestamp


def generate_compliance_row_cis_gcp(
    finding, compliance, requirement, attribute, output_options
):
    compliance_row = Check_Output_CSV_GCP_CIS(
        Provider=finding.check_metadata.Provider,
        Description=compliance.Description,
        ProjectId=finding.project_id,
        Location=finding.location.lower(),
        AssessmentDate=outputs_unix_timestamp(output_options.unix_timestamp, timestamp),
        Requirements_Id=requirement.Id,
        Requirements_Description=requirement.Description,
        Requirements_Attributes_Section=attribute.Section,
        Requirements_Attributes_Profile=attribute.Profile,
        Requirements_Attributes_AssessmentStatus=attribute.AssessmentStatus,
        Requirements_Attributes_Description=attribute.Description,
        Requirements_Attributes_RationaleStatement=attribute.RationaleStatement,
        Requirements_Attributes_ImpactStatement=attribute.ImpactStatement,
        Requirements_Attributes_RemediationProcedure=attribute.RemediationProcedure,
        Requirements_Attributes_AuditProcedure=attribute.AuditProcedure,
        Requirements_Attributes_AdditionalInformation=attribute.AdditionalInformation,
        Requirements_Attributes_References=attribute.References,
        Status=finding.status,
        StatusExtended=finding.status_extended,
        ResourceId=finding.resource_id,
        ResourceName=finding.resource_name,
        CheckId=finding.check_metadata.CheckID,
    )
    csv_header = generate_csv_fields(Check_Output_CSV_GCP_CIS)

    return compliance_row, csv_header
472  prowler/lib/outputs/compliance/compliance.py  Normal file
@@ -0,0 +1,472 @@
import sys

from colorama import Fore, Style
from tabulate import tabulate

from prowler.config.config import orange_color
from prowler.lib.check.models import Check_Report
from prowler.lib.logger import logger
from prowler.lib.outputs.compliance.aws_well_architected_framework import (
    write_compliance_row_aws_well_architected_framework,
)
from prowler.lib.outputs.compliance.cis import write_compliance_row_cis
from prowler.lib.outputs.compliance.ens_rd2022_aws import (
    write_compliance_row_ens_rd2022_aws,
)
from prowler.lib.outputs.compliance.generic import write_compliance_row_generic
from prowler.lib.outputs.compliance.iso27001_2013_aws import (
    write_compliance_row_iso27001_2013_aws,
)
from prowler.lib.outputs.compliance.mitre_attack_aws import (
    write_compliance_row_mitre_attack_aws,
)


def add_manual_controls(
    output_options, audit_info, file_descriptors, input_compliance_frameworks
):
    try:
        # Check if MANUAL control was already added to output
        if "manual_check" in output_options.bulk_checks_metadata:
            manual_finding = Check_Report(
                output_options.bulk_checks_metadata["manual_check"].json()
            )
            manual_finding.status = "MANUAL"
            manual_finding.status_extended = "Manual check"
            manual_finding.resource_id = "manual_check"
            manual_finding.resource_name = "Manual check"
            manual_finding.region = ""
            manual_finding.location = ""
            manual_finding.project_id = ""
            fill_compliance(
                output_options,
                manual_finding,
                audit_info,
                file_descriptors,
                input_compliance_frameworks,
            )
            del output_options.bulk_checks_metadata["manual_check"]
    except Exception as error:
        logger.error(
            f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
        )


def get_check_compliance_frameworks_in_input(
    check_id, bulk_checks_metadata, input_compliance_frameworks
):
    """get_check_compliance_frameworks_in_input returns a list of Compliance for the given check if the compliance framework is present in the input compliance to execute"""
    check_compliances = []
    if bulk_checks_metadata and bulk_checks_metadata[check_id]:
        for compliance in bulk_checks_metadata[check_id].Compliance:
            compliance_name = ""
            if compliance.Version:
                compliance_name = (
                    compliance.Framework.lower()
                    + "_"
                    + compliance.Version.lower()
                    + "_"
                    + compliance.Provider.lower()
                )
            else:
                compliance_name = (
                    compliance.Framework.lower() + "_" + compliance.Provider.lower()
                )
            if compliance_name.replace("-", "_") in input_compliance_frameworks:
                check_compliances.append(compliance)

    return check_compliances

def fill_compliance(
    output_options, finding, audit_info, file_descriptors, input_compliance_frameworks
):
    try:
        # We have to retrieve all the check's compliance requirements and get the ones matching with the input ones
        check_compliances = get_check_compliance_frameworks_in_input(
            finding.check_metadata.CheckID,
            output_options.bulk_checks_metadata,
            input_compliance_frameworks,
        )

        for compliance in check_compliances:
            if compliance.Framework == "ENS" and compliance.Version == "RD2022":
                write_compliance_row_ens_rd2022_aws(
                    file_descriptors, finding, compliance, output_options, audit_info
                )

            elif compliance.Framework == "CIS":
                write_compliance_row_cis(
                    file_descriptors,
                    finding,
                    compliance,
                    output_options,
                    audit_info,
                    input_compliance_frameworks,
                )

            elif (
                "AWS-Well-Architected-Framework" in compliance.Framework
                and compliance.Provider == "AWS"
            ):
                write_compliance_row_aws_well_architected_framework(
                    file_descriptors, finding, compliance, output_options, audit_info
                )

            elif (
                compliance.Framework == "ISO27001"
                and compliance.Version == "2013"
                and compliance.Provider == "AWS"
            ):
                write_compliance_row_iso27001_2013_aws(
                    file_descriptors, finding, compliance, output_options, audit_info
                )

            elif (
                compliance.Framework == "MITRE-ATTACK"
                and compliance.Version == ""
                and compliance.Provider == "AWS"
            ):
                write_compliance_row_mitre_attack_aws(
                    file_descriptors, finding, compliance, output_options, audit_info
                )

            else:
                write_compliance_row_generic(
                    file_descriptors, finding, compliance, output_options, audit_info
                )

    except Exception as error:
        logger.error(
            f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
        )

def display_compliance_table(
    findings: list,
    bulk_checks_metadata: dict,
    compliance_framework: str,
    output_filename: str,
    output_directory: str,
    compliance_overview: bool,
):
    try:
        if "ens_rd2022_aws" == compliance_framework:
            marcos = {}
            ens_compliance_table = {
                "Proveedor": [],
                "Marco/Categoria": [],
                "Estado": [],
                "Alto": [],
                "Medio": [],
                "Bajo": [],
                "Opcional": [],
            }
            pass_count = fail_count = 0
            for finding in findings:
                check = bulk_checks_metadata[finding.check_metadata.CheckID]
                check_compliances = check.Compliance
                for compliance in check_compliances:
                    if (
                        compliance.Framework == "ENS"
                        and compliance.Provider == "AWS"
                        and compliance.Version == "RD2022"
                    ):
                        for requirement in compliance.Requirements:
                            for attribute in requirement.Attributes:
                                marco_categoria = (
                                    f"{attribute.Marco}/{attribute.Categoria}"
                                )
                                # Check if Marco/Categoria exists
                                if marco_categoria not in marcos:
                                    marcos[marco_categoria] = {
                                        "Estado": f"{Fore.GREEN}CUMPLE{Style.RESET_ALL}",
                                        "Opcional": 0,
                                        "Alto": 0,
                                        "Medio": 0,
                                        "Bajo": 0,
                                    }
                                if finding.status == "FAIL":
                                    fail_count += 1
                                    marcos[marco_categoria][
                                        "Estado"
                                    ] = f"{Fore.RED}NO CUMPLE{Style.RESET_ALL}"
                                elif finding.status == "PASS":
                                    pass_count += 1
                                if attribute.Nivel == "opcional":
                                    marcos[marco_categoria]["Opcional"] += 1
                                elif attribute.Nivel == "alto":
                                    marcos[marco_categoria]["Alto"] += 1
                                elif attribute.Nivel == "medio":
                                    marcos[marco_categoria]["Medio"] += 1
                                elif attribute.Nivel == "bajo":
                                    marcos[marco_categoria]["Bajo"] += 1

            # Add results to table
            for marco in sorted(marcos):
                ens_compliance_table["Proveedor"].append(compliance.Provider)
                ens_compliance_table["Marco/Categoria"].append(marco)
                ens_compliance_table["Estado"].append(marcos[marco]["Estado"])
                ens_compliance_table["Opcional"].append(
                    f"{Fore.BLUE}{marcos[marco]['Opcional']}{Style.RESET_ALL}"
                )
                ens_compliance_table["Alto"].append(
                    f"{Fore.LIGHTRED_EX}{marcos[marco]['Alto']}{Style.RESET_ALL}"
                )
                ens_compliance_table["Medio"].append(
                    f"{orange_color}{marcos[marco]['Medio']}{Style.RESET_ALL}"
                )
                ens_compliance_table["Bajo"].append(
                    f"{Fore.YELLOW}{marcos[marco]['Bajo']}{Style.RESET_ALL}"
                )
            if fail_count + pass_count < 1:
                print(
                    f"\nThere are no resources for {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}.\n"
                )
            else:
                print(
                    f"\nEstado de Cumplimiento de {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}:"
                )
                overview_table = [
                    [
                        f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) NO CUMPLE{Style.RESET_ALL}",
                        f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) CUMPLE{Style.RESET_ALL}",
                    ]
                ]
                print(tabulate(overview_table, tablefmt="rounded_grid"))
                if not compliance_overview:
                    print(
                        f"\nResultados de {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}:"
                    )
                    print(
                        tabulate(
                            ens_compliance_table,
                            headers="keys",
                            tablefmt="rounded_grid",
                        )
                    )
                    print(
                        f"{Style.BRIGHT}* Solo aparece el Marco/Categoria que contiene resultados.{Style.RESET_ALL}"
                    )
                    print(
                        f"\nResultados detallados de {compliance_framework.upper()} en:"
                    )
                    print(
                        f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework}.csv\n"
                    )
        elif "cis_" in compliance_framework:
            sections = {}
            cis_compliance_table = {
                "Provider": [],
                "Section": [],
                "Level 1": [],
                "Level 2": [],
            }
            pass_count = fail_count = 0
            for finding in findings:
                check = bulk_checks_metadata[finding.check_metadata.CheckID]
                check_compliances = check.Compliance
                for compliance in check_compliances:
                    if (
                        compliance.Framework == "CIS"
                        and compliance.Version in compliance_framework
                    ):
                        for requirement in compliance.Requirements:
                            for attribute in requirement.Attributes:
                                section = attribute.Section
                                # Check if Section exists
                                if section not in sections:
                                    sections[section] = {
                                        "Status": f"{Fore.GREEN}PASS{Style.RESET_ALL}",
                                        "Level 1": {"FAIL": 0, "PASS": 0},
                                        "Level 2": {"FAIL": 0, "PASS": 0},
                                    }
                                if finding.status == "FAIL":
                                    fail_count += 1
                                elif finding.status == "PASS":
                                    pass_count += 1
                                if attribute.Profile == "Level 1":
                                    if finding.status == "FAIL":
                                        sections[section]["Level 1"]["FAIL"] += 1
                                    else:
                                        sections[section]["Level 1"]["PASS"] += 1
                                elif attribute.Profile == "Level 2":
                                    if finding.status == "FAIL":
                                        sections[section]["Level 2"]["FAIL"] += 1
                                    else:
                                        sections[section]["Level 2"]["PASS"] += 1

            # Add results to table
            sections = dict(sorted(sections.items()))
            for section in sections:
                cis_compliance_table["Provider"].append(compliance.Provider)
                cis_compliance_table["Section"].append(section)
                if sections[section]["Level 1"]["FAIL"] > 0:
                    cis_compliance_table["Level 1"].append(
                        f"{Fore.RED}FAIL({sections[section]['Level 1']['FAIL']}){Style.RESET_ALL}"
                    )
                else:
                    cis_compliance_table["Level 1"].append(
                        f"{Fore.GREEN}PASS({sections[section]['Level 1']['PASS']}){Style.RESET_ALL}"
                    )
                if sections[section]["Level 2"]["FAIL"] > 0:
                    cis_compliance_table["Level 2"].append(
                        f"{Fore.RED}FAIL({sections[section]['Level 2']['FAIL']}){Style.RESET_ALL}"
                    )
                else:
                    cis_compliance_table["Level 2"].append(
                        f"{Fore.GREEN}PASS({sections[section]['Level 2']['PASS']}){Style.RESET_ALL}"
                    )
            if fail_count + pass_count < 1:
                print(
                    f"\nThere are no resources for {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}.\n"
                )
            else:
                print(
                    f"\nCompliance Status of {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Framework:"
                )
                overview_table = [
                    [
                        f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
                        f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
                    ]
                ]
                print(tabulate(overview_table, tablefmt="rounded_grid"))
                if not compliance_overview:
                    print(
                        f"\nFramework {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Results:"
                    )
                    print(
                        tabulate(
                            cis_compliance_table,
                            headers="keys",
                            tablefmt="rounded_grid",
                        )
                    )
                    print(
                        f"{Style.BRIGHT}* Only sections containing results appear.{Style.RESET_ALL}"
                    )
                    print(
                        f"\nDetailed results of {compliance_framework.upper()} are in:"
                    )
                    print(
                        f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework}.csv\n"
                    )
        elif "mitre_attack" in compliance_framework:
            tactics = {}
            mitre_compliance_table = {
                "Provider": [],
                "Tactic": [],
                "Status": [],
            }
            pass_count = fail_count = 0
            for finding in findings:
                check = bulk_checks_metadata[finding.check_metadata.CheckID]
                check_compliances = check.Compliance
                for compliance in check_compliances:
                    if (
                        "MITRE-ATTACK" in compliance.Framework
                        and compliance.Version in compliance_framework
                    ):
                        for requirement in compliance.Requirements:
                            for tactic in requirement.Tactics:
                                if tactic not in tactics:
                                    tactics[tactic] = {"FAIL": 0, "PASS": 0}
                                if finding.status == "FAIL":
                                    fail_count += 1
                                    tactics[tactic]["FAIL"] += 1
                                elif finding.status == "PASS":
                                    pass_count += 1
                                    tactics[tactic]["PASS"] += 1

            # Add results to table
            tactics = dict(sorted(tactics.items()))
            for tactic in tactics:
                mitre_compliance_table["Provider"].append(compliance.Provider)
                mitre_compliance_table["Tactic"].append(tactic)
                if tactics[tactic]["FAIL"] > 0:
                    mitre_compliance_table["Status"].append(
                        f"{Fore.RED}FAIL({tactics[tactic]['FAIL']}){Style.RESET_ALL}"
                    )
                else:
                    mitre_compliance_table["Status"].append(
                        f"{Fore.GREEN}PASS({tactics[tactic]['PASS']}){Style.RESET_ALL}"
                    )
            if fail_count + pass_count < 1:
                print(
                    f"\nThere are no resources for {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}.\n"
                )
            else:
                print(
                    f"\nCompliance Status of {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Framework:"
                )
                overview_table = [
                    [
                        f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
                        f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
                    ]
                ]
                print(tabulate(overview_table, tablefmt="rounded_grid"))
                if not compliance_overview:
                    print(
                        f"\nFramework {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Results:"
                    )
                    print(
                        tabulate(
                            mitre_compliance_table,
                            headers="keys",
                            tablefmt="rounded_grid",
                        )
                    )
                    print(
                        f"{Style.BRIGHT}* Only sections containing results appear.{Style.RESET_ALL}"
                    )
                    print(
                        f"\nDetailed results of {compliance_framework.upper()} are in:"
                    )
                    print(
                        f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework}.csv\n"
                    )
        else:
            pass_count = fail_count = 0
            for finding in findings:
                check = bulk_checks_metadata[finding.check_metadata.CheckID]
                check_compliances = check.Compliance
                for compliance in check_compliances:
                    if (
                        compliance.Framework.upper()
                        in compliance_framework.upper().replace("_", "-")
                        and compliance.Version in compliance_framework.upper()
                        and compliance.Provider in compliance_framework.upper()
                    ):
                        for requirement in compliance.Requirements:
                            for attribute in requirement.Attributes:
                                if finding.status == "FAIL":
                                    fail_count += 1
                                elif finding.status == "PASS":
                                    pass_count += 1
            if fail_count + pass_count < 1:
                print(
                    f"\nThere are no resources for {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}.\n"
                )
            else:
                print(
                    f"\nCompliance Status of {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Framework:"
                )
                overview_table = [
                    [
                        f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
                        f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
                    ]
                ]
                print(tabulate(overview_table, tablefmt="rounded_grid"))
                if not compliance_overview:
                    print(f"\nDetailed results of {compliance_framework.upper()} are in:")
                    print(
                        f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework}.csv\n"
                    )
    except Exception as error:
        logger.critical(
            f"{error.__class__.__name__}:{error.__traceback__.tb_lineno} -- {error}"
        )
        sys.exit(1)
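The CIS branch of `display_compliance_table` above tallies findings per CIS section, split by Level 1/Level 2 profile, and shows FAIL for a level as soon as one check in it fails. A standalone sketch of that tallying (the `(section, profile, status)` triples are hypothetical sample data standing in for Prowler's finding/attribute objects):

```python
# Hypothetical flattened findings: (section, profile, status).
findings = [
    ("1. IAM", "Level 1", "FAIL"),
    ("1. IAM", "Level 1", "PASS"),
    ("2. Logging", "Level 2", "PASS"),
]

sections = {}
for section, profile, status in findings:
    # Lazily create the per-section counters, as the real code does.
    if section not in sections:
        sections[section] = {
            "Level 1": {"FAIL": 0, "PASS": 0},
            "Level 2": {"FAIL": 0, "PASS": 0},
        }
    sections[section][profile][status] += 1

# A level is labeled FAIL if it has any failing finding, PASS otherwise.
level1_label = "FAIL" if sections["1. IAM"]["Level 1"]["FAIL"] > 0 else "PASS"
```

This mirrors why a section with 99 passes and 1 failure still renders as `FAIL(1)` in the table.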
45  prowler/lib/outputs/compliance/ens_rd2022_aws.py  Normal file
@@ -0,0 +1,45 @@
from csv import DictWriter

from prowler.config.config import timestamp
from prowler.lib.outputs.models import Check_Output_CSV_ENS_RD2022, generate_csv_fields
from prowler.lib.utils.utils import outputs_unix_timestamp


def write_compliance_row_ens_rd2022_aws(
    file_descriptors, finding, compliance, output_options, audit_info
):
    compliance_output = "ens_rd2022_aws"
    csv_header = generate_csv_fields(Check_Output_CSV_ENS_RD2022)
    csv_writer = DictWriter(
        file_descriptors[compliance_output],
        fieldnames=csv_header,
        delimiter=";",
    )
    for requirement in compliance.Requirements:
        requirement_description = requirement.Description
        requirement_id = requirement.Id
        for attribute in requirement.Attributes:
            compliance_row = Check_Output_CSV_ENS_RD2022(
                Provider=finding.check_metadata.Provider,
                Description=compliance.Description,
                AccountId=audit_info.audited_account,
                Region=finding.region,
                AssessmentDate=outputs_unix_timestamp(
                    output_options.unix_timestamp, timestamp
                ),
                Requirements_Id=requirement_id,
                Requirements_Description=requirement_description,
                Requirements_Attributes_IdGrupoControl=attribute.IdGrupoControl,
                Requirements_Attributes_Marco=attribute.Marco,
                Requirements_Attributes_Categoria=attribute.Categoria,
                Requirements_Attributes_DescripcionControl=attribute.DescripcionControl,
                Requirements_Attributes_Nivel=attribute.Nivel,
                Requirements_Attributes_Tipo=attribute.Tipo,
                Requirements_Attributes_Dimensiones=",".join(attribute.Dimensiones),
                Status=finding.status,
                StatusExtended=finding.status_extended,
                ResourceId=finding.resource_id,
                CheckId=finding.check_metadata.CheckID,
            )

            csv_writer.writerow(compliance_row.__dict__)
51  prowler/lib/outputs/compliance/generic.py  Normal file
@@ -0,0 +1,51 @@
from csv import DictWriter

from prowler.config.config import timestamp
from prowler.lib.outputs.models import (
    Check_Output_CSV_Generic_Compliance,
    generate_csv_fields,
)
from prowler.lib.utils.utils import outputs_unix_timestamp


def write_compliance_row_generic(
    file_descriptors, finding, compliance, output_options, audit_info
):
    compliance_output = compliance.Framework
    if compliance.Version != "":
        compliance_output += "_" + compliance.Version
    if compliance.Provider != "":
        compliance_output += "_" + compliance.Provider

    compliance_output = compliance_output.lower().replace("-", "_")
    csv_header = generate_csv_fields(Check_Output_CSV_Generic_Compliance)
    csv_writer = DictWriter(
        file_descriptors[compliance_output],
        fieldnames=csv_header,
        delimiter=";",
    )
    for requirement in compliance.Requirements:
        requirement_description = requirement.Description
        requirement_id = requirement.Id
        for attribute in requirement.Attributes:
            compliance_row = Check_Output_CSV_Generic_Compliance(
                Provider=finding.check_metadata.Provider,
                Description=compliance.Description,
                AccountId=audit_info.audited_account,
                Region=finding.region,
                AssessmentDate=outputs_unix_timestamp(
                    output_options.unix_timestamp, timestamp
                ),
                Requirements_Id=requirement_id,
                Requirements_Description=requirement_description,
                Requirements_Attributes_Section=attribute.Section,
                Requirements_Attributes_SubSection=attribute.SubSection,
                Requirements_Attributes_SubGroup=attribute.SubGroup,
                Requirements_Attributes_Service=attribute.Service,
                Requirements_Attributes_Soc_Type=attribute.Soc_Type,
                Status=finding.status,
                StatusExtended=finding.status_extended,
                ResourceId=finding.resource_id,
                CheckId=finding.check_metadata.CheckID,
            )
            csv_writer.writerow(compliance_row.__dict__)
53  prowler/lib/outputs/compliance/iso27001_2013_aws.py  Normal file
@@ -0,0 +1,53 @@
from csv import DictWriter

from prowler.config.config import timestamp
from prowler.lib.outputs.models import (
    Check_Output_CSV_AWS_ISO27001_2013,
    generate_csv_fields,
)
from prowler.lib.utils.utils import outputs_unix_timestamp


def write_compliance_row_iso27001_2013_aws(
    file_descriptors, finding, compliance, output_options, audit_info
):
    compliance_output = compliance.Framework
    if compliance.Version != "":
        compliance_output += "_" + compliance.Version
    if compliance.Provider != "":
        compliance_output += "_" + compliance.Provider

    compliance_output = compliance_output.lower().replace("-", "_")
    csv_header = generate_csv_fields(Check_Output_CSV_AWS_ISO27001_2013)
    csv_writer = DictWriter(
        file_descriptors[compliance_output],
        fieldnames=csv_header,
        delimiter=";",
    )
    for requirement in compliance.Requirements:
        requirement_description = requirement.Description
        requirement_id = requirement.Id
        requirement_name = requirement.Name
        for attribute in requirement.Attributes:
            compliance_row = Check_Output_CSV_AWS_ISO27001_2013(
                Provider=finding.check_metadata.Provider,
                Description=compliance.Description,
                AccountId=audit_info.audited_account,
                Region=finding.region,
                AssessmentDate=outputs_unix_timestamp(
                    output_options.unix_timestamp, timestamp
                ),
                Requirements_Id=requirement_id,
                Requirements_Name=requirement_name,
                Requirements_Description=requirement_description,
                Requirements_Attributes_Category=attribute.Category,
                Requirements_Attributes_Objetive_ID=attribute.Objetive_ID,
                Requirements_Attributes_Objetive_Name=attribute.Objetive_Name,
                Requirements_Attributes_Check_Summary=attribute.Check_Summary,
                Status=finding.status,
                StatusExtended=finding.status_extended,
                ResourceId=finding.resource_id,
                CheckId=finding.check_metadata.CheckID,
            )

            csv_writer.writerow(compliance_row.__dict__)
66  prowler/lib/outputs/compliance/mitre_attack_aws.py  Normal file
@@ -0,0 +1,66 @@
from csv import DictWriter

from prowler.config.config import timestamp
from prowler.lib.outputs.models import (
    Check_Output_MITRE_ATTACK,
    generate_csv_fields,
    unroll_list,
)
from prowler.lib.utils.utils import outputs_unix_timestamp


def write_compliance_row_mitre_attack_aws(
    file_descriptors, finding, compliance, output_options, audit_info
):
    compliance_output = compliance.Framework
    if compliance.Version != "":
        compliance_output += "_" + compliance.Version
    if compliance.Provider != "":
        compliance_output += "_" + compliance.Provider

    compliance_output = compliance_output.lower().replace("-", "_")
    csv_header = generate_csv_fields(Check_Output_MITRE_ATTACK)
    csv_writer = DictWriter(
        file_descriptors[compliance_output],
        fieldnames=csv_header,
        delimiter=";",
    )
    for requirement in compliance.Requirements:
        requirement_description = requirement.Description
        requirement_id = requirement.Id
        requirement_name = requirement.Name
        attributes_aws_services = ""
        attributes_categories = ""
        attributes_values = ""
        attributes_comments = ""
        for attribute in requirement.Attributes:
            attributes_aws_services += attribute.AWSService + "\n"
            attributes_categories += attribute.Category + "\n"
            attributes_values += attribute.Value + "\n"
            attributes_comments += attribute.Comment + "\n"
        compliance_row = Check_Output_MITRE_ATTACK(
            Provider=finding.check_metadata.Provider,
            Description=compliance.Description,
            AccountId=audit_info.audited_account,
            Region=finding.region,
            AssessmentDate=outputs_unix_timestamp(
                output_options.unix_timestamp, timestamp
            ),
            Requirements_Id=requirement_id,
            Requirements_Description=requirement_description,
            Requirements_Name=requirement_name,
            Requirements_Tactics=unroll_list(requirement.Tactics),
            Requirements_SubTechniques=unroll_list(requirement.SubTechniques),
            Requirements_Platforms=unroll_list(requirement.Platforms),
            Requirements_TechniqueURL=requirement.TechniqueURL,
            Requirements_Attributes_AWSServices=attributes_aws_services,
            Requirements_Attributes_Categories=attributes_categories,
            Requirements_Attributes_Values=attributes_values,
            Requirements_Attributes_Comments=attributes_comments,
            Status=finding.status,
            StatusExtended=finding.status_extended,
            ResourceId=finding.resource_id,
            CheckId=finding.check_metadata.CheckID,
        )

        csv_writer.writerow(compliance_row.__dict__)
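The new MITRE ATT&CK writer derives the file-descriptor key by concatenating the framework metadata and normalizing the result. A minimal sketch of that normalization, pulled out into a standalone helper for illustration (the helper name is hypothetical, the logic mirrors the function above):

```python
def build_compliance_output_name(framework, version, provider):
    # Mirrors the key-building logic in write_compliance_row_mitre_attack_aws:
    # join non-empty parts with "_", lowercase, and replace "-" with "_".
    compliance_output = framework
    if version != "":
        compliance_output += "_" + version
    if provider != "":
        compliance_output += "_" + provider
    return compliance_output.lower().replace("-", "_")


print(build_compliance_output_name("MITRE-ATTACK", "", "aws"))  # mitre_attack_aws
print(build_compliance_output_name("CIS", "1.5", "aws"))  # cis_1.5_aws
```

The normalized name must match the key used when the file descriptors were registered, which is why both sides apply the same transformation.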
10  prowler/lib/outputs/csv.py  Normal file
@@ -0,0 +1,10 @@
from csv import DictWriter


def write_csv(file_descriptor, headers, row):
    csv_writer = DictWriter(
        file_descriptor,
        fieldnames=headers,
        delimiter=";",
    )
    csv_writer.writerow(row.__dict__)
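The new `write_csv` helper serializes any object through its `__dict__`, so any row object whose attribute names match the headers works. A small self-contained usage sketch (the `Row` class is hypothetical, standing in for a prowler output model):

```python
from csv import DictWriter
from io import StringIO


class Row:
    # Hypothetical row object; any object with a matching __dict__ works.
    def __init__(self, check_id, status):
        self.check_id = check_id
        self.status = status


def write_csv(file_descriptor, headers, row):
    # Same logic as the new prowler/lib/outputs/csv.py helper.
    csv_writer = DictWriter(file_descriptor, fieldnames=headers, delimiter=";")
    csv_writer.writerow(row.__dict__)


buffer = StringIO()
write_csv(buffer, ["check_id", "status"], Row("iam_root_mfa", "FAIL"))
print(buffer.getvalue().strip())  # iam_root_mfa;FAIL
```

Note the `;` delimiter matches the one used by the per-framework compliance writers, so all CSV outputs stay consistent.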
@@ -23,6 +23,7 @@ from prowler.lib.outputs.models import (
 )
 from prowler.lib.utils.utils import file_exists, open_file
 from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
 from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
+from prowler.providers.common.outputs import get_provider_output_model
 from prowler.providers.gcp.lib.audit_info.models import GCP_Audit_Info

@@ -108,7 +109,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit

            elif isinstance(audit_info, GCP_Audit_Info):
                if output_mode == "cis_2.0_gcp":
-                   filename = f"{output_directory}/{output_filename}_cis_2.0_gcp{csv_file_suffix}"
+                   filename = f"{output_directory}/compliance/{output_filename}_cis_2.0_gcp{csv_file_suffix}"
                    file_descriptor = initialize_file_descriptor(
                        filename, output_mode, audit_info, Check_Output_CSV_GCP_CIS
                    )
@@ -123,7 +124,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
                    file_descriptors.update({output_mode: file_descriptor})

                elif output_mode == "ens_rd2022_aws":
-                   filename = f"{output_directory}/{output_filename}_ens_rd2022_aws{csv_file_suffix}"
+                   filename = f"{output_directory}/compliance/{output_filename}_ens_rd2022_aws{csv_file_suffix}"
                    file_descriptor = initialize_file_descriptor(
                        filename,
                        output_mode,
@@ -133,14 +134,14 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
                    file_descriptors.update({output_mode: file_descriptor})

                elif output_mode == "cis_1.5_aws":
-                   filename = f"{output_directory}/{output_filename}_cis_1.5_aws{csv_file_suffix}"
+                   filename = f"{output_directory}/compliance/{output_filename}_cis_1.5_aws{csv_file_suffix}"
                    file_descriptor = initialize_file_descriptor(
                        filename, output_mode, audit_info, Check_Output_CSV_AWS_CIS
                    )
                    file_descriptors.update({output_mode: file_descriptor})

                elif output_mode == "cis_1.4_aws":
-                   filename = f"{output_directory}/{output_filename}_cis_1.4_aws{csv_file_suffix}"
+                   filename = f"{output_directory}/compliance/{output_filename}_cis_1.4_aws{csv_file_suffix}"
                    file_descriptor = initialize_file_descriptor(
                        filename, output_mode, audit_info, Check_Output_CSV_AWS_CIS
                    )
@@ -150,7 +151,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
                    output_mode
                    == "aws_well_architected_framework_security_pillar_aws"
                ):
-                   filename = f"{output_directory}/{output_filename}_aws_well_architected_framework_security_pillar_aws{csv_file_suffix}"
+                   filename = f"{output_directory}/compliance/{output_filename}_aws_well_architected_framework_security_pillar_aws{csv_file_suffix}"
                    file_descriptor = initialize_file_descriptor(
                        filename,
                        output_mode,
@@ -163,7 +164,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
                    output_mode
                    == "aws_well_architected_framework_reliability_pillar_aws"
                ):
-                   filename = f"{output_directory}/{output_filename}_aws_well_architected_framework_reliability_pillar_aws{csv_file_suffix}"
+                   filename = f"{output_directory}/compliance/{output_filename}_aws_well_architected_framework_reliability_pillar_aws{csv_file_suffix}"
                    file_descriptor = initialize_file_descriptor(
                        filename,
                        output_mode,
@@ -173,7 +174,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
                    file_descriptors.update({output_mode: file_descriptor})

                elif output_mode == "iso27001_2013_aws":
-                   filename = f"{output_directory}/{output_filename}_iso27001_2013_aws{csv_file_suffix}"
+                   filename = f"{output_directory}/compliance/{output_filename}_iso27001_2013_aws{csv_file_suffix}"
                    file_descriptor = initialize_file_descriptor(
                        filename,
                        output_mode,
@@ -183,7 +184,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
                    file_descriptors.update({output_mode: file_descriptor})

                elif output_mode == "mitre_attack_aws":
-                   filename = f"{output_directory}/{output_filename}_mitre_attack_aws{csv_file_suffix}"
+                   filename = f"{output_directory}/compliance/{output_filename}_mitre_attack_aws{csv_file_suffix}"
                    file_descriptor = initialize_file_descriptor(
                        filename,
                        output_mode,
@@ -194,14 +195,26 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit

                else:
                    # Generic Compliance framework
-                   filename = f"{output_directory}/{output_filename}_{output_mode}{csv_file_suffix}"
-                   file_descriptor = initialize_file_descriptor(
-                       filename,
-                       output_mode,
-                       audit_info,
-                       Check_Output_CSV_Generic_Compliance,
-                   )
-                   file_descriptors.update({output_mode: file_descriptor})
+                   if (
+                       isinstance(audit_info, AWS_Audit_Info)
+                       and "aws" in output_mode
+                       or (
+                           isinstance(audit_info, Azure_Audit_Info)
+                           and "azure" in output_mode
+                       )
+                       or (
+                           isinstance(audit_info, GCP_Audit_Info)
+                           and "gcp" in output_mode
+                       )
+                   ):
+                       filename = f"{output_directory}/compliance/{output_filename}_{output_mode}{csv_file_suffix}"
+                       file_descriptor = initialize_file_descriptor(
+                           filename,
+                           output_mode,
+                           audit_info,
+                           Check_Output_CSV_Generic_Compliance,
+                       )
+                       file_descriptors.update({output_mode: file_descriptor})

    except Exception as error:
        logger.error(
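Every hunk in `fill_file_descriptors` makes the same change: compliance CSVs now land in a dedicated `compliance/` subdirectory of the output directory. A sketch of the new path construction (the helper name is hypothetical; the real code builds the f-string inline per framework):

```python
def build_compliance_filename(output_directory, output_filename, output_mode, suffix=".csv"):
    # Post-change layout from this diff: compliance outputs are written
    # under "<output_directory>/compliance/" instead of the top level.
    return f"{output_directory}/compliance/{output_filename}_{output_mode}{suffix}"


print(build_compliance_filename("/tmp/out", "prowler-output", "cis_1.5_aws"))
# /tmp/out/compliance/prowler-output_cis_1.5_aws.csv
```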
@@ -21,6 +21,7 @@ from prowler.lib.utils.utils import open_file
 from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
 from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
 from prowler.providers.gcp.lib.audit_info.models import GCP_Audit_Info
+from prowler.providers.kubernetes.lib.audit_info.models import Kubernetes_Audit_Info


 def add_html_header(file_descriptor, audit_info):
@@ -169,11 +170,11 @@ def add_html_header(file_descriptor, audit_info):
 def fill_html(file_descriptor, finding, output_options):
     try:
         row_class = "p-3 mb-2 bg-success-custom"
-        if finding.status == "INFO":
+        if finding.status == "MANUAL":
             row_class = "table-info"
         elif finding.status == "FAIL":
             row_class = "table-danger"
-        elif finding.status == "WARNING":
+        elif finding.status == "MUTED":
             row_class = "table-warning"
         file_descriptor.write(
             f"""
@@ -522,6 +523,53 @@ def get_gcp_html_assessment_summary(audit_info):
         sys.exit(1)


+def get_kubernetes_html_assessment_summary(audit_info):
+    try:
+        if isinstance(audit_info, Kubernetes_Audit_Info):
+            return (
+                """
+            <div class="col-md-2">
+                <div class="card">
+                    <div class="card-header">
+                        Kubernetes Assessment Summary
+                    </div>
+                    <ul class="list-group list-group-flush">
+                        <li class="list-group-item">
+                            <b>Kubernetes Context:</b> """
+                + audit_info.context["name"]
+                + """
+                        </li>
+                    </ul>
+                </div>
+            </div>
+            <div class="col-md-4">
+                <div class="card">
+                    <div class="card-header">
+                        Kubernetes Credentials
+                    </div>
+                    <ul class="list-group list-group-flush">
+                        <li class="list-group-item">
+                            <b>Kubernetes Cluster:</b> """
+                + audit_info.context["context"]["cluster"]
+                + """
+                        </li>
+                        <li class="list-group-item">
+                            <b>Kubernetes User:</b> """
+                + audit_info.context["context"]["user"]
+                + """
+                        </li>
+                    </ul>
+                </div>
+            </div>
+            """
+            )
+    except Exception as error:
+        logger.critical(
+            f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
+        )
+        sys.exit(1)


 def get_assessment_summary(audit_info):
     """
     get_assessment_summary gets the HTML assessment summary for the provider
@@ -532,6 +580,7 @@ def get_assessment_summary(audit_info):
     # AWS_Audit_Info --> aws
     # GCP_Audit_Info --> gcp
     # Azure_Audit_Info --> azure
+    # Kubernetes_Audit_Info --> kubernetes
     provider = audit_info.__class__.__name__.split("_")[0].lower()

     # Dynamically get the Provider quick inventory handler
@@ -116,8 +116,8 @@ def generate_json_asff_status(status: str) -> str:
         json_asff_status = "PASSED"
     elif status == "FAIL":
         json_asff_status = "FAILED"
-    elif status == "WARNING":
-        json_asff_status = "WARNING"
+    elif status == "MUTED":
+        json_asff_status = "MUTED"
     else:
         json_asff_status = "NOT_AVAILABLE"

@@ -293,7 +293,7 @@ def generate_json_ocsf_status(status: str):
         json_ocsf_status = "Success"
     elif status == "FAIL":
         json_ocsf_status = "Failure"
-    elif status == "WARNING":
+    elif status == "MUTED":
         json_ocsf_status = "Other"
     else:
         json_ocsf_status = "Unknown"

@@ -307,7 +307,7 @@ def generate_json_ocsf_status_id(status: str):
         json_ocsf_status_id = 1
     elif status == "FAIL":
         json_ocsf_status_id = 2
-    elif status == "WARNING":
+    elif status == "MUTED":
         json_ocsf_status_id = 99
     else:
         json_ocsf_status_id = 0
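These three hunks rename the `WARNING` status to `MUTED` in the ASFF and OCSF mappings. A condensed sketch of the post-change OCSF mappings, with the old `WARNING` value now falling through to the unknown branch:

```python
def generate_json_ocsf_status(status):
    # Post-change mapping from this diff: WARNING has been renamed to MUTED,
    # so an old WARNING value is no longer recognized.
    if status == "PASS":
        return "Success"
    elif status == "FAIL":
        return "Failure"
    elif status == "MUTED":
        return "Other"
    return "Unknown"


def generate_json_ocsf_status_id(status):
    # Numeric OCSF status_id companion mapping.
    return {"PASS": 1, "FAIL": 2, "MUTED": 99}.get(status, 0)


print(generate_json_ocsf_status("MUTED"), generate_json_ocsf_status_id("MUTED"))  # Other 99
```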
@@ -10,10 +10,19 @@ from prowler.config.config import prowler_version, timestamp
 from prowler.lib.check.models import Remediation
 from prowler.lib.logger import logger
 from prowler.lib.utils.utils import outputs_unix_timestamp
-from prowler.providers.aws.lib.audit_info.models import AWS_Organizations_Info
+from prowler.providers.aws.lib.audit_info.models import AWSOrganizationsInfo


-def get_check_compliance(finding, provider, output_options):
+def get_check_compliance(finding, provider, output_options) -> dict:
+    """get_check_compliance returns a map with the compliance framework as key and the requirements where the finding's check is present.
+
+    Example:
+
+    {
+        "CIS-1.4": ["2.1.3"],
+        "CIS-1.5": ["2.1.3"],
+    }
+    """
     try:
         check_compliance = {}
         # We have to retrieve all the check's compliance requirements
@@ -76,6 +85,18 @@ def generate_provider_output_csv(
             )
             finding_output = output_model(**data)

+        if provider == "kubernetes":
+            data["resource_id"] = finding.resource_id
+            data["resource_name"] = finding.resource_name
+            data["namespace"] = finding.namespace
+            data[
+                "finding_unique_id"
+            ] = f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.namespace}-{finding.resource_id}"
+            data["compliance"] = unroll_dict(
+                get_check_compliance(finding, provider, output_options)
+            )
+            finding_output = output_model(**data)

         if provider == "aws":
             data["profile"] = audit_info.profile
             data["account_id"] = audit_info.audited_account
@@ -348,6 +369,16 @@ class Gcp_Check_Output_CSV(Check_Output_CSV):
     resource_name: str = ""


+class Kubernetes_Check_Output_CSV(Check_Output_CSV):
+    """
+    Kubernetes_Check_Output_CSV generates a finding's output in CSV format for the Kubernetes provider.
+    """
+
+    namespace: str = ""
+    resource_id: str = ""
+    resource_name: str = ""


 def generate_provider_output_json(
     provider: str, finding, audit_info, mode: str, output_options
 ):
@@ -452,7 +483,7 @@ class Aws_Check_Output_JSON(Check_Output_JSON):

     Profile: str = ""
     AccountId: str = ""
-    OrganizationsInfo: Optional[AWS_Organizations_Info]
+    OrganizationsInfo: Optional[AWSOrganizationsInfo]
     Region: str = ""
     ResourceId: str = ""
     ResourceArn: str = ""
@@ -478,7 +509,7 @@ class Azure_Check_Output_JSON(Check_Output_JSON):

 class Gcp_Check_Output_JSON(Check_Output_JSON):
     """
-    Gcp_Check_Output_JSON generates a finding's output in JSON format for the AWS provider.
+    Gcp_Check_Output_JSON generates a finding's output in JSON format for the GCP provider.
     """

     ProjectId: str = ""
@@ -490,6 +521,19 @@ class Gcp_Check_Output_JSON(Check_Output_JSON):
         super().__init__(**metadata)


+class Kubernetes_Check_Output_JSON(Check_Output_JSON):
+    """
+    Kubernetes_Check_Output_JSON generates a finding's output in JSON format for the Kubernetes provider.
+    """
+
+    ResourceId: str = ""
+    ResourceName: str = ""
+    Namespace: str = ""
+
+    def __init__(self, **metadata):
+        super().__init__(**metadata)


 class Check_Output_MITRE_ATTACK(BaseModel):
     """
     Check_Output_MITRE_ATTACK generates a finding's output in CSV MITRE ATTACK format.
@@ -4,7 +4,10 @@ from colorama import Fore, Style

 from prowler.config.config import available_compliance_frameworks, orange_color
 from prowler.lib.logger import logger
-from prowler.lib.outputs.compliance import add_manual_controls, fill_compliance
+from prowler.lib.outputs.compliance.compliance import (
+    add_manual_controls,
+    fill_compliance,
+)
 from prowler.lib.outputs.file_descriptors import fill_file_descriptors
 from prowler.lib.outputs.html import fill_html
 from prowler.lib.outputs.json import fill_json_asff, fill_json_ocsf
@@ -17,15 +20,17 @@ from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
 from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info


-def stdout_report(finding, color, verbose, is_quiet):
+def stdout_report(finding, color, verbose, status):
     if finding.check_metadata.Provider == "aws":
         details = finding.region
     if finding.check_metadata.Provider == "azure":
         details = finding.check_metadata.ServiceName
     if finding.check_metadata.Provider == "gcp":
         details = finding.location.lower()
+    if finding.check_metadata.Provider == "kubernetes":
+        details = finding.namespace.lower()

-    if verbose and not (is_quiet and finding.status != "FAIL"):
+    if verbose and (not status or finding.status in status):
         print(
             f"\t{color}{finding.status}{Style.RESET_ALL} {details}: {finding.status_extended}"
         )
@@ -57,28 +62,35 @@ def report(check_findings, output_options, audit_info):
             # Print findings by stdout
             color = set_report_color(finding.status)
             stdout_report(
-                finding, color, output_options.verbose, output_options.is_quiet
+                finding, color, output_options.verbose, output_options.status
             )

             if file_descriptors:
-                # Check if --quiet to only add fails to outputs
-                if not (finding.status != "FAIL" and output_options.is_quiet):
-                    if any(
-                        compliance in output_options.output_modes
-                        for compliance in available_compliance_frameworks
-                    ):
-                        fill_compliance(
-                            output_options,
-                            finding,
-                            audit_info,
-                            file_descriptors,
-                        )
-
-                        add_manual_controls(
-                            output_options,
-                            audit_info,
-                            file_descriptors,
-                        )
+                # Check if --status is enabled and if the filter applies
+                if (
+                    not output_options.status
+                    or finding.status in output_options.status
+                ):
+                    input_compliance_frameworks = list(
+                        set(output_options.output_modes).intersection(
+                            available_compliance_frameworks
+                        )
+                    )
+                    fill_compliance(
+                        output_options,
+                        finding,
+                        audit_info,
+                        file_descriptors,
+                        input_compliance_frameworks,
+                    )
+
+                    add_manual_controls(
+                        output_options,
+                        audit_info,
+                        file_descriptors,
+                        input_compliance_frameworks,
+                    )

                 # AWS specific outputs
                 if finding.check_metadata.Provider == "aws":
@@ -140,7 +152,7 @@ def report(check_findings, output_options, audit_info):
                     file_descriptors["json-ocsf"].write(",")

         else:  # No service resources in the whole account
-            color = set_report_color("INFO")
+            color = set_report_color("MANUAL")
             if output_options.verbose:
                 print(f"\t{color}INFO{Style.RESET_ALL} There are no resources")
         # Separator between findings and bar
@@ -165,12 +177,12 @@ def set_report_color(status: str) -> str:
         color = Fore.RED
     elif status == "ERROR":
         color = Fore.BLACK
-    elif status == "WARNING":
+    elif status == "MUTED":
         color = orange_color
-    elif status == "INFO":
+    elif status == "MANUAL":
         color = Fore.YELLOW
     else:
-        raise Exception("Invalid Report Status. Must be PASS, FAIL, ERROR or WARNING")
+        raise Exception("Invalid Report Status. Must be PASS, FAIL, ERROR or MUTED")
     return color
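The `set_report_color` hunk completes the status rename (`WARNING` to `MUTED`, `INFO` to `MANUAL`). A sketch of the post-change mapping using plain color names in place of the colorama codes the real function returns:

```python
def set_report_color(status):
    # Sketch of the post-change mapping; the real function returns colorama
    # codes (Fore.GREEN, Fore.RED, ...) and a custom orange escape sequence.
    colors = {
        "PASS": "green",
        "FAIL": "red",
        "ERROR": "black",
        "MUTED": "orange",
        "MANUAL": "yellow",
    }
    if status not in colors:
        raise Exception("Invalid Report Status. Must be PASS, FAIL, ERROR or MUTED")
    return colors[status]


print(set_report_color("MANUAL"))  # yellow
```

With this change a legacy `WARNING` status raises instead of being colored, which is why all call sites are updated in the same commit range.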
@@ -39,6 +39,9 @@ def display_summary_table(
     elif provider == "gcp":
         entity_type = "Project ID/s"
         audited_entities = ", ".join(audit_info.project_ids)
+    elif provider == "kubernetes":
+        entity_type = "Context"
+        audited_entities = audit_info.context["name"]

     if findings:
         current = {
485  prowler/lib/ui/live_display.py  Normal file
@@ -0,0 +1,485 @@
|
||||
import os
|
||||
import pathlib
|
||||
from datetime import timedelta
|
||||
from time import time
|
||||
|
||||
from rich.align import Align
|
||||
from rich.console import Console, Group
|
||||
from rich.layout import Layout
|
||||
from rich.live import Live
|
||||
from rich.padding import Padding
|
||||
from rich.panel import Panel
|
||||
from rich.progress import (
|
||||
BarColumn,
|
||||
MofNCompleteColumn,
|
||||
Progress,
|
||||
TextColumn,
|
||||
TimeElapsedColumn,
|
||||
TimeRemainingColumn,
|
||||
)
|
||||
from rich.rule import Rule
|
||||
from rich.table import Table
|
||||
from rich.text import Text
|
||||
from rich.theme import Theme
|
||||
|
||||
from prowler.config.config import prowler_version, timestamp
|
||||
from prowler.providers.aws.models import AWSIdentityInfo, AWSAssumeRole
|
||||
|
||||
# Defines a subclass of Live for creating and managing the live display in the CLI
|
||||
class LiveDisplay(Live):
|
||||
def __init__(self, *args, **kwargs):
|
||||
# Load a theme for the console display from a file
|
||||
theme = self.load_theme_from_file()
|
||||
super().__init__(renderable=None, console=Console(theme=theme), *args, **kwargs)
|
||||
self.sections = {} # Stores different sections of the layout
|
||||
self.enabled = False # Flag to enable or disable the live display
|
||||
|
||||
# Sets up the layout of the live display
|
||||
def make_layout(self):
|
||||
"""
|
||||
Defines the layout.
|
||||
Making sections invisible so it doesnt show the default Layout metadata before content is added
|
||||
Text(" ") is to stop the layout metadata from rendering before the layout is updated with real content
|
||||
client_and_service handles client init (when importing clients) and service check execution
|
||||
"""
|
||||
self.layout = Layout(name="root")
|
||||
# Split layout into intro, overall progress, and main sections
|
||||
self.layout.split(
|
||||
Layout(name="intro", ratio=3, minimum_size=9),
|
||||
Layout(Text(" "), name="overall_progress", minimum_size=5),
|
||||
Layout(name="main", ratio=10),
|
||||
)
|
||||
# Further split intro layout into body and creds sections
|
||||
self.layout["intro"].split_row(
|
||||
Layout(name="body", ratio=3),
|
||||
Layout(name="creds", ratio=2, visible=False),
|
||||
)
|
||||
# Split main layout into client_and_service and results sections
|
||||
self.layout["main"].split_row(
|
||||
Layout(
|
||||
Text(" "), name="client_and_service", ratio=3
|
||||
), # For client_init and service
|
||||
Layout(name="results", ratio=2, visible=False),
|
||||
)
|
||||
|
||||
# Loads a theme from a YAML file located in the same directory as this file
|
||||
def load_theme_from_file(self):
|
||||
# Loads theme.yaml from the same folder as this file
|
||||
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
|
||||
with open(f"{actual_directory}/theme.yaml") as f:
|
||||
theme = Theme.from_file(f)
|
||||
return theme
|
||||
|
||||
# Initializes the layout and sections based on CLI arguments
|
||||
def initialize(self, args):
|
||||
# A way to get around parsing args to LiveDisplay when it is intialized
|
||||
# This is so that the live_display object can be intialized in this file, and imported to other parts of prowler
|
||||
self.cli_args = args
|
||||
|
||||
self.enabled = not args.only_logs
|
||||
|
||||
if self.enabled:
|
||||
# Initialize layout
|
||||
self.make_layout()
|
||||
# Apply layout
|
||||
self.update(self.layout)
|
||||
# Add Intro section
|
||||
intro_layout = self.layout["intro"]
|
||||
intro_section = IntroSection(args, intro_layout)
|
||||
self.sections["intro"] = intro_section
|
||||
# Start live display
|
||||
self.start()
|
||||
|
||||
# Adds AWS credentials to the display
|
||||
def print_aws_credentials(self, aws_identity_info: AWSIdentityInfo, assumed_role_info: AWSAssumeRole):
|
||||
# Adds the AWS credentials to the display - will need to extend to gcp and azure
|
||||
# Create a new function for gcp and azure in this class, that will call a function in the intro_section class
|
||||
intro_section = self.sections["intro"]
|
||||
intro_section.add_aws_credentials(aws_identity_info, assumed_role_info)
|
||||
|
||||
# Adds and manages the overall progress section
|
||||
def add_overall_progress_section(self, total_checks_dict):
|
||||
overall_progress_section = OverallProgressSection(total_checks_dict)
|
||||
overall_progress_layout = self.layout["overall_progress"]
|
||||
overall_progress_layout.update(overall_progress_section)
|
||||
overall_progress_layout.visible = True
|
||||
self.sections["overall_progress"] = overall_progress_section
|
||||
|
||||
# Add results section
|
||||
self.add_results_section()
|
||||
|
||||
# Wrapper function to increment the overall progress
|
||||
def increment_overall_check_progress(self):
|
||||
# Called by ExecutionManager
|
||||
if self.enabled:
|
||||
section = self.sections["overall_progress"]
|
||||
section.increment_check_progress()
|
||||
|
||||
# Wrapper function to increment the progress for the current service
|
||||
def increment_overall_service_progress(self):
|
||||
# Called by ExecutionManager
|
||||
if self.enabled:
|
||||
section = self.sections["overall_progress"]
|
||||
section.increment_service_progress()
|
||||
|
||||
# Adds and manages the results section
|
||||
def add_results_section(self):
|
||||
# Intializes the results section
|
||||
results_layout = self.layout["results"]
|
||||
results_section = ResultsSection()
|
||||
results_layout.update(results_section)
|
||||
results_layout.visible = True
|
||||
self.sections["results"] = results_section
|
||||
|
||||
def add_results_for_service(self, service_name, service_findings):
|
||||
# Adds rows to the Service Check Results table
|
||||
if self.enabled:
|
||||
results_section = self.sections["results"]
|
||||
results_section.add_results_for_service(service_name, service_findings)
|
||||
|
||||
# Client Init Section
|
||||
def add_client_init_section(self, service_name):
|
||||
# Used to track progress of client init process
|
||||
if self.enabled:
|
||||
client_init_section = ClientInitSection(service_name)
|
||||
self.sections["client_and_service"] = client_init_section
|
||||
self.layout["client_and_service"].update(client_init_section)
|
||||
self.layout["client_and_service"].visible = True
|
||||
|
||||
# Service Section
|
||||
def add_service_section(self, service_name, total_checks):
|
||||
# Used to create the ServiceSection when checks start to execute (after clients have been imported)
|
||||
if self.enabled:
|
||||
service_section = ServiceSection(service_name, total_checks)
|
||||
self.sections["client_and_service"] = service_section
|
||||
self.layout["client_and_service"].update(service_section)
|
||||
|
||||
def increment_check_progress(self):
|
||||
if self.enabled:
|
||||
service_section = self.sections["client_and_service"]
|
||||
service_section.increment_check_progress()
|
||||
|
||||
# Misc
|
||||
def get_service_section(self):
|
||||
# Used by Check
|
||||
if self.enabled:
|
||||
return self.sections["client_and_service"]
|
||||
|
||||
def get_client_init_section(self):
|
||||
# Used by AWSService
|
||||
if self.enabled:
|
||||
return self.sections["client_and_service"]
|
||||
|
||||
def hide_service_section(self):
|
||||
# To hide the last service after execution has completed
|
||||
self.layout["client_and_service"].visible = False
|
||||
|
||||
def print_message(self, message):
|
||||
# No use yet
|
||||
self.console.print(message)
|
||||
|
||||
# The following classes (ServiceSection, ClientInitSection, IntroSection, OverallProgressSection, ResultsSection)
|
||||
# are used to define different sections of the live display, each with its own layout, progress bars,
|
||||
|
||||
class ServiceSection:
|
||||
def __init__(self, service_name, total_checks) -> None:
|
||||
self.service_name = service_name
|
||||
self.total_checks = total_checks
|
||||
self.renderables = self.create_service_section()
|
||||
self.start_check_progress()
|
||||
|
||||
def __rich__(self):
|
||||
return Padding(self.renderables, (2, 2))
|
||||
|
||||
def create_service_section(self):
|
||||
# Create the progress components
|
||||
self.check_progress = Progress(
|
||||
TextColumn("[bold]{task.description}"),
|
||||
BarColumn(bar_width=None),
|
||||
MofNCompleteColumn(),
|
||||
transient=False, # Optional: set True if you want the progress bar to disappear after completion
|
||||
)
|
||||
|
||||
# Used to add titles that dont need progress bars
|
||||
self.title_bar = Progress(
|
||||
TextColumn("[progress.description]{task.description}"), transient=True
|
||||
)
|
||||
# Progress Bar for Service Init and Checks
|
||||
self.task_progress = Progress(
|
||||
TextColumn("[progress.description]{task.description}"),
|
||||
BarColumn(bar_width=None),
|
||||
MofNCompleteColumn(),
|
||||
TimeElapsedColumn(),
|
||||
TimeRemainingColumn(),
|
||||
transient=True,
|
||||
)
|
||||
|
||||
return Group(
|
||||
Panel(
|
||||
Group(
|
||||
self.check_progress,
|
||||
Rule(style="bold blue"),
|
||||
self.title_bar,
|
||||
Rule(style="bold blue"),
|
||||
self.task_progress,
|
||||
),
|
||||
title=f"Service: {self.service_name}",
|
||||
),
|
||||
)
|
||||
|
||||
def start_check_progress(self):
|
||||
self.check_progress_task_id = self.check_progress.add_task(
|
||||
"Checks executed", total=self.total_checks
|
||||
)
|
||||
|
||||
def increment_check_progress(self):
|
||||
self.check_progress.update(self.check_progress_task_id, advance=1)
|
||||
|
||||
|
||||
class ClientInitSection:
|
||||
def __init__(self, client_name) -> None:
|
||||
self.client_name = client_name
|
||||
self.renderables = self.create_client_init_section()
|
||||
|
||||
def __rich__(self):
|
||||
return Padding(self.renderables, (2, 2))
|
||||
|
||||
def create_client_init_section(self):
|
||||
# Progress Bar for Checks
|
||||
self.task_progress_bar = Progress(
|
||||
TextColumn("[progress.description]{task.description}"),
|
||||
BarColumn(bar_width=None),
|
||||
MofNCompleteColumn(),
|
||||
TimeElapsedColumn(),
|
||||
TimeRemainingColumn(),
|
||||
transient=True,
|
||||
)
|
||||
|
||||
return Group(
|
||||
Panel(
|
||||
Group(
|
||||
self.task_progress_bar,
|
||||
),
|
||||
title=f"Intializing {self.client_name.replace('_', ' ')}",
|
||||
),
|
||||
)
|
||||
|
||||
|
||||
class IntroSection:
|
||||
def __init__(self, args, layout: Layout) -> None:
|
||||
self.body_layout = layout["body"]
|
||||
self.creds_layout = layout["creds"]
|
||||
self.renderables = []
|
||||
self.title = f"Prowler v{prowler_version}"
|
||||
if not args.no_banner:
|
||||
self.create_banner(args)
|
||||
|
||||
def __rich__(self):
|
||||
return Group(*self.renderables)
|
||||
|
||||
def create_banner(self, args):
|
||||
banner_text = f"""[banner_color] _
|
||||
_ __ _ __ _____ _| | ___ _ __
|
||||
| '_ \| '__/ _ \ \ /\ / / |/ _ \ '__|
|
||||
| |_) | | | (_) \ V V /| | __/ |
|
||||
| .__/|_| \___/ \_/\_/ |_|\___|_|v{prowler_version}
|
||||
|_|[/banner_color][banner_blue]the handy cloud security tool[/banner_blue]
|
||||
|
||||
[info]Date: {timestamp.strftime('%Y-%m-%d %H:%M:%S')}[/info]
|
||||
"""
|
||||
|
||||
if args.verbose:
|
||||
banner_text += """
|
||||
Color code for results:
|
||||
- [info]INFO (Information)[/info]
|
||||
- [pass]PASS (Recommended value)[/pass]
|
||||
- [orange_color]WARNING (Ignored by mutelist)[/orange_color]
|
||||
- [fail]FAIL (Fix required)[/fail]
|
||||
"""
|
||||
self.renderables.append(banner_text)
|
||||
self.body_layout.update(Group(*self.renderables))
|
||||
self.body_layout.visible = True

    def add_aws_credentials(
        self, aws_identity_info: AWSIdentityInfo, assumed_role_info: AWSAssumeRole
    ):
        # Beautify audited regions, and set to "all" if there is no region filter
        regions = (
            ", ".join(aws_identity_info.audited_regions)
            if aws_identity_info.audited_regions is not None
            else "all"
        )
        # Beautify audited profile, and set to "default" if there is no profile set
        profile = (
            aws_identity_info.profile
            if aws_identity_info.profile is not None
            else "default"
        )

        content = Text()
        content.append(
            "This report is being generated using the credentials below:\n\n",
            style="bold",
        )

        content.append("AWS-CLI Profile: ", style="bold")
        content.append(f"[{profile}]\n", style="info")

        content.append("AWS Filter Region: ", style="bold")
        content.append(f"[{regions}]\n", style="info")

        content.append("AWS Account: ", style="bold")
        content.append(f"[{aws_identity_info.account}]\n", style="info")

        content.append("UserId: ", style="bold")
        content.append(f"[{aws_identity_info.user_id}]\n", style="info")

        content.append("Caller Identity ARN: ", style="bold")
        content.append(f"[{aws_identity_info.identity_arn}]\n", style="info")

        # If a role has been assumed, print the assumed role ARN
        if assumed_role_info.role_arn is not None:
            content.append("Assumed Role ARN: ", style="bold")
            content.append(f"[{assumed_role_info.role_arn}]\n", style="info")

        self.creds_layout.update(content)
        self.creds_layout.visible = True


class OverallProgressSection:
    def __init__(self, total_checks_dict: dict) -> None:
        self.start_time = time()  # Start the timer
        self.renderables = self.create_renderable(total_checks_dict)

    def __rich__(self):
        elapsed_time = self.total_time_taken()
        return Group(*self.renderables, f"Total time taken: {elapsed_time}")

    def total_time_taken(self):
        elapsed_seconds = int(time() - self.start_time)
        elapsed_time = timedelta(seconds=elapsed_seconds)
        return elapsed_time

    def create_renderable(self, total_checks_dict):
        services_num = len(total_checks_dict)  # number of keys == number of services
        checks_num = sum(total_checks_dict.values())

        check_noun = "checks" if checks_num != 1 else "check"

        # Create the progress bar
        self.overall_progress_bar = Progress(
            TextColumn("[bold]{task.description}"),
            BarColumn(bar_width=None),
            MofNCompleteColumn(),
            transient=False,  # Optional: set True if you want the progress bar to disappear after completion
        )
        # Create the "Services completed" task to track the number of services completed
        self.service_progress_task_id = self.overall_progress_bar.add_task(
            "Services completed", total=services_num
        )
        # Create the "Checks executed" task to track the number of checks completed across all services
        self.check_progress_task_id = self.overall_progress_bar.add_task(
            "Checks executed", total=checks_num
        )

        content = Text()
        content.append(
            f"Executing {checks_num} {check_noun} across {services_num} services, please wait...\n",
            style="bold",
        )

        return [content, self.overall_progress_bar]

    def increment_check_progress(self):
        self.overall_progress_bar.update(self.check_progress_task_id, advance=1)

    def increment_service_progress(self):
        self.overall_progress_bar.update(self.service_progress_task_id, advance=1)


class ResultsSection:
    def __init__(self, verbose=True):
        self.verbose = verbose
        self.table = Table(title="Service Check Results")
        self.table.add_column("Service", justify="left")

        if self.verbose:
            self.severities = ["critical", "high", "medium", "low"]
            # Add a column per severity level when verbose; report the count of FAILs per severity per service
            for severity in self.severities:
                styled_header = (
                    f"[{severity.lower()}]{severity.capitalize()}[/{severity.lower()}]"
                )
                self.table.add_column(styled_header, justify="center")
        else:
            # Dynamically track the statuses; report the status counts for each service
            self.status_columns = set(["PASS", "FAIL"])
            self.service_findings = {}  # Dictionary to store findings for each service

            # Dictionary to map plain statuses to their stylized forms
            self.status_headers = {
                "FAIL": "[fail]Fail[/fail]",
                "PASS": "[pass]Pass[/pass]",
            }

            # Add the initial columns with styling
            for header in self.status_headers.values():
                self.table.add_column(header, justify="center")

    def add_results_for_service(self, service_name, service_findings):
        if self.verbose:
            # Count FAILs per severity
            severity_counts = {severity: 0 for severity in self.severities}
            for finding in service_findings:
                if finding.status == "FAIL":
                    severity_counts[finding.check_metadata.Severity] += 1

            # Add a row with the severity counts
            row = [service_name] + [
                str(severity_counts[severity]) for severity in self.severities
            ]
            self.table.add_row(*row)
        else:
            # Update the dictionary with the new findings
            status_counts = {report.status: 0 for report in service_findings}
            for report in service_findings:
                status_counts[report.status] += 1
            self.service_findings[service_name] = status_counts

            # Update status_columns and the table columns
            self.status_columns.update(status_counts.keys())
            for status in self.status_columns:
                if status not in self.status_headers:
                    # If it is a new status, add it to status_headers and the table
                    # [{status.lower()}] applies the styling defined in theme.yaml
                    styled_header = (
                        f"[{status.lower()}]{status.capitalize()}[/{status.lower()}]"
                    )
                    self.status_headers[status] = styled_header
                    self.table.add_column(styled_header, justify="center")

            # Update the table with the findings for all services
            self._update_table()

    def _update_table(self):
        # Only used when verbose is False
        # Clear existing rows
        self.table.rows.clear()

        # Add updated rows for all services
        for service, counts in self.service_findings.items():
            row = [service]
            for status in self.status_columns:
                count = counts.get(status, 0)
                percentage = (
                    f"{(count / sum(counts.values()) * 100):.2f}%" if counts else "0%"
                )
                row.append(f"{count} ({percentage})")
            self.table.add_row(*row)

    def __rich__(self):
        # This method allows the ResultsSection to be rendered directly by Rich
        if not self.table.rows:
            return Text("")
        return Padding(Align.center(self.table), (0, 2))


# Create an instance of LiveDisplay to import elsewhere (ExecutionManager, the checks, the services)
live_display = LiveDisplay(vertical_overflow="visible")
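The "Total time taken" line rendered by `OverallProgressSection.total_time_taken` relies on `datetime.timedelta`'s default `H:MM:SS` string form. A minimal standalone sketch of that pattern (variable names here are illustrative, not prowler's API):

```python
from datetime import timedelta
from time import time

start_time = time()  # captured in __init__, as in OverallProgressSection
# ... scan work would happen here ...
# Truncating to whole seconds avoids fractional-second noise in the display
elapsed = timedelta(seconds=int(time() - start_time))

# For a known duration, the rendering is deterministic: 3723 s is 1 h 2 min 3 s
rendered = str(timedelta(seconds=3723))
print(rendered)  # 1:02:03
```

Because `__rich__` interpolates the `timedelta` into an f-string on every refresh, the display stays current without any extra formatting code.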
16  prowler/lib/ui/theme.yaml  Normal file
@@ -0,0 +1,16 @@
[styles]
info = yellow1
warning = dark_orange
fail = bold red
pass = bold green
banner_blue = dodger_blue3 bold
banner_color = bold green
orange_color = dark_orange
critical = bold bright_red
high = bold red
medium = bold dark_orange
low = bold yellow1


# style names must be lower case, start with a letter, and only contain letters or the characters ".", "-", "_".
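Despite the `.yaml` extension, the file above is INI-style (a `[styles]` section of `key = value` pairs), which is the layout rich's theme reader works with. A small sketch using only the standard library to show the parse; the inlined text is a subset of the file above, assumed for illustration:

```python
import configparser

# Inlined subset of the [styles] section above
THEME_TEXT = """\
[styles]
info = yellow1
warning = dark_orange
fail = bold red
pass = bold green
"""

parser = configparser.ConfigParser()
parser.read_string(THEME_TEXT)

# Each key becomes a style name; each value is a rich style definition string
styles = dict(parser["styles"])
print(styles["fail"])  # bold red
```

Markup like `[fail]FAIL (Fix required)[/fail]` in the banner text resolves against these names, so adding a new status color only requires a new line in this file.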
@@ -11,7 +11,7 @@ from prowler.lib.check.check import list_modules, recover_checks_from_service
 from prowler.lib.logger import logger
 from prowler.lib.utils.utils import open_file, parse_json_file
 from prowler.providers.aws.config import AWS_STS_GLOBAL_ENDPOINT_REGION
-from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
+from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info, AWSAssumeRole
 from prowler.providers.aws.lib.credentials.credentials import create_sts_session


@@ -109,7 +109,7 @@ class AWS_Provider:

 def assume_role(
     session: session.Session,
-    assumed_role_info: AWS_Assume_Role,
+    assumed_role_info: AWSAssumeRole,
     sts_endpoint_region: str = None,
 ) -> dict:
     try:
539  prowler/providers/aws/aws_provider_new.py  Normal file
@@ -0,0 +1,539 @@
import os
import pathlib
import sys
from argparse import Namespace
from typing import Any, Optional

from boto3 import client, session
from botocore.config import Config
from botocore.credentials import RefreshableCredentials
from botocore.session import get_session
from colorama import Fore, Style

from prowler.config.config import aws_services_json_file
from prowler.lib.check.check import list_modules, recover_checks_from_service
from prowler.lib.logger import logger
from prowler.lib.ui.live_display import live_display
from prowler.lib.utils.utils import open_file, parse_json_file
from prowler.providers.aws.config import (
    AWS_STS_GLOBAL_ENDPOINT_REGION,
    BOTO3_USER_AGENT_EXTRA,
)
from prowler.providers.aws.lib.arn.arn import parse_iam_credentials_arn
from prowler.providers.aws.lib.credentials.credentials import (
    create_sts_session,
    validate_AWSCredentials,
)
from prowler.providers.aws.lib.organizations.organizations import (
    get_organizations_metadata,
)
from prowler.providers.aws.models import (
    AWSAssumeRole,
    AWSAssumeRoleConfiguration,
    AWSCredentials,
    AWSIdentityInfo,
    AWSOrganizationsInfo,
    AWSSession,
)
from prowler.providers.common.provider import Provider


class AwsProvider(Provider):
    session: AWSSession = AWSSession(
        session=None, session_config=None, original_session=None
    )
    identity: AWSIdentityInfo = AWSIdentityInfo(
        account=None,
        account_arn=None,
        user_id=None,
        partition=None,
        identity_arn=None,
        profile=None,
        profile_region=None,
        audited_regions=[],
    )
    assumed_role: AWSAssumeRoleConfiguration = AWSAssumeRoleConfiguration(
        assumed_role_info=AWSAssumeRole(
            role_arn=None,
            session_duration=None,
            external_id=None,
            mfa_enabled=False,
        ),
        assumed_role_credentials=AWSCredentials(
            aws_access_key_id=None,
            aws_session_token=None,
            aws_secret_access_key=None,
            expiration=None,
        ),
    )
    organizations_metadata: AWSOrganizationsInfo = AWSOrganizationsInfo(
        account_details_email=None,
        account_details_name=None,
        account_details_arn=None,
        account_details_org=None,
        account_details_tags=None,
    )
    audit_resources: Optional[Any]
    audit_metadata: Optional[Any]
    audit_config: dict = {}
    mfa_enabled: bool = False
    ignore_unused_services: bool = False

    def __init__(self, arguments: Namespace):
        logger.info("Setting AWS provider ...")
        # Parse input arguments
        # Assume Role options
        input_role = getattr(arguments, "role", None)
        input_session_duration = getattr(arguments, "session_duration", None)
        input_external_id = getattr(arguments, "external_id", None)

        # STS Endpoint Region
        sts_endpoint_region = getattr(arguments, "sts_endpoint_region", None)

        # MFA configuration (False by default)
        input_mfa = getattr(arguments, "mfa", None)

        input_profile = getattr(arguments, "profile", None)
        input_regions = getattr(arguments, "region", None)
        organizations_role_arn = getattr(arguments, "organizations_role", None)

        # Set the maximum retries for the standard retrier config
        aws_retries_max_attempts = getattr(arguments, "aws_retries_max_attempts", None)

        # Set whether unused services must be ignored
        ignore_unused_services = getattr(arguments, "ignore_unused_services", None)

        self.session.session_config = self.__set_session_config__(
            aws_retries_max_attempts
        )

        # Set ignore unused services
        self.ignore_unused_services = ignore_unused_services

        # Start populating the AWS identity object
        self.identity.profile = input_profile
        self.identity.audited_regions = input_regions

        # We need to create an original session using the regular auth path (creds, profile, etc.)
        logger.info("Generating original session ...")
        self.session.session = self.setup_session(input_mfa)

        # After the session is created, validate it
        logger.info("Validating credentials ...")
        caller_identity = validate_AWSCredentials(
            self.session.session, input_regions, sts_endpoint_region
        )

        logger.info("Credentials validated")
        logger.info(f"Original caller identity UserId: {caller_identity['UserId']}")
        logger.info(f"Original caller identity ARN: {caller_identity['Arn']}")
        # Set the values of the AWS identity object
        self.identity.account = caller_identity["Account"]
        self.identity.identity_arn = caller_identity["Arn"]
        self.identity.user_id = caller_identity["UserId"]
        self.identity.partition = parse_iam_credentials_arn(
            caller_identity["Arn"]
        ).partition
        self.identity.account_arn = (
            f"arn:{self.identity.partition}:iam::{self.identity.account}:root"
        )

        # Save the original session
        self.session.original_session = self.session.session
        # Time to check role assumption
        if input_role:
            # The session will be the assumed one
            self.session.session = self.setup_assumed_session(
                input_role,
                input_external_id,
                input_mfa,
                input_session_duration,
                sts_endpoint_region,
            )
            logger.info("Audit session is the new session created assuming role")
        # Check whether organizations info is going to be retrieved
        if organizations_role_arn:
            logger.info(
                f"Getting organizations metadata for account {organizations_role_arn}"
            )
            # The session will be the assumed one with organizations permissions
            self.session.session = self.setup_assumed_session(
                organizations_role_arn,
                input_external_id,
                input_mfa,
                input_session_duration,
                sts_endpoint_region,
            )
            self.organizations_metadata = get_organizations_metadata(
                self.identity.account, self.assumed_role.assumed_role_credentials
            )
            logger.info("Organizations metadata retrieved")
        if self.session.session.region_name:
            self.identity.profile_region = self.session.session.region_name
        else:
            self.identity.profile_region = "us-east-1"

        if not getattr(arguments, "only_logs", None):
            self.print_credentials()

        # Parse scan tags
        if getattr(arguments, "resource_tags", None):
            input_resource_tags = arguments.resource_tags
            self.audit_resources = self.get_tagged_resources(input_resource_tags)

        # Parse input resource ARNs (only overwrite if they were provided)
        if getattr(arguments, "resource_arn", None):
            self.audit_resources = arguments.resource_arn

    def setup_session(self, input_mfa: bool):
        logger.info("Creating regular session ...")
        # Input MFA only if a role is not going to be assumed
        if input_mfa and not self.assumed_role.assumed_role_info.role_arn:
            mfa_ARN, mfa_TOTP = self.__input_role_mfa_token_and_code__()
            get_session_token_arguments = {
                "SerialNumber": mfa_ARN,
                "TokenCode": mfa_TOTP,
            }
            sts_client = client("sts")
            session_credentials = sts_client.get_session_token(
                **get_session_token_arguments
            )
            return session.Session(
                aws_access_key_id=session_credentials["Credentials"]["AccessKeyId"],
                aws_secret_access_key=session_credentials["Credentials"][
                    "SecretAccessKey"
                ],
                aws_session_token=session_credentials["Credentials"]["SessionToken"],
                profile_name=self.identity.profile,
            )
        else:
            return session.Session(
                profile_name=self.identity.profile,
            )

    def setup_assumed_session(
        self,
        input_role: str,
        input_external_id: str,
        input_mfa: str,
        session_duration: int,
        sts_endpoint_region: str,
    ):
        logger.info("Creating assumed session ...")
        # Store information about the role that is going to be assumed
        self.assumed_role.assumed_role_info.role_arn = input_role
        self.assumed_role.assumed_role_info.session_duration = session_duration
        self.assumed_role.assumed_role_info.external_id = input_external_id
        self.assumed_role.assumed_role_info.mfa_enabled = input_mfa
        # Check if the role ARN is valid
        try:
            # This returns the ARN already parsed into a dict, to be used when its fields are needed
            role_arn_parsed = parse_iam_credentials_arn(
                self.assumed_role.assumed_role_info.role_arn
            )

        except Exception as error:
            logger.critical(f"{error.__class__.__name__} -- {error}")
            sys.exit(1)

        else:
            logger.info(f"Assuming role {self.assumed_role.assumed_role_info.role_arn}")
            # Assume the role
            assumed_role_response = self.__assume_role__(
                self.session.session,
                sts_endpoint_region,
            )
            logger.info("Role assumed")
            # Set the info needed to create a session with an assumed role
            self.assumed_role.assumed_role_credentials = AWSCredentials(
                aws_access_key_id=assumed_role_response["Credentials"]["AccessKeyId"],
                aws_session_token=assumed_role_response["Credentials"]["SessionToken"],
                aws_secret_access_key=assumed_role_response["Credentials"][
                    "SecretAccessKey"
                ],
                expiration=assumed_role_response["Credentials"]["Expiration"],
            )
            # Set identity parameters
            self.identity.account = role_arn_parsed.account_id
            self.identity.partition = role_arn_parsed.partition
            self.identity.account_arn = (
                f"arn:{self.identity.partition}:iam::{self.identity.account}:root"
            )
            # From botocore we can use the RefreshableCredentials class, which has an attribute (refresh_using)
            # that needs to be a method without arguments that retrieves a new set of fresh credentials
            # by assuming the role again. -> https://github.com/boto/botocore/blob/098cc255f81a25b852e1ecdeb7adebd94c7b1b73/botocore/credentials.py#L395
            assumed_refreshable_credentials = RefreshableCredentials(
                access_key=self.assumed_role.assumed_role_credentials.aws_access_key_id,
                secret_key=self.assumed_role.assumed_role_credentials.aws_secret_access_key,
                token=self.assumed_role.assumed_role_credentials.aws_session_token,
                expiry_time=self.assumed_role.assumed_role_credentials.expiration,
                refresh_using=self.refresh_credentials,
                method="sts-assume-role",
            )
            # Here we need the botocore session since it has to use refreshable credentials
            assumed_botocore_session = get_session()
            assumed_botocore_session._credentials = assumed_refreshable_credentials
            assumed_botocore_session.set_config_variable(
                "region", self.identity.profile_region
            )
            return session.Session(
                profile_name=self.identity.profile,
                botocore_session=assumed_botocore_session,
            )

    # Refresh credentials method using assume role
    # This method is called "adding ()" to the name, so it cannot accept arguments
    # https://github.com/boto/botocore/blob/098cc255f81a25b852e1ecdeb7adebd94c7b1b73/botocore/credentials.py#L570
    def refresh_credentials(self) -> dict:
        logger.info("Refreshing assumed credentials ...")
        response = self.__assume_role__(self.session.original_session, None)
        refreshed_credentials = dict(
            # The keys of this dict must match what RefreshableCredentials expects
            access_key=response["Credentials"]["AccessKeyId"],
            secret_key=response["Credentials"]["SecretAccessKey"],
            token=response["Credentials"]["SessionToken"],
            expiry_time=response["Credentials"]["Expiration"].isoformat(),
        )
        logger.info("Refreshed credentials")
        return refreshed_credentials

    def print_credentials(self):
        live_display.add_aws_credentials(
            self.identity, self.assumed_role.assumed_role_info
        )

    def generate_regional_clients(
        self, service: str, global_service: bool = False
    ) -> dict:
        try:
            regional_clients = {}
            service_regions = self.get_available_aws_service_regions(service)
            # If it is a global service, gather only one region
            if global_service:
                if service_regions:
                    if self.identity.profile_region in service_regions:
                        service_regions = [self.identity.profile_region]
                    service_regions = service_regions[:1]
            for region in service_regions:
                regional_client = self.session.session.client(
                    service, region_name=region, config=self.session.session_config
                )
                regional_client.region = region
                regional_clients[region] = regional_client
            return regional_clients
        except Exception as error:
            logger.error(
                f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )

    def get_available_aws_service_regions(self, service: str) -> list:
        # Get the services JSON locally
        actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
        with open_file(f"{actual_directory}/{aws_services_json_file}") as f:
            data = parse_json_file(f)
        json_regions = data["services"][service]["regions"][self.identity.partition]
        if self.identity.audited_regions:  # Check for input audited regions
            # Get the common regions between the input and the JSON
            regions = list(
                set(json_regions).intersection(self.identity.audited_regions)
            )
        else:  # Get all regions for the service and partition from the JSON
            regions = json_regions
        return regions

    @staticmethod
    def get_aws_available_regions():
        try:
            actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
            with open_file(f"{actual_directory}/{aws_services_json_file}") as f:
                data = parse_json_file(f)

            regions = set()
            for service in data["services"].values():
                for partition in service["regions"]:
                    for item in service["regions"][partition]:
                        regions.add(item)
            return list(regions)
        except Exception as error:
            logger.error(f"{error.__class__.__name__}: {error}")
            return []

    @staticmethod
    def get_checks_from_input_arn(audit_resources: list, provider: str) -> list:
        """get_checks_from_input_arn gets the list of checks from the input ARNs"""
        checks_from_arn = set()
        is_subservice_in_checks = False
        # If there are audit resources, only their services are executed
        if audit_resources:
            services_without_subservices = ["guardduty", "kms", "s3", "elb", "efs"]
            service_list = set()
            sub_service_list = set()
            for resource in audit_resources:
                service = resource.split(":")[2]
                sub_service = resource.split(":")[5].split("/")[0].replace("-", "_")
                # The WAF services do not have checks
                if service != "wafv2" and service != "waf":
                    # Parse services whose names differ in the ARNs
                    if service == "lambda":
                        service = "awslambda"
                    elif service == "elasticloadbalancing":
                        service = "elb"
                    elif service == "elasticfilesystem":
                        service = "efs"
                    elif service == "logs":
                        service = "cloudwatch"
                    # Check if Prowler has checks for the service
                    try:
                        list_modules(provider, service)
                    except ModuleNotFoundError:
                        # The service is not supported
                        pass
                    else:
                        service_list.add(service)

                    # Get subservices to execute only the applicable checks
                    if service not in services_without_subservices:
                        # Parse some specific subservices
                        if service == "ec2":
                            if sub_service == "security_group":
                                sub_service = "securitygroup"
                            if sub_service == "network_acl":
                                sub_service = "networkacl"
                            if sub_service == "image":
                                sub_service = "ami"
                        if service == "rds":
                            if sub_service == "cluster_snapshot":
                                sub_service = "snapshot"
                        sub_service_list.add(sub_service)
                    else:
                        sub_service_list.add(service)
            checks = recover_checks_from_service(service_list, provider)

            # Filter only the checks with audited subservices
            for check in checks:
                if any(sub_service in check for sub_service in sub_service_list):
                    if not (sub_service == "policy" and "password_policy" in check):
                        checks_from_arn.add(check)
                        is_subservice_in_checks = True

            if not is_subservice_in_checks:
                checks_from_arn = checks

        # Return the final, sorted checks list
        return sorted(checks_from_arn)

    @staticmethod
    def get_regions_from_audit_resources(audit_resources: list) -> set:
        """get_regions_from_audit_resources gets the regions from the audit resources ARNs"""
        audited_regions = set()
        for resource in audit_resources:
            region = resource.split(":")[3]
            if region:
                audited_regions.add(region)
        return audited_regions

    def get_tagged_resources(self, input_resource_tags: list):
        """
        get_tagged_resources returns a list of the resources that are going to be scanned based on the given input tags
        """
        try:
            resource_tags = []
            tagged_resources = []
            for tag in input_resource_tags:
                key = tag.split("=")[0]
                value = tag.split("=")[1]
                resource_tags.append({"Key": key, "Values": [value]})
            # Get the resources with resource_tags for all regions
            for regional_client in self.generate_regional_clients(
                "resourcegroupstaggingapi"
            ).values():
                try:
                    get_resources_paginator = regional_client.get_paginator(
                        "get_resources"
                    )
                    for page in get_resources_paginator.paginate(
                        TagFilters=resource_tags
                    ):
                        for resource in page["ResourceTagMappingList"]:
                            tagged_resources.append(resource["ResourceARN"])
                except Exception as error:
                    logger.error(
                        f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                    )
        except Exception as error:
            logger.critical(
                f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )
            sys.exit(1)
        else:
            return tagged_resources

    def get_default_region(self, service: str) -> str:
        """get_default_region gets the default region based on the profile and the audited service regions"""
        service_regions = self.get_available_aws_service_regions(service)
        default_region = (
            self.get_global_region()
        )  # Global region of the partition when all regions are audited and there is no profile region
        if self.identity.profile_region in service_regions:
            # Return the profile region only if it is audited
            default_region = self.identity.profile_region
        # Return the first audited region if specific regions are audited
        elif self.identity.audited_regions:
            default_region = self.identity.audited_regions[0]
        return default_region

    def get_global_region(self) -> str:
        """get_global_region gets the global region based on the audited partition"""
        global_region = "us-east-1"
        if self.identity.partition == "aws-cn":
            global_region = "cn-north-1"
        elif self.identity.partition == "aws-us-gov":
            global_region = "us-gov-east-1"
        elif "aws-iso" in self.identity.partition:
            global_region = "aws-iso-global"
        return global_region

    def __input_role_mfa_token_and_code__(self) -> tuple[str, str]:
        """__input_role_mfa_token_and_code__ asks for the AWS MFA ARN and TOTP and returns them."""
        mfa_ARN = input("Enter ARN of MFA: ")
        mfa_TOTP = input("Enter MFA code: ")
        return (mfa_ARN.strip(), mfa_TOTP.strip())

    def __set_session_config__(self, aws_retries_max_attempts: int):
        session_config = Config(
            retries={"max_attempts": 3, "mode": "standard"},
            user_agent_extra=BOTO3_USER_AGENT_EXTRA,
        )
        if aws_retries_max_attempts:
            # Create the new config
            config = Config(
                retries={
                    "max_attempts": aws_retries_max_attempts,
                    "mode": "standard",
                },
            )
            # Merge the new configuration into the default one
            session_config = session_config.merge(config)

        return session_config

    def __assume_role__(
        self,
        session,
        sts_endpoint_region: str,
    ) -> dict:
        try:
            assume_role_arguments = {
                "RoleArn": self.assumed_role.assumed_role_info.role_arn,
                "RoleSessionName": "ProwlerAsessmentSession",
                "DurationSeconds": self.assumed_role.assumed_role_info.session_duration,
            }

            # Set the info to assume the role from the partition, account and role name
            if self.assumed_role.assumed_role_info.external_id:
                assume_role_arguments[
                    "ExternalId"
                ] = self.assumed_role.assumed_role_info.external_id

            if self.assumed_role.assumed_role_info.mfa_enabled:
                mfa_ARN, mfa_TOTP = self.__input_role_mfa_token_and_code__()
                assume_role_arguments["SerialNumber"] = mfa_ARN
                assume_role_arguments["TokenCode"] = mfa_TOTP

            # Set the STS endpoint region
            if sts_endpoint_region is None:
                sts_endpoint_region = AWS_STS_GLOBAL_ENDPOINT_REGION

            sts_client = create_sts_session(session, sts_endpoint_region)
            assumed_credentials = sts_client.assume_role(**assume_role_arguments)
        except Exception as error:
            logger.critical(
                f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
            )
            sys.exit(1)

        else:
            return assumed_credentials
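The ARN handling used by `get_checks_from_input_arn` and `get_regions_from_audit_resources` can be sketched independently of the provider class (the helper name and alias table below are illustrative, not prowler's API): the service is the third `:` field, the region the fourth, and the sub-service is the first `/` segment of the resource field with `-` normalized to `_`:

```python
# Services whose ARN name differs from prowler's internal service name
ARN_SERVICE_ALIASES = {
    "lambda": "awslambda",
    "elasticloadbalancing": "elb",
    "elasticfilesystem": "efs",
    "logs": "cloudwatch",
}


def parse_audit_resource(arn: str) -> tuple[str, str, str]:
    parts = arn.split(":")
    service = ARN_SERVICE_ALIASES.get(parts[2], parts[2])
    region = parts[3]
    # The sub-service is the resource-type prefix, with "-" mapped to "_"
    sub_service = parts[5].split("/")[0].replace("-", "_")
    return service, region, sub_service


service, region, sub_service = parse_audit_resource(
    "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/my-alb/abc123"
)
print(service, region, sub_service)  # elb eu-west-1 loadbalancer
```

S3 bucket ARNs and IAM ARNs have an empty region field, which is why `get_regions_from_audit_resources` guards with `if region:` before adding it to the audited set.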
@@ -27,12 +27,6 @@ def init_parser(self):
             help="ARN of the role to be assumed",
             # Pending ARN validation
         )
-        aws_auth_subparser.add_argument(
-            "--sts-endpoint-region",
-            nargs="?",
-            default=None,
-            help="Specify the AWS STS endpoint region to use. Read more at https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html",
-        )
         aws_auth_subparser.add_argument(
             "--mfa",
             action="store_true",
@@ -125,14 +119,14 @@ def init_parser(self):
             default=None,
             help="Shodan API key used by check ec2_elastic_ip_shodan.",
         )
-        # Allowlist
-        allowlist_subparser = aws_parser.add_argument_group("Allowlist")
-        allowlist_subparser.add_argument(
+        # Mute List
+        mutelist_subparser = aws_parser.add_argument_group("Mute List")
+        mutelist_subparser.add_argument(
             "-w",
-            "--allowlist-file",
+            "--mutelist-file",
             nargs="?",
             default=None,
-            help="Path for allowlist yaml file. See example prowler/config/aws_allowlist.yaml for reference and format. It also accepts AWS DynamoDB Table or Lambda ARNs or S3 URIs, see more in https://docs.prowler.cloud/en/latest/tutorials/allowlist/",
+            help="Path for mutelist yaml file. See example prowler/config/aws_mutelist.yaml for reference and format. It also accepts AWS DynamoDB Table or Lambda ARNs or S3 URIs, see more in https://docs.prowler.cloud/en/latest/tutorials/mutelist/",
         )

         # Based Scans
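One detail worth noting about the allowlist-to-mutelist rename above: with argparse, the attribute name on the parsed namespace is derived from the first long option, so renaming `--allowlist-file` to `--mutelist-file` also renames the namespace attribute every caller reads. A minimal standalone sketch (the `prog` name is illustrative):

```python
import argparse

parser = argparse.ArgumentParser(prog="prowler-sketch")
group = parser.add_argument_group("Mute List")
group.add_argument(
    "-w",
    "--mutelist-file",
    nargs="?",
    default=None,
    help="Path for mutelist yaml file.",
)

# The dest comes from the first long flag: "--mutelist-file" -> args.mutelist_file
args = parser.parse_args(["-w", "mutelist.yaml"])
print(args.mutelist_file)  # mutelist.yaml
```

This is why the rename in the hunk above must be paired with updates everywhere `args.allowlist_file` was previously accessed.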
@@ -2,7 +2,7 @@ from boto3 import session
 from botocore.config import Config

 from prowler.providers.aws.config import BOTO3_USER_AGENT_EXTRA
-from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
+from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info, AWSAssumeRole

 # Default Current Audit Info
 current_audit_info = AWS_Audit_Info(
@@ -25,7 +25,7 @@ current_audit_info = AWS_Audit_Info(
     profile=None,
     profile_region=None,
     credentials=None,
-    assumed_role_info=AWS_Assume_Role(
+    assumed_role_info=AWSAssumeRole(
         role_arn=None,
         session_duration=None,
         external_id=None,
@@ -7,7 +7,7 @@ from botocore.config import Config


 @dataclass
-class AWS_Credentials:
+class AWSCredentials:
     aws_access_key_id: str
     aws_session_token: str
     aws_secret_access_key: str
@@ -15,7 +15,7 @@ class AWS_Credentials:


 @dataclass
-class AWS_Assume_Role:
+class AWSAssumeRole:
     role_arn: str
     session_duration: int
     external_id: str
@@ -23,7 +23,7 @@ class AWS_Assume_Role:


 @dataclass
-class AWS_Organizations_Info:
+class AWSOrganizationsInfo:
     account_details_email: str
     account_details_name: str
     account_details_arn: str
@@ -44,13 +44,13 @@ class AWS_Audit_Info:
     audited_partition: str
     profile: str
     profile_region: str
-    credentials: AWS_Credentials
+    credentials: AWSCredentials
     mfa_enabled: bool
-    assumed_role_info: AWS_Assume_Role
+    assumed_role_info: AWSAssumeRole
     audited_regions: list
     audit_resources: list
-    organizations_metadata: AWS_Organizations_Info
-    audit_metadata: Optional[Any] = None
+    organizations_metadata: AWSOrganizationsInfo
+    audit_metadata: Optional[Any]
     audit_config: Optional[dict] = None
     ignore_unused_services: bool = False
     enabled_regions: set = field(default_factory=set)
@@ -8,16 +8,12 @@ from prowler.providers.aws.config import AWS_STS_GLOBAL_ENDPOINT_REGION
 from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info


-def validate_aws_credentials(
+def validate_AWSCredentials(
     session: session, input_regions: list, sts_endpoint_region: str = None
 ) -> dict:
     try:
-        # For a valid STS GetCallerIdentity we have to use the right AWS Region
-        # Check if the --sts-endpoint-region is set
-        if sts_endpoint_region is not None:
-            aws_region = sts_endpoint_region
-        # If there is no region passed with -f/--region/--filter-region
-        elif input_regions is None or len(input_regions) == 0:
+        if input_regions is None or len(input_regions) == 0:
             # If you have a region configured in your AWS config or credentials file
             if session.region_name is not None:
                 aws_region = session.region_name
@@ -42,7 +38,7 @@ def validate_aws_credentials(
     return caller_identity


-def print_aws_credentials(audit_info: AWS_Audit_Info):
+def print_AWSCredentials(audit_info: AWS_Audit_Info):
     # Beautify audited regions, set "all" if there is no filter region
     regions = (
         ", ".join(audit_info.audited_regions)
prowler/providers/aws/lib/mutelist/__init__.py (new file, 0 lines)
@@ -9,7 +9,7 @@ from schema import Optional, Schema
 from prowler.lib.logger import logger
 from prowler.lib.outputs.models import unroll_tags

-allowlist_schema = Schema(
+mutelist_schema = Schema(
     {
         "Accounts": {
             str: {
@@ -32,38 +32,38 @@ allowlist_schema = Schema(
 )


-def parse_allowlist_file(audit_info, allowlist_file):
+def parse_mutelist_file(audit_info, mutelist_file):
     try:
         # Check if file is a S3 URI
-        if re.search("^s3://([^/]+)/(.*?([^/]+))$", allowlist_file):
-            bucket = allowlist_file.split("/")[2]
-            key = ("/").join(allowlist_file.split("/")[3:])
+        if re.search("^s3://([^/]+)/(.*?([^/]+))$", mutelist_file):
+            bucket = mutelist_file.split("/")[2]
+            key = ("/").join(mutelist_file.split("/")[3:])
             s3_client = audit_info.audit_session.client("s3")
-            allowlist = yaml.safe_load(
+            mutelist = yaml.safe_load(
                 s3_client.get_object(Bucket=bucket, Key=key)["Body"]
-            )["Allowlist"]
+            )["Mute List"]
         # Check if file is a Lambda Function ARN
-        elif re.search(r"^arn:(\w+):lambda:", allowlist_file):
-            lambda_region = allowlist_file.split(":")[3]
+        elif re.search(r"^arn:(\w+):lambda:", mutelist_file):
+            lambda_region = mutelist_file.split(":")[3]
             lambda_client = audit_info.audit_session.client(
                 "lambda", region_name=lambda_region
             )
             lambda_response = lambda_client.invoke(
-                FunctionName=allowlist_file, InvocationType="RequestResponse"
+                FunctionName=mutelist_file, InvocationType="RequestResponse"
             )
             lambda_payload = lambda_response["Payload"].read()
-            allowlist = yaml.safe_load(lambda_payload)["Allowlist"]
+            mutelist = yaml.safe_load(lambda_payload)["Mute List"]
         # Check if file is a DynamoDB ARN
         elif re.search(
             r"^arn:aws(-cn|-us-gov)?:dynamodb:[a-z]{2}-[a-z-]+-[1-9]{1}:[0-9]{12}:table\/[a-zA-Z0-9._-]+$",
-            allowlist_file,
+            mutelist_file,
         ):
-            allowlist = {"Accounts": {}}
-            table_region = allowlist_file.split(":")[3]
+            mutelist = {"Accounts": {}}
+            table_region = mutelist_file.split(":")[3]
             dynamodb_resource = audit_info.audit_session.resource(
                 "dynamodb", region_name=table_region
             )
-            dynamo_table = dynamodb_resource.Table(allowlist_file.split("/")[1])
+            dynamo_table = dynamodb_resource.Table(mutelist_file.split("/")[1])
             response = dynamo_table.scan(
                 FilterExpression=Attr("Accounts").is_in(
                     [audit_info.audited_account, "*"]
@@ -80,8 +80,8 @@ def parse_allowlist_file(audit_info, allowlist_file):
             )
             dynamodb_items.update(response["Items"])
             for item in dynamodb_items:
-                # Create allowlist for every item
-                allowlist["Accounts"][item["Accounts"]] = {
+                # Create mutelist for every item
+                mutelist["Accounts"][item["Accounts"]] = {
                     "Checks": {
                         item["Checks"]: {
                             "Regions": item["Regions"],
@@ -90,24 +90,24 @@ def parse_allowlist_file(audit_info, allowlist_file):
                         }
                     }
                 }
                 if "Tags" in item:
-                    allowlist["Accounts"][item["Accounts"]]["Checks"][item["Checks"]][
+                    mutelist["Accounts"][item["Accounts"]]["Checks"][item["Checks"]][
                         "Tags"
                     ] = item["Tags"]
                 if "Exceptions" in item:
-                    allowlist["Accounts"][item["Accounts"]]["Checks"][item["Checks"]][
+                    mutelist["Accounts"][item["Accounts"]]["Checks"][item["Checks"]][
                         "Exceptions"
                     ] = item["Exceptions"]
         else:
-            with open(allowlist_file) as f:
-                allowlist = yaml.safe_load(f)["Allowlist"]
+            with open(mutelist_file) as f:
+                mutelist = yaml.safe_load(f)["Mute List"]
         try:
-            allowlist_schema.validate(allowlist)
+            mutelist_schema.validate(mutelist)
         except Exception as error:
             logger.critical(
-                f"{error.__class__.__name__} -- Allowlist YAML is malformed - {error}[{error.__traceback__.tb_lineno}]"
+                f"{error.__class__.__name__} -- Mute List YAML is malformed - {error}[{error.__traceback__.tb_lineno}]"
             )
             sys.exit(1)
-        return allowlist
+        return mutelist
     except Exception as error:
         logger.critical(
             f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
@@ -115,27 +115,27 @@ def parse_allowlist_file(audit_info, allowlist_file):
     sys.exit(1)


-def allowlist_findings(
-    allowlist: dict,
+def mutelist_findings(
+    mutelist: dict,
     audited_account: str,
     check_findings: [Any],
 ):
-    # Check if finding is allowlisted
+    # Check if finding is muted
     for finding in check_findings:
-        if is_allowlisted(
-            allowlist,
+        if is_muted(
+            mutelist,
             audited_account,
             finding.check_metadata.CheckID,
             finding.region,
             finding.resource_id,
             unroll_tags(finding.resource_tags),
         ):
-            finding.status = "WARNING"
+            finding.status = "MUTED"
     return check_findings


-def is_allowlisted(
-    allowlist: dict,
+def is_muted(
+    mutelist: dict,
     audited_account: str,
     check: str,
     finding_region: str,
@@ -143,31 +143,30 @@ def is_allowlisted(
     finding_tags,
 ):
     try:
-        allowlisted_checks = {}
-        # By default is not allowlisted
-        is_finding_allowlisted = False
-        # First set account key from allowlist dict
-        if audited_account in allowlist["Accounts"]:
-            allowlisted_checks = allowlist["Accounts"][audited_account]["Checks"]
+        muted_checks = {}
+        # By default is not muted
+        is_finding_muted = False
+        # First set account key from mutelist dict
+        if audited_account in mutelist["Accounts"]:
+            muted_checks = mutelist["Accounts"][audited_account]["Checks"]
         # If there is a *, it affects to all accounts
         # This cannot be elif since in the case of * and single accounts we
-        # want to merge allowlisted checks from * to the other accounts check list
-        if "*" in allowlist["Accounts"]:
-            checks_multi_account = allowlist["Accounts"]["*"]["Checks"]
-            allowlisted_checks.update(checks_multi_account)
-
-        # Test if it is allowlisted
-        if is_allowlisted_in_check(
-            allowlisted_checks,
+        # want to merge muted checks from * to the other accounts check list
+        if "*" in mutelist["Accounts"]:
+            checks_multi_account = mutelist["Accounts"]["*"]["Checks"]
+            muted_checks.update(checks_multi_account)
+        # Test if it is muted
+        if is_muted_in_check(
+            muted_checks,
             audited_account,
             check,
             finding_region,
             finding_resource,
             finding_tags,
         ):
-            is_finding_allowlisted = True
+            is_finding_muted = True

-        return is_finding_allowlisted
+        return is_finding_muted
     except Exception as error:
         logger.critical(
             f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
@@ -175,8 +174,8 @@ def is_allowlisted(
     sys.exit(1)


-def is_allowlisted_in_check(
-    allowlisted_checks,
+def is_muted_in_check(
+    muted_checks,
     audited_account,
     check,
     finding_region,
@@ -184,15 +183,15 @@ def is_allowlisted_in_check(
     finding_tags,
 ):
     try:
-        # Default value is not allowlisted
-        is_check_allowlisted = False
+        # Default value is not muted
+        is_check_muted = False

-        for allowlisted_check, allowlisted_check_info in allowlisted_checks.items():
+        for muted_check, muted_check_info in muted_checks.items():
             # map lambda to awslambda
-            allowlisted_check = re.sub("^lambda", "awslambda", allowlisted_check)
+            muted_check = re.sub("^lambda", "awslambda", muted_check)

             # Check if the finding is excepted
-            exceptions = allowlisted_check_info.get("Exceptions")
+            exceptions = muted_check_info.get("Exceptions")
             if is_excepted(
                 exceptions,
                 audited_account,
@@ -203,40 +202,36 @@ def is_allowlisted_in_check(
                 # Break loop and return default value since is excepted
                 break

-            allowlisted_regions = allowlisted_check_info.get("Regions")
-            allowlisted_resources = allowlisted_check_info.get("Resources")
-            allowlisted_tags = allowlisted_check_info.get("Tags")
+            muted_regions = muted_check_info.get("Regions")
+            muted_resources = muted_check_info.get("Resources")
+            muted_tags = muted_check_info.get("Tags")
             # If there is a *, it affects to all checks
             if (
-                "*" == allowlisted_check
-                or check == allowlisted_check
-                or re.search(allowlisted_check, check)
+                "*" == muted_check
+                or check == muted_check
+                or re.search(muted_check, check)
             ):
-                allowlisted_in_check = True
-                allowlisted_in_region = is_allowlisted_in_region(
-                    allowlisted_regions, finding_region
-                )
-                allowlisted_in_resource = is_allowlisted_in_resource(
-                    allowlisted_resources, finding_resource
-                )
-                allowlisted_in_tags = is_allowlisted_in_tags(
-                    allowlisted_tags, finding_tags
+                muted_in_check = True
+                muted_in_region = is_muted_in_region(muted_regions, finding_region)
+                muted_in_resource = is_muted_in_resource(
+                    muted_resources, finding_resource
                 )
+                muted_in_tags = is_muted_in_tags(muted_tags, finding_tags)

-                # For a finding to be allowlisted requires the following set to True:
-                # - allowlisted_in_check -> True
-                # - allowlisted_in_region -> True
-                # - allowlisted_in_tags -> True or allowlisted_in_resource -> True
+                # For a finding to be muted requires the following set to True:
+                # - muted_in_check -> True
+                # - muted_in_region -> True
+                # - muted_in_tags -> True or muted_in_resource -> True
                 # - excepted -> False

                 if (
-                    allowlisted_in_check
-                    and allowlisted_in_region
-                    and (allowlisted_in_tags or allowlisted_in_resource)
+                    muted_in_check
+                    and muted_in_region
+                    and (muted_in_tags or muted_in_resource)
                 ):
-                    is_check_allowlisted = True
+                    is_check_muted = True

-        return is_check_allowlisted
+        return is_check_muted
     except Exception as error:
         logger.critical(
             f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
@@ -244,12 +239,12 @@ def is_allowlisted_in_check(
     sys.exit(1)


-def is_allowlisted_in_region(
-    allowlisted_regions,
+def is_muted_in_region(
+    mutelist_regions,
     finding_region,
 ):
     try:
-        return __is_item_matched__(allowlisted_regions, finding_region)
+        return __is_item_matched__(mutelist_regions, finding_region)
     except Exception as error:
         logger.critical(
             f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
@@ -257,9 +252,9 @@ def is_allowlisted_in_region(
     sys.exit(1)


-def is_allowlisted_in_tags(allowlisted_tags, finding_tags):
+def is_muted_in_tags(muted_tags, finding_tags):
     try:
-        return __is_item_matched__(allowlisted_tags, finding_tags)
+        return __is_item_matched__(muted_tags, finding_tags)
     except Exception as error:
         logger.critical(
             f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
@@ -267,9 +262,9 @@ def is_allowlisted_in_tags(allowlisted_tags, finding_tags):
     sys.exit(1)


-def is_allowlisted_in_resource(allowlisted_resources, finding_resource):
+def is_muted_in_resource(muted_resources, finding_resource):
     try:
-        return __is_item_matched__(allowlisted_resources, finding_resource)
+        return __is_item_matched__(muted_resources, finding_resource)

     except Exception as error:
         logger.critical(
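The renamed helpers above reduce to a single rule: a finding is muted when its check and region match a mutelist entry, its tags or resource match, and no exception applies. A minimal stand-alone sketch of that decision (hypothetical helper names and simplified wildcard handling, not the module's actual API):

```python
import re


def item_matched(patterns, value):
    # True if any mutelist pattern matches the finding attribute ("*" matches all)
    if not patterns:
        return False
    return any(re.search(p.replace("*", ".*"), value) for p in patterns)


def is_finding_muted(muted_checks, check, finding):
    # Mirrors the rule: muted_in_check AND muted_in_region
    # AND (muted_in_tags OR muted_in_resource)
    muted_in_check = check in muted_checks or "*" in muted_checks
    if not muted_in_check:
        return False
    info = muted_checks.get(check, muted_checks.get("*", {}))
    return item_matched(info.get("Regions"), finding["region"]) and (
        item_matched(info.get("Tags"), finding["tags"])
        or item_matched(info.get("Resources"), finding["resource"])
    )


finding = {"region": "us-east-1", "resource": "my-bucket", "tags": ""}
rules = {"s3_bucket_public_access": {"Regions": ["*"], "Resources": ["my-bucket"]}}
print(is_finding_muted(rules, "s3_bucket_public_access", finding))  # True
```

A finding that clears this rule keeps its real status internally but is reported as `MUTED` instead of the old `WARNING`.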
@@ -3,12 +3,12 @@ import sys
 from boto3 import client

 from prowler.lib.logger import logger
-from prowler.providers.aws.lib.audit_info.models import AWS_Organizations_Info
+from prowler.providers.aws.lib.audit_info.models import AWSOrganizationsInfo


 def get_organizations_metadata(
     metadata_account: str, assumed_credentials: dict
-) -> AWS_Organizations_Info:
+) -> AWSOrganizationsInfo:
     try:
         organizations_client = client(
             "organizations",
@@ -30,7 +30,7 @@ def get_organizations_metadata(
         account_details_tags = ""
         for tag in list_tags_for_resource["Tags"]:
             account_details_tags += tag["Key"] + ":" + tag["Value"] + ","
-        organizations_info = AWS_Organizations_Info(
+        organizations_info = AWSOrganizationsInfo(
             account_details_email=organizations_metadata["Account"]["Email"],
             account_details_name=organizations_metadata["Account"]["Name"],
             account_details_arn=organizations_metadata["Account"]["Arn"],
@@ -4,7 +4,7 @@ def is_condition_block_restrictive(
     """
     is_condition_block_restrictive parses the IAM Condition policy block and, by default, returns True if the source_account passed as argument is within, False if not.

-    If argument is_cross_account_allowed is True it tests if the Condition block includes any of the operators allowlisted returning True if does, False if not.
+    If argument is_cross_account_allowed is True it tests if the Condition block includes any of the operators mutelisted returning True if does, False if not.


     @param condition_statement: dict with an IAM Condition block, e.g.:
@@ -71,6 +71,9 @@ def is_condition_block_restrictive(
                 if is_condition_key_restrictive:
                     is_condition_valid = True

+                if is_condition_key_restrictive:
+                    is_condition_valid = True
+
             # value is a string
             elif isinstance(
                 condition_statement[condition_operator][value],
@@ -20,18 +20,16 @@ def prepare_security_hub_findings(
         security_hub_findings_per_region[region] = []

     for finding in findings:
-        # We don't send the INFO findings to AWS Security Hub
-        if finding.status == "INFO":
+        # We don't send the MANUAL findings to AWS Security Hub
+        if finding.status == "MANUAL":
             continue

         # We don't send findings to not enabled regions
         if finding.region not in enabled_regions:
             continue

-        # Handle quiet mode
-        if (
-            output_options.is_quiet or output_options.send_sh_only_fails
-        ) and finding.status != "FAIL":
+        # Handle status filters, if any
+        if not output_options.status or finding.status in output_options.status:
             continue

         # Get the finding region
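The filtering above in miniature: `MANUAL` findings and findings from regions where Security Hub is not enabled never get forwarded. A simplified stand-in (hypothetical function, not the real `prepare_security_hub_findings`, and omitting the status-filter branch):

```python
def findings_to_send(findings, enabled_regions):
    # Keep only findings Security Hub should receive: drop MANUAL findings
    # (they require operator action, not automated remediation) and drop
    # findings from regions where the integration is not enabled.
    return [
        f
        for f in findings
        if f["status"] != "MANUAL" and f["region"] in enabled_regions
    ]


sample = [
    {"status": "FAIL", "region": "eu-west-1"},
    {"status": "MANUAL", "region": "eu-west-1"},
    {"status": "PASS", "region": "cn-north-1"},
]
print(findings_to_send(sample, {"eu-west-1"}))  # [{'status': 'FAIL', 'region': 'eu-west-1'}]
```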
@@ -1,11 +1,14 @@
 from concurrent.futures import ThreadPoolExecutor, as_completed
+from functools import wraps

 from prowler.lib.logger import logger
+from prowler.lib.ui.live_display import live_display
-from prowler.providers.aws.aws_provider import (
-    generate_regional_clients,
-    get_default_region,
-)
-from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
+from prowler.providers.aws.aws_provider_new import AwsProvider

 MAX_WORKERS = 10

@@ -19,18 +22,18 @@ class AWSService:
     - Also handles if the AWS Service is Global
     """

-    def __init__(self, service: str, audit_info: AWS_Audit_Info, global_service=False):
+    def __init__(self, service: str, provider: AwsProvider, global_service=False):
         # Audit Information
-        self.audit_info = audit_info
-        self.audited_account = audit_info.audited_account
-        self.audited_account_arn = audit_info.audited_account_arn
-        self.audited_partition = audit_info.audited_partition
-        self.audit_resources = audit_info.audit_resources
-        self.audited_checks = audit_info.audit_metadata.expected_checks
-        self.audit_config = audit_info.audit_config
+        self.provider = provider
+        self.audited_account = provider.identity.account
+        self.audited_account_arn = provider.identity.account_arn
+        self.audited_partition = provider.identity.partition
+        self.audit_resources = provider.audit_resources
+        self.audited_checks = provider.audit_metadata.expected_checks
+        self.audit_config = provider.audit_config

         # AWS Session
-        self.session = audit_info.audit_session
+        self.session = provider.session.session

         # We receive the service using __class__.__name__ or the service name in lowercase
         # e.g.: AccessAnalyzer --> we need a lowercase string, so service.lower()
@@ -38,21 +41,33 @@ class AWSService:

         # Generate Regional Clients
         if not global_service:
-            self.regional_clients = generate_regional_clients(self.service, audit_info)
+            self.regional_clients = provider.generate_regional_clients(
+                self.service, global_service
+            )

         # Get a single region and client if the service needs it (e.g. AWS Global Service)
         # We cannot include this within an else because some services needs both the regional_clients
         # and a single client like S3
-        self.region = get_default_region(self.service, audit_info)
+        self.region = provider.get_default_region(self.service)
         self.client = self.session.client(self.service, self.region)

         # Thread pool for __threading_call__
         self.thread_pool = ThreadPoolExecutor(max_workers=MAX_WORKERS)

+        self.live_display_enabled = False
+        # Progress bar to add tasks to
+        service_init_section = live_display.get_client_init_section()
+        if service_init_section:
+            # Only Flags is not set to True
+            self.task_progress_bar = service_init_section.task_progress_bar
+            self.progress_tasks = []
+            # For use in other functions
+            self.live_display_enabled = True

     def __get_session__(self):
         return self.session

-    def __threading_call__(self, call, iterator=None):
+    def __threading_call__(self, call, iterator=None, *args, **kwargs):
         # Use the provided iterator, or default to self.regional_clients
         items = iterator if iterator is not None else self.regional_clients.values()
         # Determine the total count for logging
@@ -73,13 +88,58 @@ class AWSService:
             f"{self.service.upper()} - Starting threads for '{call_name}' function to process {item_count} items..."
         )

+        if self.live_display_enabled:
+            # Setup the progress bar
+            task_id = self.task_progress_bar.add_task(
+                f"- {call_name}...", total=item_count, task_type="Service"
+            )
+            self.progress_tasks.append(task_id)

         # Submit tasks to the thread pool
-        futures = [self.thread_pool.submit(call, item) for item in items]
+        futures = [
+            self.thread_pool.submit(call, item, *args, **kwargs) for item in items
+        ]

         # Wait for all tasks to complete
         for future in as_completed(futures):
             try:
                 future.result()  # Raises exceptions from the thread, if any
+                if self.live_display_enabled:
+                    # Update the progress bar
+                    self.task_progress_bar.update(task_id, advance=1)
             except Exception:
                 # Handle exceptions if necessary
                 pass  # Replace 'pass' with any additional exception handling logic. Currently handled within the called function

+        # Make the task disappear once completed
+        # self.progress.remove_task(task_id)
+
+    @staticmethod
+    def progress_decorator(func):
+        """
+        Decorator to update the progress bar before and after a function call.
+        To be used for methods within global services, which do not make use of the __threading_call__ function
+        """
+
+        @wraps(func)
+        def wrapper(self, *args, **kwargs):
+            # Trim leading and trailing underscores from the call's name
+            func_name = func.__name__.strip("_")
+            # Add Capitalization
+            func_name = " ".join([x.capitalize() for x in func_name.split("_")])
+
+            if self.live_display_enabled:
+                task_id = self.task_progress_bar.add_task(
+                    f"- {func_name}...", total=1, task_type="Service"
+                )
+                self.progress_tasks.append(task_id)
+
+            result = func(self, *args, **kwargs)  # Execute the function
+
+            if self.live_display_enabled:
+                self.task_progress_bar.update(task_id, advance=1)
+                # self.task_progress_bar.remove_task(task_id)  # Uncomment if you want to remove the task on completion
+
+            return result
+
+        return wrapper
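The `*args`/`**kwargs` forwarding added to `__threading_call__` is what lets service classes fan a method out over an arbitrary iterator (certificates, REST APIs) instead of only regional clients. A stand-alone sketch of the pattern (hypothetical free function, without the progress-bar and logging machinery of `AWSService`):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def threading_call(call, items, *args, **kwargs):
    # Submit one task per item, forwarding any extra positional/keyword
    # arguments to every invocation, and collect results as they finish.
    results = []
    with ThreadPoolExecutor(max_workers=10) as pool:
        futures = [pool.submit(call, item, *args, **kwargs) for item in items]
        for future in as_completed(futures):
            results.append(future.result())  # re-raises worker exceptions
    return results


def describe(certificate, suffix=""):
    return f"{certificate}{suffix}"


out = threading_call(describe, ["cert-a", "cert-b"], suffix=":described")
print(sorted(out))  # ['cert-a:described', 'cert-b:described']
```

Completion order is nondeterministic, which is why callers like the ACM service mutate the items themselves rather than relying on result order.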
prowler/providers/aws/models.py (new file, 54 lines)
@@ -0,0 +1,54 @@
+from dataclasses import dataclass
+from datetime import datetime
+
+from boto3 import session
+from botocore.config import Config
+
+
+@dataclass
+class AWSOrganizationsInfo:
+    account_details_email: str
+    account_details_name: str
+    account_details_arn: str
+    account_details_org: str
+    account_details_tags: str
+
+
+@dataclass
+class AWSCredentials:
+    aws_access_key_id: str
+    aws_session_token: str
+    aws_secret_access_key: str
+    expiration: datetime
+
+
+@dataclass
+class AWSAssumeRole:
+    role_arn: str
+    session_duration: int
+    external_id: str
+    mfa_enabled: bool
+
+
+@dataclass
+class AWSAssumeRoleConfiguration:
+    assumed_role_info: AWSAssumeRole
+    assumed_role_credentials: AWSCredentials
+
+
+@dataclass
+class AWSIdentityInfo:
+    account: str
+    account_arn: str
+    user_id: str
+    partition: str
+    identity_arn: str
+    profile: str
+    profile_region: str
+    audited_regions: list
+
+
+@dataclass
+class AWSSession:
+    session: session.Session
+    session_config: Config
+    original_session: None
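The new `AwsProvider` is expected to expose these models as attributes such as `provider.identity` and `provider.session` (see the `AWSService.__init__` changes above). A sketch of populating the identity model; the dataclass is copied from the diff, the sample values are made up:

```python
from dataclasses import dataclass


@dataclass
class AWSIdentityInfo:
    account: str
    account_arn: str
    user_id: str
    partition: str
    identity_arn: str
    profile: str
    profile_region: str
    audited_regions: list


# Hypothetical values as a provider might fill them from STS GetCallerIdentity
identity = AWSIdentityInfo(
    account="123456789012",
    account_arn="arn:aws:iam::123456789012:root",
    user_id="AIDAEXAMPLE",
    partition="aws",
    identity_arn="arn:aws:iam::123456789012:user/prowler",
    profile="default",
    profile_region="eu-west-1",
    audited_regions=["eu-west-1"],
)
print(identity.partition)  # aws
```

Grouping identity, session, and assume-role state into separate dataclasses is what lets `AWSService` read `provider.identity.account` instead of reaching into a monolithic `AWS_Audit_Info`.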
@@ -1,6 +1,6 @@
-from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
 from prowler.providers.aws.services.accessanalyzer.accessanalyzer_service import (
     AccessAnalyzer,
 )
+from prowler.providers.common.common import get_global_provider

-accessanalyzer_client = AccessAnalyzer(current_audit_info)
+accessanalyzer_client = AccessAnalyzer(get_global_provider())

@@ -31,11 +31,11 @@ class accessanalyzer_enabled(Check):
             )
             if (
                 accessanalyzer_client.audit_config.get(
-                    "allowlist_non_default_regions", False
+                    "mute_non_default_regions", False
                 )
                 and not analyzer.region == accessanalyzer_client.region
             ):
-                report.status = "WARNING"
+                report.status = "MUTED"

             findings.append(report)
@@ -10,9 +10,9 @@ from prowler.providers.aws.lib.service.service import AWSService

 ################## AccessAnalyzer
 class AccessAnalyzer(AWSService):
-    def __init__(self, audit_info):
+    def __init__(self, provider):
         # Call AWSService's __init__
-        super().__init__(__class__.__name__, audit_info)
+        super().__init__(__class__.__name__, provider)
         self.analyzers = []
         self.__threading_call__(self.__list_analyzers__)
         self.__list_findings__()
@@ -1,4 +1,4 @@
-from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
 from prowler.providers.aws.services.account.account_service import Account
+from prowler.providers.common.common import get_global_provider

-account_client = Account(current_audit_info)
+account_client = Account(get_global_provider())
||||
@@ -10,6 +10,6 @@ class account_maintain_current_contact_details(Check):
|
||||
report.region = account_client.region
|
||||
report.resource_id = account_client.audited_account
|
||||
report.resource_arn = account_client.audited_account_arn
|
||||
report.status = "INFO"
|
||||
report.status_extended = "Manual check: Login to the AWS Console. Choose your account name on the top right of the window -> My Account -> Contact Information."
|
||||
report.status = "MANUAL"
|
||||
report.status_extended = "Login to the AWS Console. Choose your account name on the top right of the window -> My Account -> Contact Information."
|
||||
return [report]
|
||||
|
||||
@@ -10,6 +10,6 @@ class account_security_contact_information_is_registered(Check):
|
||||
report.region = account_client.region
|
||||
report.resource_id = account_client.audited_account
|
||||
report.resource_arn = account_client.audited_account_arn
|
||||
report.status = "INFO"
|
||||
report.status_extended = "Manual check: Login to the AWS Console. Choose your account name on the top right of the window -> My Account -> Alternate Contacts -> Security Section."
|
||||
report.status = "MANUAL"
|
||||
report.status_extended = "Login to the AWS Console. Choose your account name on the top right of the window -> My Account -> Alternate Contacts -> Security Section."
|
||||
return [report]
|
||||
|
||||
@@ -10,6 +10,6 @@ class account_security_questions_are_registered_in_the_aws_account(Check):
|
||||
report.region = account_client.region
|
||||
report.resource_id = account_client.audited_account
|
||||
report.resource_arn = account_client.audited_account_arn
|
||||
report.status = "INFO"
|
||||
report.status_extended = "Manual check: Login to the AWS Console as root. Choose your account name on the top right of the window -> My Account -> Configure Security Challenge Questions."
|
||||
report.status = "MANUAL"
|
||||
report.status_extended = "Login to the AWS Console as root. Choose your account name on the top right of the window -> My Account -> Configure Security Challenge Questions."
|
||||
return [report]
|
||||
|
||||
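These account checks now emit `MANUAL` (previously `INFO`) for findings that only an operator can verify, which `prepare_security_hub_findings` then skips. A simplified sketch of the report shape; `Report` here is a stand-in for Prowler's `Check_Report_AWS`, not the real class:

```python
from dataclasses import dataclass


@dataclass
class Report:
    status: str = ""
    status_extended: str = ""
    region: str = ""
    resource_id: str = ""


def account_contact_details_check(client_region, account_id):
    report = Report(region=client_region, resource_id=account_id)
    # MANUAL: Prowler cannot verify this via API; the operator must confirm
    # it in the AWS Console, and such findings are skipped by Security Hub.
    report.status = "MANUAL"
    report.status_extended = (
        "Login to the AWS Console. Choose your account name on the top right "
        "of the window -> My Account -> Contact Information."
    )
    return [report]


print(account_contact_details_check("us-east-1", "123456789012")[0].status)  # MANUAL
```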
@@ -9,9 +9,9 @@ from prowler.providers.aws.lib.service.service import AWSService


 class Account(AWSService):
-    def __init__(self, audit_info):
+    def __init__(self, provider):
         # Call AWSService's __init__
-        super().__init__(__class__.__name__, audit_info)
+        super().__init__(__class__.__name__, provider)
         self.number_of_contacts = 4
         self.contact_base = self.__get_contact_information__()
         self.contacts_billing = self.__get_alternate_contact__("BILLING")
@@ -1,4 +1,4 @@
-from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
 from prowler.providers.aws.services.acm.acm_service import ACM
+from prowler.providers.common.common import get_global_provider

-acm_client = ACM(current_audit_info)
+acm_client = ACM(get_global_provider())
||||
@@ -10,13 +10,13 @@ from prowler.providers.aws.lib.service.service import AWSService
|
||||
|
||||
################## ACM
|
||||
class ACM(AWSService):
|
||||
def __init__(self, audit_info):
|
||||
def __init__(self, provider):
|
||||
         # Call AWSService's __init__
-        super().__init__(__class__.__name__, audit_info)
+        super().__init__(__class__.__name__, provider)
         self.certificates = []
         self.__threading_call__(self.__list_certificates__)
-        self.__describe_certificates__()
-        self.__list_tags_for_certificate__()
+        self.__threading_call__(self.__describe_certificates__, self.certificates)
+        self.__threading_call__(self.__list_tags_for_certificate__, self.certificates)

     def __list_certificates__(self, regional_client):
         logger.info("ACM - Listing Certificates...")
@@ -59,33 +59,29 @@ class ACM(AWSService):
                 f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

-    def __describe_certificates__(self):
-        logger.info("ACM - Describing Certificates...")
+    def __describe_certificates__(self, certificate):
         try:
-            for certificate in self.certificates:
-                regional_client = self.regional_clients[certificate.region]
-                response = regional_client.describe_certificate(
-                    CertificateArn=certificate.arn
-                )["Certificate"]
-                if (
-                    response["Options"]["CertificateTransparencyLoggingPreference"]
-                    == "ENABLED"
-                ):
-                    certificate.transparency_logging = True
+            regional_client = self.regional_clients[certificate.region]
+            response = regional_client.describe_certificate(
+                CertificateArn=certificate.arn
+            )["Certificate"]
+            if (
+                response["Options"]["CertificateTransparencyLoggingPreference"]
+                == "ENABLED"
+            ):
+                certificate.transparency_logging = True
         except Exception as error:
             logger.error(
                 f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

-    def __list_tags_for_certificate__(self):
-        logger.info("ACM - List Tags...")
+    def __list_tags_for_certificate__(self, certificate):
         try:
-            for certificate in self.certificates:
-                regional_client = self.regional_clients[certificate.region]
-                response = regional_client.list_tags_for_certificate(
-                    CertificateArn=certificate.arn
-                )["Tags"]
-                certificate.tags = response
+            regional_client = self.regional_clients[certificate.region]
+            response = regional_client.list_tags_for_certificate(
+                CertificateArn=certificate.arn
+            )["Tags"]
+            certificate.tags = response
         except Exception as error:
             logger.error(
                 f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

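The recurring pattern in these hunks is that `__threading_call__` now accepts an optional iterator of resources, so each worker receives one resource instead of one regional client looping over every resource. A minimal standalone sketch of that refactor (the `Certificate` model and thread-pool sizing here are illustrative assumptions, not Prowler's actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor


class Certificate:
    """Hypothetical stand-in for Prowler's certificate model."""

    def __init__(self, arn, region):
        self.arn = arn
        self.region = region
        self.transparency_logging = False


class Service:
    def __threading_call__(self, call, iterator=None):
        # Old behavior: one worker per regional client.
        # New behavior: when an iterator of resources is given,
        # one worker per resource.
        items = iterator if iterator is not None else self.regional_clients.values()
        with ThreadPoolExecutor(max_workers=10) as executor:
            # map() consumes the iterator and surfaces worker results lazily
            list(executor.map(call, items))


class ACM(Service):
    def __init__(self, certificates):
        self.certificates = certificates
        # Per-resource fan-out, mirroring the refactored __init__ above
        self.__threading_call__(self.__describe_certificate__, self.certificates)

    def __describe_certificate__(self, certificate):
        # Each worker handles exactly one certificate; no shared loop state.
        certificate.transparency_logging = True


certs = [Certificate(f"arn:{i}", "eu-west-1") for i in range(3)]
acm = ACM(certs)
```

One consequence of this design is that slow API calls for one resource no longer serialize behind the others in the same region.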
@@ -1,4 +1,4 @@
-from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
 from prowler.providers.aws.services.apigateway.apigateway_service import APIGateway
+from prowler.providers.common.common import get_global_provider

-apigateway_client = APIGateway(current_audit_info)
+apigateway_client = APIGateway(get_global_provider())

@@ -9,15 +9,15 @@ from prowler.providers.aws.lib.service.service import AWSService

 ################## APIGateway
 class APIGateway(AWSService):
-    def __init__(self, audit_info):
+    def __init__(self, provider):
         # Call AWSService's __init__
-        super().__init__(__class__.__name__, audit_info)
+        super().__init__(__class__.__name__, provider)
         self.rest_apis = []
-        self.__threading_call__(self.__get_rest_apis__)
-        self.__get_authorizers__()
-        self.__get_rest_api__()
-        self.__get_stages__()
-        self.__get_resources__()
+        self.__threading_call__(self.__get_rest_apis__, self.rest_apis)
+        self.__threading_call__(self.__get_authorizers__, self.rest_apis)
+        self.__threading_call__(self.__get_rest_api__, self.rest_apis)
+        self.__threading_call__(self.__get_stages__, self.rest_apis)
+        self.__threading_call__(self.__get_resources__, self.rest_apis)

     def __get_rest_apis__(self, regional_client):
         logger.info("APIGateway - Getting Rest APIs...")
@@ -43,98 +43,88 @@ class APIGateway(AWSService):
                 f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

-    def __get_authorizers__(self):
-        logger.info("APIGateway - Getting Rest APIs authorizer...")
+    def __get_authorizers__(self, rest_api):
         try:
-            for rest_api in self.rest_apis:
-                regional_client = self.regional_clients[rest_api.region]
-                authorizers = regional_client.get_authorizers(restApiId=rest_api.id)[
-                    "items"
-                ]
-                if authorizers:
-                    rest_api.authorizer = True
+            regional_client = self.regional_clients[rest_api.region]
+            authorizers = regional_client.get_authorizers(restApiId=rest_api.id)[
+                "items"
+            ]
+            if authorizers:
+                rest_api.authorizer = True
         except Exception as error:
             logger.error(
                 f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

-    def __get_rest_api__(self):
-        logger.info("APIGateway - Describing Rest API...")
+    def __get_rest_api__(self, rest_api):
         try:
-            for rest_api in self.rest_apis:
-                regional_client = self.regional_clients[rest_api.region]
-                rest_api_info = regional_client.get_rest_api(restApiId=rest_api.id)
-                if rest_api_info["endpointConfiguration"]["types"] == ["PRIVATE"]:
-                    rest_api.public_endpoint = False
+            regional_client = self.regional_clients[rest_api.region]
+            rest_api_info = regional_client.get_rest_api(restApiId=rest_api.id)
+            if rest_api_info["endpointConfiguration"]["types"] == ["PRIVATE"]:
+                rest_api.public_endpoint = False
         except Exception as error:
             logger.error(
                 f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

-    def __get_stages__(self):
-        logger.info("APIGateway - Getting stages for Rest APIs...")
+    def __get_stages__(self, rest_api):
         try:
-            for rest_api in self.rest_apis:
-                regional_client = self.regional_clients[rest_api.region]
-                stages = regional_client.get_stages(restApiId=rest_api.id)
-                for stage in stages["item"]:
-                    waf = None
-                    logging = False
-                    client_certificate = False
-                    if "webAclArn" in stage:
-                        waf = stage["webAclArn"]
-                    if "methodSettings" in stage:
-                        if stage["methodSettings"]:
-                            logging = True
-                    if "clientCertificateId" in stage:
-                        client_certificate = True
-                    arn = f"arn:{self.audited_partition}:apigateway:{regional_client.region}::/restapis/{rest_api.id}/stages/{stage['stageName']}"
-                    rest_api.stages.append(
-                        Stage(
-                            name=stage["stageName"],
-                            arn=arn,
-                            logging=logging,
-                            client_certificate=client_certificate,
-                            waf=waf,
-                            tags=[stage.get("tags")],
+            regional_client = self.regional_clients[rest_api.region]
+            stages = regional_client.get_stages(restApiId=rest_api.id)
+            for stage in stages["item"]:
+                waf = None
+                logging = False
+                client_certificate = False
+                if "webAclArn" in stage:
+                    waf = stage["webAclArn"]
+                if "methodSettings" in stage:
+                    if stage["methodSettings"]:
+                        logging = True
+                if "clientCertificateId" in stage:
+                    client_certificate = True
+                arn = f"arn:{self.audited_partition}:apigateway:{regional_client.region}::/restapis/{rest_api.id}/stages/{stage['stageName']}"
+                rest_api.stages.append(
+                    Stage(
+                        name=stage["stageName"],
+                        arn=arn,
+                        logging=logging,
+                        client_certificate=client_certificate,
+                        waf=waf,
+                        tags=[stage.get("tags")],
                     )
                 )
         except Exception as error:
             logger.error(
                 f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

+    def __get_resources__(self, rest_api):
+        try:
+            regional_client = self.regional_clients[rest_api.region]
+            get_resources_paginator = regional_client.get_paginator("get_resources")
+            for page in get_resources_paginator.paginate(restApiId=rest_api.id):
+                for resource in page["items"]:
+                    id = resource["id"]
+                    resource_methods = []
+                    methods_auth = {}
+                    for resource_method in resource.get("resourceMethods", {}).keys():
+                        resource_methods.append(resource_method)
+
+                    for resource_method in resource_methods:
+                        if resource_method != "OPTIONS":
+                            method_config = regional_client.get_method(
+                                restApiId=rest_api.id,
+                                resourceId=id,
+                                httpMethod=resource_method,
+                            )
+                            auth_type = method_config["authorizationType"]
+                            methods_auth.update({resource_method: auth_type})
+
+                    rest_api.resources.append(
+                        PathResourceMethods(
+                            path=resource["path"], resource_methods=methods_auth
+                        )
+                    )
+        except Exception as error:
+            logger.error(
+                f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
+            )
+
-    def __get_resources__(self):
-        logger.info("APIGateway - Getting API resources...")
-        try:
-            for rest_api in self.rest_apis:
-                regional_client = self.regional_clients[rest_api.region]
-                get_resources_paginator = regional_client.get_paginator("get_resources")
-                for page in get_resources_paginator.paginate(restApiId=rest_api.id):
-                    for resource in page["items"]:
-                        id = resource["id"]
-                        resource_methods = []
-                        methods_auth = {}
-                        for resource_method in resource.get(
-                            "resourceMethods", {}
-                        ).keys():
-                            resource_methods.append(resource_method)
-
-                        for resource_method in resource_methods:
-                            if resource_method != "OPTIONS":
-                                method_config = regional_client.get_method(
-                                    restApiId=rest_api.id,
-                                    resourceId=id,
-                                    httpMethod=resource_method,
-                                )
-                                auth_type = method_config["authorizationType"]
-                                methods_auth.update({resource_method: auth_type})
-
-                        rest_api.resources.append(
-                            PathResourceMethods(
-                                path=resource["path"], resource_methods=methods_auth
-                            )
-                        )
-
-        except Exception as error:
-            logger.error(

@@ -1,6 +1,6 @@
-from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
 from prowler.providers.aws.services.apigatewayv2.apigatewayv2_service import (
     ApiGatewayV2,
 )
+from prowler.providers.common.common import get_global_provider

-apigatewayv2_client = ApiGatewayV2(current_audit_info)
+apigatewayv2_client = ApiGatewayV2(get_global_provider())

@@ -9,13 +9,13 @@ from prowler.providers.aws.lib.service.service import AWSService

 ################## ApiGatewayV2
 class ApiGatewayV2(AWSService):
-    def __init__(self, audit_info):
+    def __init__(self, provider):
         # Call AWSService's __init__
-        super().__init__(__class__.__name__, audit_info)
+        super().__init__(__class__.__name__, provider)
         self.apis = []
-        self.__threading_call__(self.__get_apis__)
-        self.__get_authorizers__()
-        self.__get_stages__()
+        self.__threading_call__(self.__get_apis__, self.apis)
+        self.__threading_call__(self.__get_authorizers__, self.apis)
+        self.__threading_call__(self.__get_stages__, self.apis)

     def __get_apis__(self, regional_client):
         logger.info("APIGatewayv2 - Getting APIs...")
@@ -41,36 +41,32 @@ class ApiGatewayV2(AWSService):
                 f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

-    def __get_authorizers__(self):
-        logger.info("APIGatewayv2 - Getting APIs authorizer...")
+    def __get_authorizers__(self, api):
         try:
-            for api in self.apis:
-                regional_client = self.regional_clients[api.region]
-                authorizers = regional_client.get_authorizers(ApiId=api.id)["Items"]
-                if authorizers:
-                    api.authorizer = True
+            regional_client = self.regional_clients[api.region]
+            authorizers = regional_client.get_authorizers(ApiId=api.id)["Items"]
+            if authorizers:
+                api.authorizer = True
         except Exception as error:
             logger.error(
                 f"{error.__class__.__name__}:{error.__traceback__.tb_lineno} -- {error}"
             )

-    def __get_stages__(self):
-        logger.info("APIGatewayv2 - Getting stages for APIs...")
+    def __get_stages__(self, api):
         try:
-            for api in self.apis:
-                regional_client = self.regional_clients[api.region]
-                stages = regional_client.get_stages(ApiId=api.id)
-                for stage in stages["Items"]:
-                    logging = False
-                    if "AccessLogSettings" in stage:
-                        logging = True
-                    api.stages.append(
-                        Stage(
-                            name=stage["StageName"],
-                            logging=logging,
-                            tags=[stage.get("Tags")],
-                        )
+            regional_client = self.regional_clients[api.region]
+            stages = regional_client.get_stages(ApiId=api.id)
+            for stage in stages["Items"]:
+                logging = False
+                if "AccessLogSettings" in stage:
+                    logging = True
+                api.stages.append(
+                    Stage(
+                        name=stage["StageName"],
+                        logging=logging,
+                        tags=[stage.get("Tags")],
+                    )
                 )
         except Exception as error:
             logger.error(
                 f"{error.__class__.__name__}:{error.__traceback__.tb_lineno} -- {error}"
             )

@@ -1,4 +1,4 @@
-from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
 from prowler.providers.aws.services.appstream.appstream_service import AppStream
+from prowler.providers.common.common import get_global_provider

-appstream_client = AppStream(current_audit_info)
+appstream_client = AppStream(get_global_provider())

@@ -9,12 +9,12 @@ from prowler.providers.aws.lib.service.service import AWSService

 ################## AppStream
 class AppStream(AWSService):
-    def __init__(self, audit_info):
+    def __init__(self, provider):
         # Call AWSService's __init__
-        super().__init__(__class__.__name__, audit_info)
+        super().__init__(__class__.__name__, provider)
         self.fleets = []
         self.__threading_call__(self.__describe_fleets__)
-        self.__list_tags_for_resource__()
+        self.__threading_call__(self.__list_tags_for_resource__, self.fleets)

     def __describe_fleets__(self, regional_client):
         logger.info("AppStream - Describing Fleets...")
@@ -50,15 +50,13 @@ class AppStream(AWSService):
                 f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

-    def __list_tags_for_resource__(self):
-        logger.info("AppStream - List Tags...")
+    def __list_tags_for_resource__(self, fleet):
         try:
-            for fleet in self.fleets:
-                regional_client = self.regional_clients[fleet.region]
-                response = regional_client.list_tags_for_resource(
-                    ResourceArn=fleet.arn
-                )["Tags"]
-                fleet.tags = [response]
+            regional_client = self.regional_clients[fleet.region]
+            response = regional_client.list_tags_for_resource(ResourceArn=fleet.arn)[
+                "Tags"
+            ]
+            fleet.tags = [response]
         except Exception as error:
             logger.error(
                 f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

@@ -1,4 +1,4 @@
-from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
 from prowler.providers.aws.services.athena.athena_service import Athena
+from prowler.providers.common.common import get_global_provider

-athena_client = Athena(current_audit_info)
+athena_client = Athena(get_global_provider())

@@ -9,14 +9,18 @@ from prowler.providers.aws.lib.service.service import AWSService

 ################## Athena
 class Athena(AWSService):
-    def __init__(self, audit_info):
+    def __init__(self, provider):
         # Call AWSService's __init__
-        super().__init__(__class__.__name__, audit_info)
+        super().__init__(__class__.__name__, provider)
         self.workgroups = {}
         self.__threading_call__(self.__list_workgroups__)
-        self.__get_workgroups__()
-        self.__list_query_executions__()
-        self.__list_tags_for_resource__()
+        self.__threading_call__(self.__get_workgroups__, self.workgroups.values())
+        self.__threading_call__(
+            self.__list_query_executions__, self.workgroups.values()
+        )
+        self.__threading_call__(
+            self.__list_tags_for_resource__, self.workgroups.values()
+        )

     def __list_workgroups__(self, regional_client):
         logger.info("Athena - Listing WorkGroups...")
@@ -44,86 +48,65 @@ class Athena(AWSService):
                 f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

-    def __get_workgroups__(self):
-        logger.info("Athena - Getting WorkGroups...")
+    def __get_workgroups__(self, workgroup):
         try:
-            for workgroup in self.workgroups.values():
-                try:
-                    wg = self.regional_clients[workgroup.region].get_work_group(
-                        WorkGroup=workgroup.name
-                    )
+            wg = self.regional_clients[workgroup.region].get_work_group(
+                WorkGroup=workgroup.name
+            )

-                    wg_configuration = wg.get("WorkGroup").get("Configuration")
-                    self.workgroups[
-                        workgroup.arn
-                    ].enforce_workgroup_configuration = wg_configuration.get(
-                        "EnforceWorkGroupConfiguration", False
-                    )
+            wg_configuration = wg.get("WorkGroup").get("Configuration")
+            self.workgroups[
+                workgroup.arn
+            ].enforce_workgroup_configuration = wg_configuration.get(
+                "EnforceWorkGroupConfiguration", False
+            )

-                    # We include an empty EncryptionConfiguration to handle if the workgroup does not have encryption configured
-                    encryption = (
-                        wg_configuration.get(
-                            "ResultConfiguration",
-                            {"EncryptionConfiguration": {}},
-                        )
-                        .get(
-                            "EncryptionConfiguration",
-                            {"EncryptionOption": ""},
-                        )
-                        .get("EncryptionOption")
-                    )
+            # We include an empty EncryptionConfiguration to handle if the workgroup does not have encryption configured
+            encryption = (
+                wg_configuration.get(
+                    "ResultConfiguration",
+                    {"EncryptionConfiguration": {}},
+                )
+                .get(
+                    "EncryptionConfiguration",
+                    {"EncryptionOption": ""},
+                )
+                .get("EncryptionOption")
+            )

-                    if encryption in ["SSE_S3", "SSE_KMS", "CSE_KMS"]:
-                        encryption_configuration = EncryptionConfiguration(
-                            encryption_option=encryption, encrypted=True
-                        )
-                        self.workgroups[
-                            workgroup.arn
-                        ].encryption_configuration = encryption_configuration
-                except Exception as error:
-                    logger.error(
-                        f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
-                    )
+            if encryption in ["SSE_S3", "SSE_KMS", "CSE_KMS"]:
+                encryption_configuration = EncryptionConfiguration(
+                    encryption_option=encryption, encrypted=True
+                )
+                self.workgroups[
+                    workgroup.arn
+                ].encryption_configuration = encryption_configuration

         except Exception as error:
             logger.error(
-                f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
+                f"{workgroup.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

-    def __list_query_executions__(self):
-        logger.info("Athena - Listing Queries...")
+    def __list_query_executions__(self, workgroup):
         try:
-            for workgroup in self.workgroups.values():
-                try:
-                    queries = (
-                        self.regional_clients[workgroup.region]
-                        .list_query_executions(WorkGroup=workgroup.name)
-                        .get("QueryExecutionIds", [])
-                    )
-                    if queries:
-                        workgroup.queries = True
-                except Exception as error:
-                    logger.error(
-                        f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
-                    )
+            queries = (
+                self.regional_clients[workgroup.region]
+                .list_query_executions(WorkGroup=workgroup.name)
+                .get("QueryExecutionIds", [])
+            )
+            if queries:
+                workgroup.queries = True
         except Exception as error:
             logger.error(
-                f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
+                f"{workgroup.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

-    def __list_tags_for_resource__(self):
-        logger.info("Athena - Listing Tags...")
+    def __list_tags_for_resource__(self, workgroup):
         try:
-            for workgroup in self.workgroups.values():
-                try:
-                    regional_client = self.regional_clients[workgroup.region]
-                    workgroup.tags = regional_client.list_tags_for_resource(
-                        ResourceARN=workgroup.arn
-                    )["Tags"]
-                except Exception as error:
-                    logger.error(
-                        f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
-                    )
+            regional_client = self.regional_clients[workgroup.region]
+            workgroup.tags = regional_client.list_tags_for_resource(
+                ResourceARN=workgroup.arn
+            )["Tags"]
         except Exception as error:
             logger.error(
-                f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
+                f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

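The chained `.get()` calls in `__get_workgroups__` above supply a fallback dictionary at every level, so a workgroup with no result encryption never raises a `KeyError`. The same lookup can be sketched standalone (the dictionary shapes are taken from the diff itself, not independently from the Athena API reference):

```python
def encryption_option(wg_configuration: dict):
    # Each level provides a fallback dict so missing keys never raise.
    # When nothing is configured the result is None or "", which the
    # caller then tests against the known encryption option values
    # ("SSE_S3", "SSE_KMS", "CSE_KMS").
    return (
        wg_configuration.get(
            "ResultConfiguration",
            {"EncryptionConfiguration": {}},
        )
        .get(
            "EncryptionConfiguration",
            {"EncryptionOption": ""},
        )
        .get("EncryptionOption")
    )


# No ResultConfiguration at all: falls through every default.
no_config = encryption_option({})

# ResultConfiguration present but without an EncryptionConfiguration.
empty_result = encryption_option({"ResultConfiguration": {}})

# Fully configured workgroup.
kms = encryption_option(
    {
        "ResultConfiguration": {
            "EncryptionConfiguration": {"EncryptionOption": "SSE_KMS"}
        }
    }
)
```

Note the asymmetry: a completely absent `ResultConfiguration` yields `None` (the final `.get` has no default), while a present-but-empty one yields `""`; both fail the membership test against the encrypted options, which is all the check needs.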
@@ -12,7 +12,7 @@ class athena_workgroup_encryption(Check):
             # Only check for enabled and used workgroups (has recent queries)
             if (
                 workgroup.state == "ENABLED" and workgroup.queries
-            ) or not athena_client.audit_info.ignore_unused_services:
+            ) or not athena_client.provider.ignore_unused_services:
                 report = Check_Report_AWS(self.metadata())
                 report.region = workgroup.region
                 report.resource_id = workgroup.name

@@ -12,7 +12,7 @@ class athena_workgroup_enforce_configuration(Check):
             # Only check for enabled and used workgroups (has recent queries)
             if (
                 workgroup.state == "ENABLED" and workgroup.queries
-            ) or not athena_client.audit_info.ignore_unused_services:
+            ) or not athena_client.provider.ignore_unused_services:
                 report = Check_Report_AWS(self.metadata())
                 report.region = workgroup.region
                 report.resource_id = workgroup.name

@@ -1,4 +1,4 @@
-from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
 from prowler.providers.aws.services.autoscaling.autoscaling_service import AutoScaling
+from prowler.providers.common.common import get_global_provider

-autoscaling_client = AutoScaling(current_audit_info)
+autoscaling_client = AutoScaling(get_global_provider())

@@ -7,9 +7,9 @@ from prowler.providers.aws.lib.service.service import AWSService

 ################## AutoScaling
 class AutoScaling(AWSService):
-    def __init__(self, audit_info):
+    def __init__(self, provider):
         # Call AWSService's __init__
-        super().__init__(__class__.__name__, audit_info)
+        super().__init__(__class__.__name__, provider)
         self.launch_configurations = []
         self.__threading_call__(self.__describe_launch_configurations__)
         self.groups = []

@@ -1,4 +1,4 @@
-from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
 from prowler.providers.aws.services.awslambda.awslambda_service import Lambda
+from prowler.providers.common.common import get_global_provider

-awslambda_client = Lambda(current_audit_info)
+awslambda_client = Lambda(get_global_provider())

@@ -8,7 +8,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
 class awslambda_function_invoke_api_operations_cloudtrail_logging_enabled(Check):
     def execute(self):
         findings = []
-        for function in awslambda_client.functions.values():
+        functions = awslambda_client.functions.values()
+        self.start_task("Processing functions...", len(functions))
+        for function in functions:
             report = Check_Report_AWS(self.metadata())
             report.region = function.region
             report.resource_id = function.name
@@ -49,5 +51,7 @@ class awslambda_function_invoke_api_operations_cloudtrail_logging_enabled(Check)
                     report.status_extended = f"Lambda function {function.name} is recorded by CloudTrail trail {trail.name}."
                     break
             findings.append(report)
+            self.increment_task_progress()

+        self.update_title_with_findings(findings)
         return findings

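The check hunks in this branch all follow the same progress-reporting shape: `start_task` before the loop, `increment_task_progress` per resource, `update_title_with_findings` at the end. The implementation of those methods is not part of this diff, so the sketch below is a hypothetical minimal version of that API, just to make the calling pattern concrete:

```python
class ProgressCheck:
    """Hypothetical stand-in for the progress API the checks call
    (start_task / increment_task_progress / update_title_with_findings);
    the real Prowler implementation is not shown in this diff."""

    def start_task(self, description, total):
        self.description = description
        self.total = total
        self.completed = 0

    def increment_task_progress(self):
        self.completed += 1

    def update_title_with_findings(self, findings):
        self.title = f"{self.description} {len(findings)}/{self.total} findings"


class DemoCheck(ProgressCheck):
    def execute(self, resources):
        # Same shape as the refactored checks: announce the task,
        # tick once per resource, summarize at the end.
        findings = []
        self.start_task("Processing functions...", len(resources))
        for resource in resources:
            findings.append(f"finding for {resource}")
            self.increment_task_progress()
        self.update_title_with_findings(findings)
        return findings


check = DemoCheck()
results = check.execute(["fn-a", "fn-b"])
```

Keeping the increment inside the loop body (rather than after `findings.append` alone) means the progress counter stays accurate even for resources that are skipped by an inner `continue` only if the increment is reached, which is why some hunks place it at the very end of the loop.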
@@ -12,6 +12,8 @@ class awslambda_function_no_secrets_in_code(Check):
     def execute(self):
         findings = []
         if awslambda_client.functions:
+            functions = awslambda_client.functions.values()
+            self.start_task("Processing functions...", len(functions))
             for function, function_code in awslambda_client.__get_function_code__():
                 if function_code:
                     report = Check_Report_AWS(self.metadata())
@@ -20,6 +22,40 @@ class awslambda_function_no_secrets_in_code(Check):
                     report.resource_arn = function.arn
                     report.resource_tags = function.tags

+                    report.status = "PASS"
+                    report.status_extended = (
+                        f"No secrets found in Lambda function {function.name} code."
+                    )
+                    with tempfile.TemporaryDirectory() as tmp_dir_name:
+                        function_code.code_zip.extractall(tmp_dir_name)
+                        # List all files
+                        files_in_zip = next(os.walk(tmp_dir_name))[2]
+                        secrets_findings = []
+                        for file in files_in_zip:
+                            secrets = SecretsCollection()
+                            with default_settings():
+                                secrets.scan_file(f"{tmp_dir_name}/{file}")
+                            detect_secrets_output = secrets.json()
+                            if detect_secrets_output:
+                                for (
+                                    file_name
+                                ) in (
+                                    detect_secrets_output.keys()
+                                ):  # Appears that only 1 file is being scanned at a time, so could rework this
+                                    output_file_name = file_name.replace(
+                                        f"{tmp_dir_name}/", ""
+                                    )
+                                    secrets_string = ", ".join(
+                                        [
+                                            f"{secret['type']} on line {secret['line_number']}"
+                                            for secret in detect_secrets_output[
+                                                file_name
+                                            ]
+                                        ]
+                                    )
+                                    secrets_findings.append(
+                                        f"{output_file_name}: {secrets_string}"
+                                    )
                     report.status = "PASS"
                     report.status_extended = (
                         f"No secrets found in Lambda function {function.name} code."
@@ -61,5 +97,6 @@ class awslambda_function_no_secrets_in_code(Check):
                         report.status_extended = f"Potential {'secrets' if len(secrets_findings) > 1 else 'secret'} found in Lambda function {function.name} code -> {final_output_string}."

                     findings.append(report)
+                    self.increment_task_progress()
+        self.update_title_with_findings(findings)
         return findings

@@ -12,6 +12,8 @@ from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_
 class awslambda_function_no_secrets_in_variables(Check):
     def execute(self):
         findings = []
+        functions = awslambda_client.functions.values()
+        self.start_task("Processing functions...", len(functions))
         for function in awslambda_client.functions.values():
             report = Check_Report_AWS(self.metadata())
             report.region = function.region
@@ -52,5 +54,6 @@ class awslambda_function_no_secrets_in_variables(Check):
                 os.remove(temp_env_data_file.name)

             findings.append(report)
-
+            self.increment_task_progress()
+        self.update_title_with_findings(findings)
         return findings

@@ -5,7 +5,9 @@ from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_
 class awslambda_function_not_publicly_accessible(Check):
     def execute(self):
         findings = []
-        for function in awslambda_client.functions.values():
+        functions = awslambda_client.functions.values()
+        self.start_task("Processing functions...", len(functions))
+        for function in functions:
             report = Check_Report_AWS(self.metadata())
             report.region = function.region
             report.resource_id = function.name
@@ -39,5 +41,6 @@ class awslambda_function_not_publicly_accessible(Check):
                 report.status_extended = f"Lambda function {function.name} has a policy resource-based policy with public access."

             findings.append(report)
-
+            self.increment_task_progress()
+        self.update_title_with_findings(findings)
         return findings

@@ -5,6 +5,8 @@ from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_
 class awslambda_function_url_cors_policy(Check):
     def execute(self):
         findings = []
+        functions = awslambda_client.functions.values()
+        self.start_task("Processing functions...", len(functions))
         for function in awslambda_client.functions.values():
             report = Check_Report_AWS(self.metadata())
             report.region = function.region
@@ -20,5 +22,6 @@ class awslambda_function_url_cors_policy(Check):
                 report.status_extended = f"Lambda function {function.name} does not have a wide CORS configuration."

             findings.append(report)
-
+            self.increment_task_progress()
+        self.update_title_with_findings(findings)
         return findings

@@ -6,6 +6,8 @@ from prowler.providers.aws.services.awslambda.awslambda_service import AuthType
 class awslambda_function_url_public(Check):
     def execute(self):
         findings = []
+        functions = awslambda_client.functions.values()
+        self.start_task("Processing functions...", len(functions))
         for function in awslambda_client.functions.values():
             report = Check_Report_AWS(self.metadata())
             report.region = function.region
@@ -21,5 +23,6 @@ class awslambda_function_url_public(Check):
                 report.status_extended = f"Lambda function {function.name} has a publicly accessible function URL."

             findings.append(report)
-
+            self.increment_task_progress()
+        self.update_title_with_findings(findings)
         return findings

@@ -5,6 +5,8 @@ from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_
 class awslambda_function_using_supported_runtimes(Check):
     def execute(self):
         findings = []
+        functions = awslambda_client.functions.values()
+        self.start_task("Processing functions...", len(functions))
         for function in awslambda_client.functions.values():
             if function.runtime:
                 report = Check_Report_AWS(self.metadata())
@@ -23,5 +25,7 @@ class awslambda_function_using_supported_runtimes(Check):
                 report.status_extended = f"Lambda function {function.name} is using {function.runtime} which is supported."

                 findings.append(report)
+                self.increment_task_progress()

+        self.update_title_with_findings(findings)
         return findings

@@ -16,17 +16,27 @@ from prowler.providers.aws.lib.service.service import AWSService

 ################## Lambda
 class Lambda(AWSService):
-    def __init__(self, audit_info):
+    def __init__(self, provider):
         # Call AWSService's __init__
-        super().__init__(__class__.__name__, audit_info)
+        super().__init__(__class__.__name__, provider)
         self.functions = {}
         self.__threading_call__(self.__list_functions__)
-        self.__list_tags_for_resource__()
-        self.__threading_call__(self.__get_policy__)
-        self.__threading_call__(self.__get_function_url_config__)
+        self.__threading_call__(
+            self.__list_tags_for_resource__, self.functions.values()
+        )
+
+        # We only want to retrieve the Lambda code if the
+        # awslambda_function_no_secrets_in_code check is set
+        if (
+            "awslambda_function_no_secrets_in_code"
+            in provider.audit_metadata.expected_checks
+        ):
+            self.__threading_call__(self.__get_function_code__, self.functions.values())
+
+        self.__threading_call__(self.__get_policy__, self.functions.values())
+        self.__threading_call__(
+            self.__get_function_url_config__, self.functions.values()
+        )

     def __list_functions__(self, regional_client):
         logger.info("Lambda - Listing Functions...")
         try:
             list_functions_paginator = regional_client.get_paginator("list_functions")
             for page in list_functions_paginator.paginate():
@@ -54,7 +64,6 @@ class Lambda(AWSService):
                         "Variables"
                     )
                     self.functions[lambda_arn].environment = lambda_environment
-
         except Exception as error:
             logger.error(
                 f"{regional_client.region} --"
@@ -98,26 +107,20 @@ class Lambda(AWSService):
             )
         except Exception as error:
             logger.error(
-                f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
+                f"{regional_client.region} --"
+                f" {error.__class__.__name__}[{error.__traceback__.tb_lineno}]:"
+                f" {error}"
             )
             raise

-    def __get_policy__(self, regional_client):
-        logger.info("Lambda - Getting Policy...")
+    def __get_policy__(self, function):
         try:
-            for function in self.functions.values():
-                if function.region == regional_client.region:
-                    try:
-                        function_policy = regional_client.get_policy(
-                            FunctionName=function.name
-                        )
-                        self.functions[function.arn].policy = json.loads(
-                            function_policy["Policy"]
-                        )
-                    except ClientError as e:
-                        if e.response["Error"]["Code"] == "ResourceNotFoundException":
-                            self.functions[function.arn].policy = {}
+            regional_client = self.regional_clients[function.region]
+            function_policy = regional_client.get_policy(FunctionName=function.name)
+            self.functions[function.arn].policy = json.loads(function_policy["Policy"])
+        except ClientError as e:
+            if e.response["Error"]["Code"] == "ResourceNotFoundException":
+                self.functions[function.arn].policy = {}
         except Exception as error:
             logger.error(
                 f"{regional_client.region} --"
@@ -125,28 +128,24 @@ class Lambda(AWSService):
                 f" {error}"
             )

-    def __get_function_url_config__(self, regional_client):
-        logger.info("Lambda - Getting Function URL Config...")
+    def __get_function_url_config__(self, function):
         try:
-            for function in self.functions.values():
-                if function.region == regional_client.region:
-                    try:
-                        function_url_config = regional_client.get_function_url_config(
-                            FunctionName=function.name
-                        )
-                        if "Cors" in function_url_config:
-                            allow_origins = function_url_config["Cors"]["AllowOrigins"]
-                        else:
-                            allow_origins = []
-                        self.functions[function.arn].url_config = URLConfig(
-                            auth_type=function_url_config["AuthType"],
-                            url=function_url_config["FunctionUrl"],
-                            cors_config=URLConfigCORS(allow_origins=allow_origins),
-                        )
-                    except ClientError as e:
-                        if e.response["Error"]["Code"] == "ResourceNotFoundException":
-                            self.functions[function.arn].url_config = None
+            regional_client = self.regional_clients[function.region]
+            function_url_config = regional_client.get_function_url_config(
|
||||
FunctionName=function.name
|
||||
)
|
||||
if "Cors" in function_url_config:
|
||||
allow_origins = function_url_config["Cors"]["AllowOrigins"]
|
||||
else:
|
||||
allow_origins = []
|
||||
self.functions[function.arn].url_config = URLConfig(
|
||||
auth_type=function_url_config["AuthType"],
|
||||
url=function_url_config["FunctionUrl"],
|
||||
cors_config=URLConfigCORS(allow_origins=allow_origins),
|
||||
)
|
||||
except ClientError as e:
|
||||
if e.response["Error"]["Code"] == "ResourceNotFoundException":
|
||||
self.functions[function.arn].url_config = None
|
||||
except Exception as error:
|
||||
logger.error(
|
||||
f"{regional_client.region} --"
|
||||
@@ -154,18 +153,14 @@ class Lambda(AWSService):
|
||||
f" {error}"
|
||||
)
|
||||
|
||||
def __list_tags_for_resource__(self):
|
||||
logger.info("Lambda - List Tags...")
|
||||
def __list_tags_for_resource__(self, function):
|
||||
try:
|
||||
for function in self.functions.values():
|
||||
try:
|
||||
regional_client = self.regional_clients[function.region]
|
||||
response = regional_client.list_tags(Resource=function.arn)["Tags"]
|
||||
function.tags = [response]
|
||||
except ClientError as e:
|
||||
if e.response["Error"]["Code"] == "ResourceNotFoundException":
|
||||
function.tags = []
|
||||
|
||||
regional_client = self.regional_clients[function.region]
|
||||
response = regional_client.list_tags(Resource=function.arn)["Tags"]
|
||||
function.tags = [response]
|
||||
except ClientError as e:
|
||||
if e.response["Error"]["Code"] == "ResourceNotFoundException":
|
||||
function.tags = []
|
||||
except Exception as error:
|
||||
logger.error(
|
||||
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
|
||||
|
||||
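The Lambda hunks above all apply the same refactor: each helper loses its internal `for function in self.functions.values():` loop (and the per-region filter) and instead receives a single resource, with `__threading_call__` fanning the helper out over `self.functions.values()`. A minimal sketch of that dispatch pattern follows; the helper name, pool size, and the toy `Functions` class are assumptions for illustration, not Prowler's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def threading_call(call, iterator, max_workers=4):
    # Fan a per-resource helper out over every item, mirroring the
    # refactor from "loop inside the helper" to "one call per resource".
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # list() forces every submitted call to complete before returning
        list(executor.map(call, iterator))

class Functions:
    def __init__(self, names):
        self.functions = {name: {"name": name, "tags": None} for name in names}
        # Old style: the helper iterated self.functions internally.
        # New style: the helper handles exactly one function per call.
        threading_call(self._list_tags, self.functions.values())

    def _list_tags(self, function):
        function["tags"] = [{"Project": function["name"]}]

svc = Functions(["checker", "reporter"])
print(svc.functions["checker"]["tags"])  # → [{'Project': 'checker'}]
```

Threading over resources rather than regions lets slow per-resource API calls (`get_policy`, `get_function_url_config`, `list_tags`) proceed in parallel instead of serializing inside one regional worker.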
@@ -1,4 +1,4 @@
-from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
 from prowler.providers.aws.services.backup.backup_service import Backup
+from prowler.providers.common.common import get_global_provider

-backup_client = Backup(current_audit_info)
+backup_client = Backup(get_global_provider())

@@ -10,9 +10,9 @@ from prowler.providers.aws.lib.service.service import AWSService

 ################## Backup
 class Backup(AWSService):
-    def __init__(self, audit_info):
+    def __init__(self, provider):
         # Call AWSService's __init__
-        super().__init__(__class__.__name__, audit_info)
+        super().__init__(__class__.__name__, provider)
         self.backup_vaults = []
         self.__threading_call__(self.__list_backup_vaults__)
         self.backup_plans = []

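Every `*_client.py` hunk in this compare swaps the module-level `current_audit_info` import for a call to `get_global_provider()`. The accessor pattern being adopted can be sketched roughly as below; the registry internals and the `AwsProvider` stand-in are assumptions, only the `get_global_provider()` name comes from the diff:

```python
# Simplified stand-in for a process-wide provider registry.
_global_provider = None

def set_global_provider(provider):
    global _global_provider
    _global_provider = provider

def get_global_provider():
    # Service clients resolve the provider through an accessor instead of
    # importing a mutable module-level audit_info object directly.
    return _global_provider

class AwsProvider:
    def __init__(self, ignore_unused_services=False):
        self.ignore_unused_services = ignore_unused_services

set_global_provider(AwsProvider())
backup_provider = get_global_provider()
print(backup_provider.ignore_unused_services)  # → False
```

An accessor keeps a single initialization point and avoids the stale-binding problem of `from module import current_audit_info`, where reassigning the module attribute later never updates already-imported names.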
@@ -1,6 +1,6 @@
-from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
 from prowler.providers.aws.services.cloudformation.cloudformation_service import (
     CloudFormation,
 )
+from prowler.providers.common.common import get_global_provider

-cloudformation_client = CloudFormation(current_audit_info)
+cloudformation_client = CloudFormation(get_global_provider())

@@ -10,12 +10,12 @@ from prowler.providers.aws.lib.service.service import AWSService

 ################## CloudFormation
 class CloudFormation(AWSService):
-    def __init__(self, audit_info):
+    def __init__(self, provider):
         # Call AWSService's __init__
-        super().__init__(__class__.__name__, audit_info)
+        super().__init__(__class__.__name__, provider)
         self.stacks = []
         self.__threading_call__(self.__describe_stacks__)
-        self.__describe_stack__()
+        self.__threading_call__(self.__describe_stack__, self.stacks)

     def __describe_stacks__(self, regional_client):
         """Get ALL CloudFormation Stacks"""
@@ -47,33 +47,30 @@ class CloudFormation(AWSService):
             f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
         )

-    def __describe_stack__(self):
+    def __describe_stack__(self, stack):
         """Get Details for a CloudFormation Stack"""
         logger.info("CloudFormation - Describing Stack to get specific details...")
-        for stack in self.stacks:
-            try:
-                stack_details = self.regional_clients[stack.region].describe_stacks(
-                    StackName=stack.name
-                )
-                # Termination Protection
-                stack.enable_termination_protection = stack_details["Stacks"][0][
-                    "EnableTerminationProtection"
-                ]
-                # Nested Stack
-                if "RootId" in stack_details["Stacks"][0]:
-                    stack.root_nested_stack = stack_details["Stacks"][0]["RootId"]
-                    stack.is_nested_stack = True if stack.root_nested_stack != "" else False
+        try:
+            stack_details = self.regional_clients[stack.region].describe_stacks(
+                StackName=stack.name
+            )
+            # Termination Protection
+            stack.enable_termination_protection = stack_details["Stacks"][0][
+                "EnableTerminationProtection"
+            ]
+            # Nested Stack
+            if "RootId" in stack_details["Stacks"][0]:
+                stack.root_nested_stack = stack_details["Stacks"][0]["RootId"]
+                stack.is_nested_stack = True if stack.root_nested_stack != "" else False

-            except ClientError as error:
-                if error.response["Error"]["Code"] == "ValidationError":
-                    logger.warning(
-                        f"{stack.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
-                    )
-                    continue
-            except Exception as error:
-                logger.error(
+        except ClientError as error:
+            if error.response["Error"]["Code"] == "ValidationError":
+                logger.warning(
+                    f"{stack.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
+                )
+        except Exception as error:
+            logger.error(
                 f"{stack.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )


 class Stack(BaseModel):

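`__describe_stack__` keeps its two-tier error handling after the refactor: a `ClientError` whose code is `ValidationError` (the code CloudFormation returns for a missing stack) is only logged as a warning, while any other exception is logged as an error. The branch can be exercised without AWS by feeding it dictionaries shaped like botocore's `ClientError.response`; the `classify_stack_error` helper name is an assumption for illustration:

```python
def classify_stack_error(response):
    # response mimics the dict botocore exposes as ClientError.response.
    if response["Error"]["Code"] == "ValidationError":
        return "warning"  # stack does not exist: log and move on
    return "error"        # anything else: treat as a real failure

missing = {"Error": {"Code": "ValidationError", "Message": "Stack does not exist"}}
throttled = {"Error": {"Code": "Throttling", "Message": "Rate exceeded"}}
print(classify_stack_error(missing))    # → warning
print(classify_stack_error(throttled))  # → error
```

Note the refactored method no longer needs the `continue` statement: with one stack per call there is no loop to skip ahead in, so the `except` blocks simply return control to the threading dispatcher.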
@@ -1,4 +1,4 @@
-from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
 from prowler.providers.aws.services.cloudfront.cloudfront_service import CloudFront
+from prowler.providers.common.common import get_global_provider

-cloudfront_client = CloudFront(current_audit_info)
+cloudfront_client = CloudFront(get_global_provider())

@@ -10,14 +10,23 @@ from prowler.providers.aws.lib.service.service import AWSService

 ################## CloudFront
 class CloudFront(AWSService):
-    def __init__(self, audit_info):
+    def __init__(self, provider):
         # Call AWSService's __init__
-        super().__init__(__class__.__name__, audit_info, global_service=True)
+        super().__init__(__class__.__name__, provider, global_service=True)
         self.distributions = {}
         self.__list_distributions__(self.client, self.region)
-        self.__get_distribution_config__(self.client, self.distributions, self.region)
-        self.__list_tags_for_resource__(self.client, self.distributions, self.region)
+        self.__threading_call__(
+            self.__get_distribution_config__,
+            iterator=self.distributions,
+            args=(self.client, self.region),
+        )
+        self.__threading_call__(
+            self.__list_tags_for_resource__,
+            iterator=self.distributions,
+            args=(self.client, self.region),
+        )

+    @AWSService.progress_decorator
     def __list_distributions__(self, client, region) -> dict:
         logger.info("CloudFront - Listing Distributions...")
         try:
@@ -44,57 +53,52 @@ class CloudFront(AWSService):
             f"{region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
         )

-    def __get_distribution_config__(self, client, distributions, region) -> dict:
-        logger.info("CloudFront - Getting Distributions...")
+    def __get_distribution_config__(self, distribution_id, client, region) -> dict:
         try:
-            for distribution_id in distributions.keys():
-                distribution_config = client.get_distribution_config(Id=distribution_id)
-                # Global Config
-                distributions[distribution_id].logging_enabled = distribution_config[
-                    "DistributionConfig"
-                ]["Logging"]["Enabled"]
-                distributions[
-                    distribution_id
-                ].geo_restriction_type = GeoRestrictionType(
-                    distribution_config["DistributionConfig"]["Restrictions"][
-                        "GeoRestriction"
-                    ]["RestrictionType"]
-                )
-                distributions[distribution_id].web_acl_id = distribution_config[
-                    "DistributionConfig"
-                ]["WebACLId"]
+            distribution_config = client.get_distribution_config(Id=distribution_id)
+            # Global Config
+            self.distributions[distribution_id].logging_enabled = distribution_config[
+                "DistributionConfig"
+            ]["Logging"]["Enabled"]
+            self.distributions[
+                distribution_id
+            ].geo_restriction_type = GeoRestrictionType(
+                distribution_config["DistributionConfig"]["Restrictions"][
+                    "GeoRestriction"
+                ]["RestrictionType"]
+            )
+            self.distributions[distribution_id].web_acl_id = distribution_config[
+                "DistributionConfig"
+            ]["WebACLId"]

-                # Default Cache Config
-                default_cache_config = DefaultCacheConfigBehaviour(
-                    realtime_log_config_arn=distribution_config["DistributionConfig"][
-                        "DefaultCacheBehavior"
-                    ].get("RealtimeLogConfigArn"),
-                    viewer_protocol_policy=ViewerProtocolPolicy(
-                        distribution_config["DistributionConfig"][
-                            "DefaultCacheBehavior"
-                        ].get("ViewerProtocolPolicy")
-                    ),
-                    field_level_encryption_id=distribution_config["DistributionConfig"][
-                        "DefaultCacheBehavior"
-                    ].get("FieldLevelEncryptionId"),
-                )
-                distributions[
-                    distribution_id
-                ].default_cache_config = default_cache_config
+            # Default Cache Config
+            default_cache_config = DefaultCacheConfigBehaviour(
+                realtime_log_config_arn=distribution_config["DistributionConfig"][
+                    "DefaultCacheBehavior"
+                ].get("RealtimeLogConfigArn"),
+                viewer_protocol_policy=ViewerProtocolPolicy(
+                    distribution_config["DistributionConfig"][
+                        "DefaultCacheBehavior"
+                    ].get("ViewerProtocolPolicy")
+                ),
+                field_level_encryption_id=distribution_config["DistributionConfig"][
+                    "DefaultCacheBehavior"
+                ].get("FieldLevelEncryptionId"),
+            )
+            self.distributions[
+                distribution_id
+            ].default_cache_config = default_cache_config

         except Exception as error:
             logger.error(
                 f"{region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
             )

-    def __list_tags_for_resource__(self, client, distributions, region):
+    def __list_tags_for_resource__(self, distribution, client, region):
         logger.info("CloudFront - List Tags...")
         try:
-            for distribution in distributions.values():
-                response = client.list_tags_for_resource(Resource=distribution.arn)[
-                    "Tags"
-                ]
-                distribution.tags = response.get("Items")
+            response = client.list_tags_for_resource(Resource=distribution.arn)["Tags"]
+            distribution.tags = response.get("Items")
         except Exception as error:
             logger.error(
                 f"{region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

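The `__get_distribution_config__` hunk reads required keys (`Logging`, `Restrictions`, `WebACLId`) with direct indexing and optional `DefaultCacheBehavior` fields with `.get()`, so absent optional keys become `None` instead of raising `KeyError`. A dependency-free sketch of that extraction, against a hand-built response shaped like CloudFront's `GetDistributionConfig` output (the helper name is an assumption):

```python
def parse_default_cache_behavior(distribution_config):
    # Mirrors the fields the hunk extracts from DefaultCacheBehavior;
    # optional keys fall back to None via dict.get().
    behavior = distribution_config["DistributionConfig"]["DefaultCacheBehavior"]
    return {
        "realtime_log_config_arn": behavior.get("RealtimeLogConfigArn"),
        "viewer_protocol_policy": behavior.get("ViewerProtocolPolicy"),
        "field_level_encryption_id": behavior.get("FieldLevelEncryptionId"),
    }

sample = {
    "DistributionConfig": {
        "DefaultCacheBehavior": {
            "ViewerProtocolPolicy": "redirect-to-https",
            "FieldLevelEncryptionId": "",
        }
    }
}
print(parse_default_cache_behavior(sample)["viewer_protocol_policy"])  # → redirect-to-https
```

The missing `RealtimeLogConfigArn` here parses to `None` rather than failing, which is why the real method can wrap the whole body in a single broad `except`.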
@@ -27,7 +27,7 @@ class cloudtrail_bucket_requires_mfa_delete(Check):
                 report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) has MFA delete enabled."
             # check if trail bucket is a cross account bucket
             if not trail_bucket_is_in_account:
-                report.status = "INFO"
+                report.status = "MANUAL"
                 report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) is a cross-account bucket in another account out of Prowler's permissions scope, please check it manually."

             findings.append(report)

@@ -1,4 +1,4 @@
-from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
 from prowler.providers.aws.services.cloudtrail.cloudtrail_service import Cloudtrail
+from prowler.providers.common.common import get_global_provider

-cloudtrail_client = Cloudtrail(current_audit_info)
+cloudtrail_client = Cloudtrail(get_global_provider())

@@ -35,7 +35,7 @@ class cloudtrail_logs_s3_bucket_access_logging_enabled(Check):

             # check if trail is delivering logs in a cross account bucket
             if not trail_bucket_is_in_account:
-                report.status = "INFO"
+                report.status = "MANUAL"
                 report.status_extended = f"Trail {trail.name} is delivering logs in a cross-account bucket {trail_bucket} in another account out of Prowler's permissions scope, please check it manually."
             findings.append(report)

@@ -41,7 +41,7 @@ class cloudtrail_logs_s3_bucket_is_not_publicly_accessible(Check):
                     break
             # check if trail bucket is a cross account bucket
             if not trail_bucket_is_in_account:
-                report.status = "INFO"
+                report.status = "MANUAL"
                 report.status_extended = f"Trail {trail.name} bucket ({trail_bucket}) is a cross-account bucket in another account out of Prowler's permissions scope, please check it manually."
             findings.append(report)

@@ -50,7 +50,7 @@ class cloudtrail_s3_dataevents_read_enabled(Check):
             report.status_extended = f"Trail {trail.name} from home region {trail.home_region} has an advanced data event selector to record all S3 object-level API operations."
             findings.append(report)
         if not findings and (
-            s3_client.buckets or not cloudtrail_client.audit_info.ignore_unused_services
+            s3_client.buckets or not cloudtrail_client.provider.ignore_unused_services
        ):
             report = Check_Report_AWS(self.metadata())
             report.region = cloudtrail_client.region

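The cloudtrail hunks apply the status convention described at the top of this document: `MANUAL` replaces `INFO` wherever a human has to decide whether the finding is a `PASS` or a `FAIL`, here because the trail bucket lives in another account outside Prowler's permissions scope. A minimal sketch of that branch, using a hypothetical `Report` stand-in for `Check_Report_AWS` and an invented `PASS` message:

```python
class Report:
    # Hypothetical, minimal stand-in for a Prowler check report.
    def __init__(self):
        self.status = None
        self.status_extended = ""

def set_cross_account_status(report, trail_name, trail_bucket, bucket_in_account):
    if bucket_in_account:
        report.status = "PASS"
        report.status_extended = f"Trail {trail_name} bucket ({trail_bucket}) is in the audited account."
    else:
        # MANUAL (formerly INFO): used only when a manual operation is
        # required to determine whether the result is PASS or FAIL.
        report.status = "MANUAL"
        report.status_extended = (
            f"Trail {trail_name} bucket ({trail_bucket}) is a cross-account bucket "
            "in another account out of Prowler's permissions scope, please check it manually."
        )

r = Report()
set_cross_account_status(r, "main-trail", "logs-bucket", bucket_in_account=False)
print(r.status)  # → MANUAL
```

Keeping `MANUAL` out of the automated `PASS`/`FAIL` buckets prevents cross-account findings from silently skewing compliance scores.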