Compare commits


25 Commits

Author SHA1 Message Date
n4ch04
6f39fb47c3 fix(check lib): delete comments and debug 2023-08-18 12:07:06 +02:00
n4ch04
b75226433c feat(checks metdata): only load checks metadata once 2023-08-18 11:55:08 +02:00
Pepe Fagoaga
7c45cb45ae feat(ecr_repositories_scan_vulnerabilities_in_latest_image): Minimum severity is configurable (#2736) 2023-08-18 09:17:02 +02:00
Pepe Fagoaga
ac11c6729b chore(tests): Replace sure with standard assert (#2738) 2023-08-17 11:36:45 +02:00
Pepe Fagoaga
1677654dea docs(audit_config): How to use it (#2739) 2023-08-17 11:36:32 +02:00
Pepe Fagoaga
bc5a7a961b tests(check_security_group) (#2740) 2023-08-17 11:36:17 +02:00
Sergio Garcia
c10462223d chore(regions_update): Changes in regions for AWS services. (#2741)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-17 11:31:31 +02:00
vysakh-devopspace
54a9f412e8 feat(ec2): New check ec2_instance_detailed_monitoring_enabled (#2735)
Co-authored-by: Vysakh <venugopal.vysakh@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-08-16 14:31:06 +02:00
Sergio Garcia
5a107c58bb chore(regions_update): Changes in regions for AWS services. (#2737)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-16 11:42:47 +02:00
Pepe Fagoaga
8f091e7548 fix(gcp): Status extended ends with a dot (#2734) 2023-08-16 10:14:41 +02:00
Pepe Fagoaga
8cdc7b18c7 fix(test-vpc): use the right import paths (#2732) 2023-08-16 09:17:18 +02:00
christiandavilakoobin
9f2e87e9fb fix(is_account_only_allowed_in_condition): Context name on conditions are case-insensitive (#2726) 2023-08-16 08:27:24 +02:00
Sergio Garcia
e119458048 chore(regions_update): Changes in regions for AWS services. (#2733)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-15 16:25:17 +02:00
dependabot[bot]
c2983faf1d build(deps): bump azure-identity from 1.13.0 to 1.14.0 (#2731)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-15 10:34:56 +02:00
dependabot[bot]
a09855207e build(deps-dev): bump coverage from 7.2.7 to 7.3.0 (#2730)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-15 09:50:18 +02:00
Pepe Fagoaga
1e1859ba6f docs(style): Add more details (#2724) 2023-08-15 09:26:48 +02:00
dependabot[bot]
a3937e48a8 build(deps): bump google-api-python-client from 2.95.0 to 2.96.0 (#2729)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-15 09:22:59 +02:00
dependabot[bot]
d2aa53a2ec build(deps): bump mkdocs-material from 9.1.20 to 9.1.21 (#2728)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-15 08:57:24 +02:00
dependabot[bot]
b0bdeea60f build(deps-dev): bump vulture from 2.7 to 2.8 (#2727) 2023-08-15 08:33:27 +02:00
Pepe Fagoaga
465e64b9ac fix(azure): Status extended ends with a dot (#2725) 2023-08-14 21:48:16 +02:00
Pepe Fagoaga
fc53b28997 test(s3): Mock S3Control when used (#2722) 2023-08-14 21:48:05 +02:00
Pepe Fagoaga
72e701a4b5 fix(security): GitPython issue (#2720) 2023-08-14 21:09:12 +02:00
Pepe Fagoaga
2298d5356d test(coverage): Add Codecov (#2719) 2023-08-14 21:08:45 +02:00
Pepe Fagoaga
54137be92b test(python): 3.9, 3.10, 3.11 (#2718) 2023-08-14 21:08:29 +02:00
Sergio Garcia
7ffb12268d chore(release): update Prowler Version to 3.8.2 (#2721)
Co-authored-by: github-actions <noreply@github.com>
2023-08-14 09:18:23 +02:00
131 changed files with 2257 additions and 558 deletions


@@ -13,19 +13,19 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.9"]
+        python-version: ["3.9", "3.10", "3.11"]
     steps:
       - uses: actions/checkout@v3
       - name: Install poetry
         run: |
-        python -m pip install --upgrade pip
-        pipx install poetry
+          python -m pip install --upgrade pip
+          pipx install poetry
       - name: Set up Python ${{ matrix.python-version }}
         uses: actions/setup-python@v4
         with:
           python-version: ${{ matrix.python-version }}
-          cache: 'poetry'
+          cache: "poetry"
       - name: Install dependencies
         run: |
           poetry install
@@ -61,4 +61,8 @@ jobs:
           /tmp/hadolint Dockerfile --ignore=DL3013
       - name: Test with pytest
         run: |
-          poetry run pytest tests -n auto
+          poetry run pytest -n auto --cov=./prowler --cov-report=xml tests
+      - name: Upload coverage reports to Codecov
+        uses: codecov/codecov-action@v3
+        env:
+          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

.gitignore

@@ -46,3 +46,8 @@ junit-reports/
 # .env
 .env*
+# Coverage
+.coverage*
+.coverage
+coverage*


@@ -2,12 +2,19 @@
 ##@ Testing
 test: ## Test with pytest
-	pytest -n auto -vvv -s -x
+	rm -rf .coverage && \
+	pytest -n auto -vvv -s --cov=./prowler --cov-report=xml tests
 coverage: ## Show Test Coverage
 	coverage run --skip-covered -m pytest -v && \
-	coverage report -m && \
-	rm -rf .coverage
+	rm -rf .coverage && \
+	coverage report -m
+coverage-html: ## Show Test Coverage
+	rm -rf ./htmlcov && \
+	coverage html && \
+	open htmlcov/index.html
 ##@ Linting
 format: ## Format Code


@@ -2,7 +2,7 @@
 Here you can find how to create new checks for Prowler.
-**To create a check is required to have a Prowler provider service already created, so if the service is not present or the attribute you want to audit is not retrieved by the service, please refer to the [Service](./service.md) documentation.**
+**To create a check is required to have a Prowler provider service already created, so if the service is not present or the attribute you want to audit is not retrieved by the service, please refer to the [Service](./services.md) documentation.**
 ## Introduction
 To create a new check for a supported Prowler provider, you will need to create a folder with the check name inside the specific service for the selected provider.
@@ -20,7 +20,7 @@ Inside that folder, we need to create three files:
 The Prowler's check structure is very simple and following it there is nothing more to do to include a check in a provider's service because the load is done dynamically based on the paths.
 The following is the code for the `ec2_ami_public` check:
-```python
+```python title="Check Class"
 # At the top of the file we need to import the following:
 # - Check class which is in charge of the following:
 #   - Retrieve the check metadata and expose the `metadata()`
@@ -106,6 +106,7 @@ All the checks MUST fill the `report.status` and `report.status_extended` with t
 - Status Extended -- `report.status_extended`
     - MUST end in a dot `.`
+    - MUST include the service audited with the resource and a brief explanation of the result generated, e.g.: `EC2 AMI ami-0123456789 is not public.`
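The `status_extended` criteria above can be sketched as a small validation helper. This is hypothetical: Prowler only documents the convention and does not ship an `is_valid_status_extended` function.

```python
# Hypothetical helper illustrating the status_extended convention above;
# Prowler itself does not ship this function.
def is_valid_status_extended(status_extended: str, resource_id: str) -> bool:
    """True when the message ends in a dot and names the audited resource."""
    return status_extended.endswith(".") and resource_id in status_extended

print(is_valid_status_extended("EC2 AMI ami-0123456789 is not public.", "ami-0123456789"))  # True
print(is_valid_status_extended("EC2 AMI ami-0123456789 is not public", "ami-0123456789"))  # False
```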
 ### Resource ID, Name and ARN
 All the checks must fill the `report.resource_id` and `report.resource_arn` with the following criteria:
@@ -159,6 +160,38 @@ class Check(ABC, Check_Metadata_Model):
         """Execute the check's logic"""
 ```
+### Using the audit config
+Prowler has a [configuration file](../tutorials/configuration_file.md) which is used to pass certain configuration values to the checks, like the following:
+```python title="ec2_securitygroup_with_many_ingress_egress_rules.py"
+class ec2_securitygroup_with_many_ingress_egress_rules(Check):
+    def execute(self):
+        findings = []
+        # max_security_group_rules, default: 50
+        max_security_group_rules = ec2_client.audit_config.get(
+            "max_security_group_rules", 50
+        )
+        for security_group in ec2_client.security_groups:
+```
+```yaml title="config.yaml"
+# AWS Configuration
+aws:
+  # AWS EC2 Configuration
+  # aws.ec2_securitygroup_with_many_ingress_egress_rules
+  # The default value is 50 rules
+  max_security_group_rules: 50
+```
+As you can see in the above code, within the service client, in this case the `ec2_client`, there is an object called `audit_config` which is a Python dictionary containing the values read from the configuration file.
+In order to use it, you have to check first if the value is present in the configuration file. If the value is not present, you can create it in the `config.yaml` file and then read it from the check.
+> It is mandatory to always use the `dictionary.get(value, default)` syntax to set a default value in case the configuration value is not present.
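The mandatory `dictionary.get(value, default)` pattern can be sketched in isolation, with a plain dict standing in for a real service client's `audit_config`:

```python
# Minimal sketch of the mandatory `dict.get(key, default)` pattern, using a
# plain dict to stand in for a service client's `audit_config`.
audit_config = {"shodan_api_key": "XXXXXXXX"}  # values loaded from config.yaml

# The key is absent, so .get() falls back to the default instead of raising.
max_security_group_rules = audit_config.get("max_security_group_rules", 50)
print(max_security_group_rules)  # 50

# The key is present, so the configured value wins over the default.
shodan_api_key = audit_config.get("shodan_api_key", None)
print(shodan_api_key)  # XXXXXXXX
```

Indexing directly (`audit_config["max_security_group_rules"]`) would raise `KeyError` when the value is missing from the configuration file, which is why `.get()` with a default is required.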
 ## Check Metadata
 Each Prowler check has metadata associated which is stored at the same level of the check's folder in a file called `check_name.metadata.json` containing the check's metadata.


@@ -32,7 +32,7 @@ Due to the complexity and differences of each provider API we are going to use
 The following is the `<service>_service.py` file:
-```python
+```python title="Service Class"
 from datetime import datetime
 from typing import Optional
@@ -174,9 +174,15 @@ class <Service>(ServiceParentClass):
                 logger.error(
                     f"{<item>.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                 )
 ```
 ### Service Models
-# In each service class we have to create some classes using the Pydantic's Basemodel for the resources we want to audit.
+For each class object we need to model we use the Pydantic's [BaseModel](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel) to take advantage of the data validation.
+```python title="Service Model"
+# In each service class we have to create some classes using
+# the Pydantic's Basemodel for the resources we want to audit.
 class <Item>(BaseModel):
     """<Item> holds a <Service> <Item>"""
@@ -193,19 +199,18 @@ class <Item>(BaseModel):
     """<Items>[].public"""
     # We can create Optional attributes set to None by default
-    tags: Optional[list] = []
+    tags: Optional[list]
     """<Items>[].tags"""
 ```
 ### Service Objects
-In the service each list of resources should be created as a Python [dictionaries](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) since we are performing lookups all the time the Python dictionary lookup has [O(1) complexity](https://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions).
+In the service each group of resources should be created as a Python [dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries). This is because we are performing lookups all the time and the Python dictionary lookup has [O(1) complexity](https://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions).
 We MUST set as the dictionary key a unique ID, like the resource Unique ID or ARN.
 Example:
 ```python
 self.vpcs = {}
-self.vpcs["vpc-01234567890abcdef"] = VPC_Object_Class
+self.vpcs["vpc-01234567890abcdef"] = VPC_Object_Class()
 ```
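The keyed-by-unique-ID pattern above can be sketched with a stand-in class; `VPC` below is illustrative, not Prowler's real service model:

```python
# Sketch of the service-object pattern: resources stored in a dict keyed
# by a unique ID. `VPC` is a stand-in class, not Prowler's real model.
class VPC:
    def __init__(self, vpc_id: str, cidr: str):
        self.id = vpc_id
        self.cidr = cidr

vpcs = {}
vpc = VPC("vpc-01234567890abcdef", "10.0.0.0/16")
vpcs[vpc.id] = vpc  # keyed by the resource's unique ID

# A check can then resolve a resource in O(1), instead of scanning a list.
print(vpcs["vpc-01234567890abcdef"].cidr)  # 10.0.0.0/16
```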
## Service Client


@@ -54,7 +54,7 @@ When creating tests for some provider's checks we follow these guidelines trying
 ## How to run Prowler tests
-To run the Prowler test suite you need to install the testing dependencies already included in the `pyproject.toml` file. If you didn't install it yet please read the developer guide introduction [here](./developer-guide.md#get-the-code-and-install-all-dependencies).
+To run the Prowler test suite you need to install the testing dependencies already included in the `pyproject.toml` file. If you didn't install it yet please read the developer guide introduction [here](./introduction.md#get-the-code-and-install-all-dependencies).
 Then in the project's root path execute `pytest -n auto -vvv -s -x` or use the `Makefile` with `make test`.


@@ -10,9 +10,9 @@
 For **Prowler v2 Documentation**, please go [here](https://github.com/prowler-cloud/prowler/tree/2.12.0) to the branch and its README.md.
 - You are currently in the **Getting Started** section where you can find general information and requirements to help you start with the tool.
-- In the [Tutorials](tutorials/overview) section you will see how to take advantage of all the features in Prowler.
-- In the [Contact Us](contact) section you can find how to reach us out in case of technical issues.
-- In the [About](about) section you will find more information about the Prowler team and license.
+- In the [Tutorials](./tutorials/misc.md) section you will see how to take advantage of all the features in Prowler.
+- In the [Contact Us](./contact.md) section you can find how to reach us out in case of technical issues.
+- In the [About](./about.md) section you will find more information about the Prowler team and license.
 ## About Prowler
@@ -201,7 +201,7 @@ To run Prowler, you will need to specify the provider (e.g aws, gcp or azure):
 prowler <provider>
 ```
 ![Prowler Execution](img/short-display.png)
-> Running the `prowler` command without options will use your environment variable credentials, see [Requirements](getting-started/requirements/) section to review the credentials settings.
+> Running the `prowler` command without options will use your environment variable credentials, see [Requirements](./getting-started/requirements.md) section to review the credentials settings.
 If you miss the former output you can use `--verbose` but Prowler v3 is smoking fast, so you won't see much ;)


@@ -11,4 +11,4 @@
 This error is also related with a lack of system requirements. To improve performance, Prowler stores information in memory so it may need to be run in a system with more than 1GB of memory.
-See section [Logging](/tutorials/logging/) for further information or [contact us](/contact/).
+See section [Logging](./tutorials/logging.md) for further information or [contact us](./contact.md).


@@ -2,7 +2,7 @@
 Prowler v3 comes with different identifiers but we maintained the same checks that were implemented in v2. The reason for this change is because in previous versions of Prowler, check names were mostly based on CIS Benchmark for AWS. In v3 all checks are independent from any security framework and they have its own name and ID.
-If you need more information about how new compliance implementation works in Prowler v3 see [Compliance](../../compliance/) section.
+If you need more information about how new compliance implementation works in Prowler v3 see [Compliance](../compliance.md) section.
 ```
 checks_v3_to_v2_mapping = {


@@ -24,4 +24,4 @@ prowler azure --browser-auth --tenant-id "XXXXXXXX"
 prowler azure --managed-identity-auth
 ```
-To use Prowler you need to set up also the permissions required to access your resources in your Azure account, for more details refer to [Requirements](/getting-started/requirements)
+To use Prowler you need to set up also the permissions required to access your resources in your Azure account, for more details refer to [Requirements](../../getting-started/requirements.md)


@@ -9,36 +9,35 @@ Also you can input a custom configuration file using the `--config-file` argument
 ## AWS
 ### Configurable Checks
-The following list includes all the checks with configurable variables that can be changed in the mentioned configuration yaml file:
+The following list includes all the AWS checks with configurable variables that can be changed in the configuration yaml file:
-1. aws.ec2_elastic_ip_shodan
-    - shodan_api_key (String)
-- aws.ec2_securitygroup_with_many_ingress_egress_rules
-    - max_security_group_rules (Integer)
-- aws.ec2_instance_older_than_specific_days
-    - max_ec2_instance_age_in_days (Integer)
-- aws.vpc_endpoint_connections_trust_boundaries
-    - trusted_account_ids (List of Strings)
-- aws.vpc_endpoint_services_allowed_principals_trust_boundaries
-    - trusted_account_ids (List of Strings)
-- aws.cloudwatch_log_group_retention_policy_specific_days_enabled
-    - log_group_retention_days (Integer)
-- aws.appstream_fleet_session_idle_disconnect_timeout
-    - max_idle_disconnect_timeout_in_seconds (Integer)
-- aws.appstream_fleet_session_disconnect_timeout
-    - max_disconnect_timeout_in_seconds (Integer)
-- aws.appstream_fleet_maximum_session_duration
-    - max_session_duration_seconds (Integer)
-- aws.awslambda_function_using_supported_runtimes
-    - obsolete_lambda_runtimes (List of Strings)
+| Check Name | Value | Type |
+|---|---|---|
+| `ec2_elastic_ip_shodan` | `shodan_api_key` | String |
+| `ec2_securitygroup_with_many_ingress_egress_rules` | `max_security_group_rules` | Integer |
+| `ec2_instance_older_than_specific_days` | `max_ec2_instance_age_in_days` | Integer |
+| `vpc_endpoint_connections_trust_boundaries` | `trusted_account_ids` | List of Strings |
+| `vpc_endpoint_services_allowed_principals_trust_boundaries` | `trusted_account_ids` | List of Strings |
+| `cloudwatch_log_group_retention_policy_specific_days_enabled` | `log_group_retention_days` | Integer |
+| `appstream_fleet_session_idle_disconnect_timeout` | `max_idle_disconnect_timeout_in_seconds` | Integer |
+| `appstream_fleet_session_disconnect_timeout` | `max_disconnect_timeout_in_seconds` | Integer |
+| `appstream_fleet_maximum_session_duration` | `max_session_duration_seconds` | Integer |
+| `awslambda_function_using_supported_runtimes` | `obsolete_lambda_runtimes` | List of Strings |
+| `organizations_scp_check_deny_regions` | `organizations_enabled_regions` | List of Strings |
+| `organizations_delegated_administrators` | `organizations_trusted_delegated_administrators` | List of Strings |
 ## Azure
 ### Configurable Checks
 ## GCP
 ### Configurable Checks
 ## Config YAML File Structure
 > This is the new Prowler configuration file format. The old one without provider keys is still compatible just for the AWS provider.
-```yaml
+```yaml title="config.yaml"
 # AWS Configuration
 aws:
   # AWS EC2 Configuration


@@ -65,7 +65,7 @@ The custom checks folder must contain one subfolder per check, each subfolder mu
 - A `check_name.metadata.json` containing the check's metadata.
 > The check name must start with the service name followed by an underscore (e.g., ec2_instance_public_ip).
-To see more information about how to write checks see the [Developer Guide](../developer-guide/#create-a-new-check-for-a-provider).
+To see more information about how to write checks see the [Developer Guide](../developer-guide/checks.md#create-a-new-check-for-a-provider).
 > If you want to run ONLY your custom check(s), import it with -x (--checks-folder) and then run it with -c (--checks), e.g.:
 ```console


@@ -38,7 +38,7 @@ nav:
     - Logging: tutorials/logging.md
     - Allowlist: tutorials/allowlist.md
     - Pentesting: tutorials/pentesting.md
-    - Developer Guide: tutorials/developer-guide/developer-guide.md
+    - Developer Guide: developer-guide/introduction.md
     - AWS:
       - Authentication: tutorials/aws/authentication.md
       - Assume Role: tutorials/aws/role-assumption.md
@@ -57,17 +57,17 @@ nav:
     - Google Cloud:
       - Authentication: tutorials/gcp/authentication.md
   - Developer Guide:
-    - Introduction: tutorials/developer-guide/developer-guide.md
-    - Audit Info: tutorials/developer-guide/audit-info.md
-    - Services: tutorials/developer-guide/service.md
-    - Checks: tutorials/developer-guide/checks.md
-    - Documentation: tutorials/developer-guide/documentation.md
-    - Compliance: tutorials/developer-guide/security-compliance-framework.md
-    - Outputs: tutorials/developer-guide/outputs.md
-    - Integrations: tutorials/developer-guide/integrations.md
+    - Introduction: developer-guide/introduction.md
+    - Audit Info: developer-guide/audit-info.md
+    - Services: developer-guide/services.md
+    - Checks: developer-guide/checks.md
+    - Documentation: developer-guide/documentation.md
+    - Compliance: developer-guide/security-compliance-framework.md
+    - Outputs: developer-guide/outputs.md
+    - Integrations: developer-guide/integrations.md
     - Testing:
-      - Unit Tests: tutorials/developer-guide/unit-testing.md
-      - Integration Tests: tutorials/developer-guide/integration-testing.md
+      - Unit Tests: developer-guide/unit-testing.md
+      - Integration Tests: developer-guide/integration-testing.md
   - Security: security.md
   - Contact Us: contact.md
   - Troubleshooting: troubleshooting.md

poetry.lock

@@ -106,13 +106,13 @@ aio = ["aiohttp (>=3.0)"]
 [[package]]
 name = "azure-identity"
-version = "1.13.0"
+version = "1.14.0"
 description = "Microsoft Azure Identity Library for Python"
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "azure-identity-1.13.0.zip", hash = "sha256:c931c27301ffa86b07b4dcf574e29da73e3deba9ab5d1fe4f445bb6a3117e260"},
-    {file = "azure_identity-1.13.0-py3-none-any.whl", hash = "sha256:bd700cebb80cd9862098587c29d8677e819beca33c62568ced6d5a8e5e332b82"},
+    {file = "azure-identity-1.14.0.zip", hash = "sha256:72441799f8c5c89bfe21026965e266672a7c5d050c2c65119ef899dd5362e2b1"},
+    {file = "azure_identity-1.14.0-py3-none-any.whl", hash = "sha256:edabf0e010eb85760e1dd19424d5e8f97ba2c9caff73a16e7b30ccbdbcce369b"},
 ]
 [package.dependencies]
@@ -120,7 +120,6 @@ azure-core = ">=1.11.0,<2.0.0"
 cryptography = ">=2.5"
 msal = ">=1.20.0,<2.0.0"
 msal-extensions = ">=0.3.0,<2.0.0"
-six = ">=1.12.0"
 [[package]]
 name = "azure-mgmt-authorization"
@@ -569,73 +568,68 @@ files = [
 [[package]]
 name = "coverage"
-version = "7.2.7"
+version = "7.3.0"
 description = "Code coverage measurement for Python"
 optional = false
-python-versions = ">=3.7"
+python-versions = ">=3.8"
 files = [
-    {file = "coverage-7.2.7-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d39b5b4f2a66ccae8b7263ac3c8170994b65266797fb96cbbfd3fb5b23921db8"},
-    {file = "coverage-7.2.7-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6d040ef7c9859bb11dfeb056ff5b3872436e3b5e401817d87a31e1750b9ae2fb"},
-    {file = "coverage-7.2.7-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba90a9563ba44a72fda2e85302c3abc71c5589cea608ca16c22b9804262aaeb6"},
-    {file = "coverage-7.2.7-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e7d9405291c6928619403db1d10bd07888888ec1abcbd9748fdaa971d7d661b2"},
-    {file = "coverage-7.2.7-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:31563e97dae5598556600466ad9beea39fb04e0229e61c12eaa206e0aa202063"},
-    {file = "coverage-7.2.7-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:ebba1cd308ef115925421d3e6a586e655ca5a77b5bf41e02eb0e4562a111f2d1"},
-    {file = "coverage-7.2.7-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:cb017fd1b2603ef59e374ba2063f593abe0fc45f2ad9abdde5b4d83bd922a353"},
-    {file = "coverage-7.2.7-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:d62a5c7dad11015c66fbb9d881bc4caa5b12f16292f857842d9d1871595f4495"},
-    {file = "coverage-7.2.7-cp310-cp310-win32.whl", hash = "sha256:ee57190f24fba796e36bb6d3aa8a8783c643d8fa9760c89f7a98ab5455fbf818"},
-    {file = "coverage-7.2.7-cp310-cp310-win_amd64.whl", hash = "sha256:f75f7168ab25dd93110c8a8117a22450c19976afbc44234cbf71481094c1b850"},
-    {file = "coverage-7.2.7-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:06a9a2be0b5b576c3f18f1a241f0473575c4a26021b52b2a85263a00f034d51f"},
-    {file = "coverage-7.2.7-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5baa06420f837184130752b7c5ea0808762083bf3487b5038d68b012e5937dbe"},
-    {file = "coverage-7.2.7-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fdec9e8cbf13a5bf63290fc6013d216a4c7232efb51548594ca3631a7f13c3a3"},
-    {file = "coverage-7.2.7-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:52edc1a60c0d34afa421c9c37078817b2e67a392cab17d97283b64c5833f427f"},
-    {file = "coverage-7.2.7-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63426706118b7f5cf6bb6c895dc215d8a418d5952544042c8a2d9fe87fcf09cb"},
-    {file = "coverage-7.2.7-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:afb17f84d56068a7c29f5fa37bfd38d5aba69e3304af08ee94da8ed5b0865833"},
-    {file = "coverage-7.2.7-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:48c19d2159d433ccc99e729ceae7d5293fbffa0bdb94952d3579983d1c8c9d97"},
-    {file = "coverage-7.2.7-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:0e1f928eaf5469c11e886fe0885ad2bf1ec606434e79842a879277895a50942a"},
-    {file = "coverage-7.2.7-cp311-cp311-win32.whl", hash = "sha256:33d6d3ea29d5b3a1a632b3c4e4f4ecae24ef170b0b9ee493883f2df10039959a"},
-    {file = "coverage-7.2.7-cp311-cp311-win_amd64.whl", hash = "sha256:5b7540161790b2f28143191f5f8ec02fb132660ff175b7747b95dcb77ac26562"},
-    {file = "coverage-7.2.7-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:f2f67fe12b22cd130d34d0ef79206061bfb5eda52feb6ce0dba0644e20a03cf4"},
-    {file = "coverage-7.2.7-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a342242fe22407f3c17f4b499276a02b01e80f861f1682ad1d95b04018e0c0d4"},
-    {file = "coverage-7.2.7-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:171717c7cb6b453aebac9a2ef603699da237f341b38eebfee9be75d27dc38e01"},
-    {file = "coverage-7.2.7-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:49969a9f7ffa086d973d91cec8d2e31080436ef0fb4a359cae927e742abfaaa6"},
-    {file = "coverage-7.2.7-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:b46517c02ccd08092f4fa99f24c3b83d8f92f739b4657b0f146246a0ca6a831d"},
-    {file = "coverage-7.2.7-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:a3d33a6b3eae87ceaefa91ffdc130b5e8536182cd6dfdbfc1aa56b46ff8c86de"},
-    {file = "coverage-7.2.7-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:976b9c42fb2a43ebf304fa7d4a310e5f16cc99992f33eced91ef6f908bd8f33d"},
-    {file = "coverage-7.2.7-cp312-cp312-win32.whl", hash = "sha256:8de8bb0e5ad103888d65abef8bca41ab93721647590a3f740100cd65c3b00511"},
-    {file = "coverage-7.2.7-cp312-cp312-win_amd64.whl", hash = "sha256:9e31cb64d7de6b6f09702bb27c02d1904b3aebfca610c12772452c4e6c21a0d3"},
-    {file = "coverage-7.2.7-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:58c2ccc2f00ecb51253cbe5d8d7122a34590fac9646a960d1430d5b15321d95f"},
-    {file = "coverage-7.2.7-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d22656368f0e6189e24722214ed8d66b8022db19d182927b9a248a2a8a2f67eb"},
-    {file = "coverage-7.2.7-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a895fcc7b15c3fc72beb43cdcbdf0ddb7d2ebc959edac9cef390b0d14f39f8a9"},
-    {file = "coverage-7.2.7-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e84606b74eb7de6ff581a7915e2dab7a28a0517fbe1c9239eb227e1354064dcd"},
-    {file = "coverage-7.2.7-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:0a5f9e1dbd7fbe30196578ca36f3fba75376fb99888c395c5880b355e2875f8a"},
-    {file = "coverage-7.2.7-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:419bfd2caae268623dd469eff96d510a920c90928b60f2073d79f8fe2bbc5959"},
-    {file = "coverage-7.2.7-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2aee274c46590717f38ae5e4650988d1af340fe06167546cc32fe2f58ed05b02"},
-    {file = "coverage-7.2.7-cp37-cp37m-win32.whl", hash = "sha256:61b9a528fb348373c433e8966535074b802c7a5d7f23c4f421e6c6e2f1697a6f"},
-    {file = "coverage-7.2.7-cp37-cp37m-win_amd64.whl", hash = "sha256:b1c546aca0ca4d028901d825015dc8e4d56aac4b541877690eb76490f1dc8ed0"},
-    {file = "coverage-7.2.7-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:54b896376ab563bd38453cecb813c295cf347cf5906e8b41d340b0321a5433e5"},
-    {file = "coverage-7.2.7-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:3d376df58cc111dc8e21e3b6e24606b5bb5dee6024f46a5abca99124b2229ef5"},
-    {file = "coverage-7.2.7-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e330fc79bd7207e46c7d7fd2bb4af2963f5f635703925543a70b99574b0fea9"},
-    {file = "coverage-7.2.7-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e9d683426464e4a252bf70c3498756055016f99ddaec3774bf368e76bbe02b6"},
-    {file = "coverage-7.2.7-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d13c64ee2d33eccf7437961b6ea7ad8673e2be040b4f7fd4fd4d4d28d9ccb1e"},
-    {file = "coverage-7.2.7-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:b7aa5f8a41217360e600da646004f878250a0d6738bcdc11a0a39928d7dc2050"},
-    {file = "coverage-7.2.7-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:8fa03bce9bfbeeef9f3b160a8bed39a221d82308b4152b27d82d8daa7041fee5"},
-    {file = "coverage-7.2.7-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:245167dd26180ab4c91d5e1496a30be4cd721a5cf2abf52974f965f10f11419f"},
-    {file = "coverage-7.2.7-cp38-cp38-win32.whl", hash = "sha256:d2c2db7fd82e9b72937969bceac4d6ca89660db0a0967614ce2481e81a0b771e"},
-    {file = "coverage-7.2.7-cp38-cp38-win_amd64.whl", hash = "sha256:2e07b54284e381531c87f785f613b833569c14ecacdcb85d56b25c4622c16c3c"},
-    {file = "coverage-7.2.7-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:537891ae8ce59ef63d0123f7ac9e2ae0fc8b72c7ccbe5296fec45fd68967b6c9"},
-    {file = "coverage-7.2.7-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:06fb182e69f33f6cd1d39a6c597294cff3143554b64b9825d1dc69d18cc2fff2"},
-    {file = "coverage-7.2.7-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:201e7389591af40950a6480bd9edfa8ed04346ff80002cec1a66cac4549c1ad7"},
-    {file = "coverage-7.2.7-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f6951407391b639504e3b3be51b7ba5f3528adbf1a8ac3302b687ecababf929e"},
-    {file = "coverage-7.2.7-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f48351d66575f535669306aa7d6d6f71bc43372473b54a832222803eb956fd1"},
-    {file = "coverage-7.2.7-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b29019c76039dc3c0fd815c41392a044ce555d9bcdd38b0fb60fb4cd8e475ba9"},
-    {file = "coverage-7.2.7-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:81c13a1fc7468c40f13420732805a4c38a105d89848b7c10af65a90beff25250"},
-    {file = "coverage-7.2.7-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:975d70ab7e3c80a3fe86001d8751f6778905ec723f5b110aed1e450da9d4b7f2"},
-    {file = "coverage-7.2.7-cp39-cp39-win32.whl", hash = "sha256:7ee7d9d4822c8acc74a5e26c50604dff824710bc8de424904c0982e25c39c6cb"},
-    {file = "coverage-7.2.7-cp39-cp39-win_amd64.whl", hash = "sha256:eb393e5ebc85245347950143969b241d08b52b88a3dc39479822e073a1a8eb27"},
-    {file = "coverage-7.2.7-pp37.pp38.pp39-none-any.whl", hash = "sha256:b7b4c971f05e6ae490fef852c218b0e79d4e52f79ef0c8475566584a8fb3e01d"},
-    {file = "coverage-7.2.7.tar.gz", hash = "sha256:924d94291ca674905fe9481f12294eb11f2d3d3fd1adb20314ba89e94f44ed59"},
+    {file = "coverage-7.3.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:db76a1bcb51f02b2007adacbed4c88b6dee75342c37b05d1822815eed19edee5"},
+    {file = "coverage-7.3.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c02cfa6c36144ab334d556989406837336c1d05215a9bdf44c0bc1d1ac1cb637"},
+    {file = "coverage-7.3.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:477c9430ad5d1b80b07f3c12f7120eef40bfbf849e9e7859e53b9c93b922d2af"},
+    {file = "coverage-7.3.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ce2ee86ca75f9f96072295c5ebb4ef2a43cecf2870b0ca5e7a1cbdd929cf67e1"},
+    {file = "coverage-7.3.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:68d8a0426b49c053013e631c0cdc09b952d857efa8f68121746b339912d27a12"},
+    {file = "coverage-7.3.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b3eb0c93e2ea6445b2173da48cb548364f8f65bf68f3d090404080d338e3a689"},
+    {file = "coverage-7.3.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:90b6e2f0f66750c5a1178ffa9370dec6c508a8ca5265c42fbad3ccac210a7977"},
+    {file = "coverage-7.3.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:96d7d761aea65b291a98c84e1250cd57b5b51726821a6f2f8df65db89363be51"},
+    {file = "coverage-7.3.0-cp310-cp310-win32.whl", hash = "sha256:63c5b8ecbc3b3d5eb3a9d873dec60afc0cd5ff9d9f1c75981d8c31cfe4df8527"},
+    {file = "coverage-7.3.0-cp310-cp310-win_amd64.whl", hash = "sha256:97c44f4ee13bce914272589b6b41165bbb650e48fdb7bd5493a38bde8de730a1"},
+    {file = "coverage-7.3.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:74c160285f2dfe0acf0f72d425f3e970b21b6de04157fc65adc9fd07ee44177f"},
+    {file = "coverage-7.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b543302a3707245d454fc49b8ecd2c2d5982b50eb63f3535244fd79a4be0c99d"},
+    {file = "coverage-7.3.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ad0f87826c4ebd3ef484502e79b39614e9c03a5d1510cfb623f4a4a051edc6fd"},
+    {file = "coverage-7.3.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:13c6cbbd5f31211d8fdb477f0f7b03438591bdd077054076eec362cf2207b4a7"},
+    {file = "coverage-7.3.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fac440c43e9b479d1241fe9d768645e7ccec3fb65dc3a5f6e90675e75c3f3e3a"},
+    {file = "coverage-7.3.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:3c9834d5e3df9d2aba0275c9f67989c590e05732439b3318fa37a725dff51e74"},
+    {file = "coverage-7.3.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:4c8e31cf29b60859876474034a83f59a14381af50cbe8a9dbaadbf70adc4b214"},
+    {file = "coverage-7.3.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:7a9baf8e230f9621f8e1d00c580394a0aa328fdac0df2b3f8384387c44083c0f"},
+    {file = "coverage-7.3.0-cp311-cp311-win32.whl", hash = "sha256:ccc51713b5581e12f93ccb9c5e39e8b5d4b16776d584c0f5e9e4e63381356482"},
+    {file = "coverage-7.3.0-cp311-cp311-win_amd64.whl", hash = "sha256:887665f00ea4e488501ba755a0e3c2cfd6278e846ada3185f42d391ef95e7e70"},
+    {file = "coverage-7.3.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:d000a739f9feed900381605a12a61f7aaced6beae832719ae0d15058a1e81c1b"},
+    {file = "coverage-7.3.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:59777652e245bb1e300e620ce2bef0d341945842e4eb888c23a7f1d9e143c446"},
+    {file = "coverage-7.3.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c9737bc49a9255d78da085fa04f628a310c2332b187cd49b958b0e494c125071"},
+    {file = "coverage-7.3.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5247bab12f84a1d608213b96b8af0cbb30d090d705b6663ad794c2f2a5e5b9fe"},
+    {file = "coverage-7.3.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e2ac9a1de294773b9fa77447ab7e529cf4fe3910f6a0832816e5f3d538cfea9a"},
{file = "coverage-7.3.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:85b7335c22455ec12444cec0d600533a238d6439d8d709d545158c1208483873"},
{file = "coverage-7.3.0-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:36ce5d43a072a036f287029a55b5c6a0e9bd73db58961a273b6dc11a2c6eb9c2"},
{file = "coverage-7.3.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:211a4576e984f96d9fce61766ffaed0115d5dab1419e4f63d6992b480c2bd60b"},
{file = "coverage-7.3.0-cp312-cp312-win32.whl", hash = "sha256:56afbf41fa4a7b27f6635bc4289050ac3ab7951b8a821bca46f5b024500e6321"},
{file = "coverage-7.3.0-cp312-cp312-win_amd64.whl", hash = "sha256:7f297e0c1ae55300ff688568b04ff26b01c13dfbf4c9d2b7d0cb688ac60df479"},
{file = "coverage-7.3.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ac0dec90e7de0087d3d95fa0533e1d2d722dcc008bc7b60e1143402a04c117c1"},
{file = "coverage-7.3.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:438856d3f8f1e27f8e79b5410ae56650732a0dcfa94e756df88c7e2d24851fcd"},
{file = "coverage-7.3.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1084393c6bda8875c05e04fce5cfe1301a425f758eb012f010eab586f1f3905e"},
{file = "coverage-7.3.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:49ab200acf891e3dde19e5aa4b0f35d12d8b4bd805dc0be8792270c71bd56c54"},
{file = "coverage-7.3.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a67e6bbe756ed458646e1ef2b0778591ed4d1fcd4b146fc3ba2feb1a7afd4254"},
{file = "coverage-7.3.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:8f39c49faf5344af36042b293ce05c0d9004270d811c7080610b3e713251c9b0"},
{file = "coverage-7.3.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:7df91fb24c2edaabec4e0eee512ff3bc6ec20eb8dccac2e77001c1fe516c0c84"},
{file = "coverage-7.3.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:34f9f0763d5fa3035a315b69b428fe9c34d4fc2f615262d6be3d3bf3882fb985"},
{file = "coverage-7.3.0-cp38-cp38-win32.whl", hash = "sha256:bac329371d4c0d456e8d5f38a9b0816b446581b5f278474e416ea0c68c47dcd9"},
{file = "coverage-7.3.0-cp38-cp38-win_amd64.whl", hash = "sha256:b859128a093f135b556b4765658d5d2e758e1fae3e7cc2f8c10f26fe7005e543"},
{file = "coverage-7.3.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:fc0ed8d310afe013db1eedd37176d0839dc66c96bcfcce8f6607a73ffea2d6ba"},
{file = "coverage-7.3.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e61260ec93f99f2c2d93d264b564ba912bec502f679793c56f678ba5251f0393"},
{file = "coverage-7.3.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:97af9554a799bd7c58c0179cc8dbf14aa7ab50e1fd5fa73f90b9b7215874ba28"},
{file = "coverage-7.3.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3558e5b574d62f9c46b76120a5c7c16c4612dc2644c3d48a9f4064a705eaee95"},
{file = "coverage-7.3.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:37d5576d35fcb765fca05654f66aa71e2808d4237d026e64ac8b397ffa66a56a"},
{file = "coverage-7.3.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:07ea61bcb179f8f05ffd804d2732b09d23a1238642bf7e51dad62082b5019b34"},
{file = "coverage-7.3.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:80501d1b2270d7e8daf1b64b895745c3e234289e00d5f0e30923e706f110334e"},
{file = "coverage-7.3.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:4eddd3153d02204f22aef0825409091a91bf2a20bce06fe0f638f5c19a85de54"},
{file = "coverage-7.3.0-cp39-cp39-win32.whl", hash = "sha256:2d22172f938455c156e9af2612650f26cceea47dc86ca048fa4e0b2d21646ad3"},
{file = "coverage-7.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:60f64e2007c9144375dd0f480a54d6070f00bb1a28f65c408370544091c9bc9e"},
{file = "coverage-7.3.0-pp38.pp39.pp310-none-any.whl", hash = "sha256:5492a6ce3bdb15c6ad66cb68a0244854d9917478877a25671d70378bdc8562d0"},
{file = "coverage-7.3.0.tar.gz", hash = "sha256:49dbb19cdcafc130f597d9e04a29d0a032ceedf729e41b181f51cd170e6ee865"},
]
[package.dependencies]
tomli = {version = "*", optional = true, markers = "python_full_version <= \"3.11.0a6\" and extra == \"toml\""}
[package.extras]
toml = ["tomli"]
@@ -848,13 +842,13 @@ smmap = ">=3.0.1,<6"
[[package]]
name = "gitpython"
version = "3.1.31"
version = "3.1.32"
description = "GitPython is a Python library used to interact with Git repositories"
optional = false
python-versions = ">=3.7"
files = [
{file = "GitPython-3.1.31-py3-none-any.whl", hash = "sha256:f04893614f6aa713a60cbbe1e6a97403ef633103cdd0ef5eb6efe0deb98dbe8d"},
{file = "GitPython-3.1.31.tar.gz", hash = "sha256:8ce3bcf69adfdf7c7d503e78fd3b1c492af782d58893b650adb2ac8912ddd573"},
{file = "GitPython-3.1.32-py3-none-any.whl", hash = "sha256:e3d59b1c2c6ebb9dfa7a184daf3b6dd4914237e7488a1730a6d8f6f5d0b4187f"},
{file = "GitPython-3.1.32.tar.gz", hash = "sha256:8d9b8cb1e80b9735e8717c9362079d3ce4c6e5ddeebedd0361b228c3a67a62f6"},
]
[package.dependencies]
@@ -884,13 +878,13 @@ grpcio-gcp = ["grpcio-gcp (>=0.2.2,<1.0dev)"]
[[package]]
name = "google-api-python-client"
version = "2.95.0"
version = "2.96.0"
description = "Google API Client Library for Python"
optional = false
python-versions = ">=3.7"
files = [
{file = "google-api-python-client-2.95.0.tar.gz", hash = "sha256:d2731ede12f79e53fbe11fdb913dfe986440b44c0a28431c78a8ec275f4c1541"},
{file = "google_api_python_client-2.95.0-py2.py3-none-any.whl", hash = "sha256:a8aab2da678f42a01f2f52108f787fef4310f23f9dd917c4e64664c3f0c885ba"},
{file = "google-api-python-client-2.96.0.tar.gz", hash = "sha256:f712373d03d338af57b9f5fe98c91f4b5baaa8765469b015bc623c4681c5bd51"},
{file = "google_api_python_client-2.96.0-py2.py3-none-any.whl", hash = "sha256:38c2b61b10d15bb41ec8f89303e3837ec2d2c3e4e38de5800c05ee322492f937"},
]
[package.dependencies]
@@ -1347,20 +1341,20 @@ min-versions = ["babel (==2.9.0)", "click (==7.0)", "colorama (==0.4)", "ghp-imp
[[package]]
name = "mkdocs-material"
version = "9.1.20"
version = "9.1.21"
description = "Documentation that simply works"
optional = true
python-versions = ">=3.7"
files = [
{file = "mkdocs_material-9.1.20-py3-none-any.whl", hash = "sha256:152db66f667825d5aa3398386fe4d227640ec393c31e7cf109b114a569fc40fc"},
{file = "mkdocs_material-9.1.20.tar.gz", hash = "sha256:91621b6a6002138c72d50a0beef20ed12cf367d2af27d1f53382562b3a9625c7"},
{file = "mkdocs_material-9.1.21-py3-none-any.whl", hash = "sha256:58bb2f11ef240632e176d6f0f7d1cff06be1d11c696a5a1b553b808b4280ed47"},
{file = "mkdocs_material-9.1.21.tar.gz", hash = "sha256:71940cdfca84ab296b6362889c25395b1621273fb16c93deda257adb7ff44ec8"},
]
[package.dependencies]
colorama = ">=0.4"
jinja2 = ">=3.0"
markdown = ">=3.2"
mkdocs = ">=1.4.2"
mkdocs = ">=1.5.0"
mkdocs-material-extensions = ">=1.1"
pygments = ">=2.14"
pymdown-extensions = ">=9.9.1"
@@ -1380,13 +1374,13 @@ files = [
[[package]]
name = "mock"
version = "5.0.2"
version = "5.1.0"
description = "Rolling backport of unittest.mock for all Pythons"
optional = false
python-versions = ">=3.6"
files = [
{file = "mock-5.0.2-py3-none-any.whl", hash = "sha256:0e0bc5ba78b8db3667ad636d964eb963dc97a59f04c6f6214c5f0e4a8f726c56"},
{file = "mock-5.0.2.tar.gz", hash = "sha256:06f18d7d65b44428202b145a9a36e99c2ee00d1eb992df0caf881d4664377891"},
{file = "mock-5.1.0-py3-none-any.whl", hash = "sha256:18c694e5ae8a208cdb3d2c20a993ca1a7b0efa258c247a1e565150f477f83744"},
{file = "mock-5.1.0.tar.gz", hash = "sha256:5e96aad5ccda4718e0a229ed94b2024df75cc2d55575ba5762d31f5767b8767d"},
]
[package.extras]
@@ -1912,6 +1906,24 @@ tomli = {version = ">=1.0.0", markers = "python_version < \"3.11\""}
[package.extras]
testing = ["argcomplete", "attrs (>=19.2.0)", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"]
[[package]]
name = "pytest-cov"
version = "4.1.0"
description = "Pytest plugin for measuring coverage."
optional = false
python-versions = ">=3.7"
files = [
{file = "pytest-cov-4.1.0.tar.gz", hash = "sha256:3904b13dfbfec47f003b8e77fd5b589cd11904a21ddf1ab38a64f204d6a10ef6"},
{file = "pytest_cov-4.1.0-py3-none-any.whl", hash = "sha256:6ba70b9e97e69fcc3fb45bfeab2d0a138fb65c4d0d6a41ef33983ad114be8c3a"},
]
[package.dependencies]
coverage = {version = ">=5.2.1", extras = ["toml"]}
pytest = ">=4.6"
[package.extras]
testing = ["fields", "hunter", "process-tests", "pytest-xdist", "six", "virtualenv"]
[[package]]
name = "pytest-randomly"
version = "3.13.0"
@@ -2572,20 +2584,6 @@ files = [
[package.dependencies]
pbr = ">=2.0.0,<2.1.0 || >2.1.0"
[[package]]
name = "sure"
version = "2.0.1"
description = "utility belt for automated testing in python for python"
optional = false
python-versions = "*"
files = [
{file = "sure-2.0.1.tar.gz", hash = "sha256:c8fc6fabc0e7f6984eeabb942540e45646e5bef0bb99fe59e02da634e4d4b9ca"},
]
[package.dependencies]
mock = "*"
six = "*"
[[package]]
name = "tabulate"
version = "0.9.0"
@@ -2684,13 +2682,13 @@ socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
[[package]]
name = "vulture"
version = "2.7"
version = "2.8"
description = "Find dead code"
optional = false
python-versions = ">=3.6"
files = [
{file = "vulture-2.7-py2.py3-none-any.whl", hash = "sha256:bccc51064ed76db15a6b58277cea8885936af047f53d2655fb5de575e93d0bca"},
{file = "vulture-2.7.tar.gz", hash = "sha256:67fb80a014ed9fdb599dd44bb96cb54311032a104106fc2e706ef7a6dad88032"},
{file = "vulture-2.8-py2.py3-none-any.whl", hash = "sha256:78bd44972b71d914ac382e64cacd4f56682017dcfa5929d3110ad09453796133"},
{file = "vulture-2.8.tar.gz", hash = "sha256:393293f183508064294b0feb4c8579e7f1f27e5bf74c9def6a3d52f38b29b599"},
]
[package.dependencies]
@@ -2895,4 +2893,4 @@ docs = ["mkdocs", "mkdocs-material"]
[metadata]
lock-version = "2.0"
python-versions = "^3.9"
content-hash = "17459c4c8a7acf4c4a31253edf406113fbcedf8d81d17042f6b33665c3a6f47d"
content-hash = "7dc4127465abad1d20e55c5889ed9e3c61a241fd728c30646d48bf1f24129d5c"


@@ -160,7 +160,11 @@ def prowler():
findings = []
if len(checks_to_execute):
findings = execute_checks(
checks_to_execute, provider, audit_info, audit_output_options
checks_to_execute,
provider,
audit_info,
audit_output_options,
bulk_checks_metadata,
)
else:
logger.error(


@@ -11,7 +11,7 @@ from prowler.lib.logger import logger
timestamp = datetime.today()
timestamp_utc = datetime.now(timezone.utc).replace(tzinfo=timezone.utc)
prowler_version = "3.8.1"
prowler_version = "3.8.2"
html_logo_url = "https://github.com/prowler-cloud/prowler/"
html_logo_img = "https://user-images.githubusercontent.com/3985464/113734260-7ba06900-96fb-11eb-82bc-d4f68a1e2710.png"
square_logo_img = "https://user-images.githubusercontent.com/38561120/235905862-9ece5bd7-9aa3-4e48-807a-3a9035eb8bfb.png"


@@ -54,6 +54,13 @@ aws:
organizations_enabled_regions: []
organizations_trusted_delegated_administrators: []
# AWS ECR
# ecr_repositories_scan_vulnerabilities_in_latest_image
# CRITICAL
# HIGH
# MEDIUM
ecr_repository_vulnerability_minimum_severity: "MEDIUM"
# Azure Configuration
azure:


@@ -14,7 +14,7 @@ from colorama import Fore, Style
from prowler.config.config import orange_color
from prowler.lib.check.compliance_models import load_compliance_framework
from prowler.lib.check.models import Check, load_check_metadata
from prowler.lib.check.models import Check, Check_Metadata_Model, load_check_metadata
from prowler.lib.logger import logger
try:
@@ -385,20 +385,21 @@ def import_check(check_path: str) -> ModuleType:
def run_check(check: Check, output_options: Provider_Output_Options) -> list:
findings = []
if output_options.verbose:
print(
f"\nCheck ID: {check.CheckID} - {Fore.MAGENTA}{check.ServiceName}{Fore.YELLOW} [{check.Severity}]{Style.RESET_ALL}"
)
logger.debug(f"Executing check: {check.CheckID}")
try:
if output_options.verbose:
print(
f"\nCheck ID: {check.check_metadata.CheckID} - {Fore.MAGENTA}{check.check_metadata.ServiceName}{Fore.YELLOW} [{check.check_metadata.Severity}]{Style.RESET_ALL}"
)
logger.debug(f"Executing check: {check.check_metadata.CheckID}")
findings = check.execute()
except Exception as error:
if not output_options.only_logs:
print(
f"Something went wrong in {check.CheckID}, please use --log-level ERROR"
f"Something went wrong in {check.check_metadata.CheckID}, please use --log-level ERROR"
)
logger.error(
f"{check.CheckID} -- {error.__class__.__name__}[{traceback.extract_tb(error.__traceback__)[-1].lineno}]: {error}"
f"{check.check_metadata.CheckID} -- {error.__class__.__name__}[{traceback.extract_tb(error.__traceback__)[-1].lineno}]: {error}"
)
finally:
return findings
@@ -409,6 +410,7 @@ def execute_checks(
provider: str,
audit_info: Any,
audit_output_options: Provider_Output_Options,
bulk_checks_metadata: dict,
) -> list:
# List to store all the check's findings
all_findings = []
@@ -454,6 +456,7 @@ def execute_checks(
audit_info,
services_executed,
checks_executed,
bulk_checks_metadata,
)
all_findings.extend(check_findings)
@@ -500,6 +503,7 @@ def execute_checks(
audit_info,
services_executed,
checks_executed,
bulk_checks_metadata,
)
all_findings.extend(check_findings)
bar()
@@ -527,25 +531,32 @@ def execute(
audit_info: Any,
services_executed: set,
checks_executed: set,
bulk_checks_metadata: dict[str, Check_Metadata_Model],
):
# Import check module
check_module_path = (
f"prowler.providers.{provider}.services.{service}.{check_name}.{check_name}"
)
lib = import_check(check_module_path)
# Recover functions from check
check_to_execute = getattr(lib, check_name)
c = check_to_execute()
try:
# Import check module
check_module_path = (
f"prowler.providers.{provider}.services.{service}.{check_name}.{check_name}"
)
lib = import_check(check_module_path)
# Recover functions from check
metadata = bulk_checks_metadata[check_name]
check_to_execute = getattr(lib, check_name)
c = check_to_execute(metadata)
# Run check
check_findings = run_check(c, audit_output_options)
# Run check
check_findings = run_check(c, audit_output_options)
# Update Audit Status
services_executed.add(service)
checks_executed.add(check_name)
audit_info.audit_metadata = update_audit_metadata(
audit_info.audit_metadata, services_executed, checks_executed
)
# Update Audit Status
services_executed.add(service)
checks_executed.add(check_name)
audit_info.audit_metadata = update_audit_metadata(
audit_info.audit_metadata, services_executed, checks_executed
)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
# Report the check's findings
report(check_findings, audit_output_options, audit_info)


@@ -1,4 +1,3 @@
import os
import sys
from abc import ABC, abstractmethod
from dataclasses import dataclass
@@ -56,24 +55,18 @@ class Check_Metadata_Model(BaseModel):
Compliance: list = None
class Check(ABC, Check_Metadata_Model):
class Check(ABC):
"""Prowler Check"""
def __init__(self, **data):
check_metadata: Check_Metadata_Model
def __init__(self, metadata):
"""Check's init function. Stores the already-validated check metadata."""
# Parse the Check's metadata file
metadata_file = (
os.path.abspath(sys.modules[self.__module__].__file__)[:-3]
+ ".metadata.json"
)
# Store it to validate them with Pydantic
data = Check_Metadata_Model.parse_file(metadata_file).dict()
# Calls parents init function
super().__init__(**data)
self.check_metadata = metadata
def metadata(self) -> dict:
"""Return the JSON representation of the check's metadata"""
return self.json()
return self.check_metadata.json()
@abstractmethod
def execute(self):

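The models.py change above stops each Check subclass from parsing its own `.metadata.json` on instantiation; the metadata is validated once up front and injected through the constructor. A simplified standalone sketch of that dependency-injection pattern (the `CheckMetadata` and `demo_check` names here are illustrative stand-ins, not Prowler's actual classes):

```python
from abc import ABC, abstractmethod


class CheckMetadata:
    # Stand-in for Prowler's Pydantic Check_Metadata_Model.
    def __init__(self, check_id, service_name, severity):
        self.CheckID = check_id
        self.ServiceName = service_name
        self.Severity = severity


class Check(ABC):
    """Checks no longer read their own metadata file; the already-loaded
    metadata object is injected once, avoiding one file parse per check run."""

    def __init__(self, metadata):
        self.check_metadata = metadata

    @abstractmethod
    def execute(self):
        ...


class demo_check(Check):
    def execute(self):
        # A real check would build findings; here we just surface the ID.
        return [self.check_metadata.CheckID]


# Metadata is loaded once into a dict and handed to each check by name.
bulk_checks_metadata = {"demo_check": CheckMetadata("demo_check", "demo", "low")}
check = demo_check(bulk_checks_metadata["demo_check"])
print(check.execute())  # ['demo_check']
```

This mirrors why `execute_checks`/`execute` in the diff above gained a `bulk_checks_metadata` parameter: the caller owns the metadata and passes each check its entry.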

@@ -435,7 +435,7 @@ class Check_Output_JSON(BaseModel):
Risk: str
RelatedUrl: str
Remediation: Remediation
Compliance: Optional[dict]
Compliance: Optional[list]
Categories: List[str]
DependsOn: List[str]
RelatedTo: List[str]


@@ -496,6 +496,17 @@
]
}
},
"appfabric": {
"regions": {
"aws": [
"ap-northeast-1",
"eu-west-1",
"us-east-1"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"appflow": {
"regions": {
"aws": [
@@ -6981,6 +6992,16 @@
"aws-us-gov": []
}
},
"payment-cryptography": {
"regions": {
"aws": [
"us-east-1",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"personalize": {
"regions": {
"aws": [
@@ -7181,6 +7202,7 @@
"regions": {
"aws": [
"ap-south-1",
"eu-central-1",
"us-east-1"
],
"aws-cn": [],
@@ -7786,6 +7808,7 @@
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-north-1",
@@ -7844,6 +7867,35 @@
"aws-us-gov": []
}
},
"route53-application-recovery-controller": {
"regions": {
"aws": [
"af-south-1",
"ap-east-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ca-central-1",
"eu-central-1",
"eu-north-1",
"eu-south-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-south-1",
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"route53-recovery-readiness": {
"regions": {
"aws": [
@@ -8219,6 +8271,7 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",


@@ -1,31 +1,52 @@
# takes a list of accounts and returns the valid ones
def is_account_only_allowed_in_condition(
condition_statement: dict, source_account: str
):
"""
is_account_only_allowed_in_condition parses the IAM Condition policy block and returns True if the source_account passed as argument is within it, False otherwise.
@param condition_statement: dict with an IAM Condition block, e.g.:
{
"StringLike": {
"AWS:SourceAccount": 111122223333
}
}
@param source_account: str with a 12-digit AWS Account number, e.g.: 111122223333
"""
is_condition_valid = False
# The conditions must be defined in lowercase since the context key names are not case-sensitive.
# For example, including the aws:SourceAccount context key is equivalent to testing for AWS:SourceAccount
# https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html
valid_condition_options = {
"StringEquals": [
"aws:SourceAccount",
"aws:SourceOwner",
"s3:ResourceAccount",
"aws:PrincipalAccount",
"aws:ResourceAccount",
"aws:sourceaccount",
"aws:sourceowner",
"s3:resourceaccount",
"aws:principalaccount",
"aws:resourceaccount",
],
"StringLike": [
"aws:SourceAccount",
"aws:SourceOwner",
"aws:SourceArn",
"aws:PrincipalArn",
"aws:ResourceAccount",
"aws:PrincipalAccount",
"aws:sourceaccount",
"aws:sourceowner",
"aws:sourcearn",
"aws:principalarn",
"aws:resourceaccount",
"aws:principalaccount",
],
"ArnLike": ["aws:SourceArn", "aws:PrincipalArn"],
"ArnEquals": ["aws:SourceArn", "aws:PrincipalArn"],
"ArnLike": ["aws:sourcearn", "aws:principalarn"],
"ArnEquals": ["aws:sourcearn", "aws:principalarn"],
}
for condition_operator, condition_operator_key in valid_condition_options.items():
if condition_operator in condition_statement:
for value in condition_operator_key:
# We need to transform the condition_statement into lowercase
condition_statement[condition_operator] = {
k.lower(): v
for k, v in condition_statement[condition_operator].items()
}
if value in condition_statement[condition_operator]:
# values are a list
if isinstance(


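The fix above lowercases condition context keys before matching, because IAM context key names (e.g. `aws:SourceAccount` vs `AWS:SourceAccount`) are case-insensitive. A minimal standalone sketch of that normalization, with a reduced key set for brevity (the helper name and structure are illustrative, not Prowler's actual function):

```python
def account_allowed_in_condition(condition_statement: dict, source_account: str) -> bool:
    # Context key names are case-insensitive in IAM, so normalize the
    # statement's keys to lowercase before comparing against the allow-list.
    allowed_keys = {"aws:sourceaccount", "aws:sourceowner", "aws:principalaccount"}
    for operator in ("StringEquals", "StringLike"):
        block = {k.lower(): v for k, v in condition_statement.get(operator, {}).items()}
        for key in allowed_keys:
            values = block.get(key)
            if values is None:
                continue
            # Condition values may be a single string or a list of strings.
            if isinstance(values, str):
                values = [values]
            if source_account in values:
                return True
    return False


# Mixed-case context key still matches thanks to the lowercase normalization.
print(account_allowed_in_condition(
    {"StringEquals": {"AWS:SourceAccount": "111122223333"}}, "111122223333"
))  # True
```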
@@ -0,0 +1,32 @@
{
"Provider": "aws",
"CheckID": "ec2_instance_detailed_monitoring_enabled",
"CheckTitle": "Check if EC2 instances have detailed monitoring enabled.",
"CheckType": [
"Infrastructure Security"
],
"ServiceName": "ec2",
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"Severity": "low",
"ResourceType": "AwsEc2Instance",
"Description": "Check if EC2 instances have detailed monitoring enabled.",
"Risk": "Enabling detailed monitoring provides enhanced monitoring and granular insights into EC2 instance metrics. Not having detailed monitoring enabled may limit the ability to troubleshoot performance issues effectively.",
"RelatedUrl": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html",
"Remediation": {
"Code": {
"CLI": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/EC2/instance-detailed-monitoring.html",
"NativeIaC": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/EC2/instance-detailed-monitoring.html",
"Other": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html#enable-detailed-monitoring-instance",
"Terraform": "https://docs.bridgecrew.io/docs/ensure-that-detailed-monitoring-is-enabled-for-ec2-instances#terraform"
},
"Recommendation": {
"Text": "Enable detailed monitoring for EC2 instances to gain better insights into performance metrics.",
"Url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html#enable-detailed-monitoring-instance"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": ""
}


@@ -0,0 +1,24 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.ec2.ec2_client import ec2_client
class ec2_instance_detailed_monitoring_enabled(Check):
def execute(self):
findings = []
for instance in ec2_client.instances:
report = Check_Report_AWS(self.metadata())
report.region = instance.region
report.resource_id = instance.id
report.resource_arn = instance.arn
report.resource_tags = instance.tags
report.status = "PASS"
report.status_extended = (
f"EC2 Instance {instance.id} has detailed monitoring enabled."
)
if instance.monitoring_state != "enabled":
report.status = "FAIL"
report.status_extended = f"EC2 Instance {instance.id} does not have detailed monitoring enabled."
findings.append(report)
return findings


@@ -56,6 +56,7 @@ class EC2(AWSService):
public_ip = None
private_ip = None
instance_profile = None
monitoring_state = "disabled"
if "MetadataOptions" in instance:
http_tokens = instance["MetadataOptions"]["HttpTokens"]
http_endpoint = instance["MetadataOptions"][
@@ -67,6 +68,10 @@ class EC2(AWSService):
):
public_dns = instance["PublicDnsName"]
public_ip = instance["PublicIpAddress"]
if "Monitoring" in instance:
monitoring_state = instance.get(
"Monitoring", {"State": "disabled"}
).get("State", "disabled")
if "PrivateIpAddress" in instance:
private_ip = instance["PrivateIpAddress"]
if "IamInstanceProfile" in instance:
@@ -88,6 +93,7 @@ class EC2(AWSService):
http_tokens=http_tokens,
http_endpoint=http_endpoint,
instance_profile=instance_profile,
monitoring_state=monitoring_state,
tags=instance.get("Tags"),
)
)
@@ -407,6 +413,7 @@ class Instance(BaseModel):
user_data: Optional[str]
http_tokens: Optional[str]
http_endpoint: Optional[str]
monitoring_state: str
instance_profile: Optional[dict]
tags: Optional[list] = []

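The EC2 service change above defaults `monitoring_state` to `"disabled"` and only overrides it when the `Monitoring` key is present in the `describe_instances` response. That defensive double-default can be condensed into one expression, sketched here in isolation (the function name is illustrative):

```python
def monitoring_state(instance: dict) -> str:
    """Extract the CloudWatch monitoring state from a DescribeInstances
    instance dict, defaulting to "disabled" at both lookup levels."""
    return instance.get("Monitoring", {"State": "disabled"}).get("State", "disabled")


print(monitoring_state({"Monitoring": {"State": "enabled"}}))  # enabled
print(monitoring_state({}))                                    # disabled
```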

@@ -5,6 +5,12 @@ from prowler.providers.aws.services.ecr.ecr_client import ecr_client
class ecr_repositories_scan_vulnerabilities_in_latest_image(Check):
def execute(self):
findings = []
# Get minimum severity to report
minimum_severity = ecr_client.audit_config.get(
"ecr_repository_vulnerability_minimum_severity", "MEDIUM"
)
for registry in ecr_client.registries.values():
for repository in registry.repositories:
# First check if the repository has images
@@ -27,8 +33,23 @@ class ecr_repositories_scan_vulnerabilities_in_latest_image(Check):
report.status_extended = (
f"ECR repository {repository.name} with scan status FAILED."
)
elif image.scan_findings_status != "FAILED":
if image.scan_findings_severity_count and (
elif (
image.scan_findings_status != "FAILED"
and image.scan_findings_severity_count
):
if (
minimum_severity == "CRITICAL"
and image.scan_findings_severity_count.critical
):
report.status = "FAIL"
report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}."
elif minimum_severity == "HIGH" and (
image.scan_findings_severity_count.critical
or image.scan_findings_severity_count.high
):
report.status = "FAIL"
report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}, HIGH->{image.scan_findings_severity_count.high}."
elif minimum_severity == "MEDIUM" and (
image.scan_findings_severity_count.critical
or image.scan_findings_severity_count.high
or image.scan_findings_severity_count.medium


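The threshold logic added above is cumulative: configuring `CRITICAL` fails only on critical findings, `HIGH` on critical or high, and `MEDIUM` (the default) on any of the three. A standalone sketch of that branching, using a hypothetical `SeverityCount` container in place of Prowler's scan-findings model:

```python
from collections import namedtuple

# Hypothetical stand-in for the image's scan_findings_severity_count object.
SeverityCount = namedtuple("SeverityCount", "critical high medium")


def fails_at_minimum_severity(counts: SeverityCount, minimum_severity: str = "MEDIUM") -> bool:
    """Return True if findings at or above the configured minimum exist.

    Mirrors the check's thresholds: CRITICAL looks only at criticals,
    HIGH at criticals and highs, MEDIUM at all three buckets.
    """
    if minimum_severity == "CRITICAL":
        return bool(counts.critical)
    if minimum_severity == "HIGH":
        return bool(counts.critical or counts.high)
    # Default behaviour matches ecr_repository_vulnerability_minimum_severity: "MEDIUM"
    return bool(counts.critical or counts.high or counts.medium)


counts = SeverityCount(critical=0, high=2, medium=5)
print(fails_at_minimum_severity(counts, "CRITICAL"))  # False
print(fails_at_minimum_severity(counts, "HIGH"))      # True
```

This is why the config diff earlier introduces `ecr_repository_vulnerability_minimum_severity: "MEDIUM"`: the same image can pass or fail depending on the configured floor.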
@@ -1,5 +1,4 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.s3.s3_client import s3_client
from prowler.providers.aws.services.s3.s3control_client import s3control_client
@@ -8,17 +7,17 @@ class s3_account_level_public_access_blocks(Check):
findings = []
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
report.status_extended = f"Block Public Access is not configured for the account {s3_client.audited_account}."
report.status_extended = f"Block Public Access is not configured for the account {s3control_client.audited_account}."
report.region = s3control_client.region
report.resource_id = s3_client.audited_account
report.resource_arn = s3_client.audited_account_arn
report.resource_id = s3control_client.audited_account
report.resource_arn = s3control_client.audited_account_arn
if (
s3control_client.account_public_access_block
and s3control_client.account_public_access_block.ignore_public_acls
and s3control_client.account_public_access_block.restrict_public_buckets
):
report.status = "PASS"
report.status_extended = f"Block Public Access is configured for the account {s3_client.audited_account}."
report.status_extended = f"Block Public Access is configured for the account {s3control_client.audited_account}."
findings.append(report)


@@ -12,10 +12,10 @@ class defender_ensure_defender_for_app_services_is_on(Check):
report.subscription = subscription
report.resource_name = "Defender plan App Services"
report.resource_id = pricings["AppServices"].resource_id
report.status_extended = f"Defender plan Defender for App Services from subscription {subscription} is set to ON (pricing tier standard)"
report.status_extended = f"Defender plan Defender for App Services from subscription {subscription} is set to ON (pricing tier standard)."
if pricings["AppServices"].pricing_tier != "Standard":
report.status = "FAIL"
report.status_extended = f"Defender plan Defender for App Services from subscription {subscription} is set to OFF (pricing tier not standard)"
report.status_extended = f"Defender plan Defender for App Services from subscription {subscription} is set to OFF (pricing tier not standard)."
findings.append(report)
return findings


@@ -12,10 +12,10 @@ class defender_ensure_defender_for_arm_is_on(Check):
report.subscription = subscription
report.resource_id = pricings["Arm"].resource_id
report.resource_name = "Defender plan ARM"
report.status_extended = f"Defender plan Defender for ARM from subscription {subscription} is set to ON (pricing tier standard)"
report.status_extended = f"Defender plan Defender for ARM from subscription {subscription} is set to ON (pricing tier standard)."
if pricings["Arm"].pricing_tier != "Standard":
report.status = "FAIL"
report.status_extended = f"Defender plan Defender for ARM from subscription {subscription} is set to OFF (pricing tier not standard)"
report.status_extended = f"Defender plan Defender for ARM from subscription {subscription} is set to OFF (pricing tier not standard)."
findings.append(report)
return findings


@@ -12,10 +12,10 @@ class defender_ensure_defender_for_azure_sql_databases_is_on(Check):
report.subscription = subscription
report.resource_id = pricings["SqlServers"].resource_id
report.resource_name = "Defender plan Azure SQL DB Servers"
report.status_extended = f"Defender plan Defender for Azure SQL DB Servers from subscription {subscription} is set to ON (pricing tier standard)"
report.status_extended = f"Defender plan Defender for Azure SQL DB Servers from subscription {subscription} is set to ON (pricing tier standard)."
if pricings["SqlServers"].pricing_tier != "Standard":
report.status = "FAIL"
report.status_extended = f"Defender plan Defender for Azure SQL DB Servers from subscription {subscription} is set to OFF (pricing tier not standard)"
report.status_extended = f"Defender plan Defender for Azure SQL DB Servers from subscription {subscription} is set to OFF (pricing tier not standard)."
findings.append(report)
return findings


@@ -12,10 +12,10 @@ class defender_ensure_defender_for_containers_is_on(Check):
report.subscription = subscription
report.resource_id = pricings["Containers"].resource_id
report.resource_name = "Defender plan Container Registries"
report.status_extended = f"Defender plan Defender for Containers from subscription {subscription} is set to ON (pricing tier standard)"
report.status_extended = f"Defender plan Defender for Containers from subscription {subscription} is set to ON (pricing tier standard)."
if pricings["Containers"].pricing_tier != "Standard":
report.status = "FAIL"
report.status_extended = f"Defender plan Defender for Containers from subscription {subscription} is set to OFF (pricing tier not standard)"
report.status_extended = f"Defender plan Defender for Containers from subscription {subscription} is set to OFF (pricing tier not standard)."
findings.append(report)
return findings


@@ -12,10 +12,10 @@ class defender_ensure_defender_for_cosmosdb_is_on(Check):
report.subscription = subscription
report.resource_id = pricings["CosmosDbs"].resource_id
report.resource_name = "Defender plan Cosmos DB"
report.status_extended = f"Defender plan Defender for Cosmos DB from subscription {subscription} is set to ON (pricing tier standard)"
report.status_extended = f"Defender plan Defender for Cosmos DB from subscription {subscription} is set to ON (pricing tier standard)."
if pricings["CosmosDbs"].pricing_tier != "Standard":
report.status = "FAIL"
report.status_extended = f"Defender plan Defender for Cosmos DB from subscription {subscription} is set to OFF (pricing tier not standard)"
report.status_extended = f"Defender plan Defender for Cosmos DB from subscription {subscription} is set to OFF (pricing tier not standard)."
findings.append(report)
return findings


@@ -17,7 +17,7 @@ class defender_ensure_defender_for_databases_is_on(Check):
report.subscription = subscription
report.resource_id = pricings["SqlServers"].resource_id
report.status = "PASS"
-report.status_extended = f"Defender plan Defender for Databases from subscription {subscription} is set to ON (pricing tier standard)"
+report.status_extended = f"Defender plan Defender for Databases from subscription {subscription} is set to ON (pricing tier standard)."
if (
pricings["SqlServers"].pricing_tier != "Standard"
or pricings["SqlServerVirtualMachines"].pricing_tier != "Standard"
@@ -26,7 +26,7 @@ class defender_ensure_defender_for_databases_is_on(Check):
or pricings["CosmosDbs"].pricing_tier != "Standard"
):
report.status = "FAIL"
-report.status_extended = f"Defender plan Defender for Databases from subscription {subscription} is set to OFF (pricing tier not standard)"
+report.status_extended = f"Defender plan Defender for Databases from subscription {subscription} is set to OFF (pricing tier not standard)."
findings.append(report)
return findings


@@ -12,10 +12,10 @@ class defender_ensure_defender_for_dns_is_on(Check):
report.subscription = subscription
report.resource_name = "Defender plan DNS"
report.resource_id = pricings["Dns"].resource_id
-report.status_extended = f"Defender plan Defender for DNS from subscription {subscription} is set to ON (pricing tier standard)"
+report.status_extended = f"Defender plan Defender for DNS from subscription {subscription} is set to ON (pricing tier standard)."
if pricings["Dns"].pricing_tier != "Standard":
report.status = "FAIL"
-report.status_extended = f"Defender plan Defender for DNS from subscription {subscription} is set to OFF (pricing tier not standard)"
+report.status_extended = f"Defender plan Defender for DNS from subscription {subscription} is set to OFF (pricing tier not standard)."
findings.append(report)
return findings


@@ -12,10 +12,10 @@ class defender_ensure_defender_for_keyvault_is_on(Check):
report.subscription = subscription
report.resource_name = "Defender plan KeyVaults"
report.resource_id = pricings["KeyVaults"].resource_id
-report.status_extended = f"Defender plan Defender for KeyVaults from subscription {subscription} is set to ON (pricing tier standard)"
+report.status_extended = f"Defender plan Defender for KeyVaults from subscription {subscription} is set to ON (pricing tier standard)."
if pricings["KeyVaults"].pricing_tier != "Standard":
report.status = "FAIL"
-report.status_extended = f"Defender plan Defender for KeyVaults from subscription {subscription} is set to OFF (pricing tier not standard)"
+report.status_extended = f"Defender plan Defender for KeyVaults from subscription {subscription} is set to OFF (pricing tier not standard)."
findings.append(report)
return findings


@@ -14,10 +14,10 @@ class defender_ensure_defender_for_os_relational_databases_is_on(Check):
report.resource_id = pricings[
"OpenSourceRelationalDatabases"
].resource_id
-report.status_extended = f"Defender plan Defender for Open-Source Relational Databases from subscription {subscription} is set to ON (pricing tier standard)"
+report.status_extended = f"Defender plan Defender for Open-Source Relational Databases from subscription {subscription} is set to ON (pricing tier standard)."
if pricings["OpenSourceRelationalDatabases"].pricing_tier != "Standard":
report.status = "FAIL"
-report.status_extended = f"Defender plan Defender for Open-Source Relational Databases from subscription {subscription} is set to OFF (pricing tier not standard)"
+report.status_extended = f"Defender plan Defender for Open-Source Relational Databases from subscription {subscription} is set to OFF (pricing tier not standard)."
findings.append(report)
return findings


@@ -12,10 +12,10 @@ class defender_ensure_defender_for_server_is_on(Check):
report.subscription = subscription
report.resource_name = "Defender plan Servers"
report.resource_id = pricings["VirtualMachines"].resource_id
-report.status_extended = f"Defender plan Defender for Servers from subscription {subscription} is set to ON (pricing tier standard)"
+report.status_extended = f"Defender plan Defender for Servers from subscription {subscription} is set to ON (pricing tier standard)."
if pricings["VirtualMachines"].pricing_tier != "Standard":
report.status = "FAIL"
-report.status_extended = f"Defender plan Defender for Servers from subscription {subscription} is set to OFF (pricing tier not standard)"
+report.status_extended = f"Defender plan Defender for Servers from subscription {subscription} is set to OFF (pricing tier not standard)."
findings.append(report)
return findings


@@ -12,10 +12,10 @@ class defender_ensure_defender_for_sql_servers_is_on(Check):
report.subscription = subscription
report.resource_name = "Defender plan SQL Server VMs"
report.resource_id = pricings["SqlServerVirtualMachines"].resource_id
-report.status_extended = f"Defender plan Defender for SQL Server VMs from subscription {subscription} is set to ON (pricing tier standard)"
+report.status_extended = f"Defender plan Defender for SQL Server VMs from subscription {subscription} is set to ON (pricing tier standard)."
if pricings["SqlServerVirtualMachines"].pricing_tier != "Standard":
report.status = "FAIL"
-report.status_extended = f"Defender plan Defender for SQL Server VMs from subscription {subscription} is set to OFF (pricing tier not standard)"
+report.status_extended = f"Defender plan Defender for SQL Server VMs from subscription {subscription} is set to OFF (pricing tier not standard)."
findings.append(report)
return findings


@@ -12,10 +12,10 @@ class defender_ensure_defender_for_storage_is_on(Check):
report.subscription = subscription
report.resource_name = "Defender plan Storage Accounts"
report.resource_id = pricings["StorageAccounts"].resource_id
-report.status_extended = f"Defender plan Defender for Storage Accounts from subscription {subscription} is set to ON (pricing tier standard)"
+report.status_extended = f"Defender plan Defender for Storage Accounts from subscription {subscription} is set to ON (pricing tier standard)."
if pricings["StorageAccounts"].pricing_tier != "Standard":
report.status = "FAIL"
-report.status_extended = f"Defender plan Defender for Storage Accounts from subscription {subscription} is set to OFF (pricing tier not standard)"
+report.status_extended = f"Defender plan Defender for Storage Accounts from subscription {subscription} is set to OFF (pricing tier not standard)."
findings.append(report)
return findings


@@ -14,14 +14,14 @@ class iam_subscription_roles_owner_custom_not_created(Check):
report.resource_id = role.id
report.resource_name = role.name
report.status = "PASS"
-report.status_extended = f"Role {role.name} from subscription {subscription} is not a custom owner role"
+report.status_extended = f"Role {role.name} from subscription {subscription} is not a custom owner role."
for scope in role.assignable_scopes:
if search("^/.*", scope):
for permission_item in role.permissions:
for action in permission_item.actions:
if action == "*":
report.status = "FAIL"
-report.status_extended = f"Role {role.name} from subscription {subscription} is a custom owner role"
+report.status_extended = f"Role {role.name} from subscription {subscription} is a custom owner role."
break
findings.append(report)


@@ -10,14 +10,14 @@ class sqlserver_auditing_enabled(Check):
report = Check_Report_Azure(self.metadata())
report.subscription = subscription
report.status = "PASS"
-report.status_extended = f"SQL Server {sql_server.name} from subscription {subscription} has a auditing policy configured"
+report.status_extended = f"SQL Server {sql_server.name} from subscription {subscription} has a auditing policy configured."
report.resource_name = sql_server.name
report.resource_id = sql_server.id
for auditing_policy in sql_server.auditing_policies:
if auditing_policy.state == "Disabled":
report.status = "FAIL"
-report.status_extended = f"SQL Server {sql_server.name} from subscription {subscription} does not have any auditing policy configured"
+report.status_extended = f"SQL Server {sql_server.name} from subscription {subscription} does not have any auditing policy configured."
break
findings.append(report)


@@ -10,7 +10,7 @@ class sqlserver_azuread_administrator_enabled(Check):
report = Check_Report_Azure(self.metadata())
report.subscription = subscription
report.status = "PASS"
-report.status_extended = f"SQL Server {sql_server.name} from subscription {subscription} has an Active Directory administrator"
+report.status_extended = f"SQL Server {sql_server.name} from subscription {subscription} has an Active Directory administrator."
report.resource_name = sql_server.name
report.resource_id = sql_server.id
@@ -19,7 +19,7 @@ class sqlserver_azuread_administrator_enabled(Check):
or sql_server.administrators.administrator_type != "ActiveDirectory"
):
report.status = "FAIL"
-report.status_extended = f"SQL Server {sql_server.name} from subscription {subscription} does not have an Active Directory administrator"
+report.status_extended = f"SQL Server {sql_server.name} from subscription {subscription} does not have an Active Directory administrator."
findings.append(report)


@@ -10,7 +10,7 @@ class sqlserver_unrestricted_inbound_access(Check):
report = Check_Report_Azure(self.metadata())
report.subscription = subscription
report.status = "PASS"
-report.status_extended = f"SQL Server {sql_server.name} from subscription {subscription} does not have firewall rules allowing 0.0.0.0-255.255.255.255"
+report.status_extended = f"SQL Server {sql_server.name} from subscription {subscription} does not have firewall rules allowing 0.0.0.0-255.255.255.255."
report.resource_name = sql_server.name
report.resource_id = sql_server.id
@@ -20,7 +20,7 @@ class sqlserver_unrestricted_inbound_access(Check):
and firewall_rule.end_ip_address == "255.255.255.255"
):
report.status = "FAIL"
-report.status_extended = f"SQL Server {sql_server.name} from subscription {subscription} has firewall rules allowing 0.0.0.0-255.255.255.255"
+report.status_extended = f"SQL Server {sql_server.name} from subscription {subscription} has firewall rules allowing 0.0.0.0-255.255.255.255."
break
findings.append(report)
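The firewall test in the hunk above fails a server whose rule spans the entire IPv4 address space. Pulled out into a standalone predicate (field names assumed from the diff context, not verified against Prowler's `SQL_Server` model), it might look like:

```python
# A rule from 0.0.0.0 to 255.255.255.255 covers every IPv4 address,
# so the server effectively accepts connections from anywhere.
# Field names are assumptions based on the diff above.
def allows_any_ip(firewall_rules: list) -> bool:
    return any(
        rule["start_ip_address"] == "0.0.0.0"
        and rule["end_ip_address"] == "255.255.255.255"
        for rule in firewall_rules
    )
```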


@@ -10,12 +10,12 @@ class storage_blob_public_access_level_is_disabled(Check):
report = Check_Report_Azure(self.metadata())
report.subscription = subscription
report.status = "FAIL"
-report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has allow blob public access enabled"
+report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has allow blob public access enabled."
report.resource_name = storage_account.name
report.resource_id = storage_account.id
if not storage_account.allow_blob_public_access:
report.status = "PASS"
-report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has allow blob public access disabled"
+report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has allow blob public access disabled."
findings.append(report)


@@ -10,12 +10,12 @@ class storage_default_network_access_rule_is_denied(Check):
report = Check_Report_Azure(self.metadata())
report.subscription = subscription
report.status = "PASS"
-report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has network access rule set to Deny"
+report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has network access rule set to Deny."
report.resource_name = storage_account.name
report.resource_id = storage_account.id
if storage_account.network_rule_set.default_action == "Allow":
report.status = "FAIL"
-report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has network access rule set to Allow"
+report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has network access rule set to Allow."
findings.append(report)


@@ -10,12 +10,12 @@ class storage_ensure_azure_services_are_trusted_to_access_is_enabled(Check):
report = Check_Report_Azure(self.metadata())
report.subscription = subscription
report.status = "PASS"
-report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} allows trusted Microsoft services to access this storage account"
+report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} allows trusted Microsoft services to access this storage account."
report.resource_name = storage_account.name
report.resource_id = storage_account.id
if "AzureServices" not in storage_account.network_rule_set.bypass:
report.status = "FAIL"
-report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} does not allow trusted Microsoft services to access this storage account"
+report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} does not allow trusted Microsoft services to access this storage account."
findings.append(report)


@@ -10,12 +10,12 @@ class storage_ensure_encryption_with_customer_managed_keys(Check):
report = Check_Report_Azure(self.metadata())
report.subscription = subscription
report.status = "PASS"
-report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} encrypts with CMKs"
+report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} encrypts with CMKs."
report.resource_name = storage_account.name
report.resource_id = storage_account.id
if storage_account.encryption_type != "Microsoft.Keyvault":
report.status = "FAIL"
-report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} does not encrypt with CMKs"
+report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} does not encrypt with CMKs."
findings.append(report)


@@ -10,12 +10,12 @@ class storage_ensure_minimum_tls_version_12(Check):
report = Check_Report_Azure(self.metadata())
report.subscription = subscription
report.status = "PASS"
-report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has TLS version set to 1.2"
+report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has TLS version set to 1.2."
report.resource_name = storage_account.name
report.resource_id = storage_account.id
if storage_account.minimum_tls_version != "TLS1_2":
report.status = "FAIL"
-report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} does not have TLS version set to 1.2"
+report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} does not have TLS version set to 1.2."
findings.append(report)


@@ -10,12 +10,12 @@ class storage_infrastructure_encryption_is_enabled(Check):
report = Check_Report_Azure(self.metadata())
report.subscription = subscription
report.status = "PASS"
-report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has infrastructure encryption enabled"
+report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has infrastructure encryption enabled."
report.resource_name = storage_account.name
report.resource_id = storage_account.id
if not storage_account.infrastructure_encryption:
report.status = "FAIL"
-report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has infrastructure encryption disabled"
+report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has infrastructure encryption disabled."
findings.append(report)


@@ -10,12 +10,12 @@ class storage_secure_transfer_required_is_enabled(Check):
report = Check_Report_Azure(self.metadata())
report.subscription = subscription
report.status = "PASS"
-report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has secure transfer required enabled"
+report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has secure transfer required enabled."
report.resource_name = storage_account.name
report.resource_id = storage_account.id
if not storage_account.enable_https_traffic_only:
report.status = "FAIL"
-report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has secure transfer required disabled"
+report.status_extended = f"Storage account {storage_account.name} from subscription {subscription} has secure transfer required disabled."
findings.append(report)


@@ -12,12 +12,10 @@ class bigquery_dataset_cmk_encryption(Check):
report.resource_name = dataset.name
report.location = dataset.region
report.status = "PASS"
-report.status_extended = (
-f"Dataset {dataset.name} is encrypted with Customer-Managed Keys (CMKs)"
-)
+report.status_extended = f"Dataset {dataset.name} is encrypted with Customer-Managed Keys (CMKs)."
if not dataset.cmk_encryption:
report.status = "FAIL"
-report.status_extended = f"Dataset {dataset.name} is not encrypted with Customer-Managed Keys (CMKs)"
+report.status_extended = f"Dataset {dataset.name} is not encrypted with Customer-Managed Keys (CMKs)."
findings.append(report)
return findings


@@ -13,12 +13,12 @@ class bigquery_dataset_public_access(Check):
report.location = dataset.region
report.status = "PASS"
report.status_extended = (
-f"Dataset {dataset.name} is not publicly accessible"
+f"Dataset {dataset.name} is not publicly accessible."
)
if dataset.public:
report.status = "FAIL"
report.status_extended = (
-f"Dataset {dataset.name} is publicly accessible!"
+f"Dataset {dataset.name} is publicly accessible."
)
findings.append(report)


@@ -13,11 +13,11 @@ class bigquery_table_cmk_encryption(Check):
report.location = table.region
report.status = "PASS"
report.status_extended = (
-f"Table {table.name} is encrypted with Customer-Managed Keys (CMKs)"
+f"Table {table.name} is encrypted with Customer-Managed Keys (CMKs)."
)
if not table.cmk_encryption:
report.status = "FAIL"
-report.status_extended = f"Table {table.name} is not encrypted with Customer-Managed Keys (CMKs)"
+report.status_extended = f"Table {table.name} is not encrypted with Customer-Managed Keys (CMKs)."
findings.append(report)
return findings


@@ -13,11 +13,11 @@ class cloudsql_instance_automated_backups(Check):
report.location = instance.region
report.status = "PASS"
report.status_extended = (
-f"Database Instance {instance.name} has automated backups configured"
+f"Database Instance {instance.name} has automated backups configured."
)
if not instance.automated_backups:
report.status = "FAIL"
-report.status_extended = f"Database Instance {instance.name} does not have automated backups configured"
+report.status_extended = f"Database Instance {instance.name} does not have automated backups configured."
findings.append(report)
return findings


@@ -12,12 +12,12 @@ class cloudsql_instance_private_ip_assignment(Check):
report.resource_name = instance.name
report.location = instance.region
report.status = "PASS"
-report.status_extended = f"Database Instance {instance.name} does not have private IP assignments"
+report.status_extended = f"Database Instance {instance.name} does not have private IP assignments."
for address in instance.ip_addresses:
if address["type"] != "PRIVATE":
report.status = "FAIL"
report.status_extended = (
-f"Database Instance {instance.name} has public IP assignments"
+f"Database Instance {instance.name} has public IP assignments."
)
break
findings.append(report)


@@ -12,11 +12,11 @@ class cloudsql_instance_public_access(Check):
report.resource_name = instance.name
report.location = instance.region
report.status = "PASS"
-report.status_extended = f"Database Instance {instance.name} does not whitelist all Public IP Addresses"
+report.status_extended = f"Database Instance {instance.name} does not whitelist all Public IP Addresses."
for network in instance.authorized_networks:
if network["value"] == "0.0.0.0/0":
report.status = "FAIL"
-report.status_extended = f"Database Instance {instance.name} whitelist all Public IP Addresses"
+report.status_extended = f"Database Instance {instance.name} whitelist all Public IP Addresses."
findings.append(report)
return findings


@@ -13,14 +13,14 @@ class cloudsql_instance_sqlserver_contained_database_authentication_flag(Check):
report.resource_name = instance.name
report.location = instance.region
report.status = "PASS"
-report.status_extended = f"SQL Server Instance {instance.name} has 'contained database authentication' flag set to 'off'"
+report.status_extended = f"SQL Server Instance {instance.name} has 'contained database authentication' flag set to 'off'."
for flag in instance.flags:
if (
flag["name"] == "contained database authentication"
and flag["value"] == "on"
):
report.status = "FAIL"
-report.status_extended = f"SQL Server Instance {instance.name} has 'contained database authentication' flag set to 'on'"
+report.status_extended = f"SQL Server Instance {instance.name} has 'contained database authentication' flag set to 'on'."
break
findings.append(report)


@@ -13,11 +13,11 @@ class cloudsql_instance_sqlserver_trace_flag(Check):
report.resource_name = instance.name
report.location = instance.region
report.status = "PASS"
-report.status_extended = f"SQL Server Instance {instance.name} has '3625 (trace flag)' flag set to 'on'"
+report.status_extended = f"SQL Server Instance {instance.name} has '3625 (trace flag)' flag set to 'on'."
for flag in instance.flags:
if flag["name"] == "3625" and flag["value"] == "off":
report.status = "FAIL"
-report.status_extended = f"SQL Server Instance {instance.name} has '3625 (trace flag)' flag set to 'off'"
+report.status_extended = f"SQL Server Instance {instance.name} has '3625 (trace flag)' flag set to 'off'."
break
findings.append(report)


@@ -13,11 +13,11 @@ class cloudsql_instance_ssl_connections(Check):
report.location = instance.region
report.status = "PASS"
report.status_extended = (
-f"Database Instance {instance.name} requires SSL connections"
+f"Database Instance {instance.name} requires SSL connections."
)
if not instance.ssl:
report.status = "FAIL"
-report.status_extended = f"Database Instance {instance.name} does not require SSL connections"
+report.status_extended = f"Database Instance {instance.name} does not require SSL connections."
findings.append(report)
return findings


@@ -22,14 +22,14 @@ class cloudstorage_bucket_log_retention_policy_lock(Check):
report.location = bucket.region
report.status = "FAIL"
report.status_extended = (
-f"Log Sink Bucket {bucket.name} has no Retention Policy"
+f"Log Sink Bucket {bucket.name} has no Retention Policy."
)
if bucket.retention_policy:
report.status = "FAIL"
-report.status_extended = f"Log Sink Bucket {bucket.name} has no Retention Policy but without Bucket Lock"
+report.status_extended = f"Log Sink Bucket {bucket.name} has no Retention Policy but without Bucket Lock."
if bucket.retention_policy["isLocked"]:
report.status = "PASS"
-report.status_extended = f"Log Sink Bucket {bucket.name} has a Retention Policy with Bucket Lock"
+report.status_extended = f"Log Sink Bucket {bucket.name} has a Retention Policy with Bucket Lock."
findings.append(report)
return findings


@@ -14,10 +14,10 @@ class cloudstorage_bucket_public_access(Check):
report.resource_name = bucket.name
report.location = bucket.region
report.status = "PASS"
-report.status_extended = f"Bucket {bucket.name} is not publicly accessible"
+report.status_extended = f"Bucket {bucket.name} is not publicly accessible."
if bucket.public:
report.status = "FAIL"
-report.status_extended = f"Bucket {bucket.name} is publicly accessible!"
+report.status_extended = f"Bucket {bucket.name} is publicly accessible."
findings.append(report)
return findings


@@ -15,12 +15,12 @@ class cloudstorage_bucket_uniform_bucket_level_access(Check):
report.location = bucket.region
report.status = "PASS"
report.status_extended = (
-f"Bucket {bucket.name} has uniform Bucket Level Access enabled"
+f"Bucket {bucket.name} has uniform Bucket Level Access enabled."
)
if not bucket.uniform_bucket_level_access:
report.status = "FAIL"
report.status_extended = (
-f"Bucket {bucket.name} has uniform Bucket Level Access disabled"
+f"Bucket {bucket.name} has uniform Bucket Level Access disabled."
)
findings.append(report)


@@ -13,11 +13,11 @@ class compute_instance_confidential_computing_enabled(Check):
report.location = instance.zone
report.status = "PASS"
report.status_extended = (
-f"VM Instance {instance.name} has Confidential Computing enabled"
+f"VM Instance {instance.name} has Confidential Computing enabled."
)
if not instance.confidential_computing:
report.status = "FAIL"
-report.status_extended = f"VM Instance {instance.name} does not have Confidential Computing enabled"
+report.status_extended = f"VM Instance {instance.name} does not have Confidential Computing enabled."
findings.append(report)
return findings


@@ -12,7 +12,7 @@ class compute_instance_default_service_account_in_use(Check):
report.resource_name = instance.name
report.location = instance.zone
report.status = "PASS"
-report.status_extended = f"The default service account is not configured to be used with VM Instance {instance.name}"
+report.status_extended = f"The default service account is not configured to be used with VM Instance {instance.name}."
if (
any(
[
@@ -23,7 +23,7 @@ class compute_instance_default_service_account_in_use(Check):
and instance.name[:4] != "gke-"
):
report.status = "FAIL"
-report.status_extended = f"The default service account is configured to be used with VM Instance {instance.name}"
+report.status_extended = f"The default service account is configured to be used with VM Instance {instance.name}."
findings.append(report)
return findings
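The check above excludes GKE-managed nodes (names beginning with `gke-`) from the default-service-account rule. A standalone sketch of that condition — the email suffix and field names are assumptions based on the visible diff, not Prowler's verified model:

```python
# GCP's default compute service account has the form
# "<project-number>-compute@developer.gserviceaccount.com". GKE node VMs
# legitimately use it, so they are excluded, mirroring the check above.
def uses_default_service_account(instance_name: str, service_account_emails: list) -> bool:
    return any(
        email.endswith("-compute@developer.gserviceaccount.com")
        for email in service_account_emails
    ) and not instance_name.startswith("gke-")
```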


@@ -12,7 +12,7 @@ class compute_instance_default_service_account_in_use_with_full_api_access(Check
report.resource_name = instance.name
report.location = instance.zone
report.status = "PASS"
-report.status_extended = f"The VM Instance {instance.name} is not configured to use the default service account with full access to all cloud APIs "
+report.status_extended = f"The VM Instance {instance.name} is not configured to use the default service account with full access to all cloud APIs."
for service_account in instance.service_accounts:
if (
"-compute@developer.gserviceaccount.com" in service_account["email"]
@@ -21,7 +21,7 @@ class compute_instance_default_service_account_in_use_with_full_api_access(Check
and instance.name[:4] != "gke-"
):
report.status = "FAIL"
-report.status_extended = f"The VM Instance {instance.name} is configured to use the default service account with full access to all cloud APIs "
+report.status_extended = f"The VM Instance {instance.name} is configured to use the default service account with full access to all cloud APIs."
break
findings.append(report)


@@ -12,7 +12,7 @@ class compute_instance_encryption_with_csek_enabled(Check):
report.resource_name = instance.name
report.location = instance.zone
report.status = "FAIL"
-report.status_extended = f"The VM Instance {instance.name} has the following unencrypted disks: '{', '.join([i[0] for i in instance.disks_encryption if not i[1]])}'"
+report.status_extended = f"The VM Instance {instance.name} has the following unencrypted disks: '{', '.join([i[0] for i in instance.disks_encryption if not i[1]])}'."
if all([i[1] for i in instance.disks_encryption]):
report.status = "PASS"
report.status_extended = (


@@ -13,12 +13,12 @@ class compute_instance_ip_forwarding_is_enabled(Check):
report.location = instance.zone
report.status = "PASS"
report.status_extended = (
-f"The IP Forwarding of VM Instance {instance.name} is not enabled"
+f"The IP Forwarding of VM Instance {instance.name} is not enabled."
)
if instance.ip_forward and instance.name[:4] != "gke-":
report.status = "FAIL"
report.status_extended = (
-f"The IP Forwarding of VM Instance {instance.name} is enabled"
+f"The IP Forwarding of VM Instance {instance.name} is enabled."
)
findings.append(report)


@@ -12,9 +12,7 @@ class compute_instance_serial_ports_in_use(Check):
report.resource_name = instance.name
report.location = instance.zone
report.status = "PASS"
-report.status_extended = (
-f"VM Instance {instance.name} has Enable Connecting to Serial Ports off"
-)
+report.status_extended = f"VM Instance {instance.name} has Enable Connecting to Serial Ports off."
if instance.metadata.get("items"):
for item in instance.metadata["items"]:
if item["key"] == "serial-port-enable" and item["value"] in [
@@ -22,7 +20,7 @@ class compute_instance_serial_ports_in_use(Check):
"true",
]:
report.status = "FAIL"
-report.status_extended = f"VM Instance {instance.name} has Enable Connecting to Serial Ports set to on"
+report.status_extended = f"VM Instance {instance.name} has Enable Connecting to Serial Ports set to on."
break
findings.append(report)


@@ -12,13 +12,13 @@ class compute_instance_shielded_vm_enabled(Check):
report.resource_name = instance.name
report.location = instance.zone
report.status = "PASS"
-report.status_extended = f"VM Instance {instance.name} has vTPM or Integrity Monitoring set to on"
+report.status_extended = f"VM Instance {instance.name} has vTPM or Integrity Monitoring set to on."
if (
not instance.shielded_enabled_vtpm
or not instance.shielded_enabled_integrity_monitoring
):
report.status = "FAIL"
-report.status_extended = f"VM Instance {instance.name} doesn't have vTPM and Integrity Monitoring set to on"
+report.status_extended = f"VM Instance {instance.name} doesn't have vTPM and Integrity Monitoring set to on."
findings.append(report)
return findings


@@ -12,11 +12,11 @@ class compute_loadbalancer_logging_enabled(Check):
report.resource_name = lb.name
report.location = compute_client.region
report.status = "PASS"
-report.status_extended = f"LoadBalancer {lb.name} has logging enabled"
+report.status_extended = f"LoadBalancer {lb.name} has logging enabled."
if not lb.logging:
report.status = "FAIL"
report.status_extended = (
-f"LoadBalancer {lb.name} does not have logging enabled"
+f"LoadBalancer {lb.name} does not have logging enabled."
)
findings.append(report)


@@ -17,7 +17,7 @@ class compute_network_default_in_use(Check):
report.location = "global"
report.status = "FAIL"
report.status_extended = (
-f"Default network is in use in project {network.project_id}"
+f"Default network is in use in project {network.project_id}."
)
findings.append(report)
@@ -30,7 +30,7 @@ class compute_network_default_in_use(Check):
report.location = "global"
report.status = "PASS"
report.status_extended = (
-f"Default network does not exist in project {project}"
+f"Default network does not exist in project {project}."
)
return findings


@@ -14,13 +14,13 @@ class compute_network_dns_logging_enabled(Check):
report.location = compute_client.region
report.status = "FAIL"
report.status_extended = (
-f"Network {network.name} does not have DNS logging enabled"
+f"Network {network.name} does not have DNS logging enabled."
)
for policy in dns_client.policies:
if network.name in policy.networks and policy.logging:
report.status = "PASS"
report.status_extended = (
-f"Network {network.name} has DNS logging enabled"
+f"Network {network.name} has DNS logging enabled."
)
break
findings.append(report)


@@ -12,10 +12,10 @@ class compute_network_not_legacy(Check):
report.resource_name = network.name
report.location = compute_client.region
report.status = "PASS"
-report.status_extended = f"Network {network.name} is not legacy"
+report.status_extended = f"Network {network.name} is not legacy."
if network.subnet_mode == "legacy":
report.status = "FAIL"
-report.status_extended = f"Legacy network {network.name} exists"
+report.status_extended = f"Legacy network {network.name} exists."
findings.append(report)
return findings

View File

@@ -11,11 +11,11 @@ class compute_project_os_login_enabled(Check):
report.resource_id = project.id
report.location = "global"
report.status = "PASS"
report.status_extended = f"Project {project.id} has OS Login enabled"
report.status_extended = f"Project {project.id} has OS Login enabled."
if not project.enable_oslogin:
report.status = "FAIL"
report.status_extended = (
f"Project {project.id} does not have OS Login enabled"
f"Project {project.id} does not have OS Login enabled."
)
findings.append(report)

View File

@@ -12,10 +12,10 @@ class compute_subnet_flow_logs_enabled(Check):
report.resource_name = subnet.name
report.location = subnet.region
report.status = "PASS"
report.status_extended = f"Subnet {subnet.name} in network {subnet.network} has flow logs enabled"
report.status_extended = f"Subnet {subnet.name} in network {subnet.network} has flow logs enabled."
if not subnet.flow_logs:
report.status = "FAIL"
report.status_extended = f"Subnet {subnet.name} in network {subnet.network} does not have flow logs enabled"
report.status_extended = f"Subnet {subnet.name} in network {subnet.network} does not have flow logs enabled."
findings.append(report)
return findings

View File

@@ -13,11 +13,13 @@ class iam_account_access_approval_enabled(Check):
report.resource_id = project_id
report.location = accessapproval_client.region
report.status = "PASS"
report.status_extended = f"Project {project_id} has Access Approval enabled"
report.status_extended = (
f"Project {project_id} has Access Approval enabled."
)
if project_id not in accessapproval_client.settings:
report.status = "FAIL"
report.status_extended = (
f"Project {project_id} does not have Access Approval enabled"
f"Project {project_id} does not have Access Approval enabled."
)
findings.append(report)

View File

@@ -13,11 +13,11 @@ class iam_audit_logs_enabled(Check):
report.location = cloudresourcemanager_client.region
report.resource_id = project.id
report.status = "PASS"
report.status_extended = f"Audit Logs are enabled for project {project.id}"
report.status_extended = f"Audit Logs are enabled for project {project.id}."
if not project.audit_logging:
report.status = "FAIL"
report.status_extended = (
f"Audit Logs are not enabled for project {project.id}"
f"Audit Logs are not enabled for project {project.id}."
)
findings.append(report)

View File

@@ -15,12 +15,12 @@ class iam_organization_essential_contacts_configured(Check):
report.location = essentialcontacts_client.region
report.status = "FAIL"
report.status_extended = (
f"Organization {org.name} does not have essential contacts configured"
f"Organization {org.name} does not have essential contacts configured."
)
if org.contacts:
report.status = "PASS"
report.status_extended = (
f"Organization {org.name} has essential contacts configured"
f"Organization {org.name} has essential contacts configured."
)
findings.append(report)

View File

@@ -15,7 +15,7 @@ class iam_role_kms_enforce_separation_of_duties(Check):
report.location = cloudresourcemanager_client.region
report.resource_id = project
report.status = "PASS"
report.status_extended = f"Principle of separation of duties was enforced for KMS-Related Roles in project {project}"
report.status_extended = f"Principle of separation of duties was enforced for KMS-Related Roles in project {project}."
for binding in cloudresourcemanager_client.bindings:
if binding.project_id == project:
if "roles/cloudkms.admin" in binding.role:
@@ -30,7 +30,7 @@ class iam_role_kms_enforce_separation_of_duties(Check):
non_compliant_members.append(member)
if non_compliant_members:
report.status = "FAIL"
report.status_extended = f"Principle of separation of duties was not enforced for KMS-Related Roles in project {project} in members {','.join(non_compliant_members)}"
report.status_extended = f"Principle of separation of duties was not enforced for KMS-Related Roles in project {project} in members {','.join(non_compliant_members)}."
findings.append(report)
return findings

View File

@@ -14,7 +14,7 @@ class iam_role_sa_enforce_separation_of_duties(Check):
report.location = cloudresourcemanager_client.region
report.resource_id = project
report.status = "PASS"
report.status_extended = f"Principle of separation of duties was enforced for Service-Account Related Roles in project {project}"
report.status_extended = f"Principle of separation of duties was enforced for Service-Account Related Roles in project {project}."
for binding in cloudresourcemanager_client.bindings:
if binding.project_id == project and (
"roles/iam.serviceAccountUser" in binding.role
@@ -23,7 +23,7 @@ class iam_role_sa_enforce_separation_of_duties(Check):
non_compliant_members.extend(binding.members)
if non_compliant_members:
report.status = "FAIL"
report.status_extended = f"Principle of separation of duties was not enforced for Service-Account Related Roles in project {project} in members {','.join(non_compliant_members)}"
report.status_extended = f"Principle of separation of duties was not enforced for Service-Account Related Roles in project {project} in members {','.join(non_compliant_members)}."
findings.append(report)
return findings

View File

@@ -16,7 +16,7 @@ class iam_sa_no_administrative_privileges(Check):
report.location = iam_client.region
report.status = "PASS"
report.status_extended = (
f"Account {account.email} has no administrative privileges"
f"Account {account.email} has no administrative privileges."
)
for binding in cloudresourcemanager_client.bindings:
if f"serviceAccount:{account.email}" in binding.members and (
@@ -25,7 +25,7 @@ class iam_sa_no_administrative_privileges(Check):
or "editor" in binding.role.lower()
):
report.status = "FAIL"
report.status_extended = f"Account {account.email} has administrative privileges with {binding.role}"
report.status_extended = f"Account {account.email} has administrative privileges with {binding.role}."
findings.append(report)
return findings

View File

@@ -17,10 +17,10 @@ class iam_sa_user_managed_key_rotate_90_days(Check):
report.resource_name = account.email
report.location = iam_client.region
report.status = "PASS"
report.status_extended = f"User-managed key {key.name} for account {account.email} was rotated over the last 90 days ({last_rotated} days ago)"
report.status_extended = f"User-managed key {key.name} for account {account.email} was rotated over the last 90 days ({last_rotated} days ago)."
if last_rotated > 90:
report.status = "FAIL"
report.status_extended = f"User-managed key {key.name} for account {account.email} was not rotated over the last 90 days ({last_rotated} days ago)"
report.status_extended = f"User-managed key {key.name} for account {account.email} was not rotated over the last 90 days ({last_rotated} days ago)."
findings.append(report)
return findings

View File
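The rotation check above compares key age against a 90-day threshold. A standalone sketch of the age computation and the PASS/FAIL decision (hypothetical helper names; the real check's field names may differ):

```python
from datetime import datetime, timezone


def days_since_rotation(valid_after: str) -> int:
    # Key creation timestamp in the RFC 3339 form IAM APIs return.
    created = datetime.strptime(valid_after, "%Y-%m-%dT%H:%M:%SZ").replace(
        tzinfo=timezone.utc
    )
    return (datetime.now(timezone.utc) - created).days


def rotation_status(last_rotated: int) -> str:
    # Mirrors the check: FAIL only when the key is strictly older than 90 days.
    return "FAIL" if last_rotated > 90 else "PASS"
```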

@@ -17,7 +17,7 @@ class kms_key_not_publicly_accessible(Check):
if member == "allUsers" or member == "allAuthenticatedUsers":
report.status = "FAIL"
report.status_extended = (
f"Key {key.name} may be publicly accessible!"
f"Key {key.name} may be publicly accessible."
)
findings.append(report)

View File
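`kms_key_not_publicly_accessible` flags a key when any IAM binding member is one of the two public principals. The membership test reduces to (standalone sketch):

```python
# The two principals that open a resource to the public in GCP IAM.
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}


def key_is_public(members: list) -> bool:
    # True when any binding member grants public access to the key.
    return any(member in PUBLIC_MEMBERS for member in members)
```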

@@ -14,10 +14,10 @@ class logging_sink_created(Check):
report.resource_name = sink.name
report.location = logging_client.region
report.status = "FAIL"
report.status_extended = f"Sink {sink.name} is enabled but not exporting copies of all the log entries in project {sink.project_id}"
report.status_extended = f"Sink {sink.name} is enabled but not exporting copies of all the log entries in project {sink.project_id}."
if sink.filter == "all":
report.status = "PASS"
report.status_extended = f"Sink {sink.name} is enabled exporting copies of all the log entries in project {sink.project_id}"
report.status_extended = f"Sink {sink.name} is enabled exporting copies of all the log entries in project {sink.project_id}."
findings.append(report)
for project in logging_client.project_ids:
@@ -28,7 +28,7 @@ class logging_sink_created(Check):
report.resource_name = ""
report.location = logging_client.region
report.status = "FAIL"
report.status_extended = f"There are no logging sinks to export copies of all the log entries in project {project}"
report.status_extended = f"There are no logging sinks to export copies of all the log entries in project {project}."
findings.append(report)
return findings

View File

@@ -15,7 +15,7 @@ class serviceusage_cloudasset_inventory_enabled(Check):
report.location = serviceusage_client.region
report.status = "FAIL"
report.status_extended = (
f"Cloud Asset Inventory is not enabled in project {project_id}"
f"Cloud Asset Inventory is not enabled in project {project_id}."
)
for active_service in serviceusage_client.active_services.get(
project_id, []
@@ -23,7 +23,7 @@ class serviceusage_cloudasset_inventory_enabled(Check):
if active_service.name == "cloudasset.googleapis.com":
report.status = "PASS"
report.status_extended = (
f"Cloud Asset Inventory is enabled in project {project_id}"
f"Cloud Asset Inventory is enabled in project {project_id}."
)
break
findings.append(report)

View File

@@ -22,12 +22,12 @@ packages = [
{include = "prowler"}
]
readme = "README.md"
version = "3.8.1"
version = "3.8.2"
[tool.poetry.dependencies]
alive-progress = "3.1.4"
awsipranges = "0.3.3"
azure-identity = "1.13.0"
azure-identity = "1.14.0"
azure-mgmt-authorization = "4.0.0"
azure-mgmt-security = "5.0.0"
azure-mgmt-sql = "3.0.1"
@@ -38,10 +38,10 @@ boto3 = "1.26.165"
botocore = "1.29.165"
colorama = "0.4.6"
detect-secrets = "1.4.0"
google-api-python-client = "2.95.0"
google-api-python-client = "2.96.0"
google-auth-httplib2 = "^0.1.0"
mkdocs = {version = "1.5.2", optional = true}
mkdocs-material = {version = "9.1.20", optional = true}
mkdocs-material = {version = "9.1.21", optional = true}
msgraph-core = "0.2.2"
pydantic = "1.10.12"
python = "^3.9"
@@ -56,19 +56,20 @@ docs = ["mkdocs", "mkdocs-material"]
[tool.poetry.group.dev.dependencies]
bandit = "1.7.5"
black = "22.12.0"
coverage = "7.2.7"
coverage = "7.3.0"
docker = "6.1.3"
flake8 = "6.1.0"
freezegun = "1.2.2"
mock = "5.1.0"
moto = "4.1.14"
openapi-spec-validator = "0.6.0"
pylint = "2.17.5"
pytest = "7.4.0"
pytest-cov = "4.1.0"
pytest-randomly = "3.13.0"
pytest-xdist = "3.3.1"
safety = "2.3.5"
sure = "2.0.1"
vulture = "2.7"
vulture = "2.8"
[tool.poetry.scripts]
prowler = "prowler.__main__:prowler"

View File

@@ -1,5 +1,6 @@
from re import search
import boto3
import sure # noqa
from mock import patch
from moto import mock_iam, mock_sts
@@ -214,27 +215,30 @@ class Test_AWS_Provider:
credentials = assume_role_response["Credentials"]
# Test the response
# SessionToken
credentials["SessionToken"].should.have.length_of(356)
credentials["SessionToken"].startswith("FQoGZXIvYXdzE")
assert len(credentials["SessionToken"]) == 356
assert search(r"^FQoGZXIvYXdzE.*$", credentials["SessionToken"])
# AccessKeyId
credentials["AccessKeyId"].should.have.length_of(20)
credentials["AccessKeyId"].startswith("ASIA")
assert len(credentials["AccessKeyId"]) == 20
assert search(r"^ASIA.*$", credentials["AccessKeyId"])
# SecretAccessKey
credentials["SecretAccessKey"].should.have.length_of(40)
assert len(credentials["SecretAccessKey"]) == 40
# Assumed Role
assume_role_response["AssumedRoleUser"]["Arn"].should.equal(
f"arn:aws:sts::{ACCOUNT_ID}:assumed-role/{role_name}/{sessionName}"
assert (
assume_role_response["AssumedRoleUser"]["Arn"]
== f"arn:aws:sts::{ACCOUNT_ID}:assumed-role/{role_name}/{sessionName}"
)
# AssumedRoleUser
assert assume_role_response["AssumedRoleUser"]["AssumedRoleId"].startswith(
"AROA"
assert search(
r"^AROA.*$", assume_role_response["AssumedRoleUser"]["AssumedRoleId"]
)
assert assume_role_response["AssumedRoleUser"]["AssumedRoleId"].endswith(
":" + sessionName
assert search(
rf"^.*:{sessionName}$",
assume_role_response["AssumedRoleUser"]["AssumedRoleId"],
)
assume_role_response["AssumedRoleUser"][
"AssumedRoleId"
].should.have.length_of(21 + 1 + len(sessionName))
assert len(
assume_role_response["AssumedRoleUser"]["AssumedRoleId"]
) == 21 + 1 + len(sessionName)
@mock_iam
@mock_sts
@@ -301,27 +305,30 @@ class Test_AWS_Provider:
credentials = assume_role_response["Credentials"]
# Test the response
# SessionToken
credentials["SessionToken"].should.have.length_of(356)
credentials["SessionToken"].startswith("FQoGZXIvYXdzE")
assert len(credentials["SessionToken"]) == 356
assert search(r"^FQoGZXIvYXdzE.*$", credentials["SessionToken"])
# AccessKeyId
credentials["AccessKeyId"].should.have.length_of(20)
credentials["AccessKeyId"].startswith("ASIA")
assert len(credentials["AccessKeyId"]) == 20
assert search(r"^ASIA.*$", credentials["AccessKeyId"])
# SecretAccessKey
credentials["SecretAccessKey"].should.have.length_of(40)
assert len(credentials["SecretAccessKey"]) == 40
# Assumed Role
assume_role_response["AssumedRoleUser"]["Arn"].should.equal(
f"arn:aws:sts::{ACCOUNT_ID}:assumed-role/{role_name}/{sessionName}"
assert (
assume_role_response["AssumedRoleUser"]["Arn"]
== f"arn:aws:sts::{ACCOUNT_ID}:assumed-role/{role_name}/{sessionName}"
)
# AssumedRoleUser
assert assume_role_response["AssumedRoleUser"]["AssumedRoleId"].startswith(
"AROA"
assert search(
r"^AROA.*$", assume_role_response["AssumedRoleUser"]["AssumedRoleId"]
)
assert assume_role_response["AssumedRoleUser"]["AssumedRoleId"].endswith(
":" + sessionName
)
assume_role_response["AssumedRoleUser"]["AssumedRoleId"].should.have.length_of(
21 + 1 + len(sessionName)
assert search(
rf"^.*:{sessionName}$",
assume_role_response["AssumedRoleUser"]["AssumedRoleId"],
)
assert len(
assume_role_response["AssumedRoleUser"]["AssumedRoleId"]
) == 21 + 1 + len(sessionName)
@mock_iam
@mock_sts
@@ -390,27 +397,30 @@ class Test_AWS_Provider:
credentials = assume_role_response["Credentials"]
# Test the response
# SessionToken
credentials["SessionToken"].should.have.length_of(356)
credentials["SessionToken"].startswith("FQoGZXIvYXdzE")
assert len(credentials["SessionToken"]) == 356
assert search(r"^FQoGZXIvYXdzE.*$", credentials["SessionToken"])
# AccessKeyId
credentials["AccessKeyId"].should.have.length_of(20)
credentials["AccessKeyId"].startswith("ASIA")
assert len(credentials["AccessKeyId"]) == 20
assert search(r"^ASIA.*$", credentials["AccessKeyId"])
# SecretAccessKey
credentials["SecretAccessKey"].should.have.length_of(40)
assert len(credentials["SecretAccessKey"]) == 40
# Assumed Role
assume_role_response["AssumedRoleUser"]["Arn"].should.equal(
f"arn:aws:sts::{ACCOUNT_ID}:assumed-role/{role_name}/{sessionName}"
assert (
assume_role_response["AssumedRoleUser"]["Arn"]
== f"arn:aws:sts::{ACCOUNT_ID}:assumed-role/{role_name}/{sessionName}"
)
# AssumedRoleUser
assert assume_role_response["AssumedRoleUser"]["AssumedRoleId"].startswith(
"AROA"
assert search(
r"^AROA.*$", assume_role_response["AssumedRoleUser"]["AssumedRoleId"]
)
assert assume_role_response["AssumedRoleUser"]["AssumedRoleId"].endswith(
":" + sessionName
)
assume_role_response["AssumedRoleUser"]["AssumedRoleId"].should.have.length_of(
21 + 1 + len(sessionName)
assert search(
rf"^.*:{sessionName}$",
assume_role_response["AssumedRoleUser"]["AssumedRoleId"],
)
assert len(
assume_role_response["AssumedRoleUser"]["AssumedRoleId"]
) == 21 + 1 + len(sessionName)
def test_generate_regional_clients(self):
# New Boto3 session with the previously create user

View File
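The `sure`-to-`assert` migration above is mostly mechanical, with one real fix folded in: the old `credentials[...].startswith(...)` lines were bare expressions whose boolean result was discarded, so they never asserted anything, while the new `assert search(...)` form actually checks the prefix. Side by side (standalone sketch with a dummy token):

```python
from re import search

# Dummy 356-character session token with the expected prefix.
session_token = "FQoGZXIvYXdzE" + "x" * 343

# sure style (removed):
#   credentials["SessionToken"].should.have.length_of(356)
#   credentials["SessionToken"].startswith("FQoGZXIvYXdzE")  # no-op: result discarded
# plain assert style (added):
assert len(session_token) == 356
assert search(r"^FQoGZXIvYXdzE.*$", session_token)
```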

@@ -1,4 +1,3 @@
import sure # noqa
from pytest import raises
from prowler.providers.aws.lib.arn.arn import is_valid_arn, parse_iam_credentials_arn
@@ -250,12 +249,12 @@ class Test_ARN_Parsing:
for test in test_cases:
input_arn = test["input_arn"]
parsed_arn = parse_iam_credentials_arn(input_arn)
parsed_arn.partition.should.equal(test["expected"]["partition"])
parsed_arn.service.should.equal(test["expected"]["service"])
parsed_arn.region.should.equal(test["expected"]["region"])
parsed_arn.account_id.should.equal(test["expected"]["account_id"])
parsed_arn.resource_type.should.equal(test["expected"]["resource_type"])
parsed_arn.resource.should.equal(test["expected"]["resource"])
assert parsed_arn.partition == test["expected"]["partition"]
assert parsed_arn.service == test["expected"]["service"]
assert parsed_arn.region == test["expected"]["region"]
assert parsed_arn.account_id == test["expected"]["account_id"]
assert parsed_arn.resource_type == test["expected"]["resource_type"]
assert parsed_arn.resource == test["expected"]["resource"]
def test_iam_credentials_arn_parsing_raising_RoleArnParsingFailedMissingFields(
self,

View File

@@ -1,7 +1,6 @@
import json
import boto3
import sure # noqa
from moto import mock_iam, mock_organizations, mock_sts
from prowler.providers.aws.lib.organizations.organizations import (
@@ -52,10 +51,11 @@ class Test_AWS_Organizations:
org = get_organizations_metadata(account_id, assumed_role)
org.account_details_email.should.equal(mockemail)
org.account_details_name.should.equal(mockname)
org.account_details_arn.should.equal(
f"arn:aws:organizations::{AWS_ACCOUNT_NUMBER}:account/{org_id}/{account_id}"
assert org.account_details_email == mockemail
assert org.account_details_name == mockname
assert (
org.account_details_arn
== f"arn:aws:organizations::{AWS_ACCOUNT_NUMBER}:account/{org_id}/{account_id}"
)
org.account_details_org.should.equal(org_id)
org.account_details_tags.should.equal("key:value,")
assert org.account_details_org == org_id
assert org.account_details_tags == "key:value,"

View File

@@ -7,6 +7,7 @@ NON_TRUSTED_AWS_ACCOUNT_NUMBER = "111222333444"
class Test_policy_condition_parser:
# Test lowercase context key name --> aws
def test_condition_parser_string_equals_aws_SourceAccount_list(self):
condition_statement = {
"StringEquals": {"aws:SourceAccount": [TRUSTED_AWS_ACCOUNT_NUMBER]}
@@ -633,3 +634,631 @@ class Test_policy_condition_parser:
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
# Test uppercase context key name --> AWS
def test_condition_parser_string_equals_AWS_SourceAccount_list(self):
condition_statement = {
"StringEquals": {"AWS:SourceAccount": [TRUSTED_AWS_ACCOUNT_NUMBER]}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
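The uppercase-key tests added here pin down the fix from #2726: IAM condition context keys are case-insensitive, so `AWS:SourceAccount` must take the same code path as `aws:SourceAccount`. A minimal sketch of a case-insensitive key lookup (hypothetical helper, not the library's actual implementation):

```python
def get_condition_values(condition_operator: dict, key: str) -> list:
    # Condition context keys compare case-insensitively, so normalize both
    # sides before matching; values may be a bare string or a list.
    for candidate, value in condition_operator.items():
        if candidate.lower() == key.lower():
            return value if isinstance(value, list) else [value]
    return []
```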
def test_condition_parser_string_equals_AWS_SourceAccount_list_not_valid(self):
condition_statement = {
"StringEquals": {
"AWS:SourceAccount": [
TRUSTED_AWS_ACCOUNT_NUMBER,
NON_TRUSTED_AWS_ACCOUNT_NUMBER,
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_AWS_SourceAccount_str(self):
condition_statement = {
"StringEquals": {"AWS:SourceAccount": TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_AWS_SourceAccount_str_not_valid(self):
condition_statement = {
"StringEquals": {"AWS:SourceAccount": NON_TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_SourceAccount_list(self):
condition_statement = {
"StringLike": {"AWS:SourceAccount": [TRUSTED_AWS_ACCOUNT_NUMBER]}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_SourceAccount_list_not_valid(self):
condition_statement = {
"StringLike": {
"AWS:SourceAccount": [
TRUSTED_AWS_ACCOUNT_NUMBER,
NON_TRUSTED_AWS_ACCOUNT_NUMBER,
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_SourceAccount_str(self):
condition_statement = {
"StringLike": {"AWS:SourceAccount": TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_SourceAccount_str_not_valid(self):
condition_statement = {
"StringLike": {"AWS:SourceAccount": NON_TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_AWS_SourceOwner_str(self):
condition_statement = {
"StringEquals": {"AWS:SourceOwner": TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_AWS_SourceOwner_str_not_valid(self):
condition_statement = {
"StringEquals": {"AWS:SourceOwner": NON_TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_AWS_SourceOwner_list(self):
condition_statement = {
"StringEquals": {"AWS:SourceOwner": [TRUSTED_AWS_ACCOUNT_NUMBER]}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_AWS_SourceOwner_list_not_valid(self):
condition_statement = {
"StringEquals": {
"AWS:SourceOwner": [
TRUSTED_AWS_ACCOUNT_NUMBER,
NON_TRUSTED_AWS_ACCOUNT_NUMBER,
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_SourceOwner_list(self):
condition_statement = {
"StringLike": {"AWS:SourceOwner": [TRUSTED_AWS_ACCOUNT_NUMBER]}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_SourceOwner_list_not_valid(self):
condition_statement = {
"StringLike": {
"AWS:SourceOwner": [
TRUSTED_AWS_ACCOUNT_NUMBER,
NON_TRUSTED_AWS_ACCOUNT_NUMBER,
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_SourceOwner_str(self):
condition_statement = {
"StringLike": {"AWS:SourceOwner": TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_SourceOwner_str_not_valid(self):
condition_statement = {
"StringLike": {"AWS:SourceOwner": NON_TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_S3_ResourceAccount_list(self):
condition_statement = {
"StringEquals": {"S3:ResourceAccount": [TRUSTED_AWS_ACCOUNT_NUMBER]}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_S3_ResourceAccount_list_not_valid(self):
condition_statement = {
"StringEquals": {
"S3:ResourceAccount": [
TRUSTED_AWS_ACCOUNT_NUMBER,
NON_TRUSTED_AWS_ACCOUNT_NUMBER,
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_S3_ResourceAccount_str(self):
condition_statement = {
"StringEquals": {"S3:ResourceAccount": TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_S3_ResourceAccount_str_not_valid(self):
condition_statement = {
"StringEquals": {"S3:ResourceAccount": NON_TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_AWS_PrincipalAccount_list(self):
condition_statement = {
"StringEquals": {"AWS:PrincipalAccount": [TRUSTED_AWS_ACCOUNT_NUMBER]}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_AWS_PrincipalAccount_list_not_valid(self):
condition_statement = {
"StringEquals": {
"AWS:PrincipalAccount": [
TRUSTED_AWS_ACCOUNT_NUMBER,
NON_TRUSTED_AWS_ACCOUNT_NUMBER,
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_AWS_PrincipalAccount_str(self):
condition_statement = {
"StringEquals": {"AWS:PrincipalAccount": TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_AWS_PrincipalAccount_str_not_valid(self):
condition_statement = {
"StringEquals": {"AWS:PrincipalAccount": NON_TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_PrincipalAccount_list(self):
condition_statement = {
"StringLike": {"AWS:PrincipalAccount": [TRUSTED_AWS_ACCOUNT_NUMBER]}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_PrincipalAccount_list_not_valid(self):
condition_statement = {
"StringLike": {
"AWS:PrincipalAccount": [
TRUSTED_AWS_ACCOUNT_NUMBER,
NON_TRUSTED_AWS_ACCOUNT_NUMBER,
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_PrincipalAccount_str(self):
condition_statement = {
"StringLike": {"AWS:PrincipalAccount": TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_PrincipalAccount_str_not_valid(self):
condition_statement = {
"StringLike": {"AWS:PrincipalAccount": NON_TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_like_AWS_SourceArn_list(self):
condition_statement = {
"ArnLike": {
"AWS:SourceArn": [
f"arn:aws:cloudtrail:*:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/*"
]
}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_like_AWS_SourceArn_list_not_valid(self):
condition_statement = {
"ArnLike": {
"AWS:SourceArn": [
f"arn:aws:cloudtrail:*:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/*",
f"arn:aws:cloudtrail:*:{NON_TRUSTED_AWS_ACCOUNT_NUMBER}:trail/*",
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_like_AWS_SourceArn_str(self):
condition_statement = {
"ArnLike": {
"AWS:SourceArn": f"arn:aws:cloudtrail:*:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/*"
}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_like_AWS_SourceArn_str_not_valid(self):
condition_statement = {
"ArnLike": {
"AWS:SourceArn": f"arn:aws:cloudtrail:*:{NON_TRUSTED_AWS_ACCOUNT_NUMBER}:trail/*"
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
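For the `ArnLike`/`ArnEquals` cases that follow, the account number is embedded inside an ARN rather than given as a bare value, so a check has to pull the account field out before comparing. The account id is the fifth colon-separated field of an ARN (standalone sketch, hypothetical helper):

```python
def account_from_arn(arn: str) -> str:
    # arn:partition:service:region:account-id:resource -> index 4
    return arn.split(":")[4]
```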
def test_condition_parser_arn_like_AWS_PrincipalArn_list(self):
condition_statement = {
"ArnLike": {
"AWS:PrincipalArn": [
f"arn:aws:cloudtrail:*:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/*"
]
}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_like_AWS_PrincipalArn_list_not_valid(self):
condition_statement = {
"ArnLike": {
"AWS:PrincipalArn": [
f"arn:aws:cloudtrail:*:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/*",
f"arn:aws:cloudtrail:*:{NON_TRUSTED_AWS_ACCOUNT_NUMBER}:trail/*",
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_like_AWS_PrincipalArn_str(self):
condition_statement = {
"ArnLike": {
"AWS:PrincipalArn": f"arn:aws:cloudtrail:*:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/*"
}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_like_AWS_PrincipalArn_str_not_valid(self):
condition_statement = {
"ArnLike": {
"AWS:PrincipalArn": f"arn:aws:cloudtrail:*:{NON_TRUSTED_AWS_ACCOUNT_NUMBER}:trail/*"
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_equals_AWS_SourceArn_list(self):
condition_statement = {
"ArnEquals": {
"AWS:SourceArn": [
f"arn:aws:cloudtrail:eu-west-1:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test"
]
}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_equals_AWS_SourceArn_list_not_valid(self):
condition_statement = {
"ArnEquals": {
"AWS:SourceArn": [
f"arn:aws:cloudtrail:eu-west-1:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test",
f"arn:aws:cloudtrail:eu-west-1:{NON_TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test",
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_equals_AWS_SourceArn_str(self):
condition_statement = {
"ArnEquals": {
"AWS:SourceArn": f"arn:aws:cloudtrail:eu-west-1:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test"
}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_equals_AWS_SourceArn_str_not_valid(self):
condition_statement = {
"ArnEquals": {
"AWS:SourceArn": f"arn:aws:cloudtrail:eu-west-1:{NON_TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test"
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_equals_AWS_PrincipalArn_list(self):
condition_statement = {
"ArnEquals": {
"AWS:PrincipalArn": [
f"arn:aws:cloudtrail:eu-west-1:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test"
]
}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_equals_AWS_PrincipalArn_list_not_valid(self):
condition_statement = {
"ArnEquals": {
"AWS:PrincipalArn": [
f"arn:aws:cloudtrail:eu-west-1:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test",
f"arn:aws:cloudtrail:eu-west-1:{NON_TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test",
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_equals_AWS_PrincipalArn_str(self):
condition_statement = {
"ArnEquals": {
"AWS:PrincipalArn": f"arn:aws:cloudtrail:eu-west-1:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test"
}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_arn_equals_AWS_PrincipalArn_str_not_valid(self):
condition_statement = {
"ArnEquals": {
"AWS:PrincipalArn": f"arn:aws:cloudtrail:eu-west-1:{NON_TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test"
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_SourceArn_list(self):
condition_statement = {
"StringLike": {
"AWS:SourceArn": [
f"arn:aws:cloudtrail:eu-west-1:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test"
]
}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_SourceArn_list_not_valid(self):
condition_statement = {
"StringLike": {
"AWS:SourceArn": [
f"arn:aws:cloudtrail:eu-west-1:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test",
f"arn:aws:cloudtrail:eu-west-1:{NON_TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test",
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_SourceArn_str(self):
condition_statement = {
"StringLike": {
"AWS:SourceArn": f"arn:aws:cloudtrail:eu-west-1:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test"
}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_SourceArn_str_not_valid(self):
condition_statement = {
"StringLike": {
"AWS:SourceArn": f"arn:aws:cloudtrail:eu-west-1:{NON_TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test"
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_PrincipalArn_list(self):
condition_statement = {
"StringLike": {
"AWS:PrincipalArn": [
f"arn:aws:cloudtrail:eu-west-1:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test"
]
}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_PrincipalArn_list_not_valid(self):
condition_statement = {
"StringLike": {
"AWS:PrincipalArn": [
f"arn:aws:cloudtrail:eu-west-1:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test",
f"arn:aws:cloudtrail:eu-west-1:{NON_TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test",
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_PrincipalArn_str(self):
condition_statement = {
"StringLike": {
"AWS:PrincipalArn": f"arn:aws:cloudtrail:eu-west-1:{TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test"
}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_PrincipalArn_str_not_valid(self):
condition_statement = {
"StringLike": {
"AWS:PrincipalArn": f"arn:aws:cloudtrail:eu-west-1:{NON_TRUSTED_AWS_ACCOUNT_NUMBER}:trail/test"
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_AWS_ResourceAccount_list(self):
condition_statement = {
"StringEquals": {"AWS:ResourceAccount": [TRUSTED_AWS_ACCOUNT_NUMBER]}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_AWS_ResourceAccount_list_not_valid(self):
condition_statement = {
"StringEquals": {
"AWS:ResourceAccount": [
TRUSTED_AWS_ACCOUNT_NUMBER,
NON_TRUSTED_AWS_ACCOUNT_NUMBER,
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_AWS_ResourceAccount_str(self):
condition_statement = {
"StringEquals": {"AWS:ResourceAccount": TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_equals_AWS_ResourceAccount_str_not_valid(self):
condition_statement = {
"StringEquals": {"AWS:ResourceAccount": NON_TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_ResourceAccount_list(self):
condition_statement = {
"StringLike": {"AWS:ResourceAccount": [TRUSTED_AWS_ACCOUNT_NUMBER]}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_ResourceAccount_list_not_valid(self):
condition_statement = {
"StringLike": {
"AWS:ResourceAccount": [
TRUSTED_AWS_ACCOUNT_NUMBER,
NON_TRUSTED_AWS_ACCOUNT_NUMBER,
]
}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_ResourceAccount_str(self):
condition_statement = {
"StringLike": {"AWS:ResourceAccount": TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
def test_condition_parser_string_like_AWS_ResourceAccount_str_not_valid(self):
condition_statement = {
"StringLike": {"AWS:ResourceAccount": NON_TRUSTED_AWS_ACCOUNT_NUMBER}
}
assert not is_account_only_allowed_in_condition(
condition_statement, TRUSTED_AWS_ACCOUNT_NUMBER
)
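The behaviors exercised above can be summarized with a minimal sketch of such a condition parser. This is a hypothetical simplification for illustration, not Prowler's actual implementation: condition context keys are matched case-insensitively, values may be a string or a list, and the statement is considered safe only when every value references the trusted account.

```python
ALLOWED_OPERATORS = {"StringEquals", "StringLike", "ArnEquals", "ArnLike"}
# Lowercased because IAM condition context keys are case-insensitive.
ALLOWED_KEYS = {
    "aws:sourcearn",
    "aws:principalarn",
    "aws:sourceaccount",
    "aws:principalaccount",
    "aws:resourceaccount",
    "aws:sourceowner",
}


def is_account_only_allowed_in_condition_sketch(condition_statement, trusted_account):
    """Hypothetical sketch: True only if every value under a relevant
    operator/key pair belongs to the trusted account."""
    is_valid = False
    for operator, conditions in condition_statement.items():
        if operator not in ALLOWED_OPERATORS:
            continue
        for key, values in conditions.items():
            if key.lower() not in ALLOWED_KEYS:
                continue
            if isinstance(values, str):
                values = [values]
            # One untrusted ARN/account is enough to fail the whole statement.
            if all(trusted_account in value for value in values):
                is_valid = True
            else:
                return False
    return is_valid
```

With this shape, a statement mixing a trusted and a non-trusted ARN under `StringLike`/`AWS:SourceArn` fails, while a single trusted `AWS:ResourceAccount` passes, matching the assertions above.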


@@ -0,0 +1,150 @@
from unittest import mock
from boto3 import resource, session
from moto import mock_ec2
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
AWS_REGION = "us-east-1"
EXAMPLE_AMI_ID = "ami-12c6146b"
AWS_ACCOUNT_NUMBER = "123456789012"
class Test_ec2_instance_detailed_monitoring_enabled:
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=["us-east-1", "eu-west-1"],
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
@mock_ec2
def test_ec2_no_instances(self):
from prowler.providers.aws.services.ec2.ec2_service import EC2
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=current_audit_info,
), mock.patch(
"prowler.providers.aws.services.ec2.ec2_instance_detailed_monitoring_enabled.ec2_instance_detailed_monitoring_enabled.ec2_client",
new=EC2(current_audit_info),
):
# Test Check
from prowler.providers.aws.services.ec2.ec2_instance_detailed_monitoring_enabled.ec2_instance_detailed_monitoring_enabled import (
ec2_instance_detailed_monitoring_enabled,
)
check = ec2_instance_detailed_monitoring_enabled()
result = check.execute()
assert len(result) == 0
@mock_ec2
def test_instance_with_enhanced_monitoring_disabled(self):
ec2 = resource("ec2", region_name=AWS_REGION)
instance = ec2.create_instances(
ImageId=EXAMPLE_AMI_ID,
MinCount=1,
MaxCount=1,
Monitoring={"Enabled": False},
)[0]
from prowler.providers.aws.services.ec2.ec2_service import EC2
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=current_audit_info,
), mock.patch(
"prowler.providers.aws.services.ec2.ec2_instance_detailed_monitoring_enabled.ec2_instance_detailed_monitoring_enabled.ec2_client",
new=EC2(current_audit_info),
):
from prowler.providers.aws.services.ec2.ec2_instance_detailed_monitoring_enabled.ec2_instance_detailed_monitoring_enabled import (
ec2_instance_detailed_monitoring_enabled,
)
check = ec2_instance_detailed_monitoring_enabled()
result = check.execute()
assert len(result) == 1
assert result[0].status == "FAIL"
assert (
result[0].status_extended
== f"EC2 Instance {instance.id} does not have detailed monitoring enabled."
)
assert result[0].resource_id == instance.id
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:ec2:{AWS_REGION}:{current_audit_info.audited_account}:instance/{instance.id}"
)
@mock_ec2
def test_instance_with_enhanced_monitoring_enabled(self):
ec2 = resource("ec2", region_name=AWS_REGION)
instance = ec2.create_instances(
ImageId=EXAMPLE_AMI_ID,
MinCount=1,
MaxCount=1,
Monitoring={"Enabled": True},
)[0]
from prowler.providers.aws.services.ec2.ec2_service import EC2
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=current_audit_info,
), mock.patch(
"prowler.providers.aws.services.ec2.ec2_instance_detailed_monitoring_enabled.ec2_instance_detailed_monitoring_enabled.ec2_client",
new=EC2(current_audit_info),
) as ec2_client:
# Moto does not handle the Monitoring key in the instances, so we have to update it manually
ec2_client.instances[0].monitoring_state = "enabled"
from prowler.providers.aws.services.ec2.ec2_instance_detailed_monitoring_enabled.ec2_instance_detailed_monitoring_enabled import (
ec2_instance_detailed_monitoring_enabled,
)
check = ec2_instance_detailed_monitoring_enabled()
result = check.execute()
assert len(result) == 1
assert result[0].status == "PASS"
assert (
result[0].status_extended
== f"EC2 Instance {instance.id} has detailed monitoring enabled."
)
assert result[0].resource_id == instance.id
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:ec2:{AWS_REGION}:{current_audit_info.audited_account}:instance/{instance.id}"
)
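The pass/fail logic these tests assert against can be reduced to a small sketch. The `Instance` dataclass and function below are hypothetical stand-ins for illustration (Prowler's real check reads `monitoring_state` from its EC2 service model): an instance passes only when detailed monitoring is `"enabled"`, and the `status_extended` message ends with a dot in both branches.

```python
from dataclasses import dataclass


@dataclass
class Instance:
    # Hypothetical minimal model: only the fields the decision needs.
    id: str
    monitoring_state: str  # "enabled" or "disabled"


def detailed_monitoring_status(instance: Instance) -> tuple[str, str]:
    """Return (status, status_extended) for one instance."""
    if instance.monitoring_state == "enabled":
        return "PASS", f"EC2 Instance {instance.id} has detailed monitoring enabled."
    return "FAIL", f"EC2 Instance {instance.id} does not have detailed monitoring enabled."
```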


@@ -1,23 +1,37 @@
import pytest
from prowler.providers.aws.services.ec2.lib.security_groups import (
_is_cidr_public,
check_security_group,
)
TRANSPORT_PROTOCOL_TCP = "tcp"
TRANSPORT_PROTOCOL_ALL = "-1"
IP_V4_ALL_CIDRS = "0.0.0.0/0"
IP_V4_PUBLIC_CIDR = "84.28.12.2/32"
IP_V4_PRIVATE_CIDR = "10.1.0.0/16"
IP_V6_ALL_CIDRS = "::/0"
IP_V6_PUBLIC_CIDR = "cafe:cafe:cafe:cafe::/64"
IP_V6_PRIVATE_CIDR = "fc00::/7"
class Test_security_groups:
class Test_is_cidr_public:
def test__is_cidr_public_Public_IPv4_all_IPs_any_address_false(self):
cidr = IP_V4_ALL_CIDRS
assert _is_cidr_public(cidr)
def test__is_cidr_public_Public_IPv4__all_IPs_any_address_true(self):
cidr = IP_V4_ALL_CIDRS
assert _is_cidr_public(cidr, any_address=True)
def test__is_cidr_public_Public_IPv4_any_address_false(self):
cidr = IP_V4_PUBLIC_CIDR
assert _is_cidr_public(cidr)
def test__is_cidr_public_Public_IPv4_any_address_true(self):
cidr = IP_V4_PUBLIC_CIDR
assert not _is_cidr_public(cidr, any_address=True)
def test__is_cidr_public_Private_IPv4(self):
@@ -37,25 +51,300 @@ class Test_security_groups:
assert ex.match(f"{cidr} has host bits set")
def test__is_cidr_public_Public_IPv6_all_IPs_any_address_false(self):
cidr = IP_V6_ALL_CIDRS
assert _is_cidr_public(cidr)
def test__is_cidr_public_Public_IPv6_all_IPs_any_address_true(self):
cidr = IP_V6_ALL_CIDRS
assert _is_cidr_public(cidr, any_address=True)
def test__is_cidr_public_Public_IPv6(self):
cidr = IP_V6_PUBLIC_CIDR
assert _is_cidr_public(cidr)
def test__is_cidr_public_Public_IPv6_any_address_true(self):
cidr = IP_V6_PUBLIC_CIDR
assert not _is_cidr_public(cidr, any_address=True)
def test__is_cidr_public_Private_IPv6(self):
cidr = IP_V6_PRIVATE_CIDR
assert not _is_cidr_public(cidr)
def test__is_cidr_public_Private_IPv6_any_address_true(self):
cidr = IP_V6_PRIVATE_CIDR
assert not _is_cidr_public(cidr, any_address=True)
class Test_check_security_group:
def generate_ip_ranges_list(self, input_ip_ranges: list[str], v4=True):
cidr_ranges = "CidrIp" if v4 else "CidrIpv6"
return [{cidr_ranges: ip, "Description": ""} for ip in input_ip_ranges]
def ingress_rule_generator(
self,
from_port: int,
to_port: int,
ip_protocol: str,
input_ipv4_ranges: list[str],
input_ipv6_ranges: list[str],
):
"""
ingress_rule_generator returns the following AWS Security Group IpPermissions Ingress Rule based on the input arguments
{
'FromPort': 123,
'IpProtocol': 'string',
'IpRanges': [
{
'CidrIp': 'string',
'Description': 'string'
},
],
'Ipv6Ranges': [
{
'CidrIpv6': 'string',
'Description': 'string'
},
],
'ToPort': 123,
}
"""
ipv4_ranges = self.generate_ip_ranges_list(input_ipv4_ranges)
ipv6_ranges = self.generate_ip_ranges_list(input_ipv6_ranges, v4=False)
ingress_rule = {
"FromPort": from_port,
"ToPort": to_port,
"IpProtocol": ip_protocol,
"IpRanges": ipv4_ranges,
"Ipv6Ranges": ipv6_ranges,
}
return ingress_rule
# TCP Protocol - IP_V4_ALL_CIDRS - Ingress 22 to 22 - check 22 - Any Address - Open
def test_all_public_ipv4_address_open_22_tcp_any_address(
self,
):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [IP_V4_ALL_CIDRS], []
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], True)
# TCP Protocol - IP_v4_PUBLIC_CIDR - Ingress 22 to 22 - check 22 - Open
def test_public_ipv4_address_open_22_tcp(
self,
):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [IP_V4_PUBLIC_CIDR], []
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], False)
# TCP Protocol - IP_v4_PUBLIC_CIDR - Ingress 22 to 22 - check 22 - Any Address - Closed
def test_public_ipv4_address_open_22_tcp_any_address(
self,
):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [IP_V4_PUBLIC_CIDR], []
)
assert not check_security_group(
ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], True
)
# TCP Protocol - IP_V4_PRIVATE_CIDR - Ingress 22 to 22 - check 22 - Any Address - Closed
def test_private_ipv4_address_open_22_tcp_any_address(
self,
):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [IP_V4_PRIVATE_CIDR], []
)
assert not check_security_group(
ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], False
)
# TCP Protocol - IP_V4_PRIVATE_CIDR - Ingress 22 to 22 - check 22 - Closed
def test_private_ipv4_address_open_22_tcp(
self,
):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [IP_V4_PRIVATE_CIDR], []
)
assert not check_security_group(
ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], False
)
# TCP Protocol - IP_V6_ALL_CIDRS - Ingress 22 to 22 - check 22 - Any Address - Open
def test_all_public_ipv6_address_open_22_tcp_any_address(self):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [], [IP_V6_ALL_CIDRS]
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], True)
# TCP Protocol - IP_V6_ALL_CIDRS - Ingress 22 to 22 - check 22 - Open
def test_all_public_ipv6_address_open_22_tcp(self):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [], [IP_V6_ALL_CIDRS]
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], False)
# TCP Protocol - IP_V6_PUBLIC_CIDR - Ingress 22 to 22 - check 22 - Open
def test_public_ipv6_address_open_22_tcp(self):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [], [IP_V6_PUBLIC_CIDR]
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], False)
# TCP Protocol - IP_V6_PUBLIC_CIDR - Ingress 22 to 22 - check 22 - Any Address - Closed
def test_public_ipv6_address_open_22_tcp_any_address(self):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [], [IP_V6_PUBLIC_CIDR]
)
assert not check_security_group(
ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], True
)
# TCP Protocol - IP_V6_PRIVATE_CIDR - Ingress 22 to 22 - check 22 - Any Address - Closed
def test_all_private_ipv6_address_open_22_tcp_any_address(self):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [], [IP_V6_PRIVATE_CIDR]
)
assert not check_security_group(
ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], True
)
# TCP Protocol - IP_V6_PRIVATE_CIDR - Ingress 22 to 22 - check 22 - Closed
def test_all_private_ipv6_address_open_22_tcp(self):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [], [IP_V6_PRIVATE_CIDR]
)
assert not check_security_group(
ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], False
)
# TCP Protocol - IP_V4_PRIVATE_CIDR + IP_V6_ALL_CIDRS - Ingress 22 to 22 - check 22 - Any Address - Open
def test_private_ipv4_all_public_ipv6_address_open_22_tcp_any_address(
self,
):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [IP_V4_PRIVATE_CIDR], [IP_V6_ALL_CIDRS]
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], True)
# TCP Protocol - IP_V4_PRIVATE_CIDR + IP_V6_ALL_CIDRS - Ingress 22 to 22 - check 22 - Open
def test_private_ipv4_all_public_ipv6_address_open_22_tcp(
self,
):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [IP_V4_PRIVATE_CIDR], [IP_V6_ALL_CIDRS]
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], False)
# TCP Protocol - IP_V4_ALL_CIDRS + IP_V6_PRIVATE_CIDR - Ingress 22 to 22 - check 22 - Any Address - Open
def test_all_public_ipv4_private_ipv6_address_open_22_tcp_any_address(
self,
):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [IP_V4_ALL_CIDRS], [IP_V6_PRIVATE_CIDR]
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], True)
# TCP Protocol - IP_V4_ALL_CIDRS + IP_V6_PRIVATE_CIDR - Ingress 22 to 22 - check 22 - Open
def test_all_public_ipv4_private_ipv6_address_open_22_tcp(
self,
):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_TCP, [IP_V4_ALL_CIDRS], [IP_V6_PRIVATE_CIDR]
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], False)
# ALL (-1) Protocol - IP_V4_ALL_CIDRS - Ingress 22 to 22 - check 22 - Any Address - Open
def test_all_public_ipv4_address_open_22_any_protocol_any_address(
self,
):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_ALL, [IP_V4_ALL_CIDRS], []
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], True)
# ALL (-1) Protocol - IP_V4_PUBLIC_CIDR - Ingress 22 to 22 - check 22 - Any Address - Closed
def test_all_public_ipv4_address_open_22_any_protocol(
self,
):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_ALL, [IP_V4_PUBLIC_CIDR], []
)
assert not check_security_group(
ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], True
)
# ALL (-1) Protocol - IP_V6_ALL_CIDRS - Ingress 22 to 22 - check 22 - Any Address - Open
def test_all_public_ipv6_address_open_22_any_protocol_any_address(
self,
):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_ALL, [], [IP_V6_ALL_CIDRS]
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], True)
# ALL (-1) Protocol - IP_V4_PRIVATE_CIDR + IP_V6_ALL_CIDRS - Ingress 22 to 22 - check 22 - Any Address - Open
def test_private_ipv4_all_public_ipv6_address_open_22_any_protocol_any_address(
self,
):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_ALL, [IP_V4_PRIVATE_CIDR], [IP_V6_ALL_CIDRS]
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], True)
# ALL (-1) Protocol - IP_V4_ALL_CIDRS + IP_V6_PRIVATE_CIDR - Ingress 22 to 22 - check 22 - Any Address - Open
def test_all_public_ipv4_private_ipv6_address_open_22_any_protocol_any_address(
self,
):
port = 22
ingress_rule = self.ingress_rule_generator(
port, port, TRANSPORT_PROTOCOL_ALL, [IP_V4_ALL_CIDRS], [IP_V6_PRIVATE_CIDR]
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, [port], True)
# TCP Protocol - IP_V4_ALL_CIDRS - Ingress 21 to 23 - check 22 - Any Address - Open
def test_all_public_ipv4_address_open_21_to_23_check_22_tcp_any_address(
self,
):
ingress_rule = self.ingress_rule_generator(
21, 23, TRANSPORT_PROTOCOL_TCP, [IP_V4_ALL_CIDRS], []
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, [22], True)
# TCP Protocol - IP_V4_ALL_CIDRS - All Ports - check None - Any Address - Open
def test_all_public_ipv4_address_open_all_ports_check_all_tcp_any_address(
self,
):
ingress_rule = self.ingress_rule_generator(
0, 65535, TRANSPORT_PROTOCOL_TCP, [IP_V4_ALL_CIDRS], []
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, None, True)
# TCP Protocol - IP_V6_ALL_CIDRS - All Ports - check None - Any Address - Open
def test_all_public_ipv6_address_open_all_ports_check_all_tcp_any_address(
self,
):
ingress_rule = self.ingress_rule_generator(
0, 65535, TRANSPORT_PROTOCOL_TCP, [], [IP_V6_ALL_CIDRS]
)
assert check_security_group(ingress_rule, TRANSPORT_PROTOCOL_TCP, None, True)
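Taken together, the cases above pin down the semantics of `check_security_group`: the rule must match the protocol (or be `-1` for all protocols), its port range must cover at least one checked port (`None` meaning any port), and at least one CIDR must be open; with `any_address=True` only the all-addresses CIDRs (`0.0.0.0/0`, `::/0`) count as open, otherwise any public CIDR does. The function below is a hypothetical sketch of those semantics, not Prowler's implementation.

```python
import ipaddress

ALL_ADDRESS_CIDRS = {"0.0.0.0/0", "::/0"}


def _is_cidr_public_sketch(cidr: str, any_address: bool = False) -> bool:
    """Sketch: a CIDR is 'open' if it is an all-addresses CIDR, or,
    when any_address is False, any globally routable network."""
    if cidr in ALL_ADDRESS_CIDRS:
        return True
    if any_address:
        return False
    return ipaddress.ip_network(cidr, strict=True).network_address.is_global


def check_security_group_sketch(rule, protocol, ports=None, any_address=False):
    # Protocol must match, or the rule must allow all protocols ("-1").
    if rule["IpProtocol"] not in ("-1", protocol):
        return False
    # ports=None means "any port"; otherwise at least one checked port
    # must fall inside the rule's FromPort..ToPort range.
    if ports is not None and not any(
        rule["FromPort"] <= port <= rule["ToPort"] for port in ports
    ):
        return False
    cidrs = [r["CidrIp"] for r in rule.get("IpRanges", [])] + [
        r["CidrIpv6"] for r in rule.get("Ipv6Ranges", [])
    ]
    return any(_is_cidr_public_sketch(cidr, any_address) for cidr in cidrs)
```

Under this sketch, a `-1` rule for `84.28.12.2/32` is open for a TCP port check but closed once `any_address=True`, mirroring the ALL-protocol cases above.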

Some files were not shown because too many files have changed in this diff.