Compare commits

..

2 Commits

Author SHA1 Message Date
Pepe Fagoaga
0184bac5be fix(vpc_different_regions): Handle no VPCs and add tests 2023-11-30 08:34:05 +01:00
William Brady
bc2edd02ad bug:3080 vpc check 2023-11-29 14:12:10 -05:00
424 changed files with 15687 additions and 11669 deletions

View File

@@ -13,10 +13,10 @@ name: "CodeQL"
on:
push:
branches: [ "master", "prowler-4.0-dev" ]
branches: [ "master", prowler-2, prowler-3.0-dev ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ "master", "prowler-4.0-dev" ]
branches: [ "master" ]
schedule:
- cron: '00 12 * * *'

View File

@@ -4,11 +4,9 @@ on:
push:
branches:
- "master"
- "prowler-4.0-dev"
pull_request:
branches:
- "master"
- "prowler-4.0-dev"
jobs:
build:
runs-on: ubuntu-latest
@@ -20,7 +18,7 @@ jobs:
- uses: actions/checkout@v3
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@v41
uses: tj-actions/changed-files@v39
with:
files: ./**
files_ignore: |
@@ -28,7 +26,6 @@ jobs:
README.md
docs/**
permissions/**
mkdocs.yml
- name: Install poetry
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |

View File

@@ -80,9 +80,9 @@ repos:
- id: trufflehog
name: TruffleHog
description: Detect secrets in your data.
entry: bash -c 'trufflehog --no-update git file://. --only-verified --fail'
# entry: bash -c 'trufflehog git file://. --only-verified --fail'
# For running trufflehog in docker, use the following entry instead:
# entry: bash -c 'docker run -v "$(pwd):/workdir" -i --rm trufflesecurity/trufflehog:latest git file:///workdir --only-verified --fail'
entry: bash -c 'docker run -v "$(pwd):/workdir" -i --rm trufflesecurity/trufflehog:latest git file:///workdir --only-verified --fail'
language: system
stages: ["commit", "push"]

View File

@@ -14,11 +14,11 @@
<a href="https://pypi.org/project/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/v/prowler.svg"></a>
<a href="https://pypi.python.org/pypi/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/pyversions/prowler.svg"></a>
<a href="https://pypistats.org/packages/prowler"><img alt="PyPI Prowler Downloads" src="https://img.shields.io/pypi/dw/prowler.svg?label=prowler%20downloads"></a>
<a href="https://pypistats.org/packages/prowler-cloud"><img alt="PyPI Prowler-Cloud Downloads" src="https://img.shields.io/pypi/dw/prowler-cloud.svg?label=prowler-cloud%20downloads"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/toniblyx/prowler"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/cloud/build/toniblyx/prowler"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/image-size/toniblyx/prowler"></a>
<a href="https://gallery.ecr.aws/prowler-cloud/prowler"><img width="120" height="19" alt="AWS ECR Gallery" src="https://user-images.githubusercontent.com/3985464/151531396-b6535a68-c907-44eb-95a1-a09508178616.png"></a>
<a href="https://codecov.io/gh/prowler-cloud/prowler"><img src="https://codecov.io/gh/prowler-cloud/prowler/graph/badge.svg?token=OflBGsdpDl"/></a>
</p>
<p align="center">
<a href="https://github.com/prowler-cloud/prowler"><img alt="Repo size" src="https://img.shields.io/github/repo-size/prowler-cloud/prowler"></a>

View File

@@ -136,16 +136,26 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
=== "AWS CloudShell"
After the migration of AWS CloudShell from Amazon Linux 2 to Amazon Linux 2023 [[1]](https://aws.amazon.com/about-aws/whats-new/2023/12/aws-cloudshell-migrated-al2023/) [[2]](https://docs.aws.amazon.com/cloudshell/latest/userguide/cloudshell-AL2023-migration.html), there is no longer a need to manually compile Python 3.9 as it's already included in AL2023. Prowler can thus be easily installed following the Generic method of installation via pip. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
Prowler can be easily executed in AWS CloudShell, but it has some prerequisites to be able to do so. AWS CloudShell is a container running `Amazon Linux release 2 (Karoo)` that comes with Python 3.7; since Prowler requires Python >= 3.9, we need to first install a newer version of Python. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
_Requirements_:
* Open AWS CloudShell `bash`.
* First install all dependencies and then Python; in this case we need to compile it because no package is available at the time this document is written:
```
sudo yum -y install gcc openssl-devel bzip2-devel libffi-devel
wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tgz
tar zxf Python-3.9.16.tgz
cd Python-3.9.16/
./configure --enable-optimizations
sudo make altinstall
python3.9 --version
cd
```
_Commands_:
* Once Python 3.9 is available we can install Prowler from pip:
```
pip install prowler
pip3.9 install prowler
prowler -v
```

View File

@@ -32,14 +32,3 @@ Prowler's AWS Provider uses the Boto3 [Standard](https://boto3.amazonaws.com/v1/
- Retry attempts on nondescriptive, transient error codes. Specifically, these HTTP status codes: 500, 502, 503, 504.
- Any retry attempt will include an exponential backoff by a base factor of 2 for a maximum backoff time of 20 seconds.
## Notes for validating retry attempts
If you are making changes to Prowler and want to validate whether requests are being retried or given up on, you can take the following approach:
* Run prowler with `--log-level DEBUG` and `--log-file debuglogs.txt`
* Search for retry attempts using `grep -i 'Retry needed' debuglogs.txt`
This is based on the [AWS documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html#checking-retry-attempts-in-your-client-logs), which states that if a retry is performed, you will see a message starting with "Retry needed".
You can determine the total number of calls made using `grep -i 'Sending http request' debuglogs.txt | wc -l`
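The same counts can be pulled in a few lines of Python. This is a minimal sketch (the helper name `summarize_retries` is illustrative); the log phrases are the ones botocore emits, per the AWS documentation linked above.

```python
# Count retry attempts and total HTTP requests in a Prowler debug log,
# mirroring the two grep commands above.
import re

def summarize_retries(log_text: str) -> dict:
    retries = len(re.findall(r"retry needed", log_text, re.IGNORECASE))
    requests = len(re.findall(r"sending http request", log_text, re.IGNORECASE))
    return {"requests": requests, "retries": retries}

sample = (
    "DEBUG Sending http request: ec2.DescribeVpcs\n"
    "DEBUG Retry needed, retrying request after delay of: 2\n"
    "DEBUG Sending http request: ec2.DescribeVpcs\n"
)
print(summarize_retries(sample))  # {'requests': 2, 'retries': 1}
```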

View File

@@ -1,26 +1,26 @@
# AWS CloudShell
## Installation
After the migration of AWS CloudShell from Amazon Linux 2 to Amazon Linux 2023 [[1]](https://aws.amazon.com/about-aws/whats-new/2023/12/aws-cloudshell-migrated-al2023/) [[2]](https://docs.aws.amazon.com/cloudshell/latest/userguide/cloudshell-AL2023-migration.html), there is no longer a need to manually compile Python 3.9 as it's already included in AL2023. Prowler can thus be easily installed following the Generic method of installation via pip. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
```shell
pip install prowler
Prowler can be easily executed in AWS CloudShell, but it has some prerequisites to be able to do so. AWS CloudShell is a container running `Amazon Linux release 2 (Karoo)` that comes with Python 3.7; since Prowler requires Python >= 3.9, we need to first install a newer version of Python. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
- First install all dependencies and then Python; in this case we need to compile it because no package is available at the time this document is written:
```
sudo yum -y install gcc openssl-devel bzip2-devel libffi-devel
wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tgz
tar zxf Python-3.9.16.tgz
cd Python-3.9.16/
./configure --enable-optimizations
sudo make altinstall
python3.9 --version
cd
```
- Once Python 3.9 is available we can install Prowler from pip:
```
pip3.9 install prowler
```
- Now enjoy Prowler:
```
prowler -v
prowler
```
## Download Files
To download the results from AWS CloudShell, select Actions -> Download File and add the full path of each file. For the CSV file it will be something like `/home/cloudshell-user/output/prowler-output-123456789012-20221220191331.csv`
## Clone Prowler from Github
The limited storage that AWS CloudShell provides for the user's home directory causes issues when installing the poetry dependencies to run Prowler from GitHub. Here is a workaround:
```shell
git clone https://github.com/prowler-cloud/prowler.git
cd prowler
pip install poetry
mkdir /tmp/pypoetry
poetry config cache-dir /tmp/pypoetry
poetry shell
poetry install
python prowler.py -v
```
- To download the results from AWS CloudShell, select Actions -> Download File and add the full path of each file. For the CSV file it will be something like `/home/cloudshell-user/output/prowler-output-123456789012-20221220191331.csv`

View File

@@ -23,15 +23,6 @@ prowler aws -R arn:aws:iam::<account_id>:role/<role_name>
prowler aws -T/--session-duration <seconds> -I/--external-id <external_id> -R arn:aws:iam::<account_id>:role/<role_name>
```
## Custom Role Session Name
Prowler can use your custom Role Session name with:
```console
prowler aws --role-session-name <role_session_name>
```
> It defaults to `ProwlerAssessmentSession`
## STS Endpoint Region
If you are using Prowler in AWS regions that are not enabled by default you need to use the argument `--sts-endpoint-region` to point the AWS STS API calls `assume-role` and `get-caller-identity` to the non-default region, e.g.: `prowler aws --sts-endpoint-region eu-south-2`.

View File

@@ -1,187 +0,0 @@
# Parallel Execution
The strategy used here will be to execute Prowler once per service. You can modify this approach as per your requirements.
This can help with very large accounts, but please be aware of AWS API rate limits:
1. **Service-Specific Limits**: Each AWS service has its own rate limits. For instance, Amazon EC2 might have different rate limits for launching instances versus making API calls to describe instances.
2. **API Rate Limits**: Most of the rate limits in AWS are applied at the API level. Each API call to an AWS service counts towards the rate limit for that service.
3. **Throttling Responses**: When you exceed the rate limit for a service, AWS responds with a throttling error. In AWS SDKs, these are typically represented as `ThrottlingException` or `RateLimitExceeded` errors.
For information on Prowler's retrier configuration please refer to this [page](https://docs.prowler.cloud/en/latest/tutorials/aws/boto3-configuration/).
> Note: You might need to increase the `--aws-retries-max-attempts` parameter from the default value of 3. The retrier follows an exponential backoff strategy.
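The exponential backoff mentioned above can be sketched as follows. This is a simplified, jitter-free view (the real SDK multiplies by a random factor, and `backoff_delays` is an illustrative name, not Prowler's internals): the upper-bound delay grows by a base factor of 2 per attempt and is capped at 20 seconds, per the retrier configuration page linked above.

```python
# Upper-bound delay (in seconds) before each retry attempt under the
# "standard" retry mode: min(cap, base ** attempt), jitter omitted.
def backoff_delays(max_attempts: int, base: int = 2, cap: int = 20) -> list:
    return [min(cap, base ** attempt) for attempt in range(1, max_attempts + 1)]

print(backoff_delays(5))  # [2, 4, 8, 16, 20]
```

This is why raising `--aws-retries-max-attempts` has diminishing cost per extra attempt: once the cap is hit, each additional retry waits at most 20 seconds.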
## Linux
Generate a list of services that Prowler supports, and populate this info into a file:
```bash
prowler aws --list-services | awk -F"- " '{print $2}' | sed '/^$/d' > services
```
Remove any services you would like to skip scanning from this file.
Then create a new shell script file `parallel-prowler.sh` and add the following contents. Update the `profile` variable to the AWS CLI profile you want to run Prowler with.
```bash
#!/bin/bash
# Change these variables as needed
profile="your_profile"
account_id=$(aws sts get-caller-identity --profile "${profile}" --query 'Account' --output text)
echo "Executing in account: ${account_id}"
# Maximum number of concurrent processes
MAX_PROCESSES=5
# Loop through the services
while read service; do
  echo "$(date '+%Y-%m-%d %H:%M:%S'): Starting job for service: ${service}"
  # Run the command in the background
  (prowler -p "$profile" -s "$service" -F "${account_id}-${service}" --ignore-unused-services --only-logs; echo "$(date '+%Y-%m-%d %H:%M:%S') - ${service} has completed") &
  # Check if we have reached the maximum number of processes
  while [ $(jobs -r | wc -l) -ge ${MAX_PROCESSES} ]; do
    # Wait for a second before checking again
    sleep 1
  done
done < ./services
# Wait for all background processes to finish
wait
echo "All jobs completed"
```
Output will be stored in the `output/` folder that is in the same directory from which you executed the script.
## Windows
Generate a list of services that Prowler supports, and populate this info into a file:
```powershell
prowler aws --list-services | ForEach-Object {
    # Capture lines that are likely service names
    if ($_ -match '^\- \w+$') {
        $_.Trim().Substring(2)
    }
} | Where-Object {
    # Filter out empty or null lines
    $_ -ne $null -and $_ -ne ''
} | Set-Content -Path "services"
```
Remove any services you would like to skip scanning from this file.
Then create a new PowerShell script file `parallel-prowler.ps1` and add the following contents. Update the `$profile` variable to the AWS CLI profile you want to run Prowler with.
Change any parameters you would like when calling Prowler in the `Start-Job -ScriptBlock` section. Note that you need to keep the `--only-logs` parameter; otherwise an encoding issue occurs when trying to render the progress bar and Prowler won't execute successfully.
```powershell
$profile = "your_profile"
$account_id = Invoke-Expression -Command "aws sts get-caller-identity --profile $profile --query 'Account' --output text"
Write-Host "Executing Prowler in $account_id"
# Maximum number of concurrent jobs
$MAX_PROCESSES = 5
# Read services from a file
$services = Get-Content -Path "services"
# Array to keep track of started jobs
$jobs = @()
foreach ($service in $services) {
    # Start the command as a job
    $job = Start-Job -ScriptBlock {
        prowler -p ${using:profile} -s ${using:service} -F "${using:account_id}-${using:service}" --ignore-unused-services --only-logs
        $endTimestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
        Write-Output "${endTimestamp} - $using:service has completed"
    }
    $jobs += $job
    Write-Host "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') - Starting job for service: $service"
    # Check if we have reached the maximum number of jobs
    while (($jobs | Where-Object { $_.State -eq 'Running' }).Count -ge $MAX_PROCESSES) {
        Start-Sleep -Seconds 1
        # Check for any completed jobs and receive their output
        $completedJobs = $jobs | Where-Object { $_.State -eq 'Completed' }
        foreach ($completedJob in $completedJobs) {
            Receive-Job -Job $completedJob -Keep | ForEach-Object { Write-Host $_ }
            $jobs = $jobs | Where-Object { $_.Id -ne $completedJob.Id }
            Remove-Job -Job $completedJob
        }
    }
}
# Check for any remaining completed jobs
$remainingCompletedJobs = $jobs | Where-Object { $_.State -eq 'Completed' }
foreach ($remainingJob in $remainingCompletedJobs) {
    Receive-Job -Job $remainingJob -Keep | ForEach-Object { Write-Host $_ }
    Remove-Job -Job $remainingJob
}
Write-Host "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') - All jobs completed"
```
Output will be stored in `C:\Users\YOUR-USER\Documents\output\`
## Combining the output files
Guidance is provided for the CSV file format. From the output directory, execute either the following Bash or PowerShell script. The script collects the output from the CSV files, includes the header only from the first file, and writes the result as `CombinedCSV.csv` in the current working directory.
There is no logic implemented in terms of which CSV files will be combined. If you have additional CSV files from other actions, such as running a quick inventory, you will need to move those out of the current (or any nested) directory, or move the output you want to combine into its own folder and run the script from there.
```bash
#!/bin/bash
# Initialize a variable to indicate the first file
firstFile=true
# Find all CSV files and loop through them
find . -name "*.csv" -print0 | while IFS= read -r -d '' file; do
  if [ "$firstFile" = true ]; then
    # For the first file, keep the header
    cat "$file" > CombinedCSV.csv
    firstFile=false
  else
    # For subsequent files, skip the header
    tail -n +2 "$file" >> CombinedCSV.csv
  fi
done
```
```powershell
# Get all CSV files from current directory and its subdirectories
$csvFiles = Get-ChildItem -Recurse -Filter "*.csv"
# Initialize a variable to track if it's the first file
$firstFile = $true
# Loop through each CSV file
foreach ($file in $csvFiles) {
    if ($firstFile) {
        # For the first file, keep the header and change the flag
        $combinedCsv = Import-Csv -Path $file.FullName
        $firstFile = $false
    } else {
        # For subsequent files, append the records; Import-Csv already
        # excludes the header row, so no rows need to be skipped
        $combinedCsv += Import-Csv -Path $file.FullName
    }
}
# Export the combined data to a new CSV file
$combinedCsv | Export-Csv -Path "CombinedCSV.csv" -NoTypeInformation
```
## TODO: Additional Improvements
Some services need to instantiate another service to perform a check. For instance, `cloudwatch` will instantiate Prowler's `iam` service to perform the `cloudwatch_cross_account_sharing_disabled` check. When the `iam` service is instantiated, it runs its `__init__` function and pulls all the information required for that service. This offers an opportunity to improve the above script: group related services together so that the `iam` service (or any other cross-service reference) isn't repeatedly instantiated. A complete mapping between these services still needs to be investigated, but these cross-references have been noted:
* inspector2 needs lambda and ec2
* cloudwatch needs iam
* dlm needs ec2
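The grouping idea could be sketched as follows. Everything here is hypothetical (`DEPENDS_ON` and `group_services` are illustrative names, and the mapping is only the partial one noted above): services that share a cross-service dependency are batched together so each batch runs as one Prowler invocation.

```python
# Partial cross-service dependency map, from the notes above.
DEPENDS_ON = {
    "inspector2": {"lambda", "ec2"},
    "cloudwatch": {"iam"},
    "dlm": {"ec2"},
}

def group_services(services: list) -> list:
    """Batch each dependent service with its dependencies; the rest run alone."""
    remaining = set(services)
    groups = []
    for svc, deps in DEPENDS_ON.items():
        if svc in remaining:
            batch = {svc} | (deps & remaining)
            groups.append(sorted(batch))
            remaining -= batch
    # Services with no noted cross-references each get their own batch
    groups.extend([s] for s in sorted(remaining))
    return groups

print(group_services(["cloudwatch", "iam", "s3"]))  # [['cloudwatch', 'iam'], ['s3']]
```

Each resulting batch could then be passed to a single `prowler -s <svc1> <svc2> ...` invocation in the parallel loop.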

View File

@@ -43,71 +43,46 @@ Hereunder is the structure for each of the supported report formats by Prowler:
![HTML Output](../img/output-html.png)
### CSV
CSV format has a set of common columns for all the providers, and then provider specific columns.
The common columns are the following:
The following are the columns present in the CSV format:
- ASSESSMENT_START_TIME
- FINDING_UNIQUE_ID
- PROVIDER
- CHECK_ID
- CHECK_TITLE
- CHECK_TYPE
- STATUS
- STATUS_EXTENDED
- SERVICE_NAME
- SUBSERVICE_NAME
- SEVERITY
- RESOURCE_TYPE
- RESOURCE_DETAILS
- RESOURCE_TAGS
- DESCRIPTION
- RISK
- RELATED_URL
- REMEDIATION_RECOMMENDATION_TEXT
- REMEDIATION_RECOMMENDATION_URL
- REMEDIATION_RECOMMENDATION_CODE_NATIVEIAC
- REMEDIATION_RECOMMENDATION_CODE_TERRAFORM
- REMEDIATION_RECOMMENDATION_CODE_CLI
- REMEDIATION_RECOMMENDATION_CODE_OTHER
- COMPLIANCE
- CATEGORIES
- DEPENDS_ON
- RELATED_TO
- NOTES
And then by the provider specific columns:
#### AWS
- PROFILE
- ACCOUNT_ID
- ACCOUNT_NAME
- ACCOUNT_EMAIL
- ACCOUNT_ARN
- ACCOUNT_ORG
- ACCOUNT_TAGS
- REGION
- RESOURCE_ID
- RESOURCE_ARN
#### AZURE
- TENANT_DOMAIN
- SUBSCRIPTION
- RESOURCE_ID
- RESOURCE_NAME
#### GCP
- PROJECT_ID
- LOCATION
- RESOURCE_ID
- RESOURCE_NAME
- ACCOUNT_NAME
- ACCOUNT_EMAIL
- ACCOUNT_ARN
- ACCOUNT_ORG
- ACCOUNT_TAGS
- REGION
- CHECK_ID
- CHECK_TITLE
- CHECK_TYPE
- STATUS
- STATUS_EXTENDED
- SERVICE_NAME
- SUBSERVICE_NAME
- SEVERITY
- RESOURCE_ID
- RESOURCE_ARN
- RESOURCE_TYPE
- RESOURCE_DETAILS
- RESOURCE_TAGS
- DESCRIPTION
- COMPLIANCE
- RISK
- RELATED_URL
- REMEDIATION_RECOMMENDATION_TEXT
- REMEDIATION_RECOMMENDATION_URL
- REMEDIATION_RECOMMENDATION_CODE_NATIVEIAC
- REMEDIATION_RECOMMENDATION_CODE_TERRAFORM
- REMEDIATION_RECOMMENDATION_CODE_CLI
- REMEDIATION_RECOMMENDATION_CODE_OTHER
- CATEGORIES
- DEPENDS_ON
- RELATED_TO
- NOTES
> Since Prowler v3, the CSV column delimiter is the semicolon (`;`)
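The semicolon delimiter means a default CSV parser will misread the file. A minimal reading sketch (the sample row values here are illustrative, not real output; the header names are a subset of the columns listed above):

```python
# Parse a Prowler v3 CSV: pass delimiter=";" explicitly.
import csv
import io

sample = (
    "PROVIDER;CHECK_ID;STATUS;SEVERITY\n"
    "aws;vpc_different_regions;FAIL;medium\n"
)
findings = list(csv.DictReader(io.StringIO(sample), delimiter=";"))
print(findings[0]["CHECK_ID"])  # vpc_different_regions
```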
### JSON

View File

@@ -41,7 +41,6 @@ nav:
- Custom Metadata: tutorials/custom-checks-metadata.md
- Ignore Unused Services: tutorials/ignore-unused-services.md
- Pentesting: tutorials/pentesting.md
- Parallel Execution: tutorials/parallel-execution.md
- Developer Guide: developer-guide/introduction.md
- AWS:
- Authentication: tutorials/aws/authentication.md

274
poetry.lock generated
View File

@@ -295,18 +295,18 @@ files = [
[[package]]
name = "bandit"
version = "1.7.6"
version = "1.7.5"
description = "Security oriented static analyser for python code."
optional = false
python-versions = ">=3.8"
python-versions = ">=3.7"
files = [
{file = "bandit-1.7.6-py3-none-any.whl", hash = "sha256:36da17c67fc87579a5d20c323c8d0b1643a890a2b93f00b3d1229966624694ff"},
{file = "bandit-1.7.6.tar.gz", hash = "sha256:72ce7bc9741374d96fb2f1c9a8960829885f1243ffde743de70a19cee353e8f3"},
{file = "bandit-1.7.5-py3-none-any.whl", hash = "sha256:75665181dc1e0096369112541a056c59d1c5f66f9bb74a8d686c3c362b83f549"},
{file = "bandit-1.7.5.tar.gz", hash = "sha256:bdfc739baa03b880c2d15d0431b31c658ffc348e907fe197e54e0389dd59e11e"},
]
[package.dependencies]
colorama = {version = ">=0.3.9", markers = "platform_system == \"Windows\""}
GitPython = ">=3.1.30"
GitPython = ">=1.0.1"
PyYAML = ">=5.3.1"
rich = "*"
stevedore = ">=1.20.0"
@@ -649,63 +649,63 @@ files = [
[[package]]
name = "coverage"
version = "7.4.0"
version = "7.3.2"
description = "Code coverage measurement for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "coverage-7.4.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:36b0ea8ab20d6a7564e89cb6135920bc9188fb5f1f7152e94e8300b7b189441a"},
{file = "coverage-7.4.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0676cd0ba581e514b7f726495ea75aba3eb20899d824636c6f59b0ed2f88c471"},
{file = "coverage-7.4.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d0ca5c71a5a1765a0f8f88022c52b6b8be740e512980362f7fdbb03725a0d6b9"},
{file = "coverage-7.4.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a7c97726520f784239f6c62506bc70e48d01ae71e9da128259d61ca5e9788516"},
{file = "coverage-7.4.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:815ac2d0f3398a14286dc2cea223a6f338109f9ecf39a71160cd1628786bc6f5"},
{file = "coverage-7.4.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:80b5ee39b7f0131ebec7968baa9b2309eddb35b8403d1869e08f024efd883566"},
{file = "coverage-7.4.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:5b2ccb7548a0b65974860a78c9ffe1173cfb5877460e5a229238d985565574ae"},
{file = "coverage-7.4.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:995ea5c48c4ebfd898eacb098164b3cc826ba273b3049e4a889658548e321b43"},
{file = "coverage-7.4.0-cp310-cp310-win32.whl", hash = "sha256:79287fd95585ed36e83182794a57a46aeae0b64ca53929d1176db56aacc83451"},
{file = "coverage-7.4.0-cp310-cp310-win_amd64.whl", hash = "sha256:5b14b4f8760006bfdb6e08667af7bc2d8d9bfdb648351915315ea17645347137"},
{file = "coverage-7.4.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:04387a4a6ecb330c1878907ce0dc04078ea72a869263e53c72a1ba5bbdf380ca"},
{file = "coverage-7.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ea81d8f9691bb53f4fb4db603203029643caffc82bf998ab5b59ca05560f4c06"},
{file = "coverage-7.4.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:74775198b702868ec2d058cb92720a3c5a9177296f75bd97317c787daf711505"},
{file = "coverage-7.4.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:76f03940f9973bfaee8cfba70ac991825611b9aac047e5c80d499a44079ec0bc"},
{file = "coverage-7.4.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:485e9f897cf4856a65a57c7f6ea3dc0d4e6c076c87311d4bc003f82cfe199d25"},
{file = "coverage-7.4.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:6ae8c9d301207e6856865867d762a4b6fd379c714fcc0607a84b92ee63feff70"},
{file = "coverage-7.4.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:bf477c355274a72435ceb140dc42de0dc1e1e0bf6e97195be30487d8eaaf1a09"},
{file = "coverage-7.4.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:83c2dda2666fe32332f8e87481eed056c8b4d163fe18ecc690b02802d36a4d26"},
{file = "coverage-7.4.0-cp311-cp311-win32.whl", hash = "sha256:697d1317e5290a313ef0d369650cfee1a114abb6021fa239ca12b4849ebbd614"},
{file = "coverage-7.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:26776ff6c711d9d835557ee453082025d871e30b3fd6c27fcef14733f67f0590"},
{file = "coverage-7.4.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:13eaf476ec3e883fe3e5fe3707caeb88268a06284484a3daf8250259ef1ba143"},
{file = "coverage-7.4.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:846f52f46e212affb5bcf131c952fb4075b55aae6b61adc9856222df89cbe3e2"},
{file = "coverage-7.4.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:26f66da8695719ccf90e794ed567a1549bb2644a706b41e9f6eae6816b398c4a"},
{file = "coverage-7.4.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:164fdcc3246c69a6526a59b744b62e303039a81e42cfbbdc171c91a8cc2f9446"},
{file = "coverage-7.4.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:316543f71025a6565677d84bc4df2114e9b6a615aa39fb165d697dba06a54af9"},
{file = "coverage-7.4.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:bb1de682da0b824411e00a0d4da5a784ec6496b6850fdf8c865c1d68c0e318dd"},
{file = "coverage-7.4.0-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:0e8d06778e8fbffccfe96331a3946237f87b1e1d359d7fbe8b06b96c95a5407a"},
{file = "coverage-7.4.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:a56de34db7b7ff77056a37aedded01b2b98b508227d2d0979d373a9b5d353daa"},
{file = "coverage-7.4.0-cp312-cp312-win32.whl", hash = "sha256:51456e6fa099a8d9d91497202d9563a320513fcf59f33991b0661a4a6f2ad450"},
{file = "coverage-7.4.0-cp312-cp312-win_amd64.whl", hash = "sha256:cd3c1e4cb2ff0083758f09be0f77402e1bdf704adb7f89108007300a6da587d0"},
{file = "coverage-7.4.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:e9d1bf53c4c8de58d22e0e956a79a5b37f754ed1ffdbf1a260d9dcfa2d8a325e"},
{file = "coverage-7.4.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:109f5985182b6b81fe33323ab4707011875198c41964f014579cf82cebf2bb85"},
{file = "coverage-7.4.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3cc9d4bc55de8003663ec94c2f215d12d42ceea128da8f0f4036235a119c88ac"},
{file = "coverage-7.4.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc6d65b21c219ec2072c1293c505cf36e4e913a3f936d80028993dd73c7906b1"},
{file = "coverage-7.4.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5a10a4920def78bbfff4eff8a05c51be03e42f1c3735be42d851f199144897ba"},
{file = "coverage-7.4.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:b8e99f06160602bc64da35158bb76c73522a4010f0649be44a4e167ff8555952"},
{file = "coverage-7.4.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:7d360587e64d006402b7116623cebf9d48893329ef035278969fa3bbf75b697e"},
{file = "coverage-7.4.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:29f3abe810930311c0b5d1a7140f6395369c3db1be68345638c33eec07535105"},
{file = "coverage-7.4.0-cp38-cp38-win32.whl", hash = "sha256:5040148f4ec43644702e7b16ca864c5314ccb8ee0751ef617d49aa0e2d6bf4f2"},
{file = "coverage-7.4.0-cp38-cp38-win_amd64.whl", hash = "sha256:9864463c1c2f9cb3b5db2cf1ff475eed2f0b4285c2aaf4d357b69959941aa555"},
{file = "coverage-7.4.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:936d38794044b26c99d3dd004d8af0035ac535b92090f7f2bb5aa9c8e2f5cd42"},
{file = "coverage-7.4.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:799c8f873794a08cdf216aa5d0531c6a3747793b70c53f70e98259720a6fe2d7"},
{file = "coverage-7.4.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e7defbb9737274023e2d7af02cac77043c86ce88a907c58f42b580a97d5bcca9"},
{file = "coverage-7.4.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a1526d265743fb49363974b7aa8d5899ff64ee07df47dd8d3e37dcc0818f09ed"},
{file = "coverage-7.4.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bf635a52fc1ea401baf88843ae8708591aa4adff875e5c23220de43b1ccf575c"},
{file = "coverage-7.4.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:756ded44f47f330666843b5781be126ab57bb57c22adbb07d83f6b519783b870"},
{file = "coverage-7.4.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:0eb3c2f32dabe3a4aaf6441dde94f35687224dfd7eb2a7f47f3fd9428e421058"},
{file = "coverage-7.4.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:bfd5db349d15c08311702611f3dccbef4b4e2ec148fcc636cf8739519b4a5c0f"},
{file = "coverage-7.4.0-cp39-cp39-win32.whl", hash = "sha256:53d7d9158ee03956e0eadac38dfa1ec8068431ef8058fe6447043db1fb40d932"},
{file = "coverage-7.4.0-cp39-cp39-win_amd64.whl", hash = "sha256:cfd2a8b6b0d8e66e944d47cdec2f47c48fef2ba2f2dff5a9a75757f64172857e"},
{file = "coverage-7.4.0-pp38.pp39.pp310-none-any.whl", hash = "sha256:c530833afc4707fe48524a44844493f36d8727f04dcce91fb978c414a8556cc6"},
{file = "coverage-7.4.0.tar.gz", hash = "sha256:707c0f58cb1712b8809ece32b68996ee1e609f71bd14615bd8f87a1293cb610e"},
{file = "coverage-7.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d872145f3a3231a5f20fd48500274d7df222e291d90baa2026cc5152b7ce86bf"},
{file = "coverage-7.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:310b3bb9c91ea66d59c53fa4989f57d2436e08f18fb2f421a1b0b6b8cc7fffda"},
{file = "coverage-7.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f47d39359e2c3779c5331fc740cf4bce6d9d680a7b4b4ead97056a0ae07cb49a"},
{file = "coverage-7.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aa72dbaf2c2068404b9870d93436e6d23addd8bbe9295f49cbca83f6e278179c"},
{file = "coverage-7.3.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:beaa5c1b4777f03fc63dfd2a6bd820f73f036bfb10e925fce067b00a340d0f3f"},
{file = "coverage-7.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:dbc1b46b92186cc8074fee9d9fbb97a9dd06c6cbbef391c2f59d80eabdf0faa6"},
{file = "coverage-7.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:315a989e861031334d7bee1f9113c8770472db2ac484e5b8c3173428360a9148"},
{file = "coverage-7.3.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:d1bc430677773397f64a5c88cb522ea43175ff16f8bfcc89d467d974cb2274f9"},
{file = "coverage-7.3.2-cp310-cp310-win32.whl", hash = "sha256:a889ae02f43aa45032afe364c8ae84ad3c54828c2faa44f3bfcafecb5c96b02f"},
{file = "coverage-7.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:c0ba320de3fb8c6ec16e0be17ee1d3d69adcda99406c43c0409cb5c41788a611"},
{file = "coverage-7.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ac8c802fa29843a72d32ec56d0ca792ad15a302b28ca6203389afe21f8fa062c"},
{file = "coverage-7.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:89a937174104339e3a3ffcf9f446c00e3a806c28b1841c63edb2b369310fd074"},
{file = "coverage-7.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e267e9e2b574a176ddb983399dec325a80dbe161f1a32715c780b5d14b5f583a"},
{file = "coverage-7.3.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2443cbda35df0d35dcfb9bf8f3c02c57c1d6111169e3c85fc1fcc05e0c9f39a3"},
{file = "coverage-7.3.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4175e10cc8dda0265653e8714b3174430b07c1dca8957f4966cbd6c2b1b8065a"},
{file = "coverage-7.3.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:0cbf38419fb1a347aaf63481c00f0bdc86889d9fbf3f25109cf96c26b403fda1"},
{file = "coverage-7.3.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:5c913b556a116b8d5f6ef834038ba983834d887d82187c8f73dec21049abd65c"},
{file = "coverage-7.3.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:1981f785239e4e39e6444c63a98da3a1db8e971cb9ceb50a945ba6296b43f312"},
{file = "coverage-7.3.2-cp311-cp311-win32.whl", hash = "sha256:43668cabd5ca8258f5954f27a3aaf78757e6acf13c17604d89648ecc0cc66640"},
{file = "coverage-7.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:e10c39c0452bf6e694511c901426d6b5ac005acc0f78ff265dbe36bf81f808a2"},
{file = "coverage-7.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:4cbae1051ab791debecc4a5dcc4a1ff45fc27b91b9aee165c8a27514dd160836"},
{file = "coverage-7.3.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:12d15ab5833a997716d76f2ac1e4b4d536814fc213c85ca72756c19e5a6b3d63"},
{file = "coverage-7.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c7bba973ebee5e56fe9251300c00f1579652587a9f4a5ed8404b15a0471f216"},
{file = "coverage-7.3.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fe494faa90ce6381770746077243231e0b83ff3f17069d748f645617cefe19d4"},
{file = "coverage-7.3.2-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6e9589bd04d0461a417562649522575d8752904d35c12907d8c9dfeba588faf"},
{file = "coverage-7.3.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:d51ac2a26f71da1b57f2dc81d0e108b6ab177e7d30e774db90675467c847bbdf"},
{file = "coverage-7.3.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:99b89d9f76070237975b315b3d5f4d6956ae354a4c92ac2388a5695516e47c84"},
{file = "coverage-7.3.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:fa28e909776dc69efb6ed975a63691bc8172b64ff357e663a1bb06ff3c9b589a"},
{file = "coverage-7.3.2-cp312-cp312-win32.whl", hash = "sha256:289fe43bf45a575e3ab10b26d7b6f2ddb9ee2dba447499f5401cfb5ecb8196bb"},
{file = "coverage-7.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:7dbc3ed60e8659bc59b6b304b43ff9c3ed858da2839c78b804973f613d3e92ed"},
{file = "coverage-7.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f94b734214ea6a36fe16e96a70d941af80ff3bfd716c141300d95ebc85339738"},
{file = "coverage-7.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:af3d828d2c1cbae52d34bdbb22fcd94d1ce715d95f1a012354a75e5913f1bda2"},
{file = "coverage-7.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:630b13e3036e13c7adc480ca42fa7afc2a5d938081d28e20903cf7fd687872e2"},
{file = "coverage-7.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c9eacf273e885b02a0273bb3a2170f30e2d53a6d53b72dbe02d6701b5296101c"},
{file = "coverage-7.3.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d8f17966e861ff97305e0801134e69db33b143bbfb36436efb9cfff6ec7b2fd9"},
{file = "coverage-7.3.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:b4275802d16882cf9c8b3d057a0839acb07ee9379fa2749eca54efbce1535b82"},
{file = "coverage-7.3.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:72c0cfa5250f483181e677ebc97133ea1ab3eb68645e494775deb6a7f6f83901"},
{file = "coverage-7.3.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:cb536f0dcd14149425996821a168f6e269d7dcd2c273a8bff8201e79f5104e76"},
{file = "coverage-7.3.2-cp38-cp38-win32.whl", hash = "sha256:307adb8bd3abe389a471e649038a71b4eb13bfd6b7dd9a129fa856f5c695cf92"},
{file = "coverage-7.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:88ed2c30a49ea81ea3b7f172e0269c182a44c236eb394718f976239892c0a27a"},
{file = "coverage-7.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b631c92dfe601adf8f5ebc7fc13ced6bb6e9609b19d9a8cd59fa47c4186ad1ce"},
{file = "coverage-7.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:d3d9df4051c4a7d13036524b66ecf7a7537d14c18a384043f30a303b146164e9"},
{file = "coverage-7.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5f7363d3b6a1119ef05015959ca24a9afc0ea8a02c687fe7e2d557705375c01f"},
{file = "coverage-7.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2f11cc3c967a09d3695d2a6f03fb3e6236622b93be7a4b5dc09166a861be6d25"},
{file = "coverage-7.3.2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:149de1d2401ae4655c436a3dced6dd153f4c3309f599c3d4bd97ab172eaf02d9"},
{file = "coverage-7.3.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:3a4006916aa6fee7cd38db3bfc95aa9c54ebb4ffbfc47c677c8bba949ceba0a6"},
{file = "coverage-7.3.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9028a3871280110d6e1aa2df1afd5ef003bab5fb1ef421d6dc748ae1c8ef2ebc"},
{file = "coverage-7.3.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:9f805d62aec8eb92bab5b61c0f07329275b6f41c97d80e847b03eb894f38d083"},
{file = "coverage-7.3.2-cp39-cp39-win32.whl", hash = "sha256:d1c88ec1a7ff4ebca0219f5b1ef863451d828cccf889c173e1253aa84b1e07ce"},
{file = "coverage-7.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:b4767da59464bb593c07afceaddea61b154136300881844768037fd5e859353f"},
{file = "coverage-7.3.2-pp38.pp39.pp310-none-any.whl", hash = "sha256:ae97af89f0fbf373400970c0a21eef5aa941ffeed90aee43650b81f7d7f47637"},
{file = "coverage-7.3.2.tar.gz", hash = "sha256:be32ad29341b0170e795ca590e1c07e81fc061cb5b10c74ce7203491484404ef"},
]
[package.dependencies]
@@ -794,13 +794,13 @@ graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "docker"
version = "7.0.0"
version = "6.1.3"
description = "A Python library for the Docker Engine API."
optional = false
python-versions = ">=3.8"
python-versions = ">=3.7"
files = [
{file = "docker-7.0.0-py3-none-any.whl", hash = "sha256:12ba681f2777a0ad28ffbcc846a69c31b4dfd9752b47eb425a274ee269c5e14b"},
{file = "docker-7.0.0.tar.gz", hash = "sha256:323736fb92cd9418fc5e7133bc953e11a9da04f4483f828b527db553f1e7e5a3"},
{file = "docker-6.1.3-py3-none-any.whl", hash = "sha256:aecd2277b8bf8e506e484f6ab7aec39abe0038e29fa4a6d3ba86c3fe01844ed9"},
{file = "docker-6.1.3.tar.gz", hash = "sha256:aa6d17830045ba5ef0168d5eaa34d37beeb113948c413affe1d5991fc11f9a20"},
]
[package.dependencies]
@@ -808,10 +808,10 @@ packaging = ">=14.0"
pywin32 = {version = ">=304", markers = "sys_platform == \"win32\""}
requests = ">=2.26.0"
urllib3 = ">=1.26.0"
websocket-client = ">=0.32.0"
[package.extras]
ssh = ["paramiko (>=2.4.3)"]
websockets = ["websocket-client (>=1.3.0)"]
[[package]]
name = "dparse"
@@ -895,29 +895,29 @@ testing = ["covdefaults (>=2.3)", "coverage (>=7.2.7)", "diff-cover (>=7.5)", "p
[[package]]
name = "flake8"
version = "7.0.0"
version = "6.1.0"
description = "the modular source code checker: pep8 pyflakes and co"
optional = false
python-versions = ">=3.8.1"
files = [
{file = "flake8-7.0.0-py2.py3-none-any.whl", hash = "sha256:a6dfbb75e03252917f2473ea9653f7cd799c3064e54d4c8140044c5c065f53c3"},
{file = "flake8-7.0.0.tar.gz", hash = "sha256:33f96621059e65eec474169085dc92bf26e7b2d47366b70be2f67ab80dc25132"},
{file = "flake8-6.1.0-py2.py3-none-any.whl", hash = "sha256:ffdfce58ea94c6580c77888a86506937f9a1a227dfcd15f245d694ae20a6b6e5"},
{file = "flake8-6.1.0.tar.gz", hash = "sha256:d5b3857f07c030bdb5bf41c7f53799571d75c4491748a3adcd47de929e34cd23"},
]
[package.dependencies]
mccabe = ">=0.7.0,<0.8.0"
pycodestyle = ">=2.11.0,<2.12.0"
pyflakes = ">=3.2.0,<3.3.0"
pyflakes = ">=3.1.0,<3.2.0"
[[package]]
name = "freezegun"
version = "1.4.0"
version = "1.2.2"
description = "Let your Python tests travel through time"
optional = false
python-versions = ">=3.7"
python-versions = ">=3.6"
files = [
{file = "freezegun-1.4.0-py3-none-any.whl", hash = "sha256:55e0fc3c84ebf0a96a5aa23ff8b53d70246479e9a68863f1fcac5a3e52f19dd6"},
{file = "freezegun-1.4.0.tar.gz", hash = "sha256:10939b0ba0ff5adaecf3b06a5c2f73071d9678e507c5eaedb23c761d56ac774b"},
{file = "freezegun-1.2.2-py3-none-any.whl", hash = "sha256:ea1b963b993cb9ea195adbd893a48d573fda951b0da64f60883d7e988b606c9f"},
{file = "freezegun-1.2.2.tar.gz", hash = "sha256:cd22d1ba06941384410cd967d8a99d5ae2442f57dfafeff2fda5de8dc5c05446"},
]
[package.dependencies]
@@ -956,20 +956,20 @@ smmap = ">=3.0.1,<6"
[[package]]
name = "gitpython"
version = "3.1.41"
version = "3.1.37"
description = "GitPython is a Python library used to interact with Git repositories"
optional = false
python-versions = ">=3.7"
files = [
{file = "GitPython-3.1.41-py3-none-any.whl", hash = "sha256:c36b6634d069b3f719610175020a9aed919421c87552185b085e04fbbdb10b7c"},
{file = "GitPython-3.1.41.tar.gz", hash = "sha256:ed66e624884f76df22c8e16066d567aaa5a37d5b5fa19db2c6df6f7156db9048"},
{file = "GitPython-3.1.37-py3-none-any.whl", hash = "sha256:5f4c4187de49616d710a77e98ddf17b4782060a1788df441846bddefbb89ab33"},
{file = "GitPython-3.1.37.tar.gz", hash = "sha256:f9b9ddc0761c125d5780eab2d64be4873fc6817c2899cbcb34b02344bdc7bc54"},
]
[package.dependencies]
gitdb = ">=4.0.1,<5"
[package.extras]
test = ["black", "coverage[toml]", "ddt (>=1.1.1,!=1.4.3)", "mock", "mypy", "pre-commit", "pytest (>=7.3.1)", "pytest-cov", "pytest-instafail", "pytest-mock", "pytest-sugar", "sumtypes"]
test = ["black", "coverage[toml]", "ddt (>=1.1.1,!=1.4.3)", "mypy", "pre-commit", "pytest", "pytest-cov", "pytest-sugar"]
[[package]]
name = "google-api-core"
@@ -995,13 +995,13 @@ grpcio-gcp = ["grpcio-gcp (>=0.2.2,<1.0dev)"]
[[package]]
name = "google-api-python-client"
version = "2.113.0"
version = "2.108.0"
description = "Google API Client Library for Python"
optional = false
python-versions = ">=3.7"
files = [
{file = "google-api-python-client-2.113.0.tar.gz", hash = "sha256:bcffbc8ffbad631f699cf85aa91993f3dc03060b234ca9e6e2f9135028bd9b52"},
{file = "google_api_python_client-2.113.0-py2.py3-none-any.whl", hash = "sha256:25659d488df6c8a69615b2a510af0e63b4c47ab2cb87d71c1e13b28715906e27"},
{file = "google-api-python-client-2.108.0.tar.gz", hash = "sha256:6396efca83185fb205c0abdbc1c2ee57b40475578c6af37f6d0e30a639aade99"},
{file = "google_api_python_client-2.108.0-py2.py3-none-any.whl", hash = "sha256:9d1327213e388943ebcd7db5ce6e7f47987a7e6874e3e1f6116010eea4a0e75d"},
]
[package.dependencies]
@@ -1037,13 +1037,13 @@ requests = ["requests (>=2.20.0,<3.0.0dev)"]
[[package]]
name = "google-auth-httplib2"
version = "0.2.0"
version = "0.1.1"
description = "Google Authentication Library: httplib2 transport"
optional = false
python-versions = "*"
files = [
{file = "google-auth-httplib2-0.2.0.tar.gz", hash = "sha256:38aa7badf48f974f1eb9861794e9c0cb2a0511a4ec0679b1f886d108f5640e05"},
{file = "google_auth_httplib2-0.2.0-py2.py3-none-any.whl", hash = "sha256:b65a0a2123300dd71281a7bf6e64d65a0759287df52729bdd1ae2e47dc311a3d"},
{file = "google-auth-httplib2-0.1.1.tar.gz", hash = "sha256:c64bc555fdc6dd788ea62ecf7bccffcf497bf77244887a3f3d7a5a02f8e3fc29"},
{file = "google_auth_httplib2-0.1.1-py2.py3-none-any.whl", hash = "sha256:42c50900b8e4dcdf8222364d1f0efe32b8421fb6ed72f2613f12f75cc933478c"},
]
[package.dependencies]
@@ -1179,13 +1179,13 @@ requirements-deprecated-finder = ["pip-api", "pipreqs"]
[[package]]
name = "jinja2"
version = "3.1.3"
version = "3.1.2"
description = "A very fast and expressive template engine."
optional = false
python-versions = ">=3.7"
files = [
{file = "Jinja2-3.1.3-py3-none-any.whl", hash = "sha256:7d6d50dd97d52cbc355597bd845fabfbac3f551e1f99619e39a35ce8c370b5fa"},
{file = "Jinja2-3.1.3.tar.gz", hash = "sha256:ac8bd6544d4bb2c9792bf3a159e80bba8fda7f07e81bc3aed565432d5925ba90"},
{file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
{file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
]
[package.dependencies]
@@ -1275,13 +1275,13 @@ files = [
[[package]]
name = "jsonschema"
version = "4.20.0"
version = "4.18.0"
description = "An implementation of JSON Schema validation for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "jsonschema-4.20.0-py3-none-any.whl", hash = "sha256:ed6231f0429ecf966f5bc8dfef245998220549cbbcf140f913b7464c52c3b6b3"},
{file = "jsonschema-4.20.0.tar.gz", hash = "sha256:4f614fd46d8d61258610998997743ec5492a648b33cf478c1ddc23ed4598a5fa"},
{file = "jsonschema-4.18.0-py3-none-any.whl", hash = "sha256:b508dd6142bd03f4c3670534c80af68cd7bbff9ea830b9cf2625d4a3c49ddf60"},
{file = "jsonschema-4.18.0.tar.gz", hash = "sha256:8caf5b57a990a98e9b39832ef3cb35c176fe331414252b6e1b26fd5866f891a4"},
]
[package.dependencies]
@@ -1551,13 +1551,13 @@ min-versions = ["babel (==2.9.0)", "click (==7.0)", "colorama (==0.4)", "ghp-imp
[[package]]
name = "mkdocs-material"
version = "9.5.3"
version = "9.4.14"
description = "Documentation that simply works"
optional = true
python-versions = ">=3.8"
files = [
{file = "mkdocs_material-9.5.3-py3-none-any.whl", hash = "sha256:76c93a8525cceb0b395b9cedab3428bf518cf6439adef2b940f1c1574b775d89"},
{file = "mkdocs_material-9.5.3.tar.gz", hash = "sha256:5899219f422f0a6de784232d9d40374416302ffae3c160cacc72969fcc1ee372"},
{file = "mkdocs_material-9.4.14-py3-none-any.whl", hash = "sha256:dbc78a4fea97b74319a6aa9a2f0be575a6028be6958f813ba367188f7b8428f6"},
{file = "mkdocs_material-9.4.14.tar.gz", hash = "sha256:a511d3ff48fa8718b033e7e37d17abd9cc1de0fdf0244a625ca2ae2387e2416d"},
]
[package.dependencies]
@@ -1565,7 +1565,7 @@ babel = ">=2.10,<3.0"
colorama = ">=0.4,<1.0"
jinja2 = ">=3.0,<4.0"
markdown = ">=3.2,<4.0"
mkdocs = ">=1.5.3,<1.6.0"
mkdocs = ">=1.5.3,<2.0"
mkdocs-material-extensions = ">=1.3,<2.0"
paginate = ">=0.5,<1.0"
pygments = ">=2.16,<3.0"
@@ -1607,13 +1607,13 @@ test = ["pytest", "pytest-cov"]
[[package]]
name = "moto"
version = "4.2.13"
version = "4.2.10"
description = ""
optional = false
python-versions = ">=3.7"
files = [
{file = "moto-4.2.13-py2.py3-none-any.whl", hash = "sha256:93e0fd13b624bd79115494f833308c3641b2be0fc9f4f18aa9264aa01f6168e0"},
{file = "moto-4.2.13.tar.gz", hash = "sha256:01aef6a489a725c8d725bd3dc6f70ff1bedaee3e2641752e4b471ff0ede4b4d7"},
{file = "moto-4.2.10-py2.py3-none-any.whl", hash = "sha256:5cf0736d1f43cb887498d00b00ae522774bfddb7db1f4994fedea65b290b9f0e"},
{file = "moto-4.2.10.tar.gz", hash = "sha256:92595fe287474a31ac3ef847941ebb097e8ffb0c3d6c106e47cf573db06933b2"},
]
[package.dependencies]
@@ -1629,7 +1629,7 @@ Jinja2 = ">=2.10.1"
jsondiff = {version = ">=1.1.2", optional = true, markers = "extra == \"all\""}
multipart = {version = "*", optional = true, markers = "extra == \"all\""}
openapi-spec-validator = {version = ">=0.5.0", optional = true, markers = "extra == \"all\""}
py-partiql-parser = {version = "0.5.0", optional = true, markers = "extra == \"all\""}
py-partiql-parser = {version = "0.4.2", optional = true, markers = "extra == \"all\""}
pyparsing = {version = ">=3.0.7", optional = true, markers = "extra == \"all\""}
python-dateutil = ">=2.1,<3.0.0"
python-jose = {version = ">=3.1.0,<4.0.0", extras = ["cryptography"], optional = true, markers = "extra == \"all\""}
@@ -1642,29 +1642,29 @@ werkzeug = ">=0.5,<2.2.0 || >2.2.0,<2.2.1 || >2.2.1"
xmltodict = "*"
[package.extras]
all = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
all = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
apigateway = ["PyYAML (>=5.1)", "ecdsa (!=0.15)", "openapi-spec-validator (>=0.5.0)", "python-jose[cryptography] (>=3.1.0,<4.0.0)"]
apigatewayv2 = ["PyYAML (>=5.1)"]
appsync = ["graphql-core"]
awslambda = ["docker (>=3.0.0)"]
batch = ["docker (>=3.0.0)"]
cloudformation = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
cloudformation = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
cognitoidp = ["ecdsa (!=0.15)", "python-jose[cryptography] (>=3.1.0,<4.0.0)"]
ds = ["sshpubkeys (>=3.1.0)"]
dynamodb = ["docker (>=3.0.0)", "py-partiql-parser (==0.5.0)"]
dynamodbstreams = ["docker (>=3.0.0)", "py-partiql-parser (==0.5.0)"]
dynamodb = ["docker (>=3.0.0)", "py-partiql-parser (==0.4.2)"]
dynamodbstreams = ["docker (>=3.0.0)", "py-partiql-parser (==0.4.2)"]
ebs = ["sshpubkeys (>=3.1.0)"]
ec2 = ["sshpubkeys (>=3.1.0)"]
efs = ["sshpubkeys (>=3.1.0)"]
eks = ["sshpubkeys (>=3.1.0)"]
glue = ["pyparsing (>=3.0.7)"]
iotdata = ["jsondiff (>=1.1.2)"]
proxy = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=2.5.1)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
resourcegroupstaggingapi = ["PyYAML (>=5.1)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "sshpubkeys (>=3.1.0)"]
proxy = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=2.5.1)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
resourcegroupstaggingapi = ["PyYAML (>=5.1)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "sshpubkeys (>=3.1.0)"]
route53resolver = ["sshpubkeys (>=3.1.0)"]
s3 = ["PyYAML (>=5.1)", "py-partiql-parser (==0.5.0)"]
s3crc32c = ["PyYAML (>=5.1)", "crc32c", "py-partiql-parser (==0.5.0)"]
server = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "flask (!=2.2.0,!=2.2.1)", "flask-cors", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
s3 = ["PyYAML (>=5.1)", "py-partiql-parser (==0.4.2)"]
s3crc32c = ["PyYAML (>=5.1)", "crc32c", "py-partiql-parser (==0.4.2)"]
server = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "flask (!=2.2.0,!=2.2.1)", "flask-cors", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
ssm = ["PyYAML (>=5.1)"]
xray = ["aws-xray-sdk (>=0.93,!=0.96)", "setuptools"]
@@ -1828,17 +1828,17 @@ signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "openapi-schema-validator"
version = "0.6.2"
version = "0.6.0"
description = "OpenAPI schema validation for Python"
optional = false
python-versions = ">=3.8.0,<4.0.0"
files = [
{file = "openapi_schema_validator-0.6.2-py3-none-any.whl", hash = "sha256:c4887c1347c669eb7cded9090f4438b710845cd0f90d1fb9e1b3303fb37339f8"},
{file = "openapi_schema_validator-0.6.2.tar.gz", hash = "sha256:11a95c9c9017912964e3e5f2545a5b11c3814880681fcacfb73b1759bb4f2804"},
{file = "openapi_schema_validator-0.6.0-py3-none-any.whl", hash = "sha256:9e95b95b621efec5936245025df0d6a7ffacd1551e91d09196b3053040c931d7"},
{file = "openapi_schema_validator-0.6.0.tar.gz", hash = "sha256:921b7c1144b856ca3813e41ecff98a4050f7611824dfc5c6ead7072636af0520"},
]
[package.dependencies]
jsonschema = ">=4.19.1,<5.0.0"
jsonschema = ">=4.18.0,<5.0.0"
jsonschema-specifications = ">=2023.5.2,<2024.0.0"
rfc3339-validator = "*"
@@ -1989,17 +1989,17 @@ files = [
[[package]]
name = "py-partiql-parser"
version = "0.5.0"
version = "0.4.2"
description = "Pure Python PartiQL Parser"
optional = false
python-versions = "*"
files = [
{file = "py-partiql-parser-0.5.0.tar.gz", hash = "sha256:427a662e87d51a0a50150fc8b75c9ebb4a52d49129684856c40c88b8c8e027e4"},
{file = "py_partiql_parser-0.5.0-py3-none-any.whl", hash = "sha256:dc454c27526adf62deca5177ea997bf41fac4fd109c5d4c8d81f984de738ba8f"},
{file = "py-partiql-parser-0.4.2.tar.gz", hash = "sha256:9c99d545be7897c6bfa97a107f6cfbcd92e359d394e4f3b95430e6409e8dd1e1"},
{file = "py_partiql_parser-0.4.2-py3-none-any.whl", hash = "sha256:f3f34de8dddf65ed2d47b4263560bbf97be1ecc6bd5c61da039ede90f26a10ce"},
]
[package.extras]
dev = ["black (==22.6.0)", "flake8", "mypy", "pytest"]
dev = ["black (==22.6.0)", "flake8", "mypy (==0.971)", "pytest"]
[[package]]
name = "pyasn1"
@@ -2102,13 +2102,13 @@ email = ["email-validator (>=1.0.3)"]
[[package]]
name = "pyflakes"
version = "3.2.0"
version = "3.1.0"
description = "passive checker of Python programs"
optional = false
python-versions = ">=3.8"
files = [
{file = "pyflakes-3.2.0-py2.py3-none-any.whl", hash = "sha256:84b5be138a2dfbb40689ca07e2152deb896a65c3a3e24c251c5c62489568074a"},
{file = "pyflakes-3.2.0.tar.gz", hash = "sha256:1c61603ff154621fb2a9172037d84dca3500def8c8b630657d1701f026f8af3f"},
{file = "pyflakes-3.1.0-py2.py3-none-any.whl", hash = "sha256:4132f6d49cb4dae6819e5379898f2b8cce3c5f23994194c24b77d5da2e36f774"},
{file = "pyflakes-3.1.0.tar.gz", hash = "sha256:a0aae034c444db0071aa077972ba4768d40c830d9539fd45bf4cd3f8f6992efc"},
]
[[package]]
@@ -2147,13 +2147,13 @@ tests = ["coverage[toml] (==5.0.4)", "pytest (>=6.0.0,<7.0.0)"]
[[package]]
name = "pylint"
version = "3.0.3"
version = "3.0.2"
description = "python code static checker"
optional = false
python-versions = ">=3.8.0"
files = [
{file = "pylint-3.0.3-py3-none-any.whl", hash = "sha256:7a1585285aefc5165db81083c3e06363a27448f6b467b3b0f30dbd0ac1f73810"},
{file = "pylint-3.0.3.tar.gz", hash = "sha256:58c2398b0301e049609a8429789ec6edf3aabe9b6c5fec916acd18639c16de8b"},
{file = "pylint-3.0.2-py3-none-any.whl", hash = "sha256:60ed5f3a9ff8b61839ff0348b3624ceeb9e6c2a92c514d81c9cc273da3b6bcda"},
{file = "pylint-3.0.2.tar.gz", hash = "sha256:0d4c286ef6d2f66c8bfb527a7f8a629009e42c99707dec821a03e1b51a4c1496"},
]
[package.dependencies]
@@ -2163,7 +2163,7 @@ dill = [
{version = ">=0.2", markers = "python_version < \"3.11\""},
{version = ">=0.3.6", markers = "python_version >= \"3.11\""},
]
isort = ">=4.2.5,<5.13.0 || >5.13.0,<6"
isort = ">=4.2.5,<6"
mccabe = ">=0.6,<0.8"
platformdirs = ">=2.2.0"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
@@ -2208,13 +2208,13 @@ diagrams = ["jinja2", "railroad-diagrams"]
[[package]]
name = "pytest"
version = "7.4.4"
version = "7.4.3"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.7"
files = [
{file = "pytest-7.4.4-py3-none-any.whl", hash = "sha256:b090cdf5ed60bf4c45261be03239c2c1c22df034fbffe691abe93cd80cea01d8"},
{file = "pytest-7.4.4.tar.gz", hash = "sha256:2cf0005922c6ace4a3e2ec8b4080eb0d9753fdc93107415332f50ce9e7994280"},
{file = "pytest-7.4.3-py3-none-any.whl", hash = "sha256:0d009c083ea859a71b76adf7c1d502e4bc170b80a8ef002da5806527b9591fac"},
{file = "pytest-7.4.3.tar.gz", hash = "sha256:d989d136982de4e3b29dabcc838ad581c64e8ed52c11fbe86ddebd9da0818cd5"},
]
[package.dependencies]
@@ -2892,12 +2892,12 @@ testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs (
[[package]]
name = "shodan"
version = "1.31.0"
version = "1.30.1"
description = "Python library and command-line utility for Shodan (https://developer.shodan.io)"
optional = false
python-versions = "*"
files = [
{file = "shodan-1.31.0.tar.gz", hash = "sha256:c73275386ea02390e196c35c660706a28dd4d537c5a21eb387ab6236fac251f6"},
{file = "shodan-1.30.1.tar.gz", hash = "sha256:bedb6e8c2b4459592c1bc17b4d4b57dab0cb58a455ad589ee26a6304242cd505"},
]
[package.dependencies]
@@ -2921,18 +2921,18 @@ files = [
[[package]]
name = "slack-sdk"
version = "3.26.1"
version = "3.26.0"
description = "The Slack API Platform SDK for Python"
optional = false
python-versions = ">=3.6.0"
files = [
{file = "slack_sdk-3.26.1-py2.py3-none-any.whl", hash = "sha256:f80f0d15f0fce539b470447d2a07b03ecdad6b24f69c1edd05d464cf21253a06"},
{file = "slack_sdk-3.26.1.tar.gz", hash = "sha256:d1600211eaa37c71a5f92daf4404074c3e6b3f5359a37c93c818b39d88ab4ca0"},
{file = "slack_sdk-3.26.0-py2.py3-none-any.whl", hash = "sha256:b84c2d93163166eb682e290c19334683c2d0f0cb4a5479c809706b44038fdda1"},
{file = "slack_sdk-3.26.0.tar.gz", hash = "sha256:147946f388ce73b17c377b823759fcb39c0eca7444ca0a942dc12a3940a4f44f"},
]
[package.extras]
optional = ["SQLAlchemy (>=1.4,<3)", "aiodns (>1.0)", "aiohttp (>=3.7.3,<4)", "boto3 (<=2)", "websocket-client (>=1,<2)", "websockets (>=10,<11)"]
testing = ["Flask (>=1,<2)", "Flask-Sockets (>=0.2,<1)", "Jinja2 (==3.0.3)", "Werkzeug (<2)", "black (==22.8.0)", "boto3 (<=2)", "click (==8.0.4)", "flake8 (>=5.0.4,<7)", "itsdangerous (==1.1.0)", "moto (>=3,<4)", "psutil (>=5,<6)", "pytest (>=7.0.1,<8)", "pytest-asyncio (<1)", "pytest-cov (>=2,<3)"]
testing = ["Flask (>=1,<2)", "Flask-Sockets (>=0.2,<1)", "Jinja2 (==3.0.3)", "Werkzeug (<2)", "black (==22.8.0)", "boto3 (<=2)", "click (==8.0.4)", "flake8 (>=5,<6)", "itsdangerous (==1.1.0)", "moto (>=3,<4)", "psutil (>=5,<6)", "pytest (>=6.2.5,<7)", "pytest-asyncio (<1)", "pytest-cov (>=2,<3)"]
[[package]]
name = "smmap"
@@ -3157,6 +3157,22 @@ files = [
[package.extras]
watchmedo = ["PyYAML (>=3.10)"]
[[package]]
name = "websocket-client"
version = "1.5.1"
description = "WebSocket client for Python with low level API options"
optional = false
python-versions = ">=3.7"
files = [
{file = "websocket-client-1.5.1.tar.gz", hash = "sha256:3f09e6d8230892547132177f575a4e3e73cfdf06526e20cc02aa1c3b47184d40"},
{file = "websocket_client-1.5.1-py3-none-any.whl", hash = "sha256:cdf5877568b7e83aa7cf2244ab56a3213de587bbe0ce9d8b9600fc77b455d89e"},
]
[package.extras]
docs = ["Sphinx (>=3.4)", "sphinx-rtd-theme (>=0.5)"]
optional = ["python-socks", "wsaccel"]
test = ["websockets"]
[[package]]
name = "werkzeug"
version = "3.0.1"
@@ -3296,4 +3312,4 @@ docs = ["mkdocs", "mkdocs-material"]
[metadata]
lock-version = "2.0"
python-versions = ">=3.9,<3.12"
content-hash = "ded23fafe3c73eaaec15eaaf040af7640ffce1d0c33473c9997af4a7c6118d81"
content-hash = "7e28daf704e53d057e66bc8fb71558361ab36a7cca85c7498a963f6406f54ef4"


@@ -51,6 +51,7 @@ from prowler.providers.common.audit_info import (
set_provider_audit_info,
set_provider_execution_parameters,
)
from prowler.providers.common.clean import clean_provider_local_output_directories
from prowler.providers.common.outputs import set_provider_output_options
from prowler.providers.common.quick_inventory import run_provider_quick_inventory
@@ -323,6 +324,9 @@ def prowler():
if checks_folder:
remove_custom_checks_module(checks_folder, provider)
# clean local directories
clean_provider_local_output_directories(args)
# If there are failed findings exit code 3, except if -z is input
if not args.ignore_exit_code_3 and stats["total_fail"] > 0:
sys.exit(3)


@@ -211,31 +211,6 @@
"iam_avoid_root_usage"
]
},
{
"Id": "op.acc.4.aws.iam.8",
"Description": "Proceso de gestión de derechos de acceso",
"Attributes": [
{
"IdGrupoControl": "op.acc.4",
"Marco": "operacional",
"Categoria": "control de acceso",
"DescripcionControl": "Se restringirá todo acceso a las acciones especificadas para el usuario root de una cuenta.",
"Nivel": "alto",
"Tipo": "requisito",
"Dimensiones": [
"confidencialidad",
"integridad",
"trazabilidad",
"autenticidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"organizations_account_part_of_organizations",
"organizations_scp_check_deny_regions"
]
},
{
"Id": "op.acc.4.aws.iam.9",
"Description": "Proceso de gestión de derechos de acceso",
@@ -1146,30 +1121,6 @@
"cloudtrail_insights_exist"
]
},
{
"Id": "op.exp.8.r1.aws.ct.3",
"Description": "Revisión de los registros",
"Attributes": [
{
"IdGrupoControl": "op.exp.8.r1",
"Marco": "operacional",
"Categoria": "explotación",
"DescripcionControl": "Registrar los eventos de lectura y escritura de datos.",
"Nivel": "alto",
"Tipo": "refuerzo",
"Dimensiones": [
"trazabilidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"cloudwatch_log_metric_filter_and_alarm_for_cloudtrail_configuration_changes_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_insights_exist"
]
},
{
"Id": "op.exp.8.r1.aws.ct.4",
"Description": "Revisión de los registros",
@@ -1282,33 +1233,6 @@
"iam_role_cross_service_confused_deputy_prevention"
]
},
{
"Id": "op.exp.8.r4.aws.ct.1",
"Description": "Control de acceso",
"Attributes": [
{
"IdGrupoControl": "op.exp.8.r4",
"Marco": "operacional",
"Categoria": "explotación",
"DescripcionControl": "Asignar correctamente las políticas AWS IAM para el acceso y borrado de los registros y sus copias de seguridad haciendo uso del principio de mínimo privilegio.",
"Nivel": "alto",
"Tipo": "refuerzo",
"Dimensiones": [
"trazabilidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"iam_policy_allows_privilege_escalation",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_customer_unattached_policy_no_administrative_privilege",
"iam_no_custom_policy_permissive_role_assumption",
"iam_policy_attached_only_to_group_or_roles",
"iam_role_cross_service_confused_deputy_prevention",
"iam_policy_no_full_access_to_cloudtrail"
]
},
{
"Id": "op.exp.8.r4.aws.ct.2",
"Description": "Control de acceso",
@@ -2186,7 +2110,7 @@
}
],
"Checks": [
"fms_policy_compliant"
"networkfirewall_in_all_vpc"
]
},
{
@@ -2327,31 +2251,6 @@
"cloudfront_distributions_https_enabled"
]
},
{
"Id": "mp.com.4.aws.ws.1",
"Description": "Separación de flujos de información en la red",
"Attributes": [
{
"IdGrupoControl": "mp.com.4",
"Marco": "medidas de protección",
"Categoria": "segregación de redes",
"DescripcionControl": "Se deberán abrir solo los puertos necesarios para el uso del servicio AWS WorkSpaces.",
"Nivel": "alto",
"Tipo": "requisito",
"Dimensiones": [
"confidencialidad",
"integridad",
"trazabilidad",
"autenticidad",
"disponibilidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"workspaces_vpc_2private_1public_subnets_nat"
]
},
{
"Id": "mp.com.4.aws.vpc.1",
"Description": "Separación de flujos de información en la red",
@@ -2424,8 +2323,7 @@
}
],
"Checks": [
"vpc_subnet_separate_private_public",
"vpc_different_regions"
"vpc_subnet_separate_private_public"
]
},
{
@@ -2472,8 +2370,7 @@
}
],
"Checks": [
"vpc_subnet_different_az",
"vpc_different_regions"
"vpc_subnet_different_az"
]
},
{


@@ -11,7 +11,7 @@ from prowler.lib.logger import logger
timestamp = datetime.today()
timestamp_utc = datetime.now(timezone.utc).replace(tzinfo=timezone.utc)
prowler_version = "3.12.1"
prowler_version = "3.11.3"
html_logo_url = "https://github.com/prowler-cloud/prowler/"
html_logo_img = "https://user-images.githubusercontent.com/3985464/113734260-7ba06900-96fb-11eb-82bc-d4f68a1e2710.png"
square_logo_img = "https://user-images.githubusercontent.com/38561120/235905862-9ece5bd7-9aa3-4e48-807a-3a9035eb8bfb.png"
@@ -22,9 +22,6 @@ gcp_logo = "https://user-images.githubusercontent.com/38561120/235928332-eb4accd
orange_color = "\033[38;5;208m"
banner_color = "\033[1;92m"
# Severities
valid_severities = ["critical", "high", "medium", "low", "informational"]
# Compliance
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))


@@ -69,8 +69,8 @@ aws:
# AWS Organizations
# organizations_scp_check_deny_regions
# organizations_enabled_regions: [
# "eu-central-1",
# "eu-west-1",
# 'eu-central-1',
# 'eu-west-1',
# "us-east-1"
# ]
organizations_enabled_regions: []


@@ -107,20 +107,14 @@ def exclude_services_to_run(
# Load checks from checklist.json
def parse_checks_from_file(input_file: str, provider: str) -> set:
"""parse_checks_from_file returns a set of checks read from the given file"""
try:
checks_to_execute = set()
with open_file(input_file) as f:
json_file = parse_json_file(f)
checks_to_execute = set()
with open_file(input_file) as f:
json_file = parse_json_file(f)
for check_name in json_file[provider]:
checks_to_execute.add(check_name)
for check_name in json_file[provider]:
checks_to_execute.add(check_name)
return checks_to_execute
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
return checks_to_execute
# Load checks from custom folder
@@ -316,7 +310,7 @@ def print_checks(
def parse_checks_from_compliance_framework(
compliance_frameworks: list, bulk_compliance_frameworks: dict
) -> list:
"""parse_checks_from_compliance_framework returns a set of checks from the given compliance_frameworks"""
"""Parse checks from compliance frameworks specification"""
checks_to_execute = set()
try:
for framework in compliance_frameworks:
@@ -613,32 +607,22 @@ def update_audit_metadata(
)
def recover_checks_from_service(service_list: list, provider: str) -> set:
"""
Recover all checks from the selected provider and service
def recover_checks_from_service(service_list: list, provider: str) -> list:
checks = set()
service_list = [
"awslambda" if service == "lambda" else service for service in service_list
]
for service in service_list:
modules = recover_checks_from_provider(provider, service)
if not modules:
logger.error(f"Service '{service}' does not have checks.")
Returns a set of checks from the given services
"""
try:
checks = set()
service_list = [
"awslambda" if service == "lambda" else service for service in service_list
]
for service in service_list:
service_checks = recover_checks_from_provider(provider, service)
if not service_checks:
logger.error(f"Service '{service}' does not have checks.")
else:
for check in service_checks:
# Recover check name and module name from import path
# Format: "providers.{provider}.services.{service}.{check_name}.{check_name}"
check_name = check[0].split(".")[-1]
# If the service is present in the group list passed as parameters
# if service_name in group_list: checks_from_arn.add(check_name)
checks.add(check_name)
return checks
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
for check_module in modules:
# Recover check name and module name from import path
# Format: "providers.{provider}.services.{service}.{check_name}.{check_name}"
check_name = check_module[0].split(".")[-1]
# If the service is present in the group list passed as parameters
# if service_name in group_list: checks_from_arn.add(check_name)
checks.add(check_name)
return checks
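
As an aside, the checklist file that `parse_checks_from_file` reads is a provider-keyed JSON object. A minimal, self-contained sketch of that parsing (the file content and check names below are invented for illustration; the real function indexes the provider key directly rather than using `.get`):

```python
import json

# Hypothetical -C/--checks-file content: a JSON object keyed by provider,
# each mapping to a list of check names.
example_checklist = json.dumps(
    {"aws": ["iam_root_mfa_enabled", "s3_bucket_public_access"]}
)

def parse_checks(raw_json: str, provider: str) -> set:
    """Return the set of check names listed for the given provider."""
    data = json.loads(raw_json)
    return set(data.get(provider, []))

checks = parse_checks(example_checklist, "aws")
```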


@@ -1,6 +1,5 @@
from colorama import Fore, Style
from prowler.config.config import valid_severities
from prowler.lib.check.check import (
parse_checks_from_compliance_framework,
parse_checks_from_file,
@@ -11,6 +10,7 @@ from prowler.lib.logger import logger
# Generate the list of checks to execute
# PENDING Test for this function
def load_checks_to_execute(
bulk_checks_metadata: dict,
bulk_compliance_frameworks: dict,
@@ -22,93 +22,73 @@ def load_checks_to_execute(
categories: set,
provider: str,
) -> set:
"""Generate the list of checks to execute based on the cloud provider and the input arguments given"""
try:
# Local subsets
checks_to_execute = set()
check_aliases = {}
check_severities = {key: [] for key in valid_severities}
check_categories = {}
"""Generate the list of checks to execute based on the cloud provider and input arguments specified"""
checks_to_execute = set()
# First, loop over the bulk_checks_metadata to extract the needed subsets
for check, metadata in bulk_checks_metadata.items():
# Aliases
for alias in metadata.CheckAliases:
check_aliases[alias] = check
# Handle if there are checks passed using -c/--checks
if check_list:
for check_name in check_list:
checks_to_execute.add(check_name)
# Severities
if metadata.Severity:
check_severities[metadata.Severity].append(check)
# Handle if there are some severities passed using --severity
elif severities:
for check in bulk_checks_metadata:
# Check check's severity
if bulk_checks_metadata[check].Severity in severities:
checks_to_execute.add(check)
if service_list:
checks_to_execute = (
recover_checks_from_service(service_list, provider) & checks_to_execute
)
# Categories
for category in metadata.Categories:
if category not in check_categories:
check_categories[category] = []
check_categories[category].append(check)
# Handle if there are checks passed using -c/--checks
if check_list:
for check_name in check_list:
checks_to_execute.add(check_name)
# Handle if there are some severities passed using --severity
elif severities:
for severity in severities:
checks_to_execute.update(check_severities[severity])
if service_list:
checks_to_execute = (
recover_checks_from_service(service_list, provider)
& checks_to_execute
)
# Handle if there are checks passed using -C/--checks-file
elif checks_file:
# Handle if there are checks passed using -C/--checks-file
elif checks_file:
try:
checks_to_execute = parse_checks_from_file(checks_file, provider)
except Exception as e:
logger.error(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
# Handle if there are services passed using -s/--services
elif service_list:
checks_to_execute = recover_checks_from_service(service_list, provider)
# Handle if there are services passed using -s/--services
elif service_list:
checks_to_execute = recover_checks_from_service(service_list, provider)
# Handle if there are compliance frameworks passed using --compliance
elif compliance_frameworks:
# Handle if there are compliance frameworks passed using --compliance
elif compliance_frameworks:
try:
checks_to_execute = parse_checks_from_compliance_framework(
compliance_frameworks, bulk_compliance_frameworks
)
except Exception as e:
logger.error(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
# Handle if there are categories passed using --categories
elif categories:
for category in categories:
checks_to_execute.update(check_categories[category])
# Handle if there are categories passed using --categories
elif categories:
for cat in categories:
for check in bulk_checks_metadata:
# Check check's categories
if cat in bulk_checks_metadata[check].Categories:
checks_to_execute.add(check)
# If there are no checks passed as argument
else:
# If there are no checks passed as argument
else:
try:
# Get all check modules to run with the specific provider
checks = recover_checks_from_provider(provider)
except Exception as e:
logger.error(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
else:
for check_info in checks:
# Recover check name from import path (last part)
# Format: "providers.{provider}.services.{service}.{check_name}.{check_name}"
check_name = check_info[0]
checks_to_execute.add(check_name)
# Check Aliases
checks_to_execute = update_checks_to_execute_with_aliases(
checks_to_execute, check_aliases
)
# Get Check Aliases mapping
check_aliases = {}
for check, metadata in bulk_checks_metadata.items():
for alias in metadata.CheckAliases:
check_aliases[alias] = check
return checks_to_execute
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
def update_checks_to_execute_with_aliases(
checks_to_execute: set, check_aliases: dict
) -> set:
"""update_checks_to_execute_with_aliases returns the checks_to_execute updated using the check aliases."""
# Verify if any input check is an alias of another check
for input_check in checks_to_execute:
if (
@@ -121,4 +101,5 @@ def update_checks_to_execute_with_aliases(
print(
f"\nUsing alias {Fore.YELLOW}{input_check}{Style.RESET_ALL} for check {Fore.YELLOW}{check_aliases[input_check]}{Style.RESET_ALL}...\n"
)
return checks_to_execute
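
The alias handling at the end of this hunk can be sketched compactly: each input name that is an alias is swapped for its canonical check name. The `apigateway_authorizers_enabled` alias does appear later in this diff's check metadata; the second check name is illustrative:

```python
# Hypothetical alias map in the shape built from CheckAliases metadata.
check_aliases = {
    "apigateway_authorizers_enabled": "apigateway_restapi_authorizers_enabled"
}

def resolve_aliases(checks_to_execute: set, aliases: dict) -> set:
    # Swap every alias for its canonical check name; non-aliases pass through.
    return {aliases.get(name, name) for name in checks_to_execute}

resolved = resolve_aliases(
    {"apigateway_authorizers_enabled", "ec2_ebs_default_encryption"}, check_aliases
)
```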


@@ -3,9 +3,9 @@ import sys
import yaml
from jsonschema import validate
from prowler.config.config import valid_severities
from prowler.lib.logger import logger
valid_severities = ["critical", "high", "medium", "low", "informational"]
custom_checks_metadata_schema = {
"type": "object",
"properties": {


@@ -7,7 +7,6 @@ from prowler.config.config import (
check_current_version,
default_config_file_path,
default_output_directory,
valid_severities,
)
from prowler.providers.common.arguments import (
init_providers_parser,
@@ -225,8 +224,8 @@ Detailed documentation at https://docs.prowler.cloud
common_checks_parser.add_argument(
"--severity",
nargs="+",
help=f"List of severities to be executed {valid_severities}",
choices=valid_severities,
help="List of severities to be executed [informational, low, medium, high, critical]",
choices=["informational", "low", "medium", "high", "critical"],
)
group.add_argument(
"--compliance",


@@ -401,8 +401,7 @@ def display_compliance_table(
"Bajo": 0,
}
if finding.status == "FAIL":
if attribute.Tipo != "recomendacion":
fail_count += 1
fail_count += 1
marcos[marco_categoria][
"Estado"
] = f"{Fore.RED}NO CUMPLE{Style.RESET_ALL}"


@@ -407,7 +407,7 @@ def get_azure_html_assessment_summary(audit_info):
if isinstance(audit_info, Azure_Audit_Info):
printed_subscriptions = []
for key, value in audit_info.identity.subscriptions.items():
intermediate = f"{key} : {value}"
intermediate = key + " : " + value
printed_subscriptions.append(intermediate)
# check if identity is str(coming from SP) or dict(coming from browser or)


@@ -13,7 +13,7 @@ def send_slack_message(token, channel, stats, provider, audit_info):
response = client.chat_postMessage(
username="Prowler",
icon_url=square_logo_img,
channel=f"#{channel}",
channel="#" + channel,
blocks=create_message_blocks(identity, logo, stats),
)
return response
@@ -35,7 +35,7 @@ def create_message_identity(provider, audit_info):
elif provider == "azure":
printed_subscriptions = []
for key, value in audit_info.identity.subscriptions.items():
intermediate = f"- *{key}: {value}*\n"
intermediate = "- *" + key + ": " + value + "*\n"
printed_subscriptions.append(intermediate)
identity = f"Azure Subscriptions:\n{''.join(printed_subscriptions)}"
logo = azure_logo


@@ -10,10 +10,7 @@ from prowler.config.config import aws_services_json_file
from prowler.lib.check.check import list_modules, recover_checks_from_service
from prowler.lib.logger import logger
from prowler.lib.utils.utils import open_file, parse_json_file
from prowler.providers.aws.config import (
AWS_STS_GLOBAL_ENDPOINT_REGION,
ROLE_SESSION_NAME,
)
from prowler.providers.aws.config import AWS_STS_GLOBAL_ENDPOINT_REGION
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
from prowler.providers.aws.lib.credentials.credentials import create_sts_session
@@ -116,15 +113,9 @@ def assume_role(
sts_endpoint_region: str = None,
) -> dict:
try:
role_session_name = (
assumed_role_info.role_session_name
if assumed_role_info.role_session_name
else ROLE_SESSION_NAME
)
assume_role_arguments = {
"RoleArn": assumed_role_info.role_arn,
"RoleSessionName": role_session_name,
"RoleSessionName": "ProwlerAsessmentSession",
"DurationSeconds": assumed_role_info.session_duration,
}
@@ -161,31 +152,23 @@ def input_role_mfa_token_and_code() -> tuple[str]:
def generate_regional_clients(
service: str,
audit_info: AWS_Audit_Info,
service: str, audit_info: AWS_Audit_Info, global_service: bool = False
) -> dict:
"""generate_regional_clients returns a dict with the following format for the given service:
Example:
{"eu-west-1": boto3_service_client}
"""
try:
regional_clients = {}
service_regions = get_available_aws_service_regions(service, audit_info)
# Get the regions enabled for the account and get the intersection with the service available regions
if audit_info.enabled_regions:
enabled_regions = service_regions.intersection(audit_info.enabled_regions)
else:
enabled_regions = service_regions
for region in enabled_regions:
# Check if it is global service to gather only one region
if global_service:
if service_regions:
if audit_info.profile_region in service_regions:
service_regions = [audit_info.profile_region]
service_regions = service_regions[:1]
for region in service_regions:
regional_client = audit_info.audit_session.client(
service, region_name=region, config=audit_info.session_config
)
regional_client.region = region
regional_clients[region] = regional_client
return regional_clients
except Exception as error:
logger.error(
@@ -193,26 +176,6 @@ def generate_regional_clients(
)
def get_aws_enabled_regions(audit_info: AWS_Audit_Info) -> set:
"""get_aws_enabled_regions returns a set of enabled AWS regions"""
# EC2 Client to check enabled regions
service = "ec2"
default_region = get_default_region(service, audit_info)
ec2_client = audit_info.audit_session.client(service, region_name=default_region)
enabled_regions = set()
try:
# With AllRegions=False we only get the enabled regions for the account
for region in ec2_client.describe_regions(AllRegions=False).get("Regions", []):
enabled_regions.add(region.get("RegionName"))
except Exception as error:
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return enabled_regions
def get_aws_available_regions():
try:
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
@@ -253,8 +216,6 @@ def get_checks_from_input_arn(audit_resources: list, provider: str) -> set:
service = "efs"
elif service == "logs":
service = "cloudwatch"
elif service == "cognito":
service = "cognito-idp"
# Check if Prowler has checks in service
try:
list_modules(provider, service)
@@ -306,18 +267,17 @@ def get_regions_from_audit_resources(audit_resources: list) -> set:
return audited_regions
def get_available_aws_service_regions(service: str, audit_info: AWS_Audit_Info) -> set:
def get_available_aws_service_regions(service: str, audit_info: AWS_Audit_Info) -> list:
# Get json locally
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
with open_file(f"{actual_directory}/{aws_services_json_file}") as f:
data = parse_json_file(f)
json_regions = set(
data["services"][service]["regions"][audit_info.audited_partition]
)
# Check for input aws audit_info.audited_regions
if audit_info.audited_regions:
# Get common regions between input and json
regions = json_regions.intersection(audit_info.audited_regions)
# Check if it is a subservice
json_regions = data["services"][service]["regions"][audit_info.audited_partition]
if audit_info.audited_regions: # Check for input aws audit_info.audited_regions
regions = list(
set(json_regions).intersection(audit_info.audited_regions)
) # Get common regions between input and json
else: # Get all regions from json of the service and partition
regions = json_regions
return regions
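
Both sides of this hunk compute the same thing: the intersection of the service's supported regions with the regions the user asked to audit, falling back to all service regions when none were given. A minimal sketch of that filtering (region values illustrative; the real data comes from `aws_services.json` for the audited partition):

```python
# Regions the service supports in the audited partition (illustrative).
service_regions = {"eu-west-1", "eu-central-1", "us-east-1"}

def available_regions(service_regions: set, audited_regions) -> set:
    # Keep only the regions the user asked for, when any were given.
    if audited_regions:
        return service_regions.intersection(audited_regions)
    return service_regions

filtered = available_regions(service_regions, ["eu-west-1", "ap-south-1"])
```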

File diff suppressed because it is too large


@@ -1,3 +1,2 @@
AWS_STS_GLOBAL_ENDPOINT_REGION = "us-east-1"
BOTO3_USER_AGENT_EXTRA = "APN_1826889"
ROLE_SESSION_NAME = "ProwlerAssessmentSession"


@@ -143,23 +143,29 @@ def is_allowlisted(
finding_tags,
):
try:
allowlisted_checks = {}
# By default is not allowlisted
is_finding_allowlisted = False
# First set account key from allowlist dict
if audited_account in allowlist["Accounts"]:
allowlisted_checks = allowlist["Accounts"][audited_account]["Checks"]
# If there is a *, it affects to all accounts
# This cannot be elif since in the case of * and single accounts we
# want to merge allowlisted checks from * to the other accounts check list
if "*" in allowlist["Accounts"]:
checks_multi_account = allowlist["Accounts"]["*"]["Checks"]
allowlisted_checks.update(checks_multi_account)
# We always check all the accounts present in the allowlist
# if one allowlists the finding we set the finding as allowlisted
for account in allowlist["Accounts"]:
if account == audited_account or account == "*":
if is_allowlisted_in_check(
allowlist["Accounts"][account]["Checks"],
audited_account,
check,
finding_region,
finding_resource,
finding_tags,
):
is_finding_allowlisted = True
break
# Test if it is allowlisted
if is_allowlisted_in_check(
allowlisted_checks,
audited_account,
check,
finding_region,
finding_resource,
finding_tags,
):
is_finding_allowlisted = True
return is_finding_allowlisted
except Exception as error:
@@ -304,17 +310,10 @@ def is_excepted(
is_tag_excepted = __is_item_matched__(excepted_tags, finding_tags)
if (
not is_account_excepted
and not is_region_excepted
and not is_resource_excepted
and not is_tag_excepted
):
excepted = False
elif (
(is_account_excepted or not excepted_accounts)
and (is_region_excepted or not excepted_regions)
and (is_resource_excepted or not excepted_resources)
and (is_tag_excepted or not excepted_tags)
is_account_excepted
and is_region_excepted
and is_resource_excepted
and is_tag_excepted
):
excepted = True
return excepted
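
One side of this allowlist hunk merges the wildcard account's checks into the audited account's own entries before matching; a condensed, self-contained sketch of that merge (account IDs and check names invented):

```python
# Hypothetical allowlist in the shape the diff operates on.
allowlist = {
    "Accounts": {
        "*": {"Checks": {"check_a": {}}},
        "123456789012": {"Checks": {"check_b": {}}},
    }
}

def checks_for_account(allowlist: dict, account: str) -> dict:
    """Merge the account's allowlisted checks with the wildcard entries."""
    merged = {}
    if account in allowlist["Accounts"]:
        merged.update(allowlist["Accounts"][account]["Checks"])
    # "*" applies to every account, on top of account-specific entries.
    if "*" in allowlist["Accounts"]:
        merged.update(allowlist["Accounts"]["*"]["Checks"])
    return merged
```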


@@ -1,8 +1,6 @@
from argparse import ArgumentTypeError, Namespace
from re import fullmatch, search
from prowler.providers.aws.aws_provider import get_aws_available_regions
from prowler.providers.aws.config import ROLE_SESSION_NAME
from prowler.providers.aws.lib.arn.arn import arn_type
@@ -28,13 +26,6 @@ def init_parser(self):
help="ARN of the role to be assumed",
# Pending ARN validation
)
aws_auth_subparser.add_argument(
"--role-session-name",
nargs="?",
default=ROLE_SESSION_NAME,
help="An identifier for the assumed role session. Defaults to ProwlerAssessmentSession",
type=validate_role_session_name,
)
aws_auth_subparser.add_argument(
"--sts-endpoint-region",
nargs="?",
@@ -93,11 +84,6 @@ def init_parser(self):
action="store_true",
help="Skip updating previous findings of Prowler in Security Hub",
)
aws_security_hub_subparser.add_argument(
"--send-sh-only-fails",
action="store_true",
help="Send only Prowler failed findings to SecurityHub",
)
# AWS Quick Inventory
aws_quick_inventory_subparser = aws_parser.add_argument_group("Quick Inventory")
aws_quick_inventory_subparser.add_argument(
@@ -113,7 +99,6 @@ def init_parser(self):
"-B",
"--output-bucket",
nargs="?",
type=validate_bucket,
default=None,
help="Custom output bucket, requires -M <mode> and it can work also with -o flag.",
)
@@ -121,7 +106,6 @@ def init_parser(self):
"-D",
"--output-bucket-no-assume",
nargs="?",
type=validate_bucket,
default=None,
help="Same as -B but do not use the assumed role credentials to put objects to the bucket, instead uses the initial credentials.",
)
@@ -195,37 +179,9 @@ def validate_arguments(arguments: Namespace) -> tuple[bool, str]:
# Handle if session_duration is not the default value or external_id is set
if (
(arguments.session_duration and arguments.session_duration != 3600)
or arguments.external_id
or arguments.role_session_name != ROLE_SESSION_NAME
):
arguments.session_duration and arguments.session_duration != 3600
) or arguments.external_id:
if not arguments.role:
return (
False,
"To use -I/--external-id, -T/--session-duration or --role-session-name options -R/--role option is needed",
)
return (False, "To use -I/-T options -R option is needed")
return (True, "")
def validate_bucket(bucket_name):
"""validate_bucket validates that the input bucket_name is valid"""
if search("(?!(^xn--|.+-s3alias$))^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$", bucket_name):
return bucket_name
else:
raise ArgumentTypeError(
"Bucket name must be valid (https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html)"
)
def validate_role_session_name(session_name):
"""
validates that the role session name is valid
Documentation: https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
"""
if fullmatch("[\w+=,.@-]{2,64}", session_name):
return session_name
else:
raise ArgumentTypeError(
"Role Session Name must be 2-64 characters long and consist only of upper- and lower-case alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@-"
)
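
The `validate_role_session_name` validator in this hunk follows the STS `RoleSessionName` pattern; a standalone sketch of the same regex check:

```python
from argparse import ArgumentTypeError
from re import fullmatch

def validate_role_session_name(session_name: str) -> str:
    # 2-64 characters from [A-Za-z0-9_+=,.@-], per the STS AssumeRole
    # RoleSessionName pattern cited in the diff.
    if fullmatch(r"[\w+=,.@-]{2,64}", session_name):
        return session_name
    raise ArgumentTypeError("invalid role session name")

ok = validate_role_session_name("ProwlerAssessmentSession")
```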


@@ -30,7 +30,6 @@ current_audit_info = AWS_Audit_Info(
session_duration=None,
external_id=None,
mfa_enabled=None,
role_session_name=None,
),
mfa_enabled=None,
audit_resources=None,
@@ -39,5 +38,4 @@ current_audit_info = AWS_Audit_Info(
audit_metadata=None,
audit_config=None,
ignore_unused_services=False,
enabled_regions=set(),
)


@@ -1,4 +1,4 @@
from dataclasses import dataclass, field
from dataclasses import dataclass
from datetime import datetime
from typing import Any, Optional
@@ -20,7 +20,6 @@ class AWS_Assume_Role:
session_duration: int
external_id: str
mfa_enabled: bool
role_session_name: str
@dataclass
@@ -54,4 +53,3 @@ class AWS_Audit_Info:
audit_metadata: Optional[Any] = None
audit_config: Optional[dict] = None
ignore_unused_services: bool = False
enabled_regions: set = field(default_factory=set)


@@ -1,11 +1,8 @@
def is_condition_block_restrictive(
condition_statement: dict, source_account: str, is_cross_account_allowed=False
def is_account_only_allowed_in_condition(
condition_statement: dict, source_account: str
):
"""
is_condition_block_restrictive parses the IAM Condition policy block and, by default, returns True if the source_account passed as argument is within, False if not.
If argument is_cross_account_allowed is True it tests if the Condition block includes any of the operators allowlisted returning True if does, False if not.
is_account_only_allowed_in_condition parses the IAM Condition policy block and returns True if the source_account passed as argument is within, False if not.
@param condition_statement: dict with an IAM Condition block, e.g.:
{
@@ -57,16 +54,13 @@ def is_condition_block_restrictive(
condition_statement[condition_operator][value],
list,
):
# if there is an arn/account without the source account -> we do not consider it safe
# here by default we assume is true and look for false entries
is_condition_key_restrictive = True
# if cross account is not allowed check for each condition block looking for accounts
# different than default
if not is_cross_account_allowed:
# if there is an arn/account without the source account -> we do not consider it safe
# here by default we assume is true and look for false entries
for item in condition_statement[condition_operator][value]:
if source_account not in item:
is_condition_key_restrictive = False
break
for item in condition_statement[condition_operator][value]:
if source_account not in item:
is_condition_key_restrictive = False
break
if is_condition_key_restrictive:
is_condition_valid = True
@@ -76,13 +70,10 @@ def is_condition_block_restrictive(
condition_statement[condition_operator][value],
str,
):
if is_cross_account_allowed:
if (
source_account
in condition_statement[condition_operator][value]
):
is_condition_valid = True
else:
if (
source_account
in condition_statement[condition_operator][value]
):
is_condition_valid = True
return is_condition_valid
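
The core of both versions of this condition parser is the same membership test: a condition value (string or list) restricts access to the source account only when every entry contains that account ID. A minimal sketch:

```python
def values_restrict_to_account(values, source_account: str) -> bool:
    # Normalize the single-string case to a one-element list.
    if isinstance(values, str):
        values = [values]
    # Any ARN/account entry without the source account is not considered safe.
    return all(source_account in item for item in values)

restrictive = values_restrict_to_account(
    ["arn:aws:iam::123456789012:root"], "123456789012"
)
```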


@@ -1,3 +1,5 @@
import sys
from prowler.config.config import (
csv_file_suffix,
html_file_suffix,
@@ -27,7 +29,7 @@ def send_to_s3_bucket(
else: # Compliance output mode
filename = f"{output_filename}_{output_mode}{csv_file_suffix}"
logger.info(f"Sending output file {filename} to S3 bucket {output_bucket_name}")
logger.info(f"Sending outputs to S3 bucket {output_bucket_name}")
# File location
file_name = output_directory + "/" + filename
@@ -39,9 +41,10 @@ def send_to_s3_bucket(
s3_client.upload_file(file_name, output_bucket_name, object_name)
except Exception as error:
logger.error(
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
def get_s3_object_path(output_directory: str) -> str:


@@ -29,9 +29,7 @@ def prepare_security_hub_findings(
continue
# Handle quiet mode
if (
output_options.is_quiet or output_options.send_sh_only_fails
) and finding.status != "FAIL":
if output_options.is_quiet and finding.status != "FAIL":
continue
# Get the finding region


@@ -1,21 +1,17 @@
from concurrent.futures import ThreadPoolExecutor, as_completed
import threading
from prowler.lib.logger import logger
from prowler.providers.aws.aws_provider import (
generate_regional_clients,
get_default_region,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
MAX_WORKERS = 10
class AWSService:
"""The AWSService class offers a parent class for each AWS Service to generate:
- AWS Regional Clients
- Shared information like the account ID and ARN, the the AWS partition and the checks audited
- AWS Session
- Thread pool for the __threading_call__
- Also handles if the AWS Service is Global
"""
@@ -38,7 +34,9 @@ class AWSService:
# Generate Regional Clients
if not global_service:
self.regional_clients = generate_regional_clients(self.service, audit_info)
self.regional_clients = generate_regional_clients(
self.service, audit_info, global_service
)
# Get a single region and client if the service needs it (e.g. AWS Global Service)
# We cannot include this within an else because some services needs both the regional_clients
@@ -46,40 +44,14 @@ class AWSService:
self.region = get_default_region(self.service, audit_info)
self.client = self.session.client(self.service, self.region)
# Thread pool for __threading_call__
self.thread_pool = ThreadPoolExecutor(max_workers=MAX_WORKERS)
def __get_session__(self):
return self.session
def __threading_call__(self, call, iterator=None):
# Use the provided iterator, or default to self.regional_clients
items = iterator if iterator is not None else self.regional_clients.values()
# Determine the total count for logging
item_count = len(items)
# Trim leading and trailing underscores from the call's name
call_name = call.__name__.strip("_")
# Add Capitalization
call_name = " ".join([x.capitalize() for x in call_name.split("_")])
# Print a message based on the call's name, and if its regional or processing a list of items
if iterator is None:
logger.info(
f"{self.service.upper()} - Starting threads for '{call_name}' function across {item_count} regions..."
)
else:
logger.info(
f"{self.service.upper()} - Starting threads for '{call_name}' function to process {item_count} items..."
)
# Submit tasks to the thread pool
futures = [self.thread_pool.submit(call, item) for item in items]
# Wait for all tasks to complete
for future in as_completed(futures):
try:
future.result() # Raises exceptions from the thread, if any
except Exception:
# Handle exceptions if necessary
pass # Replace 'pass' with any additional exception handling logic. Currently handled within the called function
def __threading_call__(self, call):
threads = []
for regional_client in self.regional_clients.values():
threads.append(threading.Thread(target=call, args=(regional_client,)))
for t in threads:
t.start()
for t in threads:
t.join()
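
One side of this hunk replaces raw `threading.Thread` management with a `ThreadPoolExecutor`; the submit/`as_completed` pattern it relies on looks like this (region names illustrative):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

results = []

def process(region: str) -> None:
    # Stand-in for a per-region API call.
    results.append(region.upper())

with ThreadPoolExecutor(max_workers=10) as pool:
    # One task per item; future.result() re-raises any worker exception.
    futures = [pool.submit(process, r) for r in ["eu-west-1", "us-east-1"]]
    for future in as_completed(futures):
        future.result()
```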


@@ -85,36 +85,21 @@ class AccessAnalyzer(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
# TODO: We need to include ListFindingsV2
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/accessanalyzer/client/list_findings_v2.html
def __list_findings__(self):
logger.info("AccessAnalyzer - Listing Findings per Analyzer...")
try:
for analyzer in self.analyzers:
try:
if analyzer.status == "ACTIVE":
regional_client = self.regional_clients[analyzer.region]
list_findings_paginator = regional_client.get_paginator(
"list_findings"
)
for page in list_findings_paginator.paginate(
analyzerArn=analyzer.arn
):
for finding in page["findings"]:
analyzer.findings.append(Finding(id=finding["id"]))
except ClientError as error:
if error.response["Error"]["Code"] == "ValidationException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
if analyzer.status == "ACTIVE":
regional_client = self.regional_clients[analyzer.region]
list_findings_paginator = regional_client.get_paginator(
"list_findings"
)
for page in list_findings_paginator.paginate(
analyzerArn=analyzer.arn
):
for finding in page["findings"]:
analyzer.findings.append(Finding(id=finding["id"]))
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"


@@ -1,7 +1,7 @@
{
"Provider": "aws",
"CheckID": "apigateway_restapi_authorizers_enabled",
"CheckTitle": "Check if API Gateway has configured authorizers at api or method level.",
"CheckTitle": "Check if API Gateway has configured authorizers.",
"CheckAliases": [
"apigateway_authorizers_enabled"
],
@@ -13,7 +13,7 @@
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"Severity": "medium",
"ResourceType": "AwsApiGatewayRestApi",
"Description": "Check if API Gateway has configured authorizers at api or method level.",
"Description": "Check if API Gateway has configured authorizers.",
"Risk": "If no authorizer is enabled anyone can use the service.",
"RelatedUrl": "",
"Remediation": {


@@ -13,41 +13,12 @@ class apigateway_restapi_authorizers_enabled(Check):
report.resource_id = rest_api.name
report.resource_arn = rest_api.arn
report.resource_tags = rest_api.tags
# it there are not authorizers at api level and resources without methods (default case) ->
report.status = "FAIL"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have an authorizer configured at api level."
if rest_api.authorizer:
report.status = "PASS"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has an authorizer configured at api level"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has an authorizer configured."
else:
# we want to know if api has not authorizers and all the resources don't have methods configured
resources_have_methods = False
all_methods_authorized = True
resource_paths_with_unathorized_methods = []
for resource in rest_api.resources:
# if the resource has methods test if they have all configured authorizer
if resource.resource_methods:
resources_have_methods = True
for (
http_method,
authorization_method,
) in resource.resource_methods.items():
if authorization_method == "NONE":
all_methods_authorized = False
unauthorized_method = (
f"{resource.path} -> {http_method}"
)
resource_paths_with_unathorized_methods.append(
unauthorized_method
)
# if there are methods in at least one resource and are all authorized
if all_methods_authorized and resources_have_methods:
report.status = "PASS"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has all methods authorized"
# if there are methods in at least one result but some of then are not authorized-> list it
elif not all_methods_authorized:
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have authorizers at api level and the following paths and methods are unauthorized: {'; '.join(resource_paths_with_unathorized_methods)}."
report.status = "FAIL"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have an authorizer configured."
findings.append(report)
return findings
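
The longer version of this check walks each resource's methods and collects `path -> method` pairs whose authorization type is `NONE`; a condensed sketch of that collection (the resource data is hypothetical):

```python
# Hypothetical resource: HTTP method -> authorizationType, as gathered
# per resource via get_method in the service class.
resource_methods = {"GET": "AWS_IAM", "POST": "NONE"}

def unauthorized_methods(path: str, methods: dict) -> list:
    """List 'path -> method' entries with no authorization configured."""
    return [
        f"{path} -> {http_method}"
        for http_method, auth in methods.items()
        if auth == "NONE"
    ]

failing = unauthorized_methods("/items", resource_methods)
```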


@@ -17,7 +17,6 @@ class APIGateway(AWSService):
self.__get_authorizers__()
self.__get_rest_api__()
self.__get_stages__()
self.__get_resources__()
def __get_rest_apis__(self, regional_client):
logger.info("APIGateway - Getting Rest APIs...")
@@ -54,9 +53,7 @@ class APIGateway(AWSService):
if authorizers:
rest_api.authorizer = True
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
logger.error(f"{error.__class__.__name__}: {error}")
def __get_rest_api__(self):
logger.info("APIGateway - Describing Rest API...")
@@ -67,9 +64,7 @@ class APIGateway(AWSService):
if rest_api_info["endpointConfiguration"]["types"] == ["PRIVATE"]:
rest_api.public_endpoint = False
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
logger.error(f"{error.__class__.__name__}: {error}")
def __get_stages__(self):
logger.info("APIGateway - Getting stages for Rest APIs...")
@@ -100,46 +95,7 @@ class APIGateway(AWSService):
)
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_resources__(self):
logger.info("APIGateway - Getting API resources...")
try:
for rest_api in self.rest_apis:
regional_client = self.regional_clients[rest_api.region]
get_resources_paginator = regional_client.get_paginator("get_resources")
for page in get_resources_paginator.paginate(restApiId=rest_api.id):
for resource in page["items"]:
id = resource["id"]
resource_methods = []
methods_auth = {}
for resource_method in resource.get(
"resourceMethods", {}
).keys():
resource_methods.append(resource_method)
for resource_method in resource_methods:
if resource_method != "OPTIONS":
method_config = regional_client.get_method(
restApiId=rest_api.id,
resourceId=id,
httpMethod=resource_method,
)
auth_type = method_config["authorizationType"]
methods_auth.update({resource_method: auth_type})
rest_api.resources.append(
PathResourceMethods(
path=resource["path"], resource_methods=methods_auth
)
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
logger.error(f"{error.__class__.__name__}: {error}")
class Stage(BaseModel):
@@ -151,11 +107,6 @@ class Stage(BaseModel):
tags: Optional[list] = []
class PathResourceMethods(BaseModel):
path: str
resource_methods: dict
class RestAPI(BaseModel):
id: str
arn: str
@@ -165,4 +116,3 @@ class RestAPI(BaseModel):
public_endpoint: bool = True
stages: list[Stage] = []
tags: Optional[list] = []
resources: list[PathResourceMethods] = []

View File

@@ -14,13 +14,13 @@ class apigatewayv2_api_access_logging_enabled(Check):
if stage.logging:
report.status = "PASS"
report.status_extended = f"API Gateway V2 {api.name} ID {api.id} in stage {stage.name} has access logging enabled."
report.resource_id = f"{api.name}-{stage.name}"
report.resource_id = api.name
report.resource_arn = api.arn
report.resource_tags = api.tags
else:
report.status = "FAIL"
report.status_extended = f"API Gateway V2 {api.name} ID {api.id} in stage {stage.name} has access logging disabled."
report.resource_id = f"{api.name}-{stage.name}"
report.resource_id = api.name
report.resource_arn = api.arn
report.resource_tags = api.tags
findings.append(report)

View File

@@ -11,55 +11,57 @@ from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_
class awslambda_function_no_secrets_in_code(Check):
def execute(self):
findings = []
if awslambda_client.functions:
for function, function_code in awslambda_client.__get_function_code__():
if function_code:
report = Check_Report_AWS(self.metadata())
report.region = function.region
report.resource_id = function.name
report.resource_arn = function.arn
report.resource_tags = function.tags
for function in awslambda_client.functions.values():
if function.code:
report = Check_Report_AWS(self.metadata())
report.region = function.region
report.resource_id = function.name
report.resource_arn = function.arn
report.resource_tags = function.tags
report.status = "PASS"
report.status_extended = (
f"No secrets found in Lambda function {function.name} code."
)
with tempfile.TemporaryDirectory() as tmp_dir_name:
function_code.code_zip.extractall(tmp_dir_name)
# List all files
files_in_zip = next(os.walk(tmp_dir_name))[2]
secrets_findings = []
for file in files_in_zip:
secrets = SecretsCollection()
with default_settings():
secrets.scan_file(f"{tmp_dir_name}/{file}")
detect_secrets_output = secrets.json()
if detect_secrets_output:
for (
file_name
) in (
detect_secrets_output.keys()
): # Appears that only 1 file is being scanned at a time, so could rework this
output_file_name = file_name.replace(
f"{tmp_dir_name}/", ""
)
secrets_string = ", ".join(
[
f"{secret['type']} on line {secret['line_number']}"
for secret in detect_secrets_output[
file_name
]
]
)
secrets_findings.append(
f"{output_file_name}: {secrets_string}"
)
report.status = "PASS"
report.status_extended = (
f"No secrets found in Lambda function {function.name} code."
)
with tempfile.TemporaryDirectory() as tmp_dir_name:
function.code.code_zip.extractall(tmp_dir_name)
# List all files
files_in_zip = next(os.walk(tmp_dir_name))[2]
secrets_findings = []
for file in files_in_zip:
secrets = SecretsCollection()
with default_settings():
secrets.scan_file(f"{tmp_dir_name}/{file}")
detect_secrets_output = secrets.json()
if detect_secrets_output:
for (
file_name
) in (
detect_secrets_output.keys()
): # Appears that only 1 file is being scanned at a time, so could rework this
output_file_name = file_name.replace(
f"{tmp_dir_name}/", ""
)
secrets_string = ", ".join(
[
f"{secret['type']} on line {secret['line_number']}"
for secret in detect_secrets_output[file_name]
]
)
secrets_findings.append(
f"{output_file_name}: {secrets_string}"
)
if secrets_findings:
final_output_string = "; ".join(secrets_findings)
report.status = "FAIL"
report.status_extended = f"Potential {'secrets' if len(secrets_findings) > 1 else 'secret'} found in Lambda function {function.name} code -> {final_output_string}."
if secrets_findings:
final_output_string = "; ".join(secrets_findings)
report.status = "FAIL"
# report.status_extended = f"Potential {'secrets' if len(secrets_findings)>1 else 'secret'} found in Lambda function {function.name} code. {final_output_string}."
if len(secrets_findings) > 1:
report.status_extended = f"Potential secrets found in Lambda function {function.name} code -> {final_output_string}."
else:
report.status_extended = f"Potential secret found in Lambda function {function.name} code -> {final_output_string}."
# break // Don't break as there may be additional findings
findings.append(report)
findings.append(report)
return findings
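The scan loop above (extract the code zip, walk the top-level files, flag secrets per file and line) can be sketched with the standard library alone. This is a simplified stand-in: the real check uses `detect-secrets`' `SecretsCollection`, while here a single hypothetical regex for AWS access key IDs plays that role:

```python
import io
import os
import re
import tempfile
import zipfile

# Hypothetical stand-in for the detect-secrets scan: flag lines that look
# like hard-coded AWS access key IDs inside a Lambda deployment package.
AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")

def scan_zip(zip_bytes):
    findings = []
    with tempfile.TemporaryDirectory() as tmp:
        zipfile.ZipFile(io.BytesIO(zip_bytes)).extractall(tmp)
        # next(os.walk(tmp))[2] lists top-level files only, as in the check
        for name in next(os.walk(tmp))[2]:
            with open(os.path.join(tmp, name)) as f:
                for lineno, line in enumerate(f, 1):
                    if AWS_KEY.search(line):
                        findings.append(f"{name}: line {lineno}")
    return findings
```

An empty result maps to the check's PASS branch; any findings are joined with `"; "` into the FAIL message, as above.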

View File

@@ -1,7 +1,6 @@
import io
import json
import zipfile
from concurrent.futures import as_completed
from enum import Enum
from typing import Any, Optional
@@ -22,6 +21,15 @@ class Lambda(AWSService):
self.functions = {}
self.__threading_call__(self.__list_functions__)
self.__list_tags_for_resource__()
# We only want to retrieve the Lambda code if the
# awslambda_function_no_secrets_in_code check is set
if (
"awslambda_function_no_secrets_in_code"
in audit_info.audit_metadata.expected_checks
):
self.__threading_call__(self.__get_function__)
self.__threading_call__(self.__get_policy__)
self.__threading_call__(self.__get_function_url_config__)
@@ -62,45 +70,28 @@ class Lambda(AWSService):
f" {error}"
)
def __get_function_code__(self):
logger.info("Lambda - Getting Function Code...")
# Use a thread pool to handle the queueing and execution of the __fetch_function_code__ tasks, up to max_workers tasks concurrently.
lambda_functions_to_fetch = {
self.thread_pool.submit(
self.__fetch_function_code__, function.name, function.region
): function
for function in self.functions.values()
}
for fetched_lambda_code in as_completed(lambda_functions_to_fetch):
function = lambda_functions_to_fetch[fetched_lambda_code]
try:
function_code = fetched_lambda_code.result()
if function_code:
yield function, function_code
except Exception as error:
logger.error(
f"{function.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __fetch_function_code__(self, function_name, function_region):
def __get_function__(self, regional_client):
logger.info("Lambda - Getting Function...")
try:
regional_client = self.regional_clients[function_region]
function_information = regional_client.get_function(
FunctionName=function_name
)
if "Location" in function_information["Code"]:
code_location_uri = function_information["Code"]["Location"]
raw_code_zip = requests.get(code_location_uri).content
return LambdaCode(
location=code_location_uri,
code_zip=zipfile.ZipFile(io.BytesIO(raw_code_zip)),
)
for function in self.functions.values():
if function.region == regional_client.region:
function_information = regional_client.get_function(
FunctionName=function.name
)
if "Location" in function_information["Code"]:
code_location_uri = function_information["Code"]["Location"]
raw_code_zip = requests.get(code_location_uri).content
self.functions[function.arn].code = LambdaCode(
location=code_location_uri,
code_zip=zipfile.ZipFile(io.BytesIO(raw_code_zip)),
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
f"{regional_client.region} --"
f" {error.__class__.__name__}[{error.__traceback__.tb_lineno}]:"
f" {error}"
)
raise
def __get_policy__(self, regional_client):
logger.info("Lambda - Getting Policy...")

View File

@@ -140,16 +140,7 @@ class Cloudtrail(AWSService):
error.response["Error"]["Code"]
== "InsightNotEnabledException"
):
logger.warning(
f"{client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
elif (
error.response["Error"]["Code"]
== "UnsupportedOperationException"
):
logger.warning(
f"{client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
continue
else:
logger.error(
f"{client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -23,13 +22,26 @@ class cloudwatch_changes_to_network_acls_alarm_configured(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Collect the CloudWatch log groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings
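The three numbered steps inlined above can be factored into a small standalone helper. This is a sketch under simplifying assumptions (trails, filters, and alarms modeled as plain dicts rather than Prowler's Pydantic models); the log group name is taken from field index 6 of the log group ARN, exactly as the check does:

```python
import re

def check_metric_filter(pattern, trails, metric_filters, alarms):
    # 1. Collect log group names from trails (ARN field index 6 is the name)
    log_groups = [
        t["log_group_arn"].split(":")[6]
        for t in trails
        if t.get("log_group_arn")
    ]
    # 2. Find a metric filter on those groups whose pattern matches
    for mf in metric_filters:
        if mf["log_group"] in log_groups and re.search(
            pattern, mf["pattern"], flags=re.DOTALL
        ):
            # 3. PASS only if an alarm watches the filter's metric
            if any(a["metric"] == mf["metric"] for a in alarms):
                return "PASS"
            return "FAIL"  # filter exists but no alarm is associated
    return "FAIL"  # no matching filter at all
```

The same three-step pattern is repeated in each of the `cloudwatch_*_alarm_configured` and `cloudwatch_log_metric_filter_*` checks in this diff, differing only in the regex `pattern` each check supplies.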

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -23,13 +22,26 @@ class cloudwatch_changes_to_network_gateways_alarm_configured(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Collect the CloudWatch log groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -23,13 +22,26 @@ class cloudwatch_changes_to_network_route_tables_alarm_configured(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Collect the CloudWatch log groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -23,13 +22,26 @@ class cloudwatch_changes_to_vpcs_alarm_configured(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Collect the CloudWatch log groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -25,13 +24,26 @@ class cloudwatch_log_metric_filter_and_alarm_for_aws_config_configuration_change
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Collect the CloudWatch log groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -25,13 +24,26 @@ class cloudwatch_log_metric_filter_and_alarm_for_cloudtrail_configuration_change
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Collect the CloudWatch log groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_authentication_failures(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Collect the CloudWatch log groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_aws_organizations_changes(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Collect the CloudWatch log groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_disable_or_scheduled_deletion_of_kms_cmk(Chec
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Collect the CloudWatch log groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -23,14 +22,26 @@ class cloudwatch_log_metric_filter_for_s3_bucket_policy_changes(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Collect the CloudWatch log groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_policy_changes(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Collect the CloudWatch log groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_root_usage(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Iterate over the CloudWatch Log Groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_security_group_changes(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Iterate over the CloudWatch Log Groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_sign_in_without_mfa(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Iterate over the CloudWatch Log Groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings

View File

@@ -1,3 +1,5 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_unauthorized_api_calls(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
# 1. Iterate over the CloudWatch Log Groups used by CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
findings.append(report)
return findings

View File

@@ -1,34 +0,0 @@
import re
from prowler.lib.check.models import Check_Report_AWS
def check_cloudwatch_log_metric_filter(
metric_filter_pattern: str,
trails: list,
metric_filters: list,
metric_alarms: list,
report: Check_Report_AWS,
):
# 1. Iterate over the CloudWatch Log Groups used by CloudTrail trails
log_groups = []
for trail in trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in metric_filters:
if metric_filter.log_group in log_groups:
if re.search(metric_filter_pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
return report

View File

@@ -1,4 +0,0 @@
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
from prowler.providers.aws.services.cognito.cognito_service import CognitoIDP
cognito_idp_client = CognitoIDP(current_audit_info)

View File

@@ -1,122 +0,0 @@
from datetime import datetime
from typing import Optional
from pydantic import BaseModel
from prowler.lib.logger import logger
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
from prowler.providers.aws.lib.service.service import AWSService
################## CognitoIDP
class CognitoIDP(AWSService):
def __init__(self, audit_info):
super().__init__("cognito-idp", audit_info)
self.user_pools = {}
self.__threading_call__(self.__list_user_pools__)
self.__describe_user_pools__()
self.__get_user_pool_mfa_config__()
def __list_user_pools__(self, regional_client):
logger.info("Cognito - Listing User Pools...")
try:
user_pools_paginator = regional_client.get_paginator("list_user_pools")
for page in user_pools_paginator.paginate(MaxResults=60):
for user_pool in page["UserPools"]:
arn = f"arn:{self.audited_partition}:cognito-idp:{regional_client.region}:{self.audited_account}:userpool/{user_pool['Id']}"
if not self.audit_resources or (
is_resource_filtered(arn, self.audit_resources)
):
try:
self.user_pools[arn] = UserPool(
id=user_pool["Id"],
arn=arn,
name=user_pool["Name"],
region=regional_client.region,
last_modified=user_pool["LastModifiedDate"],
creation_date=user_pool["CreationDate"],
status=user_pool.get("Status", "Disabled"),
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __describe_user_pools__(self):
logger.info("Cognito - Describing User Pools...")
try:
for user_pool in self.user_pools.values():
try:
user_pool_details = self.regional_clients[
user_pool.region
].describe_user_pool(UserPoolId=user_pool.id)["UserPool"]
user_pool.password_policy = user_pool_details.get(
"Policies", {}
).get("PasswordPolicy", {})
user_pool.deletion_protection = user_pool_details.get(
"DeletionProtection", "INACTIVE"
)
user_pool.advanced_security_mode = user_pool_details.get(
"UserPoolAddOns", {}
).get("AdvancedSecurityMode", "OFF")
user_pool.tags = [user_pool_details.get("UserPoolTags", "")]
except Exception as error:
logger.error(
f"{user_pool.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{user_pool.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_user_pool_mfa_config__(self):
logger.info("Cognito - Getting User Pool MFA Configuration...")
try:
for user_pool in self.user_pools.values():
try:
mfa_config = self.regional_clients[
user_pool.region
].get_user_pool_mfa_config(UserPoolId=user_pool.id)
if mfa_config["MfaConfiguration"] != "OFF":
user_pool.mfa_config = MFAConfig(
sms_authentication=mfa_config.get(
"SmsMfaConfiguration", {}
),
software_token_mfa_authentication=mfa_config.get(
"SoftwareTokenMfaConfiguration", {}
),
status=mfa_config["MfaConfiguration"],
)
except Exception as error:
logger.error(
f"{user_pool.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{user_pool.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
class MFAConfig(BaseModel):
sms_authentication: Optional[dict]
software_token_mfa_authentication: Optional[dict]
status: str
class UserPool(BaseModel):
id: str
arn: str
name: str
region: str
advanced_security_mode: str = "OFF"
deletion_protection: str = "INACTIVE"
last_modified: datetime
creation_date: datetime
status: str
password_policy: Optional[dict]
mfa_config: Optional[MFAConfig]
tags: Optional[list] = []

View File

@@ -17,7 +17,7 @@ class EC2(AWSService):
super().__init__(__class__.__name__, audit_info)
self.instances = []
self.__threading_call__(self.__describe_instances__)
self.__threading_call__(self.__get_instance_user_data__, self.instances)
self.__get_instance_user_data__()
self.security_groups = []
self.regions_with_sgs = []
self.__threading_call__(self.__describe_security_groups__)
@@ -27,7 +27,7 @@ class EC2(AWSService):
self.volumes_with_snapshots = {}
self.regions_with_snapshots = {}
self.__threading_call__(self.__describe_snapshots__)
self.__threading_call__(self.__determine_public_snapshots__, self.snapshots)
self.__get_snapshot_public__()
self.network_interfaces = []
self.__threading_call__(self.__describe_public_network_interfaces__)
self.__threading_call__(self.__describe_sg_network_interfaces__)
@@ -36,11 +36,12 @@ class EC2(AWSService):
self.volumes = []
self.__threading_call__(self.__describe_volumes__)
self.ebs_encryption_by_default = []
self.__threading_call__(self.__get_ebs_encryption_settings__)
self.__threading_call__(self.__get_ebs_encryption_by_default__)
self.elastic_ips = []
self.__threading_call__(self.__describe_ec2_addresses__)
self.__threading_call__(self.__describe_addresses__)
def __describe_instances__(self, regional_client):
logger.info("EC2 - Describing EC2 Instances...")
try:
describe_instances_paginator = regional_client.get_paginator(
"describe_instances"
@@ -105,6 +106,7 @@ class EC2(AWSService):
)
def __describe_security_groups__(self, regional_client):
logger.info("EC2 - Describing Security Groups...")
try:
describe_security_groups_paginator = regional_client.get_paginator(
"describe_security_groups"
@@ -153,6 +155,7 @@ class EC2(AWSService):
)
def __describe_network_acls__(self, regional_client):
logger.info("EC2 - Describing Network ACLs...")
try:
describe_network_acls_paginator = regional_client.get_paginator(
"describe_network_acls"
@@ -183,6 +186,7 @@ class EC2(AWSService):
)
def __describe_snapshots__(self, regional_client):
logger.info("EC2 - Describing Snapshots...")
try:
snapshots_in_region = False
describe_snapshots_paginator = regional_client.get_paginator(
@@ -215,30 +219,35 @@ class EC2(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __determine_public_snapshots__(self, snapshot):
try:
regional_client = self.regional_clients[snapshot.region]
snapshot_public = regional_client.describe_snapshot_attribute(
Attribute="createVolumePermission", SnapshotId=snapshot.id
)
for permission in snapshot_public["CreateVolumePermissions"]:
if "Group" in permission:
if permission["Group"] == "all":
snapshot.public = True
except ClientError as error:
if error.response["Error"]["Code"] == "InvalidSnapshot.NotFound":
logger.warning(
f"{snapshot.region} --"
f" {error.__class__.__name__}[{error.__traceback__.tb_lineno}]:"
f" {error}"
def __get_snapshot_public__(self):
logger.info("EC2 - Getting snapshot volume attribute permissions...")
for snapshot in self.snapshots:
try:
regional_client = self.regional_clients[snapshot.region]
snapshot_public = regional_client.describe_snapshot_attribute(
Attribute="createVolumePermission", SnapshotId=snapshot.id
)
for permission in snapshot_public["CreateVolumePermissions"]:
if "Group" in permission:
if permission["Group"] == "all":
snapshot.public = True
except ClientError as error:
if error.response["Error"]["Code"] == "InvalidSnapshot.NotFound":
logger.warning(
f"{snapshot.region} --"
f" {error.__class__.__name__}[{error.__traceback__.tb_lineno}]:"
f" {error}"
)
continue
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __describe_public_network_interfaces__(self, regional_client):
logger.info("EC2 - Describing Network Interfaces...")
try:
# Get Network Interfaces with Public IPs
describe_network_interfaces_paginator = regional_client.get_paginator(
@@ -265,6 +274,7 @@ class EC2(AWSService):
)
def __describe_sg_network_interfaces__(self, regional_client):
logger.info("EC2 - Describing Network Interfaces...")
try:
# Get Network Interfaces for Security Groups
for sg in self.security_groups:
@@ -289,25 +299,30 @@ class EC2(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_instance_user_data__(self, instance):
try:
regional_client = self.regional_clients[instance.region]
user_data = regional_client.describe_instance_attribute(
Attribute="userData", InstanceId=instance.id
)["UserData"]
if "Value" in user_data:
instance.user_data = user_data["Value"]
except ClientError as error:
if error.response["Error"]["Code"] == "InvalidInstanceID.NotFound":
logger.warning(
def __get_instance_user_data__(self):
logger.info("EC2 - Getting instance user data...")
for instance in self.instances:
try:
regional_client = self.regional_clients[instance.region]
user_data = regional_client.describe_instance_attribute(
Attribute="userData", InstanceId=instance.id
)["UserData"]
if "Value" in user_data:
instance.user_data = user_data["Value"]
except ClientError as error:
if error.response["Error"]["Code"] == "InvalidInstanceID.NotFound":
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
continue
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __describe_images__(self, regional_client):
logger.info("EC2 - Describing Images...")
try:
for image in regional_client.describe_images(Owners=["self"])["Images"]:
arn = f"arn:{self.audited_partition}:ec2:{regional_client.region}:{self.audited_account}:image/{image['ImageId']}"
@@ -330,6 +345,7 @@ class EC2(AWSService):
)
def __describe_volumes__(self, regional_client):
logger.info("EC2 - Describing Volumes...")
try:
describe_volumes_paginator = regional_client.get_paginator(
"describe_volumes"
@@ -354,7 +370,8 @@ class EC2(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __describe_ec2_addresses__(self, regional_client):
def __describe_addresses__(self, regional_client):
logger.info("EC2 - Describing Elastic IPs...")
try:
for address in regional_client.describe_addresses()["Addresses"]:
public_ip = None
@@ -385,7 +402,8 @@ class EC2(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_ebs_encryption_settings__(self, regional_client):
def __get_ebs_encryption_by_default__(self, regional_client):
logger.info("EC2 - Get EBS Encryption By Default...")
try:
volumes_in_region = False
for volume in self.volumes:
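The snapshot permission walk in this file boils down to one predicate: a snapshot is public when its `createVolumePermission` attribute contains a `Group` entry equal to `"all"`. A minimal sketch, with the response dict mimicking EC2's `DescribeSnapshotAttribute` output shape:

```python
def is_snapshot_public(attribute_response):
    # A CreateVolumePermissions entry of {"Group": "all"} means anyone can
    # create a volume from the snapshot, i.e. the snapshot is public.
    return any(
        permission.get("Group") == "all"
        for permission in attribute_response.get("CreateVolumePermissions", [])
    )


public = {"CreateVolumePermissions": [{"Group": "all"}]}
private = {"CreateVolumePermissions": [{"UserId": "123456789012"}]}
print(is_snapshot_public(public), is_snapshot_public(private))  # True False
```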

View File

@@ -4,6 +4,7 @@ from pydantic import BaseModel
from prowler.lib.logger import logger
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
from prowler.providers.aws.aws_provider import generate_regional_clients
from prowler.providers.aws.lib.service.service import AWSService
@@ -12,6 +13,7 @@ class EKS(AWSService):
def __init__(self, audit_info):
# Call AWSService's __init__
super().__init__(__class__.__name__, audit_info)
self.regional_clients = generate_regional_clients(self.service, audit_info)
self.clusters = []
self.__threading_call__(self.__list_clusters__)
self.__describe_cluster__(self.regional_clients)

View File

@@ -1,6 +1,5 @@
from typing import Optional
from botocore.exceptions import ClientError
from pydantic import BaseModel
from prowler.lib.logger import logger
@@ -74,15 +73,7 @@ class ElastiCache(AWSService):
cluster.tags = regional_client.list_tags_for_resource(
ResourceName=cluster.arn
)["TagList"]
except ClientError as error:
if error.response["Error"]["Code"] == "CacheClusterNotFound":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File

@@ -33,7 +33,7 @@ class elbv2_insecure_ssl_ciphers(Check):
and listener.ssl_policy not in secure_ssl_policies
):
report.status = "FAIL"
report.status_extended = f"ELBv2 {lb.name} has listeners with insecure SSL protocols or ciphers ({listener.ssl_policy})."
report.status_extended = f"ELBv2 {lb.name} has listeners with insecure SSL protocols or ciphers."
findings.append(report)

View File

@@ -13,24 +13,17 @@ class fms_policy_compliant(Check):
report.status = "PASS"
report.status_extended = "FMS enabled with all compliant accounts."
non_compliant_policy = False
if fms_client.fms_policies:
for policy in fms_client.fms_policies:
for policy_to_account in policy.compliance_status:
if (
policy_to_account.status == "NON_COMPLIANT"
or not policy_to_account.status
):
report.status = "FAIL"
report.status_extended = f"FMS with non-compliant policy {policy.name} for account {policy_to_account.account_id}."
report.resource_id = policy.id
report.resource_arn = policy.arn
non_compliant_policy = True
break
if non_compliant_policy:
for policy in fms_client.fms_policies:
for policy_to_account in policy.compliance_status:
if policy_to_account.status == "NON_COMPLIANT":
report.status = "FAIL"
report.status_extended = f"FMS with non-compliant policy {policy.name} for account {policy_to_account.account_id}."
report.resource_id = policy.id
report.resource_arn = policy.arn
non_compliant_policy = True
break
else:
report.status = "FAIL"
report.status_extended = f"FMS without any compliant policy for account {fms_client.audited_account}."
if non_compliant_policy:
break
findings.append(report)
return findings
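The guarded version of this check treats two situations as FAIL: no FMS policies at all, and a policy-to-account entry whose status is `NON_COMPLIANT` or empty. A simplified sketch of that decision, using plain dicts instead of Prowler's models:

```python
def fms_status(policies, audited_account):
    # No policies at all is itself a failure, not a silent pass.
    if not policies:
        return "FAIL", f"FMS without any compliant policy for account {audited_account}."
    for policy in policies:
        for pta in policy["compliance_status"]:
            # An empty/missing status is treated the same as NON_COMPLIANT.
            if pta["status"] == "NON_COMPLIANT" or not pta["status"]:
                return "FAIL", (
                    f"FMS with non-compliant policy {policy['name']} "
                    f"for account {pta['account_id']}."
                )
    return "PASS", "FMS enabled with all compliant accounts."


print(fms_status([], "123456789012")[0])  # FAIL
```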

View File

@@ -5,6 +5,8 @@ from prowler.lib.logger import logger
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
from prowler.providers.aws.lib.service.service import AWSService
# from prowler.providers.aws.aws_provider import generate_regional_clients
################## FMS
class FMS(AWSService):
@@ -66,19 +68,14 @@ class FMS(AWSService):
for page in list_compliance_status_paginator.paginate(
PolicyId=fms_policy.id
):
for fms_compliance_status in page.get(
"PolicyComplianceStatusList", []
):
compliance_status = ""
if fms_compliance_status.get("EvaluationResults"):
compliance_status = fms_compliance_status.get(
"EvaluationResults"
)[0].get("ComplianceStatus", "")
for fms_compliance_status in page["PolicyComplianceStatusList"]:
fms_policy.compliance_status.append(
PolicyAccountComplianceStatus(
account_id=fms_compliance_status.get("MemberAccount"),
policy_id=fms_compliance_status.get("PolicyId"),
status=compliance_status,
status=fms_compliance_status.get("EvaluationResults")[
0
].get("ComplianceStatus"),
)
)

View File

@@ -1,6 +1,6 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.lib.policy_condition_parser.policy_condition_parser import (
is_condition_block_restrictive,
is_account_only_allowed_in_condition,
)
from prowler.providers.aws.services.iam.iam_client import iam_client
@@ -30,7 +30,7 @@ class iam_role_cross_service_confused_deputy_prevention(Check):
and "Service" in statement["Principal"]
# Check to see if the appropriate condition statements have been implemented
and "Condition" in statement
and is_condition_block_restrictive(
and is_account_only_allowed_in_condition(
statement["Condition"], iam_client.audited_account
)
):

View File

@@ -494,30 +494,11 @@ class IAM(AWSService):
document=inline_group_policy_doc,
)
)
except ClientError as error:
if error.response["Error"]["Code"] == "NoSuchEntity":
logger.warning(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
group.inline_policies = inline_group_policies
except ClientError as error:
if error.response["Error"]["Code"] == "NoSuchEntity":
logger.warning(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(

View File

@@ -48,12 +48,11 @@ class organizations_scp_check_deny_regions(Check):
and "aws:RequestedRegion"
in statement["Condition"]["StringNotEquals"]
):
if all(
region
in statement["Condition"]["StringNotEquals"][
if (
organizations_enabled_regions
== statement["Condition"]["StringNotEquals"][
"aws:RequestedRegion"
]
for region in organizations_enabled_regions
):
# All defined regions are restricted; we can exit here, no need to continue.
report.status = "PASS"
@@ -74,12 +73,11 @@ class organizations_scp_check_deny_regions(Check):
and "aws:RequestedRegion"
in statement["Condition"]["StringEquals"]
):
if all(
region
in statement["Condition"]["StringEquals"][
if (
organizations_enabled_regions
== statement["Condition"]["StringEquals"][
"aws:RequestedRegion"
]
for region in organizations_enabled_regions
):
# All defined regions are restricted; we can exit here, no need to continue.
report.status = "PASS"
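The two hunks above swap between two predicates for the same question. List equality (`==`) only passes when the SCP condition lists exactly the enabled regions, in the same order; the `all(...)` form passes whenever every enabled region appears in the condition, even if the SCP lists extra regions or orders them differently. Illustrative values:

```python
enabled_regions = ["eu-west-1", "us-east-1"]
scp_regions = ["us-east-1", "eu-west-1", "eu-central-1"]  # hypothetical SCP condition values

# Exact list equality fails on ordering and on extra denied regions.
exact = enabled_regions == scp_regions
# Membership check only asks: is every enabled region covered by the SCP?
covers_all = all(region in scp_regions for region in enabled_regions)
print(exact, covers_all)  # False True
```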

View File

@@ -232,15 +232,7 @@ class RDS(AWSService):
for att in response["DBClusterSnapshotAttributes"]:
if "all" in att["AttributeValues"]:
snapshot.public = True
except ClientError as error:
if error.response["Error"]["Code"] == "DBClusterSnapshotNotFoundFault":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File

@@ -28,7 +28,6 @@ class S3(AWSService):
self.__threading_call__(self.__get_bucket_tagging__)
# In the S3 service we override the "__threading_call__" method because we spawn a thread per bucket instead of per region
# TODO: Replace the above function with the service __threading_call__ using the buckets as the iterator
def __threading_call__(self, call):
threads = []
for bucket in self.buckets:
@@ -102,15 +101,6 @@ class S3(AWSService):
if "MFADelete" in bucket_versioning:
if "Enabled" == bucket_versioning["MFADelete"]:
bucket.mfa_delete = True
except ClientError as error:
if error.response["Error"]["Code"] == "NoSuchBucket":
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
if bucket.region:
logger.error(
@@ -163,15 +153,6 @@ class S3(AWSService):
bucket.logging_target_bucket = bucket_logging["LoggingEnabled"][
"TargetBucket"
]
except ClientError as error:
if error.response["Error"]["Code"] == "NoSuchBucket":
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
if regional_client:
logger.error(
@@ -243,15 +224,6 @@ class S3(AWSService):
grantee.permission = grant["Permission"]
grantees.append(grantee)
bucket.acl_grantees = grantees
except ClientError as error:
if error.response["Error"]["Code"] == "NoSuchBucket":
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
if regional_client:
logger.error(
@@ -269,26 +241,18 @@ class S3(AWSService):
bucket.policy = json.loads(
regional_client.get_bucket_policy(Bucket=bucket.name)["Policy"]
)
except ClientError as error:
if error.response["Error"]["Code"] == "NoSuchBucketPolicy":
bucket.policy = {}
elif error.response["Error"]["Code"] == "NoSuchBucket":
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
if regional_client:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
if "NoSuchBucketPolicy" in str(error):
bucket.policy = {}
else:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
if regional_client:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_bucket_ownership_controls__(self, bucket):
logger.info("S3 - Get buckets ownership controls...")

View File

@@ -1,6 +1,6 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.lib.policy_condition_parser.policy_condition_parser import (
is_condition_block_restrictive,
is_account_only_allowed_in_condition,
)
from prowler.providers.aws.services.sns.sns_client import sns_client
@@ -35,7 +35,7 @@ class sns_topics_not_publicly_accessible(Check):
):
if (
"Condition" in statement
and is_condition_block_restrictive(
and is_account_only_allowed_in_condition(
statement["Condition"], sns_client.audited_account
)
):

View File

@@ -1,6 +1,6 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.lib.policy_condition_parser.policy_condition_parser import (
is_condition_block_restrictive,
is_account_only_allowed_in_condition,
)
from prowler.providers.aws.services.sqs.sqs_client import sqs_client
@@ -32,10 +32,8 @@ class sqs_queues_not_publicly_accessible(Check):
)
):
if "Condition" in statement:
if is_condition_block_restrictive(
statement["Condition"],
sqs_client.audited_account,
True,
if is_account_only_allowed_in_condition(
statement["Condition"], sqs_client.audited_account
):
report.status_extended = f"SQS queue {queue.id} is not public because its policy only allows access from the same account."
else:
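The renamed helper used throughout these hunks answers one question: does the statement's `Condition` block pin access to the audited account? A purely illustrative sketch (not Prowler's real implementation, which handles more operators and condition keys):

```python
ACCOUNT_CONDITION_KEYS = {"aws:sourceaccount", "aws:sourceowner"}


def allows_only_account(condition, audited_account):
    # Restrictive only if a source-account style key equals exactly the
    # audited account; anything broader is treated as not restrictive.
    for operator, key_values in condition.items():
        if operator != "StringEquals":
            continue
        for key, value in key_values.items():
            values = value if isinstance(value, list) else [value]
            if key.lower() in ACCOUNT_CONDITION_KEYS and values == [audited_account]:
                return True
    return False


cond = {"StringEquals": {"aws:SourceAccount": "123456789012"}}
print(allows_only_account(cond, "123456789012"))  # True
print(allows_only_account(cond, "999999999999"))  # False
```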

View File

@@ -7,7 +7,7 @@
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:iam::AWS_ACCOUNT_NUMBER:root",
"Severity": "low",
"ResourceType": "Other",
"ResourceType": "",
"Description": "Check if a Premium support plan is subscribed.",
"Risk": "Ensure that the appropriate support level is enabled for the necessary AWS accounts. For example, if an AWS account is being used to host production systems and environments, it is highly recommended that the minimum AWS Support Plan should be Business.",
"RelatedUrl": "https://aws.amazon.com/premiumsupport/plans/",

View File

@@ -16,10 +16,10 @@ class vpc_different_regions(Check):
report.resource_id = vpc_client.audited_account
report.resource_arn = vpc_client.audited_account_arn
report.status = "FAIL"
report.status_extended = "VPCs found only in one region."
if len(vpc_regions) > 1:
if len(vpc_regions) == 1:
report.status = "FAIL"
report.status_extended = "VPCs found only in one region."
else:
report.status = "PASS"
report.status_extended = "VPCs found in more than one region."
findings.append(report)
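The `fix(vpc_different_regions)` commit in this compare is about the zero-VPC case: branching on `len(vpc_regions) == 1` sends an account with no VPCs at all into the `else` arm and passes it. Defaulting the status to FAIL and upgrading to PASS only when more than one region has VPCs covers both the 0-region and 1-region cases. A sketch of just that decision:

```python
def check_status(vpc_regions):
    # Default FAIL covers both "no VPCs anywhere" and "VPCs in one region".
    status = "FAIL"
    if len(vpc_regions) > 1:
        status = "PASS"
    return status


print(check_status(set()))                       # FAIL (the previously mishandled case)
print(check_status({"us-east-1"}))               # FAIL
print(check_status({"us-east-1", "eu-west-1"}))  # PASS
```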

View File

@@ -2,7 +2,7 @@ from re import compile
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.lib.policy_condition_parser.policy_condition_parser import (
is_condition_block_restrictive,
is_account_only_allowed_in_condition,
)
from prowler.providers.aws.services.vpc.vpc_client import vpc_client
@@ -35,7 +35,7 @@ class vpc_endpoint_connections_trust_boundaries(Check):
if "Condition" in statement:
for account_id in trusted_account_ids:
if is_condition_block_restrictive(
if is_account_only_allowed_in_condition(
statement["Condition"], account_id
):
access_from_trusted_accounts = True
@@ -70,7 +70,7 @@ class vpc_endpoint_connections_trust_boundaries(Check):
access_from_trusted_accounts = False
if "Condition" in statement:
for account_id in trusted_account_ids:
if is_condition_block_restrictive(
if is_account_only_allowed_in_condition(
statement["Condition"], account_id
):
access_from_trusted_accounts = True
@@ -102,7 +102,7 @@ class vpc_endpoint_connections_trust_boundaries(Check):
if "Condition" in statement:
for account_id in trusted_account_ids:
if is_condition_block_restrictive(
if is_account_only_allowed_in_condition(
statement["Condition"], account_id
):
access_from_trusted_accounts = True


@@ -8,7 +8,6 @@ from prowler.lib.logger import logger
from prowler.providers.aws.aws_provider import (
AWS_Provider,
assume_role,
get_aws_enabled_regions,
get_checks_from_input_arn,
get_regions_from_audit_resources,
)
@@ -63,7 +62,7 @@ GCP Account: {Fore.YELLOW}[{profile}]{Style.RESET_ALL} GCP Project IDs: {Fore.Y
def print_azure_credentials(self, audit_info: Azure_Audit_Info):
printed_subscriptions = []
for key, value in audit_info.identity.subscriptions.items():
intermediate = f"{key} : {value}"
intermediate = key + " : " + value
printed_subscriptions.append(intermediate)
report = f"""
This report is being generated using the identity below:
@@ -85,7 +84,6 @@ Azure Identity Type: {Fore.YELLOW}[{audit_info.identity.identity_type}]{Style.RE
current_audit_info.assumed_role_info.role_arn = input_role
input_session_duration = arguments.get("session_duration")
input_external_id = arguments.get("external_id")
input_role_session_name = arguments.get("role_session_name")
# STS Endpoint Region
sts_endpoint_region = arguments.get("sts_endpoint_region")
@@ -154,9 +152,6 @@ Azure Identity Type: {Fore.YELLOW}[{audit_info.identity.identity_type}]{Style.RE
)
current_audit_info.assumed_role_info.external_id = input_external_id
current_audit_info.assumed_role_info.mfa_enabled = input_mfa
current_audit_info.assumed_role_info.role_session_name = (
input_role_session_name
)
# Check if role arn is valid
try:
@@ -262,9 +257,6 @@ Azure Identity Type: {Fore.YELLOW}[{audit_info.identity.identity_type}]{Style.RE
if arguments.get("resource_arn"):
current_audit_info.audit_resources = arguments.get("resource_arn")
# Get Enabled Regions
current_audit_info.enabled_regions = get_aws_enabled_regions(current_audit_info)
return current_audit_info
def set_aws_execution_parameters(self, provider, audit_info) -> list[str]:


@@ -0,0 +1,32 @@
import importlib
import sys
from shutil import rmtree
from prowler.config.config import default_output_directory
from prowler.lib.logger import logger
def clean_provider_local_output_directories(args):
"""
clean_provider_local_output_directories deletes the output files generated locally in custom directories when the output is sent to a remote storage provider
"""
try:
# import provider cleaning function
provider_clean_function = f"clean_{args.provider}_local_output_directories"
getattr(importlib.import_module(__name__), provider_clean_function)(args)
except AttributeError as attribute_exception:
logger.info(
f"Cleaning local output directories not initialized for provider {args.provider}: {attribute_exception}"
)
except Exception as error:
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
sys.exit(1)
def clean_aws_local_output_directories(args):
"""clean_aws_local_output_directories deletes the output files generated locally in custom directories when output is sent to a remote storage provider for AWS"""
if args.output_bucket or args.output_bucket_no_assume:
if args.output_directory != default_output_directory:
rmtree(args.output_directory)
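The cleanup helper above resolves a per-provider function by name at runtime via `importlib.import_module(__name__)` plus `getattr`, falling back gracefully when a provider has no cleanup hook. The pattern generalizes; a minimal self-contained sketch (using `globals()` as a stand-in for the module lookup, with an illustrative function name):

```python
def dispatch_by_provider(provider: str):
    """Look up a f"clean_{provider}_..." function dynamically, mirroring
    how the helper above resolves a per-provider cleanup function."""
    function_name = f"clean_{provider}_local_output_directories_demo"
    handler = globals().get(function_name)  # stand-in for getattr(module, name)
    if handler is None:
        return None  # no cleanup hook registered for this provider
    return handler()

def clean_aws_local_output_directories_demo():
    return "aws output cleaned"

print(dispatch_by_provider("aws"))  # aws output cleaned
print(dispatch_by_provider("gcp"))  # None
```

Returning `None` here plays the role of the `AttributeError` branch in the original, which only logs at info level rather than failing the run.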


@@ -69,8 +69,7 @@ class Provider_Output_Options:
if arguments.output_directory:
if not isdir(arguments.output_directory):
if arguments.output_modes:
# exist_ok is set to True not to raise FileExistsError
makedirs(arguments.output_directory, exist_ok=True)
makedirs(arguments.output_directory)
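The `exist_ok=True` change in this hunk makes directory creation idempotent: plain `makedirs` raises `FileExistsError` if the directory appears between the `isdir` check and the call (or on a repeated run). A small demonstration:

```python
import os
import tempfile

def safe_makedirs(path: str) -> None:
    # exist_ok=True avoids FileExistsError when the directory already
    # exists, closing the race between an isdir() check and makedirs()
    os.makedirs(path, exist_ok=True)

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "output")
    safe_makedirs(target)
    safe_makedirs(target)  # second call is a no-op, no FileExistsError
    print(os.path.isdir(target))  # True
```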
class Azure_Output_Options(Provider_Output_Options):
@@ -135,7 +134,6 @@ class Aws_Output_Options(Provider_Output_Options):
# Security Hub Outputs
self.security_hub_enabled = arguments.security_hub
self.send_sh_only_fails = arguments.send_sh_only_fails
if arguments.security_hub:
if not self.output_modes:
self.output_modes = ["json-asff"]


@@ -1,7 +1,6 @@
import os
import sys
from colorama import Fore, Style
from google import auth
from googleapiclient import discovery
@@ -90,7 +89,4 @@ class GCP_Provider:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
print(
f"\n{Fore.YELLOW}Cloud Resource Manager API {Style.RESET_ALL}has not been used before or it is disabled.\nEnable it by visiting https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/ then retry."
)
return []


@@ -27,7 +27,7 @@ class GCPService:
self.default_project_id = audit_info.default_project_id
self.region = region
self.client = self.__generate_client__(
self.service, api_version, audit_info.credentials
service, api_version, audit_info.credentials
)
# Only project ids that have their API enabled will be scanned
self.project_ids = self.__is_api_active__(audit_info.project_ids)
@@ -62,7 +62,7 @@ class GCPService:
project_ids.append(project_id)
else:
print(
f"\n{Fore.YELLOW}{self.service} API {Style.RESET_ALL}has not been used in project {project_id} before or it is disabled.\nEnable it by visiting https://console.developers.google.com/apis/api/{self.service}.googleapis.com/overview?project={project_id} then retry."
f"\n{Fore.YELLOW}{self.service} API {Style.RESET_ALL}has not been used in project {project_id} before or it is disabled.\nEnable it by visiting https://console.developers.google.com/apis/api/dataproc.googleapis.com/overview?project={project_id} then retry."
)
except Exception as error:
logger.error(
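Both hunks in this file replace values fixed at one point (a local `service` argument, a hardcoded `dataproc` API name in the enable-API hint) with the instance attribute `self.service`, so the generated client and the printed URL always refer to the same API. A generic sketch of the pitfall and the fix (class and method names are illustrative, not Prowler's actual `GCPService`):

```python
class ApiService:
    def __init__(self, service: str):
        # Normalize once, then use the attribute everywhere so the client
        # and any user-facing messages cannot drift apart
        self.service = service.lower()
        self.client = self._generate_client(self.service)

    def _generate_client(self, service: str) -> str:
        return f"client-for-{service}"

    def enable_hint(self, project_id: str) -> str:
        # Using self.service keeps the URL correct for every service,
        # instead of hardcoding a single API such as dataproc
        return (
            f"https://console.developers.google.com/apis/api/"
            f"{self.service}.googleapis.com/overview?project={project_id}"
        )

svc = ApiService("Compute")
print(svc.client)                      # client-for-compute
print(svc.enable_hint("my-project"))
```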


@@ -22,7 +22,7 @@ packages = [
{include = "prowler"}
]
readme = "README.md"
version = "3.12.1"
version = "3.11.3"
[tool.poetry.dependencies]
alive-progress = "3.1.5"
@@ -38,36 +38,35 @@ boto3 = "1.26.165"
botocore = "1.29.165"
colorama = "0.4.6"
detect-secrets = "1.4.0"
google-api-python-client = "2.113.0"
google-auth-httplib2 = ">=0.1,<0.3"
jsonschema = "4.20.0"
google-api-python-client = "2.108.0"
google-auth-httplib2 = "^0.1.0"
jsonschema = "4.18.0"
mkdocs = {version = "1.5.3", optional = true}
mkdocs-material = {version = "9.5.3", optional = true}
mkdocs-material = {version = "9.4.14", optional = true}
msgraph-core = "0.2.2"
msrestazure = "^0.6.4"
pydantic = "1.10.13"
python = ">=3.9,<3.12"
schema = "0.7.5"
shodan = "1.31.0"
slack-sdk = "3.26.1"
shodan = "1.30.1"
slack-sdk = "3.26.0"
tabulate = "0.9.0"
[tool.poetry.extras]
docs = ["mkdocs", "mkdocs-material"]
[tool.poetry.group.dev.dependencies]
bandit = "1.7.6"
bandit = "1.7.5"
black = "22.12.0"
coverage = "7.4.0"
docker = "7.0.0"
flake8 = "7.0.0"
freezegun = "1.4.0"
coverage = "7.3.2"
docker = "6.1.3"
flake8 = "6.1.0"
freezegun = "1.2.2"
mock = "5.1.0"
moto = {extras = ["all"], version = "4.2.13"}
moto = {extras = ["all"], version = "4.2.10"}
openapi-spec-validator = "0.7.1"
openapi-schema-validator = "0.6.2"
pylint = "3.0.3"
pytest = "7.4.4"
pylint = "3.0.2"
pytest = "7.4.3"
pytest-cov = "4.1.0"
pytest-randomly = "3.15.0"
pytest-xdist = "3.5.0"


@@ -54,7 +54,7 @@ config_aws = {
class Test_Config:
def test_get_aws_available_regions(self):
assert len(get_aws_available_regions()) == 33
assert len(get_aws_available_regions()) == 32
@mock.patch(
"prowler.config.config.requests.get", new=mock_prowler_get_latest_release


@@ -1,319 +0,0 @@
from mock import patch
from prowler.lib.check.checks_loader import (
load_checks_to_execute,
update_checks_to_execute_with_aliases,
)
from prowler.lib.check.models import (
Check_Metadata_Model,
Code,
Recommendation,
Remediation,
)
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME = "s3_bucket_level_public_access_block"
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME_CUSTOM_ALIAS = (
"s3_bucket_level_public_access_block"
)
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_SEVERITY = "medium"
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME_SERVICE = "s3"
class TestCheckLoader:
provider = "aws"
def get_custom_check_metadata(self):
return Check_Metadata_Model(
Provider="aws",
CheckID=S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME,
CheckTitle="Check S3 Bucket Level Public Access Block.",
CheckType=["Data Protection"],
CheckAliases=[S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME_CUSTOM_ALIAS],
ServiceName=S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME_SERVICE,
SubServiceName="",
ResourceIdTemplate="arn:partition:s3:::bucket_name",
Severity=S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_SEVERITY,
ResourceType="AwsS3Bucket",
Description="Check S3 Bucket Level Public Access Block.",
Risk="Public access policies may be applied to sensitive data buckets.",
RelatedUrl="https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html",
Remediation=Remediation(
Code=Code(
NativeIaC="",
Terraform="https://docs.bridgecrew.io/docs/bc_aws_s3_20#terraform",
CLI="aws s3api put-public-access-block --region <REGION_NAME> --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true --bucket <BUCKET_NAME>",
Other="https://github.com/cloudmatos/matos/tree/master/remediations/aws/s3/s3/block-public-access",
),
Recommendation=Recommendation(
Text="You can enable Public Access Block at the bucket level to prevent the exposure of your data stored in S3.",
Url="https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html",
),
),
Categories=["internet-exposed"],
DependsOn=[],
RelatedTo=[],
Notes="",
Compliance=[],
)
def test_load_checks_to_execute(self):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = None
service_list = None
severities = None
compliance_frameworks = None
categories = None
with patch(
"prowler.lib.check.checks_loader.recover_checks_from_provider",
return_value=[
(
f"{S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME}",
"path/to/{S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME}",
)
],
):
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_check_list(self):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = [S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME]
service_list = None
severities = None
compliance_frameworks = None
categories = None
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_severities(self):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = []
service_list = None
severities = [S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_SEVERITY]
compliance_frameworks = None
categories = None
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_severities_and_services(self):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = []
service_list = [S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME_SERVICE]
severities = [S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_SEVERITY]
compliance_frameworks = None
categories = None
with patch(
"prowler.lib.check.checks_loader.recover_checks_from_service",
return_value={S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME},
):
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_severities_and_services_not_within_severity(
self,
):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = []
service_list = ["ec2"]
severities = [S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_SEVERITY]
compliance_frameworks = None
categories = None
with patch(
"prowler.lib.check.checks_loader.recover_checks_from_service",
return_value={"ec2_ami_public"},
):
assert set() == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_checks_file(
self,
):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = "path/to/test_file"
check_list = []
service_list = []
severities = []
compliance_frameworks = None
categories = None
with patch(
"prowler.lib.check.checks_loader.parse_checks_from_file",
return_value={S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME},
):
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_service_list(
self,
):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = []
service_list = [S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME_SERVICE]
severities = []
compliance_frameworks = None
categories = None
with patch(
"prowler.lib.check.checks_loader.recover_checks_from_service",
return_value={S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME},
):
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_compliance_frameworks(
self,
):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = []
service_list = []
severities = []
compliance_frameworks = ["test-compliance-framework"]
categories = None
with patch(
"prowler.lib.check.checks_loader.parse_checks_from_compliance_framework",
return_value={S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME},
):
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_categories(
self,
):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = []
service_list = []
severities = []
compliance_frameworks = []
categories = {"internet-exposed"}
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_update_checks_to_execute_with_aliases(self):
checks_to_execute = {"renamed_check"}
check_aliases = {"renamed_check": "check_name"}
assert {"check_name"} == update_checks_to_execute_with_aliases(
checks_to_execute, check_aliases
)


@@ -3,7 +3,7 @@ import pathlib
from importlib.machinery import FileFinder
from pkgutil import ModuleInfo
from boto3 import client
from boto3 import client, session
from fixtures.bulk_checks_metadata import test_bulk_checks_metadata
from mock import patch
from moto import mock_s3
@@ -27,7 +27,8 @@ from prowler.providers.aws.aws_provider import (
get_checks_from_input_arn,
get_regions_from_audit_resources,
)
from tests.providers.aws.audit_info_utils import set_mocked_aws_audit_info
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_REGION = "us-east-1"
@@ -256,11 +257,37 @@ def mock_recover_checks_from_aws_provider_rds_service(*_):
]
def mock_recover_checks_from_aws_provider_cognito_service(*_):
return []
class Test_Check:
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
def test_load_check_metadata(self):
test_cases = [
{
@@ -336,7 +363,7 @@ class Test_Check:
provider = test["input"]["provider"]
assert (
parse_checks_from_folder(
set_mocked_aws_audit_info(), check_folder, provider
self.set_mocked_audit_info(), check_folder, provider
)
== test["expected"]
)
@@ -569,19 +596,6 @@ class Test_Check:
recovered_checks = get_checks_from_input_arn(audit_resources, provider)
assert recovered_checks == expected_checks
@patch(
"prowler.lib.check.check.recover_checks_from_provider",
new=mock_recover_checks_from_aws_provider_cognito_service,
)
def test_get_checks_from_input_arn_cognito(self):
audit_resources = [
f"arn:aws:cognito-idp:us-east-1:{AWS_ACCOUNT_NUMBER}:userpool/test"
]
provider = "aws"
expected_checks = []
recovered_checks = get_checks_from_input_arn(audit_resources, provider)
assert recovered_checks == expected_checks
@patch(
"prowler.lib.check.check.recover_checks_from_provider",
new=mock_recover_checks_from_aws_provider_ec2_service,


@@ -5,11 +5,6 @@ import pytest
from mock import patch
from prowler.lib.cli.parser import ProwlerArgumentParser
from prowler.providers.aws.config import ROLE_SESSION_NAME
from prowler.providers.aws.lib.arguments.arguments import (
validate_bucket,
validate_role_session_name,
)
from prowler.providers.azure.lib.arguments.arguments import validate_azure_region
prowler_command = "prowler"
@@ -744,7 +739,7 @@ class Test_Parser:
assert wrapped_exit.value.code == 2
assert (
capsys.readouterr().err
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/--external-id, -T/--session-duration or --role-session-name options -R/--role option is needed\n"
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/-T options -R option is needed\n"
)
def test_aws_parser_session_duration_long(self, capsys):
@@ -757,7 +752,7 @@ class Test_Parser:
assert wrapped_exit.value.code == 2
assert (
capsys.readouterr().err
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/--external-id, -T/--session-duration or --role-session-name options -R/--role option is needed\n"
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/-T options -R option is needed\n"
)
# TODO
@@ -778,7 +773,7 @@ class Test_Parser:
assert wrapped_exit.value.code == 2
assert (
capsys.readouterr().err
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/--external-id, -T/--session-duration or --role-session-name options -R/--role option is needed\n"
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/-T options -R option is needed\n"
)
def test_aws_parser_external_id_long(self, capsys):
@@ -791,7 +786,7 @@ class Test_Parser:
assert wrapped_exit.value.code == 2
assert (
capsys.readouterr().err
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/--external-id, -T/--session-duration or --role-session-name options -R/--role option is needed\n"
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/-T options -R option is needed\n"
)
def test_aws_parser_region_f(self):
@@ -887,12 +882,6 @@ class Test_Parser:
parsed = self.parser.parse(command)
assert parsed.skip_sh_update
def test_aws_parser_send_only_fail(self):
argument = "--send-sh-only-fails"
command = [prowler_command, argument]
parsed = self.parser.parse(command)
assert parsed.send_sh_only_fails
def test_aws_parser_quick_inventory_short(self):
argument = "-i"
command = [prowler_command, argument]
@@ -1016,13 +1005,6 @@ class Test_Parser:
parsed = self.parser.parse(command)
assert parsed.sts_endpoint_region == sts_endpoint_region
def test_aws_parser_role_session_name(self):
argument = "--role-session-name"
role_session_name = ROLE_SESSION_NAME
command = [prowler_command, argument, role_session_name]
parsed = self.parser.parse(command)
assert parsed.role_session_name == role_session_name
def test_parser_azure_auth_sp(self):
argument = "--sp-env-auth"
command = [prowler_command, "azure", argument]
@@ -1150,50 +1132,3 @@ class Test_Parser:
match=f"Region {invalid_region} not allowed, allowed regions are {' '.join(expected_regions)}",
):
validate_azure_region(invalid_region)
def test_validate_bucket_invalid_bucket_names(self):
bad_bucket_names = [
"xn--bucket-name",
"mrryadfpcwlscicvnrchmtmyhwrvzkgfgdxnlnvaaummnywciixnzvycnzmhhpwb",
"192.168.5.4",
"bucket-name-s3alias",
"bucket-name-s3alias-",
"bucket-n$ame",
"bu",
]
for bucket_name in bad_bucket_names:
with pytest.raises(ArgumentTypeError) as argument_error:
validate_bucket(bucket_name)
assert argument_error.type == ArgumentTypeError
assert (
argument_error.value.args[0]
== "Bucket name must be valid (https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html)"
)
def test_validate_bucket_valid_bucket_names(self):
valid_bucket_names = ["bucket-name" "test" "test-test-test"]
for bucket_name in valid_bucket_names:
assert validate_bucket(bucket_name) == bucket_name
def test_validate_role_session_name_invalid_role_names(self):
bad_role_names = [
"role name",
"adasD*",
"test#",
"role-name?",
]
for role_name in bad_role_names:
with pytest.raises(ArgumentTypeError) as argument_error:
validate_role_session_name(role_name)
assert argument_error.type == ArgumentTypeError
assert (
argument_error.value.args[0]
== "Role Session Name must be 2-64 characters long and consist only of upper- and lower-case alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@-"
)
def test_validate_role_session_name_valid_role_names(self):
valid_role_names = ["prowler-role" "test@" "test=test+test,."]
for role_name in valid_role_names:
assert validate_role_session_name(role_name) == role_name
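The removed tests encode the validation rules in their error messages: bucket names must satisfy AWS's S3 bucket naming rules (3-63 lowercase characters, no IP addresses, no reserved `xn--` prefix or `-s3alias` suffix), and role session names must be 2-64 characters of upper/lowercase alphanumerics with no spaces, optionally including `=,.@_-`. A sketch of validators matching those stated rules (the regexes are an assumption; the real `validate_bucket` and `validate_role_session_name` in prowler may be implemented differently):

```python
import re
from argparse import ArgumentTypeError

# Assumed patterns derived from the rules quoted in the tests' error messages
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")
ROLE_SESSION_NAME_RE = re.compile(r"^[\w=,.@-]{2,64}$")

def validate_bucket(name: str) -> str:
    # Partial S3 rules: 3-63 chars, lowercase/digits/dots/hyphens,
    # not an IP address, no "xn--" prefix, no "-s3alias" suffix
    if (
        not BUCKET_NAME_RE.match(name)
        or name.startswith("xn--")
        or name.endswith("-s3alias")
        or re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name)
    ):
        raise ArgumentTypeError("Bucket name must be valid")
    return name

def validate_role_session_name(name: str) -> str:
    # 2-64 chars, alphanumerics/underscore plus =,.@- and no spaces
    if not ROLE_SESSION_NAME_RE.match(name):
        raise ArgumentTypeError("Role Session Name must be 2-64 characters")
    return name

print(validate_bucket("bucket-name"))              # bucket-name
print(validate_role_session_name("prowler-role"))  # prowler-role
```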


@@ -1,43 +1,15 @@
from boto3 import session
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
# Root AWS Account
AWS_REGION_US_EAST_1 = "us-east-1"
AWS_REGION_EU_WEST_1 = "eu-west-1"
AWS_REGION_EU_WEST_2 = "eu-west-2"
AWS_PARTITION = "aws"
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_ACCOUNT_ARN = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"
# Commercial Regions
AWS_REGION_US_EAST_1 = "us-east-1"
AWS_REGION_US_EAST_1_AZA = "us-east-1a"
AWS_REGION_US_EAST_1_AZB = "us-east-1b"
AWS_REGION_EU_WEST_1 = "eu-west-1"
AWS_REGION_EU_WEST_1_AZA = "eu-west-1a"
AWS_REGION_EU_WEST_1_AZB = "eu-west-1b"
AWS_REGION_EU_WEST_2 = "eu-west-2"
AWS_REGION_CN_NORTHWEST_1 = "cn-northwest-1"
AWS_REGION_CN_NORTH_1 = "cn-north-1"
AWS_REGION_EU_SOUTH_2 = "eu-south-2"
AWS_REGION_EU_SOUTH_3 = "eu-south-3"
AWS_REGION_US_WEST_2 = "us-west-2"
AWS_REGION_US_EAST_2 = "us-east-2"
AWS_REGION_EU_CENTRAL_1 = "eu-central-1"
# China Regions
AWS_REGION_CHINA_NORHT_1 = "cn-north-1"
# Gov Cloud Regions
AWS_REGION_GOV_CLOUD_US_EAST_1 = "us-gov-east-1"
# Iso Regions
AWS_REGION_ISO_GLOBAL = "aws-iso-global"
# AWS Partitions
AWS_COMMERCIAL_PARTITION = "aws"
AWS_GOV_CLOUD_PARTITION = "aws-us-gov"
AWS_CHINA_PARTITION = "aws-cn"
AWS_ISO_PARTITION = "aws-iso"
# Mocked AWS Audit Info
@@ -45,44 +17,32 @@ def set_mocked_aws_audit_info(
audited_regions: [str] = [],
audited_account: str = AWS_ACCOUNT_NUMBER,
audited_account_arn: str = AWS_ACCOUNT_ARN,
audited_partition: str = AWS_COMMERCIAL_PARTITION,
expected_checks: [str] = [],
profile_region: str = None,
audit_config: dict = {},
ignore_unused_services: bool = False,
assumed_role_info: AWS_Assume_Role = None,
audit_session: session.Session = session.Session(
profile_name=None,
botocore_session=None,
),
original_session: session.Session = None,
enabled_regions: set = None,
):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=original_session,
audit_session=audit_session,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=audited_account,
audited_account_arn=audited_account_arn,
audited_user_id=None,
audited_partition=audited_partition,
audited_partition=AWS_PARTITION,
audited_identity_arn=None,
profile=None,
profile_region=profile_region,
profile_region=None,
credentials=None,
assumed_role_info=assumed_role_info,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=[],
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=expected_checks,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
audit_config=audit_config,
ignore_unused_services=ignore_unused_services,
enabled_regions=enabled_regions if enabled_regions else set(audited_regions),
)
return audit_info


@@ -12,29 +12,21 @@ from prowler.providers.aws.aws_provider import (
get_default_region,
get_global_region,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role
from tests.providers.aws.audit_info_utils import (
AWS_ACCOUNT_NUMBER,
AWS_CHINA_PARTITION,
AWS_GOV_CLOUD_PARTITION,
AWS_ISO_PARTITION,
AWS_REGION_CHINA_NORHT_1,
AWS_REGION_EU_WEST_1,
AWS_REGION_GOV_CLOUD_US_EAST_1,
AWS_REGION_ISO_GLOBAL,
AWS_REGION_US_EAST_1,
AWS_REGION_US_EAST_2,
set_mocked_aws_audit_info,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
ACCOUNT_ID = 123456789012
AWS_REGION = "us-east-1"
class Test_AWS_Provider:
@mock_iam
@mock_sts
def test_aws_provider_user_without_mfa(self):
# sessionName = "ProwlerAssessmentSession"
audited_regions = ["eu-west-1"]
# sessionName = "ProwlerAsessmentSession"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
iam_client = boto3.client("iam", region_name=AWS_REGION)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -46,28 +38,44 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION_US_EAST_1,
region_name=AWS_REGION,
)
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=AWS_Assume_Role(
role_arn=None,
session_duration=None,
external_id=None,
mfa_enabled=False,
role_session_name="ProwlerAssessmentSession",
),
original_session=session,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
# Call assume_role
with patch(
"prowler.providers.aws.aws_provider.input_role_mfa_token_and_code",
return_value=(
f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:mfa/test-role-mfa",
"111111",
),
return_value=(f"arn:aws:iam::{ACCOUNT_ID}:mfa/test-role-mfa", "111111"),
):
aws_provider = AWS_Provider(audit_info)
assert aws_provider.aws_session.region_name is None
@@ -76,14 +84,14 @@ class Test_AWS_Provider:
session_duration=None,
external_id=None,
mfa_enabled=False,
role_session_name="ProwlerAssessmentSession",
)
@mock_iam
@mock_sts
def test_aws_provider_user_with_mfa(self):
audited_regions = "eu-west-1"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
iam_client = boto3.client("iam", region_name=AWS_REGION)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -95,29 +103,38 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION_US_EAST_1,
region_name=AWS_REGION,
)
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=AWS_REGION,
credentials=None,
assumed_role_info=AWS_Assume_Role(
role_arn=None,
session_duration=None,
external_id=None,
mfa_enabled=False,
role_session_name="ProwlerAssessmentSession",
),
original_session=session,
profile_region=AWS_REGION_US_EAST_1,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=True,
)
# Call assume_role
# # Call assume_role
with patch(
"prowler.providers.aws.aws_provider.input_role_mfa_token_and_code",
return_value=(
f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:mfa/test-role-mfa",
"111111",
),
return_value=(f"arn:aws:iam::{ACCOUNT_ID}:mfa/test-role-mfa", "111111"),
):
aws_provider = AWS_Provider(audit_info)
assert aws_provider.aws_session.region_name is None
@@ -126,7 +143,6 @@ class Test_AWS_Provider:
session_duration=None,
external_id=None,
mfa_enabled=False,
role_session_name="ProwlerAssessmentSession",
)
@mock_iam
@@ -134,12 +150,12 @@ class Test_AWS_Provider:
def test_aws_provider_assume_role_with_mfa(self):
# Variables
role_name = "test-role"
role_arn = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:role/{role_name}"
role_arn = f"arn:aws:iam::{ACCOUNT_ID}:role/{role_name}"
session_duration_seconds = 900
sessionName = "ProwlerAssessmentSession"
audited_regions = ["eu-west-1"]
sessionName = "ProwlerAsessmentSession"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
iam_client = boto3.client("iam", region_name=AWS_REGION)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -151,30 +167,46 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION_US_EAST_1,
region_name=AWS_REGION,
)
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=AWS_Assume_Role(
role_arn=role_arn,
session_duration=session_duration_seconds,
external_id=None,
mfa_enabled=True,
role_session_name="ProwlerAssessmentSession",
),
original_session=session,
profile_region=AWS_REGION_US_EAST_1,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
# Call assume_role
aws_provider = AWS_Provider(audit_info)
# Patch MFA
with patch(
"prowler.providers.aws.aws_provider.input_role_mfa_token_and_code",
return_value=(
f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:mfa/test-role-mfa",
"111111",
),
return_value=(f"arn:aws:iam::{ACCOUNT_ID}:mfa/test-role-mfa", "111111"),
):
assume_role_response = assume_role(
aws_provider.aws_session, aws_provider.role_info
@@ -193,7 +225,7 @@ class Test_AWS_Provider:
# Assumed Role
assert (
assume_role_response["AssumedRoleUser"]["Arn"]
== f"arn:aws:sts::{AWS_ACCOUNT_NUMBER}:assumed-role/{role_name}/{sessionName}"
== f"arn:aws:sts::{ACCOUNT_ID}:assumed-role/{role_name}/{sessionName}"
)
# AssumedRoleUser
@@ -213,12 +245,12 @@ class Test_AWS_Provider:
def test_aws_provider_assume_role_without_mfa(self):
# Variables
role_name = "test-role"
role_arn = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:role/{role_name}"
role_arn = f"arn:aws:iam::{ACCOUNT_ID}:role/{role_name}"
session_duration_seconds = 900
sessionName = "ProwlerAssessmentSession"
audited_regions = "eu-west-1"
sessionName = "ProwlerAsessmentSession"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
iam_client = boto3.client("iam", region_name=AWS_REGION)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -230,22 +262,41 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION_US_EAST_1,
region_name=AWS_REGION,
)
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=AWS_Assume_Role(
role_arn=role_arn,
session_duration=session_duration_seconds,
external_id=None,
mfa_enabled=False,
role_session_name="ProwlerAssessmentSession",
),
original_session=session,
profile_region=AWS_REGION_US_EAST_1,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
# Call assume_role
aws_provider = AWS_Provider(audit_info)
assume_role_response = assume_role(
aws_provider.aws_session, aws_provider.role_info
@@ -264,7 +315,7 @@ class Test_AWS_Provider:
# Assumed Role
assert (
assume_role_response["AssumedRoleUser"]["Arn"]
== f"arn:aws:sts::{AWS_ACCOUNT_NUMBER}:assumed-role/{role_name}/{sessionName}"
== f"arn:aws:sts::{ACCOUNT_ID}:assumed-role/{role_name}/{sessionName}"
)
# AssumedRoleUser
@@ -284,14 +335,14 @@ class Test_AWS_Provider:
def test_assume_role_with_sts_endpoint_region(self):
# Variables
role_name = "test-role"
role_arn = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:role/{role_name}"
role_arn = f"arn:aws:iam::{ACCOUNT_ID}:role/{role_name}"
session_duration_seconds = 900
AWS_REGION_US_EAST_1 = AWS_REGION_EU_WEST_1
sts_endpoint_region = AWS_REGION_US_EAST_1
sessionName = "ProwlerAssessmentSession"
aws_region = "eu-west-1"
sts_endpoint_region = aws_region
audited_regions = [aws_region]
sessionName = "ProwlerAsessmentSession"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
iam_client = boto3.client("iam", region_name=AWS_REGION)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -303,22 +354,41 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION_US_EAST_1,
region_name=AWS_REGION,
)
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=AWS_Assume_Role(
role_arn=role_arn,
session_duration=session_duration_seconds,
external_id=None,
mfa_enabled=False,
role_session_name="ProwlerAssessmentSession",
),
original_session=session,
profile_region=AWS_REGION_US_EAST_1,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
# Call assume_role
aws_provider = AWS_Provider(audit_info)
assume_role_response = assume_role(
aws_provider.aws_session, aws_provider.role_info, sts_endpoint_region
@@ -337,7 +407,7 @@ class Test_AWS_Provider:
# Assumed Role
assert (
assume_role_response["AssumedRoleUser"]["Arn"]
== f"arn:aws:sts::{AWS_ACCOUNT_NUMBER}:assumed-role/{role_name}/{sessionName}"
== f"arn:aws:sts::{ACCOUNT_ID}:assumed-role/{role_name}/{sessionName}"
)
# AssumedRoleUser
@@ -353,78 +423,368 @@ class Test_AWS_Provider:
) == 21 + 1 + len(sessionName)
def test_generate_regional_clients(self):
audited_regions = [AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
audit_info = set_mocked_aws_audit_info(
audited_regions=audited_regions,
audit_session=boto3.session.Session(
region_name=AWS_REGION_US_EAST_1,
),
enabled_regions=audited_regions,
# New Boto3 session with the previously created user
session = boto3.session.Session(
region_name=AWS_REGION,
)
audited_regions = ["eu-west-1", AWS_REGION]
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
generate_regional_clients_response = generate_regional_clients(
"ec2", audit_info
)
assert set(generate_regional_clients_response.keys()) == set(audited_regions)
def test_generate_regional_clients_cn_partition(self):
audited_regions = ["cn-northwest-1", "cn-north-1"]
audit_info = set_mocked_aws_audit_info(
def test_generate_regional_clients_global_service(self):
# New Boto3 session with the previously created user
session = boto3.session.Session(
region_name=AWS_REGION,
)
audited_regions = ["eu-west-1", AWS_REGION]
profile_region = AWS_REGION
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=profile_region,
credentials=None,
assumed_role_info=None,
audited_regions=audited_regions,
audit_session=boto3.session.Session(
region_name=AWS_REGION_US_EAST_1,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
enabled_regions=audited_regions,
)
generate_regional_clients_response = generate_regional_clients(
"shield", audit_info
"route53", audit_info, global_service=True
)
assert list(generate_regional_clients_response.keys()) == [profile_region]
def test_generate_regional_clients_cn_partition(self):
# New Boto3 session with the previously created user
session = boto3.session.Session(
region_name=AWS_REGION,
)
audited_regions = ["cn-northwest-1", "cn-north-1"]
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session,
audited_account=None,
audited_account_arn=None,
audited_partition="aws-cn",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
generate_regional_clients_response = generate_regional_clients(
"shield", audit_info, global_service=True
)
# Shield does not exist in China
assert generate_regional_clients_response == {}
def test_get_default_region(self):
audit_info = set_mocked_aws_audit_info(
profile_region=AWS_REGION_EU_WEST_1,
audited_regions=[AWS_REGION_EU_WEST_1],
audited_regions = ["eu-west-1"]
profile_region = "eu-west-1"
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=profile_region,
credentials=None,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
assert get_default_region("ec2", audit_info) == AWS_REGION_EU_WEST_1
assert get_default_region("ec2", audit_info) == "eu-west-1"
def test_get_default_region_profile_region_not_audited(self):
audit_info = set_mocked_aws_audit_info(
profile_region=AWS_REGION_US_EAST_2,
audited_regions=[AWS_REGION_EU_WEST_1],
audited_regions = ["eu-west-1"]
profile_region = "us-east-2"
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=profile_region,
credentials=None,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
assert get_default_region("ec2", audit_info) == AWS_REGION_EU_WEST_1
assert get_default_region("ec2", audit_info) == "eu-west-1"
def test_get_default_region_non_profile_region(self):
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
audited_regions = ["eu-west-1"]
profile_region = None
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=profile_region,
credentials=None,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
assert get_default_region("ec2", audit_info) == AWS_REGION_EU_WEST_1
assert get_default_region("ec2", audit_info) == "eu-west-1"
def test_get_default_region_non_profile_or_audited_region(self):
audit_info = set_mocked_aws_audit_info()
assert get_default_region("ec2", audit_info) == AWS_REGION_US_EAST_1
audited_regions = None
profile_region = None
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=profile_region,
credentials=None,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
assert get_default_region("ec2", audit_info) == "us-east-1"
def test_aws_get_global_region(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
assert get_default_region("ec2", audit_info) == "us-east-1"
def test_aws_gov_get_global_region(self):
audit_info = set_mocked_aws_audit_info(
audited_partition=AWS_GOV_CLOUD_PARTITION
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws-us-gov",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
assert get_global_region(audit_info) == AWS_REGION_GOV_CLOUD_US_EAST_1
assert get_global_region(audit_info) == "us-gov-east-1"
def test_aws_cn_get_global_region(self):
audit_info = set_mocked_aws_audit_info(audited_partition=AWS_CHINA_PARTITION)
assert get_global_region(audit_info) == AWS_REGION_CHINA_NORHT_1
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws-cn",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
assert get_global_region(audit_info) == "cn-north-1"
def test_aws_iso_get_global_region(self):
audit_info = set_mocked_aws_audit_info(audited_partition=AWS_ISO_PARTITION)
assert get_global_region(audit_info) == AWS_REGION_ISO_GLOBAL
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws-iso",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
assert get_global_region(audit_info) == "aws-iso-global"
def test_get_available_aws_service_regions_with_us_east_1_audited(self):
audit_info = set_mocked_aws_audit_info(audited_regions=[AWS_REGION_US_EAST_1])
audited_regions = ["us-east-1"]
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
with patch(
"prowler.providers.aws.aws_provider.parse_json_file",
return_value={
@@ -439,7 +799,7 @@ class Test_AWS_Provider:
"eu-north-1",
"eu-south-1",
"eu-south-2",
AWS_REGION_EU_WEST_1,
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
@@ -455,13 +815,33 @@ class Test_AWS_Provider:
}
},
):
assert get_available_aws_service_regions("ec2", audit_info) == {
AWS_REGION_US_EAST_1
}
assert get_available_aws_service_regions("ec2", audit_info) == ["us-east-1"]
def test_get_available_aws_service_regions_with_all_regions_audited(self):
audit_info = set_mocked_aws_audit_info()
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
with patch(
"prowler.providers.aws.aws_provider.parse_json_file",
return_value={
@@ -476,7 +856,7 @@ class Test_AWS_Provider:
"eu-north-1",
"eu-south-1",
"eu-south-2",
AWS_REGION_EU_WEST_1,
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",


@@ -1,5 +1,5 @@
import yaml
from boto3 import resource
from boto3 import resource, session
from mock import MagicMock
from moto import mock_dynamodb, mock_s3
@@ -13,21 +13,51 @@ from prowler.providers.aws.lib.allowlist.allowlist import (
is_excepted,
parse_allowlist_file,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
from tests.providers.aws.audit_info_utils import (
AWS_ACCOUNT_NUMBER,
AWS_REGION_EU_CENTRAL_1,
AWS_REGION_EU_SOUTH_3,
AWS_REGION_EU_WEST_1,
AWS_REGION_US_EAST_1,
set_mocked_aws_audit_info,
)
class Test_Allowlist:
# Mocked Audit Info
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
# Test S3 allowlist
@mock_s3
def test_s3_allowlist(self):
audit_info = set_mocked_aws_audit_info()
audit_info = self.set_mocked_audit_info()
# Create bucket and upload allowlist yaml
s3_resource = resource("s3", region_name=AWS_REGION_US_EAST_1)
s3_resource.create_bucket(Bucket="test-allowlist")
@@ -46,7 +76,7 @@ class Test_Allowlist:
# Test DynamoDB allowlist
@mock_dynamodb
def test_dynamo_allowlist(self):
audit_info = set_mocked_aws_audit_info()
audit_info = self.set_mocked_audit_info()
# Create table and put item
dynamodb_resource = resource("dynamodb", region_name=AWS_REGION_US_EAST_1)
table_name = "test-allowlist"
@@ -90,7 +120,7 @@ class Test_Allowlist:
@mock_dynamodb
def test_dynamo_allowlist_with_tags(self):
audit_info = set_mocked_aws_audit_info()
audit_info = self.set_mocked_audit_info()
# Create table and put item
dynamodb_resource = resource("dynamodb", region_name=AWS_REGION_US_EAST_1)
table_name = "test-allowlist"
@@ -134,7 +164,8 @@ class Test_Allowlist:
)
# Allowlist tests
def test_allowlist_findings_only_wildcard(self):
def test_allowlist_findings(self):
# Allowlist example
allowlist = {
"Accounts": {
@@ -167,46 +198,6 @@ class Test_Allowlist:
assert len(allowlisted_findings) == 1
assert allowlisted_findings[0].status == "WARNING"
def test_allowlist_all_exceptions_empty(self):
# Allowlist example
allowlist = {
"Accounts": {
"*": {
"Checks": {
"*": {
"Tags": ["*"],
"Regions": [AWS_REGION_US_EAST_1],
"Resources": ["*"],
"Exceptions": {
"Tags": [],
"Regions": [],
"Accounts": [],
"Resources": [],
},
}
}
}
}
}
# Check Findings
check_findings = []
finding_1 = MagicMock
finding_1.check_metadata = MagicMock
finding_1.check_metadata.CheckID = "check_test"
finding_1.status = "FAIL"
finding_1.region = AWS_REGION_US_EAST_1
finding_1.resource_id = "prowler"
finding_1.resource_tags = []
check_findings.append(finding_1)
allowlisted_findings = allowlist_findings(
allowlist, AWS_ACCOUNT_NUMBER, check_findings
)
assert len(allowlisted_findings) == 1
assert allowlisted_findings[0].status == "WARNING"
def test_is_allowlisted_with_everything_excepted(self):
allowlist = {
"Accounts": {
@@ -246,6 +237,12 @@ class Test_Allowlist:
"Tags": ["*"],
"Regions": ["*"],
"Resources": ["*"],
"Exceptions": {
"Tags": [],
"Regions": [],
"Accounts": [],
"Resources": [],
},
}
}
}
@@ -479,155 +476,6 @@ class Test_Allowlist:
)
)
def test_is_allowlisted_all_and_single_account_with_different_resources(self):
# Allowlist example
allowlist = {
"Accounts": {
"*": {
"Checks": {
"check_test_1": {
"Regions": ["*"],
"Resources": ["resource_1", "resource_2"],
},
}
},
AWS_ACCOUNT_NUMBER: {
"Checks": {
"check_test_1": {
"Regions": ["*"],
"Resources": ["resource_3"],
}
}
},
}
}
assert is_allowlisted(
allowlist,
"111122223333",
"check_test_1",
AWS_REGION_US_EAST_1,
"resource_1",
"",
)
assert is_allowlisted(
allowlist,
"111122223333",
"check_test_1",
AWS_REGION_US_EAST_1,
"resource_2",
"",
)
assert not is_allowlisted(
allowlist,
"111122223333",
"check_test_1",
AWS_REGION_US_EAST_1,
"resource_3",
"",
)
assert is_allowlisted(
allowlist,
AWS_ACCOUNT_NUMBER,
"check_test_1",
AWS_REGION_US_EAST_1,
"resource_3",
"",
)
assert is_allowlisted(
allowlist,
AWS_ACCOUNT_NUMBER,
"check_test_1",
AWS_REGION_US_EAST_1,
"resource_2",
"",
)
def test_is_allowlisted_all_and_single_account_with_different_resources_and_exceptions(
self,
):
# Allowlist example
allowlist = {
"Accounts": {
"*": {
"Checks": {
"check_test_1": {
"Regions": ["*"],
"Resources": ["resource_1", "resource_2"],
"Exceptions": {"Regions": [AWS_REGION_US_EAST_1]},
},
}
},
AWS_ACCOUNT_NUMBER: {
"Checks": {
"check_test_1": {
"Regions": ["*"],
"Resources": ["resource_3"],
"Exceptions": {"Regions": [AWS_REGION_EU_WEST_1]},
}
}
},
}
}
assert not is_allowlisted(
allowlist,
AWS_ACCOUNT_NUMBER,
"check_test_1",
AWS_REGION_US_EAST_1,
"resource_2",
"",
)
assert not is_allowlisted(
allowlist,
"111122223333",
"check_test_1",
AWS_REGION_US_EAST_1,
"resource_1",
"",
)
assert is_allowlisted(
allowlist,
"111122223333",
"check_test_1",
AWS_REGION_EU_WEST_1,
"resource_2",
"",
)
assert not is_allowlisted(
allowlist,
"111122223333",
"check_test_1",
AWS_REGION_US_EAST_1,
"resource_3",
"",
)
assert is_allowlisted(
allowlist,
AWS_ACCOUNT_NUMBER,
"check_test_1",
AWS_REGION_US_EAST_1,
"resource_3",
"",
)
assert not is_allowlisted(
allowlist,
AWS_ACCOUNT_NUMBER,
"check_test_1",
AWS_REGION_EU_WEST_1,
"resource_3",
"",
)
def test_is_allowlisted_single_account(self):
allowlist = {
"Accounts": {
@@ -901,111 +749,6 @@ class Test_Allowlist:
)
)
def test_is_allowlisted_specific_account_with_other_account_excepted(self):
# Allowlist example
allowlist = {
"Accounts": {
AWS_ACCOUNT_NUMBER: {
"Checks": {
"check_test": {
"Regions": [AWS_REGION_EU_WEST_1],
"Resources": ["*"],
"Tags": [],
"Exceptions": {"Accounts": ["111122223333"]},
}
}
}
}
}
assert is_allowlisted(
allowlist,
AWS_ACCOUNT_NUMBER,
"check_test",
AWS_REGION_EU_WEST_1,
"prowler",
"environment=dev",
)
assert not is_allowlisted(
allowlist,
"111122223333",
"check_test",
AWS_REGION_EU_WEST_1,
"prowler",
"environment=dev",
)
def test_is_allowlisted_complex_allowlist(self):
# Allowlist example
allowlist = {
"Accounts": {
"*": {
"Checks": {
"s3_bucket_object_versioning": {
"Regions": [AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1],
"Resources": ["ci-logs", "logs", ".+-logs"],
},
"ecs_task_definitions_no_environment_secrets": {
"Regions": ["*"],
"Resources": ["*"],
"Exceptions": {
"Accounts": [AWS_ACCOUNT_NUMBER],
"Regions": [
AWS_REGION_EU_WEST_1,
AWS_REGION_EU_SOUTH_3,
],
},
},
"*": {
"Regions": ["*"],
"Resources": ["*"],
"Tags": ["environment=dev"],
},
}
},
AWS_ACCOUNT_NUMBER: {
"Checks": {
"*": {
"Regions": ["*"],
"Resources": ["*"],
"Exceptions": {
"Resources": ["test"],
"Tags": ["environment=prod"],
},
}
}
},
}
}
assert is_allowlisted(
allowlist,
AWS_ACCOUNT_NUMBER,
"test_check",
AWS_REGION_EU_WEST_1,
"prowler-logs",
"environment=dev",
)
assert is_allowlisted(
allowlist,
AWS_ACCOUNT_NUMBER,
"ecs_task_definitions_no_environment_secrets",
AWS_REGION_EU_WEST_1,
"prowler",
"environment=dev",
)
assert is_allowlisted(
allowlist,
AWS_ACCOUNT_NUMBER,
"s3_bucket_object_versioning",
AWS_REGION_EU_WEST_1,
"prowler-logs",
"environment=dev",
)
def test_is_allowlisted_in_tags(self):
allowlist_tags = ["environment=dev", "project=prowler"]
@@ -1080,107 +823,6 @@ class Test_Allowlist:
"environment=test",
)
def test_is_excepted_only_in_account(self):
# Allowlist example
exceptions = {
"Accounts": [AWS_ACCOUNT_NUMBER],
"Regions": [],
"Resources": [],
"Tags": [],
}
assert is_excepted(
exceptions,
AWS_ACCOUNT_NUMBER,
"eu-central-1",
"test",
"environment=test",
)
def test_is_excepted_only_in_region(self):
# Allowlist example
exceptions = {
"Accounts": [],
"Regions": [AWS_REGION_EU_CENTRAL_1, AWS_REGION_EU_SOUTH_3],
"Resources": [],
"Tags": [],
}
assert is_excepted(
exceptions,
AWS_ACCOUNT_NUMBER,
AWS_REGION_EU_CENTRAL_1,
"test",
"environment=test",
)
def test_is_excepted_only_in_resources(self):
# Allowlist example
exceptions = {
"Accounts": [],
"Regions": [],
"Resources": ["resource_1"],
"Tags": [],
}
assert is_excepted(
exceptions,
AWS_ACCOUNT_NUMBER,
AWS_REGION_EU_CENTRAL_1,
"resource_1",
"environment=test",
)
def test_is_excepted_only_in_tags(self):
# Allowlist example
exceptions = {
"Accounts": [],
"Regions": [],
"Resources": [],
"Tags": ["environment=test"],
}
assert is_excepted(
exceptions,
AWS_ACCOUNT_NUMBER,
AWS_REGION_EU_CENTRAL_1,
"resource_1",
"environment=test",
)
def test_is_excepted_in_account_and_tags(self):
# Allowlist example
exceptions = {
"Accounts": [AWS_ACCOUNT_NUMBER],
"Regions": [],
"Resources": [],
"Tags": ["environment=test"],
}
assert is_excepted(
exceptions,
AWS_ACCOUNT_NUMBER,
AWS_REGION_EU_CENTRAL_1,
"resource_1",
"environment=test",
)
assert not is_excepted(
exceptions,
"111122223333",
AWS_REGION_EU_CENTRAL_1,
"resource_1",
"environment=test",
)
assert not is_excepted(
exceptions,
"111122223333",
AWS_REGION_EU_CENTRAL_1,
"resource_1",
"environment=dev",
)
def test_is_excepted_all_wildcard(self):
exceptions = {
"Accounts": ["*"],
@@ -1227,22 +869,6 @@ class Test_Allowlist:
"environment=pro",
)
def test_is_excepted_all_empty(self):
exceptions = {
"Accounts": [],
"Regions": [],
"Resources": [],
"Tags": [],
}
assert not is_excepted(
exceptions,
AWS_ACCOUNT_NUMBER,
"eu-south-2",
"test",
"environment=test",
)
def test_is_allowlisted_in_resource(self):
allowlist_resources = ["prowler", "^test", "prowler-pro"]


@@ -21,49 +21,6 @@ from tests.providers.aws.audit_info_utils import (
set_mocked_aws_audit_info,
)
def get_security_hub_finding(status: str):
return {
"SchemaVersion": "2018-10-08",
"Id": f"prowler-iam_user_accesskey_unused-{AWS_ACCOUNT_NUMBER}-{AWS_REGION_EU_WEST_1}-ee26b0dd4",
"ProductArn": f"arn:aws:securityhub:{AWS_REGION_EU_WEST_1}::product/prowler/prowler",
"RecordState": "ACTIVE",
"ProductFields": {
"ProviderName": "Prowler",
"ProviderVersion": prowler_version,
"ProwlerResourceName": "test",
},
"GeneratorId": "prowler-iam_user_accesskey_unused",
"AwsAccountId": f"{AWS_ACCOUNT_NUMBER}",
"Types": ["Software and Configuration Checks"],
"FirstObservedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"UpdatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"CreatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"Severity": {"Label": "LOW"},
"Title": "Ensure Access Keys unused are disabled",
"Description": "test",
"Resources": [
{
"Type": "AwsIamAccessAnalyzer",
"Id": "test",
"Partition": "aws",
"Region": f"{AWS_REGION_EU_WEST_1}",
}
],
"Compliance": {
"Status": status,
"RelatedRequirements": [],
"AssociatedStandards": [],
},
"Remediation": {
"Recommendation": {
"Text": "Run sudo yum update and cross your fingers and toes.",
"Url": "https://myfp.com/recommendations/dangerous_things_and_how_to_fix_them.html",
}
},
}
# Mocking Security Hub Get Findings
make_api_call = botocore.client.BaseClient._make_api_call
@@ -107,13 +64,10 @@ class Test_SecurityHub:
return finding
def set_mocked_output_options(
self, is_quiet: bool = False, send_sh_only_fails: bool = False
):
def set_mocked_output_options(self, is_quiet):
output_options = MagicMock
output_options.bulk_checks_metadata = {}
output_options.is_quiet = is_quiet
output_options.send_sh_only_fails = send_sh_only_fails
return output_options
@@ -144,7 +98,47 @@ class Test_SecurityHub:
output_options,
enabled_regions,
) == {
AWS_REGION_EU_WEST_1: [get_security_hub_finding("PASSED")],
AWS_REGION_EU_WEST_1: [
{
"SchemaVersion": "2018-10-08",
"Id": f"prowler-iam_user_accesskey_unused-{AWS_ACCOUNT_NUMBER}-{AWS_REGION_EU_WEST_1}-ee26b0dd4",
"ProductArn": f"arn:aws:securityhub:{AWS_REGION_EU_WEST_1}::product/prowler/prowler",
"RecordState": "ACTIVE",
"ProductFields": {
"ProviderName": "Prowler",
"ProviderVersion": prowler_version,
"ProwlerResourceName": "test",
},
"GeneratorId": "prowler-iam_user_accesskey_unused",
"AwsAccountId": f"{AWS_ACCOUNT_NUMBER}",
"Types": ["Software and Configuration Checks"],
"FirstObservedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"UpdatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"CreatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"Severity": {"Label": "LOW"},
"Title": "Ensure Access Keys unused are disabled",
"Description": "test",
"Resources": [
{
"Type": "AwsIamAccessAnalyzer",
"Id": "test",
"Partition": "aws",
"Region": f"{AWS_REGION_EU_WEST_1}",
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [],
"AssociatedStandards": [],
},
"Remediation": {
"Recommendation": {
"Text": "Run sudo yum update and cross your fingers and toes.",
"Url": "https://myfp.com/recommendations/dangerous_things_and_how_to_fix_them.html",
}
},
}
],
}
def test_prepare_security_hub_findings_quiet_INFO_finding(self):
@@ -177,7 +171,7 @@ class Test_SecurityHub:
enabled_regions,
) == {AWS_REGION_EU_WEST_1: []}
def test_prepare_security_hub_findings_quiet_PASS(self):
def test_prepare_security_hub_findings_quiet(self):
enabled_regions = [AWS_REGION_EU_WEST_1]
output_options = self.set_mocked_output_options(is_quiet=True)
findings = [self.generate_finding("PASS", AWS_REGION_EU_WEST_1)]
@@ -192,51 +186,6 @@ class Test_SecurityHub:
enabled_regions,
) == {AWS_REGION_EU_WEST_1: []}
def test_prepare_security_hub_findings_quiet_FAIL(self):
enabled_regions = [AWS_REGION_EU_WEST_1]
output_options = self.set_mocked_output_options(is_quiet=True)
findings = [self.generate_finding("FAIL", AWS_REGION_EU_WEST_1)]
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1, AWS_REGION_EU_WEST_2]
)
assert prepare_security_hub_findings(
findings,
audit_info,
output_options,
enabled_regions,
) == {AWS_REGION_EU_WEST_1: [get_security_hub_finding("FAILED")]}
def test_prepare_security_hub_findings_send_sh_only_fails_PASS(self):
enabled_regions = [AWS_REGION_EU_WEST_1]
output_options = self.set_mocked_output_options(send_sh_only_fails=True)
findings = [self.generate_finding("PASS", AWS_REGION_EU_WEST_1)]
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1, AWS_REGION_EU_WEST_2]
)
assert prepare_security_hub_findings(
findings,
audit_info,
output_options,
enabled_regions,
) == {AWS_REGION_EU_WEST_1: []}
def test_prepare_security_hub_findings_send_sh_only_fails_FAIL(self):
enabled_regions = [AWS_REGION_EU_WEST_1]
output_options = self.set_mocked_output_options(send_sh_only_fails=True)
findings = [self.generate_finding("FAIL", AWS_REGION_EU_WEST_1)]
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1, AWS_REGION_EU_WEST_2]
)
assert prepare_security_hub_findings(
findings,
audit_info,
output_options,
enabled_regions,
) == {AWS_REGION_EU_WEST_1: [get_security_hub_finding("FAILED")]}
def test_prepare_security_hub_findings_no_audited_regions(self):
enabled_regions = [AWS_REGION_EU_WEST_1]
output_options = self.set_mocked_output_options(is_quiet=False)
@@ -249,7 +198,47 @@ class Test_SecurityHub:
output_options,
enabled_regions,
) == {
AWS_REGION_EU_WEST_1: [get_security_hub_finding("PASSED")],
AWS_REGION_EU_WEST_1: [
{
"SchemaVersion": "2018-10-08",
"Id": f"prowler-iam_user_accesskey_unused-{AWS_ACCOUNT_NUMBER}-{AWS_REGION_EU_WEST_1}-ee26b0dd4",
"ProductArn": f"arn:aws:securityhub:{AWS_REGION_EU_WEST_1}::product/prowler/prowler",
"RecordState": "ACTIVE",
"ProductFields": {
"ProviderName": "Prowler",
"ProviderVersion": prowler_version,
"ProwlerResourceName": "test",
},
"GeneratorId": "prowler-iam_user_accesskey_unused",
"AwsAccountId": f"{AWS_ACCOUNT_NUMBER}",
"Types": ["Software and Configuration Checks"],
"FirstObservedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"UpdatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"CreatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"Severity": {"Label": "LOW"},
"Title": "Ensure Access Keys unused are disabled",
"Description": "test",
"Resources": [
{
"Type": "AwsIamAccessAnalyzer",
"Id": "test",
"Partition": "aws",
"Region": f"{AWS_REGION_EU_WEST_1}",
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [],
"AssociatedStandards": [],
},
"Remediation": {
"Recommendation": {
"Text": "Run sudo yum update and cross your fingers and toes.",
"Url": "https://myfp.com/recommendations/dangerous_things_and_how_to_fix_them.html",
}
},
}
],
}
@patch("botocore.client.BaseClient._make_api_call", new=mock_make_api_call)
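The quiet / send_sh_only_fails behavior these tests pin down can be sketched independently of Prowler's real `prepare_security_hub_findings` (the function below is an illustrative stand-in, not the actual implementation): when either flag is set, PASS findings are dropped and only FAIL findings are forwarded to Security Hub.

```python
def prepare_findings_sketch(findings, region, quiet=False, send_sh_only_fails=False):
    """Hypothetical sketch of the filtering the tests above assert on:
    quiet and send_sh_only_fails both suppress passing findings."""
    skip_pass = quiet or send_sh_only_fails
    prepared = {region: []}
    for finding in findings:
        if skip_pass and finding["status"] == "PASS":
            continue  # quiet modes drop passing findings entirely
        status = "PASSED" if finding["status"] == "PASS" else "FAILED"
        prepared[region].append({"Compliance": {"Status": status}})
    return prepared

# Mirrors the assertions above: quiet drops PASS, keeps FAIL.
print(prepare_findings_sketch([{"status": "PASS"}], "eu-west-1", quiet=True))
# {'eu-west-1': []}
```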

View File

@@ -1,21 +1,20 @@
from boto3 import session
from mock import patch
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.aws.lib.service.service import AWSService
from tests.providers.aws.audit_info_utils import (
AWS_ACCOUNT_ARN,
AWS_ACCOUNT_NUMBER,
AWS_COMMERCIAL_PARTITION,
AWS_REGION_US_EAST_1,
set_mocked_aws_audit_info,
)
from prowler.providers.common.models import Audit_Metadata
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_ACCOUNT_ARN = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"
AWS_PARTITION = "aws"
AWS_REGION = "us-east-1"
def mock_generate_regional_clients(service, audit_info):
regional_client = audit_info.audit_session.client(
service, region_name=AWS_REGION_US_EAST_1
)
regional_client.region = AWS_REGION_US_EAST_1
return {AWS_REGION_US_EAST_1: regional_client}
def mock_generate_regional_clients(service, audit_info, _):
regional_client = audit_info.audit_session.client(service, region_name=AWS_REGION)
regional_client.region = AWS_REGION
return {AWS_REGION: regional_client}
@patch(
@@ -23,40 +22,50 @@ def mock_generate_regional_clients(service, audit_info):
new=mock_generate_regional_clients,
)
class Test_AWSService:
def test_AWSService_init(self):
service_name = "s3"
audit_info = set_mocked_aws_audit_info()
service = AWSService(service_name, audit_info)
assert service.audit_info == audit_info
assert service.audited_account == AWS_ACCOUNT_NUMBER
assert service.audited_account_arn == AWS_ACCOUNT_ARN
assert service.audited_partition == AWS_COMMERCIAL_PARTITION
assert service.audit_resources == []
assert service.audited_checks == []
assert service.session == audit_info.audit_session
assert service.service == service_name
assert len(service.regional_clients) == 1
assert (
service.regional_clients[AWS_REGION_US_EAST_1].__class__.__name__
== service_name.upper()
# Mocked Audit Info
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=AWS_ACCOUNT_ARN,
audited_user_id=None,
audited_partition=AWS_PARTITION,
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=[],
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
assert service.region == AWS_REGION_US_EAST_1
assert service.client.__class__.__name__ == service_name.upper()
return audit_info
def test_AWSService_init_global_service(self):
service_name = "cloudfront"
audit_info = set_mocked_aws_audit_info()
service = AWSService(service_name, audit_info, global_service=True)
def test_AWSService_init(self):
audit_info = self.set_mocked_audit_info()
service = AWSService("s3", audit_info)
assert service.audit_info == audit_info
assert service.audited_account == AWS_ACCOUNT_NUMBER
assert service.audited_account_arn == AWS_ACCOUNT_ARN
assert service.audited_partition == AWS_COMMERCIAL_PARTITION
assert service.audited_partition == AWS_PARTITION
assert service.audit_resources == []
assert service.audited_checks == []
assert service.session == audit_info.audit_session
assert service.service == service_name
assert not hasattr(service, "regional_clients")
assert service.region == AWS_REGION_US_EAST_1
assert service.client.__class__.__name__ == "CloudFront"
assert service.service == "s3"
assert len(service.regional_clients) == 1
assert service.regional_clients[AWS_REGION].__class__.__name__ == "S3"
assert service.region == AWS_REGION
assert service.client.__class__.__name__ == "S3"
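The single-region wiring these assertions rely on — a patched factory returning one `{region: client}` mapping, which the service then exposes as `.region` and `.client` — can be sketched without boto3 as follows. `ToyService` and `FakeRegionalClient` are illustrative stand-ins, not Prowler APIs.

```python
AWS_REGION_US_EAST_1 = "us-east-1"

class FakeRegionalClient:
    """Stand-in for a boto3 client; only the region attribute matters here."""
    def __init__(self, region):
        self.region = region

def mock_generate_regional_clients(service, audit_info):
    # Same shape as the test helper: a {region: client} mapping with one entry.
    regional_client = FakeRegionalClient(AWS_REGION_US_EAST_1)
    return {AWS_REGION_US_EAST_1: regional_client}

class ToyService:
    """Illustrative sketch of what the assertions exercise: the service asks
    the (patched) factory for its regional clients and, with exactly one
    region, exposes a default .region and .client."""
    def __init__(self, service, audit_info, client_factory):
        self.service = service
        self.regional_clients = client_factory(service, audit_info)
        self.region = next(iter(self.regional_clients))
        self.client = self.regional_clients[self.region]

service = ToyService("s3", None, mock_generate_regional_clients)
```

Because the factory is what gets patched, the service code under test never needs to know it is running against fakes.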

View File

@@ -1,15 +1,19 @@
from unittest.mock import patch
import botocore
from boto3 import session
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.aws.services.accessanalyzer.accessanalyzer_service import (
AccessAnalyzer,
)
from tests.providers.aws.audit_info_utils import (
AWS_REGION_EU_WEST_1,
AWS_REGION_US_EAST_1,
set_mocked_aws_audit_info,
)
from prowler.providers.common.models import Audit_Metadata
# Mock Test Region
AWS_REGION = "eu-west-1"
AWS_ACCOUNT_NUMBER = "123456789012"
# Mocking Access Analyzer Calls
make_api_call = botocore.client.BaseClient._make_api_call
@@ -54,12 +58,10 @@ def mock_make_api_call(self, operation_name, kwarg):
return make_api_call(self, operation_name, kwarg)
def mock_generate_regional_clients(service, audit_info):
regional_client = audit_info.audit_session.client(
service, region_name=AWS_REGION_EU_WEST_1
)
regional_client.region = AWS_REGION_EU_WEST_1
return {AWS_REGION_EU_WEST_1: regional_client}
def mock_generate_regional_clients(service, audit_info, _):
regional_client = audit_info.audit_session.client(service, region_name=AWS_REGION)
regional_client.region = AWS_REGION
return {AWS_REGION: regional_client}
# Patch every AWS call using Boto3 and generate_regional_clients to have 1 client
@@ -69,46 +71,66 @@ def mock_generate_regional_clients(service, audit_info):
new=mock_generate_regional_clients,
)
class Test_AccessAnalyzer_Service:
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=["us-east-1", "eu-west-1"],
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
# Test AccessAnalyzer Client
def test__get_client__(self):
access_analyzer = AccessAnalyzer(
set_mocked_aws_audit_info([AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1])
)
access_analyzer = AccessAnalyzer(self.set_mocked_audit_info())
assert (
access_analyzer.regional_clients[AWS_REGION_EU_WEST_1].__class__.__name__
access_analyzer.regional_clients[AWS_REGION].__class__.__name__
== "AccessAnalyzer"
)
# Test AccessAnalyzer Session
def test__get_session__(self):
access_analyzer = AccessAnalyzer(
set_mocked_aws_audit_info([AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1])
)
access_analyzer = AccessAnalyzer(self.set_mocked_audit_info())
assert access_analyzer.session.__class__.__name__ == "Session"
# Test AccessAnalyzer Service
def test__get_service__(self):
access_analyzer = AccessAnalyzer(
set_mocked_aws_audit_info([AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1])
)
access_analyzer = AccessAnalyzer(self.set_mocked_audit_info())
assert access_analyzer.service == "accessanalyzer"
def test__list_analyzers__(self):
access_analyzer = AccessAnalyzer(
set_mocked_aws_audit_info([AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1])
)
access_analyzer = AccessAnalyzer(self.set_mocked_audit_info())
assert len(access_analyzer.analyzers) == 1
assert access_analyzer.analyzers[0].arn == "ARN"
assert access_analyzer.analyzers[0].name == "Test Analyzer"
assert access_analyzer.analyzers[0].status == "ACTIVE"
assert access_analyzer.analyzers[0].tags == [{"test": "test"}]
assert access_analyzer.analyzers[0].type == "ACCOUNT"
assert access_analyzer.analyzers[0].region == AWS_REGION_EU_WEST_1
assert access_analyzer.analyzers[0].region == AWS_REGION
def test__list_findings__(self):
access_analyzer = AccessAnalyzer(
set_mocked_aws_audit_info([AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1])
)
access_analyzer = AccessAnalyzer(self.set_mocked_audit_info())
assert len(access_analyzer.analyzers) == 1
assert len(access_analyzer.analyzers[0].findings) == 1
assert access_analyzer.analyzers[0].findings[0].status == "ARCHIVED"
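The `mock_make_api_call` pattern used throughout these service tests intercepts boto3's low-level dispatch by operation name, returning canned responses for the operations under test and deferring everything else to the original. A dependency-free sketch of the idea (`ToyClient` is an illustrative stand-in for `botocore.client.BaseClient`):

```python
from unittest.mock import patch

class ToyClient:
    """Illustrative stand-in for botocore.client.BaseClient."""
    def _make_api_call(self, operation_name, kwargs):
        raise RuntimeError("would hit the AWS API")

    def list_analyzers(self):
        return self._make_api_call("ListAnalyzers", {})

# Keep a handle to the original so unmatched operations fall through.
make_api_call = ToyClient._make_api_call

def mock_make_api_call(self, operation_name, kwargs):
    # Return canned data only for the operation under test.
    if operation_name == "ListAnalyzers":
        return {"analyzers": [{"name": "Test Analyzer", "status": "ACTIVE"}]}
    return make_api_call(self, operation_name, kwargs)

with patch.object(ToyClient, "_make_api_call", new=mock_make_api_call):
    analyzers = ToyClient().list_analyzers()["analyzers"]

print(analyzers[0]["status"])  # ACTIVE
```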

View File

@@ -1,11 +1,14 @@
import botocore
from boto3 import session
from mock import patch
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.aws.services.account.account_service import Account, Contact
from tests.providers.aws.audit_info_utils import (
AWS_ACCOUNT_NUMBER,
set_mocked_aws_audit_info,
)
from prowler.providers.common.models import Audit_Metadata
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_ACCOUNT_ARN = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"
AWS_REGION = "us-east-1"
# Mocking Account Calls
make_api_call = botocore.client.BaseClient._make_api_call
@@ -53,34 +56,65 @@ def mock_make_api_call(self, operation_name, kwargs):
# Patch every AWS call using Boto3
@patch("botocore.client.BaseClient._make_api_call", new=mock_make_api_call)
class Test_Account_Service:
# Mocked Audit Info
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=AWS_ACCOUNT_ARN,
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
# Test Account Service
def test_service(self):
audit_info = set_mocked_aws_audit_info()
audit_info = self.set_mocked_audit_info()
account = Account(audit_info)
assert account.service == "account"
# Test Account Client
def test_client(self):
audit_info = set_mocked_aws_audit_info()
audit_info = self.set_mocked_audit_info()
account = Account(audit_info)
assert account.client.__class__.__name__ == "Account"
# Test Account Session
def test__get_session__(self):
audit_info = set_mocked_aws_audit_info()
audit_info = self.set_mocked_audit_info()
account = Account(audit_info)
assert account.session.__class__.__name__ == "Session"
# Test Audited Account
def test_audited_account(self):
audit_info = set_mocked_aws_audit_info()
audit_info = self.set_mocked_audit_info()
account = Account(audit_info)
assert account.audited_account == AWS_ACCOUNT_NUMBER
# Test Account Get Account Contacts
def test_get_account_contacts(self):
# Account client for this test class
audit_info = set_mocked_aws_audit_info()
audit_info = self.set_mocked_audit_info()
account = Account(audit_info)
assert account.number_of_contacts == 4
assert account.contact_base == Contact(

View File

@@ -2,20 +2,26 @@ import uuid
from datetime import datetime
import botocore
from boto3 import session
from freezegun import freeze_time
from mock import patch
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.aws.services.acm.acm_service import ACM
from tests.providers.aws.audit_info_utils import (
AWS_ACCOUNT_NUMBER,
AWS_REGION_US_EAST_1,
set_mocked_aws_audit_info,
)
from prowler.providers.common.models import Audit_Metadata
# from moto import mock_acm
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_REGION = "us-east-1"
# Mocking ACM Calls
make_api_call = botocore.client.BaseClient._make_api_call
certificate_arn = f"arn:aws:acm:{AWS_REGION_US_EAST_1}:{AWS_ACCOUNT_NUMBER}:certificate/{str(uuid.uuid4())}"
certificate_arn = (
f"arn:aws:acm:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:certificate/{str(uuid.uuid4())}"
)
certificate_name = "test-certificate.com"
certificate_type = "AMAZON_ISSUED"
@@ -74,12 +80,10 @@ def mock_make_api_call(self, operation_name, kwargs):
# Mock generate_regional_clients()
def mock_generate_regional_clients(service, audit_info):
regional_client = audit_info.audit_session.client(
service, region_name=AWS_REGION_US_EAST_1
)
regional_client.region = AWS_REGION_US_EAST_1
return {AWS_REGION_US_EAST_1: regional_client}
def mock_generate_regional_clients(service, audit_info, _):
regional_client = audit_info.audit_session.client(service, region_name=AWS_REGION)
regional_client.region = AWS_REGION
return {AWS_REGION: regional_client}
# Patch every AWS call using Boto3 and generate_regional_clients to have 1 client
@@ -92,11 +96,42 @@ def mock_generate_regional_clients(service, audit_info):
@freeze_time("2023-01-01")
# FIXME: Pending Moto PR to update ACM responses
class Test_ACM_Service:
# Mocked Audit Info
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
# Test ACM Service
# @mock_acm
def test_service(self):
# ACM client for this test class
audit_info = set_mocked_aws_audit_info()
audit_info = self.set_mocked_audit_info()
acm = ACM(audit_info)
assert acm.service == "acm"
@@ -104,7 +139,7 @@ class Test_ACM_Service:
# @mock_acm
def test_client(self):
# ACM client for this test class
audit_info = set_mocked_aws_audit_info()
audit_info = self.set_mocked_audit_info()
acm = ACM(audit_info)
for regional_client in acm.regional_clients.values():
assert regional_client.__class__.__name__ == "ACM"
@@ -113,7 +148,7 @@ class Test_ACM_Service:
# @mock_acm
def test__get_session__(self):
# ACM client for this test class
audit_info = set_mocked_aws_audit_info()
audit_info = self.set_mocked_audit_info()
acm = ACM(audit_info)
assert acm.session.__class__.__name__ == "Session"
@@ -121,7 +156,7 @@ class Test_ACM_Service:
# @mock_acm
def test_audited_account(self):
# ACM client for this test class
audit_info = set_mocked_aws_audit_info()
audit_info = self.set_mocked_audit_info()
acm = ACM(audit_info)
assert acm.audited_account == AWS_ACCOUNT_NUMBER
@@ -136,7 +171,7 @@ class Test_ACM_Service:
# )
# ACM client for this test class
audit_info = set_mocked_aws_audit_info()
audit_info = self.set_mocked_audit_info()
acm = ACM(audit_info)
assert len(acm.certificates) == 1
assert acm.certificates[0].arn == certificate_arn
@@ -144,7 +179,7 @@ class Test_ACM_Service:
assert acm.certificates[0].type == certificate_type
assert acm.certificates[0].expiration_days == 365
assert acm.certificates[0].transparency_logging is False
assert acm.certificates[0].region == AWS_REGION_US_EAST_1
assert acm.certificates[0].region == AWS_REGION
# Test ACM List Tags
# @mock_acm
@@ -157,7 +192,7 @@ class Test_ACM_Service:
# )
# ACM client for this test class
audit_info = set_mocked_aws_audit_info()
audit_info = self.set_mocked_audit_info()
acm = ACM(audit_info)
assert len(acm.certificates) == 1
assert acm.certificates[0].tags == [

View File

@@ -1,26 +1,55 @@
from unittest import mock
from boto3 import client
from boto3 import client, session
from moto import mock_apigateway, mock_iam, mock_lambda
from moto.core import DEFAULT_ACCOUNT_ID as ACCOUNT_ID
from tests.providers.aws.audit_info_utils import (
AWS_ACCOUNT_NUMBER,
AWS_REGION_EU_WEST_1,
AWS_REGION_US_EAST_1,
set_mocked_aws_audit_info,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
AWS_REGION = "us-east-1"
AWS_ACCOUNT_NUMBER = "123456789012"
class Test_apigateway_restapi_authorizers_enabled:
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=["us-east-1", "eu-west-1"],
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
@mock_apigateway
def test_apigateway_no_rest_apis(self):
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -44,8 +73,8 @@ class Test_apigateway_restapi_authorizers_enabled:
@mock_lambda
def test_apigateway_one_rest_api_with_lambda_authorizer(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
lambda_client = client("lambda", region_name=AWS_REGION_US_EAST_1)
apigateway_client = client("apigateway", region_name=AWS_REGION)
lambda_client = client("lambda", region_name=AWS_REGION)
iam_client = client("iam")
# Create APIGateway Rest API
role_arn = iam_client.create_role(
@@ -68,15 +97,13 @@ class Test_apigateway_restapi_authorizers_enabled:
name="test",
restApiId=rest_api["id"],
type="TOKEN",
authorizerUri=f"arn:aws:apigateway:{apigateway_client.meta.region_name}:lambda:path/2015-03-31/functions/arn:aws:lambda:{apigateway_client.meta.region_name}:{AWS_ACCOUNT_NUMBER}:function:{authorizer['FunctionName']}/invocations",
authorizerUri=f"arn:aws:apigateway:{apigateway_client.meta.region_name}:lambda:path/2015-03-31/functions/arn:aws:lambda:{apigateway_client.meta.region_name}:{ACCOUNT_ID}:function:{authorizer['FunctionName']}/invocations",
)
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -97,20 +124,20 @@ class Test_apigateway_restapi_authorizers_enabled:
assert len(result) == 1
assert (
result[0].status_extended
== f"API Gateway test-rest-api ID {rest_api['id']} has an authorizer configured at api level"
== f"API Gateway test-rest-api ID {rest_api['id']} has an authorizer configured."
)
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].region == AWS_REGION
assert result[0].resource_tags == [{}]
@mock_apigateway
def test_apigateway_one_rest_api_without_lambda_authorizer(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
apigateway_client = client("apigateway", region_name=AWS_REGION)
# Create APIGateway Rest API
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
@@ -119,9 +146,7 @@ class Test_apigateway_restapi_authorizers_enabled:
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -142,342 +167,12 @@ class Test_apigateway_restapi_authorizers_enabled:
assert len(result) == 1
assert (
result[0].status_extended
== f"API Gateway test-rest-api ID {rest_api['id']} does not have an authorizer configured at api level."
== f"API Gateway test-rest-api ID {rest_api['id']} does not have an authorizer configured."
)
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [{}]
@mock_apigateway
@mock_iam
@mock_lambda
def test_apigateway_one_rest_api_without_api_or_methods_authorizer(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
)
default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
"items"
][0]["id"]
api_resource = apigateway_client.create_resource(
restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
)
apigateway_client.put_method(
restApiId=rest_api["id"],
resourceId=api_resource["id"],
httpMethod="GET",
authorizationType="NONE",
)
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=current_audit_info,
), mock.patch(
"prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
new=APIGateway(current_audit_info),
):
# Test Check
from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
apigateway_restapi_authorizers_enabled,
)
check = apigateway_restapi_authorizers_enabled()
result = check.execute()
assert result[0].status == "FAIL"
assert len(result) == 1
assert (
result[0].status_extended
== f"API Gateway test-rest-api ID {rest_api['id']} does not have authorizers at api level and the following paths and methods are unauthorized: /test -> GET."
)
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [{}]
@mock_apigateway
@mock_iam
@mock_lambda
def test_apigateway_one_rest_api_without_api_auth_but_one_method_auth(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
)
default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
"items"
][0]["id"]
api_resource = apigateway_client.create_resource(
restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
)
apigateway_client.put_method(
restApiId=rest_api["id"],
resourceId=api_resource["id"],
httpMethod="GET",
authorizationType="AWS_IAM",
)
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=current_audit_info,
), mock.patch(
"prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
new=APIGateway(current_audit_info),
):
# Test Check
from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
apigateway_restapi_authorizers_enabled,
)
check = apigateway_restapi_authorizers_enabled()
result = check.execute()
assert result[0].status == "PASS"
assert len(result) == 1
assert (
result[0].status_extended
== f"API Gateway test-rest-api ID {rest_api['id']} has all methods authorized"
)
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [{}]
@mock_apigateway
@mock_iam
@mock_lambda
def test_apigateway_one_rest_api_without_api_auth_but_methods_auth_and_not(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
)
default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
"items"
][0]["id"]
api_resource = apigateway_client.create_resource(
restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
)
apigateway_client.put_method(
restApiId=rest_api["id"],
resourceId=api_resource["id"],
httpMethod="POST",
authorizationType="AWS_IAM",
)
apigateway_client.put_method(
restApiId=rest_api["id"],
resourceId=api_resource["id"],
httpMethod="GET",
authorizationType="NONE",
)
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=current_audit_info,
), mock.patch(
"prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
new=APIGateway(current_audit_info),
):
# Test Check
from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
apigateway_restapi_authorizers_enabled,
)
check = apigateway_restapi_authorizers_enabled()
result = check.execute()
assert result[0].status == "FAIL"
assert len(result) == 1
assert (
result[0].status_extended
== f"API Gateway test-rest-api ID {rest_api['id']} does not have authorizers at api level and the following paths and methods are unauthorized: /test -> GET."
)
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [{}]
@mock_apigateway
@mock_iam
@mock_lambda
def test_apigateway_one_rest_api_without_api_auth_but_methods_not_auth_and_auth(
self,
):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
)
default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
"items"
][0]["id"]
api_resource = apigateway_client.create_resource(
restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
)
apigateway_client.put_method(
restApiId=rest_api["id"],
resourceId=api_resource["id"],
httpMethod="GET",
authorizationType="NONE",
)
apigateway_client.put_method(
restApiId=rest_api["id"],
resourceId=api_resource["id"],
httpMethod="POST",
authorizationType="AWS_IAM",
)
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=current_audit_info,
), mock.patch(
"prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
new=APIGateway(current_audit_info),
):
# Test Check
from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
apigateway_restapi_authorizers_enabled,
)
check = apigateway_restapi_authorizers_enabled()
result = check.execute()
assert result[0].status == "FAIL"
assert len(result) == 1
assert (
result[0].status_extended
== f"API Gateway test-rest-api ID {rest_api['id']} does not have authorizers at api level and the following paths and methods are unauthorized: /test -> GET."
)
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [{}]
@mock_apigateway
@mock_iam
@mock_lambda
def test_apigateway_one_rest_api_without_authorizers_with_various_resources_without_endpoints(
self,
):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
)
default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
"items"
][0]["id"]
apigateway_client.create_resource(
restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
)
apigateway_client.create_resource(
restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test2"
)
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=current_audit_info,
), mock.patch(
"prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
new=APIGateway(current_audit_info),
):
# Test Check
from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
apigateway_restapi_authorizers_enabled,
)
check = apigateway_restapi_authorizers_enabled()
result = check.execute()
assert result[0].status == "FAIL"
assert len(result) == 1
assert (
result[0].status_extended
== f"API Gateway test-rest-api ID {rest_api['id']} does not have an authorizer configured at api level."
)
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].region == AWS_REGION
assert result[0].resource_tags == [{}]


@@ -1,21 +1,52 @@
from unittest import mock
from boto3 import client
from boto3 import client, session
from moto import mock_apigateway
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.aws.services.apigateway.apigateway_service import Stage
from tests.providers.aws.audit_info_utils import (
AWS_REGION_EU_WEST_1,
AWS_REGION_US_EAST_1,
set_mocked_aws_audit_info,
)
from prowler.providers.common.models import Audit_Metadata
AWS_REGION = "us-east-1"
AWS_ACCOUNT_NUMBER = "123456789012"
class Test_apigateway_restapi_client_certificate_enabled:
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=["us-east-1", "eu-west-1"],
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
@mock_apigateway
def test_apigateway_no_stages(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
apigateway_client = client("apigateway", region_name=AWS_REGION)
# Create APIGateway Rest API
apigateway_client.create_rest_api(
name="test-rest-api",
@@ -24,9 +55,7 @@ class Test_apigateway_restapi_client_certificate_enabled:
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -48,7 +77,7 @@ class Test_apigateway_restapi_client_certificate_enabled:
@mock_apigateway
def test_apigateway_one_stage_without_certificate(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
apigateway_client = client("apigateway", region_name=AWS_REGION)
# Create APIGateway Deployment Stage
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
@@ -84,9 +113,7 @@ class Test_apigateway_restapi_client_certificate_enabled:
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -112,15 +139,15 @@ class Test_apigateway_restapi_client_certificate_enabled:
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}/stages/test"
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/{rest_api['id']}/stages/test"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].region == AWS_REGION
assert result[0].resource_tags == [None]
@mock_apigateway
def test_apigateway_one_stage_with_certificate(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
apigateway_client = client("apigateway", region_name=AWS_REGION)
# Create APIGateway Deployment Stage
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
@@ -129,9 +156,7 @@ class Test_apigateway_restapi_client_certificate_enabled:
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -148,7 +173,7 @@ class Test_apigateway_restapi_client_certificate_enabled:
service_client.rest_apis[0].stages.append(
Stage(
name="test",
arn=f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/test-rest-api/stages/test",
arn=f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/test-rest-api/stages/test",
logging=True,
client_certificate=True,
waf=True,
@@ -167,7 +192,7 @@ class Test_apigateway_restapi_client_certificate_enabled:
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/test-rest-api/stages/test"
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/test-rest-api/stages/test"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].region == AWS_REGION
assert result[0].resource_tags == []


@@ -1,25 +1,54 @@
from unittest import mock
from boto3 import client
from boto3 import client, session
from moto import mock_apigateway
from tests.providers.aws.audit_info_utils import (
AWS_REGION_EU_WEST_1,
AWS_REGION_US_EAST_1,
set_mocked_aws_audit_info,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
AWS_REGION = "us-east-1"
AWS_ACCOUNT_NUMBER = "123456789012"
class Test_apigateway_restapi_public:
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=["us-east-1", "eu-west-1"],
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
@mock_apigateway
def test_apigateway_no_rest_apis(self):
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -41,7 +70,7 @@ class Test_apigateway_restapi_public:
@mock_apigateway
def test_apigateway_one_private_rest_api(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
apigateway_client = client("apigateway", region_name=AWS_REGION)
# Create APIGateway Deployment Stage
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
@@ -55,9 +84,7 @@ class Test_apigateway_restapi_public:
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -83,15 +110,15 @@ class Test_apigateway_restapi_public:
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].region == AWS_REGION
assert result[0].resource_tags == [{}]
@mock_apigateway
def test_apigateway_one_public_rest_api(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
apigateway_client = client("apigateway", region_name=AWS_REGION)
# Create APIGateway Deployment Stage
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
@@ -105,9 +132,7 @@ class Test_apigateway_restapi_public:
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -133,7 +158,7 @@ class Test_apigateway_restapi_public:
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].region == AWS_REGION
assert result[0].resource_tags == [{}]


@@ -1,27 +1,56 @@
from unittest import mock
from boto3 import client
from boto3 import client, session
from moto import mock_apigateway
from tests.providers.aws.audit_info_utils import (
AWS_REGION_EU_WEST_1,
AWS_REGION_US_EAST_1,
set_mocked_aws_audit_info,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
AWS_REGION = "us-east-1"
AWS_ACCOUNT_NUMBER = "123456789012"
API_GW_NAME = "test-rest-api"
class Test_apigateway_restapi_public_with_authorizer:
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=["us-east-1", "eu-west-1"],
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
@mock_apigateway
def test_apigateway_no_rest_apis(self):
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -43,7 +72,7 @@ class Test_apigateway_restapi_public_with_authorizer:
@mock_apigateway
def test_apigateway_one_public_rest_api_without_authorizer(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
apigateway_client = client("apigateway", region_name=AWS_REGION)
# Create APIGateway Deployment Stage
rest_api = apigateway_client.create_rest_api(
name=API_GW_NAME,
@@ -57,9 +86,7 @@ class Test_apigateway_restapi_public_with_authorizer:
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -85,15 +112,15 @@ class Test_apigateway_restapi_public_with_authorizer:
assert result[0].resource_id == API_GW_NAME
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].region == AWS_REGION
assert result[0].resource_tags == [{}]
@mock_apigateway
def test_apigateway_one_public_rest_api_with_authorizer(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
apigateway_client = client("apigateway", region_name=AWS_REGION)
# Create APIGateway Deployment Stage
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
@@ -110,9 +137,7 @@ class Test_apigateway_restapi_public_with_authorizer:
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
current_audit_info = self.set_mocked_audit_info()
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -138,7 +163,7 @@ class Test_apigateway_restapi_public_with_authorizer:
assert result[0].resource_id == API_GW_NAME
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].region == AWS_REGION
assert result[0].resource_tags == [{}]

Some files were not shown because too many files have changed in this diff.