Mirror of https://github.com/prowler-cloud/prowler.git
Synced 2026-03-26 05:48:03 +00:00

Compare commits: 14 commits (3.12.0 ... fix-audit-)
| Author | SHA1 | Date |
|---|---|---|
|  | 70fde82284 |  |
|  | 620de6f68e |  |
|  | 20495d2b1f |  |
|  | 2db9c359a0 |  |
|  | 1584ac3dec |  |
|  | 5cf72e5a27 |  |
|  | de01f45f6e |  |
|  | be24317733 |  |
|  | e7b2b344e8 |  |
|  | 34c01d2ee4 |  |
|  | 3a0dcba279 |  |
|  | dda8c0264c |  |
|  | f1cea0c3cd |  |
|  | f7766fa4de |  |
4 .github/workflows/codeql.yml (vendored)
@@ -13,10 +13,10 @@ name: "CodeQL"

on:
  push:
    branches: [ "master", "prowler-4.0-dev" ]
    branches: [ "master", prowler-2, prowler-3.0-dev ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ "master", "prowler-4.0-dev" ]
    branches: [ "master" ]
  schedule:
    - cron: '00 12 * * *'
5 .github/workflows/pull-request.yml (vendored)
@@ -4,11 +4,9 @@ on:
  push:
    branches:
      - "master"
      - "prowler-4.0-dev"
  pull_request:
    branches:
      - "master"
      - "prowler-4.0-dev"
jobs:
  build:
    runs-on: ubuntu-latest
@@ -20,7 +18,7 @@ jobs:
      - uses: actions/checkout@v3
      - name: Test if changes are in not ignored paths
        id: are-non-ignored-files-changed
        uses: tj-actions/changed-files@v41
        uses: tj-actions/changed-files@v39
        with:
          files: ./**
          files_ignore: |
@@ -28,7 +26,6 @@ jobs:
            README.md
            docs/**
            permissions/**
            mkdocs.yml
      - name: Install poetry
        if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
        run: |
@@ -136,16 +136,26 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo

=== "AWS CloudShell"

    After the migration of AWS CloudShell from Amazon Linux 2 to Amazon Linux 2023 [[1]](https://aws.amazon.com/about-aws/whats-new/2023/12/aws-cloudshell-migrated-al2023/) [[2]](https://docs.aws.amazon.com/cloudshell/latest/userguide/cloudshell-AL2023-migration.html), there is no longer a need to manually compile Python 3.9 as it is already included in AL2023. Prowler can thus be easily installed following the Generic method of installation via pip. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:

    Prowler can be easily executed in AWS CloudShell, but it has some prerequisites to be able to do so. AWS CloudShell is a container running `Amazon Linux release 2 (Karoo)` that comes with Python 3.7; since Prowler requires Python >= 3.9, we first need to install a newer version of Python. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:

    _Requirements_:

    * Open AWS CloudShell `bash`.
    * First install all dependencies and then Python; in this case we need to compile it because there is no package available at the time this document is written:
    ```
    sudo yum -y install gcc openssl-devel bzip2-devel libffi-devel
    wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tgz
    tar zxf Python-3.9.16.tgz
    cd Python-3.9.16/
    ./configure --enable-optimizations
    sudo make altinstall
    python3.9 --version
    cd
    ```

    _Commands_:

    * Once Python 3.9 is available we can install Prowler from pip:
    ```
    pip install prowler
    pip3.9 install prowler
    prowler -v
    ```
@@ -32,14 +32,3 @@ Prowler's AWS Provider uses the Boto3 [Standard](https://boto3.amazonaws.com/v1/

- Retry attempts on nondescriptive, transient error codes. Specifically, these HTTP status codes: 500, 502, 503, 504.

- Any retry attempt will include an exponential backoff by a base factor of 2 for a maximum backoff time of 20 seconds.

## Notes for validating retry attempts

If you are making changes to Prowler and want to validate whether requests are being retried or given up on, you can take the following approach:

* Run Prowler with `--log-level DEBUG` and `--log-file debuglogs.txt`
* Search for retry attempts using `grep -i 'Retry needed' debuglogs.txt`

This is based on the [AWS documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html#checking-retry-attempts-in-your-client-logs), which states that if a retry is performed, you will see a message starting with "Retry needed".

You can determine the total number of calls made using `grep -i 'Sending http request' debuglogs.txt | wc -l`
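The two grep searches can be combined into a quick summary. In the sketch below, the sample log lines are invented for illustration; only the two grep patterns come from the steps above:

```shell
# Summarize retry behaviour from a debug log produced with
# --log-level DEBUG --log-file debuglogs.txt (sample contents fabricated here)
cat > debuglogs.txt <<'EOF'
botocore DEBUG Sending http request: GET /accounts
botocore DEBUG Retry needed, retrying request after delay of: 1.2
botocore DEBUG Sending http request: GET /accounts
EOF

# -c counts matching lines; -i matches case-insensitively, as above
retries=$(grep -ci 'Retry needed' debuglogs.txt)
total=$(grep -ci 'Sending http request' debuglogs.txt)
echo "retried ${retries} of ${total} requests"
```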
@@ -1,26 +1,26 @@
# AWS CloudShell

## Installation

After the migration of AWS CloudShell from Amazon Linux 2 to Amazon Linux 2023 [[1]](https://aws.amazon.com/about-aws/whats-new/2023/12/aws-cloudshell-migrated-al2023/) [[2]](https://docs.aws.amazon.com/cloudshell/latest/userguide/cloudshell-AL2023-migration.html), there is no longer a need to manually compile Python 3.9 as it is already included in AL2023. Prowler can thus be easily installed following the Generic method of installation via pip. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
```shell
pip install prowler
```

Prowler can be easily executed in AWS CloudShell, but it has some prerequisites to be able to do so. AWS CloudShell is a container running `Amazon Linux release 2 (Karoo)` that comes with Python 3.7; since Prowler requires Python >= 3.9, we first need to install a newer version of Python. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:

- First install all dependencies and then Python; in this case we need to compile it because there is no package available at the time this document is written:
```
sudo yum -y install gcc openssl-devel bzip2-devel libffi-devel
wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tgz
tar zxf Python-3.9.16.tgz
cd Python-3.9.16/
./configure --enable-optimizations
sudo make altinstall
python3.9 --version
cd
```
- Once Python 3.9 is available we can install Prowler from pip:
```
pip3.9 install prowler
```
- Now enjoy Prowler:
```
prowler -v
prowler
```

## Download Files

To download the results from AWS CloudShell, select Actions -> Download File and add the full path of each file. For the CSV file it will be something like `/home/cloudshell-user/output/prowler-output-123456789012-20221220191331.csv`

## Clone Prowler from Github

The limited storage that AWS CloudShell provides for the user's home directory causes issues when installing the poetry dependencies to run Prowler from GitHub. Here is a workaround:
```shell
git clone https://github.com/prowler-cloud/prowler.git
cd prowler
pip install poetry
mkdir /tmp/pypoetry
poetry config cache-dir /tmp/pypoetry
poetry shell
poetry install
python prowler.py -v
```
- To download the results from AWS CloudShell, select Actions -> Download File and add the full path of each file. For the CSV file it will be something like `/home/cloudshell-user/output/prowler-output-123456789012-20221220191331.csv`
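The two installation paths differ only in whether Python >= 3.9 is already present. A small sketch of that decision, assuming `python3` is on the PATH (as it is on both AL2 and AL2023):

```shell
# Decide between "pip install" (AL2023) and compiling Python 3.9 (AL2)
# based on the interpreter version already available.
if python3 -c 'import sys; raise SystemExit(0 if sys.version_info >= (3, 9) else 1)'; then
    echo "python3 >= 3.9: install directly with pip install prowler"
else
    echo "python3 < 3.9: compile Python 3.9 first as shown above"
fi
```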
@@ -23,15 +23,6 @@ prowler aws -R arn:aws:iam::<account_id>:role/<role_name>
prowler aws -T/--session-duration <seconds> -I/--external-id <external_id> -R arn:aws:iam::<account_id>:role/<role_name>
```

## Custom Role Session Name

Prowler can use your custom Role Session name with:
```console
prowler aws --role-session-name <role_session_name>
```

> It defaults to `ProwlerAssessmentSession`

## STS Endpoint Region

If you are using Prowler in AWS regions that are not enabled by default, you need to use the argument `--sts-endpoint-region` to point the AWS STS API calls `assume-role` and `get-caller-identity` to the non-default region, e.g.: `prowler aws --sts-endpoint-region eu-south-2`.
@@ -1,187 +0,0 @@
# Parallel Execution

The strategy used here will be to execute Prowler once per service. You can modify this approach as per your requirements.

This can help for really large accounts, but please be aware of AWS API rate limits:

1. **Service-Specific Limits**: Each AWS service has its own rate limits. For instance, Amazon EC2 might have different rate limits for launching instances versus making API calls to describe instances.
2. **API Rate Limits**: Most of the rate limits in AWS are applied at the API level. Each API call to an AWS service counts towards the rate limit for that service.
3. **Throttling Responses**: When you exceed the rate limit for a service, AWS responds with a throttling error. In AWS SDKs, these are typically represented as `ThrottlingException` or `RateLimitExceeded` errors.

For information on Prowler's retrier configuration please refer to this [page](https://docs.prowler.cloud/en/latest/tutorials/aws/boto3-configuration/).

> Note: You might need to increase the `--aws-retries-max-attempts` parameter from the default value of 3. The retrier follows an exponential backoff strategy.

## Linux

Generate a list of services that Prowler supports, and populate this info into a file:

```bash
prowler aws --list-services | awk -F"- " '{print $2}' | sed '/^$/d' > services
```

Make any modifications for services you would like to skip scanning by modifying this file.

Then create a new shell script file `parallel-prowler.sh` and add the following contents. Update the `profile` variable to the AWS CLI profile you want to run Prowler with.

```bash
#!/bin/bash

# Change these variables as needed
profile="your_profile"
account_id=$(aws sts get-caller-identity --profile "${profile}" --query 'Account' --output text)

echo "Executing in account: ${account_id}"

# Maximum number of concurrent processes
MAX_PROCESSES=5

# Loop through the services
while read service; do
    echo "$(date '+%Y-%m-%d %H:%M:%S'): Starting job for service: ${service}"

    # Run the command in the background
    (prowler -p "$profile" -s "$service" -F "${account_id}-${service}" --ignore-unused-services --only-logs; echo "$(date '+%Y-%m-%d %H:%M:%S') - ${service} has completed") &

    # Check if we have reached the maximum number of processes
    while [ $(jobs -r | wc -l) -ge ${MAX_PROCESSES} ]; do
        # Wait for a second before checking again
        sleep 1
    done
done < ./services

# Wait for all background processes to finish
wait
echo "All jobs completed"
```

Output will be stored in the `output/` folder that is in the same directory from which you executed the script.

## Windows

Generate a list of services that Prowler supports, and populate this info into a file:

```powershell
prowler aws --list-services | ForEach-Object {
    # Capture lines that are likely service names
    if ($_ -match '^\- \w+$') {
        $_.Trim().Substring(2)
    }
} | Where-Object {
    # Filter out empty or null lines
    $_ -ne $null -and $_ -ne ''
} | Set-Content -Path "services"
```

Make any modifications for services you would like to skip scanning by modifying this file.

Then create a new PowerShell script file `parallel-prowler.ps1` and add the following contents. Update the `$profile` variable to the AWS CLI profile you want to run Prowler with.

Change any parameters you would like when calling Prowler in the `Start-Job -ScriptBlock` section. Note that you need to keep the `--only-logs` parameter; otherwise an encoding issue occurs when trying to render the progress bar and Prowler won't execute successfully.

```powershell
$profile = "your_profile"
$account_id = Invoke-Expression -Command "aws sts get-caller-identity --profile $profile --query 'Account' --output text"

Write-Host "Executing Prowler in $account_id"

# Maximum number of concurrent jobs
$MAX_PROCESSES = 5

# Read services from a file
$services = Get-Content -Path "services"

# Array to keep track of started jobs
$jobs = @()

foreach ($service in $services) {
    # Start the command as a job
    $job = Start-Job -ScriptBlock {
        prowler -p ${using:profile} -s ${using:service} -F "${using:account_id}-${using:service}" --ignore-unused-services --only-logs
        $endTimestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
        Write-Output "${endTimestamp} - $using:service has completed"
    }
    $jobs += $job
    Write-Host "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') - Starting job for service: $service"

    # Check if we have reached the maximum number of jobs
    while (($jobs | Where-Object { $_.State -eq 'Running' }).Count -ge $MAX_PROCESSES) {
        Start-Sleep -Seconds 1
        # Check for any completed jobs and receive their output
        $completedJobs = $jobs | Where-Object { $_.State -eq 'Completed' }
        foreach ($completedJob in $completedJobs) {
            Receive-Job -Job $completedJob -Keep | ForEach-Object { Write-Host $_ }
            $jobs = $jobs | Where-Object { $_.Id -ne $completedJob.Id }
            Remove-Job -Job $completedJob
        }
    }
}

# Check for any remaining completed jobs
$remainingCompletedJobs = $jobs | Where-Object { $_.State -eq 'Completed' }
foreach ($remainingJob in $remainingCompletedJobs) {
    Receive-Job -Job $remainingJob -Keep | ForEach-Object { Write-Host $_ }
    Remove-Job -Job $remainingJob
}

Write-Host "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') - All jobs completed"
```

Output will be stored in `C:\Users\YOUR-USER\Documents\output\`

## Combining the output files

Guidance is provided for the CSV file format. From the output directory, execute either the following Bash or PowerShell script. The script will collect the output from the CSV files, only include the header from the first file, and then write the result as CombinedCSV.csv in the current working directory.

There is no logic implemented in terms of which CSV files it will combine. If you have additional CSV files from other actions, such as running a quick inventory, you will need to move them out of the current (or any nested) directory, or move the output you want to combine into its own folder and run the script from there.

```bash
#!/bin/bash

# Initialize a variable to indicate the first file
firstFile=true

# Find all CSV files and loop through them
find . -name "*.csv" -print0 | while IFS= read -r -d '' file; do
    if [ "$firstFile" = true ]; then
        # For the first file, keep the header
        cat "$file" > CombinedCSV.csv
        firstFile=false
    else
        # For subsequent files, skip the header
        tail -n +2 "$file" >> CombinedCSV.csv
    fi
done
```

```powershell
# Get all CSV files from current directory and its subdirectories
$csvFiles = Get-ChildItem -Recurse -Filter "*.csv"

# Initialize a variable to track if it's the first file
$firstFile = $true

# Loop through each CSV file
foreach ($file in $csvFiles) {
    if ($firstFile) {
        # For the first file, keep the header and change the flag
        # (Prowler v3 CSV files are semicolon-delimited)
        $combinedCsv = Import-Csv -Path $file.FullName -Delimiter ';'
        $firstFile = $false
    } else {
        # Import-Csv already strips the header row, so append the records as-is
        $combinedCsv += Import-Csv -Path $file.FullName -Delimiter ';'
    }
}

# Export the combined data to a new CSV file
$combinedCsv | Export-Csv -Path "CombinedCSV.csv" -NoTypeInformation -Delimiter ';'
```

## TODO: Additional Improvements

Some services need to instantiate another service to perform a check. For instance, `cloudwatch` will instantiate Prowler's `iam` service to perform the `cloudwatch_cross_account_sharing_disabled` check. When the `iam` service is instantiated, it will run its `__init__` function and pull all the information required for that service. This provides an opportunity to improve the above script: by grouping dependent services together, the `iam` service (or any other cross-service reference) isn't repeatedly instantiated. A complete mapping between these services still needs to be further investigated, but these cross-references have been noted:

* inspector2 needs lambda and ec2
* cloudwatch needs iam
* dlm needs ec2
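The grouping idea can be sketched as below. This is a dry run built on the cross-references noted above; it assumes Prowler's `-s` flag accepts several service names at once, and `echo` only prints the commands instead of executing them:

```shell
#!/bin/bash
# Dry-run sketch: scan each group of dependent services in one Prowler
# invocation so shared services (e.g. iam) are instantiated once per group.
profile="your_profile"

# One group of dependent services per line, from the list above
groups="inspector2 lambda ec2
cloudwatch iam
dlm ec2"

printf '%s\n' "$groups" | while read -r group; do
    # Drop "echo" to actually execute; $group is intentionally unquoted so
    # each service name becomes its own argument to -s
    echo prowler aws -p "$profile" -s $group --ignore-unused-services --only-logs
done
```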
@@ -43,71 +43,46 @@ Hereunder is the structure for each of the supported report formats by Prowler:

### CSV

The CSV format has a set of common columns for all the providers, and then provider-specific columns.
The common columns are the following:
The following are the columns present in the CSV format:

- ASSESSMENT_START_TIME
- FINDING_UNIQUE_ID
- PROVIDER
- CHECK_ID
- CHECK_TITLE
- CHECK_TYPE
- STATUS
- STATUS_EXTENDED
- SERVICE_NAME
- SUBSERVICE_NAME
- SEVERITY
- RESOURCE_TYPE
- RESOURCE_DETAILS
- RESOURCE_TAGS
- DESCRIPTION
- RISK
- RELATED_URL
- REMEDIATION_RECOMMENDATION_TEXT
- REMEDIATION_RECOMMENDATION_URL
- REMEDIATION_RECOMMENDATION_CODE_NATIVEIAC
- REMEDIATION_RECOMMENDATION_CODE_TERRAFORM
- REMEDIATION_RECOMMENDATION_CODE_CLI
- REMEDIATION_RECOMMENDATION_CODE_OTHER
- COMPLIANCE
- CATEGORIES
- DEPENDS_ON
- RELATED_TO
- NOTES

And then the provider-specific columns:

#### AWS

- PROFILE
- ACCOUNT_ID
- ACCOUNT_NAME
- ACCOUNT_EMAIL
- ACCOUNT_ARN
- ACCOUNT_ORG
- ACCOUNT_TAGS
- REGION
- RESOURCE_ID
- RESOURCE_ARN

#### AZURE

- TENANT_DOMAIN
- SUBSCRIPTION
- RESOURCE_ID
- RESOURCE_NAME

#### GCP

- PROJECT_ID
- LOCATION
- RESOURCE_ID
- RESOURCE_NAME

- ACCOUNT_NAME
- ACCOUNT_EMAIL
- ACCOUNT_ARN
- ACCOUNT_ORG
- ACCOUNT_TAGS
- REGION
- CHECK_ID
- CHECK_TITLE
- CHECK_TYPE
- STATUS
- STATUS_EXTENDED
- SERVICE_NAME
- SUBSERVICE_NAME
- SEVERITY
- RESOURCE_ID
- RESOURCE_ARN
- RESOURCE_TYPE
- RESOURCE_DETAILS
- RESOURCE_TAGS
- DESCRIPTION
- COMPLIANCE
- RISK
- RELATED_URL
- REMEDIATION_RECOMMENDATION_TEXT
- REMEDIATION_RECOMMENDATION_URL
- REMEDIATION_RECOMMENDATION_CODE_NATIVEIAC
- REMEDIATION_RECOMMENDATION_CODE_TERRAFORM
- REMEDIATION_RECOMMENDATION_CODE_CLI
- REMEDIATION_RECOMMENDATION_CODE_OTHER
- CATEGORIES
- DEPENDS_ON
- RELATED_TO
- NOTES

> Since Prowler v3 the CSV column delimiter is the semicolon (`;`)
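Because of the semicolon delimiter, text tools need the field separator set explicitly. A small sketch with an invented two-row sample (the column names are taken from the list above, the data rows are fabricated):

```shell
# Fabricated sample using three of the CSV columns listed above
cat > sample.csv <<'EOF'
CHECK_ID;STATUS;SEVERITY
iam_root_mfa_enabled;FAIL;high
s3_bucket_public_access;PASS;medium
EOF

# Print the severity of every FAIL finding, using ";" as the field separator
awk -F';' 'NR > 1 && $2 == "FAIL" { print $3 }' sample.csv
```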
### JSON
@@ -41,7 +41,6 @@ nav:
    - Custom Metadata: tutorials/custom-checks-metadata.md
    - Ignore Unused Services: tutorials/ignore-unused-services.md
    - Pentesting: tutorials/pentesting.md
    - Parallel Execution: tutorials/parallel-execution.md
  - Developer Guide: developer-guide/introduction.md
  - AWS:
    - Authentication: tutorials/aws/authentication.md
246 poetry.lock (generated)
@@ -295,18 +295,18 @@ files = [

[[package]]
name = "bandit"
version = "1.7.6"
version = "1.7.5"
description = "Security oriented static analyser for python code."
optional = false
python-versions = ">=3.8"
python-versions = ">=3.7"
files = [
    {file = "bandit-1.7.6-py3-none-any.whl", hash = "sha256:36da17c67fc87579a5d20c323c8d0b1643a890a2b93f00b3d1229966624694ff"},
    {file = "bandit-1.7.6.tar.gz", hash = "sha256:72ce7bc9741374d96fb2f1c9a8960829885f1243ffde743de70a19cee353e8f3"},
    {file = "bandit-1.7.5-py3-none-any.whl", hash = "sha256:75665181dc1e0096369112541a056c59d1c5f66f9bb74a8d686c3c362b83f549"},
    {file = "bandit-1.7.5.tar.gz", hash = "sha256:bdfc739baa03b880c2d15d0431b31c658ffc348e907fe197e54e0389dd59e11e"},
]

[package.dependencies]
colorama = {version = ">=0.3.9", markers = "platform_system == \"Windows\""}
GitPython = ">=3.1.30"
GitPython = ">=1.0.1"
PyYAML = ">=5.3.1"
rich = "*"
stevedore = ">=1.20.0"
@@ -649,63 +649,63 @@ files = [

[[package]]
name = "coverage"
version = "7.4.0"
version = "7.3.2"
description = "Code coverage measurement for Python"
optional = false
python-versions = ">=3.8"
files = [
    {file = "coverage-7.4.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:36b0ea8ab20d6a7564e89cb6135920bc9188fb5f1f7152e94e8300b7b189441a"},
    {file = "coverage-7.4.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0676cd0ba581e514b7f726495ea75aba3eb20899d824636c6f59b0ed2f88c471"},
    {file = "coverage-7.4.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d0ca5c71a5a1765a0f8f88022c52b6b8be740e512980362f7fdbb03725a0d6b9"},
    {file = "coverage-7.4.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a7c97726520f784239f6c62506bc70e48d01ae71e9da128259d61ca5e9788516"},
    {file = "coverage-7.4.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:815ac2d0f3398a14286dc2cea223a6f338109f9ecf39a71160cd1628786bc6f5"},
    {file = "coverage-7.4.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:80b5ee39b7f0131ebec7968baa9b2309eddb35b8403d1869e08f024efd883566"},
    {file = "coverage-7.4.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:5b2ccb7548a0b65974860a78c9ffe1173cfb5877460e5a229238d985565574ae"},
    {file = "coverage-7.4.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:995ea5c48c4ebfd898eacb098164b3cc826ba273b3049e4a889658548e321b43"},
    {file = "coverage-7.4.0-cp310-cp310-win32.whl", hash = "sha256:79287fd95585ed36e83182794a57a46aeae0b64ca53929d1176db56aacc83451"},
    {file = "coverage-7.4.0-cp310-cp310-win_amd64.whl", hash = "sha256:5b14b4f8760006bfdb6e08667af7bc2d8d9bfdb648351915315ea17645347137"},
    {file = "coverage-7.4.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:04387a4a6ecb330c1878907ce0dc04078ea72a869263e53c72a1ba5bbdf380ca"},
    {file = "coverage-7.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ea81d8f9691bb53f4fb4db603203029643caffc82bf998ab5b59ca05560f4c06"},
    {file = "coverage-7.4.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:74775198b702868ec2d058cb92720a3c5a9177296f75bd97317c787daf711505"},
    {file = "coverage-7.4.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:76f03940f9973bfaee8cfba70ac991825611b9aac047e5c80d499a44079ec0bc"},
    {file = "coverage-7.4.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:485e9f897cf4856a65a57c7f6ea3dc0d4e6c076c87311d4bc003f82cfe199d25"},
    {file = "coverage-7.4.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:6ae8c9d301207e6856865867d762a4b6fd379c714fcc0607a84b92ee63feff70"},
    {file = "coverage-7.4.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:bf477c355274a72435ceb140dc42de0dc1e1e0bf6e97195be30487d8eaaf1a09"},
    {file = "coverage-7.4.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:83c2dda2666fe32332f8e87481eed056c8b4d163fe18ecc690b02802d36a4d26"},
    {file = "coverage-7.4.0-cp311-cp311-win32.whl", hash = "sha256:697d1317e5290a313ef0d369650cfee1a114abb6021fa239ca12b4849ebbd614"},
    {file = "coverage-7.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:26776ff6c711d9d835557ee453082025d871e30b3fd6c27fcef14733f67f0590"},
    {file = "coverage-7.4.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:13eaf476ec3e883fe3e5fe3707caeb88268a06284484a3daf8250259ef1ba143"},
    {file = "coverage-7.4.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:846f52f46e212affb5bcf131c952fb4075b55aae6b61adc9856222df89cbe3e2"},
    {file = "coverage-7.4.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:26f66da8695719ccf90e794ed567a1549bb2644a706b41e9f6eae6816b398c4a"},
    {file = "coverage-7.4.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:164fdcc3246c69a6526a59b744b62e303039a81e42cfbbdc171c91a8cc2f9446"},
    {file = "coverage-7.4.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:316543f71025a6565677d84bc4df2114e9b6a615aa39fb165d697dba06a54af9"},
    {file = "coverage-7.4.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:bb1de682da0b824411e00a0d4da5a784ec6496b6850fdf8c865c1d68c0e318dd"},
    {file = "coverage-7.4.0-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:0e8d06778e8fbffccfe96331a3946237f87b1e1d359d7fbe8b06b96c95a5407a"},
    {file = "coverage-7.4.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:a56de34db7b7ff77056a37aedded01b2b98b508227d2d0979d373a9b5d353daa"},
    {file = "coverage-7.4.0-cp312-cp312-win32.whl", hash = "sha256:51456e6fa099a8d9d91497202d9563a320513fcf59f33991b0661a4a6f2ad450"},
    {file = "coverage-7.4.0-cp312-cp312-win_amd64.whl", hash = "sha256:cd3c1e4cb2ff0083758f09be0f77402e1bdf704adb7f89108007300a6da587d0"},
    {file = "coverage-7.4.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:e9d1bf53c4c8de58d22e0e956a79a5b37f754ed1ffdbf1a260d9dcfa2d8a325e"},
    {file = "coverage-7.4.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:109f5985182b6b81fe33323ab4707011875198c41964f014579cf82cebf2bb85"},
    {file = "coverage-7.4.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3cc9d4bc55de8003663ec94c2f215d12d42ceea128da8f0f4036235a119c88ac"},
    {file = "coverage-7.4.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc6d65b21c219ec2072c1293c505cf36e4e913a3f936d80028993dd73c7906b1"},
    {file = "coverage-7.4.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5a10a4920def78bbfff4eff8a05c51be03e42f1c3735be42d851f199144897ba"},
    {file = "coverage-7.4.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:b8e99f06160602bc64da35158bb76c73522a4010f0649be44a4e167ff8555952"},
    {file = "coverage-7.4.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:7d360587e64d006402b7116623cebf9d48893329ef035278969fa3bbf75b697e"},
    {file = "coverage-7.4.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:29f3abe810930311c0b5d1a7140f6395369c3db1be68345638c33eec07535105"},
    {file = "coverage-7.4.0-cp38-cp38-win32.whl", hash = "sha256:5040148f4ec43644702e7b16ca864c5314ccb8ee0751ef617d49aa0e2d6bf4f2"},
    {file = "coverage-7.4.0-cp38-cp38-win_amd64.whl", hash = "sha256:9864463c1c2f9cb3b5db2cf1ff475eed2f0b4285c2aaf4d357b69959941aa555"},
    {file = "coverage-7.4.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:936d38794044b26c99d3dd004d8af0035ac535b92090f7f2bb5aa9c8e2f5cd42"},
    {file = "coverage-7.4.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:799c8f873794a08cdf216aa5d0531c6a3747793b70c53f70e98259720a6fe2d7"},
    {file = "coverage-7.4.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e7defbb9737274023e2d7af02cac77043c86ce88a907c58f42b580a97d5bcca9"},
    {file = "coverage-7.4.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a1526d265743fb49363974b7aa8d5899ff64ee07df47dd8d3e37dcc0818f09ed"},
    {file = "coverage-7.4.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bf635a52fc1ea401baf88843ae8708591aa4adff875e5c23220de43b1ccf575c"},
    {file = "coverage-7.4.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:756ded44f47f330666843b5781be126ab57bb57c22adbb07d83f6b519783b870"},
    {file = "coverage-7.4.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:0eb3c2f32dabe3a4aaf6441dde94f35687224dfd7eb2a7f47f3fd9428e421058"},
    {file = "coverage-7.4.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:bfd5db349d15c08311702611f3dccbef4b4e2ec148fcc636cf8739519b4a5c0f"},
    {file = "coverage-7.4.0-cp39-cp39-win32.whl", hash = "sha256:53d7d9158ee03956e0eadac38dfa1ec8068431ef8058fe6447043db1fb40d932"},
    {file = "coverage-7.4.0-cp39-cp39-win_amd64.whl", hash = "sha256:cfd2a8b6b0d8e66e944d47cdec2f47c48fef2ba2f2dff5a9a75757f64172857e"},
    {file = "coverage-7.4.0-pp38.pp39.pp310-none-any.whl", hash = "sha256:c530833afc4707fe48524a44844493f36d8727f04dcce91fb978c414a8556cc6"},
    {file = "coverage-7.4.0.tar.gz", hash = "sha256:707c0f58cb1712b8809ece32b68996ee1e609f71bd14615bd8f87a1293cb610e"},
    {file = "coverage-7.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d872145f3a3231a5f20fd48500274d7df222e291d90baa2026cc5152b7ce86bf"},
    {file = "coverage-7.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:310b3bb9c91ea66d59c53fa4989f57d2436e08f18fb2f421a1b0b6b8cc7fffda"},
    {file = "coverage-7.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f47d39359e2c3779c5331fc740cf4bce6d9d680a7b4b4ead97056a0ae07cb49a"},
    {file = "coverage-7.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aa72dbaf2c2068404b9870d93436e6d23addd8bbe9295f49cbca83f6e278179c"},
    {file = "coverage-7.3.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:beaa5c1b4777f03fc63dfd2a6bd820f73f036bfb10e925fce067b00a340d0f3f"},
    {file = "coverage-7.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:dbc1b46b92186cc8074fee9d9fbb97a9dd06c6cbbef391c2f59d80eabdf0faa6"},
    {file = "coverage-7.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:315a989e861031334d7bee1f9113c8770472db2ac484e5b8c3173428360a9148"},
    {file = "coverage-7.3.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:d1bc430677773397f64a5c88cb522ea43175ff16f8bfcc89d467d974cb2274f9"},
    {file = "coverage-7.3.2-cp310-cp310-win32.whl", hash = "sha256:a889ae02f43aa45032afe364c8ae84ad3c54828c2faa44f3bfcafecb5c96b02f"},
    {file = "coverage-7.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:c0ba320de3fb8c6ec16e0be17ee1d3d69adcda99406c43c0409cb5c41788a611"},
    {file = "coverage-7.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ac8c802fa29843a72d32ec56d0ca792ad15a302b28ca6203389afe21f8fa062c"},
    {file = "coverage-7.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:89a937174104339e3a3ffcf9f446c00e3a806c28b1841c63edb2b369310fd074"},
    {file = "coverage-7.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e267e9e2b574a176ddb983399dec325a80dbe161f1a32715c780b5d14b5f583a"},
    {file = "coverage-7.3.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2443cbda35df0d35dcfb9bf8f3c02c57c1d6111169e3c85fc1fcc05e0c9f39a3"},
    {file = "coverage-7.3.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4175e10cc8dda0265653e8714b3174430b07c1dca8957f4966cbd6c2b1b8065a"},
    {file = "coverage-7.3.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:0cbf38419fb1a347aaf63481c00f0bdc86889d9fbf3f25109cf96c26b403fda1"},
    {file = "coverage-7.3.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:5c913b556a116b8d5f6ef834038ba983834d887d82187c8f73dec21049abd65c"},
    {file = "coverage-7.3.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:1981f785239e4e39e6444c63a98da3a1db8e971cb9ceb50a945ba6296b43f312"},
    {file = "coverage-7.3.2-cp311-cp311-win32.whl", hash = "sha256:43668cabd5ca8258f5954f27a3aaf78757e6acf13c17604d89648ecc0cc66640"},
    {file = "coverage-7.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:e10c39c0452bf6e694511c901426d6b5ac005acc0f78ff265dbe36bf81f808a2"},
    {file = "coverage-7.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:4cbae1051ab791debecc4a5dcc4a1ff45fc27b91b9aee165c8a27514dd160836"},
    {file = "coverage-7.3.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:12d15ab5833a997716d76f2ac1e4b4d536814fc213c85ca72756c19e5a6b3d63"},
    {file = "coverage-7.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c7bba973ebee5e56fe9251300c00f1579652587a9f4a5ed8404b15a0471f216"},
    {file = "coverage-7.3.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fe494faa90ce6381770746077243231e0b83ff3f17069d748f645617cefe19d4
|
||||
{file = "coverage-7.3.2-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6e9589bd04d0461a417562649522575d8752904d35c12907d8c9dfeba588faf"},
|
||||
{file = "coverage-7.3.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:d51ac2a26f71da1b57f2dc81d0e108b6ab177e7d30e774db90675467c847bbdf"},
|
||||
{file = "coverage-7.3.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:99b89d9f76070237975b315b3d5f4d6956ae354a4c92ac2388a5695516e47c84"},
|
||||
{file = "coverage-7.3.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:fa28e909776dc69efb6ed975a63691bc8172b64ff357e663a1bb06ff3c9b589a"},
|
||||
{file = "coverage-7.3.2-cp312-cp312-win32.whl", hash = "sha256:289fe43bf45a575e3ab10b26d7b6f2ddb9ee2dba447499f5401cfb5ecb8196bb"},
|
||||
{file = "coverage-7.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:7dbc3ed60e8659bc59b6b304b43ff9c3ed858da2839c78b804973f613d3e92ed"},
|
||||
{file = "coverage-7.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f94b734214ea6a36fe16e96a70d941af80ff3bfd716c141300d95ebc85339738"},
|
||||
{file = "coverage-7.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:af3d828d2c1cbae52d34bdbb22fcd94d1ce715d95f1a012354a75e5913f1bda2"},
|
||||
{file = "coverage-7.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:630b13e3036e13c7adc480ca42fa7afc2a5d938081d28e20903cf7fd687872e2"},
|
||||
{file = "coverage-7.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c9eacf273e885b02a0273bb3a2170f30e2d53a6d53b72dbe02d6701b5296101c"},
|
||||
{file = "coverage-7.3.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d8f17966e861ff97305e0801134e69db33b143bbfb36436efb9cfff6ec7b2fd9"},
|
||||
{file = "coverage-7.3.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:b4275802d16882cf9c8b3d057a0839acb07ee9379fa2749eca54efbce1535b82"},
|
||||
{file = "coverage-7.3.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:72c0cfa5250f483181e677ebc97133ea1ab3eb68645e494775deb6a7f6f83901"},
|
||||
{file = "coverage-7.3.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:cb536f0dcd14149425996821a168f6e269d7dcd2c273a8bff8201e79f5104e76"},
|
||||
{file = "coverage-7.3.2-cp38-cp38-win32.whl", hash = "sha256:307adb8bd3abe389a471e649038a71b4eb13bfd6b7dd9a129fa856f5c695cf92"},
|
||||
{file = "coverage-7.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:88ed2c30a49ea81ea3b7f172e0269c182a44c236eb394718f976239892c0a27a"},
|
||||
{file = "coverage-7.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b631c92dfe601adf8f5ebc7fc13ced6bb6e9609b19d9a8cd59fa47c4186ad1ce"},
|
||||
{file = "coverage-7.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:d3d9df4051c4a7d13036524b66ecf7a7537d14c18a384043f30a303b146164e9"},
|
||||
{file = "coverage-7.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5f7363d3b6a1119ef05015959ca24a9afc0ea8a02c687fe7e2d557705375c01f"},
|
||||
{file = "coverage-7.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2f11cc3c967a09d3695d2a6f03fb3e6236622b93be7a4b5dc09166a861be6d25"},
|
||||
{file = "coverage-7.3.2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:149de1d2401ae4655c436a3dced6dd153f4c3309f599c3d4bd97ab172eaf02d9"},
|
||||
{file = "coverage-7.3.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:3a4006916aa6fee7cd38db3bfc95aa9c54ebb4ffbfc47c677c8bba949ceba0a6"},
|
||||
{file = "coverage-7.3.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9028a3871280110d6e1aa2df1afd5ef003bab5fb1ef421d6dc748ae1c8ef2ebc"},
|
||||
{file = "coverage-7.3.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:9f805d62aec8eb92bab5b61c0f07329275b6f41c97d80e847b03eb894f38d083"},
|
||||
{file = "coverage-7.3.2-cp39-cp39-win32.whl", hash = "sha256:d1c88ec1a7ff4ebca0219f5b1ef863451d828cccf889c173e1253aa84b1e07ce"},
|
||||
{file = "coverage-7.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:b4767da59464bb593c07afceaddea61b154136300881844768037fd5e859353f"},
|
||||
{file = "coverage-7.3.2-pp38.pp39.pp310-none-any.whl", hash = "sha256:ae97af89f0fbf373400970c0a21eef5aa941ffeed90aee43650b81f7d7f47637"},
|
||||
{file = "coverage-7.3.2.tar.gz", hash = "sha256:be32ad29341b0170e795ca590e1c07e81fc061cb5b10c74ce7203491484404ef"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
@@ -794,13 +794,13 @@ graph = ["objgraph (>=1.7.2)"]

[[package]]
name = "docker"
version = "7.0.0"
version = "6.1.3"
description = "A Python library for the Docker Engine API."
optional = false
python-versions = ">=3.8"
python-versions = ">=3.7"
files = [
{file = "docker-7.0.0-py3-none-any.whl", hash = "sha256:12ba681f2777a0ad28ffbcc846a69c31b4dfd9752b47eb425a274ee269c5e14b"},
{file = "docker-7.0.0.tar.gz", hash = "sha256:323736fb92cd9418fc5e7133bc953e11a9da04f4483f828b527db553f1e7e5a3"},
{file = "docker-6.1.3-py3-none-any.whl", hash = "sha256:aecd2277b8bf8e506e484f6ab7aec39abe0038e29fa4a6d3ba86c3fe01844ed9"},
{file = "docker-6.1.3.tar.gz", hash = "sha256:aa6d17830045ba5ef0168d5eaa34d37beeb113948c413affe1d5991fc11f9a20"},
]

[package.dependencies]
@@ -808,10 +808,10 @@ packaging = ">=14.0"
pywin32 = {version = ">=304", markers = "sys_platform == \"win32\""}
requests = ">=2.26.0"
urllib3 = ">=1.26.0"
websocket-client = ">=0.32.0"

[package.extras]
ssh = ["paramiko (>=2.4.3)"]
websockets = ["websocket-client (>=1.3.0)"]

[[package]]
name = "dparse"
@@ -911,13 +911,13 @@ pyflakes = ">=3.1.0,<3.2.0"

[[package]]
name = "freezegun"
version = "1.4.0"
version = "1.2.2"
description = "Let your Python tests travel through time"
optional = false
python-versions = ">=3.7"
python-versions = ">=3.6"
files = [
{file = "freezegun-1.4.0-py3-none-any.whl", hash = "sha256:55e0fc3c84ebf0a96a5aa23ff8b53d70246479e9a68863f1fcac5a3e52f19dd6"},
{file = "freezegun-1.4.0.tar.gz", hash = "sha256:10939b0ba0ff5adaecf3b06a5c2f73071d9678e507c5eaedb23c761d56ac774b"},
{file = "freezegun-1.2.2-py3-none-any.whl", hash = "sha256:ea1b963b993cb9ea195adbd893a48d573fda951b0da64f60883d7e988b606c9f"},
{file = "freezegun-1.2.2.tar.gz", hash = "sha256:cd22d1ba06941384410cd967d8a99d5ae2442f57dfafeff2fda5de8dc5c05446"},
]

[package.dependencies]
@@ -995,13 +995,13 @@ grpcio-gcp = ["grpcio-gcp (>=0.2.2,<1.0dev)"]

[[package]]
name = "google-api-python-client"
version = "2.111.0"
version = "2.108.0"
description = "Google API Client Library for Python"
optional = false
python-versions = ">=3.7"
files = [
{file = "google-api-python-client-2.111.0.tar.gz", hash = "sha256:3a45a53c031478d1c82c7162dd25c9a965247bca6bd438af0838a9d9b8219405"},
{file = "google_api_python_client-2.111.0-py2.py3-none-any.whl", hash = "sha256:b605adee2d09a843b97a59925757802904679e44e5599708cedb8939900dfbc7"},
{file = "google-api-python-client-2.108.0.tar.gz", hash = "sha256:6396efca83185fb205c0abdbc1c2ee57b40475578c6af37f6d0e30a639aade99"},
{file = "google_api_python_client-2.108.0-py2.py3-none-any.whl", hash = "sha256:9d1327213e388943ebcd7db5ce6e7f47987a7e6874e3e1f6116010eea4a0e75d"},
]

[package.dependencies]
@@ -1037,13 +1037,13 @@ requests = ["requests (>=2.20.0,<3.0.0dev)"]

[[package]]
name = "google-auth-httplib2"
version = "0.2.0"
version = "0.1.1"
description = "Google Authentication Library: httplib2 transport"
optional = false
python-versions = "*"
files = [
{file = "google-auth-httplib2-0.2.0.tar.gz", hash = "sha256:38aa7badf48f974f1eb9861794e9c0cb2a0511a4ec0679b1f886d108f5640e05"},
{file = "google_auth_httplib2-0.2.0-py2.py3-none-any.whl", hash = "sha256:b65a0a2123300dd71281a7bf6e64d65a0759287df52729bdd1ae2e47dc311a3d"},
{file = "google-auth-httplib2-0.1.1.tar.gz", hash = "sha256:c64bc555fdc6dd788ea62ecf7bccffcf497bf77244887a3f3d7a5a02f8e3fc29"},
{file = "google_auth_httplib2-0.1.1-py2.py3-none-any.whl", hash = "sha256:42c50900b8e4dcdf8222364d1f0efe32b8421fb6ed72f2613f12f75cc933478c"},
]

[package.dependencies]
@@ -1275,13 +1275,13 @@ files = [

[[package]]
name = "jsonschema"
version = "4.20.0"
version = "4.18.0"
description = "An implementation of JSON Schema validation for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "jsonschema-4.20.0-py3-none-any.whl", hash = "sha256:ed6231f0429ecf966f5bc8dfef245998220549cbbcf140f913b7464c52c3b6b3"},
{file = "jsonschema-4.20.0.tar.gz", hash = "sha256:4f614fd46d8d61258610998997743ec5492a648b33cf478c1ddc23ed4598a5fa"},
{file = "jsonschema-4.18.0-py3-none-any.whl", hash = "sha256:b508dd6142bd03f4c3670534c80af68cd7bbff9ea830b9cf2625d4a3c49ddf60"},
{file = "jsonschema-4.18.0.tar.gz", hash = "sha256:8caf5b57a990a98e9b39832ef3cb35c176fe331414252b6e1b26fd5866f891a4"},
]

[package.dependencies]
@@ -1551,13 +1551,13 @@ min-versions = ["babel (==2.9.0)", "click (==7.0)", "colorama (==0.4)", "ghp-imp

[[package]]
name = "mkdocs-material"
version = "9.5.3"
version = "9.4.14"
description = "Documentation that simply works"
optional = true
python-versions = ">=3.8"
files = [
{file = "mkdocs_material-9.5.3-py3-none-any.whl", hash = "sha256:76c93a8525cceb0b395b9cedab3428bf518cf6439adef2b940f1c1574b775d89"},
{file = "mkdocs_material-9.5.3.tar.gz", hash = "sha256:5899219f422f0a6de784232d9d40374416302ffae3c160cacc72969fcc1ee372"},
{file = "mkdocs_material-9.4.14-py3-none-any.whl", hash = "sha256:dbc78a4fea97b74319a6aa9a2f0be575a6028be6958f813ba367188f7b8428f6"},
{file = "mkdocs_material-9.4.14.tar.gz", hash = "sha256:a511d3ff48fa8718b033e7e37d17abd9cc1de0fdf0244a625ca2ae2387e2416d"},
]

[package.dependencies]
@@ -1565,7 +1565,7 @@ babel = ">=2.10,<3.0"
colorama = ">=0.4,<1.0"
jinja2 = ">=3.0,<4.0"
markdown = ">=3.2,<4.0"
mkdocs = ">=1.5.3,<1.6.0"
mkdocs = ">=1.5.3,<2.0"
mkdocs-material-extensions = ">=1.3,<2.0"
paginate = ">=0.5,<1.0"
pygments = ">=2.16,<3.0"
@@ -1607,13 +1607,13 @@ test = ["pytest", "pytest-cov"]

[[package]]
name = "moto"
version = "4.2.12"
version = "4.2.10"
description = ""
optional = false
python-versions = ">=3.7"
files = [
{file = "moto-4.2.12-py2.py3-none-any.whl", hash = "sha256:bdcad46e066a55b7d308a786e5dca863b3cba04c6239c6974135a48d1198b3ab"},
{file = "moto-4.2.12.tar.gz", hash = "sha256:7c4d37f47becb4a0526b64df54484e988c10fde26861fc3b5c065bc78800cb59"},
{file = "moto-4.2.10-py2.py3-none-any.whl", hash = "sha256:5cf0736d1f43cb887498d00b00ae522774bfddb7db1f4994fedea65b290b9f0e"},
{file = "moto-4.2.10.tar.gz", hash = "sha256:92595fe287474a31ac3ef847941ebb097e8ffb0c3d6c106e47cf573db06933b2"},
]

[package.dependencies]
@@ -1629,7 +1629,7 @@ Jinja2 = ">=2.10.1"
jsondiff = {version = ">=1.1.2", optional = true, markers = "extra == \"all\""}
multipart = {version = "*", optional = true, markers = "extra == \"all\""}
openapi-spec-validator = {version = ">=0.5.0", optional = true, markers = "extra == \"all\""}
py-partiql-parser = {version = "0.5.0", optional = true, markers = "extra == \"all\""}
py-partiql-parser = {version = "0.4.2", optional = true, markers = "extra == \"all\""}
pyparsing = {version = ">=3.0.7", optional = true, markers = "extra == \"all\""}
python-dateutil = ">=2.1,<3.0.0"
python-jose = {version = ">=3.1.0,<4.0.0", extras = ["cryptography"], optional = true, markers = "extra == \"all\""}
@@ -1642,29 +1642,29 @@ werkzeug = ">=0.5,<2.2.0 || >2.2.0,<2.2.1 || >2.2.1"
xmltodict = "*"

[package.extras]
all = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
all = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
apigateway = ["PyYAML (>=5.1)", "ecdsa (!=0.15)", "openapi-spec-validator (>=0.5.0)", "python-jose[cryptography] (>=3.1.0,<4.0.0)"]
apigatewayv2 = ["PyYAML (>=5.1)"]
appsync = ["graphql-core"]
awslambda = ["docker (>=3.0.0)"]
batch = ["docker (>=3.0.0)"]
cloudformation = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
cloudformation = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
cognitoidp = ["ecdsa (!=0.15)", "python-jose[cryptography] (>=3.1.0,<4.0.0)"]
ds = ["sshpubkeys (>=3.1.0)"]
dynamodb = ["docker (>=3.0.0)", "py-partiql-parser (==0.5.0)"]
dynamodbstreams = ["docker (>=3.0.0)", "py-partiql-parser (==0.5.0)"]
dynamodb = ["docker (>=3.0.0)", "py-partiql-parser (==0.4.2)"]
dynamodbstreams = ["docker (>=3.0.0)", "py-partiql-parser (==0.4.2)"]
ebs = ["sshpubkeys (>=3.1.0)"]
ec2 = ["sshpubkeys (>=3.1.0)"]
efs = ["sshpubkeys (>=3.1.0)"]
eks = ["sshpubkeys (>=3.1.0)"]
glue = ["pyparsing (>=3.0.7)"]
iotdata = ["jsondiff (>=1.1.2)"]
proxy = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=2.5.1)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
resourcegroupstaggingapi = ["PyYAML (>=5.1)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "sshpubkeys (>=3.1.0)"]
proxy = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=2.5.1)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
resourcegroupstaggingapi = ["PyYAML (>=5.1)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "sshpubkeys (>=3.1.0)"]
route53resolver = ["sshpubkeys (>=3.1.0)"]
s3 = ["PyYAML (>=5.1)", "py-partiql-parser (==0.5.0)"]
s3crc32c = ["PyYAML (>=5.1)", "crc32c", "py-partiql-parser (==0.5.0)"]
server = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "flask (!=2.2.0,!=2.2.1)", "flask-cors", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
s3 = ["PyYAML (>=5.1)", "py-partiql-parser (==0.4.2)"]
s3crc32c = ["PyYAML (>=5.1)", "crc32c", "py-partiql-parser (==0.4.2)"]
server = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "flask (!=2.2.0,!=2.2.1)", "flask-cors", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
ssm = ["PyYAML (>=5.1)"]
xray = ["aws-xray-sdk (>=0.93,!=0.96)", "setuptools"]

@@ -1828,17 +1828,17 @@ signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]

[[package]]
name = "openapi-schema-validator"
version = "0.6.2"
version = "0.6.0"
description = "OpenAPI schema validation for Python"
optional = false
python-versions = ">=3.8.0,<4.0.0"
files = [
{file = "openapi_schema_validator-0.6.2-py3-none-any.whl", hash = "sha256:c4887c1347c669eb7cded9090f4438b710845cd0f90d1fb9e1b3303fb37339f8"},
{file = "openapi_schema_validator-0.6.2.tar.gz", hash = "sha256:11a95c9c9017912964e3e5f2545a5b11c3814880681fcacfb73b1759bb4f2804"},
{file = "openapi_schema_validator-0.6.0-py3-none-any.whl", hash = "sha256:9e95b95b621efec5936245025df0d6a7ffacd1551e91d09196b3053040c931d7"},
{file = "openapi_schema_validator-0.6.0.tar.gz", hash = "sha256:921b7c1144b856ca3813e41ecff98a4050f7611824dfc5c6ead7072636af0520"},
]

[package.dependencies]
jsonschema = ">=4.19.1,<5.0.0"
jsonschema = ">=4.18.0,<5.0.0"
jsonschema-specifications = ">=2023.5.2,<2024.0.0"
rfc3339-validator = "*"

@@ -1989,17 +1989,17 @@ files = [

[[package]]
name = "py-partiql-parser"
version = "0.5.0"
version = "0.4.2"
description = "Pure Python PartiQL Parser"
optional = false
python-versions = "*"
files = [
{file = "py-partiql-parser-0.5.0.tar.gz", hash = "sha256:427a662e87d51a0a50150fc8b75c9ebb4a52d49129684856c40c88b8c8e027e4"},
{file = "py_partiql_parser-0.5.0-py3-none-any.whl", hash = "sha256:dc454c27526adf62deca5177ea997bf41fac4fd109c5d4c8d81f984de738ba8f"},
{file = "py-partiql-parser-0.4.2.tar.gz", hash = "sha256:9c99d545be7897c6bfa97a107f6cfbcd92e359d394e4f3b95430e6409e8dd1e1"},
{file = "py_partiql_parser-0.4.2-py3-none-any.whl", hash = "sha256:f3f34de8dddf65ed2d47b4263560bbf97be1ecc6bd5c61da039ede90f26a10ce"},
]

[package.extras]
dev = ["black (==22.6.0)", "flake8", "mypy", "pytest"]
dev = ["black (==22.6.0)", "flake8", "mypy (==0.971)", "pytest"]

[[package]]
name = "pyasn1"
@@ -2147,13 +2147,13 @@ tests = ["coverage[toml] (==5.0.4)", "pytest (>=6.0.0,<7.0.0)"]

[[package]]
name = "pylint"
version = "3.0.3"
version = "3.0.2"
description = "python code static checker"
optional = false
python-versions = ">=3.8.0"
files = [
{file = "pylint-3.0.3-py3-none-any.whl", hash = "sha256:7a1585285aefc5165db81083c3e06363a27448f6b467b3b0f30dbd0ac1f73810"},
{file = "pylint-3.0.3.tar.gz", hash = "sha256:58c2398b0301e049609a8429789ec6edf3aabe9b6c5fec916acd18639c16de8b"},
{file = "pylint-3.0.2-py3-none-any.whl", hash = "sha256:60ed5f3a9ff8b61839ff0348b3624ceeb9e6c2a92c514d81c9cc273da3b6bcda"},
{file = "pylint-3.0.2.tar.gz", hash = "sha256:0d4c286ef6d2f66c8bfb527a7f8a629009e42c99707dec821a03e1b51a4c1496"},
]

[package.dependencies]
@@ -2163,7 +2163,7 @@ dill = [
{version = ">=0.2", markers = "python_version < \"3.11\""},
{version = ">=0.3.6", markers = "python_version >= \"3.11\""},
]
isort = ">=4.2.5,<5.13.0 || >5.13.0,<6"
isort = ">=4.2.5,<6"
mccabe = ">=0.6,<0.8"
platformdirs = ">=2.2.0"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
@@ -2208,13 +2208,13 @@ diagrams = ["jinja2", "railroad-diagrams"]

[[package]]
name = "pytest"
version = "7.4.4"
version = "7.4.3"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.7"
files = [
{file = "pytest-7.4.4-py3-none-any.whl", hash = "sha256:b090cdf5ed60bf4c45261be03239c2c1c22df034fbffe691abe93cd80cea01d8"},
{file = "pytest-7.4.4.tar.gz", hash = "sha256:2cf0005922c6ace4a3e2ec8b4080eb0d9753fdc93107415332f50ce9e7994280"},
{file = "pytest-7.4.3-py3-none-any.whl", hash = "sha256:0d009c083ea859a71b76adf7c1d502e4bc170b80a8ef002da5806527b9591fac"},
{file = "pytest-7.4.3.tar.gz", hash = "sha256:d989d136982de4e3b29dabcc838ad581c64e8ed52c11fbe86ddebd9da0818cd5"},
]

[package.dependencies]
@@ -2892,12 +2892,12 @@ testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs (

[[package]]
name = "shodan"
version = "1.31.0"
version = "1.30.1"
description = "Python library and command-line utility for Shodan (https://developer.shodan.io)"
optional = false
python-versions = "*"
files = [
{file = "shodan-1.31.0.tar.gz", hash = "sha256:c73275386ea02390e196c35c660706a28dd4d537c5a21eb387ab6236fac251f6"},
{file = "shodan-1.30.1.tar.gz", hash = "sha256:bedb6e8c2b4459592c1bc17b4d4b57dab0cb58a455ad589ee26a6304242cd505"},
]

[package.dependencies]
@@ -2921,18 +2921,18 @@ files = [

[[package]]
name = "slack-sdk"
version = "3.26.1"
version = "3.26.0"
description = "The Slack API Platform SDK for Python"
optional = false
python-versions = ">=3.6.0"
files = [
{file = "slack_sdk-3.26.1-py2.py3-none-any.whl", hash = "sha256:f80f0d15f0fce539b470447d2a07b03ecdad6b24f69c1edd05d464cf21253a06"},
{file = "slack_sdk-3.26.1.tar.gz", hash = "sha256:d1600211eaa37c71a5f92daf4404074c3e6b3f5359a37c93c818b39d88ab4ca0"},
{file = "slack_sdk-3.26.0-py2.py3-none-any.whl", hash = "sha256:b84c2d93163166eb682e290c19334683c2d0f0cb4a5479c809706b44038fdda1"},
{file = "slack_sdk-3.26.0.tar.gz", hash = "sha256:147946f388ce73b17c377b823759fcb39c0eca7444ca0a942dc12a3940a4f44f"},
]

[package.extras]
optional = ["SQLAlchemy (>=1.4,<3)", "aiodns (>1.0)", "aiohttp (>=3.7.3,<4)", "boto3 (<=2)", "websocket-client (>=1,<2)", "websockets (>=10,<11)"]
testing = ["Flask (>=1,<2)", "Flask-Sockets (>=0.2,<1)", "Jinja2 (==3.0.3)", "Werkzeug (<2)", "black (==22.8.0)", "boto3 (<=2)", "click (==8.0.4)", "flake8 (>=5.0.4,<7)", "itsdangerous (==1.1.0)", "moto (>=3,<4)", "psutil (>=5,<6)", "pytest (>=7.0.1,<8)", "pytest-asyncio (<1)", "pytest-cov (>=2,<3)"]
testing = ["Flask (>=1,<2)", "Flask-Sockets (>=0.2,<1)", "Jinja2 (==3.0.3)", "Werkzeug (<2)", "black (==22.8.0)", "boto3 (<=2)", "click (==8.0.4)", "flake8 (>=5,<6)", "itsdangerous (==1.1.0)", "moto (>=3,<4)", "psutil (>=5,<6)", "pytest (>=6.2.5,<7)", "pytest-asyncio (<1)", "pytest-cov (>=2,<3)"]

[[package]]
name = "smmap"
@@ -3157,6 +3157,22 @@ files = [
[package.extras]
watchmedo = ["PyYAML (>=3.10)"]

[[package]]
name = "websocket-client"
version = "1.5.1"
description = "WebSocket client for Python with low level API options"
optional = false
python-versions = ">=3.7"
files = [
{file = "websocket-client-1.5.1.tar.gz", hash = "sha256:3f09e6d8230892547132177f575a4e3e73cfdf06526e20cc02aa1c3b47184d40"},
{file = "websocket_client-1.5.1-py3-none-any.whl", hash = "sha256:cdf5877568b7e83aa7cf2244ab56a3213de587bbe0ce9d8b9600fc77b455d89e"},
]

[package.extras]
docs = ["Sphinx (>=3.4)", "sphinx-rtd-theme (>=0.5)"]
optional = ["python-socks", "wsaccel"]
test = ["websockets"]

[[package]]
name = "werkzeug"
version = "3.0.1"
@@ -3296,4 +3312,4 @@ docs = ["mkdocs", "mkdocs-material"]
[metadata]
lock-version = "2.0"
python-versions = ">=3.9,<3.12"
content-hash = "621fca1ef771f0785049b74b4bd1132371b8fa3f1a4da7a81b7eb2f1256b61ff"
content-hash = "7e28daf704e53d057e66bc8fb71558361ab36a7cca85c7498a963f6406f54ef4"

@@ -211,31 +211,6 @@
"iam_avoid_root_usage"
]
},
{
"Id": "op.acc.4.aws.iam.8",
"Description": "Proceso de gestión de derechos de acceso",
"Attributes": [
{
"IdGrupoControl": "op.acc.4",
"Marco": "operacional",
"Categoria": "control de acceso",
"DescripcionControl": "Se restringirá todo acceso a las acciones especificadas para el usuario root de una cuenta.",
"Nivel": "alto",
"Tipo": "requisito",
"Dimensiones": [
"confidencialidad",
"integridad",
"trazabilidad",
"autenticidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"organizations_account_part_of_organizations",
"organizations_scp_check_deny_regions"
]
},
{
"Id": "op.acc.4.aws.iam.9",
"Description": "Proceso de gestión de derechos de acceso",
@@ -1146,30 +1121,6 @@
"cloudtrail_insights_exist"
]
},
{
"Id": "op.exp.8.r1.aws.ct.3",
"Description": "Revisión de los registros",
"Attributes": [
{
"IdGrupoControl": "op.exp.8.r1",
"Marco": "operacional",
"Categoria": "explotación",
"DescripcionControl": "Registrar los eventos de lectura y escritura de datos.",
"Nivel": "alto",
"Tipo": "refuerzo",
"Dimensiones": [
"trazabilidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"cloudwatch_log_metric_filter_and_alarm_for_cloudtrail_configuration_changes_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_insights_exist"
]
},
{
"Id": "op.exp.8.r1.aws.ct.4",
"Description": "Revisión de los registros",
@@ -1282,33 +1233,6 @@
"iam_role_cross_service_confused_deputy_prevention"
]
},
{
"Id": "op.exp.8.r4.aws.ct.1",
"Description": "Control de acceso",
"Attributes": [
{
"IdGrupoControl": "op.exp.8.r4",
"Marco": "operacional",
"Categoria": "explotación",
"DescripcionControl": "Asignar correctamente las políticas AWS IAM para el acceso y borrado de los registros y sus copias de seguridad haciendo uso del principio de mínimo privilegio.",
"Nivel": "alto",
"Tipo": "refuerzo",
"Dimensiones": [
"trazabilidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"iam_policy_allows_privilege_escalation",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_customer_unattached_policy_no_administrative_privilege",
"iam_no_custom_policy_permissive_role_assumption",
"iam_policy_attached_only_to_group_or_roles",
"iam_role_cross_service_confused_deputy_prevention",
"iam_policy_no_full_access_to_cloudtrail"
]
},
{
"Id": "op.exp.8.r4.aws.ct.2",
"Description": "Control de acceso",
@@ -2186,7 +2110,7 @@
}
],
"Checks": [
"fms_policy_compliant"
"networkfirewall_in_all_vpc"
]
},
{
@@ -2327,31 +2251,6 @@
"cloudfront_distributions_https_enabled"
]
},
{
"Id": "mp.com.4.aws.ws.1",
"Description": "Separación de flujos de información en la red",
"Attributes": [
{
"IdGrupoControl": "mp.com.4",
"Marco": "medidas de protección",
"Categoria": "segregación de redes",
"DescripcionControl": "Se deberán abrir solo los puertos necesarios para el uso del servicio AWS WorkSpaces.",
"Nivel": "alto",
"Tipo": "requisito",
"Dimensiones": [
"confidencialidad",
"integridad",
"trazabilidad",
"autenticidad",
"disponibilidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"workspaces_vpc_2private_1public_subnets_nat"
]
},
{
"Id": "mp.com.4.aws.vpc.1",
"Description": "Separación de flujos de información en la red",
@@ -2424,8 +2323,7 @@
}
],
"Checks": [
"vpc_subnet_separate_private_public",
"vpc_different_regions"
"vpc_subnet_separate_private_public"
]
},
{
@@ -2472,8 +2370,7 @@
}
],
"Checks": [
"vpc_subnet_different_az",
"vpc_different_regions"
"vpc_subnet_different_az"
]
},
{

@@ -11,7 +11,7 @@ from prowler.lib.logger import logger

timestamp = datetime.today()
timestamp_utc = datetime.now(timezone.utc).replace(tzinfo=timezone.utc)
prowler_version = "3.12.0"
prowler_version = "3.11.3"
html_logo_url = "https://github.com/prowler-cloud/prowler/"
html_logo_img = "https://user-images.githubusercontent.com/3985464/113734260-7ba06900-96fb-11eb-82bc-d4f68a1e2710.png"
square_logo_img = "https://user-images.githubusercontent.com/38561120/235905862-9ece5bd7-9aa3-4e48-807a-3a9035eb8bfb.png"

@@ -69,8 +69,8 @@ aws:
# AWS Organizations
# organizations_scp_check_deny_regions
# organizations_enabled_regions: [
# "eu-central-1",
# "eu-west-1",
# 'eu-central-1',
# 'eu-west-1',
# "us-east-1"
# ]
organizations_enabled_regions: []

@@ -401,8 +401,7 @@ def display_compliance_table(
"Bajo": 0,
}
if finding.status == "FAIL":
if attribute.Tipo != "recomendacion":
fail_count += 1
fail_count += 1
marcos[marco_categoria][
"Estado"
] = f"{Fore.RED}NO CUMPLE{Style.RESET_ALL}"

@@ -407,7 +407,7 @@ def get_azure_html_assessment_summary(audit_info):
if isinstance(audit_info, Azure_Audit_Info):
printed_subscriptions = []
for key, value in audit_info.identity.subscriptions.items():
intermediate = f"{key} : {value}"
intermediate = key + " : " + value
printed_subscriptions.append(intermediate)

# check if identity is str(coming from SP) or dict(coming from browser or)

@@ -13,7 +13,7 @@ def send_slack_message(token, channel, stats, provider, audit_info):
response = client.chat_postMessage(
username="Prowler",
icon_url=square_logo_img,
channel=f"#{channel}",
channel="#" + channel,
blocks=create_message_blocks(identity, logo, stats),
)
return response
@@ -35,7 +35,7 @@ def create_message_identity(provider, audit_info):
elif provider == "azure":
printed_subscriptions = []
for key, value in audit_info.identity.subscriptions.items():
intermediate = f"- *{key}: {value}*\n"
intermediate = "- *" + key + ": " + value + "*\n"
printed_subscriptions.append(intermediate)
identity = f"Azure Subscriptions:\n{''.join(printed_subscriptions)}"
logo = azure_logo

@@ -10,10 +10,7 @@ from prowler.config.config import aws_services_json_file
from prowler.lib.check.check import list_modules, recover_checks_from_service
from prowler.lib.logger import logger
from prowler.lib.utils.utils import open_file, parse_json_file
from prowler.providers.aws.config import (
AWS_STS_GLOBAL_ENDPOINT_REGION,
ROLE_SESSION_NAME,
)
from prowler.providers.aws.config import AWS_STS_GLOBAL_ENDPOINT_REGION
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
from prowler.providers.aws.lib.credentials.credentials import create_sts_session

@@ -116,15 +113,9 @@ def assume_role(
sts_endpoint_region: str = None,
) -> dict:
try:
role_session_name = (
assumed_role_info.role_session_name
if assumed_role_info.role_session_name
else ROLE_SESSION_NAME
)

assume_role_arguments = {
"RoleArn": assumed_role_info.role_arn,
"RoleSessionName": role_session_name,
"RoleSessionName": "ProwlerAsessmentSession",
"DurationSeconds": assumed_role_info.session_duration,
}

@@ -161,23 +152,21 @@ def input_role_mfa_token_and_code() -> tuple[str]:

def generate_regional_clients(
service: str,
audit_info: AWS_Audit_Info,
service: str, audit_info: AWS_Audit_Info, global_service: bool = False
) -> dict:
"""generate_regional_clients returns a dict with the following format for the given service:

Example:
{"eu-west-1": boto3_service_client}
"""
try:
regional_clients = {}
service_regions = get_available_aws_service_regions(service, audit_info)

# Check if it is global service to gather only one region
if global_service:
if service_regions:
if audit_info.profile_region in service_regions:
service_regions = [audit_info.profile_region]
service_regions = service_regions[:1]

# Get the regions enabled for the account and get the intersection with the service available regions
if audit_info.enabled_regions:
enabled_regions = service_regions.intersection(audit_info.enabled_regions)
else:
enabled_regions = service_regions
enabled_regions = service_regions.intersection(audit_info.enabled_regions)

for region in enabled_regions:
regional_client = audit_info.audit_session.client(
@@ -202,14 +191,10 @@ def get_aws_enabled_regions(audit_info: AWS_Audit_Info) -> set:
ec2_client = audit_info.audit_session.client(service, region_name=default_region)

enabled_regions = set()
try:
# With AllRegions=False we only get the enabled regions for the account
for region in ec2_client.describe_regions(AllRegions=False).get("Regions", []):
enabled_regions.add(region.get("RegionName"))
except Exception as error:
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
# With AllRegions=False we only get the enabled regions for the account
for region in ec2_client.describe_regions(AllRegions=False).get("Regions", []):
enabled_regions.add(region.get("RegionName"))

return enabled_regions


@@ -253,8 +238,6 @@ def get_checks_from_input_arn(audit_resources: list, provider: str) -> set:
service = "efs"
elif service == "logs":
service = "cloudwatch"
elif service == "cognito":
service = "cognito-idp"
# Check if Prowler has checks in service
try:
list_modules(provider, service)
@@ -311,6 +294,7 @@ get_available_aws_service_regions(service: str, audit_info: AWS_Audit_Info)
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
with open_file(f"{actual_directory}/{aws_services_json_file}") as f:
data = parse_json_file(f)
# Check if it is a subservice
json_regions = set(
data["services"][service]["regions"][audit_info.audited_partition]
)

File diff suppressed because it is too large
@@ -1,3 +1,2 @@
AWS_STS_GLOBAL_ENDPOINT_REGION = "us-east-1"
BOTO3_USER_AGENT_EXTRA = "APN_1826889"
ROLE_SESSION_NAME = "ProwlerAssessmentSession"

@@ -143,23 +143,29 @@ def is_allowlisted(
finding_tags,
):
try:
allowlisted_checks = {}
# By default is not allowlisted
is_finding_allowlisted = False
# First set account key from allowlist dict
if audited_account in allowlist["Accounts"]:
allowlisted_checks = allowlist["Accounts"][audited_account]["Checks"]
# If there is a *, it affects to all accounts
# This cannot be elif since in the case of * and single accounts we
# want to merge allowlisted checks from * to the other accounts check list
if "*" in allowlist["Accounts"]:
checks_multi_account = allowlist["Accounts"]["*"]["Checks"]
allowlisted_checks.update(checks_multi_account)

# We always check all the accounts present in the allowlist
# if one allowlists the finding we set the finding as allowlisted
for account in allowlist["Accounts"]:
if account == audited_account or account == "*":
if is_allowlisted_in_check(
allowlist["Accounts"][account]["Checks"],
audited_account,
check,
finding_region,
finding_resource,
finding_tags,
):
is_finding_allowlisted = True
break
# Test if it is allowlisted
if is_allowlisted_in_check(
allowlisted_checks,
audited_account,
check,
finding_region,
finding_resource,
finding_tags,
):
is_finding_allowlisted = True

return is_finding_allowlisted
except Exception as error:
@@ -304,10 +310,10 @@ def is_excepted(
is_tag_excepted = __is_item_matched__(excepted_tags, finding_tags)

if (
(is_account_excepted or not excepted_accounts)
and (is_region_excepted or not excepted_regions)
and (is_resource_excepted or not excepted_resources)
and (is_tag_excepted or not excepted_tags)
is_account_excepted
and is_region_excepted
and is_resource_excepted
and is_tag_excepted
):
excepted = True
return excepted

@@ -1,8 +1,6 @@
from argparse import ArgumentTypeError, Namespace
from re import fullmatch, search

from prowler.providers.aws.aws_provider import get_aws_available_regions
from prowler.providers.aws.config import ROLE_SESSION_NAME
from prowler.providers.aws.lib.arn.arn import arn_type


@@ -28,13 +26,6 @@ def init_parser(self):
help="ARN of the role to be assumed",
# Pending ARN validation
)
aws_auth_subparser.add_argument(
"--role-session-name",
nargs="?",
default=ROLE_SESSION_NAME,
help="An identifier for the assumed role session. Defaults to ProwlerAssessmentSession",
type=validate_role_session_name,
)
aws_auth_subparser.add_argument(
"--sts-endpoint-region",
nargs="?",
@@ -93,11 +84,6 @@ def init_parser(self):
action="store_true",
help="Skip updating previous findings of Prowler in Security Hub",
)
aws_security_hub_subparser.add_argument(
"--send-sh-only-fails",
action="store_true",
help="Send only Prowler failed findings to SecurityHub",
)
# AWS Quick Inventory
aws_quick_inventory_subparser = aws_parser.add_argument_group("Quick Inventory")
aws_quick_inventory_subparser.add_argument(
@@ -113,7 +99,6 @@ def init_parser(self):
"-B",
"--output-bucket",
nargs="?",
type=validate_bucket,
default=None,
help="Custom output bucket, requires -M <mode> and it can work also with -o flag.",
)
@@ -121,7 +106,6 @@ def init_parser(self):
"-D",
"--output-bucket-no-assume",
nargs="?",
type=validate_bucket,
default=None,
help="Same as -B but do not use the assumed role credentials to put objects to the bucket, instead uses the initial credentials.",
)
@@ -195,37 +179,9 @@ def validate_arguments(arguments: Namespace) -> tuple[bool, str]:

# Handle if session_duration is not the default value or external_id is set
if (
(arguments.session_duration and arguments.session_duration != 3600)
or arguments.external_id
or arguments.role_session_name != ROLE_SESSION_NAME
):
arguments.session_duration and arguments.session_duration != 3600
) or arguments.external_id:
if not arguments.role:
return (
False,
"To use -I/--external-id, -T/--session-duration or --role-session-name options -R/--role option is needed",
)
return (False, "To use -I/-T options -R option is needed")

return (True, "")


def validate_bucket(bucket_name):
"""validate_bucket validates that the input bucket_name is valid"""
if search("(?!(^xn--|.+-s3alias$))^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$", bucket_name):
return bucket_name
else:
raise ArgumentTypeError(
"Bucket name must be valid (https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html)"
)


def validate_role_session_name(session_name):
"""
validates that the role session name is valid
Documentation: https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
"""
if fullmatch("[\w+=,.@-]{2,64}", session_name):
return session_name
else:
raise ArgumentTypeError(
"Role Session Name must be 2-64 characters long and consist only of upper- and lower-case alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@-"
)

@@ -30,7 +30,6 @@ current_audit_info = AWS_Audit_Info(
session_duration=None,
external_id=None,
mfa_enabled=None,
role_session_name=None,
),
mfa_enabled=None,
audit_resources=None,

@@ -20,7 +20,6 @@ class AWS_Assume_Role:
session_duration: int
external_id: str
mfa_enabled: bool
role_session_name: str


@dataclass

@@ -1,11 +1,8 @@
def is_condition_block_restrictive(
condition_statement: dict, source_account: str, is_cross_account_allowed=False
def is_account_only_allowed_in_condition(
condition_statement: dict, source_account: str
):
"""
is_condition_block_restrictive parses the IAM Condition policy block and, by default, returns True if the source_account passed as argument is within, False if not.

If argument is_cross_account_allowed is True it tests if the Condition block includes any of the operators allowlisted returning True if does, False if not.

is_account_only_allowed_in_condition parses the IAM Condition policy block and returns True if the source_account passed as argument is within, False if not.

@param condition_statement: dict with an IAM Condition block, e.g.:
{
@@ -57,16 +54,13 @@ def is_condition_block_restrictive(
condition_statement[condition_operator][value],
list,
):
# if there is an arn/account without the source account -> we do not consider it safe
# here by default we assume is true and look for false entries
is_condition_key_restrictive = True
# if cross account is not allowed check for each condition block looking for accounts
# different than default
if not is_cross_account_allowed:
# if there is an arn/account without the source account -> we do not consider it safe
# here by default we assume is true and look for false entries
for item in condition_statement[condition_operator][value]:
if source_account not in item:
is_condition_key_restrictive = False
break
for item in condition_statement[condition_operator][value]:
if source_account not in item:
is_condition_key_restrictive = False
break

if is_condition_key_restrictive:
is_condition_valid = True
@@ -76,13 +70,10 @@ def is_condition_block_restrictive(
condition_statement[condition_operator][value],
str,
):
if is_cross_account_allowed:
if (
source_account
in condition_statement[condition_operator][value]
):
is_condition_valid = True
else:
if (
source_account
in condition_statement[condition_operator][value]
):
is_condition_valid = True

return is_condition_valid

@@ -1,3 +1,5 @@
import sys

from prowler.config.config import (
csv_file_suffix,
html_file_suffix,
@@ -39,9 +41,10 @@ def send_to_s3_bucket(
s3_client.upload_file(file_name, output_bucket_name, object_name)

except Exception as error:
logger.error(
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)


def get_s3_object_path(output_directory: str) -> str:

@@ -29,9 +29,7 @@ def prepare_security_hub_findings(
continue

# Handle quiet mode
if (
output_options.is_quiet or output_options.send_sh_only_fails
) and finding.status != "FAIL":
if output_options.is_quiet and finding.status != "FAIL":
continue

# Get the finding region

@@ -1,21 +1,17 @@
from concurrent.futures import ThreadPoolExecutor, as_completed
import threading

from prowler.lib.logger import logger
from prowler.providers.aws.aws_provider import (
generate_regional_clients,
get_default_region,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info

MAX_WORKERS = 10


class AWSService:
"""The AWSService class offers a parent class for each AWS Service to generate:
- AWS Regional Clients
- Shared information like the account ID and ARN, the the AWS partition and the checks audited
- AWS Session
- Thread pool for the __threading_call__
- Also handles if the AWS Service is Global
"""

@@ -38,7 +34,9 @@ class AWSService:

# Generate Regional Clients
if not global_service:
self.regional_clients = generate_regional_clients(self.service, audit_info)
self.regional_clients = generate_regional_clients(
self.service, audit_info, global_service
)

# Get a single region and client if the service needs it (e.g. AWS Global Service)
# We cannot include this within an else because some services needs both the regional_clients
@@ -46,40 +44,14 @@ class AWSService:
self.region = get_default_region(self.service, audit_info)
self.client = self.session.client(self.service, self.region)

# Thread pool for __threading_call__
self.thread_pool = ThreadPoolExecutor(max_workers=MAX_WORKERS)

def __get_session__(self):
return self.session

def __threading_call__(self, call, iterator=None):
# Use the provided iterator, or default to self.regional_clients
items = iterator if iterator is not None else self.regional_clients.values()
# Determine the total count for logging
item_count = len(items)

# Trim leading and trailing underscores from the call's name
call_name = call.__name__.strip("_")
# Add Capitalization
call_name = " ".join([x.capitalize() for x in call_name.split("_")])

# Print a message based on the call's name, and if its regional or processing a list of items
if iterator is None:
logger.info(
f"{self.service.upper()} - Starting threads for '{call_name}' function across {item_count} regions..."
)
else:
logger.info(
f"{self.service.upper()} - Starting threads for '{call_name}' function to process {item_count} items..."
)

# Submit tasks to the thread pool
futures = [self.thread_pool.submit(call, item) for item in items]

# Wait for all tasks to complete
for future in as_completed(futures):
try:
future.result() # Raises exceptions from the thread, if any
except Exception:
# Handle exceptions if necessary
pass # Replace 'pass' with any additional exception handling logic. Currently handled within the called function
def __threading_call__(self, call):
threads = []
for regional_client in self.regional_clients.values():
threads.append(threading.Thread(target=call, args=(regional_client,)))
for t in threads:
t.start()
for t in threads:
t.join()

@@ -85,36 +85,21 @@ class AccessAnalyzer(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)

# TODO: We need to include ListFindingsV2
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/accessanalyzer/client/list_findings_v2.html
def __list_findings__(self):
logger.info("AccessAnalyzer - Listing Findings per Analyzer...")
try:
for analyzer in self.analyzers:
try:
if analyzer.status == "ACTIVE":
regional_client = self.regional_clients[analyzer.region]
list_findings_paginator = regional_client.get_paginator(
"list_findings"
)
for page in list_findings_paginator.paginate(
analyzerArn=analyzer.arn
):
for finding in page["findings"]:
analyzer.findings.append(Finding(id=finding["id"]))
except ClientError as error:
if error.response["Error"]["Code"] == "ValidationException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
if analyzer.status == "ACTIVE":
regional_client = self.regional_clients[analyzer.region]
list_findings_paginator = regional_client.get_paginator(
"list_findings"
)
for page in list_findings_paginator.paginate(
analyzerArn=analyzer.arn
):
for finding in page["findings"]:
analyzer.findings.append(Finding(id=finding["id"]))

except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

@@ -1,7 +1,7 @@
{
"Provider": "aws",
"CheckID": "apigateway_restapi_authorizers_enabled",
"CheckTitle": "Check if API Gateway has configured authorizers at api or method level.",
"CheckTitle": "Check if API Gateway has configured authorizers.",
"CheckAliases": [
"apigateway_authorizers_enabled"
],
@@ -13,7 +13,7 @@
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"Severity": "medium",
"ResourceType": "AwsApiGatewayRestApi",
"Description": "Check if API Gateway has configured authorizers at api or method level.",
"Description": "Check if API Gateway has configured authorizers.",
"Risk": "If no authorizer is enabled anyone can use the service.",
"RelatedUrl": "",
"Remediation": {

@@ -13,41 +13,12 @@ class apigateway_restapi_authorizers_enabled(Check):
report.resource_id = rest_api.name
report.resource_arn = rest_api.arn
report.resource_tags = rest_api.tags
# it there are not authorizers at api level and resources without methods (default case) ->
report.status = "FAIL"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have an authorizer configured at api level."
if rest_api.authorizer:
report.status = "PASS"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has an authorizer configured at api level"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has an authorizer configured."
else:
# we want to know if api has not authorizers and all the resources don't have methods configured
resources_have_methods = False
all_methods_authorized = True
resource_paths_with_unathorized_methods = []
for resource in rest_api.resources:
# if the resource has methods test if they have all configured authorizer
if resource.resource_methods:
resources_have_methods = True
for (
http_method,
authorization_method,
) in resource.resource_methods.items():
if authorization_method == "NONE":
all_methods_authorized = False
unauthorized_method = (
f"{resource.path} -> {http_method}"
)
resource_paths_with_unathorized_methods.append(
unauthorized_method
)
# if there are methods in at least one resource and are all authorized
if all_methods_authorized and resources_have_methods:
report.status = "PASS"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has all methods authorized"
# if there are methods in at least one result but some of then are not authorized-> list it
elif not all_methods_authorized:
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have authorizers at api level and the following paths and methods are unauthorized: {'; '.join(resource_paths_with_unathorized_methods)}."

report.status = "FAIL"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have an authorizer configured."
findings.append(report)

return findings

@@ -17,7 +17,6 @@ class APIGateway(AWSService):
self.__get_authorizers__()
self.__get_rest_api__()
self.__get_stages__()
self.__get_resources__()

def __get_rest_apis__(self, regional_client):
logger.info("APIGateway - Getting Rest APIs...")
@@ -54,9 +53,7 @@ class APIGateway(AWSService):
if authorizers:
rest_api.authorizer = True
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
logger.error(f"{error.__class__.__name__}: {error}")

def __get_rest_api__(self):
logger.info("APIGateway - Describing Rest API...")
@@ -67,9 +64,7 @@ class APIGateway(AWSService):
if rest_api_info["endpointConfiguration"]["types"] == ["PRIVATE"]:
rest_api.public_endpoint = False
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
logger.error(f"{error.__class__.__name__}: {error}")

def __get_stages__(self):
logger.info("APIGateway - Getting stages for Rest APIs...")
@@ -100,46 +95,7 @@ class APIGateway(AWSService):
)
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)

def __get_resources__(self):
logger.info("APIGateway - Getting API resources...")
try:
for rest_api in self.rest_apis:
regional_client = self.regional_clients[rest_api.region]
get_resources_paginator = regional_client.get_paginator("get_resources")
for page in get_resources_paginator.paginate(restApiId=rest_api.id):
for resource in page["items"]:
id = resource["id"]
resource_methods = []
methods_auth = {}
for resource_method in resource.get(
"resourceMethods", {}
).keys():
resource_methods.append(resource_method)

for resource_method in resource_methods:
if resource_method != "OPTIONS":
method_config = regional_client.get_method(
restApiId=rest_api.id,
resourceId=id,
httpMethod=resource_method,
)
auth_type = method_config["authorizationType"]
methods_auth.update({resource_method: auth_type})

rest_api.resources.append(
PathResourceMethods(
path=resource["path"], resource_methods=methods_auth
)
)

except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
logger.error(f"{error.__class__.__name__}: {error}")


class Stage(BaseModel):
@@ -151,11 +107,6 @@ class Stage(BaseModel):
tags: Optional[list] = []


class PathResourceMethods(BaseModel):
path: str
resource_methods: dict


class RestAPI(BaseModel):
id: str
arn: str
@@ -165,4 +116,3 @@ class RestAPI(BaseModel):
public_endpoint: bool = True
stages: list[Stage] = []
tags: Optional[list] = []
resources: list[PathResourceMethods] = []

@@ -11,55 +11,57 @@ from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_
|
||||
class awslambda_function_no_secrets_in_code(Check):
|
||||
def execute(self):
|
||||
findings = []
|
||||
if awslambda_client.functions:
|
||||
for function, function_code in awslambda_client.__get_function_code__():
|
||||
if function_code:
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.region = function.region
|
||||
report.resource_id = function.name
|
||||
report.resource_arn = function.arn
|
||||
report.resource_tags = function.tags
|
||||
for function in awslambda_client.functions.values():
|
||||
if function.code:
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
report.region = function.region
|
||||
report.resource_id = function.name
|
||||
report.resource_arn = function.arn
|
||||
report.resource_tags = function.tags
|
||||
|
||||
report.status = "PASS"
|
||||
report.status_extended = (
|
||||
f"No secrets found in Lambda function {function.name} code."
|
||||
)
|
||||
with tempfile.TemporaryDirectory() as tmp_dir_name:
|
||||
function_code.code_zip.extractall(tmp_dir_name)
|
||||
# List all files
|
||||
files_in_zip = next(os.walk(tmp_dir_name))[2]
|
||||
secrets_findings = []
|
||||
for file in files_in_zip:
|
||||
secrets = SecretsCollection()
|
||||
with default_settings():
|
||||
secrets.scan_file(f"{tmp_dir_name}/{file}")
|
||||
                detect_secrets_output = secrets.json()
                if detect_secrets_output:
                    for (
                        file_name
                    ) in (
                        detect_secrets_output.keys()
                    ):  # Appears that only 1 file is being scanned at a time, so could rework this
                        output_file_name = file_name.replace(
                            f"{tmp_dir_name}/", ""
                        )
                        secrets_string = ", ".join(
                            [
                                f"{secret['type']} on line {secret['line_number']}"
                                for secret in detect_secrets_output[
                                    file_name
                                ]
                            ]
                        )
                        secrets_findings.append(
                            f"{output_file_name}: {secrets_string}"
                        )
            report.status = "PASS"
            report.status_extended = (
                f"No secrets found in Lambda function {function.name} code."
            )
            with tempfile.TemporaryDirectory() as tmp_dir_name:
                function.code.code_zip.extractall(tmp_dir_name)
                # List all files
                files_in_zip = next(os.walk(tmp_dir_name))[2]
                secrets_findings = []
                for file in files_in_zip:
                    secrets = SecretsCollection()
                    with default_settings():
                        secrets.scan_file(f"{tmp_dir_name}/{file}")
                    detect_secrets_output = secrets.json()
                    if detect_secrets_output:
                        for (
                            file_name
                        ) in (
                            detect_secrets_output.keys()
                        ):  # Appears that only 1 file is being scanned at a time, so could rework this
                            output_file_name = file_name.replace(
                                f"{tmp_dir_name}/", ""
                            )
                            secrets_string = ", ".join(
                                [
                                    f"{secret['type']} on line {secret['line_number']}"
                                    for secret in detect_secrets_output[file_name]
                                ]
                            )
                            secrets_findings.append(
                                f"{output_file_name}: {secrets_string}"
                            )

                if secrets_findings:
                    final_output_string = "; ".join(secrets_findings)
                    report.status = "FAIL"
                    report.status_extended = f"Potential {'secrets' if len(secrets_findings) > 1 else 'secret'} found in Lambda function {function.name} code -> {final_output_string}."
                if secrets_findings:
                    final_output_string = "; ".join(secrets_findings)
                    report.status = "FAIL"
                    # report.status_extended = f"Potential {'secrets' if len(secrets_findings)>1 else 'secret'} found in Lambda function {function.name} code. {final_output_string}."
                    if len(secrets_findings) > 1:
                        report.status_extended = f"Potential secrets found in Lambda function {function.name} code -> {final_output_string}."
                    else:
                        report.status_extended = f"Potential secret found in Lambda function {function.name} code -> {final_output_string}."
                    # break // Don't break as there may be additional findings

            findings.append(report)
        findings.append(report)

        return findings
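The aggregation step in the check above can be sketched without AWS or detect-secrets at all: given the dict shape that detect-secrets' `.json()` returns, the check folds each file's findings into one status string. This is a minimal sketch; the file names and secret types below are made up for illustration.

```python
# Hedged sketch of the status-string aggregation in the check above.
# The dict mimics detect-secrets' SecretsCollection.json() output shape;
# paths and secret types here are hypothetical.
tmp_dir_name = "/tmp/lambda-code"

detect_secrets_output = {
    f"{tmp_dir_name}/handler.py": [
        {"type": "AWS Access Key", "line_number": 3},
        {"type": "Secret Keyword", "line_number": 10},
    ],
}

secrets_findings = []
for file_name, secrets in detect_secrets_output.items():
    # Strip the temp-dir prefix so the report shows the path inside the zip
    output_file_name = file_name.replace(f"{tmp_dir_name}/", "")
    secrets_string = ", ".join(
        f"{secret['type']} on line {secret['line_number']}" for secret in secrets
    )
    secrets_findings.append(f"{output_file_name}: {secrets_string}")

final_output_string = "; ".join(secrets_findings)
print(final_output_string)
# handler.py: AWS Access Key on line 3, Secret Keyword on line 10
```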
@@ -1,7 +1,6 @@
import io
import json
import zipfile
from concurrent.futures import as_completed
from enum import Enum
from typing import Any, Optional

@@ -22,6 +21,15 @@ class Lambda(AWSService):
        self.functions = {}
        self.__threading_call__(self.__list_functions__)
        self.__list_tags_for_resource__()

        # We only want to retrieve the Lambda code if the
        # awslambda_function_no_secrets_in_code check is set
        if (
            "awslambda_function_no_secrets_in_code"
            in audit_info.audit_metadata.expected_checks
        ):
            self.__threading_call__(self.__get_function__)

        self.__threading_call__(self.__get_policy__)
        self.__threading_call__(self.__get_function_url_config__)

@@ -62,45 +70,28 @@ class Lambda(AWSService):
                f" {error}"
            )

    def __get_function_code__(self):
        logger.info("Lambda - Getting Function Code...")
        # Use a thread pool to handle the queueing and execution of the __fetch_function_code__ tasks, up to max_workers tasks concurrently.
        lambda_functions_to_fetch = {
            self.thread_pool.submit(
                self.__fetch_function_code__, function.name, function.region
            ): function
            for function in self.functions.values()
        }

        for fetched_lambda_code in as_completed(lambda_functions_to_fetch):
            function = lambda_functions_to_fetch[fetched_lambda_code]
            try:
                function_code = fetched_lambda_code.result()
                if function_code:
                    yield function, function_code
            except Exception as error:
                logger.error(
                    f"{function.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                )

    def __fetch_function_code__(self, function_name, function_region):
    def __get_function__(self, regional_client):
        logger.info("Lambda - Getting Function...")
        try:
            regional_client = self.regional_clients[function_region]
            function_information = regional_client.get_function(
                FunctionName=function_name
            )
            if "Location" in function_information["Code"]:
                code_location_uri = function_information["Code"]["Location"]
                raw_code_zip = requests.get(code_location_uri).content
                return LambdaCode(
                    location=code_location_uri,
                    code_zip=zipfile.ZipFile(io.BytesIO(raw_code_zip)),
                )
            for function in self.functions.values():
                if function.region == regional_client.region:
                    function_information = regional_client.get_function(
                        FunctionName=function.name
                    )
                    if "Location" in function_information["Code"]:
                        code_location_uri = function_information["Code"]["Location"]
                        raw_code_zip = requests.get(code_location_uri).content
                        self.functions[function.arn].code = LambdaCode(
                            location=code_location_uri,
                            code_zip=zipfile.ZipFile(io.BytesIO(raw_code_zip)),
                        )

        except Exception as error:
            logger.error(
                f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                f"{regional_client.region} --"
                f" {error.__class__.__name__}[{error.__traceback__.tb_lineno}]:"
                f" {error}"
            )
            raise

    def __get_policy__(self, regional_client):
        logger.info("Lambda - Getting Policy...")
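Both sides of the diff above end the same way: the downloaded deployment package bytes are wrapped in a `zipfile.ZipFile` over an in-memory buffer (`io.BytesIO`), so the code can later be extracted for scanning without touching disk. A minimal stdlib-only sketch of that round trip, building a tiny in-memory zip instead of fetching one from the presigned `Location` URL:

```python
import io
import zipfile

# Build a tiny in-memory zip standing in for requests.get(code_location_uri).content
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as zf:
    zf.writestr("handler.py", "def handler(event, context):\n    return event\n")
raw_code_zip = buffer.getvalue()  # raw bytes, as the diff receives them

# Same wrapping as LambdaCode.code_zip in the diff: bytes -> BytesIO -> ZipFile
code_zip = zipfile.ZipFile(io.BytesIO(raw_code_zip))
print(code_zip.namelist())
# ['handler.py']
```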
@@ -140,16 +140,7 @@ class Cloudtrail(AWSService):
                        error.response["Error"]["Code"]
                        == "InsightNotEnabledException"
                    ):
                        logger.warning(
                            f"{client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                        )
                    elif (
                        error.response["Error"]["Code"]
                        == "UnsupportedOperationException"
                    ):
                        logger.warning(
                            f"{client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                        )
                        continue
                    else:
                        logger.error(
                            f"{client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
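The branching in the Cloudtrail hunk above downgrades two expected error codes to warnings and treats everything else as an error. A hedged, AWS-free sketch of that classification, using a plain dict in place of botocore's `ClientError.response`:

```python
# Hedged sketch of the error-code triage in the hunk above.
# A dict stands in for botocore ClientError.response.
def classify(error_response):
    code = error_response["Error"]["Code"]
    # Expected for trails without Insights or unsupported operations: warn, keep going
    if code in ("InsightNotEnabledException", "UnsupportedOperationException"):
        return "warning"
    return "error"

print(classify({"Error": {"Code": "InsightNotEnabledException"}}))
# warning
print(classify({"Error": {"Code": "AccessDenied"}}))
# error
```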
@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -23,13 +22,26 @@ class cloudwatch_changes_to_network_acls_alarm_configured(Check):
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn
        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings
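The three numbered steps that this diff inlines into every cloudwatch check can be sketched in isolation: collect log groups from trail ARNs, match filter patterns against them, then look for an alarm on the matched metric. This is a minimal sketch with tiny stand-in dataclasses, not Prowler's actual models; the ARN, pattern, and metric names below are hypothetical.

```python
import re
from dataclasses import dataclass

# Stand-ins for Prowler's Trail / MetricFilter / MetricAlarm models (hypothetical)
@dataclass
class Trail:
    log_group_arn: str

@dataclass
class MetricFilter:
    log_group: str
    pattern: str
    metric: str

@dataclass
class MetricAlarm:
    metric: str

pattern = r"\$\.eventName\s*=\s*.?CreateNetworkAcl.?"
trails = [Trail("arn:aws:logs:us-east-1:123456789012:log-group:trail-lg:*")]
metric_filters = [
    MetricFilter("trail-lg", "{ $.eventName = CreateNetworkAcl }", "acl-changes")
]
metric_alarms = [MetricAlarm("acl-changes")]

status = "FAIL"
# 1. Log group names wired to CloudTrail trails (field 6 of the log group ARN)
log_groups = [t.log_group_arn.split(":")[6] for t in trails]
# 2. Metric filters on those log groups whose pattern matches
for mf in metric_filters:
    if mf.log_group in log_groups and re.search(pattern, mf.pattern, flags=re.DOTALL):
        status = "FAIL"  # filter exists but no alarm found yet
        # 3. An alarm on the filter's metric flips the check to PASS
        if any(alarm.metric == mf.metric for alarm in metric_alarms):
            status = "PASS"
print(status)
# PASS
```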
@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -23,13 +22,26 @@ class cloudwatch_changes_to_network_gateways_alarm_configured(Check):
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn
        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings
@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -23,13 +22,26 @@ class cloudwatch_changes_to_network_route_tables_alarm_configured(Check):
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn
        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings
@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -23,13 +22,26 @@ class cloudwatch_changes_to_vpcs_alarm_configured(Check):
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn
        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings
@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -25,13 +24,26 @@ class cloudwatch_log_metric_filter_and_alarm_for_aws_config_configuration_change
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn
        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings
@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -25,13 +24,26 @@ class cloudwatch_log_metric_filter_and_alarm_for_cloudtrail_configuration_change
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn
        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings
@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_authentication_failures(Check):
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn
        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings
@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_aws_organizations_changes(Check):
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn
        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings
@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_disable_or_scheduled_deletion_of_kms_cmk(Chec
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn
        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings
@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -23,14 +22,26 @@ class cloudwatch_log_metric_filter_for_s3_bucket_policy_changes(Check):
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn

        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings
@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_policy_changes(Check):
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn
        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings
@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_root_usage(Check):
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn
        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings

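The checks above take field index 6 of the colon-separated CloudTrail log group ARN as the log group name. A minimal sketch with a hypothetical ARN shows why that index is correct for a CloudWatch Logs ARN:

```python
# Hypothetical sample ARN; fields of a CloudWatch Logs log-group ARN are
# arn:partition:service:region:account:log-group:<name>:*,
# so the log group name sits at colon-separated index 6.
log_group_arn = "arn:aws:logs:eu-west-1:123456789012:log-group:CloudTrail/DefaultLogGroup:*"

log_group_name = log_group_arn.split(":")[6]
print(log_group_name)  # CloudTrail/DefaultLogGroup
```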
@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_security_group_changes(Check):
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn
        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings

@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_sign_in_without_mfa(Check):
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn
        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings

@@ -1,3 +1,5 @@
import re

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
    cloudtrail_client,
@@ -5,9 +7,6 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
    cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
    check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client


@@ -23,13 +22,26 @@ class cloudwatch_log_metric_filter_unauthorized_api_calls(Check):
        report.region = cloudwatch_client.region
        report.resource_id = cloudtrail_client.audited_account
        report.resource_arn = cloudtrail_client.audited_account_arn
        report = check_cloudwatch_log_metric_filter(
            pattern,
            cloudtrail_client.trails,
            logs_client.metric_filters,
            cloudwatch_client.metric_alarms,
            report,
        )
        # 1. Iterate for CloudWatch Log Group in CloudTrail trails
        log_groups = []
        for trail in cloudtrail_client.trails:
            if trail.log_group_arn:
                log_groups.append(trail.log_group_arn.split(":")[6])
        # 2. Describe metric filters for previous log groups
        for metric_filter in logs_client.metric_filters:
            if metric_filter.log_group in log_groups:
                if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
                    report.resource_id = metric_filter.log_group
                    report.resource_arn = metric_filter.arn
                    report.region = metric_filter.region
                    report.status = "FAIL"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                    # 3. Check if there is an alarm for the metric
                    for alarm in cloudwatch_client.metric_alarms:
                        if alarm.metric == metric_filter.metric:
                            report.status = "PASS"
                            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                            break

        findings.append(report)
        return findings

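All four checks match the configured pattern against the stored metric filter with `re.search(..., flags=re.DOTALL)`, so `.`-based patterns can span the newlines that multi-line filter patterns may contain. A minimal sketch with a hypothetical pattern and filter value:

```python
import re

# Hypothetical CIS-style pattern and a multi-line metric filter value;
# re.DOTALL makes "." match across the embedded newline.
pattern = r"\$\.userIdentity\.type\s*=\s*.?Root.?"
metric_filter_pattern = (
    '{ $.userIdentity.type = "Root"\n'
    "  && $.userIdentity.invokedBy NOT EXISTS }"
)

matched = bool(re.search(pattern, metric_filter_pattern, flags=re.DOTALL))
print(matched)  # True
```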
@@ -1,34 +0,0 @@
import re

from prowler.lib.check.models import Check_Report_AWS


def check_cloudwatch_log_metric_filter(
    metric_filter_pattern: str,
    trails: list,
    metric_filters: list,
    metric_alarms: list,
    report: Check_Report_AWS,
):
    # 1. Iterate for CloudWatch Log Group in CloudTrail trails
    log_groups = []
    for trail in trails:
        if trail.log_group_arn:
            log_groups.append(trail.log_group_arn.split(":")[6])
    # 2. Describe metric filters for previous log groups
    for metric_filter in metric_filters:
        if metric_filter.log_group in log_groups:
            if re.search(metric_filter_pattern, metric_filter.pattern, flags=re.DOTALL):
                report.resource_id = metric_filter.log_group
                report.resource_arn = metric_filter.arn
                report.region = metric_filter.region
                report.status = "FAIL"
                report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
                # 3. Check if there is an alarm for the metric
                for alarm in metric_alarms:
                    if alarm.metric == metric_filter.metric:
                        report.status = "PASS"
                        report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                        break

    return report
@@ -1,4 +0,0 @@
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
from prowler.providers.aws.services.cognito.cognito_service import CognitoIDP

cognito_idp_client = CognitoIDP(current_audit_info)
@@ -1,122 +0,0 @@
from datetime import datetime
from typing import Optional

from pydantic import BaseModel

from prowler.lib.logger import logger
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
from prowler.providers.aws.lib.service.service import AWSService


################## CognitoIDP
class CognitoIDP(AWSService):
    def __init__(self, audit_info):
        super().__init__("cognito-idp", audit_info)
        self.user_pools = {}
        self.__threading_call__(self.__list_user_pools__)
        self.__describe_user_pools__()
        self.__get_user_pool_mfa_config__()

    def __list_user_pools__(self, regional_client):
        logger.info("Cognito - Listing User Pools...")
        try:
            user_pools_paginator = regional_client.get_paginator("list_user_pools")
            for page in user_pools_paginator.paginate(MaxResults=60):
                for user_pool in page["UserPools"]:
                    arn = f"arn:{self.audited_partition}:cognito-idp:{regional_client.region}:{self.audited_account}:userpool/{user_pool['Id']}"
                    if not self.audit_resources or (
                        is_resource_filtered(arn, self.audit_resources)
                    ):
                        try:
                            self.user_pools[arn] = UserPool(
                                id=user_pool["Id"],
                                arn=arn,
                                name=user_pool["Name"],
                                region=regional_client.region,
                                last_modified=user_pool["LastModifiedDate"],
                                creation_date=user_pool["CreationDate"],
                                status=user_pool.get("Status", "Disabled"),
                            )
                        except Exception as error:
                            logger.error(
                                f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                            )
        except Exception as error:
            logger.error(
                f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )

    def __describe_user_pools__(self):
        logger.info("Cognito - Describing User Pools...")
        try:
            for user_pool in self.user_pools.values():
                try:
                    user_pool_details = self.regional_clients[
                        user_pool.region
                    ].describe_user_pool(UserPoolId=user_pool.id)["UserPool"]
                    user_pool.password_policy = user_pool_details.get(
                        "Policies", {}
                    ).get("PasswordPolicy", {})
                    user_pool.deletion_protection = user_pool_details.get(
                        "DeletionProtection", "INACTIVE"
                    )
                    user_pool.advanced_security_mode = user_pool_details.get(
                        "UserPoolAddOns", {}
                    ).get("AdvancedSecurityMode", "OFF")
                    user_pool.tags = [user_pool_details.get("UserPoolTags", "")]
                except Exception as error:
                    logger.error(
                        f"{user_pool.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                    )
        except Exception as error:
            logger.error(
                f"{user_pool.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )

    def __get_user_pool_mfa_config__(self):
        logger.info("Cognito - Getting User Pool MFA Configuration...")
        try:
            for user_pool in self.user_pools.values():
                try:
                    mfa_config = self.regional_clients[
                        user_pool.region
                    ].get_user_pool_mfa_config(UserPoolId=user_pool.id)
                    if mfa_config["MfaConfiguration"] != "OFF":
                        user_pool.mfa_config = MFAConfig(
                            sms_authentication=mfa_config.get(
                                "SmsMfaConfiguration", {}
                            ),
                            software_token_mfa_authentication=mfa_config.get(
                                "SoftwareTokenMfaConfiguration", {}
                            ),
                            status=mfa_config["MfaConfiguration"],
                        )
                except Exception as error:
                    logger.error(
                        f"{user_pool.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                    )
        except Exception as error:
            logger.error(
                f"{user_pool.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )


class MFAConfig(BaseModel):
    sms_authentication: Optional[dict]
    software_token_mfa_authentication: Optional[dict]
    status: str


class UserPool(BaseModel):
    id: str
    arn: str
    name: str
    region: str
    advanced_security_mode: str = "OFF"
    deletion_protection: str = "INACTIVE"
    last_modified: datetime
    creation_date: datetime
    status: str
    password_policy: Optional[dict]
    mfa_config: Optional[MFAConfig]
    tags: Optional[list] = []
@@ -17,7 +17,7 @@ class EC2(AWSService):
        super().__init__(__class__.__name__, audit_info)
        self.instances = []
        self.__threading_call__(self.__describe_instances__)
        self.__threading_call__(self.__get_instance_user_data__, self.instances)
        self.__get_instance_user_data__()
        self.security_groups = []
        self.regions_with_sgs = []
        self.__threading_call__(self.__describe_security_groups__)
@@ -27,7 +27,7 @@ class EC2(AWSService):
        self.volumes_with_snapshots = {}
        self.regions_with_snapshots = {}
        self.__threading_call__(self.__describe_snapshots__)
        self.__threading_call__(self.__determine_public_snapshots__, self.snapshots)
        self.__get_snapshot_public__()
        self.network_interfaces = []
        self.__threading_call__(self.__describe_public_network_interfaces__)
        self.__threading_call__(self.__describe_sg_network_interfaces__)
@@ -36,11 +36,12 @@ class EC2(AWSService):
        self.volumes = []
        self.__threading_call__(self.__describe_volumes__)
        self.ebs_encryption_by_default = []
        self.__threading_call__(self.__get_ebs_encryption_settings__)
        self.__threading_call__(self.__get_ebs_encryption_by_default__)
        self.elastic_ips = []
        self.__threading_call__(self.__describe_ec2_addresses__)
        self.__threading_call__(self.__describe_addresses__)

    def __describe_instances__(self, regional_client):
        logger.info("EC2 - Describing EC2 Instances...")
        try:
            describe_instances_paginator = regional_client.get_paginator(
                "describe_instances"
@@ -105,6 +106,7 @@ class EC2(AWSService):
            )

    def __describe_security_groups__(self, regional_client):
        logger.info("EC2 - Describing Security Groups...")
        try:
            describe_security_groups_paginator = regional_client.get_paginator(
                "describe_security_groups"
@@ -153,6 +155,7 @@ class EC2(AWSService):
            )

    def __describe_network_acls__(self, regional_client):
        logger.info("EC2 - Describing Network ACLs...")
        try:
            describe_network_acls_paginator = regional_client.get_paginator(
                "describe_network_acls"
@@ -183,6 +186,7 @@ class EC2(AWSService):
            )

    def __describe_snapshots__(self, regional_client):
        logger.info("EC2 - Describing Snapshots...")
        try:
            snapshots_in_region = False
            describe_snapshots_paginator = regional_client.get_paginator(
@@ -215,30 +219,35 @@ class EC2(AWSService):
                f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )

    def __determine_public_snapshots__(self, snapshot):
        try:
            regional_client = self.regional_clients[snapshot.region]
            snapshot_public = regional_client.describe_snapshot_attribute(
                Attribute="createVolumePermission", SnapshotId=snapshot.id
            )
            for permission in snapshot_public["CreateVolumePermissions"]:
                if "Group" in permission:
                    if permission["Group"] == "all":
                        snapshot.public = True

        except ClientError as error:
            if error.response["Error"]["Code"] == "InvalidSnapshot.NotFound":
                logger.warning(
                    f"{snapshot.region} --"
                    f" {error.__class__.__name__}[{error.__traceback__.tb_lineno}]:"
                    f" {error}"
    def __get_snapshot_public__(self):
        logger.info("EC2 - Getting snapshot volume attribute permissions...")
        for snapshot in self.snapshots:
            try:
                regional_client = self.regional_clients[snapshot.region]
                snapshot_public = regional_client.describe_snapshot_attribute(
                    Attribute="createVolumePermission", SnapshotId=snapshot.id
                )
                for permission in snapshot_public["CreateVolumePermissions"]:
                    if "Group" in permission:
                        if permission["Group"] == "all":
                            snapshot.public = True

            except ClientError as error:
                if error.response["Error"]["Code"] == "InvalidSnapshot.NotFound":
                    logger.warning(
                        f"{snapshot.region} --"
                        f" {error.__class__.__name__}[{error.__traceback__.tb_lineno}]:"
                        f" {error}"
                    )
                    continue

            except Exception as error:
                logger.error(
                    f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                )
        except Exception as error:
            logger.error(
                f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )

    def __describe_public_network_interfaces__(self, regional_client):
        logger.info("EC2 - Describing Network Interfaces...")
        try:
            # Get Network Interfaces with Public IPs
            describe_network_interfaces_paginator = regional_client.get_paginator(
@@ -265,6 +274,7 @@ class EC2(AWSService):
            )

    def __describe_sg_network_interfaces__(self, regional_client):
        logger.info("EC2 - Describing Network Interfaces...")
        try:
            # Get Network Interfaces for Security Groups
            for sg in self.security_groups:
@@ -289,25 +299,30 @@ class EC2(AWSService):
                f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )

    def __get_instance_user_data__(self, instance):
        try:
            regional_client = self.regional_clients[instance.region]
            user_data = regional_client.describe_instance_attribute(
                Attribute="userData", InstanceId=instance.id
            )["UserData"]
            if "Value" in user_data:
                instance.user_data = user_data["Value"]
        except ClientError as error:
            if error.response["Error"]["Code"] == "InvalidInstanceID.NotFound":
                logger.warning(
    def __get_instance_user_data__(self):
        logger.info("EC2 - Getting instance user data...")
        for instance in self.instances:
            try:
                regional_client = self.regional_clients[instance.region]
                user_data = regional_client.describe_instance_attribute(
                    Attribute="userData", InstanceId=instance.id
                )["UserData"]
                if "Value" in user_data:
                    instance.user_data = user_data["Value"]

            except ClientError as error:
                if error.response["Error"]["Code"] == "InvalidInstanceID.NotFound":
                    logger.warning(
                        f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                    )
                    continue
            except Exception as error:
                logger.error(
                    f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                )
        except Exception as error:
            logger.error(
                f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )

    def __describe_images__(self, regional_client):
        logger.info("EC2 - Describing Images...")
        try:
            for image in regional_client.describe_images(Owners=["self"])["Images"]:
                arn = f"arn:{self.audited_partition}:ec2:{regional_client.region}:{self.audited_account}:image/{image['ImageId']}"
@@ -330,6 +345,7 @@ class EC2(AWSService):
            )

    def __describe_volumes__(self, regional_client):
        logger.info("EC2 - Describing Volumes...")
        try:
            describe_volumes_paginator = regional_client.get_paginator(
                "describe_volumes"
@@ -354,7 +370,8 @@ class EC2(AWSService):
                f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )

    def __describe_ec2_addresses__(self, regional_client):
    def __describe_addresses__(self, regional_client):
        logger.info("EC2 - Describing Elastic IPs...")
        try:
            for address in regional_client.describe_addresses()["Addresses"]:
                public_ip = None
@@ -385,7 +402,8 @@ class EC2(AWSService):
                f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )

    def __get_ebs_encryption_settings__(self, regional_client):
    def __get_ebs_encryption_by_default__(self, regional_client):
        logger.info("EC2 - Get EBS Encryption By Default...")
        try:
            volumes_in_region = False
            for volume in self.volumes:

@@ -4,6 +4,7 @@ from pydantic import BaseModel

from prowler.lib.logger import logger
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
from prowler.providers.aws.aws_provider import generate_regional_clients
from prowler.providers.aws.lib.service.service import AWSService


@@ -12,6 +13,7 @@ class EKS(AWSService):
    def __init__(self, audit_info):
        # Call AWSService's __init__
        super().__init__(__class__.__name__, audit_info)
        self.regional_clients = generate_regional_clients(self.service, audit_info)
        self.clusters = []
        self.__threading_call__(self.__list_clusters__)
        self.__describe_cluster__(self.regional_clients)

@@ -1,6 +1,5 @@
from typing import Optional

from botocore.exceptions import ClientError
from pydantic import BaseModel

from prowler.lib.logger import logger
@@ -74,15 +73,7 @@ class ElastiCache(AWSService):
                    cluster.tags = regional_client.list_tags_for_resource(
                        ResourceName=cluster.arn
                    )["TagList"]
                except ClientError as error:
                    if error.response["Error"]["Code"] == "CacheClusterNotFound":
                        logger.warning(
                            f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                        )
                    else:
                        logger.error(
                            f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                        )

        except Exception as error:
            logger.error(
                f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

@@ -33,7 +33,7 @@ class elbv2_insecure_ssl_ciphers(Check):
                    and listener.ssl_policy not in secure_ssl_policies
                ):
                    report.status = "FAIL"
                    report.status_extended = f"ELBv2 {lb.name} has listeners with insecure SSL protocols or ciphers ({listener.ssl_policy})."
                    report.status_extended = f"ELBv2 {lb.name} has listeners with insecure SSL protocols or ciphers."

            findings.append(report)

@@ -13,21 +13,17 @@ class fms_policy_compliant(Check):
        report.status = "PASS"
        report.status_extended = "FMS enabled with all compliant accounts."
        non_compliant_policy = False
        if fms_client.fms_policies:
            for policy in fms_client.fms_policies:
                for policy_to_account in policy.compliance_status:
                    if policy_to_account.status == "NON_COMPLIANT":
                        report.status = "FAIL"
                        report.status_extended = f"FMS with non-compliant policy {policy.name} for account {policy_to_account.account_id}."
                        report.resource_id = policy.id
                        report.resource_arn = policy.arn
                        non_compliant_policy = True
                        break
                if non_compliant_policy:
        for policy in fms_client.fms_policies:
            for policy_to_account in policy.compliance_status:
                if policy_to_account.status == "NON_COMPLIANT":
                    report.status = "FAIL"
                    report.status_extended = f"FMS with non-compliant policy {policy.name} for account {policy_to_account.account_id}."
                    report.resource_id = policy.id
                    report.resource_arn = policy.arn
                    non_compliant_policy = True
                    break
        else:
            report.status = "FAIL"
            report.status_extended = f"FMS without any compliant policy for account {fms_client.audited_account}."
            if non_compliant_policy:
                break

        findings.append(report)
        return findings

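The check above uses a flag plus nested `break` so that scanning stops as soon as one NON_COMPLIANT account is found. A minimal sketch of that flow with hypothetical policy data:

```python
# Hypothetical compliance data; once one NON_COMPLIANT status is found,
# the flag breaks out of both the inner and the outer loop.
policies = [
    {"name": "policy-a", "statuses": ["COMPLIANT", "COMPLIANT"]},
    {"name": "policy-b", "statuses": ["NON_COMPLIANT", "COMPLIANT"]},
    {"name": "policy-c", "statuses": ["COMPLIANT"]},
]

non_compliant_policy = False
first_failing = None
for policy in policies:
    for status in policy["statuses"]:
        if status == "NON_COMPLIANT":
            first_failing = policy["name"]
            non_compliant_policy = True
            break
    if non_compliant_policy:
        break

print(first_failing)  # policy-b
```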
@@ -5,6 +5,8 @@ from prowler.lib.logger import logger
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
from prowler.providers.aws.lib.service.service import AWSService

# from prowler.providers.aws.aws_provider import generate_regional_clients


################## FMS
class FMS(AWSService):
@@ -66,9 +68,7 @@ class FMS(AWSService):
                for page in list_compliance_status_paginator.paginate(
                    PolicyId=fms_policy.id
                ):
                    for fms_compliance_status in page.get(
                        "PolicyComplianceStatusList", []
                    ):
                    for fms_compliance_status in page["PolicyComplianceStatusList"]:
                        fms_policy.compliance_status.append(
                            PolicyAccountComplianceStatus(
                                account_id=fms_compliance_status.get("MemberAccount"),

@@ -1,6 +1,6 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.lib.policy_condition_parser.policy_condition_parser import (
    is_condition_block_restrictive,
    is_account_only_allowed_in_condition,
)
from prowler.providers.aws.services.iam.iam_client import iam_client

@@ -30,7 +30,7 @@ class iam_role_cross_service_confused_deputy_prevention(Check):
                    and "Service" in statement["Principal"]
                    # Check to see if the appropriate condition statements have been implemented
                    and "Condition" in statement
                    and is_condition_block_restrictive(
                    and is_account_only_allowed_in_condition(
                        statement["Condition"], iam_client.audited_account
                    )
                ):

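The hunk above swaps `is_condition_block_restrictive` for `is_account_only_allowed_in_condition` when checking confused-deputy protection. A hypothetical, heavily simplified sketch of what an "account only allowed in condition" predicate can look like (the real parser in `policy_condition_parser.py` handles far more operators and keys):

```python
# Hypothetical simplification: accept a condition block only when an
# aws:SourceAccount-style key pins access to exactly the audited account.
def account_only_allowed(condition: dict, audited_account: str) -> bool:
    values = condition.get("StringEquals", {}).get("aws:SourceAccount")
    if values is None:
        return False
    if isinstance(values, str):
        values = [values]
    return values == [audited_account]

good = {"StringEquals": {"aws:SourceAccount": "123456789012"}}
bad = {"StringEquals": {"aws:SourceAccount": ["123456789012", "999999999999"]}}
print(account_only_allowed(good, "123456789012"))  # True
print(account_only_allowed(bad, "123456789012"))   # False
```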
@@ -494,30 +494,11 @@ class IAM(AWSService):
                                document=inline_group_policy_doc,
                            )
                        )
                    except ClientError as error:
                        if error.response["Error"]["Code"] == "NoSuchEntity":
                            logger.warning(
                                f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                            )
                        else:
                            logger.error(
                                f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                            )

                    except Exception as error:
                        logger.error(
                            f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                        )
                group.inline_policies = inline_group_policies
            except ClientError as error:
                if error.response["Error"]["Code"] == "NoSuchEntity":
                    logger.warning(
                        f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                    )
                else:
                    logger.error(
                        f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                    )

            except Exception as error:
                logger.error(

@@ -48,12 +48,11 @@ class organizations_scp_check_deny_regions(Check):
                            and "aws:RequestedRegion"
                            in statement["Condition"]["StringNotEquals"]
                        ):
                            if all(
                                region
                                in statement["Condition"]["StringNotEquals"][
                            if (
                                organizations_enabled_regions
                                == statement["Condition"]["StringNotEquals"][
                                    "aws:RequestedRegion"
                                ]
                                for region in organizations_enabled_regions
                            ):
                                # All defined regions are restricted, we exit here, no need to continue.
                                report.status = "PASS"
@@ -74,12 +73,11 @@ class organizations_scp_check_deny_regions(Check):
                            and "aws:RequestedRegion"
                            in statement["Condition"]["StringEquals"]
                        ):
                            if all(
                                region
                                in statement["Condition"]["StringEquals"][
                            if (
                                organizations_enabled_regions
                                == statement["Condition"]["StringEquals"][
                                    "aws:RequestedRegion"
                                ]
                                for region in organizations_enabled_regions
                            ):
                                # All defined regions are restricted, we exit here, no need to continue.
                                report.status = "PASS"

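The two hunks above trade an `all(region in ...)` membership test for strict list equality against `organizations_enabled_regions`. A small sketch with hypothetical region lists shows the behavioural difference between the two forms: equality fails when the SCP lists a superset of the enabled regions, while the membership test still passes.

```python
# Hypothetical region lists illustrating strict equality vs. membership.
enabled_regions = ["eu-west-1", "us-east-1"]
scp_regions = ["eu-west-1", "us-east-1", "us-west-2"]

strict_equality = enabled_regions == scp_regions
all_covered = all(region in scp_regions for region in enabled_regions)
print(strict_equality)  # False
print(all_covered)      # True
```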
@@ -28,7 +28,6 @@ class S3(AWSService):
        self.__threading_call__(self.__get_bucket_tagging__)

    # In the S3 service we override the "__threading_call__" method because we spawn a process per bucket instead of per region
    # TODO: Replace the above function with the service __threading_call__ using the buckets as the iterator
    def __threading_call__(self, call):
        threads = []
        for bucket in self.buckets:
@@ -102,15 +101,6 @@ class S3(AWSService):
            if "MFADelete" in bucket_versioning:
                if "Enabled" == bucket_versioning["MFADelete"]:
                    bucket.mfa_delete = True
        except ClientError as error:
            if error.response["Error"]["Code"] == "NoSuchBucket":
                logger.warning(
                    f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                )
            else:
                logger.error(
                    f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                )
        except Exception as error:
            if bucket.region:
                logger.error(
@@ -163,15 +153,6 @@ class S3(AWSService):
                bucket.logging_target_bucket = bucket_logging["LoggingEnabled"][
                    "TargetBucket"
                ]
        except ClientError as error:
            if error.response["Error"]["Code"] == "NoSuchBucket":
                logger.warning(
                    f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                )
            else:
                logger.error(
                    f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                )
        except Exception as error:
            if regional_client:
                logger.error(
@@ -243,15 +224,6 @@ class S3(AWSService):
                    grantee.permission = grant["Permission"]
                    grantees.append(grantee)
                bucket.acl_grantees = grantees
        except ClientError as error:
            if error.response["Error"]["Code"] == "NoSuchBucket":
                logger.warning(
                    f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                )
            else:
                logger.error(
                    f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                )
        except Exception as error:
            if regional_client:
                logger.error(
@@ -269,26 +241,18 @@ class S3(AWSService):
            bucket.policy = json.loads(
                regional_client.get_bucket_policy(Bucket=bucket.name)["Policy"]
            )
        except ClientError as error:
            if error.response["Error"]["Code"] == "NoSuchBucketPolicy":
                bucket.policy = {}
            elif error.response["Error"]["Code"] == "NoSuchBucket":
                logger.warning(
                    f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                )
            else:
                logger.error(
                    f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                )
        except Exception as error:
            if regional_client:
                logger.error(
                    f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                )
            if "NoSuchBucketPolicy" in str(error):
                bucket.policy = {}
            else:
                logger.error(
                    f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                )
                if regional_client:
                    logger.error(
                        f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                    )
                else:
                    logger.error(
                        f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
                    )

    def __get_bucket_ownership_controls__(self, bucket):
        logger.info("S3 - Get buckets ownership controls...")

@@ -1,6 +1,6 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.lib.policy_condition_parser.policy_condition_parser import (
is_condition_block_restrictive,
is_account_only_allowed_in_condition,
)
from prowler.providers.aws.services.sns.sns_client import sns_client

@@ -35,7 +35,7 @@ class sns_topics_not_publicly_accessible(Check):
):
if (
"Condition" in statement
and is_condition_block_restrictive(
and is_account_only_allowed_in_condition(
statement["Condition"], sns_client.audited_account
)
):

@@ -1,6 +1,6 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.lib.policy_condition_parser.policy_condition_parser import (
is_condition_block_restrictive,
is_account_only_allowed_in_condition,
)
from prowler.providers.aws.services.sqs.sqs_client import sqs_client

@@ -32,10 +32,8 @@ class sqs_queues_not_publicly_accessible(Check):
)
):
if "Condition" in statement:
if is_condition_block_restrictive(
statement["Condition"],
sqs_client.audited_account,
True,
if is_account_only_allowed_in_condition(
statement["Condition"], sqs_client.audited_account
):
report.status_extended = f"SQS queue {queue.id} is not public because its policy only allows access from the same account."
else:

@@ -7,7 +7,7 @@
"SubServiceName": "",
"ResourceIdTemplate": "arn:aws:iam::AWS_ACCOUNT_NUMBER:root",
"Severity": "low",
"ResourceType": "Other",
"ResourceType": "",
"Description": "Check if a Premium support plan is subscribed.",
"Risk": "Ensure that the appropriate support level is enabled for the necessary AWS accounts. For example, if an AWS account is being used to host production systems and environments, it is highly recommended that the minimum AWS Support Plan should be Business.",
"RelatedUrl": "https://aws.amazon.com/premiumsupport/plans/",

@@ -2,7 +2,7 @@ from re import compile

from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.lib.policy_condition_parser.policy_condition_parser import (
is_condition_block_restrictive,
is_account_only_allowed_in_condition,
)
from prowler.providers.aws.services.vpc.vpc_client import vpc_client

@@ -35,7 +35,7 @@ class vpc_endpoint_connections_trust_boundaries(Check):

if "Condition" in statement:
for account_id in trusted_account_ids:
if is_condition_block_restrictive(
if is_account_only_allowed_in_condition(
statement["Condition"], account_id
):
access_from_trusted_accounts = True
@@ -70,7 +70,7 @@ class vpc_endpoint_connections_trust_boundaries(Check):
access_from_trusted_accounts = False
if "Condition" in statement:
for account_id in trusted_account_ids:
if is_condition_block_restrictive(
if is_account_only_allowed_in_condition(
statement["Condition"], account_id
):
access_from_trusted_accounts = True
@@ -102,7 +102,7 @@ class vpc_endpoint_connections_trust_boundaries(Check):

if "Condition" in statement:
for account_id in trusted_account_ids:
if is_condition_block_restrictive(
if is_account_only_allowed_in_condition(
statement["Condition"], account_id
):
access_from_trusted_accounts = True

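The hunks above swap `is_condition_block_restrictive` for `is_account_only_allowed_in_condition` across the SNS, SQS, and VPC checks, so a statement only counts as restricted when its `Condition` block pins access to the audited account. A minimal sketch of what such a helper can look like (the function name comes from the diff; this body is an illustrative assumption, not the repository's implementation):

```python
# Illustrative sketch: decide whether an IAM policy Condition block
# restricts access to exactly one audited account. The real helper in
# the repository may handle more operators and condition keys.
ACCOUNT_CONDITION_KEYS = {"aws:sourceaccount", "aws:sourceowner"}


def is_account_only_allowed_in_condition(condition: dict, audited_account: str) -> bool:
    for operator, key_values in condition.items():
        if operator.lower() not in ("stringequals", "stringlike"):
            continue
        for key, values in key_values.items():
            if key.lower() in ACCOUNT_CONDITION_KEYS:
                # Normalize a single string value into a list
                if isinstance(values, str):
                    values = [values]
                # Only restrictive if the sole allowed value is the audited account
                if set(values) == {audited_account}:
                    return True
    return False


print(is_account_only_allowed_in_condition(
    {"StringEquals": {"aws:SourceAccount": "123456789012"}}, "123456789012"
))  # True
```

The stricter check means a condition that also allows other accounts (or uses unrelated keys) no longer marks the resource as non-public.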
@@ -63,7 +63,7 @@ GCP Account: {Fore.YELLOW}[{profile}]{Style.RESET_ALL} GCP Project IDs: {Fore.Y
def print_azure_credentials(self, audit_info: Azure_Audit_Info):
printed_subscriptions = []
for key, value in audit_info.identity.subscriptions.items():
intermediate = f"{key} : {value}"
intermediate = key + " : " + value
printed_subscriptions.append(intermediate)
report = f"""
This report is being generated using the identity below:
@@ -85,7 +85,6 @@ Azure Identity Type: {Fore.YELLOW}[{audit_info.identity.identity_type}]{Style.RE
current_audit_info.assumed_role_info.role_arn = input_role
input_session_duration = arguments.get("session_duration")
input_external_id = arguments.get("external_id")
input_role_session_name = arguments.get("role_session_name")

# STS Endpoint Region
sts_endpoint_region = arguments.get("sts_endpoint_region")
@@ -154,9 +153,6 @@ Azure Identity Type: {Fore.YELLOW}[{audit_info.identity.identity_type}]{Style.RE
)
current_audit_info.assumed_role_info.external_id = input_external_id
current_audit_info.assumed_role_info.mfa_enabled = input_mfa
current_audit_info.assumed_role_info.role_session_name = (
input_role_session_name
)

# Check if role arn is valid
try:

@@ -69,8 +69,7 @@ class Provider_Output_Options:
if arguments.output_directory:
if not isdir(arguments.output_directory):
if arguments.output_modes:
# exist_ok is set to True not to raise FileExistsError
makedirs(arguments.output_directory, exist_ok=True)
makedirs(arguments.output_directory)


class Azure_Output_Options(Provider_Output_Options):
@@ -135,7 +134,6 @@ class Aws_Output_Options(Provider_Output_Options):

# Security Hub Outputs
self.security_hub_enabled = arguments.security_hub
self.send_sh_only_fails = arguments.send_sh_only_fails
if arguments.security_hub:
if not self.output_modes:
self.output_modes = ["json-asff"]

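The output-options hunk above trades `makedirs(arguments.output_directory, exist_ok=True)` for a plain `makedirs` call guarded by an `isdir` check. The behavioral difference is that plain `os.makedirs` raises `FileExistsError` when the directory already exists, while `exist_ok=True` makes the call idempotent. A small standalone demonstration (the temporary path is illustrative):

```python
import os
import tempfile

# A scratch directory path that does not exist yet
target = os.path.join(tempfile.mkdtemp(), "prowler-output")

os.makedirs(target)  # first call creates the directory
try:
    os.makedirs(target)  # plain makedirs raises on an existing path
except FileExistsError:
    print("plain makedirs raised FileExistsError")

os.makedirs(target, exist_ok=True)  # idempotent: no error on an existing path
print("exist_ok call succeeded")
```

Guarding with `isdir` first, as the diff does, avoids the exception at the cost of a small race window between the check and the create.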
@@ -1,7 +1,6 @@
import os
import sys

from colorama import Fore, Style
from google import auth
from googleapiclient import discovery

@@ -90,7 +89,4 @@ class GCP_Provider:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
print(
f"\n{Fore.YELLOW}Cloud Resource Manager API {Style.RESET_ALL}has not been used before or it is disabled.\nEnable it by visiting https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/ then retry."
)
return []

@@ -27,7 +27,7 @@ class GCPService:
self.default_project_id = audit_info.default_project_id
self.region = region
self.client = self.__generate_client__(
self.service, api_version, audit_info.credentials
service, api_version, audit_info.credentials
)
# Only project ids that have their API enabled will be scanned
self.project_ids = self.__is_api_active__(audit_info.project_ids)
@@ -62,7 +62,7 @@ class GCPService:
project_ids.append(project_id)
else:
print(
f"\n{Fore.YELLOW}{self.service} API {Style.RESET_ALL}has not been used in project {project_id} before or it is disabled.\nEnable it by visiting https://console.developers.google.com/apis/api/{self.service}.googleapis.com/overview?project={project_id} then retry."
f"\n{Fore.YELLOW}{self.service} API {Style.RESET_ALL}has not been used in project {project_id} before or it is disabled.\nEnable it by visiting https://console.developers.google.com/apis/api/dataproc.googleapis.com/overview?project={project_id} then retry."
)
except Exception as error:
logger.error(

@@ -22,7 +22,7 @@ packages = [
{include = "prowler"}
]
readme = "README.md"
version = "3.12.0"
version = "3.11.3"

[tool.poetry.dependencies]
alive-progress = "3.1.5"
@@ -38,36 +38,35 @@ boto3 = "1.26.165"
botocore = "1.29.165"
colorama = "0.4.6"
detect-secrets = "1.4.0"
google-api-python-client = "2.111.0"
google-auth-httplib2 = ">=0.1,<0.3"
jsonschema = "4.20.0"
google-api-python-client = "2.108.0"
google-auth-httplib2 = "^0.1.0"
jsonschema = "4.18.0"
mkdocs = {version = "1.5.3", optional = true}
mkdocs-material = {version = "9.5.3", optional = true}
mkdocs-material = {version = "9.4.14", optional = true}
msgraph-core = "0.2.2"
msrestazure = "^0.6.4"
pydantic = "1.10.13"
python = ">=3.9,<3.12"
schema = "0.7.5"
shodan = "1.31.0"
slack-sdk = "3.26.1"
shodan = "1.30.1"
slack-sdk = "3.26.0"
tabulate = "0.9.0"

[tool.poetry.extras]
docs = ["mkdocs", "mkdocs-material"]

[tool.poetry.group.dev.dependencies]
bandit = "1.7.6"
bandit = "1.7.5"
black = "22.12.0"
coverage = "7.4.0"
docker = "7.0.0"
coverage = "7.3.2"
docker = "6.1.3"
flake8 = "6.1.0"
freezegun = "1.4.0"
freezegun = "1.2.2"
mock = "5.1.0"
moto = {extras = ["all"], version = "4.2.12"}
moto = {extras = ["all"], version = "4.2.10"}
openapi-spec-validator = "0.7.1"
openapi-schema-validator = "0.6.2"
pylint = "3.0.3"
pytest = "7.4.4"
pylint = "3.0.2"
pytest = "7.4.3"
pytest-cov = "4.1.0"
pytest-randomly = "3.15.0"
pytest-xdist = "3.5.0"

@@ -54,7 +54,7 @@ config_aws = {

class Test_Config:
def test_get_aws_available_regions(self):
assert len(get_aws_available_regions()) == 33
assert len(get_aws_available_regions()) == 32

@mock.patch(
"prowler.config.config.requests.get", new=mock_prowler_get_latest_release

@@ -256,10 +256,6 @@ def mock_recover_checks_from_aws_provider_rds_service(*_):
]


def mock_recover_checks_from_aws_provider_cognito_service(*_):
return []


class Test_Check:
def test_load_check_metadata(self):
test_cases = [
@@ -569,19 +565,6 @@ class Test_Check:
recovered_checks = get_checks_from_input_arn(audit_resources, provider)
assert recovered_checks == expected_checks

@patch(
"prowler.lib.check.check.recover_checks_from_provider",
new=mock_recover_checks_from_aws_provider_cognito_service,
)
def test_get_checks_from_input_arn_cognito(self):
audit_resources = [
f"arn:aws:cognito-idp:us-east-1:{AWS_ACCOUNT_NUMBER}:userpool/test"
]
provider = "aws"
expected_checks = []
recovered_checks = get_checks_from_input_arn(audit_resources, provider)
assert recovered_checks == expected_checks

@patch(
"prowler.lib.check.check.recover_checks_from_provider",
new=mock_recover_checks_from_aws_provider_ec2_service,

@@ -5,11 +5,6 @@ import pytest
from mock import patch

from prowler.lib.cli.parser import ProwlerArgumentParser
from prowler.providers.aws.config import ROLE_SESSION_NAME
from prowler.providers.aws.lib.arguments.arguments import (
validate_bucket,
validate_role_session_name,
)
from prowler.providers.azure.lib.arguments.arguments import validate_azure_region

prowler_command = "prowler"
@@ -744,7 +739,7 @@ class Test_Parser:
assert wrapped_exit.value.code == 2
assert (
capsys.readouterr().err
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/--external-id, -T/--session-duration or --role-session-name options -R/--role option is needed\n"
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/-T options -R option is needed\n"
)

def test_aws_parser_session_duration_long(self, capsys):
@@ -757,7 +752,7 @@ class Test_Parser:
assert wrapped_exit.value.code == 2
assert (
capsys.readouterr().err
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/--external-id, -T/--session-duration or --role-session-name options -R/--role option is needed\n"
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/-T options -R option is needed\n"
)

# TODO
@@ -778,7 +773,7 @@ class Test_Parser:
assert wrapped_exit.value.code == 2
assert (
capsys.readouterr().err
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/--external-id, -T/--session-duration or --role-session-name options -R/--role option is needed\n"
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/-T options -R option is needed\n"
)

def test_aws_parser_external_id_long(self, capsys):
@@ -791,7 +786,7 @@ class Test_Parser:
assert wrapped_exit.value.code == 2
assert (
capsys.readouterr().err
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/--external-id, -T/--session-duration or --role-session-name options -R/--role option is needed\n"
== f"{prowler_default_usage_error}\nprowler: error: aws: To use -I/-T options -R option is needed\n"
)

def test_aws_parser_region_f(self):
@@ -887,12 +882,6 @@ class Test_Parser:
parsed = self.parser.parse(command)
assert parsed.skip_sh_update

def test_aws_parser_send_only_fail(self):
argument = "--send-sh-only-fails"
command = [prowler_command, argument]
parsed = self.parser.parse(command)
assert parsed.send_sh_only_fails

def test_aws_parser_quick_inventory_short(self):
argument = "-i"
command = [prowler_command, argument]
@@ -1016,13 +1005,6 @@ class Test_Parser:
parsed = self.parser.parse(command)
assert parsed.sts_endpoint_region == sts_endpoint_region

def test_aws_parser_role_session_name(self):
argument = "--role-session-name"
role_session_name = ROLE_SESSION_NAME
command = [prowler_command, argument, role_session_name]
parsed = self.parser.parse(command)
assert parsed.role_session_name == role_session_name

def test_parser_azure_auth_sp(self):
argument = "--sp-env-auth"
command = [prowler_command, "azure", argument]
@@ -1150,50 +1132,3 @@ class Test_Parser:
match=f"Region {invalid_region} not allowed, allowed regions are {' '.join(expected_regions)}",
):
validate_azure_region(invalid_region)

def test_validate_bucket_invalid_bucket_names(self):
bad_bucket_names = [
"xn--bucket-name",
"mrryadfpcwlscicvnrchmtmyhwrvzkgfgdxnlnvaaummnywciixnzvycnzmhhpwb",
"192.168.5.4",
"bucket-name-s3alias",
"bucket-name-s3alias-",
"bucket-n$ame",
"bu",
]
for bucket_name in bad_bucket_names:
with pytest.raises(ArgumentTypeError) as argument_error:
validate_bucket(bucket_name)

assert argument_error.type == ArgumentTypeError
assert (
argument_error.value.args[0]
== "Bucket name must be valid (https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html)"
)

def test_validate_bucket_valid_bucket_names(self):
valid_bucket_names = ["bucket-name" "test" "test-test-test"]
for bucket_name in valid_bucket_names:
assert validate_bucket(bucket_name) == bucket_name

def test_validate_role_session_name_invalid_role_names(self):
bad_role_names = [
"role name",
"adasD*",
"test#",
"role-name?",
]
for role_name in bad_role_names:
with pytest.raises(ArgumentTypeError) as argument_error:
validate_role_session_name(role_name)

assert argument_error.type == ArgumentTypeError
assert (
argument_error.value.args[0]
== "Role Session Name must be 2-64 characters long and consist only of upper- and lower-case alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@-"
)

def test_validate_role_session_name_valid_role_names(self):
valid_role_names = ["prowler-role" "test@" "test=test+test,."]
for role_name in valid_role_names:
assert validate_role_session_name(role_name) == role_name

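The removed tests above exercise `validate_role_session_name`, which rejects names containing spaces or characters outside the STS `RoleSessionName` character set. A hedged sketch of such a validator (the regex follows the AWS-documented 2-64 character `[\w+=,.@-]` pattern; the function body is illustrative, not the repository's code):

```python
import re
from argparse import ArgumentTypeError

# AWS STS documents RoleSessionName as 2-64 characters drawn from
# upper/lower-case letters, digits, underscore, and = , . @ - +
ROLE_SESSION_NAME_PATTERN = re.compile(r"^[\w+=,.@-]{2,64}$")


def validate_role_session_name(session_name: str) -> str:
    # Return the name unchanged when it matches; raise the argparse-friendly
    # ArgumentTypeError otherwise, as the removed tests expect.
    if ROLE_SESSION_NAME_PATTERN.match(session_name):
        return session_name
    raise ArgumentTypeError(
        "Role Session Name must be 2-64 characters long and consist only of "
        "upper- and lower-case alphanumeric characters with no spaces. You can "
        "also include underscores or any of the following characters: =,.@-"
    )


print(validate_role_session_name("ProwlerAssessmentSession"))  # ProwlerAssessmentSession
```

Raising `ArgumentTypeError` lets argparse surface the message as a usage error when the validator is wired in as a `type=` callback.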
@@ -1,43 +1,14 @@
from boto3 import session

from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata

# Root AWS Account
AWS_REGION_US_EAST_1 = "us-east-1"
AWS_REGION_EU_WEST_1 = "eu-west-1"
AWS_REGION_EU_WEST_2 = "eu-west-2"
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_ACCOUNT_ARN = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"

# Commercial Regions
AWS_REGION_US_EAST_1 = "us-east-1"
AWS_REGION_US_EAST_1_AZA = "us-east-1a"
AWS_REGION_US_EAST_1_AZB = "us-east-1b"
AWS_REGION_EU_WEST_1 = "eu-west-1"
AWS_REGION_EU_WEST_1_AZA = "eu-west-1a"
AWS_REGION_EU_WEST_1_AZB = "eu-west-1b"
AWS_REGION_EU_WEST_2 = "eu-west-2"
AWS_REGION_CN_NORTHWEST_1 = "cn-northwest-1"
AWS_REGION_CN_NORTH_1 = "cn-north-1"
AWS_REGION_EU_SOUTH_2 = "eu-south-2"
AWS_REGION_EU_SOUTH_3 = "eu-south-3"
AWS_REGION_US_WEST_2 = "us-west-2"
AWS_REGION_US_EAST_2 = "us-east-2"
AWS_REGION_EU_CENTRAL_1 = "eu-central-1"


# China Regions
AWS_REGION_CHINA_NORHT_1 = "cn-north-1"

# Gov Cloud Regions
AWS_REGION_GOV_CLOUD_US_EAST_1 = "us-gov-east-1"

# Iso Regions
AWS_REGION_ISO_GLOBAL = "aws-iso-global"

# AWS Partitions
AWS_COMMERCIAL_PARTITION = "aws"
AWS_GOV_CLOUD_PARTITION = "aws-us-gov"
AWS_CHINA_PARTITION = "aws-cn"
AWS_ISO_PARTITION = "aws-iso"


# Mocked AWS Audit Info
@@ -45,35 +16,29 @@ def set_mocked_aws_audit_info(
audited_regions: [str] = [],
audited_account: str = AWS_ACCOUNT_NUMBER,
audited_account_arn: str = AWS_ACCOUNT_ARN,
audited_partition: str = AWS_COMMERCIAL_PARTITION,
expected_checks: [str] = [],
profile_region: str = None,
audit_config: dict = {},
ignore_unused_services: bool = False,
assumed_role_info: AWS_Assume_Role = None,
audit_session: session.Session = session.Session(
profile_name=None,
botocore_session=None,
),
original_session: session.Session = None,
enabled_regions: set = None,
):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=original_session,
audit_session=audit_session,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=audited_account,
audited_account_arn=audited_account_arn,
audited_user_id=None,
audited_partition=audited_partition,
audited_partition=AWS_COMMERCIAL_PARTITION,
audited_identity_arn=None,
audit_config=audit_config,
profile=None,
profile_region=profile_region,
profile_region=None,
credentials=None,
assumed_role_info=assumed_role_info,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=[],
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
@@ -81,8 +46,6 @@ def set_mocked_aws_audit_info(
completed_checks=0,
audit_progress=0,
),
audit_config=audit_config,
ignore_unused_services=ignore_unused_services,
enabled_regions=enabled_regions if enabled_regions else set(audited_regions),
enabled_regions=set(audited_regions),
)
return audit_info

@@ -12,19 +12,11 @@ from prowler.providers.aws.aws_provider import (
get_default_region,
get_global_region,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
from tests.providers.aws.audit_info_utils import (
AWS_ACCOUNT_NUMBER,
AWS_CHINA_PARTITION,
AWS_GOV_CLOUD_PARTITION,
AWS_ISO_PARTITION,
AWS_REGION_CHINA_NORHT_1,
AWS_REGION_EU_WEST_1,
AWS_REGION_GOV_CLOUD_US_EAST_1,
AWS_REGION_ISO_GLOBAL,
AWS_REGION_US_EAST_1,
AWS_REGION_US_EAST_2,
set_mocked_aws_audit_info,
)


@@ -32,9 +24,10 @@ class Test_AWS_Provider:
@mock_iam
@mock_sts
def test_aws_provider_user_without_mfa(self):
# sessionName = "ProwlerAssessmentSession"
audited_regions = ["eu-west-1"]
# sessionName = "ProwlerAsessmentSession"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
iam_client = boto3.client("iam", region_name=AWS_REGION_EU_WEST_1)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -46,19 +39,38 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION_US_EAST_1,
region_name=AWS_REGION_EU_WEST_1,
)

audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=AWS_Assume_Role(
role_arn=None,
session_duration=None,
external_id=None,
mfa_enabled=False,
role_session_name="ProwlerAssessmentSession",
),
original_session=session,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)

# Call assume_role
@@ -76,14 +88,14 @@ class Test_AWS_Provider:
session_duration=None,
external_id=None,
mfa_enabled=False,
role_session_name="ProwlerAssessmentSession",
)

@mock_iam
@mock_sts
def test_aws_provider_user_with_mfa(self):
audited_regions = "eu-west-1"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
iam_client = boto3.client("iam", region_name=AWS_REGION_EU_WEST_1)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -95,23 +107,35 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION_US_EAST_1,
region_name=AWS_REGION_EU_WEST_1,
)

audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=AWS_REGION_EU_WEST_1,
credentials=None,
assumed_role_info=AWS_Assume_Role(
role_arn=None,
session_duration=None,
external_id=None,
mfa_enabled=False,
role_session_name="ProwlerAssessmentSession",
),
original_session=session,
profile_region=AWS_REGION_US_EAST_1,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=True,
)

# Call assume_role
# # Call assume_role
with patch(
"prowler.providers.aws.aws_provider.input_role_mfa_token_and_code",
return_value=(
@@ -126,7 +150,6 @@ class Test_AWS_Provider:
session_duration=None,
external_id=None,
mfa_enabled=False,
role_session_name="ProwlerAssessmentSession",
)

@mock_iam
@@ -136,10 +159,10 @@ class Test_AWS_Provider:
role_name = "test-role"
role_arn = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:role/{role_name}"
session_duration_seconds = 900
sessionName = "ProwlerAssessmentSession"

audited_regions = ["eu-west-1"]
sessionName = "ProwlerAsessmentSession"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
iam_client = boto3.client("iam", region_name=AWS_REGION_EU_WEST_1)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -151,22 +174,41 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION_US_EAST_1,
region_name=AWS_REGION_EU_WEST_1,
)

audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=AWS_Assume_Role(
role_arn=role_arn,
session_duration=session_duration_seconds,
external_id=None,
mfa_enabled=True,
role_session_name="ProwlerAssessmentSession",
),
original_session=session,
profile_region=AWS_REGION_US_EAST_1,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)

# Call assume_role
aws_provider = AWS_Provider(audit_info)
# Patch MFA
with patch(
@@ -215,10 +257,10 @@ class Test_AWS_Provider:
role_name = "test-role"
role_arn = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:role/{role_name}"
session_duration_seconds = 900
sessionName = "ProwlerAssessmentSession"

audited_regions = "eu-west-1"
sessionName = "ProwlerAsessmentSession"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
iam_client = boto3.client("iam", region_name=AWS_REGION_EU_WEST_1)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -230,22 +272,41 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION_US_EAST_1,
region_name=AWS_REGION_EU_WEST_1,
)

audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=AWS_Assume_Role(
role_arn=role_arn,
session_duration=session_duration_seconds,
external_id=None,
mfa_enabled=False,
role_session_name="ProwlerAssessmentSession",
),
original_session=session,
profile_region=AWS_REGION_US_EAST_1,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)

# Call assume_role
aws_provider = AWS_Provider(audit_info)
assume_role_response = assume_role(
aws_provider.aws_session, aws_provider.role_info
@@ -286,12 +347,12 @@ class Test_AWS_Provider:
role_name = "test-role"
role_arn = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:role/{role_name}"
session_duration_seconds = 900
AWS_REGION_US_EAST_1 = AWS_REGION_EU_WEST_1
sts_endpoint_region = AWS_REGION_US_EAST_1
sessionName = "ProwlerAssessmentSession"

aws_region = "eu-west-1"
sts_endpoint_region = aws_region
audited_regions = [aws_region]
sessionName = "ProwlerAsessmentSession"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
iam_client = boto3.client("iam", region_name=AWS_REGION_EU_WEST_1)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -303,22 +364,41 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION_US_EAST_1,
region_name=AWS_REGION_EU_WEST_1,
)

audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=AWS_Assume_Role(
role_arn=role_arn,
session_duration=session_duration_seconds,
external_id=None,
mfa_enabled=False,
role_session_name="ProwlerAssessmentSession",
),
original_session=session,
profile_region=AWS_REGION_US_EAST_1,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)

# Call assume_role
aws_provider = AWS_Provider(audit_info)
assume_role_response = assume_role(
aws_provider.aws_session, aws_provider.role_info, sts_endpoint_region
@@ -353,78 +433,368 @@ class Test_AWS_Provider:
) == 21 + 1 + len(sessionName)

def test_generate_regional_clients(self):
audited_regions = [AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
audit_info = set_mocked_aws_audit_info(
audited_regions=audited_regions,
audit_session=boto3.session.Session(
|
||||
region_name=AWS_REGION_US_EAST_1,
|
||||
),
|
||||
enabled_regions=audited_regions,
|
||||
# New Boto3 session with the previously create user
|
||||
session = boto3.session.Session(
|
||||
region_name=AWS_REGION_EU_WEST_1,
|
||||
)
|
||||
audited_regions = ["eu-west-1", AWS_REGION_EU_WEST_1]
|
||||
# Fulfil the input session object for Prowler
|
||||
audit_info = AWS_Audit_Info(
|
||||
session_config=None,
|
||||
original_session=None,
|
||||
audit_session=session,
|
||||
audited_account=None,
|
||||
audited_account_arn=None,
|
||||
audited_partition="aws",
|
||||
audited_identity_arn=None,
|
||||
audited_user_id=None,
|
||||
profile=None,
|
||||
profile_region=None,
|
||||
credentials=None,
|
||||
assumed_role_info=None,
|
||||
audited_regions=audited_regions,
|
||||
organizations_metadata=None,
|
||||
audit_resources=None,
|
||||
mfa_enabled=False,
|
||||
audit_metadata=Audit_Metadata(
|
||||
services_scanned=0,
|
||||
expected_checks=[],
|
||||
completed_checks=0,
|
||||
audit_progress=0,
|
||||
),
|
||||
)
|
||||
|
||||
generate_regional_clients_response = generate_regional_clients(
|
||||
"ec2", audit_info
|
||||
)
|
||||
|
||||
assert set(generate_regional_clients_response.keys()) == set(audited_regions)
|
||||
|
||||
def test_generate_regional_clients_cn_partition(self):
|
||||
audited_regions = ["cn-northwest-1", "cn-north-1"]
|
||||
audit_info = set_mocked_aws_audit_info(
|
||||
def test_generate_regional_clients_global_service(self):
|
||||
# New Boto3 session with the previously create user
|
||||
session = boto3.session.Session(
|
||||
region_name=AWS_REGION_EU_WEST_1,
|
||||
)
|
||||
audited_regions = ["eu-west-1", AWS_REGION_EU_WEST_1]
|
||||
profile_region = AWS_REGION_EU_WEST_1
|
||||
# Fulfil the input session object for Prowler
|
||||
audit_info = AWS_Audit_Info(
|
||||
session_config=None,
|
||||
original_session=None,
|
||||
audit_session=session,
|
||||
audited_account=None,
|
||||
audited_account_arn=None,
|
||||
audited_partition="aws",
|
||||
audited_identity_arn=None,
|
||||
audited_user_id=None,
|
||||
profile=None,
|
||||
profile_region=profile_region,
|
||||
credentials=None,
|
||||
assumed_role_info=None,
|
||||
audited_regions=audited_regions,
|
||||
audit_session=boto3.session.Session(
|
||||
region_name=AWS_REGION_US_EAST_1,
|
||||
organizations_metadata=None,
|
||||
audit_resources=None,
|
||||
mfa_enabled=False,
|
||||
audit_metadata=Audit_Metadata(
|
||||
services_scanned=0,
|
||||
expected_checks=[],
|
||||
completed_checks=0,
|
||||
audit_progress=0,
|
||||
),
|
||||
enabled_regions=audited_regions,
|
||||
)
|
||||
generate_regional_clients_response = generate_regional_clients(
|
||||
"shield", audit_info
|
||||
"route53", audit_info, global_service=True
|
||||
)
|
||||
|
||||
assert list(generate_regional_clients_response.keys()) == [profile_region]
|
||||
|
||||
def test_generate_regional_clients_cn_partition(self):
|
||||
# New Boto3 session with the previously create user
|
||||
session = boto3.session.Session(
|
||||
region_name=AWS_REGION_EU_WEST_1,
|
||||
)
|
||||
audited_regions = ["cn-northwest-1", "cn-north-1"]
|
||||
# Fulfil the input session object for Prowler
|
||||
audit_info = AWS_Audit_Info(
|
||||
session_config=None,
|
||||
original_session=None,
|
||||
audit_session=session,
|
||||
audited_account=None,
|
||||
audited_account_arn=None,
|
||||
audited_partition="aws-cn",
|
||||
audited_identity_arn=None,
|
||||
audited_user_id=None,
|
||||
profile=None,
|
||||
profile_region=None,
|
||||
credentials=None,
|
||||
assumed_role_info=None,
|
||||
audited_regions=audited_regions,
|
||||
organizations_metadata=None,
|
||||
audit_resources=None,
|
||||
mfa_enabled=False,
|
||||
audit_metadata=Audit_Metadata(
|
||||
services_scanned=0,
|
||||
expected_checks=[],
|
||||
completed_checks=0,
|
||||
audit_progress=0,
|
||||
),
|
||||
)
|
||||
generate_regional_clients_response = generate_regional_clients(
|
||||
"shield", audit_info, global_service=True
|
||||
)
|
||||
|
||||
# Shield does not exist in China
|
||||
assert generate_regional_clients_response == {}
|
||||
|
||||
def test_get_default_region(self):
|
||||
audit_info = set_mocked_aws_audit_info(
|
||||
profile_region=AWS_REGION_EU_WEST_1,
|
||||
audited_regions=[AWS_REGION_EU_WEST_1],
|
||||
audited_regions = ["eu-west-1"]
|
||||
profile_region = "eu-west-1"
|
||||
audit_info = AWS_Audit_Info(
|
||||
session_config=None,
|
||||
original_session=None,
|
||||
audit_session=None,
|
||||
audited_account=None,
|
||||
audited_account_arn=None,
|
||||
audited_partition="aws",
|
||||
audited_identity_arn=None,
|
||||
audited_user_id=None,
|
||||
profile=None,
|
||||
profile_region=profile_region,
|
||||
credentials=None,
|
||||
assumed_role_info=None,
|
||||
audited_regions=audited_regions,
|
||||
organizations_metadata=None,
|
||||
audit_resources=None,
|
||||
mfa_enabled=False,
|
||||
audit_metadata=Audit_Metadata(
|
||||
services_scanned=0,
|
||||
expected_checks=[],
|
||||
completed_checks=0,
|
||||
audit_progress=0,
|
||||
),
|
||||
)
|
||||
assert get_default_region("ec2", audit_info) == AWS_REGION_EU_WEST_1
|
||||
assert get_default_region("ec2", audit_info) == "eu-west-1"
|
||||
|
||||
def test_get_default_region_profile_region_not_audited(self):
|
||||
audit_info = set_mocked_aws_audit_info(
|
||||
profile_region=AWS_REGION_US_EAST_2,
|
||||
audited_regions=[AWS_REGION_EU_WEST_1],
|
||||
audited_regions = ["eu-west-1"]
|
||||
profile_region = "us-east-2"
|
||||
audit_info = AWS_Audit_Info(
|
||||
session_config=None,
|
||||
original_session=None,
|
||||
audit_session=None,
|
||||
audited_account=None,
|
||||
audited_account_arn=None,
|
||||
audited_partition="aws",
|
||||
audited_identity_arn=None,
|
||||
audited_user_id=None,
|
||||
profile=None,
|
||||
profile_region=profile_region,
|
||||
credentials=None,
|
||||
assumed_role_info=None,
|
||||
audited_regions=audited_regions,
|
||||
organizations_metadata=None,
|
||||
audit_resources=None,
|
||||
mfa_enabled=False,
|
||||
audit_metadata=Audit_Metadata(
|
||||
services_scanned=0,
|
||||
expected_checks=[],
|
||||
completed_checks=0,
|
||||
audit_progress=0,
|
||||
),
|
||||
)
|
||||
assert get_default_region("ec2", audit_info) == AWS_REGION_EU_WEST_1
|
||||
assert get_default_region("ec2", audit_info) == "eu-west-1"
|
||||
|
||||
def test_get_default_region_non_profile_region(self):
|
||||
audit_info = set_mocked_aws_audit_info(
|
||||
audited_regions=[AWS_REGION_EU_WEST_1],
|
||||
audited_regions = ["eu-west-1"]
|
||||
profile_region = None
|
||||
audit_info = AWS_Audit_Info(
|
||||
session_config=None,
|
||||
original_session=None,
|
||||
audit_session=None,
|
||||
audited_account=None,
|
||||
audited_account_arn=None,
|
||||
audited_partition="aws",
|
||||
audited_identity_arn=None,
|
||||
audited_user_id=None,
|
||||
profile=None,
|
||||
profile_region=profile_region,
|
||||
credentials=None,
|
||||
assumed_role_info=None,
|
||||
audited_regions=audited_regions,
|
||||
organizations_metadata=None,
|
||||
audit_resources=None,
|
||||
mfa_enabled=False,
|
||||
audit_metadata=Audit_Metadata(
|
||||
services_scanned=0,
|
||||
expected_checks=[],
|
||||
completed_checks=0,
|
||||
audit_progress=0,
|
||||
),
|
||||
)
|
||||
assert get_default_region("ec2", audit_info) == AWS_REGION_EU_WEST_1
|
||||
assert get_default_region("ec2", audit_info) == "eu-west-1"
|
||||
|
||||
def test_get_default_region_non_profile_or_audited_region(self):
|
||||
audit_info = set_mocked_aws_audit_info()
|
||||
assert get_default_region("ec2", audit_info) == AWS_REGION_US_EAST_1
|
||||
audited_regions = None
|
||||
profile_region = None
|
||||
audit_info = AWS_Audit_Info(
|
||||
session_config=None,
|
||||
original_session=None,
|
||||
audit_session=None,
|
||||
audited_account=None,
|
||||
audited_account_arn=None,
|
||||
audited_partition="aws",
|
||||
audited_identity_arn=None,
|
||||
audited_user_id=None,
|
||||
profile=None,
|
||||
profile_region=profile_region,
|
||||
credentials=None,
|
||||
assumed_role_info=None,
|
||||
audited_regions=audited_regions,
|
||||
organizations_metadata=None,
|
||||
audit_resources=None,
|
||||
mfa_enabled=False,
|
||||
audit_metadata=Audit_Metadata(
|
||||
services_scanned=0,
|
||||
expected_checks=[],
|
||||
completed_checks=0,
|
||||
audit_progress=0,
|
||||
),
|
||||
)
|
||||
assert get_default_region("ec2", audit_info) == "us-east-1"
|
||||
|
||||
def test_aws_get_global_region(self):
|
||||
audit_info = AWS_Audit_Info(
|
||||
session_config=None,
|
||||
original_session=None,
|
||||
audit_session=None,
|
||||
audited_account=None,
|
||||
audited_account_arn=None,
|
||||
audited_partition="aws",
|
||||
audited_identity_arn=None,
|
||||
audited_user_id=None,
|
||||
profile=None,
|
||||
profile_region=None,
|
||||
credentials=None,
|
||||
assumed_role_info=None,
|
||||
audited_regions=None,
|
||||
organizations_metadata=None,
|
||||
audit_resources=None,
|
||||
mfa_enabled=False,
|
||||
audit_metadata=Audit_Metadata(
|
||||
services_scanned=0,
|
||||
expected_checks=[],
|
||||
completed_checks=0,
|
||||
audit_progress=0,
|
||||
),
|
||||
)
|
||||
assert get_default_region("ec2", audit_info) == "us-east-1"
|
||||
|
||||
def test_aws_gov_get_global_region(self):
|
||||
audit_info = set_mocked_aws_audit_info(
|
||||
audited_partition=AWS_GOV_CLOUD_PARTITION
|
||||
audit_info = AWS_Audit_Info(
|
||||
session_config=None,
|
||||
original_session=None,
|
||||
audit_session=None,
|
||||
audited_account=None,
|
||||
audited_account_arn=None,
|
||||
audited_partition="aws-us-gov",
|
||||
audited_identity_arn=None,
|
||||
audited_user_id=None,
|
||||
profile=None,
|
||||
profile_region=None,
|
||||
credentials=None,
|
||||
assumed_role_info=None,
|
||||
audited_regions=None,
|
||||
organizations_metadata=None,
|
||||
audit_resources=None,
|
||||
mfa_enabled=False,
|
||||
audit_metadata=Audit_Metadata(
|
||||
services_scanned=0,
|
||||
expected_checks=[],
|
||||
completed_checks=0,
|
||||
audit_progress=0,
|
||||
),
|
||||
)
|
||||
assert get_global_region(audit_info) == AWS_REGION_GOV_CLOUD_US_EAST_1
|
||||
assert get_global_region(audit_info) == "us-gov-east-1"
|
||||
|
||||
def test_aws_cn_get_global_region(self):
|
||||
audit_info = set_mocked_aws_audit_info(audited_partition=AWS_CHINA_PARTITION)
|
||||
assert get_global_region(audit_info) == AWS_REGION_CHINA_NORHT_1
|
||||
audit_info = AWS_Audit_Info(
|
||||
session_config=None,
|
||||
original_session=None,
|
||||
audit_session=None,
|
||||
audited_account=None,
|
||||
audited_account_arn=None,
|
||||
audited_partition="aws-cn",
|
||||
audited_identity_arn=None,
|
||||
audited_user_id=None,
|
||||
profile=None,
|
||||
profile_region=None,
|
||||
credentials=None,
|
||||
assumed_role_info=None,
|
||||
audited_regions=None,
|
||||
organizations_metadata=None,
|
||||
audit_resources=None,
|
||||
mfa_enabled=False,
|
||||
audit_metadata=Audit_Metadata(
|
||||
services_scanned=0,
|
||||
expected_checks=[],
|
||||
completed_checks=0,
|
||||
audit_progress=0,
|
||||
),
|
||||
)
|
||||
assert get_global_region(audit_info) == "cn-north-1"
|
||||
|
||||
def test_aws_iso_get_global_region(self):
|
||||
audit_info = set_mocked_aws_audit_info(audited_partition=AWS_ISO_PARTITION)
|
||||
assert get_global_region(audit_info) == AWS_REGION_ISO_GLOBAL
|
||||
audit_info = AWS_Audit_Info(
|
||||
session_config=None,
|
||||
original_session=None,
|
||||
audit_session=None,
|
||||
audited_account=None,
|
||||
audited_account_arn=None,
|
||||
audited_partition="aws-iso",
|
||||
audited_identity_arn=None,
|
||||
audited_user_id=None,
|
||||
profile=None,
|
||||
profile_region=None,
|
||||
credentials=None,
|
||||
assumed_role_info=None,
|
||||
audited_regions=None,
|
||||
organizations_metadata=None,
|
||||
audit_resources=None,
|
||||
mfa_enabled=False,
|
||||
audit_metadata=Audit_Metadata(
|
||||
services_scanned=0,
|
||||
expected_checks=[],
|
||||
completed_checks=0,
|
||||
audit_progress=0,
|
||||
),
|
||||
)
|
||||
assert get_global_region(audit_info) == "aws-iso-global"
|
||||
|
||||
def test_get_available_aws_service_regions_with_us_east_1_audited(self):
|
||||
audit_info = set_mocked_aws_audit_info(audited_regions=[AWS_REGION_US_EAST_1])
|
||||
|
||||
audited_regions = ["us-east-1"]
|
||||
audit_info = AWS_Audit_Info(
|
||||
session_config=None,
|
||||
original_session=None,
|
||||
audit_session=None,
|
||||
audited_account=None,
|
||||
audited_account_arn=None,
|
||||
audited_partition="aws",
|
||||
audited_identity_arn=None,
|
||||
audited_user_id=None,
|
||||
profile=None,
|
||||
profile_region=None,
|
||||
credentials=None,
|
||||
assumed_role_info=None,
|
||||
audited_regions=audited_regions,
|
||||
organizations_metadata=None,
|
||||
audit_resources=None,
|
||||
mfa_enabled=False,
|
||||
audit_metadata=Audit_Metadata(
|
||||
services_scanned=0,
|
||||
expected_checks=[],
|
||||
completed_checks=0,
|
||||
audit_progress=0,
|
||||
),
|
||||
)
|
||||
with patch(
|
||||
"prowler.providers.aws.aws_provider.parse_json_file",
|
||||
return_value={
|
||||
@@ -439,7 +809,7 @@ class Test_AWS_Provider:
|
||||
"eu-north-1",
|
||||
"eu-south-1",
|
||||
"eu-south-2",
|
||||
AWS_REGION_EU_WEST_1,
|
||||
"eu-west-1",
|
||||
"eu-west-2",
|
||||
"eu-west-3",
|
||||
"me-central-1",
|
||||
@@ -455,13 +825,33 @@ class Test_AWS_Provider:
|
||||
}
|
||||
},
|
||||
):
|
||||
assert get_available_aws_service_regions("ec2", audit_info) == {
|
||||
AWS_REGION_US_EAST_1
|
||||
}
|
||||
assert get_available_aws_service_regions("ec2", audit_info) == ["us-east-1"]
|
||||
|
||||
def test_get_available_aws_service_regions_with_all_regions_audited(self):
|
||||
audit_info = set_mocked_aws_audit_info()
|
||||
|
||||
audit_info = AWS_Audit_Info(
|
||||
session_config=None,
|
||||
original_session=None,
|
||||
audit_session=None,
|
||||
audited_account=None,
|
||||
audited_account_arn=None,
|
||||
audited_partition="aws",
|
||||
audited_identity_arn=None,
|
||||
audited_user_id=None,
|
||||
profile=None,
|
||||
profile_region=None,
|
||||
credentials=None,
|
||||
assumed_role_info=None,
|
||||
audited_regions=None,
|
||||
organizations_metadata=None,
|
||||
audit_resources=None,
|
||||
mfa_enabled=False,
|
||||
audit_metadata=Audit_Metadata(
|
||||
services_scanned=0,
|
||||
expected_checks=[],
|
||||
completed_checks=0,
|
||||
audit_progress=0,
|
||||
),
|
||||
)
|
||||
with patch(
|
||||
"prowler.providers.aws.aws_provider.parse_json_file",
|
||||
return_value={
|
||||
@@ -476,7 +866,7 @@ class Test_AWS_Provider:
|
||||
"eu-north-1",
|
||||
"eu-south-1",
|
||||
"eu-south-2",
|
||||
AWS_REGION_EU_WEST_1,
|
||||
"eu-west-1",
|
||||
"eu-west-2",
|
||||
"eu-west-3",
|
||||
"me-central-1",
|
||||
|
||||
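The `test_get_default_region*` assertions in the hunks above encode a fallback order: the profile region wins when it is among the audited regions, otherwise the first audited region is used, and `us-east-1` is the last resort for the `aws` partition. A minimal sketch of that logic, with a simplified signature (the real function in `prowler.providers.aws.aws_provider` takes a service name and the full audit_info object):

```python
def get_default_region(profile_region, audited_regions):
    """Sketch of the default-region fallback the tests above assert.

    Simplified: the real implementation also consults the service's
    available regions and the audited partition.
    """
    # Prefer the profile region, but only when it is actually audited.
    if profile_region and audited_regions and profile_region in audited_regions:
        return profile_region
    # Otherwise fall back to the first audited region.
    if audited_regions:
        return audited_regions[0]
    # No audited regions at all: use the partition default.
    return "us-east-1"
```

Checked against the four cases in the tests: audited profile region, non-audited profile region, no profile region, and neither profile nor audited regions.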
@@ -15,8 +15,6 @@ from prowler.providers.aws.lib.allowlist.allowlist import (
)
from tests.providers.aws.audit_info_utils import (
    AWS_ACCOUNT_NUMBER,
    AWS_REGION_EU_CENTRAL_1,
    AWS_REGION_EU_SOUTH_3,
    AWS_REGION_EU_WEST_1,
    AWS_REGION_US_EAST_1,
    set_mocked_aws_audit_info,
@@ -134,7 +132,8 @@ class Test_Allowlist:
        )

    # Allowlist tests
    def test_allowlist_findings_only_wildcard(self):

    def test_allowlist_findings(self):
        # Allowlist example
        allowlist = {
            "Accounts": {
@@ -206,6 +205,12 @@ class Test_Allowlist:
                        "Tags": ["*"],
                        "Regions": ["*"],
                        "Resources": ["*"],
                        "Exceptions": {
                            "Tags": [],
                            "Regions": [],
                            "Accounts": [],
                            "Resources": [],
                        },
                    }
                }
            }
        }
@@ -439,155 +444,6 @@ class Test_Allowlist:
            )
        )

    def test_is_allowlisted_all_and_single_account_with_different_resources(self):
        # Allowlist example
        allowlist = {
            "Accounts": {
                "*": {
                    "Checks": {
                        "check_test_1": {
                            "Regions": ["*"],
                            "Resources": ["resource_1", "resource_2"],
                        },
                    }
                },
                AWS_ACCOUNT_NUMBER: {
                    "Checks": {
                        "check_test_1": {
                            "Regions": ["*"],
                            "Resources": ["resource_3"],
                        }
                    }
                },
            }
        }

        assert is_allowlisted(
            allowlist,
            "111122223333",
            "check_test_1",
            AWS_REGION_US_EAST_1,
            "resource_1",
            "",
        )

        assert is_allowlisted(
            allowlist,
            "111122223333",
            "check_test_1",
            AWS_REGION_US_EAST_1,
            "resource_2",
            "",
        )

        assert not is_allowlisted(
            allowlist,
            "111122223333",
            "check_test_1",
            AWS_REGION_US_EAST_1,
            "resource_3",
            "",
        )

        assert is_allowlisted(
            allowlist,
            AWS_ACCOUNT_NUMBER,
            "check_test_1",
            AWS_REGION_US_EAST_1,
            "resource_3",
            "",
        )

        assert is_allowlisted(
            allowlist,
            AWS_ACCOUNT_NUMBER,
            "check_test_1",
            AWS_REGION_US_EAST_1,
            "resource_2",
            "",
        )

    def test_is_allowlisted_all_and_single_account_with_different_resources_and_exceptions(
        self,
    ):
        # Allowlist example
        allowlist = {
            "Accounts": {
                "*": {
                    "Checks": {
                        "check_test_1": {
                            "Regions": ["*"],
                            "Resources": ["resource_1", "resource_2"],
                            "Exceptions": {"Regions": [AWS_REGION_US_EAST_1]},
                        },
                    }
                },
                AWS_ACCOUNT_NUMBER: {
                    "Checks": {
                        "check_test_1": {
                            "Regions": ["*"],
                            "Resources": ["resource_3"],
                            "Exceptions": {"Regions": [AWS_REGION_EU_WEST_1]},
                        }
                    }
                },
            }
        }

        assert not is_allowlisted(
            allowlist,
            AWS_ACCOUNT_NUMBER,
            "check_test_1",
            AWS_REGION_US_EAST_1,
            "resource_2",
            "",
        )

        assert not is_allowlisted(
            allowlist,
            "111122223333",
            "check_test_1",
            AWS_REGION_US_EAST_1,
            "resource_1",
            "",
        )

        assert is_allowlisted(
            allowlist,
            "111122223333",
            "check_test_1",
            AWS_REGION_EU_WEST_1,
            "resource_2",
            "",
        )

        assert not is_allowlisted(
            allowlist,
            "111122223333",
            "check_test_1",
            AWS_REGION_US_EAST_1,
            "resource_3",
            "",
        )

        assert is_allowlisted(
            allowlist,
            AWS_ACCOUNT_NUMBER,
            "check_test_1",
            AWS_REGION_US_EAST_1,
            "resource_3",
            "",
        )

        assert not is_allowlisted(
            allowlist,
            AWS_ACCOUNT_NUMBER,
            "check_test_1",
            AWS_REGION_EU_WEST_1,
            "resource_3",
            "",
        )

    def test_is_allowlisted_single_account(self):
        allowlist = {
            "Accounts": {
@@ -861,111 +717,6 @@ class Test_Allowlist:
            )
        )

    def test_is_allowlisted_specific_account_with_other_account_excepted(self):
        # Allowlist example
        allowlist = {
            "Accounts": {
                AWS_ACCOUNT_NUMBER: {
                    "Checks": {
                        "check_test": {
                            "Regions": [AWS_REGION_EU_WEST_1],
                            "Resources": ["*"],
                            "Tags": [],
                            "Exceptions": {"Accounts": ["111122223333"]},
                        }
                    }
                }
            }
        }

        assert is_allowlisted(
            allowlist,
            AWS_ACCOUNT_NUMBER,
            "check_test",
            AWS_REGION_EU_WEST_1,
            "prowler",
            "environment=dev",
        )

        assert not is_allowlisted(
            allowlist,
            "111122223333",
            "check_test",
            AWS_REGION_EU_WEST_1,
            "prowler",
            "environment=dev",
        )

    def test_is_allowlisted_complex_allowlist(self):
        # Allowlist example
        allowlist = {
            "Accounts": {
                "*": {
                    "Checks": {
                        "s3_bucket_object_versioning": {
                            "Regions": [AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1],
                            "Resources": ["ci-logs", "logs", ".+-logs"],
                        },
                        "ecs_task_definitions_no_environment_secrets": {
                            "Regions": ["*"],
                            "Resources": ["*"],
                            "Exceptions": {
                                "Accounts": [AWS_ACCOUNT_NUMBER],
                                "Regions": [
                                    AWS_REGION_EU_WEST_1,
                                    AWS_REGION_EU_SOUTH_3,
                                ],
                            },
                        },
                        "*": {
                            "Regions": ["*"],
                            "Resources": ["*"],
                            "Tags": ["environment=dev"],
                        },
                    }
                },
                AWS_ACCOUNT_NUMBER: {
                    "Checks": {
                        "*": {
                            "Regions": ["*"],
                            "Resources": ["*"],
                            "Exceptions": {
                                "Resources": ["test"],
                                "Tags": ["environment=prod"],
                            },
                        }
                    }
                },
            }
        }

        assert is_allowlisted(
            allowlist,
            AWS_ACCOUNT_NUMBER,
            "test_check",
            AWS_REGION_EU_WEST_1,
            "prowler-logs",
            "environment=dev",
        )

        assert is_allowlisted(
            allowlist,
            AWS_ACCOUNT_NUMBER,
            "ecs_task_definitions_no_environment_secrets",
            AWS_REGION_EU_WEST_1,
            "prowler",
            "environment=dev",
        )

        assert is_allowlisted(
            allowlist,
            AWS_ACCOUNT_NUMBER,
            "s3_bucket_object_versioning",
            AWS_REGION_EU_WEST_1,
            "prowler-logs",
            "environment=dev",
        )

    def test_is_allowlisted_in_tags(self):
        allowlist_tags = ["environment=dev", "project=prowler"]

@@ -1040,107 +791,6 @@ class Test_Allowlist:
            "environment=test",
        )

    def test_is_excepted_only_in_account(self):
        # Allowlist example
        exceptions = {
            "Accounts": [AWS_ACCOUNT_NUMBER],
            "Regions": [],
            "Resources": [],
            "Tags": [],
        }

        assert is_excepted(
            exceptions,
            AWS_ACCOUNT_NUMBER,
            "eu-central-1",
            "test",
            "environment=test",
        )

    def test_is_excepted_only_in_region(self):
        # Allowlist example
        exceptions = {
            "Accounts": [],
            "Regions": [AWS_REGION_EU_CENTRAL_1, AWS_REGION_EU_SOUTH_3],
            "Resources": [],
            "Tags": [],
        }

        assert is_excepted(
            exceptions,
            AWS_ACCOUNT_NUMBER,
            AWS_REGION_EU_CENTRAL_1,
            "test",
            "environment=test",
        )

    def test_is_excepted_only_in_resources(self):
        # Allowlist example
        exceptions = {
            "Accounts": [],
            "Regions": [],
            "Resources": ["resource_1"],
            "Tags": [],
        }

        assert is_excepted(
            exceptions,
            AWS_ACCOUNT_NUMBER,
            AWS_REGION_EU_CENTRAL_1,
            "resource_1",
            "environment=test",
        )

    def test_is_excepted_only_in_tags(self):
        # Allowlist example
        exceptions = {
            "Accounts": [],
            "Regions": [],
            "Resources": [],
            "Tags": ["environment=test"],
        }

        assert is_excepted(
            exceptions,
            AWS_ACCOUNT_NUMBER,
            AWS_REGION_EU_CENTRAL_1,
            "resource_1",
            "environment=test",
        )

    def test_is_excepted_in_account_and_tags(self):
        # Allowlist example
        exceptions = {
            "Accounts": [AWS_ACCOUNT_NUMBER],
            "Regions": [],
            "Resources": [],
            "Tags": ["environment=test"],
        }

        assert is_excepted(
            exceptions,
            AWS_ACCOUNT_NUMBER,
            AWS_REGION_EU_CENTRAL_1,
            "resource_1",
            "environment=test",
        )

        assert not is_excepted(
            exceptions,
            "111122223333",
            AWS_REGION_EU_CENTRAL_1,
            "resource_1",
            "environment=test",
        )

        assert not is_excepted(
            exceptions,
            "111122223333",
            AWS_REGION_EU_CENTRAL_1,
            "resource_1",
            "environment=dev",
        )

    def test_is_excepted_all_wildcard(self):
        exceptions = {
            "Accounts": ["*"],

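The `test_is_excepted_*` cases above imply a matching rule: an exception applies when the finding matches every non-empty dimension (an empty list imposes no constraint, `*` matches anything, and entries are treated as patterns). A hedged sketch of that rule, with a simplified signature (the real `is_excepted` lives in `prowler.providers.aws.lib.allowlist.allowlist`):

```python
import re


def is_excepted(exceptions, account, region, resource, tags):
    """Sketch: True when the finding matches every non-empty exception dimension."""

    def matches(values, candidate):
        # An empty dimension imposes no constraint on the finding.
        if not values:
            return True
        return any(v == "*" or re.search(v, candidate) for v in values)

    # An exception with no dimension set excepts nothing.
    if not any(
        exceptions.get(k) for k in ("Accounts", "Regions", "Resources", "Tags")
    ):
        return False
    return (
        matches(exceptions.get("Accounts", []), account)
        and matches(exceptions.get("Regions", []), region)
        and matches(exceptions.get("Resources", []), resource)
        and matches(exceptions.get("Tags", []), tags)
    )
```

This reproduces the behavior the tests assert: account-only, region-only, resource-only, tag-only, and account-plus-tag exceptions, with non-matching accounts or tags falling through.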
@@ -287,7 +287,7 @@ class Test_ARN_Parsing:
        assert error._excinfo[0] == RoleArnParsingServiceNotIAMnorSTS

    def test_iam_credentials_arn_parsing_raising_RoleArnParsingInvalidAccountID(self):
        input_arn = "arn:aws:iam::AWS_ACCOUNT_ID:user/prowler"
        input_arn = "arn:aws:iam::AWS_ACCOUNT_NUMBER:user/prowler"
        with raises(RoleArnParsingInvalidAccountID) as error:
            parse_iam_credentials_arn(input_arn)

File diff suppressed because it is too large
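Across the diffs above, hand-built `AWS_Audit_Info(...)` constructions are replaced by calls to a shared `set_mocked_aws_audit_info(...)` helper from `tests.providers.aws.audit_info_utils`. A minimal sketch of such a helper, using hypothetical stand-in dataclasses rather than Prowler's real models:

```python
from dataclasses import dataclass, field
from typing import Optional


# Hypothetical stand-ins for Prowler's AWS_Audit_Info / Audit_Metadata models;
# the real classes live in prowler.providers.aws.lib.audit_info.models and
# carry many more fields (sessions, credentials, assumed-role info, ...).
@dataclass
class AuditMetadata:
    services_scanned: int = 0
    expected_checks: list = field(default_factory=list)
    completed_checks: int = 0
    audit_progress: int = 0


@dataclass
class AuditInfo:
    audited_partition: str = "aws"
    profile_region: Optional[str] = None
    audited_regions: Optional[list] = None
    audit_session: object = None
    enabled_regions: Optional[list] = None
    audit_metadata: AuditMetadata = field(default_factory=AuditMetadata)


def set_mocked_aws_audit_info(**overrides) -> AuditInfo:
    """Build an audit_info with test-friendly defaults, overridden per test."""
    return AuditInfo(**overrides)
```

Centralizing the defaults this way is what lets each test state only the fields it cares about (`audited_regions`, `audited_partition`, ...) instead of repeating the full constructor.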
@@ -7,9 +7,10 @@ from moto import mock_s3

from prowler.config.config import csv_file_suffix
from prowler.providers.aws.lib.s3.s3 import get_s3_object_path, send_to_s3_bucket

AWS_ACCOUNT_ID = "123456789012"
AWS_REGION = "us-east-1"
from tests.providers.aws.audit_info_utils import (
    AWS_ACCOUNT_NUMBER,
    AWS_REGION_EU_WEST_1,
)

ACTUAL_DIRECTORY = Path(path.dirname(path.realpath(__file__)))
FIXTURES_DIR_NAME = "fixtures"
@@ -27,8 +28,10 @@ class TestS3:
        audit_info = MagicMock()

        # Create mock session
        audit_info.audit_session = boto3.session.Session(region_name=AWS_REGION)
        audit_info.audited_account = AWS_ACCOUNT_ID
        audit_info.audit_session = boto3.session.Session(
            region_name=AWS_REGION_EU_WEST_1
        )
        audit_info.audited_account = AWS_ACCOUNT_NUMBER

        # Create mock bucket
        client = audit_info.audit_session.client("s3")
@@ -66,8 +69,10 @@ class TestS3:
        audit_info = MagicMock()

        # Create mock session
        audit_info.audit_session = boto3.session.Session(region_name=AWS_REGION)
        audit_info.audited_account = AWS_ACCOUNT_ID
        audit_info.audit_session = boto3.session.Session(
            region_name=AWS_REGION_EU_WEST_1
        )
        audit_info.audited_account = AWS_ACCOUNT_NUMBER

        # Create mock bucket
        client = audit_info.audit_session.client("s3")

@@ -21,49 +21,6 @@ from tests.providers.aws.audit_info_utils import (
    set_mocked_aws_audit_info,
)


def get_security_hub_finding(status: str):
    return {
        "SchemaVersion": "2018-10-08",
        "Id": f"prowler-iam_user_accesskey_unused-{AWS_ACCOUNT_NUMBER}-{AWS_REGION_EU_WEST_1}-ee26b0dd4",
        "ProductArn": f"arn:aws:securityhub:{AWS_REGION_EU_WEST_1}::product/prowler/prowler",
        "RecordState": "ACTIVE",
        "ProductFields": {
            "ProviderName": "Prowler",
            "ProviderVersion": prowler_version,
            "ProwlerResourceName": "test",
        },
        "GeneratorId": "prowler-iam_user_accesskey_unused",
        "AwsAccountId": f"{AWS_ACCOUNT_NUMBER}",
        "Types": ["Software and Configuration Checks"],
        "FirstObservedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "UpdatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "CreatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "Severity": {"Label": "LOW"},
        "Title": "Ensure Access Keys unused are disabled",
        "Description": "test",
        "Resources": [
            {
                "Type": "AwsIamAccessAnalyzer",
                "Id": "test",
                "Partition": "aws",
                "Region": f"{AWS_REGION_EU_WEST_1}",
            }
        ],
        "Compliance": {
            "Status": status,
            "RelatedRequirements": [],
            "AssociatedStandards": [],
        },
        "Remediation": {
            "Recommendation": {
                "Text": "Run sudo yum update and cross your fingers and toes.",
                "Url": "https://myfp.com/recommendations/dangerous_things_and_how_to_fix_them.html",
            }
        },
    }


# Mocking Security Hub Get Findings
make_api_call = botocore.client.BaseClient._make_api_call

@@ -107,13 +64,10 @@ class Test_SecurityHub:

        return finding

    def set_mocked_output_options(
        self, is_quiet: bool = False, send_sh_only_fails: bool = False
    ):
    def set_mocked_output_options(self, is_quiet):
        output_options = MagicMock
        output_options.bulk_checks_metadata = {}
        output_options.is_quiet = is_quiet
        output_options.send_sh_only_fails = send_sh_only_fails

        return output_options

@@ -144,7 +98,47 @@ class Test_SecurityHub:
            output_options,
            enabled_regions,
        ) == {
            AWS_REGION_EU_WEST_1: [get_security_hub_finding("PASSED")],
            AWS_REGION_EU_WEST_1: [
                {
                    "SchemaVersion": "2018-10-08",
                    "Id": f"prowler-iam_user_accesskey_unused-{AWS_ACCOUNT_NUMBER}-{AWS_REGION_EU_WEST_1}-ee26b0dd4",
                    "ProductArn": f"arn:aws:securityhub:{AWS_REGION_EU_WEST_1}::product/prowler/prowler",
                    "RecordState": "ACTIVE",
                    "ProductFields": {
                        "ProviderName": "Prowler",
                        "ProviderVersion": prowler_version,
                        "ProwlerResourceName": "test",
                    },
                    "GeneratorId": "prowler-iam_user_accesskey_unused",
                    "AwsAccountId": f"{AWS_ACCOUNT_NUMBER}",
                    "Types": ["Software and Configuration Checks"],
                    "FirstObservedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
                    "UpdatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
                    "CreatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
                    "Severity": {"Label": "LOW"},
                    "Title": "Ensure Access Keys unused are disabled",
                    "Description": "test",
                    "Resources": [
                        {
                            "Type": "AwsIamAccessAnalyzer",
                            "Id": "test",
                            "Partition": "aws",
                            "Region": f"{AWS_REGION_EU_WEST_1}",
                        }
                    ],
                    "Compliance": {
                        "Status": "PASSED",
                        "RelatedRequirements": [],
                        "AssociatedStandards": [],
                    },
                    "Remediation": {
                        "Recommendation": {
                            "Text": "Run sudo yum update and cross your fingers and toes.",
                            "Url": "https://myfp.com/recommendations/dangerous_things_and_how_to_fix_them.html",
                        }
                    },
                }
            ],
        }

    def test_prepare_security_hub_findings_quiet_INFO_finding(self):
@@ -177,7 +171,7 @@ class Test_SecurityHub:
            enabled_regions,
        ) == {AWS_REGION_EU_WEST_1: []}

    def test_prepare_security_hub_findings_quiet_PASS(self):
    def test_prepare_security_hub_findings_quiet(self):
        enabled_regions = [AWS_REGION_EU_WEST_1]
        output_options = self.set_mocked_output_options(is_quiet=True)
        findings = [self.generate_finding("PASS", AWS_REGION_EU_WEST_1)]
@@ -192,51 +186,6 @@ class Test_SecurityHub:
            enabled_regions,
        ) == {AWS_REGION_EU_WEST_1: []}

    def test_prepare_security_hub_findings_quiet_FAIL(self):
        enabled_regions = [AWS_REGION_EU_WEST_1]
        output_options = self.set_mocked_output_options(is_quiet=True)
        findings = [self.generate_finding("FAIL", AWS_REGION_EU_WEST_1)]
        audit_info = set_mocked_aws_audit_info(
            audited_regions=[AWS_REGION_EU_WEST_1, AWS_REGION_EU_WEST_2]
        )

        assert prepare_security_hub_findings(
            findings,
            audit_info,
            output_options,
            enabled_regions,
        ) == {AWS_REGION_EU_WEST_1: [get_security_hub_finding("FAILED")]}

    def test_prepare_security_hub_findings_send_sh_only_fails_PASS(self):
        enabled_regions = [AWS_REGION_EU_WEST_1]
        output_options = self.set_mocked_output_options(send_sh_only_fails=True)
        findings = [self.generate_finding("PASS", AWS_REGION_EU_WEST_1)]
        audit_info = set_mocked_aws_audit_info(
            audited_regions=[AWS_REGION_EU_WEST_1, AWS_REGION_EU_WEST_2]
        )

        assert prepare_security_hub_findings(
            findings,
            audit_info,
            output_options,
            enabled_regions,
        ) == {AWS_REGION_EU_WEST_1: []}

    def test_prepare_security_hub_findings_send_sh_only_fails_FAIL(self):
        enabled_regions = [AWS_REGION_EU_WEST_1]
        output_options = self.set_mocked_output_options(send_sh_only_fails=True)
        findings = [self.generate_finding("FAIL", AWS_REGION_EU_WEST_1)]
        audit_info = set_mocked_aws_audit_info(
|
||||
audited_regions=[AWS_REGION_EU_WEST_1, AWS_REGION_EU_WEST_2]
|
||||
)
|
||||
|
||||
assert prepare_security_hub_findings(
|
||||
findings,
|
||||
audit_info,
|
||||
output_options,
|
||||
enabled_regions,
|
||||
) == {AWS_REGION_EU_WEST_1: [get_security_hub_finding("FAILED")]}
|
||||
|
||||
def test_prepare_security_hub_findings_no_audited_regions(self):
|
||||
enabled_regions = [AWS_REGION_EU_WEST_1]
|
||||
output_options = self.set_mocked_output_options(is_quiet=False)
|
||||
@@ -249,7 +198,47 @@ class Test_SecurityHub:
|
||||
output_options,
|
||||
enabled_regions,
|
||||
) == {
|
||||
AWS_REGION_EU_WEST_1: [get_security_hub_finding("PASSED")],
|
||||
AWS_REGION_EU_WEST_1: [
|
||||
{
|
||||
"SchemaVersion": "2018-10-08",
|
||||
"Id": f"prowler-iam_user_accesskey_unused-{AWS_ACCOUNT_NUMBER}-{AWS_REGION_EU_WEST_1}-ee26b0dd4",
|
||||
"ProductArn": f"arn:aws:securityhub:{AWS_REGION_EU_WEST_1}::product/prowler/prowler",
|
||||
"RecordState": "ACTIVE",
|
||||
"ProductFields": {
|
||||
"ProviderName": "Prowler",
|
||||
"ProviderVersion": prowler_version,
|
||||
"ProwlerResourceName": "test",
|
||||
},
|
||||
"GeneratorId": "prowler-iam_user_accesskey_unused",
|
||||
"AwsAccountId": f"{AWS_ACCOUNT_NUMBER}",
|
||||
"Types": ["Software and Configuration Checks"],
|
||||
"FirstObservedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
|
||||
"UpdatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
|
||||
"CreatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
|
||||
"Severity": {"Label": "LOW"},
|
||||
"Title": "Ensure Access Keys unused are disabled",
|
||||
"Description": "test",
|
||||
"Resources": [
|
||||
{
|
||||
"Type": "AwsIamAccessAnalyzer",
|
||||
"Id": "test",
|
||||
"Partition": "aws",
|
||||
"Region": f"{AWS_REGION_EU_WEST_1}",
|
||||
}
|
||||
],
|
||||
"Compliance": {
|
||||
"Status": "PASSED",
|
||||
"RelatedRequirements": [],
|
||||
"AssociatedStandards": [],
|
||||
},
|
||||
"Remediation": {
|
||||
"Recommendation": {
|
||||
"Text": "Run sudo yum update and cross your fingers and toes.",
|
||||
"Url": "https://myfp.com/recommendations/dangerous_things_and_how_to_fix_them.html",
|
||||
}
|
||||
},
|
||||
}
|
||||
],
|
||||
}
|
||||
|
||||
@patch("botocore.client.BaseClient._make_api_call", new=mock_make_api_call)
|
||||
|
||||
@@ -10,7 +10,7 @@ from tests.providers.aws.audit_info_utils import (
|
||||
)
|
||||
|
||||
|
||||
def mock_generate_regional_clients(service, audit_info):
|
||||
def mock_generate_regional_clients(service, audit_info, _):
|
||||
regional_client = audit_info.audit_session.client(
|
||||
service, region_name=AWS_REGION_US_EAST_1
|
||||
)
|
||||
@@ -24,9 +24,8 @@ def mock_generate_regional_clients(service, audit_info):
|
||||
)
|
||||
class Test_AWSService:
|
||||
def test_AWSService_init(self):
|
||||
service_name = "s3"
|
||||
audit_info = set_mocked_aws_audit_info()
|
||||
service = AWSService(service_name, audit_info)
|
||||
service = AWSService("s3", audit_info)
|
||||
|
||||
assert service.audit_info == audit_info
|
||||
assert service.audited_account == AWS_ACCOUNT_NUMBER
|
||||
@@ -35,28 +34,8 @@ class Test_AWSService:
|
||||
assert service.audit_resources == []
|
||||
assert service.audited_checks == []
|
||||
assert service.session == audit_info.audit_session
|
||||
assert service.service == service_name
|
||||
assert service.service == "s3"
|
||||
assert len(service.regional_clients) == 1
|
||||
assert (
|
||||
service.regional_clients[AWS_REGION_US_EAST_1].__class__.__name__
|
||||
== service_name.upper()
|
||||
)
|
||||
assert service.regional_clients[AWS_REGION_US_EAST_1].__class__.__name__ == "S3"
|
||||
assert service.region == AWS_REGION_US_EAST_1
|
||||
assert service.client.__class__.__name__ == service_name.upper()
|
||||
|
||||
def test_AWSService_init_global_service(self):
|
||||
service_name = "cloudfront"
|
||||
audit_info = set_mocked_aws_audit_info()
|
||||
service = AWSService(service_name, audit_info, global_service=True)
|
||||
|
||||
assert service.audit_info == audit_info
|
||||
assert service.audited_account == AWS_ACCOUNT_NUMBER
|
||||
assert service.audited_account_arn == AWS_ACCOUNT_ARN
|
||||
assert service.audited_partition == AWS_COMMERCIAL_PARTITION
|
||||
assert service.audit_resources == []
|
||||
assert service.audited_checks == []
|
||||
assert service.session == audit_info.audit_session
|
||||
assert service.service == service_name
|
||||
assert not hasattr(service, "regional_clients")
|
||||
assert service.region == AWS_REGION_US_EAST_1
|
||||
assert service.client.__class__.__name__ == "CloudFront"
|
||||
assert service.client.__class__.__name__ == "S3"
|
||||
|
||||
@@ -3,13 +3,15 @@ from unittest import mock
|
||||
from prowler.providers.aws.services.accessanalyzer.accessanalyzer_service import (
|
||||
Analyzer,
|
||||
)
|
||||
from tests.providers.aws.audit_info_utils import (
|
||||
AWS_ACCOUNT_ARN,
|
||||
AWS_ACCOUNT_NUMBER,
|
||||
AWS_REGION_EU_WEST_1,
|
||||
AWS_REGION_EU_WEST_2,
|
||||
)
|
||||
|
||||
AWS_REGION_1 = "eu-west-1"
|
||||
AWS_REGION_2 = "eu-west-2"
|
||||
AWS_ACCOUNT_NUMBER = "123456789012"
|
||||
AWS_ACCOUNT_ARN = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"
|
||||
ACCESS_ANALYZER_NAME = "test-analyzer"
|
||||
ACCESS_ANALYZER_ARN = f"arn:aws:access-analyzer:{AWS_REGION_2}:{AWS_ACCOUNT_NUMBER}:analyzer/{ACCESS_ANALYZER_NAME}"
|
||||
ACCESS_ANALYZER_ARN = f"arn:aws:access-analyzer:{AWS_REGION_EU_WEST_2}:{AWS_ACCOUNT_NUMBER}:analyzer/{ACCESS_ANALYZER_NAME}"
|
||||
|
||||
|
||||
class Test_accessanalyzer_enabled:
|
||||
@@ -33,7 +35,7 @@ class Test_accessanalyzer_enabled:
|
||||
def test_one_analyzer_not_available(self):
|
||||
# Include analyzers to check
|
||||
accessanalyzer_client = mock.MagicMock
|
||||
accessanalyzer_client.region = AWS_REGION_1
|
||||
accessanalyzer_client.region = AWS_REGION_EU_WEST_1
|
||||
accessanalyzer_client.analyzers = [
|
||||
Analyzer(
|
||||
arn=AWS_ACCOUNT_ARN,
|
||||
@@ -41,7 +43,7 @@ class Test_accessanalyzer_enabled:
|
||||
status="NOT_AVAILABLE",
|
||||
tags=[],
|
||||
type="",
|
||||
region=AWS_REGION_1,
|
||||
region=AWS_REGION_EU_WEST_1,
|
||||
)
|
||||
]
|
||||
with mock.patch(
|
||||
@@ -63,13 +65,13 @@ class Test_accessanalyzer_enabled:
|
||||
)
|
||||
assert result[0].resource_id == AWS_ACCOUNT_NUMBER
|
||||
assert result[0].resource_arn == AWS_ACCOUNT_ARN
|
||||
assert result[0].region == AWS_REGION_1
|
||||
assert result[0].region == AWS_REGION_EU_WEST_1
|
||||
assert result[0].resource_tags == []
|
||||
|
||||
def test_one_analyzer_not_available_allowlisted(self):
|
||||
# Include analyzers to check
|
||||
accessanalyzer_client = mock.MagicMock
|
||||
accessanalyzer_client.region = AWS_REGION_2
|
||||
accessanalyzer_client.region = AWS_REGION_EU_WEST_2
|
||||
accessanalyzer_client.audit_config = {"allowlist_non_default_regions": True}
|
||||
accessanalyzer_client.analyzers = [
|
||||
Analyzer(
|
||||
@@ -78,7 +80,7 @@ class Test_accessanalyzer_enabled:
|
||||
status="NOT_AVAILABLE",
|
||||
tags=[],
|
||||
type="",
|
||||
region=AWS_REGION_1,
|
||||
region=AWS_REGION_EU_WEST_1,
|
||||
)
|
||||
]
|
||||
with mock.patch(
|
||||
@@ -100,12 +102,12 @@ class Test_accessanalyzer_enabled:
|
||||
)
|
||||
assert result[0].resource_id == AWS_ACCOUNT_NUMBER
|
||||
assert result[0].resource_arn == AWS_ACCOUNT_ARN
|
||||
assert result[0].region == AWS_REGION_1
|
||||
assert result[0].region == AWS_REGION_EU_WEST_1
|
||||
assert result[0].resource_tags == []
|
||||
|
||||
def test_two_analyzers(self):
|
||||
accessanalyzer_client = mock.MagicMock
|
||||
accessanalyzer_client.region = AWS_REGION_1
|
||||
accessanalyzer_client.region = AWS_REGION_EU_WEST_1
|
||||
accessanalyzer_client.analyzers = [
|
||||
Analyzer(
|
||||
arn=AWS_ACCOUNT_ARN,
|
||||
@@ -113,7 +115,7 @@ class Test_accessanalyzer_enabled:
|
||||
status="NOT_AVAILABLE",
|
||||
tags=[],
|
||||
type="",
|
||||
region=AWS_REGION_1,
|
||||
region=AWS_REGION_EU_WEST_1,
|
||||
),
|
||||
Analyzer(
|
||||
arn=ACCESS_ANALYZER_ARN,
|
||||
@@ -121,7 +123,7 @@ class Test_accessanalyzer_enabled:
|
||||
status="ACTIVE",
|
||||
tags=[],
|
||||
type="",
|
||||
region=AWS_REGION_2,
|
||||
region=AWS_REGION_EU_WEST_2,
|
||||
),
|
||||
]
|
||||
|
||||
@@ -148,7 +150,7 @@ class Test_accessanalyzer_enabled:
|
||||
assert result[0].resource_id == AWS_ACCOUNT_NUMBER
|
||||
assert result[0].resource_arn == AWS_ACCOUNT_ARN
|
||||
assert result[0].resource_tags == []
|
||||
assert result[0].region == AWS_REGION_1
|
||||
assert result[0].region == AWS_REGION_EU_WEST_1
|
||||
|
||||
assert result[1].status == "PASS"
|
||||
assert (
|
||||
@@ -158,7 +160,7 @@ class Test_accessanalyzer_enabled:
|
||||
assert result[1].resource_id == ACCESS_ANALYZER_NAME
|
||||
assert result[1].resource_arn == ACCESS_ANALYZER_ARN
|
||||
assert result[1].resource_tags == []
|
||||
assert result[1].region == AWS_REGION_2
|
||||
assert result[1].region == AWS_REGION_EU_WEST_2
|
||||
|
||||
def test_one_active_analyzer(self):
|
||||
accessanalyzer_client = mock.MagicMock
|
||||
@@ -169,7 +171,7 @@ class Test_accessanalyzer_enabled:
|
||||
status="ACTIVE",
|
||||
tags=[],
|
||||
type="",
|
||||
region=AWS_REGION_2,
|
||||
region=AWS_REGION_EU_WEST_2,
|
||||
)
|
||||
]
|
||||
|
||||
@@ -195,4 +197,4 @@ class Test_accessanalyzer_enabled:
|
||||
assert result[0].resource_id == ACCESS_ANALYZER_NAME
|
||||
assert result[0].resource_arn == ACCESS_ANALYZER_ARN
|
||||
assert result[0].resource_tags == []
|
||||
assert result[0].region == AWS_REGION_2
|
||||
assert result[0].region == AWS_REGION_EU_WEST_2
|
||||
|
||||
@@ -4,13 +4,15 @@ from prowler.providers.aws.services.accessanalyzer.accessanalyzer_service import
|
||||
Analyzer,
|
||||
Finding,
|
||||
)
|
||||
from tests.providers.aws.audit_info_utils import (
|
||||
AWS_ACCOUNT_ARN,
|
||||
AWS_ACCOUNT_NUMBER,
|
||||
AWS_REGION_EU_WEST_1,
|
||||
AWS_REGION_EU_WEST_2,
|
||||
)
|
||||
|
||||
AWS_REGION_1 = "eu-west-1"
|
||||
AWS_REGION_2 = "eu-west-2"
|
||||
AWS_ACCOUNT_NUMBER = "123456789012"
|
||||
AWS_ACCOUNT_ARN = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"
|
||||
ACCESS_ANALYZER_NAME = "test-analyzer"
|
||||
ACCESS_ANALYZER_ARN = f"arn:aws:access-analyzer:{AWS_REGION_2}:{AWS_ACCOUNT_NUMBER}:analyzer/{ACCESS_ANALYZER_NAME}"
|
||||
ACCESS_ANALYZER_ARN = f"arn:aws:access-analyzer:{AWS_REGION_EU_WEST_2}:{AWS_ACCOUNT_NUMBER}:analyzer/{ACCESS_ANALYZER_NAME}"
|
||||
|
||||
|
||||
class Test_accessanalyzer_enabled_without_findings:
|
||||
@@ -42,7 +44,7 @@ class Test_accessanalyzer_enabled_without_findings:
|
||||
tags=[],
|
||||
type="",
|
||||
fidings=[],
|
||||
region=AWS_REGION_1,
|
||||
region=AWS_REGION_EU_WEST_1,
|
||||
)
|
||||
]
|
||||
with mock.patch(
|
||||
@@ -68,7 +70,7 @@ class Test_accessanalyzer_enabled_without_findings:
|
||||
tags=[],
|
||||
fidings=[],
|
||||
type="",
|
||||
region=AWS_REGION_1,
|
||||
region=AWS_REGION_EU_WEST_1,
|
||||
),
|
||||
Analyzer(
|
||||
arn=ACCESS_ANALYZER_ARN,
|
||||
@@ -86,7 +88,7 @@ class Test_accessanalyzer_enabled_without_findings:
|
||||
],
|
||||
tags=[],
|
||||
type="",
|
||||
region=AWS_REGION_2,
|
||||
region=AWS_REGION_EU_WEST_2,
|
||||
),
|
||||
]
|
||||
|
||||
@@ -112,7 +114,7 @@ class Test_accessanalyzer_enabled_without_findings:
|
||||
)
|
||||
assert result[0].resource_id == ACCESS_ANALYZER_NAME
|
||||
assert result[0].resource_arn == ACCESS_ANALYZER_ARN
|
||||
assert result[0].region == AWS_REGION_2
|
||||
assert result[0].region == AWS_REGION_EU_WEST_2
|
||||
assert result[0].resource_tags == []
|
||||
|
||||
def test_one_active_analyzer_without_findings(self):
|
||||
@@ -125,7 +127,7 @@ class Test_accessanalyzer_enabled_without_findings:
|
||||
tags=[],
|
||||
fidings=[],
|
||||
type="",
|
||||
region=AWS_REGION_2,
|
||||
region=AWS_REGION_EU_WEST_2,
|
||||
)
|
||||
]
|
||||
|
||||
@@ -149,7 +151,7 @@ class Test_accessanalyzer_enabled_without_findings:
|
||||
)
|
||||
assert result[0].resource_id == ACCESS_ANALYZER_NAME
|
||||
assert result[0].resource_arn == ACCESS_ANALYZER_ARN
|
||||
assert result[0].region == AWS_REGION_2
|
||||
assert result[0].region == AWS_REGION_EU_WEST_2
|
||||
assert result[0].resource_tags == []
|
||||
|
||||
def test_one_active_analyzer_not_active_without_findings(self):
|
||||
@@ -162,7 +164,7 @@ class Test_accessanalyzer_enabled_without_findings:
|
||||
tags=[],
|
||||
fidings=[],
|
||||
type="",
|
||||
region=AWS_REGION_1,
|
||||
region=AWS_REGION_EU_WEST_1,
|
||||
),
|
||||
]
|
||||
# Patch AccessAnalyzer Client
|
||||
@@ -195,7 +197,7 @@ class Test_accessanalyzer_enabled_without_findings:
|
||||
],
|
||||
tags=[],
|
||||
type="",
|
||||
region=AWS_REGION_1,
|
||||
region=AWS_REGION_EU_WEST_1,
|
||||
),
|
||||
]
|
||||
|
||||
@@ -220,5 +222,5 @@ class Test_accessanalyzer_enabled_without_findings:
|
||||
)
|
||||
assert result[0].resource_id == ACCESS_ANALYZER_NAME
|
||||
assert result[0].resource_arn == ACCESS_ANALYZER_ARN
|
||||
assert result[0].region == AWS_REGION_1
|
||||
assert result[0].region == AWS_REGION_EU_WEST_1
|
||||
assert result[0].resource_tags == []
|
||||
|
||||
@@ -54,7 +54,7 @@ def mock_make_api_call(self, operation_name, kwarg):
|
||||
return make_api_call(self, operation_name, kwarg)
|
||||
|
||||
|
||||
def mock_generate_regional_clients(service, audit_info):
|
||||
def mock_generate_regional_clients(service, audit_info, _):
|
||||
regional_client = audit_info.audit_session.client(
|
||||
service, region_name=AWS_REGION_EU_WEST_1
|
||||
)
|
||||
|
||||
@@ -1,16 +1,18 @@
|
||||
from unittest import mock
|
||||
|
||||
from prowler.providers.aws.services.account.account_service import Contact
|
||||
from tests.providers.aws.audit_info_utils import (
|
||||
AWS_ACCOUNT_NUMBER,
|
||||
AWS_REGION_EU_WEST_1,
|
||||
)
|
||||
|
||||
AWS_ACCOUNT_NUMBER = "123456789012"
|
||||
AWS_ACCOUNT_ARN = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"
|
||||
AWS_REGION = "us-east-1"
|
||||
|
||||
|
||||
class Test_account_maintain_different_contact_details_to_security_billing_and_operations:
|
||||
def test_contacts_not_configured_or_equal(self):
|
||||
account_client = mock.MagicMock
|
||||
account_client.region = AWS_REGION
|
||||
account_client.region = AWS_REGION_EU_WEST_1
|
||||
account_client.audited_account = AWS_ACCOUNT_NUMBER
|
||||
account_client.audited_account_arn = AWS_ACCOUNT_ARN
|
||||
|
||||
@@ -49,13 +51,13 @@ class Test_account_maintain_different_contact_details_to_security_billing_and_op
|
||||
result[0].status_extended
|
||||
== "SECURITY, BILLING and OPERATIONS contacts not found or they are not different between each other and between ROOT contact."
|
||||
)
|
||||
assert result[0].region == AWS_REGION
|
||||
assert result[0].region == AWS_REGION_EU_WEST_1
|
||||
assert result[0].resource_id == AWS_ACCOUNT_NUMBER
|
||||
assert result[0].resource_arn == AWS_ACCOUNT_ARN
|
||||
|
||||
def test_contacts_diffent(self):
|
||||
account_client = mock.MagicMock
|
||||
account_client.region = AWS_REGION
|
||||
account_client.region = AWS_REGION_EU_WEST_1
|
||||
account_client.audited_account = AWS_ACCOUNT_NUMBER
|
||||
account_client.audited_account_arn = AWS_ACCOUNT_ARN
|
||||
|
||||
@@ -98,6 +100,6 @@ class Test_account_maintain_different_contact_details_to_security_billing_and_op
|
||||
result[0].status_extended
|
||||
== "SECURITY, BILLING and OPERATIONS contacts found and they are different between each other and between ROOT contact."
|
||||
)
|
||||
assert result[0].region == AWS_REGION
|
||||
assert result[0].region == AWS_REGION_EU_WEST_1
|
||||
assert result[0].resource_id == AWS_ACCOUNT_NUMBER
|
||||
assert result[0].resource_arn == AWS_ACCOUNT_ARN
|
||||
|
||||
@@ -2,9 +2,11 @@ import uuid
|
||||
from unittest import mock
|
||||
|
||||
from prowler.providers.aws.services.acm.acm_service import Certificate
|
||||
from tests.providers.aws.audit_info_utils import (
|
||||
AWS_ACCOUNT_NUMBER,
|
||||
AWS_REGION_EU_WEST_1,
|
||||
)
|
||||
|
||||
AWS_REGION = "us-east-1"
|
||||
AWS_ACCOUNT_NUMBER = "123456789012"
|
||||
DAYS_TO_EXPIRE_THRESHOLD = 7
|
||||
|
||||
|
||||
@@ -29,7 +31,7 @@ class Test_acm_certificates_expiration_check:
|
||||
|
||||
def test_acm_certificate_expirated(self):
|
||||
certificate_id = str(uuid.uuid4())
|
||||
certificate_arn = f"arn:aws:acm:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:certificate/{certificate_id}"
|
||||
certificate_arn = f"arn:aws:acm:{AWS_REGION_EU_WEST_1}:{AWS_ACCOUNT_NUMBER}:certificate/{certificate_id}"
|
||||
certificate_name = "test-certificate.com"
|
||||
certificate_type = "AMAZON_ISSUED"
|
||||
|
||||
@@ -42,7 +44,7 @@ class Test_acm_certificates_expiration_check:
|
||||
type=certificate_type,
|
||||
expiration_days=5,
|
||||
transparency_logging=True,
|
||||
region=AWS_REGION,
|
||||
region=AWS_REGION_EU_WEST_1,
|
||||
)
|
||||
]
|
||||
|
||||
@@ -66,12 +68,12 @@ class Test_acm_certificates_expiration_check:
|
||||
)
|
||||
assert result[0].resource_id == certificate_id
|
||||
assert result[0].resource_arn == certificate_arn
|
||||
assert result[0].region == AWS_REGION
|
||||
assert result[0].region == AWS_REGION_EU_WEST_1
|
||||
assert result[0].resource_tags == []
|
||||
|
||||
def test_acm_certificate_not_expirated(self):
|
||||
certificate_id = str(uuid.uuid4())
|
||||
certificate_arn = f"arn:aws:acm:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:certificate/{certificate_id}"
|
||||
certificate_arn = f"arn:aws:acm:{AWS_REGION_EU_WEST_1}:{AWS_ACCOUNT_NUMBER}:certificate/{certificate_id}"
|
||||
certificate_name = "test-certificate.com"
|
||||
certificate_type = "AMAZON_ISSUED"
|
||||
expiration_days = 365
|
||||
@@ -85,7 +87,7 @@ class Test_acm_certificates_expiration_check:
|
||||
type=certificate_type,
|
||||
expiration_days=expiration_days,
|
||||
transparency_logging=True,
|
||||
region=AWS_REGION,
|
||||
region=AWS_REGION_EU_WEST_1,
|
||||
)
|
||||
]
|
||||
|
||||
@@ -109,5 +111,5 @@ class Test_acm_certificates_expiration_check:
|
||||
)
|
||||
assert result[0].resource_id == certificate_id
|
||||
assert result[0].resource_arn == certificate_arn
|
||||
assert result[0].region == AWS_REGION
|
||||
assert result[0].region == AWS_REGION_EU_WEST_1
|
||||
assert result[0].resource_tags == []
|
||||
|
||||
@@ -2,9 +2,10 @@ import uuid
|
||||
from unittest import mock
|
||||
|
||||
from prowler.providers.aws.services.acm.acm_service import Certificate
|
||||
|
||||
AWS_REGION = "us-east-1"
|
||||
AWS_ACCOUNT_NUMBER = "123456789012"
|
||||
from tests.providers.aws.audit_info_utils import (
|
||||
AWS_ACCOUNT_NUMBER,
|
||||
AWS_REGION_EU_WEST_1,
|
||||
)
|
||||
|
||||
|
||||
class Test_acm_certificates_transparency_logs_enabled:
|
||||
@@ -28,7 +29,7 @@ class Test_acm_certificates_transparency_logs_enabled:
|
||||
|
||||
def test_acm_certificate_with_logging(self):
|
||||
certificate_id = str(uuid.uuid4())
|
||||
certificate_arn = f"arn:aws:acm:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:certificate/{certificate_id}"
|
||||
certificate_arn = f"arn:aws:acm:{AWS_REGION_EU_WEST_1}:{AWS_ACCOUNT_NUMBER}:certificate/{certificate_id}"
|
||||
certificate_name = "test-certificate.com"
|
||||
certificate_type = "AMAZON_ISSUED"
|
||||
|
||||
@@ -41,7 +42,7 @@ class Test_acm_certificates_transparency_logs_enabled:
|
||||
type=certificate_type,
|
||||
expiration_days=365,
|
||||
transparency_logging=True,
|
||||
region=AWS_REGION,
|
||||
region=AWS_REGION_EU_WEST_1,
|
||||
)
|
||||
]
|
||||
|
||||
@@ -65,12 +66,12 @@ class Test_acm_certificates_transparency_logs_enabled:
|
||||
)
|
||||
assert result[0].resource_id == certificate_id
|
||||
assert result[0].resource_arn == certificate_arn
|
||||
assert result[0].region == AWS_REGION
|
||||
assert result[0].region == AWS_REGION_EU_WEST_1
|
||||
assert result[0].resource_tags == []
|
||||
|
||||
def test_acm_certificate_without_logging(self):
|
||||
certificate_id = str(uuid.uuid4())
|
||||
certificate_arn = f"arn:aws:acm:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:certificate/{certificate_id}"
|
||||
certificate_arn = f"arn:aws:acm:{AWS_REGION_EU_WEST_1}:{AWS_ACCOUNT_NUMBER}:certificate/{certificate_id}"
|
||||
certificate_name = "test-certificate.com"
|
||||
certificate_type = "AMAZON_ISSUED"
|
||||
|
||||
@@ -83,7 +84,7 @@ class Test_acm_certificates_transparency_logs_enabled:
|
||||
type=certificate_type,
|
||||
expiration_days=365,
|
||||
transparency_logging=False,
|
||||
region=AWS_REGION,
|
||||
region=AWS_REGION_EU_WEST_1,
|
||||
)
|
||||
]
|
||||
|
||||
@@ -107,5 +108,5 @@ class Test_acm_certificates_transparency_logs_enabled:
|
||||
)
|
||||
assert result[0].resource_id == certificate_id
|
||||
assert result[0].resource_arn == certificate_arn
|
||||
assert result[0].region == AWS_REGION
|
||||
assert result[0].region == AWS_REGION_EU_WEST_1
|
||||
assert result[0].resource_tags == []
|
||||
|
||||
@@ -74,7 +74,7 @@ def mock_make_api_call(self, operation_name, kwargs):
|
||||
|
||||
|
||||
# Mock generate_regional_clients()
|
||||
def mock_generate_regional_clients(service, audit_info):
|
||||
def mock_generate_regional_clients(service, audit_info, _):
|
||||
regional_client = audit_info.audit_session.client(
|
||||
service, region_name=AWS_REGION_US_EAST_1
|
||||
)
|
||||
@@ -129,7 +129,7 @@ class Test_ACM_Service:
|
||||
# @mock_acm
|
||||
def test__list_and_describe_certificates__(self):
|
||||
# Generate ACM Client
|
||||
# acm_client = client("acm", region_name=AWS_REGION)
|
||||
# acm_client = client("acm", region_name=AWS_REGION_EU_WEST_1)
|
||||
# Request ACM certificate
|
||||
# certificate = acm_client.request_certificate(
|
||||
# DomainName="test.com",
|
||||
@@ -150,7 +150,7 @@ class Test_ACM_Service:
|
||||
# @mock_acm
|
||||
def test__list_tags_for_certificate__(self):
|
||||
# Generate ACM Client
|
||||
# acm_client = client("acm", region_name=AWS_REGION)
|
||||
# acm_client = client("acm", region_name=AWS_REGION_EU_WEST_1)
|
||||
# Request ACM certificate
|
||||
# certificate = acm_client.request_certificate(
|
||||
# DomainName="test.com",
|
||||
|
||||
@@ -2,9 +2,9 @@ from unittest import mock
|
||||
|
||||
from boto3 import client
|
||||
from moto import mock_apigateway, mock_iam, mock_lambda
|
||||
from moto.core import DEFAULT_ACCOUNT_ID as ACCOUNT_ID
|
||||
|
||||
from tests.providers.aws.audit_info_utils import (
|
||||
AWS_ACCOUNT_NUMBER,
|
||||
AWS_REGION_EU_WEST_1,
|
||||
AWS_REGION_US_EAST_1,
|
||||
set_mocked_aws_audit_info,
|
||||
@@ -68,7 +68,7 @@ class Test_apigateway_restapi_authorizers_enabled:
|
||||
name="test",
|
||||
restApiId=rest_api["id"],
|
||||
type="TOKEN",
|
||||
authorizerUri=f"arn:aws:apigateway:{apigateway_client.meta.region_name}:lambda:path/2015-03-31/functions/arn:aws:lambda:{apigateway_client.meta.region_name}:{AWS_ACCOUNT_NUMBER}:function:{authorizer['FunctionName']}/invocations",
|
||||
authorizerUri=f"arn:aws:apigateway:{apigateway_client.meta.region_name}:lambda:path/2015-03-31/functions/arn:aws:lambda:{apigateway_client.meta.region_name}:{ACCOUNT_ID}:function:{authorizer['FunctionName']}/invocations",
|
||||
)
|
||||
from prowler.providers.aws.services.apigateway.apigateway_service import (
|
||||
APIGateway,
|
||||
@@ -97,7 +97,7 @@ class Test_apigateway_restapi_authorizers_enabled:
|
||||
assert len(result) == 1
|
||||
assert (
|
||||
result[0].status_extended
|
||||
== f"API Gateway test-rest-api ID {rest_api['id']} has an authorizer configured at api level"
|
||||
== f"API Gateway test-rest-api ID {rest_api['id']} has an authorizer configured."
|
||||
)
|
||||
assert result[0].resource_id == "test-rest-api"
|
||||
assert (
|
||||
@@ -142,337 +142,7 @@ class Test_apigateway_restapi_authorizers_enabled:
|
||||
assert len(result) == 1
|
||||
assert (
|
||||
result[0].status_extended
|
||||
== f"API Gateway test-rest-api ID {rest_api['id']} does not have an authorizer configured at api level."
|
||||
)
|
||||
assert result[0].resource_id == "test-rest-api"
|
||||
assert (
|
||||
result[0].resource_arn
|
||||
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
|
||||
)
|
||||
assert result[0].region == AWS_REGION_US_EAST_1
|
||||
assert result[0].resource_tags == [{}]
|
||||
|
||||
@mock_apigateway
|
||||
@mock_iam
|
||||
@mock_lambda
|
||||
def test_apigateway_one_rest_api_without_api_or_methods_authorizer(self):
|
||||
# Create APIGateway Mocked Resources
|
||||
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
|
||||
|
||||
rest_api = apigateway_client.create_rest_api(
|
||||
name="test-rest-api",
|
||||
)
|
||||
|
||||
default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
|
||||
"items"
|
||||
][0]["id"]
|
||||
|
||||
api_resource = apigateway_client.create_resource(
|
||||
restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
|
||||
)
|
||||
|
||||
apigateway_client.put_method(
|
||||
restApiId=rest_api["id"],
|
||||
resourceId=api_resource["id"],
|
||||
httpMethod="GET",
|
||||
authorizationType="NONE",
|
||||
)
|
||||
|
||||
from prowler.providers.aws.services.apigateway.apigateway_service import (
|
||||
APIGateway,
|
||||
)
|
||||
|
||||
current_audit_info = current_audit_info = set_mocked_aws_audit_info(
|
||||
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
|
||||
)
|
||||
|
||||
with mock.patch(
|
||||
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
|
||||
new=current_audit_info,
|
||||
), mock.patch(
|
||||
"prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
|
||||
new=APIGateway(current_audit_info),
|
||||
):
|
||||
# Test Check
|
||||
from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
|
||||
apigateway_restapi_authorizers_enabled,
|
||||
)
|
||||
|
||||
check = apigateway_restapi_authorizers_enabled()
|
||||
result = check.execute()
|
||||
|
||||
assert result[0].status == "FAIL"
|
||||
assert len(result) == 1
|
||||
assert (
|
||||
result[0].status_extended
|
||||
== f"API Gateway test-rest-api ID {rest_api['id']} does not have authorizers at api level and the following paths and methods are unauthorized: /test -> GET."
|
||||
)
|
||||
assert result[0].resource_id == "test-rest-api"
|
||||
assert (
|
||||
result[0].resource_arn
|
||||
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
|
||||
)
|
||||
assert result[0].region == AWS_REGION_US_EAST_1
|
||||
assert result[0].resource_tags == [{}]
|
||||
|
||||
@mock_apigateway
|
||||
@mock_iam
|
||||
@mock_lambda
|
||||
def test_apigateway_one_rest_api_without_api_auth_but_one_method_auth(self):
|
||||
# Create APIGateway Mocked Resources
|
||||
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
|
||||
|
||||
rest_api = apigateway_client.create_rest_api(
|
||||
name="test-rest-api",
|
||||
)
|
||||
|
||||
default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
|
||||
"items"
|
||||
][0]["id"]
|
||||
|
||||
api_resource = apigateway_client.create_resource(
|
||||
restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
|
||||
)
|
||||
|
||||
apigateway_client.put_method(
|
||||
restApiId=rest_api["id"],
|
||||
resourceId=api_resource["id"],
|
||||
httpMethod="GET",
|
||||
authorizationType="AWS_IAM",
|
||||
)
|
||||
|
||||
from prowler.providers.aws.services.apigateway.apigateway_service import (
|
||||
APIGateway,
|
||||
)
|
||||
|
||||
current_audit_info = current_audit_info = set_mocked_aws_audit_info(
|
||||
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
|
||||
)
|
||||
|
||||
with mock.patch(
|
||||
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
|
||||
new=current_audit_info,
|
||||
), mock.patch(
|
||||
"prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
|
||||
new=APIGateway(current_audit_info),
|
||||
):
|
||||
# Test Check
|
||||
            from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
                apigateway_restapi_authorizers_enabled,
            )

            check = apigateway_restapi_authorizers_enabled()
            result = check.execute()

            assert result[0].status == "PASS"
            assert len(result) == 1
            assert (
                result[0].status_extended
                == f"API Gateway test-rest-api ID {rest_api['id']} has all methods authorized"
            )
            assert result[0].resource_id == "test-rest-api"
            assert (
                result[0].resource_arn
                == f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
            )
            assert result[0].region == AWS_REGION_US_EAST_1
            assert result[0].resource_tags == [{}]

    @mock_apigateway
    @mock_iam
    @mock_lambda
    def test_apigateway_one_rest_api_without_api_auth_but_methods_auth_and_not(self):
        # Create APIGateway Mocked Resources
        apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)

        rest_api = apigateway_client.create_rest_api(
            name="test-rest-api",
        )

        default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
            "items"
        ][0]["id"]

        api_resource = apigateway_client.create_resource(
            restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
        )

        apigateway_client.put_method(
            restApiId=rest_api["id"],
            resourceId=api_resource["id"],
            httpMethod="POST",
            authorizationType="AWS_IAM",
        )

        apigateway_client.put_method(
            restApiId=rest_api["id"],
            resourceId=api_resource["id"],
            httpMethod="GET",
            authorizationType="NONE",
        )

        from prowler.providers.aws.services.apigateway.apigateway_service import (
            APIGateway,
        )

        current_audit_info = set_mocked_aws_audit_info(
            [AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
        )

        with mock.patch(
            "prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
            new=current_audit_info,
        ), mock.patch(
            "prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
            new=APIGateway(current_audit_info),
        ):
            # Test Check
            from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
                apigateway_restapi_authorizers_enabled,
            )

            check = apigateway_restapi_authorizers_enabled()
            result = check.execute()

            assert result[0].status == "FAIL"
            assert len(result) == 1
            assert (
                result[0].status_extended
                == f"API Gateway test-rest-api ID {rest_api['id']} does not have authorizers at api level and the following paths and methods are unauthorized: /test -> GET."
            )
            assert result[0].resource_id == "test-rest-api"
            assert (
                result[0].resource_arn
                == f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
            )
            assert result[0].region == AWS_REGION_US_EAST_1
            assert result[0].resource_tags == [{}]

    @mock_apigateway
    @mock_iam
    @mock_lambda
    def test_apigateway_one_rest_api_without_api_auth_but_methods_not_auth_and_auth(
        self,
    ):
        # Create APIGateway Mocked Resources
        apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)

        rest_api = apigateway_client.create_rest_api(
            name="test-rest-api",
        )

        default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
            "items"
        ][0]["id"]

        api_resource = apigateway_client.create_resource(
            restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
        )

        apigateway_client.put_method(
            restApiId=rest_api["id"],
            resourceId=api_resource["id"],
            httpMethod="GET",
            authorizationType="NONE",
        )

        apigateway_client.put_method(
            restApiId=rest_api["id"],
            resourceId=api_resource["id"],
            httpMethod="POST",
            authorizationType="AWS_IAM",
        )

        from prowler.providers.aws.services.apigateway.apigateway_service import (
            APIGateway,
        )

        current_audit_info = set_mocked_aws_audit_info(
            [AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
        )

        with mock.patch(
            "prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
            new=current_audit_info,
        ), mock.patch(
            "prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
            new=APIGateway(current_audit_info),
        ):
            # Test Check
            from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
                apigateway_restapi_authorizers_enabled,
            )

            check = apigateway_restapi_authorizers_enabled()
            result = check.execute()

            assert result[0].status == "FAIL"
            assert len(result) == 1
            assert (
                result[0].status_extended
                == f"API Gateway test-rest-api ID {rest_api['id']} does not have authorizers at api level and the following paths and methods are unauthorized: /test -> GET."
            )
            assert result[0].resource_id == "test-rest-api"
            assert (
                result[0].resource_arn
                == f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
            )
            assert result[0].region == AWS_REGION_US_EAST_1
            assert result[0].resource_tags == [{}]

    @mock_apigateway
    @mock_iam
    @mock_lambda
    def test_apigateway_one_rest_api_without_authorizers_with_various_resources_without_endpoints(
        self,
    ):
        # Create APIGateway Mocked Resources
        apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)

        rest_api = apigateway_client.create_rest_api(
            name="test-rest-api",
        )

        default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
            "items"
        ][0]["id"]

        apigateway_client.create_resource(
            restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
        )

        apigateway_client.create_resource(
            restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test2"
        )

        from prowler.providers.aws.services.apigateway.apigateway_service import (
            APIGateway,
        )

        current_audit_info = set_mocked_aws_audit_info(
            [AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
        )

        with mock.patch(
            "prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
            new=current_audit_info,
        ), mock.patch(
            "prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
            new=APIGateway(current_audit_info),
        ):
            # Test Check
            from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
                apigateway_restapi_authorizers_enabled,
            )

            check = apigateway_restapi_authorizers_enabled()
            result = check.execute()

            assert result[0].status == "FAIL"
            assert len(result) == 1
            assert (
                result[0].status_extended
                == f"API Gateway test-rest-api ID {rest_api['id']} does not have an authorizer configured at api level."
                == f"API Gateway test-rest-api ID {rest_api['id']} does not have an authorizer configured."
            )
            assert result[0].resource_id == "test-rest-api"
            assert (
@@ -146,45 +146,3 @@ class Test_APIGateway_Service:
        audit_info = set_mocked_aws_audit_info([AWS_REGION_US_EAST_1])
        apigateway = APIGateway(audit_info)
        assert apigateway.rest_apis[0].stages[0].logging is True

    # Test APIGateway __get_resources__
    @mock_apigateway
    def test__get_resources__(self):
        apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)

        rest_api = apigateway_client.create_rest_api(
            name="test-rest-api",
        )

        default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
            "items"
        ][0]["id"]

        api_resource = apigateway_client.create_resource(
            restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
        )

        apigateway_client.put_method(
            restApiId=rest_api["id"],
            resourceId=api_resource["id"],
            httpMethod="GET",
            authorizationType="AWS_IAM",
        )

        apigateway_client.put_method(
            restApiId=rest_api["id"],
            resourceId=api_resource["id"],
            httpMethod="OPTIONS",
            authorizationType="AWS_IAM",
        )

        audit_info = set_mocked_aws_audit_info([AWS_REGION_US_EAST_1])
        apigateway = APIGateway(audit_info)

        # we skip OPTIONS methods
        assert list(apigateway.rest_apis[0].resources[1].resource_methods.keys()) == [
            "GET"
        ]
        assert list(apigateway.rest_apis[0].resources[1].resource_methods.values()) == [
            "AWS_IAM"
        ]
@@ -1,9 +1,7 @@
from unittest import mock

from prowler.providers.aws.services.appstream.appstream_service import Fleet

# Mock Test Region
AWS_REGION = "eu-west-1"
from tests.providers.aws.audit_info_utils import AWS_REGION_EU_WEST_1


class Test_appstream_fleet_default_internet_access_disabled:
@@ -34,7 +32,7 @@ class Test_appstream_fleet_default_internet_access_disabled:
            disconnect_timeout_in_seconds=900,
            idle_disconnect_timeout_in_seconds=900,
            enable_default_internet_access=True,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )

        appstream_client.fleets.append(fleet1)
@@ -72,7 +70,7 @@ class Test_appstream_fleet_default_internet_access_disabled:
            disconnect_timeout_in_seconds=900,
            idle_disconnect_timeout_in_seconds=900,
            enable_default_internet_access=False,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )

        appstream_client.fleets.append(fleet1)
@@ -110,7 +108,7 @@ class Test_appstream_fleet_default_internet_access_disabled:
            disconnect_timeout_in_seconds=900,
            idle_disconnect_timeout_in_seconds=900,
            enable_default_internet_access=True,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )
        fleet2 = Fleet(
            arn="arn",
@@ -119,7 +117,7 @@ class Test_appstream_fleet_default_internet_access_disabled:
            disconnect_timeout_in_seconds=900,
            idle_disconnect_timeout_in_seconds=900,
            enable_default_internet_access=False,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )

        appstream_client.fleets.append(fleet1)
@@ -1,9 +1,7 @@
from unittest import mock

from prowler.providers.aws.services.appstream.appstream_service import Fleet

# Mock Test Region
AWS_REGION = "eu-west-1"
from tests.providers.aws.audit_info_utils import AWS_REGION_EU_WEST_1


class Test_appstream_fleet_maximum_session_duration:
@@ -35,7 +33,7 @@ class Test_appstream_fleet_maximum_session_duration:
            disconnect_timeout_in_seconds=900,
            idle_disconnect_timeout_in_seconds=900,
            enable_default_internet_access=True,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )

        appstream_client.fleets.append(fleet1)
@@ -76,7 +74,7 @@ class Test_appstream_fleet_maximum_session_duration:
            disconnect_timeout_in_seconds=900,
            idle_disconnect_timeout_in_seconds=900,
            enable_default_internet_access=True,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )

        appstream_client.fleets.append(fleet1)
@@ -119,7 +117,7 @@ class Test_appstream_fleet_maximum_session_duration:
            disconnect_timeout_in_seconds=900,
            idle_disconnect_timeout_in_seconds=900,
            enable_default_internet_access=True,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )
        fleet2 = Fleet(
            arn="arn",
@@ -129,7 +127,7 @@ class Test_appstream_fleet_maximum_session_duration:
            disconnect_timeout_in_seconds=900,
            idle_disconnect_timeout_in_seconds=900,
            enable_default_internet_access=False,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )

        appstream_client.fleets.append(fleet1)
@@ -1,9 +1,7 @@
from unittest import mock

from prowler.providers.aws.services.appstream.appstream_service import Fleet

# Mock Test Region
AWS_REGION = "eu-west-1"
from tests.providers.aws.audit_info_utils import AWS_REGION_EU_WEST_1


class Test_appstream_fleet_session_disconnect_timeout:
@@ -35,7 +33,7 @@ class Test_appstream_fleet_session_disconnect_timeout:
            disconnect_timeout_in_seconds=1 * 60 * 60,
            idle_disconnect_timeout_in_seconds=900,
            enable_default_internet_access=True,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )

        appstream_client.fleets.append(fleet1)
@@ -75,7 +73,7 @@ class Test_appstream_fleet_session_disconnect_timeout:
            disconnect_timeout_in_seconds=4 * 60,
            idle_disconnect_timeout_in_seconds=900,
            enable_default_internet_access=True,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )

        appstream_client.fleets.append(fleet1)
@@ -118,7 +116,7 @@ class Test_appstream_fleet_session_disconnect_timeout:
            disconnect_timeout_in_seconds=1 * 60 * 60,
            idle_disconnect_timeout_in_seconds=900,
            enable_default_internet_access=True,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )
        fleet2 = Fleet(
            arn="arn",
@@ -128,7 +126,7 @@ class Test_appstream_fleet_session_disconnect_timeout:
            disconnect_timeout_in_seconds=3 * 60,
            idle_disconnect_timeout_in_seconds=900,
            enable_default_internet_access=False,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )

        appstream_client.fleets.append(fleet1)
@@ -1,9 +1,7 @@
from unittest import mock

from prowler.providers.aws.services.appstream.appstream_service import Fleet

# Mock Test Region
AWS_REGION = "eu-west-1"
from tests.providers.aws.audit_info_utils import AWS_REGION_EU_WEST_1


class Test_appstream_fleet_session_idle_disconnect_timeout:
@@ -35,7 +33,7 @@ class Test_appstream_fleet_session_idle_disconnect_timeout:
            # 15 minutes
            idle_disconnect_timeout_in_seconds=15 * 60,
            enable_default_internet_access=True,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )

        appstream_client.fleets.append(fleet1)
@@ -76,7 +74,7 @@ class Test_appstream_fleet_session_idle_disconnect_timeout:
            # 8 minutes
            idle_disconnect_timeout_in_seconds=8 * 60,
            enable_default_internet_access=True,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )

        appstream_client.fleets.append(fleet1)
@@ -119,7 +117,7 @@ class Test_appstream_fleet_session_idle_disconnect_timeout:
            # 5 minutes
            idle_disconnect_timeout_in_seconds=5 * 60,
            enable_default_internet_access=True,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )
        fleet2 = Fleet(
            arn="arn",
@@ -129,7 +127,7 @@ class Test_appstream_fleet_session_idle_disconnect_timeout:
            # 45 minutes
            idle_disconnect_timeout_in_seconds=45 * 60,
            enable_default_internet_access=False,
            region=AWS_REGION,
            region=AWS_REGION_EU_WEST_1,
        )

        appstream_client.fleets.append(fleet1)
@@ -1,17 +1,15 @@
from unittest.mock import patch

import botocore
from moto.core import DEFAULT_ACCOUNT_ID

from prowler.providers.aws.services.appstream.appstream_service import AppStream
from tests.providers.aws.audit_info_utils import (
    AWS_ACCOUNT_NUMBER,
    AWS_REGION_EU_WEST_1,
    AWS_REGION_US_EAST_1,
    set_mocked_aws_audit_info,
)

# Mock Test Region
AWS_REGION = "eu-west-1"

# Mocking Access Analyzer Calls
make_api_call = botocore.client.BaseClient._make_api_call
@@ -28,7 +26,7 @@ def mock_make_api_call(self, operation_name, kwarg):
        return {
            "Fleets": [
                {
                    "Arn": f"arn:aws:appstream:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:fleet/test-prowler3-0",
                    "Arn": f"arn:aws:appstream:{AWS_REGION_EU_WEST_1}:{DEFAULT_ACCOUNT_ID}:fleet/test-prowler3-0",
                    "Name": "test-prowler3-0",
                    "MaxUserDurationInSeconds": 100,
                    "DisconnectTimeoutInSeconds": 900,
@@ -36,7 +34,7 @@ def mock_make_api_call(self, operation_name, kwarg):
                    "EnableDefaultInternetAccess": False,
                },
                {
                    "Arn": f"arn:aws:appstream:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:fleet/test-prowler3-1",
                    "Arn": f"arn:aws:appstream:{AWS_REGION_EU_WEST_1}:{DEFAULT_ACCOUNT_ID}:fleet/test-prowler3-1",
                    "Name": "test-prowler3-1",
                    "MaxUserDurationInSeconds": 57600,
                    "DisconnectTimeoutInSeconds": 900,
@@ -51,10 +49,12 @@ def mock_make_api_call(self, operation_name, kwarg):


# Mock generate_regional_clients()
def mock_generate_regional_clients(service, audit_info):
    regional_client = audit_info.audit_session.client(service, region_name=AWS_REGION)
    regional_client.region = AWS_REGION
    return {AWS_REGION: regional_client}
def mock_generate_regional_clients(service, audit_info, _):
    regional_client = audit_info.audit_session.client(
        service, region_name=AWS_REGION_EU_WEST_1
    )
    regional_client.region = AWS_REGION_EU_WEST_1
    return {AWS_REGION_EU_WEST_1: regional_client}


# Patch every AWS call using Boto3 and generate_regional_clients to have 1 client
@@ -67,7 +67,10 @@ class Test_AppStream_Service:
    # Test AppStream Client
    def test__get_client__(self):
        appstream = AppStream(set_mocked_aws_audit_info([AWS_REGION_US_EAST_1]))
        assert appstream.regional_clients[AWS_REGION].__class__.__name__ == "AppStream"
        assert (
            appstream.regional_clients[AWS_REGION_EU_WEST_1].__class__.__name__
            == "AppStream"
        )

    # Test AppStream Session
    def test__get_session__(self):
@@ -86,25 +89,25 @@ class Test_AppStream_Service:

        assert (
            appstream.fleets[0].arn
            == f"arn:aws:appstream:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:fleet/test-prowler3-0"
            == f"arn:aws:appstream:{AWS_REGION_EU_WEST_1}:{DEFAULT_ACCOUNT_ID}:fleet/test-prowler3-0"
        )
        assert appstream.fleets[0].name == "test-prowler3-0"
        assert appstream.fleets[0].max_user_duration_in_seconds == 100
        assert appstream.fleets[0].disconnect_timeout_in_seconds == 900
        assert appstream.fleets[0].idle_disconnect_timeout_in_seconds == 900
        assert appstream.fleets[0].enable_default_internet_access is False
        assert appstream.fleets[0].region == AWS_REGION
        assert appstream.fleets[0].region == AWS_REGION_EU_WEST_1

        assert (
            appstream.fleets[1].arn
            == f"arn:aws:appstream:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:fleet/test-prowler3-1"
            == f"arn:aws:appstream:{AWS_REGION_EU_WEST_1}:{DEFAULT_ACCOUNT_ID}:fleet/test-prowler3-1"
        )
        assert appstream.fleets[1].name == "test-prowler3-1"
        assert appstream.fleets[1].max_user_duration_in_seconds == 57600
        assert appstream.fleets[1].disconnect_timeout_in_seconds == 900
        assert appstream.fleets[1].idle_disconnect_timeout_in_seconds == 900
        assert appstream.fleets[1].enable_default_internet_access is True
        assert appstream.fleets[1].region == AWS_REGION
        assert appstream.fleets[1].region == AWS_REGION_EU_WEST_1

    def test__list_tags_for_resource__(self):
        # Set partition for the service
@@ -39,7 +39,7 @@ def mock_make_api_call(self, operation_name, kwarg):


# Mock generate_regional_clients()
def mock_generate_regional_clients(service, audit_info):
def mock_generate_regional_clients(service, audit_info, _):
    regional_client = audit_info.audit_session.client(
        service, region_name=AWS_REGION_EU_WEST_1
    )
@@ -86,7 +86,7 @@ class Test_Athena_Service:
        # Athena client
        # This API call is not implemented by Moto
        # athena_client = audit_info.audit_session.client(
        #     "athena", region_name=AWS_REGION
        #     "athena", region_name=AWS_REGION_EU_WEST_1
        # )
        # athena_client.update_work_group(
        #     WorkGroup=default_workgroup_name,
@@ -3,17 +3,17 @@ from unittest import mock
from boto3 import client
from mock import patch
from moto import mock_cloudtrail, mock_s3
from moto.core import DEFAULT_ACCOUNT_ID

from prowler.providers.aws.services.awslambda.awslambda_service import Function
from tests.providers.aws.audit_info_utils import (
    AWS_ACCOUNT_NUMBER,
    AWS_REGION_US_EAST_1,
    set_mocked_aws_audit_info,
)


# Mock generate_regional_clients()
def mock_generate_regional_clients(service, audit_info):
def mock_generate_regional_clients(service, audit_info, _):
    regional_client = audit_info.audit_session.client(
        service, region_name=AWS_REGION_US_EAST_1
    )
@@ -65,7 +65,7 @@ class Test_awslambda_function_invoke_api_operations_cloudtrail_logging_enabled:
        lambda_client = mock.MagicMock
        function_name = "test-lambda"
        function_runtime = "python3.9"
        function_arn = f"arn:aws:lambda:{AWS_REGION_US_EAST_1}:{AWS_ACCOUNT_NUMBER}:function/{function_name}"
        function_arn = f"arn:aws:lambda:{AWS_REGION_US_EAST_1}:{DEFAULT_ACCOUNT_ID}:function/{function_name}"
        lambda_client.functions = {
            function_name: Function(
                name=function_name,
@@ -128,7 +128,7 @@ class Test_awslambda_function_invoke_api_operations_cloudtrail_logging_enabled:
        lambda_client = mock.MagicMock
        function_name = "test-lambda"
        function_runtime = "python3.9"
        function_arn = f"arn:aws:lambda:{AWS_REGION_US_EAST_1}:{AWS_ACCOUNT_NUMBER}:function/{function_name}"
        function_arn = f"arn:aws:lambda:{AWS_REGION_US_EAST_1}:{DEFAULT_ACCOUNT_ID}:function/{function_name}"
        lambda_client.functions = {
            function_name: Function(
                name=function_name,
@@ -203,7 +203,7 @@ class Test_awslambda_function_invoke_api_operations_cloudtrail_logging_enabled:
        lambda_client = mock.MagicMock
        function_name = "test-lambda"
        function_runtime = "python3.9"
        function_arn = f"arn:aws:lambda:{AWS_REGION_US_EAST_1}:{AWS_ACCOUNT_NUMBER}:function/{function_name}"
        function_arn = f"arn:aws:lambda:{AWS_REGION_US_EAST_1}:{DEFAULT_ACCOUNT_ID}:function/{function_name}"
        lambda_client.functions = {
            function_name: Function(
                name=function_name,
@@ -1,62 +1,17 @@
import zipfile
from unittest import mock

from awslambda_service_test import create_zip_file
from moto.core import DEFAULT_ACCOUNT_ID

from prowler.providers.aws.services.awslambda.awslambda_service import (
    Function,
    LambdaCode,
)
from tests.providers.aws.audit_info_utils import (
    AWS_ACCOUNT_NUMBER,
    AWS_REGION_US_EAST_1,
    set_mocked_aws_audit_info,
)
from tests.providers.aws.services.awslambda.awslambda_service_test import (
    create_zip_file,
)

LAMBDA_FUNCTION_NAME = "test-lambda"
LAMBDA_FUNCTION_RUNTIME = "nodejs4.3"
LAMBDA_FUNCTION_ARN = f"arn:aws:lambda:{AWS_REGION_US_EAST_1}:{AWS_ACCOUNT_NUMBER}:function/{LAMBDA_FUNCTION_NAME}"
LAMBDA_FUNCTION_CODE_WITH_SECRETS = """
def lambda_handler(event, context):
    db_password = "test-password"
    print("custom log event")
    return event
"""
LAMBDA_FUNCTION_CODE_WITHOUT_SECRETS = """
def lambda_handler(event, context):
    print("custom log event")
    return event
"""


def create_lambda_function() -> Function:
    return Function(
        name=LAMBDA_FUNCTION_NAME,
        security_groups=[],
        arn=LAMBDA_FUNCTION_ARN,
        region=AWS_REGION_US_EAST_1,
        runtime=LAMBDA_FUNCTION_RUNTIME,
    )


def get_lambda_code_with_secrets(code):
    return LambdaCode(
        location="",
        code_zip=zipfile.ZipFile(create_zip_file(code)),
    )


def mock__get_function_code__with_secrets():
    yield create_lambda_function(), get_lambda_code_with_secrets(
        LAMBDA_FUNCTION_CODE_WITH_SECRETS
    )


def mock__get_function_code__without_secrets():
    yield create_lambda_function(), get_lambda_code_with_secrets(
        LAMBDA_FUNCTION_CODE_WITHOUT_SECRETS
    )


class Test_awslambda_function_no_secrets_in_code:
@@ -83,8 +38,29 @@ class Test_awslambda_function_no_secrets_in_code:

    def test_function_code_with_secrets(self):
        lambda_client = mock.MagicMock
        lambda_client.functions = {LAMBDA_FUNCTION_ARN: create_lambda_function()}
        lambda_client.__get_function_code__ = mock__get_function_code__with_secrets
        function_name = "test-lambda"
        function_runtime = "nodejs4.3"
        function_arn = f"arn:aws:lambda:{AWS_REGION_US_EAST_1}:{DEFAULT_ACCOUNT_ID}:function/{function_name}"
        code_with_secrets = """
def lambda_handler(event, context):
    db_password = "test-password"
    print("custom log event")
    return event
"""
        lambda_client.functions = {
            "function_name": Function(
                name=function_name,
                security_groups=[],
                arn=function_arn,
                region=AWS_REGION_US_EAST_1,
                runtime=function_runtime,
                code=LambdaCode(
                    location="",
                    code_zip=zipfile.ZipFile(create_zip_file(code_with_secrets)),
                ),
            )
        }

        with mock.patch(
            "prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
            set_mocked_aws_audit_info(),
@@ -102,20 +78,38 @@ class Test_awslambda_function_no_secrets_in_code:

        assert len(result) == 1
        assert result[0].region == AWS_REGION_US_EAST_1
        assert result[0].resource_id == LAMBDA_FUNCTION_NAME
        assert result[0].resource_arn == LAMBDA_FUNCTION_ARN
        assert result[0].resource_id == function_name
        assert result[0].resource_arn == function_arn
        assert result[0].status == "FAIL"
        assert (
            result[0].status_extended
            == f"Potential secret found in Lambda function {LAMBDA_FUNCTION_NAME} code -> lambda_function.py: Secret Keyword on line 3."
            == f"Potential secret found in Lambda function {function_name} code -> lambda_function.py: Secret Keyword on line 3."
        )
        assert result[0].resource_tags == []

    def test_function_code_without_secrets(self):
        lambda_client = mock.MagicMock
        lambda_client.functions = {LAMBDA_FUNCTION_ARN: create_lambda_function()}

        lambda_client.__get_function_code__ = mock__get_function_code__without_secrets
        function_name = "test-lambda"
        function_runtime = "nodejs4.3"
        function_arn = f"arn:aws:lambda:{AWS_REGION_US_EAST_1}:{DEFAULT_ACCOUNT_ID}:function/{function_name}"
        code_with_secrets = """
def lambda_handler(event, context):
    print("custom log event")
    return event
"""
        lambda_client.functions = {
            "function_name": Function(
                name=function_name,
                security_groups=[],
                arn=function_arn,
                region=AWS_REGION_US_EAST_1,
                runtime=function_runtime,
                code=LambdaCode(
                    location="",
                    code_zip=zipfile.ZipFile(create_zip_file(code_with_secrets)),
                ),
            )
        }

        with mock.patch(
            "prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -134,11 +128,11 @@ class Test_awslambda_function_no_secrets_in_code:

        assert len(result) == 1
        assert result[0].region == AWS_REGION_US_EAST_1
        assert result[0].resource_id == LAMBDA_FUNCTION_NAME
        assert result[0].resource_arn == LAMBDA_FUNCTION_ARN
        assert result[0].resource_id == function_name
        assert result[0].resource_arn == function_arn
        assert result[0].status == "PASS"
        assert (
            result[0].status_extended
            == f"No secrets found in Lambda function {LAMBDA_FUNCTION_NAME} code."
            == f"No secrets found in Lambda function {function_name} code."
        )
        assert result[0].resource_tags == []