Compare commits

25 Commits

Author SHA1 Message Date
github-actions
3a054b6707 chore(release): 3.15.3 2024-03-22 09:56:06 +00:00
Pepe Fagoaga
7f99bd4b5e fix(securityhub): Remove region from exception match (#3593) 2024-03-22 10:38:08 +01:00
Sergio Garcia
36078e20c8 fix(apigatewayv2): handle empty names (#3592) 2024-03-22 10:38:04 +01:00
Pepe Fagoaga
07db08744c chore(release): update Prowler Version to 3.15.2. (#3591) 2024-03-22 10:37:38 +01:00
Pepe Fagoaga
f4ab90187e fix(json-asff): Remediation.Recommendation.Text < 512 chars (#3589) 2024-03-22 10:37:25 +01:00
Sergio Garcia
530bfa240d chore(gcp): remove unnecessary default project id (#3586) 2024-03-22 10:37:17 +01:00
Pedro Martín
ef66bf8d4a fix(compliance): fix csv output for framework Mitre Attack v3 (#3584) 2024-03-22 10:37:07 +01:00
Pepe Fagoaga
50d4cd5c64 chore(actions): Set branch based on version (#3580) 2024-03-22 10:37:00 +01:00
Nacho Rivera
6029506f25 chore(regions_update): Changes in regions for AWS services. (#3581)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-03-22 10:36:53 +01:00
github-actions
e37986c1b0 chore(release): 3.15.2 2024-03-21 09:21:15 +00:00
Sergio Garcia
85d6d025c5 fix(cloudtrail): use dictionary instead of list (#3579) 2024-03-21 09:56:17 +01:00
Pepe Fagoaga
c32f7ba158 fix(actions): Remove indent (#3577) 2024-03-21 09:56:09 +01:00
github-actions
50d8b23b19 chore(release): 3.15.1 2024-03-20 14:00:02 +00:00
Pepe Fagoaga
2039ec0c9f fix(action): Release on whatever branch (#3576) 2024-03-20 14:52:56 +01:00
Nacho Rivera
6a6ffbab45 chore(regions_update): Changes in regions for AWS services. (#3571)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-03-20 14:52:56 +01:00
Nacho Rivera
6871169730 chore(regions_update): Changes in regions for AWS services. (#3566)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-03-20 14:52:56 +01:00
dependabot[bot]
8e4d8b5a04 build(deps-dev): bump mkdocs-material from 9.5.12 to 9.5.14
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.5.12 to 9.5.14.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.5.12...9.5.14)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-20 14:52:56 +01:00
dependabot[bot]
7ce499ec37 build(deps): bump azure-mgmt-compute from 30.5.0 to 30.6.0 (#3559)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 14:52:56 +01:00
dependabot[bot]
990cb7dae2 build(deps-dev): bump black from 24.2.0 to 24.3.0
Bumps [black](https://github.com/psf/black) from 24.2.0 to 24.3.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/24.2.0...24.3.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-20 14:52:56 +01:00
dependabot[bot]
9fea275472 build(deps): bump trufflesecurity/trufflehog from 3.69.0 to 3.70.2 (#3561)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 14:52:56 +01:00
dependabot[bot]
853cf8be25 build(deps): bump tj-actions/changed-files from 42 to 43 (#3560)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 14:52:56 +01:00
dependabot[bot]
0a3f972239 build(deps-dev): bump coverage from 7.4.3 to 7.4.4 (#3558)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 14:52:56 +01:00
Sergio Garcia
180864dab4 fix(iam): handle KeyError in service_last_accessed (#3555) 2024-03-20 14:52:56 +01:00
Sergio Garcia
d06ccb9af1 chore(compliance): rename AWS FTR compliance (#3550) 2024-03-20 14:52:56 +01:00
Nacho Rivera
00d9a391c4 chore(regions_update): Changes in regions for AWS services. (#3552)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-03-20 14:52:56 +01:00
135 changed files with 834 additions and 12423 deletions

@@ -3,8 +3,6 @@ name: build-lint-push-containers
on:
push:
branches:
# TODO: update it for v3 and v4
# - "v3"
- "master"
paths-ignore:
- ".github/**"
@@ -15,27 +13,15 @@ on:
types: [published]
env:
# AWS Configuration
AWS_REGION_STG: eu-west-1
AWS_REGION_PLATFORM: eu-west-1
AWS_REGION: us-east-1
# Container's configuration
IMAGE_NAME: prowler
DOCKERFILE_PATH: ./Dockerfile
# Tags
LATEST_TAG: latest
STABLE_TAG: stable
# The RELEASE_TAG is set during runtime in releases
RELEASE_TAG: ""
# The PROWLER_VERSION and PROWLER_VERSION_MAJOR are set during runtime in releases
PROWLER_VERSION: ""
PROWLER_VERSION_MAJOR: ""
# TEMPORARY_TAG: temporary
# Python configuration
PYTHON_VERSION: 3.11
TEMPORARY_TAG: temporary
DOCKERFILE_PATH: ./Dockerfile
PYTHON_VERSION: 3.9
jobs:
# Build Prowler OSS container
@@ -44,57 +30,26 @@ jobs:
runs-on: ubuntu-latest
env:
POETRY_VIRTUALENVS_CREATE: "false"
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Python
- name: Setup python (release)
if: github.event_name == 'release'
uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Install Poetry
- name: Install dependencies (release)
if: github.event_name == 'release'
run: |
pipx install poetry
pipx inject poetry poetry-bumpversion
- name: Get Prowler version
run: |
PROWLER_VERSION="$(poetry version -s 2>/dev/null)"
# Store prowler version major just for the release
PROWLER_VERSION_MAJOR="${PROWLER_VERSION%%.*}"
echo "PROWLER_VERSION_MAJOR=${PROWLER_VERSION_MAJOR}" >> "${GITHUB_ENV}"
case ${PROWLER_VERSION_MAJOR} in
3)
# TODO: update it for v3 and v4
# echo "LATEST=v3-latest" >> "${GITHUB_ENV}"
# echo "STABLE_TAG=v3-stable" >> "${GITHUB_ENV}"
echo "LATEST=latest" >> "${GITHUB_ENV}"
echo "STABLE_TAG=stable" >> "${GITHUB_ENV}"
;;
# TODO: uncomment for v3 and v4
# 4)
# echo "LATEST=latest" >> "${GITHUB_ENV}"
# echo "STABLE_TAG=stable" >> "${GITHUB_ENV}"
# ;;
*)
# Fallback if any other version is present
echo "Releasing another Prowler major version, aborting..."
exit 1
;;
esac
- name: Update Prowler version (release)
if: github.event_name == 'release'
run: |
PROWLER_VERSION="${{ github.event.release.tag_name }}"
poetry version "${PROWLER_VERSION}"
echo "PROWLER_VERSION=${PROWLER_VERSION}" >> "${GITHUB_ENV}"
poetry version ${{ github.event.release.tag_name }}
- name: Login to DockerHub
uses: docker/login-action@v3
@@ -135,9 +90,9 @@ jobs:
context: .
push: true
tags: |
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.PROWLER_VERSION }}
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.PROWLER_VERSION }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
file: ${{ env.DOCKERFILE_PATH }}
cache-from: type=gha
@@ -147,26 +102,16 @@ jobs:
needs: container-build-push
runs-on: ubuntu-latest
steps:
- name: Get latest commit info (latest)
- name: Get latest commit info
if: github.event_name == 'push'
run: |
LATEST_COMMIT_HASH=$(echo ${{ github.event.after }} | cut -b -7)
echo "LATEST_COMMIT_HASH=${LATEST_COMMIT_HASH}" >> $GITHUB_ENV
- name: Dispatch event (latest)
if: github.event_name == 'push' && ${{ env.PROWLER_VERSION_MAJOR }} == '3'
- name: Dispatch event for latest
if: github.event_name == 'push'
run: |
curl https://api.github.com/repos/${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}/dispatches \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ secrets.ACCESS_TOKEN }}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
--data '{"event_type":"dispatch","client_payload":{"version":"latest", "tag": "${{ env.LATEST_COMMIT_HASH }}"}}'
- name: Dispatch event (release)
if: github.event_name == 'release' && ${{ env.PROWLER_VERSION_MAJOR }} == '3'
curl https://api.github.com/repos/${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}/dispatches -H "Accept: application/vnd.github+json" -H "Authorization: Bearer ${{ secrets.ACCESS_TOKEN }}" -H "X-GitHub-Api-Version: 2022-11-28" --data '{"event_type":"dispatch","client_payload":{"version":"latest", "tag": "${{ env.LATEST_COMMIT_HASH }}"}}'
- name: Dispatch event for release
if: github.event_name == 'release'
run: |
curl https://api.github.com/repos/${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}/dispatches \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ secrets.ACCESS_TOKEN }}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
--data '{"event_type":"dispatch","client_payload":{"version":"release", "tag":"${{ env.PROWLER_VERSION }}"}}'
curl https://api.github.com/repos/${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}/dispatches -H "Accept: application/vnd.github+json" -H "Authorization: Bearer ${{ secrets.ACCESS_TOKEN }}" -H "X-GitHub-Api-Version: 2022-11-28" --data '{"event_type":"dispatch","client_payload":{"version":"release", "tag":"${{ github.event.release.tag_name }}"}}'
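The release steps in the workflow diff above derive the major version from the release tag with shell parameter expansion and branch on it. A minimal standalone sketch of that parsing (the version value below is illustrative, not taken from a real release):

```shell
# Sketch of the workflow's major-version parsing; the version value
# is illustrative only.
PROWLER_VERSION="3.15.3"
# %%.* drops everything after (and including) the first dot.
PROWLER_VERSION_MAJOR="${PROWLER_VERSION%%.*}"

case "${PROWLER_VERSION_MAJOR}" in
  3)
    echo "LATEST=latest"
    echo "STABLE_TAG=stable"
    ;;
  *)
    echo "Releasing another Prowler major version, aborting..."
    exit 1
    ;;
esac
```

In the actual workflow the `echo` lines append to `"${GITHUB_ENV}"` rather than stdout, which is how the tags become environment variables for later steps.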

@@ -11,7 +11,7 @@ jobs:
with:
fetch-depth: 0
- name: TruffleHog OSS
uses: trufflesecurity/trufflehog@v3.71.2
uses: trufflesecurity/trufflehog@v3.70.2
with:
path: ./
base: ${{ github.event.repository.default_branch }}

@@ -20,7 +20,7 @@ jobs:
- uses: actions/checkout@v4
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@v44
uses: tj-actions/changed-files@v43
with:
files: ./**
files_ignore: |

@@ -8,7 +8,10 @@ env:
RELEASE_TAG: ${{ github.event.release.tag_name }}
PYTHON_VERSION: 3.11
CACHE: "poetry"
# TODO: create a bot user for this kind of tasks, like prowler-bot
# This base branch is used to create a PR with the updated version
# We'd need to handle the base branch for v4 and v3, since they will be
# `master` and `3.0-dev`, respectively
GITHUB_BASE_BRANCH: "master"
GIT_COMMITTER_EMAIL: "sergio@prowler.com"
jobs:
@@ -18,16 +21,24 @@ jobs:
POETRY_VIRTUALENVS_CREATE: "false"
name: Release Prowler to PyPI
steps:
- name: Get Prowler version
- name: Get base branch regarding Prowler version
run: |
PROWLER_VERSION="${{ env.RELEASE_TAG }}"
BASE_BRANCH=""
case ${PROWLER_VERSION%%.*} in
3)
echo "Releasing Prowler v3 with tag ${PROWLER_VERSION}"
# TODO: modify it once v4 is ready
# echo "BASE_BRANCH=3.0-dev" >> "${GITHUB_ENV}"
echo "BASE_BRANCH=master" >> "${GITHUB_ENV}"
;;
4)
echo "Releasing Prowler v4 with tag ${PROWLER_VERSION}"
# TODO: modify it once v4 is ready
# echo "BASE_BRANCH=master" >> "${GITHUB_ENV}"
echo "Not available, aborting..."
exit 1
;;
*)
echo "Releasing another Prowler major version, aborting..."
@@ -53,7 +64,7 @@ jobs:
poetry version ${{ env.RELEASE_TAG }}
- name: Import GPG key
uses: crazy-max/ghaction-import-gpg@v6
uses: crazy-max/ghaction-import-gpg@v4
with:
gpg_private_key: ${{ secrets.GPG_PRIVATE_KEY }}
passphrase: ${{ secrets.GPG_PASSPHRASE }}
@@ -76,6 +87,11 @@ jobs:
# Push the tag
git push -f origin ${{ env.RELEASE_TAG }}
- name: Create new branch for the version update
run: |
git switch -c release-${{ env.RELEASE_TAG }}
git push --set-upstream origin release-${{ env.RELEASE_TAG }}
- name: Build Prowler package
run: |
poetry build
@@ -85,6 +101,23 @@ jobs:
poetry config pypi-token.pypi ${{ secrets.PYPI_API_TOKEN }}
poetry publish
- name: Create PR to update version in the branch
run: |
echo "### Description
This PR updates Prowler Version to ${{ env.RELEASE_TAG }}.
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license." |\
gh pr create \
--base ${{ env.BASE_BRANCH }} \
--head release-${{ env.RELEASE_TAG }} \
--title "chore(release): update Prowler Version to ${{ env.RELEASE_TAG }}." \
--body-file -
env:
GH_TOKEN: ${{ secrets.PROWLER_ACCESS_TOKEN }}
- name: Replicate PyPI package
run: |
rm -rf ./dist && rm -rf ./build && rm -rf prowler.egg-info

@@ -8,7 +8,7 @@
<p align="center">
<b>Learn more at <a href="https://prowler.com">prowler.com</a></b>
</p>
<p align="center">
<a href="https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog"><img width="30" height="30" alt="Prowler community on Slack" src="https://github.com/prowler-cloud/prowler/assets/3985464/3617e470-670c-47c9-9794-ce895ebdb627"></a>
<br>
@@ -49,7 +49,7 @@ It contains hundreds of controls covering CIS, NIST 800, NIST CSF, CISA, RBI, Fe
|---|---|---|---|---|
| AWS | 304 | 61 -> `prowler aws --list-services` | 28 -> `prowler aws --list-compliance` | 6 -> `prowler aws --list-categories` |
| GCP | 75 | 11 -> `prowler gcp --list-services` | 1 -> `prowler gcp --list-compliance` | 2 -> `prowler gcp --list-categories`|
| Azure | 126 | 16 -> `prowler azure --list-services` | 2 -> `prowler azure --list-compliance` | 2 -> `prowler azure --list-categories` |
| Azure | 109 | 16 -> `prowler azure --list-services` | CIS soon | 2 -> `prowler azure --list-categories` |
| Kubernetes | Work In Progress | - | CIS soon | - |
# 📖 Documentation

@@ -1,6 +1,6 @@
## Contribute with documentation
We use `mkdocs` to build this Prowler documentation site so you can easily contribute back with new docs or improving them. To install all necessary dependencies use `poetry install --with docs`.
We use `mkdocs` to build this Prowler documentation site so you can easily contribute back with new docs or improving them.
1. Install `mkdocs` with your favorite package manager.
2. Inside the `prowler` repository folder run `mkdocs serve` and point your browser to `http://localhost:8000` and you will see live changes to your local copy of this documentation site.

@@ -64,17 +64,16 @@ The other three cases does not need additional configuration, `--az-cli-auth` an
To use each one you need to pass the proper flag to the execution. Prowler for Azure handles two types of permission scopes, which are:
- **Microsoft Entra ID permissions**: Used to retrieve metadata from the identity assumed by Prowler (not mandatory to have access to execute the tool).
- **Azure Active Directory permissions**: Used to retrieve metadata from the identity assumed by Prowler and future AAD checks (not mandatory to have access to execute the tool)
- **Subscription scope permissions**: Required to launch the checks against your resources, mandatory to launch the tool.
#### Microsoft Entra ID scope
#### Azure Active Directory scope
Microsoft Entra ID (AAD earlier) permissions required by the tool are the following:
- `Directory.Read.All`
- `Policy.Read.All`
- `UserAuthenticationMethod.Read.All`
The best way to assign it is through the Azure web console:
@@ -87,10 +86,9 @@ The best way to assign it is through the Azure web console:
5. In the left menu bar, select "API permissions"
6. Then click on "+ Add a permission" and select "Microsoft Graph"
7. Once in the "Microsoft Graph" view, select "Application permissions"
8. Finally, search for "Directory", "Policy" and "UserAuthenticationMethod" select the following permissions:
8. Finally, search for "Directory" and "Policy" and select the following permissions:
- `Directory.Read.All`
- `Policy.Read.All`
- `UserAuthenticationMethod.Read.All`
![EntraID Permissions](../img/AAD-permissions.png)

Binary image file changed, not shown (376 KiB before, 358 KiB after).

@@ -17,8 +17,6 @@ Currently, the available frameworks are:
- `cis_1.5_aws`
- `cis_2.0_aws`
- `cis_2.0_gcp`
- `cis_2.0_azure`
- `cis_2.1_azure`
- `cis_3.0_aws`
- `cisa_aws`
- `ens_rd2022_aws`

poetry.lock generated

@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 1.8.2 and should not be changed by hand.
# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand.
[[package]]
name = "about-time"
@@ -153,17 +153,6 @@ files = [
about-time = "4.2.1"
grapheme = "0.6.0"
[[package]]
name = "antlr4-python3-runtime"
version = "4.13.1"
description = "ANTLR 4.13.1 runtime for Python 3"
optional = false
python-versions = "*"
files = [
{file = "antlr4-python3-runtime-4.13.1.tar.gz", hash = "sha256:3cd282f5ea7cfb841537fe01f143350fdb1c0b1ce7981443a2fa8513fddb6d1a"},
{file = "antlr4_python3_runtime-4.13.1-py3-none-any.whl", hash = "sha256:78ec57aad12c97ac039ca27403ad61cb98aaec8a3f9bb8144f889aa0fa28b943"},
]
[[package]]
name = "anyio"
version = "4.3.0"
@@ -1457,20 +1446,20 @@ grpcio-gcp = ["grpcio-gcp (>=0.2.2,<1.0.dev0)"]
[[package]]
name = "google-api-python-client"
version = "2.124.0"
version = "2.122.0"
description = "Google API Client Library for Python"
optional = false
python-versions = ">=3.7"
files = [
{file = "google-api-python-client-2.124.0.tar.gz", hash = "sha256:f6d3258420f7c76b0f5266b5e402e6f804e30351b018a10083f4a46c3ec33773"},
{file = "google_api_python_client-2.124.0-py2.py3-none-any.whl", hash = "sha256:07dc674449ed353704b1169fdee792f74438d024261dad71b6ce7bb9c683d51f"},
{file = "google-api-python-client-2.122.0.tar.gz", hash = "sha256:77447bf2d6b6ea9e686fd66fc2f12ee7a63e3889b7427676429ebf09fcb5dcf9"},
{file = "google_api_python_client-2.122.0-py2.py3-none-any.whl", hash = "sha256:a5953e60394b77b98bcc7ff7c4971ed784b3b693e9a569c176eaccb1549330f2"},
]
[package.dependencies]
google-api-core = ">=1.31.5,<2.0.dev0 || >2.3.0,<3.0.0.dev0"
google-auth = ">=1.32.0,<2.24.0 || >2.24.0,<2.25.0 || >2.25.0,<3.0.0.dev0"
google-auth-httplib2 = ">=0.2.0,<1.0.0"
httplib2 = ">=0.19.0,<1.dev0"
google-auth = ">=1.19.0,<3.0.0.dev0"
google-auth-httplib2 = ">=0.1.0"
httplib2 = ">=0.15.0,<1.dev0"
uritemplate = ">=3.0.1,<5"
[[package]]
@@ -1815,20 +1804,6 @@ files = [
[package.dependencies]
jsonpointer = ">=1.9"
[[package]]
name = "jsonpath-ng"
version = "1.6.1"
description = "A final implementation of JSONPath for Python that aims to be standard compliant, including arithmetic and binary comparison operators and providing clear AST for metaprogramming."
optional = false
python-versions = "*"
files = [
{file = "jsonpath-ng-1.6.1.tar.gz", hash = "sha256:086c37ba4917304850bd837aeab806670224d3f038fe2833ff593a672ef0a5fa"},
{file = "jsonpath_ng-1.6.1-py3-none-any.whl", hash = "sha256:8f22cd8273d7772eea9aaa84d922e0841aa36fdb8a2c6b7f6c3791a16a9bc0be"},
]
[package.dependencies]
ply = "*"
[[package]]
name = "jsonpickle"
version = "3.0.3"
@@ -2262,13 +2237,13 @@ pytz = "*"
[[package]]
name = "mkdocs-material"
version = "9.5.17"
version = "9.5.14"
description = "Documentation that simply works"
optional = false
python-versions = ">=3.8"
files = [
{file = "mkdocs_material-9.5.17-py3-none-any.whl", hash = "sha256:14a2a60119a785e70e765dd033e6211367aca9fc70230e577c1cf6a326949571"},
{file = "mkdocs_material-9.5.17.tar.gz", hash = "sha256:06ae1275a72db1989cf6209de9e9ecdfbcfdbc24c58353877b2bb927dbe413e4"},
{file = "mkdocs_material-9.5.14-py3-none-any.whl", hash = "sha256:a45244ac221fda46ecf8337f00ec0e5cb5348ab9ffb203ca2a0c313b0d4dbc27"},
{file = "mkdocs_material-9.5.14.tar.gz", hash = "sha256:2a1f8e67cda2587ab93ecea9ba42d0ca61d1d7b5fad8cf690eeaeb39dcd4b9af"},
]
[package.dependencies]
@@ -2318,17 +2293,16 @@ test = ["pytest", "pytest-cov"]
[[package]]
name = "moto"
version = "5.0.4"
version = "5.0.3"
description = ""
optional = false
python-versions = ">=3.8"
files = [
{file = "moto-5.0.4-py2.py3-none-any.whl", hash = "sha256:4054360b882b6e7bab25d52d057e196b978b8d15f1921333f534c4d8f6510bbb"},
{file = "moto-5.0.4.tar.gz", hash = "sha256:8d19125d40c919cb40df62f4576904c2647c4e9a0e1ebc42491dd7787d09e107"},
{file = "moto-5.0.3-py2.py3-none-any.whl", hash = "sha256:261d312d1d69c2afccb450a0566666d7b75d76ed6a7d00aac278a9633b073ff0"},
{file = "moto-5.0.3.tar.gz", hash = "sha256:070ac2edf89ad7aee28534481ce68e2f344c8a6a8fefec5427eea0d599bfdbdb"},
]
[package.dependencies]
antlr4-python3-runtime = {version = "*", optional = true, markers = "extra == \"all\""}
aws-xray-sdk = {version = ">=0.93,<0.96 || >0.96", optional = true, markers = "extra == \"all\""}
boto3 = ">=1.9.201"
botocore = ">=1.14.0"
@@ -2339,10 +2313,9 @@ graphql-core = {version = "*", optional = true, markers = "extra == \"all\""}
Jinja2 = ">=2.10.1"
joserfc = {version = ">=0.9.0", optional = true, markers = "extra == \"all\""}
jsondiff = {version = ">=1.1.2", optional = true, markers = "extra == \"all\""}
jsonpath-ng = {version = "*", optional = true, markers = "extra == \"all\""}
multipart = {version = "*", optional = true, markers = "extra == \"all\""}
openapi-spec-validator = {version = ">=0.5.0", optional = true, markers = "extra == \"all\""}
py-partiql-parser = {version = "0.5.2", optional = true, markers = "extra == \"all\""}
py-partiql-parser = {version = "0.5.1", optional = true, markers = "extra == \"all\""}
pyparsing = {version = ">=3.0.7", optional = true, markers = "extra == \"all\""}
python-dateutil = ">=2.1,<3.0.0"
PyYAML = {version = ">=5.1", optional = true, markers = "extra == \"all\""}
@@ -2353,25 +2326,24 @@ werkzeug = ">=0.5,<2.2.0 || >2.2.0,<2.2.1 || >2.2.1"
xmltodict = "*"
[package.extras]
all = ["PyYAML (>=5.1)", "antlr4-python3-runtime", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "graphql-core", "joserfc (>=0.9.0)", "jsondiff (>=1.1.2)", "jsonpath-ng", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.2)", "pyparsing (>=3.0.7)", "setuptools"]
all = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "graphql-core", "joserfc (>=0.9.0)", "jsondiff (>=1.1.2)", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.1)", "pyparsing (>=3.0.7)", "setuptools"]
apigateway = ["PyYAML (>=5.1)", "joserfc (>=0.9.0)", "openapi-spec-validator (>=0.5.0)"]
apigatewayv2 = ["PyYAML (>=5.1)", "openapi-spec-validator (>=0.5.0)"]
appsync = ["graphql-core"]
awslambda = ["docker (>=3.0.0)"]
batch = ["docker (>=3.0.0)"]
cloudformation = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "graphql-core", "joserfc (>=0.9.0)", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.2)", "pyparsing (>=3.0.7)", "setuptools"]
cloudformation = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "graphql-core", "joserfc (>=0.9.0)", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.1)", "pyparsing (>=3.0.7)", "setuptools"]
cognitoidp = ["joserfc (>=0.9.0)"]
dynamodb = ["docker (>=3.0.0)", "py-partiql-parser (==0.5.2)"]
dynamodbstreams = ["docker (>=3.0.0)", "py-partiql-parser (==0.5.2)"]
dynamodb = ["docker (>=3.0.0)", "py-partiql-parser (==0.5.1)"]
dynamodbstreams = ["docker (>=3.0.0)", "py-partiql-parser (==0.5.1)"]
glue = ["pyparsing (>=3.0.7)"]
iotdata = ["jsondiff (>=1.1.2)"]
proxy = ["PyYAML (>=5.1)", "antlr4-python3-runtime", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=2.5.1)", "graphql-core", "joserfc (>=0.9.0)", "jsondiff (>=1.1.2)", "jsonpath-ng", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.2)", "pyparsing (>=3.0.7)", "setuptools"]
resourcegroupstaggingapi = ["PyYAML (>=5.1)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "graphql-core", "joserfc (>=0.9.0)", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.2)", "pyparsing (>=3.0.7)"]
s3 = ["PyYAML (>=5.1)", "py-partiql-parser (==0.5.2)"]
s3crc32c = ["PyYAML (>=5.1)", "crc32c", "py-partiql-parser (==0.5.2)"]
server = ["PyYAML (>=5.1)", "antlr4-python3-runtime", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "flask (!=2.2.0,!=2.2.1)", "flask-cors", "graphql-core", "joserfc (>=0.9.0)", "jsondiff (>=1.1.2)", "jsonpath-ng", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.2)", "pyparsing (>=3.0.7)", "setuptools"]
proxy = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=2.5.1)", "graphql-core", "joserfc (>=0.9.0)", "jsondiff (>=1.1.2)", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.1)", "pyparsing (>=3.0.7)", "setuptools"]
resourcegroupstaggingapi = ["PyYAML (>=5.1)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "graphql-core", "joserfc (>=0.9.0)", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.1)", "pyparsing (>=3.0.7)"]
s3 = ["PyYAML (>=5.1)", "py-partiql-parser (==0.5.1)"]
s3crc32c = ["PyYAML (>=5.1)", "crc32c", "py-partiql-parser (==0.5.1)"]
server = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "flask (!=2.2.0,!=2.2.1)", "flask-cors", "graphql-core", "joserfc (>=0.9.0)", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.1)", "pyparsing (>=3.0.7)", "setuptools"]
ssm = ["PyYAML (>=5.1)"]
stepfunctions = ["antlr4-python3-runtime", "jsonpath-ng"]
xray = ["aws-xray-sdk (>=0.93,!=0.96)", "setuptools"]
[[package]]
@@ -2451,13 +2423,13 @@ dev = ["bumpver", "isort", "mypy", "pylint", "pytest", "yapf"]
[[package]]
name = "msgraph-sdk"
version = "1.0.0"
version = "1.1.0"
description = "The Microsoft Graph Python SDK"
optional = false
python-versions = ">=3.8"
files = [
{file = "msgraph-sdk-1.0.0.tar.gz", hash = "sha256:78c301d2aac7ac19bf3e2ace4356c6a077f50f3fd6e6da222fbe4dcbac0c5540"},
{file = "msgraph_sdk-1.0.0-py3-none-any.whl", hash = "sha256:977cf806490077743530d0aa239c517be0035025299b62d50395b7fbf7e534ef"},
{file = "msgraph-sdk-1.1.0.tar.gz", hash = "sha256:c81bbd7c82b1237a25baee46d2a40ea3eabd15e06c5e50621ad4a9045012da5d"},
{file = "msgraph_sdk-1.1.0-py3-none-any.whl", hash = "sha256:661b6ddcc76391434af505bef0c3729405418402b3a7e673f1d094cb4fd5f80a"},
]
[package.dependencies]
@@ -2467,7 +2439,7 @@ microsoft-kiota-authentication-azure = ">=1.0.0,<2.0.0"
microsoft-kiota-http = ">=1.0.0,<2.0.0"
microsoft-kiota-serialization-json = ">=1.0.0,<2.0.0"
microsoft-kiota-serialization-text = ">=1.0.0,<2.0.0"
msgraph-core = ">=1.0.0a2"
msgraph-core = ">=1.0.0"
[package.extras]
dev = ["bumpver", "isort", "mypy", "pylint", "pytest", "yapf"]
@@ -2922,17 +2894,6 @@ files = [
dev = ["pre-commit", "tox"]
testing = ["pytest", "pytest-benchmark"]
[[package]]
name = "ply"
version = "3.11"
description = "Python Lex & Yacc"
optional = false
python-versions = "*"
files = [
{file = "ply-3.11-py2.py3-none-any.whl", hash = "sha256:096f9b8350b65ebd2fd1346b12452efe5b9607f7482813ffca50c22722a807ce"},
{file = "ply-3.11.tar.gz", hash = "sha256:00c7c1aaa88358b9c765b6d3000c6eec0ba42abca5351b095321aef446081da3"},
]
[[package]]
name = "portalocker"
version = "2.8.2"
@@ -2974,13 +2935,13 @@ files = [
[[package]]
name = "py-partiql-parser"
version = "0.5.2"
version = "0.5.1"
description = "Pure Python PartiQL Parser"
optional = false
python-versions = "*"
files = [
{file = "py-partiql-parser-0.5.2.tar.gz", hash = "sha256:bdec65fe17d6093c05e9bc1742a99a041ef810b50a71cc0d9e74a88218d938cf"},
{file = "py_partiql_parser-0.5.2-py3-none-any.whl", hash = "sha256:9c79b59bbe0cb50daa8090020f2e7f3e5a0e33f7846b48924f19a8f7704f4877"},
{file = "py-partiql-parser-0.5.1.tar.gz", hash = "sha256:aeac8f46529d8651bbae88a1a6c14dc3aa38ebc4bc6bd1eb975044c0564246c6"},
{file = "py_partiql_parser-0.5.1-py3-none-any.whl", hash = "sha256:53053e70987dea2983e1990ad85f87a7d8cec13dd4a4b065a740bcfd661f5a6b"},
]
[package.extras]
@@ -3217,13 +3178,13 @@ testing = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "pygm
[[package]]
name = "pytest-cov"
version = "5.0.0"
version = "4.1.0"
description = "Pytest plugin for measuring coverage."
optional = false
python-versions = ">=3.8"
python-versions = ">=3.7"
files = [
{file = "pytest-cov-5.0.0.tar.gz", hash = "sha256:5837b58e9f6ebd335b0f8060eecce69b662415b16dc503883a02f45dfeb14857"},
{file = "pytest_cov-5.0.0-py3-none-any.whl", hash = "sha256:4f0764a1219df53214206bf1feea4633c3b558a2925c8b59f144f682861ce652"},
{file = "pytest-cov-4.1.0.tar.gz", hash = "sha256:3904b13dfbfec47f003b8e77fd5b589cd11904a21ddf1ab38a64f204d6a10ef6"},
{file = "pytest_cov-4.1.0-py3-none-any.whl", hash = "sha256:6ba70b9e97e69fcc3fb45bfeab2d0a138fb65c4d0d6a41ef33983ad114be8c3a"},
]
[package.dependencies]
@@ -3231,7 +3192,7 @@ coverage = {version = ">=5.2.1", extras = ["toml"]}
pytest = ">=4.6"
[package.extras]
testing = ["fields", "hunter", "process-tests", "pytest-xdist", "virtualenv"]
testing = ["fields", "hunter", "process-tests", "pytest-xdist", "six", "virtualenv"]
[[package]]
name = "pytest-env"
@@ -3346,6 +3307,7 @@ files = [
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"},
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"},
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"},
{file = "PyYAML-6.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:326c013efe8048858a6d312ddd31d56e468118ad4cdeda36c719bf5bb6192290"},
{file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"},
{file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"},
{file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"},
@@ -3353,8 +3315,16 @@ files = [
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"},
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"},
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"},
{file = "PyYAML-6.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e7d73685e87afe9f3b36c799222440d6cf362062f78be1013661b00c5c6f678b"},
{file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"},
{file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"},
{file = "PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28"},
{file = "PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9"},
{file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a08c6f0fe150303c1c6b71ebcd7213c2858041a7e01975da3a99aed1e7a378ef"},
{file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0"},
{file = "PyYAML-6.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4"},
{file = "PyYAML-6.0.1-cp312-cp312-win32.whl", hash = "sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54"},
{file = "PyYAML-6.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:0d3304d8c0adc42be59c5f8a4d9e3d7379e6955ad754aa9d6ab7a398b59dd1df"},
{file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"},
{file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"},
{file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"},
@@ -3371,6 +3341,7 @@ files = [
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"},
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"},
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"},
{file = "PyYAML-6.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:49a183be227561de579b4a36efbb21b3eab9651dd81b1858589f796549873dd6"},
{file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"},
{file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"},
{file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"},
@@ -3378,6 +3349,7 @@ files = [
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"},
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"},
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"},
{file = "PyYAML-6.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04ac92ad1925b2cff1db0cfebffb6ffc43457495c9b3c39d3fcae417d7125dc5"},
{file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"},
{file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"},
{file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"},
@@ -3767,24 +3739,24 @@ python-versions = ">=3.6"
files = [
{file = "ruamel.yaml.clib-0.2.8-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:b42169467c42b692c19cf539c38d4602069d8c1505e97b86387fcf7afb766e1d"},
{file = "ruamel.yaml.clib-0.2.8-cp310-cp310-macosx_13_0_arm64.whl", hash = "sha256:07238db9cbdf8fc1e9de2489a4f68474e70dffcb32232db7c08fa61ca0c7c462"},
{file = "ruamel.yaml.clib-0.2.8-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:d92f81886165cb14d7b067ef37e142256f1c6a90a65cd156b063a43da1708cfd"},
{file = "ruamel.yaml.clib-0.2.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:fff3573c2db359f091e1589c3d7c5fc2f86f5bdb6f24252c2d8e539d4e45f412"},
{file = "ruamel.yaml.clib-0.2.8-cp310-cp310-manylinux_2_24_aarch64.whl", hash = "sha256:aa2267c6a303eb483de8d02db2871afb5c5fc15618d894300b88958f729ad74f"},
{file = "ruamel.yaml.clib-0.2.8-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:840f0c7f194986a63d2c2465ca63af8ccbbc90ab1c6001b1978f05119b5e7334"},
{file = "ruamel.yaml.clib-0.2.8-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:024cfe1fc7c7f4e1aff4a81e718109e13409767e4f871443cbff3dba3578203d"},
{file = "ruamel.yaml.clib-0.2.8-cp310-cp310-win32.whl", hash = "sha256:c69212f63169ec1cfc9bb44723bf2917cbbd8f6191a00ef3410f5a7fe300722d"},
{file = "ruamel.yaml.clib-0.2.8-cp310-cp310-win_amd64.whl", hash = "sha256:cabddb8d8ead485e255fe80429f833172b4cadf99274db39abc080e068cbcc31"},
{file = "ruamel.yaml.clib-0.2.8-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:bef08cd86169d9eafb3ccb0a39edb11d8e25f3dae2b28f5c52fd997521133069"},
{file = "ruamel.yaml.clib-0.2.8-cp311-cp311-macosx_13_0_arm64.whl", hash = "sha256:b16420e621d26fdfa949a8b4b47ade8810c56002f5389970db4ddda51dbff248"},
{file = "ruamel.yaml.clib-0.2.8-cp311-cp311-manylinux2014_aarch64.whl", hash = "sha256:b5edda50e5e9e15e54a6a8a0070302b00c518a9d32accc2346ad6c984aacd279"},
{file = "ruamel.yaml.clib-0.2.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:25c515e350e5b739842fc3228d662413ef28f295791af5e5110b543cf0b57d9b"},
{file = "ruamel.yaml.clib-0.2.8-cp311-cp311-manylinux_2_24_aarch64.whl", hash = "sha256:1707814f0d9791df063f8c19bb51b0d1278b8e9a2353abbb676c2f685dee6afe"},
{file = "ruamel.yaml.clib-0.2.8-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:46d378daaac94f454b3a0e3d8d78cafd78a026b1d71443f4966c696b48a6d899"},
{file = "ruamel.yaml.clib-0.2.8-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:09b055c05697b38ecacb7ac50bdab2240bfca1a0c4872b0fd309bb07dc9aa3a9"},
{file = "ruamel.yaml.clib-0.2.8-cp311-cp311-win32.whl", hash = "sha256:53a300ed9cea38cf5a2a9b069058137c2ca1ce658a874b79baceb8f892f915a7"},
{file = "ruamel.yaml.clib-0.2.8-cp311-cp311-win_amd64.whl", hash = "sha256:c2a72e9109ea74e511e29032f3b670835f8a59bbdc9ce692c5b4ed91ccf1eedb"},
{file = "ruamel.yaml.clib-0.2.8-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:ebc06178e8821efc9692ea7544aa5644217358490145629914d8020042c24aa1"},
{file = "ruamel.yaml.clib-0.2.8-cp312-cp312-macosx_13_0_arm64.whl", hash = "sha256:edaef1c1200c4b4cb914583150dcaa3bc30e592e907c01117c08b13a07255ec2"},
{file = "ruamel.yaml.clib-0.2.8-cp312-cp312-manylinux2014_aarch64.whl", hash = "sha256:7048c338b6c86627afb27faecf418768acb6331fc24cfa56c93e8c9780f815fa"},
{file = "ruamel.yaml.clib-0.2.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d176b57452ab5b7028ac47e7b3cf644bcfdc8cacfecf7e71759f7f51a59e5c92"},
{file = "ruamel.yaml.clib-0.2.8-cp312-cp312-manylinux_2_24_aarch64.whl", hash = "sha256:1dc67314e7e1086c9fdf2680b7b6c2be1c0d8e3a8279f2e993ca2a7545fecf62"},
{file = "ruamel.yaml.clib-0.2.8-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:3213ece08ea033eb159ac52ae052a4899b56ecc124bb80020d9bbceeb50258e9"},
{file = "ruamel.yaml.clib-0.2.8-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:aab7fd643f71d7946f2ee58cc88c9b7bfc97debd71dcc93e03e2d174628e7e2d"},
{file = "ruamel.yaml.clib-0.2.8-cp312-cp312-win32.whl", hash = "sha256:5c365d91c88390c8d0a8545df0b5857172824b1c604e867161e6b3d59a827eaa"},
@@ -3792,7 +3764,7 @@ files = [
{file = "ruamel.yaml.clib-0.2.8-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:a5aa27bad2bb83670b71683aae140a1f52b0857a2deff56ad3f6c13a017a26ed"},
{file = "ruamel.yaml.clib-0.2.8-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c58ecd827313af6864893e7af0a3bb85fd529f862b6adbefe14643947cfe2942"},
{file = "ruamel.yaml.clib-0.2.8-cp37-cp37m-macosx_12_0_arm64.whl", hash = "sha256:f481f16baec5290e45aebdc2a5168ebc6d35189ae6fea7a58787613a25f6e875"},
{file = "ruamel.yaml.clib-0.2.8-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:3fcc54cb0c8b811ff66082de1680b4b14cf8a81dce0d4fbf665c2265a81e07a1"},
{file = "ruamel.yaml.clib-0.2.8-cp37-cp37m-manylinux_2_24_aarch64.whl", hash = "sha256:77159f5d5b5c14f7c34073862a6b7d34944075d9f93e681638f6d753606c6ce6"},
{file = "ruamel.yaml.clib-0.2.8-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:7f67a1ee819dc4562d444bbafb135832b0b909f81cc90f7aa00260968c9ca1b3"},
{file = "ruamel.yaml.clib-0.2.8-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:4ecbf9c3e19f9562c7fdd462e8d18dd902a47ca046a2e64dba80699f0b6c09b7"},
{file = "ruamel.yaml.clib-0.2.8-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:87ea5ff66d8064301a154b3933ae406b0863402a799b16e4a1d24d9fbbcbe0d3"},
@@ -3800,7 +3772,7 @@ files = [
{file = "ruamel.yaml.clib-0.2.8-cp37-cp37m-win_amd64.whl", hash = "sha256:3f215c5daf6a9d7bbed4a0a4f760f3113b10e82ff4c5c44bec20a68c8014f675"},
{file = "ruamel.yaml.clib-0.2.8-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1b617618914cb00bf5c34d4357c37aa15183fa229b24767259657746c9077615"},
{file = "ruamel.yaml.clib-0.2.8-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:a6a9ffd280b71ad062eae53ac1659ad86a17f59a0fdc7699fd9be40525153337"},
{file = "ruamel.yaml.clib-0.2.8-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:665f58bfd29b167039f714c6998178d27ccd83984084c286110ef26b230f259f"},
{file = "ruamel.yaml.clib-0.2.8-cp38-cp38-manylinux_2_24_aarch64.whl", hash = "sha256:305889baa4043a09e5b76f8e2a51d4ffba44259f6b4c72dec8ca56207d9c6fe1"},
{file = "ruamel.yaml.clib-0.2.8-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:700e4ebb569e59e16a976857c8798aee258dceac7c7d6b50cab63e080058df91"},
{file = "ruamel.yaml.clib-0.2.8-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:e2b4c44b60eadec492926a7270abb100ef9f72798e18743939bdbf037aab8c28"},
{file = "ruamel.yaml.clib-0.2.8-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:e79e5db08739731b0ce4850bed599235d601701d5694c36570a99a0c5ca41a9d"},
@@ -3808,7 +3780,7 @@ files = [
{file = "ruamel.yaml.clib-0.2.8-cp38-cp38-win_amd64.whl", hash = "sha256:56f4252222c067b4ce51ae12cbac231bce32aee1d33fbfc9d17e5b8d6966c312"},
{file = "ruamel.yaml.clib-0.2.8-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:03d1162b6d1df1caa3a4bd27aa51ce17c9afc2046c31b0ad60a0a96ec22f8001"},
{file = "ruamel.yaml.clib-0.2.8-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:bba64af9fa9cebe325a62fa398760f5c7206b215201b0ec825005f1b18b9bccf"},
{file = "ruamel.yaml.clib-0.2.8-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:9eb5dee2772b0f704ca2e45b1713e4e5198c18f515b52743576d196348f374d3"},
{file = "ruamel.yaml.clib-0.2.8-cp39-cp39-manylinux_2_24_aarch64.whl", hash = "sha256:a1a45e0bb052edf6a1d3a93baef85319733a888363938e1fc9924cb00c8df24c"},
{file = "ruamel.yaml.clib-0.2.8-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:da09ad1c359a728e112d60116f626cc9f29730ff3e0e7db72b9a2dbc2e4beed5"},
{file = "ruamel.yaml.clib-0.2.8-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:184565012b60405d93838167f425713180b949e9d8dd0bbc7b49f074407c5a8b"},
{file = "ruamel.yaml.clib-0.2.8-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a75879bacf2c987c003368cf14bed0ffe99e8e85acfa6c0bfffc21a090f16880"},
@@ -3836,13 +3808,13 @@ crt = ["botocore[crt] (>=1.20.29,<2.0a.0)"]
[[package]]
name = "safety"
version = "3.1.0"
version = "3.0.1"
description = "Checks installed dependencies for known vulnerabilities and licenses."
optional = false
python-versions = ">=3.7"
files = [
{file = "safety-3.1.0-py3-none-any.whl", hash = "sha256:f2ba2d36f15ac1e24751547a73b854509a7d6db31efd30b57f64ffdf9d021934"},
{file = "safety-3.1.0.tar.gz", hash = "sha256:71f47b82ece153ec2f240e277f7cbfa70d5da2e0d143162c67f63b2f7459a1aa"},
{file = "safety-3.0.1-py3-none-any.whl", hash = "sha256:1ed058bc4bef132b974e58d7fcad020fb897cd255328016f8a5a194b94ca91d2"},
{file = "safety-3.0.1.tar.gz", hash = "sha256:1f2000f03652f3a0bfc67f8fd1e98bc5723ccb76e15cb1bdd68545c3d803df01"},
]
[package.dependencies]
@@ -3852,11 +3824,11 @@ dparse = ">=0.6.4b0"
jinja2 = ">=3.1.0"
marshmallow = ">=3.15.0"
packaging = ">=21.0"
pydantic = ">=1.10.12"
pydantic = ">=1.10.12,<2.0"
requests = "*"
rich = "*"
"ruamel.yaml" = ">=0.17.21"
safety-schemas = ">=0.0.2"
safety-schemas = ">=0.0.1"
setuptools = ">=65.5.1"
typer = "*"
typing-extensions = ">=4.7.1"
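The tightened pin `pydantic = ">=1.10.12,<2.0"` keeps safety off the pydantic 2.x API. A naive sketch of what such a range specifier accepts (numeric-only versions, no pre-release handling; real resolvers follow PEP 440):

```python
def parse(version: str) -> tuple[int, ...]:
    # Split "1.10.12" into (1, 10, 12) for tuple comparison
    return tuple(int(part) for part in version.split("."))

def satisfies(version: str, lower: str, upper: str) -> bool:
    # Accept iff lower <= version < upper, mirroring ">=lower,<upper"
    return parse(lower) <= parse(version) < parse(upper)
```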
@@ -4454,4 +4426,4 @@ testing = ["big-O", "jaraco.functools", "jaraco.itertools", "more-itertools", "p
[metadata]
lock-version = "2.0"
python-versions = ">=3.9,<3.13"
content-hash = "3dfd114a67913676b0bd4e1d0c3aeb918443b1e25455fe2a3b35bba8b212b0b3"
content-hash = "1e1b7b3f2d7388580096bab9889f44901a3bde26c71bddd5e9f4b275a0e7e116"

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@@ -87,7 +87,6 @@ class CIS_Requirement_Attribute(BaseModel):
RemediationProcedure: str
AuditProcedure: str
AdditionalInformation: str
DefaultValue: Optional[str]
References: str


@@ -11,7 +11,6 @@ from prowler.lib.outputs.models import (
Check_Output_CSV_AWS_CIS,
Check_Output_CSV_AWS_ISO27001_2013,
Check_Output_CSV_AWS_Well_Architected,
Check_Output_CSV_AZURE_CIS,
Check_Output_CSV_ENS_RD2022,
Check_Output_CSV_GCP_CIS,
Check_Output_CSV_Generic_Compliance,
@@ -36,7 +35,6 @@ def add_manual_controls(output_options, audit_info, file_descriptors):
manual_finding.region = ""
manual_finding.location = ""
manual_finding.project_id = ""
manual_finding.subscription = ""
fill_compliance(
output_options, manual_finding, audit_info, file_descriptors
)
@@ -163,36 +161,7 @@ def fill_compliance(output_options, finding, audit_info, file_descriptors):
csv_header = generate_csv_fields(
Check_Output_CSV_GCP_CIS
)
elif compliance.Provider == "Azure":
compliance_row = Check_Output_CSV_AZURE_CIS(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
Subscription=finding.subscription,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_Profile=attribute.Profile,
Requirements_Attributes_AssessmentStatus=attribute.AssessmentStatus,
Requirements_Attributes_Description=attribute.Description,
Requirements_Attributes_RationaleStatement=attribute.RationaleStatement,
Requirements_Attributes_ImpactStatement=attribute.ImpactStatement,
Requirements_Attributes_RemediationProcedure=attribute.RemediationProcedure,
Requirements_Attributes_AuditProcedure=attribute.AuditProcedure,
Requirements_Attributes_AdditionalInformation=attribute.AdditionalInformation,
Requirements_Attributes_DefaultValue=attribute.DefaultValue,
Requirements_Attributes_References=attribute.References,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
ResourceName=finding.resource_name,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(
Check_Output_CSV_AZURE_CIS
)
elif (
"AWS-Well-Architected-Framework" in compliance.Framework
and compliance.Provider == "AWS"


@@ -15,7 +15,6 @@ from prowler.lib.outputs.models import (
Check_Output_CSV_AWS_CIS,
Check_Output_CSV_AWS_ISO27001_2013,
Check_Output_CSV_AWS_Well_Architected,
Check_Output_CSV_AZURE_CIS,
Check_Output_CSV_ENS_RD2022,
Check_Output_CSV_GCP_CIS,
Check_Output_CSV_Generic_Compliance,
@@ -24,7 +23,6 @@ from prowler.lib.outputs.models import (
)
from prowler.lib.utils.utils import file_exists, open_file
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
from prowler.providers.common.outputs import get_provider_output_model
from prowler.providers.gcp.lib.audit_info.models import GCP_Audit_Info
@@ -115,16 +113,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
filename, output_mode, audit_info, Check_Output_CSV_GCP_CIS
)
file_descriptors.update({output_mode: file_descriptor})
elif isinstance(audit_info, Azure_Audit_Info):
filename = f"{output_directory}/{output_filename}_{output_mode}{csv_file_suffix}"
if "cis_" in output_mode:
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_CSV_AZURE_CIS,
)
file_descriptors.update({output_mode: file_descriptor})
elif isinstance(audit_info, AWS_Audit_Info):
if output_mode == "json-asff":
filename = f"{output_directory}/{output_filename}{json_asff_file_suffix}"


@@ -599,35 +599,6 @@ class Check_Output_CSV_GCP_CIS(BaseModel):
CheckId: str
class Check_Output_CSV_AZURE_CIS(BaseModel):
"""
Check_Output_CSV_CIS generates a finding's output in CSV CIS format.
"""
Provider: str
Description: str
Subscription: str
AssessmentDate: str
Requirements_Id: str
Requirements_Description: str
Requirements_Attributes_Section: str
Requirements_Attributes_Profile: str
Requirements_Attributes_AssessmentStatus: str
Requirements_Attributes_Description: str
Requirements_Attributes_RationaleStatement: str
Requirements_Attributes_ImpactStatement: str
Requirements_Attributes_RemediationProcedure: str
Requirements_Attributes_AuditProcedure: str
Requirements_Attributes_AdditionalInformation: str
Requirements_Attributes_DefaultValue: str
Requirements_Attributes_References: str
Status: str
StatusExtended: str
ResourceId: str
ResourceName: str
CheckId: str
class Check_Output_CSV_Generic_Compliance(BaseModel):
"""
Check_Output_CSV_Generic_Compliance generates a finding's output in CSV Generic Compliance format.
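The deleted model is consumed by `generate_csv_fields`, which appears to derive the CSV header from the model's declared fields. A stdlib sketch of that idea, using a dataclass instead of pydantic (field names abbreviated):

```python
from dataclasses import dataclass, fields

@dataclass
class ComplianceRow:
    Provider: str
    Description: str
    Subscription: str
    Status: str
    CheckId: str

def generate_csv_fields(model) -> list[str]:
    # Header columns follow the declaration order of the model's fields
    return [f.name for f in fields(model)]
```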


@@ -234,7 +234,6 @@
"ap-east-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
@@ -261,7 +260,6 @@
"aws": [
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
@@ -288,7 +286,6 @@
"aws": [
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
@@ -1255,7 +1252,6 @@
"ap-northeast-1",
"ap-southeast-1",
"eu-central-1",
"eu-west-3",
"us-east-1",
"us-west-2"
],
@@ -1419,14 +1415,8 @@
"chime-sdk-media-pipelines": {
"regions": {
"aws": [
"ap-northeast-1",
"ap-northeast-2",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ca-central-1",
"eu-central-1",
"eu-west-2",
"us-east-1",
"us-west-2"
],
@@ -2272,7 +2262,6 @@
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-central-2",
@@ -2307,7 +2296,6 @@
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-central-2",
@@ -2344,7 +2332,6 @@
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-central-2",
@@ -2838,22 +2825,6 @@
"aws-us-gov": []
}
},
"deadline-cloud": {
"regions": {
"aws": [
"ap-northeast-1",
"ap-southeast-1",
"ap-southeast-2",
"eu-central-1",
"eu-west-1",
"us-east-1",
"us-east-2",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"deepcomposer": {
"regions": {
"aws": [
@@ -3097,7 +3068,6 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -6022,9 +5992,7 @@
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": [
"us-gov-west-1"
]
"aws-us-gov": []
}
},
"lexv2-models": {
@@ -6043,9 +6011,7 @@
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": [
"us-gov-west-1"
]
"aws-us-gov": []
}
},
"license-manager": {
@@ -6610,7 +6576,6 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -6632,7 +6597,6 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-4",
@@ -6642,7 +6606,6 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -6772,7 +6735,6 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-4",
@@ -6781,7 +6743,6 @@
"eu-north-1",
"eu-west-1",
"eu-west-3",
"me-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -7183,7 +7144,6 @@
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ca-central-1",
"eu-central-1",
"eu-north-1",
@@ -9848,7 +9808,6 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
@@ -9858,7 +9817,6 @@
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",


@@ -69,9 +69,6 @@ Caller Identity ARN: {Fore.YELLOW}[{audit_info.audited_identity_arn}]{Style.RESE
def create_sts_session(
session: session.Session, aws_region: str
) -> session.Session.client:
sts_endpoint_url = (
f"https://sts.{aws_region}.amazonaws.com"
if "cn-" not in aws_region
else f"https://sts.{aws_region}.amazonaws.com.cn"
)
return session.client(
"sts", aws_region, endpoint_url=f"https://sts.{aws_region}.amazonaws.com"
)
return session.client("sts", aws_region, endpoint_url=sts_endpoint_url)
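The diff above makes the STS endpoint partition-aware instead of hardcoding the global domain. A minimal sketch of the same branch as a standalone helper (the function name is illustrative):

```python
def sts_endpoint_url(aws_region: str) -> str:
    # China partition regions use the .amazonaws.com.cn domain;
    # all other regions use the global .amazonaws.com domain.
    if "cn-" in aws_region:
        return f"https://sts.{aws_region}.amazonaws.com.cn"
    return f"https://sts.{aws_region}.amazonaws.com"
```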


@@ -1,6 +1,5 @@
from typing import Optional
from botocore.exceptions import ClientError
from pydantic import BaseModel
from prowler.lib.logger import logger
@@ -48,28 +47,12 @@ class APIGateway(AWSService):
logger.info("APIGateway - Getting Rest APIs authorizer...")
try:
for rest_api in self.rest_apis:
try:
regional_client = self.regional_clients[rest_api.region]
authorizers = regional_client.get_authorizers(
restApiId=rest_api.id
)["items"]
if authorizers:
rest_api.authorizer = True
except ClientError as error:
if error.response["Error"]["Code"] == "NotFoundException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
regional_client = self.regional_clients[rest_api.region]
authorizers = regional_client.get_authorizers(restApiId=rest_api.id)[
"items"
]
if authorizers:
rest_api.authorizer = True
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
@@ -79,25 +62,10 @@ class APIGateway(AWSService):
logger.info("APIGateway - Describing Rest API...")
try:
for rest_api in self.rest_apis:
try:
regional_client = self.regional_clients[rest_api.region]
rest_api_info = regional_client.get_rest_api(restApiId=rest_api.id)
if rest_api_info["endpointConfiguration"]["types"] == ["PRIVATE"]:
rest_api.public_endpoint = False
except ClientError as error:
if error.response["Error"]["Code"] == "NotFoundException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
regional_client = self.regional_clients[rest_api.region]
rest_api_info = regional_client.get_rest_api(restApiId=rest_api.id)
if rest_api_info["endpointConfiguration"]["types"] == ["PRIVATE"]:
rest_api.public_endpoint = False
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
@@ -107,44 +75,29 @@ class APIGateway(AWSService):
logger.info("APIGateway - Getting stages for Rest APIs...")
try:
for rest_api in self.rest_apis:
try:
regional_client = self.regional_clients[rest_api.region]
stages = regional_client.get_stages(restApiId=rest_api.id)
for stage in stages["item"]:
waf = None
logging = False
client_certificate = False
if "webAclArn" in stage:
waf = stage["webAclArn"]
if "methodSettings" in stage:
if stage["methodSettings"]:
logging = True
if "clientCertificateId" in stage:
client_certificate = True
arn = f"arn:{self.audited_partition}:apigateway:{regional_client.region}::/restapis/{rest_api.id}/stages/{stage['stageName']}"
rest_api.stages.append(
Stage(
name=stage["stageName"],
arn=arn,
logging=logging,
client_certificate=client_certificate,
waf=waf,
tags=[stage.get("tags")],
)
regional_client = self.regional_clients[rest_api.region]
stages = regional_client.get_stages(restApiId=rest_api.id)
for stage in stages["item"]:
waf = None
logging = False
client_certificate = False
if "webAclArn" in stage:
waf = stage["webAclArn"]
if "methodSettings" in stage:
if stage["methodSettings"]:
logging = True
if "clientCertificateId" in stage:
client_certificate = True
arn = f"arn:{self.audited_partition}:apigateway:{regional_client.region}::/restapis/{rest_api.id}/stages/{stage['stageName']}"
rest_api.stages.append(
Stage(
name=stage["stageName"],
arn=arn,
logging=logging,
client_certificate=client_certificate,
waf=waf,
tags=[stage.get("tags")],
)
except ClientError as error:
if error.response["Error"]["Code"] == "NotFoundException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
@@ -155,50 +108,33 @@ class APIGateway(AWSService):
logger.info("APIGateway - Getting API resources...")
try:
for rest_api in self.rest_apis:
try:
regional_client = self.regional_clients[rest_api.region]
get_resources_paginator = regional_client.get_paginator(
"get_resources"
)
for page in get_resources_paginator.paginate(restApiId=rest_api.id):
for resource in page["items"]:
id = resource["id"]
resource_methods = []
methods_auth = {}
for resource_method in resource.get(
"resourceMethods", {}
).keys():
resource_methods.append(resource_method)
regional_client = self.regional_clients[rest_api.region]
get_resources_paginator = regional_client.get_paginator("get_resources")
for page in get_resources_paginator.paginate(restApiId=rest_api.id):
for resource in page["items"]:
id = resource["id"]
resource_methods = []
methods_auth = {}
for resource_method in resource.get(
"resourceMethods", {}
).keys():
resource_methods.append(resource_method)
for resource_method in resource_methods:
if resource_method != "OPTIONS":
method_config = regional_client.get_method(
restApiId=rest_api.id,
resourceId=id,
httpMethod=resource_method,
)
auth_type = method_config["authorizationType"]
methods_auth.update({resource_method: auth_type})
rest_api.resources.append(
PathResourceMethods(
path=resource["path"], resource_methods=methods_auth
for resource_method in resource_methods:
if resource_method != "OPTIONS":
method_config = regional_client.get_method(
restApiId=rest_api.id,
resourceId=id,
httpMethod=resource_method,
)
)
except ClientError as error:
if error.response["Error"]["Code"] == "NotFoundException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
auth_type = method_config["authorizationType"]
methods_auth.update({resource_method: auth_type})
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
rest_api.resources.append(
PathResourceMethods(
path=resource["path"], resource_methods=methods_auth
)
)
except Exception as error:
logger.error(
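The reshuffled error handling in this file follows a common per-item pattern: a REST API that disappears mid-scan (botocore's `NotFoundException`) is logged as a warning and skipped, any other failure is logged as an error, and in neither case does one bad API abort the whole collection loop. A self-contained sketch of the pattern, with a stand-in exception rather than a real `ClientError`:

```python
import logging

logger = logging.getLogger("apigateway")

class NotFound(Exception):
    """Stand-in for a ClientError whose error code is NotFoundException."""

def describe_rest_apis(api_ids, describe):
    # Collect what we can; log and skip per-item failures.
    results = {}
    for api_id in api_ids:
        try:
            results[api_id] = describe(api_id)
        except NotFound as error:
            logger.warning("%s -- %s", api_id, error)
        except Exception as error:
            logger.error("%s -- %s", api_id, error)
    return results
```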


@@ -11,7 +11,7 @@
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"Severity": "high",
"ResourceType": "AwsIamRole",
"ResourceType": "AwsIamPolicy",
"Description": "Ensure inline policies that allow full \"*:*\" administrative privileges are not associated to IAM identities",
"Risk": "IAM policies are the means by which privileges are granted to users, groups or roles. It is recommended and considered a standard security advice to grant least privilege—that is; granting only the permissions required to perform a task. Determine what users need to do and then craft policies for them that let the users perform only those tasks instead of allowing full administrative privileges. Providing full administrative privileges instead of restricting to the minimum set of permissions that the user is required to do exposes the resources to potentially unwanted actions.",
"RelatedUrl": "",


@@ -43,7 +43,6 @@ class sns_topics_not_publicly_accessible(Check):
else:
report.status = "FAIL"
report.status_extended = f"SNS topic {topic.name} is public because its policy allows public access."
break
findings.append(report)


@@ -41,11 +41,9 @@ class sqs_queues_not_publicly_accessible(Check):
else:
report.status = "FAIL"
report.status_extended = f"SQS queue {queue.id} is public because its policy allows public access, and the condition does not limit access to resources within the same account."
break
else:
report.status = "FAIL"
report.status_extended = f"SQS queue {queue.id} is public because its policy allows public access."
break
findings.append(report)
return findings
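The removed `break` statements matter because an early exit from the loop can truncate the findings list before every item has been reported. A toy sketch of that difference (shapes are illustrative, not the real check models):

```python
def scan_queues(queues, stop_at_first_fail=False):
    # Each queue gets its own report; with an early `break`
    # (stop_at_first_fail=True) everything after the first FAIL is lost.
    findings = []
    for queue in queues:
        status = "FAIL" if queue["public"] else "PASS"
        findings.append({"id": queue["id"], "status": status})
        if stop_at_first_fail and status == "FAIL":
            break
    return findings
```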


@@ -1,21 +0,0 @@
from uuid import UUID
# Service management API
WINDOWS_AZURE_SERVICE_MANAGEMENT_API = "797f4846-ba00-4fd7-ba43-dac1f8f63013"
# Authorization policy roles
GUEST_USER_ACCESS_NO_RESTRICTICTED = UUID("a0b1b346-4d3e-4e8b-98f8-753987be4970")
GUEST_USER_ACCESS_RESTRICTICTED = UUID("2af84b1e-32c8-42b7-82bc-daa82404023b")
# General built-in roles
CONTRIBUTOR_ROLE_ID = "b24988ac-6180-42a0-ab88-20f7382dd24c"
OWNER_ROLE_ID = "8e3af657-a8ff-443c-a75c-2fe8c4bcb635"
# Compute roles
VIRTUAL_MACHINE_CONTRIBUTOR_ROLE_ID = "9980e02c-c2be-4d73-94e8-173b1dc7cf3c"
VIRTUAL_MACHINE_ADMINISTRATOR_LOGIN_ROLE_ID = "1c0163c0-47e6-4577-8991-ea5c82e286e4"
VIRTUAL_MACHINE_USER_LOGIN_ROLE_ID = "fb879df8-f326-4884-b1cf-06f3ad86be52"
VIRTUAL_MACHINE_LOCAL_USER_LOGIN_ROLE_ID = "602da2ba-a5c2-41da-b01d-5360126ab525"
WINDOWS_ADMIN_CENTER_ADMINISTRATOR_LOGIN_ROLE_ID = (
"a6333a3e-0164-44c3-b281-7a577aff287f"
)


@@ -1,5 +1,3 @@
from typing import Any
from prowler.lib.logger import logger
from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
@@ -7,7 +5,7 @@ from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
class AzureService:
def __init__(
self,
service: Any,
service: str,
audit_info: Azure_Audit_Info,
):
self.clients = self.__set_clients__(


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "app_http_logs_enabled",
"CheckTitle": "Ensure that logging for Azure AppService 'HTTP logs' is enabled",
"CheckType": [],
"ServiceName": "app",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "low",
"ResourceType": "Microsoft.Web/sites/config",
"Description": "Enable AppServiceHTTPLogs diagnostic log category for Azure App Service instances to ensure all http requests are captured and centrally logged.",
"Risk": "Capturing web requests can be important supporting information for security analysts performing monitoring and incident response activities. Once logging, these logs can be ingested into SIEM or other central aggregation point for the organization.",
"RelatedUrl": "https://learn.microsoft.com/en-us/security/benchmark/azure/mcsb-logging-threat-detection#lt-3-enable-logging-for-security-investigation",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": "https://docs.bridgecrew.io/docs/ensure-that-app-service-enables-http-logging#terraform"
},
"Recommendation": {
"Text": "1. Go to App Services For each App Service: 2. Go to Diagnostic Settings 3. Click Add Diagnostic Setting 4. Check the checkbox next to 'HTTP logs' 5. Configure a destination based on your specific logging consumption capability (for example Stream to an event hub and then consuming with SIEM integration for Event Hub logging).",
"Url": "https://docs.microsoft.com/en-us/azure/app-service/troubleshoot-diagnostic-logs"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Log consumption and processing will incur additional cost."
}


@@ -1,29 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.app.app_client import app_client


class app_http_logs_enabled(Check):
    def execute(self) -> Check_Report_Azure:
        findings = []
        for subscription_name, apps in app_client.apps.items():
            for app_name, app in apps.items():
                if "functionapp" not in app.kind:
                    report = Check_Report_Azure(self.metadata())
                    report.status = "FAIL"
                    report.subscription = subscription_name
                    report.resource_name = app_name
                    report.resource_id = app.resource_id
                    if not app.monitor_diagnostic_settings:
                        report.status_extended = f"App {app_name} does not have a diagnostic setting in subscription {subscription_name}."
                    else:
                        for diagnostic_setting in app.monitor_diagnostic_settings:
                            report.status_extended = f"App {app_name} does not have HTTP Logs enabled in diagnostic setting {diagnostic_setting.name} in subscription {subscription_name}"
                            for log in diagnostic_setting.logs:
                                if log.category == "AppServiceHTTPLogs" and log.enabled:
                                    report.status = "PASS"
                                    report.status_extended = f"App {app_name} has HTTP Logs enabled in diagnostic setting {diagnostic_setting.name} in subscription {subscription_name}"
                                    break
                    findings.append(report)
        return findings


@@ -6,8 +6,6 @@ from azure.mgmt.web.models import ManagedServiceIdentity, SiteConfigResource
from prowler.lib.logger import logger
from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
from prowler.providers.azure.lib.service.service import AzureService
from prowler.providers.azure.services.monitor.monitor_client import monitor_client
from prowler.providers.azure.services.monitor.monitor_service import DiagnosticSetting
########################## App
@@ -51,12 +49,8 @@ class App(AzureService):
getattr(app, "client_cert_enabled", False),
getattr(app, "client_cert_mode", "Ignore"),
),
monitor_diagnostic_settings=self.__get_app_monitor_settings__(
app.name, app.resource_group, subscription_name
),
https_only=getattr(app, "https_only", False),
identity=getattr(app, "identity", None),
kind=getattr(app, "kind", "app"),
)
}
)
@@ -84,21 +78,6 @@ class App(AzureService):
return cert_mode
def __get_app_monitor_settings__(self, app_name, resource_group, subscription):
logger.info(f"App - Getting monitor diagnostics settings for {app_name}...")
monitor_diagnostics_settings = []
try:
monitor_diagnostics_settings = monitor_client.diagnostic_settings_with_uri(
self.subscriptions[subscription],
f"subscriptions/{self.subscriptions[subscription]}/resourceGroups/{resource_group}/providers/Microsoft.Web/sites/{app_name}",
monitor_client.clients[subscription],
)
except Exception as error:
logger.error(
f"Subscription name: {self.subscription} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return monitor_diagnostics_settings
@dataclass
class WebApp:
@@ -108,5 +87,3 @@ class WebApp:
client_cert_mode: str = "Ignore"
auth_enabled: bool = False
https_only: bool = False
monitor_diagnostic_settings: list[DiagnosticSetting] = None
kind: str = "app"


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_conditional_access_policy_require_mfa_for_management_api",
"CheckTitle": "Ensure Multifactor Authentication is Required for Windows Azure Service Management API",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "#microsoft.graph.conditionalAccess",
"Description": "This recommendation ensures that users accessing the Windows Azure Service Management API (i.e. Azure PowerShell, Azure CLI, Azure Resource Manager API, etc.) are required to use multifactor authentication (MFA) credentials when accessing resources through the Windows Azure Service Management API.",
"Risk": "Administrative access to the Windows Azure Service Management API should be secured with a higher level of scrutiny to authenticating mechanisms. Enabling multifactor authentication is recommended to reduce the potential for abuse of Administrative actions, and to prevent intruders or compromised admin credentials from changing administrative settings.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/conditional-access/howto-conditional-access-policy-azure-management",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From the Azure Admin Portal dashboard, open Microsoft Entra ID. 2. Click Security in the Entra ID blade. 3. Click Conditional Access in the Security blade. 4. Click Policies in the Conditional Access blade. 5. Click + New policy. 6. Enter a name for the policy. 7. Click the blue text under Users. 8. Under Include, select All users. 9. Under Exclude, check Users and groups. 10. Select users or groups to be exempted from this policy (e.g. break-glass emergency accounts, and non-interactive service accounts) then click the Select button. 11. Click the blue text under Target Resources. 12. Under Include, click the Select apps radio button. 13. Click the blue text under Select. 14. Check the box next to Windows Azure Service Management APIs then click the Select button. 15. Click the blue text under Grant. 16. Under Grant access check the box for Require multifactor authentication then click the Select button. 17. Before creating, set Enable policy to Report-only. 18. Click Create. After testing the policy in report-only mode, update the Enable policy setting from Report-only to On.",
"Url": "https://learn.microsoft.com/en-us/entra/identity/conditional-access/concept-conditional-access-cloud-apps"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Conditional Access policies require Microsoft Entra ID P1 or P2 licenses. Similarly, they may require additional overhead to maintain if users lose access to their MFA. Any users or groups which are granted an exception to this policy should be carefully tracked, be granted only minimal necessary privileges, and conditional access exceptions should be regularly reviewed or investigated."
}
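
A Conditional Access policy satisfies this check only when it is enabled, scoped to all users, targets the management API, and grants access with MFA. A sketch of that condition over a simplified policy dict (the dict shape and the hard-coded app ID are assumptions; in Prowler the ID comes from `prowler.providers.azure.config.WINDOWS_AZURE_SERVICE_MANAGEMENT_API`):

```python
# Well-known application ID commonly documented for the
# "Windows Azure Service Management API" first-party app (assumed here).
MANAGEMENT_API = "797f4846-ba00-4fd7-ba43-dac1f8f63013"


def policy_requires_mfa_for_management_api(policy: dict) -> bool:
    """Mirror the check's four conditions over a simplified policy dict."""
    return (
        policy.get("state") == "enabled"
        and "All" in policy.get("users", {}).get("include", [])
        and MANAGEMENT_API in policy.get("target_resources", {}).get("include", [])
        and any(
            "mfa" in control.lower()
            for control in policy.get("access_controls", {}).get("grant", [])
        )
    )
```

All four conditions must hold at once; a report-only policy (state not "enabled") or one scoped to a subset of users does not pass.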


@@ -1,44 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.config import WINDOWS_AZURE_SERVICE_MANAGEMENT_API
from prowler.providers.azure.services.entra.entra_client import entra_client


class entra_conditional_access_policy_require_mfa_for_management_api(Check):
    def execute(self) -> Check_Report_Azure:
        findings = []
        for (
            tenant_name,
            conditional_access_policies,
        ) in entra_client.conditional_access_policy.items():
            report = Check_Report_Azure(self.metadata())
            report.status = "FAIL"
            report.subscription = f"Tenant: {tenant_name}"
            report.resource_name = "Conditional Access Policy"
            report.resource_id = "Conditional Access Policy"
            report.status_extended = (
                "Conditional Access Policy does not require MFA for management API."
            )
            for policy_id, policy in conditional_access_policies.items():
                if (
                    policy.state == "enabled"
                    and "All" in policy.users["include"]
                    and WINDOWS_AZURE_SERVICE_MANAGEMENT_API
                    in policy.target_resources["include"]
                    and any(
                        "mfa" in access_control.lower()
                        for access_control in policy.access_controls["grant"]
                    )
                ):
                    report.status = "PASS"
                    report.status_extended = (
                        "Conditional Access Policy requires MFA for management API."
                    )
                    report.resource_id = policy_id
                    report.resource_name = policy.name
                    break
            findings.append(report)
        return findings


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_global_admin_in_less_than_five_users",
"CheckTitle": "Ensure fewer than 5 users have global administrator assignment",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "#microsoft.graph.directoryRole",
"Description": "This recommendation aims to maintain a balance between security and operational efficiency by ensuring that a minimum of 2 and a maximum of 4 users are assigned the Global Administrator role in Microsoft Entra ID. Having at least two Global Administrators ensures redundancy, while limiting the number to four reduces the risk of excessive privileged access.",
"Risk": "The Global Administrator role has extensive privileges across all services in Microsoft Entra ID. The Global Administrator role should never be used in regular daily activities; administrators should have a regular user account for daily activities, and a separate account for administrative responsibilities. Limiting the number of Global Administrators helps mitigate the risk of unauthorized access, reduces the potential impact of human error, and aligns with the principle of least privilege to reduce the attack surface of an Azure tenant. Conversely, having at least two Global Administrators ensures that administrative functions can be performed without interruption in case of unavailability of a single admin.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/best-practices#5-limit-the-number-of-global-administrators-to-less-than-5",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu 2. Select Microsoft Entra ID 3. Select Roles and Administrators 4. Select Global Administrator 5. Ensure that fewer than 5 users are actively assigned the role. 6. Ensure that at least 2 users are actively assigned the role.",
"Url": "https://learn.microsoft.com/en-us/microsoft-365/admin/add-users/about-admin-roles?view=o365-worldwide#security-guidelines-for-assigning-roles"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Implementing this recommendation may require changes in administrative workflows or the redistribution of roles and responsibilities. Adequate training and awareness should be provided to all Global Administrators."
}
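
Note the two thresholds in play: the description recommends between 2 and 4 Global Administrators, while the check itself passes on any count below five. A sketch of both evaluations (function names here are illustrative, not Prowler's):

```python
def passes_check(num_global_admins: int) -> bool:
    # The check's condition: PASS whenever fewer than five users hold the role.
    return num_global_admins < 5


def within_recommended_band(num_global_admins: int) -> bool:
    # The stricter guidance from the description: at least 2 for redundancy,
    # at most 4 to limit privileged access.
    return 2 <= num_global_admins <= 4
```

A tenant with a single Global Administrator therefore passes the check even though it falls short of the redundancy recommendation.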


@@ -1,36 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client


class entra_global_admin_in_less_than_five_users(Check):
    def execute(self) -> Check_Report_Azure:
        findings = []
        for tenant_domain, directory_roles in entra_client.directory_roles.items():
            report = Check_Report_Azure(self.metadata())
            report.status = "FAIL"
            report.subscription = f"Tenant: {tenant_domain}"
            report.resource_name = "Global Administrator"
            if "Global Administrator" in directory_roles:
                report.resource_id = getattr(
                    directory_roles["Global Administrator"],
                    "id",
                    "Global Administrator",
                )
                num_global_admins = len(
                    getattr(directory_roles["Global Administrator"], "members", [])
                )
                if num_global_admins < 5:
                    report.status = "PASS"
                    report.status_extended = (
                        f"There are {num_global_admins} global administrators."
                    )
                else:
                    report.status_extended = f"There are {num_global_admins} global administrators. It should be less than five."
            findings.append(report)
        return findings


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_non_privileged_user_has_mfa",
"CheckTitle": "Ensure that 'Multi-Factor Auth Status' is 'Enabled' for all Non-Privileged Users",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "#microsoft.graph.users",
"Description": "Enable multi-factor authentication for all non-privileged users.",
"Risk": "Multi-factor authentication requires an individual to present a minimum of two separate forms of authentication before access is granted. Multi-factor authentication provides additional assurance that the individual attempting to gain access is who they claim to be. With multi-factor authentication, an attacker would need to compromise at least two different authentication mechanisms, increasing the difficulty of compromise and thus reducing the risk.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/authentication/concept-mfa-howitworks",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/ActiveDirectory/multi-factor-authentication-for-all-non-privileged-users.html#",
"Terraform": ""
},
"Recommendation": {
"Text": "Activate one of the available multi-factor authentication methods for users in Microsoft Entra ID.",
"Url": "https://learn.microsoft.com/en-us/entra/identity/authentication/tutorial-enable-azure-mfa"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Users would require two forms of authentication before any access is granted. Also, this requires an overhead for managing dual forms of authentication."
}


@@ -1,34 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client
from prowler.providers.azure.services.entra.lib.user_privileges import (
    is_privileged_user,
)


class entra_non_privileged_user_has_mfa(Check):
    def execute(self) -> Check_Report_Azure:
        findings = []
        for tenant_domain, users in entra_client.users.items():
            for user_domain_name, user in users.items():
                if not is_privileged_user(
                    user, entra_client.directory_roles[tenant_domain]
                ):
                    report = Check_Report_Azure(self.metadata())
                    report.status = "FAIL"
                    report.subscription = f"Tenant: {tenant_domain}"
                    report.resource_name = user_domain_name
                    report.resource_id = user.id
                    report.status_extended = (
                        f"Non-privileged user {user.name} does not have MFA."
                    )
                    if len(user.authentication_methods) > 1:
                        report.status = "PASS"
                        report.status_extended = (
                            f"Non-privileged user {user.name} has MFA."
                        )
                    findings.append(report)
        return findings


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_policy_default_users_cannot_create_security_groups",
"CheckTitle": "Ensure that 'Users can create security groups in Azure portals, API or PowerShell' is set to 'No'",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "#microsoft.graph.authorizationPolicy",
"Description": "Restrict security group creation to administrators only.",
"Risk": "When creating security groups is enabled, all users in the directory are allowed to create new security groups and add members to those groups. Unless a business requires this day-to-day delegation, security group creation should be restricted to administrators only.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/users/groups-self-service-management",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/ActiveDirectory/users-can-create-security-groups.html",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu 2. Select Microsoft Entra ID 3. Select Groups 4. Select General under Settings 5. Set Users can create security groups in Azure portals, API or PowerShell to No",
"Url": ""
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Enabling this setting could create a number of requests that would need to be managed by an administrator."
}


@@ -1,30 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client


class entra_policy_default_users_cannot_create_security_groups(Check):
    def execute(self) -> Check_Report_Azure:
        findings = []
        for tenant_domain, auth_policy in entra_client.authorization_policy.items():
            report = Check_Report_Azure(self.metadata())
            report.status = "FAIL"
            report.subscription = f"Tenant: {tenant_domain}"
            report.resource_name = getattr(auth_policy, "name", "Authorization Policy")
            report.resource_id = getattr(auth_policy, "id", "authorizationPolicy")
            report.status_extended = "Non-privileged users are able to create security groups via the Access Panel and the Azure administration portal."
            if getattr(
                auth_policy, "default_user_role_permissions", None
            ) and not getattr(
                auth_policy.default_user_role_permissions,
                "allowed_to_create_security_groups",
                True,
            ):
                report.status = "PASS"
                report.status_extended = "Non-privileged users are not able to create security groups via the Access Panel and the Azure administration portal."
            findings.append(report)
        return findings


@@ -15,7 +15,7 @@
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/ActiveDirectory/users-can-register-applications.html",
"Other": "",
"Terraform": ""
},
"Recommendation": {


@@ -10,14 +10,12 @@ class entra_policy_ensure_default_user_cannot_create_apps(Check):
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant_domain}"
report.resource_name = getattr(auth_policy, "name", "Authorization Policy")
report.resource_id = getattr(auth_policy, "id", "authorizationPolicy")
report.subscription = f"All from tenant '{tenant_domain}'"
report.resource_name = auth_policy.name
report.resource_id = auth_policy.id
report.status_extended = "App creation is not disabled for non-admin users."
if getattr(
auth_policy, "default_user_role_permissions", None
) and not getattr(
if auth_policy.default_user_role_permissions and not getattr(
auth_policy.default_user_role_permissions,
"allowed_to_create_apps",
True,


@@ -7,18 +7,17 @@ class entra_policy_ensure_default_user_cannot_create_tenants(Check):
findings = []
for tenant_domain, auth_policy in entra_client.authorization_policy.items():
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant_domain}"
report.resource_name = getattr(auth_policy, "name", "Authorization Policy")
report.resource_id = getattr(auth_policy, "id", "authorizationPolicy")
report.subscription = f"All from tenant '{tenant_domain}'"
report.resource_name = auth_policy.name
report.resource_id = auth_policy.id
report.status_extended = (
"Tenants creation is not disabled for non-admin users."
)
if getattr(
auth_policy, "default_user_role_permissions", None
) and not getattr(
if auth_policy.default_user_role_permissions and not getattr(
auth_policy.default_user_role_permissions,
"allowed_to_create_tenants",
True,


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_policy_guest_invite_only_for_admin_roles",
"CheckTitle": "Ensure that 'Guest invite restrictions' is set to 'Only users assigned to specific admin roles can invite guest users'",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "#microsoft.graph.authorizationPolicy",
"Description": "Restrict invitations to users with specific administrative roles only.",
"Risk": "Restricting invitations to users with specific administrator roles ensures that only authorized accounts have access to cloud resources. This helps to maintain 'Need to Know' permissions and prevents inadvertent access to data. By default the setting Guest invite restrictions is set to Anyone in the organization can invite guest users including guests and non-admins. This would allow anyone within the organization to invite guests and non-admins to the tenant, posing a security risk.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/external-id/external-collaboration-settings-configure",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu 2. Select Microsoft Entra ID 3. Then External Identities 4. Select External collaboration settings 5. Under Guest invite settings, for Guest invite restrictions, ensure that Only users assigned to specific admin roles can invite guest users is selected",
"Url": "https://learn.microsoft.com/en-us/answers/questions/685101/how-to-allow-only-admins-to-add-guests"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "With the option of Only users assigned to specific admin roles can invite guest users selected, users with specific admin roles will be in charge of sending invitations to the external users, requiring additional overhead by them to manage user accounts. This will mean coordinating with other departments as they are onboarding new users."
}


@@ -1,27 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client


class entra_policy_guest_invite_only_for_admin_roles(Check):
    def execute(self) -> Check_Report_Azure:
        findings = []
        for tenant_domain, auth_policy in entra_client.authorization_policy.items():
            report = Check_Report_Azure(self.metadata())
            report.status = "FAIL"
            report.subscription = f"Tenant: {tenant_domain}"
            report.resource_name = getattr(auth_policy, "name", "Authorization Policy")
            report.resource_id = getattr(auth_policy, "id", "authorizationPolicy")
            report.status_extended = "Guest invitations are not restricted to users with specific administrative roles only."
            if (
                getattr(auth_policy, "guest_invite_settings", "everyone")
                == "adminsAndGuestInviters"
                or getattr(auth_policy, "guest_invite_settings", "everyone") == "none"
            ):
                report.status = "PASS"
                report.status_extended = "Guest invitations are restricted to users with specific administrative roles only."
            findings.append(report)
        return findings


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_policy_guest_users_access_restrictions",
"CheckTitle": "Ensure That 'Guest users access restrictions' is set to 'Guest user access is restricted to properties and memberships of their own directory objects'",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "#microsoft.graph.authorizationPolicy",
"Description": "Limit guest user permissions.",
"Risk": "Limiting guest access ensures that guest accounts do not have permission for certain directory tasks, such as enumerating users, groups or other directory resources, and cannot be assigned to administrative roles in your directory. Guest access has three levels of restriction. 1. Guest users have the same access as members (most inclusive), 2. Guest users have limited access to properties and memberships of directory objects (default value), 3. Guest user access is restricted to properties and memberships of their own directory objects (most restrictive). The recommended option is the 3rd, most restrictive: 'Guest user access is restricted to their own directory object'.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/users/users-restrict-guest-permissions",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu 2. Select Microsoft Entra ID 3. Then External Identities 4. Select External collaboration settings 5. Under Guest user access, change Guest user access restrictions to be Guest user access is restricted to properties and memberships of their own directory objects",
"Url": "https://learn.microsoft.com/en-us/entra/fundamentals/users-default-permissions#member-and-guest-users"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "This may create additional requests for permissions to access resources that administrators will need to approve. According to https://learn.microsoft.com/en-us/azure/active-directory/enterprise-users/users-restrict-guest-permissions#services-currently-not-supported, services without current support might have compatibility issues with the new guest restriction setting."
}
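
Under the hood, the "Guest users access restrictions" setting surfaces in the authorization policy as a `guest_user_role_id` GUID, and the check passes only when that GUID matches the most restrictive role template. A sketch under the assumption that the GUID below is the "restricted guest user" template ID that Microsoft documents (Prowler stores it as the `GUEST_USER_ACCESS_RESTRICTICTED` constant):

```python
# Assumed role template id for the most restrictive guest access level
# ("Guest user access is restricted to properties and memberships of
# their own directory objects").
RESTRICTED_GUEST_ROLE_ID = "2af84b1e-32c8-42b7-82bc-daa82404023b"


def guest_access_restricted(guest_user_role_id):
    # PASS only on an exact match; the default/limited guest role ids,
    # or a missing value, all FAIL.
    return guest_user_role_id == RESTRICTED_GUEST_ROLE_ID
```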


@@ -1,27 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.config import GUEST_USER_ACCESS_RESTRICTICTED
from prowler.providers.azure.services.entra.entra_client import entra_client


class entra_policy_guest_users_access_restrictions(Check):
    def execute(self) -> Check_Report_Azure:
        findings = []
        for tenant_domain, auth_policy in entra_client.authorization_policy.items():
            report = Check_Report_Azure(self.metadata())
            report.status = "FAIL"
            report.subscription = f"Tenant: {tenant_domain}"
            report.resource_name = getattr(auth_policy, "name", "Authorization Policy")
            report.resource_id = getattr(auth_policy, "id", "authorizationPolicy")
            report.status_extended = "Guest user access is not restricted to properties and memberships of their own directory objects"
            if (
                getattr(auth_policy, "guest_user_role_id", None)
                == GUEST_USER_ACCESS_RESTRICTICTED
            ):
                report.status = "PASS"
                report.status_extended = "Guest user access is restricted to properties and memberships of their own directory objects"
            findings.append(report)
        return findings


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_policy_restricts_user_consent_for_apps",
"CheckTitle": "Ensure 'User consent for applications' is set to 'Do not allow user consent'",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "#microsoft.graph.authorizationPolicy",
"Description": "Require administrators to provide consent for applications before use.",
"Risk": "If Microsoft Entra ID is running as an identity provider for third-party applications, permissions and consent should be limited to administrators or pre-approved. Malicious applications may attempt to exfiltrate data or abuse privileged user accounts.",
"RelatedUrl": "https://learn.microsoft.com/en-gb/entra/identity/enterprise-apps/configure-user-consent?pivots=portal",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/ActiveDirectory/users-can-consent-to-apps-accessing-company-data-on-their-behalf.html#",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu 2. Select Microsoft Entra ID 3. Select Enterprise Applications 4. Select Consent and permissions 5. Select User consent settings 6. Set User consent for applications to Do not allow user consent 7. Click save",
"Url": "https://learn.microsoft.com/en-us/security/benchmark/azure/mcsb-privileged-access#pa-1-separate-and-limit-highly-privilegedadministrative-users"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Enforcing this setting may create additional requests that administrators need to review."
}
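
"Do not allow user consent" maps to the default user role having no `ManagePermissionGrantsForSelf.*` permission-grant policy assigned. A minimal sketch of that decision over the list of assigned policy names (the helper name is illustrative, not Prowler's):

```python
# The legacy default that allows any user to consent to any app.
DEFAULT_GRANT_POLICIES = ["ManagePermissionGrantsForSelf.microsoft-user-default-legacy"]


def user_consent_disabled(grant_policies):
    # PASS when no self-service consent grant policy is assigned at all;
    # any ManagePermissionGrantsForSelf.* entry means users can still consent.
    return not any("ManagePermissionGrantsForSelf" in policy for policy in grant_policies)
```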


@@ -1,30 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client


class entra_policy_restricts_user_consent_for_apps(Check):
    def execute(self) -> Check_Report_Azure:
        findings = []
        for tenant_domain, auth_policy in entra_client.authorization_policy.items():
            report = Check_Report_Azure(self.metadata())
            report.status = "FAIL"
            report.subscription = f"Tenant: {tenant_domain}"
            report.resource_name = getattr(auth_policy, "name", "Authorization Policy")
            report.resource_id = getattr(auth_policy, "id", "authorizationPolicy")
            report.status_extended = "Entra allows users to consent apps accessing company data on their behalf"
            if getattr(auth_policy, "default_user_role_permissions", None) and not any(
                "ManagePermissionGrantsForSelf" in policy_assigned
                for policy_assigned in getattr(
                    auth_policy.default_user_role_permissions,
                    "permission_grant_policies_assigned",
                    ["ManagePermissionGrantsForSelf.microsoft-user-default-legacy"],
                )
            ):
                report.status = "PASS"
                report.status_extended = "Entra does not allow users to consent apps accessing company data on their behalf"
            findings.append(report)
        return findings


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_policy_user_consent_for_verified_apps",
"CheckTitle": "Ensure 'User consent for applications' Is Set To 'Allow for Verified Publishers'",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "#microsoft.graph.authorizationPolicy",
"Description": "Allow users to provide consent for selected permissions when a request is coming from a verified publisher.",
"Risk": "If Microsoft Entra ID is running as an identity provider for third-party applications, permissions and consent should be limited to administrators or pre-approved. Malicious applications may attempt to exfiltrate data or abuse privileged user accounts.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/enterprise-apps/configure-user-consent?pivots=portal#configure-user-consent-to-applications",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu 2. Select Microsoft Entra ID 3. Select Enterprise Applications 4. Select Consent and permissions 5. Select User consent settings 6. Under User consent for applications, select Allow user consent for apps from verified publishers, for selected permissions 7. Select Save",
"Url": "https://learn.microsoft.com/en-us/security/benchmark/azure/mcsb-privileged-access#pa-1-separate-and-limit-highly-privilegedadministrative-users"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Enforcing this setting may create additional requests that administrators need to review."
}


@@ -1,31 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client


class entra_policy_user_consent_for_verified_apps(Check):
    def execute(self) -> Check_Report_Azure:
        findings = []
        for tenant_domain, auth_policy in entra_client.authorization_policy.items():
            report = Check_Report_Azure(self.metadata())
            report.status = "PASS"
            report.subscription = f"Tenant: {tenant_domain}"
            report.resource_name = getattr(auth_policy, "name", "Authorization Policy")
            report.resource_id = getattr(auth_policy, "id", "authorizationPolicy")
            report.status_extended = "Entra does not allow users to consent non-verified apps accessing company data on their behalf."
            if getattr(auth_policy, "default_user_role_permissions", None) and any(
                "ManagePermissionGrantsForSelf.microsoft-user-default-legacy"
                in policy_assigned
                for policy_assigned in getattr(
                    auth_policy.default_user_role_permissions,
                    "permission_grant_policies_assigned",
                    ["ManagePermissionGrantsForSelf.microsoft-user-default-legacy"],
                )
            ):
                report.status = "FAIL"
                report.status_extended = "Entra allows users to consent apps accessing company data on their behalf."
            findings.append(report)
        return findings


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_privileged_user_has_mfa",
"CheckTitle": "Ensure that 'Multi-Factor Auth Status' is 'Enabled' for all Privileged Users",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "#microsoft.graph.users",
"Description": "Enable multi-factor authentication for all roles, groups, and users that have write access or permissions to Azure resources. These include custom created objects or built-in roles such as; - Service Co-Administrators - Subscription Owners - Contributors",
"Risk": "Multi-factor authentication requires an individual to present a minimum of two separate forms of authentication before access is granted. Multi-factor authentication provides additional assurance that the individual attempting to gain access is who they claim to be. With multi-factor authentication, an attacker would need to compromise at least two different authentication mechanisms, increasing the difficulty of compromise and thus reducing the risk.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/authentication/concept-mfa-howitworks",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/ActiveDirectory/multi-factor-authentication-for-all-privileged-users.html#",
"Terraform": ""
},
"Recommendation": {
"Text": "Activate one of the available multi-factor authentication methods for users in Microsoft Entra ID.",
"Url": "https://learn.microsoft.com/en-us/entra/identity/authentication/tutorial-enable-azure-mfa"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Users would require two forms of authentication before any access is granted. Additional administrative time will be required for managing dual forms of authentication when enabling multi-factor authentication."
}
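
Both MFA checks in this compare use the same heuristic: a user's registered authentication methods always include the password, so more than one registered method implies a second factor. A minimal sketch of that test (the method names in the comment are illustrative):

```python
def has_mfa(authentication_methods):
    # The password counts as one method, so any additional registered method
    # (e.g. an authenticator app or a FIDO2 key) implies MFA is set up.
    return len(authentication_methods) > 1
```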


@@ -1,32 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client
from prowler.providers.azure.services.entra.lib.user_privileges import (
is_privileged_user,
)
class entra_privileged_user_has_mfa(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for tenant_domain, users in entra_client.users.items():
for user_domain_name, user in users.items():
if is_privileged_user(
user, entra_client.directory_roles[tenant_domain]
):
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant_domain}"
report.resource_name = user_domain_name
report.resource_id = user.id
report.status_extended = (
f"Privileged user {user.name} does not have MFA."
)
if len(user.authentication_methods) > 1:
report.status = "PASS"
report.status_extended = f"Privileged user {user.name} has MFA."
findings.append(report)
return findings
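
The PASS condition above hinges on a simple heuristic: a user counts as having MFA when more than one authentication method is registered, since a password alone shows up as a single method. A minimal sketch of that heuristic, using illustrative method names rather than real Microsoft Graph objects:

```python
# Hedged sketch of the MFA heuristic used by entra_privileged_user_has_mfa.
# The method names below are illustrative stand-ins, not real Graph
# authenticationMethod resource types.
def has_mfa(authentication_methods: list) -> bool:
    # A password alone registers as one method; any second method
    # (authenticator app, FIDO2 key, ...) implies MFA is configured.
    return len(authentication_methods) > 1

print(has_mfa(["password"]))                       # False
print(has_mfa(["password", "authenticator_app"]))  # True
```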


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_security_defaults_enabled",
"CheckTitle": "Ensure Security Defaults is enabled on Microsoft Entra ID",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "#microsoft.graph.identitySecurityDefaultsEnforcementPolicy",
"Description": "Security defaults in Microsoft Entra ID make it easier to be secure and help protect your organization. Security defaults contain preconfigured security settings for common attacks. Security defaults is available to everyone. The goal is to ensure that all organizations have a basic level of security enabled at no extra cost. You may turn on security defaults in the Azure portal.",
"Risk": "Security defaults provide secure default settings that we manage on behalf of organizations to keep customers safe until they are ready to manage their own identity security settings. For example, doing the following: - Requiring all users and admins to register for MFA. - Challenging users with MFA - when necessary, based on factors such as location, device, role, and task. - Disabling authentication from legacy authentication clients, which cant do MFA.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/fundamentals/security-defaults",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/ActiveDirectory/security-defaults-enabled.html#",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu. 2. Browse to Microsoft Entra ID > Properties 3. Select Manage security defaults 4. Set the Enable security defaults to Enabled 5. Select Save",
"Url": "https://techcommunity.microsoft.com/t5/microsoft-entra-blog/introducing-security-defaults/ba-p/1061414"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "This recommendation should be implemented initially and then may be overridden by other service/product specific CIS Benchmarks. Administrators should also be aware that certain configurations in Microsoft Entra ID may impact other Microsoft services such as Microsoft 365."
}


@@ -1,26 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client
class entra_security_defaults_enabled(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for (
tenant,
security_default,
) in entra_client.security_default.items():
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant}"
report.resource_name = getattr(security_default, "name", "Security Default")
report.resource_id = getattr(security_default, "id", "Security Default")
report.status_extended = "Entra security defaults is diabled."
if getattr(security_default, "is_enabled", False):
report.status = "PASS"
report.status_extended = "Entra security defaults is enabled."
findings.append(report)
return findings


@@ -1,20 +1,18 @@
import asyncio
from dataclasses import dataclass
from typing import Any, List, Optional
from uuid import UUID
from typing import Optional
from msgraph import GraphServiceClient
from msgraph.generated.models.default_user_role_permissions import (
DefaultUserRolePermissions,
)
from msgraph.generated.models.setting_value import SettingValue
from pydantic import BaseModel
from prowler.lib.logger import logger
from prowler.providers.azure.config import GUEST_USER_ACCESS_NO_RESTRICTICTED
from prowler.providers.azure.lib.service.service import AzureService
########################## Entra
class Entra(AzureService):
def __init__(self, azure_audit_info):
super().__init__(GraphServiceClient, azure_audit_info)
@@ -22,26 +20,10 @@ class Entra(AzureService):
self.authorization_policy = asyncio.get_event_loop().run_until_complete(
self.__get_authorization_policy__()
)
self.group_settings = asyncio.get_event_loop().run_until_complete(
self.__get_group_settings__()
)
self.security_default = asyncio.get_event_loop().run_until_complete(
self.__get_security_default__()
)
self.named_locations = asyncio.get_event_loop().run_until_complete(
self.__get_named_locations__()
)
self.directory_roles = asyncio.get_event_loop().run_until_complete(
self.__get_directory_roles__()
)
self.conditional_access_policy = asyncio.get_event_loop().run_until_complete(
self.__get_conditional_access_policy__()
)
async def __get_users__(self):
logger.info("Entra - Getting users...")
users = {}
try:
users = {}
for tenant, client in self.clients.items():
users_list = await client.users.get()
users.update({tenant: {}})
@@ -49,36 +31,20 @@ class Entra(AzureService):
users[tenant].update(
{
user.user_principal_name: User(
id=user.id,
name=user.display_name,
authentication_methods=(
await client.users.by_user_id(
user.id
).authentication.methods.get()
).value,
id=user.id, name=user.display_name
)
}
)
except Exception as error:
if (
error.__class__.__name__ == "ODataError"
and error.__dict__.get("response_status_code", None) == 403
):
logger.error(
"You need 'UserAuthenticationMethod.Read.All' permission to access this information. It only can be granted through Service Principal authentication."
)
else:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return users
async def __get_authorization_policy__(self):
logger.info("Entra - Getting authorization policy...")
authorization_policy = {}
try:
authorization_policy = {}
for tenant, client in self.clients.items():
auth_policy = await client.policies.authorization_policy.get()
authorization_policy.update(
@@ -90,16 +56,6 @@ class Entra(AzureService):
default_user_role_permissions=getattr(
auth_policy, "default_user_role_permissions", None
),
guest_invite_settings=(
auth_policy.allow_invites_from.value
if getattr(auth_policy, "allow_invites_from", None)
else "everyone"
),
guest_user_role_id=getattr(
auth_policy,
"guest_user_role_id",
GUEST_USER_ACCESS_NO_RESTRICTICTED,
),
)
}
)
@@ -110,202 +66,10 @@ class Entra(AzureService):
return authorization_policy
async def __get_group_settings__(self):
logger.info("Entra - Getting group settings...")
group_settings = {}
try:
for tenant, client in self.clients.items():
group_settings_list = await client.group_settings.get()
group_settings.update({tenant: {}})
for group_setting in group_settings_list.value:
group_settings[tenant].update(
{
group_setting.id: GroupSetting(
name=getattr(group_setting, "display_name", None),
template_id=getattr(group_setting, "template_id", None),
settings=getattr(group_setting, "values", []),
)
}
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return group_settings
async def __get_security_default__(self):
logger.info("Entra - Getting security default...")
try:
security_defaults = {}
for tenant, client in self.clients.items():
security_default = (
await client.policies.identity_security_defaults_enforcement_policy.get()
)
security_defaults.update(
{
tenant: SecurityDefault(
id=security_default.id,
name=security_default.display_name,
is_enabled=security_default.is_enabled,
),
}
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return security_defaults
async def __get_named_locations__(self):
logger.info("Entra - Getting named locations...")
named_locations = {}
try:
for tenant, client in self.clients.items():
named_locations_list = (
await client.identity.conditional_access.named_locations.get()
)
named_locations.update({tenant: {}})
for named_location in getattr(named_locations_list, "value", []):
named_locations[tenant].update(
{
named_location.id: NamedLocation(
name=named_location.display_name,
ip_ranges_addresses=[
getattr(ip_range, "cidr_address", None)
for ip_range in getattr(
named_location, "ip_ranges", []
)
],
is_trusted=getattr(named_location, "is_trusted", False),
)
}
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return named_locations
async def __get_directory_roles__(self):
logger.info("Entra - Getting directory roles...")
directory_roles_with_members = {}
try:
for tenant, client in self.clients.items():
directory_roles_with_members.update({tenant: {}})
directory_roles = await client.directory_roles.get()
for directory_role in directory_roles.value:
directory_role_members = (
await client.directory_roles.by_directory_role_id(
directory_role.id
).members.get()
)
directory_roles_with_members[tenant].update(
{
directory_role.display_name: DirectoryRole(
id=directory_role.id,
members=[
self.users[tenant][member.user_principal_name]
for member in directory_role_members.value
if self.users[tenant].get(
member.user_principal_name, None
)
],
)
}
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return directory_roles_with_members
async def __get_conditional_access_policy__(self):
logger.info("Entra - Getting conditional access policy...")
conditional_access_policy = {}
try:
for tenant, client in self.clients.items():
conditional_access_policies = (
await client.identity.conditional_access.policies.get()
)
conditional_access_policy.update({tenant: {}})
for policy in getattr(conditional_access_policies, "value", []):
conditions = getattr(policy, "conditions", None)
included_apps = []
excluded_apps = []
if getattr(conditions, "applications", None):
if getattr(conditions.applications, "include_applications", []):
included_apps = conditions.applications.include_applications
elif getattr(
conditions.applications, "include_user_actions", []
):
included_apps = conditions.applications.include_user_actions
if getattr(conditions.applications, "exclude_applications", []):
excluded_apps = conditions.applications.exclude_applications
elif getattr(
conditions.applications, "exclude_user_actions", []
):
excluded_apps = conditions.applications.exclude_user_actions
grant_access_controls = []
block_access_controls = []
for access_control in (
getattr(policy.grant_controls, "built_in_controls")
if policy.grant_controls
else []
):
if "Grant" in str(access_control):
grant_access_controls.append(str(access_control))
else:
block_access_controls.append(str(access_control))
conditional_access_policy[tenant].update(
{
policy.id: ConditionalAccessPolicy(
name=policy.display_name,
state=getattr(policy, "state", "None"),
users={
"include": (
getattr(conditions.users, "include_users", [])
if getattr(conditions, "users", None)
else []
),
"exclude": (
getattr(conditions.users, "exclude_users", [])
if getattr(conditions, "users", None)
else []
),
},
target_resources={
"include": included_apps,
"exclude": excluded_apps,
},
access_controls={
"grant": grant_access_controls,
"block": block_access_controls,
},
)
}
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return conditional_access_policy
class User(BaseModel):
id: str
name: str
authentication_methods: List[Any] = []
@dataclass
@@ -314,37 +78,3 @@ class AuthorizationPolicy:
name: str
description: str
default_user_role_permissions: Optional[DefaultUserRolePermissions]
guest_invite_settings: str
guest_user_role_id: UUID
@dataclass
class GroupSetting:
name: Optional[str]
template_id: Optional[str]
settings: List[SettingValue]
class SecurityDefault(BaseModel):
id: str
name: str
is_enabled: bool
class NamedLocation(BaseModel):
name: str
ip_ranges_addresses: List[str]
is_trusted: bool
class DirectoryRole(BaseModel):
id: str
members: List[User]
class ConditionalAccessPolicy(BaseModel):
name: str
state: str
users: dict[str, List[str]]
target_resources: dict[str, List[str]]
access_controls: dict[str, List[str]]


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_trusted_named_locations_exists",
"CheckTitle": "Ensure Trusted Locations Are Defined",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "#microsoft.graph.ipNamedLocation",
"Description": "Microsoft Entra ID Conditional Access allows an organization to configure Named locations and configure whether those locations are trusted or untrusted. These settings provide organizations the means to specify Geographical locations for use in conditional access policies, or define actual IP addresses and IP ranges and whether or not those IP addresses and/or ranges are trusted by the organization.",
"Risk": "Defining trusted source IP addresses or ranges helps organizations create and enforce Conditional Access policies around those trusted or untrusted IP addresses and ranges. Users authenticating from trusted IP addresses and/or ranges may have less access restrictions or access requirements when compared to users that try to authenticate to Microsoft Entra ID from untrusted locations or untrusted source IP addresses/ranges.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/conditional-access/location-condition",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "1. Navigate to the Microsoft Entra ID Conditional Access Blade 2. Click on the Named locations blade 3. Within the Named locations blade, click on IP ranges location 4. Enter a name for this location setting in the Name text box 5. Click on the + sign 6. Add an IP Address Range in CIDR notation inside the text box that appears 7. Click on the Add button 8. Repeat steps 5 through 7 for each IP Range that needs to be added 9. If the information entered are trusted ranges, select the Mark as trusted location check box 10. Once finished, click on Create",
"Url": "https://learn.microsoft.com/en-us/security/benchmark/azure/mcsb-identity-management#im-7-restrict-resource-access-based-on--conditions"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "When configuring Named locations, the organization can create locations using Geographical location data or by defining source IP addresses or ranges. Configuring Named locations using a Country location does not provide the organization the ability to mark those locations as trusted, and any Conditional Access policy relying on those Countries location setting will not be able to use the All trusted locations setting within the Conditional Access policy. They instead will have to rely on the Select locations setting. This may add additional resource requirements when configuring, and will require thorough organizational testing. In general, Conditional Access policies may completely prevent users from authenticating to Microsoft Entra ID, and thorough testing is recommended. To avoid complete lockout, a 'Break Glass' account with full Global Administrator rights is recommended in the event all other administrators are locked out of authenticating to Microsoft Entra ID. This 'Break Glass' account should be excluded from Conditional Access Policies and should be configured with the longest pass phrase feasible. This account should only be used in the event of an emergency and complete administrator lockout."
}


@@ -1,29 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client
class entra_trusted_named_locations_exists(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for tenant, named_locations in entra_client.named_locations.items():
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant}"
report.resource_name = "Named Locations"
report.resource_id = "Named Locations"
report.status_extended = (
"There is no trusted location with IP ranges defined."
)
for named_location_id, named_location in named_locations.items():
report.resource_name = named_location.name
report.resource_id = named_location_id
if named_location.ip_ranges_addresses and named_location.is_trusted:
report.status = "PASS"
report.status_extended = f"Exits trusted location with trusted IP ranges, this IPs ranges are: {[ip_range for ip_range in named_location.ip_ranges_addresses if ip_range]}"
break
findings.append(report)
return findings


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_user_with_vm_access_has_mfa",
"CheckTitle": "Ensure only MFA enabled identities can access privileged Virtual Machine",
"CheckType": [],
"ServiceName": "iam",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "#microsoft.graph.users",
"Description": "Verify identities without MFA that can log in to a privileged virtual machine using separate login credentials. An adversary can leverage the access to move laterally and perform actions with the virtual machine's managed identity. Make sure the virtual machine only has necessary permissions, and revoke the admin-level permissions according to the least privileges principal",
"Risk": "Managed disks are by default encrypted on the underlying hardware, so no additional encryption is required for basic protection. It is available if additional encryption is required. Managed disks are by design more resilient that storage accounts. For ARM-deployed Virtual Machines, Azure Adviser will at some point recommend moving VHDs to managed disks both from a security and cost management perspective.",
"RelatedUrl": "",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "1. Log in to the Azure portal. Reducing access of managed identities attached to virtual machines. 2. This can be remediated by enabling MFA for user, Removing user access or • Case I : Enable MFA for users having access on virtual machines. 1. Navigate to Azure AD from the left pane and select Users from the Manage section. 2. Click on Per-User MFA from the top menu options and select each user with MULTI-FACTOR AUTH STATUS as Disabled and can login to virtual machines:  From quick steps on the right side select enable.  Click on enable multi-factor auth and share the link with the user to setup MFA as required. • Case II : Removing user access on a virtual machine. 1. Select the Subscription, then click on Access control (IAM). 2. Select Role assignments and search for Virtual Machine Administrator Login or Virtual Machine User Login or any role that provides access to log into virtual machines. 3. Click on Role Name, Select Assignments, and remove identities with no MFA configured. • Case III : Reducing access of managed identities attached to virtual machines. 1. Select the Subscription, then click on Access control (IAM). 2. Select Role Assignments from the top menu and apply filters on Assignment type as Privileged administrator roles and Type as Virtual Machines. 3. Click on Role Name, Select Assignments, and remove identities access make sure this follows the least privileges principal.",
"Url": ""
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "This recommendation requires an Azure AD P2 License to implement. Ensure that identities that are provisioned to a virtual machine utilizes an RBAC/ABAC group and is allocated a role using Azure PIM, and the Role settings require MFA or use another PAM solution (like CyberArk) for accessing Virtual Machines."
}


@@ -1,53 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.config import (
CONTRIBUTOR_ROLE_ID,
OWNER_ROLE_ID,
VIRTUAL_MACHINE_ADMINISTRATOR_LOGIN_ROLE_ID,
VIRTUAL_MACHINE_CONTRIBUTOR_ROLE_ID,
VIRTUAL_MACHINE_LOCAL_USER_LOGIN_ROLE_ID,
VIRTUAL_MACHINE_USER_LOGIN_ROLE_ID,
WINDOWS_ADMIN_CENTER_ADMINISTRATOR_LOGIN_ROLE_ID,
)
from prowler.providers.azure.services.entra.entra_client import entra_client
from prowler.providers.azure.services.iam.iam_client import iam_client
class entra_user_with_vm_access_has_mfa(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for users in entra_client.users.values():
for user_domain_name, user in users.items():
for (
subscription_name,
role_assigns,
) in iam_client.role_assignments.items():
for assignment in role_assigns.values():
if (
assignment.agent_type == "User"
and assignment.role_id
in [
CONTRIBUTOR_ROLE_ID,
OWNER_ROLE_ID,
VIRTUAL_MACHINE_CONTRIBUTOR_ROLE_ID,
VIRTUAL_MACHINE_ADMINISTRATOR_LOGIN_ROLE_ID,
VIRTUAL_MACHINE_USER_LOGIN_ROLE_ID,
VIRTUAL_MACHINE_LOCAL_USER_LOGIN_ROLE_ID,
WINDOWS_ADMIN_CENTER_ADMINISTRATOR_LOGIN_ROLE_ID,
]
and assignment.agent_id == user.id
):
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.status_extended = f"User {user.name} without MFA can access VMs in subscription {subscription_name}"
report.subscription = subscription_name
report.resource_name = user_domain_name
report.resource_id = user.id
if len(user.authentication_methods) > 1:
report.status = "PASS"
report.status_extended = f"User {user.name} can access VMs in subscription {subscription_name} but it has MFA."
findings.append(report)
return findings


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_users_cannot_create_microsoft_365_groups",
"CheckTitle": "Ensure that 'Users can create Microsoft 365 groups in Azure portals, API or PowerShell' is set to 'No'",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "Microsoft.Users/Settings",
"Description": "Restrict Microsoft 365 group creation to administrators only.",
"Risk": "Restricting Microsoft 365 group creation to administrators only ensures that creation of Microsoft 365 groups is controlled by the administrator. Appropriate groups should be created and managed by the administrator and group creation rights should not be delegated to any other user.",
"RelatedUrl": "https://learn.microsoft.com/en-us/microsoft-365/community/all-about-groups#microsoft-365-groups",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/ActiveDirectory/users-can-create-office-365-groups.html#",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu 2. Select Microsoft Entra ID 3. Then Groups 4. Select General in settings 5. Set Users can create Microsoft 365 groups in Azure portals, API or PowerShell to No",
"Url": "https://learn.microsoft.com/en-us/microsoft-365/solutions/manage-creation-of-groups?view=o365-worldwide&redirectSourcePath=%252fen-us%252farticle%252fControl-who-can-create-Office-365-Groups-4c46c8cb-17d0-44b5-9776-005fced8e618"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Enabling this setting could create a number of requests that would need to be managed by an administrator."
}


@@ -1,32 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client
class entra_users_cannot_create_microsoft_365_groups(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for tenant_domain, group_settings in entra_client.group_settings.items():
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant_domain}"
report.resource_name = "Microsoft365 Groups"
report.resource_id = "Microsoft365 Groups"
report.status_extended = "Users can create Microsoft 365 groups."
for group_setting in group_settings.values():
if group_setting.name == "Group.Unified":
for setting_value in group_setting.settings:
if (
getattr(setting_value, "name", "") == "EnableGroupCreation"
and setting_value.value != "true"
):
report.status = "PASS"
report.status_extended = (
"Users cannot create Microsoft 365 groups."
)
break
findings.append(report)
return findings
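
The PASS path above turns on a single group setting: within the Group.Unified template, EnableGroupCreation must not be the string "true". That lookup can be sketched in isolation (the SettingValue dataclass below is a stand-in for Graph's name/value setting pairs, not the real SDK model):

```python
from dataclasses import dataclass

# Stand-in for Microsoft Graph's settingValue (name/value string pairs).
@dataclass
class SettingValue:
    name: str
    value: str

def users_can_create_m365_groups(settings) -> bool:
    # Mirrors the check above: group creation is blocked only when
    # the Group.Unified setting EnableGroupCreation is not "true".
    for sv in settings:
        if sv.name == "EnableGroupCreation":
            return sv.value == "true"
    return True  # setting absent -> tenant default allows creation

print(users_can_create_m365_groups([SettingValue("EnableGroupCreation", "false")]))  # False
print(users_can_create_m365_groups([SettingValue("EnableGroupCreation", "true")]))   # True
```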


@@ -1,25 +0,0 @@
"""
This module contains helper functions for checking user privileges in Azure.
"""
def is_privileged_user(user, privileged_roles) -> bool:
"""
Checks if a user is a privileged user.
Args:
user: An object representing the user to be checked.
privileged_roles: A dictionary containing privileged roles.
Returns:
A boolean value indicating whether the user is a privileged user.
"""
is_privileged = False
for role in privileged_roles.values():
if user in role.members:
is_privileged = True
break
return is_privileged
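
As a quick illustration, the membership walk in is_privileged_user can be exercised with lightweight stand-in objects (the dataclasses below are hypothetical test doubles, not Prowler's real User and DirectoryRole models):

```python
from dataclasses import dataclass, field

# Hypothetical test doubles standing in for Prowler's Entra models.
@dataclass
class FakeUser:
    id: str
    name: str

@dataclass
class FakeRole:
    id: str
    members: list = field(default_factory=list)

def is_privileged_user(user, privileged_roles) -> bool:
    # Same membership walk as the module above: the user is privileged
    # if they appear in the members of any privileged role.
    return any(user in role.members for role in privileged_roles.values())

admin = FakeUser(id="1", name="admin@contoso.example")
reader = FakeUser(id="2", name="reader@contoso.example")
roles = {"Global Administrator": FakeRole(id="ga-role", members=[admin])}

print(is_privileged_user(admin, roles))   # True
print(is_privileged_user(reader, roles))  # False
```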


@@ -12,7 +12,6 @@ class IAM(AzureService):
def __init__(self, audit_info):
super().__init__(AuthorizationManagementClient, audit_info)
self.roles, self.custom_roles = self.__get_roles__()
self.role_assignments = self.__get_role_assignments__()
def __get_roles__(self):
logger.info("IAM - Getting roles...")
@@ -53,34 +52,6 @@ class IAM(AzureService):
)
return builtin_roles, custom_roles
def __get_role_assignments__(self):
logger.info("IAM - Getting role assignments...")
role_assignments = {}
for subscription, client in self.clients.items():
try:
role_assignments.update({subscription: {}})
all_role_assignments = client.role_assignments.list_for_subscription(
filter="atScope()"
)
for role_assignment in all_role_assignments:
role_assignments[subscription].update(
{
role_assignment.id: RoleAssignment(
agent_id=role_assignment.principal_id,
agent_type=role_assignment.principal_type,
role_id=role_assignment.role_definition_id.split("/")[
-1
],
)
}
)
except Exception as error:
logger.error(f"Subscription name: {subscription}")
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return role_assignments
@dataclass
class Role:
@@ -89,10 +60,3 @@ class Role:
type: str
assignable_scopes: list[str]
permissions: list[Permission]
@dataclass
class RoleAssignment:
agent_id: str
agent_type: str
role_id: str


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "keyvault_logging_enabled",
"CheckTitle": "Ensure that logging for Azure Key Vault is 'Enabled'",
"CheckType": [],
"ServiceName": "keyvault",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "KeyVault",
"Description": "Enable AuditEvent logging for key vault instances to ensure interactions with key vaults are logged and available.",
"Risk": "Monitoring how and when key vaults are accessed, and by whom, enables an audit trail of interactions with confidential information, keys, and certificates managed by Azure Keyvault. Enabling logging for Key Vault saves information in an Azure storage account which the user provides. This creates a new container named insights-logs-auditevent automatically for the specified storage account. This same storage account can be used for collecting logs for multiple key vaults.",
"RelatedUrl": "https://docs.microsoft.com/en-us/azure/key-vault/key-vault-logging",
"Remediation": {
"Code": {
"CLI": "az monitor diagnostic-settings create --name <diagnostic settings name> --resource <key vault resource ID> --logs'[{category:AuditEvents,enabled:true,retention-policy:{enabled:true,days:180}}]' --metrics'[{category:AllMetrics,enabled:true,retention-policy:{enabled:true,days:180}}]' <[--event-hub <event hub ID> --event-hub-rule <event hub auth rule ID> | --storage-account <storage account ID> |--workspace <log analytics workspace ID> | --marketplace-partner-id <full resource ID of third-party solution>]>",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/azure/KeyVault/enable-audit-event-logging-for-azure-key-vaults.html",
"Terraform": ""
},
"Recommendation": {
"Text": "1. Go to Key vaults 2. For each Key vault 3. Go to Diagnostic settings 4. Click on Edit Settings 5. Ensure that Archive to a storage account is Enabled 6. Ensure that AuditEvent is checked, and the retention days is set to 180 days or as appropriate",
"Url": "https://docs.microsoft.com/en-us/security/benchmark/azure/security-controls-v3-data-protection#dp-8-ensure-security-of-key-and-certificate-repository"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "By default, Diagnostic AuditEvent logging is not enabled for Key Vault instances."
}


@@ -1,43 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.keyvault.keyvault_client import keyvault_client
class keyvault_logging_enabled(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for subscription, key_vaults in keyvault_client.key_vaults.items():
for keyvault in key_vaults:
keyvault_name = keyvault.name
subscription_name = subscription
if not keyvault.monitor_diagnostic_settings:
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = subscription_name
report.resource_name = keyvault.name
report.resource_id = keyvault.id
report.status_extended = f"There are no diagnostic settings capturing audit logs for Key Vault {keyvault_name} in subscription {subscription_name}."
findings.append(report)
else:
for diagnostic_setting in keyvault.monitor_diagnostic_settings:
report = Check_Report_Azure(self.metadata())
report.subscription = subscription_name
report.resource_name = diagnostic_setting.name
report.resource_id = diagnostic_setting.id
report.status = "FAIL"
report.status_extended = f"Diagnostic setting {diagnostic_setting.name} for Key Vault {keyvault_name} in subscription {subscription_name} does not have audit logging."
audit = False
allLogs = False
for log in diagnostic_setting.logs:
if log.category_group == "audit" and log.enabled:
audit = True
if log.category_group == "allLogs" and log.enabled:
allLogs = True
if audit and allLogs:
report.status = "PASS"
report.status_extended = f"Diagnostic setting {diagnostic_setting.name} for Key Vault {keyvault_name} in subscription {subscription_name} has audit logging."
break
findings.append(report)
return findings
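
Note that, as written, a diagnostic setting only passes when both the "audit" and "allLogs" category groups are enabled. That PASS condition can be sketched in isolation (the Log dataclass below is a stand-in for the Azure SDK's diagnostic log settings entry):

```python
from dataclasses import dataclass

# Stand-in for the Azure SDK's diagnostic log settings entry.
@dataclass
class Log:
    category_group: str
    enabled: bool

def captures_audit_logs(logs) -> bool:
    # Mirrors the check above: BOTH category groups must be enabled
    # for the diagnostic setting to count as capturing audit logs.
    audit = any(l.category_group == "audit" and l.enabled for l in logs)
    all_logs = any(l.category_group == "allLogs" and l.enabled for l in logs)
    return audit and all_logs

print(captures_audit_logs([Log("audit", True), Log("allLogs", True)]))  # True
print(captures_audit_logs([Log("audit", True)]))                        # False
```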


@@ -11,11 +11,9 @@ from azure.mgmt.keyvault.v2023_07_01.models import (
from prowler.lib.logger import logger
from prowler.providers.azure.lib.service.service import AzureService
from prowler.providers.azure.services.monitor.monitor_client import monitor_client
from prowler.providers.azure.services.monitor.monitor_service import DiagnosticSetting
########################## KeyVault
class KeyVault(AzureService):
def __init__(self, audit_info):
super().__init__(KeyVaultManagementClient, audit_info)
@@ -49,9 +47,6 @@ class KeyVault(AzureService):
properties=keyvault_properties,
keys=keys,
secrets=secrets,
monitor_diagnostic_settings=self.__get_vault_monitor_settings__(
keyvault_name, resource_group, subscription
),
)
)
except Exception as error:
@@ -121,25 +116,6 @@ class KeyVault(AzureService):
)
return secrets
def __get_vault_monitor_settings__(
self, keyvault_name, resource_group, subscription
):
logger.info(
f"KeyVault - Getting monitor diagnostics settings for {keyvault_name}..."
)
monitor_diagnostics_settings = []
try:
monitor_diagnostics_settings = monitor_client.diagnostic_settings_with_uri(
self.subscriptions[subscription],
f"subscriptions/{self.subscriptions[subscription]}/resourceGroups/{resource_group}/providers/Microsoft.KeyVault/vaults/{keyvault_name}",
monitor_client.clients[subscription],
)
except Exception as error:
logger.error(
f"Subscription name: {self.subscription} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return monitor_diagnostics_settings
@dataclass
class Key:
@@ -169,4 +145,3 @@ class KeyVaultInfo:
properties: VaultProperties
keys: list[Key] = None
secrets: list[Secret] = None
monitor_diagnostic_settings: list[DiagnosticSetting] = None
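The removed `__get_vault_monitor_settings__` helper built an ARM resource URI for the vault and passed it to the monitor client. The URI construction itself is a pure string operation and can be sketched on its own (a sketch mirroring the f-string in the deleted helper, not Prowler's API):

```python
def keyvault_resource_uri(subscription_id: str, resource_group: str, vault_name: str) -> str:
    # Resource URI in the format the deleted helper passed to
    # diagnostic_settings_with_uri for a Key Vault resource.
    return (
        f"subscriptions/{subscription_id}/resourceGroups/{resource_group}"
        f"/providers/Microsoft.KeyVault/vaults/{vault_name}"
    )
```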


@@ -29,4 +29,5 @@ class monitor_alert_create_update_public_ip_address_rule(Check):
break
findings.append(report)
return findings


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "monitor_diagnostic_settings_exists",
"CheckTitle": "Ensure that a 'Diagnostic Setting' exists for Subscription Activity Logs",
"CheckType": [],
"ServiceName": "monitor",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "Monitor",
"Description": "Enable Diagnostic settings for exporting activity logs. Diagnostic settings are available for each individual resource within a subscription. Settings should be configured for all appropriate resources for your environment.",
"Risk": "A diagnostic setting controls how a diagnostic log is exported. By default, logs are retained only for 90 days. Diagnostic settings should be defined so that logs can be exported and stored for a longer duration in order to analyze security activities within an Azure subscription.",
"RelatedUrl": "https://learn.microsoft.com/en-us/cli/azure/monitor/diagnostic-settings?view=azure-cli-latest",
"Remediation": {
"Code": {
"CLI": "az monitor diagnostic-settings subscription create --subscription <subscription id> --name <diagnostic settings name> --location <location> [--event-hub <event hub ID> --event-hub-auth-rule <event hub auth rule ID>] [--storage-account <storage account ID>] [--workspace <log analytics workspace ID>] --logs '<JSON encoded categories>' (e.g. [{category:Security,enabled:true},{category:Administrative,enabled:true},{category:Alert,enabled:true},{category:Policy,enabled:true}])",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity-staging/knowledge-base/azure/Monitor/subscription-activity-log-diagnostic-settings.html#trendmicro",
"Terraform": ""
},
"Recommendation": {
"Text": "To enable Diagnostic Settings on a Subscription: 1. Go to Monitor 2. Click on Activity Log 3. Click on Export Activity Logs 4. Click + Add diagnostic setting 5. Enter a Diagnostic setting name 6. Select Categories for the diagnostic settings 7. Select the appropriate Destination details (this may be Log Analytics, Storage Account, Event Hub, or Partner solution) 8. Click Save To enable Diagnostic Settings on a specific resource: 1. Go to Monitor 2. Click Diagnostic settings 3. Click on the resource that has a diagnostics status of disabled 4. Select Add Diagnostic Setting 5. Enter a Diagnostic setting name 6. Select the appropriate log, metric, and destination. (this may be Log Analytics, Storage Account, Event Hub, or Partner solution) 7. Click save Repeat these step for all resources as needed.",
"Url": "https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-overview-activity-logs#export-the-activity-log-with-a-log-profile"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "By default, diagnostic setting is not set."
}


@@ -1,27 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.monitor.monitor_client import monitor_client
class monitor_diagnostic_settings_exists(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for (
subscription_name,
diagnostic_settings,
) in monitor_client.diagnostics_settings.items():
report = Check_Report_Azure(self.metadata())
report.subscription = subscription_name
report.status = "FAIL"
report.status_extended = (
f"No diagnostic settings found in subscription {subscription_name}."
)
if diagnostic_settings:
report.status = "PASS"
report.status_extended = (
f"Diagnostic settings found in subscription {subscription_name}."
)
findings.append(report)
return findings
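The deleted check above reduces to a per-subscription emptiness test on the `diagnostics_settings` dictionary. A condensed sketch of that logic, stripped of the report plumbing:

```python
def diagnostic_settings_exist(diagnostics_settings: dict) -> dict:
    # One result per subscription: PASS when at least one diagnostic setting
    # was collected, FAIL otherwise — the same decision as the check above.
    return {
        subscription: ("PASS" if settings else "FAIL")
        for subscription, settings in diagnostics_settings.items()
    }
```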


@@ -17,46 +17,31 @@ class Monitor(AzureService):
def __get_diagnostics_settings__(self):
logger.info("Monitor - Getting diagnostics settings...")
diagnostics_settings_list = []
diagnostics_settings = {}
for subscription, client in self.clients.items():
try:
diagnostics_settings_list = self.diagnostic_settings_with_uri(
subscription,
f"subscriptions/{self.subscriptions[subscription]}/",
client,
diagnostics_settings.update({subscription: []})
settings = client.diagnostic_settings.list(
resource_uri=f"subscriptions/{self.subscriptions[subscription]}/"
)
diagnostics_settings.update({subscription: diagnostics_settings_list})
for setting in settings:
diagnostics_settings[subscription].append(
DiagnosticSetting(
id=setting.id,
storage_account_name=setting.storage_account_id.split("/")[
-1
],
logs=setting.logs,
storage_account_id=setting.storage_account_id,
)
)
except Exception as error:
logger.error(
f"Subscription name: {subscription} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return diagnostics_settings
def diagnostic_settings_with_uri(self, subscription, uri, client):
diagnostics_settings = []
try:
settings = client.diagnostic_settings.list(resource_uri=uri)
for setting in settings:
diagnostics_settings.append(
DiagnosticSetting(
id=setting.id,
name=setting.id.split("/")[-1],
storage_account_name=(
setting.storage_account_id.split("/")[-1]
if getattr(setting, "storage_account_id", None)
else None
),
logs=setting.logs,
storage_account_id=setting.storage_account_id,
)
)
except Exception as error:
logger.error(
f"Subscription id: {subscription} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return diagnostics_settings
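The refactor above introduces a reusable `diagnostic_settings_with_uri` helper and derives two fields from each SDK result: the setting name from the last segment of its ARM ID, and the storage account name only when a `storage_account_id` is present. A sketch of just that field derivation:

```python
from typing import Optional

def diagnostic_setting_fields(setting_id: str, storage_account_id: Optional[str]) -> dict:
    # Mirrors the derivation in diagnostic_settings_with_uri above: the name is
    # the last segment of the ARM resource ID; the storage account name is only
    # split out when storage_account_id is set.
    return {
        "name": setting_id.split("/")[-1],
        "storage_account_name": (
            storage_account_id.split("/")[-1] if storage_account_id else None
        ),
    }
```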
def get_alert_rules(self):
logger.info("Monitor - Getting alert rules...")
alert_rules = {}
@@ -87,7 +72,6 @@ class DiagnosticSetting:
storage_account_id: str
storage_account_name: str
logs: LogSettings
name: str
@dataclass


@@ -19,7 +19,7 @@
"Terraform": "https://docs.bridgecrew.io/docs/ensure-virtual-machines-are-utilizing-managed-disks#terraform"
},
"Recommendation": {
"Text": "1. Using the search feature, go to Virtual Machines 2. Select the virtual machine you would like to convert 3. Select Disks in the menu for the VM 4. At the top select Migrate to managed disks 5. You may follow the prompts to convert the disk and finish by selecting Migrate to start the process",
"Text": "There are additional costs for managed disks based off of disk space allocated. When converting to managed disks, VMs will be powered off and back on.",
"Url": "https://docs.microsoft.com/en-us/security/benchmark/azure/security-controls-v3-data-protection#dp-4-enable-data-at-rest-encryption-by-default"
}
},


@@ -31,7 +31,6 @@ class VirtualMachines(AzureService):
resource_id=vm.id,
resource_name=vm.name,
storage_profile=getattr(vm, "storage_profile", None),
security_profile=vm.security_profile,
)
}
)
@@ -77,24 +76,11 @@ class VirtualMachines(AzureService):
return disks
@dataclass
class UefiSettings:
secure_boot_enabled: bool
v_tpm_enabled: bool
@dataclass
class SecurityProfile:
security_type: str
uefi_settings: UefiSettings
@dataclass
class VirtualMachine:
resource_id: str
resource_name: str
storage_profile: StorageProfile
security_profile: SecurityProfile
@dataclass


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "vm_trusted_launch_enabled",
"CheckTitle": "Ensure Trusted Launch is enabled on Virtual Machines",
"CheckType": [],
"ServiceName": "vm",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "Microsoft.Compute/virtualMachines",
"Description": "When Secure Boot and vTPM are enabled together, they provide a strong foundation for protecting your VM from boot attacks. For example, if an attacker attempts to replace the bootloader with a malicious version, Secure Boot will prevent the VM from booting. If the attacker is able to bypass Secure Boot and install a malicious bootloader, vTPM can be used to detect the intrusion and alert you.",
"Risk": "Secure Boot and vTPM work together to protect your VM from a variety of boot attacks, including bootkits, rootkits, and firmware rootkits. Not enabling Trusted Launch in Azure VM can lead to increased vulnerability to rootkits and boot-level malware, reduced ability to detect and prevent unauthorized changes to the boot process, and a potential compromise of system integrity and data security.",
"RelatedUrl": "https://learn.microsoft.com/en-us/azure/virtual-machines/trusted-launch-existing-vm?tabs=portal",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "1. Go to Virtual Machines 2. For each VM, under Settings, click on Configuration on the left blade 3. Under Security Type, select 'Trusted Launch Virtual Machines' 4. Make sure Enable Secure Boot & Enable vTPM are checked 5. Click on Apply.",
"Url": "https://learn.microsoft.com/en-us/azure/virtual-machines/trusted-launch-existing-vm?tabs=portal#enable-trusted-launch-on-existing-vm"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Secure Boot and vTPM are not currently supported for Azure Generation 1 VMs. IMPORTANT: Before enabling Secure Boot and vTPM on a Generation 2 VM which does not already have both enabled, it is highly recommended to create a restore point of the VM prior to remediation."
}


@@ -1,28 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.vm.vm_client import vm_client
class vm_trusted_launch_enabled(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for subscription_name, vms in vm_client.virtual_machines.items():
for vm_id, vm in vms.items():
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = subscription_name
report.resource_name = vm.resource_name
report.resource_id = vm_id
report.status_extended = f"VM {vm.resource_name} has trusted launch disabled in subscription {subscription_name}"
if (
vm.security_profile.security_type == "TrustedLaunch"
and vm.security_profile.uefi_settings.secure_boot_enabled
and vm.security_profile.uefi_settings.v_tpm_enabled
):
report.status = "PASS"
report.status_extended = f"VM {vm.resource_name} has trusted launch enabled in subscription {subscription_name}"
findings.append(report)
return findings
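The deleted check's PASS condition combines the security type with both UEFI settings. A self-contained sketch of that condition, using minimal stand-ins for the `SecurityProfile` and `UefiSettings` dataclasses removed in the service diff above:

```python
from dataclasses import dataclass

@dataclass
class UefiSettings:
    secure_boot_enabled: bool
    v_tpm_enabled: bool

@dataclass
class SecurityProfile:
    security_type: str
    uefi_settings: UefiSettings

def trusted_launch_enabled(profile: SecurityProfile) -> bool:
    # PASS condition from the deleted check: TrustedLaunch security type with
    # both Secure Boot and vTPM enabled.
    return (
        profile.security_type == "TrustedLaunch"
        and profile.uefi_settings.secure_boot_enabled
        and profile.uefi_settings.v_tpm_enabled
    )
```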


@@ -49,11 +49,11 @@ boto3 = "1.26.165"
botocore = "1.29.165"
colorama = "0.4.6"
detect-secrets = "1.4.0"
google-api-python-client = "2.124.0"
google-api-python-client = "2.122.0"
google-auth-httplib2 = ">=0.1,<0.3"
jsonschema = "4.21.1"
msgraph-sdk = "1.0.0"
msrestazure = "0.6.4"
msgraph-sdk = "^1.0.0"
msrestazure = "^0.6.4"
pydantic = "1.10.14"
python = ">=3.9,<3.13"
schema = "0.7.5"
@@ -69,16 +69,16 @@ docker = "7.0.0"
flake8 = "7.0.0"
freezegun = "1.4.0"
mock = "5.1.0"
moto = {extras = ["all"], version = "5.0.4"}
moto = {extras = ["all"], version = "5.0.3"}
openapi-schema-validator = "0.6.2"
openapi-spec-validator = "0.7.1"
pylint = "3.1.0"
pytest = "8.1.1"
pytest-cov = "5.0.0"
pytest-cov = "4.1.0"
pytest-env = "1.1.3"
pytest-randomly = "3.15.0"
pytest-xdist = "3.5.0"
safety = "3.1.0"
safety = "3.0.1"
vulture = "2.11"
[tool.poetry.group.docs]
@@ -87,7 +87,7 @@ optional = true
[tool.poetry.group.docs.dependencies]
mkdocs = "1.5.3"
mkdocs-git-revision-date-localized-plugin = "1.2.4"
mkdocs-material = "9.5.17"
mkdocs-material = "9.5.14"
mkdocs-material-extensions = "1.3.1"
[tool.poetry.scripts]
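Note the constraint change in this hunk: `msgraph-sdk` and `msrestazure` move from exact pins to caret requirements. For versions with major >= 1, Poetry's `^X.Y.Z` means ">= X.Y.Z and < (X+1).0.0". A rough sketch of that rule (major >= 1 case only; caret constraints on 0.x versions are narrower and omitted here):

```python
def caret_allows(constraint: str, candidate: str) -> bool:
    # Poetry caret semantics for major >= 1: allow >= base and < next major.
    # Sketch only — does not handle 0.x constraints or pre-release tags.
    base = tuple(int(p) for p in constraint.lstrip("^").split("."))
    version = tuple(int(p) for p in candidate.split("."))
    return base <= version < (base[0] + 1, 0, 0)
```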


@@ -507,4 +507,4 @@ class Test_AWS_Credentials:
sts_client = create_sts_session(session, aws_region)
assert sts_client._endpoint._endpoint_prefix == "sts"
assert sts_client._endpoint.host == f"https://sts.{aws_region}.amazonaws.com.cn"
assert sts_client._endpoint.host == f"https://sts.{aws_region}.amazonaws.com"
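The paired assertions in this hunk encode the regional STS endpoint formats for the standard and China partitions. A sketch of how such a host string is composed (illustrative helper, not boto3's resolver):

```python
def sts_endpoint_host(region: str) -> str:
    # Regional STS endpoint, mirroring the asserted hosts above: China partition
    # regions (cn-*) use the amazonaws.com.cn suffix, others amazonaws.com.
    suffix = "amazonaws.com.cn" if region.startswith("cn-") else "amazonaws.com"
    return f"https://sts.{region}.{suffix}"
```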


@@ -232,7 +232,11 @@ class Test_Lambda_Service:
}
assert awslambda.functions[lambda_arn_2].region == AWS_REGION_US_EAST_1
# Empty policy
assert awslambda.functions[lambda_arn_2].policy == {}
assert awslambda.functions[lambda_arn_2].policy == {
"Id": "default",
"Statement": [],
"Version": "2012-10-17",
}
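This hunk updates the expectation for a function with no resource-based policy from `{}` to a default empty policy document. A sketch of the normalization the updated assertion implies (shape taken from the assertion; the helper name is illustrative):

```python
def normalize_lambda_policy(policy=None):
    # When no resource-based policy exists, represent it as an empty default
    # document rather than {}, matching the updated expectation above.
    if not policy:
        return {"Id": "default", "Statement": [], "Version": "2012-10-17"}
    return policy
```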
# Lambda Code
with tempfile.TemporaryDirectory() as tmp_dir_name:


@@ -1,12 +1,14 @@
from unittest import mock
from uuid import uuid4
from prowler.providers.azure.services.aks.aks_service import Cluster
from tests.providers.azure.azure_fixtures import AZURE_SUBSCRIPTION
class Test_aks_cluster_rbac_enabled:
def test_aks_no_subscriptions(self):
aks_client = mock.MagicMock
aks_client.clusters = {}
with mock.patch(
"prowler.providers.azure.services.aks.aks_cluster_rbac_enabled.aks_cluster_rbac_enabled.aks_client",
@@ -16,8 +18,6 @@ class Test_aks_cluster_rbac_enabled:
aks_cluster_rbac_enabled,
)
aks_client.clusters = {}
check = aks_cluster_rbac_enabled()
result = check.execute()
assert len(result) == 0
@@ -41,6 +41,18 @@ class Test_aks_cluster_rbac_enabled:
def test_aks_cluster_rbac_enabled(self):
aks_client = mock.MagicMock
cluster_id = str(uuid4())
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn=None,
network_policy="network_policy",
agent_pool_profiles=[mock.MagicMock(enable_node_public_ip=False)],
rbac_enabled=True,
)
}
}
with mock.patch(
"prowler.providers.azure.services.aks.aks_cluster_rbac_enabled.aks_cluster_rbac_enabled.aks_client",
@@ -49,22 +61,6 @@ class Test_aks_cluster_rbac_enabled:
from prowler.providers.azure.services.aks.aks_cluster_rbac_enabled.aks_cluster_rbac_enabled import (
aks_cluster_rbac_enabled,
)
from prowler.providers.azure.services.aks.aks_service import Cluster
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn=None,
network_policy="network_policy",
agent_pool_profiles=[
mock.MagicMock(enable_node_public_ip=False)
],
rbac_enabled=True,
)
}
}
check = aks_cluster_rbac_enabled()
result = check.execute()
@@ -81,6 +77,18 @@ class Test_aks_cluster_rbac_enabled:
def test_aks_rbac_not_enabled(self):
aks_client = mock.MagicMock
cluster_id = str(uuid4())
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn=None,
network_policy="network_policy",
agent_pool_profiles=[mock.MagicMock(enable_node_public_ip=False)],
rbac_enabled=False,
)
}
}
with mock.patch(
"prowler.providers.azure.services.aks.aks_cluster_rbac_enabled.aks_cluster_rbac_enabled.aks_client",
@@ -89,22 +97,6 @@ class Test_aks_cluster_rbac_enabled:
from prowler.providers.azure.services.aks.aks_cluster_rbac_enabled.aks_cluster_rbac_enabled import (
aks_cluster_rbac_enabled,
)
from prowler.providers.azure.services.aks.aks_service import Cluster
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn=None,
network_policy="network_policy",
agent_pool_profiles=[
mock.MagicMock(enable_node_public_ip=False)
],
rbac_enabled=False,
)
}
}
check = aks_cluster_rbac_enabled()
result = check.execute()
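The test hunks in this and the following files all apply the same refactor: the `Cluster` import moves to module level and the `aks_client.clusters` fixture is assigned before entering `mock.patch`, instead of inside it. The check logic these fixtures exercise reduces to one result per cluster per subscription; a condensed sketch (the `Cluster` dataclass here is a minimal stand-in with only the field the RBAC check reads):

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    # Minimal stand-in for prowler's aks_service.Cluster
    name: str
    rbac_enabled: bool

def rbac_results(clusters_by_subscription: dict) -> list:
    # Sketch of aks_cluster_rbac_enabled as exercised by these tests:
    # empty dicts yield no findings; otherwise PASS/FAIL per cluster.
    results = []
    for subscription, clusters in clusters_by_subscription.items():
        for cluster in clusters.values():
            status = "PASS" if cluster.rbac_enabled else "FAIL"
            results.append((subscription, cluster.name, status))
    return results
```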


@@ -1,12 +1,14 @@
from unittest import mock
from uuid import uuid4
from prowler.providers.azure.services.aks.aks_service import Cluster
from tests.providers.azure.azure_fixtures import AZURE_SUBSCRIPTION
class Test_aks_clusters_created_with_private_nodes:
def test_aks_no_subscriptions(self):
aks_client = mock.MagicMock
aks_client.clusters = {}
with mock.patch(
"prowler.providers.azure.services.aks.aks_clusters_created_with_private_nodes.aks_clusters_created_with_private_nodes.aks_client",
@@ -16,14 +18,13 @@ class Test_aks_clusters_created_with_private_nodes:
aks_clusters_created_with_private_nodes,
)
aks_client.clusters = {}
check = aks_clusters_created_with_private_nodes()
result = check.execute()
assert len(result) == 0
def test_aks_subscription_empty(self):
aks_client = mock.MagicMock
aks_client.clusters = {AZURE_SUBSCRIPTION: {}}
with mock.patch(
"prowler.providers.azure.services.aks.aks_clusters_created_with_private_nodes.aks_clusters_created_with_private_nodes.aks_client",
@@ -33,8 +34,6 @@ class Test_aks_clusters_created_with_private_nodes:
aks_clusters_created_with_private_nodes,
)
aks_client.clusters = {AZURE_SUBSCRIPTION: {}}
check = aks_clusters_created_with_private_nodes()
result = check.execute()
assert len(result) == 0
@@ -42,6 +41,18 @@ class Test_aks_clusters_created_with_private_nodes:
def test_aks_cluster_no_private_nodes(self):
aks_client = mock.MagicMock
cluster_id = str(uuid4())
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn="",
network_policy="network_policy",
agent_pool_profiles=[mock.MagicMock(enable_node_public_ip=True)],
rbac_enabled=True,
)
}
}
with mock.patch(
"prowler.providers.azure.services.aks.aks_clusters_created_with_private_nodes.aks_clusters_created_with_private_nodes.aks_client",
@@ -50,22 +61,6 @@ class Test_aks_clusters_created_with_private_nodes:
from prowler.providers.azure.services.aks.aks_clusters_created_with_private_nodes.aks_clusters_created_with_private_nodes import (
aks_clusters_created_with_private_nodes,
)
from prowler.providers.azure.services.aks.aks_service import Cluster
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn="",
network_policy="network_policy",
agent_pool_profiles=[
mock.MagicMock(enable_node_public_ip=True)
],
rbac_enabled=True,
)
}
}
check = aks_clusters_created_with_private_nodes()
result = check.execute()
@@ -82,6 +77,18 @@ class Test_aks_clusters_created_with_private_nodes:
def test_aks_cluster_private_nodes(self):
aks_client = mock.MagicMock
cluster_id = str(uuid4())
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn="private_fqdn",
network_policy="network_policy",
agent_pool_profiles=[mock.MagicMock(enable_node_public_ip=False)],
rbac_enabled=True,
)
}
}
with mock.patch(
"prowler.providers.azure.services.aks.aks_clusters_created_with_private_nodes.aks_clusters_created_with_private_nodes.aks_client",
@@ -90,22 +97,6 @@ class Test_aks_clusters_created_with_private_nodes:
from prowler.providers.azure.services.aks.aks_clusters_created_with_private_nodes.aks_clusters_created_with_private_nodes import (
aks_clusters_created_with_private_nodes,
)
from prowler.providers.azure.services.aks.aks_service import Cluster
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn="private_fqdn",
network_policy="network_policy",
agent_pool_profiles=[
mock.MagicMock(enable_node_public_ip=False)
],
rbac_enabled=True,
)
}
}
check = aks_clusters_created_with_private_nodes()
result = check.execute()
@@ -122,6 +113,22 @@ class Test_aks_clusters_created_with_private_nodes:
def test_aks_cluster_public_and_private_nodes(self):
aks_client = mock.MagicMock
cluster_id = str(uuid4())
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn="private_fqdn",
network_policy="network_policy",
agent_pool_profiles=[
mock.MagicMock(enable_node_public_ip=False),
mock.MagicMock(enable_node_public_ip=True),
mock.MagicMock(enable_node_public_ip=False),
],
rbac_enabled=True,
)
}
}
with mock.patch(
"prowler.providers.azure.services.aks.aks_clusters_created_with_private_nodes.aks_clusters_created_with_private_nodes.aks_client",
@@ -130,24 +137,6 @@ class Test_aks_clusters_created_with_private_nodes:
from prowler.providers.azure.services.aks.aks_clusters_created_with_private_nodes.aks_clusters_created_with_private_nodes import (
aks_clusters_created_with_private_nodes,
)
from prowler.providers.azure.services.aks.aks_service import Cluster
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn="private_fqdn",
network_policy="network_policy",
agent_pool_profiles=[
mock.MagicMock(enable_node_public_ip=False),
mock.MagicMock(enable_node_public_ip=True),
mock.MagicMock(enable_node_public_ip=False),
],
rbac_enabled=True,
)
}
}
check = aks_clusters_created_with_private_nodes()
result = check.execute()


@@ -1,6 +1,7 @@
from unittest import mock
from uuid import uuid4
from prowler.providers.azure.services.aks.aks_service import Cluster
from tests.providers.azure.azure_fixtures import AZURE_SUBSCRIPTION
@@ -40,6 +41,18 @@ class Test_aks_clusters_public_access_disabled:
def test_aks_cluster_public_fqdn(self):
aks_client = mock.MagicMock
cluster_id = str(uuid4())
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn=None,
network_policy="network_policy",
agent_pool_profiles=[mock.MagicMock(enable_node_public_ip=False)],
rbac_enabled=True,
)
}
}
with mock.patch(
"prowler.providers.azure.services.aks.aks_clusters_public_access_disabled.aks_clusters_public_access_disabled.aks_client",
@@ -48,22 +61,6 @@ class Test_aks_clusters_public_access_disabled:
from prowler.providers.azure.services.aks.aks_clusters_public_access_disabled.aks_clusters_public_access_disabled import (
aks_clusters_public_access_disabled,
)
from prowler.providers.azure.services.aks.aks_service import Cluster
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn=None,
network_policy="network_policy",
agent_pool_profiles=[
mock.MagicMock(enable_node_public_ip=False)
],
rbac_enabled=True,
)
}
}
check = aks_clusters_public_access_disabled()
result = check.execute()
@@ -80,6 +77,18 @@ class Test_aks_clusters_public_access_disabled:
def test_aks_cluster_private_fqdn(self):
aks_client = mock.MagicMock
cluster_id = str(uuid4())
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn="private_fqdn",
network_policy="network_policy",
agent_pool_profiles=[mock.MagicMock(enable_node_public_ip=False)],
rbac_enabled=True,
)
}
}
with mock.patch(
"prowler.providers.azure.services.aks.aks_clusters_public_access_disabled.aks_clusters_public_access_disabled.aks_client",
@@ -88,22 +97,6 @@ class Test_aks_clusters_public_access_disabled:
from prowler.providers.azure.services.aks.aks_clusters_public_access_disabled.aks_clusters_public_access_disabled import (
aks_clusters_public_access_disabled,
)
from prowler.providers.azure.services.aks.aks_service import Cluster
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn="private_fqdn",
network_policy="network_policy",
agent_pool_profiles=[
mock.MagicMock(enable_node_public_ip=False)
],
rbac_enabled=True,
)
}
}
check = aks_clusters_public_access_disabled()
result = check.execute()
@@ -120,6 +113,18 @@ class Test_aks_clusters_public_access_disabled:
def test_aks_cluster_private_fqdn_with_public_ip(self):
aks_client = mock.MagicMock
cluster_id = str(uuid4())
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn="private_fqdn",
network_policy="network_policy",
agent_pool_profiles=[mock.MagicMock(enable_node_public_ip=True)],
rbac_enabled=True,
)
}
}
with mock.patch(
"prowler.providers.azure.services.aks.aks_clusters_public_access_disabled.aks_clusters_public_access_disabled.aks_client",
@@ -128,22 +133,6 @@ class Test_aks_clusters_public_access_disabled:
from prowler.providers.azure.services.aks.aks_clusters_public_access_disabled.aks_clusters_public_access_disabled import (
aks_clusters_public_access_disabled,
)
from prowler.providers.azure.services.aks.aks_service import Cluster
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn="private_fqdn",
network_policy="network_policy",
agent_pool_profiles=[
mock.MagicMock(enable_node_public_ip=True)
],
rbac_enabled=True,
)
}
}
check = aks_clusters_public_access_disabled()
result = check.execute()


@@ -1,12 +1,14 @@
from unittest import mock
from uuid import uuid4
from prowler.providers.azure.services.aks.aks_service import Cluster
from tests.providers.azure.azure_fixtures import AZURE_SUBSCRIPTION
class Test_aks_network_policy_enabled:
def test_aks_no_subscriptions(self):
aks_client = mock.MagicMock
aks_client.clusters = {}
with mock.patch(
"prowler.providers.azure.services.aks.aks_network_policy_enabled.aks_network_policy_enabled.aks_client",
@@ -16,14 +18,13 @@ class Test_aks_network_policy_enabled:
aks_network_policy_enabled,
)
aks_client.clusters = {}
check = aks_network_policy_enabled()
result = check.execute()
assert len(result) == 0
def test_aks_subscription_empty(self):
aks_client = mock.MagicMock
aks_client.clusters = {AZURE_SUBSCRIPTION: {}}
with mock.patch(
"prowler.providers.azure.services.aks.aks_network_policy_enabled.aks_network_policy_enabled.aks_client",
@@ -33,8 +34,6 @@ class Test_aks_network_policy_enabled:
aks_network_policy_enabled,
)
aks_client.clusters = {AZURE_SUBSCRIPTION: {}}
check = aks_network_policy_enabled()
result = check.execute()
assert len(result) == 0
@@ -42,6 +41,18 @@ class Test_aks_network_policy_enabled:
def test_aks_network_policy_enabled(self):
aks_client = mock.MagicMock
cluster_id = str(uuid4())
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn=None,
network_policy="network_policy",
agent_pool_profiles=[mock.MagicMock(enable_node_public_ip=False)],
rbac_enabled=True,
)
}
}
with mock.patch(
"prowler.providers.azure.services.aks.aks_network_policy_enabled.aks_network_policy_enabled.aks_client",
@@ -50,22 +61,6 @@ class Test_aks_network_policy_enabled:
from prowler.providers.azure.services.aks.aks_network_policy_enabled.aks_network_policy_enabled import (
aks_network_policy_enabled,
)
from prowler.providers.azure.services.aks.aks_service import Cluster
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn=None,
network_policy="network_policy",
agent_pool_profiles=[
mock.MagicMock(enable_node_public_ip=False)
],
rbac_enabled=True,
)
}
}
check = aks_network_policy_enabled()
result = check.execute()
@@ -82,6 +77,18 @@ class Test_aks_network_policy_enabled:
def test_aks_network_policy_disabled(self):
aks_client = mock.MagicMock
cluster_id = str(uuid4())
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn=None,
network_policy=None,
agent_pool_profiles=[mock.MagicMock(enable_node_public_ip=False)],
rbac_enabled=True,
)
}
}
with mock.patch(
"prowler.providers.azure.services.aks.aks_network_policy_enabled.aks_network_policy_enabled.aks_client",
@@ -90,22 +97,6 @@ class Test_aks_network_policy_enabled:
from prowler.providers.azure.services.aks.aks_network_policy_enabled.aks_network_policy_enabled import (
aks_network_policy_enabled,
)
from prowler.providers.azure.services.aks.aks_service import Cluster
aks_client.clusters = {
AZURE_SUBSCRIPTION: {
cluster_id: Cluster(
name="cluster_name",
public_fqdn="public_fqdn",
private_fqdn=None,
network_policy=None,
agent_pool_profiles=[
mock.MagicMock(enable_node_public_ip=False)
],
rbac_enabled=True,
)
}
}
check = aks_network_policy_enabled()
result = check.execute()


@@ -1,12 +1,14 @@
from unittest import mock
from uuid import uuid4
from prowler.providers.azure.services.app.app_service import WebApp
from tests.providers.azure.azure_fixtures import AZURE_SUBSCRIPTION
class Test_app_client_certificates_on:
def test_app_no_subscriptions(self):
app_client = mock.MagicMock
app_client.apps = {}
with mock.patch(
"prowler.providers.azure.services.app.app_client_certificates_on.app_client_certificates_on.app_client",
@@ -16,14 +18,13 @@ class Test_app_client_certificates_on:
app_client_certificates_on,
)
app_client.apps = {}
check = app_client_certificates_on()
result = check.execute()
assert len(result) == 0
def test_app_subscription_empty(self):
app_client = mock.MagicMock
app_client.apps = {AZURE_SUBSCRIPTION: {}}
with mock.patch(
"prowler.providers.azure.services.app.app_client_certificates_on.app_client_certificates_on.app_client",
@@ -33,8 +34,6 @@ class Test_app_client_certificates_on:
app_client_certificates_on,
)
app_client.apps = {AZURE_SUBSCRIPTION: {}}
check = app_client_certificates_on()
result = check.execute()
assert len(result) == 0
@@ -42,6 +41,18 @@ class Test_app_client_certificates_on:
def test_app_client_certificates_on(self):
resource_id = f"/subscriptions/{uuid4()}"
app_client = mock.MagicMock
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=None,
client_cert_mode="Required",
https_only=False,
identity=None,
)
}
}
with mock.patch(
"prowler.providers.azure.services.app.app_client_certificates_on.app_client_certificates_on.app_client",
@@ -50,20 +61,6 @@ class Test_app_client_certificates_on:
from prowler.providers.azure.services.app.app_client_certificates_on.app_client_certificates_on import (
app_client_certificates_on,
)
from prowler.providers.azure.services.app.app_service import WebApp
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=None,
client_cert_mode="Required",
https_only=False,
identity=None,
)
}
}
check = app_client_certificates_on()
result = check.execute()
@@ -80,6 +77,18 @@ class Test_app_client_certificates_on:
def test_app_client_certificates_off(self):
resource_id = f"/subscriptions/{uuid4()}"
app_client = mock.MagicMock
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=None,
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
with mock.patch(
"prowler.providers.azure.services.app.app_client_certificates_on.app_client_certificates_on.app_client",
@@ -88,20 +97,6 @@ class Test_app_client_certificates_on:
from prowler.providers.azure.services.app.app_client_certificates_on.app_client_certificates_on import (
app_client_certificates_on,
)
from prowler.providers.azure.services.app.app_service import WebApp
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=None,
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
check = app_client_certificates_on()
result = check.execute()


@@ -1,12 +1,14 @@
from unittest import mock
from uuid import uuid4
from prowler.providers.azure.services.app.app_service import WebApp
from tests.providers.azure.azure_fixtures import AZURE_SUBSCRIPTION
class Test_app_ensure_auth_is_set_up:
def test_app_no_subscriptions(self):
app_client = mock.MagicMock
app_client.apps = {}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_auth_is_set_up.app_ensure_auth_is_set_up.app_client",
@@ -16,14 +18,13 @@ class Test_app_ensure_auth_is_set_up:
app_ensure_auth_is_set_up,
)
app_client.apps = {}
check = app_ensure_auth_is_set_up()
result = check.execute()
assert len(result) == 0
def test_app_subscription_empty(self):
app_client = mock.MagicMock
app_client.apps = {AZURE_SUBSCRIPTION: {}}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_auth_is_set_up.app_ensure_auth_is_set_up.app_client",
@@ -33,8 +34,6 @@ class Test_app_ensure_auth_is_set_up:
app_ensure_auth_is_set_up,
)
app_client.apps = {AZURE_SUBSCRIPTION: {}}
check = app_ensure_auth_is_set_up()
result = check.execute()
assert len(result) == 0
@@ -42,6 +41,18 @@ class Test_app_ensure_auth_is_set_up:
def test_app_auth_enabled(self):
resource_id = f"/subscriptions/{uuid4()}"
app_client = mock.MagicMock
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_auth_is_set_up.app_ensure_auth_is_set_up.app_client",
@@ -50,20 +61,6 @@ class Test_app_ensure_auth_is_set_up:
from prowler.providers.azure.services.app.app_ensure_auth_is_set_up.app_ensure_auth_is_set_up import (
app_ensure_auth_is_set_up,
)
from prowler.providers.azure.services.app.app_service import WebApp
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
check = app_ensure_auth_is_set_up()
result = check.execute()
@@ -80,6 +77,18 @@ class Test_app_ensure_auth_is_set_up:
def test_app_auth_disabled(self):
resource_id = f"/subscriptions/{uuid4()}"
app_client = mock.MagicMock
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=False,
configurations=mock.MagicMock(),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_auth_is_set_up.app_ensure_auth_is_set_up.app_client",
@@ -88,20 +97,6 @@ class Test_app_ensure_auth_is_set_up:
from prowler.providers.azure.services.app.app_ensure_auth_is_set_up.app_ensure_auth_is_set_up import (
app_ensure_auth_is_set_up,
)
from prowler.providers.azure.services.app.app_service import WebApp
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=False,
configurations=mock.MagicMock(),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
check = app_ensure_auth_is_set_up()
result = check.execute()

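The mock-and-patch structure these diffs converge on can be sketched end to end: a `MagicMock` client holding a per-subscription dict of apps, and a check that iterates it and emits one PASS/FAIL finding per app. Names below (`FakeWebApp`, `run_check`) are illustrative stand-ins, not prowler's API:

```python
from unittest import mock

class FakeWebApp:
    """Minimal stand-in for prowler's WebApp model."""
    def __init__(self, auth_enabled):
        self.auth_enabled = auth_enabled

# The moved-up pattern from the diffs: populate apps before patching.
app_client = mock.MagicMock()
app_client.apps = {
    "subscription-1": {"app_id-1": FakeWebApp(auth_enabled=True)}
}

def run_check(client):
    # Iterate subscriptions and apps; PASS when auth is enabled.
    findings = []
    for subscription, apps in client.apps.items():
        for app_name, app in apps.items():
            findings.append({
                "subscription": subscription,
                "resource": app_name,
                "status": "PASS" if app.auth_enabled else "FAIL",
            })
    return findings

results = run_check(app_client)  # one PASS finding for the app above
```

In the real tests, `mock.patch` swaps the module-level `app_client` so the check under test reads this fabricated inventory instead of calling Azure.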

@@ -1,12 +1,14 @@
from unittest import mock
from uuid import uuid4
from prowler.providers.azure.services.app.app_service import WebApp
from tests.providers.azure.azure_fixtures import AZURE_SUBSCRIPTION
class Test_app_ensure_http_is_redirected_to_https:
def test_app_no_subscriptions(self):
app_client = mock.MagicMock
app_client.apps = {}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_http_is_redirected_to_https.app_ensure_http_is_redirected_to_https.app_client",
@@ -16,14 +18,13 @@ class Test_app_ensure_http_is_redirected_to_https:
app_ensure_http_is_redirected_to_https,
)
app_client.apps = {}
check = app_ensure_http_is_redirected_to_https()
result = check.execute()
assert len(result) == 0
def test_app_subscriptions_empty(self):
app_client = mock.MagicMock
app_client.apps = {AZURE_SUBSCRIPTION: {}}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_http_is_redirected_to_https.app_ensure_http_is_redirected_to_https.app_client",
@@ -33,8 +34,6 @@ class Test_app_ensure_http_is_redirected_to_https:
app_ensure_http_is_redirected_to_https,
)
app_client.apps = {AZURE_SUBSCRIPTION: {}}
check = app_ensure_http_is_redirected_to_https()
result = check.execute()
assert len(result) == 0
@@ -42,6 +41,18 @@ class Test_app_ensure_http_is_redirected_to_https:
def test_app_http_to_https(self):
resource_id = f"/subscriptions/{uuid4()}"
app_client = mock.MagicMock
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_http_is_redirected_to_https.app_ensure_http_is_redirected_to_https.app_client",
@@ -50,20 +61,6 @@ class Test_app_ensure_http_is_redirected_to_https:
from prowler.providers.azure.services.app.app_ensure_http_is_redirected_to_https.app_ensure_http_is_redirected_to_https import (
app_ensure_http_is_redirected_to_https,
)
from prowler.providers.azure.services.app.app_service import WebApp
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
check = app_ensure_http_is_redirected_to_https()
result = check.execute()
@@ -80,6 +77,18 @@ class Test_app_ensure_http_is_redirected_to_https:
def test_app_http_to_https_enabled(self):
resource_id = f"/subscriptions/{uuid4()}"
app_client = mock.MagicMock
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(),
client_cert_mode="Ignore",
https_only=True,
identity=None,
)
}
}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_http_is_redirected_to_https.app_ensure_http_is_redirected_to_https.app_client",
@@ -88,20 +97,6 @@ class Test_app_ensure_http_is_redirected_to_https:
from prowler.providers.azure.services.app.app_ensure_http_is_redirected_to_https.app_ensure_http_is_redirected_to_https import (
app_ensure_http_is_redirected_to_https,
)
from prowler.providers.azure.services.app.app_service import WebApp
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(),
client_cert_mode="Ignore",
https_only=True,
identity=None,
)
}
}
check = app_ensure_http_is_redirected_to_https()
result = check.execute()


@@ -1,12 +1,14 @@
from unittest import mock
from uuid import uuid4
from prowler.providers.azure.services.app.app_service import WebApp
from tests.providers.azure.azure_fixtures import AZURE_SUBSCRIPTION
class Test_app_ensure_java_version_is_latest:
def test_app_no_subscriptions(self):
app_client = mock.MagicMock
app_client.apps = {}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_java_version_is_latest.app_ensure_java_version_is_latest.app_client",
@@ -16,14 +18,13 @@ class Test_app_ensure_java_version_is_latest:
app_ensure_java_version_is_latest,
)
app_client.apps = {}
check = app_ensure_java_version_is_latest()
result = check.execute()
assert len(result) == 0
def test_app_subscriptions_empty(self):
app_client = mock.MagicMock
app_client.apps = {AZURE_SUBSCRIPTION: {}}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_java_version_is_latest.app_ensure_java_version_is_latest.app_client",
@@ -33,8 +34,6 @@ class Test_app_ensure_java_version_is_latest:
app_ensure_java_version_is_latest,
)
app_client.apps = {AZURE_SUBSCRIPTION: {}}
check = app_ensure_java_version_is_latest()
result = check.execute()
assert len(result) == 0
@@ -42,6 +41,18 @@ class Test_app_ensure_java_version_is_latest:
def test_app_configurations_none(self):
resource_id = f"/subscriptions/{uuid4()}"
app_client = mock.MagicMock
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=None,
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_java_version_is_latest.app_ensure_java_version_is_latest.app_client",
@@ -50,20 +61,6 @@ class Test_app_ensure_java_version_is_latest:
from prowler.providers.azure.services.app.app_ensure_java_version_is_latest.app_ensure_java_version_is_latest import (
app_ensure_java_version_is_latest,
)
from prowler.providers.azure.services.app.app_service import WebApp
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=None,
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
check = app_ensure_java_version_is_latest()
result = check.execute()
@@ -72,6 +69,22 @@ class Test_app_ensure_java_version_is_latest:
def test_app_linux_java_version_latest(self):
resource_id = f"/subscriptions/{uuid4()}"
app_client = mock.MagicMock
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(
linux_fx_version="Tomcat|9.0-java17", java_version=None
),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
app_client.audit_config = {"java_latest_version": "17"}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_java_version_is_latest.app_ensure_java_version_is_latest.app_client",
@@ -80,24 +93,6 @@ class Test_app_ensure_java_version_is_latest:
from prowler.providers.azure.services.app.app_ensure_java_version_is_latest.app_ensure_java_version_is_latest import (
app_ensure_java_version_is_latest,
)
from prowler.providers.azure.services.app.app_service import WebApp
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(
linux_fx_version="Tomcat|9.0-java17", java_version=None
),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
app_client.audit_config = {"java_latest_version": "17"}
check = app_ensure_java_version_is_latest()
result = check.execute()
@@ -114,6 +109,22 @@ class Test_app_ensure_java_version_is_latest:
def test_app_linux_java_version_not_latest(self):
resource_id = f"/subscriptions/{uuid4()}"
app_client = mock.MagicMock
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(
linux_fx_version="Tomcat|9.0-java11", java_version=None
),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
app_client.audit_config = {"java_latest_version": "17"}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_java_version_is_latest.app_ensure_java_version_is_latest.app_client",
@@ -122,24 +133,6 @@ class Test_app_ensure_java_version_is_latest:
from prowler.providers.azure.services.app.app_ensure_java_version_is_latest.app_ensure_java_version_is_latest import (
app_ensure_java_version_is_latest,
)
from prowler.providers.azure.services.app.app_service import WebApp
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(
linux_fx_version="Tomcat|9.0-java11", java_version=None
),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
app_client.audit_config = {"java_latest_version": "17"}
check = app_ensure_java_version_is_latest()
result = check.execute()
@@ -156,6 +149,22 @@ class Test_app_ensure_java_version_is_latest:
def test_app_windows_java_version_latest(self):
resource_id = f"/subscriptions/{uuid4()}"
app_client = mock.MagicMock
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(
linux_fx_version="", java_version="17"
),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
app_client.audit_config = {"java_latest_version": "17"}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_java_version_is_latest.app_ensure_java_version_is_latest.app_client",
@@ -164,24 +173,6 @@ class Test_app_ensure_java_version_is_latest:
from prowler.providers.azure.services.app.app_ensure_java_version_is_latest.app_ensure_java_version_is_latest import (
app_ensure_java_version_is_latest,
)
from prowler.providers.azure.services.app.app_service import WebApp
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(
linux_fx_version="", java_version="17"
),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
app_client.audit_config = {"java_latest_version": "17"}
check = app_ensure_java_version_is_latest()
result = check.execute()
@@ -198,6 +189,22 @@ class Test_app_ensure_java_version_is_latest:
def test_app_windows_java_version_not_latest(self):
resource_id = f"/subscriptions/{uuid4()}"
app_client = mock.MagicMock
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(
linux_fx_version="", java_version="11"
),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
app_client.audit_config = {"java_latest_version": "17"}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_java_version_is_latest.app_ensure_java_version_is_latest.app_client",
@@ -206,24 +213,6 @@ class Test_app_ensure_java_version_is_latest:
from prowler.providers.azure.services.app.app_ensure_java_version_is_latest.app_ensure_java_version_is_latest import (
app_ensure_java_version_is_latest,
)
from prowler.providers.azure.services.app.app_service import WebApp
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(
linux_fx_version="", java_version="11"
),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
app_client.audit_config = {"java_latest_version": "17"}
check = app_ensure_java_version_is_latest()
result = check.execute()
@@ -240,6 +229,22 @@ class Test_app_ensure_java_version_is_latest:
def test_app_linux_php_version_latest(self):
resource_id = f"/subscriptions/{uuid4()}"
app_client = mock.MagicMock
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(
linux_fx_version="php|8.0", java_version=None
),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
app_client.audit_config = {"java_latest_version": "17"}
with mock.patch(
"prowler.providers.azure.services.app.app_ensure_java_version_is_latest.app_ensure_java_version_is_latest.app_client",
@@ -248,24 +253,6 @@ class Test_app_ensure_java_version_is_latest:
from prowler.providers.azure.services.app.app_ensure_java_version_is_latest.app_ensure_java_version_is_latest import (
app_ensure_java_version_is_latest,
)
from prowler.providers.azure.services.app.app_service import WebApp
app_client.apps = {
AZURE_SUBSCRIPTION: {
"app_id-1": WebApp(
resource_id=resource_id,
auth_enabled=True,
configurations=mock.MagicMock(
linux_fx_version="php|8.0", java_version=None
),
client_cert_mode="Ignore",
https_only=False,
identity=None,
)
}
}
app_client.audit_config = {"java_latest_version": "17"}
check = app_ensure_java_version_is_latest()
result = check.execute()

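The Java-version tests feed in two shapes of configuration: Linux apps encode the runtime in `linux_fx_version` (e.g. `"Tomcat|9.0-java17"`), Windows apps report `java_version` directly, and non-Java stacks (`"php|8.0"`) carry no Java runtime at all. A sketch of how a check might normalize these inputs — an assumption about the approach, not prowler's actual code:

```python
# installed_java_version is a hypothetical helper mirroring the inputs
# the tests above construct via mock.MagicMock(linux_fx_version=..., java_version=...).
def installed_java_version(linux_fx_version, java_version):
    # Windows apps expose the version directly.
    if java_version:
        return java_version
    # Linux apps embed it, e.g. "Tomcat|9.0-java17" -> "17".
    if linux_fx_version and "-java" in linux_fx_version:
        return linux_fx_version.rsplit("-java", 1)[1]
    return None  # e.g. "php|8.0": no Java runtime configured

LATEST = "17"  # matches audit_config {"java_latest_version": "17"}

assert installed_java_version("Tomcat|9.0-java17", None) == LATEST
assert installed_java_version("Tomcat|9.0-java11", None) == "11"
assert installed_java_version("", "17") == LATEST
assert installed_java_version("php|8.0", None) is None
```

This mirrors why the test matrix covers latest/not-latest on both Linux and Windows plus a PHP stack: each branch of the extraction is exercised once.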
Some files were not shown because too many files have changed in this diff.