Compare commits

..

13 Commits

Author SHA1 Message Date
github-actions
50d8b23b19 chore(release): 3.15.1 2024-03-20 14:00:02 +00:00
Pepe Fagoaga
2039ec0c9f fix(action): Release on whatever branch (#3576) 2024-03-20 14:52:56 +01:00
Nacho Rivera
6a6ffbab45 chore(regions_update): Changes in regions for AWS services. (#3571)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-03-20 14:52:56 +01:00
Nacho Rivera
6871169730 chore(regions_update): Changes in regions for AWS services. (#3566)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-03-20 14:52:56 +01:00
dependabot[bot]
8e4d8b5a04 build(deps-dev): bump mkdocs-material from 9.5.12 to 9.5.14
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.5.12 to 9.5.14.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.5.12...9.5.14)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-20 14:52:56 +01:00
dependabot[bot]
7ce499ec37 build(deps): bump azure-mgmt-compute from 30.5.0 to 30.6.0 (#3559)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 14:52:56 +01:00
dependabot[bot]
990cb7dae2 build(deps-dev): bump black from 24.2.0 to 24.3.0
Bumps [black](https://github.com/psf/black) from 24.2.0 to 24.3.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/24.2.0...24.3.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-20 14:52:56 +01:00
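The Dependabot commit bodies above label each bump with an `update-type` (`version-update:semver-patch` for mkdocs-material, `version-update:semver-minor` for black). A minimal sketch of how such a label can be derived from the two versions — the helper name is ours and it assumes plain `x.y.z` version strings:

```python
def semver_update_type(old: str, new: str) -> str:
    """Classify a plain x.y.z version bump the way Dependabot labels it."""
    old_parts = [int(p) for p in old.split(".")]
    new_parts = [int(p) for p in new.split(".")]
    # Compare component by component: the first differing level names the bump.
    for level, (o, n) in zip(("major", "minor", "patch"), zip(old_parts, new_parts)):
        if o != n:
            return f"version-update:semver-{level}"
    return "no-update"
```

For example, `semver_update_type("9.5.12", "9.5.14")` yields the `semver-patch` label seen in the mkdocs-material commit.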
dependabot[bot]
9fea275472 build(deps): bump trufflesecurity/trufflehog from 3.69.0 to 3.70.2 (#3561)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 14:52:56 +01:00
dependabot[bot]
853cf8be25 build(deps): bump tj-actions/changed-files from 42 to 43 (#3560)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 14:52:56 +01:00
dependabot[bot]
0a3f972239 build(deps-dev): bump coverage from 7.4.3 to 7.4.4 (#3558)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 14:52:56 +01:00
Sergio Garcia
180864dab4 fix(iam): handle KeyError in service_last_accessed (#3555) 2024-03-20 14:52:56 +01:00
Sergio Garcia
d06ccb9af1 chore(compliance): rename AWS FTR compliance (#3550) 2024-03-20 14:52:56 +01:00
Nacho Rivera
00d9a391c4 chore(regions_update): Changes in regions for AWS services. (#3552)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-03-20 14:52:56 +01:00
198 changed files with 1609 additions and 13840 deletions

.github/CODEOWNERS

@@ -1 +1 @@
* @prowler-cloud/prowler-oss @prowler-cloud/prowler-dev
* @prowler-cloud/prowler-oss


@@ -4,7 +4,7 @@ on:
pull_request:
branches:
- 'master'
- 'v3'
- 'prowler-4.0-dev'
paths:
- 'docs/**'


@@ -3,7 +3,6 @@ name: build-lint-push-containers
on:
push:
branches:
- "v3"
- "master"
paths-ignore:
- ".github/**"
@@ -14,90 +13,44 @@ on:
types: [published]
env:
# AWS Configuration
AWS_REGION_STG: eu-west-1
AWS_REGION_PLATFORM: eu-west-1
AWS_REGION: us-east-1
# Container's configuration
IMAGE_NAME: prowler
DOCKERFILE_PATH: ./Dockerfile
# Tags
LATEST_TAG: latest
STABLE_TAG: stable
# The RELEASE_TAG is set during runtime in releases
RELEASE_TAG: ""
# The PROWLER_VERSION and PROWLER_VERSION_MAJOR are set during runtime in releases
PROWLER_VERSION: ""
PROWLER_VERSION_MAJOR: ""
# TEMPORARY_TAG: temporary
# Python configuration
PYTHON_VERSION: 3.12
TEMPORARY_TAG: temporary
DOCKERFILE_PATH: ./Dockerfile
PYTHON_VERSION: 3.9
jobs:
# Build Prowler OSS container
container-build-push:
# needs: dockerfile-linter
runs-on: ubuntu-latest
outputs:
prowler_version_major: ${{ steps.get-prowler-version.outputs.PROWLER_VERSION_MAJOR }}
prowler_version: ${{ steps.update-prowler-version.outputs.PROWLER_VERSION }}
env:
POETRY_VIRTUALENVS_CREATE: "false"
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Python
- name: Setup python (release)
if: github.event_name == 'release'
uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Install Poetry
- name: Install dependencies (release)
if: github.event_name == 'release'
run: |
pipx install poetry
pipx inject poetry poetry-bumpversion
- name: Get Prowler version
id: get-prowler-version
run: |
PROWLER_VERSION="$(poetry version -s 2>/dev/null)"
# Store prowler version major just for the release
PROWLER_VERSION_MAJOR="${PROWLER_VERSION%%.*}"
echo "PROWLER_VERSION_MAJOR=${PROWLER_VERSION_MAJOR}" >> "${GITHUB_ENV}"
echo "PROWLER_VERSION_MAJOR=${PROWLER_VERSION_MAJOR}" >> "${GITHUB_OUTPUT}"
case ${PROWLER_VERSION_MAJOR} in
3)
echo "LATEST_TAG=v3-latest" >> "${GITHUB_ENV}"
echo "STABLE_TAG=v3-stable" >> "${GITHUB_ENV}"
;;
4)
echo "LATEST_TAG=latest" >> "${GITHUB_ENV}"
echo "STABLE_TAG=stable" >> "${GITHUB_ENV}"
;;
*)
# Fallback if any other version is present
echo "Releasing another Prowler major version, aborting..."
exit 1
;;
esac
- name: Update Prowler version (release)
id: update-prowler-version
if: github.event_name == 'release'
run: |
PROWLER_VERSION="${{ github.event.release.tag_name }}"
poetry version "${PROWLER_VERSION}"
echo "PROWLER_VERSION=${PROWLER_VERSION}" >> "${GITHUB_ENV}"
echo "PROWLER_VERSION=${PROWLER_VERSION}" >> "${GITHUB_OUTPUT}"
poetry version ${{ github.event.release.tag_name }}
- name: Login to DockerHub
uses: docker/login-action@v3
with:
@@ -137,9 +90,9 @@ jobs:
context: .
push: true
tags: |
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.PROWLER_VERSION }}
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.PROWLER_VERSION }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
file: ${{ env.DOCKERFILE_PATH }}
cache-from: type=gha
@@ -149,26 +102,16 @@ jobs:
needs: container-build-push
runs-on: ubuntu-latest
steps:
- name: Get latest commit info (latest)
- name: Get latest commit info
if: github.event_name == 'push'
run: |
LATEST_COMMIT_HASH=$(echo ${{ github.event.after }} | cut -b -7)
echo "LATEST_COMMIT_HASH=${LATEST_COMMIT_HASH}" >> $GITHUB_ENV
- name: Dispatch event (latest)
if: github.event_name == 'push' && needs.container-build-push.outputs.prowler_version_major == '3'
- name: Dispatch event for latest
if: github.event_name == 'push'
run: |
curl https://api.github.com/repos/${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}/dispatches \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ secrets.ACCESS_TOKEN }}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
--data '{"event_type":"dispatch","client_payload":{"version":"v3-latest", "tag": "${{ env.LATEST_COMMIT_HASH }}"}}'
- name: Dispatch event (release)
if: github.event_name == 'release' && needs.container-build-push.outputs.prowler_version_major == '3'
curl https://api.github.com/repos/${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}/dispatches -H "Accept: application/vnd.github+json" -H "Authorization: Bearer ${{ secrets.ACCESS_TOKEN }}" -H "X-GitHub-Api-Version: 2022-11-28" --data '{"event_type":"dispatch","client_payload":{"version":"latest", "tag": "${{ env.LATEST_COMMIT_HASH }}"}}'
- name: Dispatch event for release
if: github.event_name == 'release'
run: |
curl https://api.github.com/repos/${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}/dispatches \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ secrets.ACCESS_TOKEN }}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
--data '{"event_type":"dispatch","client_payload":{"version":"release", "tag":"${{ needs.container-build-push.outputs.prowler_version }}"}}'
curl https://api.github.com/repos/${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}/dispatches -H "Accept: application/vnd.github+json" -H "Authorization: Bearer ${{ secrets.ACCESS_TOKEN }}" -H "X-GitHub-Api-Version: 2022-11-28" --data '{"event_type":"dispatch","client_payload":{"version":"release", "tag":"${{ github.event.release.tag_name }}"}}'
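The `curl` calls in the hunk above send a `repository_dispatch` event to a downstream repository. A hedged, standalone sketch of the same request in Python — owner, repo, and token are placeholders here; the real workflow reads them from secrets:

```python
import json
import urllib.request


def dispatch_event(owner: str, repo: str, token: str,
                   version: str, tag: str) -> urllib.request.Request:
    """Build the repository_dispatch request the workflow sends with curl."""
    payload = {
        "event_type": "dispatch",
        "client_payload": {"version": version, "tag": tag},
    }
    return urllib.request.Request(
        url=f"https://api.github.com/repos/{owner}/{repo}/dispatches",
        data=json.dumps(payload).encode(),
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": "Bearer " + token,
            "X-GitHub-Api-Version": "2022-11-28",
        },
        method="POST",
    )
```

Sending it is then a single `urllib.request.urlopen(req)` call; the sketch only builds the request so it can be inspected without a token.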


@@ -13,10 +13,10 @@ name: "CodeQL"
on:
push:
branches: [ "master", "v3" ]
branches: [ "master", "prowler-4.0-dev" ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ "master", "v3" ]
branches: [ "master", "prowler-4.0-dev" ]
schedule:
- cron: '00 12 * * *'


@@ -11,9 +11,8 @@ jobs:
with:
fetch-depth: 0
- name: TruffleHog OSS
uses: trufflesecurity/trufflehog@v3.73.0
uses: trufflesecurity/trufflehog@v3.70.2
with:
path: ./
base: ${{ github.event.repository.default_branch }}
head: HEAD
extra_args: --only-verified


@@ -4,7 +4,7 @@ on:
pull_request_target:
branches:
- "master"
- "v3"
- "prowler-4.0-dev"
jobs:
labeler:


@@ -4,11 +4,11 @@ on:
push:
branches:
- "master"
- "v3"
- "prowler-4.0-dev"
pull_request:
branches:
- "master"
- "v3"
- "prowler-4.0-dev"
jobs:
build:
runs-on: ubuntu-latest
@@ -20,7 +20,7 @@ jobs:
- uses: actions/checkout@v4
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@v44
uses: tj-actions/changed-files@v43
with:
files: ./**
files_ignore: |


@@ -8,7 +8,10 @@ env:
RELEASE_TAG: ${{ github.event.release.tag_name }}
PYTHON_VERSION: 3.11
CACHE: "poetry"
# TODO: create a bot user for this kind of tasks, like prowler-bot
# This base branch is used to create a PR with the updated version
# We'd need to handle the base branch for v4 and v3, since they will be
# `master` and `3.0-dev`, respectively
GITHUB_BASE_BRANCH: "master"
GIT_COMMITTER_EMAIL: "sergio@prowler.com"
jobs:
@@ -18,23 +21,6 @@ jobs:
POETRY_VIRTUALENVS_CREATE: "false"
name: Release Prowler to PyPI
steps:
- name: Get Prowler version
run: |
PROWLER_VERSION="${{ env.RELEASE_TAG }}"
case ${PROWLER_VERSION%%.*} in
3)
echo "Releasing Prowler v3 with tag ${PROWLER_VERSION}"
;;
4)
echo "Releasing Prowler v4 with tag ${PROWLER_VERSION}"
;;
*)
echo "Releasing another Prowler major version, aborting..."
exit 1
;;
esac
- uses: actions/checkout@v4
- name: Install dependencies
@@ -53,7 +39,7 @@ jobs:
poetry version ${{ env.RELEASE_TAG }}
- name: Import GPG key
uses: crazy-max/ghaction-import-gpg@v6
uses: crazy-max/ghaction-import-gpg@v4
with:
gpg_private_key: ${{ secrets.GPG_PRIVATE_KEY }}
passphrase: ${{ secrets.GPG_PASSPHRASE }}
@@ -76,6 +62,12 @@ jobs:
# Push the tag
git push -f origin ${{ env.RELEASE_TAG }}
- name: Create new branch for the version update
run: |
git switch -c release-${{ env.RELEASE_TAG }}
git push --set-upstream origin release-${{ env.RELEASE_TAG }}
- name: Build Prowler package
run: |
poetry build
@@ -85,6 +77,23 @@ jobs:
poetry config pypi-token.pypi ${{ secrets.PYPI_API_TOKEN }}
poetry publish
- name: Create PR to update version in the branch
run: |
echo "### Description
This PR updates Prowler Version to ${{ env.RELEASE_TAG }}.
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license." |\
gh pr create \
--base ${{ env.GITHUB_BASE_BRANCH }} \
--head release-${{ env.RELEASE_TAG }} \
--title "chore(release): update Prowler Version to ${{ env.RELEASE_TAG }}." \
--body-file -
env:
GH_TOKEN: ${{ secrets.PROWLER_ACCESS_TOKEN }}
- name: Replicate PyPI package
run: |
rm -rf ./dist && rm -rf ./build && rm -rf prowler.egg-info


@@ -10,4 +10,4 @@
Want some swag as appreciation for your contribution?
# Prowler Developer Guide
https://docs.prowler.com/projects/prowler-open-source/en/latest/developer-guide/introduction/
https://docs.prowler.cloud/en/latest/tutorials/developer-guide/


@@ -1,4 +1,4 @@
FROM python:3.12-alpine
FROM python:3.11-alpine
LABEL maintainer="https://github.com/prowler-cloud/prowler"


@@ -8,7 +8,7 @@
<p align="center">
<b>Learn more at <a href="https://prowler.com"><i>prowler.com</i></a></b>
</p>
<p align="center">
<a href="https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog"><img width="30" height="30" alt="Prowler community on Slack" src="https://github.com/prowler-cloud/prowler/assets/3985464/3617e470-670c-47c9-9794-ce895ebdb627"></a>
<br>
@@ -49,7 +49,7 @@ It contains hundreds of controls covering CIS, NIST 800, NIST CSF, CISA, RBI, Fe
|---|---|---|---|---|
| AWS | 304 | 61 -> `prowler aws --list-services` | 28 -> `prowler aws --list-compliance` | 6 -> `prowler aws --list-categories` |
| GCP | 75 | 11 -> `prowler gcp --list-services` | 1 -> `prowler gcp --list-compliance` | 2 -> `prowler gcp --list-categories`|
| Azure | 127 | 16 -> `prowler azure --list-services` | 2 -> `prowler azure --list-compliance` | 2 -> `prowler azure --list-categories` |
| Azure | 109 | 16 -> `prowler azure --list-services` | CIS soon | 2 -> `prowler azure --list-categories` |
| Kubernetes | Work In Progress | - | CIS soon | - |
# 📖 Documentation


@@ -1,8 +1,8 @@
## Contribute with documentation
We use `mkdocs` to build this Prowler documentation site so you can easily contribute new docs or improve existing ones. To install all necessary dependencies use `poetry install --with docs`.
We use `mkdocs` to build this Prowler documentation site so you can easily contribute new docs or improve existing ones.
1. Install `mkdocs` with your favorite package manager.
2. Inside the `prowler` repository folder run `mkdocs serve` and point your browser to `http://localhost:8000`; you will see live changes to your local copy of this documentation site.
3. Make all needed changes to the docs or add new documents. To do so, edit the existing md files inside `prowler/docs`, and if you are adding a new section or file, make sure you add it to the `mkdocs.yml` file in the root folder of the Prowler repo.
3. Make all needed changes to the docs or add new documents. To do so, edit the existing md files inside `prowler/docs`, and if you are adding a new section or file, make sure you add it to the `mkdocs.yaml` file in the root folder of the Prowler repo.
4. Once you are done with changes, please send a pull request to us for review and merge. Thank you in advance!


@@ -517,8 +517,6 @@ Coming soon ...
For the Azure Provider we don't have any library to mock out the API calls we use, so in this scenario we inject the objects into the service client using [MagicMock](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock).
In essence, we create object instances and run the check under test against them. The test ensures the check executed correctly and that the results contain the expected values.
The following code shows how to use MagicMock to create the service objects for an Azure check test.
```python
@@ -559,8 +557,11 @@ class Test_defender_ensure_defender_for_arm_is_on:
# In this scenario we have to mock also the Defender service and the defender_client from the check to enforce that the defender_client used is the one created within this check because patch != import, and if you execute tests in parallel some objects can be already initialised hence the check won't be isolated.
# In this case we don't use the Moto decorator, we use the mocked Defender client for both objects
with mock.patch(
"prowler.providers.azure.services.defender.defender_ensure_defender_for_arm_is_on.defender_ensure_defender_for_arm_is_on.defender_client",
with mock.patch(
"prowler.providers.azure.services.defender.defender_service.Defender",
new=defender_client,
), mock.patch(
"prowler.providers.azure.services.defender.defender_client.defender_client",
new=defender_client,
):
@@ -574,7 +575,7 @@ class Test_defender_ensure_defender_for_arm_is_on:
check = defender_ensure_defender_for_arm_is_on()
# And then, call the execute() function to run the check
# against the Defender client we've set up.
# against the IAM client we've set up.
result = check.execute()
# Last but not least, we need to assert all the fields
@@ -592,171 +593,4 @@ class Test_defender_ensure_defender_for_arm_is_on:
### Services
For the Azure Services tests, the idea is similar: we test that the functions we've written for capturing the values of the different objects through the Azure API work correctly. Again, we create an object instance and verify that the values captured for that instance are correct.
The following code shows how a service test looks like.
```python
#We import patch from unittest.mock to simulate the objects that we'll test with.
from unittest.mock import patch
#Importing FlowLog from azure.mgmt.network.models allows us to create objects corresponding
#to flow log settings for Azure networking resources.
from azure.mgmt.network.models import FlowLog
#We import the different classes of the Network Service so we can use them.
from prowler.providers.azure.services.network.network_service import (
BastionHost,
Network,
NetworkWatcher,
PublicIp,
SecurityGroup,
)
#Azure constants
from tests.providers.azure.azure_fixtures import (
AZURE_SUBSCRIPTION,
set_mocked_azure_audit_info,
)
#Mocks the behavior of the function responsible for retrieving security groups from the Network service;
#this is the SecurityGroup instance that we are going to use
def mock_network_get_security_groups(_):
return {
AZURE_SUBSCRIPTION: [
SecurityGroup(
id="id",
name="name",
location="location",
security_rules=[],
)
]
}
#We do the same for all the components we need, BastionHost, NetworkWatcher and PublicIp in this case
def mock_network_get_bastion_hosts(_):
return {
AZURE_SUBSCRIPTION: [
BastionHost(
id="id",
name="name",
location="location",
)
]
}
def mock_network_get_network_watchers(_):
return {
AZURE_SUBSCRIPTION: [
NetworkWatcher(
id="id",
name="name",
location="location",
flow_logs=[FlowLog(enabled=True, retention_policy=90)],
)
]
}
def mock_network_get_public_ip_addresses(_):
return {
AZURE_SUBSCRIPTION: [
PublicIp(
id="id",
name="name",
location="location",
ip_address="ip_address",
)
]
}
#We use the 'patch' decorator to replace, during the test, the original get functions with the mock functions.
#In this case we are replacing '__get_security_groups__' with 'mock_network_get_security_groups'.
#We do the same for the rest of the functions.
@patch(
"prowler.providers.azure.services.network.network_service.Network.__get_security_groups__",
new=mock_network_get_security_groups,
)
@patch(
"prowler.providers.azure.services.network.network_service.Network.__get_bastion_hosts__",
new=mock_network_get_bastion_hosts,
)
@patch(
"prowler.providers.azure.services.network.network_service.Network.__get_network_watchers__",
new=mock_network_get_network_watchers,
)
@patch(
"prowler.providers.azure.services.network.network_service.Network.__get_public_ip_addresses__",
new=mock_network_get_public_ip_addresses,
)
#We create the class for finally testing the methods
class Test_Network_Service:
#Verifies that the Network class initializes a client object correctly
def test__get_client__(self):
#Creates instance of the Network class with the audit information provided
network = Network(set_mocked_azure_audit_info())
#Checks that the client is initialized correctly
assert (
network.clients[AZURE_SUBSCRIPTION].__class__.__name__
== "NetworkManagementClient"
)
#Verifies Security Groups are set correctly
def test__get_security_groups__(self):
network = Network(set_mocked_azure_audit_info())
assert (
network.security_groups[AZURE_SUBSCRIPTION][0].__class__.__name__
== "SecurityGroup"
)
#As you can see, every field must match what the mocking method returns
assert network.security_groups[AZURE_SUBSCRIPTION][0].id == "id"
assert network.security_groups[AZURE_SUBSCRIPTION][0].name == "name"
assert network.security_groups[AZURE_SUBSCRIPTION][0].location == "location"
assert network.security_groups[AZURE_SUBSCRIPTION][0].security_rules == []
#Verifies Network Watchers are set correctly
def test__get_network_watchers__(self):
network = Network(set_mocked_azure_audit_info())
assert (
network.network_watchers[AZURE_SUBSCRIPTION][0].__class__.__name__
== "NetworkWatcher"
)
assert network.network_watchers[AZURE_SUBSCRIPTION][0].id == "id"
assert network.network_watchers[AZURE_SUBSCRIPTION][0].name == "name"
assert network.network_watchers[AZURE_SUBSCRIPTION][0].location == "location"
assert network.network_watchers[AZURE_SUBSCRIPTION][0].flow_logs == [
FlowLog(enabled=True, retention_policy=90)
]
#Verifies Flow Logs are set correctly
def __get_flow_logs__(self):
network = Network(set_mocked_azure_audit_info())
nw_name = "name"
assert (
network.network_watchers[AZURE_SUBSCRIPTION][0]
.flow_logs[nw_name][0]
.__class__.__name__
== "FlowLog"
)
assert network.network_watchers[AZURE_SUBSCRIPTION][0].flow_logs == [
FlowLog(enabled=True, retention_policy=90)
]
assert (
network.network_watchers[AZURE_SUBSCRIPTION][0].flow_logs[0].enabled is True
)
assert (
network.network_watchers[AZURE_SUBSCRIPTION][0]
.flow_logs[0]
.retention_policy
== 90
)
...
```
The code continues with more verifications in the same way.
Hopefully this will prove useful for understanding and creating new Azure service checks.
Please refer to the [Azure checks tests](./unit-testing.md#azure) for more information on how to create tests and check the existing services tests [here](https://github.com/prowler-cloud/prowler/tree/master/tests/providers/azure/services).
Coming soon ...


@@ -64,17 +64,16 @@ The other three cases do not need additional configuration, `--az-cli-auth` an
To use each one you need to pass the proper flag to the execution. Prowler for Azure handles two types of permission scopes, which are:
- **Microsoft Entra ID permissions**: Used to retrieve metadata from the identity assumed by Prowler (not mandatory to have access to execute the tool).
- **Azure Active Directory permissions**: Used to retrieve metadata from the identity assumed by Prowler and future AAD checks (not mandatory to have access to execute the tool)
- **Subscription scope permissions**: Required to launch the checks against your resources, mandatory to launch the tool.
#### Microsoft Entra ID scope
#### Azure Active Directory scope
Microsoft Entra ID (formerly AAD) permissions required by the tool are the following:
- `Directory.Read.All`
- `Policy.Read.All`
- `UserAuthenticationMethod.Read.All`
The best way to assign it is through the Azure web console:
@@ -87,10 +86,9 @@ The best way to assign it is through the Azure web console:
5. In the left menu bar, select "API permissions"
6. Then click on "+ Add a permission" and select "Microsoft Graph"
7. Once in the "Microsoft Graph" view, select "Application permissions"
8. Finally, search for "Directory", "Policy" and "UserAuthenticationMethod" select the following permissions:
8. Finally, search for "Directory" and "Policy" and select the following permissions:
- `Directory.Read.All`
- `Policy.Read.All`
- `UserAuthenticationMethod.Read.All`
![EntraID Permissions](../img/AAD-permissions.png)

Binary file not shown (376 KiB before, 358 KiB after)


@@ -17,8 +17,6 @@ Currently, the available frameworks are:
- `cis_1.5_aws`
- `cis_2.0_aws`
- `cis_2.0_gcp`
- `cis_2.0_azure`
- `cis_2.1_azure`
- `cis_3.0_aws`
- `cisa_aws`
- `ens_rd2022_aws`


@@ -100,27 +100,18 @@ aws:
# aws.awslambda_function_using_supported_runtimes
obsolete_lambda_runtimes:
[
"java8",
"go1.x",
"provided",
"python3.6",
"python2.7",
"python3.7",
"nodejs4.3",
"nodejs4.3-edge",
"nodejs6.10",
"nodejs",
"nodejs8.10",
"nodejs10.x",
"nodejs12.x",
"nodejs14.x",
"dotnet5.0",
"dotnetcore1.0",
"dotnetcore2.0",
"dotnetcore2.1",
"dotnetcore3.1",
"ruby2.5",
"ruby2.7",
]
# AWS Organizations
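The `obsolete_lambda_runtimes` list above feeds the `aws.awslambda_function_using_supported_runtimes` check named in the config comment. A minimal sketch of how such a list can be applied — the helper name is ours, and only a subset of the runtimes is reproduced:

```python
# Subset of the obsolete_lambda_runtimes list from the config diff above.
OBSOLETE_LAMBDA_RUNTIMES = {
    "python2.7", "python3.6", "python3.7",
    "nodejs12.x", "go1.x", "ruby2.7",
}


def runtime_is_supported(runtime: str) -> bool:
    """Return True when the Lambda runtime is not flagged as obsolete."""
    return runtime not in OBSOLETE_LAMBDA_RUNTIMES
```

A runtime simply missing from the set passes the check, which is why the config keeps the list explicit and versioned.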


@@ -16,23 +16,8 @@ theme:
- navigation.sections
- navigation.top
palette:
# Palette toggle for light mode
- media: "(prefers-color-scheme: light)"
scheme: default
primary: black
accent: green
toggle:
icon: material/weather-night
name: Switch to dark mode
# Palette toggle for dark mode
- media: "(prefers-color-scheme: dark)"
scheme: slate
primary: black
accent: green
toggle:
icon: material/weather-sunny
name: Switch to light mode
primary: black
accent: green
plugins:
- search

poetry.lock (generated)

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@@ -11,7 +11,7 @@ from prowler.lib.logger import logger
timestamp = datetime.today()
timestamp_utc = datetime.now(timezone.utc).replace(tzinfo=timezone.utc)
prowler_version = "3.16.3"
prowler_version = "3.15.1"
html_logo_url = "https://github.com/prowler-cloud/prowler/"
html_logo_img = "https://user-images.githubusercontent.com/3985464/113734260-7ba06900-96fb-11eb-82bc-d4f68a1e2710.png"
square_logo_img = "https://user-images.githubusercontent.com/38561120/235905862-9ece5bd7-9aa3-4e48-807a-3a9035eb8bfb.png"


@@ -52,27 +52,18 @@ aws:
# aws.awslambda_function_using_supported_runtimes
obsolete_lambda_runtimes:
[
"java8",
"go1.x",
"provided",
"python3.6",
"python2.7",
"python3.7",
"nodejs4.3",
"nodejs4.3-edge",
"nodejs6.10",
"nodejs",
"nodejs8.10",
"nodejs10.x",
"nodejs12.x",
"nodejs14.x",
"dotnet5.0",
"dotnetcore1.0",
"dotnetcore2.0",
"dotnetcore2.1",
"dotnetcore3.1",
"ruby2.5",
"ruby2.7",
]
# AWS Organizations


@@ -46,8 +46,6 @@ class ENS_Requirement_Attribute(BaseModel):
Tipo: ENS_Requirement_Attribute_Tipos
Nivel: ENS_Requirement_Attribute_Nivel
Dimensiones: list[ENS_Requirement_Attribute_Dimensiones]
ModoEjecucion: str
Dependencias: list[str]
# Generic Compliance Requirement Attribute
@@ -89,7 +87,6 @@ class CIS_Requirement_Attribute(BaseModel):
RemediationProcedure: str
AuditProcedure: str
AdditionalInformation: str
DefaultValue: Optional[str]
References: str


@@ -11,7 +11,6 @@ from prowler.lib.outputs.models import (
Check_Output_CSV_AWS_CIS,
Check_Output_CSV_AWS_ISO27001_2013,
Check_Output_CSV_AWS_Well_Architected,
Check_Output_CSV_AZURE_CIS,
Check_Output_CSV_ENS_RD2022,
Check_Output_CSV_GCP_CIS,
Check_Output_CSV_Generic_Compliance,
@@ -36,7 +35,6 @@ def add_manual_controls(output_options, audit_info, file_descriptors):
manual_finding.region = ""
manual_finding.location = ""
manual_finding.project_id = ""
manual_finding.subscription = ""
fill_compliance(
output_options, manual_finding, audit_info, file_descriptors
)
@@ -84,10 +82,6 @@ def fill_compliance(output_options, finding, audit_info, file_descriptors):
Requirements_Attributes_Dimensiones=",".join(
attribute.Dimensiones
),
Requirements_Attributes_ModoEjecucion=attribute.ModoEjecucion,
Requirements_Attributes_Dependencias=",".join(
attribute.Dependencias
),
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
@@ -167,36 +161,7 @@ def fill_compliance(output_options, finding, audit_info, file_descriptors):
csv_header = generate_csv_fields(
Check_Output_CSV_GCP_CIS
)
elif compliance.Provider == "Azure":
compliance_row = Check_Output_CSV_AZURE_CIS(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
Subscription=finding.subscription,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_Profile=attribute.Profile,
Requirements_Attributes_AssessmentStatus=attribute.AssessmentStatus,
Requirements_Attributes_Description=attribute.Description,
Requirements_Attributes_RationaleStatement=attribute.RationaleStatement,
Requirements_Attributes_ImpactStatement=attribute.ImpactStatement,
Requirements_Attributes_RemediationProcedure=attribute.RemediationProcedure,
Requirements_Attributes_AuditProcedure=attribute.AuditProcedure,
Requirements_Attributes_AdditionalInformation=attribute.AdditionalInformation,
Requirements_Attributes_DefaultValue=attribute.DefaultValue,
Requirements_Attributes_References=attribute.References,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
ResourceName=finding.resource_name,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(
Check_Output_CSV_AZURE_CIS
)
elif (
"AWS-Well-Architected-Framework" in compliance.Framework
and compliance.Provider == "AWS"
@@ -304,19 +269,11 @@ def fill_compliance(output_options, finding, audit_info, file_descriptors):
attributes_categories = ""
attributes_values = ""
attributes_comments = ""
attributes_aws_services = ", ".join(
attribute.AWSService for attribute in requirement.Attributes
)
attributes_categories = ", ".join(
attribute.Category for attribute in requirement.Attributes
)
attributes_values = ", ".join(
attribute.Value for attribute in requirement.Attributes
)
attributes_comments = ", ".join(
attribute.Comment for attribute in requirement.Attributes
)
for attribute in requirement.Attributes:
attributes_aws_services += attribute.AWSService + "\n"
attributes_categories += attribute.Category + "\n"
attributes_values += attribute.Value + "\n"
attributes_comments += attribute.Comment + "\n"
compliance_row = Check_Output_MITRE_ATTACK(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,

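The last hunk above swaps between two ways of aggregating the MITRE ATT&CK requirement attributes: a comma-separated `", ".join(...)` over the attributes versus a newline-terminated `+=` loop. A minimal standalone sketch of both styles, with illustrative attribute values:

```python
# Two aggregation styles seen in the hunk above (values are illustrative).
attributes = ["Initial Access", "Persistence"]

# Comma-separated join, as in ", ".join(attribute.AWSService for ...)
joined = ", ".join(attributes)

# Newline-terminated concatenation, as in the += loop variant
concatenated = ""
for attribute in attributes:
    concatenated += attribute + "\n"
```

The joined form yields a single CSV-friendly cell, while the `+=` form leaves a trailing newline after every attribute.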

@@ -15,7 +15,6 @@ from prowler.lib.outputs.models import (
Check_Output_CSV_AWS_CIS,
Check_Output_CSV_AWS_ISO27001_2013,
Check_Output_CSV_AWS_Well_Architected,
Check_Output_CSV_AZURE_CIS,
Check_Output_CSV_ENS_RD2022,
Check_Output_CSV_GCP_CIS,
Check_Output_CSV_Generic_Compliance,
@@ -24,7 +23,6 @@ from prowler.lib.outputs.models import (
)
from prowler.lib.utils.utils import file_exists, open_file
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
from prowler.providers.common.outputs import get_provider_output_model
from prowler.providers.gcp.lib.audit_info.models import GCP_Audit_Info
@@ -115,16 +113,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
filename, output_mode, audit_info, Check_Output_CSV_GCP_CIS
)
file_descriptors.update({output_mode: file_descriptor})
elif isinstance(audit_info, Azure_Audit_Info):
filename = f"{output_directory}/{output_filename}_{output_mode}{csv_file_suffix}"
if "cis_" in output_mode:
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_CSV_AZURE_CIS,
)
file_descriptors.update({output_mode: file_descriptor})
elif isinstance(audit_info, AWS_Audit_Info):
if output_mode == "json-asff":
filename = f"{output_directory}/{output_filename}{json_asff_file_suffix}"


@@ -100,17 +100,7 @@ def fill_json_asff(finding_output, audit_info, finding, output_options):
if not finding.check_metadata.Remediation.Recommendation.Url:
finding.check_metadata.Remediation.Recommendation.Url = "https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html"
finding_output.Remediation = {
"Recommendation": {
"Text": (
(
finding.check_metadata.Remediation.Recommendation.Text[:509]
+ "..."
)
if len(finding.check_metadata.Remediation.Recommendation.Text) > 512
else finding.check_metadata.Remediation.Recommendation.Text
),
"Url": finding.check_metadata.Remediation.Recommendation.Url,
}
"Recommendation": finding.check_metadata.Remediation.Recommendation
}
return finding_output
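The block removed above enforced Security Hub's 512-character cap on the ASFF `Recommendation.Text` field. Pulled out of the diff as a standalone sketch (function name hypothetical), that truncation amounts to:

```python
def truncate_recommendation_text(text: str, limit: int = 512) -> str:
    """Trim text to the ASFF field limit.

    509 characters plus "..." is exactly 512, matching the [:509] slice
    in the removed code.
    """
    return text[: limit - 3] + "..." if len(text) > limit else text
```

The replacement line passes the whole `Recommendation` object through instead, leaving any length enforcement to later serialization.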


@@ -536,8 +536,6 @@ class Check_Output_CSV_ENS_RD2022(BaseModel):
Requirements_Attributes_Nivel: str
Requirements_Attributes_Tipo: str
Requirements_Attributes_Dimensiones: str
Requirements_Attributes_ModoEjecucion: str
Requirements_Attributes_Dependencias: Optional[str]
Status: str
StatusExtended: str
ResourceId: str
@@ -601,35 +599,6 @@ class Check_Output_CSV_GCP_CIS(BaseModel):
CheckId: str
class Check_Output_CSV_AZURE_CIS(BaseModel):
"""
Check_Output_CSV_CIS generates a finding's output in CSV CIS format.
"""
Provider: str
Description: str
Subscription: str
AssessmentDate: str
Requirements_Id: str
Requirements_Description: str
Requirements_Attributes_Section: str
Requirements_Attributes_Profile: str
Requirements_Attributes_AssessmentStatus: str
Requirements_Attributes_Description: str
Requirements_Attributes_RationaleStatement: str
Requirements_Attributes_ImpactStatement: str
Requirements_Attributes_RemediationProcedure: str
Requirements_Attributes_AuditProcedure: str
Requirements_Attributes_AdditionalInformation: str
Requirements_Attributes_DefaultValue: str
Requirements_Attributes_References: str
Status: str
StatusExtended: str
ResourceId: str
ResourceName: str
CheckId: str
class Check_Output_CSV_Generic_Compliance(BaseModel):
"""
Check_Output_CSV_Generic_Compliance generates a finding's output in CSV Generic Compliance format.


@@ -234,7 +234,6 @@
"ap-east-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
@@ -261,7 +260,6 @@
"aws": [
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
@@ -288,7 +286,6 @@
"aws": [
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
@@ -1254,9 +1251,7 @@
"aws": [
"ap-northeast-1",
"ap-southeast-1",
"ap-southeast-2",
"eu-central-1",
"eu-west-3",
"us-east-1",
"us-west-2"
],
@@ -1292,6 +1287,7 @@
"ap-northeast-1",
"ap-northeast-2",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ca-central-1",
"eu-central-1",
@@ -1419,14 +1415,8 @@
"chime-sdk-media-pipelines": {
"regions": {
"aws": [
"ap-northeast-1",
"ap-northeast-2",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ca-central-1",
"eu-central-1",
"eu-west-2",
"us-east-1",
"us-west-2"
],
@@ -2269,17 +2259,14 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
@@ -2306,17 +2293,14 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
@@ -2345,17 +2329,14 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
@@ -2844,22 +2825,6 @@
"aws-us-gov": []
}
},
"deadline-cloud": {
"regions": {
"aws": [
"ap-northeast-1",
"ap-southeast-1",
"ap-southeast-2",
"eu-central-1",
"eu-west-1",
"us-east-1",
"us-east-2",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"deepcomposer": {
"regions": {
"aws": [
@@ -3103,7 +3068,6 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -4220,7 +4184,6 @@
"eu-central-1",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
@@ -4897,7 +4860,6 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
@@ -4907,7 +4869,6 @@
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
@@ -5226,6 +5187,16 @@
]
}
},
"iot-roborunner": {
"regions": {
"aws": [
"eu-central-1",
"us-east-1"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"iot1click-devices": {
"regions": {
"aws": [
@@ -5576,7 +5547,6 @@
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"ca-west-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
@@ -6022,9 +5992,7 @@
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": [
"us-gov-west-1"
]
"aws-us-gov": []
}
},
"lexv2-models": {
@@ -6043,9 +6011,7 @@
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": [
"us-gov-west-1"
]
"aws-us-gov": []
}
},
"license-manager": {
@@ -6063,7 +6029,6 @@
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"ca-west-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
@@ -6519,9 +6484,7 @@
"aws": [
"us-east-1"
],
"aws-cn": [
"cn-northwest-1"
],
"aws-cn": [],
"aws-us-gov": []
}
},
@@ -6613,7 +6576,6 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -6635,7 +6597,6 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-4",
@@ -6645,7 +6606,6 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -6775,7 +6735,6 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-4",
@@ -6784,7 +6743,6 @@
"eu-north-1",
"eu-west-1",
"eu-west-3",
"me-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -7088,7 +7046,6 @@
"ap-east-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
@@ -7186,7 +7143,6 @@
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ca-central-1",
"eu-central-1",
"eu-north-1",
@@ -7194,7 +7150,6 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
@@ -8342,29 +8297,19 @@
"resource-explorer-2": {
"regions": {
"aws": [
"af-south-1",
"ap-east-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"ca-west-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
@@ -8663,26 +8608,15 @@
"rum": {
"regions": {
"aws": [
"af-south-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ca-central-1",
"eu-central-1",
"eu-north-1",
"eu-south-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-south-1",
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2"
],
"aws-cn": [],
@@ -8735,29 +8669,18 @@
"s3control": {
"regions": {
"aws": [
"af-south-1",
"ap-east-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -9449,7 +9372,6 @@
],
"aws-cn": [],
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
}
@@ -9885,7 +9807,6 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
@@ -9895,7 +9816,6 @@
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
@@ -9976,7 +9896,6 @@
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"ca-west-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
@@ -10242,24 +10161,6 @@
]
}
},
"timestream-influxdb": {
"regions": {
"aws": [
"ap-northeast-1",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"eu-central-1",
"eu-north-1",
"eu-west-1",
"us-east-1",
"us-east-2",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"timestream-write": {
"regions": {
"aws": [
@@ -10526,7 +10427,6 @@
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"ca-west-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
@@ -10535,7 +10435,6 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
@@ -10545,10 +10444,7 @@
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
"aws-us-gov": []
}
},
"vmwarecloudonaws": {
@@ -10656,7 +10552,6 @@
"eu-north-1",
"eu-west-1",
"eu-west-2",
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-2"


@@ -69,9 +69,6 @@ Caller Identity ARN: {Fore.YELLOW}[{audit_info.audited_identity_arn}]{Style.RESE
def create_sts_session(
session: session.Session, aws_region: str
) -> session.Session.client:
sts_endpoint_url = (
f"https://sts.{aws_region}.amazonaws.com"
if "cn-" not in aws_region
else f"https://sts.{aws_region}.amazonaws.com.cn"
)
return session.client(
"sts", aws_region, endpoint_url=f"https://sts.{aws_region}.amazonaws.com"
)
return session.client("sts", aws_region, endpoint_url=sts_endpoint_url)
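Separated from boto3, the endpoint selection introduced above reduces to a pure string function (name hypothetical):

```python
def sts_endpoint_for(aws_region: str) -> str:
    """Regional STS endpoint URL; China partition regions use the .cn TLD."""
    if "cn-" in aws_region:
        return f"https://sts.{aws_region}.amazonaws.com.cn"
    return f"https://sts.{aws_region}.amazonaws.com"
```

The previous hard-coded `amazonaws.com` URL would fail for `cn-*` regions, which is what this change addresses.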


@@ -86,7 +86,7 @@ def verify_security_hub_integration_enabled_per_region(
error_message = client_error.response["Error"]["Message"]
if (
error_code == "InvalidAccessException"
and f"Account {aws_account_number} is not subscribed to AWS Security Hub"
and f"Account {aws_account_number} is not subscribed to AWS Security Hub in region {region}"
in error_message
):
logger.warning(


@@ -1,6 +1,5 @@
from typing import Optional
from botocore.exceptions import ClientError
from pydantic import BaseModel
from prowler.lib.logger import logger
@@ -48,28 +47,12 @@ class APIGateway(AWSService):
logger.info("APIGateway - Getting Rest APIs authorizer...")
try:
for rest_api in self.rest_apis:
try:
regional_client = self.regional_clients[rest_api.region]
authorizers = regional_client.get_authorizers(
restApiId=rest_api.id
)["items"]
if authorizers:
rest_api.authorizer = True
except ClientError as error:
if error.response["Error"]["Code"] == "NotFoundException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
regional_client = self.regional_clients[rest_api.region]
authorizers = regional_client.get_authorizers(restApiId=rest_api.id)[
"items"
]
if authorizers:
rest_api.authorizer = True
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
@@ -79,25 +62,10 @@ class APIGateway(AWSService):
logger.info("APIGateway - Describing Rest API...")
try:
for rest_api in self.rest_apis:
try:
regional_client = self.regional_clients[rest_api.region]
rest_api_info = regional_client.get_rest_api(restApiId=rest_api.id)
if rest_api_info["endpointConfiguration"]["types"] == ["PRIVATE"]:
rest_api.public_endpoint = False
except ClientError as error:
if error.response["Error"]["Code"] == "NotFoundException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
regional_client = self.regional_clients[rest_api.region]
rest_api_info = regional_client.get_rest_api(restApiId=rest_api.id)
if rest_api_info["endpointConfiguration"]["types"] == ["PRIVATE"]:
rest_api.public_endpoint = False
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
@@ -107,44 +75,29 @@ class APIGateway(AWSService):
logger.info("APIGateway - Getting stages for Rest APIs...")
try:
for rest_api in self.rest_apis:
try:
regional_client = self.regional_clients[rest_api.region]
stages = regional_client.get_stages(restApiId=rest_api.id)
for stage in stages["item"]:
waf = None
logging = False
client_certificate = False
if "webAclArn" in stage:
waf = stage["webAclArn"]
if "methodSettings" in stage:
if stage["methodSettings"]:
logging = True
if "clientCertificateId" in stage:
client_certificate = True
arn = f"arn:{self.audited_partition}:apigateway:{regional_client.region}::/restapis/{rest_api.id}/stages/{stage['stageName']}"
rest_api.stages.append(
Stage(
name=stage["stageName"],
arn=arn,
logging=logging,
client_certificate=client_certificate,
waf=waf,
tags=[stage.get("tags")],
)
regional_client = self.regional_clients[rest_api.region]
stages = regional_client.get_stages(restApiId=rest_api.id)
for stage in stages["item"]:
waf = None
logging = False
client_certificate = False
if "webAclArn" in stage:
waf = stage["webAclArn"]
if "methodSettings" in stage:
if stage["methodSettings"]:
logging = True
if "clientCertificateId" in stage:
client_certificate = True
arn = f"arn:{self.audited_partition}:apigateway:{regional_client.region}::/restapis/{rest_api.id}/stages/{stage['stageName']}"
rest_api.stages.append(
Stage(
name=stage["stageName"],
arn=arn,
logging=logging,
client_certificate=client_certificate,
waf=waf,
tags=[stage.get("tags")],
)
except ClientError as error:
if error.response["Error"]["Code"] == "NotFoundException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
@@ -155,50 +108,33 @@ class APIGateway(AWSService):
logger.info("APIGateway - Getting API resources...")
try:
for rest_api in self.rest_apis:
try:
regional_client = self.regional_clients[rest_api.region]
get_resources_paginator = regional_client.get_paginator(
"get_resources"
)
for page in get_resources_paginator.paginate(restApiId=rest_api.id):
for resource in page["items"]:
id = resource["id"]
resource_methods = []
methods_auth = {}
for resource_method in resource.get(
"resourceMethods", {}
).keys():
resource_methods.append(resource_method)
regional_client = self.regional_clients[rest_api.region]
get_resources_paginator = regional_client.get_paginator("get_resources")
for page in get_resources_paginator.paginate(restApiId=rest_api.id):
for resource in page["items"]:
id = resource["id"]
resource_methods = []
methods_auth = {}
for resource_method in resource.get(
"resourceMethods", {}
).keys():
resource_methods.append(resource_method)
for resource_method in resource_methods:
if resource_method != "OPTIONS":
method_config = regional_client.get_method(
restApiId=rest_api.id,
resourceId=id,
httpMethod=resource_method,
)
auth_type = method_config["authorizationType"]
methods_auth.update({resource_method: auth_type})
rest_api.resources.append(
PathResourceMethods(
path=resource["path"], resource_methods=methods_auth
for resource_method in resource_methods:
if resource_method != "OPTIONS":
method_config = regional_client.get_method(
restApiId=rest_api.id,
resourceId=id,
httpMethod=resource_method,
)
)
except ClientError as error:
if error.response["Error"]["Code"] == "NotFoundException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
auth_type = method_config["authorizationType"]
methods_auth.update({resource_method: auth_type})
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
rest_api.resources.append(
PathResourceMethods(
path=resource["path"], resource_methods=methods_auth
)
)
except Exception as error:
logger.error(


@@ -32,7 +32,7 @@ class ApiGatewayV2(AWSService):
arn=arn,
id=apigw["ApiId"],
region=regional_client.region,
name=apigw.get("Name", ""),
name=apigw["Name"],
tags=[apigw.get("Tags")],
)
)


@@ -20,7 +20,7 @@ class awslambda_function_invoke_api_operations_cloudtrail_logging_enabled(Check)
f"Lambda function {function.name} is not recorded by CloudTrail."
)
lambda_recorded_cloudtrail = False
for trail in cloudtrail_client.trails.values():
for trail in cloudtrail_client.trails:
for data_event in trail.data_events:
# classic event selectors
if not data_event.is_advanced:


@@ -8,7 +8,7 @@ from prowler.providers.aws.services.s3.s3_client import s3_client
class cloudtrail_bucket_requires_mfa_delete(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
for trail in cloudtrail_client.trails:
if trail.is_logging:
trail_bucket_is_in_account = False
trail_bucket = trail.s3_bucket


@@ -11,7 +11,7 @@ maximum_time_without_logging = 1
class cloudtrail_cloudwatch_logging_enabled(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
for trail in cloudtrail_client.trails:
if trail.name:
report = Check_Report_AWS(self.metadata())
report.region = trail.region


@@ -7,7 +7,7 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
class cloudtrail_insights_exist(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
for trail in cloudtrail_client.trails:
if trail.is_logging:
report = Check_Report_AWS(self.metadata())
report.region = trail.region


@@ -7,7 +7,7 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
class cloudtrail_kms_encryption_enabled(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
for trail in cloudtrail_client.trails:
if trail.name:
report = Check_Report_AWS(self.metadata())
report.region = trail.region


@@ -7,7 +7,7 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
class cloudtrail_log_file_validation_enabled(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
for trail in cloudtrail_client.trails:
if trail.name:
report = Check_Report_AWS(self.metadata())
report.region = trail.region


@@ -8,7 +8,7 @@ from prowler.providers.aws.services.s3.s3_client import s3_client
class cloudtrail_logs_s3_bucket_access_logging_enabled(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
for trail in cloudtrail_client.trails:
if trail.name:
trail_bucket_is_in_account = False
trail_bucket = trail.s3_bucket


@@ -8,7 +8,7 @@ from prowler.providers.aws.services.s3.s3_client import s3_client
class cloudtrail_logs_s3_bucket_is_not_publicly_accessible(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
for trail in cloudtrail_client.trails:
if trail.name:
trail_bucket_is_in_account = False
trail_bucket = trail.s3_bucket


@@ -10,8 +10,8 @@ class cloudtrail_multi_region_enabled(Check):
for region in cloudtrail_client.regional_clients.keys():
report = Check_Report_AWS(self.metadata())
report.region = region
for trail in cloudtrail_client.trails.values():
if trail.region == region or trail.is_multiregion:
for trail in cloudtrail_client.trails:
if trail.region == region:
if trail.is_logging:
report.status = "PASS"
report.resource_id = trail.name


@@ -16,7 +16,7 @@ class cloudtrail_multi_region_enabled_logging_management_events(Check):
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.trail_arn_template
for trail in cloudtrail_client.trails.values():
for trail in cloudtrail_client.trails:
if trail.is_logging:
if trail.is_multiregion:
for event in trail.data_events:


@@ -8,7 +8,7 @@ from prowler.providers.aws.services.s3.s3_client import s3_client
class cloudtrail_s3_dataevents_read_enabled(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
for trail in cloudtrail_client.trails:
for data_event in trail.data_events:
# classic event selectors
if not data_event.is_advanced:


@@ -8,7 +8,7 @@ from prowler.providers.aws.services.s3.s3_client import s3_client
class cloudtrail_s3_dataevents_write_enabled(Check):
def execute(self):
findings = []
for trail in cloudtrail_client.trails.values():
for trail in cloudtrail_client.trails:
for data_event in trail.data_events:
# Classic event selectors
if not data_event.is_advanced:


@@ -15,7 +15,7 @@ class Cloudtrail(AWSService):
# Call AWSService's __init__
super().__init__(__class__.__name__, audit_info)
self.trail_arn_template = f"arn:{self.audited_partition}:cloudtrail:{self.region}:{self.audited_account}:trail"
self.trails = {}
self.trails = []
self.__threading_call__(self.__get_trails__)
self.__get_trail_status__()
self.__get_insight_selectors__()
@@ -45,23 +45,27 @@ class Cloudtrail(AWSService):
kms_key_id = trail["KmsKeyId"]
if "CloudWatchLogsLogGroupArn" in trail:
log_group_arn = trail["CloudWatchLogsLogGroupArn"]
self.trails[trail["TrailARN"]] = Trail(
name=trail["Name"],
is_multiregion=trail["IsMultiRegionTrail"],
home_region=trail["HomeRegion"],
arn=trail["TrailARN"],
region=regional_client.region,
is_logging=False,
log_file_validation_enabled=trail["LogFileValidationEnabled"],
latest_cloudwatch_delivery_time=None,
s3_bucket=trail["S3BucketName"],
kms_key=kms_key_id,
log_group_arn=log_group_arn,
data_events=[],
has_insight_selectors=trail.get("HasInsightSelectors"),
self.trails.append(
Trail(
name=trail["Name"],
is_multiregion=trail["IsMultiRegionTrail"],
home_region=trail["HomeRegion"],
arn=trail["TrailARN"],
region=regional_client.region,
is_logging=False,
log_file_validation_enabled=trail[
"LogFileValidationEnabled"
],
latest_cloudwatch_delivery_time=None,
s3_bucket=trail["S3BucketName"],
kms_key=kms_key_id,
log_group_arn=log_group_arn,
data_events=[],
has_insight_selectors=trail.get("HasInsightSelectors"),
)
)
if trails_count == 0:
self.trails[self.__get_trail_arn_template__(regional_client.region)] = (
self.trails.append(
Trail(
region=regional_client.region,
)
@@ -75,7 +79,7 @@ class Cloudtrail(AWSService):
def __get_trail_status__(self):
logger.info("Cloudtrail - Getting trail status")
try:
for trail in self.trails.values():
for trail in self.trails:
for region, client in self.regional_clients.items():
if trail.region == region and trail.name:
status = client.get_trail_status(Name=trail.arn)
@@ -93,7 +97,7 @@ class Cloudtrail(AWSService):
def __get_event_selectors__(self):
logger.info("Cloudtrail - Getting event selector")
try:
for trail in self.trails.values():
for trail in self.trails:
for region, client in self.regional_clients.items():
if trail.region == region and trail.name:
data_events = client.get_event_selectors(TrailName=trail.arn)
@@ -127,7 +131,7 @@ class Cloudtrail(AWSService):
logger.info("Cloudtrail - Getting trail insight selectors...")
try:
for trail in self.trails.values():
for trail in self.trails:
for region, client in self.regional_clients.items():
if trail.region == region and trail.name:
insight_selectors = None
@@ -176,7 +180,7 @@ class Cloudtrail(AWSService):
def __list_tags_for_resource__(self):
logger.info("CloudTrail - List Tags...")
try:
for trail in self.trails.values():
for trail in self.trails:
# Check if trails are in this account and region
if (
trail.region == trail.home_region
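The hunks above switch `self.trails` from a dict keyed by trail ARN back to a plain list, which is why every CloudTrail check changes `cloudtrail_client.trails.values()` to `cloudtrail_client.trails`. A reduced sketch of the two shapes (model trimmed to two fields, values illustrative):

```python
from dataclasses import dataclass


@dataclass
class Trail:
    arn: str
    region: str


trail = Trail("arn:aws:cloudtrail:us-east-1:123456789012:trail/t1", "us-east-1")

# Dict keyed by ARN: O(1) lookup by ARN, checks iterate over .values()
trails_by_arn = {trail.arn: trail}
dict_regions = [t.region for t in trails_by_arn.values()]

# Plain list: checks iterate over the container directly
trails = [trail]
list_regions = [t.region for t in trails]
```

Both iteration patterns visit the same trails; only direct-by-ARN access is lost with the list.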


@@ -12,7 +12,7 @@ def check_cloudwatch_log_metric_filter(
):
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in trails.values():
for trail in trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups


@@ -30,7 +30,8 @@ class EC2(AWSService):
self.__threading_call__(self.__describe_snapshots__)
self.__threading_call__(self.__determine_public_snapshots__, self.snapshots)
self.network_interfaces = []
self.__threading_call__(self.__describe_network_interfaces__)
self.__threading_call__(self.__describe_public_network_interfaces__)
self.__threading_call__(self.__describe_sg_network_interfaces__)
self.images = []
self.__threading_call__(self.__describe_images__)
self.volumes = []
@@ -243,7 +244,7 @@ class EC2(AWSService):
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __describe_network_interfaces__(self, regional_client):
def __describe_public_network_interfaces__(self, regional_client):
try:
# Get Network Interfaces with Public IPs
describe_network_interfaces_paginator = regional_client.get_paginator(
@@ -251,45 +252,47 @@ class EC2(AWSService):
)
for page in describe_network_interfaces_paginator.paginate():
for interface in page["NetworkInterfaces"]:
eni = NetworkInterface(
id=interface["NetworkInterfaceId"],
association=interface.get("Association", {}),
attachment=interface.get("Attachment", {}),
private_ip=interface["PrivateIpAddress"],
type=interface["InterfaceType"],
subnet_id=interface["SubnetId"],
vpc_id=interface["VpcId"],
region=regional_client.region,
tags=interface.get("TagSet"),
)
self.network_interfaces.append(eni)
# Add Network Interface to Security Group
# 'Groups': [
# {
# 'GroupId': 'sg-xxxxx',
# 'GroupName': 'default',
# },
# ],
self.__add_network_interfaces_to_security_groups__(
eni, interface.get("Groups", [])
)
if interface.get("Association"):
self.network_interfaces.append(
NetworkInterface(
public_ip=interface["Association"]["PublicIp"],
type=interface["InterfaceType"],
private_ip=interface["PrivateIpAddress"],
subnet_id=interface["SubnetId"],
vpc_id=interface["VpcId"],
region=regional_client.region,
tags=interface.get("TagSet"),
)
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __add_network_interfaces_to_security_groups__(
self, interface, interface_security_groups
):
def __describe_sg_network_interfaces__(self, regional_client):
try:
for sg in interface_security_groups:
for security_group in self.security_groups:
if security_group.id == sg["GroupId"]:
security_group.network_interfaces.append(interface)
# Get Network Interfaces for Security Groups
for sg in self.security_groups:
regional_client = self.regional_clients[sg.region]
describe_network_interfaces_paginator = regional_client.get_paginator(
"describe_network_interfaces"
)
for page in describe_network_interfaces_paginator.paginate(
Filters=[
{
"Name": "group-id",
"Values": [
sg.id,
],
},
],
):
for interface in page["NetworkInterfaces"]:
sg.network_interfaces.append(interface["NetworkInterfaceId"])
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_instance_user_data__(self, instance):
@@ -447,18 +450,6 @@ class Volume(BaseModel):
tags: Optional[list] = []
class NetworkInterface(BaseModel):
id: str
association: dict
attachment: dict
private_ip: str
type: str
subnet_id: str
vpc_id: str
region: str
tags: Optional[list] = []
class SecurityGroup(BaseModel):
name: str
arn: str
@@ -467,7 +458,7 @@ class SecurityGroup(BaseModel):
vpc_id: str
public_ports: bool
associated_sgs: list
network_interfaces: list[NetworkInterface] = []
network_interfaces: list[str] = []
ingress_rules: list[dict]
egress_rules: list[dict]
tags: Optional[list] = []
@@ -482,6 +473,16 @@ class NetworkACL(BaseModel):
tags: Optional[list] = []
class NetworkInterface(BaseModel):
public_ip: str
private_ip: str
type: str
subnet_id: str
vpc_id: str
region: str
tags: Optional[list] = []
class ElasticIP(BaseModel):
public_ip: Optional[str]
association_id: Optional[str]
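The removed `__add_network_interfaces_to_security_groups__` performed an in-memory join of each interface's `Groups` list onto the matching `SecurityGroup` objects; the replacement instead calls `describe_network_interfaces` per security group with a `group-id` filter. The removed join, sketched over plain dicts shaped like the API response:

```python
def index_interfaces_by_group(interfaces: list[dict]) -> dict[str, list[str]]:
    """Group ENI ids by security-group id, mirroring the removed in-memory join."""
    by_group: dict[str, list[str]] = {}
    for interface in interfaces:
        for group in interface.get("Groups", []):
            by_group.setdefault(group["GroupId"], []).append(
                interface["NetworkInterfaceId"]
            )
    return by_group
```

The per-group API-filter approach trades one pass over all interfaces for one paginated call per security group.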


@@ -6,7 +6,7 @@
"Security",
"Configuration"
],
"ServiceName": "eks",
"ServiceName": "EKS",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",


@@ -11,7 +11,7 @@
"SubServiceName": "",
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"Severity": "high",
"ResourceType": "AwsIamRole",
"ResourceType": "AwsIamPolicy",
"Description": "Ensure inline policies that allow full \"*:*\" administrative privileges are not associated to IAM identities",
"Risk": "IAM policies are the means by which privileges are granted to users, groups or roles. It is recommended and considered a standard security advice to grant least privilege—that is; granting only the permissions required to perform a task. Determine what users need to do and then craft policies for them that let the users perform only those tasks instead of allowing full administrative privileges. Providing full administrative privileges instead of restricting to the minimum set of permissions that the user is required to do exposes the resources to potentially unwanted actions.",
"RelatedUrl": "",


@@ -57,18 +57,16 @@ class iam_policy_allows_privilege_escalation(Check):
"glue:GetDevEndpoints",
},
"PassRole+CloudFormation": {
"iam:PassRole",
"cloudformation:CreateStack",
"cloudformation:DescribeStacks",
},
"PassRole+DataPipeline": {
"iam:PassRole",
"datapipeline:CreatePipeline",
"datapipeline:PutPipelineDefinition",
"datapipeline:ActivatePipeline",
},
"GlueUpdateDevEndpoint": {"glue:UpdateDevEndpoint"},
"GlueUpdateDevEndpoints": {"glue:UpdateDevEndpoints"},
"GlueUpdateDevEndpoints": {"glue:UpdateDevEndpoint"},
"lambda:UpdateFunctionCode": {"lambda:UpdateFunctionCode"},
"iam:CreateAccessKey": {"iam:CreateAccessKey"},
"iam:CreateLoginProfile": {"iam:CreateLoginProfile"},
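The dict edited above maps escalation-technique names to the sets of permissions that together enable them; the check flags a policy when its granted actions cover a whole set. A minimal sketch of that matching, using an assumed two-entry subset of the mapping:

```python
# Assumed subset of the check's mapping, for illustration only
privilege_escalation_combos = {
    "PassRole+CloudFormation": {"iam:PassRole", "cloudformation:CreateStack"},
    "iam:CreateAccessKey": {"iam:CreateAccessKey"},
}


def matching_combos(granted: set[str]) -> list[str]:
    """Names of escalation combos fully covered by the granted actions."""
    return [
        name
        for name, required in privilege_escalation_combos.items()
        if required.issubset(granted)
    ]
```

Set subset tests make the match order-independent and cheap even with many combos.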


@@ -1,6 +1,5 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.ec2.ec2_client import ec2_client
from prowler.providers.aws.services.ec2.lib.security_groups import check_security_group
from prowler.providers.aws.services.rds.rds_client import rds_client
@@ -18,25 +17,18 @@ class rds_instance_no_public_access(Check):
f"RDS Instance {db_instance.id} is not publicly accessible."
)
if db_instance.public:
report.status_extended = f"RDS Instance {db_instance.id} is set as publicly accessible, but is not publicly exposed."
# Check if any DB Instance Security Group is publicly open
if db_instance.security_groups:
report.status = "PASS"
report.status_extended = f"RDS Instance {db_instance.id} is set as publicly accessible but filtered with security groups."
db_instance_port = db_instance.endpoint.get("Port")
if db_instance_port:
for security_group in ec2_client.security_groups:
if security_group.id in db_instance.security_groups:
for ingress_rule in security_group.ingress_rules:
if check_security_group(
ingress_rule,
"tcp",
[db_instance_port],
any_address=True,
):
report.status = "FAIL"
report.status_extended = f"RDS Instance {db_instance.id} is set as publicly accessible and security group {security_group.name} ({security_group.id}) has {db_instance.engine} port {db_instance_port} open to the Internet at endpoint {db_instance.endpoint.get('Address')}."
break
report.status_extended = f"RDS Instance {db_instance.id} is public but filtered with security groups."
for security_group in ec2_client.security_groups:
if (
security_group.id in db_instance.security_groups
and security_group.public_ports
):
report.status = "FAIL"
report.status_extended = f"RDS Instance {db_instance.id} is set as publicly accessible."
break
findings.append(report)
return findings
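The restored logic above asks `check_security_group` whether any ingress rule of the instance's security groups opens the DB port to the Internet. A simplified stand-in for that helper over an EC2-style ingress rule dict (not the real implementation):

```python
def rule_opens_port_to_internet(ingress_rule: dict, protocol: str, port: int) -> bool:
    """Does this EC2-style ingress rule expose `port` over `protocol` to 0.0.0.0/0?

    Simplified: ignores IPv6 ranges and prefix lists.
    """
    if ingress_rule.get("IpProtocol") not in (protocol, "-1"):  # "-1" = all protocols
        return False
    from_port = ingress_rule.get("FromPort", 0)
    to_port = ingress_rule.get("ToPort", 65535)
    open_to_any = any(
        r.get("CidrIp") == "0.0.0.0/0" for r in ingress_rule.get("IpRanges", [])
    )
    return open_to_any and from_port <= port <= to_port
```

Checking the actual engine port against the rule's range is what lets the check pass instances that are "public" but filtered, instead of failing on the coarser `public_ports` flag alone.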


@@ -18,13 +18,9 @@ class route53_dangling_ip_subdomain_takeover(Check):
# Gather Elastic IPs and Network Interfaces Public IPs inside the AWS Account
public_ips = []
public_ips.extend([eip.public_ip for eip in ec2_client.elastic_ips])
# Add public IPs from Network Interfaces
for network_interface in ec2_client.network_interfaces:
if (
network_interface.association
and network_interface.association.get("PublicIp")
):
public_ips.append(network_interface.association.get("PublicIp"))
public_ips.extend(
[interface.public_ip for interface in ec2_client.network_interfaces]
)
for record in record_set.records:
# Check if record is an IP Address
if validate_ip_address(record):
@@ -38,18 +34,18 @@ class route53_dangling_ip_subdomain_takeover(Check):
].tags
report.region = record_set.region
report.status = "PASS"
report.status_extended = f"Route53 record {record} (name: {record_set.name}) in Hosted Zone {route53_client.hosted_zones[record_set.hosted_zone_id].name} is not a dangling IP."
report.status_extended = f"Route53 record {record} in Hosted Zone {route53_client.hosted_zones[record_set.hosted_zone_id].name} is not a dangling IP."
# If Public IP check if it is in the AWS Account
if (
not ip_address(record).is_private
and record not in public_ips
):
report.status_extended = f"Route53 record {record} (name: {record_set.name}) in Hosted Zone {route53_client.hosted_zones[record_set.hosted_zone_id].name} does not belong to AWS and it is not a dangling IP."
report.status_extended = f"Route53 record {record} in Hosted Zone {route53_client.hosted_zones[record_set.hosted_zone_id].name} does not belong to AWS and it is not a dangling IP."
# Check if potential dangling IP is within AWS Ranges
aws_ip_ranges = awsipranges.get_ranges()
if aws_ip_ranges.get(record):
report.status = "FAIL"
report.status_extended = f"Route53 record {record} (name: {record_set.name}) in Hosted Zone {route53_client.hosted_zones[record_set.hosted_zone_id].name} is a dangling IP which can lead to a subdomain takeover attack."
report.status_extended = f"Route53 record {record} in Hosted Zone {route53_client.hosted_zones[record_set.hosted_zone_id].name} is a dangling IP which can lead to a subdomain takeover attack."
findings.append(report)
return findings
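The simplified check keeps the same triage order: parse the record as an IP, skip private addresses, and only treat public addresses the account does not own as candidates for dangling. A minimal sketch using only the stdlib `ipaddress` module (the function name and set-based ownership lookup are illustrative assumptions, not Prowler API):

```python
from ipaddress import ip_address


def classify_record(record: str, account_public_ips: set) -> str:
    """Rough triage of a Route53 record value against an account's known public IPs."""
    try:
        ip = ip_address(record)
    except ValueError:
        # Not an IP at all (e.g. a CNAME target) -- out of scope for this check
        return "not-an-ip"
    if ip.is_private:
        return "private"
    if record in account_public_ips:
        return "owned"
    # Public IP not owned by the account; the real check then consults
    # awsipranges to decide whether it is a dangling AWS-range IP
    return "potentially-dangling"
```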


@@ -43,7 +43,6 @@ class sns_topics_not_publicly_accessible(Check):
else:
report.status = "FAIL"
report.status_extended = f"SNS topic {topic.name} is public because its policy allows public access."
break
findings.append(report)


@@ -41,11 +41,9 @@ class sqs_queues_not_publicly_accessible(Check):
else:
report.status = "FAIL"
report.status_extended = f"SQS queue {queue.id} is public because its policy allows public access, and the condition does not limit access to resources within the same account."
break
else:
report.status = "FAIL"
report.status_extended = f"SQS queue {queue.id} is public because its policy allows public access."
break
findings.append(report)
return findings


@@ -342,12 +342,7 @@ class VPC(AWSService):
for route in route_tables_for_subnet.get("RouteTables")[
0
].get("Routes"):
if (
"GatewayId" in route
and "igw" in route["GatewayId"]
and route["DestinationCidrBlock"] == "0.0.0.0/0"
):
# If the route table has a default route to an internet gateway, the subnet is public
if "GatewayId" in route and "igw" in route["GatewayId"]:
public = True
if "NatGatewayId" in route:
nat_gateway = True
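The condensed VPC logic now marks a subnet public whenever any route targets an internet gateway, dropping the earlier requirement that the route's destination also be `0.0.0.0/0`. The two flags can be sketched as (assuming the route dicts look like the EC2 `DescribeRouteTables` response shown above):

```python
def subnet_route_flags(routes: list) -> tuple:
    """Return (public, nat_gateway) for a subnet's list of route dicts."""
    # Any route whose GatewayId references an internet gateway marks the subnet public
    public = any("igw" in route.get("GatewayId", "") for route in routes)
    # Any route through a NAT gateway marks the subnet as NAT-attached
    nat_gateway = any("NatGatewayId" in route for route in routes)
    return public, nat_gateway
```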


@@ -1,4 +1,3 @@
from botocore.exceptions import ClientError
from pydantic import BaseModel
from prowler.lib.logger import logger
@@ -48,16 +47,6 @@ class WAFv2(AWSService):
acl.logging_enabled = bool(
logging_enabled["LoggingConfiguration"]["LogDestinationConfigs"]
)
except ClientError as error:
if error.response["Error"]["Code"] == "WAFNonexistentItemException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"


@@ -1,21 +0,0 @@
from uuid import UUID
# Service management API
WINDOWS_AZURE_SERVICE_MANAGEMENT_API = "797f4846-ba00-4fd7-ba43-dac1f8f63013"
# Authorization policy roles
GUEST_USER_ACCESS_NO_RESTRICTICTED = UUID("a0b1b346-4d3e-4e8b-98f8-753987be4970")
GUEST_USER_ACCESS_RESTRICTICTED = UUID("2af84b1e-32c8-42b7-82bc-daa82404023b")
# General built-in roles
CONTRIBUTOR_ROLE_ID = "b24988ac-6180-42a0-ab88-20f7382dd24c"
OWNER_ROLE_ID = "8e3af657-a8ff-443c-a75c-2fe8c4bcb635"
# Compute roles
VIRTUAL_MACHINE_CONTRIBUTOR_ROLE_ID = "9980e02c-c2be-4d73-94e8-173b1dc7cf3c"
VIRTUAL_MACHINE_ADMINISTRATOR_LOGIN_ROLE_ID = "1c0163c0-47e6-4577-8991-ea5c82e286e4"
VIRTUAL_MACHINE_USER_LOGIN_ROLE_ID = "fb879df8-f326-4884-b1cf-06f3ad86be52"
VIRTUAL_MACHINE_LOCAL_USER_LOGIN_ROLE_ID = "602da2ba-a5c2-41da-b01d-5360126ab525"
WINDOWS_ADMIN_CENTER_ADMINISTRATOR_LOGIN_ROLE_ID = (
"a6333a3e-0164-44c3-b281-7a577aff287f"
)


@@ -1,5 +1,3 @@
from typing import Any
from prowler.lib.logger import logger
from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
@@ -7,7 +5,7 @@ from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
class AzureService:
def __init__(
self,
service: Any,
service: str,
audit_info: Azure_Audit_Info,
):
self.clients = self.__set_clients__(


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "app_http_logs_enabled",
"CheckTitle": "Ensure that logging for Azure AppService 'HTTP logs' is enabled",
"CheckType": [],
"ServiceName": "app",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "low",
"ResourceType": "Microsoft.Web/sites/config",
"Description": "Enable AppServiceHTTPLogs diagnostic log category for Azure App Service instances to ensure all http requests are captured and centrally logged.",
  "Risk": "Capturing web requests can be important supporting information for security analysts performing monitoring and incident response activities. Once logged, these entries can be ingested into a SIEM or another central aggregation point for the organization.",
"RelatedUrl": "https://learn.microsoft.com/en-us/security/benchmark/azure/mcsb-logging-threat-detection#lt-3-enable-logging-for-security-investigation",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": "https://docs.bridgecrew.io/docs/ensure-that-app-service-enables-http-logging#terraform"
},
"Recommendation": {
"Text": "1. Go to App Services For each App Service: 2. Go to Diagnostic Settings 3. Click Add Diagnostic Setting 4. Check the checkbox next to 'HTTP logs' 5. Configure a destination based on your specific logging consumption capability (for example Stream to an event hub and then consuming with SIEM integration for Event Hub logging).",
"Url": "https://docs.microsoft.com/en-us/azure/app-service/troubleshoot-diagnostic-logs"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Log consumption and processing will incur additional cost."
}


@@ -1,29 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.app.app_client import app_client
class app_http_logs_enabled(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for subscription_name, apps in app_client.apps.items():
for app_name, app in apps.items():
if "functionapp" not in app.kind:
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = subscription_name
report.resource_name = app_name
report.resource_id = app.resource_id
if not app.monitor_diagnostic_settings:
report.status_extended = f"App {app_name} does not have a diagnostic setting in subscription {subscription_name}."
else:
for diagnostic_setting in app.monitor_diagnostic_settings:
report.status_extended = f"App {app_name} does not have HTTP Logs enabled in diagnostic setting {diagnostic_setting.name} in subscription {subscription_name}"
for log in diagnostic_setting.logs:
if log.category == "AppServiceHTTPLogs" and log.enabled:
report.status = "PASS"
report.status_extended = f"App {app_name} has HTTP Logs enabled in diagnostic setting {diagnostic_setting.name} in subscription {subscription_name}"
break
findings.append(report)
return findings
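The check walks every diagnostic setting and passes as soon as one of them enables the `AppServiceHTTPLogs` category. The core predicate, sketched over plain dicts rather than the Azure SDK models the check actually receives:

```python
def http_logs_enabled(diagnostic_settings: list) -> bool:
    """True if any diagnostic setting enables the AppServiceHTTPLogs log category."""
    return any(
        log["category"] == "AppServiceHTTPLogs" and log["enabled"]
        for setting in diagnostic_settings
        for log in setting["logs"]
    )
```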


@@ -6,8 +6,6 @@ from azure.mgmt.web.models import ManagedServiceIdentity, SiteConfigResource
from prowler.lib.logger import logger
from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
from prowler.providers.azure.lib.service.service import AzureService
from prowler.providers.azure.services.monitor.monitor_client import monitor_client
from prowler.providers.azure.services.monitor.monitor_service import DiagnosticSetting
########################## App
@@ -51,12 +49,8 @@ class App(AzureService):
getattr(app, "client_cert_enabled", False),
getattr(app, "client_cert_mode", "Ignore"),
),
monitor_diagnostic_settings=self.__get_app_monitor_settings__(
app.name, app.resource_group, subscription_name
),
https_only=getattr(app, "https_only", False),
identity=getattr(app, "identity", None),
kind=getattr(app, "kind", "app"),
)
}
)
@@ -84,21 +78,6 @@ class App(AzureService):
return cert_mode
def __get_app_monitor_settings__(self, app_name, resource_group, subscription):
logger.info(f"App - Getting monitor diagnostics settings for {app_name}...")
monitor_diagnostics_settings = []
try:
monitor_diagnostics_settings = monitor_client.diagnostic_settings_with_uri(
self.subscriptions[subscription],
f"subscriptions/{self.subscriptions[subscription]}/resourceGroups/{resource_group}/providers/Microsoft.Web/sites/{app_name}",
monitor_client.clients[subscription],
)
except Exception as error:
logger.error(
f"Subscription name: {self.subscription} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return monitor_diagnostics_settings
@dataclass
class WebApp:
@@ -108,5 +87,3 @@ class WebApp:
client_cert_mode: str = "Ignore"
auth_enabled: bool = False
https_only: bool = False
monitor_diagnostic_settings: list[DiagnosticSetting] = None
kind: str = "app"


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_conditional_access_policy_require_mfa_for_management_api",
"CheckTitle": "Ensure Multifactor Authentication is Required for Windows Azure Service Management API",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "#microsoft.graph.conditionalAccess",
"Description": "This recommendation ensures that users accessing the Windows Azure Service Management API (i.e. Azure Powershell, Azure CLI, Azure Resource Manager API, etc.) are required to use multifactor authentication (MFA) credentials when accessing resources through the Windows Azure Service Management API.",
"Risk": "Administrative access to the Windows Azure Service Management API should be secured with a higher level of scrutiny to authenticating mechanisms. Enabling multifactor authentication is recommended to reduce the potential for abuse of Administrative actions, and to prevent intruders or compromised admin credentials from changing administrative settings.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/conditional-access/howto-conditional-access-policy-azure-management",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From the Azure Admin Portal dashboard, open Microsoft Entra ID. 2. Click Security in the Entra ID blade. 3. Click Conditional Access in the Security blade. 4. Click Policies in the Conditional Access blade. 5. Click + New policy. 6. Enter a name for the policy. 7. Click the blue text under Users. 8. Under Include, select All users. 9. Under Exclude, check Users and groups. 10. Select users or groups to be exempted from this policy (e.g. break-glass emergency accounts, and non-interactive service accounts) then click the Select button. 11. Click the blue text under Target Resources. 12. Under Include, click the Select apps radio button. 13. Click the blue text under Select. 14. Check the box next to Windows Azure Service Management APIs then click the Select button. 15. Click the blue text under Grant. 16. Under Grant access check the box for Require multifactor authentication then click the Select button. 17. Before creating, set Enable policy to Report-only. 18. Click Create. After testing the policy in report-only mode, update the Enable policy setting from Report-only to On.",
"Url": "https://learn.microsoft.com/en-us/entra/identity/conditional-access/concept-conditional-access-cloud-apps"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Conditional Access policies require Microsoft Entra ID P1 or P2 licenses. Similarly, they may require additional overhead to maintain if users lose access to their MFA. Any users or groups which are granted an exception to this policy should be carefully tracked, be granted only minimal necessary privileges, and conditional access exceptions should be regularly reviewed or investigated."
}


@@ -1,44 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.config import WINDOWS_AZURE_SERVICE_MANAGEMENT_API
from prowler.providers.azure.services.entra.entra_client import entra_client
class entra_conditional_access_policy_require_mfa_for_management_api(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for (
tenant_name,
conditional_access_policies,
) in entra_client.conditional_access_policy.items():
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant_name}"
report.resource_name = "Conditional Access Policy"
report.resource_id = "Conditional Access Policy"
report.status_extended = (
"Conditional Access Policy does not require MFA for management API."
)
for policy_id, policy in conditional_access_policies.items():
if (
policy.state == "enabled"
and "All" in policy.users["include"]
and WINDOWS_AZURE_SERVICE_MANAGEMENT_API
in policy.target_resources["include"]
and any(
"mfa" in access_control.lower()
for access_control in policy.access_controls["grant"]
)
):
report.status = "PASS"
report.status_extended = (
"Conditional Access Policy requires MFA for management API."
)
report.resource_id = policy_id
report.resource_name = policy.name
break
findings.append(report)
return findings
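The policy match requires four conditions at once: the policy is enabled, it includes all users, it targets the Service Management API, and its grant controls mention MFA. A sketch over plain dicts (the real check evaluates `entra_client` model objects, so the dict shape here is an assumption):

```python
WINDOWS_AZURE_SERVICE_MANAGEMENT_API = "797f4846-ba00-4fd7-ba43-dac1f8f63013"


def policy_requires_mfa_for_api(policy: dict, api_id: str = WINDOWS_AZURE_SERVICE_MANAGEMENT_API) -> bool:
    """True if a conditional access policy enforces MFA for the given target app."""
    return (
        policy["state"] == "enabled"
        and "All" in policy["users"]["include"]
        and api_id in policy["target_resources"]["include"]
        and any("mfa" in control.lower() for control in policy["access_controls"]["grant"])
    )
```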


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_global_admin_in_less_than_five_users",
"CheckTitle": "Ensure fewer than 5 users have global administrator assignment",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "#microsoft.graph.directoryRole",
"Description": "This recommendation aims to maintain a balance between security and operational efficiency by ensuring that a minimum of 2 and a maximum of 4 users are assigned the Global Administrator role in Microsoft Entra ID. Having at least two Global Administrators ensures redundancy, while limiting the number to four reduces the risk of excessive privileged access.",
"Risk": "The Global Administrator role has extensive privileges across all services in Microsoft Entra ID. The Global Administrator role should never be used in regular daily activities; administrators should have a regular user account for daily activities, and a separate account for administrative responsibilities. Limiting the number of Global Administrators helps mitigate the risk of unauthorized access, reduces the potential impact of human error, and aligns with the principle of least privilege to reduce the attack surface of an Azure tenant. Conversely, having at least two Global Administrators ensures that administrative functions can be performed without interruption in case of unavailability of a single admin.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/best-practices#5-limit-the-number-of-global-administrators-to-less-than-5",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu 2. Select Microsoft Entra ID 3. Select Roles and Administrators 4. Select Global Administrator 5. Ensure less than 5 users are actively assigned the role. 6. Ensure that at least 2 users are actively assigned the role.",
"Url": "https://learn.microsoft.com/en-us/microsoft-365/admin/add-users/about-admin-roles?view=o365-worldwide#security-guidelines-for-assigning-roles"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Implementing this recommendation may require changes in administrative workflows or the redistribution of roles and responsibilities. Adequate training and awareness should be provided to all Global Administrators."
}


@@ -1,36 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client
class entra_global_admin_in_less_than_five_users(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for tenant_domain, directory_roles in entra_client.directory_roles.items():
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant_domain}"
report.resource_name = "Global Administrator"
if "Global Administrator" in directory_roles:
report.resource_id = getattr(
directory_roles["Global Administrator"],
"id",
"Global Administrator",
)
num_global_admins = len(
getattr(directory_roles["Global Administrator"], "members", [])
)
if num_global_admins < 5:
report.status = "PASS"
report.status_extended = (
f"There are {num_global_admins} global administrators."
)
else:
report.status_extended = f"There are {num_global_admins} global administrators. It should be less than five."
findings.append(report)
return findings


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_non_privileged_user_has_mfa",
"CheckTitle": "Ensure that 'Multi-Factor Auth Status' is 'Enabled' for all Non-Privileged Users",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "#microsoft.graph.users",
"Description": "Enable multi-factor authentication for all non-privileged users.",
"Risk": "Multi-factor authentication requires an individual to present a minimum of two separate forms of authentication before access is granted. Multi-factor authentication provides additional assurance that the individual attempting to gain access is who they claim to be. With multi-factor authentication, an attacker would need to compromise at least two different authentication mechanisms, increasing the difficulty of compromise and thus reducing the risk.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/authentication/concept-mfa-howitworks",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/ActiveDirectory/multi-factor-authentication-for-all-non-privileged-users.html#",
"Terraform": ""
},
"Recommendation": {
"Text": "Activate one of the available multi-factor authentication methods for users in Microsoft Entra ID.",
"Url": "https://learn.microsoft.com/en-us/entra/identity/authentication/tutorial-enable-azure-mfa"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Users would require two forms of authentication before any access is granted. Also, this requires an overhead for managing dual forms of authentication."
}


@@ -1,34 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client
from prowler.providers.azure.services.entra.lib.user_privileges import (
is_privileged_user,
)
class entra_non_privileged_user_has_mfa(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for tenant_domain, users in entra_client.users.items():
for user_domain_name, user in users.items():
if not is_privileged_user(
user, entra_client.directory_roles[tenant_domain]
):
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant_domain}"
report.resource_name = user_domain_name
report.resource_id = user.id
report.status_extended = (
f"Non-privileged user {user.name} does not have MFA."
)
if len(user.authentication_methods) > 1:
report.status = "PASS"
report.status_extended = (
f"Non-privileged user {user.name} has MFA."
)
findings.append(report)
return findings


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_policy_default_users_cannot_create_security_groups",
"CheckTitle": "Ensure that 'Users can create security groups in Azure portals, API or PowerShell' is set to 'No'",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "#microsoft.graph.authorizationPolicy",
"Description": "Restrict security group creation to administrators only.",
"Risk": "When creating security groups is enabled, all users in the directory are allowed to create new security groups and add members to those groups. Unless a business requires this day-to-day delegation, security group creation should be restricted to administrators only.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/users/groups-self-service-management",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/ActiveDirectory/users-can-create-security-groups.html",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu 2. Select Microsoft Entra ID 3. Select Groups 4. Select General under Settings 5. Set Users can create security groups in Azure portals, API or PowerShell to No",
"Url": ""
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Enabling this setting could create a number of requests that would need to be managed by an administrator."
}


@@ -1,30 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client
class entra_policy_default_users_cannot_create_security_groups(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for tenant_domain, auth_policy in entra_client.authorization_policy.items():
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant_domain}"
report.resource_name = getattr(auth_policy, "name", "Authorization Policy")
report.resource_id = getattr(auth_policy, "id", "authorizationPolicy")
report.status_extended = "Non-privileged users are able to create security groups via the Access Panel and the Azure administration portal."
if getattr(
auth_policy, "default_user_role_permissions", None
) and not getattr(
auth_policy.default_user_role_permissions,
"allowed_to_create_security_groups",
True,
):
report.status = "PASS"
report.status_extended = "Non-privileged users are not able to create security groups via the Access Panel and the Azure administration portal."
findings.append(report)
return findings
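The check relies on a defensive `getattr` chain so that a missing `default_user_role_permissions` attribute fails closed (stays FAIL) instead of raising. The pattern in isolation, with hypothetical stand-in classes:

```python
class Permissions:
    """Hypothetical stand-in for Entra default user role permissions."""

    allowed_to_create_security_groups = False


class AuthPolicy:
    """Hypothetical stand-in for an Entra authorization policy."""

    default_user_role_permissions = Permissions()


def default_users_blocked_from_security_groups(policy) -> bool:
    """PASS condition: permissions exist AND security group creation is disallowed.

    A missing attribute defaults toward "allowed" (True), so incomplete data
    never produces a false PASS.
    """
    permissions = getattr(policy, "default_user_role_permissions", None)
    return bool(permissions) and not getattr(
        permissions, "allowed_to_create_security_groups", True
    )
```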


@@ -15,7 +15,7 @@
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/ActiveDirectory/users-can-register-applications.html",
"Other": "",
"Terraform": ""
},
"Recommendation": {


@@ -10,14 +10,12 @@ class entra_policy_ensure_default_user_cannot_create_apps(Check):
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant_domain}"
report.resource_name = getattr(auth_policy, "name", "Authorization Policy")
report.resource_id = getattr(auth_policy, "id", "authorizationPolicy")
report.subscription = f"All from tenant '{tenant_domain}'"
report.resource_name = auth_policy.name
report.resource_id = auth_policy.id
report.status_extended = "App creation is not disabled for non-admin users."
if getattr(
auth_policy, "default_user_role_permissions", None
) and not getattr(
if auth_policy.default_user_role_permissions and not getattr(
auth_policy.default_user_role_permissions,
"allowed_to_create_apps",
True,


@@ -7,18 +7,17 @@ class entra_policy_ensure_default_user_cannot_create_tenants(Check):
findings = []
for tenant_domain, auth_policy in entra_client.authorization_policy.items():
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant_domain}"
report.resource_name = getattr(auth_policy, "name", "Authorization Policy")
report.resource_id = getattr(auth_policy, "id", "authorizationPolicy")
report.subscription = f"All from tenant '{tenant_domain}'"
report.resource_name = auth_policy.name
report.resource_id = auth_policy.id
report.status_extended = (
"Tenants creation is not disabled for non-admin users."
)
if getattr(
auth_policy, "default_user_role_permissions", None
) and not getattr(
if auth_policy.default_user_role_permissions and not getattr(
auth_policy.default_user_role_permissions,
"allowed_to_create_tenants",
True,


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_policy_guest_invite_only_for_admin_roles",
"CheckTitle": "Ensure that 'Guest invite restrictions' is set to 'Only users assigned to specific admin roles can invite guest users'",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "#microsoft.graph.authorizationPolicy",
"Description": "Restrict invitations to users with specific administrative roles only.",
"Risk": "Restricting invitations to users with specific administrator roles ensures that only authorized accounts have access to cloud resources. This helps to maintain 'Need to Know' permissions and prevents inadvertent access to data. By default the setting Guest invite restrictions is set to Anyone in the organization can invite guest users including guests and non-admins. This would allow anyone within the organization to invite guests and non-admins to the tenant, posing a security risk.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/external-id/external-collaboration-settings-configure",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu 2. Select Microsoft Entra ID 3. Then External Identities 4. Select External collaboration settings 5. Under Guest invite settings, for Guest invite restrictions, ensure that Only users assigned to specific admin roles can invite guest users is selected",
"Url": "https://learn.microsoft.com/en-us/answers/questions/685101/how-to-allow-only-admins-to-add-guests"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "With the option of Only users assigned to specific admin roles can invite guest users selected, users with specific admin roles will be in charge of sending invitations to the external users, requiring additional overhead by them to manage user accounts. This will mean coordinating with other departments as they are onboarding new users."
}


@@ -1,27 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client
class entra_policy_guest_invite_only_for_admin_roles(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for tenant_domain, auth_policy in entra_client.authorization_policy.items():
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant_domain}"
report.resource_name = getattr(auth_policy, "name", "Authorization Policy")
report.resource_id = getattr(auth_policy, "id", "authorizationPolicy")
report.status_extended = "Guest invitations are not restricted to users with specific administrative roles only."
if (
getattr(auth_policy, "guest_invite_settings", "everyone")
== "adminsAndGuestInviters"
or getattr(auth_policy, "guest_invite_settings", "everyone") == "none"
):
report.status = "PASS"
report.status_extended = "Guest invitations are restricted to users with specific administrative roles only."
findings.append(report)
return findings


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_policy_guest_users_access_restrictions",
"CheckTitle": "Ensure That 'Guest users access restrictions' is set to 'Guest user access is restricted to properties and memberships of their own directory objects'",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "#microsoft.graph.authorizationPolicy",
"Description": "Limit guest user permissions.",
"Risk": "Limiting guest access ensures that guest accounts do not have permission for certain directory tasks, such as enumerating users, groups or other directory resources, and cannot be assigned to administrative roles in your directory. Guest access has three levels of restriction. 1. Guest users have the same access as members (most inclusive), 2. Guest users have limited access to properties and memberships of directory objects (default value), 3. Guest user access is restricted to properties and memberships of their own directory objects (most restrictive). The recommended option is the 3rd, most restrictive: 'Guest user access is restricted to their own directory object'.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/users/users-restrict-guest-permissions",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu 2. Select Microsoft Entra ID 3. Then External Identities 4. Select External collaboration settings 5. Under Guest user access, change Guest user access restrictions to be Guest user access is restricted to properties and memberships of their own directory objects",
"Url": "https://learn.microsoft.com/en-us/entra/fundamentals/users-default-permissions#member-and-guest-users"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
  "Notes": "This may create additional requests for permissions to access resources that administrators will need to approve. According to https://learn.microsoft.com/en-us/azure/active-directory/enterprise-users/users-restrict-guest-permissions#services-currently-not-supported services without current support might have compatibility issues with the new guest restriction setting."
}


@@ -1,27 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.config import GUEST_USER_ACCESS_RESTRICTICTED
from prowler.providers.azure.services.entra.entra_client import entra_client
class entra_policy_guest_users_access_restrictions(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for tenant_domain, auth_policy in entra_client.authorization_policy.items():
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant_domain}"
report.resource_name = getattr(auth_policy, "name", "Authorization Policy")
report.resource_id = getattr(auth_policy, "id", "authorizationPolicy")
report.status_extended = "Guest user access is not restricted to properties and memberships of their own directory objects"
if (
getattr(auth_policy, "guest_user_role_id", None)
== GUEST_USER_ACCESS_RESTRICTICTED
):
report.status = "PASS"
report.status_extended = "Guest user access is restricted to properties and memberships of their own directory objects"
findings.append(report)
return findings


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_policy_restricts_user_consent_for_apps",
"CheckTitle": "Ensure 'User consent for applications' is set to 'Do not allow user consent'",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "#microsoft.graph.authorizationPolicy",
"Description": "Require administrators to provide consent for applications before use.",
"Risk": "If Microsoft Entra ID is running as an identity provider for third-party applications, permissions and consent should be limited to administrators or pre-approved. Malicious applications may attempt to exfiltrate data or abuse privileged user accounts.",
"RelatedUrl": "https://learn.microsoft.com/en-gb/entra/identity/enterprise-apps/configure-user-consent?pivots=portal",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/ActiveDirectory/users-can-consent-to-apps-accessing-company-data-on-their-behalf.html#",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu 2. Select Microsoft Entra ID 3. Select Enterprise Applications 4. Select Consent and permissions 5. Select User consent settings 6. Set User consent for applications to Do not allow user consent 7. Click save",
"Url": "https://learn.microsoft.com/en-us/security/benchmark/azure/mcsb-privileged-access#pa-1-separate-and-limit-highly-privilegedadministrative-users"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Enforcing this setting may create additional requests that administrators need to review."
}


@@ -1,30 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client
class entra_policy_restricts_user_consent_for_apps(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for tenant_domain, auth_policy in entra_client.authorization_policy.items():
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant_domain}"
report.resource_name = getattr(auth_policy, "name", "Authorization Policy")
report.resource_id = getattr(auth_policy, "id", "authorizationPolicy")
report.status_extended = "Entra allows users to consent apps accessing company data on their behalf"
if getattr(auth_policy, "default_user_role_permissions", None) and not any(
"ManagePermissionGrantsForSelf" in policy_assigned
for policy_assigned in getattr(
auth_policy.default_user_role_permissions,
"permission_grant_policies_assigned",
["ManagePermissionGrantsForSelf.microsoft-user-default-legacy"],
)
):
report.status = "PASS"
report.status_extended = "Entra does not allow users to consent apps accessing company data on their behalf"
findings.append(report)
return findings
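The consent check boils down to scanning the assigned permission-grant policies for the self-consent marker. A hedged sketch of that predicate, assuming the policy IDs are the plain strings Microsoft Graph returns:

```python
def user_consent_disallowed(permission_grant_policies_assigned):
    """PASS (True) when no assigned policy lets users grant consent themselves.

    Any policy ID containing "ManagePermissionGrantsForSelf" means some form
    of user self-consent is enabled, so the check fails. Function name and
    inputs are illustrative, not part of the prowler codebase.
    """
    return not any(
        "ManagePermissionGrantsForSelf" in policy
        for policy in permission_grant_policies_assigned
    )
```

Note that even the "verified publishers" policy fails this stricter check; only an empty assignment passes.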


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_policy_user_consent_for_verified_apps",
"CheckTitle": "Ensure 'User consent for applications' Is Set To 'Allow for Verified Publishers'",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "#microsoft.graph.authorizationPolicy",
"Description": "Allow users to provide consent for selected permissions when a request is coming from a verified publisher.",
"Risk": "If Microsoft Entra ID is running as an identity provider for third-party applications, permissions and consent should be limited to administrators or pre-approved. Malicious applications may attempt to exfiltrate data or abuse privileged user accounts.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/enterprise-apps/configure-user-consent?pivots=portal#configure-user-consent-to-applications",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu 2. Select Microsoft Entra ID 3. Select Enterprise Applications 4. Select Consent and permissions 5. Select User consent settings 6. Under User consent for applications, select Allow user consent for apps from verified publishers, for selected permissions 7. Select Save",
"Url": "https://learn.microsoft.com/en-us/security/benchmark/azure/mcsb-privileged-access#pa-1-separate-and-limit-highly-privilegedadministrative-users"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Enforcing this setting may create additional requests that administrators need to review."
}


@@ -1,31 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client
class entra_policy_user_consent_for_verified_apps(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for tenant_domain, auth_policy in entra_client.authorization_policy.items():
report = Check_Report_Azure(self.metadata())
report.status = "PASS"
report.subscription = f"Tenant: {tenant_domain}"
report.resource_name = getattr(auth_policy, "name", "Authorization Policy")
report.resource_id = getattr(auth_policy, "id", "authorizationPolicy")
report.status_extended = "Entra does not allow users to consent non-verified apps accessing company data on their behalf."
if getattr(auth_policy, "default_user_role_permissions", None) and any(
"ManagePermissionGrantsForSelf.microsoft-user-default-legacy"
in policy_assigned
for policy_assigned in getattr(
auth_policy.default_user_role_permissions,
"permission_grant_policies_assigned",
["ManagePermissionGrantsForSelf.microsoft-user-default-legacy"],
)
):
report.status = "FAIL"
report.status_extended = "Entra allows users to consent apps accessing company data on their behalf."
findings.append(report)
return findings
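Unlike the stricter consent check, this one only fails on the legacy "any application" policy; the verified-publishers policy still passes. A sketch, assuming (per Microsoft's documentation) that `...microsoft-user-default-legacy` is the any-app policy and `...microsoft-user-default-low` the verified-publishers one:

```python
def verified_publisher_consent_ok(permission_grant_policies_assigned):
    """PASS (True) unless the legacy policy allowing consent to any app is assigned.

    "...microsoft-user-default-legacy" lets users consent to any application
    and fails the check; "...microsoft-user-default-low" (verified publishers,
    selected permissions) still passes. Names here are illustrative.
    """
    return not any(
        "ManagePermissionGrantsForSelf.microsoft-user-default-legacy" in policy
        for policy in permission_grant_policies_assigned
    )
```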


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_privileged_user_has_mfa",
"CheckTitle": "Ensure that 'Multi-Factor Auth Status' is 'Enabled' for all Privileged Users",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "#microsoft.graph.users",
"Description": "Enable multi-factor authentication for all roles, groups, and users that have write access or permissions to Azure resources. These include custom created objects or built-in roles such as; - Service Co-Administrators - Subscription Owners - Contributors",
"Risk": "Multi-factor authentication requires an individual to present a minimum of two separate forms of authentication before access is granted. Multi-factor authentication provides additional assurance that the individual attempting to gain access is who they claim to be. With multi-factor authentication, an attacker would need to compromise at least two different authentication mechanisms, increasing the difficulty of compromise and thus reducing the risk.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/authentication/concept-mfa-howitworks",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/ActiveDirectory/multi-factor-authentication-for-all-privileged-users.html#",
"Terraform": ""
},
"Recommendation": {
"Text": "Activate one of the available multi-factor authentication methods for users in Microsoft Entra ID.",
"Url": "https://learn.microsoft.com/en-us/entra/identity/authentication/tutorial-enable-azure-mfa"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Users would require two forms of authentication before any access is granted. Additional administrative time will be required for managing dual forms of authentication when enabling multi-factor authentication."
}


@@ -1,32 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client
from prowler.providers.azure.services.entra.lib.user_privileges import (
is_privileged_user,
)
class entra_privileged_user_has_mfa(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for tenant_domain, users in entra_client.users.items():
for user_domain_name, user in users.items():
if is_privileged_user(
user, entra_client.directory_roles[tenant_domain]
):
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant_domain}"
report.resource_name = user_domain_name
report.resource_id = user.id
report.status_extended = (
f"Privileged user {user.name} does not have MFA."
)
if len(user.authentication_methods) > 1:
report.status = "PASS"
report.status_extended = f"Privileged user {user.name} has MFA."
findings.append(report)
return findings
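The MFA condition above is a counting heuristic: every user carries at least one registered authentication method (the password), so more than one registered method is read as MFA being set up. A stand-alone sketch of that rule (function name and sample method labels are illustrative):

```python
def privileged_user_has_mfa(authentication_methods):
    """True when the user has more than one registered authentication method.

    Every user has at least a password method, so a second registered method
    (authenticator app, FIDO2 key, phone, ...) is taken as evidence of MFA,
    mirroring the len(...) > 1 test in the check above.
    """
    return len(authentication_methods) > 1
```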


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_security_defaults_enabled",
"CheckTitle": "Ensure Security Defaults is enabled on Microsoft Entra ID",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "high",
"ResourceType": "#microsoft.graph.identitySecurityDefaultsEnforcementPolicy",
"Description": "Security defaults in Microsoft Entra ID make it easier to be secure and help protect your organization. Security defaults contain preconfigured security settings for common attacks. Security defaults is available to everyone. The goal is to ensure that all organizations have a basic level of security enabled at no extra cost. You may turn on security defaults in the Azure portal.",
"Risk": "Security defaults provide secure default settings that we manage on behalf of organizations to keep customers safe until they are ready to manage their own identity security settings. For example, doing the following: - Requiring all users and admins to register for MFA. - Challenging users with MFA - when necessary, based on factors such as location, device, role, and task. - Disabling authentication from legacy authentication clients, which cant do MFA.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/fundamentals/security-defaults",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/azure/ActiveDirectory/security-defaults-enabled.html#",
"Terraform": ""
},
"Recommendation": {
"Text": "1. From Azure Home select the Portal Menu. 2. Browse to Microsoft Entra ID > Properties 3. Select Manage security defaults 4. Set the Enable security defaults to Enabled 5. Select Save",
"Url": "https://techcommunity.microsoft.com/t5/microsoft-entra-blog/introducing-security-defaults/ba-p/1061414"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "This recommendation should be implemented initially and then may be overridden by other service/product specific CIS Benchmarks. Administrators should also be aware that certain configurations in Microsoft Entra ID may impact other Microsoft services such as Microsoft 365."
}


@@ -1,26 +0,0 @@
from prowler.lib.check.models import Check, Check_Report_Azure
from prowler.providers.azure.services.entra.entra_client import entra_client
class entra_security_defaults_enabled(Check):
def execute(self) -> Check_Report_Azure:
findings = []
for (
tenant,
security_default,
) in entra_client.security_default.items():
report = Check_Report_Azure(self.metadata())
report.status = "FAIL"
report.subscription = f"Tenant: {tenant}"
report.resource_name = getattr(security_default, "name", "Security Default")
report.resource_id = getattr(security_default, "id", "Security Default")
report.status_extended = "Entra security defaults is diabled."
if getattr(security_default, "is_enabled", False):
report.status = "PASS"
report.status_extended = "Entra security defaults is enabled."
findings.append(report)
return findings


@@ -1,20 +1,18 @@
import asyncio
from dataclasses import dataclass
from typing import Any, List, Optional
from uuid import UUID
from typing import Optional
from msgraph import GraphServiceClient
from msgraph.generated.models.default_user_role_permissions import (
DefaultUserRolePermissions,
)
from msgraph.generated.models.setting_value import SettingValue
from pydantic import BaseModel
from prowler.lib.logger import logger
from prowler.providers.azure.config import GUEST_USER_ACCESS_NO_RESTRICTICTED
from prowler.providers.azure.lib.service.service import AzureService
########################## Entra
class Entra(AzureService):
def __init__(self, azure_audit_info):
super().__init__(GraphServiceClient, azure_audit_info)
@@ -22,26 +20,10 @@ class Entra(AzureService):
self.authorization_policy = asyncio.get_event_loop().run_until_complete(
self.__get_authorization_policy__()
)
self.group_settings = asyncio.get_event_loop().run_until_complete(
self.__get_group_settings__()
)
self.security_default = asyncio.get_event_loop().run_until_complete(
self.__get_security_default__()
)
self.named_locations = asyncio.get_event_loop().run_until_complete(
self.__get_named_locations__()
)
self.directory_roles = asyncio.get_event_loop().run_until_complete(
self.__get_directory_roles__()
)
self.conditional_access_policy = asyncio.get_event_loop().run_until_complete(
self.__get_conditional_access_policy__()
)
async def __get_users__(self):
logger.info("Entra - Getting users...")
users = {}
try:
users = {}
for tenant, client in self.clients.items():
users_list = await client.users.get()
users.update({tenant: {}})
@@ -49,36 +31,20 @@ class Entra(AzureService):
users[tenant].update(
{
user.user_principal_name: User(
id=user.id,
name=user.display_name,
authentication_methods=(
await client.users.by_user_id(
user.id
).authentication.methods.get()
).value,
id=user.id, name=user.display_name
)
}
)
except Exception as error:
if (
error.__class__.__name__ == "ODataError"
and error.__dict__.get("response_status_code", None) == 403
):
logger.error(
"You need 'UserAuthenticationMethod.Read.All' permission to access this information. It can only be granted through Service Principal authentication."
)
else:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return users
async def __get_authorization_policy__(self):
logger.info("Entra - Getting authorization policy...")
authorization_policy = {}
try:
authorization_policy = {}
for tenant, client in self.clients.items():
auth_policy = await client.policies.authorization_policy.get()
authorization_policy.update(
@@ -90,16 +56,6 @@ class Entra(AzureService):
default_user_role_permissions=getattr(
auth_policy, "default_user_role_permissions", None
),
guest_invite_settings=(
auth_policy.allow_invites_from.value
if getattr(auth_policy, "allow_invites_from", None)
else "everyone"
),
guest_user_role_id=getattr(
auth_policy,
"guest_user_role_id",
GUEST_USER_ACCESS_NO_RESTRICTICTED,
),
)
}
)
@@ -110,202 +66,10 @@ class Entra(AzureService):
return authorization_policy
async def __get_group_settings__(self):
logger.info("Entra - Getting group settings...")
group_settings = {}
try:
for tenant, client in self.clients.items():
group_settings_list = await client.group_settings.get()
group_settings.update({tenant: {}})
for group_setting in group_settings_list.value:
group_settings[tenant].update(
{
group_setting.id: GroupSetting(
name=getattr(group_setting, "display_name", None),
template_id=getattr(group_setting, "template_id", None),
settings=getattr(group_setting, "values", []),
)
}
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return group_settings
async def __get_security_default__(self):
logger.info("Entra - Getting security default...")
try:
security_defaults = {}
for tenant, client in self.clients.items():
security_default = (
await client.policies.identity_security_defaults_enforcement_policy.get()
)
security_defaults.update(
{
tenant: SecurityDefault(
id=security_default.id,
name=security_default.display_name,
is_enabled=security_default.is_enabled,
),
}
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return security_defaults
async def __get_named_locations__(self):
logger.info("Entra - Getting named locations...")
named_locations = {}
try:
for tenant, client in self.clients.items():
named_locations_list = (
await client.identity.conditional_access.named_locations.get()
)
named_locations.update({tenant: {}})
for named_location in getattr(named_locations_list, "value", []):
named_locations[tenant].update(
{
named_location.id: NamedLocation(
name=named_location.display_name,
ip_ranges_addresses=[
getattr(ip_range, "cidr_address", None)
for ip_range in getattr(
named_location, "ip_ranges", []
)
],
is_trusted=getattr(named_location, "is_trusted", False),
)
}
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return named_locations
async def __get_directory_roles__(self):
logger.info("Entra - Getting directory roles...")
directory_roles_with_members = {}
try:
for tenant, client in self.clients.items():
directory_roles_with_members.update({tenant: {}})
directory_roles = await client.directory_roles.get()
for directory_role in directory_roles.value:
directory_role_members = (
await client.directory_roles.by_directory_role_id(
directory_role.id
).members.get()
)
directory_roles_with_members[tenant].update(
{
directory_role.display_name: DirectoryRole(
id=directory_role.id,
members=[
self.users[tenant][member.user_principal_name]
for member in directory_role_members.value
if self.users[tenant].get(
member.user_principal_name, None
)
],
)
}
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return directory_roles_with_members
async def __get_conditional_access_policy__(self):
logger.info("Entra - Getting conditional access policy...")
conditional_access_policy = {}
try:
for tenant, client in self.clients.items():
conditional_access_policies = (
await client.identity.conditional_access.policies.get()
)
conditional_access_policy.update({tenant: {}})
for policy in getattr(conditional_access_policies, "value", []):
conditions = getattr(policy, "conditions", None)
included_apps = []
excluded_apps = []
if getattr(conditions, "applications", None):
if getattr(conditions.applications, "include_applications", []):
included_apps = conditions.applications.include_applications
elif getattr(
conditions.applications, "include_user_actions", []
):
included_apps = conditions.applications.include_user_actions
if getattr(conditions.applications, "exclude_applications", []):
excluded_apps = conditions.applications.exclude_applications
elif getattr(
conditions.applications, "exclude_user_actions", []
):
excluded_apps = conditions.applications.exclude_user_actions
grant_access_controls = []
block_access_controls = []
for access_control in (
getattr(policy.grant_controls, "built_in_controls")
if policy.grant_controls
else []
):
if "Grant" in str(access_control):
grant_access_controls.append(str(access_control))
else:
block_access_controls.append(str(access_control))
conditional_access_policy[tenant].update(
{
policy.id: ConditionalAccessPolicy(
name=policy.display_name,
state=getattr(policy, "state", "None"),
users={
"include": (
getattr(conditions.users, "include_users", [])
if getattr(conditions, "users", None)
else []
),
"exclude": (
getattr(conditions.users, "exclude_users", [])
if getattr(conditions, "users", None)
else []
),
},
target_resources={
"include": included_apps,
"exclude": excluded_apps,
},
access_controls={
"grant": grant_access_controls,
"block": block_access_controls,
},
)
}
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return conditional_access_policy
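The grant/block split inside `__get_conditional_access_policy__` is a simple string partition: any built-in control whose string form contains "Grant" goes into the grant bucket, everything else into block. A self-contained sketch of that partition (the sample control strings are illustrative, not actual Graph SDK enum values):

```python
def partition_access_controls(built_in_controls):
    """Split a policy's built-in controls into grant vs block buckets.

    Mirrors the loop in the service code above: controls whose stringified
    form contains "Grant" are grant controls; everything else is treated as
    a block control.
    """
    grant, block = [], []
    for control in built_in_controls:
        (grant if "Grant" in str(control) else block).append(str(control))
    return {"grant": grant, "block": block}
```

This is the shape consumed by `ConditionalAccessPolicy.access_controls` below.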
class User(BaseModel):
id: str
name: str
authentication_methods: List[Any] = []
@dataclass
@@ -314,37 +78,3 @@ class AuthorizationPolicy:
name: str
description: str
default_user_role_permissions: Optional[DefaultUserRolePermissions]
guest_invite_settings: str
guest_user_role_id: UUID
@dataclass
class GroupSetting:
name: Optional[str]
template_id: Optional[str]
settings: List[SettingValue]
class SecurityDefault(BaseModel):
id: str
name: str
is_enabled: bool
class NamedLocation(BaseModel):
name: str
ip_ranges_addresses: List[str]
is_trusted: bool
class DirectoryRole(BaseModel):
id: str
members: List[User]
class ConditionalAccessPolicy(BaseModel):
name: str
state: str
users: dict[str, List[str]]
target_resources: dict[str, List[str]]
access_controls: dict[str, List[str]]


@@ -1,30 +0,0 @@
{
"Provider": "azure",
"CheckID": "entra_trusted_named_locations_exists",
"CheckTitle": "Ensure Trusted Locations Are Defined",
"CheckType": [],
"ServiceName": "entra",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "medium",
"ResourceType": "#microsoft.graph.ipNamedLocation",
"Description": "Microsoft Entra ID Conditional Access allows an organization to configure Named locations and configure whether those locations are trusted or untrusted. These settings provide organizations the means to specify Geographical locations for use in conditional access policies, or define actual IP addresses and IP ranges and whether or not those IP addresses and/or ranges are trusted by the organization.",
"Risk": "Defining trusted source IP addresses or ranges helps organizations create and enforce Conditional Access policies around those trusted or untrusted IP addresses and ranges. Users authenticating from trusted IP addresses and/or ranges may have less access restrictions or access requirements when compared to users that try to authenticate to Microsoft Entra ID from untrusted locations or untrusted source IP addresses/ranges.",
"RelatedUrl": "https://learn.microsoft.com/en-us/entra/identity/conditional-access/location-condition",
"Remediation": {
"Code": {
"CLI": "",
"NativeIaC": "",
"Other": "",
"Terraform": ""
},
"Recommendation": {
"Text": "1. Navigate to the Microsoft Entra ID Conditional Access Blade 2. Click on the Named locations blade 3. Within the Named locations blade, click on IP ranges location 4. Enter a name for this location setting in the Name text box 5. Click on the + sign 6. Add an IP Address Range in CIDR notation inside the text box that appears 7. Click on the Add button 8. Repeat steps 5 through 7 for each IP Range that needs to be added 9. If the information entered are trusted ranges, select the Mark as trusted location check box 10. Once finished, click on Create",
"Url": "https://learn.microsoft.com/en-us/security/benchmark/azure/mcsb-identity-management#im-7-restrict-resource-access-based-on--conditions"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "When configuring Named locations, the organization can create locations using Geographical location data or by defining source IP addresses or ranges. Configuring Named locations using a Country location does not provide the organization the ability to mark those locations as trusted, and any Conditional Access policy relying on those Countries location setting will not be able to use the All trusted locations setting within the Conditional Access policy. They instead will have to rely on the Select locations setting. This may add additional resource requirements when configuring, and will require thorough organizational testing. In general, Conditional Access policies may completely prevent users from authenticating to Microsoft Entra ID, and thorough testing is recommended. To avoid complete lockout, a 'Break Glass' account with full Global Administrator rights is recommended in the event all other administrators are locked out of authenticating to Microsoft Entra ID. This 'Break Glass' account should be excluded from Conditional Access Policies and should be configured with the longest pass phrase feasible. This account should only be used in the event of an emergency and complete administrator lockout."
}

Some files were not shown because too many files have changed in this diff.