Compare commits

..

58 Commits

Author SHA1 Message Date
Fennerr bcfdcbde30 Resolved some conflicts 2024-02-15 15:56:07 +02:00
Sergio Garcia 2f50aaa9c1 resolve conflicts 2024-01-16 11:16:11 +01:00
Nacho Rivera 537081a0f6 feat(AwsProvider): include new structure for AWS provider (#3252)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2024-01-16 10:51:15 +01:00
Sergio Garcia 2eb774bbc9 chore(manual status): change INFO to MANUAL status (#3254) 2024-01-16 10:45:00 +01:00
Sergio Garcia 5419117842 feat(status): add --status flag (#3238) 2024-01-16 10:44:39 +01:00
Sergio Garcia e72831d428 feat(kubernetes): add Kubernetes provider (#3226)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-01-16 10:43:22 +01:00
Sergio Garcia 217b8ad250 fix(gcp): fix error in generating compliance (#3201) 2024-01-16 10:42:34 +01:00
Sergio Garcia 09b4548445 feat(compliance): execute all compliance by default (#3003)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-01-16 10:42:31 +01:00
Nacho Rivera 0d96583769 feat(CloudProvider): introduce global provider Azure&GCP (#3069) 2024-01-16 10:41:11 +01:00
Sergio Garcia 722fe0a1bc chore(sts-endpoint): deprecate --sts-endpoint-region (#3046)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-01-16 10:39:14 +01:00
Sergio Garcia 445821eceb feat(mute list): change allowlist to mute list (#3039)
Co-authored-by: Nacho Rivera <nachor1992@gmail.com>
2024-01-16 10:39:11 +01:00
Nacho Rivera c3d129a4b2 chore(update): rebase from master (#3067)
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: r3drun3 <simone.ragonesi@sighup.io>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: John Mastron <14130495+mtronrd@users.noreply.github.com>
Co-authored-by: John Mastron <jmastron@jpl.nasa.gov>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: github-actions <noreply@github.com>
Co-authored-by: simone ragonesi <102741679+R3DRUN3@users.noreply.github.com>
Co-authored-by: Johnny Lu <johnny2lu@gmail.com>
Co-authored-by: Vajrala Venkateswarlu <59252985+venkyvajrala@users.noreply.github.com>
Co-authored-by: Ignacio Dominguez <ignacio.dominguez@zego.com>
2024-01-16 10:36:42 +01:00
Nacho Rivera 36fc575e40 feat(AwsProvider): include new structure for AWS provider (#3252)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2024-01-15 16:55:53 +01:00
Sergio Garcia 24efb34d91 chore(manual status): change INFO to MANUAL status (#3254) 2024-01-09 18:08:00 +01:00
Sergio Garcia c08e244c95 feat(status): add --status flag (#3238) 2024-01-09 11:35:44 +01:00
Sergio Garcia c2f8980f1f feat(kubernetes): add Kubernetes provider (#3226)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-01-09 10:31:51 +01:00
Fennerr 028d29b8ff Added rich to poetry dependencies 2023-12-21 08:43:57 +02:00
Fennerr b976cab926 Added rich to poetry dependencies 2023-12-21 08:43:47 +02:00
Fennerr 197a08ab94 Added --only-logs and some reordering 2023-12-21 08:34:49 +02:00
Fennerr 0d97780ade cleaned up execution manager,live display. Added metaclass 2023-12-20 23:22:18 +02:00
Fennerr f2f922d7e8 fixed decorator to correctly handle args 2023-12-20 12:10:34 +02:00
Fennerr 606b4b5a66 merged threading progress 2023-12-20 11:58:39 +02:00
Fennerr 132056f4c1 some more progress 2023-12-20 11:52:32 +02:00
Fennerr 4845d6033b added progress decorator 2023-12-20 11:48:40 +02:00
Fennerr 57550e6984 initial switch 2023-12-20 11:42:48 +02:00
Fennerr 040b780af7 WIP: improved layout 2023-12-20 00:14:53 +02:00
Fennerr abaa7855d7 Pull rebase from master 2023-12-19 21:55:18 +02:00
Fennerr e9c6b35698 WIP: added verbose results and timer 2023-12-19 21:54:26 +02:00
Fennerr c92740869f WIP: centered results table 2023-12-19 14:13:59 +02:00
Fennerr 49003fae08 WIP: added results table 2023-12-19 13:52:54 +02:00
Fennerr 01f3c8656c WIP: improved layout 2023-12-18 23:33:43 +02:00
Fennerr ba705406ff Moved all the check execution logic into execution manager 2023-12-18 14:37:41 +02:00
Fennerr d8101acc9c Moved all the check execution logic into execution manager 2023-12-18 14:37:14 +02:00
Sergio Garcia 0ef85b3dee fix(gcp): fix error in generating compliance (#3201) 2023-12-18 12:10:58 +01:00
Fennerr 126acc046a Added execution manager and live display 2023-12-15 12:28:25 +02:00
Fennerr f324f27016 updated ui redesign implementation 2023-12-13 23:00:12 +02:00
Sergio Garcia 93a2431211 feat(compliance): execute all compliance by default (#3003)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-13 17:31:39 +01:00
Fennerr 5b80082491 merged issue-2516 2023-12-13 15:17:13 +02:00
Pepe Fagoaga 2ca4656ef9 fix(lambda): Do not use function.code 2023-12-13 13:53:36 +01:00
Pepe Fagoaga cb4de850e9 fix(lambda): Do not use function.code 2023-12-13 13:51:53 +01:00
Fennerr 92e0d74055 Keeping the code seperate from the function obj 2023-12-13 14:51:17 +02:00
Fennerr 578b21f424 Fixed error log message 2023-12-13 14:36:59 +02:00
Fennerr 85c44f01c5 Initial progress 2023-12-13 14:23:49 +02:00
Pepe Fagoaga fb5d6cfd7e refactor(lambda): fetch code 2023-12-13 13:16:13 +01:00
Pepe Fagoaga 1b3f830623 test(lambda): fix tests 2023-12-13 13:15:23 +01:00
Nacho Rivera 1fe74937c1 feat(CloudProvider): introduce global provider Azure&GCP (#3069) 2023-12-12 18:05:17 +01:00
Sergio Garcia 6ee016e577 chore(sts-endpoint): deprecate --sts-endpoint-region (#3046)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-12 17:13:50 +01:00
Sergio Garcia f7248dfb1c feat(mute list): change allowlist to mute list (#3039)
Co-authored-by: Nacho Rivera <nachor1992@gmail.com>
2023-12-12 16:57:52 +01:00
Fennerr 0481435846 Made use of service thread_pool 2023-12-12 16:10:34 +02:00
Fennerr 5554e2be1b Merge branch 'master' into issue-2516 2023-12-12 14:58:10 +02:00
Fennerr e97e2e84fc Merge branch 'master' of https://github.com/prowler-cloud/prowler 2023-12-12 14:57:53 +02:00
Fennerr 19f38dbb63 Modified logging statements 2023-12-12 14:25:29 +02:00
Fennerr 06d9eccebd Added threading 2023-12-12 12:22:38 +02:00
Fennerr 5dfd8460be Improved threading for EC2 2023-12-11 11:06:32 +02:00
Fennerr f71052bcfe Update awslambda_service.py
fixed blank space + removed the if statement in __init__ which I should have previously removed
2023-12-06 09:19:44 +02:00
Fennerr 7bfdb8c1f3 Update awslambda_service.py
removed function param - it is not needed
2023-12-05 22:18:36 +02:00
Justin Moorcroft dedb03cc6e initial fix for issue #2516 2023-12-05 22:11:32 +02:00
Nacho Rivera 856afb3966 chore(update): rebase from master (#3067)
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: r3drun3 <simone.ragonesi@sighup.io>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: John Mastron <14130495+mtronrd@users.noreply.github.com>
Co-authored-by: John Mastron <jmastron@jpl.nasa.gov>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: github-actions <noreply@github.com>
Co-authored-by: simone ragonesi <102741679+R3DRUN3@users.noreply.github.com>
Co-authored-by: Johnny Lu <johnny2lu@gmail.com>
Co-authored-by: Vajrala Venkateswarlu <59252985+venkyvajrala@users.noreply.github.com>
Co-authored-by: Ignacio Dominguez <ignacio.dominguez@zego.com>
2023-11-27 13:58:45 +01:00
1189 changed files with 11598 additions and 36831 deletions
+1 -1
@@ -1,6 +1,6 @@
name: 💡 Feature Request
description: Suggest an idea for this project
labels: ["feature-request", "status/needs-triage"]
labels: ["enhancement", "status/needs-triage"]
body:
-5
@@ -13,8 +13,3 @@ updates:
labels:
- "dependencies"
- "pip"
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
target-branch: master
-27
@@ -1,27 +0,0 @@
documentation:
- changed-files:
- any-glob-to-any-file: "docs/**"
provider/aws:
- changed-files:
- any-glob-to-any-file: "prowler/providers/aws/**"
- any-glob-to-any-file: "tests/providers/aws/**"
provider/azure:
- changed-files:
- any-glob-to-any-file: "prowler/providers/azure/**"
- any-glob-to-any-file: "tests/providers/azure/**"
provider/gcp:
- changed-files:
- any-glob-to-any-file: "prowler/providers/gcp/**"
- any-glob-to-any-file: "tests/providers/gcp/**"
provider/kubernetes:
- changed-files:
- any-glob-to-any-file: "prowler/providers/kubernetes/**"
- any-glob-to-any-file: "tests/providers/kubernetes/**"
github_actions:
- changed-files:
- any-glob-to-any-file: ".github/workflows/*"
@@ -1,24 +0,0 @@
name: Pull Request Documentation Link
on:
pull_request:
branches:
- 'master'
- 'prowler-4.0-dev'
paths:
- 'docs/**'
env:
PR_NUMBER: ${{ github.event.pull_request.number }}
jobs:
documentation-link:
name: Documentation Link
runs-on: ubuntu-latest
steps:
- name: Leave PR comment with the SaaS Documentation URI
uses: peter-evans/create-or-update-comment@v4
with:
issue-number: ${{ env.PR_NUMBER }}
body: |
You can check the documentation for this PR here -> [SaaS Documentation](https://prowler-prowler-docs--${{ env.PR_NUMBER }}.com.readthedocs.build/projects/prowler-open-source/en/${{ env.PR_NUMBER }}/)
@@ -32,11 +32,11 @@ jobs:
POETRY_VIRTUALENVS_CREATE: "false"
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
- name: Setup python (release)
if: github.event_name == 'release'
uses: actions/setup-python@v5
uses: actions/setup-python@v2
with:
python-version: ${{ env.PYTHON_VERSION }}
@@ -52,13 +52,13 @@ jobs:
poetry version ${{ github.event.release.tag_name }}
- name: Login to DockerHub
uses: docker/login-action@v3
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to Public ECR
uses: docker/login-action@v3
uses: docker/login-action@v2
with:
registry: public.ecr.aws
username: ${{ secrets.PUBLIC_ECR_AWS_ACCESS_KEY_ID }}
@@ -67,11 +67,11 @@ jobs:
AWS_REGION: ${{ env.AWS_REGION }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
- name: Build and push container image (latest)
if: github.event_name == 'push'
uses: docker/build-push-action@v5
uses: docker/build-push-action@v2
with:
push: true
tags: |
@@ -83,7 +83,7 @@ jobs:
- name: Build and push container image (release)
if: github.event_name == 'release'
uses: docker/build-push-action@v5
uses: docker/build-push-action@v2
with:
# Use local context to get changes
# https://github.com/docker/build-push-action#path-context
+3 -3
@@ -37,11 +37,11 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v3
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
@@ -52,6 +52,6 @@ jobs:
# queries: security-extended,security-and-quality
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
uses: github/codeql-action/analyze@v2
with:
category: "/language:${{matrix.language}}"
+2 -2
@@ -7,11 +7,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: TruffleHog OSS
uses: trufflesecurity/trufflehog@v3.69.0
uses: trufflesecurity/trufflehog@v3.4.4
with:
path: ./
base: ${{ github.event.repository.default_branch }}
-16
@@ -1,16 +0,0 @@
name: "Pull Request Labeler"
on:
pull_request_target:
branches:
- "master"
- "prowler-4.0-dev"
jobs:
labeler:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v5
+5 -5
@@ -14,13 +14,13 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12"]
python-version: ["3.9", "3.10", "3.11"]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@v42
uses: tj-actions/changed-files@v39
with:
files: ./**
files_ignore: |
@@ -36,7 +36,7 @@ jobs:
pipx install poetry
- name: Set up Python ${{ matrix.python-version }}
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: actions/setup-python@v5
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: "poetry"
@@ -88,6 +88,6 @@ jobs:
poetry run pytest -n auto --cov=./prowler --cov-report=xml tests
- name: Upload coverage reports to Codecov
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: codecov/codecov-action@v4
uses: codecov/codecov-action@v3
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
+3 -3
@@ -16,7 +16,7 @@ jobs:
name: Release Prowler to PyPI
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v4
- uses: actions/checkout@v3
with:
ref: ${{ env.GITHUB_BRANCH }}
- name: Install dependencies
@@ -24,7 +24,7 @@ jobs:
pipx install poetry
pipx inject poetry poetry-bumpversion
- name: setup python
uses: actions/setup-python@v5
uses: actions/setup-python@v4
with:
python-version: 3.9
cache: 'poetry'
@@ -44,7 +44,7 @@ jobs:
poetry publish
# Create pull request with new version
- name: Create Pull Request
uses: peter-evans/create-pull-request@v6
uses: peter-evans/create-pull-request@v4
with:
token: ${{ secrets.PROWLER_ACCESS_TOKEN }}
commit-message: "chore(release): update Prowler Version to ${{ env.RELEASE_TAG }}."
@@ -23,12 +23,12 @@ jobs:
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v4
- uses: actions/checkout@v3
with:
ref: ${{ env.GITHUB_BRANCH }}
- name: setup python
uses: actions/setup-python@v5
uses: actions/setup-python@v2
with:
python-version: 3.9 #install the python needed
@@ -38,7 +38,7 @@ jobs:
pip install boto3
- name: Configure AWS Credentials -- DEV
uses: aws-actions/configure-aws-credentials@v4
uses: aws-actions/configure-aws-credentials@v1
with:
aws-region: ${{ env.AWS_REGION_DEV }}
role-to-assume: ${{ secrets.DEV_IAM_ROLE_ARN }}
@@ -50,12 +50,12 @@ jobs:
# Create pull request
- name: Create Pull Request
uses: peter-evans/create-pull-request@v6
uses: peter-evans/create-pull-request@v4
with:
token: ${{ secrets.PROWLER_ACCESS_TOKEN }}
commit-message: "feat(regions_update): Update regions for AWS services."
branch: "aws-services-regions-updated-${{ github.sha }}"
labels: "status/waiting-for-revision, severity/low, provider/aws"
labels: "status/waiting-for-revision, severity/low"
title: "chore(regions_update): Changes in regions for AWS services."
body: |
### Description
+15 -9
@@ -1,7 +1,7 @@
repos:
## GENERAL
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
rev: v4.4.0
hooks:
- id: check-merge-conflict
- id: check-yaml
@@ -15,7 +15,7 @@ repos:
## TOML
- repo: https://github.com/macisamuele/language-formatters-pre-commit-hooks
rev: v2.12.0
rev: v2.10.0
hooks:
- id: pretty-format-toml
args: [--autofix]
@@ -28,7 +28,7 @@ repos:
- id: shellcheck
## PYTHON
- repo: https://github.com/myint/autoflake
rev: v2.2.1
rev: v2.2.0
hooks:
- id: autoflake
args:
@@ -39,25 +39,25 @@ repos:
]
- repo: https://github.com/timothycrosley/isort
rev: 5.13.2
rev: 5.12.0
hooks:
- id: isort
args: ["--profile", "black"]
- repo: https://github.com/psf/black
rev: 24.1.1
rev: 22.12.0
hooks:
- id: black
- repo: https://github.com/pycqa/flake8
rev: 7.0.0
rev: 6.1.0
hooks:
- id: flake8
exclude: contrib
args: ["--ignore=E266,W503,E203,E501,W605"]
- repo: https://github.com/python-poetry/poetry
rev: 1.7.0
rev: 1.6.0 # add version here
hooks:
- id: poetry-check
- id: poetry-lock
@@ -80,12 +80,18 @@ repos:
- id: trufflehog
name: TruffleHog
description: Detect secrets in your data.
entry: bash -c 'trufflehog --no-update git file://. --only-verified --fail'
# entry: bash -c 'trufflehog git file://. --only-verified --fail'
# For running trufflehog in docker, use the following entry instead:
# entry: bash -c 'docker run -v "$(pwd):/workdir" -i --rm trufflesecurity/trufflehog:latest git file:///workdir --only-verified --fail'
entry: bash -c 'docker run -v "$(pwd):/workdir" -i --rm trufflesecurity/trufflehog:latest git file:///workdir --only-verified --fail'
language: system
stages: ["commit", "push"]
- id: pytest-check
name: pytest-check
entry: bash -c 'pytest tests -n auto'
language: system
files: '.*\.py'
- id: bandit
name: bandit
description: "Bandit is a tool for finding common security issues in Python code"
+5 -7
@@ -8,18 +8,16 @@ version: 2
build:
os: "ubuntu-22.04"
tools:
python: "3.11"
python: "3.9"
jobs:
post_create_environment:
# Install poetry
# https://python-poetry.org/docs/#installing-manually
- python -m pip install poetry
- pip install poetry
# Tell poetry to not use a virtual environment
- poetry config virtualenvs.create false
post_install:
# Install dependencies with 'docs' dependency group
# https://python-poetry.org/docs/managing-dependencies/#dependency-groups
# VIRTUAL_ENV needs to be set manually for now.
# See https://github.com/readthedocs/readthedocs.org/pull/11152/
- VIRTUAL_ENV=${READTHEDOCS_VIRTUALENV_PATH} python -m poetry install --only=docs
- poetry install -E docs
mkdocs:
configuration: mkdocs.yml
+1 -1
@@ -55,7 +55,7 @@ further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at [support.prowler.com](https://customer.support.prowler.com/servicedesk/customer/portals). All
reported by contacting the project team at community@prowler.cloud. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
+1 -1
@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright @ 2024 Toni de la Fuente
Copyright 2018 Netflix, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
+15 -23
@@ -1,31 +1,24 @@
<p align="center">
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/master/docs/img/prowler-logo-black.png?raw=True#gh-light-mode-only" width="350" height="115">
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/master/docs/img/prowler-logo-white.png?raw=True#gh-dark-mode-only" width="350" height="115">
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/62c1ce73bbcdd6b9e5ba03dfcae26dfd165defd9/docs/img/prowler-pro-dark.png?raw=True#gh-dark-mode-only" width="150" height="36">
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/62c1ce73bbcdd6b9e5ba03dfcae26dfd165defd9/docs/img/prowler-pro-light.png?raw=True#gh-light-mode-only" width="15%" height="15%">
</p>
<p align="center">
<b><i>Prowler SaaS </b> and <b>Prowler Open Source</b> are as dynamic and adaptable as the environment they're meant to protect. Trusted by the leaders in security.
<b><i>See all the things you and your team can do with ProwlerPro at <a href="https://prowler.pro">prowler.pro</a></i></b>
</p>
<p align="center">
<b>Learn more at <a href="https://prowler.com">prowler.com</i></b>
</p>
<p align="center">
<a href="https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog"><img width="30" height="30" alt="Prowler community on Slack" src="https://github.com/prowler-cloud/prowler/assets/3985464/3617e470-670c-47c9-9794-ce895ebdb627"></a>
<br>
<a href="https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog">Join our Prowler community!</a>
</p>
<hr>
<p align="center">
<img src="https://user-images.githubusercontent.com/3985464/113734260-7ba06900-96fb-11eb-82bc-d4f68a1e2710.png" />
</p>
<p align="center">
<a href="https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog"><img alt="Slack Shield" src="https://img.shields.io/badge/slack-prowler-brightgreen.svg?logo=slack"></a>
<a href="https://pypi.org/project/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/v/prowler.svg"></a>
<a href="https://pypi.python.org/pypi/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/pyversions/prowler.svg"></a>
<a href="https://pypistats.org/packages/prowler"><img alt="PyPI Prowler Downloads" src="https://img.shields.io/pypi/dw/prowler.svg?label=prowler%20downloads"></a>
<a href="https://pypistats.org/packages/prowler-cloud"><img alt="PyPI Prowler-Cloud Downloads" src="https://img.shields.io/pypi/dw/prowler-cloud.svg?label=prowler-cloud%20downloads"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/toniblyx/prowler"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/cloud/build/toniblyx/prowler"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/image-size/toniblyx/prowler"></a>
<a href="https://gallery.ecr.aws/prowler-cloud/prowler"><img width="120" height="19" alt="AWS ECR Gallery" src="https://user-images.githubusercontent.com/3985464/151531396-b6535a68-c907-44eb-95a1-a09508178616.png"></a>
<a href="https://codecov.io/gh/prowler-cloud/prowler"><img src="https://codecov.io/gh/prowler-cloud/prowler/graph/badge.svg?token=OflBGsdpDl"/></a>
</p>
<p align="center">
<a href="https://github.com/prowler-cloud/prowler"><img alt="Repo size" src="https://img.shields.io/github/repo-size/prowler-cloud/prowler"></a>
@@ -37,7 +30,6 @@
<a href="https://twitter.com/ToniBlyx"><img alt="Twitter" src="https://img.shields.io/twitter/follow/toniblyx?style=social"></a>
<a href="https://twitter.com/prowlercloud"><img alt="Twitter" src="https://img.shields.io/twitter/follow/prowlercloud?style=social"></a>
</p>
<hr>
# Description
@@ -45,16 +37,16 @@
It contains hundreds of controls covering CIS, NIST 800, NIST CSF, CISA, RBI, FedRAMP, PCI-DSS, GDPR, HIPAA, FFIEC, SOC2, GXP, AWS Well-Architected Framework Security Pillar, AWS Foundational Technical Review (FTR), ENS (Spanish National Security Scheme) and your custom security frameworks.
| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) |
| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.cloud/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.cloud/en/latest/tutorials/misc/#categories) |
|---|---|---|---|---|
| AWS | 302 | 61 -> `prowler aws --list-services` | 27 -> `prowler aws --list-compliance` | 6 -> `prowler aws --list-categories` |
| AWS | 301 | 61 -> `prowler aws --list-services` | 25 -> `prowler aws --list-compliance` | 5 -> `prowler aws --list-categories` |
| GCP | 73 | 11 -> `prowler gcp --list-services` | 1 -> `prowler gcp --list-compliance` | 2 -> `prowler gcp --list-categories`|
| Azure | 91 | 14 -> `prowler azure --list-services` | CIS soon | 2 -> `prowler azure --list-categories` |
| Kubernetes | Work In Progress | - | CIS soon | - |
| Azure | 23 | 4 -> `prowler azure --list-services` | CIS soon | 1 -> `prowler azure --list-categories` |
| Kubernetes | Planned | - | - | - |
# 📖 Documentation
The full documentation can now be found at [https://docs.prowler.com](https://docs.prowler.com/projects/prowler-open-source/en/latest/)
The full documentation can now be found at [https://docs.prowler.cloud](https://docs.prowler.cloud)
## Looking for Prowler v2 documentation?
For Prowler v2 Documentation, please go to https://github.com/prowler-cloud/prowler/tree/2.12.1.
@@ -62,13 +54,13 @@ For Prowler v2 Documentation, please go to https://github.com/prowler-cloud/prow
# ⚙️ Install
## Pip package
Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-cloud/), thus can be installed using pip with Python >= 3.9, < 3.13:
Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-cloud/), thus can be installed using pip with Python >= 3.9:
```console
pip install prowler
prowler -v
```
More details at [https://docs.prowler.com](https://docs.prowler.com/projects/prowler-open-source/en/latest/)
More details at https://docs.prowler.cloud
## Containers
@@ -85,7 +77,7 @@ The container images are available here:
## From Github
Python >= 3.9, < 3.13 is required with pip and poetry:
Python >= 3.9 is required with pip and poetry:
```
git clone https://github.com/prowler-cloud/prowler
+1 -1
@@ -14,7 +14,7 @@ As an **AWS Partner** and we have passed the [AWS Foundation Technical Review (F
If you would like to report a vulnerability or have a security concern regarding Prowler Open Source or ProwlerPro service, please submit the information by contacting to help@prowler.pro.
The information you share with ProwlerPro as part of this process is kept confidential within ProwlerPro. We will only share this information with a third party if the vulnerability you report is found to affect a third-party product, in which case we will share this information with the third-party product's author or manufacturer. Otherwise, we will only share this information as permitted by you.
The information you share with Verica as part of this process is kept confidential within Verica and the Prowler team. We will only share this information with a third party if the vulnerability you report is found to affect a third-party product, in which case we will share this information with the third-party product's author or manufacturer. Otherwise, we will only share this information as permitted by you.
We will review the submitted report, and assign it a tracking number. We will then respond to you, acknowledging receipt of the report, and outline the next steps in the process.
+5 -8
@@ -101,8 +101,8 @@ All the checks MUST fill the `report.status` and `report.status_extended` with t
- Status -- `report.status`
- `PASS` --> If the check is passing against the configured value.
- `FAIL` --> If the check is failing against the configured value.
- `INFO` --> This value cannot be used unless a manual operation is required in order to determine if the `report.status` is whether `PASS` or `FAIL`.
- `FAIL` --> If the check is passing against the configured value.
- `MANUAL` --> This value cannot be used unless a manual operation is required in order to determine if the `report.status` is whether `PASS` or `FAIL`.
- Status Extended -- `report.status_extended`
- MUST end in a dot `.`
- MUST include the service audited with the resource and a brief explanation of the result generated, e.g.: `EC2 AMI ami-0123456789 is not public.`
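The status conventions in this hunk can be sketched as follows. This is a hypothetical illustration only, using a stand-in `Report` dataclass rather than Prowler's real report class; the status strings and the "ends in a dot, names the resource" rule come from the hunk above.

```python
# Minimal stand-in for a Prowler check report; the real class lives in the
# Prowler codebase, this dataclass exists only to illustrate the rules above.
from dataclasses import dataclass


@dataclass
class Report:
    status: str = ""
    status_extended: str = ""


report = Report()
ami_is_public = False  # hypothetical result of the check's inspection

# PASS if the check passes against the configured value, FAIL otherwise;
# MANUAL is reserved for cases needing a manual operation to decide.
report.status = "FAIL" if ami_is_public else "PASS"

# status_extended must name the audited service/resource and end in a dot.
report.status_extended = "EC2 AMI ami-0123456789 is not public."
```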
@@ -125,7 +125,7 @@ All the checks MUST fill the `report.resource_id` and `report.resource_arn` with
- Resource ARN -- `report.resource_arn`
- AWS Account --> Root ARN `arn:aws:iam::123456789012:root`
- AWS Resource --> Resource ARN
- Root resource --> Resource Type ARN `f"arn:{service_client.audited_partition}:<service_name>:{service_client.region}:{service_client.audited_account}:<resource_type>"`
- Root resource --> Root ARN `arn:aws:iam::123456789012:root`
- GCP
- Resource ID -- `report.resource_id`
- GCP Resource --> Resource ID
@@ -196,17 +196,14 @@ aws:
As you can see in the above code, within the service client, in this case the `ec2_client`, there is an object called `audit_config` which is a Python dictionary containing the values read from the configuration file.
In order to use it, you have to check first if the value is present in the configuration file. If the value is not present, you can create it in the `config.yaml` file and then, read it from the check.
???+ note
It is mandatory to always use the `dictionary.get(value, default)` syntax to set a default value in the case the configuration value is not present.
> It is mandatory to always use the `dictionary.get(value, default)` syntax to set a default value in the case the configuration value is not present.
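A minimal sketch of the `audit_config` pattern described in this hunk; the key names and values here are hypothetical, but the mandatory `dictionary.get(value, default)` rule is the one stated above.

```python
# A service client (e.g. ec2_client) exposes `audit_config`, a plain dict
# parsed from the config.yaml configuration file.
audit_config = {"max_ec2_instance_age_in_days": 180}  # hypothetical contents

# Always read with .get(key, default) so a value missing from the
# configuration file falls back to a default instead of raising KeyError.
max_age = audit_config.get("max_ec2_instance_age_in_days", 90)
threshold = audit_config.get("missing_setting", 30)  # key absent -> default
```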
## Check Metadata
Each Prowler check has metadata associated which is stored at the same level of the check's folder in a file called A `check_name.metadata.json` containing the check's metadata.
???+ note
We are going to include comments in this example metadata JSON but they cannot be included because the JSON format does not allow comments.
> We are going to include comments in this example metadata JSON but they cannot be included because the JSON format does not allow comments.
```json
{
-45
@@ -1,45 +0,0 @@
# Debugging
Debugging in Prowler make things easier!
If you are developing Prowler, it's possible that you will encounter some situations where you have to inspect the code in depth to fix some unexpected issues during the execution. To do that, if you are using VSCode you can run the code using the integrated debugger. Please, refer to this [documentation](https://code.visualstudio.com/docs/editor/debugging) for guidance about the debugger in VSCode.
The following file is an example of the [debugging configuration](https://code.visualstudio.com/docs/editor/debugging#_launch-configurations) file that you can add to [Virtual Studio Code](https://code.visualstudio.com/).
This file should inside the *.vscode* folder and its name has to be *launch.json*:
```json
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "prowler.py",
"args": [
"aws",
"-f",
"eu-west-1",
"--service",
"cloudwatch",
"--log-level",
"ERROR",
"-p",
"dev",
],
"console": "integratedTerminal",
"justMyCode": false
},
{
"name": "Python: Debug Tests",
"type": "python",
"request": "launch",
"program": "${file}",
"purpose": [
"debug-test"
],
"console": "integratedTerminal",
"justMyCode": false
}
]
}
```
+3 -5
@@ -1,6 +1,6 @@
# Developer Guide
You can extend Prowler Open Source in many different ways, in most cases you will want to create your own checks and compliance security frameworks, here is where you can learn about how to get started with it. We also include how to create custom outputs, integrations and more.
You can extend Prowler in many different ways, in most cases you will want to create your own checks and compliance security frameworks, here is where you can learn about how to get started with it. We also include how to create custom outputs, integrations and more.
## Get the code and install all dependencies
@@ -16,7 +16,7 @@ pip install poetry
```
Then install all dependencies including the ones for developers:
```
poetry install --with dev
poetry install
poetry shell
```
@@ -31,9 +31,7 @@ You should get an output like the following:
pre-commit installed at .git/hooks/pre-commit
```
Before we merge any of your pull requests we pass checks to the code, we use the following tools and automation to make sure the code is secure and dependencies up-to-dated:
???+ note
These should have been already installed if you ran `poetry install --with dev`
Before we merge any of your pull requests we pass checks to the code, we use the following tools and automation to make sure the code is secure and dependencies up-to-dated (these should have been already installed if you ran `pipenv install -d`):
- [`bandit`](https://pypi.org/project/bandit/) for code security review.
- [`safety`](https://pypi.org/project/safety/) and [`dependabot`](https://github.com/features/security) for dependencies.
@@ -23,7 +23,7 @@ Each file version of a framework will have the following structure at high level
"Requirements": [
{
"Id": "<unique-id>",
"Description": "Requirement full description",
"Description": "Requiemente full description",
"Checks": [
"Here is the prowler check or checks that is going to be executed"
],
@@ -38,4 +38,4 @@ Each file version of a framework will have the following structure at high level
}
```
Finally, to produce a proper output file for your reports, your framework data model has to be created in `prowler/lib/outputs/models.py`, along with the CLI table output in `prowler/lib/outputs/compliance.py`. You also need to add a new conditional in `prowler/lib/outputs/file_descriptors.py` if you create a new CSV model.
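As an illustration, a minimal framework file following that structure could look like this (the framework name, version field, requirement id and check name below are hypothetical placeholders, not a shipped framework):

```json
{
  "Framework": "ACME-Security-Framework",
  "Version": "1.0",
  "Requirements": [
    {
      "Id": "1.1",
      "Description": "Requirement full description",
      "Checks": [
        "iam_password_policy_uppercase"
      ]
    }
  ]
}
```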
@@ -40,15 +40,13 @@ Other commands to run tests:
- Run tests for a provider service: `pytest -n auto -vvv -s -x tests/providers/<provider>/services/<service>`
- Run tests for a provider check: `pytest -n auto -vvv -s -x tests/providers/<provider>/services/<service>/<check>`
???+ note
Refer to the [pytest documentation](https://docs.pytest.org/en/7.1.x/getting-started.html) for more information.
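For instance, to run all the IAM service tests for the AWS provider in parallel (the path follows the repository layout referenced above; `-n auto` assumes the `pytest-xdist` plugin used by the test suite):

```console
pytest -n auto -vvv -s -x tests/providers/aws/services/iam
```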
## AWS
For the AWS provider we have ways to test a Prowler check based on the following criteria:
???+ note
We use and contribute to the [Moto](https://github.com/getmoto/moto) library which allows us to easily mock out tests based on AWS infrastructure. **It's awesome!**
- AWS API calls covered by [Moto](https://github.com/getmoto/moto):
- Service tests with `@mock_<service>`
@@ -197,8 +195,7 @@ class Test_iam_password_policy_uppercase:
If the IAM service for the checks we want to test is not covered by Moto, we have to inject the objects into the service client using [MagicMock](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock). As we have pointed out above, we cannot instantiate the service since it would make real calls to the AWS APIs.
???+ note
The following example uses the IAM GetAccountPasswordPolicy which is covered by Moto but this is only for demonstration purposes.
The following code shows how to use MagicMock to create the service objects.
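As a minimal, self-contained illustration of the pattern (the client and attribute names here are invented for demonstration, not Prowler's actual models):

```python
from unittest.mock import MagicMock

# Build a fake service client and inject the objects a check would read,
# instead of instantiating the real service (which would call AWS).
iam_client = MagicMock()
iam_client.password_policy = MagicMock(require_uppercase=True)

# A check receiving this client sees the injected attributes as if they
# had been populated by real AWS API calls.
assert iam_client.password_policy.require_uppercase is True
```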
@@ -328,8 +325,7 @@ class Test_iam_password_policy_uppercase:
Note that this does not use Moto, to keep it simple, but if you use any `moto`-decorators in addition to the patch, the call to `orig(self, operation_name, kwarg)` will be intercepted by Moto.
???+ note
The above code comes from https://docs.getmoto.org/en/latest/docs/services/patching_other_services.html
#### Mocking more than one service
@@ -389,7 +385,7 @@ with mock.patch(
"prowler.providers.<provider>.lib.audit_info.audit_info.audit_info",
new=audit_info,
), mock.patch(
"prowler.providers.<provider>.services.<service>.<check>.<check>.<service>_client",
"prowler.providers.aws.services.<service>.<check>.<check>.<service>_client",
new=<SERVICE>(audit_info),
):
```
@@ -411,10 +407,10 @@ with mock.patch(
"prowler.providers.<provider>.lib.audit_info.audit_info.audit_info",
new=audit_info,
), mock.patch(
"prowler.providers.<provider>.services.<service>.<SERVICE>",
"prowler.providers.aws.services.<service>.<SERVICE>",
new=<SERVICE>(audit_info),
) as service_client, mock.patch(
"prowler.providers.<provider>.services.<service>.<service>_client.<service>_client",
"prowler.providers.aws.services.<service>.<service>_client.<service>_client",
new=service_client,
):
```
@@ -527,7 +523,7 @@ from unittest import mock
from uuid import uuid4
# Azure Constants
from tests.providers.azure.azure_fixtures import AZURE_SUBSCRIPTION
@@ -546,7 +542,7 @@ class Test_defender_ensure_defender_for_arm_is_on:
# Create the custom Defender object to be tested
defender_client.pricings = {
AZURE_SUBSCRIPTION: {
"Arm": Defender_Pricing(
resource_id=resource_id,
pricing_tier="Not Standard",
@@ -584,9 +580,9 @@ class Test_defender_ensure_defender_for_arm_is_on:
assert result[0].status == "FAIL"
assert (
result[0].status_extended
== f"Defender plan Defender for ARM from subscription {AZURE_SUBSCRIPTION} is set to OFF (pricing tier not standard)"
== f"Defender plan Defender for ARM from subscription {AZURE_SUSCRIPTION} is set to OFF (pricing tier not standard)"
)
assert result[0].subscription == AZURE_SUBSCRIPTION
assert result[0].resource_name == "Defender plan ARM"
assert result[0].resource_id == resource_id
```
@@ -5,7 +5,7 @@ Prowler has been written in Python using the [AWS SDK (Boto3)](https://boto3.ama
Since Prowler uses AWS Credentials under the hood, you can follow any authentication method as described [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-precedence).
### Authentication
Make sure you have properly configured your AWS CLI with a valid Access Key and Region, or declare the AWS environment variables properly (or use an instance profile/role):
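For example, using environment variables (the values below are AWS's documented placeholder credentials, not real ones):

```shell
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="eu-west-1"
# Or rely on an instance profile / assumed role instead of static keys
```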
@@ -26,8 +26,9 @@ Those credentials must be associated to a user or role with proper permissions t
- `arn:aws:iam::aws:policy/SecurityAudit`
- `arn:aws:iam::aws:policy/job-function/ViewOnlyAccess`
???+ note
Moreover, some read-only additional permissions are needed for several checks, make sure you attach also the custom policy [prowler-additions-policy.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-additions-policy.json) to the role you are using. If you want Prowler to send findings to [AWS Security Hub](https://aws.amazon.com/security-hub), make sure you also attach the custom policy [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-security-hub.json).
### Multi-Factor Authentication
@@ -38,7 +39,7 @@ If your IAM entity enforces MFA you can use `--mfa` and Prowler will ask you to
## Azure
Prowler for Azure supports the following authentication types:
- Service principal authentication by environment variables (Enterprise Application)
- Current az cli credentials stored
@@ -62,7 +63,7 @@ The other three cases does not need additional configuration, `--az-cli-auth` an
### Permissions
To use each one you need to pass the proper flag to the execution. Prowler for Azure handles two types of permission scopes, which are:
- **Azure Active Directory permissions**: Used to retrieve metadata from the identity assumed by Prowler and future AAD checks (not mandatory to have access to execute the tool)
- **Subscription scope permissions**: Required to launch the checks against your resources, mandatory to launch the tool.
@@ -70,51 +71,25 @@ To use each one you need to pass the proper flag to the execution. Prowler for A
#### Azure Active Directory scope
Microsoft Entra ID (formerly Azure Active Directory) permissions required by the tool are the following:
- `Directory.Read.All`
- `Policy.Read.All`
The best way to assign it is through the Azure web console:
1. Access to Microsoft Entra ID
2. In the left menu bar, go to "App registrations"
3. Once there, in the menu bar click on "+ New registration" to register a new application
4. Fill the "Name, select the "Supported account types" and click on "Register. You will be redirected to the applications page.
![Register an Application page](../img/register-application.png)
4. Select the new application
5. In the left menu bar, select "API permissions"
6. Then click on "+ Add a permission" and select "Microsoft Graph"
7. Once in the "Microsoft Graph" view, select "Application permissions"
8. Finally, search for "Directory" and "Policy" and select the following permissions:
- `Directory.Read.All`
- `Policy.Read.All`
![EntraID Permissions](../img/AAD-permissions.png)
#### Subscriptions scope
Regarding the subscription scope, Prowler by default scans all the subscriptions it is able to list, so it is required to add the following RBAC built-in roles per subscription to the entity that is going to be assumed by the tool:
- `Security Reader`
- `Reader`
To assign these roles, follow these instructions:
1. Access your subscriptions, then select the subscription you want to scan.
2. Select "Access control (IAM)".
3. In the overview, select "Roles"
![IAM Page](../img/page-IAM.png)
4. Click on "+ Add" and select "Add role assignment"
5. In the search bar, type `Security Reader`, select it and click on "Next"
6. In the Members tab, click on "+ Select members" and add the members you want to assign this role.
7. Click on "Review + assign" to apply the new role.
*Repeat these steps for the `Reader` role.*
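The same assignment can also be sketched with the Azure CLI; the assignee and subscription IDs below are placeholders:

```console
az role assignment create --assignee <app-object-id> --role "Security Reader" --scope "/subscriptions/<subscription-id>"
az role assignment create --assignee <app-object-id> --role "Reader" --scope "/subscriptions/<subscription-id>"
```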
## Google Cloud
### Authentication
Prowler will follow the same credentials search as [Google authentication libraries](https://cloud.google.com/docs/authentication/application-default-credentials#search_order):
@@ -124,5 +99,4 @@ Prowler will follow the same credentials search as [Google authentication librar
Those credentials must be associated to a user or service account with proper permissions to do all checks. To make sure, add the `Viewer` role to the member associated with the credentials.
???+ note
By default, `prowler` will scan all accessible GCP Projects, use flag `--project-ids` to specify the projects to be scanned.
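For example, either of the following satisfies that search order locally (the key file path is a placeholder):

```console
gcloud auth application-default login
# or point the SDK at a service account key file
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
```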
@@ -1,13 +1,38 @@
**Prowler** is an Open Source security tool to perform AWS, Azure and Google Cloud security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness. We have Prowler CLI (Command Line Interface) that we call Prowler Open Source and a service on top of it that we call <a href="https://prowler.com">Prowler SaaS</a>.
<p href="https://github.com/prowler-cloud/prowler">
<img align="right" src="./img/prowler-logo.png" height="100">
</p>
<br>
![Prowler Execution](img/short-display.png)
# Prowler Documentation
**Welcome to [Prowler Open Source v3](https://github.com/prowler-cloud/prowler/) Documentation!** 📄
For **Prowler v2 Documentation**, please go to the [v2.12.0 branch](https://github.com/prowler-cloud/prowler/tree/2.12.0) and its README.md.
- You are currently in the **Getting Started** section where you can find general information and requirements to help you start with the tool.
- In the [Tutorials](./tutorials/misc.md) section you will see how to take advantage of all the features in Prowler.
- In the [Contact Us](./contact.md) section you can find how to reach out to us in case of technical issues.
- In the [About](./about.md) section you will find more information about the Prowler team and license.
## About Prowler
[![Twitter URL](https://img.shields.io/twitter/url/https/twitter.com/prowlercloud.svg?style=social&label=Follow%20%40prowlercloud)](https://twitter.com/prowlercloud)
Prowler offers hundreds of controls covering more than 25 standards and compliance frameworks like CIS, PCI-DSS, ISO27001, GDPR, HIPAA, FFIEC, SOC2, AWS FTR, ENS and custom security frameworks.
## Quick Start
### Installation
Prowler is available as a project in [PyPI](https://pypi.org/project/prowler/), thus can be installed using pip with `Python >= 3.9`:
=== "Generic"
@@ -124,8 +149,7 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler/),
prowler -v
```
???+ note
To download the results from AWS CloudShell, select Actions -> Download File and add the full path of each file. For the CSV file it will be something like `/home/cloudshell-user/output/prowler-output-123456789012-20221220191331.csv`
=== "Azure CloudShell"
@@ -160,18 +184,14 @@ You can run Prowler from your workstation, an EC2 instance, Fargate or any other
![Architecture](img/architecture.png)
## Basic Usage
To run Prowler, you will need to specify the provider (e.g `aws`, `gcp` or `azure`):
???+ note
If no provider specified, AWS will be used for backward compatibility with most of v2 options.
```console
prowler <provider>
```
???+ note
Running the `prowler` command without options will use your environment variable credentials, see [Requirements](./getting-started/requirements.md) section to review the credentials settings.
If you miss the former output you can use `--verbose` but Prowler v3 is smoking fast, so you won't see much ;)
@@ -222,9 +242,7 @@ Use a custom AWS profile with `-p`/`--profile` and/or AWS regions which you want
```console
prowler aws --profile custom-profile -f us-east-1 eu-south-2
```
???+ note
By default, `prowler` will scan all AWS regions.
See more details about AWS Authentication in [Requirements](getting-started/requirements.md)
@@ -274,6 +292,3 @@ prowler gcp --project-ids <Project ID 1> <Project ID 2> ... <Project ID N>
```
See more details about GCP Authentication in [Requirements](getting-started/requirements.md)
@@ -13,9 +13,9 @@ As an **AWS Partner** and we have passed the [AWS Foundation Technical Review (F
## Reporting Vulnerabilities
If you would like to report a vulnerability or have a security concern regarding Prowler Open Source or Prowler SaaS service, please submit the information by contacting to us via [**support.prowler.com**](http://support.prowler.com).
The information you share with the Prowler team as part of this process is kept confidential within Prowler. We will only share this information with a third party if the vulnerability you report is found to affect a third-party product, in which case we will share this information with the third-party product's author or manufacturer. Otherwise, we will only share this information as permitted by you.
We will review the submitted report, and assign it a tracking number. We will then respond to you, acknowledging receipt of the report, and outline the next steps in the process.
@@ -19,8 +19,9 @@ Those credentials must be associated to a user or role with proper permissions t
- `arn:aws:iam::aws:policy/SecurityAudit`
- `arn:aws:iam::aws:policy/job-function/ViewOnlyAccess`
???+ note
Moreover, some read-only additional permissions are needed for several checks, make sure you attach also the custom policy [prowler-additions-policy.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-additions-policy.json) to the role you are using. If you want Prowler to send findings to [AWS Security Hub](https://aws.amazon.com/security-hub), make sure you also attach the custom policy [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-security-hub.json).
## Profiles
@@ -36,7 +37,3 @@ If your IAM entity enforces MFA you can use `--mfa` and Prowler will ask you to
- ARN of your MFA device
- TOTP (Time-Based One-Time Password)
## STS Endpoint Region
If you are using Prowler in AWS regions that are not enabled by default you need to use the argument `--sts-endpoint-region` to point the AWS STS API calls `assume-role` and `get-caller-identity` to the non-default region, e.g.: `prowler aws --sts-endpoint-region eu-south-2`.
@@ -1,28 +1,21 @@
# AWS Organizations
## Get AWS Account details from your AWS Organization
Prowler allows you to get additional information of the scanned account from AWS Organizations.
If you have AWS Organizations enabled, Prowler can get your account details like account name, email, ARN, organization id and tags and you will have them next to every finding's output.
In order to do that you can use the argument `-O`/`--organizations-role <organizations_role_arn>`. If this argument is not present Prowler will try to fetch that information automatically if the AWS account is a delegated administrator for the AWS Organization.
???+ note
Refer [here](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_delegate_policies.html) for more information about AWS Organizations delegated administrator.
See the following sample command:
```shell
prowler aws \
-O arn:aws:iam::<management_organizations_account_id>:role/<role_name>
```
???+ note
Make sure the role in your AWS Organizations management account has the permissions `organizations:DescribeAccount` and `organizations:ListTagsForResource`.
Prowler will scan the AWS account and get the account details from AWS Organizations.
In the JSON output below you can see tags coded in base64 to prevent breaking CSV or JSON due to its format:
```json
"Account Email": "my-prod-account@domain.com",
@@ -32,15 +25,13 @@ In the JSON output below you can see tags coded in base64 to prevent breaking CS
"Account tags": "\"eyJUYWdzIjpasf0=\""
```
The additional fields in CSV header output are as follows:
- ACCOUNT_DETAILS_EMAIL
- ACCOUNT_DETAILS_NAME
- ACCOUNT_DETAILS_ARN
- ACCOUNT_DETAILS_ORG
- ACCOUNT_DETAILS_TAGS
```csv
ACCOUNT_DETAILS_EMAIL,ACCOUNT_DETAILS_NAME,ACCOUNT_DETAILS_ARN,ACCOUNT_DETAILS_ORG,ACCOUNT_DETAILS_TAGS
```
## Extra: Run Prowler across all accounts in AWS Organizations by assuming roles
If you want to run Prowler across all accounts of AWS Organizations you can do this:
@@ -64,6 +55,4 @@ If you want to run Prowler across all accounts of AWS Organizations you can do t
done
```
???+ note
Using the same for loop, a list of accounts can be scanned with a variable like:
<br>`ACCOUNTS_LIST='11111111111 2222222222 333333333'`
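A minimal sketch of that variant (the account IDs and role name are placeholders, and the actual `prowler` call is left commented so the loop skeleton stands alone):

```shell
# Iterate a fixed list of account IDs and build the role ARN for each one
ACCOUNTS_LIST='111111111111 222222222222 333333333333'
ROLE_NAME='ProwlerExecRole'
for ACCOUNT_ID in $ACCOUNTS_LIST; do
  ROLE_ARN="arn:aws:iam::${ACCOUNT_ID}:role/${ROLE_NAME}"
  echo "Scanning ${ACCOUNT_ID} with ${ROLE_ARN}"
  # prowler aws -R "$ROLE_ARN"
done
```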
@@ -6,13 +6,10 @@ By default Prowler is able to scan the following AWS partitions:
- China: `aws-cn`
- GovCloud (US): `aws-us-gov`
???+ note
To check the available regions for each partition and service please refer to the following document [aws_regions_by_service.json](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/aws_regions_by_service.json)
It is important to take into consideration that to scan the China (`aws-cn`) or GovCloud (`aws-us-gov`) partitions you either need a valid region for that partition in your AWS credentials or must specify the regions you want to audit for that partition using the `-f/--region` flag.
???+ note
Please, refer to https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials for more information about the AWS credentials configuration.
Prowler can scan specific region(s) with:
```console
prowler aws -f eu-west-1 eu-west-2
```
@@ -37,8 +34,7 @@ aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXX
region = cn-north-1
```
???+ note
With this option all the partition regions will be scanned without the need of using the `-f/--region` flag
## AWS GovCloud (US)
@@ -56,8 +52,7 @@ aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXX
region = us-gov-east-1
```
???+ note
With this option all the partition regions will be scanned without the need of using the `-f/--region` flag
## AWS ISO (US & Europe)
@@ -23,23 +23,6 @@ prowler aws -R arn:aws:iam::<account_id>:role/<role_name>
prowler aws -T/--session-duration <seconds> -I/--external-id <external_id> -R arn:aws:iam::<account_id>:role/<role_name>
```
## Custom Role Session Name
Prowler can use your custom Role Session name with:
```console
prowler aws --role-session-name <role_session_name>
```
???+ note
It defaults to `ProwlerAssessmentSession`.
## STS Endpoint Region
If you are using Prowler in AWS regions that are not enabled by default you need to use the argument `--sts-endpoint-region` to point the AWS STS API calls `assume-role` and `get-caller-identity` to the non-default region, e.g.: `prowler aws --sts-endpoint-region eu-south-2`.
???+ note
Since v3.11.0, Prowler uses a regional token in STS sessions so it can scan all AWS regions without needing the `--sts-endpoint-region` argument. Make sure that you have enabled the AWS Region you want to scan in **BOTH** AWS Accounts (assumed role account and account from which you assume the role).
## Role MFA
If your IAM Role has MFA configured you can use `--mfa` along with `-R`/`--role <role_arn>` and Prowler will ask you to input the following values to get a new temporary session for the IAM Role provided:
@@ -51,7 +34,6 @@ If your IAM Role has MFA configured you can use `--mfa` along with `-R`/`--role
To create a role to be assumed in one or multiple accounts you can use either as CloudFormation Stack or StackSet the following [template](https://github.com/prowler-cloud/prowler/blob/master/permissions/create_role_to_assume_cfn.yaml) and adapt it.
???+ note "About Session Duration"
Depending on the amount of checks you run and the size of your infrastructure, Prowler may require more than 1 hour to finish. Use option `-T <seconds>` to allow up to 12h (43200 seconds). To allow more than 1h you need to modify _"Maximum CLI/API session duration"_ for that particular role, read more [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session).
Bear in mind that if you are using roles assumed by role chaining there is a hard limit of 1 hour so consider not using role chaining if possible, read more about that, in foot note 1 below the table [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html).
@@ -21,5 +21,6 @@ By default Prowler sends HTML, JSON and CSV output formats, if you want to send
prowler <provider> -M csv -B my-bucket
```
???+ note
In the case you do not want to use the assumed role credentials but the initial credentials to put the reports into the S3 bucket, use `-D`/`--output-bucket-no-assume` instead of `-B`/`--output-bucket`. Make sure that the used credentials have `s3:PutObject` permissions in the S3 path where the reports are going to be uploaded.
@@ -1,137 +1,61 @@
# AWS Security Hub Integration
Prowler supports natively and as **official integration** sending findings to [AWS Security Hub](https://aws.amazon.com/security-hub). This integration allows **Prowler** to import its findings to AWS Security Hub.
With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Firewall Manager, as well as from AWS Partner solutions and from Prowler for free.
Before sending findings, you will need to enable AWS Security Hub and the **Prowler** integration.
You have to perform the following steps, in _at least_ one AWS region of a given AWS account, to enable **AWS Security Hub** and **Prowler** as a partner integration.
Since **AWS Security Hub** is a region based service, you will need to enable it in the region or regions you require. You can configure it using the AWS Management Console or the AWS CLI.
???+ note
Take into account that enabling this integration will incur costs in AWS Security Hub; please refer to its pricing [here](https://aws.amazon.com/security-hub/pricing/) for more information.
### Using the AWS Management Console
#### Enable AWS Security Hub
If you currently have AWS Security Hub enabled, you can skip to the [next section](#enable-prowler-integration).
1. Open the **AWS Security Hub** console at https://console.aws.amazon.com/securityhub/.
2. When you open the Security Hub console for the first time make sure that you are in the region you want to enable, then choose **Go to Security Hub**.
![](./img/enable.png)
3. On the next page, the Security standards section lists the security standards that Security Hub supports. Select the check box for a standard to enable it, and clear the check box to disable it.
4. Choose **Enable Security Hub**.
![](./img/enable-2.png)
#### Enable Prowler Integration
If you already have the Prowler integration enabled in AWS Security Hub, you can skip to the [next section](#send-findings) and start sending findings.
Once **AWS Security Hub** is enabled, you will need to enable **Prowler** as a partner integration to allow **Prowler** to send findings to your **AWS Security Hub**.
1. Open the **AWS Security Hub** console at https://console.aws.amazon.com/securityhub/.
2. Select the **Integrations** tab in the right-side menu bar.
![](./img/enable-partner-integration.png)
3. Search for _Prowler_ in the text search box and the **Prowler** integration will appear.
4. Once there, click on **Accept Findings** to allow **AWS Security Hub** to receive findings from **Prowler**.
![](./img/enable-partner-integration-2.png)
5. A new modal will appear to confirm that you are enabling the **Prowler** integration.
![](./img/enable-partner-integration-3.png)
6. Right after clicking on **Accept Findings**, you will see that the integration is enabled in **AWS Security Hub**.
![](./img/enable-partner-integration-4.png)
### Using the AWS CLI
To enable **AWS Security Hub** and the **Prowler** integration you have to run the following commands using the AWS CLI:
```shell
aws securityhub enable-security-hub --region <region>
```
???+ note
For this command to work you will need the `securityhub:EnableSecurityHub` permission. You will need to set the AWS region where you want to enable AWS Security Hub.
Once **AWS Security Hub** is enabled, you will need to enable **Prowler** as a partner integration to allow **Prowler** to send findings to your AWS Security Hub. Run the following command using the AWS CLI:
```shell
aws securityhub enable-import-findings-for-product --region <region> --product-arn arn:aws:securityhub:<region>::product/prowler/prowler
```
???+ note
You will need to set the AWS region where you want to enable the integration, and also the AWS region within the ARN. For this command to work you will need the `securityhub:EnableImportFindingsForProduct` permission.
## Send Findings
Once it is enabled, it is as simple as running the command below (for all regions):
```sh
prowler aws --security-hub
```
or for only one filtered region like eu-west-1:
```sh
prowler aws --security-hub --region eu-west-1
```
???+ note
It is recommended to send only fails to Security Hub, which you can do by adding `-q`/`--quiet` to the command. Alternatively, use the `--send-sh-only-fails` argument to save all the findings in the Prowler outputs but send only the FAIL findings to AWS Security Hub.
Since Prowler performs checks in all regions by default, you may need to filter by region when running the Security Hub integration, as shown in the example above. Remember to enable Security Hub in the region or regions you need by calling `aws securityhub enable-security-hub --region <region>`, and run Prowler with the option `-f`/`--region <region>` (if no region is given, it will try to push findings to the hubs of all regions). Prowler will send findings to the Security Hub in the region where the scanned resource is located.
To keep your findings updated in Security Hub, you have to run Prowler periodically, e.g., once a day or every few hours.
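For example, a cron entry could schedule that periodic run; the schedule and log path below are hypothetical, adjust them to your environment:

```shell
# Run Prowler daily at 02:00 and send failed findings to Security Hub
# (hypothetical schedule and log path)
0 2 * * * prowler aws --security-hub --quiet >> /var/log/prowler.log 2>&1
```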
### See your Prowler findings in AWS Security Hub
Once the integration is configured, your next scan will send the **Prowler** findings to the configured AWS regions. To review those findings in **AWS Security Hub**:
1. Open the **AWS Security Hub** console at https://console.aws.amazon.com/securityhub/.
2. Select the **Findings** tab in the right-side menu bar.
![](./img/findings.png)
3. Use the search box filters and use the **Product Name** filter with the value _Prowler_ to see the findings sent from **Prowler**.
4. Then, you can click on the check **Title** to see the details and the history of a finding.
![](./img/finding-details.png)
As you can see in the related requirements section of the detailed view, **Prowler** also sends compliance information related to every finding.
## Send findings to Security Hub assuming an IAM Role
When you are auditing a multi-account AWS environment, you can send findings to a Security Hub of another account by assuming an IAM role from that account using the `-R` flag in the Prowler command:
```sh
prowler aws --security-hub --role arn:aws:iam::123456789012:role/ProwlerExecutionRole
```
???+ note
Remember that the role used needs permissions to send findings to Security Hub. For more information about the required permissions, please refer to the IAM policy [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-security-hub.json).
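As a sketch, such a policy boils down to allowing the Security Hub import actions; the file linked above in the repository is the authoritative version:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "securityhub:BatchImportFindings",
        "securityhub:GetFindings"
      ],
      "Resource": "*"
    }
  ]
}
```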
## Send only failed findings to Security Hub
When using the **AWS Security Hub** integration you can send only the `FAIL` findings generated by **Prowler**, which eventually lowers your **AWS Security Hub** usage costs. To follow that recommendation, add the `-q`/`--quiet` flag to the Prowler command:
```sh
prowler aws --security-hub --quiet
```
You can use, instead of the `-q/--quiet` argument, the `--send-sh-only-fails` argument to save all the findings in the Prowler outputs but just to send FAIL findings to AWS Security Hub:
```sh
prowler --security-hub --send-sh-only-fails
```
## Skip sending updates of findings to Security Hub
By default, Prowler archives all its findings in Security Hub that have not appeared in the last scan.
You can skip this logic by using the option `--skip-sh-update` so Prowler will not archive older findings:
```sh
prowler aws --security-hub --skip-sh-update
```
# Check Aliases
Prowler allows you to use aliases for the checks. You only have to add the `CheckAliases` key to the check's metadata with a list of the aliases:
```json title="check.metadata.json"
"Provider": "<provider>",
"CheckID": "<check_id>",
"CheckTitle": "<check_title>",
"CheckAliases": [
"<check_alias_1>",
"<check_alias_2>",
...
],
...
```
Then, you can execute the check either with its check ID or with one of the previous aliases:
```console
prowler <provider> -c/--checks <check_alias_1>
Using alias <check_alias_1> for check <check_id>...
```
# Compliance
Prowler allows you to execute checks based on requirements defined in compliance frameworks. By default, it will execute and give you an overview of the status of each compliance framework:
<img src="../img/compliance.png"/>
> You can find CSVs containing detailed compliance results inside the compliance folder within Prowler's output folder.
## Execute Prowler based on Compliance Frameworks
Prowler can analyze your environment based on a specific compliance framework and give you more details. To do so, use the `--compliance` option:
```sh
prowler <provider> --compliance <compliance_framework>
```
Standard results will be shown, plus the framework information, as in the sample below for CIS AWS 1.5. A CSV file with the details is generated as well.
<img src="../img/compliance-cis-sample1.png"/>
## List Available Compliance Frameworks
In order to see which compliance frameworks are covered by Prowler, you can use the `--list-compliance` option:
```sh
prowler <provider> --list-compliance
```
Currently, the available frameworks are:
- `aws_account_security_onboarding_aws`
- `aws_audit_manager_control_tower_guardrails_aws`
- `aws_foundational_security_best_practices_aws`
- `aws_well_architected_framework_reliability_pillar_aws`
- `aws_well_architected_framework_security_pillar_aws`
- `cis_1.4_aws`
- `cis_1.5_aws`
- `cis_2.0_aws`
- `cis_2.0_gcp`
- `cis_3.0_aws`
- `cisa_aws`
- `ens_rd2022_aws`
- `fedramp_low_revision_4_aws`
- `fedramp_moderate_revision_4_aws`
- `ffiec_aws`
- `foundational_technical_review_aws`
- `gdpr_aws`
- `gxp_21_cfr_part_11_aws`
- `gxp_eu_annex_11_aws`
- `hipaa_aws`
- `iso27001_2013_aws`
- `mitre_attack_aws`
- `nist_800_171_revision_2_aws`
- `nist_800_53_revision_4_aws`
- `nist_800_53_revision_5_aws`
- `nist_csf_1.1_aws`
- `pci_3.2.1_aws`
- `rbi_cyber_security_framework_aws`
- `soc2_aws`
## List Requirements of Compliance Frameworks
For each compliance framework, you can use option `--list-compliance-requirements` to list its requirements:
```sh
prowler <provider> --list-compliance-requirements <compliance_framework(s)>
```
Example for the first requirements of CIS 1.5 for AWS:
```
Listing CIS 1.5 AWS Compliance Requirements:
...
Requirement Id: 1.5
```
## Create and contribute adding other Security Frameworks
This information is part of the Developer Guide and can be found here: https://docs.prowler.cloud/en/latest/tutorials/developer-guide/.
The following list includes all the AWS checks with configurable variables that can be changed in the configuration YAML file:

| Check Name | Value | Type |
|---------------------------------------------------------------|--------------------------------------------------|-----------------|
| `organizations_delegated_administrators` | `organizations_trusted_delegated_administrators` | List of Strings |
| `ecr_repositories_scan_vulnerabilities_in_latest_image` | `ecr_repository_vulnerability_minimum_severity` | String |
| `trustedadvisor_premium_support_plan_subscribed` | `verify_premium_support_plans` | Boolean |
| `config_recorder_all_regions_enabled` | `mute_non_default_regions` | Boolean |
| `drs_job_exist` | `mute_non_default_regions` | Boolean |
| `guardduty_is_enabled` | `mute_non_default_regions` | Boolean |
| `securityhub_enabled` | `mute_non_default_regions` | Boolean |
## Azure
### Configurable Checks
The following list includes all the Azure checks with configurable variables that can be changed in the configuration yaml file:
| Check Name | Value | Type |
|---------------------------------------------------------------|--------------------------------------------------|-----------------|
| `network_public_ip_shodan` | `shodan_api_key` | String |
| `app_ensure_php_version_is_latest` | `php_latest_version` | String |
| `app_ensure_python_version_is_latest` | `python_latest_version` | String |
| `app_ensure_java_version_is_latest` | `java_latest_version` | String |
## GCP
### Configurable Checks
## Config YAML File Structure
???+ note
This is the new Prowler configuration file format. The old one, without provider keys, is still compatible, but only for the AWS provider.
```yaml title="config.yaml"
# AWS Configuration
aws:
  # AWS Global Configuration
  # aws.mute_non_default_regions --> Mute Failed Findings in non-default regions for GuardDuty, SecurityHub, DRS and Config
  mute_non_default_regions: False
  # AWS IAM Configuration
  # aws.iam_user_accesskey_unused --> CIS recommends 45 days
  # ...

# Azure Configuration
azure:
  # Azure Network Configuration
  # azure.network_public_ip_shodan
  shodan_api_key: null
  # Azure App Configuration
  # azure.app_ensure_php_version_is_latest
  php_latest_version: "8.2"
  # azure.app_ensure_python_version_is_latest
  python_latest_version: "3.12"
  # azure.app_ensure_java_version_is_latest
  java_latest_version: "17"

# GCP Configuration
gcp:
  # GCP Compute Configuration
  # gcp.compute_public_address_shodan
  shodan_api_key: null
```
Otherwise, you can generate and download Service Account keys in JSON format and provide the file path:

```console
prowler gcp --credentials-file path
```
???+ note
`prowler` will scan the GCP project associated with the credentials.
Prowler will follow the same credentials search as [Google authentication libraries](https://cloud.google.com/docs/authentication/application-default-credentials#search_order):
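For example, instead of the flag you can point Application Default Credentials at a downloaded key; the key path below is hypothetical:

```console
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
prowler gcp
```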
# Ignore Unused Services
???+ note
Currently only available on the AWS provider.
Prowler allows you to ignore findings from unused services, so you can reduce the number of findings in Prowler's reports.
It is a best practice to encrypt both metadata and connection passwords in AWS Glue.
#### Inspector
Amazon Inspector is a vulnerability discovery service that automates continuous scanning for security vulnerabilities within your Amazon EC2, Amazon ECR, and AWS Lambda environments. Prowler recommends enabling it and resolving all of Inspector's findings. When ignoring unused services, Prowler will only notify you if there are any Lambda functions, EC2 instances or ECR repositories in the region where Amazon Inspector should be enabled.
- `inspector2_findings_exist`
#### Macie
Amazon Macie is a security service that uses machine learning to automatically discover, classify and protect sensitive data in S3 buckets. Prowler will only create a finding when Macie is not enabled if there are S3 buckets in your account.
```console
prowler <provider> --slack
```
![Prowler Slack Message](img/slack-prowler-message.png)
???+ note
Slack integration needs the `SLACK_API_TOKEN` and `SLACK_CHANNEL_ID` environment variables.
### Configuration
To configure the Slack integration, follow these steps:
There are different log levels depending on the logging information that is desired:
- **DEBUG**: It will show low-level logs from Python.
- **INFO**: It will show all the API calls that are being invoked by the provider.
- **WARNING**: It will show all resources that are being **muted**.
- **ERROR**: It will show any errors, e.g., not authorized actions.
- **CRITICAL**: The default log level. If a critical log appears, it will **exit** Prowler's execution.
You can establish the log level of Prowler with the `--log-level` option:

```console
prowler <provider> --log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
```
???+ note
By default, Prowler will run with the `CRITICAL` log level, since critical errors will abort the execution.
## Export Logs to File
An example of a log entry will be the following:

```json
{
  ...
  "message": "eu-west-2 -- ClientError[124]: An error occurred (UnauthorizedOperation) when calling the DescribeNetworkAcls operation: You are not authorized to perform this operation."
}
```
???+ note
Each finding is represented as a `json` object.
Execute Prowler in verbose mode (like in Version 2):
```console
prowler <provider> --verbose
```
## Filter findings by status
Prowler can filter the findings by their status:
```console
prowler <provider> --status [PASS, FAIL, MANUAL]
```
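Conceptually, the filter keeps only findings whose status is in the requested set; the findings below are illustrative data, not Prowler's internal structures:

```python
# Illustrative findings; the --status flag conceptually keeps only findings
# whose status is in the requested set.
findings = [
    {"check": "s3_bucket_public_access", "status": "FAIL"},
    {"check": "iam_root_mfa_enabled", "status": "PASS"},
    {"check": "support_plan_review", "status": "MANUAL"},
]

wanted = {"FAIL", "MANUAL"}  # e.g. prowler <provider> --status FAIL MANUAL
filtered = [f for f in findings if f["status"] in wanted]
print([f["check"] for f in filtered])
```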
## Disable Exit Code 3
Prowler can avoid triggering exit code 3 on failed checks.
Prowler allows you to include your custom checks with the flag:
```console
prowler <provider> -x/--checks-folder <custom_checks_folder>
```
???+ note
S3 URIs are also supported as folders for custom checks, e.g. `s3://bucket/prefix/checks_folder/`. Make sure that the used credentials have `s3:GetObject` permissions in the S3 path where the custom checks are located.
The custom checks folder must contain one subfolder per check; each subfolder must be named after the check and must contain:
- An empty `__init__.py`: to make Python treat this check folder as a package.
- A `check_name.py` containing the check's logic.
- A `check_name.metadata.json` containing the check's metadata.
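For an illustrative check named `ec2_instance_public_ip`, the layout would look like:

```
<custom_checks_folder>/
└── ec2_instance_public_ip/
    ├── __init__.py
    ├── ec2_instance_public_ip.py
    └── ec2_instance_public_ip.metadata.json
```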
???+ note
The check name must start with the service name followed by an underscore (e.g., `ec2_instance_public_ip`).
To see more information about how to write checks see the [Developer Guide](../developer-guide/checks.md#create-a-new-check-for-a-provider).
???+ note
If you want to run ONLY your custom check(s), import them with `-x`/`--checks-folder` and then run them with `-c`/`--checks`, e.g.:
```console
prowler aws -x s3://bucket/prowler/providers/aws/services/s3/s3_bucket_policy/ -c s3_bucket_policy
```
## Severities
Each of Prowler's checks has a severity, which can be:
# Mute Listing
Sometimes you may find resources that are intentionally configured in a certain way that may be a bad practice, but that are acceptable in your case; for example, an AWS S3 bucket open to the internet hosting a website, or an AWS security group with an open port needed for your use case.
Mute List option works along with other options and adds a `MUTED` instead of `MANUAL`, `PASS` or `FAIL` to any output format.
You can use `-w`/`--mutelist-file` with the path of your mute list YAML file, but first, let's review the syntax.
## Mute List Yaml File Syntax
```yaml
### Account, Check and/or Region can be * to apply for all the cases.
### Resources and tags are lists that can have either Regex or Keywords.
### Tags is an optional list that matches on tuples of 'key=value' and are "ANDed" together.
### Use an alternation Regex to match one of multiple tags with "ORed" logic.
### For each check you can except Accounts, Regions, Resources and/or Tags.
########################### MUTE LIST EXAMPLE ###########################
Mute List:
  Accounts:
    "123456789012":
      Checks:
        ...
          Tags:
            - "environment=prod" # Will ignore every resource except in account 123456789012 except the ones containing the string "test" and tag environment=prod
```
## Mute specific regions
If you want to mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w mutelist.yaml`:
```yaml
Mute List:
  Accounts:
    "*":
      Checks:
        ...
          Resources:
            - "*"
```
## Default AWS Mute List
Prowler provides you with a Default AWS Mute List with the AWS resources that should be muted, such as all the resources created by AWS Control Tower when setting up a landing zone.
You can execute Prowler with this mute list using the following command:
```sh
prowler aws --mutelist prowler/config/aws_mutelist.yaml
```
## Supported Mute List Locations
The mute list flag supports the following locations:
### Local file
You will need to pass the local path where your Mute List YAML file is located:
```
prowler <provider> -w mutelist.yaml
```
### AWS S3 URI
You will need to pass the S3 URI where your Mute List YAML file was uploaded to your bucket:
```
prowler aws -w s3://<bucket>/<prefix>/mutelist.yaml
```
???+ note
Make sure that the used AWS credentials have `s3:GetObject` permissions in the S3 path where the mute list file is located.
### AWS DynamoDB Table ARN
You will need to pass the DynamoDB Mute List Table ARN:
```
prowler aws -w arn:aws:dynamodb:<region_name>:<account_id>:table/<table_name>
```
1. The DynamoDB Table must have the following String keys:
<img src="../img/mutelist-keys.png"/>
- The Mute List Table must have the following columns:
- Accounts (String): This field can contain either an Account ID or an `*` (which applies to all the accounts that use this table as a mute list).
- Checks (String): This field can contain either a Prowler Check Name or an `*` (which applies to all the scanned checks).
- Regions (List): This field contains a list of regions where this mute list rule is applied (it can also contain an `*` to apply to all scanned regions).
- Resources (List): This field contains a list of regex expressions that apply to the resources to be muted.
- Tags (List): -Optional- This field contains a list of tuples in the form of 'key=value' that apply to the resource tags to be muted.
- Exceptions (Map): -Optional- This field contains a map of lists of accounts/regions/resources/tags to be excepted from the mute list.
The following example will mute all resources in all accounts for the EC2 checks in the regions `eu-west-1` and `us-east-1` with the tags `environment=dev` and `environment=prod`, except the resources containing the string `test` in the account `012345678912` and region `eu-west-1` with the tag `environment=prod`:
<img src="../img/mutelist-row.png"/>
???+ note
Make sure that the used AWS credentials have `dynamodb:PartiQLSelect` permissions in the table.
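To make the matching semantics concrete, here is an illustrative sketch (not Prowler's actual implementation) of how a single mute list entry could be evaluated: `*` wildcards for account and check, a region list, regex patterns for resources, and tags ANDed together:

```python
import re

# Illustrative evaluation of one mute list entry; the entry fields mirror the
# table columns described above.
def is_muted(entry, account, check, region, resource, tags):
    if entry["account"] not in ("*", account):
        return False
    if entry["check"] not in ("*", check):
        return False
    if "*" not in entry["regions"] and region not in entry["regions"]:
        return False
    if not any(re.search(pattern, resource) for pattern in entry["resources"]):
        return False
    # Tags are ANDed together: every listed 'key=value' must be on the resource
    return all(tag in tags for tag in entry.get("tags", []))

entry = {
    "account": "*",
    "check": "vpc_flow_logs_enabled",
    "regions": ["eu-west-1", "us-east-1"],
    "resources": [".*"],
    "tags": ["environment=dev"],
}
print(is_muted(entry, "123456789012", "vpc_flow_logs_enabled", "eu-west-1",
               "vpc-0123456789abcdef0", ["environment=dev", "team=infra"]))
```

Here the entry matches (wildcard account, listed region, regex resource match, and the required tag is present), so the finding would be muted.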
### AWS Lambda ARN
```console
prowler aws -w arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME
```
Make sure that the credentials that Prowler uses can invoke the Lambda Function:
```
- PolicyName: GetMuteList
  PolicyDocument:
    Version: '2012-10-17'
    Statement:
      - Effect: Allow
        Action: 'lambda:InvokeFunction'
        Resource: arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME
```
The Lambda Function can then generate a Mute List dynamically. Here is the code of an example Python Lambda Function that generates a Mute List:
```python
def handler(event, context):
checks = {}
checks["vpc_flow_logs_enabled"] = { "Regions": [ "*" ], "Resources": [ "" ], Optional("Tags"): [ "key:value" ] }
al = { "Mute List": { "Accounts": { "*": { "Checks": checks } } } }
return al
```
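You can sanity-check such a handler locally before deploying it; this sketch re-declares the function (without the schema-style `Optional("Tags")` marker) so it runs standalone:

```python
# Local sanity check for a mute list Lambda handler like the one above.
def handler(event, context):
    checks = {}
    checks["vpc_flow_logs_enabled"] = {"Regions": ["*"], "Resources": [""]}
    return {"Mute List": {"Accounts": {"*": {"Checks": checks}}}}

result = handler({}, None)
print("vpc_flow_logs_enabled" in result["Mute List"]["Accounts"]["*"]["Checks"])
```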
This can help for really large accounts, but please be aware of AWS API rate limits.
For information on Prowler's retrier configuration please refer to this [page](https://docs.prowler.cloud/en/latest/tutorials/aws/boto3-configuration/).
???+ note
You might need to increase the `--aws-retries-max-attempts` parameter from the default value of 3. The retrier follows an exponential backoff strategy.
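For example, to raise the maximum retry attempts from the default of 3:

```console
prowler aws --aws-retries-max-attempts 10
```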
## Linux
Several checks analyse resources that are exposed to the Internet; these are:
- sagemaker_notebook_instance_without_direct_internet_access_configured
- sns_topics_not_publicly_accessible
- sqs_queues_not_publicly_accessible
...
```console
prowler <provider> --categories internet-exposed
```
### Shodan
Prowler allows you to check if any elastic IP in your AWS account is exposed in Shodan with the `-N`/`--shodan <shodan_api_key>` option:
```console
prowler aws --shodan <shodan_api_key> -c ec2_elastic_ip_shodan
```
# Quick Inventory
Prowler allows you to execute a quick inventory to extract the number of resources in your provider.
???+ note
Currently, it is only available for the AWS provider.
You can use the `-i`/`--quick-inventory` option to execute it:
```sh
prowler <provider> -i
```
???+ note
By default, it extracts resources from all the regions; you can use `-f`/`--filter-region` to specify the regions where the analysis is executed.
- This feature specify both the number of resources for each service and for each resource type.
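The per-service and per-resource-type counting can be pictured with a small sketch (illustrative only, not Prowler's implementation; the ARNs are made up, and note that S3 ARNs carry no resource-type segment):

```python
# Hedged sketch of the kind of aggregation a quick inventory performs:
# counting resources per service and per resource type from their ARNs.
from collections import Counter

def inventory_counts(arns):
    per_service, per_type = Counter(), Counter()
    for arn in arns:
        # arn:partition:service:region:account:resource-type/resource-id
        parts = arn.split(":", 5)
        service = parts[2]
        # For S3 the resource part is the bucket name itself (no type segment).
        resource = parts[5].split("/", 1)[0].split(":", 1)[0]
        per_service[service] += 1
        per_type[f"{service}:{resource}"] += 1
    return per_service, per_type

services, types = inventory_counts([
    "arn:aws:ec2:eu-west-1:123456789012:instance/i-0abc",
    "arn:aws:ec2:eu-west-1:123456789012:volume/vol-0def",
    "arn:aws:s3:::my-bucket",
])
print(services)  # Counter({'ec2': 2, 's3': 1})
```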
+13 -17
@@ -19,12 +19,11 @@ prowler <provider> -M csv json json-asff html -F <custom_report_name>
```console
prowler <provider> -M csv json json-asff html -o <custom_report_directory>
```
???+ note
    Both flags can be used simultaneously to provide a custom directory and filename.

    ```console
    prowler <provider> -M csv json json-asff html \
    -F <custom_report_name> -o <custom_report_directory>
    ```
## Output timestamp format
By default, the timestamp format of the output files is ISO 8601. This can be changed with the `--unix-timestamp` flag, which generates the timestamp fields in pure unix timestamp format.
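The relation between the two timestamp styles can be illustrated with the standard library (an illustrative conversion, not Prowler code; the timestamp value is made up):

```python
# ISO 8601 (the default) vs the pure unix epoch emitted with --unix-timestamp.
from datetime import datetime

iso = "2024-01-16T10:43:22+00:00"   # ISO 8601, the default format
dt = datetime.fromisoformat(iso)     # timezone-aware datetime
unix = int(dt.timestamp())           # the same instant as a unix timestamp
print(unix)  # 1705401802
```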
@@ -42,10 +41,9 @@ Hereunder is the structure for each of the supported report formats by Prowler:
### HTML
![HTML Output](../img/output-html.png)
### CSV
CSV format has a set of common columns for all the providers, and then provider-specific columns.
The common columns are the following:
- ASSESSMENT_START_TIME
@@ -92,6 +90,7 @@ And then by the provider specific columns:
- RESOURCE_ID
- RESOURCE_ARN
#### AZURE
- TENANT_DOMAIN
@@ -99,6 +98,7 @@ And then by the provider specific columns:
- RESOURCE_ID
- RESOURCE_NAME
#### GCP
- PROJECT_ID
@@ -107,9 +107,9 @@ And then by the provider specific columns:
- RESOURCE_NAME
???+ note
    Since Prowler v3 the CSV column delimiter is the semicolon (`;`).
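When post-processing the CSV output, the reader must be told about the semicolon delimiter explicitly. A minimal stdlib sketch (the sample row and the PROVIDER/STATUS header names are made up for illustration):

```python
# Parsing a semicolon-delimited Prowler-style CSV with the stdlib csv module.
# The sample content is invented; only ASSESSMENT_START_TIME is a documented column.
import csv
import io

sample = "ASSESSMENT_START_TIME;PROVIDER;STATUS\n2024-01-16 10:43:22;aws;FAIL\n"
rows = list(csv.DictReader(io.StringIO(sample), delimiter=";"))
print(rows[0]["STATUS"])  # FAIL
```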
### JSON
The following code is an example output of the JSON format:
@@ -206,8 +206,7 @@ The following code is an example output of the JSON format:
}]
```
???+ note
    Each finding is a `json` object within a list. This has changed in v3 since in v2 the format used was [ndjson](http://ndjson.org/).
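The practical difference when loading the two formats can be sketched as follows (the finding objects are simplified placeholders, not the real schema):

```python
# v3 output is a single JSON list; v2 was ndjson (one JSON object per line).
import json

v3_text = '[{"Status": "PASS"}, {"Status": "FAIL"}]'   # v3: one json list
v2_text = '{"Status": "PASS"}\n{"Status": "FAIL"}'      # v2: ndjson lines

v3_findings = json.loads(v3_text)
v2_findings = [json.loads(line) for line in v2_text.splitlines()]
print(v3_findings == v2_findings)  # True
```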
### JSON-OCSF
@@ -468,9 +467,7 @@ Based on [Open Cybersecurity Schema Framework Security Finding v1.0.0-rc.3](http
}]
```
???+ note
    Each finding is a `json` object.
### JSON-ASFF
The following code is an example output of the [JSON-ASFF](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format-syntax.html) format:
@@ -603,5 +600,4 @@ The following code is an example output of the [JSON-ASFF](https://docs.aws.amaz
}]
```
???+ note
    Each finding is a `json` object within a list.
+10 -21
@@ -1,15 +1,15 @@
# Project information
site_name: Prowler Open Source Documentation
site_url: https://docs.prowler.com/
site_name: Prowler Documentation
site_url: https://docs.prowler.pro/
site_description: >-
Prowler Open Source Documentation
Prowler Documentation Site
# Theme Configuration
theme:
language: en
logo: img/prowler-logo-white.png
logo: img/prowler-logo.png
name: material
favicon: favicon.ico
favicon: img/prowler-icon.svg
features:
- navigation.tabs
- navigation.tabs.sticky
@@ -19,11 +19,6 @@ theme:
primary: black
accent: green
plugins:
- search
- git-revision-date-localized:
enable_creation_date: true
edit_uri: "https://github.com/prowler-cloud/prowler/tree/master/docs"
# Prowler OSS Repository
repo_url: https://github.com/prowler-cloud/prowler/
@@ -41,7 +36,7 @@ nav:
- Slack Integration: tutorials/integrations.md
- Configuration File: tutorials/configuration_file.md
- Logging: tutorials/logging.md
- Allowlist: tutorials/allowlist.md
- Mute List: tutorials/mutelist.md
- Check Aliases: tutorials/check-aliases.md
- Custom Metadata: tutorials/custom-checks-metadata.md
- Ignore Unused Services: tutorials/ignore-unused-services.md
@@ -79,13 +74,11 @@ nav:
- Testing:
- Unit Tests: developer-guide/unit-testing.md
- Integration Tests: developer-guide/integration-testing.md
- Debugging: developer-guide/debugging.md
- Security: security.md
- Contact Us: contact.md
- Troubleshooting: troubleshooting.md
- About: about.md
- Prowler SaaS: https://prowler.com
- ProwlerPro: https://prowler.pro
# Customization
extra:
consent:
@@ -109,15 +102,11 @@ extra:
link: https://twitter.com/prowlercloud
# Copyright
copyright: >
Copyright &copy; <script>document.write(new Date().getFullYear())</script> Toni de la Fuente, Maintained by the Prowler Team at ProwlerPro, Inc.</a>
</br><a href="#__consent">Change cookie settings</a>
copyright: Copyright &copy; 2022 Toni de la Fuente, Maintained by the Prowler Team at Verica, Inc.</a>
markdown_extensions:
- abbr
- admonition
- pymdownx.details
- pymdownx.superfences
- attr_list
- def_list
- footnotes
@@ -131,8 +120,8 @@ markdown_extensions:
- pymdownx.caret
- pymdownx.details
- pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
emoji_generator: !!python/name:material.extensions.emoji.to_svg
emoji_generator: !!python/name:materialx.emoji.to_svg
emoji_index: !!python/name:materialx.emoji.twemoji
- pymdownx.highlight:
anchor_linenums: true
- pymdownx.inlinehilite
Generated
+1139 -2244
File diff suppressed because it is too large
+40 -14
@@ -6,13 +6,12 @@ import sys
from colorama import Fore, Style
from prowler.lib.banner import print_banner
from prowler.config.config import get_available_compliance_frameworks
from prowler.lib.check.check import (
bulk_load_checks_metadata,
bulk_load_compliance_frameworks,
exclude_checks_to_run,
exclude_services_to_run,
execute_checks,
list_categories,
list_checks_json,
list_services,
@@ -30,15 +29,16 @@ from prowler.lib.check.custom_checks_metadata import (
parse_custom_checks_metadata_file,
update_checks_metadata,
)
from prowler.lib.check.managers import ExecutionManager
from prowler.lib.cli.parser import ProwlerArgumentParser
from prowler.lib.logger import logger, set_logging_config
from prowler.lib.outputs.compliance import display_compliance_table
from prowler.lib.outputs.compliance.compliance import display_compliance_table
from prowler.lib.outputs.html import add_html_footer, fill_html_overview_statistics
from prowler.lib.outputs.json import close_json
from prowler.lib.outputs.outputs import extract_findings_statistics
from prowler.lib.outputs.slack import send_slack_message
from prowler.lib.outputs.summary_table import display_summary_table
from prowler.providers.aws.aws_provider import get_available_aws_service_regions
from prowler.lib.ui.live_display import live_display
from prowler.providers.aws.lib.s3.s3 import send_to_s3_bucket
from prowler.providers.aws.lib.security_hub.security_hub import (
batch_send_to_security_hub,
@@ -46,11 +46,16 @@ from prowler.providers.aws.lib.security_hub.security_hub import (
resolve_security_hub_previous_findings,
verify_security_hub_integration_enabled_per_region,
)
from prowler.providers.common.allowlist import set_provider_allowlist
from prowler.providers.common.audit_info import (
set_provider_audit_info,
set_provider_execution_parameters,
)
from prowler.providers.common.clean import clean_provider_local_output_directories
from prowler.providers.common.common import (
get_global_provider,
set_global_provider_object,
)
from prowler.providers.common.mutelist import set_provider_mutelist
from prowler.providers.common.outputs import set_provider_output_options
from prowler.providers.common.quick_inventory import run_provider_quick_inventory
@@ -73,12 +78,17 @@ def prowler():
compliance_framework = args.compliance
custom_checks_metadata_file = args.custom_checks_metadata_file
if not args.no_banner:
print_banner(args)
live_display.initialize(args)
# if not args.no_banner:
# print_banner(args)
# We treat the compliance framework as another output format
if compliance_framework:
args.output_modes.extend(compliance_framework)
# If no input compliance framework, set all
else:
args.output_modes.extend(get_available_compliance_frameworks(provider))
# Set Logger configuration
set_logging_config(args.log_level, args.log_file, args.only_logs)
@@ -148,6 +158,7 @@ def prowler():
# Set the audit info based on the selected provider
audit_info = set_provider_audit_info(provider, args.__dict__)
set_global_provider_object(args)
# Import custom checks from folder
if checks_folder:
@@ -172,12 +183,12 @@ def prowler():
# Sort final check list
checks_to_execute = sorted(checks_to_execute)
# Parse Allowlist
allowlist_file = set_provider_allowlist(provider, audit_info, args)
# Parse Mute List
mutelist_file = set_provider_mutelist(provider, audit_info, args)
# Set output options based on the selected provider
audit_output_options = set_provider_output_options(
provider, args, audit_info, allowlist_file, bulk_checks_metadata
provider, args, audit_info, mutelist_file, bulk_checks_metadata
)
# Run the quick inventory for the provider if available
@@ -187,14 +198,16 @@ def prowler():
# Execute checks
findings = []
if len(checks_to_execute):
findings = execute_checks(
execution_manager = ExecutionManager(
checks_to_execute,
provider,
audit_info,
audit_output_options,
custom_checks_metadata,
)
findings = execution_manager.execute_checks()
else:
logger.error(
"There are no checks to execute. Please, check your input arguments"
@@ -256,9 +269,10 @@ def prowler():
f"{Style.BRIGHT}\nSending findings to AWS Security Hub, please wait...{Style.RESET_ALL}"
)
# Verify where AWS Security Hub is enabled
global_provider = get_global_provider()
aws_security_enabled_regions = []
security_hub_regions = (
get_available_aws_service_regions("securityhub", audit_info)
global_provider.get_available_aws_service_regions("securityhub")
if not audit_info.audited_regions
else audit_info.audited_regions
)
@@ -308,8 +322,12 @@ def prowler():
provider,
)
if compliance_framework and findings:
for compliance in compliance_framework:
if findings:
compliance_overview = False
if not compliance_framework:
compliance_overview = True
compliance_framework = get_available_compliance_frameworks(provider)
for compliance in sorted(compliance_framework):
# Display compliance table
display_compliance_table(
findings,
@@ -317,12 +335,20 @@ def prowler():
compliance,
audit_output_options.output_filename,
audit_output_options.output_directory,
compliance_overview,
)
if compliance_overview:
print(
f"\nDetailed compliance results are in {Fore.YELLOW}{audit_output_options.output_directory}/compliance/{Style.RESET_ALL}\n"
)
# If custom checks were passed, remove the modules
if checks_folder:
remove_custom_checks_module(checks_folder, provider)
# clean local directories
clean_provider_local_output_directories(args)
# If there are failed findings exit code 3, except if -z is input
if not args.ignore_exit_code_3 and stats["total_fail"] > 0:
sys.exit(3)
File diff suppressed because it is too large
+24 -3
@@ -468,6 +468,27 @@
},
{
"Id": "2.1.1",
"Description": "Ensure all S3 buckets employ encryption-at-rest",
"Checks": [
"s3_bucket_default_encryption"
],
"Attributes": [
{
"Section": "2.1. Simple Storage Service (S3)",
"Profile": "Level 2",
"AssessmentStatus": "Automated",
"Description": "Amazon S3 provides a variety of no, or low, cost encryption options to protect data at rest.",
"RationaleStatement": "Encrypting data at rest reduces the likelihood that it is unintentionally exposed and can nullify the impact of disclosure if the encryption remains unbroken.",
"ImpactStatement": "Amazon S3 buckets with default bucket encryption using SSE-KMS cannot be used as destination buckets for Amazon S3 server access logging. Only SSE-S3 default encryption is supported for server access log destination buckets.",
"RemediationProcedure": "**From Console:** 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select a Bucket. 3. Click on 'Properties'. 4. Click edit on `Default Encryption`. 5. Select either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`. 6. Click `Save` 7. Repeat for all the buckets in your AWS account lacking encryption. **From Command Line:** Run either ``` aws s3api put-bucket-encryption --bucket <bucket name> --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"AES256\"}}]}' ``` or ``` aws s3api put-bucket-encryption --bucket <bucket name> --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"aws:kms\",\"KMSMasterKeyID\": \"aws/s3\"}}]}' ``` **Note:** the KMSMasterKeyID can be set to the master key of your choosing; aws/s3 is an AWS preconfigured default.",
"AuditProcedure": "**From Console:** 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select a Bucket. 3. Click on 'Properties'. 4. Verify that `Default Encryption` is enabled, and displays either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`. 5. Repeat for all the buckets in your AWS account. **From Command Line:** 1. Run command to list buckets ``` aws s3 ls ``` 2. For each bucket, run ``` aws s3api get-bucket-encryption --bucket <bucket name> ``` 3. Verify that either ``` \"SSEAlgorithm\": \"AES256\" ``` or ``` \"SSEAlgorithm\": \"aws:kms\"``` is displayed.",
"AdditionalInformation": "S3 bucket encryption only applies to objects as they are placed in the bucket. Enabling S3 bucket encryption does **not** encrypt objects previously stored within the bucket.",
"References": "https://docs.aws.amazon.com/AmazonS3/latest/user-guide/default-bucket-encryption.html:https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html#bucket-encryption-related-resources"
}
]
},
{
"Id": "2.1.2",
"Description": "Ensure S3 Bucket Policy is set to deny HTTP requests",
"Checks": [
"s3_bucket_secure_transport_policy"
@@ -488,7 +509,7 @@
]
},
{
"Id": "2.1.2",
"Id": "2.1.3",
"Description": "Ensure MFA Delete is enabled on S3 buckets",
"Checks": [
"s3_bucket_no_mfa_delete"
@@ -509,7 +530,7 @@
]
},
{
"Id": "2.1.3",
"Id": "2.1.4",
"Description": "Ensure all data in Amazon S3 has been discovered, classified and secured when required.",
"Checks": [
"macie_is_enabled"
@@ -530,7 +551,7 @@
]
},
{
"Id": "2.1.4",
"Id": "2.1.5",
"Description": "Ensure that S3 Buckets are configured with 'Block public access (bucket settings)'",
"Checks": [
"s3_bucket_level_public_access_block",
File diff suppressed because it is too large
+6 -112
@@ -211,31 +211,6 @@
"iam_avoid_root_usage"
]
},
{
"Id": "op.acc.4.aws.iam.8",
"Description": "Proceso de gestión de derechos de acceso",
"Attributes": [
{
"IdGrupoControl": "op.acc.4",
"Marco": "operacional",
"Categoria": "control de acceso",
"DescripcionControl": "Se restringirá todo acceso a las acciones especificadas para el usuario root de una cuenta.",
"Nivel": "alto",
"Tipo": "requisito",
"Dimensiones": [
"confidencialidad",
"integridad",
"trazabilidad",
"autenticidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"organizations_account_part_of_organizations",
"organizations_scp_check_deny_regions"
]
},
{
"Id": "op.acc.4.aws.iam.9",
"Description": "Proceso de gestión de derechos de acceso",
@@ -814,8 +789,7 @@
}
],
"Checks": [
"inspector2_is_enabled",
"inspector2_active_findings_exist"
"inspector2_findings_exist"
]
},
{
@@ -1147,30 +1121,6 @@
"cloudtrail_insights_exist"
]
},
{
"Id": "op.exp.8.r1.aws.ct.3",
"Description": "Revisión de los registros",
"Attributes": [
{
"IdGrupoControl": "op.exp.8.r1",
"Marco": "operacional",
"Categoria": "explotación",
"DescripcionControl": "Registrar los eventos de lectura y escritura de datos.",
"Nivel": "alto",
"Tipo": "refuerzo",
"Dimensiones": [
"trazabilidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"cloudwatch_log_metric_filter_and_alarm_for_cloudtrail_configuration_changes_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_insights_exist"
]
},
{
"Id": "op.exp.8.r1.aws.ct.4",
"Description": "Revisión de los registros",
@@ -1283,33 +1233,6 @@
"iam_role_cross_service_confused_deputy_prevention"
]
},
{
"Id": "op.exp.8.r4.aws.ct.1",
"Description": "Control de acceso",
"Attributes": [
{
"IdGrupoControl": "op.exp.8.r4",
"Marco": "operacional",
"Categoria": "explotación",
"DescripcionControl": "Asignar correctamente las políticas AWS IAM para el acceso y borrado de los registros y sus copias de seguridad haciendo uso del principio de mínimo privilegio.",
"Nivel": "alto",
"Tipo": "refuerzo",
"Dimensiones": [
"trazabilidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"iam_policy_allows_privilege_escalation",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_customer_unattached_policy_no_administrative_privilege",
"iam_no_custom_policy_permissive_role_assumption",
"iam_policy_attached_only_to_group_or_roles",
"iam_role_cross_service_confused_deputy_prevention",
"iam_policy_no_full_access_to_cloudtrail"
]
},
{
"Id": "op.exp.8.r4.aws.ct.2",
"Description": "Control de acceso",
@@ -1936,8 +1859,7 @@
}
],
"Checks": [
"inspector2_is_enabled",
"inspector2_active_findings_exist"
"inspector2_findings_exist"
]
},
{
@@ -2012,8 +1934,7 @@
}
],
"Checks": [
"inspector2_is_enabled",
"inspector2_active_findings_exist"
"inspector2_findings_exist"
]
},
{
@@ -2189,7 +2110,7 @@
}
],
"Checks": [
"fms_policy_compliant"
"networkfirewall_in_all_vpc"
]
},
{
@@ -2330,31 +2251,6 @@
"cloudfront_distributions_https_enabled"
]
},
{
"Id": "mp.com.4.aws.ws.1",
"Description": "Separación de flujos de información en la red",
"Attributes": [
{
"IdGrupoControl": "mp.com.4",
"Marco": "medidas de protección",
"Categoria": "segregación de redes",
"DescripcionControl": "Se deberán abrir solo los puertos necesarios para el uso del servicio AWS WorkSpaces.",
"Nivel": "alto",
"Tipo": "requisito",
"Dimensiones": [
"confidencialidad",
"integridad",
"trazabilidad",
"autenticidad",
"disponibilidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"workspaces_vpc_2private_1public_subnets_nat"
]
},
{
"Id": "mp.com.4.aws.vpc.1",
"Description": "Separación de flujos de información en la red",
@@ -2427,8 +2323,7 @@
}
],
"Checks": [
"vpc_subnet_separate_private_public",
"vpc_different_regions"
"vpc_subnet_separate_private_public"
]
},
{
@@ -2475,8 +2370,7 @@
}
],
"Checks": [
"vpc_subnet_different_az",
"vpc_different_regions"
"vpc_subnet_different_az"
]
},
{
@@ -1,693 +0,0 @@
{
"Framework": "Foundational-Technical-Review",
"Version": "",
"Provider": "AWS",
"Description": "The Foundational Technical Review (FTR) assesses an AWS Partner's solution against a specific set of Amazon Web Services (AWS) best practices around security, performance, and operational processes that are most critical for customer success. Passing the FTR is required to qualify AWS Software Partners for AWS Partner Network (APN) programs such as AWS Competency and AWS Service Ready but any AWS Partner who offers a technology solution may request a FTR review through AWS Partner Central.",
"Requirements": [
{
"Id": "HOST-001",
"Name": "Confirm your hosting model",
"Description": "To use this FTR checklist you must host all critical application components on AWS. You may use external providers for edge services such as content delivery networks (CDNs) or domain name system (DNS), or corporate identity providers. If you are using any edge services outside AWS, please specify them in the self-assessment.",
"Attributes": [
{
"Section": "Partner-hosted FTR requirements",
"Subsection": "Hosting",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "SUP-001",
"Name": "Subscribe to the AWS Business Support tier (or higher) for all production AWS accounts or have an action plan to handle issues which require help from AWS Support",
"Description": "It is recommended that you subscribe to the AWS Business Support tier or higher (including AWS Partner-Led Support) for all of your AWS production accounts. For more information, refer to Compare AWS Support Plans. If you don't have premium support, you must have an action plan to handle issues which require help from AWS Support. AWS Support provides a mix of tools and technology, people, and programs designed to proactively help you optimize performance, lower costs, and innovate faster. AWS Business Support provides additional benefits including access to AWS Trusted Advisor and AWS Personal Health Dashboard and faster response times.",
"Attributes": [
{
"Section": "Partner-hosted FTR requirements",
"Subsection": "Support level",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "WAFR-001",
"Name": "Conduct periodic architecture reviews (minimum once every year)",
"Description": "Conduct periodic architecture reviews of your production workload (at least once per year) using a documented architectural standard that includes AWS-specific best practices. If you have an internally defined standard for your AWS workloads, we recommend you use it for these reviews. If you do not have an internal standard, we recommend you use the AWS Well-Architected Framework.",
"Attributes": [
{
"Section": "Partner-hosted FTR requirements",
"Subsection": "Architecture review",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "WAFR-002",
"Name": "Review the AWS Shared Responsibility Models for Security and Resiliency",
"Description": "Review the AWS Shared Responsibility Model for Security and the AWS Shared Responsibility Model for Resiliency. Ensure that your product's architecture and operational processes address the customer responsibilities defined in these models. We recommend you use AWS Resilience Hub to ensure your workload resiliency posture meets your targets and to provide you with operational procedures you may use to address the customer responsibilities.",
"Attributes": [
{
"Section": "Partner-hosted FTR requirements",
"Subsection": "Architecture review",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "ARC-001",
"Name": "Use root user only by exception",
"Description": "The root user has unlimited access to your account and its resources, and using it only by exception helps protect your AWS resources. The AWS root user must not be used for everyday tasks, even administrative ones. Instead, adhere to the best practice of using the root user only to create your first AWS Identity and Access Management (IAM) user. Then securely lock away the root user credentials and use them to perform only a few account and service management tasks. To view the tasks that require you to sign in as the root user, see AWS Tasks That Require Root User. FTR does not require you to actively monitor root usage.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "AWS root account",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "ARC-003",
"Name": "Enable multi-factor authentication (MFA) on the root user for all AWS accounts",
"Description": "Enabling MFA provides an additional layer of protection against unauthorized access to your account. To configure MFA for the root user, follow the instructions for enabling either a virtual MFA or hardware MFA device. If you are using AWS Organizations to create new accounts, the initial password for the root user is set to a random value that is never exposed to you. If you do not recover the password for the root user of these accounts, you do not need to enable MFA on them. For any accounts where you do have access to the root user's password, you must enable MFA.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "AWS root account",
"Type": "Automated"
}
],
"Checks": [
"iam_root_mfa_enabled",
"iam_root_hardware_mfa_enabled"
]
},
{
"Id": "ARC-004",
"Name": "Remove access keys for the root user",
"Description": "Programmatic access to AWS APIs should never use the root user. It is best not to generate a static access key for the root user. If one already exists, you should transition any processes using that key to use temporary access keys from an AWS Identity and Access Management (IAM) role, or, if necessary, static access keys from an IAM user.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "AWS root account",
"Type": "Automated"
}
],
"Checks": [
"iam_no_root_access_key"
]
},
{
"Id": "ARC-005",
"Name": "Develop incident management plans",
"Description": "An incident management plan is critical to respond, mitigate, and recover from the potential impact of security incidents. An incident management plan is a structured process for identifying, remediating, and responding in a timely manner to security incidents. An effective incident management plan must be continually iterated upon, remaining current with your cloud operations goal. For more information on developing incident management plans please see Develop incident management plans.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "AWS root account",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "ACOM-001",
"Name": "Configure AWS account contacts",
"Description": "If an account is not managed by AWS Organizations, alternate account contacts help AWS get in contact with the appropriate personnel if needed. Configure the account's alternate contacts to point to a group rather than an individual. For example, create separate email distribution lists for billing, operations, and security and configure these as Billing, Security, and Operations contacts in each active AWS account. This ensures that multiple people will receive AWS notifications and be able to respond, even if someone is on vacation, changes roles, or leaves the company.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Communications from AWS",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "ACOM-002",
"Name": "Set account contact information including the root user email address to email addresses and phone numbers owned by your company",
"Description": "Using company owned email addresses and phone numbers for contact information enables you to access them even if the individuals whom they belong to are no longer with your organization",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Communications from AWS",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "IAM-001",
"Name": "Enable multi-factor authentication (MFA) for all Human Identities with AWS access",
"Description": "You must require any human identities to authenticate using MFA before accessing your AWS accounts. Typically, this means enabling MFA within your corporate identity provider. If you have existing legacy IAM users you must enable MFA for console access for those principals as well. Enabling MFA for IAM users provides an additional layer of security. With MFA, users have a device that generates a unique authentication code (a one-time password, or OTP). Users must provide both their normal credentials (user name and password) and the OTP. The MFA device can either be a special piece of hardware, or it can be a virtual device (for example, it can run in an app on a smartphone). Please note that machine identities do not require MFA.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Identity and Access Management",
"Type": "Automated"
}
],
"Checks": [
"iam_root_mfa_enabled",
"iam_root_hardware_mfa_enabled",
"iam_user_hardware_mfa_enabled",
"iam_user_mfa_enabled_console_access",
"iam_administrator_access_with_mfa"
]
},
{
"Id": "IAM-002",
"Name": "Monitor and secure static AWS Identity and Access Management (IAM) credentials",
"Description": "Use temporary IAM credentials retrieved by assuming a role whenever possible. In cases where it is infeasible to use IAM roles, implement the following controls to reduce the risk these credentials are misused: Rotate IAM access keys regularly (recommended at least every 90 days). Maintain an inventory of all static keys and where they are used and remove unused access keys. Implement monitoring of AWS CloudTrail logs to detect anomalous activity or other potential misuse (e.g. using AWS GuardDuty). Define a runbook or SOP for revoking credentials in the event you detect misuse.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Identity and Access Management",
"Type": "Automated"
}
],
"Checks": [
"iam_rotate_access_key_90_days",
"iam_user_accesskey_unused",
"iam_user_with_temporary_credentials",
"guardduty_is_enabled",
"guardduty_no_high_severity_findings"
]
},
{
"Id": "IAM-003",
"Name": "Use strong password policy",
"Description": "Enforce a strong password policy, and educate users to avoid common or re-used passwords. For IAM users, you can create a password policy for your account on the Account Settings page of the IAM console. You can use the password policy to define password requirements, such as minimum length and whether it requires non-alphabetic characters, and so on. For more information, see Setting an Account Password Policy for IAM users.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Identity and Access Management",
"Type": "Automated"
}
],
"Checks": [
"iam_password_policy_expires_passwords_within_90_days_or_less",
"iam_password_policy_lowercase",
"iam_password_policy_minimum_length_14",
"iam_password_policy_number",
"iam_password_policy_reuse_24",
"iam_password_policy_symbol",
"iam_password_policy_uppercase"
]
},
{
"Id": "IAM-004",
"Name": "Create individual identities (no shared credentials) for anyone who needs AWS access",
"Description": "Create individual entities and give unique security credentials and permissions to each user accessing your account. With individual entities and no shared credentials, you can audit the activity of each user.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Identity and Access Management",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "IAM-005",
"Name": "Use IAM roles and its temporary security credentials to provide access to third parties.",
"Description": "Do not provision IAM users and share those credentials with people outside of your organization. Any external services that need to make AWS API calls against your account (for example, a monitoring solution that accesses your account's AWS CloudWatch metrics) must use a cross-account role. For more information, refer to Providing access to AWS accounts owned by third parties.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Identity and Access Management",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "IAM-006",
"Name": "Grant least privilege access",
"Description": "You must follow the standard security advice of granting least privilege. Grant only the access that identities require by allowing access to specific actions on specific AWS resources under specific conditions. Rely on groups and identity attributes to dynamically set permissions at scale, rather than defining permissions for individual users. For example, you can allow a group of developers access to manage only resources for their project. This way, when a developer is removed from the group, access for the developer is revoked everywhere that group was used for access control, without requiring any changes to the access policies.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Identity and Access Management",
"Type": "Automated"
}
],
"Checks": [
"iam_policy_attached_only_to_group_or_roles"
]
},
{
"Id": "IAM-007",
"Name": "Manage access based on life cycle",
"Description": "Integrate access controls with operator and application lifecycle and your centralized federation provider and IAM. For example, remove a user's access when they leave the organization or change roles.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Identity and Access Management",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "IAM-008",
"Name": "Audit identities quarterly",
"Description": "Auditing the identities that are configured in your identity provider and IAM helps ensure that only authorized identities have access to your workload. For example, remove people that leave the organization, and remove cross-account roles that are no longer required. Have a process in place to periodically audit permissions to the services accessed by an IAM entity. This helps you identify the policies you needto modify to remove any unused permissions. For more information, see Refining permissions in AWS using last accessed information.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Identity and Access Management",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "IAM-009",
"Name": "Do not embed credentials in application code",
"Description": "Ensure that all credentials used by your applications (for example, IAM access keys and database passwords) are never included in your application's source code or committed to source control in any way.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Identity and Access Management",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "IAM-0010",
"Name": "Store secrets securely.",
"Description": "Encrypt all secrets in transit and at rest, define fine-grained access controls that only allow access to specific identities, and log access to secrets in an audit log. We recommend you use a purpose-built secret management service such as AWS Secrets Manager, AWS Systems Manager Parameter Store, or an AWS Partner solution, but internally developed solutions that meet these requirements are also acceptable.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Identity and Access Management",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "IAM-0011",
"Name": "Encrypt all end user/customer credentials and hash passwords at rest.",
"Description": "If you are storing end user/customer credentials in a database that you manage, encrypt credentials at rest and hash passwords. As an alternative, AWS recommends using a user-identity synchronization service, such as Amazon Cognito or an equivalent AWS Partner solution.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Identity and Access Management",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "IAM-0012",
"Name": "Use temporary credentials",
"Description": "Use temporary security credentials to access AWS resources. For machine identities within AWS (for example, Amazon Elastic Compute Cloud (Amazon EC2) instances or AWS Lambda functions), always use IAM roles to acquire temporary security credentials. For machine identities running outside of AWS, use IAM Roles Anywhere or securely store static AWS access keys that are only used to assume an IAM role.For human identities, use AWS IAM Identity Center or other identity federation solutions where possible. If you must use static AWS access keys for human users, require MFA for all access, including the AWS Management Console, and AWS Command Line Interface (AWS CLI).",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Identity and Access Management",
"Type": "Automated"
}
],
"Checks": [
"iam_rotate_access_key_90_days",
"iam_user_accesskey_unused",
"iam_user_with_temporary_credentials",
"iam_policy_attached_only_to_group_or_roles",
"iam_role_administratoraccess_policy",
"iam_role_cross_account_readonlyaccess_policy",
"iam_role_cross_service_confused_deputy_prevention",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_root_hardware_mfa_enabled",
"iam_user_hardware_mfa_enabled",
"iam_user_mfa_enabled_console_access",
"iam_administrator_access_with_mfa"
]
},
{
"Id": "SECOPS-001",
"Name": "Perform vulnerability management",
"Description": "Define a mechanism and frequency to scan and patch for vulnerabilities in your dependencies, and in your operating systems to help protect against new threats. Scan and patch your dependencies, and your operating systems on a defined schedule. Software vulnerability management is essential to keeping your system secure from threat actors. Embedding vulnerability assessments early into your continuous integration/continuous delivery (CI/CD) pipeline allows you to prioritize remediation of any security vulnerabilities detected. The solution you need to achieve this varies according to the AWS services that you are consuming. To check for vulnerabilities in software running in Amazon EC2 instances, you can add Amazon Inspector to your pipeline to cause your build to fail if Inspector detects vulnerabilities. You can also use open source products such as OWASP Dependency-Check, Snyk, OpenVAS, package managers and AWS Partner tools for vulnerability management.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Operational security",
"Type": "Automated"
}
],
"Checks": [
"inspector2_is_enabled",
"inspector2_active_findings_exist",
"accessanalyzer_enabled_without_findings",
"guardduty_no_high_severity_findings"
]
},
{
"Id": "NETSEC-001",
"Name": "Implement the least permissive rules for all Amazon EC2 security groups",
"Description": "All Amazon EC2 security groups should restrict access to the greatest degree possible. At a minimum, do the following: Ensure that no security groups allow ingress from 0.0.0.0/0 to port 22 or 3389 (CIS 5.2) Ensure that the default security group of every VPC restricts all traffic (CIS 5.3/Security Control EC2.2)",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Network Security",
"Type": "Automated"
}
],
"Checks": [
"ec2_ami_public",
"ec2_instance_public_ip",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_port_mongodb_27017_27018",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_ftp_port_20_21",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_cassandra_7199_9160_8888",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_elasticsearch_kibana_9200_9300_5601",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_kafka_9092",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_memcached_11211",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_oracle_1521_2483",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_postgres_5432",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_redis_6379",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_sql_server_1433_1434",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_telnet_23",
"ec2_securitygroup_allow_wide_open_public_ipv4",
"ec2_securitygroup_default_restrict_traffic",
"ec2_securitygroup_not_used",
"ec2_securitygroup_with_many_ingress_egress_rules"
]
},
{
"Id": "NETSEC-002",
"Name": "Restrict resources in public subnets",
"Description": "Do not place resources in public subnets of your VPC unless they must receive network traffic from public sources. Public subnets are subnets associated with a route table that has a route to an internet gateway.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Network Security",
"Type": "Automated"
}
],
"Checks": [
"vpc_subnet_no_public_ip_by_default",
"vpc_subnet_separate_private_public",
"vpc_endpoint_connections_trust_boundaries",
"vpc_endpoint_services_allowed_principals_trust_boundaries",
"workspaces_vpc_2private_1public_subnets_nat"
]
},
{
"Id": "BAR-001",
"Name": "Configure automatic data backups",
"Description": "You must perform regular backups to a durable storage service. Backups ensure that you have the ability to recover from administrative, logical, or physical error scenarios. Configure backups to be taken automatically based on a periodic schedule, or by changes in the dataset. RDS instances, EBS volumes, DynamoDB tables, and S3 objects can all be configured for automatic backup. AWS Backup, AWS Marketplace solutions or third-party solutions can also be used. If objects in S3 bucket are write-once-read-many (WORM), compensating controls such as object lock can be used meet this requirement. If it is customers responsibility to backup their data, it must be clearly stated in the documentation and the Partner must provide clear instructions on how to backup the data.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Backups and recovery",
"Type": "Automated"
}
],
"Checks": [
"backup_plans_exist",
"backup_reportplans_exist",
"backup_vaults_encrypted",
"backup_vaults_exist",
"efs_have_backup_enabled",
"rds_instance_backup_enabled"
]
},
{
"Id": "BAR-002",
"Name": "Periodically recover data to verify the integrity of your backup process",
"Description": "To confirm that your backup process meets your recovery time objectives (RTO) and recovery point objectives (RPO), run a recovery test on a regular schedule and after making significant changes to your cloud environment. For more information, refer to Getting Started - Backup and Restore with AWS.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Backups and recovery",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "RES-001",
"Name": "Define a Recovery Point Objective (RPO)",
"Description": "To confirm that your backup process meets your recovery time objectives (RTO) and recovery point objectives (RPO), run a recovery test on a regular schedule and after making significant changes to your cloud environment. For more information, refer to Getting Started - Backup and Restore with AWS.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Resiliency",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "RES-002",
"Name": "Establish a Recovery Time Objective (RTO)",
"Description": "Define an RTO that meets your organizations needs and expectations. RTO is the maximum acceptable delay your organization will accept between the interruption and restoration of service.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Resiliency",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "RES-004",
"Name": "Resiliency Testing",
"Description": "Test resiliency to ensure that RTO and RPO are met, both periodically (minimum every 12 months) and after major updates. The resiliency test must include accidental data loss, instance failures, and Availability Zone (AZ) failures. At least one resilience test that meets RTO and RPO requirements must be completed prior to FTR approval. You can use AWS Resilience Hub to test and verify your workloads to see if it meets its resilience target. AWS Resilience Hub works with AWS Fault Injection Service (AWS FIS) , a chaos engineering service, to provide fault-injection simulations of real-world failures to validate the application recovers within the resilience targets you defined. AWS Resilience Hub also provides API operations for you to integrate its resilience assessment and testing into your CI/CD pipelines for ongoing resilience validation. Including resilience validation in CI/CD pipelines helps make sure that changes to the workloads underlying infrastructure don't compromise resilience.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Resiliency",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "RES-005",
"Name": "Communicate customer responsibilities for resilience",
"Description": "Clearly define your customers responsibility for backup, recovery, and availability. At a minimum, your product documentation or customer agreements should cover the following: Responsibility the customer has for backing up the data stored in your solution. Instructions for backing up data or configuring optional features in your product for data protection, if applicable. Options customers have for configuring the availability of your product.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Resiliency",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "RES-006",
"Name": "Architect your product to meet availability targets and uptime service level agreements (SLAs)",
"Description": "If you publish or privately agree to availability targets or uptime SLAs, ensure that your architecture and operational processes are designed to support them. Additionally, provide clear guidance to customers on any configuration required to achieve the targets or SLAs.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Resiliency",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "RES-007",
"Name": "Define a customer communication plan for outages",
"Description": "Establish a plan for communicating information about system outages to your customers both during and after incidents. Your communication should not include any data that was provided by AWS under a non-disclosure agreement (NDA).",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Resiliency",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "S3-001",
"Name": "Review all Amazon S3 buckets to determine appropriate access levels",
"Description": "You must ensure that buckets that require public access have been reviewed to determine if public read or write access is needed and if appropriate controls are in place to control public access. When assigning access permissions, follow the principle of least privilege, an AWS best practice. For more information, refer to overview of managing access.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Amazon S3 bucket access",
"Type": "Automated"
}
],
"Checks": [
"s3_bucket_acl_prohibited",
"s3_bucket_default_encryption",
"s3_bucket_kms_encryption",
"s3_bucket_level_public_access_block",
"s3_bucket_object_lock",
"s3_bucket_policy_public_write_access",
"s3_bucket_public_access",
"s3_bucket_public_list_acl",
"s3_bucket_public_write_acl",
"s3_bucket_secure_transport_policy",
"s3_bucket_server_access_logging_enabled"
]
},
{
"Id": "CAA-001",
"Name": "Use cross-account roles to access customer AWS accounts",
"Description": "Cross-account roles reduce the amount of sensitive information AWS Partners need to store for their customers.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Cross-account access",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "CAA-007",
"Name": "Provide guidance or an automated setup mechanism (for example, an AWS CloudFormation template) for creating cross-account roles with the minimum required privileges",
"Description": "The policy created for cross-account access in customer accounts must follow the principle of least privilege. The AWS Partner must provide a role-policy document or an automated setup mechanism (for example, an AWS CloudFormation template) for the customers to use to ensure that the roles are created with minimum required privileges. For more information, refer to the AWS Partner Network (APN) blog posts.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Cross-account access",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "CAA-002",
"Name": "Use an external ID with cross-account roles to access customer accounts",
"Description": "An external ID allows the user that is assuming the role to assert the circumstances in which they are operating. It also provides a way for the account owner to permit the role to be assumed only under specific circumstances. The primary function of the external ID is to address and prevent the confused deputy problem.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Cross-account access",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "CAA-004",
"Name": "Use a value you generate (not something provided by the customer) for the external ID",
"Description": "When configuring cross-account access using IAM roles, you must use a value you generate for the external ID, instead of one provided by the customer, to ensure the integrity of the cross-account role configuration. A partner-generated external ID ensures that malicious parties cannot impersonate a customer's configuration and enforces uniqueness and format consistency across all customers. If you are not generating an external ID today we recommend implementing a process that generates a random unique value (such as a Universally Unique Identifier) for the external ID that a customer uses to set up a cross-account role.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Cross-account access",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "CAA-005",
"Name": "Ensure that all external IDs are unique.",
"Description": "The external IDs used must be unique across all customers. Re-using external IDs for different customers does not solve the confused deputy problem and runs the risk of customer A being able to view data of customer B by using the role ARN and the external ID of customer B. To resolve this, we recommend implementing a process that ensures a random unique value, such as a Universally Unique Identifier, is generated for the external ID that a customer would use to setup a cross account role.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Cross-account access",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "CAA-006",
"Name": "Provide read-only access to external ID to customers",
"Description": "Customers must not be able to set or influence external IDs. When the external ID is editable, it is possible for one customer to impersonate the configuration of another. For example, when the external ID is editable, customer A can create a cross account role setup using customer Bs role ARN and external ID, granting customer A access to customer Bs data. Remediation of this item involves making the external ID a view-only field, ensuring that the external ID cannot be changed to impersonate the setup of another customer.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Cross-account access",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "CAA-003",
"Name": "Deprecate any historical use of customer-provided IAM credentials",
"Description": "If your application provides legacy support for the use of static IAM credentials for cross-account access, the application's user interface and customer documentation must make it clear that this method is deprecated. Existing customers should be encouraged to switch to cross-account role based-access, and collection of credentials should be disabled for new customers.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Cross-account access",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "SDAT-001",
"Name": "Identify sensitive data (for example, Personally Identifiable Information (PII) and Protected Health Information (PHI))",
"Description": "Data classification enables you to determine which data needs to be protected and how. Based on the workload and the data it processes, identify the data that is not common public knowledge.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Sensitive data",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "SDAT-002",
"Name": "Encrypt all sensitive data at rest",
"Description": "Encryption maintains the confidentiality of sensitive data even when it gets stolen or the network through which it is transmitted becomes compromised.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Sensitive data",
"Type": "Automated"
}
],
"Checks": [
"sns_topics_kms_encryption_at_rest_enabled",
"athena_workgroup_encryption",
"cloudtrail_kms_encryption_enabled",
"dynamodb_accelerator_cluster_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"efs_encryption_at_rest_enabled",
"opensearch_service_domains_encryption_at_rest_enabled"
]
},
{
"Id": "SDAT-003",
"Name": "Only use protocols with encryption when transmitting sensitive data outside of your VPC",
"Description": "Encryption maintains data confidentiality even when the network through which it is transmitted becomes compromised.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Sensitive data",
"Type": "Manual"
}
],
"Checks": []
},
{
"Id": "RCVP-001",
"Name": "Establish a process to ensure that all required compliance standards are met",
"Description": "If you advertise that your product meets specific compliance standards, you must have an internal process for ensuring compliance. Examples of compliance standards include Payment Card Industry Data Security Standard (PCI DSS) PCI DSS, Federal Risk and Authorization Management Program (FedRAMP)FedRAMP, and U.S. Health Insurance Portability and Accountability Act (HIPAA)HIPAA. Applicable compliance standards are determined by various factors, such as what types of data the solution stores or transmits and which geographic regions the solution supports.",
"Attributes": [
{
"Section": "Architectural and Operational Controls",
"Subsection": "Regulatory compliance validation process",
"Type": "Manual"
}
],
"Checks": []
}
]
}
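Each requirement object in the JSON above follows the same shape: an Id, a Name, a Description, an Attributes list whose Type is Manual or Automated, and a Checks array. A small stdlib sketch can catch the most common authoring mistakes in such a file, like a duplicated check ID; the field names come from the JSON above, but the validation rules themselves are illustrative assumptions, not part of Prowler:

```python
import json

def validate_requirement(req: dict) -> list[str]:
    """Return a list of problems found in one requirement object."""
    errors = []
    # Every requirement carries these five fields in the file above.
    for field in ("Id", "Name", "Description", "Attributes", "Checks"):
        if field not in req:
            errors.append(f"{req.get('Id', '?')}: missing {field}")
    for attr in req.get("Attributes", []):
        # Assumed convention: Manual requirements have no checks,
        # Automated requirements have at least one.
        if attr.get("Type") == "Manual" and req.get("Checks"):
            errors.append(f"{req['Id']}: Manual requirement should list no checks")
        if attr.get("Type") == "Automated" and not req.get("Checks"):
            errors.append(f"{req['Id']}: Automated requirement needs at least one check")
    # A check ID listed twice in one requirement is almost certainly a mistake.
    checks = req.get("Checks", [])
    if len(checks) != len(set(checks)):
        errors.append(f"{req['Id']}: duplicate check IDs")
    return errors

sample = json.loads("""{
  "Id": "IAM-006",
  "Name": "Grant least privilege access",
  "Description": "...",
  "Attributes": [{"Section": "...", "Subsection": "...", "Type": "Automated"}],
  "Checks": ["iam_policy_attached_only_to_group_or_roles"]
}""")
print(validate_requirement(sample))  # → []
```

Run over the whole Requirements array, a check like this would have flagged the repeated iam_root_hardware_mfa_enabled entry in IAM-0012.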
+4 -8
@@ -29,8 +29,7 @@
"securityhub_enabled",
"elbv2_waf_acl_attached",
"guardduty_is_enabled",
"inspector2_is_enabled",
"inspector2_active_findings_exist",
"inspector2_findings_exist",
"awslambda_function_not_publicly_accessible",
"ec2_instance_public_ip"
],
@@ -577,8 +576,7 @@
"config_recorder_all_regions_enabled",
"securityhub_enabled",
"guardduty_is_enabled",
"inspector2_is_enabled",
"inspector2_active_findings_exist"
"inspector2_findings_exist"
],
"Attributes": [
{
@@ -739,8 +737,7 @@
"iam_user_hardware_mfa_enabled",
"iam_user_mfa_enabled_console_access",
"securityhub_enabled",
"inspector2_is_enabled",
"inspector2_active_findings_exist"
"inspector2_findings_exist"
],
"Attributes": [
{
@@ -1895,8 +1892,7 @@
"networkfirewall_in_all_vpc",
"elbv2_waf_acl_attached",
"guardduty_is_enabled",
"inspector2_is_enabled",
"inspector2_active_findings_exist",
"inspector2_findings_exist",
"ec2_networkacl_allow_ingress_any_port",
"ec2_networkacl_allow_ingress_tcp_port_22",
"ec2_networkacl_allow_ingress_tcp_port_3389",
+56 -56
@@ -13,7 +13,7 @@
"ItemId": "cc_1_1",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -27,7 +27,7 @@
"ItemId": "cc_1_2",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -41,7 +41,7 @@
"ItemId": "cc_1_3",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -62,7 +62,7 @@
"ItemId": "cc_1_4",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -76,7 +76,7 @@
"ItemId": "cc_1_5",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -90,7 +90,7 @@
"ItemId": "cc_2_1",
"Section": "CC2.0 - Common Criteria Related to Communication and Information",
"Service": "aws",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -109,7 +109,7 @@
"ItemId": "cc_2_2",
"Section": "CC2.0 - Common Criteria Related to Communication and Information",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -123,7 +123,7 @@
"ItemId": "cc_2_3",
"Section": "CC2.0 - Common Criteria Related to Communication and Information",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -137,7 +137,7 @@
"ItemId": "cc_3_1",
"Section": "CC3.0 - Common Criteria Related to Risk Assessment",
"Service": "aws",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -155,7 +155,7 @@
"ItemId": "cc_3_2",
"Section": "CC3.0 - Common Criteria Related to Risk Assessment",
"Service": "aws",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -175,7 +175,7 @@
"ItemId": "cc_3_3",
"Section": "CC3.0 - Common Criteria Related to Risk Assessment",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -189,7 +189,7 @@
"ItemId": "cc_3_4",
"Section": "CC3.0 - Common Criteria Related to Risk Assessment",
"Service": "config",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -205,7 +205,7 @@
"ItemId": "cc_4_1",
"Section": "CC4.0 - Monitoring Activities",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -219,7 +219,7 @@
"ItemId": "cc_4_2",
"Section": "CC4.0 - Monitoring Activities",
"Service": "guardduty",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -236,7 +236,7 @@
"ItemId": "cc_5_1",
"Section": "CC5.0 - Control Activities",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -250,7 +250,7 @@
"ItemId": "cc_5_2",
"Section": "CC5.0 - Control Activities",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -264,7 +264,7 @@
"ItemId": "cc_5_3",
"Section": "CC5.0 - Control Activities",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -278,7 +278,7 @@
"ItemId": "cc_6_1",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "s3",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -294,7 +294,7 @@
"ItemId": "cc_6_2",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "rds",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -310,7 +310,7 @@
"ItemId": "cc_6_3",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "iam",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -328,7 +328,7 @@
"ItemId": "cc_6_4",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -342,7 +342,7 @@
"ItemId": "cc_6_5",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -356,7 +356,7 @@
"ItemId": "cc_6_6",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "ec2",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -372,7 +372,7 @@
"ItemId": "cc_6_7",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "acm",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -388,7 +388,7 @@
"ItemId": "cc_6_8",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "aws",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -405,7 +405,7 @@
"ItemId": "cc_7_1",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -424,7 +424,7 @@
"ItemId": "cc_7_2",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -460,7 +460,7 @@
"ItemId": "cc_7_3",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -492,7 +492,7 @@
"ItemId": "cc_7_4",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -523,7 +523,7 @@
"ItemId": "cc_7_5",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -537,7 +537,7 @@
"ItemId": "cc_8_1",
"Section": "CC8.0 - Change Management",
"Service": "aws",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -553,7 +553,7 @@
"ItemId": "cc_9_1",
"Section": "CC9.0 - Risk Mitigation",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -567,7 +567,7 @@
"ItemId": "cc_9_2",
"Section": "CC9.0 - Risk Mitigation",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -581,7 +581,7 @@
"ItemId": "cc_a_1_1",
"Section": "CCA1.0 - Additional Criterial for Availability",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -595,7 +595,7 @@
"ItemId": "cc_a_1_2",
"Section": "CCA1.0 - Additional Criterial for Availability",
"Service": "aws",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -626,7 +626,7 @@
"ItemId": "cc_a_1_3",
"Section": "CCA1.0 - Additional Criterial for Availability",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -640,7 +640,7 @@
"ItemId": "cc_c_1_1",
"Section": "CCC1.0 - Additional Criterial for Confidentiality",
"Service": "aws",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -656,7 +656,7 @@
"ItemId": "cc_c_1_2",
"Section": "CCC1.0 - Additional Criterial for Confidentiality",
"Service": "s3",
"Type": "automated"
"Soc_Type": "automated"
}
],
"Checks": [
@@ -672,7 +672,7 @@
"ItemId": "p_1_1",
"Section": "P1.0 - Privacy Criteria Related to Notice and Communication of Objectives Related to Privacy",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -686,7 +686,7 @@
"ItemId": "p_2_1",
"Section": "P2.0 - Privacy Criteria Related to Choice and Consent",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -700,7 +700,7 @@
"ItemId": "p_3_1",
"Section": "P3.0 - Privacy Criteria Related to Collection",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -714,7 +714,7 @@
"ItemId": "p_3_2",
"Section": "P3.0 - Privacy Criteria Related to Collection",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -728,7 +728,7 @@
"ItemId": "p_4_1",
"Section": "P4.0 - Privacy Criteria Related to Use, Retention, and Disposal",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -742,7 +742,7 @@
"ItemId": "p_4_2",
"Section": "P4.0 - Privacy Criteria Related to Use, Retention, and Disposal",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -756,7 +756,7 @@
"ItemId": "p_4_3",
"Section": "P4.0 - Privacy Criteria Related to Use, Retention, and Disposal",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -770,7 +770,7 @@
"ItemId": "p_5_1",
"Section": "P5.0 - Privacy Criteria Related to Access",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -784,7 +784,7 @@
"ItemId": "p_5_2",
"Section": "P5.0 - Privacy Criteria Related to Access",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -798,7 +798,7 @@
"ItemId": "p_6_1",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -812,7 +812,7 @@
"ItemId": "p_6_2",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -826,7 +826,7 @@
"ItemId": "p_6_3",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -840,7 +840,7 @@
"ItemId": "p_6_4",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -854,7 +854,7 @@
"ItemId": "p_6_5",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -868,7 +868,7 @@
"ItemId": "p_6_6",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -882,7 +882,7 @@
"ItemId": "p_6_7",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -896,7 +896,7 @@
"ItemId": "p_7_1",
"Section": "P7.0 - Privacy Criteria Related to Quality",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -910,7 +910,7 @@
"ItemId": "p_8_1",
"Section": "P8.0 - Privacy Criteria Related to Monitoring and Enforcement",
"Service": "aws",
"Type": "manual"
"Soc_Type": "manual"
}
],
"Checks": []
@@ -1,4 +1,4 @@
Allowlist:
Mute List:
Accounts:
"*":
########################### AWS CONTROL TOWER ###########################
@@ -38,9 +38,6 @@ Allowlist:
- "aws-controltower-ReadOnlyExecutionRole"
- "AWSControlTower_VPCFlowLogsRole"
- "AWSControlTowerExecution"
- "AWSAFTAdmin"
- "AWSAFTExecution"
- "AWSAFTService"
"iam_policy_*":
Regions:
- "*"
@@ -3,8 +3,8 @@
### Tags is an optional list that matches tuples of 'key=value', which are "ANDed" together.
### Use an alternation Regex to match one of multiple tags with "ORed" logic.
### For each check you can except Accounts, Regions, Resources and/or Tags.
########################### ALLOWLIST EXAMPLE ###########################
Allowlist:
########################### MUTE LIST EXAMPLE ###########################
Mute List:
Accounts:
"123456789012":
Checks:
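A quick sketch of the "ORed" alternation behaviour described in the comments above (the tag strings are invented for illustration; this is not Prowler's actual matching code):

```python
import re

# A mute-list tag written as an alternation regex matches either value
# with OR logic; separate tags in the list are ANDed. The tag strings
# below are invented for illustration.
tag_pattern = re.compile(r"environment=(dev|staging)")

assert tag_pattern.search("environment=dev")
assert tag_pattern.search("environment=staging")
assert tag_pattern.search("environment=prod") is None
print("alternation regex ORs the two tag values")
```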
+9 -4
@@ -11,7 +11,7 @@ from prowler.lib.logger import logger
timestamp = datetime.today()
timestamp_utc = datetime.now(timezone.utc).replace(tzinfo=timezone.utc)
prowler_version = "3.15.0"
prowler_version = "3.11.3"
html_logo_url = "https://github.com/prowler-cloud/prowler/"
html_logo_img = "https://user-images.githubusercontent.com/3985464/113734260-7ba06900-96fb-11eb-82bc-d4f68a1e2710.png"
square_logo_img = "https://user-images.githubusercontent.com/38561120/235905862-9ece5bd7-9aa3-4e48-807a-3a9035eb8bfb.png"
@@ -25,13 +25,19 @@ banner_color = "\033[1;92m"
# Severities
valid_severities = ["critical", "high", "medium", "low", "informational"]
# Statuses
finding_statuses = ["PASS", "FAIL", "MANUAL"]
# Compliance
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
def get_available_compliance_frameworks():
def get_available_compliance_frameworks(provider=None):
available_compliance_frameworks = []
for provider in ["aws", "gcp", "azure"]:
providers = ["aws", "gcp", "azure"]
if provider:
providers = [provider]
for provider in providers:
with os.scandir(f"{actual_directory}/../compliance/{provider}") as files:
for file in files:
if file.is_file() and file.name.endswith(".json"):
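The provider filter added to `get_available_compliance_frameworks` above can be sketched as a standalone function. A hypothetical directory layout is built in a temp dir here; the real function scans Prowler's bundled compliance directory:

```python
import os
import pathlib
import tempfile

def get_available_compliance_frameworks(base_dir, provider=None):
    # Mirror of the logic above: scan only the given provider's directory
    # when one is supplied, otherwise scan all three.
    frameworks = []
    providers = [provider] if provider else ["aws", "gcp", "azure"]
    for prov in providers:
        with os.scandir(pathlib.Path(base_dir) / prov) as files:
            for file in files:
                if file.is_file() and file.name.endswith(".json"):
                    # cis_1.4_aws.json --> cis_1.4_aws
                    frameworks.append(file.name.split(".json")[0])
    return frameworks

with tempfile.TemporaryDirectory() as tmp:
    for prov, fname in [
        ("aws", "cis_1.4_aws.json"),
        ("gcp", "cis_2.0_gcp.json"),
        ("azure", "notes.txt"),  # not JSON, so never listed
    ]:
        d = pathlib.Path(tmp) / prov
        d.mkdir()
        (d / fname).write_text("{}")
    print(get_available_compliance_frameworks(tmp, provider="aws"))  # ['cis_1.4_aws']
```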
@@ -50,7 +56,6 @@ aws_services_json_file = "aws_regions_by_service.json"
# gcp_zones_json_file = "gcp_zones.json"
default_output_directory = getcwd() + "/output"
output_file_timestamp = timestamp.strftime("%Y%m%d%H%M%S")
timestamp_iso = timestamp.isoformat(sep=" ", timespec="seconds")
csv_file_suffix = ".csv"
+9 -20
@@ -2,10 +2,10 @@
aws:
# AWS Global Configuration
# aws.allowlist_non_default_regions --> Set to True to allowlist failed findings in non-default regions for AccessAnalyzer, GuardDuty, SecurityHub, DRS and Config
allowlist_non_default_regions: False
# If you want to allowlist/mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w allowlist.yaml`:
# Allowlist:
# aws.mute_non_default_regions --> Set to True to mute failed findings in non-default regions for GuardDuty, SecurityHub, DRS and Config
mute_non_default_regions: False
# If you want to mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w mutelist.yaml`:
# Mute List:
# Accounts:
# "*":
# Checks:
@@ -69,8 +69,8 @@ aws:
# AWS Organizations
# organizations_scp_check_deny_regions
# organizations_enabled_regions: [
# "eu-central-1",
# "eu-west-1",
# 'eu-central-1',
# 'eu-west-1',
# "us-east-1"
# ]
organizations_enabled_regions: []
@@ -89,20 +89,9 @@ aws:
# Azure Configuration
azure:
# Azure Network Configuration
# azure.network_public_ip_shodan
shodan_api_key: null
# Azure App Service
# azure.app_ensure_php_version_is_latest
php_latest_version: "8.2"
# azure.app_ensure_python_version_is_latest
python_latest_version: "3.12"
# azure.app_ensure_java_version_is_latest
java_latest_version: "17"
# GCP Configuration
gcp:
# GCP Compute Configuration
# gcp.compute_public_address_shodan
shodan_api_key: null
# Kubernetes Configuration
kubernetes:
@@ -13,3 +13,7 @@ CustomChecksMetadata:
Checks:
compute_instance_public_ip:
Severity: critical
kubernetes:
Checks:
apiserver_anonymous_requests:
Severity: low
+4 -4
@@ -4,7 +4,7 @@ from prowler.config.config import banner_color, orange_color, prowler_version, t
def print_banner(args):
banner = rf"""{banner_color} _
banner = f"""{banner_color} _
_ __ _ __ _____ _| | ___ _ __
| '_ \| '__/ _ \ \ /\ / / |/ _ \ '__|
| |_) | | | (_) \ V V /| | __/ |
@@ -15,13 +15,13 @@ def print_banner(args):
"""
print(banner)
if args.verbose or args.quiet:
if args.verbose:
print(
f"""
Color code for results:
- {Fore.YELLOW}INFO (Information){Style.RESET_ALL}
- {Fore.YELLOW}MANUAL (Manual check){Style.RESET_ALL}
- {Fore.GREEN}PASS (Recommended value){Style.RESET_ALL}
- {orange_color}WARNING (Ignored by allowlist){Style.RESET_ALL}
- {orange_color}MUTED (Muted by mute list){Style.RESET_ALL}
- {Fore.RED}FAIL (Fix required){Style.RESET_ALL}
"""
)
+66 -52
@@ -10,18 +10,19 @@ from pkgutil import walk_packages
from types import ModuleType
from typing import Any
from alive_progress import alive_bar
from colorama import Fore, Style
import prowler
from prowler.config.config import orange_color
from prowler.lib.check.compliance_models import load_compliance_framework
from prowler.lib.check.custom_checks_metadata import update_check_metadata
from prowler.lib.check.managers import ExecutionManager
from prowler.lib.check.models import Check, load_check_metadata
from prowler.lib.logger import logger
from prowler.lib.outputs.outputs import report
from prowler.lib.ui.live_display import live_display
from prowler.lib.utils.utils import open_file, parse_json_file
from prowler.providers.aws.lib.allowlist.allowlist import allowlist_findings
from prowler.providers.aws.lib.mutelist.mutelist import mutelist_findings
from prowler.providers.common.common import get_global_provider
from prowler.providers.common.models import Audit_Metadata
from prowler.providers.common.outputs import Provider_Output_Options
@@ -67,9 +68,9 @@ def bulk_load_compliance_frameworks(provider: str) -> dict:
# cis_v1.4_aws.json --> cis_v1.4_aws
compliance_framework_name = filename.split(".json")[0]
# Store the compliance info
bulk_compliance_frameworks[compliance_framework_name] = (
load_compliance_framework(file_path)
)
bulk_compliance_frameworks[
compliance_framework_name
] = load_compliance_framework(file_path)
except Exception as e:
logger.error(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
@@ -217,7 +218,7 @@ def print_categories(categories: set):
singular_string = f"\nThere is {Fore.YELLOW}{categories_num}{Style.RESET_ALL} available category.\n"
message = plural_string if categories_num > 1 else singular_string
for category in sorted(categories):
for category in categories:
print(f"- {category}")
print(message)
@@ -246,7 +247,7 @@ def print_compliance_frameworks(
singular_string = f"\nThere is {Fore.YELLOW}{frameworks_num}{Style.RESET_ALL} available Compliance Framework.\n"
message = plural_string if frameworks_num > 1 else singular_string
for framework in sorted(bulk_compliance_frameworks.keys()):
for framework in bulk_compliance_frameworks.keys():
print(f"- {framework}")
print(message)
@@ -431,8 +432,10 @@ def execute_checks(
services_executed = set()
checks_executed = set()
global_provider = get_global_provider()
# Initialize the Audit Metadata
audit_info.audit_metadata = Audit_Metadata(
global_provider.audit_metadata = Audit_Metadata(
services_scanned=0,
expected_checks=checks_to_execute,
completed_checks=0,
@@ -492,47 +495,57 @@ def execute_checks(
print(
f"{Style.BRIGHT}Executing {checks_num} {check_noun}, please wait...{Style.RESET_ALL}\n"
)
with alive_bar(
total=len(checks_to_execute),
ctrl_c=False,
bar="blocks",
spinner="classic",
stats=False,
enrich_print=False,
) as bar:
for check_name in checks_to_execute:
# Recover service from check name
service = check_name.split("_")[0]
bar.title = (
f"-> Scanning {orange_color}{service}{Style.RESET_ALL} service"
execution_manager = ExecutionManager(provider, checks_to_execute)
total_checks = execution_manager.total_checks_per_service()
completed_checks = {service: 0 for service in total_checks}
service_findings = []
for service, check_name in execution_manager.execute_checks():
try:
check_findings = execute(
service,
check_name,
provider,
audit_output_options,
audit_info,
services_executed,
checks_executed,
custom_checks_metadata,
)
try:
check_findings = execute(
service,
check_name,
provider,
audit_output_options,
audit_info,
services_executed,
checks_executed,
custom_checks_metadata,
)
all_findings.extend(check_findings)
all_findings.extend(check_findings)
service_findings.extend(check_findings)
# Update the completed checks count
completed_checks[service] += 1
# If the check does not exist in the provider or is from another provider
except ModuleNotFoundError:
logger.error(
f"Check '{check_name}' was not found for the {provider.upper()} provider"
)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
bar()
bar.title = f"-> {Fore.GREEN}Scan completed!{Style.RESET_ALL}"
# Check if all checks for the service are completed
if completed_checks[service] == total_checks[service]:
# All checks for the service are completed
# Add a summary table or perform other actions
live_display.add_results_for_service(service, service_findings)
# Clear service_findings
service_findings = []
# If the check does not exist in the provider or is from another provider
except ModuleNotFoundError:
logger.error(
f"Check '{check_name}' was not found for the {provider.upper()} provider"
)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return all_findings
def create_check_service_dict(checks_to_execute):
output = {}
for check_name in checks_to_execute:
service = check_name.split("_")[0]
if service not in output.keys():
output[service] = []
output[service].append(check_name)
return output
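The `create_check_service_dict` helper above derives the service from the first underscore-separated token of each check name; the same grouping can be written compactly with `collections.defaultdict` (a sketch, with made-up check names):

```python
from collections import defaultdict

def create_check_service_dict(checks_to_execute):
    # Group check names by service: the service is the token before the
    # first underscore (e.g. "ec2" in "ec2_instance_public_ip").
    output = defaultdict(list)
    for check_name in checks_to_execute:
        output[check_name.split("_")[0]].append(check_name)
    return dict(output)

checks = ["ec2_ebs_default_encryption", "iam_root_mfa_enabled", "ec2_instance_public_ip"]
print(create_check_service_dict(checks))
# {'ec2': ['ec2_ebs_default_encryption', 'ec2_instance_public_ip'], 'iam': ['iam_root_mfa_enabled']}
```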
def execute(
service: str,
check_name: str,
@@ -543,6 +556,7 @@ def execute(
checks_executed: set,
custom_checks_metadata: Any,
):
global_provider = get_global_provider()
# Import check module
check_module_path = (
f"prowler.providers.{provider}.services.{service}.{check_name}.{check_name}"
@@ -562,15 +576,15 @@ def execute(
# Update Audit Status
services_executed.add(service)
checks_executed.add(check_name)
audit_info.audit_metadata = update_audit_metadata(
audit_info.audit_metadata, services_executed, checks_executed
global_provider.audit_metadata = update_audit_metadata(
global_provider.audit_metadata, services_executed, checks_executed
)
# Allowlist findings
if audit_output_options.allowlist_file:
check_findings = allowlist_findings(
audit_output_options.allowlist_file,
audit_info.audited_account,
# Mute List findings
if audit_output_options.mutelist_file:
check_findings = mutelist_findings(
audit_output_options.mutelist_file,
global_provider.audited_account,
check_findings,
)
@@ -0,0 +1,48 @@
import ast
import os
import pathlib
from prowler.lib.logger import logger
class ImportFinder(ast.NodeVisitor):
def __init__(self, provider):
self.imports = set()
self.provider = provider
def visit_ImportFrom(self, node):
if node.module and f"prowler.providers.{self.provider}.services" in node.module:
for name in node.names:
if "_client" in name.name:
self.imports.add(name.name)
self.generic_visit(node)
def analyze_check_file(file_path, provider):
# Parse the check file
with open(file_path, "r") as file:
node = ast.parse(file.read(), filename=file_path)
finder = ImportFinder(provider)
finder.visit(node)
return list(finder.imports)
def get_dependencies_for_checks(provider, checks_dict):
current_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
prowler_dir = current_directory.parent.parent
check_dependencies = {}
for service_name, checks in checks_dict.items():
check_dependencies[service_name] = {}
for check_name in checks:
relative_path = f"providers/{provider}/services/{service_name}/{check_name}/{check_name}.py"
check_file_path = prowler_dir / relative_path
if not check_file_path.exists():
logger.error(
f"{check_name} does not exist at {relative_path}! Cannot determine service dependencies"
)
continue
clients = analyze_check_file(str(check_file_path), provider)
check_dependencies[service_name][check_name] = clients
return check_dependencies
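The AST walk above can be exercised end to end on an in-memory snippet. The following re-sketches the visitor so it runs standalone (the imported client names are invented):

```python
import ast

class ClientImportFinder(ast.NodeVisitor):
    # Re-sketch of ImportFinder above: collect *_client names imported
    # from a provider's services package.
    def __init__(self, provider):
        self.imports = set()
        self.provider = provider

    def visit_ImportFrom(self, node):
        if node.module and f"prowler.providers.{self.provider}.services" in node.module:
            for name in node.names:
                if "_client" in name.name:
                    self.imports.add(name.name)
        self.generic_visit(node)

source = (
    "from prowler.providers.aws.services.ec2.ec2_client import ec2_client\n"
    "from prowler.providers.aws.services.iam.iam_client import iam_client\n"
    "from os import path\n"  # unrelated import, ignored by the visitor
)
finder = ClientImportFinder("aws")
finder.visit(ast.parse(source))
print(sorted(finder.imports))  # ['ec2_client', 'iam_client']
```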
+23 -36
@@ -32,26 +32,19 @@ def load_checks_to_execute(
# First, loop over the bulk_checks_metadata to extract the needed subsets
for check, metadata in bulk_checks_metadata.items():
try:
# Aliases
for alias in metadata.CheckAliases:
if alias not in check_aliases:
check_aliases[alias] = []
check_aliases[alias].append(check)
# Aliases
for alias in metadata.CheckAliases:
check_aliases[alias] = check
# Severities
if metadata.Severity:
check_severities[metadata.Severity].append(check)
# Severities
if metadata.Severity:
check_severities[metadata.Severity].append(check)
# Categories
for category in metadata.Categories:
if category not in check_categories:
check_categories[category] = []
check_categories[category].append(check)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
# Categories
for category in metadata.Categories:
if category not in check_categories:
check_categories[category] = []
check_categories[category].append(check)
# Handle if there are checks passed using -c/--checks
if check_list:
@@ -110,7 +103,6 @@ def load_checks_to_execute(
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
return checks_to_execute
def update_checks_to_execute_with_aliases(
@@ -118,20 +110,15 @@ def update_checks_to_execute_with_aliases(
) -> set:
"""update_checks_to_execute_with_aliases returns the checks_to_execute updated using the check aliases."""
# Verify if any input check is an alias of another check
try:
new_checks_to_execute = checks_to_execute.copy()
for input_check in checks_to_execute:
if input_check in check_aliases:
# Remove input check name and add the real one
new_checks_to_execute.remove(input_check)
for alias in check_aliases[input_check]:
if alias not in new_checks_to_execute:
new_checks_to_execute.add(alias)
print(
f"\nUsing alias {Fore.YELLOW}{input_check}{Style.RESET_ALL} for check {Fore.YELLOW}{alias}{Style.RESET_ALL}..."
)
return new_checks_to_execute
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
for input_check in checks_to_execute:
if (
input_check in check_aliases
and check_aliases[input_check] not in checks_to_execute
):
# Remove input check name and add the real one
checks_to_execute.remove(input_check)
checks_to_execute.add(check_aliases[input_check])
print(
f"\nUsing alias {Fore.YELLOW}{input_check}{Style.RESET_ALL} for check {Fore.YELLOW}{check_aliases[input_check]}{Style.RESET_ALL}...\n"
)
return checks_to_execute
+2 -2
@@ -53,14 +53,14 @@ def update_checks_metadata_with_compliance(
check_compliance.append(compliance)
# Create metadata for Manual Control
manual_check_metadata = {
"Provider": framework.Provider.lower(),
"Provider": "aws",
"CheckID": "manual_check",
"CheckTitle": "Manual Check",
"CheckType": [],
"ServiceName": "",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "low",
"Severity": "",
"ResourceType": "",
"Description": "",
"Risk": "",
+4 -4
@@ -52,12 +52,12 @@ class ENS_Requirement_Attribute(BaseModel):
class Generic_Compliance_Requirement_Attribute(BaseModel):
"""Generic Compliance Requirement Attribute"""
ItemId: Optional[str]
ItemId: str
Section: Optional[str]
SubSection: Optional[str]
SubGroup: Optional[str]
Service: Optional[str]
Type: Optional[str]
Service: str
Soc_Type: Optional[str]
class CIS_Requirement_Attribute_Profile(str):
@@ -151,9 +151,9 @@ class Compliance_Requirement(BaseModel):
Union[
CIS_Requirement_Attribute,
ENS_Requirement_Attribute,
Generic_Compliance_Requirement_Attribute,
ISO27001_2013_Requirement_Attribute,
AWS_Well_Architected_Requirement_Attribute,
Generic_Compliance_Requirement_Attribute,
]
]
Checks: list[str]
+369
@@ -0,0 +1,369 @@
import importlib
import os
import sys
import traceback
from types import ModuleType
from typing import Any, Set
from colorama import Fore, Style
from prowler.lib.check.check_to_client_mapper import get_dependencies_for_checks
from prowler.lib.check.custom_checks_metadata import update_check_metadata
from prowler.lib.check.models import Check
from prowler.lib.logger import logger
from prowler.lib.outputs.outputs import report
from prowler.lib.ui.live_display import live_display
from prowler.providers.aws.lib.mutelist.mutelist import mutelist_findings
from prowler.providers.common.common import get_global_provider
from prowler.providers.common.models import Audit_Metadata
from prowler.providers.common.outputs import Provider_Output_Options
class ExecutionManager:
def __init__(
self,
checks_to_execute: list,
provider: str,
audit_info: Any,
audit_output_options: Provider_Output_Options,
custom_checks_metadata: Any,
):
self.checks_to_execute = checks_to_execute
self.provider = provider
self.audit_info = audit_info
self.audit_output_options = audit_output_options
self.custom_checks_metadata = custom_checks_metadata
self.live_display = live_display
self.live_display.start()
self.loaded_clients = {} # defaultdict(lambda: False)
self.check_dict = self.create_check_service_dict(checks_to_execute)
self.check_dependencies = get_dependencies_for_checks(provider, self.check_dict)
self.remaining_checks = self.initialize_remaining_checks(
self.check_dependencies
)
self.services_queue = self.initialize_services_queue(self.check_dependencies)
# For tracking the executed services and checks
self.services_executed: Set[str] = set()
self.checks_executed: Set[str] = set()
# Initialize the Audit Metadata
self.audit_info.audit_metadata = Audit_Metadata(
services_scanned=0,
expected_checks=self.checks_to_execute,
completed_checks=0,
audit_progress=0,
)
def update_tracking(self, service: str, check: str):
self.services_executed.add(service)
self.checks_executed.add(check)
@staticmethod
def initialize_remaining_checks(check_dependencies):
remaining_checks = {}
for service, checks in check_dependencies.items():
for check_name, clients in checks.items():
remaining_checks[(service, check_name)] = clients
return remaining_checks
@staticmethod
def initialize_services_queue(check_dependencies):
return list(check_dependencies.keys())
@staticmethod
def create_check_service_dict(checks_to_execute):
output = {}
for check_name in checks_to_execute:
service = check_name.split("_")[0]
if service not in output.keys():
output[service] = []
output[service].append(check_name)
return output
def total_checks_per_service(self):
"""Returns a dictionary with the total number of checks for each service."""
total_checks = {}
for service, checks in self.check_dict.items():
total_checks[service] = len(checks)
return total_checks
def find_next_service(self):
# Prioritize services that use already loaded clients
for service in self.services_queue:
checks = self.check_dependencies[service]
if any(
client in self.loaded_clients
for check in checks.values()
for client in check
):
return service
return None if not self.services_queue else self.services_queue[0]
@staticmethod
def import_check(check_path: str) -> ModuleType:
"""
Imports an input check using its path
When importing a module using importlib.import_module, it's loaded and added to the sys.modules cache.
This means that the module remains in memory and is not garbage collected immediately after use, as it's still referenced in sys.modules.
This behavior is intentional, as importing modules can be a costly operation, and keeping them in memory allows for faster re-use.
release_clients deletes this reference once the client is no longer required by any of the remaining checks
"""
lib = importlib.import_module(f"{check_path}")
return lib
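The sys.modules caching that the docstring describes can be demonstrated with any small stdlib module (colorsys is used here only because it has no submodules):

```python
import importlib
import sys

mod = importlib.import_module("colorsys")
assert "colorsys" in sys.modules     # importing caches the module
del sys.modules["colorsys"]          # drop the cache entry, as release_clients does
mod2 = importlib.import_module("colorsys")
assert mod is not mod2               # re-import builds a fresh module object
print("module was re-imported after cache eviction")
```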
# Imports service clients, and tracks if it needs to be imported
def import_client(self, client_name):
if not self.loaded_clients.get(client_name):
# Dynamically import the client
module_name, _ = client_name.rsplit("_", 1)
client_module = importlib.import_module(
f"prowler.providers.{self.provider}.services.{module_name}.{client_name}"
)
self.loaded_clients[client_name] = client_module
def release_clients(self, completed_check_clients):
for client_name in completed_check_clients:
# Determine if any of the remaining checks still require the client
if not any(
client == client_name
for check in self.remaining_checks
for client in self.remaining_checks[check]
):
# Delete the reference to the client for this object
del self.loaded_clients[client_name]
module_name, _ = client_name.rsplit("_", 1)
# Delete the reference to the client in sys.modules
del sys.modules[
f"prowler.providers.aws.services.{module_name}.{client_name}"
]
def generate_checks(self):
"""
This is a generator function, which will:
* Determine the next service whose checks will be executed
* Load all the clients which are required by the checks into memory (init them)
* Yield the service and check name, 1-by-1, to be used within execute_checks
* Pass the completed checks to release_clients to determine whether the clients they required are still needed by remaining checks; if not, they can be garbage collected
It will complete the checks for a service before moving on to the next one
It uses find_next_service to prioritize the next service based on whether any of that service's checks require a client that has already been loaded
"""
while self.remaining_checks:
current_service = self.find_next_service()
if not current_service:
# Execution has completed, return
break
# Remove the service from the services_queue
self.services_queue.remove(current_service)
checks = self.check_dependencies[current_service]
clients_for_service = list(
set(client for client_list in checks.values() for client in client_list)
)
for client in clients_for_service:
self.live_display.add_client_init_section(client)
self.import_client(client)
# Add the display component
total_checks = len(self.check_dict[current_service])
self.live_display.add_service_section(current_service, total_checks)
for check_name, clients_for_check in checks.items():
yield current_service, check_name
self.live_display.increment_check_progress()
self.live_display.increment_overall_check_progress()
del self.remaining_checks[(current_service, check_name)]
self.release_clients(clients_for_check)
self.live_display.increment_overall_service_progress()
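The prioritisation performed by `find_next_service` and `generate_checks` can be reduced to a small scheduling sketch (service and client names are invented):

```python
def schedule_services(dependencies, loaded=None):
    # Yield services in execution order, preferring a service whose checks
    # reuse an already-loaded client over the next one in queue order.
    loaded = set(loaded or ())
    queue = list(dependencies)
    while queue:
        nxt = next(
            (
                service
                for service in queue
                if any(
                    client in loaded
                    for clients in dependencies[service].values()
                    for client in clients
                )
            ),
            queue[0],
        )
        queue.remove(nxt)
        loaded.update(
            client for clients in dependencies[nxt].values() for client in clients
        )
        yield nxt

deps = {
    "ec2": {"ec2_instance_public_ip": ["ec2_client"]},
    "iam": {"iam_root_mfa_enabled": ["iam_client"]},
    "vpc": {"vpc_flow_logs_enabled": ["ec2_client"]},  # shares ec2_client
}
print(list(schedule_services(deps)))  # ['ec2', 'vpc', 'iam'] -- vpc jumps ahead of iam
```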
def execute_checks(self) -> list:
# List to store all the check's findings
all_findings = []
# Services and checks executed for the Audit Status
global_provider = get_global_provider()
# Initialize the Audit Metadata
global_provider.audit_metadata = Audit_Metadata(
services_scanned=0,
expected_checks=self.checks_to_execute,
completed_checks=0,
audit_progress=0,
)
if os.name != "nt":
try:
from resource import RLIMIT_NOFILE, getrlimit
# Check ulimit for the maximum system open files
soft, _ = getrlimit(RLIMIT_NOFILE)
if soft < 4096:
logger.warning(
f"Your session file descriptors limit ({soft} open files) is below 4096. We recommend to increase it to avoid errors. Solve it running this command `ulimit -n 4096`. For more info visit https://docs.prowler.cloud/en/latest/troubleshooting/"
)
except Exception as error:
logger.error("Unable to retrieve ulimit default settings")
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
# Execution with the --only-logs flag
if self.audit_output_options.only_logs:
for service, check_name in self.generate_checks():
try:
check_findings = self.execute(service, check_name)
all_findings.extend(check_findings)
# If the check does not exist in the provider or is from another provider
except ModuleNotFoundError:
logger.error(
f"Check '{check_name}' was not found for the {self.provider.upper()} provider"
)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
# Default execution
total_checks = self.total_checks_per_service()
self.live_display.add_overall_progress_section(
total_checks_dict=total_checks
)
# For tracking when a service is completed
completed_checks = {service: 0 for service in total_checks}
service_findings = []
for service, check_name in self.generate_checks():
try:
check_findings = self.execute(
service,
check_name,
)
all_findings.extend(check_findings)
service_findings.extend(check_findings)
# Update the completed checks count
completed_checks[service] += 1
# Check if all checks for the service are completed
if completed_checks[service] == total_checks[service]:
# All checks for the service are completed
# Add a summary table or perform other actions
live_display.add_results_for_service(service, service_findings)
# Clear service_findings
service_findings = []
# If the check does not exist in the provider or is from another provider
except ModuleNotFoundError:
logger.error(
f"Check '{check_name}' was not found for the {self.provider.upper()} provider"
)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
self.live_display.hide_service_section()
return all_findings
def execute(
self,
service: str,
check_name: str,
):
try:
# Import check module
check_module_path = f"prowler.providers.{self.provider}.services.{service}.{check_name}.{check_name}"
lib = self.import_check(check_module_path)
# Recover functions from check
check_to_execute = getattr(lib, check_name)
c = check_to_execute()
# Update check metadata to reflect that in the outputs
if self.custom_checks_metadata and self.custom_checks_metadata[
"Checks"
].get(c.CheckID):
c = update_check_metadata(
c, self.custom_checks_metadata["Checks"][c.CheckID]
)
# Run check
check_findings = self.run_check(c, self.audit_output_options)
# Update Audit Status
self.update_tracking(service, check_name)
self.update_audit_metadata()
# Mutelist findings
if self.audit_output_options.mutelist_file:
check_findings = mutelist_findings(
self.audit_output_options.mutelist_file,
self.audit_info.audited_account,
check_findings,
)
# Report the check's findings
report(check_findings, self.audit_output_options, self.audit_info)
if os.environ.get("PROWLER_REPORT_LIB_PATH"):
try:
logger.info("Using custom report interface ...")
lib = os.environ["PROWLER_REPORT_LIB_PATH"]
outputs_module = importlib.import_module(lib)
custom_report_interface = getattr(outputs_module, "report")
custom_report_interface(
check_findings, self.audit_output_options, self.audit_info
)
except Exception:
sys.exit(1)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return check_findings
@staticmethod
def run_check(check: Check, output_options: Provider_Output_Options) -> list:
findings = []
if output_options.verbose:
print(
f"\nCheck ID: {check.CheckID} - {Fore.MAGENTA}{check.ServiceName}{Fore.YELLOW} [{check.Severity}]{Style.RESET_ALL}"
)
logger.debug(f"Executing check: {check.CheckID}")
try:
findings = check.execute()
except Exception as error:
if not output_options.only_logs:
print(
f"Something went wrong in {check.CheckID}, please use --log-level ERROR"
)
logger.error(
f"{check.CheckID} -- {error.__class__.__name__}[{traceback.extract_tb(error.__traceback__)[-1].lineno}]: {error}"
)
finally:
return findings
def update_audit_metadata(self):
"""update_audit_metadata updates the audit_metadata with the new status
Updates the given audit_metadata using the length of the services_executed and checks_executed
"""
try:
self.audit_info.audit_metadata.services_scanned = len(
self.services_executed
)
self.audit_info.audit_metadata.completed_checks = len(self.checks_executed)
self.audit_info.audit_metadata.audit_progress = (
100
* len(self.checks_executed)
/ len(self.audit_info.audit_metadata.expected_checks)
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
+93 -24
@@ -1,13 +1,14 @@
import os
import re
import sys
from abc import ABC, abstractmethod
from dataclasses import dataclass
from functools import wraps
from pydantic import BaseModel, ValidationError, validator
from pydantic import BaseModel, ValidationError
from pydantic.main import ModelMetaclass
from prowler.config.config import valid_severities
from prowler.lib.logger import logger
from prowler.lib.ui.live_display import live_display
class Code(BaseModel):
@@ -58,33 +59,30 @@ class Check_Metadata_Model(BaseModel):
# store the compliance later if supplied
Compliance: list = None
@validator("Categories", each_item=True, pre=True, always=True)
def valid_category(value):
if not isinstance(value, str):
raise ValueError("Categories must be a list of strings")
value_lower = value.lower()
if not re.match("^[a-z-]+$", value_lower):
raise ValueError(
f"Invalid category: {value}. Categories can only contain lowercase letters and hyphen '-'"
)
return value_lower
@validator("Severity", pre=True, always=True)
def severity_to_lower(severity):
return severity.lower()
class CheckMeta(ModelMetaclass):
"""
Dynamically decorates the execute function of all subclasses of the Check class
@validator("Severity")
def valid_severity(severity):
if severity not in valid_severities:
raise ValueError(
f"Invalid severity: {severity}. Severity must be one of {', '.join(valid_severities)}"
)
return severity
By making CheckMeta inherit from ModelMetaclass, it ensures that all features provided by Pydantic's BaseModel (such as data validation, serialization, and so forth) are preserved. CheckMeta just adds additional behavior (decorator application) on top of the existing features.
This also works because ModelMetaclass inherits from ABCMeta, as does the ABC class (it has to do with how metaclasses interact when applied to a class that inherits from other classes that have a metaclass).
The primary role of CheckMeta is to automatically apply a decorator to the execute method of subclasses. This behavior does not conflict with the typical responsibilities of ModelMetaclass
"""
def __new__(cls, name, bases, dct):
if "execute" in dct and not getattr(
dct["execute"], "__isabstractmethod__", False
):
dct["execute"] = Check.update_title_with_findings_decorator(dct["execute"])
return super(CheckMeta, cls).__new__(cls, name, bases, dct)
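The metaclass pattern described above (auto-wrapping a subclass's `execute`) can be shown in miniature, independent of Pydantic:

```python
from functools import wraps

def count_findings(func):
    # Stand-in for update_title_with_findings_decorator: annotate the
    # instance with the number of findings after execute() runs.
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        findings = func(self, *args, **kwargs)
        self.finding_count = len(findings)
        return findings
    return wrapper

class AutoDecorate(type):
    # Miniature CheckMeta: wrap any concrete `execute` defined on a subclass.
    def __new__(mcls, name, bases, dct):
        if "execute" in dct:
            dct["execute"] = count_findings(dct["execute"])
        return super().__new__(mcls, name, bases, dct)

class DemoCheck(metaclass=AutoDecorate):
    def execute(self):
        return ["finding_a", "finding_b"]

check = DemoCheck()
assert check.execute() == ["finding_a", "finding_b"]
print(check.finding_count)  # 2
```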
class Check(ABC, Check_Metadata_Model):
class Check(ABC, Check_Metadata_Model, metaclass=CheckMeta):
"""Prowler Check"""
title_bar_task: int = None
progress_task: int = None
def __init__(self, **data):
"""Check's init function. Calls the CheckMetadataModel init."""
# Parse the Check's metadata file
@@ -97,6 +95,43 @@ class Check(ABC, Check_Metadata_Model):
# Calls parents init function
super().__init__(**data)
self.live_display_enabled = False
service_section = live_display.get_service_section()
if service_section:
self.live_display_enabled = True
self.title_bar_task = service_section.title_bar.add_task(
f"{self.CheckTitle}...", start=False
)
def increment_task_progress(self):
if self.live_display_enabled:
current_section = live_display.get_service_section()
current_section.task_progress.update(self.progress_task, advance=1)
def start_task(self, message, count):
if self.live_display_enabled:
current_section = live_display.get_service_section()
self.progress_task = current_section.task_progress.add_task(
description=message, total=count, visible=True
)
def update_title_with_findings(self, findings):
if self.live_display_enabled:
current_section = live_display.get_service_section()
# current_section.task_progress.remove_task(self.progress_task)
total_failed = len(
[report for report in findings if report.status == "FAIL"]
)
total_checked = len(findings)
if total_failed == 0:
message = f"{self.CheckTitle} [pass]All resources passed ({total_checked})[/pass]"
else:
message = f"{self.CheckTitle} [fail]{total_failed}/{total_checked} failed![/fail]"
current_section.title_bar.update(
task_id=self.title_bar_task, description=message
)
def metadata(self) -> dict:
"""Return the JSON representation of the check's metadata"""
return self.json()
@@ -105,6 +140,24 @@ class Check(ABC, Check_Metadata_Model):
def execute(self):
"""Execute the check's logic"""
@staticmethod
def update_title_with_findings_decorator(func):
"""
Decorator to update the title bar in the live_display with findings after executing a check.
"""
@wraps(func)
def wrapper(check_instance, *args, **kwargs):
# Execute the original check's logic
findings = func(check_instance, *args, **kwargs)
# Update the title bar with the findings
check_instance.update_title_with_findings(findings)
return findings
return wrapper
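The `@wraps(func)` call matters here: without it, the decorated `execute` would lose its name and docstring, and the `__isabstractmethod__` check in CheckMeta could misfire. A minimal sketch (the class `FakeCheck` is hypothetical):

```python
from functools import wraps

def update_title(func):
    @wraps(func)  # preserves __name__, __doc__, and other metadata
    def wrapper(instance, *args, **kwargs):
        findings = func(instance, *args, **kwargs)
        instance.titles.append(f"done: {len(findings)} findings")
        return findings
    return wrapper

class FakeCheck:
    def __init__(self):
        self.titles = []

    @update_title
    def execute(self):
        """Run the check."""
        return [1, 2, 3]

c = FakeCheck()
assert c.execute() == [1, 2, 3]
assert c.titles == ["done: 3 findings"]
assert FakeCheck.execute.__doc__ == "Run the check."
```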
@dataclass
class Check_Report:
@@ -171,6 +224,22 @@ class Check_Report_GCP(Check_Report):
self.location = ""
@dataclass
class Check_Report_Kubernetes(Check_Report):
# TODO change class name to CheckReportKubernetes
"""Contains the Kubernetes Check's finding information."""
resource_name: str
resource_id: str
namespace: str
def __init__(self, metadata):
super().__init__(metadata)
self.resource_name = ""
self.resource_id = ""
self.namespace = ""
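The report class sets its fields to empty strings in `__init__` because the parent `Check_Report` consumes the metadata argument. In isolation the same shape can be sketched with plain dataclass defaults (`CheckReportK8s` is an illustrative stand-in, not the real class):

```python
from dataclasses import dataclass

@dataclass
class CheckReportK8s:
    """Simplified stand-in for Check_Report_Kubernetes."""
    resource_name: str = ""
    resource_id: str = ""
    namespace: str = ""

report = CheckReportK8s()
report.resource_name = "kube-apiserver"
report.namespace = "kube-system"
assert report.resource_id == ""
assert report.namespace == "kube-system"
```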
# Testing Pending
def load_check_metadata(metadata_file: str) -> Check_Metadata_Model:
    """load_check_metadata loads and parses a Check's metadata file"""
@@ -8,6 +8,7 @@ from prowler.config.config import (
default_config_file_path,
default_output_directory,
valid_severities,
finding_statuses,
)
from prowler.providers.common.arguments import (
init_providers_parser,
@@ -116,10 +117,10 @@ Detailed documentation at https://docs.prowler.cloud
"Outputs"
)
common_outputs_parser.add_argument(
"-q",
"--quiet",
action="store_true",
help="Store or send only Prowler failed findings",
"--status",
nargs="+",
help=f"Filter by the status of the findings {finding_statuses}",
choices=finding_statuses,
)
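The new `--status` flag combines `nargs="+"` (one or more values) with `choices` to validate each value. A self-contained sketch of the same argparse pattern (the status values are assumptions for illustration; the real list comes from `prowler.config.config.finding_statuses`):

```python
import argparse

finding_statuses = ["PASS", "FAIL", "MANUAL"]  # assumed values

parser = argparse.ArgumentParser()
parser.add_argument(
    "--status",
    nargs="+",
    choices=finding_statuses,
    help=f"Filter by the status of the findings {finding_statuses}",
)

args = parser.parse_args(["--status", "FAIL", "MANUAL"])
assert args.status == ["FAIL", "MANUAL"]
# when the flag is omitted, the attribute defaults to None
assert parser.parse_args([]).status is None
```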
common_outputs_parser.add_argument(
"-M",
@@ -330,7 +330,7 @@ def fill_compliance(output_options, finding, audit_info, file_descriptors):
Requirements_Attributes_SubSection=attribute.SubSection,
Requirements_Attributes_SubGroup=attribute.SubGroup,
Requirements_Attributes_Service=attribute.Service,
Requirements_Attributes_Type=attribute.Type,
Requirements_Attributes_Soc_Type=attribute.Soc_Type,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
@@ -444,8 +444,8 @@ def display_compliance_table(
)
overview_table = [
[
f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) NO CUMPLE{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) CUMPLE{Style.RESET_ALL}",
f"{Fore.RED}{round(fail_count/(fail_count+pass_count)*100, 2)}% ({fail_count}) NO CUMPLE{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count/(fail_count+pass_count)*100, 2)}% ({pass_count}) CUMPLE{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
@@ -539,8 +539,8 @@ def display_compliance_table(
)
overview_table = [
[
f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
f"{Fore.RED}{round(fail_count/(fail_count+pass_count)*100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count/(fail_count+pass_count)*100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
@@ -610,8 +610,8 @@ def display_compliance_table(
)
overview_table = [
[
f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
f"{Fore.RED}{round(fail_count/(fail_count+pass_count)*100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count/(fail_count+pass_count)*100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
@@ -0,0 +1,55 @@
from csv import DictWriter
from prowler.config.config import timestamp
from prowler.lib.outputs.models import (
Check_Output_CSV_AWS_Well_Architected,
generate_csv_fields,
)
from prowler.lib.utils.utils import outputs_unix_timestamp
def write_compliance_row_aws_well_architected_framework(
file_descriptors, finding, compliance, output_options, audit_info
):
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
compliance_output = compliance_output.lower().replace("-", "_")
csv_header = generate_csv_fields(Check_Output_CSV_AWS_Well_Architected)
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_AWS_Well_Architected(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_Name=attribute.Name,
Requirements_Attributes_WellArchitectedQuestionId=attribute.WellArchitectedQuestionId,
Requirements_Attributes_WellArchitectedPracticeId=attribute.WellArchitectedPracticeId,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_SubSection=attribute.SubSection,
Requirements_Attributes_LevelOfRisk=attribute.LevelOfRisk,
Requirements_Attributes_AssessmentMethod=attribute.AssessmentMethod,
Requirements_Attributes_Description=attribute.Description,
Requirements_Attributes_ImplementationGuidanceUrl=attribute.ImplementationGuidanceUrl,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_writer.writerow(compliance_row.__dict__)
@@ -0,0 +1,36 @@
from prowler.lib.outputs.compliance.cis_aws import generate_compliance_row_cis_aws
from prowler.lib.outputs.compliance.cis_gcp import generate_compliance_row_cis_gcp
from prowler.lib.outputs.csv import write_csv
def write_compliance_row_cis(
file_descriptors,
finding,
compliance,
output_options,
audit_info,
input_compliance_frameworks,
):
compliance_output = "cis_" + compliance.Version + "_" + compliance.Provider.lower()
# Only with the version of CIS that was selected
if compliance_output in str(input_compliance_frameworks):
for requirement in compliance.Requirements:
for attribute in requirement.Attributes:
if compliance.Provider == "AWS":
(compliance_row, csv_header) = generate_compliance_row_cis_aws(
finding,
compliance,
requirement,
attribute,
output_options,
audit_info,
)
elif compliance.Provider == "GCP":
(compliance_row, csv_header) = generate_compliance_row_cis_gcp(
finding, compliance, requirement, attribute, output_options
)
write_csv(
file_descriptors[compliance_output], csv_header, compliance_row
)
@@ -0,0 +1,34 @@
from prowler.config.config import timestamp
from prowler.lib.outputs.models import Check_Output_CSV_AWS_CIS, generate_csv_fields
from prowler.lib.utils.utils import outputs_unix_timestamp
def generate_compliance_row_cis_aws(
finding, compliance, requirement, attribute, output_options, audit_info
):
compliance_row = Check_Output_CSV_AWS_CIS(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(output_options.unix_timestamp, timestamp),
Requirements_Id=requirement.Id,
Requirements_Description=requirement.Description,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_Profile=attribute.Profile,
Requirements_Attributes_AssessmentStatus=attribute.AssessmentStatus,
Requirements_Attributes_Description=attribute.Description,
Requirements_Attributes_RationaleStatement=attribute.RationaleStatement,
Requirements_Attributes_ImpactStatement=attribute.ImpactStatement,
Requirements_Attributes_RemediationProcedure=attribute.RemediationProcedure,
Requirements_Attributes_AuditProcedure=attribute.AuditProcedure,
Requirements_Attributes_AdditionalInformation=attribute.AdditionalInformation,
Requirements_Attributes_References=attribute.References,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(Check_Output_CSV_AWS_CIS)
return compliance_row, csv_header
@@ -0,0 +1,35 @@
from prowler.config.config import timestamp
from prowler.lib.outputs.models import Check_Output_CSV_GCP_CIS, generate_csv_fields
from prowler.lib.utils.utils import outputs_unix_timestamp
def generate_compliance_row_cis_gcp(
finding, compliance, requirement, attribute, output_options
):
compliance_row = Check_Output_CSV_GCP_CIS(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
ProjectId=finding.project_id,
Location=finding.location.lower(),
AssessmentDate=outputs_unix_timestamp(output_options.unix_timestamp, timestamp),
Requirements_Id=requirement.Id,
Requirements_Description=requirement.Description,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_Profile=attribute.Profile,
Requirements_Attributes_AssessmentStatus=attribute.AssessmentStatus,
Requirements_Attributes_Description=attribute.Description,
Requirements_Attributes_RationaleStatement=attribute.RationaleStatement,
Requirements_Attributes_ImpactStatement=attribute.ImpactStatement,
Requirements_Attributes_RemediationProcedure=attribute.RemediationProcedure,
Requirements_Attributes_AuditProcedure=attribute.AuditProcedure,
Requirements_Attributes_AdditionalInformation=attribute.AdditionalInformation,
Requirements_Attributes_References=attribute.References,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
ResourceName=finding.resource_name,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(Check_Output_CSV_GCP_CIS)
return compliance_row, csv_header
@@ -0,0 +1,472 @@
import sys
from colorama import Fore, Style
from tabulate import tabulate
from prowler.config.config import orange_color
from prowler.lib.check.models import Check_Report
from prowler.lib.logger import logger
from prowler.lib.outputs.compliance.aws_well_architected_framework import (
write_compliance_row_aws_well_architected_framework,
)
from prowler.lib.outputs.compliance.cis import write_compliance_row_cis
from prowler.lib.outputs.compliance.ens_rd2022_aws import (
write_compliance_row_ens_rd2022_aws,
)
from prowler.lib.outputs.compliance.generic import write_compliance_row_generic
from prowler.lib.outputs.compliance.iso27001_2013_aws import (
write_compliance_row_iso27001_2013_aws,
)
from prowler.lib.outputs.compliance.mitre_attack_aws import (
write_compliance_row_mitre_attack_aws,
)
def add_manual_controls(
output_options, audit_info, file_descriptors, input_compliance_frameworks
):
try:
# Check if MANUAL control was already added to output
if "manual_check" in output_options.bulk_checks_metadata:
manual_finding = Check_Report(
output_options.bulk_checks_metadata["manual_check"].json()
)
manual_finding.status = "MANUAL"
manual_finding.status_extended = "Manual check"
manual_finding.resource_id = "manual_check"
manual_finding.resource_name = "Manual check"
manual_finding.region = ""
manual_finding.location = ""
manual_finding.project_id = ""
fill_compliance(
output_options,
manual_finding,
audit_info,
file_descriptors,
input_compliance_frameworks,
)
del output_options.bulk_checks_metadata["manual_check"]
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def get_check_compliance_frameworks_in_input(
check_id, bulk_checks_metadata, input_compliance_frameworks
):
"""get_check_compliance_frameworks_in_input returns a list of Compliance for the given check if the compliance framework is present in the input compliance to execute"""
check_compliances = []
if bulk_checks_metadata and bulk_checks_metadata[check_id]:
for compliance in bulk_checks_metadata[check_id].Compliance:
compliance_name = ""
if compliance.Version:
compliance_name = (
compliance.Framework.lower()
+ "_"
+ compliance.Version.lower()
+ "_"
+ compliance.Provider.lower()
)
else:
compliance_name = (
compliance.Framework.lower() + "_" + compliance.Provider.lower()
)
if compliance_name.replace("-", "_") in input_compliance_frameworks:
check_compliances.append(compliance)
return check_compliances
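The framework-name matching above hinges on a small naming convention: lowercase `framework_version_provider` (or `framework_provider` when there is no version) with hyphens normalized to underscores. A sketch of that convention, factored into a helper (`build_compliance_name` is an illustrative name, not part of the codebase):

```python
def build_compliance_name(framework: str, version: str, provider: str) -> str:
    """Mirror the naming used in get_check_compliance_frameworks_in_input."""
    if version:
        name = f"{framework.lower()}_{version.lower()}_{provider.lower()}"
    else:
        name = f"{framework.lower()}_{provider.lower()}"
    # hyphenated frameworks like MITRE-ATTACK become underscore-separated
    return name.replace("-", "_")

assert build_compliance_name("CIS", "2.0", "GCP") == "cis_2.0_gcp"
assert build_compliance_name("MITRE-ATTACK", "", "AWS") == "mitre_attack_aws"
```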
def fill_compliance(
output_options, finding, audit_info, file_descriptors, input_compliance_frameworks
):
try:
# We have to retrieve all the check's compliance requirements and get the ones matching with the input ones
check_compliances = get_check_compliance_frameworks_in_input(
finding.check_metadata.CheckID,
output_options.bulk_checks_metadata,
input_compliance_frameworks,
)
for compliance in check_compliances:
if compliance.Framework == "ENS" and compliance.Version == "RD2022":
write_compliance_row_ens_rd2022_aws(
file_descriptors, finding, compliance, output_options, audit_info
)
elif compliance.Framework == "CIS":
write_compliance_row_cis(
file_descriptors,
finding,
compliance,
output_options,
audit_info,
input_compliance_frameworks,
)
elif (
"AWS-Well-Architected-Framework" in compliance.Framework
and compliance.Provider == "AWS"
):
write_compliance_row_aws_well_architected_framework(
file_descriptors, finding, compliance, output_options, audit_info
)
elif (
compliance.Framework == "ISO27001"
and compliance.Version == "2013"
and compliance.Provider == "AWS"
):
write_compliance_row_iso27001_2013_aws(
file_descriptors, finding, compliance, output_options, audit_info
)
elif (
compliance.Framework == "MITRE-ATTACK"
and compliance.Version == ""
and compliance.Provider == "AWS"
):
write_compliance_row_mitre_attack_aws(
file_descriptors, finding, compliance, output_options, audit_info
)
else:
write_compliance_row_generic(
file_descriptors, finding, compliance, output_options, audit_info
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def display_compliance_table(
findings: list,
bulk_checks_metadata: dict,
compliance_framework: str,
output_filename: str,
output_directory: str,
compliance_overview: bool,
):
try:
if "ens_rd2022_aws" == compliance_framework:
marcos = {}
ens_compliance_table = {
"Proveedor": [],
"Marco/Categoria": [],
"Estado": [],
"Alto": [],
"Medio": [],
"Bajo": [],
"Opcional": [],
}
pass_count = fail_count = 0
for finding in findings:
check = bulk_checks_metadata[finding.check_metadata.CheckID]
check_compliances = check.Compliance
for compliance in check_compliances:
if (
compliance.Framework == "ENS"
and compliance.Provider == "AWS"
and compliance.Version == "RD2022"
):
for requirement in compliance.Requirements:
for attribute in requirement.Attributes:
marco_categoria = (
f"{attribute.Marco}/{attribute.Categoria}"
)
# Check if Marco/Categoria exists
if marco_categoria not in marcos:
marcos[marco_categoria] = {
"Estado": f"{Fore.GREEN}CUMPLE{Style.RESET_ALL}",
"Opcional": 0,
"Alto": 0,
"Medio": 0,
"Bajo": 0,
}
if finding.status == "FAIL":
fail_count += 1
marcos[marco_categoria][
"Estado"
] = f"{Fore.RED}NO CUMPLE{Style.RESET_ALL}"
elif finding.status == "PASS":
pass_count += 1
if attribute.Nivel == "opcional":
marcos[marco_categoria]["Opcional"] += 1
elif attribute.Nivel == "alto":
marcos[marco_categoria]["Alto"] += 1
elif attribute.Nivel == "medio":
marcos[marco_categoria]["Medio"] += 1
elif attribute.Nivel == "bajo":
marcos[marco_categoria]["Bajo"] += 1
# Add results to table
for marco in sorted(marcos):
ens_compliance_table["Proveedor"].append(compliance.Provider)
ens_compliance_table["Marco/Categoria"].append(marco)
ens_compliance_table["Estado"].append(marcos[marco]["Estado"])
ens_compliance_table["Opcional"].append(
f"{Fore.BLUE}{marcos[marco]['Opcional']}{Style.RESET_ALL}"
)
ens_compliance_table["Alto"].append(
f"{Fore.LIGHTRED_EX}{marcos[marco]['Alto']}{Style.RESET_ALL}"
)
ens_compliance_table["Medio"].append(
f"{orange_color}{marcos[marco]['Medio']}{Style.RESET_ALL}"
)
ens_compliance_table["Bajo"].append(
f"{Fore.YELLOW}{marcos[marco]['Bajo']}{Style.RESET_ALL}"
)
if fail_count + pass_count < 1:
print(
f"\nThere are no resources for {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}.\n"
)
else:
print(
f"\nEstado de Cumplimiento de {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}:"
)
overview_table = [
[
f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) NO CUMPLE{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) CUMPLE{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
if not compliance_overview:
print(
f"\nResultados de {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}:"
)
print(
tabulate(
ens_compliance_table,
headers="keys",
tablefmt="rounded_grid",
)
)
print(
f"{Style.BRIGHT}* Solo aparece el Marco/Categoria que contiene resultados.{Style.RESET_ALL}"
)
print(
f"\nResultados detallados de {compliance_framework.upper()} en:"
)
print(
f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework}.csv\n"
)
elif "cis_" in compliance_framework:
sections = {}
cis_compliance_table = {
"Provider": [],
"Section": [],
"Level 1": [],
"Level 2": [],
}
pass_count = fail_count = 0
for finding in findings:
check = bulk_checks_metadata[finding.check_metadata.CheckID]
check_compliances = check.Compliance
for compliance in check_compliances:
if (
compliance.Framework == "CIS"
and compliance.Version in compliance_framework
):
for requirement in compliance.Requirements:
for attribute in requirement.Attributes:
section = attribute.Section
# Check if Section exists
if section not in sections:
sections[section] = {
"Status": f"{Fore.GREEN}PASS{Style.RESET_ALL}",
"Level 1": {"FAIL": 0, "PASS": 0},
"Level 2": {"FAIL": 0, "PASS": 0},
}
if finding.status == "FAIL":
fail_count += 1
elif finding.status == "PASS":
pass_count += 1
if attribute.Profile == "Level 1":
if finding.status == "FAIL":
sections[section]["Level 1"]["FAIL"] += 1
else:
sections[section]["Level 1"]["PASS"] += 1
elif attribute.Profile == "Level 2":
if finding.status == "FAIL":
sections[section]["Level 2"]["FAIL"] += 1
else:
sections[section]["Level 2"]["PASS"] += 1
# Add results to table
sections = dict(sorted(sections.items()))
for section in sections:
cis_compliance_table["Provider"].append(compliance.Provider)
cis_compliance_table["Section"].append(section)
if sections[section]["Level 1"]["FAIL"] > 0:
cis_compliance_table["Level 1"].append(
f"{Fore.RED}FAIL({sections[section]['Level 1']['FAIL']}){Style.RESET_ALL}"
)
else:
cis_compliance_table["Level 1"].append(
f"{Fore.GREEN}PASS({sections[section]['Level 1']['PASS']}){Style.RESET_ALL}"
)
if sections[section]["Level 2"]["FAIL"] > 0:
cis_compliance_table["Level 2"].append(
f"{Fore.RED}FAIL({sections[section]['Level 2']['FAIL']}){Style.RESET_ALL}"
)
else:
cis_compliance_table["Level 2"].append(
f"{Fore.GREEN}PASS({sections[section]['Level 2']['PASS']}){Style.RESET_ALL}"
)
if fail_count + pass_count < 1:
print(
f"\nThere are no resources for {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}.\n"
)
else:
print(
f"\nCompliance Status of {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Framework:"
)
overview_table = [
[
f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
if not compliance_overview:
print(
f"\nFramework {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Results:"
)
print(
tabulate(
cis_compliance_table,
headers="keys",
tablefmt="rounded_grid",
)
)
print(
f"{Style.BRIGHT}* Only sections containing results appear.{Style.RESET_ALL}"
)
print(
f"\nDetailed results of {compliance_framework.upper()} are in:"
)
print(
f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework}.csv\n"
)
elif "mitre_attack" in compliance_framework:
tactics = {}
mitre_compliance_table = {
"Provider": [],
"Tactic": [],
"Status": [],
}
pass_count = fail_count = 0
for finding in findings:
check = bulk_checks_metadata[finding.check_metadata.CheckID]
check_compliances = check.Compliance
for compliance in check_compliances:
if (
"MITRE-ATTACK" in compliance.Framework
and compliance.Version in compliance_framework
):
for requirement in compliance.Requirements:
for tactic in requirement.Tactics:
if tactic not in tactics:
tactics[tactic] = {"FAIL": 0, "PASS": 0}
if finding.status == "FAIL":
fail_count += 1
tactics[tactic]["FAIL"] += 1
elif finding.status == "PASS":
pass_count += 1
tactics[tactic]["PASS"] += 1
# Add results to table
tactics = dict(sorted(tactics.items()))
for tactic in tactics:
mitre_compliance_table["Provider"].append(compliance.Provider)
mitre_compliance_table["Tactic"].append(tactic)
if tactics[tactic]["FAIL"] > 0:
mitre_compliance_table["Status"].append(
f"{Fore.RED}FAIL({tactics[tactic]['FAIL']}){Style.RESET_ALL}"
)
else:
mitre_compliance_table["Status"].append(
f"{Fore.GREEN}PASS({tactics[tactic]['PASS']}){Style.RESET_ALL}"
)
if fail_count + pass_count < 1:
print(
f"\nThere are no resources for {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}.\n"
)
else:
print(
f"\nCompliance Status of {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Framework:"
)
overview_table = [
[
f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
if not compliance_overview:
print(
f"\nFramework {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Results:"
)
print(
tabulate(
mitre_compliance_table,
headers="keys",
tablefmt="rounded_grid",
)
)
print(
f"{Style.BRIGHT}* Only sections containing results appear.{Style.RESET_ALL}"
)
print(
f"\nDetailed results of {compliance_framework.upper()} are in:"
)
print(
f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework}.csv\n"
)
else:
pass_count = fail_count = 0
for finding in findings:
check = bulk_checks_metadata[finding.check_metadata.CheckID]
check_compliances = check.Compliance
for compliance in check_compliances:
if (
compliance.Framework.upper()
in compliance_framework.upper().replace("_", "-")
and compliance.Version in compliance_framework.upper()
and compliance.Provider in compliance_framework.upper()
):
for requirement in compliance.Requirements:
for attribute in requirement.Attributes:
if finding.status == "FAIL":
fail_count += 1
elif finding.status == "PASS":
pass_count += 1
if fail_count + pass_count < 1:
print(
f"\nThere are no resources for {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}.\n"
)
else:
print(
f"\nCompliance Status of {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Framework:"
)
overview_table = [
[
f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
if not compliance_overview:
print(f"\nDetailed results of {compliance_framework.upper()} are in:")
print(
f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework}.csv\n"
)
except Exception as error:
logger.critical(
f"{error.__class__.__name__}:{error.__traceback__.tb_lineno} -- {error}"
)
sys.exit(1)
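Each branch of `display_compliance_table` builds the same overview row: FAIL and PASS percentages rounded to two decimals. That shared arithmetic can be sketched as a helper (hypothetical name; the real code inlines the expressions in the f-strings):

```python
def overview_percentages(fail_count: int, pass_count: int) -> tuple:
    """Return (fail %, pass %) rounded to 2 decimals, as in the overview table."""
    total = fail_count + pass_count
    return (
        round(fail_count / total * 100, 2),
        round(pass_count / total * 100, 2),
    )

assert overview_percentages(1, 3) == (25.0, 75.0)
assert overview_percentages(2, 1) == (66.67, 33.33)
```

Note that the code guards this division with the `fail_count + pass_count < 1` check, printing a "no resources" message instead of dividing by zero.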
@@ -0,0 +1,45 @@
from csv import DictWriter
from prowler.config.config import timestamp
from prowler.lib.outputs.models import Check_Output_CSV_ENS_RD2022, generate_csv_fields
from prowler.lib.utils.utils import outputs_unix_timestamp
def write_compliance_row_ens_rd2022_aws(
file_descriptors, finding, compliance, output_options, audit_info
):
compliance_output = "ens_rd2022_aws"
csv_header = generate_csv_fields(Check_Output_CSV_ENS_RD2022)
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_ENS_RD2022(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_IdGrupoControl=attribute.IdGrupoControl,
Requirements_Attributes_Marco=attribute.Marco,
Requirements_Attributes_Categoria=attribute.Categoria,
Requirements_Attributes_DescripcionControl=attribute.DescripcionControl,
Requirements_Attributes_Nivel=attribute.Nivel,
Requirements_Attributes_Tipo=attribute.Tipo,
Requirements_Attributes_Dimensiones=",".join(attribute.Dimensiones),
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_writer.writerow(compliance_row.__dict__)
@@ -0,0 +1,51 @@
from csv import DictWriter
from prowler.config.config import timestamp
from prowler.lib.outputs.models import (
Check_Output_CSV_Generic_Compliance,
generate_csv_fields,
)
from prowler.lib.utils.utils import outputs_unix_timestamp
def write_compliance_row_generic(
file_descriptors, finding, compliance, output_options, audit_info
):
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
compliance_output = compliance_output.lower().replace("-", "_")
csv_header = generate_csv_fields(Check_Output_CSV_Generic_Compliance)
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_Generic_Compliance(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_SubSection=attribute.SubSection,
Requirements_Attributes_SubGroup=attribute.SubGroup,
Requirements_Attributes_Service=attribute.Service,
Requirements_Attributes_Soc_Type=attribute.Soc_Type,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_writer.writerow(compliance_row.__dict__)
@@ -0,0 +1,53 @@
from csv import DictWriter
from prowler.config.config import timestamp
from prowler.lib.outputs.models import (
Check_Output_CSV_AWS_ISO27001_2013,
generate_csv_fields,
)
from prowler.lib.utils.utils import outputs_unix_timestamp
def write_compliance_row_iso27001_2013_aws(
file_descriptors, finding, compliance, output_options, audit_info
):
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
compliance_output = compliance_output.lower().replace("-", "_")
csv_header = generate_csv_fields(Check_Output_CSV_AWS_ISO27001_2013)
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
requirement_name = requirement.Name
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_AWS_ISO27001_2013(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Name=requirement_name,
Requirements_Description=requirement_description,
Requirements_Attributes_Category=attribute.Category,
Requirements_Attributes_Objetive_ID=attribute.Objetive_ID,
Requirements_Attributes_Objetive_Name=attribute.Objetive_Name,
Requirements_Attributes_Check_Summary=attribute.Check_Summary,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_writer.writerow(compliance_row.__dict__)
@@ -0,0 +1,66 @@
from csv import DictWriter
from prowler.config.config import timestamp
from prowler.lib.outputs.models import (
Check_Output_MITRE_ATTACK,
generate_csv_fields,
unroll_list,
)
from prowler.lib.utils.utils import outputs_unix_timestamp
def write_compliance_row_mitre_attack_aws(
file_descriptors, finding, compliance, output_options, audit_info
):
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
compliance_output = compliance_output.lower().replace("-", "_")
csv_header = generate_csv_fields(Check_Output_MITRE_ATTACK)
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
requirement_name = requirement.Name
attributes_aws_services = ""
attributes_categories = ""
attributes_values = ""
attributes_comments = ""
for attribute in requirement.Attributes:
attributes_aws_services += attribute.AWSService + "\n"
attributes_categories += attribute.Category + "\n"
attributes_values += attribute.Value + "\n"
attributes_comments += attribute.Comment + "\n"
compliance_row = Check_Output_MITRE_ATTACK(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Name=requirement_name,
Requirements_Tactics=unroll_list(requirement.Tactics),
Requirements_SubTechniques=unroll_list(requirement.SubTechniques),
Requirements_Platforms=unroll_list(requirement.Platforms),
Requirements_TechniqueURL=requirement.TechniqueURL,
Requirements_Attributes_AWSServices=attributes_aws_services,
Requirements_Attributes_Categories=attributes_categories,
Requirements_Attributes_Values=attributes_values,
Requirements_Attributes_Comments=attributes_comments,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_writer.writerow(compliance_row.__dict__)
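Unlike the other writers, the MITRE row aggregates all of a requirement's attributes into single newline-joined cells before writing one row per requirement. The accumulation loop in isolation (the `Attr` class is a stand-in for the real attribute model):

```python
class Attr:
    """Stand-in for a MITRE requirement attribute."""
    def __init__(self, service: str, category: str):
        self.AWSService = service
        self.Category = category

attributes = [Attr("iam", "Protect"), Attr("s3", "Detect")]

attributes_aws_services = ""
attributes_categories = ""
for attribute in attributes:
    attributes_aws_services += attribute.AWSService + "\n"
    attributes_categories += attribute.Category + "\n"

# each cell holds all attribute values, one per line (with a trailing newline)
assert attributes_aws_services == "iam\ns3\n"
assert attributes_categories == "Protect\nDetect\n"
```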
+10
@@ -0,0 +1,10 @@
from csv import DictWriter
def write_csv(file_descriptor, headers, row):
csv_writer = DictWriter(
file_descriptor,
fieldnames=headers,
delimiter=";",
)
csv_writer.writerow(row.__dict__)
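A minimal exercise of the new `write_csv` helper: the row object only needs a `__dict__`, so a `SimpleNamespace` stands in here for a Prowler output model:

```python
import io
from csv import DictWriter
from types import SimpleNamespace

def write_csv(file_descriptor, headers, row):
    # Same body as the helper added above.
    csv_writer = DictWriter(
        file_descriptor,
        fieldnames=headers,
        delimiter=";",
    )
    csv_writer.writerow(row.__dict__)

buffer = io.StringIO()
row = SimpleNamespace(Provider="aws", Status="PASS")
write_csv(buffer, ["Provider", "Status"], row)
print(buffer.getvalue().strip())  # aws;PASS
```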
+84 -46
@@ -23,6 +23,7 @@ from prowler.lib.outputs.models import (
)
from prowler.lib.utils.utils import file_exists, open_file
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
from prowler.providers.common.outputs import get_provider_output_model
from prowler.providers.gcp.lib.audit_info.models import GCP_Audit_Info
@@ -107,8 +108,8 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
file_descriptors.update({output_mode: file_descriptor})
elif isinstance(audit_info, GCP_Audit_Info):
filename = f"{output_directory}/{output_filename}_{output_mode}{csv_file_suffix}"
if "cis_" in output_mode:
if output_mode == "cis_2.0_gcp":
filename = f"{output_directory}/compliance/{output_filename}_cis_2.0_gcp{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename, output_mode, audit_info, Check_Output_CSV_GCP_CIS
)
@@ -121,55 +122,92 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
filename, output_mode, audit_info
)
file_descriptors.update({output_mode: file_descriptor})
else: # Compliance frameworks
filename = f"{output_directory}/{output_filename}_{output_mode}{csv_file_suffix}"
if output_mode == "ens_rd2022_aws":
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_CSV_ENS_RD2022,
)
file_descriptors.update({output_mode: file_descriptor})
elif "cis_" in output_mode:
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_CSV_AWS_CIS,
)
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "ens_rd2022_aws":
filename = f"{output_directory}/compliance/{output_filename}_ens_rd2022_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_CSV_ENS_RD2022,
)
file_descriptors.update({output_mode: file_descriptor})
elif "aws_well_architected_framework" in output_mode:
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_CSV_AWS_Well_Architected,
)
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "cis_1.5_aws":
filename = f"{output_directory}/compliance/{output_filename}_cis_1.5_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename, output_mode, audit_info, Check_Output_CSV_AWS_CIS
)
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "iso27001_2013_aws":
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_CSV_AWS_ISO27001_2013,
)
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "cis_1.4_aws":
filename = f"{output_directory}/compliance/{output_filename}_cis_1.4_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename, output_mode, audit_info, Check_Output_CSV_AWS_CIS
)
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "mitre_attack_aws":
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_MITRE_ATTACK,
)
file_descriptors.update({output_mode: file_descriptor})
elif (
output_mode
== "aws_well_architected_framework_security_pillar_aws"
):
filename = f"{output_directory}/compliance/{output_filename}_aws_well_architected_framework_security_pillar_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_CSV_AWS_Well_Architected,
)
file_descriptors.update({output_mode: file_descriptor})
else:
# Generic Compliance framework
elif (
output_mode
== "aws_well_architected_framework_reliability_pillar_aws"
):
filename = f"{output_directory}/compliance/{output_filename}_aws_well_architected_framework_reliability_pillar_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_CSV_AWS_Well_Architected,
)
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "iso27001_2013_aws":
filename = f"{output_directory}/compliance/{output_filename}_iso27001_2013_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_CSV_AWS_ISO27001_2013,
)
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "mitre_attack_aws":
filename = f"{output_directory}/compliance/{output_filename}_mitre_attack_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_MITRE_ATTACK,
)
file_descriptors.update({output_mode: file_descriptor})
else:
# Generic Compliance framework
if (
isinstance(audit_info, AWS_Audit_Info)
and "aws" in output_mode
or (
isinstance(audit_info, Azure_Audit_Info)
and "azure" in output_mode
)
or (
isinstance(audit_info, GCP_Audit_Info)
and "gcp" in output_mode
)
):
filename = f"{output_directory}/compliance/{output_filename}_{output_mode}{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
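Every compliance branch in this hunk (the hunk is truncated here) writes to the same `compliance/` subdirectory with the same naming pattern; the repeated f-string reduces to one helper, sketched below with illustrative paths:

```python
def compliance_filename(output_directory: str, output_filename: str,
                        output_mode: str, suffix: str = ".csv") -> str:
    # Compliance outputs land in a dedicated compliance/ subdirectory,
    # named <output_filename>_<output_mode><suffix>, as in the branches above.
    return f"{output_directory}/compliance/{output_filename}_{output_mode}{suffix}"

print(compliance_filename("/tmp/out", "prowler-output", "mitre_attack_aws"))
# /tmp/out/compliance/prowler-output_mitre_attack_aws.csv
```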
+52 -3
@@ -21,6 +21,7 @@ from prowler.lib.utils.utils import open_file
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
from prowler.providers.gcp.lib.audit_info.models import GCP_Audit_Info
from prowler.providers.kubernetes.lib.audit_info.models import Kubernetes_Audit_Info
def add_html_header(file_descriptor, audit_info):
@@ -169,11 +170,11 @@ def add_html_header(file_descriptor, audit_info):
def fill_html(file_descriptor, finding, output_options):
try:
row_class = "p-3 mb-2 bg-success-custom"
if finding.status == "INFO":
if finding.status == "MANUAL":
row_class = "table-info"
elif finding.status == "FAIL":
row_class = "table-danger"
elif finding.status == "WARNING":
elif finding.status == "MUTED":
row_class = "table-warning"
file_descriptor.write(
f"""
@@ -407,7 +408,7 @@ def get_azure_html_assessment_summary(audit_info):
if isinstance(audit_info, Azure_Audit_Info):
printed_subscriptions = []
for key, value in audit_info.identity.subscriptions.items():
intermediate = f"{key} : {value}"
intermediate = key + " : " + value
printed_subscriptions.append(intermediate)
# check if identity is str(coming from SP) or dict(coming from browser or)
@@ -522,6 +523,53 @@ def get_gcp_html_assessment_summary(audit_info):
sys.exit(1)
def get_kubernetes_html_assessment_summary(audit_info):
try:
if isinstance(audit_info, Kubernetes_Audit_Info):
return (
"""
<div class="col-md-2">
<div class="card">
<div class="card-header">
Kubernetes Assessment Summary
</div>
<ul class="list-group list-group-flush">
<li class="list-group-item">
<b>Kubernetes Context:</b> """
+ audit_info.context["name"]
+ """
</li>
</ul>
</div>
</div>
<div class="col-md-4">
<div class="card">
<div class="card-header">
Kubernetes Credentials
</div>
<ul class="list-group list-group-flush">
<li class="list-group-item">
<b>Kubernetes Cluster:</b> """
+ audit_info.context["context"]["cluster"]
+ """
</li>
<li class="list-group-item">
<b>Kubernetes User:</b> """
+ audit_info.context["context"]["user"]
+ """
</li>
</ul>
</div>
</div>
"""
)
except Exception as error:
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
def get_assessment_summary(audit_info):
"""
get_assessment_summary gets the HTML assessment summary for the provider
@@ -532,6 +580,7 @@ def get_assessment_summary(audit_info):
# AWS_Audit_Info --> aws
# GCP_Audit_Info --> gcp
# Azure_Audit_Info --> azure
# Kubernetes_Audit_Info --> kubernetes
provider = audit_info.__class__.__name__.split("_")[0].lower()
# Dynamically get the Provider quick inventory handler
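The provider lookup keyed off the audit-info class name (the comment block above) reduces to one expression; a sketch using dummy stand-in classes rather than the real `*_Audit_Info` models:

```python
class AWS_Audit_Info: ...
class Kubernetes_Audit_Info: ...

def provider_from_audit_info(audit_info) -> str:
    # e.g. Kubernetes_Audit_Info -> "kubernetes", AWS_Audit_Info -> "aws"
    return audit_info.__class__.__name__.split("_")[0].lower()

print(provider_from_audit_info(AWS_Audit_Info()))         # aws
print(provider_from_audit_info(Kubernetes_Audit_Info()))  # kubernetes
```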
+7 -7
@@ -51,9 +51,9 @@ def fill_json_asff(finding_output, audit_info, finding, output_options):
finding_output.GeneratorId = "prowler-" + finding.check_metadata.CheckID
finding_output.AwsAccountId = audit_info.audited_account
finding_output.Types = finding.check_metadata.CheckType
finding_output.FirstObservedAt = finding_output.UpdatedAt = (
finding_output.CreatedAt
) = timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ")
finding_output.FirstObservedAt = (
finding_output.UpdatedAt
) = finding_output.CreatedAt = timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ")
finding_output.Severity = Severity(
Label=finding.check_metadata.Severity.upper()
)
@@ -116,8 +116,8 @@ def generate_json_asff_status(status: str) -> str:
json_asff_status = "PASSED"
elif status == "FAIL":
json_asff_status = "FAILED"
elif status == "WARNING":
json_asff_status = "WARNING"
elif status == "MUTED":
json_asff_status = "MUTED"
else:
json_asff_status = "NOT_AVAILABLE"
@@ -293,7 +293,7 @@ def generate_json_ocsf_status(status: str):
json_ocsf_status = "Success"
elif status == "FAIL":
json_ocsf_status = "Failure"
elif status == "WARNING":
elif status == "MUTED":
json_ocsf_status = "Other"
else:
json_ocsf_status = "Unknown"
@@ -307,7 +307,7 @@ def generate_json_ocsf_status_id(status: str):
json_ocsf_status_id = 1
elif status == "FAIL":
json_ocsf_status_id = 2
elif status == "WARNING":
elif status == "MUTED":
json_ocsf_status_id = 99
else:
json_ocsf_status_id = 0
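The three hunks above switch the former WARNING branch to MUTED in each encoder; the resulting mappings, side by side, can be sketched as plain dict lookups (a summary of the post-change behavior, not the Prowler functions themselves):

```python
def asff_status(status: str) -> str:
    # ASFF Compliance.Status after the change.
    return {"PASS": "PASSED", "FAIL": "FAILED", "MUTED": "MUTED"}.get(
        status, "NOT_AVAILABLE"
    )

def ocsf_status(status: str) -> str:
    # OCSF status string after the change.
    return {"PASS": "Success", "FAIL": "Failure", "MUTED": "Other"}.get(
        status, "Unknown"
    )

def ocsf_status_id(status: str) -> int:
    # OCSF status_id after the change.
    return {"PASS": 1, "FAIL": 2, "MUTED": 99}.get(status, 0)

print(asff_status("MUTED"), ocsf_status("MUTED"), ocsf_status_id("MUTED"))
# MUTED Other 99
```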
+58 -14
@@ -10,10 +10,19 @@ from prowler.config.config import prowler_version, timestamp
from prowler.lib.check.models import Remediation
from prowler.lib.logger import logger
from prowler.lib.utils.utils import outputs_unix_timestamp
from prowler.providers.aws.lib.audit_info.models import AWS_Organizations_Info
from prowler.providers.aws.lib.audit_info.models import AWSOrganizationsInfo
def get_check_compliance(finding, provider, output_options):
def get_check_compliance(finding, provider, output_options) -> dict:
"""get_check_compliance returns a map with the compliance framework as key and the requirements where the finding's check is present.
Example:
{
"CIS-1.4": ["2.1.3"],
"CIS-1.5": ["2.1.3"],
}
"""
try:
check_compliance = {}
# We have to retrieve all the check's compliance requirements
@@ -55,9 +64,9 @@ def generate_provider_output_csv(
data["resource_name"] = finding.resource_name
data["subscription"] = finding.subscription
data["tenant_domain"] = audit_info.identity.domain
data["finding_unique_id"] = (
f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.subscription}-{finding.resource_id}"
)
data[
"finding_unique_id"
] = f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.subscription}-{finding.resource_id}"
data["compliance"] = unroll_dict(
get_check_compliance(finding, provider, output_options)
)
@@ -68,9 +77,21 @@ def generate_provider_output_csv(
data["resource_name"] = finding.resource_name
data["project_id"] = finding.project_id
data["location"] = finding.location.lower()
data["finding_unique_id"] = (
f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.project_id}-{finding.resource_id}"
data[
"finding_unique_id"
] = f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.project_id}-{finding.resource_id}"
data["compliance"] = unroll_dict(
get_check_compliance(finding, provider, output_options)
)
finding_output = output_model(**data)
if provider == "kubernetes":
data["resource_id"] = finding.resource_id
data["resource_name"] = finding.resource_name
data["namespace"] = finding.namespace
data[
"finding_unique_id"
] = f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.namespace}-{finding.resource_id}"
data["compliance"] = unroll_dict(
get_check_compliance(finding, provider, output_options)
)
@@ -82,9 +103,9 @@ def generate_provider_output_csv(
data["region"] = finding.region
data["resource_id"] = finding.resource_id
data["resource_arn"] = finding.resource_arn
data["finding_unique_id"] = (
f"prowler-{provider}-{finding.check_metadata.CheckID}-{audit_info.audited_account}-{finding.region}-{finding.resource_id}"
)
data[
"finding_unique_id"
] = f"prowler-{provider}-{finding.check_metadata.CheckID}-{audit_info.audited_account}-{finding.region}-{finding.resource_id}"
data["compliance"] = unroll_dict(
get_check_compliance(finding, provider, output_options)
)
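The `finding_unique_id` strings reformatted across these hunks all follow one per-provider pattern; a sketch of the AWS variant (the check ID and resource values below are illustrative):

```python
def aws_finding_unique_id(provider: str, check_id: str, account: str,
                          region: str, resource_id: str) -> str:
    # prowler-<provider>-<check>-<account>-<region>-<resource>, as above.
    return f"prowler-{provider}-{check_id}-{account}-{region}-{resource_id}"

print(aws_finding_unique_id("aws", "s3_bucket_public_access",
                            "123456789012", "eu-west-1", "my-bucket"))
# prowler-aws-s3_bucket_public_access-123456789012-eu-west-1-my-bucket
```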
@@ -348,6 +369,16 @@ class Gcp_Check_Output_CSV(Check_Output_CSV):
resource_name: str = ""
class Kubernetes_Check_Output_CSV(Check_Output_CSV):
"""
Kubernetes_Check_Output_CSV generates a finding's output in CSV format for the Kubernetes provider.
"""
namespace: str = ""
resource_id: str = ""
resource_name: str = ""
def generate_provider_output_json(
provider: str, finding, audit_info, mode: str, output_options
):
@@ -452,7 +483,7 @@ class Aws_Check_Output_JSON(Check_Output_JSON):
Profile: str = ""
AccountId: str = ""
OrganizationsInfo: Optional[AWS_Organizations_Info]
OrganizationsInfo: Optional[AWSOrganizationsInfo]
Region: str = ""
ResourceId: str = ""
ResourceArn: str = ""
@@ -478,7 +509,7 @@ class Azure_Check_Output_JSON(Check_Output_JSON):
class Gcp_Check_Output_JSON(Check_Output_JSON):
"""
Gcp_Check_Output_JSON generates a finding's output in JSON format for the AWS provider.
Gcp_Check_Output_JSON generates a finding's output in JSON format for the GCP provider.
"""
ProjectId: str = ""
@@ -490,6 +521,19 @@ class Gcp_Check_Output_JSON(Check_Output_JSON):
super().__init__(**metadata)
class Kubernetes_Check_Output_JSON(Check_Output_JSON):
"""
Kubernetes_Check_Output_JSON generates a finding's output in JSON format for the Kubernetes provider.
"""
ResourceId: str = ""
ResourceName: str = ""
Namespace: str = ""
def __init__(self, **metadata):
super().__init__(**metadata)
class Check_Output_MITRE_ATTACK(BaseModel):
"""
Check_Output_MITRE_ATTACK generates a finding's output in CSV MITRE ATTACK format.
@@ -614,8 +658,8 @@ class Check_Output_CSV_Generic_Compliance(BaseModel):
Requirements_Attributes_Section: Optional[str]
Requirements_Attributes_SubSection: Optional[str]
Requirements_Attributes_SubGroup: Optional[str]
Requirements_Attributes_Service: Optional[str]
Requirements_Attributes_Type: Optional[str]
Requirements_Attributes_Service: str
Requirements_Attributes_Soc_Type: Optional[str]
Status: str
StatusExtended: str
ResourceId: str

Some files were not shown because too many files have changed in this diff.