Compare commits

..

1 Commit

| Author | SHA1 | Message | Date |
|---|---|---|---|
| n4ch04 | 2a30b3bcac | chore(kubernetes mitre): first version of mapping file | 2024-05-03 11:27:00 +02:00 |
1390 changed files with 11306 additions and 68018 deletions
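As a hedged aside (not part of the compare itself): with a local clone of the prowler repository that contains commit `2a30b3bcac`, the per-file statistics for that single commit can be inspected with standard git:

```sh
# Show the commit message and per-file change statistics for the listed commit.
git show --stat 2a30b3bcac
```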

6
.github/CODEOWNERS vendored
View File

@@ -1,5 +1 @@
* @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
# To protect a repository fully against unauthorized changes, you also need to define an owner for the CODEOWNERS file itself.
# https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners#codeowners-and-branch-protection
/.github/ @prowler-cloud/sdk
* @prowler-cloud/prowler-oss @prowler-cloud/prowler-dev

View File

@@ -1,5 +1,6 @@
name: 🐞 Bug Report
description: Create a report to help us improve
title: "[Bug]: "
labels: ["bug", "status/needs-triage"]
body:
@@ -26,7 +27,7 @@ body:
id: actual
attributes:
label: Actual Result with Screenshots or Logs
description: If applicable, add screenshots to help explain your problem. Also, you can add logs (anonymize them first!). Here a command that may help to share a log `prowler <your arguments> --log-level ERROR --log-file $(date +%F)_error.log` then attach here the log file.
description: If applicable, add screenshots to help explain your problem. Also, you can add logs (anonymize them first!). Here a command that may help to share a log `prowler <your arguments> --log-level DEBUG --log-file $(date +%F)_debug.log` then attach here the log file.
validations:
required: true
- type: dropdown

View File

@@ -1,7 +1,8 @@
name: 💡 Feature Request
name: 💡 Feature Request
description: Suggest an idea for this project
labels: ["feature-request", "status/needs-triage"]
body:
- type: textarea
id: Problem

View File

@@ -8,7 +8,7 @@ updates:
- package-ecosystem: "pip"
directory: "/"
schedule:
interval: "daily"
interval: "weekly"
open-pull-requests-limit: 10
target-branch: master
labels:
@@ -17,14 +17,14 @@ updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"
interval: "weekly"
open-pull-requests-limit: 10
target-branch: master
- package-ecosystem: "pip"
directory: "/"
schedule:
interval: "daily"
interval: "weekly"
open-pull-requests-limit: 10
target-branch: v3
labels:
@@ -34,7 +34,7 @@ updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"
interval: "weekly"
open-pull-requests-limit: 10
target-branch: v3
labels:

54
.github/labeler.yml vendored
View File

@@ -25,57 +25,3 @@ provider/kubernetes:
github_actions:
- changed-files:
- any-glob-to-any-file: ".github/workflows/*"
cli:
- changed-files:
- any-glob-to-any-file: "cli/**"
mutelist:
- changed-files:
- any-glob-to-any-file: "prowler/lib/mutelist/**"
- any-glob-to-any-file: "prowler/providers/aws/lib/mutelist/**"
- any-glob-to-any-file: "prowler/providers/azure/lib/mutelist/**"
- any-glob-to-any-file: "prowler/providers/gcp/lib/mutelist/**"
- any-glob-to-any-file: "prowler/providers/kubernetes/lib/mutelist/**"
- any-glob-to-any-file: "tests/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/aws/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/azure/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/gcp/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/kubernetes/lib/mutelist/**"
integration/s3:
- changed-files:
- any-glob-to-any-file: "prowler/providers/aws/lib/s3/**"
- any-glob-to-any-file: "tests/providers/aws/lib/s3/**"
integration/slack:
- changed-files:
- any-glob-to-any-file: "prowler/lib/outputs/slack/**"
- any-glob-to-any-file: "tests/lib/outputs/slack/**"
integration/security-hub:
- changed-files:
- any-glob-to-any-file: "prowler/providers/aws/lib/security_hub/**"
- any-glob-to-any-file: "tests/providers/aws/lib/security_hub/**"
- any-glob-to-any-file: "prowler/lib/outputs/asff/**"
- any-glob-to-any-file: "tests/lib/outputs/asff/**"
output/html:
- changed-files:
- any-glob-to-any-file: "prowler/lib/outputs/html/**"
- any-glob-to-any-file: "tests/lib/outputs/html/**"
output/asff:
- changed-files:
- any-glob-to-any-file: "prowler/lib/outputs/asff/**"
- any-glob-to-any-file: "tests/lib/outputs/asff/**"
output/ocsf:
- changed-files:
- any-glob-to-any-file: "prowler/lib/outputs/ocsf/**"
- any-glob-to-any-file: "tests/lib/outputs/ocsf/**"
output/csv:
- changed-files:
- any-glob-to-any-file: "prowler/lib/outputs/csv/**"
- any-glob-to-any-file: "tests/lib/outputs/csv/**"

View File

@@ -2,18 +2,11 @@
Please include relevant motivation and context for this PR.
If fixes an issue please add it with `Fix #XXXX`
### Description
Please include a summary of the change and which issue is fixed. List any dependencies that are required for this change.
### Checklist
- Are there new checks included in this PR? Yes / No
- If so, do we need to update permissions for the provider? Please review this carefully.
- [ ] Review if the code is being covered by tests.
- [ ] Review if code is being documented following this specification https://github.com/google/styleguide/blob/gh-pages/pyguide.md#38-comments-and-docstrings
### License

View File

@@ -118,7 +118,7 @@ jobs:
- name: Build and push container image (latest)
if: github.event_name == 'push'
uses: docker/build-push-action@v6
uses: docker/build-push-action@v5
with:
push: true
tags: |
@@ -130,7 +130,7 @@ jobs:
- name: Build and push container image (release)
if: github.event_name == 'release'
uses: docker/build-push-action@v6
uses: docker/build-push-action@v5
with:
# Use local context to get changes
# https://github.com/docker/build-push-action#path-context

View File

@@ -11,7 +11,7 @@ jobs:
with:
fetch-depth: 0
- name: TruffleHog OSS
uses: trufflesecurity/trufflehog@3.80.4
uses: trufflesecurity/trufflehog@v3.74.0
with:
path: ./
base: ${{ github.event.repository.default_branch }}

View File

@@ -73,7 +73,7 @@ jobs:
- name: Safety
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run safety check --ignore 70612
poetry run safety check
- name: Vulture
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |

View File

@@ -1,7 +1,7 @@
repos:
## GENERAL
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
rev: v4.5.0
hooks:
- id: check-merge-conflict
- id: check-yaml
@@ -15,7 +15,7 @@ repos:
## TOML
- repo: https://github.com/macisamuele/language-formatters-pre-commit-hooks
rev: v2.13.0
rev: v2.12.0
hooks:
- id: pretty-format-toml
args: [--autofix]
@@ -23,13 +23,13 @@ repos:
## BASH
- repo: https://github.com/koalaman/shellcheck-precommit
rev: v0.10.0
rev: v0.9.0
hooks:
- id: shellcheck
exclude: contrib
## PYTHON
- repo: https://github.com/myint/autoflake
rev: v2.3.1
rev: v2.2.1
hooks:
- id: autoflake
args:
@@ -46,7 +46,7 @@ repos:
args: ["--profile", "black"]
- repo: https://github.com/psf/black
rev: 24.4.2
rev: 24.1.1
hooks:
- id: black
@@ -58,14 +58,14 @@ repos:
args: ["--ignore=E266,W503,E203,E501,W605"]
- repo: https://github.com/python-poetry/poetry
rev: 1.8.0
rev: 1.7.0
hooks:
- id: poetry-check
- id: poetry-lock
args: ["--no-update"]
- repo: https://github.com/hadolint/hadolint
rev: v2.13.0-beta
rev: v2.12.1-beta
hooks:
- id: hadolint
args: ["--ignore=DL3013"]
@@ -97,7 +97,7 @@ repos:
- id: safety
name: safety
description: "Safety is a tool that checks your installed dependencies for known security vulnerabilities"
entry: bash -c 'safety check --ignore 70612'
entry: bash -c 'safety check'
language: system
- id: vulture
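As a hedged aside (standard pre-commit usage, not part of this diff), the hooks pinned above are typically exercised locally like this:

```sh
# Install the git hooks defined in .pre-commit-config.yaml.
pre-commit install

# Run every configured hook once against the whole repository.
pre-commit run --all-files
```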

View File

@@ -1,6 +1,6 @@
<p align="center">
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/master/docs/img/prowler-logo-black.png#gh-light-mode-only" width="50%" height="50%">
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/master/docs/img/prowler-logo-white.png#gh-dark-mode-only" width="50%" height="50%">
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/master/docs/img/prowler-logo-black.png?raw=True#gh-light-mode-only" width="350" height="115">
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/master/docs/img/prowler-logo-white.png?raw=True#gh-dark-mode-only" width="350" height="115">
</p>
<p align="center">
<b><i>Prowler SaaS </b> and <b>Prowler Open Source</b> are as dynamic and adaptable as the environment they're meant to protect. Trusted by the leaders in security.
@@ -10,10 +10,11 @@
</p>
<p align="center">
<a href="https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog"><img width="30" height="30" alt="Prowler community on Slack" src="https://github.com/prowler-cloud/prowler/assets/38561120/3c8b4ec5-6849-41a5-b5e1-52bbb94af73a"></a>
<a href="https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog"><img width="30" height="30" alt="Prowler community on Slack" src="https://github.com/prowler-cloud/prowler/assets/3985464/3617e470-670c-47c9-9794-ce895ebdb627"></a>
<br>
<a href="https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog">Join our Prowler community!</a>
</p>
<hr>
<p align="center">
<a href="https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog"><img alt="Slack Shield" src="https://img.shields.io/badge/slack-prowler-brightgreen.svg?logo=slack"></a>
@@ -37,9 +38,6 @@
<a href="https://twitter.com/prowlercloud"><img alt="Twitter" src="https://img.shields.io/twitter/follow/prowlercloud?style=social"></a>
</p>
<hr>
<p align="center">
<img align="center" src="/docs/img/prowler-cli-quick.gif" width="100%" height="100%">
</p>
# Description
@@ -63,9 +61,9 @@ It contains hundreds of controls covering CIS, NIST 800, NIST CSF, CISA, RBI, Fe
| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) |
|---|---|---|---|---|
| AWS | 385 | 67 -> `prowler aws --list-services` | 28 -> `prowler aws --list-compliance` | 7 -> `prowler aws --list-categories` |
| GCP | 77 | 13 -> `prowler gcp --list-services` | 1 -> `prowler gcp --list-compliance` | 2 -> `prowler gcp --list-categories`|
| Azure | 135 | 16 -> `prowler azure --list-services` | 2 -> `prowler azure --list-compliance` | 2 -> `prowler azure --list-categories` |
| AWS | 304 | 61 -> `prowler aws --list-services` | 28 -> `prowler aws --list-compliance` | 6 -> `prowler aws --list-categories` |
| GCP | 75 | 11 -> `prowler gcp --list-services` | 1 -> `prowler gcp --list-compliance` | 2 -> `prowler gcp --list-categories`|
| Azure | 127 | 16 -> `prowler azure --list-services` | 2 -> `prowler azure --list-compliance` | 2 -> `prowler azure --list-categories` |
| Kubernetes | 83 | 7 -> `prowler kubernetes --list-services` | 1 -> `prowler kubernetes --list-compliance` | 7 -> `prowler kubernetes --list-categories` |
# 💻 Installation
@@ -77,7 +75,7 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
pip install prowler
prowler -v
```
>More details at [https://docs.prowler.com](https://docs.prowler.com/projects/prowler-open-source/en/latest/)
More details at [https://docs.prowler.com](https://docs.prowler.com/projects/prowler-open-source/en/latest/)
## Containers
@@ -94,7 +92,7 @@ The container images are available here:
- [DockerHub](https://hub.docker.com/r/toniblyx/prowler/tags)
- [AWS Public ECR](https://gallery.ecr.aws/prowler-cloud/prowler)
## From GitHub
## From Github
Python >= 3.9, < 3.13 is required with pip and poetry:
@@ -105,7 +103,7 @@ poetry shell
poetry install
python prowler.py -v
```
> If you want to clone Prowler from Windows, use `git config core.longpaths true` to allow long file paths.
# 📐✏️ High level architecture
You can run Prowler from your workstation, a Kubernetes Job, a Google Compute Engine, an Azure VM, an EC2 instance, Fargate or any other container, CloudShell and many more.
@@ -121,6 +119,7 @@ You can run Prowler from your workstation, a Kubernetes Job, a Google Compute En
- The CSV output format is common for all the providers.
We have deprecated some of our outputs formats:
- The HTML is replaced for the new Prowler Dashboard, run `prowler dashboard`.
- The native JSON is replaced for the JSON [OCSF](https://schema.ocsf.io/) v1.1.0, common for all the providers.
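As a hedged usage sketch tied to the output notes above (both commands appear elsewhere in this compare; defaults may vary between Prowler versions):

```sh
# Run a scan against AWS; per the README, CSV is common to all providers
# and OCSF JSON replaces the old native JSON output.
prowler aws

# Explore the CSV results from ./output in the local dashboard.
prowler dashboard
```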
## AWS

View File

@@ -12,7 +12,7 @@ As an **AWS Partner** and we have passed the [AWS Foundation Technical Review (F
## Reporting a Vulnerability
If you would like to report a vulnerability or have a security concern regarding Prowler Open Source or ProwlerPro service, please submit the information by contacting to https://support.prowler.com.
If you would like to report a vulnerability or have a security concern regarding Prowler Open Source or ProwlerPro service, please submit the information by contacting to help@prowler.pro.
The information you share with ProwlerPro as part of this process is kept confidential within ProwlerPro. We will only share this information with a third party if the vulnerability you report is found to affect a third-party product, in which case we will share this information with the third-party product's author or manufacturer. Otherwise, we will only share this information as permitted by you.

View File

@@ -14,4 +14,4 @@ cd ~ || exit
python3.9 -m pip install prowler-cloud
prowler -v
# Run Prowler
prowler aws
prowler

View File

@@ -212,7 +212,6 @@ Resources:
- appstream:Describe*
- codeartifact:List*
- codebuild:BatchGet*
- cognito-idp:GetUserPoolMfaConfig
- ds:Get*
- ds:Describe*
- ds:List*

View File

@@ -1,47 +0,0 @@
#!/bin/bash
# List of project IDs
PROJECT_IDS=(
"project-id-1"
"project-id-2"
"project-id-3"
# Add more project IDs as needed
)
# List of Prowler APIs to enable
APIS=(
"apikeys.googleapis.com"
"artifactregistry.googleapis.com"
"bigquery.googleapis.com"
"sqladmin.googleapis.com" # Cloud SQL
"storage.googleapis.com" # Cloud Storage
"compute.googleapis.com"
"dataproc.googleapis.com"
"dns.googleapis.com"
"containerregistry.googleapis.com" # GCR (Google Container Registry)
"container.googleapis.com" # GKE (Google Kubernetes Engine)
"iam.googleapis.com"
"cloudkms.googleapis.com" # KMS (Key Management Service)
"logging.googleapis.com"
)
# Function to enable APIs for a given project
enable_apis_for_project() {
local PROJECT_ID=$1
echo "Enabling APIs for project: ${PROJECT_ID}"
for API in "${APIS[@]}"; do
echo "Enabling API: $API for project: ${PROJECT_ID}"
if gcloud services enable "${API}" --project="${PROJECT_ID}"; then
echo "Successfully enabled API $API for project ${PROJECT_ID}."
else
echo "Failed to enable API $API for project ${PROJECT_ID}."
fi
done
}
# Loop over each project and enable the APIs
for PROJECT_ID in "${PROJECT_IDS[@]}"; do
enable_apis_for_project "${PROJECT_ID}"
done
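As a hedged aside for the deleted script above (its original filename is not shown here, so `<script>.sh` is only a stand-in): it assumes an authenticated gcloud session before it can enable APIs on each project.

```sh
# Authenticate gcloud first, then run the saved copy of the script above.
# <script>.sh is a placeholder name, not a file from this repository.
gcloud auth login
bash <script>.sh
```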

View File

@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -1,24 +0,0 @@
apiVersion: v2
name: prowler
description: Prowler Security Tool Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

View File

@@ -1,78 +0,0 @@
# prowler
![Version: 0.1.1](https://img.shields.io/badge/Version-0.1.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.16.0](https://img.shields.io/badge/AppVersion-1.16.0-informational?style=flat-square)
Prowler Security Tool Helm chart for Kubernetes
# Prowler Helm Chart Deployment
This guide provides step-by-step instructions for deploying the Prowler Helm chart.
## Prerequisites
Before you begin, ensure you have the following:
1. A running Kubernetes cluster.
2. Helm installed on your local machine. If you don't have Helm installed, you can follow the [Helm installation guide](https://helm.sh/docs/intro/install/).
3. Proper access to your Kubernetes cluster (e.g., `kubectl` is configured and working).
## Deployment Steps
### 1. Clone the Repository
Clone the repository containing the Helm chart to your local machine.
```sh
git clone git@github.com:prowler-cloud/prowler.git
cd prowler/contrib/k8s/helm
```
### 2. Deploy the helm chart
```
helm install prowler .
```
### 3. Verify the deployment
```
helm status prowler
kubectl get all -n prowler-ns
```
### 4. Clean Up
To uninstall the Helm release and clean up the resources, run:
```
helm uninstall prowler
kubectl delete namespace prowler-ns
```
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| clusterRole.name | string | `"prowler-read-cluster"` | |
| clusterRoleBinding.name | string | `"prowler-read-cluster-binding"` | |
| configMap.name | string | `"prowler-hostpaths"` | |
| configMapData.etcCniNetd | string | `"/etc/cni/net.d"` | |
| configMapData.etcKubernetes | string | `"/etc/kubernetes"` | |
| configMapData.etcSystemd | string | `"/etc/systemd"` | |
| configMapData.libSystemd | string | `"/lib/systemd"` | |
| configMapData.optCniBin | string | `"/opt/cni/bin"` | |
| configMapData.usrBin | string | `"/usr/bin"` | |
| configMapData.varLibCni | string | `"/var/lib/cni"` | |
| configMapData.varLibEtcd | string | `"/var/lib/etcd"` | |
| configMapData.varLibKubeControllerManager | string | `"/var/lib/kube-controller-manager"` | |
| configMapData.varLibKubeScheduler | string | `"/var/lib/kube-scheduler"` | |
| configMapData.varLibKubelet | string | `"/var/lib/kubelet"` | |
| cronjob.hostPID | bool | `true` | |
| cronjob.name | string | `"prowler"` | |
| cronjob.schedule | string | `"0 0 * * *"` | |
| image.pullPolicy | string | `"Always"` | |
| image.repository | string | `"toniblyx/prowler"` | |
| image.tag | string | `"stable"` | |
| namespace.name | string | `"prowler"` | |
| serviceAccount.name | string | `"prowler"` | |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.11.3](https://github.com/norwoodj/helm-docs/releases/v1.11.3)

View File

@@ -1,11 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ .Values.clusterRole.name }}
rules:
- apiGroups: [""]
resources: ["pods", "configmaps", "nodes", "namespaces"]
verbs: ["get", "list", "watch"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["clusterrolebindings", "rolebindings", "clusterroles", "roles"]
verbs: ["get", "list", "watch"]

View File

@@ -1,18 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.configMap.name }}
namespace: {{ .Values.namespace.name }}
data:
varLibCni: "{{ .Values.configMap.data.varLibCni }}"
varLibEtcd: "{{ .Values.configMap.data.varLibEtcd }}"
varLibKubelet: "{{ .Values.configMap.data.varLibKubelet }}"
varLibKubeScheduler: "{{ .Values.configMap.data.varLibKubeScheduler }}"
varLibKubeControllerManager: "{{ .Values.configMap.data.varLibKubeControllerManager }}"
etcSystemd: "{{ .Values.configMap.data.etcSystemd }}"
libSystemd: "{{ .Values.configMap.data.libSystemd }}"
etcKubernetes: "{{ .Values.configMap.data.etcKubernetes }}"
usrBin: "{{ .Values.configMap.data.usrBin }}"
etcCniNetd: "{{ .Values.configMap.data.etcCniNetd }}"
optCniBin: "{{ .Values.configMap.data.optCniBin }}"
srvKubernetes: "{{ .Values.configMap.data.srvKubernetes }}"

View File

@@ -1,42 +0,0 @@
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ .Values.cronjob.name }}
namespace: {{ .Values.namespace.name }}
spec:
schedule: "{{ .Values.cronjob.schedule }}"
jobTemplate:
spec:
template:
metadata:
labels:
app: prowler
spec:
serviceAccountName: {{ .Values.serviceAccount.name }}
containers:
- name: prowler
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
command: ["prowler"]
args: ["kubernetes", "-z", "-b"]
imagePullPolicy: {{ .Values.image.pullPolicy }}
volumeMounts:
{{- range $key, $value := .Values.configMap.data }}
{{- if and (eq $.Values.clusterType "gke") (eq $key "srvKubernetes") }}
{{- else }}
- name: {{ $key | lower }}
mountPath: {{ $value }}
readOnly: true
{{- end }}
{{- end }}
hostPID: {{ .Values.cronjob.hostPID }}
restartPolicy: Never
volumes:
{{- range $key, $value := .Values.configMap.data }}
{{- if and (eq $.Values.clusterType "gke") (eq $key "srvKubernetes") }}
{{- else }}
- name: {{ $key | lower }}
hostPath:
path: {{ $value }}
{{- end }}
{{- end }}
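As a hedged aside (standard kubectl, using the names defined in the chart's values; not part of this diff), the deleted CronJob above could be exercised once without waiting for its schedule:

```sh
# Create an immediate Job from the chart's "prowler" CronJob in its namespace.
kubectl create job prowler-manual-run --from=cronjob/prowler -n prowler-ns
```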

View File

@@ -1,4 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.namespace.name }}

View File

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Values.clusterRoleBinding.name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ .Values.clusterRole.name }}
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccount.name }}
namespace: {{ .Values.namespace.name }}

View File

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.serviceAccount.name }}
namespace: {{ .Values.namespace.name }}

View File

@@ -1,40 +0,0 @@
namespace:
name: prowler-ns
cronjob:
name: prowler
schedule: "0 0 * * *"
hostPID: true
serviceAccount:
name: prowler-sa
image:
repository: toniblyx/prowler
tag: stable
pullPolicy: Always
clusterType:
configMap:
name: prowler-config
data:
varLibCni: "/var/lib/cni"
varLibEtcd: "/var/lib/etcd"
varLibKubelet: "/var/lib/kubelet"
varLibKubeScheduler: "/var/lib/kube-scheduler"
varLibKubeControllerManager: "/var/lib/kube-controller-manager"
etcSystemd: "/etc/systemd"
libSystemd: "/lib/systemd"
etcKubernetes: "/etc/kubernetes"
usrBin: "/usr/bin"
etcCniNetd: "/etc/cni/net.d"
optCniBin: "/opt/cni/bin"
srvKubernetes: "/srv/kubernetes"
clusterRole:
name: prowler-read-cluster
clusterRoleBinding:
name: prowler-read-cluster-binding
roleName: prowler-read-cluster
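A hedged install sketch based on the values above (not part of this diff): `clusterType` is empty by default, and the CronJob template skips the `/srv/kubernetes` host path when it is set to `gke`.

```sh
# Install the chart from contrib/k8s/helm, marking the target cluster as GKE
# so the srvKubernetes host path is not mounted.
helm install prowler . --set clusterType=gke
```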

View File

@@ -16,18 +16,18 @@ from prowler.lib.banner import print_banner
warnings.filterwarnings("ignore")
cli = sys.modules["flask.cli"]
print_banner()
print_banner(verbose=False)
print(
f"{Fore.GREEN}Loading all CSV files from the folder {folder_path_overview} ...\n{Style.RESET_ALL}"
)
cli.show_server_banner = lambda *x: click.echo(
f"{Fore.YELLOW}NOTE:{Style.RESET_ALL} If you are using {Fore.GREEN}{Style.BRIGHT}Prowler SaaS{Style.RESET_ALL} with the S3 integration or that integration \nfrom {Fore.CYAN}{Style.BRIGHT}Prowler Open Source{Style.RESET_ALL} and you want to use your data from your S3 bucket,\nrun: `{orange_color}aws s3 cp s3://<your-bucket>/output/csv ./output --recursive{Style.RESET_ALL}`\nand then run `prowler dashboard` again to load the new files."
f"{Fore.YELLOW}NOTE:{Style.RESET_ALL} If you are a {Fore.GREEN}{Style.BRIGHT}Prowler SaaS{Style.RESET_ALL} customer and you want to use your data from your S3 bucket,\nrun: `{orange_color}aws s3 cp s3://<your-bucket>/output/csv ./output --recursive{Style.RESET_ALL}`\nand then run `prowler dashboard` again to load the new files."
)
# Initialize the app - incorporate css
dashboard = dash.Dash(
__name__,
external_stylesheets=[dbc.themes.FLATLY],
external_stylesheets=[dbc.themes.DARKLY],
use_pages=True,
suppress_callback_exceptions=True,
title="Prowler Dashboard",
@@ -60,9 +60,7 @@ def generate_nav_links(current_path):
link_content = html.Span(
[
html.Img(src=icon_url, className="w-5"),
html.Span(
page["name"], className="font-medium text-base leading-6 text-white"
),
html.Span(page["name"], className="font-medium text-base leading-6"),
],
className="flex justify-center lg:justify-normal items-center gap-x-3 py-2 px-3",
)
@@ -98,8 +96,7 @@ def generate_help_menu():
[
html.Img(src=link["icon"], className="w-5"),
html.Span(
link["title"],
className="font-medium text-base leading-6 text-white",
link["title"], className="font-medium text-base leading-6"
),
],
className="flex items-center gap-x-3 py-2 px-3",
@@ -163,7 +160,7 @@ def update_nav_bar(pathname):
html.Img(src="assets/favicon.ico", className="w-5"),
"Subscribe to prowler SaaS",
],
className="flex items-center gap-x-3 text-white",
className="flex items-center gap-x-3",
),
],
href="https://prowler.com/",

Binary file not shown (image, 15 KiB before, 15 KiB after).

View File

@@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" shape-rendering="geometricPrecision" text-rendering="geometricPrecision" image-rendering="optimizeQuality" fill-rule="evenodd" clip-rule="evenodd" viewBox="0 0 443 511.62"><path fill-rule="nonzero" d="M152.93 286.97c0 17.1-13.87 30.97-30.97 30.97-17.11 0-30.98-13.87-30.98-30.97v-177.4l-37.45 40.31c-11.63 12.5-31.19 13.2-43.68 1.57-12.49-11.62-13.19-31.18-1.57-43.68L99.33 9.79l2.06-1.94c12.69-11.35 32.2-10.26 43.55 2.43l91.05 101.47c11.35 12.69 10.26 32.2-2.43 43.55-12.68 11.36-32.19 10.27-43.55-2.42l-37.08-41.33v175.42zm236.24 71.77c11.35-12.69 30.86-13.78 43.55-2.43 12.69 11.36 13.78 30.87 2.42 43.56L344.1 501.34c-11.36 12.69-30.87 13.78-43.55 2.42l-2.02-1.97-91.09-97.95c-11.63-12.49-10.93-32.05 1.57-43.67 12.49-11.63 32.05-10.93 43.67 1.57l37.46 40.31V231.53c0-17.11 13.87-30.97 30.97-30.97s30.97 13.86 30.97 30.97v168.54l37.09-41.33z"/></svg>


View File

@@ -1 +0,0 @@
<svg class="svg-icon" style="width: 1.001953125em; height: 1em;vertical-align: middle;fill: currentColor;overflow: hidden;" viewBox="0 0 1026 1024" version="1.1" xmlns="http://www.w3.org/2000/svg"><path d="M1013.7 90.8C997.8 75.5 972.4 76 957.1 92L510.9 557.1 73.2 90.8C58 74.7 32.7 73.9 16.6 89 0.5 104.1-0.3 129.4 14.8 145.5l466.6 497.1 1.5 1.5c0.2 0.2 0.4 0.4 0.7 0.6 0.3 0.3 0.6 0.5 0.9 0.8 0.3 0.3 0.6 0.5 0.9 0.7 0.2 0.2 0.4 0.4 0.7 0.6 0.3 0.2 0.6 0.5 0.9 0.7 0.2 0.2 0.5 0.4 0.7 0.5l0.9 0.6c0.3 0.2 0.5 0.4 0.8 0.5 0.3 0.2 0.6 0.3 0.9 0.5 0.3 0.2 0.6 0.3 0.9 0.5 0.3 0.2 0.5 0.3 0.8 0.4 0.3 0.2 0.6 0.3 1 0.5 0.3 0.1 0.5 0.3 0.8 0.4 0.3 0.2 0.7 0.3 1 0.5 0.2 0.1 0.5 0.2 0.7 0.3 0.4 0.2 0.7 0.3 1.1 0.4 0.2 0.1 0.5 0.2 0.7 0.3 0.4 0.1 0.8 0.3 1.2 0.4 0.2 0.1 0.5 0.1 0.7 0.2l1.2 0.3c0.2 0.1 0.4 0.1 0.7 0.2 0.4 0.1 0.8 0.2 1.3 0.3 0.2 0 0.4 0.1 0.6 0.1 0.4 0.1 0.9 0.2 1.3 0.2 0.2 0 0.4 0.1 0.6 0.1 0.5 0.1 0.9 0.1 1.4 0.2 0.2 0 0.4 0 0.6 0.1 0.5 0 1 0.1 1.5 0.1h4.6c0.5 0 1-0.1 1.5-0.1 0.2 0 0.4 0 0.5-0.1 0.5 0 0.9-0.1 1.4-0.2 0.2 0 0.4-0.1 0.6-0.1 0.4-0.1 0.9-0.1 1.3-0.2 0.2 0 0.4-0.1 0.6-0.1l1.2-0.3c0.2-0.1 0.4-0.1 0.7-0.2l1.2-0.3c0.2-0.1 0.5-0.1 0.7-0.2 0.4-0.1 0.8-0.2 1.1-0.4 0.2-0.1 0.5-0.2 0.7-0.3 0.4-0.1 0.7-0.3 1.1-0.4 0.3-0.1 0.5-0.2 0.8-0.3 0.3-0.1 0.7-0.3 1-0.5 0.3-0.1 0.5-0.2 0.8-0.4 0.3-0.2 0.6-0.3 0.9-0.5 0.3-0.1 0.6-0.3 0.8-0.4 0.3-0.2 0.6-0.3 0.8-0.5 0.3-0.2 0.6-0.3 0.9-0.5 0.3-0.2 0.5-0.3 0.8-0.5l0.9-0.6c0.2-0.2 0.4-0.3 0.7-0.5 0.3-0.2 0.6-0.5 1-0.7 0.2-0.1 0.4-0.3 0.6-0.5 0.3-0.3 0.7-0.5 1-0.8 0.2-0.1 0.3-0.3 0.5-0.5 0.5-0.5 1-0.9 1.5-1.4l0.9-0.9 475.4-495.6c15.3-15.7 14.7-41.1-1.2-56.3z" fill="#898989" /></svg>


View File

@@ -5,7 +5,7 @@
/* Use this file to add custom styles using Tailwind's utility classes. */
/*
! tailwindcss v3.4.3 | MIT License | https://tailwindcss.com */
! tailwindcss v3.4.1 | MIT License | https://tailwindcss.com */
/*
1. Prevent padding and border from affecting element width. (https://github.com/mozdevs/cssremedy/issues/4)
@@ -216,8 +216,6 @@ textarea {
/* 1 */
line-height: inherit;
/* 1 */
letter-spacing: inherit;
/* 1 */
color: inherit;
/* 1 */
margin: 0;
@@ -241,9 +239,9 @@ select {
*/
button,
input:where([type='button']),
input:where([type='reset']),
input:where([type='submit']) {
[type='button'],
[type='reset'],
[type='submit'] {
-webkit-appearance: button;
/* 1 */
background-color: transparent;
@@ -499,10 +497,6 @@ video {
--tw-backdrop-opacity: ;
--tw-backdrop-saturate: ;
--tw-backdrop-sepia: ;
--tw-contain-size: ;
--tw-contain-layout: ;
--tw-contain-paint: ;
--tw-contain-style: ;
}
::backdrop {
@@ -553,18 +547,14 @@ video {
--tw-backdrop-opacity: ;
--tw-backdrop-saturate: ;
--tw-backdrop-sepia: ;
--tw-contain-size: ;
--tw-contain-layout: ;
--tw-contain-paint: ;
--tw-contain-style: ;
}
.custom-grid {
grid-template-columns: minmax(0, 16fr) repeat(11, minmax(0, 11fr));
}
.collapse {
visibility: collapse;
.visible {
visibility: visible;
}
.relative {
@@ -604,10 +594,6 @@ video {
margin-right: auto;
}
.mb-0 {
margin-bottom: 0px;
}
.mb-2 {
margin-bottom: 0.5rem;
}
@@ -632,14 +618,6 @@ video {
margin-top: auto;
}
.mb-\[30px\] {
margin-bottom: 30px;
}
.mt-\[30px\] {
margin-top: 30px;
}
.block {
display: block;
}
@@ -656,6 +634,14 @@ video {
display: inline-flex;
}
.min-w-36 {
min-width: 9rem;
}
.min-w-44 {
min-width: 11rem;
}
.table {
display: table;
}
@@ -676,10 +662,6 @@ video {
max-height: 300px;
}
.w-3 {
width: 0.75rem;
}
.w-5 {
width: 1.25rem;
}
@@ -688,50 +670,6 @@ video {
width: 2rem;
}
.w-\[10\%\] {
width: 10%;
}
.w-\[10\.5\%\] {
width: 10.5%;
}
.w-\[11\%\] {
width: 11%;
}
.w-\[13\.5\%\] {
width: 13.5%;
}
.w-\[14\.5\%\] {
width: 14.5%;
}
.w-\[15\%\] {
width: 15%;
}
.w-\[36\%\] {
width: 36%;
}
.w-\[4\%\] {
width: 4%;
}
.w-\[40\.5\%\] {
width: 40.5%;
}
.w-\[9\%\] {
width: 9%;
}
.w-\[9\.5\%\] {
width: 9.5%;
}
.w-fit {
width: -moz-fit-content;
width: fit-content;
@@ -741,10 +679,6 @@ video {
width: 100%;
}
.min-w-36 {
min-width: 9rem;
}
.grid-cols-12 {
grid-template-columns: repeat(12, minmax(0, 1fr));
}
@@ -862,31 +796,30 @@ video {
}
.bg-gradient-failed {
background-image: linear-gradient(127.43deg, #F1F5F8 -177.68%, #EF4444 87.35%);
background-image: linear-gradient(127.43deg, #F1F5F8 -177.68%, #e67272 87.35%);
}
.bg-gradient-passed {
background-image: linear-gradient(127.43deg, #F1F5F8 -177.68%, #4ADE80 87.35%);
background-image: linear-gradient(127.43deg, #F1F5F8 -177.68%, #54d283 87.35%);
}
.p-2 {
padding: 0.5rem;
.bg-gradient-muted {
background-image: linear-gradient(127.43deg, #F1F5F8 -177.68%, #636c78 87.35%);
}
.p-3 {
padding: 0.75rem;
}
.p-2 {
padding: 0.5rem;
}
.px-10 {
padding-left: 2.5rem;
padding-right: 2.5rem;
}
.px-2 {
padding-left: 0.5rem;
padding-right: 0.5rem;
}
.px-3 {
padding-left: 0.75rem;
padding-right: 0.75rem;
@@ -921,10 +854,6 @@ video {
padding-bottom: 0.75rem;
}
.pr-2 {
padding-right: 0.5rem;
}
.text-center {
text-align: center;
}
@@ -1000,11 +929,6 @@ video {
color: rgb(41 37 36 / var(--tw-text-opacity));
}
.text-white {
--tw-text-opacity: 1;
color: rgb(255 255 255 / var(--tw-text-opacity));
}
.opacity-90 {
opacity: 0.9;
}
@@ -1068,6 +992,19 @@ video {
/* Firefox */
}
/*Styles for previous-vext-container from table*/
.previous-next-container {
margin-top: 1rem;
color: #000;
}
/*Style for input in filter table*/
.dash-table-container .dash-spreadsheet-container .dash-spreadsheet-inner input:not([type=radio]):not([type=checkbox]) {
color: #FFF !important;
opacity: 1 !important;
}
#_dash-app-content {
--tw-bg-opacity: 1;
background-color: rgb(231 229 228 / var(--tw-bg-opacity));
@@ -1104,10 +1041,6 @@ video {
color: rgb(41 37 36 / var(--tw-text-opacity));
}
#_dash-app-content .accordion .accordion-collapse.collapse {
visibility: visible;
}
#_dash-app-content .accordion .accordion-button:not(.collapsed) {
--tw-bg-opacity: 1;
background-color: rgb(231 229 228 / var(--tw-bg-opacity));
@@ -1224,10 +1157,6 @@ video {
width: auto;
}
.overview-table .card .collapse {
visibility: visible;
}
@media (min-width: 1536px) {
.\32xl\:container {
width: 100%;
@@ -1370,37 +1299,3 @@ video {
row-gap: 0px;
}
}
@media (min-width: 1536px) {
.\32xl\:w-\[10\%\] {
width: 10%;
}
.\32xl\:w-\[12\.5\%\] {
width: 12.5%;
}
.\32xl\:w-\[14\%\] {
width: 14%;
}
.\32xl\:w-\[15\.5\%\] {
width: 15.5%;
}
.\32xl\:w-\[2\%\] {
width: 2%;
}
.\32xl\:w-\[48\%\] {
width: 48%;
}
.\32xl\:w-\[71\.5\%\] {
width: 71.5%;
}
.\32xl\:w-\[9\%\] {
width: 9%;
}
}

View File

@@ -1535,7 +1535,7 @@ def get_section_container_iso(data, section_1, section_2):
return html.Div(section_containers, className="compliance-data-layout")
def get_section_containers_format4(data, section_1):
def get_section_containers_pci(data, section_1):
data["STATUS"] = data["STATUS"].apply(map_status_to_icon)
data[section_1] = data[section_1].astype(str)
@@ -1654,13 +1654,9 @@ def get_section_containers_format4(data, section_1):
)
graph_div_service = html.Div(graph_service, className="graph-section-req")
if "REQUIREMENTS_NAME" not in specific_data.columns:
title_internal = f"{service}"
else:
title_internal = f"{service} - {specific_data['REQUIREMENTS_NAME'].iloc[0]}"
internal_accordion_item = dbc.AccordionItem(
title=title_internal,
title=service,
children=[html.Div([data_table], className="inner-accordion-content")],
)

View File

@@ -1,23 +0,0 @@
import warnings
from dashboard.common_methods import get_section_containers_format1
warnings.filterwarnings("ignore")
def get_table(data):
aux = data[
[
"REQUIREMENTS_ID",
"REQUIREMENTS_ATTRIBUTES_SECTION",
"CHECKID",
"STATUS",
"REGION",
"ACCOUNTID",
"RESOURCEID",
]
].copy()
return get_section_containers_format1(
aux, "REQUIREMENTS_ATTRIBUTES_SECTION", "REQUIREMENTS_ID"
)

View File

@@ -6,13 +6,6 @@ warnings.filterwarnings("ignore")
def get_table(data):
# append the requirements_description to idgrupocontrol
data["REQUIREMENTS_ATTRIBUTES_IDGRUPOCONTROL"] = (
data["REQUIREMENTS_ATTRIBUTES_IDGRUPOCONTROL"]
+ " - "
+ data["REQUIREMENTS_DESCRIPTION"]
)
aux = data[
[
"REQUIREMENTS_ATTRIBUTES_MARCO",

View File

@@ -1,6 +1,6 @@
import warnings
from dashboard.common_methods import get_section_containers_format4
from dashboard.common_methods import get_section_containers_format2
warnings.filterwarnings("ignore")
@@ -9,13 +9,15 @@ def get_table(data):
aux = data[
[
"REQUIREMENTS_ID",
"REQUIREMENTS_NAME",
"REQUIREMENTS_SUBTECHNIQUES",
"CHECKID",
"STATUS",
"REGION",
"ACCOUNTID",
"RESOURCEID",
]
]
].copy()
return get_section_containers_format4(aux, "REQUIREMENTS_ID")
return get_section_containers_format2(
aux, "REQUIREMENTS_ID", "REQUIREMENTS_SUBTECHNIQUES"
)

View File

@@ -1,6 +1,6 @@
import warnings
from dashboard.common_methods import get_section_containers_format4
from dashboard.common_methods import get_section_containers_format2
warnings.filterwarnings("ignore")
@@ -9,13 +9,15 @@ def get_table(data):
aux = data[
[
"REQUIREMENTS_ID",
"REQUIREMENTS_NAME",
"REQUIREMENTS_SUBTECHNIQUES",
"CHECKID",
"STATUS",
"REGION",
"ACCOUNTID",
"RESOURCEID",
]
]
].copy()
return get_section_containers_format4(aux, "REQUIREMENTS_ID")
return get_section_containers_format2(
aux, "REQUIREMENTS_ID", "REQUIREMENTS_SUBTECHNIQUES"
)

View File

@@ -1,23 +0,0 @@
import warnings
from dashboard.common_methods import get_section_containers_format2
warnings.filterwarnings("ignore")
def get_table(data):
aux = data[
[
"REQUIREMENTS_ID",
"REQUIREMENTS_SUBTECHNIQUES",
"CHECKID",
"STATUS",
"REGION",
"ACCOUNTID",
"RESOURCEID",
]
].copy()
return get_section_containers_format2(
aux, "REQUIREMENTS_ID", "REQUIREMENTS_SUBTECHNIQUES"
)

View File

@@ -1,6 +1,6 @@
import warnings
from dashboard.common_methods import get_section_containers_format4
from dashboard.common_methods import get_section_containers_pci
warnings.filterwarnings("ignore")
@@ -17,4 +17,4 @@ def get_table(data):
]
]
return get_section_containers_format4(aux, "REQUIREMENTS_ID")
return get_section_containers_pci(aux, "REQUIREMENTS_ID")

View File

@@ -21,13 +21,12 @@ muted_manual_color = "#b33696"
critical_color = "#951649"
high_color = "#e11d48"
medium_color = "#ee6f15"
low_color = "#fcf45d"
low_color = "#f9f5e6"
informational_color = "#3274d9"
# Folder output path
folder_path_overview = os.getcwd() + "/output"
folder_path_compliance = os.getcwd() + "/output/compliance"
# Encoding
encoding_format = "utf-8"
# Error action, it is recommended to use "ignore" or "replace"
error_action = "ignore"

View File

@@ -11,7 +11,6 @@ def create_layout_overview(
service_dropdown: html.Div,
table_row_dropdown: html.Div,
status_dropdown: html.Div,
table_div_header: html.Div,
) -> html.Div:
"""
Create the layout of the dashboard.
@@ -41,7 +40,7 @@ def create_layout_overview(
html.Div([account_dropdown], className=""),
html.Div([region_dropdown], className=""),
],
className="grid gap-x-4 mt-[30px] mb-[30px] sm:grid-cols-2 lg:grid-cols-3",
className="grid gap-x-4 gap-y-4 sm:grid-cols-2 lg:grid-cols-3 lg:gap-y-0",
),
html.Div(
[
@@ -49,7 +48,7 @@ def create_layout_overview(
html.Div([service_dropdown], className=""),
html.Div([status_dropdown], className=""),
],
className="grid gap-x-4 mb-[30px] sm:grid-cols-2 lg:grid-cols-3",
className="grid gap-x-4 gap-y-4 sm:grid-cols-2 lg:grid-cols-3 lg:gap-y-0",
),
html.Div(
[
@@ -58,11 +57,11 @@ def create_layout_overview(
html.Div(className="flex", id="gcp_card", n_clicks=0),
html.Div(className="flex", id="k8s_card", n_clicks=0),
],
className="grid gap-x-4 mb-[30px] sm:grid-cols-2 lg:grid-cols-4",
className="grid gap-x-4 gap-y-4 sm:grid-cols-2 lg:grid-cols-4 lg:gap-y-0",
),
html.H4(
"Count of Findings by severity",
className="text-prowler-stone-900 text-lg font-bold mb-[30px]",
className="text-prowler-stone-900 text-lg font-bold",
),
html.Div(
[
@@ -79,7 +78,7 @@ def create_layout_overview(
id="line_plot",
),
],
className="grid gap-x-4 grid-cols-12 mb-[30px]",
className="grid gap-x-4 gap-y-4 grid-cols-12 lg:gap-y-0",
),
html.Div(
[
@@ -106,10 +105,9 @@ def create_layout_overview(
],
className="flex justify-between items-center",
),
table_div_header,
html.Div(id="table", className="grid"),
],
className="grid gap-x-8 2xl:container mx-auto",
className="grid gap-x-8 gap-y-8 2xl:container mx-auto",
)

View File

@@ -16,7 +16,6 @@ from dash.dependencies import Input, Output
# Config import
from dashboard.config import (
encoding_format,
error_action,
fail_color,
folder_path_compliance,
info_color,
@@ -30,7 +29,6 @@ from dashboard.lib.dropdowns import (
create_region_dropdown_compliance,
)
from dashboard.lib.layouts import create_layout_compliance
from prowler.lib.logger import logger
# Suppress warnings
warnings.filterwarnings("ignore")
@@ -40,16 +38,11 @@ warnings.filterwarnings("ignore")
csv_files = []
for file in glob.glob(os.path.join(folder_path_compliance, "*.csv")):
try:
with open(
file, "r", newline="", encoding=encoding_format, errors=error_action
) as csvfile:
reader = csv.reader(csvfile)
num_rows = sum(1 for row in reader)
if num_rows > 1:
csv_files.append(file)
except UnicodeDecodeError:
logger.error(f"Error decoding file: {file}")
with open(file, "r", newline="", encoding=encoding_format) as csvfile:
reader = csv.reader(csvfile)
num_rows = sum(1 for row in reader)
if num_rows > 1:
csv_files.append(file)
def load_csv_files(csv_files):
@@ -57,7 +50,7 @@ def load_csv_files(csv_files):
dfs = []
results = []
for file in csv_files:
df = pd.read_csv(file, sep=";", on_bad_lines="skip", encoding=encoding_format)
df = pd.read_csv(file, sep=";", on_bad_lines="skip")
if "CHECKID" in df.columns:
dfs.append(df)
result = file
@@ -245,9 +238,7 @@ def display_data(
"""Load CSV files into a single pandas DataFrame."""
dfs = []
for file in files:
df = pd.read_csv(
file, sep=";", on_bad_lines="skip", encoding=encoding_format
)
df = pd.read_csv(file, sep=";", on_bad_lines="skip")
dfs.append(df.astype(str))
return pd.concat(dfs, ignore_index=True)
@@ -272,7 +263,7 @@ def display_data(
# Rename the column PROJECTID to ACCOUNTID for GCP
if data.columns.str.contains("PROJECTID").any():
data.rename(columns={"PROJECTID": "ACCOUNTID"}, inplace=True)
data["REGION"] = "-"
# Rename the column SUBSCRIPTIONID to ACCOUNTID for Azure
if data.columns.str.contains("SUBSCRIPTIONID").any():
data.rename(columns={"SUBSCRIPTIONID": "ACCOUNTID"}, inplace=True)
@@ -474,7 +465,7 @@ def display_data(
overall_status_result_graph = get_graph(pie_1, "Overall Status Result")
security_level_graph = get_graph(
pie_2, f"Top 5 failed {current_filter} by requirements"
pie_2, f"Top 5 failed {current_filter} by findings"
)
return (

File diff suppressed because it is too large.

View File

@@ -8,6 +8,10 @@
@tailwind components;
@tailwind utilities;
#_dash-app-content {
@apply bg-prowler-stone-500;
}
@layer components {
.custom-grid {
grid-template-columns: minmax(0, 16fr) repeat(11, minmax(0, 11fr));
@@ -16,24 +20,6 @@
.custom-grid-large {
grid-template-columns: minmax(0, 10fr) repeat(11, minmax(0, 11fr));
}
}
@layer utilities {
/* Hide scrollbar for Chrome, Safari and Opera */
.no-scrollbar::-webkit-scrollbar {
display: none;
}
/* Hide scrollbar for IE, Edge and Firefox */
.no-scrollbar {
-ms-overflow-style: none; /* IE and Edge */
scrollbar-width: none; /* Firefox */
}
}
#_dash-app-content {
@apply bg-prowler-stone-500;
}
/* Styles for the accordion in the compliance page */
#_dash-app-content .accordion .accordion-header .accordion-button {
@apply text-prowler-stone-900 inline-block px-4 text-xs font-bold uppercase transition-all rounded-lg bg-prowler-stone-300 hover:bg-prowler-stone-900/10;
@@ -43,10 +29,6 @@
@apply text-prowler-stone-900 bg-prowler-white rounded-lg;
}
#_dash-app-content .accordion .accordion-collapse.collapse {
@apply visible
}
#_dash-app-content .accordion .accordion-button:not(.collapsed) {
@apply text-prowler-stone-900 bg-prowler-stone-500;
}
@@ -117,6 +99,14 @@
@apply absolute right-6 top-2 w-auto h-8 z-50;
}
.overview-table .card .collapse {
@apply visible
}
@layer utilities {
/* Hide scrollbar for Chrome, Safari and Opera */
.no-scrollbar::-webkit-scrollbar {
display: none;
}
/* Hide scrollbar for IE, Edge and Firefox */
.no-scrollbar {
-ms-overflow-style: none; /* IE and Edge */
scrollbar-width: none; /* Firefox */
}
}

View File

@@ -1,9 +1,11 @@
/** @type {import('tailwindcss').Config} */
module.exports = {
content: [
"*.{py,html,js}",
"./**/*.{py,html,js}",
"./**/**/*.{py,html,js}",
"./assets/**/*.{py,html,js}",
"./components/**/*.{py,html,js}",
"./pages/**/*.{py,html,js}",
"./utils/**/*.{py,html,js}",
"./app.py",
],
theme: {
extend: {

View File

@@ -120,42 +120,6 @@ All the checks MUST fill the `report.region` with the following criteria:
- If the audited resource is regional use the `region` (the name changes depending on the provider: `location` in Azure and GCP and `namespace` in K8s) attribute within the resource object.
- If the audited resource is global use the `service_client.region` within the service client object.
### Check Severity
The severity of the checks are defined in the metadata file with the `Severity` field. The severity is always in lowercase and can be one of the following values:
- `critical`
- `high`
- `medium`
- `low`
- `informational`
You may need to change it in the check's code if the check has different scenarios that could change the severity. This can be done by using the `report.check_metadata.Severity` attribute:
```python
if <valid for more than 6 months>:
report.status = "PASS"
report.check_metadata.Severity = "informational"
report.status_extended = f"RDS Instance {db_instance.id} certificate has over 6 months of validity left."
elif <valid for more than 3 months>:
report.status = "PASS"
report.check_metadata.Severity = "low"
report.status_extended = f"RDS Instance {db_instance.id} certificate has between 3 and 6 months of validity."
elif <valid for more than 1 month>:
report.status = "FAIL"
report.check_metadata.Severity = "medium"
report.status_extended = f"RDS Instance {db_instance.id} certificate less than 3 months of validity."
elif <valid for less than 1 month>:
report.status = "FAIL"
report.check_metadata.Severity = "high"
report.status_extended = f"RDS Instance {db_instance.id} certificate less than 1 month of validity."
else:
report.status = "FAIL"
report.check_metadata.Severity = "critical"
report.status_extended = (
f"RDS Instance {db_instance.id} certificate has expired."
)
```
### Resource ID, Name and ARN
All the checks MUST fill the `report.resource_id` and `report.resource_arn` with the following criteria:
@@ -319,7 +283,7 @@ Each Prowler check has metadata associated which is stored at the same level of
For the Remediation Code we use the following knowledge base to fill it:
- Official documentation for the provider
- https://docs.prowler.com/checks/checks-index
- https://docs.bridgecrew.io
- https://www.trendmicro.com/cloudoneconformity
- https://github.com/cloudmatos/matos/tree/master/remediations

View File

@@ -1,11 +1,7 @@
# Debugging
Debugging in Prowler make things easier!
If you are developing Prowler, it's possible that you will encounter some situations where you have to inspect the code in depth to fix some unexpected issues during the execution.
## VSCode
In VSCode you can run the code using the integrated debugger. Please, refer to this [documentation](https://code.visualstudio.com/docs/editor/debugging) for guidance about the debugger in VSCode.
If you are developing Prowler, it's possible that you will encounter some situations where you have to inspect the code in depth to fix some unexpected issues during the execution. To do that, if you are using VSCode you can run the code using the integrated debugger. Please, refer to this [documentation](https://code.visualstudio.com/docs/editor/debugging) for guidance about the debugger in VSCode.
The following file is an example of the [debugging configuration](https://code.visualstudio.com/docs/editor/debugging#_launch-configurations) file that you can add to [Virtual Studio Code](https://code.visualstudio.com/).
This file should inside the *.vscode* folder and its name has to be *launch.json*:
@@ -15,62 +11,31 @@ This file should inside the *.vscode* folder and its name has to be *launch.json
"version": "0.2.0",
"configurations": [
{
"name": "Debug AWS Check",
"type": "debugpy",
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "prowler.py",
"args": [
"aws",
"-f",
"eu-west-1",
"--service",
"cloudwatch",
"--log-level",
"ERROR",
"-c",
"<check_name>"
"-p",
"dev",
],
"console": "integratedTerminal",
"justMyCode": false
},
{
"name": "Debug Azure Check",
"type": "debugpy",
"name": "Python: Debug Tests",
"type": "python",
"request": "launch",
"program": "prowler.py",
"args": [
"azure",
"--sp-env-auth",
"--log-level",
"ERROR",
"-c",
"<check_name>"
],
"console": "integratedTerminal",
"justMyCode": false
},
{
"name": "Debug GCP Check",
"type": "debugpy",
"request": "launch",
"program": "prowler.py",
"args": [
"gcp",
"--log-level",
"ERROR",
"-c",
"<check_name>"
],
"console": "integratedTerminal",
"justMyCode": false
},
{
"name": "Debug K8s Check",
"type": "debugpy",
"request": "launch",
"program": "prowler.py",
"args": [
"kubernetes",
"--log-level",
"ERROR",
"-c",
"<check_name>"
"program": "${file}",
"purpose": [
"debug-test"
],
"console": "integratedTerminal",
"justMyCode": false

View File

@@ -4,14 +4,10 @@ You can extend Prowler Open Source in many different ways, in most cases you wil
## Get the code and install all dependencies
First of all, you need a version of Python 3.9 or higher and also `pip` installed to be able to install all dependencies required.
Then, to start working with the Prowler Github repository you need to fork it to be able to propose changes for new features, bug fixing, etc. To fork the Prowler repo please refer to [this guide](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo?tool=webui#forking-a-repository).
Once that is satisfied go ahead and clone your forked repo:
First of all, you need a version of Python 3.9 or higher and also pip installed to be able to install all dependencies required. Once that is satisfied go a head and clone the repo:
```
git clone https://github.com/<your-github-user>/prowler
git clone https://github.com/prowler-cloud/prowler
cd prowler
```
For isolation and avoid conflicts with other environments, we recommend usage of `poetry`:
@@ -48,11 +44,6 @@ Before we merge any of your pull requests we pass checks to the code, we use the
You can see all dependencies in file `pyproject.toml`.
Moreover, you would need to install [`TruffleHog`](https://github.com/trufflesecurity/trufflehog) on the latest version to check for secrets in the code. You can install it using the official installation guide [here](https://github.com/trufflesecurity/trufflehog?tab=readme-ov-file#floppy_disk-installation).
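As a hedged sketch of a local secret scan (the flags are assumptions that depend on the installed TruffleHog version; they are not taken from this diff):

```sh
# Scan the git history of the local clone for verified secrets.
trufflehog git file://. --only-verified
```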
???+ note
If you have any trouble when committing to the Prowler repository, add the `--no-verify` flag to the `git commit` command.
## Pull Request Checklist
If you create or review a PR in https://github.com/prowler-cloud/prowler please follow this checklist:

View File

@@ -23,8 +23,8 @@ The Prowler's service structure is the following and the way to initialise it is
All the Prowler provider's services inherits from a base class depending on the provider used.
- [AWS Service Base Class](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/lib/service/service.py)
- [GCP Service Base Class](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/gcp/lib/service/service.py)
- [Azure Service Base Class](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/azure/lib/service/service.py)
- [GCP Service Base Class](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/azure/lib/service/service.py)
- [Azure Service Base Class](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/gcp/lib/service/service.py)
- [Kubernetes Service Base Class](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/kubernetes/lib/service/service.py)
Each class is used to initialize the credentials and the API's clients to be used in the service. If some threading is used it must be coded there.
@@ -225,10 +225,10 @@ Each Prowler service requires a service client to use the service in the checks.
The following is the `<new_service_name>_client.py` containing the initialization of the service's class we have just created so the service's checks can use them:
```python
from prowler.providers.common.provider import Provider
from prowler.providers.common.common import get_global_provider
from prowler.providers.<provider>.services.<new_service_name>.<new_service_name>_service import <Service>
<new_service_name>_client = <Service>(Provider.get_global_provider())
<new_service_name>_client = <Service>(get_global_provider())
```
## Permissions

View File

@@ -115,7 +115,7 @@ class Test_iam_password_policy_uppercase:
# Prowler for AWS uses a shared object called aws_provider where it stores
# the info related with the provider
with mock.patch(
"prowler.providers.common.provider.Provider.get_global_provider",
"prowler.providers.common.common.get_global_provider",
return_value=aws_provider,
),
# We have to mock also the iam_client from the check to enforce that the iam_client used is the one
@@ -191,6 +191,9 @@ class Test_iam_password_policy_uppercase:
expiration=True,
)
# We set a mocked aws_provider to unify providers, this way will isolate each test not to step on other tests configuration
aws_provider = set_mocked_aws_provider([AWS_REGION_US_EAST_1])
# In this scenario we have to mock also the IAM service and the iam_client from the check to enforce
# that the iam_client used is the one created within this check because patch != import, and if you
# execute tests in parallel some objects can be already initialised hence the check won't be isolated.
# In this case we don't use the Moto decorator, we use the mocked IAM client for both objects
with mock.patch(
@@ -313,7 +316,7 @@ If the test your are creating belongs to a check that uses more than one provide
```python
with mock.patch(
"prowler.providers.common.provider.Provider.get_global_provider",
"prowler.providers.common.common.get_global_provider",
return_value=set_mocked_aws_provider(
[AWS_REGION_US_EAST_1, AWS_REGION_EU_WEST_1]
),
@@ -344,10 +347,10 @@ from prowler.providers.<provider>.services.<service>.<service>_client import <se
```
2. `<service>_client.py`:
```python
from prowler.providers.common.provider import Provider
from prowler.providers.common.common import get_global_provider
from prowler.providers.<provider>.services.<service>.<service>_service import <SERVICE>
<service>_client = <SERVICE>(Provider.get_global_provider())
<service>_client = <SERVICE>(mocked_provider)
```
Due to the above import path it's not the same to patch the following objects because if you run a bunch of tests, either in parallel or not, some clients can be already instantiated by another check, hence your test execution will be using another test's service instance:
@@ -368,7 +371,7 @@ Mocking a service client using the following code ...
Once the needed attributes are set for the mocked provider, you can use the mocked provider:
```python title="Mocking the service_client"
with mock.patch(
"prowler.providers.common.provider.Provider.get_global_provider",
"prowler.providers.common.common.get_global_provider",
new=set_mocked_aws_provider([<region>]),
), mock.patch(
"prowler.providers.<provider>.services.<service>.<check>.<check>.<service>_client",
@@ -390,7 +393,7 @@ Mocking a service client using the following code ...
```python title="Mocking the service and the service_client"
with mock.patch(
"prowler.providers.common.provider.Provider.get_global_provider",
"prowler.providers.common.common.get_global_provider",
new=set_mocked_aws_provider([<region>]),
), mock.patch(
"prowler.providers.<provider>.services.<service>.<SERVICE>",
@@ -447,7 +450,7 @@ class Test_compute_project_os_login_enabled:
# In this scenario we have to mock the app_client from the check to enforce that the compute_client used is the one created above
# And also is mocked the return value of get_global_provider function to return our GCP mocked provider defined in fixtures
with mock.patch(
"prowler.providers.common.provider.Provider.get_global_provider",
"prowler.providers.common.common.get_global_provider",
return_value=set_mocked_gcp_provider(),
), mock.patch(
"prowler.providers.gcp.services.compute.compute_project_os_login_enabled.compute_project_os_login_enabled.compute_client",
@@ -487,7 +490,7 @@ class Test_compute_project_os_login_enabled:
compute_client.projects = [project]
with mock.patch(
"prowler.providers.common.provider.Provider.get_global_provider",
"prowler.providers.common.common.get_global_provider",
return_value=set_mocked_gcp_provider(),
), mock.patch(
"prowler.providers.gcp.services.compute.compute_project_os_login_enabled.compute_project_os_login_enabled.compute_client",
@@ -655,7 +658,7 @@ class Test_app_ensure_http_is_redirected_to_https:
# In this scenario we have to mock the app_client from the check to enforce that the app_client used is the one created above
# And also is mocked the return value of get_global_provider function to return our Azure mocked provider defined in fixtures
with mock.patch(
"prowler.providers.common.provider.Provider.get_global_provider",
"prowler.providers.common.common.get_global_provider",
return_value=set_mocked_azure_provider(),
), mock.patch(
"prowler.providers.azure.services.app.app_ensure_http_is_redirected_to_https.app_ensure_http_is_redirected_to_https.app_client",
@@ -705,7 +708,7 @@ class Test_app_ensure_http_is_redirected_to_https:
app_client = mock.MagicMock
with mock.patch(
"prowler.providers.common.provider.Provider.get_global_provider",
"prowler.providers.common.common.get_global_provider",
return_value=set_mocked_azure_provider(),
), mock.patch(
"prowler.providers.azure.services.app.app_ensure_http_is_redirected_to_https.app_ensure_http_is_redirected_to_https.app_client",

Binary file not shown (image, 15 KiB before, 1.2 KiB after).

View File

@@ -40,10 +40,10 @@ If your IAM entity enforces MFA you can use `--mfa` and Prowler will ask you to
Prowler for Azure supports the following authentication types:
- [Service principal application](https://learn.microsoft.com/en-us/entra/identity-platform/app-objects-and-service-principals?tabs=browser#service-principal-object) by environment variables (recommended)
- Service principal authentication by environment variables (Enterprise Application)
- Current az cli credentials stored
- Interactive browser authentication
- [Managed identity](https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview) authentication
- Managed identity authentication
### Service Principal authentication
@@ -56,8 +56,6 @@ export AZURE_CLIENT_SECRET="XXXXXXX"
```
If you try to execute Prowler with the `--sp-env-auth` flag and those variables are empty or not exported, the execution is going to fail.
Follow the instructions in the [Create Prowler Service Principal](../tutorials/azure/create-prowler-service-principal.md) section to create a service principal.
### AZ CLI / Browser / Managed Identity authentication
The other three cases do not need additional configuration: `--az-cli-auth` and `--managed-identity-auth` are automated options. To use `--browser-auth`, the user needs to authenticate against Azure using the default browser to start the scan, and the `tenant-id` is also required.
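As a quick reference, a minimal sketch of these invocations (the tenant ID is a placeholder):

```console
prowler azure --az-cli-auth
prowler azure --managed-identity-auth
prowler azure --browser-auth --tenant-id "XXXXXXXX"
```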
@@ -66,22 +64,55 @@ The other three cases does not need additional configuration, `--az-cli-auth` an
To use each one you need to pass the proper flag to the execution. Prowler for Azure handles two types of permission scopes, which are:
- **Microsoft Entra ID permissions**: Used to retrieve metadata from the identity assumed by Prowler and specific Entra checks (not mandatory to have access to execute the tool). The permissions required by the tool are the following:
- **Microsoft Entra ID permissions**: Used to retrieve metadata from the identity assumed by Prowler (not mandatory to have access to execute the tool).
- **Subscription scope permissions**: Required to launch the checks against your resources, mandatory to launch the tool.
#### Microsoft Entra ID scope
Microsoft Entra ID (AAD earlier) permissions required by the tool are the following:
- `Directory.Read.All`
- `Policy.Read.All`
- `UserAuthenticationMethod.Read.All`
The best way to assign them is through the Azure web console:
1. Access Microsoft Entra ID
2. In the left menu bar, go to "App registrations"
3. Once there, in the menu bar click on "+ New registration" to register a new application
4. Fill in the "Name", select the "Supported account types" and click on "Register". You will be redirected to the applications page.
![Register an Application page](../img/register-application.png)
5. Select the new application
6. In the left menu bar, select "API permissions"
7. Then click on "+ Add a permission" and select "Microsoft Graph"
8. Once in the "Microsoft Graph" view, select "Application permissions"
9. Finally, search for "Directory", "Policy" and "UserAuthenticationMethod" and select the following permissions:
- `Directory.Read.All`
- `Policy.Read.All`
- `UserAuthenticationMethod.Read.All`
- **Subscription scope permissions**: Required to launch the checks against your resources, mandatory to launch the tool. It is required to add the following RBAC built-in roles per subscription to the entity that is going to be assumed by the tool:
- `Reader`
- `ProwlerRole` (custom role defined in [prowler-azure-custom-role](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-azure-custom-role.json))
![EntraID Permissions](../img/AAD-permissions.png)
To assign the permissions, follow the instructions in the [Microsoft Entra ID permissions](../tutorials/azure/create-prowler-service-principal.md#assigning-the-proper-permissions) section and the [Azure subscriptions permissions](../tutorials/azure/subscriptions.md#assigning-proper-permissions) section, respectively.
#### Checks that require ProwlerRole
#### Subscriptions scope
The following checks require the `ProwlerRole` custom role to be executed. If you want to run them, make sure you have assigned the role to the identity that is going to be assumed by Prowler:
Regarding the subscription scope, Prowler by default scans all the subscriptions that it is able to list, so it is required to add the following RBAC built-in roles per subscription to the entity that is going to be assumed by the tool:
- `app_function_access_keys_configured`
- `app_function_ftps_deployment_disabled`
- `Security Reader`
- `Reader`
To assign these roles, follow these instructions (a CLI alternative is sketched after the steps):
1. Access your subscriptions, then select the subscription you want to audit.
2. Select "Access control (IAM)".
3. In the overview, select "Roles"
![IAM Page](../img/page-IAM.png)
4. Click on "+ Add" and select "Add role assignment"
5. In the search bar, type `Security Reader`, select it and click on "Next"
6. In the Members tab, click on "+ Select members" and add the members you want to assign this role.
7. Click on "Review + assign" to apply the new role.
*Repeat these steps for the `Reader` role.*
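If you prefer the command line over the portal, a minimal sketch of an equivalent role assignment with the Azure CLI (assuming the CLI is installed and you are logged in; the assignee and subscription ID are placeholders):

```console
az role assignment create --assignee "<app-or-object-id>" --role "Reader" --scope "/subscriptions/<subscription-id>"
```

Repeat the command for each role and subscription that Prowler needs.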
## Google Cloud

View File

@@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 240.29 285.79"><defs><style>.cls-1{fill:url(#linear-gradient);}.cls-2{fill:#71be44;}</style><linearGradient id="linear-gradient" x1="157.45" y1="97.85" x2="211.7" y2="97.85" gradientUnits="userSpaceOnUse"><stop offset="0" stop-color="#5a9b37"/><stop offset="1" stop-color="#71be44"/></linearGradient></defs><circle class="cls-1" cx="148.2" cy="97.85" r="67.45"/><path class="cls-2" d="M66.28,30.4H148.2a0,0,0,0,1,0,0V185.35a81.93,81.93,0,0,1-81.93,81.93h0a0,0,0,0,1,0,0V30.4A0,0,0,0,1,66.28,30.4Z"/></svg>

(Binary image diffs are not shown here. Several images under docs/img were added, removed, or resized; the newly added files include docs/img/page-IAM.png and docs/img/prowler-logo.png.)
View File

@@ -90,8 +90,6 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler/),
poetry install
python prowler.py -v
```
???+ note
If you want to clone Prowler from Windows, use `git config core.longpaths true` to allow long file paths.
=== "Amazon Linux 2"
@@ -189,6 +187,7 @@ You can run Prowler from your workstation, a Kubernetes Job, a Google Compute En
We have deprecated some of our output formats:
- The HTML is replaced by the new Prowler Dashboard, run `prowler dashboard`.
- The native JSON is replaced by the JSON [OCSF](https://schema.ocsf.io/) v1.1.0, common to all the providers.
### AWS
@@ -212,10 +211,10 @@ prowler <provider>
If you miss the former output, you can use `--verbose`, but Prowler v4 is smoking fast, so you won't see much ;)
By default, Prowler generates CSV, JSON-OCSF and HTML reports. However, you can generate a JSON-ASFF report (used by AWS Security Hub) with `-M` or `--output-modes`:
By default, Prowler will generate CSV, JSON and HTML reports; however, you can generate a JSON-ASFF (used by AWS Security Hub) report with `-M` or `--output-modes`:
```console
prowler <provider> -M csv json-asff json-ocsf html
prowler <provider> -M csv json json-asff html
```
The HTML report will be located in the output directory along with the other files and it will look like:
@@ -323,20 +322,17 @@ For non in-cluster execution, you can provide the location of the KubeConfig fil
```console
prowler kubernetes --kubeconfig-file path
```
???+ note
If no `--kubeconfig-file` is provided, Prowler will use the default KubeConfig file location (`~/.kube/config`).
For in-cluster execution, you can use the supplied yaml to run Prowler as a job within a new Prowler namespace:
For in-cluster execution, you can use the supplied yaml to run Prowler as a job:
```console
kubectl apply -f kubernetes/job.yaml
kubectl apply -f kubernetes/prowler-role.yaml
kubectl apply -f kubernetes/prowler-rolebinding.yaml
kubectl get pods --namespace prowler-ns --> prowler-XXXXX
kubectl logs prowler-XXXXX --namespace prowler-ns
kubectl get pods --> prowler-XXXXX
kubectl logs prowler-XXXXX
```
???+ note
By default, `prowler` will scan all namespaces in your active Kubernetes context. Use the flag `--context` to specify the context to be scanned and `--namespaces` to specify the namespaces to be scanned.
> By default, `prowler` will scan all namespaces in your active Kubernetes context, use flag `--context` to specify the context to be scanned and `--namespaces` to specify the namespaces to be scanned.
## Prowler v2 Documentation
For **Prowler v2 Documentation**, please check it out [here](https://github.com/prowler-cloud/prowler/blob/8818f47333a0c1c1a457453c87af0ea5b89a385f/README.md).

View File

@@ -85,7 +85,7 @@ prowler --security-hub --region eu-west-1
```
???+ note
It is recommended to send only fails to Security Hub, and that is possible by adding `--status FAIL` to the command. Instead of the `--status FAIL` argument, you can use the `--send-sh-only-fails` argument to save all the findings in the Prowler outputs but send only FAIL findings to AWS Security Hub.
It is recommended to send only fails to Security Hub, and that is possible by adding `-q/--quiet` to the command. Instead of the `-q/--quiet` argument, you can use the `--send-sh-only-fails` argument to save all the findings in the Prowler outputs but send only FAIL findings to AWS Security Hub.
Since Prowler performs checks in all regions by default, you may need to filter by region when running the Security Hub integration, as shown in the example above. Remember to enable Security Hub in the region or regions you need by calling `aws securityhub enable-security-hub --region <region>` and run Prowler with the option `-f/--region <region>` (if no region is given it will try to push findings to the hubs in all regions). Prowler will send findings to the Security Hub in the region where the scanned resource is located.
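Putting those two steps together for a single region, a minimal sketch (using `eu-west-1` as a placeholder region):

```sh
aws securityhub enable-security-hub --region eu-west-1
prowler --security-hub --region eu-west-1
```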
@@ -121,13 +121,13 @@ prowler --security-hub --role arn:aws:iam::123456789012:role/ProwlerExecutionRol
## Send only failed findings to Security Hub
When using the **AWS Security Hub** integration you can send only the `FAIL` findings generated by **Prowler**, so the **AWS Security Hub** usage costs would eventually be lower. To follow that recommendation you can add the `--status FAIL` flag to the Prowler command:
When using the **AWS Security Hub** integration you can send only the `FAIL` findings generated by **Prowler**, so the **AWS Security Hub** usage costs would eventually be lower. To follow that recommendation you can add the `-q/--quiet` flag to the Prowler command:
```sh
prowler --security-hub --status FAIL
prowler --security-hub --quiet
```
Instead of the `--status FAIL` argument, you can use the `--send-sh-only-fails` argument to save all the findings in the Prowler outputs but send only FAIL findings to AWS Security Hub:
Instead of the `-q/--quiet` argument, you can use the `--send-sh-only-fails` argument to save all the findings in the Prowler outputs but send only FAIL findings to AWS Security Hub:
```sh
prowler --security-hub --send-sh-only-fails

View File

@@ -1,6 +1,6 @@
# Check mapping between Prowler v4/v3 and v2
Prowler v3 and v4 come with different identifiers, but we maintained the same checks that were implemented in v2. The reason for this change is that in previous versions of Prowler, check names were mostly based on the CIS Benchmark for AWS. In v4 and v3 all checks are independent from any security framework and each has its own name and ID.
Prowler v3 comes with different identifiers, but we maintained the same checks that were implemented in v2. The reason for this change is that in previous versions of Prowler, check names were mostly based on the CIS Benchmark for AWS. In v4 and v3 all checks are independent from any security framework and each has its own name and ID.
If you need more information about how the new compliance implementation works in Prowler v4 and v3, see the [Compliance](../compliance.md) section.
@@ -95,8 +95,7 @@ checks_v4_v3_to_v2_mapping = {
"ec2_networkacl_allow_ingress_any_port": "extra7138",
"ec2_networkacl_allow_ingress_tcp_port_22": "check45",
"ec2_networkacl_allow_ingress_tcp_port_3389": "check46",
"ec2_securitygroup_allow_ingress_from_internet_to_all_ports": "extra748",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port": "extra74",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port": "extra748",
"ec2_securitygroup_allow_ingress_from_internet_to_port_mongodb_27017_27018": "extra753",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_ftp_port_20_21": "extra7134",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22": "check41",

View File

@@ -1,34 +0,0 @@
# How to create Prowler Service Principal
To allow Prowler to assume an identity and start the scan with the required privileges, it is necessary to create a Service Principal. To create one, follow these steps:
1. Access to Microsoft Entra ID
2. In the left menu bar, go to "App registrations"
3. Once there, in the menu bar click on "+ New registration" to register a new application
4. Fill in the "Name", select the "Supported account types" and click on "Register". You will be redirected to the applications page.
5. Once in the application page, in the left menu bar, select "Certificates & secrets"
6. In the "Certificates & secrets" view, click on "+ New client secret"
7. Fill the "Description" and "Expires" fields and click on "Add"
8. Copy the value of the secret; it will be used as the `AZURE_CLIENT_SECRET` environment variable (see the usage sketch after the figure below).
![Register an Application page](../../img/create-sp.gif)
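As a usage sketch, assuming the standard Azure identity environment variables expected by `--sp-env-auth` (the values are placeholders):

```console
export AZURE_CLIENT_ID="XXXXXXXXX"
export AZURE_TENANT_ID="XXXXXXXXX"
export AZURE_CLIENT_SECRET="XXXXXXXXX"
prowler azure --sp-env-auth
```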
## Assigning the proper permissions
To allow Prowler to retrieve metadata from the assumed identity and run specific Entra checks, the following permissions need to be assigned:
1. Access to Microsoft Entra ID
2. In the left menu bar, go to "App registrations"
3. Once there, select the application that you have created
4. In the left menu bar, select "API permissions"
5. Then click on "+ Add a permission" and select "Microsoft Graph"
6. Once in the "Microsoft Graph" view, select "Application permissions"
7. Search for "Directory", "Policy" and "UserAuthenticationMethod" and select the following permissions:
- `Directory.Read.All`
- `Policy.Read.All`
- `UserAuthenticationMethod.Read.All`
8. Click on "Add permissions" to apply the new permissions.
9. Finally, click on "Grant admin consent for [your tenant]" to apply the permissions.
![EntraID Permissions](../../img/AAD-permissions.png)

View File

@@ -1,6 +1,6 @@
# Azure subscriptions scope
By default, Prowler is multisubscription, which means that it is going to scan all the subscriptions it is able to list. If you only assign permissions to one subscription, it is going to scan a single one.
Prowler also has the ability to limit the scan to a set of subscriptions passed as an input argument; to do so:
```console
@@ -8,36 +8,3 @@ prowler azure --az-cli-auth --subscription-ids <subscription ID 1> <subscription
```
Where you can pass from 1 up to N subscriptions to be scanned.
## Assigning proper permissions
Regarding the subscription scope, Prowler by default scans all subscriptions that it is able to list, so it is necessary to add the `Reader` RBAC built-in role per subscription or management group (recommended for multiple subscriptions, see the [next section](#recommendation-for-multiple-subscriptions)) to the entity that will be assumed by the tool:
To assign this role, follow these instructions:
1. Access your subscriptions, then select the subscription you want to audit.
2. Select "Access control (IAM)".
3. In the overview, select "Roles".
4. Click on "+ Add" and select "Add role assignment".
5. In the search bar, type `Reader`, select it and click on "Next".
6. In the Members tab, click on "+ Select members" and add the members you want to assign this role.
7. Click on "Review + assign" to apply the new role.
![Add reader role to subscription](../../img/add-reader-role.gif)
Moreover, some additional read-only permissions are needed for certain checks; for those checks that are not covered by built-in roles we use a custom role. This role is defined in [prowler-azure-custom-role](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-azure-custom-role.json). Once the custom role is created, repeat the steps mentioned above to assign the new `ProwlerRole` to the identity (a CLI sketch for creating the role follows).
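A minimal sketch of creating the custom role with the Azure CLI, assuming the role definition JSON has been downloaded to the working directory:

```console
az role definition create --role-definition @prowler-azure-custom-role.json
```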
## Recommendation for multiple subscriptions
Scanning multiple subscriptions can be tedious because roles have to be created and assigned for each one. For this reason, we recommend the use of *[management groups](https://learn.microsoft.com/en-us/azure/governance/management-groups/overview)* to group all the subscriptions that are going to be audited by Prowler.
To do this properly, you have to [create a new management group](https://learn.microsoft.com/en-us/azure/governance/management-groups/create-management-group-portal) and assign the roles in the same way as has been done for the subscription scope.
![Create management group](../../img/create-management-group.gif)
Once the management group is properly set up, you can add all the subscriptions that you want to audit.
![Add subscription to management group](../../img/add-sub-to-management-group.gif)
???+ note
By default, `prowler` will scan all subscriptions in the Azure tenant; use the flag `--subscription-id` to specify the subscriptions to be scanned.

View File

@@ -29,23 +29,16 @@ The following list includes all the AWS checks with configurable variables that
| `organizations_delegated_administrators` | `organizations_trusted_delegated_administrators` | List of Strings |
| `ecr_repositories_scan_vulnerabilities_in_latest_image` | `ecr_repository_vulnerability_minimum_severity` | String |
| `trustedadvisor_premium_support_plan_subscribed` | `verify_premium_support_plans` | Boolean |
| `config_recorder_all_regions_enabled` | `mute_non_default_regions` | Boolean |
| `drs_job_exist` | `mute_non_default_regions` | Boolean |
| `guardduty_is_enabled` | `mute_non_default_regions` | Boolean |
| `securityhub_enabled` | `mute_non_default_regions` | Boolean |
| `cloudtrail_threat_detection_privilege_escalation` | `threat_detection_privilege_escalation_entropy` | Integer |
| `cloudtrail_threat_detection_privilege_escalation` | `threat_detection_privilege_escalation_minutes` | Integer |
| `cloudtrail_threat_detection_privilege_escalation` | `threat_detection_privilege_escalation_actions` | List of Strings |
| `cloudtrail_threat_detection_enumeration` | `threat_detection_enumeration_entropy` | Integer |
| `cloudtrail_threat_detection_enumeration` | `threat_detection_enumeration_minutes` | Integer |
| `cloudtrail_threat_detection_enumeration` | `threat_detection_enumeration_actions` | List of Strings |
| `rds_instance_backup_enabled` | `check_rds_instance_replicas` | Boolean |
| `ec2_securitygroup_allow_ingress_from_internet_to_any_port` | `ec2_allowed_interface_types` | List of Strings |
| `ec2_securitygroup_allow_ingress_from_internet_to_any_port` | `ec2_allowed_instance_owners` | List of Strings |
| `acm_certificates_expiration_check` | `days_to_expire_threshold` | Integer |
| `eks_control_plane_logging_all_types_enabled` | `eks_required_log_types` | List of Strings |
| `config_recorder_all_regions_enabled` | `mute_non_default_regions` | Boolean |
| `drs_job_exist` | `mute_non_default_regions` | Boolean |
| `guardduty_is_enabled` | `mute_non_default_regions` | Boolean |
| `securityhub_enabled` | `mute_non_default_regions` | Boolean |
| `cloudtrail_threat_detection_privilege_escalation` | `threat_detection_privilege_escalation_entropy` | Integer |
| `cloudtrail_threat_detection_privilege_escalation` | `threat_detection_privilege_escalation_minutes` | Integer |
| `cloudtrail_threat_detection_privilege_escalation` | `threat_detection_privilege_escalation_actions` | List of Strings |
| `cloudtrail_threat_detection_enumeration` | `threat_detection_enumeration_entropy` | Integer |
| `cloudtrail_threat_detection_enumeration` | `threat_detection_enumeration_minutes` | Integer |
| `cloudtrail_threat_detection_enumeration` | `threat_detection_enumeration_actions` | List of Strings |
## Azure
### Configurable Checks
@@ -84,20 +77,10 @@ The following list includes all the Azure checks with configurable variables tha
```yaml title="config.yaml"
# AWS Configuration
aws:
# AWS Global Configuration
  # aws.mute_non_default_regions --> Set to True to mute failed findings in non-default regions for AccessAnalyzer, GuardDuty, SecurityHub, DRS and Config
# aws.mute_non_default_regions --> Mute Failed Findings in non-default regions for GuardDuty, SecurityHub, DRS and Config
mute_non_default_regions: False
# If you want to mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w mutelist.yaml`:
# Mutelist:
# Accounts:
# "*":
# Checks:
# "*":
# Regions:
# - "ap-southeast-1"
# - "ap-southeast-2"
# Resources:
# - "*"
# AWS IAM Configuration
# aws.iam_user_accesskey_unused --> CIS recommends 45 days
@@ -107,24 +90,11 @@ aws:
# AWS EC2 Configuration
# aws.ec2_elastic_ip_shodan
# TODO: create common config
shodan_api_key: null
# aws.ec2_securitygroup_with_many_ingress_egress_rules --> by default is 50 rules
max_security_group_rules: 50
# aws.ec2_instance_older_than_specific_days --> by default is 6 months (180 days)
max_ec2_instance_age_in_days: 180
# aws.ec2_securitygroup_allow_ingress_from_internet_to_any_port
# allowed network interface types for security groups open to the Internet
ec2_allowed_interface_types:
[
"api_gateway_managed",
"vpc_endpoint",
]
# allowed network interface owners for security groups open to the Internet
ec2_allowed_instance_owners:
[
"amazon-elb"
]
# AWS VPC Configuration (vpc_endpoint_connections_trust_boundaries, vpc_endpoint_services_allowed_principals_trust_boundaries)
# Single account environment: No action required. The AWS account number will be automatically added by the checks.
@@ -148,234 +118,201 @@ aws:
# aws.awslambda_function_using_supported_runtimes
obsolete_lambda_runtimes:
[
"java8",
"go1.x",
"provided",
"python3.6",
"python2.7",
"python3.7",
"nodejs4.3",
"nodejs4.3-edge",
"nodejs6.10",
"nodejs",
"nodejs8.10",
"nodejs10.x",
"nodejs12.x",
"nodejs14.x",
"dotnet5.0",
"dotnetcore1.0",
"dotnetcore2.0",
"dotnetcore2.1",
"dotnetcore3.1",
"ruby2.5",
"ruby2.7",
]
# AWS Organizations
# aws.organizations_scp_check_deny_regions
# aws.organizations_enabled_regions: [
# "eu-central-1",
# "eu-west-1",
# organizations_scp_check_deny_regions
# organizations_enabled_regions: [
# 'eu-central-1',
# 'eu-west-1',
# "us-east-1"
# ]
organizations_enabled_regions: []
organizations_trusted_delegated_administrators: []
# AWS ECR
# aws.ecr_repositories_scan_vulnerabilities_in_latest_image
# ecr_repositories_scan_vulnerabilities_in_latest_image
# CRITICAL
# HIGH
# MEDIUM
ecr_repository_vulnerability_minimum_severity: "MEDIUM"
# AWS Trusted Advisor
# aws.trustedadvisor_premium_support_plan_subscribed
# trustedadvisor_premium_support_plan_subscribed
verify_premium_support_plans: True
# AWS CloudTrail Configuration
# aws.cloudtrail_threat_detection_privilege_escalation
  threat_detection_privilege_escalation_threshold: 0.1 # Percentage of actions found to decide if it is a privilege_escalation attack event, by default is 0.1 (10%)
  threat_detection_privilege_escalation_entropy: 0.7 # Percentage of actions found to decide if it is a privilege_escalation attack event, by default is 0.7 (70%)
threat_detection_privilege_escalation_minutes: 1440 # Past minutes to search from now for privilege_escalation attacks, by default is 1440 minutes (24 hours)
threat_detection_privilege_escalation_actions:
[
"AddPermission",
"AddRoleToInstanceProfile",
"AddUserToGroup",
"AssociateAccessPolicy",
"AssumeRole",
"AttachGroupPolicy",
"AttachRolePolicy",
"AttachUserPolicy",
"ChangePassword",
"CreateAccessEntry",
"CreateAccessKey",
"CreateDevEndpoint",
"CreateEventSourceMapping",
"CreateFunction",
"CreateGroup",
"CreateJob",
"CreateKeyPair",
"CreateLoginProfile",
"CreatePipeline",
"CreatePolicyVersion",
"CreateRole",
"CreateStack",
"DeleteRolePermissionsBoundary",
"DeleteRolePolicy",
"DeleteUserPermissionsBoundary",
"DeleteUserPolicy",
"DetachRolePolicy",
"DetachUserPolicy",
"GetCredentialsForIdentity",
"GetId",
"GetPolicyVersion",
"GetUserPolicy",
"Invoke",
"ModifyInstanceAttribute",
"PassRole",
"PutGroupPolicy",
"PutPipelineDefinition",
"PutRolePermissionsBoundary",
"PutRolePolicy",
"PutUserPermissionsBoundary",
"PutUserPolicy",
"ReplaceIamInstanceProfileAssociation",
"RunInstances",
"SetDefaultPolicyVersion",
"UpdateAccessKey",
"UpdateAssumeRolePolicy",
"UpdateDevEndpoint",
"UpdateEventSourceMapping",
"UpdateFunctionCode",
"UpdateJob",
"UpdateLoginProfile",
]
threat_detection_privilege_escalation_actions: [
"AddPermission",
"AddRoleToInstanceProfile",
"AddUserToGroup",
"AssociateAccessPolicy",
"AssumeRole",
"AttachGroupPolicy",
"AttachRolePolicy",
"AttachUserPolicy",
"ChangePassword",
"CreateAccessEntry",
"CreateAccessKey",
"CreateDevEndpoint",
"CreateEventSourceMapping",
"CreateFunction",
"CreateGroup",
"CreateJob",
"CreateKeyPair",
"CreateLoginProfile",
"CreatePipeline",
"CreatePolicyVersion",
"CreateRole",
"CreateStack",
"DeleteRolePermissionsBoundary",
"DeleteRolePolicy",
"DeleteUserPermissionsBoundary",
"DeleteUserPolicy",
"DetachRolePolicy",
"DetachUserPolicy",
"GetCredentialsForIdentity",
"GetId",
"GetPolicyVersion",
"GetUserPolicy",
"Invoke",
"ModifyInstanceAttribute",
"PassRole",
"PutGroupPolicy",
"PutPipelineDefinition",
"PutRolePermissionsBoundary",
"PutRolePolicy",
"PutUserPermissionsBoundary",
"PutUserPolicy",
"ReplaceIamInstanceProfileAssociation",
"RunInstances",
"SetDefaultPolicyVersion",
"UpdateAccessKey",
"UpdateAssumeRolePolicy",
"UpdateDevEndpoint",
"UpdateEventSourceMapping",
"UpdateFunctionCode",
"UpdateJob",
"UpdateLoginProfile",
]
# aws.cloudtrail_threat_detection_enumeration
threat_detection_enumeration_threshold: 0.1 # Percentage of actions found to decide if it is an enumeration attack event, by default is 0.1 (10%)
threat_detection_enumeration_entropy: 0.7 # Percentage of actions found to decide if it is an enumeration attack event, by default is 0.7 (70%)
threat_detection_enumeration_minutes: 1440 # Past minutes to search from now for enumeration attacks, by default is 1440 minutes (24 hours)
threat_detection_enumeration_actions:
[
"DescribeAccessEntry",
"DescribeAccountAttributes",
"DescribeAvailabilityZones",
"DescribeBundleTasks",
"DescribeCarrierGateways",
"DescribeClientVpnRoutes",
"DescribeCluster",
"DescribeDhcpOptions",
"DescribeFlowLogs",
"DescribeImages",
"DescribeInstanceAttribute",
"DescribeInstanceInformation",
"DescribeInstanceTypes",
"DescribeInstances",
"DescribeInstances",
"DescribeKeyPairs",
"DescribeLogGroups",
"DescribeLogStreams",
"DescribeOrganization",
"DescribeRegions",
"DescribeSecurityGroups",
"DescribeSnapshotAttribute",
"DescribeSnapshotTierStatus",
"DescribeSubscriptionFilters",
"DescribeTransitGatewayMulticastDomains",
"DescribeVolumes",
"DescribeVolumesModifications",
"DescribeVpcEndpointConnectionNotifications",
"DescribeVpcs",
"GetAccount",
"GetAccountAuthorizationDetails",
"GetAccountSendingEnabled",
"GetBucketAcl",
"GetBucketLogging",
"GetBucketPolicy",
"GetBucketReplication",
"GetBucketVersioning",
"GetCallerIdentity",
"GetCertificate",
"GetConsoleScreenshot",
"GetCostAndUsage",
"GetDetector",
"GetEbsDefaultKmsKeyId",
"GetEbsEncryptionByDefault",
"GetFindings",
"GetFlowLogsIntegrationTemplate",
"GetIdentityVerificationAttributes",
"GetInstances",
"GetIntrospectionSchema",
"GetLaunchTemplateData",
"GetLaunchTemplateData",
"GetLogRecord",
"GetParameters",
"GetPolicyVersion",
"GetPublicAccessBlock",
"GetQueryResults",
"GetRegions",
"GetSMSAttributes",
"GetSMSSandboxAccountStatus",
"GetSendQuota",
"GetTransitGatewayRouteTableAssociations",
"GetUserPolicy",
"HeadObject",
"ListAccessKeys",
"ListAccounts",
"ListAllMyBuckets",
"ListAssociatedAccessPolicies",
"ListAttachedUserPolicies",
"ListClusters",
"ListDetectors",
"ListDomains",
"ListFindings",
"ListHostedZones",
"ListIPSets",
"ListIdentities",
"ListInstanceProfiles",
"ListObjects",
"ListOrganizationalUnitsForParent",
"ListOriginationNumbers",
"ListPolicyVersions",
"ListRoles",
"ListRoles",
"ListRules",
"ListServiceQuotas",
"ListSubscriptions",
"ListTargetsByRule",
"ListTopics",
"ListUsers",
"LookupEvents",
"Search",
]
# AWS RDS Configuration
# aws.rds_instance_backup_enabled
# Whether to check RDS instance replicas or not
check_rds_instance_replicas: False
# AWS ACM Configuration
# aws.acm_certificates_expiration_check
days_to_expire_threshold: 7
# AWS EKS Configuration
# aws.eks_control_plane_logging_all_types_enabled
# EKS control plane logging types that must be enabled
eks_required_log_types:
[
"api",
"audit",
"authenticator",
"controllerManager",
"scheduler",
]
threat_detection_enumeration_actions: [
"DescribeAccessEntry",
"DescribeAccountAttributes",
"DescribeAvailabilityZones",
"DescribeBundleTasks",
"DescribeCarrierGateways",
"DescribeClientVpnRoutes",
"DescribeCluster",
"DescribeDhcpOptions",
"DescribeFlowLogs",
"DescribeImages",
"DescribeInstanceAttribute",
"DescribeInstanceInformation",
"DescribeInstanceTypes",
"DescribeInstances",
"DescribeInstances",
"DescribeKeyPairs",
"DescribeLogGroups",
"DescribeLogStreams",
"DescribeOrganization",
"DescribeRegions",
"DescribeSecurityGroups",
"DescribeSnapshotAttribute",
"DescribeSnapshotTierStatus",
"DescribeSubscriptionFilters",
"DescribeTransitGatewayMulticastDomains",
"DescribeVolumes",
"DescribeVolumesModifications",
"DescribeVpcEndpointConnectionNotifications",
"DescribeVpcs",
"GetAccount",
"GetAccountAuthorizationDetails",
"GetAccountSendingEnabled",
"GetBucketAcl",
"GetBucketLogging",
"GetBucketPolicy",
"GetBucketReplication",
"GetBucketVersioning",
"GetCallerIdentity",
"GetCertificate",
"GetConsoleScreenshot",
"GetCostAndUsage",
"GetDetector",
"GetEbsDefaultKmsKeyId",
"GetEbsEncryptionByDefault",
"GetFindings",
"GetFlowLogsIntegrationTemplate",
"GetIdentityVerificationAttributes",
"GetInstances",
"GetIntrospectionSchema",
"GetLaunchTemplateData",
"GetLaunchTemplateData",
"GetLogRecord",
"GetParameters",
"GetPolicyVersion",
"GetPublicAccessBlock",
"GetQueryResults",
"GetRegions",
"GetSMSAttributes",
"GetSMSSandboxAccountStatus",
"GetSendQuota",
"GetTransitGatewayRouteTableAssociations",
"GetUserPolicy",
"HeadObject",
"ListAccessKeys",
"ListAccounts",
"ListAllMyBuckets",
"ListAssociatedAccessPolicies",
"ListAttachedUserPolicies",
"ListClusters",
"ListDetectors",
"ListDomains",
"ListFindings",
"ListHostedZones",
"ListIPSets",
"ListIdentities",
"ListInstanceProfiles",
"ListObjects",
"ListOrganizationalUnitsForParent",
"ListOriginationNumbers",
"ListPolicyVersions",
"ListRoles",
"ListRoles",
"ListRules",
"ListServiceQuotas",
"ListSubscriptions",
"ListTargetsByRule",
"ListTopics",
"ListUsers",
"LookupEvents",
"Search",
]
# Azure Configuration
azure:
# Azure Network Configuration
# azure.network_public_ip_shodan
# TODO: create common config
shodan_api_key: null
# Azure App Service
# Azure App Configuration
# azure.app_ensure_php_version_is_latest
php_latest_version: "8.2"
# azure.app_ensure_python_version_is_latest
@@ -389,34 +326,4 @@ gcp:
# gcp.compute_public_address_shodan
shodan_api_key: null
# Kubernetes Configuration
kubernetes:
# Kubernetes API Server
# kubernetes.apiserver_audit_log_maxbackup_set
audit_log_maxbackup: 10
# kubernetes.apiserver_audit_log_maxsize_set
audit_log_maxsize: 100
# kubernetes.apiserver_audit_log_maxage_set
audit_log_maxage: 30
# kubernetes.apiserver_strong_ciphers_only
apiserver_strong_ciphers:
[
"TLS_AES_128_GCM_SHA256",
"TLS_AES_256_GCM_SHA384",
"TLS_CHACHA20_POLY1305_SHA256",
]
# Kubelet
# kubernetes.kubelet_strong_ciphers_only
kubelet_strong_ciphers:
[
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"TLS_RSA_WITH_AES_256_GCM_SHA384",
"TLS_RSA_WITH_AES_128_GCM_SHA256",
]
```
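As a usage sketch, a custom copy of the configuration shown above can be passed to a scan with the `--config-file` flag (the path is a placeholder):

```console
prowler aws --config-file ./custom_config.yaml
```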

View File

@@ -11,18 +11,6 @@ You can utilize `--custom-checks-metadata-file` followed by the path to your cus
The list of supported check metadata fields that can be overridden is as follows:
- Severity
- CheckTitle
- Risk
- RelatedUrl
- Remediation
    - Code
        - CLI
        - NativeIaC
        - Other
        - Terraform
    - Recommendation
        - Text
        - Url
## File Syntax
@@ -33,85 +21,20 @@ CustomChecksMetadata:
Checks:
s3_bucket_level_public_access_block:
Severity: high
CheckTitle: S3 Bucket Level Public Access Block
Description: This check ensures that the S3 bucket level public access block is enabled.
Risk: This check is important because it ensures that the S3 bucket level public access block is enabled.
RelatedUrl: https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html
Remediation:
Code:
CLI: aws s3api put-public-access-block --bucket <bucket-name> --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
NativeIaC: https://aws.amazon.com/es/s3/features/block-public-access/
Other: https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html
Terraform: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_public_access_block
Recommendation:
Text: Enable the S3 bucket level public access block.
Url: https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html
s3_bucket_no_mfa_delete:
Severity: high
CheckTitle: S3 Bucket No MFA Delete
Description: This check ensures that the S3 bucket does not allow delete operations without MFA.
Risk: This check is important because it ensures that the S3 bucket does not allow delete operations without MFA.
RelatedUrl: https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
Remediation:
Code:
CLI: aws s3api put-bucket-versioning --bucket <bucket-name> --versioning-configuration Status=Enabled,MFADelete=Enabled
NativeIaC: https://aws.amazon.com/es/s3/features/versioning/
Other: https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
Terraform: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_versioning
Recommendation:
Text: Enable versioning on the S3 bucket.
Url: https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
azure:
Checks:
storage_infrastructure_encryption_is_enabled:
Severity: medium
CheckTitle: Storage Infrastructure Encryption Is Enabled
Description: This check ensures that storage infrastructure encryption is enabled.
Risk: This check is important because it ensures that storage infrastructure encryption is enabled.
RelatedUrl: https://docs.microsoft.com/en-us/azure/storage/common/storage-service-encryption
Remediation:
Code:
CLI: az storage account update --name <storage-account-name> --resource-group <resource-group-name> --set properties.encryption.services.blob.enabled=true properties.encryption.services.file.enabled=true properties.encryption.services.queue.enabled=true properties.encryption.services.table.enabled=true
NativeIaC: https://docs.microsoft.com/en-us/azure/templates/microsoft.storage/storageaccounts
Other: https://docs.microsoft.com/en-us/azure/storage/common/storage-service-encryption
Terraform: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_account
Recommendation:
Text: Enable storage infrastructure encryption.
Url: https://docs.microsoft.com/en-us/azure/storage/common/storage-service-encryption
gcp:
Checks:
compute_instance_public_ip:
Severity: critical
CheckTitle: Compute Instance Public IP
Description: This check ensures that the compute instance does not have a public IP.
Risk: This check is important because it ensures that the compute instance does not have a public IP.
RelatedUrl: https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address
Remediation:
Code:
CLI: https://docs.prowler.com/checks/gcp/google-cloud-public-policies/bc_gcp_public_2#cli-command
NativeIaC: https://cloud.google.com/compute/docs/reference/rest/v1/instances
Other: https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address
Terraform: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance
Recommendation:
Text: Remove the public IP from the compute instance.
Url: https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address
kubernetes:
Checks:
apiserver_anonymous_requests:
Severity: low
CheckTitle: APIServer Anonymous Requests
Description: This check ensures that anonymous requests to the APIServer are disabled.
Risk: This check is important because it ensures that anonymous requests to the APIServer are disabled.
RelatedUrl: https://kubernetes.io/docs/reference/access-authn-authz/authentication/
Remediation:
Code:
CLI: --anonymous-auth=false
NativeIaC: https://docs.prowler.com/checks/kubernetes/kubernetes-policy-index/ensure-that-the-anonymous-auth-argument-is-set-to-false-1#kubernetes
Other: https://kubernetes.io/docs/reference/access-authn-authz/authentication/
Terraform: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/cluster_role_binding
Recommendation:
Text: Disable anonymous requests to the APIServer.
Url: https://kubernetes.io/docs/reference/access-authn-authz/authentication/
```
## Usage

View File

@@ -6,7 +6,7 @@ prowler dashboard
```
???+ note
    You can expose the `dashboard` server on another address using the `HOST` environment variable.
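For example, a minimal sketch of exposing the dashboard on all interfaces (only do this on a trusted network, see the warning below):

```sh
HOST=0.0.0.0 prowler dashboard
```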
To run Prowler local dashboard with Docker, use:
```sh
@@ -15,7 +15,7 @@ docker run --env HOST=0.0.0.0 --publish 127.0.0.1:11666:11666 toniblyx/prowler:l
???+ note
    **Remember that the `dashboard` server is not authenticated; if you expose it to the internet, you are running it at your own risk.**
The banner and additional info about the dashboard will be shown on your console:
<img src="../img/dashboard/dashboard-banner.png">
@@ -27,20 +27,10 @@ The overview page provides a full impression of your findings obtained from Prow
On this page you can perform multiple actions:
* Apply filters:
    * Assessment Date
* Account
* Region
* Severity
* Service
* Status
* Apply filters (Assessment Date / Account / Region)
* See which files have been scanned to generate the dashboard by placing your mouse on the `?` icon:
<img src="../img/dashboard/dashboard-files-scanned.png">
* Download the `Top Findings by Severity` table using the button `DOWNLOAD THIS TABLE AS CSV` or `DOWNLOAD THIS TABLE AS XLSX`
* Click on the provider cards to filter by provider.
* In the dropdowns under `Top Findings by Severity` you can apply multiple sorts to the information; you will also get a detailed view of each finding using the dropdowns:
<img src="../img/dashboard/dropdown.png">
* Download the `Top 25 Failed Findings by Severity` table using the button `DOWNLOAD THIS TABLE AS CSV`
## Compliance Page
@@ -81,7 +71,7 @@ def get_table(data):
## S3 Integration
If you are using Prowler SaaS with the S3 integration, or the same integration from Prowler Open Source, and you want to use the data from your S3 bucket, you can run:
If you are a Prowler SaaS customer and you want to use the data from your S3 bucket, you can run:
```sh
aws s3 cp s3://<your-bucket>/output/csv ./output --recursive
@@ -94,9 +84,6 @@ Prowler will use the outputs from the folder `/output` (for common prowler outpu
To change the path, modify the values `folder_path_overview` or `folder_path_compliance` in `/dashboard/config.py`.
???+ note
    If you have any issue related to the dashboard, check that the output path from which the dashboard reads the outputs is correct.
## Output Support
The Prowler dashboard supports the following detailed outputs:

View File

@@ -1,4 +1,4 @@
# Prowler Fixer (remediation)
# Prowler Fixer
Prowler allows you to fix some of the failed findings it identifies. You can use the `--fixer` flag to run the fixes that are available for the checks that failed.
```sh
@@ -8,10 +8,10 @@ prowler <provider> -c <check_to_fix_1> <check_to_fix_2> ... --fixer
<img src="../img/fixer.png">
???+ note
    You can see all the available fixes for each provider with the `--list-remediations` or `--list-fixers` flag.
You can see all the available fixes for each provider with the `--list-fixers` flag.
```sh
prowler <provider> --list-fixers
prowler <provider> --list-fixer
```
## Writing a Fixer

View File

@@ -24,33 +24,3 @@ Prowler will follow the same credentials search as [Google authentication librar
3. [The attached service account, returned by the metadata server](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)
Those credentials must be associated with a user or service account that has the proper permissions to run all checks. To make sure of this, add the `Viewer` role to the member associated with the credentials.
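For example, one common way to provide those credentials from a workstation is Application Default Credentials via the gcloud CLI (a sketch, assuming gcloud is installed and the account has the `Viewer` role):

```console
gcloud auth application-default login
prowler gcp
```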
## Impersonate Service Account
If you want to impersonate a GCP service account, you can use the `--impersonate-service-account` argument:
```console
prowler gcp --impersonate-service-account <service-account-email>
```
This argument will use the default credentials to impersonate the service account provided.
## Service APIs
Prowler will use the Google Cloud APIs to get the information needed to perform the checks. Make sure that the following APIs are enabled in the project:
- apikeys.googleapis.com
- artifactregistry.googleapis.com
- bigquery.googleapis.com
- sqladmin.googleapis.com
- storage.googleapis.com
- compute.googleapis.com
- dataproc.googleapis.com
- dns.googleapis.com
- containerregistry.googleapis.com
- container.googleapis.com
- iam.googleapis.com
- cloudkms.googleapis.com
- logging.googleapis.com
You can enable them automatically using our script [enable_apis_in_projects.sh](https://github.com/prowler-cloud/prowler/blob/master/contrib/gcp/enable_apis_in_projects.sh)
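If you prefer to enable an API manually instead of using the script, a minimal sketch with the gcloud CLI (the project ID is a placeholder):

```console
gcloud services enable compute.googleapis.com --project <project-id>
```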

Some files were not shown because too many files have changed in this diff.