Mirror of https://github.com/prowler-cloud/prowler.git, synced 2026-05-13 15:50:55 +00:00.
Compare commits
298 Commits
| SHA1 | Author | Date | |
|---|---|---|---|
| ae08623b75 | |||
| ab727e6816 | |||
| 23d882d7ab | |||
| 59435167ea | |||
| 77cdd793f8 | |||
| d13f3f0e0c | |||
| 56821de2f4 | |||
| 92190fa69f | |||
| 85db7c5183 | |||
| a55ac266bf | |||
| 90622e0437 | |||
| 81596250dc | |||
| 43db5fe527 | |||
| dfb479fa80 | |||
| aa88b453ff | |||
| fbda66c6d1 | |||
| 2200e65519 | |||
| b8537aa22d | |||
| cb4a5dec79 | |||
| 0286de7ce2 | |||
| b00602f109 | |||
| 1cfae546a0 | |||
| 05dae4e8d1 | |||
| 52ddaca4c5 | |||
| 940a1202b3 | |||
| ec27451199 | |||
| 60e06dcc6e | |||
| 7733aab088 | |||
| 5c6fadcfe7 | |||
| 1bdb314e2c | |||
| 5b0365947f | |||
| b512f6c421 | |||
| c4a8771647 | |||
| 6f967c6da7 | |||
| 82cd29d595 | |||
| 14c2334e1b | |||
| 3598514cb4 | |||
| c4ba061f30 | |||
| f4530b21d2 | |||
| 3949ab736d | |||
| 9da5066b18 | |||
| 941539616c | |||
| 135fa044b7 | |||
| 48913c1886 | |||
| ea20943f83 | |||
| 2738cfd1bd | |||
| 265c3d818e | |||
| c0a9fdf8c8 | |||
| 8b3335f426 | |||
| 252033d113 | |||
| 0bc00dbca4 | |||
| 3f5178bffb | |||
| e08b272a1d | |||
| 64c43a288d | |||
| 74bf0e6b47 | |||
| 02b7c5328f | |||
| bb02004e7c | |||
| 82cf216a74 | |||
| 7916425ed4 | |||
| d98063ed47 | |||
| 27bf78a3a1 | |||
| f50bd50d60 | |||
| 80665e0396 | |||
| 4b259fa8dd | |||
| 10db2ed6d8 | |||
| 422a8a0f62 | |||
| 906a2cc651 | |||
| 43fe9c6860 | |||
| f87b2089fb | |||
| 1884874ab6 | |||
| cd6d29e176 | |||
| 0b7055e983 | |||
| ae53b76d78 | |||
| 406e473b5c | |||
| 1a2bf461f0 | |||
| 1b49c0b27f | |||
| 12ada66978 | |||
| daa2536005 | |||
| 69a62db19a | |||
| 79450d6977 | |||
| 0463fd0830 | |||
| b15e3d339c | |||
| 1fc12952ba | |||
| 088a6bcbda | |||
| a3b0bb6d4b | |||
| 3c819f8875 | |||
| cdf0292bbc | |||
| 987121051b | |||
| c9ed7773d2 | |||
| fdf45aac51 | |||
| 3ded224a4b | |||
| 230a085c76 | |||
| 8cd90e07dc | |||
| 06ded98d05 | |||
| a5066326bd | |||
| 83a9ac2109 | |||
| 136eb4facd | |||
| d4eb4bdca7 | |||
| 665c9d878a | |||
| a064e43302 | |||
| fdb76e7820 | |||
| 1259bb85e3 | |||
| 0db9ab91b2 | |||
| f6ea314ec0 | |||
| 9e02da342b | |||
| 358d4239c7 | |||
| b003fca377 | |||
| b4deda3c3f | |||
| 338bb74c0c | |||
| 7342a8901f | |||
| f484b83f15 | |||
| c69187f484 | |||
| 5038afeb26 | |||
| fce43cea16 | |||
| 43a14b89bc | |||
| 24364bd73e | |||
| a1abe6dd2d | |||
| 25098bc82a | |||
| 20f2f45610 | |||
| 06c2608a05 | |||
| 329ac113f2 | |||
| 97179d2b43 | |||
| 8317ea783f | |||
| 65e7e89d61 | |||
| 26a4dd4e8d | |||
| dab0cea2dd | |||
| 3b42eb3818 | |||
| a5ba950627 | |||
| a1232446c1 | |||
| aa6f851887 | |||
| 25f972e910 | |||
| 7216e5ce3d | |||
| 83242da0ab | |||
| d457166a0c | |||
| 88f38b2d2a | |||
| c2e0849d5f | |||
| 1fdebfa295 | |||
| ea6d04ed3a | |||
| 2167683851 | |||
| 6324be31ab | |||
| 525f152e51 | |||
| c3a2d79234 | |||
| cefa708322 | |||
| 1a9e14ab2a | |||
| b1c6094b6d | |||
| 1038b11fe3 | |||
| d54e3b25db | |||
| 6a8e8750bb | |||
| ad3d4536fb | |||
| 46c24055ee | |||
| 4c6a1592ac | |||
| 89e657561c | |||
| 55099abc86 | |||
| 3c599a75cc | |||
| f77897f813 | |||
| 30518f2e0e | |||
| efdeb431ba | |||
| bb07cf9147 | |||
| 9214b5c26f | |||
| d57df3cc28 | |||
| 2f5fce41dc | |||
| 6918a75449 | |||
| 3aeaa3d992 | |||
| fd833eecf0 | |||
| 39e4d20b24 | |||
| dfdd45e4d0 | |||
| 81478dfed3 | |||
| 2854f8405c | |||
| 0e1578cfbc | |||
| f5b1532647 | |||
| d9f3a6b88e | |||
| b0c386fc60 | |||
| 72b06261df | |||
| 1562b77581 | |||
| 10e38ca407 | |||
| 5842f2df37 | |||
| 8b3b9ffd99 | |||
| d238050065 | |||
| 5572d476ad | |||
| 3c94d3a56f | |||
| 85af4ff77c | |||
| dcee114ef3 | |||
| 760723874c | |||
| c0a4898074 | |||
| 03c0533b58 | |||
| c8dcb0edb0 | |||
| 82171ee916 | |||
| df4bf18b97 | |||
| 94e60f7329 | |||
| f1ba5abbec | |||
| 6cc1a9a2cb | |||
| 31f98092bf | |||
| 85197036ca | |||
| be43025f00 | |||
| c6b34f0a85 | |||
| 675698a26a | |||
| 8d9bf2384f | |||
| ff900a2a45 | |||
| a41663fb0d | |||
| 033e9fd58c | |||
| 240b02b498 | |||
| 87eb2dfdf7 | |||
| b4d8d64f0e | |||
| 7944ebe83a | |||
| bd138114c9 | |||
| d527a3f12b | |||
| 260fada3eb | |||
| 0ee0fc082a | |||
| 9d66d86f66 | |||
| 825e53c38f | |||
| 196c17d44d | |||
| fc69e195e4 | |||
| 5f53a9ec6f | |||
| 5e72a40898 | |||
| 496ada3cba | |||
| 481a43f3f6 | |||
| 58298706d4 | |||
| e75a760da0 | |||
| c313757ef2 | |||
| 284678fe48 | |||
| c3d25e6f39 | |||
| a9d16bbbce | |||
| 92bc992e7f | |||
| 903e4f8b9f | |||
| 2c09076f91 | |||
| 3d4902b057 | |||
| b30eab7935 | |||
| cf8402e013 | |||
| af8fbaf2cd | |||
| c748e57878 | |||
| a5187c6a42 | |||
| e19ed30ac7 | |||
| 96ce1461b9 | |||
| 9da5fb67c3 | |||
| eb1c1791e4 | |||
| 581afd38e6 | |||
| 19a735aafe | |||
| 2170fbb1ab | |||
| 90c6c6b98d | |||
| 02b416b4f8 | |||
| 1022b5e413 | |||
| d1bad9d9ab | |||
| 178f3850be | |||
| d239d299e2 | |||
| 88fae9ecae | |||
| a3bff9705c | |||
| 75989b09d7 | |||
| 9a622f60fe | |||
| 7cd1966066 | |||
| 77e59203ae | |||
| 0a449c7e13 | |||
| 163fbaff19 | |||
| 7ec514d9dd | |||
| b63f70ac82 | |||
| 2c86b3a990 | |||
| 12443f7cbb | |||
| 3a8c635b75 | |||
| 8bc6e8b7ab | |||
| 9ca1899ebf | |||
| 1bdcf2c7f1 | |||
| 92a804bf88 | |||
| f85ad9a7a2 | |||
| 308c778bad | |||
| ee06d3a68a | |||
| 8dc4bd0be8 | |||
| bf9e38dc5c | |||
| a85b89ffb5 | |||
| 87da11b712 | |||
| 8b57f178e0 | |||
| 7830ed8b9f | |||
| d4e66c4a6f | |||
| 1cfe610d47 | |||
| d9a9236ab7 | |||
| 285aea3458 | |||
| b051aeeb64 | |||
| b99dce6a43 | |||
| 04749c1da1 | |||
| 44d70f8467 | |||
| 95791a9909 | |||
| ad0b8a4208 | |||
| 5669a42039 | |||
| 83b328ea92 | |||
| a6c88c0d9e | |||
| 922f9d2f91 | |||
| a69d0d16c0 | |||
| 676cc44fe2 | |||
| 3840e40870 | |||
| ab2d57554a | |||
| cbb5b21e6c | |||
| 1efd5668ce | |||
| ca86aeb1d7 | |||
| 4f2a8b71bb | |||
| 3b0cb3db85 | |||
| 00c527ff79 | |||
| ab348d5752 | |||
| dd713351dc | |||
| fa722f1dc7 | |||
| b0cc3978d0 |
```
@@ -10,6 +10,8 @@ NEXT_PUBLIC_API_BASE_URL=${API_BASE_URL}
NEXT_PUBLIC_API_DOCS_URL=http://prowler-api:8080/api/v1/docs
AUTH_TRUST_HOST=true
UI_PORT=3000
# Temp URL for feeds need to use actual
RSS_FEED_URL=https://prowler.com/blog/rss
# openssl rand -base64 32
AUTH_SECRET="N/c6mnaS5+SWq81+819OrzQZlmx1Vxtp/orjttJSmw8="
# Google Tag Manager ID
@@ -83,55 +85,21 @@ DJANGO_CACHE_MAX_AGE=3600
DJANGO_STALE_WHILE_REVALIDATE=60
DJANGO_MANAGE_DB_PARTITIONS=True
# openssl genrsa -out private.pem 2048
DJANGO_TOKEN_SIGNING_KEY="-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDs4e+kt7SnUJek
6V5r9zMGzXCoU5qnChfPiqu+BgANyawz+MyVZPs6RCRfeo6tlCknPQtOziyXYM2I
7X+qckmuzsjqp8+u+o1mw3VvUuJew5k2SQLPYwsiTzuFNVJEOgRo3hywGiGwS2iv
/5nh2QAl7fq2qLqZEXQa5+/xJlQggS1CYxOJgggvLyra50QZlBvPve/AxKJ/EV/Q
irWTZU5lLNI8sH2iZR05vQeBsxZ0dCnGMT+vGl+cGkqrvzQzKsYbDmabMcfTYhYi
78fpv6A4uharJFHayypYBjE39PwhMyyeycrNXlpm1jpq+03HgmDuDMHydk1tNwuT
nEC7m7iNAgMBAAECggEAA2m48nJcJbn9SVi8bclMwKkWmbJErOnyEGEy2sTK3Of+
NWx9BB0FmqAPNxn0ss8K7cANKOhDD7ZLF9E2MO4/HgfoMKtUzHRbM7MWvtEepldi
nnvcUMEgULD8Dk4HnqiIVjt3BdmGiTv46OpBnRWrkSBV56pUL+7msZmMZTjUZvh2
ZWv0+I3gtDIjo2Zo/FiwDV7CfwRjJarRpYUj/0YyuSA4FuOUYl41WAX1I301FKMH
xo3jiAYi1s7IneJ16OtPpOA34Wg5F6ebm/UO0uNe+iD4kCXKaZmxYQPh5tfB0Qa3
qj1T7GNpFNyvtG7VVdauhkb8iu8X/wl6PCwbg0RCKQKBgQD9HfpnpH0lDlHMRw9K
X7Vby/1fSYy1BQtlXFEIPTN/btJ/asGxLmAVwJ2HAPXWlrfSjVAH7CtVmzN7v8oj
HeIHfeSgoWEu1syvnv2AMaYSo03UjFFlfc/GUxF7DUScRIhcJUPCP8jkAROz9nFv
DByNjUL17Q9r43DmDiRsy0IFqQKBgQDvlJ9Uhl+Sp7gRgKYwa/IG0+I4AduAM+Gz
Dxbm52QrMGMTjaJFLmLHBUZ/ot+pge7tZZGws8YR8ufpyMJbMqPjxhIvRRa/p1Tf
E3TQPW93FMsHUvxAgY3MV5MzXFPhlNAKb+akP/RcXUhetGAuZKLubtDCWa55ZQuL
wj2OS+niRQKBgE7K8zUqNi6/22S8xhy/2GPgB1qPObbsABUofK0U6CAGLo6te+gc
6Jo84IyzFtQbDNQFW2Fr+j1m18rw9AqkdcUhQndiZS9AfG07D+zFB86LeWHt4DS4
ymIRX8Kvaak/iDcu/n3Mf0vCrhB6aetImObTj4GgrwlFvtJOmrYnO8EpAoGAIXXP
Xt25gWD9OyyNiVu6HKwA/zN7NYeJcRmdaDhO7B1A6R0x2Zml4AfjlbXoqOLlvLAf
zd79vcoAC82nH1eOPiSOq51plPDI0LMF8IN0CtyTkn1Lj7LIXA6rF1RAvtOqzppc
SvpHpZK9pcRpXnFdtBE0BMDDtl6fYzCIqlP94UUCgYEAnhXbAQMF7LQifEm34Dx8
BizRMOKcqJGPvbO2+Iyt50O5X6onU2ITzSV1QHtOvAazu+B1aG9pEuBFDQ+ASxEu
L9ruJElkOkb/o45TSF6KCsHd55ReTZ8AqnRjf5R+lyzPqTZCXXb8KTcRvWT4zQa3
VxyT2PnaSqEcexWUy4+UXoQ=
-----END PRIVATE KEY-----"
DJANGO_TOKEN_SIGNING_KEY=""
# openssl rsa -in private.pem -pubout -out public.pem
DJANGO_TOKEN_VERIFYING_KEY="-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA7OHvpLe0p1CXpOlea/cz
Bs1wqFOapwoXz4qrvgYADcmsM/jMlWT7OkQkX3qOrZQpJz0LTs4sl2DNiO1/qnJJ
rs7I6qfPrvqNZsN1b1LiXsOZNkkCz2MLIk87hTVSRDoEaN4csBohsEtor/+Z4dkA
Je36tqi6mRF0Gufv8SZUIIEtQmMTiYIILy8q2udEGZQbz73vwMSifxFf0Iq1k2VO
ZSzSPLB9omUdOb0HgbMWdHQpxjE/rxpfnBpKq780MyrGGw5mmzHH02IWIu/H6b+g
OLoWqyRR2ssqWAYxN/T8ITMsnsnKzV5aZtY6avtNx4Jg7gzB8nZNbTcLk5xAu5u4
jQIDAQAB
-----END PUBLIC KEY-----"
DJANGO_TOKEN_VERIFYING_KEY=""
# openssl rand -base64 32
DJANGO_SECRETS_ENCRYPTION_KEY="oE/ltOhp/n1TdbHjVmzcjDPLcLA41CVI/4Rk+UB5ESc="
DJANGO_BROKER_VISIBILITY_TIMEOUT=86400
DJANGO_SENTRY_DSN=
DJANGO_THROTTLE_TOKEN_OBTAIN=50/minute

# Sentry settings
SENTRY_ENVIRONMENT=local
SENTRY_RELEASE=local

#### Prowler release version ####
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.7.5
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.10.0

# Social login credentials
SOCIAL_GOOGLE_OAUTH_CALLBACK_URL="${AUTH_URL}/api/auth/callback/google"
```

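The hunk above removes the RSA key pair that had been committed in the `.env` template, leaving `DJANGO_TOKEN_SIGNING_KEY` and `DJANGO_TOKEN_VERIFYING_KEY` empty and keeping only the `openssl` hints as comments. A minimal sketch of regenerating those values, assuming a POSIX shell with OpenSSL installed (file names are illustrative):

```bash
# Generate a fresh RSA key pair, following the hints kept in the .env comments
openssl genrsa -out private.pem 2048                 # -> DJANGO_TOKEN_SIGNING_KEY
openssl rsa -in private.pem -pubout -out public.pem  # -> DJANGO_TOKEN_VERIFYING_KEY

# Random 32-byte secrets for AUTH_SECRET / DJANGO_SECRETS_ENCRYPTION_KEY
openssl rand -base64 32

# Quote the multi-line PEM when pasting it back into .env
printf 'DJANGO_TOKEN_SIGNING_KEY="%s"\n' "$(cat private.pem)"
```
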
```
@@ -22,6 +22,11 @@ provider/kubernetes:
- any-glob-to-any-file: "prowler/providers/kubernetes/**"
- any-glob-to-any-file: "tests/providers/kubernetes/**"

provider/m365:
- changed-files:
- any-glob-to-any-file: "prowler/providers/m365/**"
- any-glob-to-any-file: "tests/providers/m365/**"

provider/github:
- changed-files:
- any-glob-to-any-file: "prowler/providers/github/**"
@@ -32,6 +37,11 @@ provider/iac:
- any-glob-to-any-file: "prowler/providers/iac/**"
- any-glob-to-any-file: "tests/providers/iac/**"

provider/mongodbatlas:
- changed-files:
- any-glob-to-any-file: "prowler/providers/mongodbatlas/**"
- any-glob-to-any-file: "tests/providers/mongodbatlas/**"

github_actions:
- changed-files:
- any-glob-to-any-file: ".github/workflows/*"
@@ -47,11 +57,13 @@ mutelist:
- any-glob-to-any-file: "prowler/providers/azure/lib/mutelist/**"
- any-glob-to-any-file: "prowler/providers/gcp/lib/mutelist/**"
- any-glob-to-any-file: "prowler/providers/kubernetes/lib/mutelist/**"
- any-glob-to-any-file: "prowler/providers/mongodbatlas/lib/mutelist/**"
- any-glob-to-any-file: "tests/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/aws/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/azure/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/gcp/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/kubernetes/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/mongodbatlas/lib/mutelist/**"

integration/s3:
- changed-files:
@@ -98,6 +110,10 @@ component/ui:
- changed-files:
- any-glob-to-any-file: "ui/**"

component/mcp-server:
- changed-files:
- any-glob-to-any-file: "mcp_server/**"

compliance:
- changed-files:
- any-glob-to-any-file: "prowler/compliance/**"
@@ -107,3 +123,7 @@ compliance:
review-django-migrations:
- changed-files:
- any-glob-to-any-file: "api/src/backend/api/migrations/**"

metadata-review:
- changed-files:
- any-glob-to-any-file: "**/*.metadata.json"
```

```
@@ -8,6 +8,10 @@ If fixes an issue please add it with `Fix #XXXX`

Please include a summary of the change and which issue is fixed. List any dependencies that are required for this change.

### Steps to review

Please add a detailed description of how to review this PR.

### Checklist

- Are there new checks included in this PR? Yes / No
```

```
@@ -48,12 +48,12 @@ jobs:

# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
uses: github/codeql-action/init@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/api-codeql-config.yml

- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
uses: github/codeql-action/analyze@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
category: "/language:${{matrix.language}}"
```

```
@@ -13,6 +13,7 @@ on:
- "master"
- "v5.*"
paths:
- ".github/workflows/api-pull-request.yml"
- "api/**"

env:
@@ -81,7 +82,9 @@ jobs:
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: api/**
files: |
api/**
.github/workflows/api-pull-request.yml
files_ignore: ${{ env.IGNORE_FILES }}

- name: Replace @master with current branch in pyproject.toml
@@ -99,7 +102,24 @@ jobs:
python -m pip install --upgrade pip
pipx install poetry==2.1.1

- name: Update poetry.lock after the branch name change
- name: Update SDK's poetry.lock resolved_reference to latest commit - Only for push events to `master`
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true' && github.event_name == 'push'
run: |
# Get the latest commit hash from the prowler-cloud/prowler repository
LATEST_COMMIT=$(curl -s "https://api.github.com/repos/prowler-cloud/prowler/commits/master" | jq -r '.sha')
echo "Latest commit hash: $LATEST_COMMIT"

# Update the resolved_reference specifically for prowler-cloud/prowler repository
sed -i '/url = "https:\/\/github\.com\/prowler-cloud\/prowler\.git"/,/resolved_reference = / {
s/resolved_reference = "[a-f0-9]\{40\}"/resolved_reference = "'"$LATEST_COMMIT"'"/
}' poetry.lock

# Verify the change was made
echo "Updated resolved_reference:"
grep -A2 -B2 "resolved_reference" poetry.lock

- name: Update poetry.lock
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
@@ -164,8 +184,9 @@ jobs:
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
# 76352, 76353, 77323 come from SDK, but they cannot upgrade it yet. It does not affect API
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
run: |
poetry run safety check --ignore 70612,66963,74429,76352,76353,77323
poetry run safety check --ignore 70612,66963,74429,76352,76353,77323,77744,77745

- name: Vulture
working-directory: ./api
```

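The `sed` invocation in the new step above uses an address range so that only the `prowler-cloud/prowler` entry in `poetry.lock` is rewritten, not every `resolved_reference` in the file. A toy reproduction of the technique, assuming GNU sed (the file contents below are illustrative):

```bash
# Two dependency blocks; only the prowler one should be touched
cat > lock-excerpt.txt <<'EOF'
url = "https://github.com/other/dependency.git"
resolved_reference = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
url = "https://github.com/prowler-cloud/prowler.git"
resolved_reference = "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
EOF

NEW_SHA="cccccccccccccccccccccccccccccccccccccccc"
# The /url.../,/resolved_reference/ range restricts the substitution to the
# lines between the matching url line and the next resolved_reference line
sed -i '/url = "https:\/\/github\.com\/prowler-cloud\/prowler\.git"/,/resolved_reference = / {
s/resolved_reference = "[a-f0-9]\{40\}"/resolved_reference = "'"$NEW_SHA"'"/
}' lock-excerpt.txt

grep resolved_reference lock-excerpt.txt  # only the prowler entry changed
```
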
```
@@ -7,6 +7,7 @@ on:
- 'v3'
paths:
- 'docs/**'
- '.github/workflows/build-documentation-on-pr.yml'

env:
PR_NUMBER: ${{ github.event.pull_request.number }}
@@ -16,9 +17,20 @@ jobs:
name: Documentation Link
runs-on: ubuntu-latest
steps:
- name: Leave PR comment with the Prowler Documentation URI
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
- name: Find existing documentation comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
id: find-comment
with:
issue-number: ${{ env.PR_NUMBER }}
comment-author: 'github-actions[bot]'
body-includes: '<!-- prowler-docs-link -->'

- name: Create or update PR comment with the Prowler Documentation URI
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ env.PR_NUMBER }}
body: |
<!-- prowler-docs-link -->
You can check the documentation for this PR here -> [Prowler Documentation](https://prowler-prowler-docs--${{ env.PR_NUMBER }}.com.readthedocs.build/projects/prowler-open-source/en/${{ env.PR_NUMBER }}/)
edit-mode: replace
```

```
@@ -1,4 +1,4 @@
name: Create Backport Label
name: Prowler - Create Backport Label

on:
release:
```

```
@@ -11,7 +11,7 @@ jobs:
with:
fetch-depth: 0
- name: TruffleHog OSS
uses: trufflesecurity/trufflehog@6641d4ba5b684fffe195b9820345de1bf19f3181 # v3.89.2
uses: trufflesecurity/trufflehog@a05cf0859455b5b16317ee22d809887a4043cdf0 # v3.90.2
with:
path: ./
base: ${{ github.event.repository.default_branch }}
```

@@ -0,0 +1,168 @@
````yaml
name: Prowler - PR Conflict Checker

on:
  pull_request:
    types:
      - opened
      - synchronize
      - reopened
    branches:
      - "master"
      - "v5.*"
  pull_request_target:
    types:
      - opened
      - synchronize
      - reopened
    branches:
      - "master"
      - "v5.*"

jobs:
  conflict-checker:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      issues: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
        with:
          files: |
            **

      - name: Check for conflict markers
        id: conflict-check
        run: |
          echo "Checking for conflict markers in changed files..."

          CONFLICT_FILES=""
          HAS_CONFLICTS=false

          # Check each changed file for conflict markers
          for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
            if [ -f "$file" ]; then
              echo "Checking file: $file"

              # Look for conflict markers
              if grep -l "^<<<<<<<\|^=======\|^>>>>>>>" "$file" 2>/dev/null; then
                echo "Conflict markers found in: $file"
                CONFLICT_FILES="$CONFLICT_FILES$file "
                HAS_CONFLICTS=true
              fi
            fi
          done

          if [ "$HAS_CONFLICTS" = true ]; then
            echo "has_conflicts=true" >> $GITHUB_OUTPUT
            echo "conflict_files=$CONFLICT_FILES" >> $GITHUB_OUTPUT
            echo "Conflict markers detected in files: $CONFLICT_FILES"
          else
            echo "has_conflicts=false" >> $GITHUB_OUTPUT
            echo "No conflict markers found in changed files"
          fi

      - name: Add conflict label
        if: steps.conflict-check.outputs.has_conflicts == 'true'
        uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
        with:
          github-token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
          script: |
            const { data: labels } = await github.rest.issues.listLabelsOnIssue({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
            });

            const hasConflictLabel = labels.some(label => label.name === 'has-conflicts');

            if (!hasConflictLabel) {
              await github.rest.issues.addLabels({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: context.issue.number,
                labels: ['has-conflicts']
              });
              console.log('Added has-conflicts label');
            } else {
              console.log('has-conflicts label already exists');
            }

      - name: Remove conflict label
        if: steps.conflict-check.outputs.has_conflicts == 'false'
        uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
        with:
          github-token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
          script: |
            try {
              await github.rest.issues.removeLabel({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: context.issue.number,
                name: 'has-conflicts'
              });
              console.log('Removed has-conflicts label');
            } catch (error) {
              if (error.status === 404) {
                console.log('has-conflicts label was not present');
              } else {
                throw error;
              }
            }

      - name: Find existing conflict comment
        if: steps.conflict-check.outputs.has_conflicts == 'true'
        uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
        id: find-comment
        with:
          issue-number: ${{ github.event.pull_request.number }}
          comment-author: 'github-actions[bot]'
          body-regex: '(⚠️ \*\*Conflict Markers Detected\*\*|✅ \*\*Conflict Markers Resolved\*\*)'

      - name: Create or update conflict comment
        if: steps.conflict-check.outputs.has_conflicts == 'true'
        uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
        with:
          comment-id: ${{ steps.find-comment.outputs.comment-id }}
          issue-number: ${{ github.event.pull_request.number }}
          edit-mode: replace
          body: |
            ⚠️ **Conflict Markers Detected**

            This pull request contains unresolved conflict markers in the following files:
            ```
            ${{ steps.conflict-check.outputs.conflict_files }}
            ```

            Please resolve these conflicts by:
            1. Locating the conflict markers: `<<<<<<<`, `=======`, and `>>>>>>>`
            2. Manually editing the files to resolve the conflicts
            3. Removing all conflict markers
            4. Committing and pushing the changes

      - name: Find existing conflict comment when resolved
        if: steps.conflict-check.outputs.has_conflicts == 'false'
        uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
        id: find-resolved-comment
        with:
          issue-number: ${{ github.event.pull_request.number }}
          comment-author: 'github-actions[bot]'
          body-regex: '(⚠️ \*\*Conflict Markers Detected\*\*|✅ \*\*Conflict Markers Resolved\*\*)'

      - name: Update comment when conflicts resolved
        if: steps.conflict-check.outputs.has_conflicts == 'false' && steps.find-resolved-comment.outputs.comment-id != ''
        uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
        with:
          comment-id: ${{ steps.find-resolved-comment.outputs.comment-id }}
          issue-number: ${{ github.event.pull_request.number }}
          edit-mode: replace
          body: |
            ✅ **Conflict Markers Resolved**

            All conflict markers have been successfully resolved in this pull request.
````

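The detection logic in this new workflow is a single `grep` over each changed file. A standalone reproduction of the check, assuming GNU grep in a POSIX shell (the file name is illustrative):

```bash
# A file with leftover merge conflict markers
cat > conflicted.txt <<'EOF'
<<<<<<< HEAD
ours
=======
theirs
>>>>>>> feature-branch
EOF

# Same pattern as the workflow step: -l prints the file name when any line
# starts with one of the three marker strings
grep -l "^<<<<<<<\|^=======\|^>>>>>>>" conflicted.txt   # prints: conflicted.txt
```

Note that `^=======` also matches any plain line of seven-plus equals signs (for example a setext-style heading underline), so the check can flag files that contain no real conflict.
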
```
@@ -1,4 +1,4 @@
name: Prowler Release Preparation
name: Prowler - Release Preparation

run-name: Prowler Release Preparation for ${{ inputs.prowler_version }}

@@ -25,6 +25,7 @@ jobs:
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}

- name: Set up Python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
@@ -36,6 +37,11 @@ jobs:
python3 -m pip install --user poetry
echo "$HOME/.local/bin" >> $GITHUB_PATH

- name: Configure Git
run: |
git config --global user.name "prowler-bot"
git config --global user.email "179230569+prowler-bot@users.noreply.github.com"

- name: Parse version and determine branch
run: |
# Validate version format (reusing pattern from sdk-bump-version.yml)
@@ -138,24 +144,34 @@ jobs:
fi
echo "✓ api/src/backend/api/v1/views.py version: $CURRENT_API_VERSION"

- name: Create release branch for minor release
- name: Checkout existing release branch for minor release
if: ${{ env.PATCH_VERSION == '0' }}
run: |
echo "Minor release detected (patch = 0), creating new branch $BRANCH_NAME..."
if git show-ref --verify --quiet "refs/heads/$BRANCH_NAME" || git show-ref --verify --quiet "refs/remotes/origin/$BRANCH_NAME"; then
echo "ERROR: Branch $BRANCH_NAME already exists for minor release $PROWLER_VERSION"
echo "Minor release detected (patch = 0), checking out existing branch $BRANCH_NAME..."
if git show-ref --verify --quiet "refs/remotes/origin/$BRANCH_NAME"; then
echo "Branch $BRANCH_NAME exists remotely, checking out..."
git checkout -b "$BRANCH_NAME" "origin/$BRANCH_NAME"
else
echo "ERROR: Branch $BRANCH_NAME should exist for minor release $PROWLER_VERSION. Please create it manually first."
exit 1
fi
git checkout -b "$BRANCH_NAME"

- name: Update prowler dependency in api/pyproject.toml
- name: Prepare prowler dependency update for minor release
if: ${{ env.PATCH_VERSION == '0' }}
run: |
CURRENT_PROWLER_REF=$(grep 'prowler @ git+https://github.com/prowler-cloud/prowler.git@' api/pyproject.toml | sed -E 's/.*@([^"]+)".*/\1/' | tr -d '[:space:]')
BRANCH_NAME_TRIMMED=$(echo "$BRANCH_NAME" | tr -d '[:space:]')

# Minor release: update the dependency to use the new branch
echo "Minor release detected - updating prowler dependency from '$CURRENT_PROWLER_REF' to '$BRANCH_NAME_TRIMMED'"
# Create a temporary branch for the PR
TEMP_BRANCH="update-api-dependency-$BRANCH_NAME_TRIMMED-$(date +%s)"
echo "TEMP_BRANCH=$TEMP_BRANCH" >> $GITHUB_ENV

# Switch back to master and create temp branch
git checkout master
git checkout -b "$TEMP_BRANCH"

# Minor release: update the dependency to use the release branch
echo "Updating prowler dependency from '$CURRENT_PROWLER_REF' to '$BRANCH_NAME_TRIMMED'"
sed -i "s|prowler @ git+https://github.com/prowler-cloud/prowler.git@[^\"]*\"|prowler @ git+https://github.com/prowler-cloud/prowler.git@$BRANCH_NAME_TRIMMED\"|" api/pyproject.toml

# Verify the change was made
@@ -168,15 +184,42 @@ jobs:
# Update poetry lock file
echo "Updating poetry.lock file..."
cd api
poetry lock --no-update
poetry lock
cd ..

# Commit and push the changes
# Commit and push the temporary branch
git add api/pyproject.toml api/poetry.lock
git commit -m "chore(api): update prowler dependency to $BRANCH_NAME_TRIMMED for release $PROWLER_VERSION"
git push origin "$BRANCH_NAME"
git push origin "$TEMP_BRANCH"

echo "✓ api/pyproject.toml prowler dependency updated to: $UPDATED_PROWLER_REF"
echo "✓ Prepared prowler dependency update to: $UPDATED_PROWLER_REF"

- name: Create Pull Request against release branch
if: ${{ env.PATCH_VERSION == '0' }}
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
branch: ${{ env.TEMP_BRANCH }}
base: ${{ env.BRANCH_NAME }}
title: "chore(api): Update prowler dependency to ${{ env.BRANCH_NAME }} for release ${{ env.PROWLER_VERSION }}"
body: |
### Description

Updates the API prowler dependency for release ${{ env.PROWLER_VERSION }}.

**Changes:**
- Updates `api/pyproject.toml` prowler dependency from `@master` to `@${{ env.BRANCH_NAME }}`
- Updates `api/poetry.lock` file with resolved dependencies

This PR should be merged into the `${{ env.BRANCH_NAME }}` release branch.

### License

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
labels: |
component/api
no-changelog

- name: Extract changelog entries
run: |
```

```
@@ -1,4 +1,4 @@
name: Check Changelog
name: Prowler - Check Changelog

on:
pull_request:
@@ -13,7 +13,7 @@ jobs:
contents: read
pull-requests: write
env:
MONITORED_FOLDERS: "api ui prowler"
MONITORED_FOLDERS: "api ui prowler dashboard"

steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
@@ -55,29 +55,20 @@ jobs:
comment-author: 'github-actions[bot]'
body-includes: '<!-- changelog-check -->'

- name: Comment on PR if changelog is missing
if: github.event.pull_request.head.repo.full_name == github.repository && steps.check_folders.outputs.missing_changelogs != ''
- name: Update PR comment with changelog status
if: github.event.pull_request.head.repo.full_name == github.repository
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-id: ${{ steps.find_comment.outputs.comment-id }}
edit-mode: replace
body: |
<!-- changelog-check -->
⚠️ **Changes detected in the following folders without a corresponding update to the `CHANGELOG.md`:**
${{ steps.check_folders.outputs.missing_changelogs != '' && format('⚠️ **Changes detected in the following folders without a corresponding update to the `CHANGELOG.md`:**

${{ steps.check_folders.outputs.missing_changelogs }}
{0}

Please add an entry to the corresponding `CHANGELOG.md` file to maintain a clear history of changes.

- name: Comment on PR if all changelogs are present
if: github.event.pull_request.head.repo.full_name == github.repository && steps.check_folders.outputs.missing_changelogs == ''
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-id: ${{ steps.find_comment.outputs.comment-id }}
body: |
<!-- changelog-check -->
✅ All necessary `CHANGELOG.md` files have been updated. Great job! 🎉
Please add an entry to the corresponding `CHANGELOG.md` file to maintain a clear history of changes.', steps.check_folders.outputs.missing_changelogs) || '✅ All necessary `CHANGELOG.md` files have been updated. Great job! 🎉' }}

- name: Fail if changelog is missing
if: steps.check_folders.outputs.missing_changelogs != ''
```

```
@@ -27,11 +27,12 @@ jobs:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
event-type: prowler-pull-request-merged
client-payload: '{
"PROWLER_COMMIT_SHA": "${{ github.event.pull_request.merge_commit_sha }}",
"PROWLER_COMMIT_SHORT_SHA": "${{ env.SHORT_SHA }}",
"PROWLER_PR_TITLE": "${{ github.event.pull_request.title }}",
"PROWLER_PR_LABELS": ${{ toJson(github.event.pull_request.labels.*.name) }},
"PROWLER_PR_BODY": ${{ toJson(github.event.pull_request.body) }},
"PROWLER_PR_URL":${{ toJson(github.event.pull_request.html_url) }}
}'
client-payload: |
{
"PROWLER_COMMIT_SHA": "${{ github.event.pull_request.merge_commit_sha }}",
"PROWLER_COMMIT_SHORT_SHA": "${{ env.SHORT_SHA }}",
"PROWLER_PR_TITLE": ${{ toJson(github.event.pull_request.title) }},
"PROWLER_PR_LABELS": ${{ toJson(github.event.pull_request.labels.*.name) }},
"PROWLER_PR_BODY": ${{ toJson(github.event.pull_request.body) }},
"PROWLER_PR_URL": ${{ toJson(github.event.pull_request.html_url) }}
}
```

```
@@ -157,6 +157,22 @@ jobs:
cache-from: type=gha
cache-to: type=gha,mode=max

# - name: Push README to Docker Hub (toniblyx)
# uses: peter-evans/dockerhub-description@432a30c9e07499fd01da9f8a49f0faf9e0ca5b77 # v4.0.2
# with:
# username: ${{ secrets.DOCKERHUB_USERNAME }}
# password: ${{ secrets.DOCKERHUB_TOKEN }}
# repository: ${{ env.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}
# readme-filepath: ./README.md
#
# - name: Push README to Docker Hub (prowlercloud)
# uses: peter-evans/dockerhub-description@432a30c9e07499fd01da9f8a49f0faf9e0ca5b77 # v4.0.2
# with:
# username: ${{ secrets.DOCKERHUB_USERNAME }}
# password: ${{ secrets.DOCKERHUB_TOKEN }}
# repository: ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}
# readme-filepath: ./README.md

dispatch-action:
needs: container-build-push
runs-on: ubuntu-latest
```

```
@@ -12,7 +12,6 @@ env:
jobs:
bump-version:
name: Bump Version
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
```

```
@@ -56,12 +56,12 @@ jobs:

# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
uses: github/codeql-action/init@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/sdk-codeql-config.yml

- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
uses: github/codeql-action/analyze@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
category: "/language:${{matrix.language}}"
```

```
@@ -122,7 +122,7 @@ jobs:
files: |
./prowler/providers/aws/**
./tests/providers/aws/**
.poetry.lock
./poetry.lock

- name: AWS - Test
if: steps.aws-changed-files.outputs.any_changed == 'true'
@@ -137,7 +137,7 @@ jobs:
files: |
./prowler/providers/azure/**
./tests/providers/azure/**
.poetry.lock
./poetry.lock

- name: Azure - Test
if: steps.azure-changed-files.outputs.any_changed == 'true'
@@ -152,7 +152,7 @@ jobs:
files: |
./prowler/providers/gcp/**
./tests/providers/gcp/**
.poetry.lock
./poetry.lock

- name: GCP - Test
if: steps.gcp-changed-files.outputs.any_changed == 'true'
@@ -167,7 +167,7 @@ jobs:
files: |
./prowler/providers/kubernetes/**
./tests/providers/kubernetes/**
.poetry.lock
./poetry.lock

- name: Kubernetes - Test
if: steps.kubernetes-changed-files.outputs.any_changed == 'true'
@@ -182,7 +182,7 @@ jobs:
files: |
./prowler/providers/github/**
./tests/providers/github/**
.poetry.lock
./poetry.lock

- name: GitHub - Test
if: steps.github-changed-files.outputs.any_changed == 'true'
@@ -197,7 +197,7 @@ jobs:
files: |
./prowler/providers/nhn/**
./tests/providers/nhn/**
.poetry.lock
./poetry.lock

- name: NHN - Test
if: steps.nhn-changed-files.outputs.any_changed == 'true'
@@ -212,7 +212,7 @@ jobs:
files: |
./prowler/providers/m365/**
./tests/providers/m365/**
.poetry.lock
./poetry.lock

- name: M365 - Test
if: steps.m365-changed-files.outputs.any_changed == 'true'
@@ -227,13 +227,28 @@ jobs:
files: |
./prowler/providers/iac/**
./tests/providers/iac/**
.poetry.lock
./poetry.lock

- name: IaC - Test
if: steps.iac-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/iac --cov-report=xml:iac_coverage.xml tests/providers/iac

# Test MongoDB Atlas
- name: MongoDB Atlas - Check if any file has changed
id: mongodb-atlas-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/mongodbatlas/**
./tests/providers/mongodbatlas/**
.poetry.lock

- name: MongoDB Atlas - Test
if: steps.mongodb-atlas-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/mongodbatlas --cov-report=xml:mongodb_atlas_coverage.xml tests/providers/mongodbatlas

# Common Tests
- name: Lib - Test
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
```

```
@@ -56,7 +56,7 @@ jobs:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
commit-message: "feat(regions_update): Update regions for AWS services"
branch: "aws-services-regions-updated-${{ github.sha }}"
labels: "status/waiting-for-revision, severity/low, provider/aws"
labels: "status/waiting-for-revision, severity/low, provider/aws, no-changelog"
title: "chore(regions_update): Changes in regions for AWS services"
body: |
### Description
```

```
@@ -48,12 +48,12 @@ jobs:

# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
uses: github/codeql-action/init@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/ui-codeql-config.yml

- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
uses: github/codeql-action/analyze@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
category: "/language:${{matrix.language}}"
```

@@ -0,0 +1,100 @@
```yaml
name: UI - E2E Tests

on:
  pull_request:
    branches:
      - master
      - "v5.*"
    paths:
      - '.github/workflows/ui-e2e-tests.yml'
      - 'ui/**'

jobs:
  e2e-tests:
    if: github.repository == 'prowler-cloud/prowler'
    runs-on: ubuntu-latest
    env:
      AUTH_SECRET: 'fallback-ci-secret-for-testing'
      AUTH_TRUST_HOST: true
      NEXTAUTH_URL: 'http://localhost:3000'
      NEXT_PUBLIC_API_BASE_URL: 'http://localhost:8080/api/v1'
    steps:
      - name: Checkout repository
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: Fix API data directory permissions
        run: docker run --rm -v $(pwd)/_data/api:/data alpine chown -R 1000:1000 /data
      - name: Start API services
        run: |
          # Override docker-compose image tag to use latest instead of stable
          # This overrides any PROWLER_API_VERSION set in .env file
          export PROWLER_API_VERSION=latest
          echo "Using PROWLER_API_VERSION=${PROWLER_API_VERSION}"
          docker compose up -d api worker worker-beat
      - name: Wait for API to be ready
        run: |
          echo "Waiting for prowler-api..."
          timeout=150 # 5 minutes max
          elapsed=0
          while [ $elapsed -lt $timeout ]; do
            if curl -s ${NEXT_PUBLIC_API_BASE_URL}/docs >/dev/null 2>&1; then
              echo "Prowler API is ready!"
              exit 0
            fi
            echo "Waiting for prowler-api... (${elapsed}s elapsed)"
            sleep 5
            elapsed=$((elapsed + 5))
          done
          echo "Timeout waiting for prowler-api to start"
          exit 1
      - name: Load database fixtures for E2E tests
        run: |
          docker compose exec -T api sh -c '
            echo "Loading all fixtures from api/fixtures/dev/..."
            for fixture in api/fixtures/dev/*.json; do
              if [ -f "$fixture" ]; then
                echo "Loading $fixture"
                poetry run python manage.py loaddata "$fixture" --database admin
              fi
            done
            echo "All database fixtures loaded successfully!"
          '
      - name: Setup Node.js environment
        uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
        with:
          node-version: '20.x'
          cache: 'npm'
          cache-dependency-path: './ui/package-lock.json'
      - name: Install UI dependencies
        working-directory: ./ui
        run: npm ci
      - name: Build UI application
        working-directory: ./ui
        run: npm run build
      - name: Cache Playwright browsers
        uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
        id: playwright-cache
        with:
          path: ~/.cache/ms-playwright
          key: ${{ runner.os }}-playwright-${{ hashFiles('ui/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-playwright-
      - name: Install Playwright browsers
        working-directory: ./ui
        if: steps.playwright-cache.outputs.cache-hit != 'true'
        run: npm run test:e2e:install
      - name: Run E2E tests
        working-directory: ./ui
        run: npm run test:e2e
      - name: Upload test reports
        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
        if: failure()
        with:
          name: playwright-report
          path: ui/playwright-report/
          retention-days: 30
      - name: Cleanup services
        if: always()
        run: |
          echo "Shutting down services..."
          docker compose down -v || true
          echo "Cleanup completed"
```

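The readiness gate in this new workflow is a plain polling loop around `curl`. Factored into a reusable shell helper it could look like this (a sketch; the function name and defaults are illustrative):

```bash
# Sketch: poll a URL until it answers or a deadline (in seconds) passes.
wait_for_url() {
  url=$1; deadline=${2:-150}; elapsed=0
  while [ "$elapsed" -lt "$deadline" ]; do
    if curl -s "$url" >/dev/null 2>&1; then
      echo "$url is ready after ${elapsed}s"
      return 0
    fi
    sleep 5
    elapsed=$((elapsed + 5))
  done
  echo "timed out waiting for $url" >&2
  return 1
}

wait_for_url "http://localhost:8080/api/v1/docs" 150
```
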
```
@@ -46,52 +46,6 @@ jobs:
working-directory: ./ui
run: npm run build

e2e-tests:
runs-on: ubuntu-latest
env:
AUTH_SECRET: 'fallback-ci-secret-for-testing'
AUTH_TRUST_HOST: true
NEXTAUTH_URL: http://localhost:3000
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Setup Node.js
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
with:
node-version: '20.x'
cache: 'npm'
cache-dependency-path: './ui/package-lock.json'
- name: Install dependencies
working-directory: ./ui
run: npm ci
- name: Cache Playwright browsers
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
id: playwright-cache
with:
path: ~/.cache/ms-playwright
key: ${{ runner.os }}-playwright-${{ hashFiles('ui/package-lock.json') }}
restore-keys: |
${{ runner.os }}-playwright-
- name: Install Playwright browsers
working-directory: ./ui
if: steps.playwright-cache.outputs.cache-hit != 'true'
run: npm run test:e2e:install
- name: Build the application
working-directory: ./ui
run: npm run build
- name: Run Playwright tests
working-directory: ./ui
run: npm run test:e2e
- name: Upload Playwright report
uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b # v4.5.0
if: failure()
with:
name: playwright-report
path: ui/playwright-report/
retention-days: 30

test-container-build:
runs-on: ubuntu-latest
steps:
```

```
@@ -63,6 +63,7 @@ junit-reports/
# .env
ui/.env*
api/.env*
mcp_server/.env*
.env.local

# Coverage
@@ -75,3 +76,13 @@ node_modules

# Persistent data
_data/

# Claude
CLAUDE.md

# LLM's (Until we have a standard one)
AGENTS.md

# MCP Server
mcp_server/prowler_mcp_server/prowler_app/server.py
mcp_server/prowler_mcp_server/prowler_app/utils/schema.yaml
```

```
@@ -115,7 +115,8 @@ repos:
- id: safety
name: safety
description: "Safety is a tool that checks your installed dependencies for known security vulnerabilities"
entry: bash -c 'safety check --ignore 70612,66963,74429,76352,76353'
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
entry: bash -c 'safety check --ignore 70612,66963,74429,76352,76353,77744,77745'
language: system

- id: vulture
```

````
@@ -19,19 +19,16 @@
<a href="https://goto.prowler.com/slack"><img alt="Slack Shield" src="https://img.shields.io/badge/slack-prowler-brightgreen.svg?logo=slack"></a>
<a href="https://pypi.org/project/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/v/prowler.svg"></a>
<a href="https://pypi.python.org/pypi/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/pyversions/prowler.svg"></a>
<a href="https://pypistats.org/packages/prowler"><img alt="PyPI Prowler Downloads" src="https://img.shields.io/pypi/dw/prowler.svg?label=prowler%20downloads"></a>
<a href="https://pypistats.org/packages/prowler"><img alt="PyPI Downloads" src="https://img.shields.io/pypi/dw/prowler.svg?label=downloads"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/toniblyx/prowler"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/cloud/build/toniblyx/prowler"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/image-size/toniblyx/prowler"></a>
<a href="https://gallery.ecr.aws/prowler-cloud/prowler"><img width="120" height="19" alt="AWS ECR Gallery" src="https://user-images.githubusercontent.com/3985464/151531396-b6535a68-c907-44eb-95a1-a09508178616.png"></a>
<a href="https://codecov.io/gh/prowler-cloud/prowler"><img src="https://codecov.io/gh/prowler-cloud/prowler/graph/badge.svg?token=OflBGsdpDl"/></a>
</p>
<p align="center">
<a href="https://github.com/prowler-cloud/prowler"><img alt="Repo size" src="https://img.shields.io/github/repo-size/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler/issues"><img alt="Issues" src="https://img.shields.io/github/issues/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler/releases"><img alt="Version" src="https://img.shields.io/github/v/release/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler/releases"><img alt="Version" src="https://img.shields.io/github/v/release/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler/releases"><img alt="Version" src="https://img.shields.io/github/release-date/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler"><img alt="Contributors" src="https://img.shields.io/github/contributors-anon/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler/issues"><img alt="Issues" src="https://img.shields.io/github/issues/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler"><img alt="License" src="https://img.shields.io/github/license/prowler-cloud/prowler"></a>
<a href="https://twitter.com/ToniBlyx"><img alt="Twitter" src="https://img.shields.io/twitter/follow/toniblyx?style=social"></a>
<a href="https://twitter.com/prowlercloud"><img alt="Twitter" src="https://img.shields.io/twitter/follow/prowlercloud?style=social"></a>
@@ -55,15 +52,11 @@ Prowler includes hundreds of built-in controls to ensure compliance with standar
- **National Security Standards:** ENS (Spanish National Security Scheme)
- **Custom Security Frameworks:** Tailored to your needs

## Prowler CLI and Prowler Cloud

Prowler offers a Command Line Interface (CLI), known as Prowler Open Source, and an additional service built on top of it, called <a href="https://prowler.com">Prowler Cloud</a>.

## Prowler App

Prowler App is a web-based application that simplifies running Prowler across your cloud provider accounts. It provides a user-friendly interface to visualize the results and streamline your security assessments.

![Prowler App](docs/img/overview.png)
![Prowler App](docs/img/app-overview.png)

>For more details, refer to the [Prowler App Documentation](https://docs.prowler.com/projects/prowler-open-source/en/latest/#prowler-app-installation)

@@ -80,28 +73,36 @@ prowler <provider>
```console
prowler dashboard
```
![Prowler Dashboard](docs/img/dashboard.png)
![Prowler Dashboard](docs/img/dashboard-overview.png)

# Prowler at a Glance
> [!Tip]
> For the most accurate and up-to-date information about checks, services, frameworks, and categories, visit [**Prowler Hub**](https://hub.prowler.com).

| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) |
|---|---|---|---|---|
| AWS | 567 | 82 | 36 | 10 |
| GCP | 79 | 13 | 10 | 3 |
| Azure | 142 | 18 | 10 | 3 |
| Kubernetes | 83 | 7 | 5 | 7 |
| GitHub | 16 | 2 | 1 | 0 |
| M365 | 69 | 7 | 3 | 2 |
| NHN (Unofficial) | 6 | 2 | 1 | 0 |

| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) | Support | Stage | Interface |
|---|---|---|---|---|---|---|---|
| AWS | 576 | 82 | 36 | 10 | Official | Stable | UI, API, CLI |
| GCP | 79 | 13 | 10 | 3 | Official | Stable | UI, API, CLI |
| Azure | 162 | 19 | 11 | 4 | Official | Stable | UI, API, CLI |
| Kubernetes | 83 | 7 | 5 | 7 | Official | Stable | UI, API, CLI |
| GitHub | 17 | 2 | 1 | 0 | Official | Stable | UI, API, CLI |
| M365 | 70 | 7 | 3 | 2 | Official | Stable | UI, API, CLI |
| IaC | [See `trivy` docs.](https://trivy.dev/latest/docs/coverage/iac/) | N/A | N/A | N/A | Official | Beta | CLI |
| MongoDB Atlas | 10 | 3 | 0 | 0 | Official | Beta | CLI |
| NHN | 6 | 2 | 1 | 0 | Unofficial | Beta | CLI |

> [!Note]
> The numbers in the table are updated periodically.

> [!Tip]
> For the most accurate and up-to-date information about checks, services, frameworks, and categories, visit [**Prowler Hub**](https://hub.prowler.com).


> [!Note]
> Use the following commands to list Prowler's available checks, services, compliance frameworks, and categories: `prowler <provider> --list-checks`, `prowler <provider> --list-services`, `prowler <provider> --list-compliance` and `prowler <provider> --list-categories`.
> Use the following commands to list Prowler's available checks, services, compliance frameworks, and categories:
> - `prowler <provider> --list-checks`
> - `prowler <provider> --list-services`
> - `prowler <provider> --list-compliance`
> - `prowler <provider> --list-categories`

# 💻 Installation

@@ -239,7 +240,7 @@ The following versions of Prowler CLI are available, depending on your requireme

The container images are available here:
- Prowler CLI:
- [DockerHub](https://hub.docker.com/r/toniblyx/prowler/tags)
- [DockerHub](https://hub.docker.com/r/prowlercloud/prowler/tags)
- [AWS Public ECR](https://gallery.ecr.aws/prowler-cloud/prowler)
- Prowler App:
- [DockerHub - Prowler UI](https://hub.docker.com/r/prowlercloud/prowler-ui/tags)
@@ -274,7 +275,7 @@ python prowler-cli.py -v
- **Prowler API**: A backend service, developed with Django REST Framework, responsible for running Prowler scans and storing the generated results.
- **Prowler SDK**: A Python SDK designed to extend the functionality of the Prowler CLI for advanced capabilities.

![Prowler App Architecture](docs/img/architecture.png)
![Prowler App Architecture](docs/img/app-architecture.png)

## Prowler CLI

@@ -300,40 +301,12 @@ And many more environments.

![Prowler CLI](docs/img/short-display.png)

# Deprecations from v3

## General
- `Allowlist` now is called `Mutelist`.
- The `--quiet` option has been deprecated. Use the `--status` flag to filter findings based on their status: PASS, FAIL, or MANUAL.
- All findings with an `INFO` status have been reclassified as `MANUAL`.
- The CSV output format is standardized across all providers.

**Deprecated Output Formats**

The following formats are now deprecated:
- Native JSON has been replaced with JSON in [OCSF] v1.1.0 format, which is standardized across all providers (https://schema.ocsf.io/).

## AWS

**AWS Flag Deprecation**

The flag --sts-endpoint-region has been deprecated due to the adoption of AWS STS regional tokens.

**Sending FAIL Results to AWS Security Hub**

- To send only FAILS to AWS Security Hub, use one of the following options: `--send-sh-only-fails` or `--security-hub --status FAIL`.


# 📖 Documentation

**Documentation Resources**

For installation instructions, usage details, tutorials, and the Developer Guide, visit https://docs.prowler.com/

# 📃 License

**Prowler License Information**

Prowler is licensed under the Apache License 2.0, as indicated in each file within the repository. Obtaining a Copy of the License
Prowler is licensed under the Apache License 2.0.

A copy of the License is available at <http://www.apache.org/licenses/LICENSE-2.0>
````

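For reference, the `--quiet` → `--status` migration and the Security Hub flags named in the deprecation notes removed above translate to invocations like these (a sketch using only the flags quoted in that text):

```console
# Show only failing findings (replaces the removed --quiet flag)
prowler aws --status FAIL

# Send only FAIL findings to AWS Security Hub, either form
prowler aws --security-hub --send-sh-only-fails
prowler aws --security-hub --status FAIL
```
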
@@ -1,23 +1,65 @@
# Security Policy

# Security

## Software Security

As an **AWS Partner**, we have passed the [AWS Foundation Technical Review (FTR)](https://aws.amazon.com/partners/foundational-technical-review/), and we use the following tools and automation to make sure our code is secure and our dependencies are up to date:

## Reporting Vulnerabilities

- `bandit` for code security review.
- `safety` and `dependabot` for dependencies.
- `hadolint` and `dockle` for the security of our containers.
- `snyk` in Docker Hub.
- `clair` in Amazon ECR.
- `vulture`, `flake8`, `black` and `pylint` for formatting and best practices.

At Prowler, we consider the security of our open source software and systems a top priority. But no matter how much effort we put into system security, vulnerabilities can still be present.

## Reporting a Vulnerability

If you discover a vulnerability, we would like to know about it so we can take steps to address it as quickly as possible. We ask you to help us better protect our users, our clients and our systems.

If you would like to report a vulnerability or have a security concern regarding Prowler Open Source or the ProwlerPro service, please submit the information by contacting https://support.prowler.com.

When reporting vulnerabilities, please consider (1) the attack scenario / exploitability, and (2) the security impact of the bug. The following issues are considered out of scope:

The information you share with ProwlerPro as part of this process is kept confidential within ProwlerPro. We will only share this information with a third party if the vulnerability you report is found to affect a third-party product, in which case we will share this information with the third-party product's author or manufacturer. Otherwise, we will only share this information as permitted by you.

- Social engineering, or attacks requiring social engineering.
- Clickjacking on pages with no sensitive actions.
- Cross-Site Request Forgery (CSRF) on unauthenticated forms or forms with no sensitive actions.
- Attacks requiring Man-In-The-Middle (MITM) or physical access to a user's device.
- Previously known vulnerable libraries without a working Proof of Concept (PoC).
- Comma Separated Values (CSV) injection without demonstrating a vulnerability.
- Missing best practices in SSL/TLS configuration.
- Any activity that could lead to the disruption of service (DoS).
- Rate limiting or brute force issues on non-authentication endpoints.
- Missing best practices in Content Security Policy (CSP).
- Missing HttpOnly or Secure flags on cookies.
- Configuration of or missing security headers.
- Missing email best practices, such as invalid, incomplete, or missing SPF/DKIM/DMARC records.
- Vulnerabilities only affecting users of outdated or unpatched browsers (more than two stable versions behind).
- Software version disclosure, banner identification issues, or descriptive error messages.
- Tabnabbing.
- Issues that require unlikely user interaction.
- Improper logout functionality and improper session timeout.
- CORS misconfiguration without an exploitation scenario.
- Broken link hijacking.
- Automated scanning results (e.g., sqlmap, Burp active scanner) that have not been manually verified.
- Content spoofing and text injection issues without a clear attack vector.
- Email spoofing without exploiting security flaws.
- Dead or broken links.
- User enumeration.

We will review the submitted report and assign it a tracking number. We will then respond to you, acknowledging receipt of the report, and outline the next steps in the process.

Testing guidelines:
- Do not run automated scanners on other customer projects. Automated scanners can run up costs for our users, and aggressively configured scanners might inadvertently disrupt services, exploit vulnerabilities, lead to system instability or breaches, and violate the Terms of Service of our upstream providers. Our own security systems cannot distinguish hostile reconnaissance from white-hat research. If you wish to run an automated scanner, notify us at support@prowler.com and only run it on your own Prowler app project. Do NOT attack Prowler projects in use by other customers.
- Do not take advantage of the vulnerability or problem you have discovered, for example by downloading more data than necessary to demonstrate the vulnerability or by deleting or modifying other people's data.

You will receive a non-automated response to your initial contact within 24 hours, confirming receipt of your reported vulnerability.

Reporting guidelines:
- File a report through our Support Desk at https://support.prowler.com
- If it concerns missing security functionality, please file a feature request instead at https://github.com/prowler-cloud/prowler/issues
- Do provide sufficient information to reproduce the problem, so we will be able to resolve it as quickly as possible.
- If you have further questions and want direct interaction with the Prowler team, please contact us via our Community Slack at goto.prowler.com/slack.

We will coordinate public notification of any validated vulnerability with you. Where possible, we prefer that our respective public disclosures be posted simultaneously.

Disclosure guidelines:
- In order to protect our users and customers, do not reveal the problem to others until we have researched, addressed and informed our affected customers.
- If you want to publicly share your research about Prowler at a conference, in a blog or any other public forum, you should share a draft with us for review and approval at least 30 days prior to the publication date. Please note that the following should not be included:
    - Data regarding any Prowler user or customer projects.
    - Prowler customers' data.
    - Information about Prowler employees, contractors or partners.

What we promise:
- We will respond to your report within 5 business days with our evaluation of the report and an expected resolution date.
- If you have followed the instructions above, we will not take any legal action against you in regard to the report.
- We will handle your report with strict confidentiality, and not pass on your personal details to third parties without your permission.
- We will keep you informed of the progress towards resolving the problem.
- In the public information concerning the problem reported, we will give your name as the discoverer of the problem (unless you desire otherwise).

We strive to resolve all problems as quickly as possible, and we would like to play an active role in the ultimate publication on the problem after it is resolved.

---

For more information about our security policies, please refer to the [Security](https://docs.prowler.com/projects/prowler-open-source/en/latest/security/) section in our documentation.
@@ -19,6 +19,8 @@ DJANGO_REFRESH_TOKEN_LIFETIME=1440
DJANGO_CACHE_MAX_AGE=3600
DJANGO_STALE_WHILE_REVALIDATE=60
DJANGO_SECRETS_ENCRYPTION_KEY=""
# Throttle: two options. Empty means no throttling; otherwise use a rate in DRF format: https://www.django-rest-framework.org/api-guide/throttling/#setting-the-throttling-policy
DJANGO_THROTTLE_TOKEN_OBTAIN=50/minute
# Decide whether to allow Django to manage database table partitions
DJANGO_MANAGE_DB_PARTITIONS=[True|False]
DJANGO_CELERY_DEADLOCK_ATTEMPTS=5
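As context for `DJANGO_THROTTLE_TOKEN_OBTAIN` above, here is a minimal sketch of how a DRF-format rate string can feed a scoped throttle. The settings wiring shown is illustrative only; it is not necessarily how the Prowler API consumes the variable:

```python
import os

# Illustrative: map the env var onto a DRF scoped throttle rate.
# An empty value means no throttling for the scope.
rate = os.environ.get("DJANGO_THROTTLE_TOKEN_OBTAIN", "")

REST_FRAMEWORK = {
    "DEFAULT_THROTTLE_CLASSES": ["rest_framework.throttling.ScopedRateThrottle"],
    "DEFAULT_THROTTLE_RATES": {"token_obtain": rate or None},
}
```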
@@ -2,6 +2,67 @@
All notable changes to the **Prowler API** are documented in this file.

## [1.14.0] (Prowler UNRELEASED)

### Added
- Default JWT keys are generated and stored if they are missing from configuration [(#8655)](https://github.com/prowler-cloud/prowler/pull/8655)
- `compliance_name` for each compliance [(#7920)](https://github.com/prowler-cloud/prowler/pull/7920)

### Changed
- The MANAGE_ACCOUNT permission is now required to modify or read user permissions, instead of MANAGE_USERS [(#8281)](https://github.com/prowler-cloud/prowler/pull/8281)
- At least one user with the MANAGE_ACCOUNT permission is now required in the tenant [(#8729)](https://github.com/prowler-cloud/prowler/pull/8729)

---

## [1.13.1] (Prowler 5.12.2)

### Changed
- Renamed the compliance overview task queue to `compliance` [(#8755)](https://github.com/prowler-cloud/prowler/pull/8755)

### Security
- Django updated to the latest 5.1 security release, 5.1.12, due to [problems](https://www.djangoproject.com/weblog/2025/sep/03/security-releases/) with potential SQL injection in FilteredRelation column aliases [(#8693)](https://github.com/prowler-cloud/prowler/pull/8693)

---

## [1.13.0] (Prowler 5.12.0)

### Added
- Integration with JIRA, enabling sending findings to a JIRA project [(#8622)](https://github.com/prowler-cloud/prowler/pull/8622), [(#8637)](https://github.com/prowler-cloud/prowler/pull/8637)
- `GET /overviews/findings_severity` now supports `filter[status]` and `filter[status__in]` to aggregate by specific statuses (`FAIL`, `PASS`) [(#8186)](https://github.com/prowler-cloud/prowler/pull/8186); see the request sketch after this list
- Throttling options for `/api/v1/tokens` using the `DJANGO_THROTTLE_TOKEN_OBTAIN` environment variable [(#8647)](https://github.com/prowler-cloud/prowler/pull/8647)
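To illustrate the new status filters, a hedged request sketch using `requests`; the base URL and bearer token are placeholders, and the JSON:API filter syntax follows the changelog entry above:

```python
import requests

API = "https://localhost:8080/api/v1"  # placeholder base URL
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

# Aggregate severity counts over FAIL findings only.
resp = requests.get(
    f"{API}/overviews/findings_severity",
    params={"filter[status]": "FAIL"},
    headers=headers,
)
resp.raise_for_status()
print(resp.json())
```

The list form works the same way via the query string `filter[status__in]=FAIL,PASS`.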
---

## [1.12.0] (Prowler 5.11.0)

### Added
- Lighthouse support for OpenAI GPT-5 [(#8527)](https://github.com/prowler-cloud/prowler/pull/8527)
- Integration with Amazon Security Hub, enabling sending findings to Security Hub [(#8365)](https://github.com/prowler-cloud/prowler/pull/8365)
- Generate ASFF output for AWS providers with the Security Hub integration enabled [(#8569)](https://github.com/prowler-cloud/prowler/pull/8569)

### Fixed
- GitHub provider always scanned the user instead of the organization when using the provider UID [(#8587)](https://github.com/prowler-cloud/prowler/pull/8587)

---

## [1.11.0] (Prowler 5.10.0)

### Added
- GitHub provider support [(#8271)](https://github.com/prowler-cloud/prowler/pull/8271)
- Integration with Amazon S3, enabling storage and retrieval of scan data via S3 buckets [(#8056)](https://github.com/prowler-cloud/prowler/pull/8056)

### Fixed
- Avoid sending errors to Sentry in the M365 provider when user authentication fails [(#8420)](https://github.com/prowler-cloud/prowler/pull/8420)

---

## [1.10.2] (Prowler 5.9.2)

### Changed
- Optimized queries for resources views [(#8336)](https://github.com/prowler-cloud/prowler/pull/8336)

---

## [1.10.1] (Prowler 5.9.1)

### Fixed
@@ -44,6 +44,9 @@ USER prowler
WORKDIR /home/prowler

# Ensure output directory exists
RUN mkdir -p /tmp/prowler_api_output

COPY pyproject.toml ./

RUN pip install --no-cache-dir --upgrade pip && \
@@ -20,6 +20,10 @@ Valkey exposes a Redis 7.2 compliant API. Any service that exposes the Redis API
Under the root path of the project, you can find a file called `.env.example`. This file shows all the environment variables that the project uses. You *must* create a new file called `.env` and set the values for the variables.

If you don't set `DJANGO_TOKEN_SIGNING_KEY` or `DJANGO_TOKEN_VERIFYING_KEY`, the API will generate them at `~/.config/prowler-api/` with `0600` and `0644` permissions; back up these files to persist identity across redeploys.

**Important:** When using the production Docker Compose profile (`docker compose --profile prod`), you **must** set both `DJANGO_TOKEN_SIGNING_KEY` and `DJANGO_TOKEN_VERIFYING_KEY` in your `.env` file, as automatic key generation is not available in this deployment mode.
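If you need to pre-generate the pair yourself (for example, for the production compose profile above), here is a minimal sketch using the `cryptography` package, mirroring the parameters the API uses for its own automatic generation (RSA 2048, PKCS8 and SubjectPublicKeyInfo PEM):

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Same parameters the API uses when it generates keys automatically.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

private_pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
).decode("utf-8")

public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
).decode("utf-8")

# Paste these into .env as DJANGO_TOKEN_SIGNING_KEY / DJANGO_TOKEN_VERIFYING_KEY.
print(private_pem)
print(public_pem)
```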
## Local deployment

Keep in mind that if you export the `.env` file for a local deployment, you must do so within the context of the Poetry interpreter, not before; otherwise the variables will not be loaded properly.
@@ -32,7 +32,7 @@ start_prod_server() {
start_worker() {
    echo "Starting the worker..."
    poetry run python -m celery -A config.celery worker -l "${DJANGO_LOGGING_LEVEL:-info}" -Q celery,scans,scan-reports,deletion,backfill,overview -E --max-tasks-per-child 1
    poetry run python -m celery -A config.celery worker -l "${DJANGO_LOGGING_LEVEL:-info}" -Q celery,scans,scan-reports,deletion,backfill,overview,integrations,compliance -E --max-tasks-per-child 1
}

start_worker_beat() {
Generated
+1504
-1239
File diff suppressed because it is too large
+5
-3
@@ -7,7 +7,7 @@ authors = [{name = "Prowler Engineering", email = "engineering@prowler.com"}]
dependencies = [
    "celery[pytest] (>=5.4.0,<6.0.0)",
    "dj-rest-auth[with_social,jwt] (==7.0.1)",
    "django==5.1.10",
    "django (==5.1.12)",
    "django-allauth[saml] (>=65.8.0,<66.0.0)",
    "django-celery-beat (>=2.7.0,<3.0.0)",
    "django-celery-results (>=2.5.1,<3.0.0)",
@@ -30,7 +30,9 @@ dependencies = [
    "sentry-sdk[django] (>=2.20.0,<3.0.0)",
    "uuid6==2024.7.10",
    "openai (>=1.82.0,<2.0.0)",
    "xmlsec==1.3.14"
    "xmlsec==1.3.14",
    "h2 (==4.3.0)",
    "markdown (>=3.9,<4.0)"
]
description = "Prowler's API (Django/DRF)"
license = "Apache-2.0"
@@ -38,7 +40,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
version = "1.10.1"
version = "1.14.0"

[project.scripts]
celery = "src.backend.config.settings.celery"
@@ -1,4 +1,28 @@
import logging
import os
import sys
from pathlib import Path

from django.apps import AppConfig
from django.conf import settings

from config.custom_logging import BackendLogger
from config.env import env

logger = logging.getLogger(BackendLogger.API)

SIGNING_KEY_ENV = "DJANGO_TOKEN_SIGNING_KEY"
VERIFYING_KEY_ENV = "DJANGO_TOKEN_VERIFYING_KEY"

PRIVATE_KEY_FILE = "jwt_private.pem"
PUBLIC_KEY_FILE = "jwt_public.pem"

KEYS_DIRECTORY = (
    Path.home() / ".config" / "prowler-api"
)  # `/home/prowler/.config/prowler-api` inside the container

_keys_initialized = False  # Flag to prevent multiple executions within the same process


class ApiConfig(AppConfig):
@@ -9,4 +33,138 @@ class ApiConfig(AppConfig):
        from api import signals  # noqa: F401
        from api.compliance import load_prowler_compliance

        # Generate required cryptographic keys if not present, but only if:
        # `"manage.py" not in sys.argv`: if an external server (e.g., Gunicorn) is running the app
        # `os.environ.get("RUN_MAIN")`: if it's not a Django command, or when using `runserver`,
        # only the main process will do it
        if "manage.py" not in sys.argv or os.environ.get("RUN_MAIN"):
            self._ensure_crypto_keys()

        load_prowler_compliance()

    def _ensure_crypto_keys(self):
        """
        Orchestrator method that ensures all required cryptographic keys are present.
        This method coordinates the generation of:
        - RSA key pairs for JWT token signing and verification
        Note: During development, Django spawns multiple processes (migrations, fixtures, etc.)
        which will each generate their own keys. This is expected behavior and each process
        will have consistent keys for its lifetime. In production, set the keys as environment
        variables to avoid regeneration.
        """
        global _keys_initialized

        # Skip key generation if running tests
        if hasattr(settings, "TESTING") and settings.TESTING:
            return

        # Skip if already initialized in this process
        if _keys_initialized:
            return

        # Check if both JWT keys are set; if not, generate them
        signing_key = env.str(SIGNING_KEY_ENV, default="").strip()
        verifying_key = env.str(VERIFYING_KEY_ENV, default="").strip()

        if not signing_key or not verifying_key:
            logger.info(
                f"Generating JWT RSA key pair. In production, set '{SIGNING_KEY_ENV}' and '{VERIFYING_KEY_ENV}' "
                "environment variables."
            )
            self._ensure_jwt_keys()

        # Mark as initialized to prevent future executions in this process
        _keys_initialized = True

    def _read_key_file(self, file_name):
        """
        Utility method to read the contents of a file.
        """
        file_path = KEYS_DIRECTORY / file_name
        return file_path.read_text().strip() if file_path.is_file() else None

    def _write_key_file(self, file_name, content, private=True):
        """
        Utility method to write content to a file.
        """
        try:
            file_path = KEYS_DIRECTORY / file_name
            file_path.parent.mkdir(parents=True, exist_ok=True)
            file_path.write_text(content)
            file_path.chmod(0o600 if private else 0o644)

        except Exception as e:
            logger.error(
                f"Error writing key file '{file_name}': {e}. "
                f"Please set '{SIGNING_KEY_ENV}' and '{VERIFYING_KEY_ENV}' manually."
            )
            raise e

    def _ensure_jwt_keys(self):
        """
        Generate RSA key pairs for JWT token signing and verification
        if they are not already set in environment variables.
        """
        # Read existing keys from files if they exist
        signing_key = self._read_key_file(PRIVATE_KEY_FILE)
        verifying_key = self._read_key_file(PUBLIC_KEY_FILE)

        if not signing_key or not verifying_key:
            # Generate and store the RSA key pair
            signing_key, verifying_key = self._generate_jwt_keys()
            self._write_key_file(PRIVATE_KEY_FILE, signing_key, private=True)
            self._write_key_file(PUBLIC_KEY_FILE, verifying_key, private=False)
            logger.info("JWT keys generated and stored successfully")

        else:
            logger.info("JWT keys already generated")

        # Set environment variables and Django settings
        os.environ[SIGNING_KEY_ENV] = signing_key
        settings.SIMPLE_JWT["SIGNING_KEY"] = signing_key

        os.environ[VERIFYING_KEY_ENV] = verifying_key
        settings.SIMPLE_JWT["VERIFYING_KEY"] = verifying_key

    def _generate_jwt_keys(self):
        """
        Generate and set RSA key pairs for JWT token operations.
        """
        try:
            from cryptography.hazmat.primitives import serialization
            from cryptography.hazmat.primitives.asymmetric import rsa

            # Generate RSA key pair
            private_key = rsa.generate_private_key(  # Future improvement: we could read the next values from env vars
                public_exponent=65537,
                key_size=2048,
            )

            # Serialize private key (for signing)
            private_pem = private_key.private_bytes(
                encoding=serialization.Encoding.PEM,
                format=serialization.PrivateFormat.PKCS8,
                encryption_algorithm=serialization.NoEncryption(),
            ).decode("utf-8")

            # Serialize public key (for verification)
            public_key = private_key.public_key()
            public_pem = public_key.public_bytes(
                encoding=serialization.Encoding.PEM,
                format=serialization.PublicFormat.SubjectPublicKeyInfo,
            ).decode("utf-8")

            logger.debug("JWT RSA key pair generated successfully.")
            return private_pem, public_pem

        except ImportError as e:
            logger.warning(
                "The 'cryptography' package is required for automatic JWT key generation."
            )
            raise e

        except Exception as e:
            logger.error(
                f"Error generating JWT keys: {e}. Please set '{SIGNING_KEY_ENV}' and '{VERIFYING_KEY_ENV}' manually."
            )
            raise e
@@ -225,6 +225,7 @@ def generate_compliance_overview_template(prowler_compliance: dict):
        # Build compliance dictionary
        compliance_dict = {
            "framework": compliance_data.Framework,
            "name": compliance_data.Name,
            "version": compliance_data.Version,
            "provider": provider_type,
            "description": compliance_data.Description,
@@ -2,7 +2,7 @@ from datetime import date, datetime, timedelta, timezone
from dateutil.parser import parse
from django.conf import settings
from django.db.models import Q
from django.db.models import F, Q
from django_filters.rest_framework import (
    BaseInFilter,
    BooleanFilter,
@@ -28,6 +28,7 @@ from api.models import (
    Integration,
    Invitation,
    Membership,
    OverviewStatusChoices,
    PermissionChoices,
    Processor,
    Provider,
@@ -750,6 +751,72 @@ class ScanSummaryFilter(FilterSet):
    }


class ScanSummarySeverityFilter(ScanSummaryFilter):
    """Filter for the findings_severity ScanSummary endpoint - includes status filters"""

    # Custom status filters - only for the severity grouping endpoint
    status = ChoiceFilter(method="filter_status", choices=OverviewStatusChoices.choices)
    status__in = CharInFilter(method="filter_status_in", lookup_expr="in")

    def filter_status(self, queryset, name, value):
        # Validate the status value
        if value not in [choice[0] for choice in OverviewStatusChoices.choices]:
            raise ValidationError(f"Invalid status value: {value}")

        # Apply the filter by annotating the queryset with the status field
        if value == OverviewStatusChoices.FAIL:
            return queryset.annotate(status_count=F("fail"))
        elif value == OverviewStatusChoices.PASS:
            return queryset.annotate(status_count=F("_pass"))
        else:
            return queryset.annotate(status_count=F("total"))

    def filter_status_in(self, queryset, name, value):
        # Validate the status values
        valid_statuses = [choice[0] for choice in OverviewStatusChoices.choices]
        for status_val in value:
            if status_val not in valid_statuses:
                raise ValidationError(f"Invalid status value: {status_val}")

        # If all statuses or no valid statuses, use total
        if (
            set(value)
            >= {
                OverviewStatusChoices.FAIL,
                OverviewStatusChoices.PASS,
            }
            or not value
        ):
            return queryset.annotate(status_count=F("total"))

        # Build the sum expression based on status values
        sum_expression = None
        for status in value:
            if status == OverviewStatusChoices.FAIL:
                field_expr = F("fail")
            elif status == OverviewStatusChoices.PASS:
                field_expr = F("_pass")
            else:
                continue

            if sum_expression is None:
                sum_expression = field_expr
            else:
                sum_expression = sum_expression + field_expr

        if sum_expression is None:
            return queryset.annotate(status_count=F("total"))

        return queryset.annotate(status_count=sum_expression)

    class Meta:
        model = ScanSummary
        fields = {
            "inserted_at": ["date", "gte", "lte"],
            "region": ["exact", "icontains", "in"],
        }


class ServiceOverviewFilter(ScanSummaryFilter):
    def is_valid(self):
        # Check if at least one of the inserted_at filters is present
@@ -793,3 +860,23 @@ class ProcessorFilter(FilterSet):
        field_name="processor_type",
        lookup_expr="in",
    )


class IntegrationJiraFindingsFilter(FilterSet):
    # To be expanded as needed
    finding_id = UUIDFilter(field_name="id", lookup_expr="exact")
    finding_id__in = UUIDInFilter(field_name="id", lookup_expr="in")

    class Meta:
        model = Finding
        fields = {}

    def filter_queryset(self, queryset):
        # Validate that there is at least one filter provided
        if not self.data:
            raise ValidationError(
                {
                    "findings": "No finding filters provided. At least one filter is required."
                }
            )
        return super().filter_queryset(queryset)
@@ -24,5 +24,18 @@
            "is_active": true,
            "date_joined": "2024-09-18T09:04:20.850Z"
        }
    },
    {
        "model": "api.user",
        "pk": "6d4f8a91-3c2e-4b5a-8f7d-1e9c5b2a4d6f",
        "fields": {
            "password": "pbkdf2_sha256$870000$Z63pGJ7nre48hfcGbk5S0O$rQpKczAmijs96xa+gPVJifpT3Fetb8DOusl5Eq6gxac=",
            "last_login": null,
            "name": "E2E Test User",
            "email": "e2e@prowler.com",
            "company_name": "Prowler E2E Tests",
            "is_active": true,
            "date_joined": "2024-01-01T00:00:00.850Z"
        }
    }
]
@@ -46,5 +46,24 @@
            "role": "member",
            "date_joined": "2024-09-19T11:03:59.712Z"
        }
    },
    {
        "model": "api.tenant",
        "pk": "7c8f94a3-e2d1-4b3a-9f87-2c4d5e6f1a2b",
        "fields": {
            "inserted_at": "2024-01-01T00:00:00Z",
            "updated_at": "2024-01-01T00:00:00Z",
            "name": "E2E Test Tenant"
        }
    },
    {
        "model": "api.membership",
        "pk": "9b1a2c3d-4e5f-6789-abc1-23456789def0",
        "fields": {
            "user": "6d4f8a91-3c2e-4b5a-8f7d-1e9c5b2a4d6f",
            "tenant": "7c8f94a3-e2d1-4b3a-9f87-2c4d5e6f1a2b",
            "role": "owner",
            "date_joined": "2024-01-01T00:00:00.000Z"
        }
    }
]
@@ -149,5 +149,32 @@
            "user": "8b38e2eb-6689-4f1e-a4ba-95b275130200",
            "inserted_at": "2024-11-20T15:36:14.302Z"
        }
    },
    {
        "model": "api.role",
        "pk": "a5b6c7d8-9e0f-1234-5678-90abcdef1234",
        "fields": {
            "tenant": "7c8f94a3-e2d1-4b3a-9f87-2c4d5e6f1a2b",
            "name": "e2e_admin",
            "manage_users": true,
            "manage_account": true,
            "manage_billing": true,
            "manage_providers": true,
            "manage_integrations": true,
            "manage_scans": true,
            "unlimited_visibility": true,
            "inserted_at": "2024-01-01T00:00:00.000Z",
            "updated_at": "2024-01-01T00:00:00.000Z"
        }
    },
    {
        "model": "api.userrolerelationship",
        "pk": "f1e2d3c4-b5a6-9876-5432-10fedcba9876",
        "fields": {
            "tenant": "7c8f94a3-e2d1-4b3a-9f87-2c4d5e6f1a2b",
            "role": "a5b6c7d8-9e0f-1234-5678-90abcdef1234",
            "user": "6d4f8a91-3c2e-4b5a-8f7d-1e9c5b2a4d6f",
            "inserted_at": "2024-01-01T00:00:00.000Z"
        }
    }
]
File diff suppressed because one or more lines are too long
@@ -0,0 +1,30 @@
from functools import partial

from django.db import migrations

from api.db_utils import create_index_on_partitions, drop_index_on_partitions


class Migration(migrations.Migration):
    atomic = False

    dependencies = [
        ("api", "0039_resource_resources_failed_findings_idx"),
    ]

    operations = [
        migrations.RunPython(
            partial(
                create_index_on_partitions,
                parent_table="resource_finding_mappings",
                index_name="rfm_tenant_resource_idx",
                columns="tenant_id, resource_id",
                method="BTREE",
            ),
            reverse_code=partial(
                drop_index_on_partitions,
                parent_table="resource_finding_mappings",
                index_name="rfm_tenant_resource_idx",
            ),
        ),
    ]
@@ -0,0 +1,17 @@
from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0040_rfm_tenant_resource_index_partitions"),
    ]

    operations = [
        migrations.AddIndex(
            model_name="resourcefindingmapping",
            index=models.Index(
                fields=["tenant_id", "resource_id"],
                name="rfm_tenant_resource_idx",
            ),
        ),
    ]
@@ -0,0 +1,23 @@
from django.contrib.postgres.operations import AddIndexConcurrently
from django.db import migrations, models


class Migration(migrations.Migration):
    atomic = False

    dependencies = [
        ("api", "0041_rfm_tenant_resource_parent_partitions"),
        ("django_celery_beat", "0019_alter_periodictasks_options"),
    ]

    operations = [
        AddIndexConcurrently(
            model_name="scan",
            index=models.Index(
                condition=models.Q(("state", "completed")),
                fields=["tenant_id", "provider_id", "-inserted_at"],
                include=("id",),
                name="scans_prov_ins_desc_idx",
            ),
        ),
    ]
@@ -0,0 +1,33 @@
# Generated by Django 5.1.7 on 2025-07-09 14:44

from django.db import migrations

import api.db_utils


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0042_scan_scans_prov_ins_desc_idx"),
    ]

    operations = [
        migrations.AlterField(
            model_name="provider",
            name="provider",
            field=api.db_utils.ProviderEnumField(
                choices=[
                    ("aws", "AWS"),
                    ("azure", "Azure"),
                    ("gcp", "GCP"),
                    ("kubernetes", "Kubernetes"),
                    ("m365", "M365"),
                    ("github", "GitHub"),
                ],
                default="aws",
            ),
        ),
        migrations.RunSQL(
            "ALTER TYPE provider ADD VALUE IF NOT EXISTS 'github';",
            reverse_sql=migrations.RunSQL.noop,
        ),
    ]
@@ -0,0 +1,19 @@
# Generated by Django 5.1.10 on 2025-07-17 11:52

from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0043_github_provider"),
    ]

    operations = [
        migrations.AddConstraint(
            model_name="integration",
            constraint=models.UniqueConstraint(
                fields=("configuration", "tenant"),
                name="unique_configuration_per_tenant",
            ),
        ),
    ]
@@ -0,0 +1,17 @@
# Generated by Django 5.1.10 on 2025-07-21 16:08

from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0044_integration_unique_configuration_per_tenant"),
    ]

    operations = [
        migrations.AlterField(
            model_name="scan",
            name="output_location",
            field=models.CharField(blank=True, max_length=4096, null=True),
        ),
    ]
@@ -0,0 +1,33 @@
# Generated by Django 5.1.10 on 2025-08-20 09:04

from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0045_alter_scan_output_location"),
    ]

    operations = [
        migrations.AlterField(
            model_name="lighthouseconfiguration",
            name="model",
            field=models.CharField(
                choices=[
                    ("gpt-4o-2024-11-20", "GPT-4o v2024-11-20"),
                    ("gpt-4o-2024-08-06", "GPT-4o v2024-08-06"),
                    ("gpt-4o-2024-05-13", "GPT-4o v2024-05-13"),
                    ("gpt-4o", "GPT-4o Default"),
                    ("gpt-4o-mini-2024-07-18", "GPT-4o Mini v2024-07-18"),
                    ("gpt-4o-mini", "GPT-4o Mini Default"),
                    ("gpt-5-2025-08-07", "GPT-5 v2025-08-07"),
                    ("gpt-5", "GPT-5 Default"),
                    ("gpt-5-mini-2025-08-07", "GPT-5 Mini v2025-08-07"),
                    ("gpt-5-mini", "GPT-5 Mini Default"),
                ],
                default="gpt-4o-2024-08-06",
                help_text="Must be one of the supported model names",
                max_length=50,
            ),
        ),
    ]
+16
@@ -0,0 +1,16 @@
# Generated by Django 5.1.10 on 2025-08-20 08:24

from django.db import migrations


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0046_lighthouse_gpt5"),
    ]

    operations = [
        migrations.RemoveConstraint(
            model_name="integration",
            name="unique_configuration_per_tenant",
        ),
    ]
@@ -74,6 +74,15 @@ class StatusChoices(models.TextChoices):
    MANUAL = "MANUAL", _("Manual")


class OverviewStatusChoices(models.TextChoices):
    """
    Status filters allowed in overview/severity endpoints.
    """

    FAIL = "FAIL", _("Fail")
    PASS = "PASS", _("Pass")


class StateChoices(models.TextChoices):
    AVAILABLE = "available", _("Available")
    SCHEDULED = "scheduled", _("Scheduled")
@@ -205,6 +214,7 @@ class Provider(RowLevelSecurityProtectedModel):
        GCP = "gcp", _("GCP")
        KUBERNETES = "kubernetes", _("Kubernetes")
        M365 = "m365", _("M365")
        GITHUB = "github", _("GitHub")

    @staticmethod
    def validate_aws_uid(value):
@@ -265,6 +275,16 @@ class Provider(RowLevelSecurityProtectedModel):
                pointer="/data/attributes/uid",
            )

    @staticmethod
    def validate_github_uid(value):
        if not re.match(r"^[a-zA-Z0-9][a-zA-Z0-9-]{0,38}$", value):
            raise ModelValidationError(
                detail="GitHub provider ID must be a valid GitHub username or organization name (1-39 characters, "
                "starting with alphanumeric, containing only alphanumeric characters and hyphens).",
                code="github-uid",
                pointer="/data/attributes/uid",
            )

    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
    updated_at = models.DateTimeField(auto_now=True, editable=False)
@@ -427,7 +447,7 @@ class Scan(RowLevelSecurityProtectedModel):
    scheduler_task = models.ForeignKey(
        PeriodicTask, on_delete=models.SET_NULL, null=True, blank=True
    )
    output_location = models.CharField(blank=True, null=True, max_length=200)
    output_location = models.CharField(blank=True, null=True, max_length=4096)
    provider = models.ForeignKey(
        Provider,
        on_delete=models.CASCADE,
@@ -476,6 +496,13 @@ class Scan(RowLevelSecurityProtectedModel):
                condition=Q(state=StateChoices.COMPLETED),
                name="scans_prov_state_ins_desc_idx",
            ),
            # TODO This might replace `scans_prov_state_ins_desc_idx` completely. Review usage
            models.Index(
                fields=["tenant_id", "provider_id", "-inserted_at"],
                condition=Q(state=StateChoices.COMPLETED),
                include=["id"],
                name="scans_prov_ins_desc_idx",
            ),
        ]

    class JSONAPIMeta:
@@ -860,6 +887,10 @@ class ResourceFindingMapping(PostgresPartitionedModel, RowLevelSecurityProtected
                fields=["tenant_id", "finding_id"],
                name="rfm_tenant_finding_idx",
            ),
            models.Index(
                fields=["tenant_id", "resource_id"],
                name="rfm_tenant_resource_idx",
            ),
        ]
        constraints = [
            models.UniqueConstraint(
@@ -1324,7 +1355,7 @@ class ScanSummary(RowLevelSecurityProtectedModel):

class Integration(RowLevelSecurityProtectedModel):
    class IntegrationChoices(models.TextChoices):
        S3 = "amazon_s3", _("Amazon S3")
        AMAZON_S3 = "amazon_s3", _("Amazon S3")
        AWS_SECURITY_HUB = "aws_security_hub", _("AWS Security Hub")
        JIRA = "jira", _("JIRA")
        SLACK = "slack", _("Slack")
@@ -1726,6 +1757,10 @@ class LighthouseConfiguration(RowLevelSecurityProtectedModel):
        GPT_4O = "gpt-4o", _("GPT-4o Default")
        GPT_4O_MINI_2024_07_18 = "gpt-4o-mini-2024-07-18", _("GPT-4o Mini v2024-07-18")
        GPT_4O_MINI = "gpt-4o-mini", _("GPT-4o Mini Default")
        GPT_5_2025_08_07 = "gpt-5-2025-08-07", _("GPT-5 v2025-08-07")
        GPT_5 = "gpt-5", _("GPT-5 Default")
        GPT_5_MINI_2025_08_07 = "gpt-5-mini-2025-08-07", _("GPT-5 Mini v2025-08-07")
        GPT_5_MINI = "gpt-5-mini", _("GPT-5 Mini Default")

    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
@@ -0,0 +1,95 @@
def _pick_task_response_component(components):
    schemas = components.get("schemas", {}) or {}
    for candidate in ("TaskResponse",):
        if candidate in schemas:
            return candidate
    return None


def _extract_task_example_from_components(components):
    schemas = components.get("schemas", {}) or {}
    candidate = "TaskResponse"
    doc = schemas.get(candidate)
    if isinstance(doc, dict) and "example" in doc:
        return doc["example"]

    res = schemas.get(candidate)
    if isinstance(res, dict) and "example" in res:
        example = res["example"]
        return example if "data" in example else {"data": example}

    # Fallback
    return {
        "data": {
            "type": "tasks",
            "id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
            "attributes": {
                "inserted_at": "2019-08-24T14:15:22Z",
                "completed_at": "2019-08-24T14:15:22Z",
                "name": "string",
                "state": "available",
                "result": None,
                "task_args": None,
                "metadata": None,
            },
        }
    }


def attach_task_202_examples(result, generator, request, public):  # noqa: F841
    if not isinstance(result, dict):
        return result

    components = result.get("components", {}) or {}
    task_resp_component = _pick_task_response_component(components)
    task_example = _extract_task_example_from_components(components)

    paths = result.get("paths", {}) or {}
    for path_item in paths.values():
        if not isinstance(path_item, dict):
            continue

        for method_obj in path_item.values():
            if not isinstance(method_obj, dict):
                continue

            responses = method_obj.get("responses", {}) or {}
            resp_202 = responses.get("202")
            if not isinstance(resp_202, dict):
                continue

            content = resp_202.get("content", {}) or {}
            jsonapi = content.get("application/vnd.api+json")
            if not isinstance(jsonapi, dict):
                continue

            # Inject example if missing
            if "examples" not in jsonapi and "example" not in jsonapi:
                jsonapi["examples"] = {
                    "Task queued": {
                        "summary": "Task queued",
                        "value": task_example,
                    }
                }

            # Rewrite schema $ref if needed
            if task_resp_component:
                schema = jsonapi.get("schema")
                must_replace = False
                if not isinstance(schema, dict):
                    must_replace = True
                else:
                    ref = schema.get("$ref")
                    if not ref:
                        must_replace = True
                    else:
                        current = ref.split("/")[-1]
                        if current != task_resp_component:
                            must_replace = True

                if must_replace:
                    jsonapi["schema"] = {
                        "$ref": f"#/components/schemas/{task_resp_component}"
                    }

    return result
+1013
-307
File diff suppressed because it is too large
@@ -0,0 +1,152 @@
import os
from pathlib import Path
from unittest.mock import MagicMock

import pytest
from django.conf import settings

import api.apps as api_apps_module
from api.apps import (
    ApiConfig,
    PRIVATE_KEY_FILE,
    PUBLIC_KEY_FILE,
    SIGNING_KEY_ENV,
    VERIFYING_KEY_ENV,
)


@pytest.fixture(autouse=True)
def reset_keys_initialized(monkeypatch):
    """Ensure per-test clean state for the module-level guard flag."""
    monkeypatch.setattr(api_apps_module, "_keys_initialized", False, raising=False)


def _stub_keys():
    return (
        """-----BEGIN PRIVATE KEY-----\nPRIVATE\n-----END PRIVATE KEY-----\n""",
        """-----BEGIN PUBLIC KEY-----\nPUBLIC\n-----END PUBLIC KEY-----\n""",
    )


def test_generate_jwt_keys_when_missing(monkeypatch, tmp_path):
    # Arrange: isolate FS, env, and settings; force generation path
    monkeypatch.setattr(
        api_apps_module, "KEYS_DIRECTORY", Path(tmp_path), raising=False
    )
    monkeypatch.delenv(SIGNING_KEY_ENV, raising=False)
    monkeypatch.delenv(VERIFYING_KEY_ENV, raising=False)

    # Work on a copy of SIMPLE_JWT to avoid mutating the global settings dict for other tests
    monkeypatch.setattr(
        settings, "SIMPLE_JWT", settings.SIMPLE_JWT.copy(), raising=False
    )
    monkeypatch.setattr(settings, "TESTING", False, raising=False)

    # Avoid dependency on the cryptography package
    monkeypatch.setattr(ApiConfig, "_generate_jwt_keys", staticmethod(_stub_keys))

    config = ApiConfig("api", api_apps_module)

    # Act
    config._ensure_crypto_keys()

    # Assert: files created with expected content
    priv_path = Path(tmp_path) / PRIVATE_KEY_FILE
    pub_path = Path(tmp_path) / PUBLIC_KEY_FILE
    assert priv_path.is_file()
    assert pub_path.is_file()
    assert priv_path.read_text() == _stub_keys()[0]
    assert pub_path.read_text() == _stub_keys()[1]

    # Env vars and Django settings updated
    assert os.environ[SIGNING_KEY_ENV] == _stub_keys()[0]
    assert os.environ[VERIFYING_KEY_ENV] == _stub_keys()[1]
    assert settings.SIMPLE_JWT["SIGNING_KEY"] == _stub_keys()[0]
    assert settings.SIMPLE_JWT["VERIFYING_KEY"] == _stub_keys()[1]


def test_ensure_crypto_keys_are_idempotent_within_process(monkeypatch, tmp_path):
    # Arrange
    monkeypatch.setattr(
        api_apps_module, "KEYS_DIRECTORY", Path(tmp_path), raising=False
    )
    monkeypatch.delenv(SIGNING_KEY_ENV, raising=False)
    monkeypatch.delenv(VERIFYING_KEY_ENV, raising=False)
    monkeypatch.setattr(
        settings, "SIMPLE_JWT", settings.SIMPLE_JWT.copy(), raising=False
    )
    monkeypatch.setattr(settings, "TESTING", False, raising=False)

    mock_generate = MagicMock(side_effect=_stub_keys)
    monkeypatch.setattr(ApiConfig, "_generate_jwt_keys", staticmethod(mock_generate))

    config = ApiConfig("api", api_apps_module)

    # Act: first call should generate, second should be a no-op (guard flag)
    config._ensure_crypto_keys()
    config._ensure_crypto_keys()

    # Assert: generation occurred exactly once
    assert mock_generate.call_count == 1


def test_ensure_jwt_keys_uses_existing_files(monkeypatch, tmp_path):
    # Arrange: pre-create key files
    monkeypatch.setattr(
        api_apps_module, "KEYS_DIRECTORY", Path(tmp_path), raising=False
    )
    monkeypatch.setattr(
        settings, "SIMPLE_JWT", settings.SIMPLE_JWT.copy(), raising=False
    )

    existing_private, existing_public = _stub_keys()

    (Path(tmp_path) / PRIVATE_KEY_FILE).write_text(existing_private)
    (Path(tmp_path) / PUBLIC_KEY_FILE).write_text(existing_public)

    # If generation were called, fail the test
    def _fail_generate():
        raise AssertionError("_generate_jwt_keys should not be called when files exist")

    monkeypatch.setattr(ApiConfig, "_generate_jwt_keys", staticmethod(_fail_generate))

    config = ApiConfig("api", api_apps_module)

    # Act: call the lower-level method directly to set env/settings from files
    config._ensure_jwt_keys()

    # Assert
    # _read_key_file() strips trailing newlines; environment/settings should reflect stripped content
    assert os.environ[SIGNING_KEY_ENV] == existing_private.strip()
    assert os.environ[VERIFYING_KEY_ENV] == existing_public.strip()
    assert settings.SIMPLE_JWT["SIGNING_KEY"] == existing_private.strip()
    assert settings.SIMPLE_JWT["VERIFYING_KEY"] == existing_public.strip()


def test_ensure_crypto_keys_skips_when_env_vars(monkeypatch, tmp_path):
    # Arrange: put values in env so the orchestrator doesn't generate
    monkeypatch.setattr(
        api_apps_module, "KEYS_DIRECTORY", Path(tmp_path), raising=False
    )
    monkeypatch.setenv(SIGNING_KEY_ENV, "ENV-PRIVATE")
    monkeypatch.setenv(VERIFYING_KEY_ENV, "ENV-PUBLIC")
    monkeypatch.setattr(
        settings, "SIMPLE_JWT", settings.SIMPLE_JWT.copy(), raising=False
    )
    monkeypatch.setattr(settings, "TESTING", False, raising=False)

    called = {"ensure": False}

    def _track_call():
        called["ensure"] = True
        return _stub_keys()

    monkeypatch.setattr(ApiConfig, "_generate_jwt_keys", staticmethod(_track_call))

    config = ApiConfig("api", api_apps_module)

    # Act
    config._ensure_crypto_keys()

    # Assert: orchestrator did not trigger generation when env present
    assert called["ensure"] is False
@@ -239,6 +239,7 @@ class TestCompliance:
            Framework="Framework 1",
            Version="1.0",
            Description="Description of compliance1",
            Name="Compliance 1",
        )
        prowler_compliance = {"aws": {"compliance1": compliance1}}
@@ -248,6 +249,7 @@ class TestCompliance:
            "aws": {
                "compliance1": {
                    "framework": "Framework 1",
                    "name": "Compliance 1",
                    "version": "1.0",
                    "provider": "aws",
                    "description": "Description of compliance1",
@@ -1,3 +1,4 @@
import json
from unittest.mock import ANY, Mock, patch

import pytest
@@ -151,6 +152,221 @@ class TestUserViewSet:
        assert response.status_code == status.HTTP_200_OK
        assert response.json()["data"]["attributes"]["email"] == "rbac_limited@rbac.com"

    def test_me_shows_own_roles_and_memberships_without_manage_account(
        self, authenticated_client_no_permissions_rbac
    ):
        response = authenticated_client_no_permissions_rbac.get(reverse("user-me"))
        assert response.status_code == status.HTTP_200_OK

        rels = response.json()["data"]["relationships"]

        # Self should see own roles and memberships even without manage_account
        assert isinstance(rels["roles"]["data"], list)
        assert rels["memberships"]["meta"]["count"] == 1

    def test_me_shows_roles_and_memberships_with_manage_account(
        self, authenticated_client_rbac
    ):
        response = authenticated_client_rbac.get(reverse("user-me"))
        assert response.status_code == status.HTTP_200_OK

        rels = response.json()["data"]["relationships"]

        # Roles should have data when manage_account is True
        assert len(rels["roles"]["data"]) > 0

        # Memberships should be present and count > 0
        assert rels["memberships"]["meta"]["count"] > 0

    def test_me_include_roles_and_memberships_included_block(
        self, authenticated_client_rbac
    ):
        # Request current user info including roles and memberships
        response = authenticated_client_rbac.get(
            reverse("user-me"), {"include": "roles,memberships"}
        )
        assert response.status_code == status.HTTP_200_OK
        payload = response.json()

        # Included must contain memberships corresponding to relationships data
        rel_memberships = payload["data"]["relationships"]["memberships"]
        ids_in_relationship = {item["id"] for item in rel_memberships["data"]}

        included = payload["included"]
        included_membership_ids = {
            item["id"] for item in included if item["type"] == "memberships"
        }

        # If there are memberships in relationships, they must be present in included
        if ids_in_relationship:
            assert ids_in_relationship.issubset(included_membership_ids)
        else:
            # At minimum, included should contain the user's membership when requested
            # (count should align with meta count)
            assert rel_memberships["meta"]["count"] == len(included_membership_ids)

    def test_list_users_with_manage_account_only_forbidden(
        self, authenticated_client_rbac_manage_account
    ):
        response = authenticated_client_rbac_manage_account.get(reverse("user-list"))
        assert response.status_code == status.HTTP_403_FORBIDDEN

    def test_retrieve_other_user_with_manage_account_only_forbidden(
        self, authenticated_client_rbac_manage_account, create_test_user
    ):
        response = authenticated_client_rbac_manage_account.get(
            reverse("user-detail", kwargs={"pk": create_test_user.id})
        )
        assert response.status_code == status.HTTP_403_FORBIDDEN

    def test_list_users_with_manage_users_only_hides_relationships(
        self, authenticated_client_rbac_manage_users_only
    ):
        # Ensure there is at least one other user in the same tenant
        mu_user = authenticated_client_rbac_manage_users_only.user
        mu_membership = Membership.objects.filter(user=mu_user).first()
        tenant = mu_membership.tenant

        other_user = User.objects.create_user(
            name="other_in_tenant",
            email="other_in_tenant@rbac.com",
            password="Password123@",
        )
        Membership.objects.create(user=other_user, tenant=tenant)

        response = authenticated_client_rbac_manage_users_only.get(reverse("user-list"))
        assert response.status_code == status.HTTP_200_OK
        data = response.json()["data"]
        assert isinstance(data, list)

        current_user_id = str(mu_user.id)
        assert any(item["id"] == current_user_id for item in data)

        for item in data:
            rels = item["relationships"]
            if item["id"] == current_user_id:
                # Self should see own relationships
                assert isinstance(rels["roles"]["data"], list)
                assert rels["memberships"]["meta"].get("count", 0) >= 1
            else:
                # Others should be hidden without manage_account
                assert rels["roles"]["data"] == []
                assert rels["memberships"]["data"] == []
                assert rels["memberships"]["meta"]["count"] == 0

    def test_include_roles_hidden_without_manage_account(
        self, authenticated_client_rbac_manage_users_only
    ):
        # Arrange: ensure another user in the same tenant with its own role
        mu_user = authenticated_client_rbac_manage_users_only.user
        mu_membership = Membership.objects.filter(user=mu_user).first()
        tenant = mu_membership.tenant

        other_user = User.objects.create_user(
            name="other_in_tenant_inc",
            email="other_in_tenant_inc@rbac.com",
            password="Password123@",
        )
        Membership.objects.create(user=other_user, tenant=tenant)
        other_role = Role.objects.create(
            name="other_inc_role",
            tenant_id=tenant.id,
            manage_users=False,
            manage_account=False,
        )
        UserRoleRelationship.objects.create(
            user=other_user, role=other_role, tenant_id=tenant.id
        )

        response = authenticated_client_rbac_manage_users_only.get(
            reverse("user-list"), {"include": "roles"}
        )
        assert response.status_code == status.HTTP_200_OK
        payload = response.json()

        # Assert: included must not contain the other user's role
        included = payload.get("included", [])
        included_role_ids = {
            item["id"] for item in included if item.get("type") == "roles"
        }
        assert str(other_role.id) not in included_role_ids

        # Relationships for other user should be empty
        for item in payload["data"]:
            if item["id"] == str(other_user.id):
                rels = item["relationships"]
                assert rels["roles"]["data"] == []

    def test_include_roles_visible_with_manage_account(
        self, authenticated_client_rbac, tenants_fixture
    ):
        # Arrange: another user in tenant[0] with its role
        tenant = tenants_fixture[0]
        other_user = User.objects.create_user(
            name="other_with_role",
            email="other_with_role@rbac.com",
            password="Password123@",
        )
        Membership.objects.create(user=other_user, tenant=tenant)
        other_role = Role.objects.create(
            name="other_visible_role",
            tenant_id=tenant.id,
            manage_users=False,
            manage_account=False,
        )
        UserRoleRelationship.objects.create(
            user=other_user, role=other_role, tenant_id=tenant.id
        )

        response = authenticated_client_rbac.get(
            reverse("user-list"), {"include": "roles"}
        )
        assert response.status_code == status.HTTP_200_OK
        payload = response.json()

        # Assert: included must contain the other user's role
        included = payload.get("included", [])
        included_role_ids = {
            item["id"] for item in included if item.get("type") == "roles"
        }
        assert str(other_role.id) in included_role_ids

    def test_retrieve_user_with_manage_users_only_hides_relationships(
        self, authenticated_client_rbac_manage_users_only
    ):
        # Create a target user in the same tenant to ensure visibility
        mu_user = authenticated_client_rbac_manage_users_only.user
        mu_membership = Membership.objects.filter(user=mu_user).first()
        tenant = mu_membership.tenant

        target_user = User.objects.create_user(
            name="target_same_tenant",
            email="target_same_tenant@rbac.com",
            password="Password123@",
        )
        Membership.objects.create(user=target_user, tenant=tenant)

        response = authenticated_client_rbac_manage_users_only.get(
            reverse("user-detail", kwargs={"pk": target_user.id})
        )
        assert response.status_code == status.HTTP_200_OK
        rels = response.json()["data"]["relationships"]
        assert rels["roles"]["data"] == []
        assert rels["memberships"]["data"] == []
        assert rels["memberships"]["meta"]["count"] == 0

    def test_list_users_with_all_permissions_shows_relationships(
        self, authenticated_client_rbac
    ):
        response = authenticated_client_rbac.get(reverse("user-list"))
        assert response.status_code == status.HTTP_200_OK
        data = response.json()["data"]
        assert isinstance(data, list)

        rels = data[0]["relationships"]
        assert len(rels["roles"]["data"]) >= 0
        assert rels["memberships"]["meta"]["count"] >= 0


@pytest.mark.django_db
class TestProviderViewSet:
@@ -494,3 +710,123 @@ class TestLimitedVisibility:

        assert response.status_code == status.HTTP_200_OK
        assert len(response.json()["data"]) == 0


@pytest.mark.django_db
class TestRolePermissions:
    def test_role_create_with_manage_account_only_allowed(
        self, authenticated_client_rbac_manage_account
    ):
        data = {
            "data": {
                "type": "roles",
                "attributes": {
                    "name": "Role Manage Account Only",
                    "manage_users": "false",
                    "manage_account": "true",
                    "manage_providers": "false",
                    "manage_scans": "false",
                    "unlimited_visibility": "false",
                },
                "relationships": {"provider_groups": {"data": []}},
            }
        }
        response = authenticated_client_rbac_manage_account.post(
            reverse("role-list"),
            data=json.dumps(data),
            content_type="application/vnd.api+json",
        )
        assert response.status_code == status.HTTP_201_CREATED

    def test_role_create_with_manage_users_only_forbidden(
        self, authenticated_client_rbac_manage_users_only
    ):
        data = {
            "data": {
                "type": "roles",
                "attributes": {
                    "name": "Role Manage Users Only",
                    "manage_users": "true",
                    "manage_account": "false",
                    "manage_providers": "false",
                    "manage_scans": "false",
                    "unlimited_visibility": "false",
                },
                "relationships": {"provider_groups": {"data": []}},
            }
        }
        response = authenticated_client_rbac_manage_users_only.post(
            reverse("role-list"),
            data=json.dumps(data),
            content_type="application/vnd.api+json",
        )
        assert response.status_code == status.HTTP_403_FORBIDDEN


@pytest.mark.django_db
class TestUserRoleLinkPermissions:
    def test_link_user_roles_with_manage_account_only_allowed(
        self, authenticated_client_rbac_manage_account
    ):
        # Arrange: create a second user in the same tenant as the manage_account user
        ma_user = authenticated_client_rbac_manage_account.user
        ma_membership = Membership.objects.filter(user=ma_user).first()
        tenant = ma_membership.tenant

        user2 = User.objects.create_user(
            name="target_user",
            email="target_user_ma@rbac.com",
            password="Password123@",
        )
        Membership.objects.create(user=user2, tenant=tenant)

        # Create a role in the same tenant
        role = Role.objects.create(
            name="linkable_role",
            tenant_id=tenant.id,
            manage_users=False,
            manage_account=False,
        )

        data = {"data": [{"type": "roles", "id": str(role.id)}]}

        # Act
        response = authenticated_client_rbac_manage_account.post(
            reverse("user-roles-relationship", kwargs={"pk": user2.id}),
            data=data,
            content_type="application/vnd.api+json",
        )

        # Assert
        assert response.status_code == status.HTTP_204_NO_CONTENT

    def test_link_user_roles_with_manage_users_only_forbidden(
        self, authenticated_client_rbac_manage_users_only
    ):
        mu_user = authenticated_client_rbac_manage_users_only.user
        mu_membership = Membership.objects.filter(user=mu_user).first()
        tenant = mu_membership.tenant

        user2 = User.objects.create_user(
            name="target_user2",
            email="target_user_mu@rbac.com",
            password="Password123@",
        )
        Membership.objects.create(user=user2, tenant=tenant)

        role = Role.objects.create(
            name="linkable_role_mu",
            tenant_id=tenant.id,
            manage_users=False,
            manage_account=False,
        )

        data = {"data": [{"type": "roles", "id": str(role.id)}]}

        response = authenticated_client_rbac_manage_users_only.post(
            reverse("user-roles-relationship", kwargs={"pk": user2.id}),
            data=data,
            content_type="application/vnd.api+json",
        )

        assert response.status_code == status.HTTP_403_FORBIDDEN
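
Note: the RBAC client fixtures used above (`authenticated_client_rbac_manage_account`, `authenticated_client_rbac_manage_users_only`) are defined in a conftest that is not part of this diff. A minimal sketch of what such a fixture could look like, assuming DRF's APIClient, pytest-django's `django_user_model`, and the models used throughout this suite; the names and authentication shortcut here are illustrative, not the repository's actual conftest:

# Illustrative sketch only: the real fixtures live in conftest.py, outside this diff.
import pytest
from rest_framework.test import APIClient

from api.models import Membership, Role, UserRoleRelationship


@pytest.fixture
def authenticated_client_rbac_manage_account(django_user_model, tenants_fixture):
    tenant = tenants_fixture[0]
    user = django_user_model.objects.create_user(
        name="ma_only", email="ma_only@rbac.com", password="Password123@"
    )
    Membership.objects.create(user=user, tenant=tenant)
    role = Role.objects.create(
        name="manage_account_only", tenant_id=tenant.id, manage_account=True
    )
    UserRoleRelationship.objects.create(user=user, role=role, tenant_id=tenant.id)
    client = APIClient()
    client.force_authenticate(user=user)  # shortcut; the real suite issues JWTs
    client.user = user  # the tests above read the acting user off the client
    return client
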
@@ -0,0 +1,100 @@
import pytest
from rest_framework.exceptions import ValidationError

from api.v1.serializer_utils.integrations import S3ConfigSerializer


class TestS3ConfigSerializer:
    """Test cases for S3ConfigSerializer validation."""

    def test_validate_output_directory_valid_paths(self):
        """Test that valid output directory paths are accepted."""
        serializer = S3ConfigSerializer()

        # Test normal paths
        assert serializer.validate_output_directory("test") == "test"
        assert serializer.validate_output_directory("test/folder") == "test/folder"
        assert serializer.validate_output_directory("my-folder_123") == "my-folder_123"

        # Test paths with leading slashes (should be normalized)
        assert serializer.validate_output_directory("/test") == "test"
        assert serializer.validate_output_directory("/test/folder") == "test/folder"

        # Test paths with excessive slashes (should be normalized)
        assert serializer.validate_output_directory("///test") == "test"
        assert serializer.validate_output_directory("///////test") == "test"
        assert serializer.validate_output_directory("test//folder") == "test/folder"
        assert serializer.validate_output_directory("test///folder") == "test/folder"

    def test_validate_output_directory_empty_values(self):
        """Test that empty values raise validation errors."""
        serializer = S3ConfigSerializer()

        with pytest.raises(
            ValidationError, match="Output directory cannot be empty or just"
        ):
            serializer.validate_output_directory(".")

        with pytest.raises(
            ValidationError, match="Output directory cannot be empty or just"
        ):
            serializer.validate_output_directory("/")

    def test_validate_output_directory_invalid_characters(self):
        """Test that invalid characters are rejected."""
        serializer = S3ConfigSerializer()

        invalid_chars = ["<", ">", ":", '"', "|", "?", "*"]

        for char in invalid_chars:
            with pytest.raises(
                ValidationError, match="Output directory contains invalid characters"
            ):
                serializer.validate_output_directory(f"test{char}folder")

    def test_validate_output_directory_too_long(self):
        """Test that paths that are too long are rejected."""
        serializer = S3ConfigSerializer()

        # Create a path longer than 900 characters
        long_path = "a" * 901

        with pytest.raises(ValidationError, match="Output directory path is too long"):
            serializer.validate_output_directory(long_path)

    def test_validate_output_directory_edge_cases(self):
        """Test edge cases for output directory validation."""
        serializer = S3ConfigSerializer()

        # Test path at the limit (900 characters)
        path_at_limit = "a" * 900
        assert serializer.validate_output_directory(path_at_limit) == path_at_limit

        # Test complex normalization
        assert serializer.validate_output_directory("//test/../folder//") == "folder"
        assert serializer.validate_output_directory("/test/./folder/") == "test/folder"

    def test_s3_config_serializer_full_validation(self):
        """Test the full S3ConfigSerializer with valid data."""
        data = {
            "bucket_name": "my-test-bucket",
            "output_directory": "///////test",  # This should be normalized
        }

        serializer = S3ConfigSerializer(data=data)
        assert serializer.is_valid()

        validated_data = serializer.validated_data
        assert validated_data["bucket_name"] == "my-test-bucket"
        assert validated_data["output_directory"] == "test"  # Normalized

    def test_s3_config_serializer_invalid_data(self):
        """Test the full S3ConfigSerializer with invalid data."""
        data = {
            "bucket_name": "my-test-bucket",
            "output_directory": "test<invalid",  # Contains invalid character
        }

        serializer = S3ConfigSerializer(data=data)
        assert not serializer.is_valid()
        assert "output_directory" in serializer.errors
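
For reference, the normalization these tests exercise reduces to `os.path.normpath` plus stripping leading slashes, as implemented in `S3ConfigSerializer.validate_output_directory` later in this diff. A standalone sketch of that pipeline (POSIX path semantics assumed; `os.path.normpath` behaves differently on Windows, so `posixpath` is used here to pin the behavior):

import posixpath  # pins POSIX semantics instead of the platform-dependent os.path


def normalize_s3_prefix(value: str) -> str:
    # "///////test"        -> "/test"    -> "test"
    # "//test/../folder//" -> "//folder" -> "folder"  (POSIX keeps exactly two leading slashes)
    return posixpath.normpath(value).lstrip("/")


assert normalize_s3_prefix("///////test") == "test"
assert normalize_s3_prefix("//test/../folder//") == "folder"
assert normalize_s3_prefix("/test/./folder/") == "test/folder"
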
@@ -6,16 +6,18 @@ from rest_framework.exceptions import NotFound, ValidationError

from api.db_router import MainRouter
from api.exceptions import InvitationTokenExpiredException
from api.models import Invitation, Provider
from api.models import Integration, Invitation, Provider
from api.utils import (
    get_prowler_provider_kwargs,
    initialize_prowler_provider,
    merge_dicts,
    prowler_integration_connection_test,
    prowler_provider_connection_test,
    return_prowler_provider,
    validate_invitation,
)
from prowler.providers.aws.aws_provider import AwsProvider
from prowler.providers.aws.lib.security_hub.security_hub import SecurityHubConnection
from prowler.providers.azure.azure_provider import AzureProvider
from prowler.providers.gcp.gcp_provider import GcpProvider
from prowler.providers.kubernetes.kubernetes_provider import KubernetesProvider
@@ -197,6 +199,10 @@ class TestGetProwlerProviderKwargs:
                Provider.ProviderChoices.M365.value,
                {},
            ),
            (
                Provider.ProviderChoices.GITHUB.value,
                {"organizations": ["provider_uid"]},
            ),
        ],
    )
    def test_get_prowler_provider_kwargs(self, provider_type, expected_extra_kwargs):
@@ -390,3 +396,258 @@ class TestValidateInvitation:
        mock_db.get.assert_called_once_with(
            token="VALID_TOKEN", email__iexact="user@example.com"
        )


class TestProwlerIntegrationConnectionTest:
    """Test prowler_integration_connection_test function for SecurityHub regions reset."""

    @patch("api.utils.SecurityHub")
    def test_security_hub_connection_failure_resets_regions(
        self, mock_security_hub_class
    ):
        """Test that SecurityHub connection failure resets regions to empty dict."""
        # Create integration with existing regions configuration
        integration = MagicMock()
        integration.integration_type = Integration.IntegrationChoices.AWS_SECURITY_HUB
        integration.credentials = {
            "aws_access_key_id": "test_key",
            "aws_secret_access_key": "test_secret",
        }
        integration.configuration = {
            "send_only_fails": True,
            "regions": {
                "us-east-1": True,
                "us-west-2": True,
                "eu-west-1": False,
                "ap-south-1": False,
            },
        }

        # Mock provider relationship
        mock_provider = MagicMock()
        mock_provider.uid = "123456789012"
        mock_relationship = MagicMock()
        mock_relationship.provider = mock_provider
        integration.integrationproviderrelationship_set.first.return_value = (
            mock_relationship
        )

        # Mock failed SecurityHub connection
        mock_connection = SecurityHubConnection(
            is_connected=False,
            error=Exception("SecurityHub testing"),
            enabled_regions=set(),
            disabled_regions=set(),
        )
        mock_security_hub_class.test_connection.return_value = mock_connection

        # Call the function
        result = prowler_integration_connection_test(integration)

        # Assertions
        assert result.is_connected is False
        assert str(result.error) == "SecurityHub testing"

        # Verify regions were completely reset to empty dict
        assert integration.configuration["regions"] == {}

        # Verify save was called to persist the change
        integration.save.assert_called_once()

        # Verify test_connection was called with correct parameters
        mock_security_hub_class.test_connection.assert_called_once_with(
            aws_account_id="123456789012",
            raise_on_exception=False,
            aws_access_key_id="test_key",
            aws_secret_access_key="test_secret",
        )

    @patch("api.utils.SecurityHub")
    def test_security_hub_connection_success_saves_regions(
        self, mock_security_hub_class
    ):
        """Test that successful SecurityHub connection saves regions correctly."""
        integration = MagicMock()
        integration.integration_type = Integration.IntegrationChoices.AWS_SECURITY_HUB
        integration.credentials = {
            "aws_access_key_id": "valid_key",
            "aws_secret_access_key": "valid_secret",
        }
        integration.configuration = {"send_only_fails": False}

        # Mock provider relationship
        mock_provider = MagicMock()
        mock_provider.uid = "123456789012"
        mock_relationship = MagicMock()
        mock_relationship.provider = mock_provider
        integration.integrationproviderrelationship_set.first.return_value = (
            mock_relationship
        )

        # Mock successful SecurityHub connection with regions
        mock_connection = SecurityHubConnection(
            is_connected=True,
            error=None,
            enabled_regions={"us-east-1", "eu-west-1"},
            disabled_regions={"ap-south-1"},
        )
        mock_security_hub_class.test_connection.return_value = mock_connection

        result = prowler_integration_connection_test(integration)

        assert result.is_connected is True

        # Verify regions were saved correctly
        assert integration.configuration["regions"]["us-east-1"] is True
        assert integration.configuration["regions"]["eu-west-1"] is True
        assert integration.configuration["regions"]["ap-south-1"] is False
        integration.save.assert_called_once()

    @patch("api.utils.rls_transaction")
    @patch("api.utils.Jira")
    def test_jira_connection_success_basic_auth(
        self, mock_jira_class, mock_rls_transaction
    ):
        integration = MagicMock()
        integration.integration_type = Integration.IntegrationChoices.JIRA
        integration.tenant_id = "test-tenant-id"
        integration.credentials = {
            "user_mail": "test@example.com",
            "api_token": "test_api_token",
            "domain": "example.atlassian.net",
        }
        integration.configuration = {}

        # Mock successful JIRA connection with projects
        mock_connection = MagicMock()
        mock_connection.is_connected = True
        mock_connection.error = None
        mock_connection.projects = {"PROJ1": "Project 1", "PROJ2": "Project 2"}
        mock_jira_class.test_connection.return_value = mock_connection

        # Mock rls_transaction context manager
        mock_rls_transaction.return_value.__enter__ = MagicMock()
        mock_rls_transaction.return_value.__exit__ = MagicMock()

        result = prowler_integration_connection_test(integration)

        assert result.is_connected is True
        assert result.error is None

        # Verify JIRA connection was called with correct parameters including domain from credentials
        mock_jira_class.test_connection.assert_called_once_with(
            user_mail="test@example.com",
            api_token="test_api_token",
            domain="example.atlassian.net",
            raise_on_exception=False,
        )

        # Verify rls_transaction was called with correct tenant_id
        mock_rls_transaction.assert_called_once_with("test-tenant-id")

        # Verify projects were saved to integration configuration
        assert integration.configuration["projects"] == {
            "PROJ1": "Project 1",
            "PROJ2": "Project 2",
        }

        # Verify integration.save() was called
        integration.save.assert_called_once()

    @patch("api.utils.rls_transaction")
    @patch("api.utils.Jira")
    def test_jira_connection_failure_invalid_credentials(
        self, mock_jira_class, mock_rls_transaction
    ):
        integration = MagicMock()
        integration.integration_type = Integration.IntegrationChoices.JIRA
        integration.tenant_id = "test-tenant-id"
        integration.credentials = {
            "user_mail": "invalid@example.com",
            "api_token": "invalid_token",
            "domain": "invalid.atlassian.net",
        }
        integration.configuration = {}

        # Mock failed JIRA connection
        mock_connection = MagicMock()
        mock_connection.is_connected = False
        mock_connection.error = Exception("Authentication failed: Invalid credentials")
        mock_connection.projects = {}  # Empty projects when connection fails
        mock_jira_class.test_connection.return_value = mock_connection

        # Mock rls_transaction context manager
        mock_rls_transaction.return_value.__enter__ = MagicMock()
        mock_rls_transaction.return_value.__exit__ = MagicMock()

        result = prowler_integration_connection_test(integration)

        assert result.is_connected is False
        assert "Authentication failed: Invalid credentials" in str(result.error)

        # Verify JIRA connection was called with correct parameters
        mock_jira_class.test_connection.assert_called_once_with(
            user_mail="invalid@example.com",
            api_token="invalid_token",
            domain="invalid.atlassian.net",
            raise_on_exception=False,
        )

        # Verify rls_transaction was called even on failure
        mock_rls_transaction.assert_called_once_with("test-tenant-id")

        # Verify empty projects dict was saved to integration configuration
        assert integration.configuration["projects"] == {}

        # Verify integration.save() was called even on connection failure
        integration.save.assert_called_once()

    @patch("api.utils.rls_transaction")
    @patch("api.utils.Jira")
    def test_jira_connection_projects_update_with_existing_configuration(
        self, mock_jira_class, mock_rls_transaction
    ):
        """Test that projects are properly updated when integration already has configuration data"""
        integration = MagicMock()
        integration.integration_type = Integration.IntegrationChoices.JIRA
        integration.tenant_id = "test-tenant-id"
        integration.credentials = {
            "user_mail": "test@example.com",
            "api_token": "test_api_token",
            "domain": "example.atlassian.net",
        }
        integration.configuration = {
            "issue_types": ["Task"],  # Existing configuration
            "projects": {"OLD_PROJ": "Old Project"},  # Will be overwritten
        }

        # Mock successful JIRA connection with new projects
        mock_connection = MagicMock()
        mock_connection.is_connected = True
        mock_connection.error = None
        mock_connection.projects = {
            "NEW_PROJ1": "New Project 1",
            "NEW_PROJ2": "New Project 2",
        }
        mock_jira_class.test_connection.return_value = mock_connection

        # Mock rls_transaction context manager
        mock_rls_transaction.return_value.__enter__ = MagicMock()
        mock_rls_transaction.return_value.__exit__ = MagicMock()

        result = prowler_integration_connection_test(integration)

        assert result.is_connected is True
        assert result.error is None

        # Verify projects were updated (old projects replaced with new ones)
        assert integration.configuration["projects"] == {
            "NEW_PROJ1": "New Project 1",
            "NEW_PROJ2": "New Project 2",
        }

        # Verify other configuration fields were preserved
        assert integration.configuration["issue_types"] == ["Task"]

        # Verify integration.save() was called
        integration.save.assert_called_once()
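
The regions bookkeeping asserted above comes down to folding the connection's enabled/disabled region sets into one status map, mirroring the `regions_status` logic added to `api/utils.py` later in this diff. A minimal standalone sketch:

def build_regions_status(enabled_regions: set, disabled_regions: set) -> dict:
    # Enabled regions map to True, disabled ones to False; a failed connection
    # is represented by resetting the map to {} instead of updating it.
    regions_status = {region: True for region in enabled_regions}
    regions_status.update({region: False for region in disabled_regions})
    return regions_status


assert build_regions_status({"us-east-1", "eu-west-1"}, {"ap-south-1"}) == {
    "us-east-1": True,
    "eu-west-1": True,
    "ap-south-1": False,
}
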
@@ -51,6 +51,7 @@ from api.models import (
    UserRoleRelationship,
)
from api.rls import Tenant
from api.v1.serializers import TokenSerializer
from api.v1.views import ComplianceOverviewViewSet, TenantFinishACSView


@@ -966,6 +967,31 @@ class TestProviderViewSet:
                    "uid": "subdomain1.subdomain2.subdomain3.subdomain4.domain.net",
                    "alias": "test",
                },
                {
                    "provider": "github",
                    "uid": "test-user",
                    "alias": "test",
                },
                {
                    "provider": "github",
                    "uid": "test-organization",
                    "alias": "GitHub Org",
                },
                {
                    "provider": "github",
                    "uid": "prowler-cloud",
                    "alias": "Prowler",
                },
                {
                    "provider": "github",
                    "uid": "microsoft",
                    "alias": "Microsoft",
                },
                {
                    "provider": "github",
                    "uid": "a12345678901234567890123456789012345678",
                    "alias": "Long Username",
                },
            ]
        ),
    )
@@ -1079,6 +1105,42 @@ class TestProviderViewSet:
                    "m365-uid",
                    "uid",
                ),
                (
                    {
                        "provider": "github",
                        "uid": "-invalid-start",
                        "alias": "test",
                    },
                    "github-uid",
                    "uid",
                ),
                (
                    {
                        "provider": "github",
                        "uid": "invalid@username",
                        "alias": "test",
                    },
                    "github-uid",
                    "uid",
                ),
                (
                    {
                        "provider": "github",
                        "uid": "invalid_username",
                        "alias": "test",
                    },
                    "github-uid",
                    "uid",
                ),
                (
                    {
                        "provider": "github",
                        "uid": "a" * 40,
                        "alias": "test",
                    },
                    "github-uid",
                    "uid",
                ),
            ]
        ),
    )
@@ -4659,6 +4721,36 @@ class TestRoleViewSet:
        assert role.users.count() == 0
        assert role.provider_groups.count() == 0

    def test_cannot_remove_own_assignment_via_role_update(
        self, authenticated_client, roles_fixture
    ):
        role = roles_fixture[0]
        # Ensure the authenticated user is assigned to this role
        user = User.objects.get(email=TEST_USER)
        if not UserRoleRelationship.objects.filter(user=user, role=role).exists():
            UserRoleRelationship.objects.create(
                user=user, role=role, tenant_id=role.tenant_id
            )

        # Attempt to update role users to exclude the current user
        data = {
            "data": {
                "id": str(role.id),
                "type": "roles",
                "relationships": {"users": {"data": []}},
            }
        }
        response = authenticated_client.patch(
            reverse("role-detail", kwargs={"pk": role.id}),
            data=json.dumps(data),
            content_type="application/vnd.api+json",
        )
        assert response.status_code == status.HTTP_400_BAD_REQUEST
        assert (
            "cannot remove their own role"
            in response.json()["errors"][0]["detail"].lower()
        )

    def test_role_create_with_invalid_user_relationship(
        self, authenticated_client, provider_groups_fixture
    ):
@@ -4780,15 +4872,134 @@ class TestUserRoleRelationshipViewSet:
            roles_fixture[2].id,
        }

    def test_destroy_relationship(
        self, authenticated_client, roles_fixture, create_test_user
    def test_destroy_relationship_other_user(
        self, authenticated_client, roles_fixture, create_test_user, tenants_fixture
    ):
        # Create another user in same tenant and assign a role
        tenant = tenants_fixture[0]
        other_user = User.objects.create_user(
            name="other",
            email="other_user@prowler.com",
            password="TmpPass123@",
        )
        Membership.objects.create(user=other_user, tenant=tenant)
        UserRoleRelationship.objects.create(
            user=other_user, role=roles_fixture[0], tenant_id=tenant.id
        )

        # Delete roles for the other user (allowed)
        response = authenticated_client.delete(
            reverse("user-roles-relationship", kwargs={"pk": other_user.id}),
        )
        assert response.status_code == status.HTTP_204_NO_CONTENT
        relationships = UserRoleRelationship.objects.filter(user=other_user.id)
        assert relationships.count() == 0

    def test_cannot_delete_own_roles(self, authenticated_client, create_test_user):
        # Attempt to delete own roles should be forbidden
        response = authenticated_client.delete(
            reverse("user-roles-relationship", kwargs={"pk": create_test_user.id}),
        )
        assert response.status_code == status.HTTP_400_BAD_REQUEST

    def test_prevent_removing_last_manage_account_on_patch(
        self, authenticated_client, roles_fixture, create_test_user, tenants_fixture
    ):
        # roles_fixture[1] has manage_account=False
        limited_role = roles_fixture[1]

        # Ensure there is no other user with MANAGE_ACCOUNT in the tenant
        tenant = tenants_fixture[0]
        # Create a secondary user without MANAGE_ACCOUNT
        user2 = User.objects.create_user(
            name="limited_user",
            email="limited_user@prowler.com",
            password="TmpPass123@",
        )
        Membership.objects.create(user=user2, tenant=tenant)
        UserRoleRelationship.objects.create(
            user=user2, role=limited_role, tenant_id=tenant.id
        )

        # Attempt to switch the only MANAGE_ACCOUNT user to a role without it
        data = {"data": [{"type": "roles", "id": str(limited_role.id)}]}
        response = authenticated_client.patch(
            reverse("user-roles-relationship", kwargs={"pk": create_test_user.id}),
            data=data,
            content_type="application/vnd.api+json",
        )
        assert response.status_code == status.HTTP_400_BAD_REQUEST
        assert "MANAGE_ACCOUNT" in response.json()["errors"][0]["detail"]

    def test_allow_role_change_when_other_user_has_manage_account_on_patch(
        self, authenticated_client, roles_fixture, create_test_user, tenants_fixture
    ):
        # roles_fixture[1] has manage_account=False, roles_fixture[0] has manage_account=True
        limited_role = roles_fixture[1]
        ma_role = roles_fixture[0]

        tenant = tenants_fixture[0]
        # Create another user with MANAGE_ACCOUNT
        user2 = User.objects.create_user(
            name="ma_user",
            email="ma_user@prowler.com",
            password="TmpPass123@",
        )
        Membership.objects.create(user=user2, tenant=tenant)
        UserRoleRelationship.objects.create(
            user=user2, role=ma_role, tenant_id=tenant.id
        )

        # Now changing the first user's roles to a non-MA role should succeed
        data = {"data": [{"type": "roles", "id": str(limited_role.id)}]}
        response = authenticated_client.patch(
            reverse("user-roles-relationship", kwargs={"pk": create_test_user.id}),
            data=data,
            content_type="application/vnd.api+json",
        )
        assert response.status_code == status.HTTP_204_NO_CONTENT
        relationships = UserRoleRelationship.objects.filter(role=roles_fixture[0].id)
        assert relationships.count() == 0

    def test_role_destroy_only_manage_account_blocked(
        self, authenticated_client, tenants_fixture
    ):
        # Use a tenant without default admin role (tenant3)
        tenant = tenants_fixture[2]
        user = User.objects.get(email=TEST_USER)
        # Add membership for this tenant
        Membership.objects.create(user=user, tenant=tenant)

        # Create a single MANAGE_ACCOUNT role in this tenant
        only_role = Role.objects.create(
            name="only_ma",
            tenant=tenant,
            manage_users=True,
            manage_account=True,
            manage_billing=False,
            manage_providers=False,
            manage_integrations=False,
            manage_scans=False,
            unlimited_visibility=False,
        )

        # Switch token to this tenant
        serializer = TokenSerializer(
            data={
                "type": "tokens",
                "email": TEST_USER,
                "password": TEST_PASSWORD,
                "tenant_id": str(tenant.id),
            }
        )
        serializer.is_valid(raise_exception=True)
        access_token = serializer.validated_data["access"]
        authenticated_client.defaults["HTTP_AUTHORIZATION"] = f"Bearer {access_token}"

        # Attempt to delete the only MANAGE_ACCOUNT role
        response = authenticated_client.delete(
            reverse("role-detail", kwargs={"pk": only_role.id})
        )
        assert response.status_code == status.HTTP_400_BAD_REQUEST
        assert Role.objects.filter(id=only_role.id).exists()

    def test_invalid_provider_group_id(self, authenticated_client, create_test_user):
        invalid_id = "non-existent-id"
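
The invariant behind the tests above — a tenant must never lose its last MANAGE_ACCOUNT holder — is enforced in `UserRoleRelationshipSerializer` further down in this diff. Its core check can be read as a standalone predicate; this is a sketch whose function name is illustrative, with the queryset shapes taken from the serializer code below:

from api.models import UserRoleRelationship


def removing_last_manage_account(tenant_id, user_id, target_roles) -> bool:
    # True when the update would strip MANAGE_ACCOUNT from the only user holding it.
    if target_roles.filter(manage_account=True).exists():
        return False  # the user keeps MANAGE_ACCOUNT, nothing to guard
    others_hold_it = (
        UserRoleRelationship.objects.filter(
            tenant_id=tenant_id, role__manage_account=True
        )
        .exclude(user_id=user_id)
        .exists()
    )
    user_holds_it = UserRoleRelationship.objects.filter(
        user_id=user_id, tenant_id=tenant_id, role__manage_account=True
    ).exists()
    return user_holds_it and not others_hold_it
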
@@ -5580,7 +5791,7 @@ class TestIntegrationViewSet:
        [
            # Amazon S3 - AWS credentials
            (
                Integration.IntegrationChoices.S3,
                Integration.IntegrationChoices.AMAZON_S3,
                {
                    "bucket_name": "bucket-name",
                    "output_directory": "output-directory",
@@ -5592,7 +5803,7 @@ class TestIntegrationViewSet:
            ),
            # Amazon S3 - No credentials (AWS self-hosted)
            (
                Integration.IntegrationChoices.S3,
                Integration.IntegrationChoices.AMAZON_S3,
                {
                    "bucket_name": "bucket-name",
                    "output_directory": "output-directory",
@@ -5618,6 +5829,7 @@ class TestIntegrationViewSet:
                    "integration_type": integration_type,
                    "configuration": configuration,
                    "credentials": credentials,
                    "enabled": True,
                },
                "relationships": {
                    "providers": {
@@ -5635,6 +5847,7 @@ class TestIntegrationViewSet:
        assert Integration.objects.count() == 1
        integration = Integration.objects.first()
        assert integration.configuration == data["data"]["attributes"]["configuration"]
        assert integration.enabled == data["data"]["attributes"]["enabled"]
        assert (
            integration.integration_type
            == data["data"]["attributes"]["integration_type"]
@@ -5645,6 +5858,47 @@ class TestIntegrationViewSet:
            == data["data"]["relationships"]["providers"]["data"][0]["id"]
        )

    def test_integrations_create_valid_jira(
        self,
        authenticated_client,
    ):
        """Jira integrations are special"""
        data = {
            "data": {
                "type": "integrations",
                "attributes": {
                    "integration_type": Integration.IntegrationChoices.JIRA,
                    "configuration": {},
                    "credentials": {
                        "domain": "prowlerdomain",
                        "api_token": "this-is-an-api-token-for-jira-that-works-for-sure",
                        "user_mail": "testing@prowler.com",
                    },
                    "enabled": True,
                },
            }
        }
        response = authenticated_client.post(
            reverse("integration-list"),
            data=json.dumps(data),
            content_type="application/vnd.api+json",
        )
        assert response.status_code == status.HTTP_201_CREATED
        assert Integration.objects.count() == 1
        integration = Integration.objects.first()
        integration_configuration = response.json()["data"]["attributes"][
            "configuration"
        ]
        assert "projects" in integration_configuration
        assert "issue_types" in integration_configuration
        assert "domain" in integration_configuration
        assert integration.enabled == data["data"]["attributes"]["enabled"]
        assert (
            integration.integration_type
            == data["data"]["attributes"]["integration_type"]
        )
        assert "credentials" not in response.json()["data"]["attributes"]

    def test_integrations_create_valid_relationships(
        self,
        authenticated_client,
@@ -5656,7 +5910,7 @@ class TestIntegrationViewSet:
            "data": {
                "type": "integrations",
                "attributes": {
                    "integration_type": Integration.IntegrationChoices.S3,
                    "integration_type": Integration.IntegrationChoices.AMAZON_S3,
                    "configuration": {
                        "bucket_name": "bucket-name",
                        "output_directory": "output-directory",
@@ -5743,6 +5997,46 @@ class TestIntegrationViewSet:
                "invalid",
                None,
            ),
            (
                {
                    "integration_type": "jira",
                    "configuration": {
                        "projects": ["JIRA"],
                    },
                    "credentials": {"domain": "prowlerdomain"},
                },
                "invalid",
                "configuration",
            ),
            (
                {
                    "integration_type": "jira",
                    "credentialss": {
                        "domain": "prowlerdomain",
                        "api_token": "api-token",
                        "user_mail": "test@prowler.com",
                    },
                },
                "required",
                "configuration",
            ),
            (
                {
                    "integration_type": "jira",
                    "configuration": {},
                },
                "required",
                "credentials",
            ),
            (
                {
                    "integration_type": "jira",
                    "configuration": {},
                    "credentials": {"api_token": "api-token"},
                },
                "invalid",
                "credentials",
            ),
        ]
    ),
)
@@ -5891,11 +6185,11 @@ class TestIntegrationViewSet:
            ("inserted_at", TODAY, 2),
            ("inserted_at.gte", "2024-01-01", 2),
            ("inserted_at.lte", "2024-01-01", 0),
            ("integration_type", Integration.IntegrationChoices.S3, 2),
            ("integration_type", Integration.IntegrationChoices.AMAZON_S3, 2),
            ("integration_type", Integration.IntegrationChoices.SLACK, 0),
            (
                "integration_type__in",
                f"{Integration.IntegrationChoices.S3},{Integration.IntegrationChoices.SLACK}",
                f"{Integration.IntegrationChoices.AMAZON_S3},{Integration.IntegrationChoices.SLACK}",
                2,
            ),
        ]
@@ -5932,6 +6226,217 @@ class TestIntegrationViewSet:
        )
        assert response.status_code == status.HTTP_400_BAD_REQUEST

    def test_integrations_create_duplicate_amazon_s3(
        self, authenticated_client, providers_fixture
    ):
        provider = providers_fixture[0]

        # Create first S3 integration
        data = {
            "data": {
                "type": "integrations",
                "attributes": {
                    "integration_type": Integration.IntegrationChoices.AMAZON_S3,
                    "configuration": {
                        "bucket_name": "test-bucket",
                        "output_directory": "test-output",
                    },
                    "credentials": {
                        "role_arn": "arn:aws:iam::123456789012:role/test-role",
                        "external_id": "test-external-id",
                    },
                    "enabled": True,
                },
                "relationships": {
                    "providers": {
                        "data": [{"type": "providers", "id": str(provider.id)}]
                    }
                },
            }
        }

        # First creation should succeed
        response = authenticated_client.post(
            reverse("integration-list"),
            data=json.dumps(data),
            content_type="application/vnd.api+json",
        )
        assert response.status_code == status.HTTP_201_CREATED

        # Attempt to create duplicate should return 409
        response = authenticated_client.post(
            reverse("integration-list"),
            data=json.dumps(data),
            content_type="application/vnd.api+json",
        )
        assert response.status_code == status.HTTP_409_CONFLICT
        assert (
            "This integration already exists" in response.json()["errors"][0]["detail"]
        )
        assert (
            response.json()["errors"][0]["source"]["pointer"]
            == "/data/attributes/configuration"
        )

    def test_integrations_create_duplicate_jira(self, authenticated_client):
        # Create first JIRA integration
        data = {
            "data": {
                "type": "integrations",
                "attributes": {
                    "integration_type": Integration.IntegrationChoices.JIRA,
                    "configuration": {},
                    "credentials": {
                        "user_mail": "test@example.com",
                        "api_token": "test-api-token",
                        "domain": "prowlerdomain",
                    },
                    "enabled": True,
                },
            }
        }

        # First creation should succeed
        response = authenticated_client.post(
            reverse("integration-list"),
            data=json.dumps(data),
            content_type="application/vnd.api+json",
        )
        assert response.status_code == status.HTTP_201_CREATED

        # Attempt to create duplicate should return 409
        response = authenticated_client.post(
            reverse("integration-list"),
            data=json.dumps(data),
            content_type="application/vnd.api+json",
        )
        assert response.status_code == status.HTTP_409_CONFLICT
        assert (
            "This integration already exists" in response.json()["errors"][0]["detail"]
        )
        assert (
            response.json()["errors"][0]["source"]["pointer"]
            == "/data/attributes/configuration"
        )

    def test_integrations_update_jira_configuration_readonly(
        self, authenticated_client
    ):
        # Create JIRA integration first
        create_data = {
            "data": {
                "type": "integrations",
                "attributes": {
                    "integration_type": Integration.IntegrationChoices.JIRA,
                    "configuration": {},
                    "credentials": {
                        "user_mail": "test@example.com",
                        "api_token": "test-api-token",
                        "domain": "initial-domain",
                    },
                    "enabled": True,
                },
            }
        }

        # Create the integration
        response = authenticated_client.post(
            reverse("integration-list"),
            data=json.dumps(create_data),
            content_type="application/vnd.api+json",
        )
        assert response.status_code == status.HTTP_201_CREATED
        integration_id = response.json()["data"]["id"]

        # Attempt to update configuration - should be ignored/not allowed
        update_data = {
            "data": {
                "type": "integrations",
                "id": integration_id,
                "attributes": {
                    "configuration": {
                        "projects": {"NEW_PROJECT": "New Project"},
                        "issue_types": ["Epic", "Story"],
                        "domain": "malicious-domain",
                    }
                },
            }
        }

        response = authenticated_client.patch(
            reverse("integration-detail", kwargs={"pk": integration_id}),
            data=json.dumps(update_data),
            content_type="application/vnd.api+json",
        )
        assert response.status_code == status.HTTP_400_BAD_REQUEST

    def test_integrations_update_jira_credentials_domain_reflects_in_configuration(
        self, authenticated_client
    ):
        # Create JIRA integration first
        create_data = {
            "data": {
                "type": "integrations",
                "attributes": {
                    "integration_type": Integration.IntegrationChoices.JIRA,
                    "configuration": {},
                    "credentials": {
                        "user_mail": "test@example.com",
                        "api_token": "test-api-token",
                        "domain": "original-domain",
                    },
                    "enabled": True,
                },
            }
        }

        # Create the integration
        response = authenticated_client.post(
            reverse("integration-list"),
            data=json.dumps(create_data),
            content_type="application/vnd.api+json",
        )
        assert response.status_code == status.HTTP_201_CREATED
        integration_id = response.json()["data"]["id"]

        # Verify initial domain in configuration
        initial_integration = response.json()["data"]
        assert (
            initial_integration["attributes"]["configuration"]["domain"]
            == "original-domain"
        )

        # Update credentials with new domain
        update_data = {
            "data": {
                "type": "integrations",
                "id": integration_id,
                "attributes": {
                    "credentials": {
                        "user_mail": "updated@example.com",
                        "api_token": "updated-api-token",
                        "domain": "updated-domain",
                    }
                },
            }
        }

        response = authenticated_client.patch(
            reverse("integration-detail", kwargs={"pk": integration_id}),
            data=json.dumps(update_data),
            content_type="application/vnd.api+json",
        )
        assert response.status_code == status.HTTP_200_OK

        # Verify the new domain is reflected in configuration
        updated_integration = response.json()["data"]
        configuration = updated_integration["attributes"]["configuration"]
        assert configuration["domain"] == "updated-domain"

        # Verify other configuration fields are preserved
        assert "projects" in configuration
        assert "issue_types" in configuration


@pytest.mark.django_db
class TestSAMLTokenValidation:

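
The duplicate-creation tests above expect a 409 whose JSON:API error object carries both a detail message and a `source.pointer` of `/data/attributes/configuration`. A client consuming this API could key its handling on exactly those fields; this sketch is illustrative only — the concrete URL path is an assumption (the tests resolve it via `reverse("integration-list")`), and `requests` stands in for whatever HTTP client a caller uses:

import requests  # illustrative client; the tests above use Django's test client


def create_integration(base_url: str, token: str, payload: dict) -> dict:
    response = requests.post(
        f"{base_url}/api/v1/integrations",  # assumed route for integration-list
        json=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/vnd.api+json",
        },
    )
    if response.status_code == 409:
        error = response.json()["errors"][0]
        # e.g. "This integration already exists" at /data/attributes/configuration
        raise ValueError(f"{error['detail']} ({error['source']['pointer']})")
    response.raise_for_status()
    return response.json()["data"]
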
@@ -6,13 +6,18 @@ from django.db.models import Subquery
from rest_framework.exceptions import NotFound, ValidationError

from api.db_router import MainRouter
from api.db_utils import rls_transaction
from api.exceptions import InvitationTokenExpiredException
from api.models import Invitation, Processor, Provider, Resource
from api.models import Integration, Invitation, Processor, Provider, Resource
from api.v1.serializers import FindingMetadataSerializer
from prowler.lib.outputs.jira.jira import Jira, JiraBasicAuthError
from prowler.providers.aws.aws_provider import AwsProvider
from prowler.providers.aws.lib.s3.s3 import S3
from prowler.providers.aws.lib.security_hub.security_hub import SecurityHub
from prowler.providers.azure.azure_provider import AzureProvider
from prowler.providers.common.models import Connection
from prowler.providers.gcp.gcp_provider import GcpProvider
from prowler.providers.github.github_provider import GithubProvider
from prowler.providers.kubernetes.kubernetes_provider import KubernetesProvider
from prowler.providers.m365.m365_provider import M365Provider

@@ -55,14 +60,21 @@ def merge_dicts(default_dict: dict, replacement_dict: dict) -> dict:

def return_prowler_provider(
    provider: Provider,
) -> [AwsProvider | AzureProvider | GcpProvider | KubernetesProvider | M365Provider]:
) -> [
    AwsProvider
    | AzureProvider
    | GcpProvider
    | GithubProvider
    | KubernetesProvider
    | M365Provider
]:
    """Return the Prowler provider class based on the given provider type.

    Args:
        provider (Provider): The provider object containing the provider type and associated secrets.

    Returns:
        AwsProvider | AzureProvider | GcpProvider | KubernetesProvider | M365Provider: The corresponding provider class.
        AwsProvider | AzureProvider | GcpProvider | GithubProvider | KubernetesProvider | M365Provider: The corresponding provider class.

    Raises:
        ValueError: If the provider type specified in `provider.provider` is not supported.
@@ -78,6 +90,8 @@ def return_prowler_provider(
            prowler_provider = KubernetesProvider
        case Provider.ProviderChoices.M365.value:
            prowler_provider = M365Provider
        case Provider.ProviderChoices.GITHUB.value:
            prowler_provider = GithubProvider
        case _:
            raise ValueError(f"Provider type {provider.provider} not supported")
    return prowler_provider
@@ -108,6 +122,12 @@ def get_prowler_provider_kwargs(
        }
    elif provider.provider == Provider.ProviderChoices.KUBERNETES.value:
        prowler_provider_kwargs = {**prowler_provider_kwargs, "context": provider.uid}
    elif provider.provider == Provider.ProviderChoices.GITHUB.value:
        if provider.uid:
            prowler_provider_kwargs = {
                **prowler_provider_kwargs,
                "organizations": [provider.uid],
            }

    if mutelist_processor:
        mutelist_content = mutelist_processor.configuration.get("Mutelist", {})
@@ -120,7 +140,14 @@ def get_prowler_provider_kwargs(
def initialize_prowler_provider(
    provider: Provider,
    mutelist_processor: Processor | None = None,
) -> AwsProvider | AzureProvider | GcpProvider | KubernetesProvider | M365Provider:
) -> (
    AwsProvider
    | AzureProvider
    | GcpProvider
    | GithubProvider
    | KubernetesProvider
    | M365Provider
):
    """Initialize a Prowler provider instance based on the given provider type.

    Args:
@@ -128,8 +155,8 @@ def initialize_prowler_provider(
        mutelist_processor (Processor): The mutelist processor object containing the mutelist configuration.

    Returns:
        AwsProvider | AzureProvider | GcpProvider | KubernetesProvider | M365Provider: An instance of the corresponding provider class
        (`AwsProvider`, `AzureProvider`, `GcpProvider`, `KubernetesProvider` or `M365Provider`) initialized with the
        AwsProvider | AzureProvider | GcpProvider | GithubProvider | KubernetesProvider | M365Provider: An instance of the corresponding provider class
        (`AwsProvider`, `AzureProvider`, `GcpProvider`, `GithubProvider`, `KubernetesProvider` or `M365Provider`) initialized with the
        provider's secrets.
    """
    prowler_provider = return_prowler_provider(provider)
@@ -158,6 +185,77 @@ def prowler_provider_connection_test(provider: Provider) -> Connection:
    )


def prowler_integration_connection_test(integration: Integration) -> Connection:
    """Test the connection to a Prowler integration based on the given integration type.

    Args:
        integration (Integration): The integration object containing the integration type and associated credentials.

    Returns:
        Connection: A connection object representing the result of the connection test for the specified integration.
    """
    if integration.integration_type == Integration.IntegrationChoices.AMAZON_S3:
        return S3.test_connection(
            **integration.credentials,
            bucket_name=integration.configuration["bucket_name"],
            raise_on_exception=False,
        )
    # TODO: It is possible that we can unify the connection test for all integrations, but need refactoring
    # to avoid code duplication. Actually the AWS integrations are similar, so SecurityHub and S3 can be unified
    # making some changes in the SDK.
    elif (
        integration.integration_type == Integration.IntegrationChoices.AWS_SECURITY_HUB
    ):
        # Get the provider associated with this integration
        provider_relationship = integration.integrationproviderrelationship_set.first()
        if not provider_relationship:
            return Connection(
                is_connected=False, error="No provider associated with this integration"
            )

        credentials = (
            integration.credentials
            if integration.credentials
            else provider_relationship.provider.secret.secret
        )
        connection = SecurityHub.test_connection(
            aws_account_id=provider_relationship.provider.uid,
            raise_on_exception=False,
            **credentials,
        )

        # Only save regions if connection is successful
        if connection.is_connected:
            regions_status = {r: True for r in connection.enabled_regions}
            regions_status.update({r: False for r in connection.disabled_regions})

            # Save regions information in the integration configuration
            integration.configuration["regions"] = regions_status
            integration.save()
        else:
            # Reset regions information if connection fails
            integration.configuration["regions"] = {}
            integration.save()

        return connection
    elif integration.integration_type == Integration.IntegrationChoices.JIRA:
        jira_connection = Jira.test_connection(
            **integration.credentials,
            raise_on_exception=False,
        )
        project_keys = jira_connection.projects if jira_connection.is_connected else {}
        with rls_transaction(str(integration.tenant_id)):
            integration.configuration["projects"] = project_keys
            integration.save()
        return jira_connection
    elif integration.integration_type == Integration.IntegrationChoices.SLACK:
        pass
    else:
        raise ValueError(
            f"Integration type {integration.integration_type} not supported"
        )


def validate_invitation(
    invitation_token: str, email: str, raise_not_found=False
) -> Invitation:
@@ -249,3 +347,17 @@ def get_findings_metadata_no_aggregations(tenant_id: str, filtered_queryset):
    serializer.is_valid(raise_exception=True)

    return serializer.data


def initialize_prowler_integration(integration: Integration) -> Jira:
    # TODO Refactor other integrations to use this function
    if integration.integration_type == Integration.IntegrationChoices.JIRA:
        try:
            return Jira(**integration.credentials)
        except JiraBasicAuthError as jira_auth_error:
            with rls_transaction(str(integration.tenant_id)):
                integration.configuration["projects"] = {}
                integration.connected = False
                integration.connection_last_checked_at = datetime.now(tz=timezone.utc)
                integration.save()
            raise jira_auth_error

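
A caller of `initialize_prowler_integration` either gets a ready `Jira` client or sees `JiraBasicAuthError` re-raised after the helper has already cleared the cached projects and flagged the integration as disconnected. A minimal usage sketch (the wrapper function name is illustrative; the imports match this diff):

from api.utils import initialize_prowler_integration
from prowler.lib.outputs.jira.jira import JiraBasicAuthError


def get_jira_client_or_none(integration):
    # On auth failure the helper has already reset configuration["projects"],
    # set connected=False, and stamped connection_last_checked_at before re-raising.
    try:
        return initialize_prowler_integration(integration)
    except JiraBasicAuthError:
        return None
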
@@ -1,3 +1,6 @@
import os
import re

from drf_spectacular.utils import extend_schema_field
from rest_framework_json_api import serializers

@@ -6,7 +9,70 @@ from api.v1.serializer_utils.base import BaseValidateSerializer

class S3ConfigSerializer(BaseValidateSerializer):
    bucket_name = serializers.CharField()
    output_directory = serializers.CharField()
    output_directory = serializers.CharField(allow_blank=True)

    def validate_output_directory(self, value):
        """
        Validate the output_directory field to ensure it's a properly formatted path.
        Prevents paths with excessive slashes like "///////test".
        If empty, sets a default value.
        """
        # If empty or None, set default value
        if not value:
            return "output"

        # Normalize the path to remove excessive slashes
        normalized_path = os.path.normpath(value)

        # Remove leading slashes for S3 paths
        if normalized_path.startswith("/"):
            normalized_path = normalized_path.lstrip("/")

        # Check for invalid characters or patterns
        if re.search(r'[<>:"|?*]', normalized_path):
            raise serializers.ValidationError(
                'Output directory contains invalid characters. Avoid: < > : " | ? *'
            )

        # Check for empty path after normalization
        if not normalized_path or normalized_path == ".":
            raise serializers.ValidationError(
                "Output directory cannot be empty or just '.' or '/'."
            )

        # Check for paths that are too long (S3 key limit is 1024 characters, leave some room for filename)
        if len(normalized_path) > 900:
            raise serializers.ValidationError(
                "Output directory path is too long (max 900 characters)."
            )

        return normalized_path

    class Meta:
        resource_name = "integrations"


class SecurityHubConfigSerializer(BaseValidateSerializer):
    send_only_fails = serializers.BooleanField(default=False)
    archive_previous_findings = serializers.BooleanField(default=False)
    regions = serializers.DictField(default=dict, read_only=True)

    def to_internal_value(self, data):
        validated_data = super().to_internal_value(data)
        # Always initialize regions as empty dict
        validated_data["regions"] = {}
        return validated_data

    class Meta:
        resource_name = "integrations"


class JiraConfigSerializer(BaseValidateSerializer):
    domain = serializers.CharField(read_only=True)
    issue_types = serializers.ListField(
        read_only=True, child=serializers.CharField(), default=["Task"]
    )
    projects = serializers.DictField(read_only=True)

    class Meta:
        resource_name = "integrations"
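
Because `output_directory` now allows blank input and `validate_output_directory` substitutes "output" for falsy values, omitting the directory (or sending an empty string) yields a usable default. A quick sketch of the resulting behavior, assuming DRF's usual flow where serializer-level `validate_<field>` hooks still run on blank CharField input:

serializer = S3ConfigSerializer(
    data={"bucket_name": "my-bucket", "output_directory": ""}
)
assert serializer.is_valid()
# Blank input falls through the `if not value` branch and becomes the default.
assert serializer.validated_data["output_directory"] == "output"
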
@@ -27,6 +93,15 @@ class AWSCredentialSerializer(BaseValidateSerializer):
        resource_name = "integrations"


class JiraCredentialSerializer(BaseValidateSerializer):
    user_mail = serializers.EmailField(required=True)
    api_token = serializers.CharField(required=True)
    domain = serializers.CharField(required=True)

    class Meta:
        resource_name = "integrations"


@extend_schema_field(
    {
        "oneOf": [
@@ -78,6 +153,27 @@ class AWSCredentialSerializer(BaseValidateSerializer):
                    },
                },
            },
            {
                "type": "object",
                "title": "JIRA Credentials",
                "properties": {
                    "user_mail": {
                        "type": "string",
                        "format": "email",
                        "description": "The email address of the JIRA user account.",
                    },
                    "api_token": {
                        "type": "string",
                        "description": "The API token for authentication with JIRA. This can be generated from your "
                        "Atlassian account settings.",
                    },
                    "domain": {
                        "type": "string",
                        "description": "The JIRA domain/instance URL (e.g., 'your-domain.atlassian.net').",
                    },
                },
                "required": ["user_mail", "api_token", "domain"],
            },
        ]
    }
)
@@ -98,10 +194,40 @@ class IntegrationCredentialField(serializers.JSONField):
                    },
                    "output_directory": {
                        "type": "string",
                        "description": "The directory path within the bucket where files will be saved.",
                        "description": "The directory path within the bucket where files will be saved. Optional - "
                        'defaults to "output" if not provided. Path will be normalized to remove '
                        'excessive slashes and invalid characters are not allowed (< > : " | ? *). '
                        "Maximum length is 900 characters.",
                        "maxLength": 900,
                        "pattern": '^[^<>:"|?*]+$',
                        "default": "output",
                    },
                },
                "required": ["bucket_name", "output_directory"],
                "required": ["bucket_name"],
            },
            {
                "type": "object",
                "title": "AWS Security Hub",
                "properties": {
                    "send_only_fails": {
                        "type": "boolean",
                        "default": False,
                        "description": "If true, only findings with status 'FAIL' will be sent to Security Hub.",
                    },
                    "archive_previous_findings": {
                        "type": "boolean",
                        "default": False,
                        "description": "If true, archives findings that are not present in the current execution.",
                    },
                },
            },
            {
                "type": "object",
                "title": "JIRA",
                "description": "JIRA integration does not accept any configuration in the payload. Leave it as an "
                "empty JSON object (`{}`).",
                "properties": {},
                "additionalProperties": False,
            },
        ]
    }

@@ -176,6 +176,43 @@ from rest_framework_json_api import serializers
            },
            "required": ["kubeconfig_content"],
        },
        {
            "type": "object",
            "title": "GitHub Personal Access Token",
            "properties": {
                "personal_access_token": {
                    "type": "string",
                    "description": "GitHub personal access token for authentication.",
                }
            },
            "required": ["personal_access_token"],
        },
        {
            "type": "object",
            "title": "GitHub OAuth App Token",
            "properties": {
                "oauth_app_token": {
                    "type": "string",
                    "description": "GitHub OAuth App token for authentication.",
                }
            },
            "required": ["oauth_app_token"],
        },
        {
            "type": "object",
            "title": "GitHub App Credentials",
            "properties": {
                "github_app_id": {
                    "type": "integer",
                    "description": "GitHub App ID for authentication.",
                },
                "github_app_key": {
                    "type": "string",
                    "description": "Path to the GitHub App private key file.",
                },
            },
            "required": ["github_app_id", "github_app_key"],
        },
    ]
}
)

@@ -15,6 +15,8 @@ from rest_framework_simplejwt.exceptions import TokenError
from rest_framework_simplejwt.serializers import TokenObtainPairSerializer
from rest_framework_simplejwt.tokens import RefreshToken

from api.db_router import MainRouter
from api.exceptions import ConflictException
from api.models import (
    Finding,
    Integration,
@@ -45,7 +47,10 @@ from api.v1.serializer_utils.integrations import (
    AWSCredentialSerializer,
    IntegrationConfigField,
    IntegrationCredentialField,
    JiraConfigSerializer,
    JiraCredentialSerializer,
    S3ConfigSerializer,
    SecurityHubConfigSerializer,
)
from api.v1.serializer_utils.processors import ProcessorConfigField
from api.v1.serializer_utils.providers import ProviderSecretField
@@ -255,8 +260,15 @@ class UserSerializer(BaseSerializerV1):
    Serializer for the User model.
    """

    memberships = serializers.ResourceRelatedField(many=True, read_only=True)
    roles = serializers.ResourceRelatedField(many=True, read_only=True)
    # We use SerializerMethodResourceRelatedField so includes (e.g. ?include=roles)
    # respect RBAC and do not leak relationships of other users when the requester
    # lacks manage_account. The visibility logic lives in get_roles/get_memberships.
    memberships = SerializerMethodResourceRelatedField(
        many=True, read_only=True, source="memberships", method_name="get_memberships"
    )
    roles = SerializerMethodResourceRelatedField(
        many=True, read_only=True, source="roles", method_name="get_roles"
    )

    class Meta:
        model = User
@@ -274,9 +286,35 @@ class UserSerializer(BaseSerializerV1):
        }

    included_serializers = {
        "roles": "api.v1.serializers.RoleSerializer",
        "roles": "api.v1.serializers.RoleIncludeSerializer",
        "memberships": "api.v1.serializers.MembershipIncludeSerializer",
    }

    def _can_view_relationships(self, instance) -> bool:
        """Allow self to view own relationships. Require manage_account to view others."""
        role = self.context.get("role")
        request = self.context.get("request")
        is_self = bool(
            request
            and getattr(request, "user", None)
            and getattr(instance, "id", None) == request.user.id
        )
        return is_self or (role and role.manage_account)

    def get_roles(self, instance):
        return (
            instance.roles.all()
            if self._can_view_relationships(instance)
            else Role.objects.none()
        )

    def get_memberships(self, instance):
        return (
            instance.memberships.all()
            if self._can_view_relationships(instance)
            else Membership.objects.none()
        )


class UserCreateSerializer(BaseWriteSerializer):
    password = serializers.CharField(write_only=True)
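
The net effect of the `SerializerMethodResourceRelatedField` switch is that `?include=roles` (or `?include=memberships`) only materializes another user's relationships when the requester's role carries `manage_account`; users always see their own. A compact sketch of the visibility rule, restating `_can_view_relationships` above as a standalone function:

def can_view_relationships(requester_id, target_user_id, requester_role) -> bool:
    # Self-access is always allowed; otherwise manage_account is required.
    is_self = requester_id is not None and requester_id == target_user_id
    return is_self or bool(requester_role and requester_role.manage_account)
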
@@ -384,6 +422,34 @@ class UserRoleRelationshipSerializer(RLSSerializer, BaseWriteSerializer):
|
||||
roles = Role.objects.filter(id__in=role_ids)
|
||||
tenant_id = self.context.get("tenant_id")
|
||||
|
||||
# Safeguard: A tenant must always have at least one user with MANAGE_ACCOUNT.
|
||||
# If the target roles do NOT include MANAGE_ACCOUNT, and the current user is
|
||||
# the only one in the tenant with MANAGE_ACCOUNT, block the update.
|
||||
target_includes_manage_account = roles.filter(manage_account=True).exists()
|
||||
if not target_includes_manage_account:
|
||||
# Check if any other user has MANAGE_ACCOUNT
|
||||
other_users_have_manage_account = (
|
||||
UserRoleRelationship.objects.filter(
|
||||
tenant_id=tenant_id, role__manage_account=True
|
||||
)
|
||||
.exclude(user_id=instance.id)
|
||||
.exists()
|
||||
)
|
||||
|
||||
# Check if the current user has MANAGE_ACCOUNT
|
||||
instance_has_manage_account = instance.roles.filter(
|
||||
tenant_id=tenant_id, manage_account=True
|
||||
).exists()
|
||||
|
||||
# If the current user is the last holder of MANAGE_ACCOUNT, prevent removal
|
||||
if instance_has_manage_account and not other_users_have_manage_account:
|
||||
raise serializers.ValidationError(
|
||||
{
|
||||
"roles": "At least one user in the tenant must retain MANAGE_ACCOUNT. "
|
||||
"Assign MANAGE_ACCOUNT to another user before removing it here."
|
||||
}
|
||||
)
|
||||
|
||||
instance.roles.clear()
|
||||
new_relationships = [
|
||||
UserRoleRelationship(user=instance, role=r, tenant_id=tenant_id)
|
||||
@@ -498,6 +564,12 @@ class TenantSerializer(BaseSerializerV1):
|
||||
fields = ["id", "name", "memberships"]
|
||||
|
||||
|
||||
class TenantIncludeSerializer(BaseSerializerV1):
|
||||
class Meta:
|
||||
model = Tenant
|
||||
fields = ["id", "name"]
|
||||
|
||||
|
||||
# Memberships
|
||||
|
||||
|
||||
@@ -519,6 +591,29 @@ class MembershipSerializer(serializers.ModelSerializer):
|
||||
fields = ["id", "user", "tenant", "role", "date_joined"]
|
||||
|
||||
|
||||
class MembershipIncludeSerializer(serializers.ModelSerializer):
|
||||
"""
|
||||
Include-oriented Membership serializer that enables including tenant objects with names
|
||||
without altering the base MembershipSerializer behavior.
|
||||
"""
|
||||
|
||||
role = MemberRoleEnumSerializerField()
|
||||
user = serializers.ResourceRelatedField(read_only=True)
|
||||
tenant = SerializerMethodResourceRelatedField(read_only=True, source="tenant")
|
||||
|
||||
class Meta:
|
||||
model = Membership
|
||||
fields = ["id", "user", "tenant", "role", "date_joined"]
|
||||
|
||||
included_serializers = {"tenant": "api.v1.serializers.TenantIncludeSerializer"}
|
||||
|
||||
def get_tenant(self, instance):
|
||||
try:
|
||||
return Tenant.objects.using(MainRouter.admin_db).get(id=instance.tenant_id)
|
||||
except Tenant.DoesNotExist:
|
||||
return None
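A sketch of the include path this serializer enables; the nested route name and kwarg follow drf-nested-routers conventions for the `user-membership` basename registered later in this diff, and `authenticated_client` and `user_id` are illustrative placeholders:

from django.urls import reverse

def test_membership_include_tenant(authenticated_client, user_id):
    # "tenant" resolves through get_tenant() above, so the included payload
    # carries the tenant id and name via TenantIncludeSerializer.
    response = authenticated_client.get(
        reverse("user-membership-list", kwargs={"user_pk": user_id}),
        {"include": "tenant"},
    )
    assert response.status_code == 200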


# Provider Groups
class ProviderGroupSerializer(RLSSerializer, BaseWriteSerializer):
    providers = serializers.ResourceRelatedField(
@@ -1217,6 +1312,8 @@ class BaseWriteProviderSecretSerializer(BaseWriteSerializer):
            serializer = AzureProviderSecret(data=secret)
        elif provider_type == Provider.ProviderChoices.GCP.value:
            serializer = GCPProviderSecret(data=secret)
        elif provider_type == Provider.ProviderChoices.GITHUB.value:
            serializer = GithubProviderSecret(data=secret)
        elif provider_type == Provider.ProviderChoices.KUBERNETES.value:
            serializer = KubernetesProviderSecret(data=secret)
        elif provider_type == Provider.ProviderChoices.M365.value:
@@ -1296,6 +1393,16 @@ class KubernetesProviderSecret(serializers.Serializer):
        resource_name = "provider-secrets"


class GithubProviderSecret(serializers.Serializer):
    personal_access_token = serializers.CharField(required=False)
    oauth_app_token = serializers.CharField(required=False)
    github_app_id = serializers.IntegerField(required=False)
    github_app_key_content = serializers.CharField(required=False)

    class Meta:
        resource_name = "provider-secrets"
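The optional fields above admit several credential shapes. A sketch of payloads that would pass this serializer's field validation (all values are placeholders; any cross-field requirements are enforced elsewhere, not shown here):

# Each dict matches the optional fields declared on GithubProviderSecret.
pat_secret = {"personal_access_token": "<github-pat>"}
oauth_secret = {"oauth_app_token": "<oauth-app-token>"}
app_secret = {
    "github_app_id": 123456,  # placeholder application ID
    "github_app_key_content": "<pem-private-key>",
}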


class AWSRoleAssumptionProviderSecret(serializers.Serializer):
    role_arn = serializers.CharField()
    external_id = serializers.CharField()
@@ -1662,6 +1769,37 @@ class RoleUpdateSerializer(RoleSerializer):

        if "users" in validated_data:
            users = validated_data.pop("users")
            # Prevent a user from removing their own role assignment via Role update
            request = self.context.get("request")
            if request and getattr(request, "user", None):
                request_user = request.user
                is_currently_assigned = instance.users.filter(
                    id=request_user.id
                ).exists()
                will_be_assigned = any(u.id == request_user.id for u in users)
                if is_currently_assigned and not will_be_assigned:
                    raise serializers.ValidationError(
                        {"users": "Users cannot remove their own role."}
                    )

            # Safeguard MANAGE_ACCOUNT coverage when updating users of this role
            if instance.manage_account:
                # Existing MANAGE_ACCOUNT assignments on other roles within the tenant
                other_ma_exists = (
                    UserRoleRelationship.objects.filter(
                        tenant_id=tenant_id, role__manage_account=True
                    )
                    .exclude(role_id=instance.id)
                    .exists()
                )

                if not other_ma_exists and len(users) == 0:
                    raise serializers.ValidationError(
                        {
                            "users": "At least one user in the tenant must retain MANAGE_ACCOUNT. "
                            "Assign this MANAGE_ACCOUNT role to at least one user or ensure another user has it."
                        }
                    )
            instance.users.clear()
            through_model_instances = [
                UserRoleRelationship(
@@ -1676,6 +1814,37 @@ class RoleUpdateSerializer(RoleSerializer):
        return super().update(instance, validated_data)


class RoleIncludeSerializer(RLSSerializer):
    permission_state = serializers.SerializerMethodField()

    def get_permission_state(self, obj) -> str:
        return obj.permission_state

    class Meta:
        model = Role
        fields = [
            "id",
            "name",
            "manage_users",
            "manage_account",
            # Disable for the first release
            # "manage_billing",
            # /Disable for the first release
            "manage_integrations",
            "manage_providers",
            "manage_scans",
            "permission_state",
            "unlimited_visibility",
            "inserted_at",
            "updated_at",
        ]
        extra_kwargs = {
            "id": {"read_only": True},
            "inserted_at": {"read_only": True},
            "updated_at": {"read_only": True},
        }


class ProviderGroupResourceIdentifierSerializer(serializers.Serializer):
    resource_type = serializers.CharField(source="type")
    id = serializers.UUIDField()
@@ -1790,6 +1959,7 @@ class ComplianceOverviewDetailSerializer(serializers.Serializer):

class ComplianceOverviewAttributesSerializer(serializers.Serializer):
    id = serializers.CharField()
    compliance_name = serializers.CharField()
    framework_description = serializers.CharField()
    name = serializers.CharField()
    framework = serializers.CharField()
@@ -1938,6 +2108,62 @@ class ScheduleDailyCreateSerializer(serializers.Serializer):


class BaseWriteIntegrationSerializer(BaseWriteSerializer):
    def validate(self, attrs):
        integration_type = attrs.get("integration_type")

        if (
            integration_type == Integration.IntegrationChoices.AMAZON_S3
            and Integration.objects.filter(
                configuration=attrs.get("configuration")
            ).exists()
        ):
            raise ConflictException(
                detail="This integration already exists.",
                pointer="/data/attributes/configuration",
            )

        if (
            integration_type == Integration.IntegrationChoices.JIRA
            and Integration.objects.filter(
                configuration__contains={
                    "domain": attrs.get("configuration").get("domain")
                }
            ).exists()
        ):
            raise ConflictException(
                detail="This integration already exists.",
                pointer="/data/attributes/configuration",
            )

        # Check if any provider already has a SecurityHub integration
        if hasattr(self, "instance") and self.instance and not integration_type:
            integration_type = self.instance.integration_type

        if (
            integration_type == Integration.IntegrationChoices.AWS_SECURITY_HUB
            and "providers" in attrs
        ):
            providers = attrs.get("providers", [])
            tenant_id = self.context.get("tenant_id")
            for provider in providers:
                # For updates, exclude the current instance from the check
                query = IntegrationProviderRelationship.objects.filter(
                    provider=provider,
                    integration__integration_type=Integration.IntegrationChoices.AWS_SECURITY_HUB,
                    tenant_id=tenant_id,
                )
                if hasattr(self, "instance") and self.instance:
                    query = query.exclude(integration=self.instance)

                if query.exists():
                    raise ConflictException(
                        detail=f"Provider {provider.id} already has a Security Hub integration. Only one "
                        "Security Hub integration is allowed per provider.",
                        pointer="/data/relationships/providers",
                    )

        return super().validate(attrs)

    @staticmethod
    def validate_integration_data(
        integration_type: str,
@@ -1945,17 +2171,49 @@ class BaseWriteIntegrationSerializer(BaseWriteSerializer):
        configuration: dict,
        credentials: dict,
    ):
        if integration_type == Integration.IntegrationChoices.S3:
        if integration_type == Integration.IntegrationChoices.AMAZON_S3:
            config_serializer = S3ConfigSerializer
            credentials_serializers = [AWSCredentialSerializer]
            # TODO: This will be required for AWS Security Hub
            # if providers and not all(
            #     provider.provider == Provider.ProviderChoices.AWS
            #     for provider in providers
            # ):
            #     raise serializers.ValidationError(
            #         {"providers": "All providers must be AWS for the S3 integration."}
            #     )
        elif integration_type == Integration.IntegrationChoices.AWS_SECURITY_HUB:
            if providers:
                if len(providers) > 1:
                    raise serializers.ValidationError(
                        {
                            "providers": "Only one provider is supported for the Security Hub integration."
                        }
                    )
                if providers[0].provider != Provider.ProviderChoices.AWS:
                    raise serializers.ValidationError(
                        {
                            "providers": "The provider must be AWS type for the Security Hub integration."
                        }
                    )
            config_serializer = SecurityHubConfigSerializer
            credentials_serializers = [AWSCredentialSerializer]
        elif integration_type == Integration.IntegrationChoices.JIRA:
            if providers:
                raise serializers.ValidationError(
                    {
                        "providers": "Relationship field is not accepted. This integration applies to all providers."
                    }
                )
            if configuration:
                raise serializers.ValidationError(
                    {
                        "configuration": "This integration does not support custom configuration."
                    }
                )
            config_serializer = JiraConfigSerializer
            # Create non-editable configuration for JIRA integration
            default_jira_issue_types = ["Task"]
            configuration.update(
                {
                    "projects": {},
                    "issue_types": default_jira_issue_types,
                    "domain": credentials.get("domain"),
                }
            )
            credentials_serializers = [JiraCredentialSerializer]
        else:
            raise serializers.ValidationError(
                {
@@ -1963,7 +2221,11 @@ class BaseWriteIntegrationSerializer(BaseWriteSerializer):
                }
            )

        config_serializer(data=configuration).is_valid(raise_exception=True)
        serializer_instance = config_serializer(data=configuration)
        serializer_instance.is_valid(raise_exception=True)

        # Apply the validated (and potentially transformed) data back to configuration
        configuration.update(serializer_instance.validated_data)

        for cred_serializer in credentials_serializers:
            try:
@@ -2015,6 +2277,10 @@ class IntegrationSerializer(RLSSerializer):
            for provider in representation["providers"]
            if provider["id"] in allowed_provider_ids
        ]
        if instance.integration_type == Integration.IntegrationChoices.JIRA:
            representation["configuration"].update(
                {"domain": instance.credentials.get("domain")}
            )
        return representation


@@ -2042,7 +2308,6 @@ class IntegrationCreateSerializer(BaseWriteIntegrationSerializer):
            "inserted_at": {"read_only": True},
            "updated_at": {"read_only": True},
            "connected": {"read_only": True},
            "enabled": {"read_only": True},
            "connection_last_checked_at": {"read_only": True},
        }

@@ -2052,10 +2317,18 @@ class IntegrationCreateSerializer(BaseWriteIntegrationSerializer):
        configuration = attrs.get("configuration")
        credentials = attrs.get("credentials")

        validated_attrs = super().validate(attrs)
        if (
            not providers
            and integration_type == Integration.IntegrationChoices.AWS_SECURITY_HUB
        ):
            raise serializers.ValidationError(
                {"providers": "At least one provider is required for this integration."}
            )

        self.validate_integration_data(
            integration_type, providers, configuration, credentials
        )
        validated_attrs = super().validate(attrs)
        return validated_attrs

    def create(self, validated_data):
@@ -2108,13 +2381,16 @@ class IntegrationUpdateSerializer(BaseWriteIntegrationSerializer):
    def validate(self, attrs):
        integration_type = self.instance.integration_type
        providers = attrs.get("providers")
        configuration = attrs.get("configuration") or self.instance.configuration
        if integration_type != Integration.IntegrationChoices.JIRA:
            configuration = attrs.get("configuration") or self.instance.configuration
        else:
            configuration = attrs.get("configuration", {})
        credentials = attrs.get("credentials") or self.instance.credentials

        validated_attrs = super().validate(attrs)
        self.validate_integration_data(
            integration_type, providers, configuration, credentials
        )
        validated_attrs = super().validate(attrs)
        return validated_attrs

    def update(self, instance, validated_data):
@@ -2129,8 +2405,62 @@ class IntegrationUpdateSerializer(BaseWriteIntegrationSerializer):
        ]
        IntegrationProviderRelationship.objects.bulk_create(new_relationships)

        # Preserve regions field for Security Hub integrations
        if instance.integration_type == Integration.IntegrationChoices.AWS_SECURITY_HUB:
            if "configuration" in validated_data:
                # Preserve the existing regions field if it exists
                existing_regions = instance.configuration.get("regions", {})
                validated_data["configuration"]["regions"] = existing_regions

        return super().update(instance, validated_data)

    def to_representation(self, instance):
        representation = super().to_representation(instance)
        # Ensure JIRA integrations show updated domain in configuration from credentials
        if instance.integration_type == Integration.IntegrationChoices.JIRA:
            representation["configuration"].update(
                {"domain": instance.credentials.get("domain")}
            )
        return representation


class IntegrationJiraDispatchSerializer(serializers.Serializer):
    """
    Serializer for dispatching findings to JIRA integration.
    """

    project_key = serializers.CharField(required=True)
    issue_type = serializers.ChoiceField(required=True, choices=["Task"])

    class JSONAPIMeta:
        resource_name = "integrations-jira-dispatches"

    def validate(self, attrs):
        validated_attrs = super().validate(attrs)
        integration_instance = Integration.objects.get(
            id=self.context.get("integration_id")
        )
        if integration_instance.integration_type != Integration.IntegrationChoices.JIRA:
            raise ValidationError(
                {"integration_type": "The given integration is not a JIRA integration"}
            )

        if not integration_instance.enabled:
            raise ValidationError(
                {"integration": "The given integration is not enabled"}
            )

        project_key = attrs.get("project_key")
        if project_key not in integration_instance.configuration.get("projects", {}):
            raise ValidationError(
                {
                    "project_key": "The given project key is not available for this JIRA integration. Refresh the "
                    "connection if this is an error."
                }
            )

        return validated_attrs
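A sketch of the JSON:API document this serializer accepts, built from the resource name and fields declared above (the project key is a placeholder):

# Body for POST .../integrations/<id>/jira/dispatches (illustrative).
dispatch_payload = {
    "data": {
        "type": "integrations-jira-dispatches",
        "attributes": {
            "project_key": "PROJ",  # must be among the integration's configured projects
            "issue_type": "Task",  # the only choice currently allowed
        },
    }
}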


# Processors


@@ -12,6 +12,7 @@ from api.v1.views import (
    FindingViewSet,
    GithubSocialLoginView,
    GoogleSocialLoginView,
    IntegrationJiraViewSet,
    IntegrationViewSet,
    InvitationAcceptViewSet,
    InvitationViewSet,
@@ -73,6 +74,13 @@ tenants_router.register(
users_router = routers.NestedSimpleRouter(router, r"users", lookup="user")
users_router.register(r"memberships", MembershipViewSet, basename="user-membership")

integrations_router = routers.NestedSimpleRouter(
    router, r"integrations", lookup="integration"
)
integrations_router.register(
    r"jira", IntegrationJiraViewSet, basename="integration-jira"
)
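With this registration, the dispatch action introduced above is nested under its integration. A sketch of the resulting URL reversal, assuming DRF's conventional `<basename>-<url_name>` route naming (the ID is a placeholder):

from django.urls import reverse

# Resolves to /integrations/<integration_id>/jira/dispatches
url = reverse(
    "integration-jira-dispatches",
    kwargs={"integration_pk": "<integration-uuid>"},
)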

urlpatterns = [
    path("tokens", CustomTokenObtainView.as_view(), name="token-obtain"),
    path("tokens/refresh", CustomTokenRefreshView.as_view(), name="token-refresh"),
@@ -162,6 +170,7 @@ urlpatterns = [
    path("", include(router.urls)),
    path("", include(tenants_router.urls)),
    path("", include(users_router.urls)),
    path("", include(integrations_router.urls)),
    path("schema", SchemaView.as_view(), name="schema"),
    path("docs", SpectacularRedocView.as_view(url_name="schema"), name="docs"),
]

+302 -52
@@ -22,7 +22,7 @@ from django.conf import settings as django_settings
from django.contrib.postgres.aggregates import ArrayAgg
from django.contrib.postgres.search import SearchQuery
from django.db import transaction
from django.db.models import Count, F, Prefetch, Q, Sum
from django.db.models import Count, F, Prefetch, Q, Subquery, Sum
from django.db.models.functions import Coalesce
from django.http import HttpResponse
from django.shortcuts import redirect
@@ -57,10 +57,12 @@ from tasks.beat import schedule_provider_scan
from tasks.jobs.export import get_s3_client
from tasks.tasks import (
    backfill_scan_resource_summaries_task,
    check_integration_connection_task,
    check_lighthouse_connection_task,
    check_provider_connection_task,
    delete_provider_task,
    delete_tenant_task,
    jira_integration_task,
    perform_scan_task,
)

@@ -74,8 +76,10 @@ from api.db_utils import rls_transaction
from api.exceptions import TaskFailedException
from api.filters import (
    ComplianceOverviewFilter,
    CustomDjangoFilterBackend,
    FindingFilter,
    IntegrationFilter,
    IntegrationJiraFindingsFilter,
    InvitationFilter,
    LatestFindingFilter,
    LatestResourceFilter,
@@ -88,6 +92,7 @@ from api.filters import (
    RoleFilter,
    ScanFilter,
    ScanSummaryFilter,
    ScanSummarySeverityFilter,
    ServiceOverviewFilter,
    TaskFilter,
    TenantFilter,
@@ -141,6 +146,7 @@ from api.v1.serializers import (
    FindingMetadataSerializer,
    FindingSerializer,
    IntegrationCreateSerializer,
    IntegrationJiraDispatchSerializer,
    IntegrationSerializer,
    IntegrationUpdateSerializer,
    InvitationAcceptSerializer,
@@ -213,6 +219,8 @@ class RelationshipViewSchema(JsonApiAutoSchema):
    description="Obtain a token by providing valid credentials and an optional tenant ID.",
)
class CustomTokenObtainView(GenericAPIView):
    throttle_scope = "token-obtain"

    resource_name = "tokens"
    serializer_class = TokenSerializer
    http_method_names = ["post"]
@@ -292,7 +300,7 @@ class SchemaView(SpectacularAPIView):

    def get(self, request, *args, **kwargs):
        spectacular_settings.TITLE = "Prowler API"
        spectacular_settings.VERSION = "1.10.1"
        spectacular_settings.VERSION = "1.14.0"
        spectacular_settings.DESCRIPTION = (
            "Prowler API specification.\n\nThis file is auto-generated."
        )
@@ -312,6 +320,11 @@ class SchemaView(SpectacularAPIView):
                "description": "Endpoints for tenant invitations management, allowing retrieval and filtering of "
                "invitations, creating new invitations, accepting and revoking them.",
            },
            {
                "name": "Role",
                "description": "Endpoints for managing RBAC roles within tenants, allowing creation, retrieval, "
                "updating, and deletion of role configurations and permissions.",
            },
            {
                "name": "Provider",
                "description": "Endpoints for managing providers (AWS, GCP, Azure, etc...).",
@@ -320,10 +333,20 @@ class SchemaView(SpectacularAPIView):
                "name": "Provider Group",
                "description": "Endpoints for managing provider groups.",
            },
            {
                "name": "Task",
                "description": "Endpoints for task management, allowing retrieval of task status and "
                "revoking tasks that have not started.",
            },
            {
                "name": "Scan",
                "description": "Endpoints for triggering manual scans and viewing scan results.",
            },
            {
                "name": "Schedule",
                "description": "Endpoints for managing scan schedules, allowing configuration of automated "
                "scans with different scheduling options.",
            },
            {
                "name": "Resource",
                "description": "Endpoints for managing resources discovered by scans, allowing "
@@ -335,8 +358,9 @@ class SchemaView(SpectacularAPIView):
                "findings that result from scans.",
            },
            {
                "name": "Overview",
                "description": "Endpoints for retrieving aggregated summaries of resources from the system.",
                "name": "Processor",
                "description": "Endpoints for managing post-processors used to process Prowler findings, including "
                "registration, configuration, and deletion of post-processing actions.",
            },
            {
                "name": "Compliance Overview",
@@ -344,9 +368,8 @@ class SchemaView(SpectacularAPIView):
                " compliance framework ID.",
            },
            {
                "name": "Task",
                "description": "Endpoints for task management, allowing retrieval of task status and "
                "revoking tasks that have not started.",
                "name": "Overview",
                "description": "Endpoints for retrieving aggregated summaries of resources from the system.",
            },
            {
                "name": "Integration",
@@ -354,14 +377,15 @@ class SchemaView(SpectacularAPIView):
                " retrieval, and deletion of integrations such as S3, JIRA, or other services.",
            },
            {
                "name": "Lighthouse",
                "description": "Endpoints for managing Lighthouse configurations, including creation, retrieval, "
                "updating, and deletion of configurations such as OpenAI keys, models, and business context.",
                "name": "Lighthouse AI",
                "description": "Endpoints for managing Lighthouse AI configurations, including creation, retrieval, "
                "updating, and deletion of configurations such as OpenAI keys, models, and business "
                "context.",
            },
            {
                "name": "Processor",
                "description": "Endpoints for managing post-processors used to process Prowler findings, including "
                "registration, configuration, and deletion of post-processing actions.",
                "name": "SAML",
                "description": "Endpoints for Single Sign-On authentication management via SAML for seamless user "
                "authentication.",
            },
        ]
        return super().get(request, *args, **kwargs)
@@ -744,11 +768,13 @@ class UserViewSet(BaseUserViewset):
        # If called during schema generation, return an empty queryset
        if getattr(self, "swagger_fake_view", False):
            return User.objects.none()

        queryset = (
            User.objects.filter(membership__tenant__id=self.request.tenant_id)
            if hasattr(self.request, "tenant_id")
            else User.objects.all()
        )

        return queryset.prefetch_related("memberships", "roles")

    def get_permissions(self):
@@ -766,6 +792,12 @@ class UserViewSet(BaseUserViewset):
        else:
            return UserSerializer

    def get_serializer_context(self):
        context = super().get_serializer_context()
        if self.request.user.is_authenticated:
            context["role"] = get_role(self.request.user)
        return context

    @action(detail=False, methods=["get"], url_name="me")
    def me(self, request):
        user = self.request.user
@@ -870,7 +902,11 @@ class UserViewSet(BaseUserViewset):
    partial_update=extend_schema(
        tags=["User"],
        summary="Partially update a user-roles relationship",
        description="Update the user-roles relationship information without affecting other fields.",
        description=(
            "Update the user-roles relationship information without affecting other fields. "
            "If the update would remove MANAGE_ACCOUNT from the last remaining user in the "
            "tenant, the API rejects the request with a 400 response."
        ),
        responses={
            204: OpenApiResponse(
                response=None, description="Relationship updated successfully"
@@ -880,7 +916,12 @@ class UserViewSet(BaseUserViewset):
    destroy=extend_schema(
        tags=["User"],
        summary="Delete a user-roles relationship",
        description="Remove the user-roles relationship from the system by their ID.",
        description=(
            "Remove the user-roles relationship from the system by their ID. If removing "
            "MANAGE_ACCOUNT would take it away from the last remaining user in the tenant, "
            "the API rejects the request with a 400 response. Users also cannot delete their "
            "own role assignments; attempting to do so returns a 400 response."
        ),
        responses={
            204: OpenApiResponse(
                response=None, description="Relationship deleted successfully"
@@ -895,11 +936,48 @@ class UserRoleRelationshipView(RelationshipView, BaseRLSViewSet):
    http_method_names = ["post", "patch", "delete"]
    schema = RelationshipViewSchema()
    # RBAC required permissions
    required_permissions = [Permissions.MANAGE_USERS]
    required_permissions = [Permissions.MANAGE_ACCOUNT]

    def get_queryset(self):
        return User.objects.filter(membership__tenant__id=self.request.tenant_id)

    def destroy(self, request, *args, **kwargs):
        """
        Prevent deleting role relationships if it would leave the tenant with no
        users having MANAGE_ACCOUNT. Supports deleting specific roles via JSON:API
        relationship payload or clearing all roles for the user when no payload.
        """
        user = self.get_object()
        # Disallow deleting own roles
        if str(user.id) == str(request.user.id):
            return Response(
                data={
                    "detail": "Users cannot delete the relationship with their role."
                },
                status=status.HTTP_400_BAD_REQUEST,
            )
        tenant_id = self.request.tenant_id
        payload = request.data if isinstance(request.data, dict) else None

        # If a user has more than one role, we will delete the relationship with the roles in the payload
        data = payload.get("data") if payload else None
        if data:
            try:
                role_ids = [item["id"] for item in data]
            except KeyError:
                role_ids = []
            roles_to_remove = Role.objects.filter(id__in=role_ids, tenant_id=tenant_id)
        else:
            roles_to_remove = user.roles.filter(tenant_id=tenant_id)

        UserRoleRelationship.objects.filter(
            user=user,
            tenant_id=tenant_id,
            role_id__in=roles_to_remove.values_list("id", flat=True),
        ).delete()

        return Response(status=status.HTTP_204_NO_CONTENT)
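    # A sketch of the JSON:API relationship payload handled above; the "roles"
    # resource type and the UUID are illustrative placeholders:
    #   {"data": [{"type": "roles", "id": "<role-uuid>"}]}
    # A DELETE with no payload clears every role the user holds in the tenant.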

    def create(self, request, *args, **kwargs):
        user = self.get_object()

@@ -938,12 +1016,6 @@ class UserRoleRelationshipView(RelationshipView, BaseRLSViewSet):
        serializer.save()
        return Response(status=status.HTTP_204_NO_CONTENT)

    def destroy(self, request, *args, **kwargs):
        user = self.get_object()
        user.roles.clear()

        return Response(status=status.HTTP_204_NO_CONTENT)


@extend_schema_view(
    list=extend_schema(
@@ -1994,6 +2066,21 @@ class ResourceViewSet(PaginateByPkMixin, BaseRLSViewSet):
            )
        )

    def _should_prefetch_findings(self) -> bool:
        fields_param = self.request.query_params.get("fields[resources]", "")
        include_param = self.request.query_params.get("include", "")
        return (
            fields_param == ""
            or "findings" in fields_param.split(",")
            or "findings" in include_param.split(",")
        )

    def _get_findings_prefetch(self):
        findings_queryset = Finding.all_objects.defer("scan", "resources").filter(
            tenant_id=self.request.tenant_id
        )
        return [Prefetch("findings", queryset=findings_queryset)]

    def get_serializer_class(self):
        if self.action in ["metadata", "metadata_latest"]:
            return ResourceMetadataSerializer
@@ -2017,7 +2104,11 @@ class ResourceViewSet(PaginateByPkMixin, BaseRLSViewSet):
            filtered_queryset,
            manager=Resource.all_objects,
            select_related=["provider"],
            prefetch_related=["findings"],
            prefetch_related=(
                self._get_findings_prefetch()
                if self._should_prefetch_findings()
                else []
            ),
        )

    def retrieve(self, request, *args, **kwargs):
@@ -2042,14 +2133,18 @@ class ResourceViewSet(PaginateByPkMixin, BaseRLSViewSet):
        tenant_id = request.tenant_id
        filtered_queryset = self.filter_queryset(self.get_queryset())

        latest_scan_ids = (
            Scan.all_objects.filter(tenant_id=tenant_id, state=StateChoices.COMPLETED)
        latest_scans = (
            Scan.all_objects.filter(
                tenant_id=tenant_id,
                state=StateChoices.COMPLETED,
            )
            .order_by("provider_id", "-inserted_at")
            .distinct("provider_id")
            .values_list("id", flat=True)
            .values("provider_id")
        )

        filtered_queryset = filtered_queryset.filter(
            tenant_id=tenant_id, provider__scan__in=latest_scan_ids
            provider_id__in=Subquery(latest_scans)
        )

        return self.paginate_by_pk(
@@ -2057,7 +2152,11 @@ class ResourceViewSet(PaginateByPkMixin, BaseRLSViewSet):
            filtered_queryset,
            manager=Resource.all_objects,
            select_related=["provider"],
            prefetch_related=["findings"],
            prefetch_related=(
                self._get_findings_prefetch()
                if self._should_prefetch_findings()
                else []
            ),
        )

    @action(detail=False, methods=["get"], url_name="metadata")
@@ -2821,13 +2920,11 @@ class InvitationAcceptViewSet(BaseRLSViewSet):
    partial_update=extend_schema(
        tags=["Role"],
        summary="Partially update a role",
        description="Update certain fields of an existing role's information without affecting other fields.",
        responses={200: RoleSerializer},
    ),
    destroy=extend_schema(
        tags=["Role"],
        summary="Delete a role",
        description="Remove a role from the system by their ID.",
    ),
)
class RoleViewSet(BaseRLSViewSet):
@@ -2849,6 +2946,14 @@ class RoleViewSet(BaseRLSViewSet):
            return RoleUpdateSerializer
        return super().get_serializer_class()

    @extend_schema(
        description=(
            "Update selected fields on an existing role. When changing the `users` "
            "relationship of a role that grants MANAGE_ACCOUNT, the API blocks attempts "
            "that would leave the tenant without any MANAGE_ACCOUNT assignees and prevents "
            "callers from removing their own assignment to that role."
        )
    )
    def partial_update(self, request, *args, **kwargs):
        user_role = get_role(request.user)
        # If the user is the owner of the role, the manage_account field is not editable
@@ -2856,6 +2961,12 @@ class RoleViewSet(BaseRLSViewSet):
            request.data["manage_account"] = str(user_role.manage_account).lower()
        return super().partial_update(request, *args, **kwargs)

    @extend_schema(
        description=(
            "Delete the specified role. The API rejects deletion of the last role "
            "in the tenant that grants MANAGE_ACCOUNT."
        )
    )
    def destroy(self, request, *args, **kwargs):
        instance = self.get_object()
        if (
@@ -2863,6 +2974,21 @@ class RoleViewSet(BaseRLSViewSet):
        ):  # TODO: Move to a constant/enum (in case other roles are created by default)
            raise ValidationError(detail="The admin role cannot be deleted.")

        # Prevent deleting the last MANAGE_ACCOUNT role in the tenant
        if instance.manage_account:
            has_other_ma = (
                Role.objects.filter(tenant_id=instance.tenant_id, manage_account=True)
                .exclude(id=instance.id)
                .exists()
            )
            if not has_other_ma:
                return Response(
                    data={
                        "detail": "Cannot delete the only role with MANAGE_ACCOUNT in the tenant."
                    },
                    status=status.HTTP_400_BAD_REQUEST,
                )

        return super().destroy(request, *args, **kwargs)


@@ -3005,7 +3131,9 @@ class RoleProviderGroupRelationshipView(RelationshipView, BaseRLSViewSet):
            description="Compliance overviews metadata obtained successfully",
            response=ComplianceOverviewMetadataSerializer,
        ),
        202: OpenApiResponse(description="The task is in progress"),
        202: OpenApiResponse(
            description="The task is in progress", response=TaskSerializer
        ),
        500: OpenApiResponse(
            description="Compliance overviews generation task failed"
        ),
@@ -3037,7 +3165,9 @@ class RoleProviderGroupRelationshipView(RelationshipView, BaseRLSViewSet):
            description="Compliance requirement details obtained successfully",
            response=ComplianceOverviewDetailSerializer(many=True),
        ),
        202: OpenApiResponse(description="The task is in progress"),
        202: OpenApiResponse(
            description="The task is in progress", response=TaskSerializer
        ),
        500: OpenApiResponse(
            description="Compliance overviews generation task failed"
        ),
@@ -3415,6 +3545,7 @@ class ComplianceOverviewViewSet(BaseRLSViewSet, TaskManagementMixin):
                ),
                "name": requirement.get("name", ""),
                "framework": compliance_framework.get("framework", ""),
                "compliance_name": compliance_framework.get("name", ""),
                "version": compliance_framework.get("version", ""),
                "description": requirement.get("description", ""),
                "attributes": base_attributes,
@@ -3498,8 +3629,10 @@ class OverviewViewSet(BaseRLSViewSet):
    def get_filterset_class(self):
        if self.action == "providers":
            return None
        elif self.action in ["findings", "findings_severity"]:
        elif self.action == "findings":
            return ScanSummaryFilter
        elif self.action == "findings_severity":
            return ScanSummarySeverityFilter
        elif self.action == "services":
            return ServiceOverviewFilter
        return None
@@ -3621,7 +3754,12 @@ class OverviewViewSet(BaseRLSViewSet):
    @action(detail=False, methods=["get"], url_name="findings_severity")
    def findings_severity(self, request):
        tenant_id = self.request.tenant_id
        queryset = self.get_queryset()

        # Load only required fields
        queryset = self.get_queryset().only(
            "tenant_id", "scan_id", "severity", "fail", "_pass", "total"
        )

        filtered_queryset = self.filter_queryset(queryset)
        provider_filter = (
            {"provider__in": self.allowed_providers}
@@ -3641,16 +3779,22 @@ class OverviewViewSet(BaseRLSViewSet):
            tenant_id=tenant_id, scan_id__in=latest_scan_ids
        )

        # The filter will have added a status_count annotation if any status filter was used
        if "status_count" in filtered_queryset.query.annotations:
            sum_expression = Sum("status_count")
        else:
            sum_expression = Sum("total")

        severity_counts = (
            filtered_queryset.values("severity")
            .annotate(count=Sum("total"))
            .annotate(count=sum_expression)
            .order_by("severity")
        )

        severity_data = {sev[0]: 0 for sev in SeverityChoices}

        for item in severity_counts:
            severity_data[item["severity"]] = item["count"]
        severity_data.update(
            {item["severity"]: item["count"] for item in severity_counts}
        )

        serializer = self.get_serializer(severity_data)
        return Response(serializer.data, status=status.HTTP_200_OK)
@@ -3811,32 +3955,138 @@ class IntegrationViewSet(BaseRLSViewSet):
        context["allowed_providers"] = self.allowed_providers
        return context

    @extend_schema(
        tags=["Integration"],
        summary="Check integration connection",
        description="Try to verify integration connection",
        request=None,
        responses={202: OpenApiResponse(response=TaskSerializer)},
    )
    @action(detail=True, methods=["post"], url_name="connection")
    def connection(self, request, pk=None):
        get_object_or_404(Integration, pk=pk)
        with transaction.atomic():
            task = check_integration_connection_task.delay(
                integration_id=pk, tenant_id=self.request.tenant_id
            )
        prowler_task = Task.objects.get(id=task.id)
        serializer = TaskSerializer(prowler_task)
        return Response(
            data=serializer.data,
            status=status.HTTP_202_ACCEPTED,
            headers={
                "Content-Location": reverse(
                    "task-detail", kwargs={"pk": prowler_task.id}
                )
            },
        )


@extend_schema_view(
    dispatches=extend_schema(
        tags=["Integration"],
        summary="Send findings to a Jira integration",
        description="Send a set of filtered findings to the given integration. At least one finding filter must be "
        "provided.",
        responses={202: OpenApiResponse(response=TaskSerializer)},
        filters=True,
    )
)
class IntegrationJiraViewSet(BaseRLSViewSet):
    queryset = Finding.all_objects.all()
    serializer_class = IntegrationJiraDispatchSerializer
    http_method_names = ["post"]
    filter_backends = [CustomDjangoFilterBackend]
    filterset_class = IntegrationJiraFindingsFilter
    # RBAC required permissions
    required_permissions = [Permissions.MANAGE_INTEGRATIONS]

    @extend_schema(exclude=True)
    def create(self, request, *args, **kwargs):
        raise MethodNotAllowed(method="POST")

    def get_queryset(self):
        tenant_id = self.request.tenant_id
        user_roles = get_role(self.request.user)
        if user_roles.unlimited_visibility:
            # User has unlimited visibility, return all findings
            queryset = Finding.all_objects.filter(tenant_id=tenant_id)
        else:
            # User lacks permission, filter findings based on provider groups associated with the role
            queryset = Finding.all_objects.filter(
                scan__provider__in=get_providers(user_roles)
            )

        return queryset

    @action(detail=False, methods=["post"], url_name="dispatches")
    def dispatches(self, request, integration_pk=None):
        get_object_or_404(Integration, pk=integration_pk)
        serializer = self.get_serializer(
            data=request.data, context={"integration_id": integration_pk}
        )
        serializer.is_valid(raise_exception=True)

        if self.filter_queryset(self.get_queryset()).count() == 0:
            raise ValidationError(
                {"findings": "No findings match the provided filters"}
            )

        finding_ids = [
            str(finding_id)
            for finding_id in self.filter_queryset(self.get_queryset()).values_list(
                "id", flat=True
            )
        ]
        project_key = serializer.validated_data["project_key"]
        issue_type = serializer.validated_data["issue_type"]

        with transaction.atomic():
            task = jira_integration_task.delay(
                tenant_id=self.request.tenant_id,
                integration_id=integration_pk,
                project_key=project_key,
                issue_type=issue_type,
                finding_ids=finding_ids,
            )
        prowler_task = Task.objects.get(id=task.id)
        serializer = TaskSerializer(prowler_task)
        return Response(
            data=serializer.data,
            status=status.HTTP_202_ACCEPTED,
            headers={
                "Content-Location": reverse(
                    "task-detail", kwargs={"pk": prowler_task.id}
                )
            },
        )


@extend_schema_view(
    list=extend_schema(
        tags=["Lighthouse"],
        summary="List all Lighthouse configurations",
        description="Retrieve a list of all Lighthouse configurations.",
        tags=["Lighthouse AI"],
        summary="List all Lighthouse AI configurations",
        description="Retrieve a list of all Lighthouse AI configurations.",
    ),
    create=extend_schema(
        tags=["Lighthouse"],
        summary="Create a new Lighthouse configuration",
        description="Create a new Lighthouse configuration with the specified details.",
        tags=["Lighthouse AI"],
        summary="Create a new Lighthouse AI configuration",
        description="Create a new Lighthouse AI configuration with the specified details.",
    ),
    partial_update=extend_schema(
        tags=["Lighthouse"],
        summary="Partially update a Lighthouse configuration",
        description="Update certain fields of an existing Lighthouse configuration.",
        tags=["Lighthouse AI"],
        summary="Partially update a Lighthouse AI configuration",
        description="Update certain fields of an existing Lighthouse AI configuration.",
    ),
    destroy=extend_schema(
        tags=["Lighthouse"],
        summary="Delete a Lighthouse configuration",
        description="Remove a Lighthouse configuration by its ID.",
        tags=["Lighthouse AI"],
        summary="Delete a Lighthouse AI configuration",
        description="Remove a Lighthouse AI configuration by its ID.",
    ),
    connection=extend_schema(
        tags=["Lighthouse"],
        tags=["Lighthouse AI"],
        summary="Check the connection to the OpenAI API",
        description="Verify the connection to the OpenAI API for a specific Lighthouse configuration.",
        description="Verify the connection to the OpenAI API for a specific Lighthouse AI configuration.",
        request=None,
        responses={202: OpenApiResponse(response=TaskSerializer)},
    ),

@@ -108,6 +108,13 @@ REST_FRAMEWORK = {
    ),
    "TEST_REQUEST_DEFAULT_FORMAT": "vnd.api+json",
    "JSON_API_UNIFORM_EXCEPTIONS": True,
    "DEFAULT_THROTTLE_CLASSES": [
        "rest_framework.throttling.ScopedRateThrottle",
    ],
    "DEFAULT_THROTTLE_RATES": {
        "token-obtain": env("DJANGO_THROTTLE_TOKEN_OBTAIN", default=None),
        "dj_rest_auth": None,
    },
}
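# The throttle rate is read from the environment; DRF expects rates in the
# "<count>/<period>" form, e.g. (illustrative value):
#   DJANGO_THROTTLE_TOKEN_OBTAIN=10/minute
# Leaving the variable unset keeps the default of None, i.e. no throttling.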
|
||||
|
||||
SPECTACULAR_SETTINGS = {
|
||||
@@ -116,6 +123,9 @@ SPECTACULAR_SETTINGS = {
|
||||
"PREPROCESSING_HOOKS": [
|
||||
"drf_spectacular_jsonapi.hooks.fix_nested_path_parameters",
|
||||
],
|
||||
"POSTPROCESSING_HOOKS": [
|
||||
"api.schema_hooks.attach_task_202_examples",
|
||||
],
|
||||
"TITLE": "API Reference - Prowler",
|
||||
}
|
||||
|
||||
|
||||
@@ -69,6 +69,9 @@ IGNORED_EXCEPTIONS = [
|
||||
"AzureClientIdAndClientSecretNotBelongingToTenantIdError",
|
||||
"AzureHTTPResponseError",
|
||||
"Error with credentials provided",
|
||||
# PowerShell Errors in User Authentication
|
||||
"Microsoft Teams User Auth connection failed: Please check your permissions and try again.",
|
||||
"Exchange Online User Auth connection failed: Please check your permissions and try again.",
|
||||
]
|
||||
|
||||
|
||||
|
||||
+103
-1
@@ -191,6 +191,108 @@ def create_test_user_rbac_limited(django_db_setup, django_db_blocker):
|
||||
return user
|
||||
|
||||
|
||||
@pytest.fixture(scope="function")
|
||||
def create_test_user_rbac_manage_account(django_db_setup, django_db_blocker):
|
||||
"""User with only manage_account permission (no manage_users)."""
|
||||
with django_db_blocker.unblock():
|
||||
user = User.objects.create_user(
|
||||
name="testing_manage_account",
|
||||
email="rbac_manage_account@rbac.com",
|
||||
password=TEST_PASSWORD,
|
||||
)
|
||||
tenant = Tenant.objects.create(
|
||||
name="Tenant Test Manage Account",
|
||||
)
|
||||
Membership.objects.create(
|
||||
user=user,
|
||||
tenant=tenant,
|
||||
role=Membership.RoleChoices.OWNER,
|
||||
)
|
||||
role = Role.objects.create(
|
||||
name="manage_account",
|
||||
tenant_id=tenant.id,
|
||||
manage_users=False,
|
||||
manage_account=True,
|
||||
manage_billing=False,
|
||||
manage_providers=False,
|
||||
manage_integrations=False,
|
||||
manage_scans=False,
|
||||
unlimited_visibility=False,
|
||||
)
|
||||
UserRoleRelationship.objects.create(
|
||||
user=user,
|
||||
role=role,
|
||||
tenant_id=tenant.id,
|
||||
)
|
||||
return user
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def authenticated_client_rbac_manage_account(
|
||||
create_test_user_rbac_manage_account, tenants_fixture, client
|
||||
):
|
||||
client.user = create_test_user_rbac_manage_account
|
||||
serializer = TokenSerializer(
|
||||
data={
|
||||
"type": "tokens",
|
||||
"email": "rbac_manage_account@rbac.com",
|
||||
"password": TEST_PASSWORD,
|
||||
}
|
||||
)
|
||||
serializer.is_valid()
|
||||
access_token = serializer.validated_data["access"]
|
||||
client.defaults["HTTP_AUTHORIZATION"] = f"Bearer {access_token}"
|
||||
return client
|
||||
|
||||
|
||||
@pytest.fixture(scope="function")
|
||||
def create_test_user_rbac_manage_users_only(django_db_setup, django_db_blocker):
|
||||
"""User with only manage_users permission (no manage_account)."""
|
||||
with django_db_blocker.unblock():
|
||||
user = User.objects.create_user(
|
||||
name="testing_manage_users_only",
|
||||
email="rbac_manage_users_only@rbac.com",
|
||||
password=TEST_PASSWORD,
|
||||
)
|
||||
tenant = Tenant.objects.create(name="Tenant Test Manage Users Only")
|
||||
Membership.objects.create(
|
||||
user=user,
|
||||
tenant=tenant,
|
||||
role=Membership.RoleChoices.OWNER,
|
||||
)
|
||||
role = Role.objects.create(
|
||||
name="manage_users_only",
|
||||
tenant_id=tenant.id,
|
||||
manage_users=True,
|
||||
manage_account=False,
|
||||
manage_billing=False,
|
||||
manage_providers=False,
|
||||
manage_integrations=False,
|
||||
manage_scans=False,
|
||||
unlimited_visibility=False,
|
||||
)
|
||||
UserRoleRelationship.objects.create(user=user, role=role, tenant_id=tenant.id)
|
||||
return user
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def authenticated_client_rbac_manage_users_only(
|
||||
create_test_user_rbac_manage_users_only, client
|
||||
):
|
||||
client.user = create_test_user_rbac_manage_users_only
|
||||
serializer = TokenSerializer(
|
||||
data={
|
||||
"type": "tokens",
|
||||
"email": "rbac_manage_users_only@rbac.com",
|
||||
"password": TEST_PASSWORD,
|
||||
}
|
||||
)
|
||||
serializer.is_valid()
|
||||
access_token = serializer.validated_data["access"]
|
||||
client.defaults["HTTP_AUTHORIZATION"] = f"Bearer {access_token}"
|
||||
return client
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def authenticated_client_rbac(create_test_user_rbac, tenants_fixture, client):
|
||||
client.user = create_test_user_rbac
|
||||
@@ -1065,7 +1167,7 @@ def integrations_fixture(providers_fixture):
|
||||
enabled=True,
|
||||
connected=True,
|
||||
integration_type="amazon_s3",
|
||||
configuration={"key": "value"},
|
||||
configuration={"key": "value1"},
|
||||
credentials={"psswd": "1234"},
|
||||
)
|
||||
IntegrationProviderRelationship.objects.create(
|
||||
|
||||
@@ -3,8 +3,11 @@ from datetime import datetime, timezone
|
||||
import openai
|
||||
from celery.utils.log import get_task_logger
|
||||
|
||||
from api.models import LighthouseConfiguration, Provider
|
||||
from api.utils import prowler_provider_connection_test
|
||||
from api.models import Integration, LighthouseConfiguration, Provider
|
||||
from api.utils import (
|
||||
prowler_integration_connection_test,
|
||||
prowler_provider_connection_test,
|
||||
)
|
||||
|
||||
logger = get_task_logger(__name__)
|
||||
|
||||
@@ -83,3 +86,35 @@ def check_lighthouse_connection(lighthouse_config_id: str):
|
||||
lighthouse_config.is_active = False
|
||||
lighthouse_config.save()
|
||||
return {"connected": False, "error": str(e), "available_models": []}
|
||||
|
||||
|
||||
def check_integration_connection(integration_id: str):
|
||||
"""
|
||||
Business logic to check the connection status of an integration.
|
||||
|
||||
Args:
|
||||
integration_id (str): The primary key of the Integration instance to check.
|
||||
"""
|
||||
integration = Integration.objects.filter(pk=integration_id, enabled=True).first()
|
||||
|
||||
if not integration:
|
||||
logger.info(f"Integration {integration_id} is not enabled")
|
||||
return {"connected": False, "error": "Integration is not enabled"}
|
||||
|
||||
try:
|
||||
result = prowler_integration_connection_test(integration)
|
||||
except Exception as e:
|
||||
logger.warning(
|
||||
f"Unexpected exception checking {integration.integration_type} integration connection: {str(e)}"
|
||||
)
|
||||
raise e
|
||||
|
||||
# Update integration connection status
|
||||
integration.connected = result.is_connected
|
||||
integration.connection_last_checked_at = datetime.now(tz=timezone.utc)
|
||||
integration.save()
|
||||
|
||||
return {
|
||||
"connected": result.is_connected,
|
||||
"error": str(result.error) if result.error else None,
|
||||
}
|
||||
|
||||
@@ -8,18 +8,22 @@ from botocore.exceptions import ClientError, NoCredentialsError, ParamValidation
|
||||
from celery.utils.log import get_task_logger
|
||||
from django.conf import settings
|
||||
|
||||
from api.db_utils import rls_transaction
|
||||
from api.models import Scan
|
||||
from prowler.config.config import (
|
||||
csv_file_suffix,
|
||||
html_file_suffix,
|
||||
json_asff_file_suffix,
|
||||
json_ocsf_file_suffix,
|
||||
output_file_timestamp,
|
||||
)
|
||||
from prowler.lib.outputs.asff.asff import ASFF
|
||||
from prowler.lib.outputs.compliance.aws_well_architected.aws_well_architected import (
|
||||
AWSWellArchitected,
|
||||
)
|
||||
from prowler.lib.outputs.compliance.cis.cis_aws import AWSCIS
|
||||
from prowler.lib.outputs.compliance.cis.cis_azure import AzureCIS
|
||||
from prowler.lib.outputs.compliance.cis.cis_gcp import GCPCIS
|
||||
from prowler.lib.outputs.compliance.cis.cis_github import GithubCIS
|
||||
from prowler.lib.outputs.compliance.cis.cis_kubernetes import KubernetesCIS
|
||||
from prowler.lib.outputs.compliance.cis.cis_m365 import M365CIS
|
||||
from prowler.lib.outputs.compliance.ens.ens_aws import AWSENS
|
||||
@@ -93,6 +97,9 @@ COMPLIANCE_CLASS_MAP = {
|
||||
(lambda name: name == "prowler_threatscore_m365", ProwlerThreatScoreM365),
|
||||
(lambda name: name.startswith("iso27001_"), M365ISO27001),
|
||||
],
|
||||
"github": [
|
||||
(lambda name: name.startswith("cis_"), GithubCIS),
|
||||
],
|
||||
}
|
||||
|
||||
|
||||
@@ -104,6 +111,7 @@ OUTPUT_FORMATS_MAPPING = {
|
||||
"kwargs": {},
|
||||
},
|
||||
"json-ocsf": {"class": OCSF, "suffix": json_ocsf_file_suffix, "kwargs": {}},
|
||||
"json-asff": {"class": ASFF, "suffix": json_asff_file_suffix, "kwargs": {}},
|
||||
"html": {"class": HTML, "suffix": html_file_suffix, "kwargs": {"stats": {}}},
|
||||
}
|
||||
|
||||
@@ -167,7 +175,7 @@ def get_s3_client():
|
||||
return s3_client
|
||||
|
||||
|
||||
def _upload_to_s3(tenant_id: str, zip_path: str, scan_id: str) -> str:
|
||||
def _upload_to_s3(tenant_id: str, zip_path: str, scan_id: str) -> str | None:
|
||||
"""
|
||||
Upload the specified ZIP file to an S3 bucket.
|
||||
If the S3 bucket environment variables are not configured,
|
||||
@@ -184,7 +192,7 @@ def _upload_to_s3(tenant_id: str, zip_path: str, scan_id: str) -> str:
|
||||
"""
|
||||
bucket = base.DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET
|
||||
if not bucket:
|
||||
return None
|
||||
return
|
||||
|
||||
try:
|
||||
s3 = get_s3_client()
|
||||
@@ -244,15 +252,19 @@ def _generate_output_directory(
|
||||
# Sanitize the prowler provider name to ensure it is a valid directory name
|
||||
prowler_provider_sanitized = re.sub(r"[^\w\-]", "-", prowler_provider)
|
||||
|
||||
with rls_transaction(tenant_id):
|
||||
started_at = Scan.objects.get(id=scan_id).started_at
|
||||
|
||||
timestamp = started_at.strftime("%Y%m%d%H%M%S")
|
||||
path = (
|
||||
f"{output_directory}/{tenant_id}/{scan_id}/prowler-output-"
|
||||
f"{prowler_provider_sanitized}-{output_file_timestamp}"
|
||||
f"{prowler_provider_sanitized}-{timestamp}"
|
||||
)
|
||||
os.makedirs("/".join(path.split("/")[:-1]), exist_ok=True)
|
||||
|
||||
compliance_path = (
|
||||
f"{output_directory}/{tenant_id}/{scan_id}/compliance/prowler-output-"
|
||||
f"{prowler_provider_sanitized}-{output_file_timestamp}"
|
||||
f"{prowler_provider_sanitized}-{timestamp}"
|
||||
)
|
||||
os.makedirs("/".join(compliance_path.split("/")[:-1]), exist_ok=True)
|
||||
|
||||
|
||||
@@ -0,0 +1,506 @@
|
||||
import os
|
||||
from glob import glob
|
||||
|
||||
from celery.utils.log import get_task_logger
|
||||
from config.django.base import DJANGO_FINDINGS_BATCH_SIZE
|
||||
from tasks.utils import batched
|
||||
|
||||
from api.db_utils import rls_transaction
|
||||
from api.models import Finding, Integration, Provider
|
||||
from api.utils import initialize_prowler_integration, initialize_prowler_provider
|
||||
from prowler.lib.outputs.asff.asff import ASFF
|
||||
from prowler.lib.outputs.compliance.generic.generic import GenericCompliance
|
||||
from prowler.lib.outputs.csv.csv import CSV
|
||||
from prowler.lib.outputs.finding import Finding as FindingOutput
|
||||
from prowler.lib.outputs.html.html import HTML
|
||||
from prowler.lib.outputs.ocsf.ocsf import OCSF
|
||||
from prowler.providers.aws.aws_provider import AwsProvider
|
||||
from prowler.providers.aws.lib.s3.s3 import S3
|
||||
from prowler.providers.aws.lib.security_hub.security_hub import SecurityHub
|
||||
from prowler.providers.common.models import Connection
|
||||
|
||||
logger = get_task_logger(__name__)
|
||||
|
||||
|
||||
def get_s3_client_from_integration(
|
||||
integration: Integration,
|
||||
) -> tuple[bool, S3 | Connection]:
|
||||
"""
|
||||
Create and return a boto3 S3 client using AWS credentials from an integration.
|
||||
|
||||
Args:
|
||||
integration (Integration): The integration to get the S3 client from.
|
||||
|
||||
Returns:
|
||||
tuple[bool, S3 | Connection]: A tuple containing a boolean indicating if the connection was successful and the S3 client or connection object.
|
||||
"""
|
||||
s3 = S3(
|
||||
**integration.credentials,
|
||||
bucket_name=integration.configuration["bucket_name"],
|
||||
output_directory=integration.configuration["output_directory"],
|
||||
)
|
||||
|
||||
connection = s3.test_connection(
|
||||
**integration.credentials,
|
||||
bucket_name=integration.configuration["bucket_name"],
|
||||
)
|
||||
|
||||
if connection.is_connected:
|
||||
return True, s3
|
||||
|
||||
return False, connection
|
||||
|
||||
|
def upload_s3_integration(
    tenant_id: str, provider_id: str, output_directory: str
) -> bool:
    """
    Upload the specified output files to an S3 bucket from an integration.

    Reconstructs output objects from files in the output directory instead of
    using serialized data.

    Args:
        tenant_id (str): The tenant identifier, used as part of the S3 key prefix.
        provider_id (str): The provider identifier, used as part of the S3 key prefix.
        output_directory (str): Path to the directory containing output files.

    Returns:
        bool: True if all integrations were executed successfully, False otherwise.

    Raises:
        botocore.exceptions.ClientError: If the upload attempt to S3 fails for any reason.
    """
    logger.info(f"Processing S3 integrations for provider {provider_id}")

    try:
        with rls_transaction(tenant_id):
            integrations = list(
                Integration.objects.filter(
                    integrationproviderrelationship__provider_id=provider_id,
                    integration_type=Integration.IntegrationChoices.AMAZON_S3,
                    enabled=True,
                )
            )

        if not integrations:
            logger.error(f"No S3 integrations found for provider {provider_id}")
            return False

        integration_executions = 0
        for integration in integrations:
            try:
                connected, s3 = get_s3_client_from_integration(integration)
            except Exception as e:
                logger.info(
                    f"S3 connection failed for integration {integration.id}: {e}"
                )
                integration.connected = False
                integration.save()
                continue

            if connected:
                try:
                    # Reconstruct generated_outputs from files in the output directory.
                    # This approach scans the output directory for files and creates the
                    # appropriate output objects based on file extensions and naming patterns.
                    generated_outputs = {"regular": [], "compliance": []}

                    # Find and recreate regular outputs (CSV, HTML, OCSF, ASFF)
                    output_file_patterns = {
                        ".csv": CSV,
                        ".html": HTML,
                        ".ocsf.json": OCSF,
                        ".asff.json": ASFF,
                    }

                    base_dir = os.path.dirname(output_directory)
                    for extension, output_class in output_file_patterns.items():
                        pattern = f"{output_directory}*{extension}"
                        for file_path in glob(pattern):
                            if os.path.exists(file_path):
                                output = output_class(findings=[], file_path=file_path)
                                output.create_file_descriptor(file_path)
                                generated_outputs["regular"].append(output)

                    # Find and recreate compliance outputs
                    compliance_pattern = os.path.join(base_dir, "compliance", "*.csv")
                    for file_path in glob(compliance_pattern):
                        if os.path.exists(file_path):
                            output = GenericCompliance(
                                findings=[],
                                compliance=None,
                                file_path=file_path,
                                file_extension=".csv",
                            )
                            output.create_file_descriptor(file_path)
                            generated_outputs["compliance"].append(output)

                    # Use send_to_bucket with the recreated generated_outputs objects
                    s3.send_to_bucket(generated_outputs)
                except Exception as e:
                    logger.error(
                        f"S3 upload failed for integration {integration.id}: {e}"
                    )
                    continue
                integration_executions += 1
            else:
                integration.connected = False
                integration.save()
                logger.error(
                    f"S3 upload failed, connection failed for integration {integration.id}: {s3.error}"
                )

        result = integration_executions == len(integrations)
        if result:
            logger.info(
                f"All the S3 integrations completed successfully for provider {provider_id}"
            )
        else:
            logger.info(f"Some S3 integrations failed for provider {provider_id}")
        return result
    except Exception as e:
        logger.error(f"S3 integrations failed for provider {provider_id}: {str(e)}")
        return False


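# The glob patterns above assume an output layout where `output_directory` is a
# path prefix ending in "<provider>-<timestamp>" and compliance CSVs live in a
# sibling "compliance/" directory. A sketch of the assumed layout (these paths
# are illustrative examples only, not real output):
#
#     .../<tenant_id>/<scan_id>/<provider>-<timestamp>.csv
#     .../<tenant_id>/<scan_id>/<provider>-<timestamp>.ocsf.json
#     .../<tenant_id>/<scan_id>/<provider>-<timestamp>.asff.json
#     .../<tenant_id>/<scan_id>/compliance/<framework>.csv

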
def get_security_hub_client_from_integration(
    integration: Integration, tenant_id: str, findings: list
) -> tuple[bool, SecurityHub | Connection]:
    """
    Create and return a SecurityHub client using AWS credentials from an integration.

    Args:
        integration (Integration): The integration to get the Security Hub client from.
        tenant_id (str): The tenant identifier.
        findings (list): List of findings in ASFF format to send to Security Hub.

    Returns:
        tuple[bool, SecurityHub | Connection]: A tuple containing a boolean indicating
            if the connection was successful and the SecurityHub client or connection object.
    """
    # Get the provider associated with this integration
    with rls_transaction(tenant_id):
        provider_relationship = integration.integrationproviderrelationship_set.first()
        if not provider_relationship:
            return False, Connection(
                is_connected=False, error="No provider associated with this integration"
            )
        provider_uid = provider_relationship.provider.uid
        provider_secret = provider_relationship.provider.secret.secret

    credentials = (
        integration.credentials if integration.credentials else provider_secret
    )
    connection = SecurityHub.test_connection(
        aws_account_id=provider_uid,
        raise_on_exception=False,
        **credentials,
    )

    if connection.is_connected:
        all_security_hub_regions = AwsProvider.get_available_aws_service_regions(
            "securityhub", connection.partition
        )

        # Create the regions status dictionary
        regions_status = {}
        for region in set(all_security_hub_regions):
            regions_status[region] = region in connection.enabled_regions

        # Save regions information in the integration configuration
        with rls_transaction(tenant_id):
            integration.configuration["regions"] = regions_status
            integration.save()

        # Create the SecurityHub client with all necessary parameters
        security_hub = SecurityHub(
            aws_account_id=provider_uid,
            findings=findings,
            send_only_fails=integration.configuration.get("send_only_fails", False),
            aws_security_hub_available_regions=list(connection.enabled_regions),
            **credentials,
        )
        return True, security_hub
    else:
        # Reset regions information if the connection fails
        with rls_transaction(tenant_id):
            integration.configuration["regions"] = {}
            integration.save()

        return False, connection


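# Example usage (sketch): the first batch of ASFF findings is passed when the
# client is created; later batches are filtered into the existing client, as
# upload_security_hub_integration does below. `asff_findings` is a hypothetical
# placeholder for an already-transformed batch.
#
#     connected, hub_or_connection = get_security_hub_client_from_integration(
#         integration, tenant_id, asff_findings
#     )
#     if connected:
#         sent = hub_or_connection.batch_send_to_security_hub()
#     else:
#         logger.error(f"Security Hub connection failed: {hub_or_connection.error}")

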
def upload_security_hub_integration(
    tenant_id: str, provider_id: str, scan_id: str
) -> bool:
    """
    Upload findings to AWS Security Hub using configured integrations.

    This function retrieves findings from the database, transforms them to ASFF format,
    and sends them to AWS Security Hub using the configured integration credentials.

    Args:
        tenant_id (str): The tenant identifier.
        provider_id (str): The provider identifier.
        scan_id (str): The scan identifier for which to send findings.

    Returns:
        bool: True if all integrations executed successfully, False otherwise.
    """
    logger.info(f"Processing Security Hub integrations for provider {provider_id}")

    try:
        with rls_transaction(tenant_id):
            # Get the Security Hub integrations for this provider
            integrations = list(
                Integration.objects.filter(
                    integrationproviderrelationship__provider_id=provider_id,
                    integration_type=Integration.IntegrationChoices.AWS_SECURITY_HUB,
                    enabled=True,
                )
            )

            if not integrations:
                logger.error(
                    f"No Security Hub integrations found for provider {provider_id}"
                )
                return False

            # Get the provider object
            provider = Provider.objects.get(id=provider_id)

            # Initialize the prowler provider for finding transformation
            prowler_provider = initialize_prowler_provider(provider)

        # Process each Security Hub integration
        integration_executions = 0
        total_findings_sent = {}  # Track findings sent per integration

        for integration in integrations:
            try:
                # Initialize the Security Hub client for this integration.
                # The client is created once and reused for all batches.
                security_hub_client = None
                send_only_fails = integration.configuration.get(
                    "send_only_fails", False
                )
                total_findings_sent[integration.id] = 0

                # Process findings in batches to avoid memory issues
                has_findings = False
                batch_number = 0

                with rls_transaction(tenant_id):
                    qs = (
                        Finding.all_objects.filter(tenant_id=tenant_id, scan_id=scan_id)
                        .order_by("uid")
                        .iterator()
                    )

                    for batch, _ in batched(qs, DJANGO_FINDINGS_BATCH_SIZE):
                        batch_number += 1
                        has_findings = True

                        # Transform findings for this batch
                        transformed_findings = [
                            FindingOutput.transform_api_finding(
                                finding, prowler_provider
                            )
                            for finding in batch
                        ]

                        # Convert to ASFF format
                        asff_transformer = ASFF(
                            findings=transformed_findings,
                            file_path="",
                            file_extension="json",
                        )
                        asff_transformer.transform(transformed_findings)

                        # Get the batch of ASFF findings
                        batch_asff_findings = asff_transformer.data

                        if batch_asff_findings:
                            # Create the Security Hub client for the first batch or reuse the existing one
                            if not security_hub_client:
                                connected, security_hub = (
                                    get_security_hub_client_from_integration(
                                        integration, tenant_id, batch_asff_findings
                                    )
                                )

                                if not connected:
                                    logger.error(
                                        f"Security Hub connection failed for integration {integration.id}: "
                                        f"{security_hub.error}"
                                    )
                                    integration.connected = False
                                    integration.save()
                                    break  # Skip this integration

                                security_hub_client = security_hub
                                logger.info(
                                    f"Sending {'only failed' if send_only_fails else 'all'} findings to Security Hub via "
                                    f"integration {integration.id}"
                                )
                            else:
                                # Update the findings in the existing client for this batch
                                security_hub_client._findings_per_region = (
                                    security_hub_client.filter(
                                        batch_asff_findings, send_only_fails
                                    )
                                )

                            # Send this batch to Security Hub
                            try:
                                findings_sent = (
                                    security_hub_client.batch_send_to_security_hub()
                                )
                                total_findings_sent[integration.id] += findings_sent

                                if findings_sent > 0:
                                    logger.debug(
                                        f"Sent batch {batch_number} with {findings_sent} findings to Security Hub"
                                    )
                            except Exception as batch_error:
                                logger.error(
                                    f"Failed to send batch {batch_number} to Security Hub: {str(batch_error)}"
                                )

                        # Clear memory after processing each batch
                        asff_transformer._data.clear()
                        del batch_asff_findings
                        del transformed_findings

                if not has_findings:
                    logger.info(
                        f"No findings to send to Security Hub for scan {scan_id}"
                    )
                    integration_executions += 1
                elif security_hub_client:
                    if total_findings_sent[integration.id] > 0:
                        logger.info(
                            f"Successfully sent {total_findings_sent[integration.id]} total findings to Security Hub via integration {integration.id}"
                        )
                        integration_executions += 1
                    else:
                        logger.warning(
                            f"No findings were sent to Security Hub via integration {integration.id}"
                        )

                    # Archive previous findings if configured to do so
                    if integration.configuration.get(
                        "archive_previous_findings", False
                    ):
                        logger.info(
                            f"Archiving previous findings in Security Hub via integration {integration.id}"
                        )
                        try:
                            findings_archived = (
                                security_hub_client.archive_previous_findings()
                            )
                            logger.info(
                                f"Successfully archived {findings_archived} previous findings in Security Hub"
                            )
                        except Exception as archive_error:
                            logger.warning(
                                f"Failed to archive previous findings: {str(archive_error)}"
                            )

            except Exception as e:
                logger.error(
                    f"Security Hub integration {integration.id} failed: {str(e)}"
                )
                continue

        result = integration_executions == len(integrations)
        if result:
            logger.info(
                f"All Security Hub integrations completed successfully for provider {provider_id}"
            )
        else:
            logger.error(
                f"Some Security Hub integrations failed for provider {provider_id}"
            )

        return result

    except Exception as e:
        logger.error(
            f"Security Hub integrations failed for provider {provider_id}: {str(e)}"
        )
        return False


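# The loops above rely on tasks.utils.batched yielding (batch, is_last) pairs
# over an iterator. A minimal sketch of those assumed semantics (the real
# helper may differ):
#
#     from itertools import islice
#
#     def batched_sketch(iterator, batch_size):
#         iterator = iter(iterator)
#         batch = list(islice(iterator, batch_size))
#         while batch:
#             next_batch = list(islice(iterator, batch_size))
#             yield batch, not next_batch  # (items, is_last_batch)
#             batch = next_batch

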
def send_findings_to_jira(
    tenant_id: str,
    integration_id: str,
    project_key: str,
    issue_type: str,
    finding_ids: list[str],
):
    """
    Create one Jira issue per finding using a configured Jira integration.

    Returns a dictionary with the number of tickets created and the number of
    findings that failed to send.
    """
    with rls_transaction(tenant_id):
        integration = Integration.objects.get(id=integration_id)
        jira_integration = initialize_prowler_integration(integration)

    num_tickets_created = 0
    for finding_id in finding_ids:
        with rls_transaction(tenant_id):
            finding_instance = (
                Finding.all_objects.select_related("scan__provider")
                .prefetch_related("resources")
                .get(id=finding_id)
            )

            # Extract resource information
            resource = (
                finding_instance.resources.first()
                if finding_instance.resources.exists()
                else None
            )
            resource_uid = resource.uid if resource else ""
            resource_name = resource.name if resource else ""
            resource_tags = {}
            if resource and hasattr(resource, "tags"):
                resource_tags = resource.get_tags(tenant_id)

            # Get the region
            region = resource.region if resource and resource.region else ""

            # Extract remediation information from check_metadata
            check_metadata = finding_instance.check_metadata
            remediation = check_metadata.get("remediation", {})
            recommendation = remediation.get("recommendation", {})
            remediation_code = remediation.get("code", {})

            # Send the individual finding to Jira
            result = jira_integration.send_finding(
                check_id=finding_instance.check_id,
                check_title=check_metadata.get("checktitle", ""),
                severity=finding_instance.severity,
                status=finding_instance.status,
                status_extended=finding_instance.status_extended or "",
                provider=finding_instance.scan.provider.provider,
                region=region,
                resource_uid=resource_uid,
                resource_name=resource_name,
                risk=check_metadata.get("risk", ""),
                recommendation_text=recommendation.get("text", ""),
                recommendation_url=recommendation.get("url", ""),
                remediation_code_native_iac=remediation_code.get("nativeiac", ""),
                remediation_code_terraform=remediation_code.get("terraform", ""),
                remediation_code_cli=remediation_code.get("cli", ""),
                remediation_code_other=remediation_code.get("other", ""),
                resource_tags=resource_tags,
                compliance=finding_instance.compliance or {},
                project_key=project_key,
                issue_type=issue_type,
            )
            if result:
                num_tickets_created += 1
            else:
                logger.error(f"Failed to send finding {finding_id} to Jira")

    return {
        "created_count": num_tickets_created,
        "failed_count": len(finding_ids) - num_tickets_created,
    }
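# Example invocation (sketch): callers are expected to enqueue the Celery task
# defined in tasks/tasks.py rather than call this function directly. The ID
# values below are hypothetical placeholders.
#
#     jira_integration_task.apply_async(
#         kwargs={
#             "tenant_id": tenant_id,
#             "integration_id": integration_id,
#             "project_key": "SEC",
#             "issue_type": "Task",
#             "finding_ids": [finding_id],
#         }
#     )
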
@@ -2,13 +2,17 @@ from datetime import datetime, timedelta, timezone
 from pathlib import Path
 from shutil import rmtree
 
-from celery import chain, shared_task
+from celery import chain, group, shared_task
 from celery.utils.log import get_task_logger
 from config.celery import RLSTask
 from config.django.base import DJANGO_FINDINGS_BATCH_SIZE, DJANGO_TMP_OUTPUT_DIRECTORY
 from django_celery_beat.models import PeriodicTask
 from tasks.jobs.backfill import backfill_resource_scan_summaries
-from tasks.jobs.connection import check_lighthouse_connection, check_provider_connection
+from tasks.jobs.connection import (
+    check_integration_connection,
+    check_lighthouse_connection,
+    check_provider_connection,
+)
 from tasks.jobs.deletion import delete_provider, delete_tenant
 from tasks.jobs.export import (
     COMPLIANCE_CLASS_MAP,
@@ -17,6 +21,11 @@ from tasks.jobs.export import (
     _generate_output_directory,
     _upload_to_s3,
 )
+from tasks.jobs.integrations import (
+    send_findings_to_jira,
+    upload_s3_integration,
+    upload_security_hub_integration,
+)
 from tasks.jobs.scan import (
     aggregate_findings,
     create_compliance_requirements,
@@ -27,7 +36,7 @@ from tasks.utils import batched, get_next_execution_datetime
 from api.compliance import get_compliance_frameworks
 from api.db_utils import rls_transaction
 from api.decorators import set_tenant
-from api.models import Finding, Provider, Scan, ScanSummary, StateChoices
+from api.models import Finding, Integration, Provider, Scan, ScanSummary, StateChoices
 from api.utils import initialize_prowler_provider
 from api.v1.serializers import ScanTaskSerializer
 from prowler.lib.check.compliance_models import Compliance
@@ -54,6 +63,11 @@ def _perform_scan_complete_tasks(tenant_id: str, scan_id: str, provider_id: str)
         generate_outputs_task.si(
             scan_id=scan_id, provider_id=provider_id, tenant_id=tenant_id
         ),
+        check_integrations_task.si(
+            tenant_id=tenant_id,
+            provider_id=provider_id,
+            scan_id=scan_id,
+        ),
     ).apply_async()
 
 
@@ -74,6 +88,18 @@ def check_provider_connection_task(provider_id: str):
     return check_provider_connection(provider_id=provider_id)
 
 
+@shared_task(base=RLSTask, name="integration-connection-check")
+@set_tenant
+def check_integration_connection_task(integration_id: str):
+    """
+    Task to check the connection status of an integration.
+
+    Args:
+        integration_id (str): The primary key of the Integration instance to check.
+    """
+    return check_integration_connection(integration_id=integration_id)
+
+
 @shared_task(
     base=RLSTask, name="provider-deletion", queue="deletion", autoretry_for=(Exception,)
 )
@@ -302,12 +328,30 @@ def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
         ScanSummary.objects.filter(scan_id=scan_id)
     )
 
-    qs = Finding.all_objects.filter(scan_id=scan_id).order_by("uid").iterator()
+    # Check if we need to generate ASFF output for AWS providers with SecurityHub integration
+    generate_asff = False
+    if provider_type == "aws":
+        security_hub_integrations = Integration.objects.filter(
+            integrationproviderrelationship__provider_id=provider_id,
+            integration_type=Integration.IntegrationChoices.AWS_SECURITY_HUB,
+            enabled=True,
+        )
+        generate_asff = security_hub_integrations.exists()
+
+    qs = (
+        Finding.all_objects.filter(tenant_id=tenant_id, scan_id=scan_id)
+        .order_by("uid")
+        .iterator()
+    )
     for batch, is_last in batched(qs, DJANGO_FINDINGS_BATCH_SIZE):
         fos = [FindingOutput.transform_api_finding(f, prowler_provider) for f in batch]
 
         # Outputs
         for mode, cfg in OUTPUT_FORMATS_MAPPING.items():
+            # Skip ASFF generation if not needed
+            if mode == "json-asff" and not generate_asff:
+                continue
+
             cls = cfg["class"]
             suffix = cfg["suffix"]
             extra = cfg.get("kwargs", {}).copy()
@@ -361,7 +405,34 @@ def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
     compressed = _compress_output_files(out_dir)
     upload_uri = _upload_to_s3(tenant_id, compressed, scan_id)
 
+    # S3 integrations (need output_directory)
+    with rls_transaction(tenant_id):
+        s3_integrations = Integration.objects.filter(
+            integrationproviderrelationship__provider_id=provider_id,
+            integration_type=Integration.IntegrationChoices.AMAZON_S3,
+            enabled=True,
+        )
+
+    if s3_integrations:
+        # Pass the output directory path to the S3 integration task to reconstruct objects from files
+        s3_integration_task.apply_async(
+            kwargs={
+                "tenant_id": tenant_id,
+                "provider_id": provider_id,
+                "output_directory": out_dir,
+            }
+        ).get(
+            disable_sync_subtasks=False
+        )  # TODO: This synchronous execution is NOT recommended.
+        # We're forced to do this because we need the files to exist before deletion occurs.
+        # Once we have the periodic file cleanup task implemented, we should:
+        # 1. Remove this .get() call and make it fully async
+        # 2. For Cloud deployments, develop a secondary approach where outputs are stored
+        #    directly in S3 and read from there, eliminating local file dependencies
+
     if upload_uri:
         # TODO: We need to create a new periodic task to delete the output files
         # This task shouldn't be responsible for deleting the output files
         try:
             rmtree(Path(compressed).parent, ignore_errors=True)
         except Exception as e:
@@ -372,7 +443,10 @@ def generate_outputs_task(scan_id: str, provider_id: str, tenant_id: str):
 
     Scan.all_objects.filter(id=scan_id).update(output_location=final_location)
     logger.info(f"Scan outputs at {final_location}")
-    return {"upload": did_upload}
+
+    return {
+        "upload": did_upload,
+    }
 
 
 @shared_task(name="backfill-scan-resource-summaries", queue="backfill")
@@ -387,7 +461,7 @@ def backfill_scan_resource_summaries_task(tenant_id: str, scan_id: str):
     return backfill_resource_scan_summaries(tenant_id=tenant_id, scan_id=scan_id)
 
 
-@shared_task(base=RLSTask, name="scan-compliance-overviews", queue="overview")
+@shared_task(base=RLSTask, name="scan-compliance-overviews", queue="compliance")
 def create_compliance_requirements_task(tenant_id: str, scan_id: str):
     """
     Creates detailed compliance requirement records for a scan.
@@ -420,3 +494,122 @@ def check_lighthouse_connection_task(lighthouse_config_id: str, tenant_id: str =
         - 'available_models' (list): List of available models if connection is successful.
     """
     return check_lighthouse_connection(lighthouse_config_id=lighthouse_config_id)
+
+
+@shared_task(name="integration-check")
+def check_integrations_task(tenant_id: str, provider_id: str, scan_id: str = None):
+    """
+    Check and execute all configured integrations for a provider.
+
+    Args:
+        tenant_id (str): The tenant identifier
+        provider_id (str): The provider identifier
+        scan_id (str, optional): The scan identifier for integrations that need scan data
+    """
+    logger.info(f"Checking integrations for provider {provider_id}")
+
+    try:
+        integration_tasks = []
+        with rls_transaction(tenant_id):
+            integrations = Integration.objects.filter(
+                integrationproviderrelationship__provider_id=provider_id,
+                enabled=True,
+            )
+
+            if not integrations.exists():
+                logger.info(f"No integrations configured for provider {provider_id}")
+                return {"integrations_processed": 0}
+
+            # Security Hub integration
+            security_hub_integrations = integrations.filter(
+                integration_type=Integration.IntegrationChoices.AWS_SECURITY_HUB
+            )
+            if security_hub_integrations.exists():
+                integration_tasks.append(
+                    security_hub_integration_task.s(
+                        tenant_id=tenant_id, provider_id=provider_id, scan_id=scan_id
+                    )
+                )
+
+            # TODO: Add other integration types here
+            # slack_integrations = integrations.filter(
+            #     integration_type=Integration.IntegrationChoices.SLACK
+            # )
+            # if slack_integrations.exists():
+            #     integration_tasks.append(
+            #         slack_integration_task.s(
+            #             tenant_id=tenant_id,
+            #             provider_id=provider_id,
+            #         )
+            #     )
+
+    except Exception as e:
+        logger.error(f"Integration check failed for provider {provider_id}: {str(e)}")
+        return {"integrations_processed": 0, "error": str(e)}
+
+    # Execute all integration tasks in parallel if any were found
+    if integration_tasks:
+        job = group(integration_tasks)
+        job.apply_async()
+        logger.info(f"Launched {len(integration_tasks)} integration task(s)")
+
+    return {"integrations_processed": len(integration_tasks)}
+
+
+@shared_task(
+    base=RLSTask,
+    name="integration-s3",
+    queue="integrations",
+)
+def s3_integration_task(
+    tenant_id: str,
+    provider_id: str,
+    output_directory: str,
+):
+    """
+    Process S3 integrations for a provider.
+
+    Args:
+        tenant_id (str): The tenant identifier
+        provider_id (str): The provider identifier
+        output_directory (str): Path to the directory containing output files
+    """
+    return upload_s3_integration(tenant_id, provider_id, output_directory)
+
+
+@shared_task(
+    base=RLSTask,
+    name="integration-security-hub",
+    queue="integrations",
+)
+def security_hub_integration_task(
+    tenant_id: str,
+    provider_id: str,
+    scan_id: str,
+):
+    """
+    Process Security Hub integrations for a provider.
+
+    Args:
+        tenant_id (str): The tenant identifier
+        provider_id (str): The provider identifier
+        scan_id (str): The scan identifier
+    """
+    return upload_security_hub_integration(tenant_id, provider_id, scan_id)
+
+
+@shared_task(
+    base=RLSTask,
+    name="integration-jira",
+    queue="integrations",
+)
+def jira_integration_task(
+    tenant_id: str,
+    integration_id: str,
+    project_key: str,
+    issue_type: str,
+    finding_ids: list[str],
+):
+    return send_findings_to_jira(
+        tenant_id, integration_id, project_key, issue_type, finding_ids
    )

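The diff above wires the integrations dispatcher into the scan-complete pipeline. A minimal sketch of that Celery wiring, restating only what the hunks themselves show (`scan_id`, `provider_id`, and `tenant_id` stand in for real values):

from celery import chain, group

# Scan-complete pipeline: generate outputs first, then dispatch integrations.
chain(
    generate_outputs_task.si(
        scan_id=scan_id, provider_id=provider_id, tenant_id=tenant_id
    ),
    check_integrations_task.si(
        tenant_id=tenant_id, provider_id=provider_id, scan_id=scan_id
    ),
).apply_async()

# Inside check_integrations_task, one signature per integration type is
# collected and fanned out concurrently:
group(
    [
        security_hub_integration_task.s(
            tenant_id=tenant_id, provider_id=provider_id, scan_id=scan_id
        )
    ]
).apply_async()
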
@@ -1,10 +1,15 @@
+import uuid
 from datetime import datetime, timezone
 from unittest.mock import MagicMock, patch
 
 import pytest
-from tasks.jobs.connection import check_lighthouse_connection, check_provider_connection
+from tasks.jobs.connection import (
+    check_integration_connection,
+    check_lighthouse_connection,
+    check_provider_connection,
+)
 
-from api.models import LighthouseConfiguration, Provider
+from api.models import Integration, LighthouseConfiguration, Provider
 
 
 @pytest.mark.parametrize(
@@ -127,3 +132,128 @@ def test_check_lighthouse_connection_missing_api_key(mock_lighthouse_get):
     assert result["available_models"] == []
     assert mock_lighthouse_instance.is_active is False
     mock_lighthouse_instance.save.assert_called_once()
+
+
+@pytest.mark.django_db
+class TestCheckIntegrationConnection:
+    def setup_method(self):
+        self.integration_id = str(uuid.uuid4())
+
+    @patch("tasks.jobs.connection.Integration.objects.filter")
+    @patch("tasks.jobs.connection.prowler_integration_connection_test")
+    def test_check_integration_connection_success(
+        self, mock_prowler_test, mock_integration_filter
+    ):
+        """Test successful integration connection check with enabled=True filter."""
+        mock_integration = MagicMock()
+        mock_integration.id = self.integration_id
+        mock_integration.integration_type = Integration.IntegrationChoices.AMAZON_S3
+
+        mock_queryset = MagicMock()
+        mock_queryset.first.return_value = mock_integration
+        mock_integration_filter.return_value = mock_queryset
+
+        mock_connection_result = MagicMock()
+        mock_connection_result.is_connected = True
+        mock_connection_result.error = None
+        mock_prowler_test.return_value = mock_connection_result
+
+        result = check_integration_connection(integration_id=self.integration_id)
+
+        # Verify that Integration.objects.filter was called with enabled=True filter
+        mock_integration_filter.assert_called_once_with(
+            pk=self.integration_id, enabled=True
+        )
+        mock_queryset.first.assert_called_once()
+        mock_prowler_test.assert_called_once_with(mock_integration)
+
+        # Verify the integration properties were updated
+        assert mock_integration.connected is True
+        assert mock_integration.connection_last_checked_at is not None
+        mock_integration.save.assert_called_once()
+
+        # Verify the return value
+        assert result["connected"] is True
+        assert result["error"] is None
+
+    @patch("tasks.jobs.connection.Integration.objects.filter")
+    @patch("tasks.jobs.connection.prowler_integration_connection_test")
+    def test_check_integration_connection_failure(
+        self, mock_prowler_test, mock_integration_filter
+    ):
+        """Test failed integration connection check."""
+        mock_integration = MagicMock()
+        mock_integration.id = self.integration_id
+
+        mock_queryset = MagicMock()
+        mock_queryset.first.return_value = mock_integration
+        mock_integration_filter.return_value = mock_queryset
+
+        test_error = Exception("Connection failed")
+        mock_connection_result = MagicMock()
+        mock_connection_result.is_connected = False
+        mock_connection_result.error = test_error
+        mock_prowler_test.return_value = mock_connection_result
+
+        result = check_integration_connection(integration_id=self.integration_id)
+
+        # Verify that Integration.objects.filter was called with enabled=True filter
+        mock_integration_filter.assert_called_once_with(
+            pk=self.integration_id, enabled=True
+        )
+        mock_queryset.first.assert_called_once()
+
+        # Verify the integration properties were updated
+        assert mock_integration.connected is False
+        assert mock_integration.connection_last_checked_at is not None
+        mock_integration.save.assert_called_once()
+
+        # Verify the return value
+        assert result["connected"] is False
+        assert result["error"] == str(test_error)
+
+    @patch("tasks.jobs.connection.Integration.objects.filter")
+    def test_check_integration_connection_not_enabled(self, mock_integration_filter):
+        """Test that disabled integrations return proper error response."""
+        # Mock that no enabled integration is found
+        mock_queryset = MagicMock()
+        mock_queryset.first.return_value = None
+        mock_integration_filter.return_value = mock_queryset
+
+        result = check_integration_connection(integration_id=self.integration_id)
+
+        # Verify the filter was called with enabled=True
+        mock_integration_filter.assert_called_once_with(
+            pk=self.integration_id, enabled=True
+        )
+        mock_queryset.first.assert_called_once()
+
+        # Verify the return value matches the expected error response
+        assert result["connected"] is False
+        assert result["error"] == "Integration is not enabled"
+
+    @patch("tasks.jobs.connection.Integration.objects.filter")
+    @patch("tasks.jobs.connection.prowler_integration_connection_test")
+    def test_check_integration_connection_exception(
+        self, mock_prowler_test, mock_integration_filter
+    ):
+        """Test integration connection check when prowler test raises exception."""
+        mock_integration = MagicMock()
+        mock_integration.id = self.integration_id
+
+        mock_queryset = MagicMock()
+        mock_queryset.first.return_value = mock_integration
+        mock_integration_filter.return_value = mock_queryset
+
+        test_exception = Exception("Unexpected error during connection test")
+        mock_prowler_test.side_effect = test_exception
+
+        with pytest.raises(Exception, match="Unexpected error during connection test"):
+            check_integration_connection(integration_id=self.integration_id)
+
+        # Verify that Integration.objects.filter was called with enabled=True filter
+        mock_integration_filter.assert_called_once_with(
+            pk=self.integration_id, enabled=True
+        )
+        mock_queryset.first.assert_called_once()
+        mock_prowler_test.assert_called_once_with(mock_integration)

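The tests above pin down the expected shape of check_integration_connection. A minimal sketch consistent with their assertions, inferred from the tests rather than copied from the actual implementation (`prowler_integration_connection_test` is the helper the tests patch in tasks.jobs.connection):

from datetime import datetime, timezone

def check_integration_connection_sketch(integration_id: str):
    # Look up only enabled integrations, mirroring the asserted filter.
    integration = Integration.objects.filter(pk=integration_id, enabled=True).first()
    if integration is None:
        return {"connected": False, "error": "Integration is not enabled"}

    # Exceptions from the connection test propagate, as the last test expects.
    connection = prowler_integration_connection_test(integration)

    # Persist the connection status and the time of the check.
    integration.connected = connection.is_connected
    integration.connection_last_checked_at = datetime.now(timezone.utc)
    integration.save()

    return {
        "connected": connection.is_connected,
        "error": str(connection.error) if connection.error else None,
    }
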
@@ -1,5 +1,7 @@
 import os
+import uuid
 import zipfile
+from datetime import datetime
 from pathlib import Path
 from unittest.mock import MagicMock, patch
 
@@ -127,14 +129,26 @@ class TestOutputs:
         _upload_to_s3("tenant", str(zip_path), "scan")
         mock_logger.assert_called()
 
-    def test_generate_output_directory_creates_paths(self, tmpdir):
-        from prowler.config.config import output_file_timestamp
+    @patch("tasks.jobs.export.rls_transaction")
+    @patch("tasks.jobs.export.Scan")
+    def test_generate_output_directory_creates_paths(
+        self, mock_scan, mock_rls_transaction, tmpdir
+    ):
+        # Mock the scan object with a started_at timestamp
+        mock_scan_instance = MagicMock()
+        mock_scan_instance.started_at = datetime(2023, 6, 15, 10, 30, 45)
+        mock_scan.objects.get.return_value = mock_scan_instance
+
+        # Mock rls_transaction as a context manager
+        mock_rls_transaction.return_value.__enter__ = MagicMock()
+        mock_rls_transaction.return_value.__exit__ = MagicMock(return_value=False)
 
         base_tmp = Path(str(tmpdir.mkdir("generate_output")))
         base_dir = str(base_tmp)
-        tenant_id = "t1"
-        scan_id = "s1"
+        tenant_id = str(uuid.uuid4())
+        scan_id = str(uuid.uuid4())
         provider = "aws"
+        expected_timestamp = "20230615103045"
 
         path, compliance = _generate_output_directory(
             base_dir, provider, tenant_id, scan_id
@@ -143,17 +157,29 @@ class TestOutputs:
         assert os.path.isdir(os.path.dirname(path))
         assert os.path.isdir(os.path.dirname(compliance))
 
-        assert path.endswith(f"{provider}-{output_file_timestamp}")
-        assert compliance.endswith(f"{provider}-{output_file_timestamp}")
+        assert path.endswith(f"{provider}-{expected_timestamp}")
+        assert compliance.endswith(f"{provider}-{expected_timestamp}")
 
-    def test_generate_output_directory_invalid_character(self, tmpdir):
-        from prowler.config.config import output_file_timestamp
+    @patch("tasks.jobs.export.rls_transaction")
+    @patch("tasks.jobs.export.Scan")
+    def test_generate_output_directory_invalid_character(
+        self, mock_scan, mock_rls_transaction, tmpdir
+    ):
+        # Mock the scan object with a started_at timestamp
+        mock_scan_instance = MagicMock()
+        mock_scan_instance.started_at = datetime(2023, 6, 15, 10, 30, 45)
+        mock_scan.objects.get.return_value = mock_scan_instance
+
+        # Mock rls_transaction as a context manager
+        mock_rls_transaction.return_value.__enter__ = MagicMock()
+        mock_rls_transaction.return_value.__exit__ = MagicMock(return_value=False)
 
         base_tmp = Path(str(tmpdir.mkdir("generate_output")))
         base_dir = str(base_tmp)
-        tenant_id = "t1"
-        scan_id = "s1"
+        tenant_id = str(uuid.uuid4())
+        scan_id = str(uuid.uuid4())
         provider = "aws/test@check"
+        expected_timestamp = "20230615103045"
 
         path, compliance = _generate_output_directory(
             base_dir, provider, tenant_id, scan_id
@@ -162,5 +188,5 @@ class TestOutputs:
         assert os.path.isdir(os.path.dirname(path))
         assert os.path.isdir(os.path.dirname(compliance))
 
-        assert path.endswith(f"aws-test-check-{output_file_timestamp}")
-        assert compliance.endswith(f"aws-test-check-{output_file_timestamp}")
+        assert path.endswith(f"aws-test-check-{expected_timestamp}")
+        assert compliance.endswith(f"aws-test-check-{expected_timestamp}")

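The expected directory naming can be read off the assertions above: the suffix is "<sanitized-provider>-<YYYYmmddHHMMSS of scan.started_at>", with characters such as "/" and "@" in the provider UID replaced by "-". A sketch of just that naming rule, inferred from the tests rather than taken from the real helper:

import re
from datetime import datetime

def output_suffix_sketch(provider_uid: str, started_at: datetime) -> str:
    # Replace any character outside a conservative safe set with a dash.
    sanitized = re.sub(r"[^a-zA-Z0-9._-]", "-", provider_uid)
    return f"{sanitized}-{started_at.strftime('%Y%m%d%H%M%S')}"

# output_suffix_sketch("aws/test@check", datetime(2023, 6, 15, 10, 30, 45))
# -> "aws-test-check-20230615103045"
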
File diff suppressed because it is too large
@@ -1,9 +1,16 @@
 import uuid
 from pathlib import Path
 from unittest.mock import MagicMock, patch
 
 import pytest
-from tasks.tasks import _perform_scan_complete_tasks, generate_outputs_task
+from tasks.tasks import (
+    _perform_scan_complete_tasks,
+    check_integrations_task,
+    generate_outputs_task,
+    s3_integration_task,
+    security_hub_integration_task,
+)
+
+from api.models import Integration
 
 
 # TODO Move this to outputs/reports jobs
@@ -27,7 +34,6 @@ class TestGenerateOutputs:
         assert result == {"upload": False}
         mock_filter.assert_called_once_with(scan_id=self.scan_id)
 
-    @patch("tasks.tasks.rmtree")
     @patch("tasks.tasks._upload_to_s3")
     @patch("tasks.tasks._compress_output_files")
     @patch("tasks.tasks.get_compliance_frameworks")
@@ -46,7 +52,6 @@ class TestGenerateOutputs:
         mock_get_available_frameworks,
         mock_compress,
         mock_upload,
-        mock_rmtree,
     ):
         mock_scan_summary_filter.return_value.exists.return_value = True
 
@@ -96,6 +101,7 @@ class TestGenerateOutputs:
                 return_value=("out-dir", "comp-dir"),
             ),
             patch("tasks.tasks.Scan.all_objects.filter") as mock_scan_update,
+            patch("tasks.tasks.rmtree"),
         ):
             mock_compress.return_value = "/tmp/zipped.zip"
             mock_upload.return_value = "s3://bucket/zipped.zip"
@@ -110,9 +116,6 @@ class TestGenerateOutputs:
             mock_scan_update.return_value.update.assert_called_once_with(
                 output_location="s3://bucket/zipped.zip"
             )
-            mock_rmtree.assert_called_once_with(
-                Path("/tmp/zipped.zip").parent, ignore_errors=True
-            )
 
     def test_generate_outputs_fails_upload(self):
         with (
@@ -144,6 +147,7 @@ class TestGenerateOutputs:
             patch("tasks.tasks._compress_output_files", return_value="/tmp/compressed"),
             patch("tasks.tasks._upload_to_s3", return_value=None),
             patch("tasks.tasks.Scan.all_objects.filter") as mock_scan_update,
+            patch("tasks.tasks.rmtree"),
         ):
             mock_filter.return_value.exists.return_value = True
             mock_findings.return_value.order_by.return_value.iterator.return_value = [
@@ -153,7 +157,7 @@ class TestGenerateOutputs:
 
             result = generate_outputs_task(
                 scan_id="scan",
-                provider_id="provider",
+                provider_id=self.provider_id,
                 tenant_id=self.tenant_id,
             )
 
@@ -185,6 +189,7 @@ class TestGenerateOutputs:
             patch("tasks.tasks._compress_output_files", return_value="/tmp/compressed"),
             patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/f.zip"),
             patch("tasks.tasks.Scan.all_objects.filter"),
+            patch("tasks.tasks.rmtree"),
         ):
             mock_filter.return_value.exists.return_value = True
             mock_findings.return_value.order_by.return_value.iterator.return_value = [
@@ -255,8 +260,8 @@ class TestGenerateOutputs:
             ),
             patch("tasks.tasks._compress_output_files", return_value="outdir.zip"),
             patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/outdir.zip"),
-            patch("tasks.tasks.rmtree"),
             patch("tasks.tasks.Scan.all_objects.filter"),
+            patch("tasks.tasks.rmtree"),
             patch(
                 "tasks.tasks.batched",
                 return_value=[
@@ -333,13 +338,13 @@ class TestGenerateOutputs:
             ),
             patch("tasks.tasks._compress_output_files", return_value="outdir.zip"),
             patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/outdir.zip"),
-            patch("tasks.tasks.rmtree"),
             patch(
                 "tasks.tasks.Scan.all_objects.filter",
                 return_value=MagicMock(update=lambda **kw: None),
             ),
             patch("tasks.tasks.batched", return_value=two_batches),
             patch("tasks.tasks.OUTPUT_FORMATS_MAPPING", {}),
+            patch("tasks.tasks.rmtree"),
             patch(
                 "tasks.tasks.COMPLIANCE_CLASS_MAP",
                 {"aws": [(lambda name: True, TrackingComplianceWriter)]},
@@ -358,6 +363,7 @@ class TestGenerateOutputs:
         assert writer.transform_calls == [([raw2], compliance_obj, "cis")]
         assert result == {"upload": True}
 
+    # TODO: We need to add a periodic task to delete old output files
     def test_generate_outputs_logs_rmtree_exception(self, caplog):
         mock_finding_output = MagicMock()
         mock_finding_output.compliance = {"cis": ["requirement-1", "requirement-2"]}
@@ -415,6 +421,56 @@ class TestGenerateOutputs:
         )
         assert "Error deleting output files" in caplog.text
 
+    @patch("tasks.tasks.rls_transaction")
+    @patch("tasks.tasks.Integration.objects.filter")
+    def test_generate_outputs_filters_enabled_s3_integrations(
+        self, mock_integration_filter, mock_rls
+    ):
+        """Test that generate_outputs_task only processes enabled S3 integrations."""
+        with (
+            patch("tasks.tasks.ScanSummary.objects.filter") as mock_summary,
+            patch("tasks.tasks.Provider.objects.get"),
+            patch("tasks.tasks.initialize_prowler_provider"),
+            patch("tasks.tasks.Compliance.get_bulk"),
+            patch("tasks.tasks.get_compliance_frameworks", return_value=[]),
+            patch("tasks.tasks.Finding.all_objects.filter") as mock_findings,
+            patch(
+                "tasks.tasks._generate_output_directory", return_value=("out", "comp")
+            ),
+            patch("tasks.tasks.FindingOutput._transform_findings_stats"),
+            patch("tasks.tasks.FindingOutput.transform_api_finding"),
+            patch("tasks.tasks._compress_output_files", return_value="/tmp/compressed"),
+            patch("tasks.tasks._upload_to_s3", return_value="s3://bucket/file.zip"),
+            patch("tasks.tasks.Scan.all_objects.filter"),
+            patch("tasks.tasks.rmtree"),
+            patch("tasks.tasks.s3_integration_task.apply_async") as mock_s3_task,
+        ):
+            mock_summary.return_value.exists.return_value = True
+            mock_findings.return_value.order_by.return_value.iterator.return_value = [
+                [MagicMock()],
+                True,
+            ]
+            mock_integration_filter.return_value = [MagicMock()]
+            mock_rls.return_value.__enter__.return_value = None
+
+            with (
+                patch("tasks.tasks.OUTPUT_FORMATS_MAPPING", {}),
+                patch("tasks.tasks.COMPLIANCE_CLASS_MAP", {"aws": []}),
+            ):
+                generate_outputs_task(
+                    scan_id=self.scan_id,
+                    provider_id=self.provider_id,
+                    tenant_id=self.tenant_id,
+                )
+
+            # Verify the S3 integrations filters
+            mock_integration_filter.assert_called_once_with(
+                integrationproviderrelationship__provider_id=self.provider_id,
+                integration_type=Integration.IntegrationChoices.AMAZON_S3,
+                enabled=True,
+            )
+            mock_s3_task.assert_called_once()
+
 
 class TestScanCompleteTasks:
     @patch("tasks.tasks.create_compliance_requirements_task.apply_async")
@@ -436,3 +492,540 @@ class TestScanCompleteTasks:
             provider_id="provider-id",
             tenant_id="tenant-id",
         )
+
+
+@pytest.mark.django_db
+class TestCheckIntegrationsTask:
+    def setup_method(self):
+        self.scan_id = str(uuid.uuid4())
+        self.provider_id = str(uuid.uuid4())
+        self.tenant_id = str(uuid.uuid4())
+        self.output_directory = "/tmp/some-output-dir"
+
+    @patch("tasks.tasks.rls_transaction")
+    @patch("tasks.tasks.Integration.objects.filter")
+    def test_check_integrations_no_integrations(
+        self, mock_integration_filter, mock_rls
+    ):
+        mock_integration_filter.return_value.exists.return_value = False
+        # Ensure rls_transaction is mocked
+        mock_rls.return_value.__enter__.return_value = None
+
+        result = check_integrations_task(
+            tenant_id=self.tenant_id,
+            provider_id=self.provider_id,
+        )
+
+        assert result == {"integrations_processed": 0}
+        mock_integration_filter.assert_called_once_with(
+            integrationproviderrelationship__provider_id=self.provider_id,
+            enabled=True,
+        )
+
+    @patch("tasks.tasks.security_hub_integration_task")
+    @patch("tasks.tasks.group")
+    @patch("tasks.tasks.rls_transaction")
+    @patch("tasks.tasks.Integration.objects.filter")
+    def test_check_integrations_security_hub_success(
+        self, mock_integration_filter, mock_rls, mock_group, mock_security_hub_task
+    ):
+        """Test that SecurityHub integrations are processed correctly."""
+        # Mock that we have SecurityHub integrations
+        mock_integrations = MagicMock()
+        mock_integrations.exists.return_value = True
+
+        # Mock SecurityHub integrations to return existing integrations
+        mock_security_hub_integrations = MagicMock()
+        mock_security_hub_integrations.exists.return_value = True
+
+        # Set up the filter chain
+        mock_integration_filter.return_value = mock_integrations
+        mock_integrations.filter.return_value = mock_security_hub_integrations
+
+        # Mock the task signature
+        mock_task_signature = MagicMock()
+        mock_security_hub_task.s.return_value = mock_task_signature
+
+        # Mock group job
+        mock_job = MagicMock()
+        mock_group.return_value = mock_job
+
+        # Ensure rls_transaction is mocked
+        mock_rls.return_value.__enter__.return_value = None
+
+        # Execute the function
+        result = check_integrations_task(
+            tenant_id=self.tenant_id,
+            provider_id=self.provider_id,
+            scan_id="test-scan-id",
+        )
+
+        # Should process 1 SecurityHub integration
+        assert result == {"integrations_processed": 1}
+
+        # Verify the integration filter was called
+        mock_integration_filter.assert_called_once_with(
+            integrationproviderrelationship__provider_id=self.provider_id,
+            enabled=True,
+        )
+
+        # Verify SecurityHub integrations were filtered
+        mock_integrations.filter.assert_called_once_with(
+            integration_type=Integration.IntegrationChoices.AWS_SECURITY_HUB
+        )
+
+        # Verify SecurityHub task was created with correct parameters
+        mock_security_hub_task.s.assert_called_once_with(
+            tenant_id=self.tenant_id,
+            provider_id=self.provider_id,
+            scan_id="test-scan-id",
+        )
+
+        # Verify group was called and job was executed
+        mock_group.assert_called_once_with([mock_task_signature])
+        mock_job.apply_async.assert_called_once()
+
+    @patch("tasks.tasks.rls_transaction")
+    @patch("tasks.tasks.Integration.objects.filter")
+    def test_check_integrations_disabled_integrations_ignored(
+        self, mock_integration_filter, mock_rls
+    ):
+        """Test that disabled integrations are not processed."""
+        mock_integration_filter.return_value.exists.return_value = False
+        mock_rls.return_value.__enter__.return_value = None
+
+        result = check_integrations_task(
+            tenant_id=self.tenant_id,
+            provider_id=self.provider_id,
+        )
+
+        assert result == {"integrations_processed": 0}
+        mock_integration_filter.assert_called_once_with(
+            integrationproviderrelationship__provider_id=self.provider_id,
+            enabled=True,
+        )
+
+    @patch("tasks.tasks.s3_integration_task")
+    @patch("tasks.tasks.Integration.objects.filter")
+    @patch("tasks.tasks.ScanSummary.objects.filter")
+    @patch("tasks.tasks.Provider.objects.get")
+    @patch("tasks.tasks.initialize_prowler_provider")
+    @patch("tasks.tasks.Compliance.get_bulk")
+    @patch("tasks.tasks.get_compliance_frameworks")
+    @patch("tasks.tasks.Finding.all_objects.filter")
+    @patch("tasks.tasks._generate_output_directory")
+    @patch("tasks.tasks.FindingOutput._transform_findings_stats")
+    @patch("tasks.tasks.FindingOutput.transform_api_finding")
+    @patch("tasks.tasks._compress_output_files")
+    @patch("tasks.tasks._upload_to_s3")
+    @patch("tasks.tasks.Scan.all_objects.filter")
+    @patch("tasks.tasks.rmtree")
+    def test_generate_outputs_with_asff_for_aws_with_security_hub(
+        self,
+        mock_rmtree,
+        mock_scan_update,
+        mock_upload,
+        mock_compress,
+        mock_transform_finding,
+        mock_transform_stats,
+        mock_generate_dir,
+        mock_findings,
+        mock_get_frameworks,
+        mock_compliance_bulk,
+        mock_initialize_provider,
+        mock_provider_get,
+        mock_scan_summary,
+        mock_integration_filter,
+        mock_s3_task,
+    ):
+        """Test that ASFF output is generated for AWS providers with SecurityHub integration."""
+        # Setup
+        mock_scan_summary_qs = MagicMock()
+        mock_scan_summary_qs.exists.return_value = True
+        mock_scan_summary.return_value = mock_scan_summary_qs
+
+        # Mock AWS provider
+        mock_provider = MagicMock()
+        mock_provider.uid = "aws-account-123"
+        mock_provider.provider = "aws"
+        mock_provider_get.return_value = mock_provider
+
+        # Mock SecurityHub integration exists
+        mock_security_hub_integrations = MagicMock()
+        mock_security_hub_integrations.exists.return_value = True
+        mock_integration_filter.return_value = mock_security_hub_integrations
+
+        # Mock s3_integration_task
+        mock_s3_task.apply_async.return_value.get.return_value = True
+
+        # Mock other necessary components
+        mock_initialize_provider.return_value = MagicMock()
+        mock_compliance_bulk.return_value = {}
+        mock_get_frameworks.return_value = []
+        mock_generate_dir.return_value = ("out-dir", "comp-dir")
+        mock_transform_stats.return_value = {"stats": "data"}
+
+        # Mock findings
+        mock_finding = MagicMock()
+        mock_findings.return_value.order_by.return_value.iterator.return_value = [
+            [mock_finding],
+            True,
+        ]
+        mock_transform_finding.return_value = MagicMock(compliance={})
+
+        # Track which output formats were created
+        created_writers = {}
+
+        def track_writer_creation(cls_type):
+            def factory(*args, **kwargs):
+                writer = MagicMock()
+                writer._data = []
+                writer.transform = MagicMock()
+                writer.batch_write_data_to_file = MagicMock()
+                created_writers[cls_type] = writer
+                return writer
+
+            return factory
+
+        # Mock OUTPUT_FORMATS_MAPPING with tracking
+        with patch(
+            "tasks.tasks.OUTPUT_FORMATS_MAPPING",
+            {
+                "csv": {
+                    "class": track_writer_creation("csv"),
+                    "suffix": ".csv",
+                    "kwargs": {},
+                },
+                "json-asff": {
+                    "class": track_writer_creation("asff"),
+                    "suffix": ".asff.json",
+                    "kwargs": {},
+                },
+                "json-ocsf": {
+                    "class": track_writer_creation("ocsf"),
+                    "suffix": ".ocsf.json",
+                    "kwargs": {},
+                },
+            },
+        ):
+            mock_compress.return_value = "/tmp/compressed.zip"
+            mock_upload.return_value = "s3://bucket/file.zip"
+
+            # Execute
+            result = generate_outputs_task(
+                scan_id=self.scan_id,
+                provider_id=self.provider_id,
+                tenant_id=self.tenant_id,
+            )
+
+        # Verify ASFF was created for AWS with SecurityHub
+        assert "asff" in created_writers, "ASFF writer should be created"
+        assert "csv" in created_writers, "CSV writer should be created"
+        assert "ocsf" in created_writers, "OCSF writer should be created"
+
+        # Verify SecurityHub integration was checked
+        assert mock_integration_filter.call_count == 2
+        mock_integration_filter.assert_any_call(
+            integrationproviderrelationship__provider_id=self.provider_id,
+            integration_type=Integration.IntegrationChoices.AWS_SECURITY_HUB,
+            enabled=True,
+        )
+
+        assert result == {"upload": True}
+
+    @patch("tasks.tasks.s3_integration_task")
+    @patch("tasks.tasks.Integration.objects.filter")
+    @patch("tasks.tasks.ScanSummary.objects.filter")
+    @patch("tasks.tasks.Provider.objects.get")
+    @patch("tasks.tasks.initialize_prowler_provider")
+    @patch("tasks.tasks.Compliance.get_bulk")
+    @patch("tasks.tasks.get_compliance_frameworks")
+    @patch("tasks.tasks.Finding.all_objects.filter")
+    @patch("tasks.tasks._generate_output_directory")
+    @patch("tasks.tasks.FindingOutput._transform_findings_stats")
+    @patch("tasks.tasks.FindingOutput.transform_api_finding")
+    @patch("tasks.tasks._compress_output_files")
+    @patch("tasks.tasks._upload_to_s3")
+    @patch("tasks.tasks.Scan.all_objects.filter")
+    @patch("tasks.tasks.rmtree")
+    def test_generate_outputs_no_asff_for_aws_without_security_hub(
+        self,
+        mock_rmtree,
+        mock_scan_update,
+        mock_upload,
+        mock_compress,
+        mock_transform_finding,
+        mock_transform_stats,
+        mock_generate_dir,
+        mock_findings,
+        mock_get_frameworks,
+        mock_compliance_bulk,
+        mock_initialize_provider,
+        mock_provider_get,
+        mock_scan_summary,
+        mock_integration_filter,
+        mock_s3_task,
+    ):
+        """Test that ASFF output is NOT generated for AWS providers without SecurityHub integration."""
+        # Setup
+        mock_scan_summary_qs = MagicMock()
+        mock_scan_summary_qs.exists.return_value = True
+        mock_scan_summary.return_value = mock_scan_summary_qs
+
+        # Mock AWS provider
+        mock_provider = MagicMock()
+        mock_provider.uid = "aws-account-123"
+        mock_provider.provider = "aws"
+        mock_provider_get.return_value = mock_provider
+
+        # Mock NO SecurityHub integration
+        mock_security_hub_integrations = MagicMock()
+        mock_security_hub_integrations.exists.return_value = False
+        mock_integration_filter.return_value = mock_security_hub_integrations
+
+        # Mock other necessary components
+        mock_initialize_provider.return_value = MagicMock()
+        mock_compliance_bulk.return_value = {}
+        mock_get_frameworks.return_value = []
+        mock_generate_dir.return_value = ("out-dir", "comp-dir")
+        mock_transform_stats.return_value = {"stats": "data"}
+
+        # Mock findings
+        mock_finding = MagicMock()
+        mock_findings.return_value.order_by.return_value.iterator.return_value = [
+            [mock_finding],
+            True,
+        ]
+        mock_transform_finding.return_value = MagicMock(compliance={})
+
+        # Track which output formats were created
+        created_writers = {}
+
+        def track_writer_creation(cls_type):
+            def factory(*args, **kwargs):
+                writer = MagicMock()
+                writer._data = []
+                writer.transform = MagicMock()
+                writer.batch_write_data_to_file = MagicMock()
+                created_writers[cls_type] = writer
+                return writer
+
+            return factory
+
+        # Mock OUTPUT_FORMATS_MAPPING with tracking
+        with patch(
+            "tasks.tasks.OUTPUT_FORMATS_MAPPING",
+            {
+                "csv": {
+                    "class": track_writer_creation("csv"),
+                    "suffix": ".csv",
+                    "kwargs": {},
+                },
+                "json-asff": {
+                    "class": track_writer_creation("asff"),
+                    "suffix": ".asff.json",
+                    "kwargs": {},
+                },
+                "json-ocsf": {
+                    "class": track_writer_creation("ocsf"),
+                    "suffix": ".ocsf.json",
+                    "kwargs": {},
+                },
+            },
+        ):
+            mock_compress.return_value = "/tmp/compressed.zip"
+            mock_upload.return_value = "s3://bucket/file.zip"
+
+            # Execute
+            result = generate_outputs_task(
+                scan_id=self.scan_id,
+                provider_id=self.provider_id,
+                tenant_id=self.tenant_id,
+            )
+
+        # Verify ASFF was NOT created when no SecurityHub integration
+        assert "asff" not in created_writers, "ASFF writer should NOT be created"
+        assert "csv" in created_writers, "CSV writer should be created"
+        assert "ocsf" in created_writers, "OCSF writer should be created"
+
+        # Verify SecurityHub integration was checked
+        assert mock_integration_filter.call_count == 2
+        mock_integration_filter.assert_any_call(
+            integrationproviderrelationship__provider_id=self.provider_id,
+            integration_type=Integration.IntegrationChoices.AWS_SECURITY_HUB,
+            enabled=True,
+        )
+
+        assert result == {"upload": True}
+
+    @patch("tasks.tasks.ScanSummary.objects.filter")
+    @patch("tasks.tasks.Provider.objects.get")
+    @patch("tasks.tasks.initialize_prowler_provider")
+    @patch("tasks.tasks.Compliance.get_bulk")
+    @patch("tasks.tasks.get_compliance_frameworks")
+    @patch("tasks.tasks.Finding.all_objects.filter")
+    @patch("tasks.tasks._generate_output_directory")
+    @patch("tasks.tasks.FindingOutput._transform_findings_stats")
+    @patch("tasks.tasks.FindingOutput.transform_api_finding")
+    @patch("tasks.tasks._compress_output_files")
+    @patch("tasks.tasks._upload_to_s3")
+    @patch("tasks.tasks.Scan.all_objects.filter")
+    @patch("tasks.tasks.rmtree")
+    def test_generate_outputs_no_asff_for_non_aws_provider(
+        self,
+        mock_rmtree,
+        mock_scan_update,
+        mock_upload,
+        mock_compress,
+        mock_transform_finding,
+        mock_transform_stats,
+        mock_generate_dir,
+        mock_findings,
+        mock_get_frameworks,
+        mock_compliance_bulk,
+        mock_initialize_provider,
+        mock_provider_get,
+        mock_scan_summary,
+    ):
+        """Test that ASFF output is NOT generated for non-AWS providers (e.g., Azure, GCP)."""
+        # Setup
+        mock_scan_summary_qs = MagicMock()
+        mock_scan_summary_qs.exists.return_value = True
+        mock_scan_summary.return_value = mock_scan_summary_qs
+
+        # Mock Azure provider (non-AWS)
+        mock_provider = MagicMock()
+        mock_provider.uid = "azure-subscription-123"
+        mock_provider.provider = "azure"  # Non-AWS provider
+        mock_provider_get.return_value = mock_provider
+
+        # Mock other necessary components
+        mock_initialize_provider.return_value = MagicMock()
+        mock_compliance_bulk.return_value = {}
+        mock_get_frameworks.return_value = []
+        mock_generate_dir.return_value = ("out-dir", "comp-dir")
+        mock_transform_stats.return_value = {"stats": "data"}
+
+        # Mock findings
+        mock_finding = MagicMock()
+        mock_findings.return_value.order_by.return_value.iterator.return_value = [
+            [mock_finding],
+            True,
+        ]
+        mock_transform_finding.return_value = MagicMock(compliance={})
+
+        # Track which output formats were created
+        created_writers = {}
+
+        def track_writer_creation(cls_type):
+            def factory(*args, **kwargs):
+                writer = MagicMock()
+                writer._data = []
+                writer.transform = MagicMock()
+                writer.batch_write_data_to_file = MagicMock()
+                created_writers[cls_type] = writer
+                return writer
+
+            return factory
+
+        # Mock OUTPUT_FORMATS_MAPPING with tracking
+        with patch(
+            "tasks.tasks.OUTPUT_FORMATS_MAPPING",
+            {
+                "csv": {
+                    "class": track_writer_creation("csv"),
+                    "suffix": ".csv",
+                    "kwargs": {},
+                },
+                "json-asff": {
+                    "class": track_writer_creation("asff"),
+                    "suffix": ".asff.json",
+                    "kwargs": {},
+                },
+                "json-ocsf": {
+                    "class": track_writer_creation("ocsf"),
+                    "suffix": ".ocsf.json",
+                    "kwargs": {},
+                },
+            },
+        ):
+            mock_compress.return_value = "/tmp/compressed.zip"
+            mock_upload.return_value = "s3://bucket/file.zip"
+
+            # Execute
+            result = generate_outputs_task(
+                scan_id=self.scan_id,
+                provider_id=self.provider_id,
+                tenant_id=self.tenant_id,
+            )
+
+        # Verify ASFF was NOT created for non-AWS provider
+        assert (
+            "asff" not in created_writers
+        ), "ASFF writer should NOT be created for non-AWS providers"
+        assert "csv" in created_writers, "CSV writer should be created"
+        assert "ocsf" in created_writers, "OCSF writer should be created"
+
+        assert result == {"upload": True}
+
+    @patch("tasks.tasks.upload_s3_integration")
+    def test_s3_integration_task_success(self, mock_upload):
+        mock_upload.return_value = True
+        output_directory = "/tmp/prowler_api_output/test"
+
+        result = s3_integration_task(
+            tenant_id=self.tenant_id,
+            provider_id=self.provider_id,
+            output_directory=output_directory,
+        )
+
+        assert result is True
+        mock_upload.assert_called_once_with(
+            self.tenant_id, self.provider_id, output_directory
+        )
+
+    @patch("tasks.tasks.upload_s3_integration")
+    def test_s3_integration_task_failure(self, mock_upload):
+        mock_upload.return_value = False
+        output_directory = "/tmp/prowler_api_output/test"
+
+        result = s3_integration_task(
|
||||
tenant_id=self.tenant_id,
|
||||
provider_id=self.provider_id,
|
||||
output_directory=output_directory,
|
||||
)
|
||||
|
||||
assert result is False
|
||||
mock_upload.assert_called_once_with(
|
||||
self.tenant_id, self.provider_id, output_directory
|
||||
)
|
||||
|
||||
@patch("tasks.tasks.upload_security_hub_integration")
|
||||
def test_security_hub_integration_task_success(self, mock_upload):
|
||||
"""Test successful SecurityHub integration task execution."""
|
||||
mock_upload.return_value = True
|
||||
scan_id = "test-scan-123"
|
||||
|
||||
result = security_hub_integration_task(
|
||||
tenant_id=self.tenant_id,
|
||||
provider_id=self.provider_id,
|
||||
scan_id=scan_id,
|
||||
)
|
||||
|
||||
assert result is True
|
||||
mock_upload.assert_called_once_with(self.tenant_id, self.provider_id, scan_id)
|
||||
|
||||
@patch("tasks.tasks.upload_security_hub_integration")
|
||||
def test_security_hub_integration_task_failure(self, mock_upload):
|
||||
"""Test SecurityHub integration task handling failure."""
|
||||
mock_upload.return_value = False
|
||||
scan_id = "test-scan-123"
|
||||
|
||||
result = security_hub_integration_task(
|
||||
tenant_id=self.tenant_id,
|
||||
provider_id=self.provider_id,
|
||||
scan_id=scan_id,
|
||||
)
|
||||
|
||||
assert result is False
|
||||
mock_upload.assert_called_once_with(self.tenant_id, self.provider_id, scan_id)
|
||||
|
||||
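The mocked `OUTPUT_FORMATS_MAPPING` above mirrors the `{"class", "suffix", "kwargs"}` shape the task consumes. A minimal sketch of that dispatch pattern, with hypothetical names (this is an assumption based on the mocks, not the actual `tasks.tasks` implementation):

```python
from unittest.mock import MagicMock

# Hypothetical mapping in the same shape the tests mock above.
OUTPUT_FORMATS_MAPPING = {
    "csv": {"class": MagicMock, "suffix": ".csv", "kwargs": {}},
    "json-ocsf": {"class": MagicMock, "suffix": ".ocsf.json", "kwargs": {}},
}


def write_outputs(findings, modes):
    # One writer per requested output mode, then flush each to disk.
    for mode in modes:
        config = OUTPUT_FORMATS_MAPPING[mode]
        writer = config["class"](**config["kwargs"])
        writer.transform(findings)
        writer.batch_write_data_to_file()  # method name taken from the mocks above


write_outputs(findings=[], modes=["csv", "json-ocsf"])
```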
@@ -0,0 +1,234 @@
from locust import events, task
from utils.config import (
    L_PROVIDER_NAME,
    M_PROVIDER_NAME,
    RESOURCES_UI_FIELDS,
    S_PROVIDER_NAME,
    TARGET_INSERTED_AT,
)
from utils.helpers import (
    APIUserBase,
    get_api_token,
    get_auth_headers,
    get_dynamic_filters_pairs,
    get_next_resource_filter,
    get_scan_id_from_provider_name,
)

GLOBAL = {
    "token": None,
    "scan_ids": {},
    "resource_filters": None,
    "large_resource_filters": None,
}


@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    GLOBAL["token"] = get_api_token(environment.host)

    GLOBAL["scan_ids"]["small"] = get_scan_id_from_provider_name(
        environment.host, GLOBAL["token"], S_PROVIDER_NAME
    )
    GLOBAL["scan_ids"]["medium"] = get_scan_id_from_provider_name(
        environment.host, GLOBAL["token"], M_PROVIDER_NAME
    )
    GLOBAL["scan_ids"]["large"] = get_scan_id_from_provider_name(
        environment.host, GLOBAL["token"], L_PROVIDER_NAME
    )

    GLOBAL["resource_filters"] = get_dynamic_filters_pairs(
        environment.host, GLOBAL["token"], "resources"
    )
    GLOBAL["large_resource_filters"] = get_dynamic_filters_pairs(
        environment.host, GLOBAL["token"], "resources", GLOBAL["scan_ids"]["large"]
    )


class APIUser(APIUserBase):
    def on_start(self):
        self.token = GLOBAL["token"]
        self.s_scan_id = GLOBAL["scan_ids"]["small"]
        self.m_scan_id = GLOBAL["scan_ids"]["medium"]
        self.l_scan_id = GLOBAL["scan_ids"]["large"]
        self.available_resource_filters = GLOBAL["resource_filters"]
        self.available_resource_filters_large_scan = GLOBAL["large_resource_filters"]

    @task
    def resources_default(self):
        name = "/resources"
        page_number = self._next_page(name)
        endpoint = (
            f"/resources?page[number]={page_number}"
            f"&filter[updated_at]={TARGET_INSERTED_AT}"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task(3)
    def resources_default_ui_fields(self):
        name = "/resources?fields"
        page_number = self._next_page(name)
        endpoint = (
            f"/resources?page[number]={page_number}"
            f"&fields[resources]={','.join(RESOURCES_UI_FIELDS)}"
            f"&filter[updated_at]={TARGET_INSERTED_AT}"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task(3)
    def resources_default_include(self):
        name = "/resources?include"
        page = self._next_page(name)
        endpoint = (
            f"/resources?page[number]={page}"
            f"&filter[updated_at]={TARGET_INSERTED_AT}"
            f"&include=provider"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task(3)
    def resources_metadata(self):
        name = "/resources/metadata"
        endpoint = f"/resources/metadata?filter[updated_at]={TARGET_INSERTED_AT}"
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task
    def resources_scan_small(self):
        name = "/resources?filter[scan_id] - 50k"
        page_number = self._next_page(name)
        endpoint = (
            f"/resources?page[number]={page_number}" f"&filter[scan]={self.s_scan_id}"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task
    def resources_metadata_scan_small(self):
        name = "/resources/metadata?filter[scan_id] - 50k"
        endpoint = f"/resources/metadata?filter[scan]={self.s_scan_id}"
        self.client.get(
            endpoint,
            headers=get_auth_headers(self.token),
            name=name,
        )

    @task(2)
    def resources_scan_medium(self):
        name = "/resources?filter[scan_id] - 250k"
        page_number = self._next_page(name)
        endpoint = (
            f"/resources?page[number]={page_number}" f"&filter[scan]={self.m_scan_id}"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task
    def resources_metadata_scan_medium(self):
        name = "/resources/metadata?filter[scan_id] - 250k"
        endpoint = f"/resources/metadata?filter[scan]={self.m_scan_id}"
        self.client.get(
            endpoint,
            headers=get_auth_headers(self.token),
            name=name,
        )

    @task
    def resources_scan_large(self):
        name = "/resources?filter[scan_id] - 500k"
        page_number = self._next_page(name)
        endpoint = (
            f"/resources?page[number]={page_number}" f"&filter[scan]={self.l_scan_id}"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task
    def resources_scan_large_include(self):
        name = "/resources?filter[scan_id]&include - 500k"
        page_number = self._next_page(name)
        endpoint = (
            f"/resources?page[number]={page_number}"
            f"&filter[scan]={self.l_scan_id}"
            f"&include=provider"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task
    def resources_metadata_scan_large(self):
        endpoint = f"/resources/metadata?filter[scan]={self.l_scan_id}"
        self.client.get(
            endpoint,
            headers=get_auth_headers(self.token),
            name="/resources/metadata?filter[scan_id] - 500k",
        )

    @task(2)
    def resources_filters(self):
        name = "/resources?filter[resource_filter]&include"
        filter_name, filter_value = get_next_resource_filter(
            self.available_resource_filters
        )

        endpoint = (
            f"/resources?filter[{filter_name}]={filter_value}"
            f"&filter[updated_at]={TARGET_INSERTED_AT}"
            f"&include=provider"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task(3)
    def resources_metadata_filters(self):
        name = "/resources/metadata?filter[resource_filter]"
        filter_name, filter_value = get_next_resource_filter(
            self.available_resource_filters
        )

        endpoint = (
            f"/resources/metadata?filter[{filter_name}]={filter_value}"
            f"&filter[updated_at]={TARGET_INSERTED_AT}"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task(3)
    def resources_metadata_filters_scan_large(self):
        name = "/resources/metadata?filter[resource_filter]&filter[scan_id] - 500k"
        filter_name, filter_value = get_next_resource_filter(
            self.available_resource_filters
        )

        endpoint = (
            f"/resources/metadata?filter[{filter_name}]={filter_value}"
            f"&filter[scan]={self.l_scan_id}"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task(2)
    def resources_filter_large_scan_include(self):
        name = "/resources?filter[resource_filter][scan]&include - 500k"
        filter_name, filter_value = get_next_resource_filter(
            self.available_resource_filters
        )

        endpoint = (
            f"/resources?filter[{filter_name}]={filter_value}"
            f"&filter[scan]={self.l_scan_id}"
            f"&include=provider"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task(3)
    def resources_latest_default_ui_fields(self):
        name = "/resources/latest?fields"
        page_number = self._next_page(name)
        endpoint = (
            f"/resources/latest?page[number]={page_number}"
            f"&fields[resources]={','.join(RESOURCES_UI_FIELDS)}"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task(3)
    def resources_latest_metadata_filters(self):
        name = "/resources/metadata/latest?filter[resource_filter]"
        filter_name, filter_value = get_next_resource_filter(
            self.available_resource_filters
        )

        endpoint = f"/resources/metadata/latest?filter[{filter_name}]={filter_value}"
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
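The `@task(n)` decorators above weight the traffic mix: Locust picks each task with probability proportional to its weight, so a `@task(3)` endpoint is hit roughly three times as often as an unweighted one. A minimal, self-contained sketch of the same pattern (host and endpoints are placeholders):

```python
from locust import HttpUser, between, task


class WeightedUser(HttpUser):
    host = "http://localhost:8080"  # placeholder target
    wait_time = between(1, 2)

    @task(3)  # chosen ~3x as often as the default-weight task below
    def metadata(self):
        self.client.get("/resources/metadata")

    @task  # default weight of 1
    def listing(self):
        self.client.get("/resources")
```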
@@ -13,6 +13,23 @@ FINDINGS_RESOURCE_METADATA = {
     "resource_types": "resource_type",
     "services": "service",
 }
+RESOURCE_METADATA = {
+    "regions": "region",
+    "types": "type",
+    "services": "service",
+}
+
+RESOURCES_UI_FIELDS = [
+    "name",
+    "failed_findings_count",
+    "region",
+    "service",
+    "type",
+    "provider",
+    "inserted_at",
+    "updated_at",
+    "uid",
+]
+
 S_PROVIDER_NAME = "provider-50k"
 M_PROVIDER_NAME = "provider-250k"
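For reference, the `fields[resources]` sparse-fieldsets parameter the locust tasks build from `RESOURCES_UI_FIELDS` is plain string joining; a minimal sketch:

```python
# Mirrors the usage in the locust tasks above.
RESOURCES_UI_FIELDS = ["name", "region", "service", "type", "uid"]
endpoint = f"/resources?fields[resources]={','.join(RESOURCES_UI_FIELDS)}"
print(endpoint)  # /resources?fields[resources]=name,region,service,type,uid
```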
@@ -7,6 +7,7 @@ from locust import HttpUser, between
 from utils.config import (
     BASE_HEADERS,
     FINDINGS_RESOURCE_METADATA,
+    RESOURCE_METADATA,
     TARGET_INSERTED_AT,
     USER_EMAIL,
     USER_PASSWORD,
@@ -121,13 +122,16 @@ def get_scan_id_from_provider_name(host: str, token: str, provider_name: str) ->
     return response.json()["data"][0]["id"]


-def get_resource_filters_pairs(host: str, token: str, scan_id: str = "") -> dict:
+def get_dynamic_filters_pairs(
+    host: str, token: str, endpoint: str, scan_id: str = ""
+) -> dict:
     """
-    Retrieves and maps resource metadata filter values from the findings endpoint.
+    Retrieves and maps metadata filter values from a given endpoint.

     Args:
         host (str): The host URL of the API.
         token (str): Bearer token for authentication.
+        endpoint (str): The API endpoint to query for metadata.
         scan_id (str, optional): Optional scan ID to filter metadata. Defaults to using inserted_at timestamp.

     Returns:
@@ -136,22 +140,28 @@ def get_resource_filters_pairs(host: str, token: str, scan_id: str = "") -> dict
     Raises:
         AssertionError: If the request fails or does not return a 200 status code.
     """
+    metadata_mapping = (
+        FINDINGS_RESOURCE_METADATA if endpoint == "findings" else RESOURCE_METADATA
+    )
+    date_filter = "inserted_at" if endpoint == "findings" else "updated_at"
     metadata_filters = (
         f"filter[scan]={scan_id}"
         if scan_id
-        else f"filter[inserted_at]={TARGET_INSERTED_AT}"
+        else f"filter[{date_filter}]={TARGET_INSERTED_AT}"
     )
     response = requests.get(
-        f"{host}/findings/metadata?{metadata_filters}", headers=get_auth_headers(token)
+        f"{host}/{endpoint}/metadata?{metadata_filters}",
+        headers=get_auth_headers(token),
     )
     assert (
         response.status_code == 200
     ), f"Failed to get resource filters values: {response.text}"
     attributes = response.json()["data"]["attributes"]

     return {
-        FINDINGS_RESOURCE_METADATA[key]: values
+        metadata_mapping[key]: values
         for key, values in attributes.items()
-        if key in FINDINGS_RESOURCE_METADATA.keys()
+        if key in metadata_mapping.keys()
     }
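
A minimal usage sketch of the refactored helper, assuming a reachable API host and a valid token (both placeholders here):

```python
from utils.helpers import get_dynamic_filters_pairs

token = "<api-token>"  # placeholder

# Findings metadata, filtered by the fixed inserted_at timestamp.
finding_filters = get_dynamic_filters_pairs("http://localhost:8080", token, "findings")

# Resource metadata for one specific scan (the date filter is skipped when scan_id is set).
resource_filters = get_dynamic_filters_pairs(
    "http://localhost:8080", token, "resources", scan_id="<scan-id>"
)
# Both return e.g. {"region": [...], "service": [...], ...} keyed by filter name.
```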
@@ -23,6 +23,7 @@ import argparse
 import json
 import os
 import re
+import shlex
 import signal
 import socket
 import subprocess
@@ -145,11 +146,11 @@ def _get_script_arguments():

 def _run_prowler(prowler_args):
     _debug("Running prowler with args: {0}".format(prowler_args), 1)
-    _prowler_command = "{prowler}/prowler {args}".format(
-        prowler=PATH_TO_PROWLER, args=prowler_args
+    _prowler_command = shlex.split(
+        "{prowler}/prowler {args}".format(prowler=PATH_TO_PROWLER, args=prowler_args)
     )
-    _debug("Running command: {0}".format(_prowler_command), 2)
-    _process = subprocess.Popen(_prowler_command, stdout=subprocess.PIPE, shell=True)
+    _debug("Running command: {0}".format(" ".join(_prowler_command)), 2)
+    _process = subprocess.Popen(_prowler_command, stdout=subprocess.PIPE)
     _output, _error = _process.communicate()
     _debug("Raw prowler output: {0}".format(_output), 3)
     _debug("Raw prowler error: {0}".format(_error), 3)
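The change above swaps shell-string execution for a tokenized argument list, which keeps shell metacharacters in `prowler_args` inert. A standalone sketch of the technique (paths and arguments are placeholders):

```python
import shlex
import subprocess

args = "--profile 'load test'"  # untrusted input stays a plain string
command = shlex.split("/opt/prowler/prowler " + args)  # safe tokenization
# Without shell=True, no shell ever interprets the command line.
process = subprocess.Popen(command, stdout=subprocess.PIPE)
output, _ = process.communicate()
```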
@@ -0,0 +1,34 @@
/* Override Tailwind CSS reset for markdown content */
.markdown-content ul {
    list-style: disc !important;
    margin-left: 20px !important;
    padding-left: 10px !important;
    margin-bottom: 8px !important;
}

.markdown-content ol {
    list-style: decimal !important;
    margin-left: 20px !important;
    padding-left: 10px !important;
    margin-bottom: 8px !important;
}

.markdown-content li {
    margin-bottom: 4px !important;
    display: list-item !important;
}

.markdown-content p {
    margin-bottom: 8px !important;
}

/* Ensure nested lists work properly */
.markdown-content ul ul {
    margin-top: 4px !important;
    margin-bottom: 4px !important;
}

.markdown-content ol ol {
    margin-top: 4px !important;
    margin-bottom: 4px !important;
}
@@ -0,0 +1,25 @@
import warnings

from dashboard.common_methods import get_section_containers_cis

warnings.filterwarnings("ignore")


def get_table(data):

    aux = data[
        [
            "REQUIREMENTS_ID",
            "REQUIREMENTS_DESCRIPTION",
            "REQUIREMENTS_ATTRIBUTES_SECTION",
            "CHECKID",
            "STATUS",
            "REGION",
            "ACCOUNTID",
            "RESOURCEID",
        ]
    ].copy()

    return get_section_containers_cis(
        aux, "REQUIREMENTS_ID", "REQUIREMENTS_ATTRIBUTES_SECTION"
    )
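A minimal sketch of how `get_table` is fed, assuming `data` is a pandas DataFrame produced upstream by the dashboard and that the function above is importable (row values are illustrative):

```python
import pandas as pd

# Illustrative single-row frame with the columns get_table() selects.
data = pd.DataFrame(
    [
        {
            "REQUIREMENTS_ID": "1.1",
            "REQUIREMENTS_DESCRIPTION": "Example requirement",
            "REQUIREMENTS_ATTRIBUTES_SECTION": "1. Example Section",
            "CHECKID": "example_check",
            "STATUS": "PASS",
            "REGION": "us-east-1",
            "ACCOUNTID": "123456789012",
            "RESOURCEID": "example_resource",
        }
    ]
)

containers = get_table(data)  # Dash components grouped by requirement section
```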
@@ -1654,6 +1654,39 @@ def generate_table(data, index, color_mapping_severity, color_mapping_status):
 [
     html.Div(
         [
+            # Description as first details item
+            html.Div(
+                [
+                    html.P(
+                        html.Strong(
+                            "Description: ",
+                            style={
+                                "margin-bottom": "8px"
+                            },
+                        )
+                    ),
+                    html.Div(
+                        dcc.Markdown(
+                            str(
+                                data.get(
+                                    "DESCRIPTION",
+                                    "",
+                                )
+                            ),
+                            dangerously_allow_html=True,
+                            style={
+                                "margin-left": "0px",
+                                "padding-left": "10px",
+                            },
+                        ),
+                        className="markdown-content",
+                        style={
+                            "margin-left": "0px",
+                            "padding-left": "10px",
+                        },
+                    ),
+                ],
+            ),
     html.Div(
         [
             html.P(
@@ -1793,19 +1826,27 @@ def generate_table(data, index, color_mapping_severity, color_mapping_status):
 html.P(
     html.Strong(
         "Risk: ",
-        style={
-            "margin-right": "5px"
-        },
+        style={},
     )
 ),
-html.P(
-    str(data.get("RISK", "")),
+html.Div(
+    dcc.Markdown(
+        str(
+            data.get("RISK", "")
+        ),
+        dangerously_allow_html=True,
+        style={
+            "margin-left": "0px",
+            "padding-left": "10px",
+        },
+    ),
+    className="markdown-content",
     style={
-        "margin-left": "5px"
+        "margin-left": "0px",
+        "padding-left": "10px",
     },
 ),
 ],
 style={"display": "flex"},
 ),
 html.Div(
     [
@@ -1847,23 +1888,32 @@ def generate_table(data, index, color_mapping_severity, color_mapping_status):
 html.Strong(
     "Recommendation: ",
     style={
-        "margin-right": "5px"
+        "margin-bottom": "8px"
     },
 )
 ),
-html.P(
-    str(
-        data.get(
-            "REMEDIATION_RECOMMENDATION_TEXT",
-            "",
-        )
+html.Div(
+    dcc.Markdown(
+        str(
+            data.get(
+                "REMEDIATION_RECOMMENDATION_TEXT",
+                "",
+            )
+        ),
+        dangerously_allow_html=True,
+        style={
+            "margin-left": "0px",
+            "padding-left": "10px",
+        },
     ),
+    className="markdown-content",
     style={
-        "margin-left": "5px"
+        "margin-left": "0px",
+        "padding-left": "10px",
     },
 ),
 ],
-style={"display": "flex"},
+style={"margin-bottom": "15px"},
 ),
 html.Div(
     [
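The pattern above renders each metadata field through `dcc.Markdown` inside a `markdown-content` container, so the CSS overrides defined earlier apply. A reduced sketch of one such cell (text is illustrative):

```python
from dash import dcc, html

# One metadata cell: Markdown-rendered text inside the styled container
# that the .markdown-content CSS rules above target.
cell = html.Div(
    dcc.Markdown(
        "**Least privilege** applies; see the `iam` settings.",
        dangerously_allow_html=True,
        style={"margin-left": "0px", "padding-left": "10px"},
    ),
    className="markdown-content",
    style={"margin-left": "0px", "padding-left": "10px"},
)
```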
@@ -14,9 +14,11 @@ services:
     ports:
       - "${DJANGO_PORT:-8080}:${DJANGO_PORT:-8080}"
     volumes:
-      - "./api/src/backend:/home/prowler/backend"
-      - "./api/pyproject.toml:/home/prowler/pyproject.toml"
-      - "/tmp/prowler_api_output:/tmp/prowler_api_output"
+      - ./api/src/backend:/home/prowler/backend
+      - ./api/pyproject.toml:/home/prowler/pyproject.toml
+      - ./api/docker-entrypoint.sh:/home/prowler/docker-entrypoint.sh
+      - ./_data/api:/home/prowler/.config/prowler-api
+      - outputs:/tmp/prowler_api_output
     depends_on:
       postgres:
         condition: service_healthy
@@ -64,7 +66,7 @@ services:
     image: valkey/valkey:7-alpine3.19
     hostname: "valkey"
     volumes:
-      - ./api/_data/valkey:/data
+      - ./_data/valkey:/data
     env_file:
       - path: .env
         required: false
@@ -87,7 +89,7 @@ services:
       - path: .env
         required: false
     volumes:
-      - "/tmp/prowler_api_output:/tmp/prowler_api_output"
+      - "outputs:/tmp/prowler_api_output"
     depends_on:
       valkey:
         condition: service_healthy
@@ -115,3 +117,7 @@ services:
     entrypoint:
       - "../docker-entrypoint.sh"
      - "beat"
+
+volumes:
+  outputs:
+    driver: local
@@ -8,7 +8,8 @@ services:
     ports:
       - "${DJANGO_PORT:-8080}:${DJANGO_PORT:-8080}"
     volumes:
-      - "/tmp/prowler_api_output:/tmp/prowler_api_output"
+      - ./_data/api:/home/prowler/.config/prowler-api
+      - output:/tmp/prowler_api_output
     depends_on:
       postgres:
         condition: service_healthy
@@ -68,7 +69,7 @@ services:
       - path: .env
         required: false
     volumes:
-      - "/tmp/prowler_api_output:/tmp/prowler_api_output"
+      - "output:/tmp/prowler_api_output"
     depends_on:
       valkey:
         condition: service_healthy
@@ -91,3 +92,7 @@ services:
     entrypoint:
       - "../docker-entrypoint.sh"
       - "beat"
+
+volumes:
+  output:
+    driver: local
@@ -1,24 +0,0 @@
---
hide:
- toc
---
# About

## Author
Prowler was created by **Toni de la Fuente** in 2016.

| <br>[](https://twitter.com/toniblyx) [](https://twitter.com/prowlercloud)|
|:--:|
| <b>Toni de la Fuente</b>|

## Maintainers
Prowler is maintained by the Engineers of the **Prowler Team**:

| [](https://twitter.com/NachoRivCor) | [](https://twitter.com/sergargar1) |[](https://twitter.com/jfagoagas) |
|:--:|:--:|:--:|
| <b>Nacho Rivera</b>| <b>Sergio Garcia</b>| <b>Pepe Fagoaga</b>|

## License

Prowler is licensed as **Apache License 2.0** as specified in each file. You may obtain a copy of the License at
<http://www.apache.org/licenses/LICENSE-2.0>
@@ -0,0 +1,61 @@
## Access Prowler App

After [installation](../installation/prowler-app.md), navigate to [http://localhost:3000](http://localhost:3000) and sign up with email and password.

<img src="../../img/sign-up-button.png" alt="Sign Up Button" width="320"/>
<img src="../../img/sign-up.png" alt="Sign Up" width="285"/>

???+ note "User creation and default tenant behavior"

    When creating a new user, the behavior depends on whether an invitation is provided:

    - **Without an invitation**:

        - A new tenant is automatically created.
        - The new user is assigned to this tenant.
        - A set of **RBAC admin permissions** is generated and assigned to the user for the newly-created tenant.

    - **With an invitation**: The user is added to the specified tenant with the permissions defined in the invitation.

    This mechanism ensures that the first user in a newly created tenant has administrative permissions within that tenant.

## Log In

Access Prowler App by logging in with **email and password**.

<img src="../../img/log-in.png" alt="Log In" width="285"/>

## Add Cloud Provider

Configure a cloud provider for scanning:

1. Navigate to `Settings > Cloud Providers` and click `Add Account`.
2. Select the cloud provider.
3. Enter the provider's identifier (Optional: Add an alias):
    - **AWS**: Account ID
    - **GCP**: Project ID
    - **Azure**: Subscription ID
    - **Kubernetes**: Cluster ID
    - **M365**: Domain ID
4. Follow the guided instructions to add and authenticate your credentials.

## Start a Scan

Once credentials are successfully added and validated, Prowler initiates a scan of your cloud environment.

Click `Go to Scans` to monitor progress.

## View Results

Review findings during scan execution in the following sections:

- **Overview** – Provides a high-level summary of your scans.
<img src="../../products/img/overview.png" alt="Overview" width="700"/>

- **Compliance** – Displays compliance insights based on security frameworks.
<img src="../../img/compliance.png" alt="Compliance" width="700"/>

> For detailed usage instructions, refer to the [Prowler App Guide](../tutorials/prowler-app.md).

???+ note
    Prowler will automatically scan all configured providers every **24 hours**, ensuring your cloud environment stays continuously monitored.
@@ -0,0 +1,282 @@
## Running Prowler

Running Prowler requires specifying the provider (e.g. `aws`, `gcp`, `azure`, `kubernetes`, `m365`, `github`, `iac`, or `mongodbatlas`):

???+ note
    If no provider is specified, AWS is used by default for backward compatibility with Prowler v2.

```console
prowler <provider>
```
![](img/short-display.png)

???+ note
    Running the `prowler` command without options uses environment variable credentials. Refer to the Authentication section of each provider for credential configuration details.

## Verbose Output

If you prefer the former verbose output, use `--verbose`. By default, minimal output is displayed while Prowler is running; enabling verbosity shows more information.

## Report Generation

By default, Prowler generates CSV, JSON-OCSF, and HTML reports. To generate a JSON-ASFF report (used by AWS Security Hub), specify `-M` or `--output-modes`:

```console
prowler <provider> -M csv json-asff json-ocsf html
```
The HTML report is saved in the output directory, alongside other reports. It will look like this:

![](img/html-output.png)

## Listing Available Checks and Services

List all available checks or services within a provider using `-l`/`--list-checks` or `--list-services`.

```console
prowler <provider> --list-checks
prowler <provider> --list-services
```
## Running Specific Checks or Services

Execute specific checks or services using `-c`/`--checks` or `-s`/`--services`:

```console
prowler azure --checks storage_blob_public_access_level_is_disabled
prowler aws --services s3 ec2
prowler gcp --services iam compute
prowler kubernetes --services etcd apiserver
```
## Excluding Checks and Services

Checks and services can be excluded with `-e`/`--excluded-checks` or `--excluded-services`:

```console
prowler aws --excluded-checks s3_bucket_public_access
prowler azure --excluded-services defender iam
prowler gcp --excluded-services kms
prowler kubernetes --excluded-services controllermanager
```
## Additional Options

Explore more advanced time-saving execution methods in the [Miscellaneous](../tutorials/misc.md) section.

Access the help menu and view all available options with `-h`/`--help`:

```console
prowler --help
```

## AWS

Use a custom AWS profile with `-p`/`--profile` and/or specific AWS regions with `-f`/`--filter-region`:

```console
prowler aws --profile custom-profile -f us-east-1 eu-south-2
```

???+ note
    By default, `prowler` will scan all AWS regions.

See more details about AWS Authentication in the [Authentication section](../tutorials/aws/authentication.md).

## Azure

Azure requires specifying the auth method:

```console
# To use service principal authentication
prowler azure --sp-env-auth

# To use az cli authentication
prowler azure --az-cli-auth

# To use browser authentication
prowler azure --browser-auth --tenant-id "XXXXXXXX"

# To use managed identity auth
prowler azure --managed-identity-auth
```

See more details about Azure Authentication in the [Authentication section](../tutorials/azure/authentication.md).

By default, Prowler scans all accessible subscriptions. Scan specific subscriptions using the following flag (using az cli auth as an example):

```console
prowler azure --az-cli-auth --subscription-ids <subscription ID 1> <subscription ID 2> ... <subscription ID N>
```
## Google Cloud

- **User Account Credentials**

    By default, Prowler uses **User Account credentials**. Configure accounts using:

    - `gcloud init` – Set up a new account.
    - `gcloud config set account <account>` – Switch to an existing account.

    Once configured, obtain access credentials using: `gcloud auth application-default login`.

- **Service Account Authentication**

    Alternatively, you can use Service Account credentials:

    Generate and download Service Account keys in JSON format. Refer to [Google IAM documentation](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) for details.

    Provide the key file location using this argument:

    ```console
    prowler gcp --credentials-file path
    ```

- **Scanning Specific GCP Projects**

    By default, Prowler scans all accessible GCP projects. Scan specific projects with the `--project-ids` flag:

    ```console
    prowler gcp --project-ids <Project ID 1> <Project ID 2> ... <Project ID N>
    ```

- **GCP Retry Configuration**

    Configure the maximum number of retry attempts for Google Cloud SDK API calls with the `--gcp-retries-max-attempts` flag:

    ```console
    prowler gcp --gcp-retries-max-attempts 5
    ```

    This is useful when experiencing quota exceeded errors (HTTP 429) to increase the number of automatic retry attempts.

## Kubernetes

Prowler enables security scanning of Kubernetes clusters, supporting both **in-cluster** and **external** execution.

- **Non In-Cluster Execution**

    ```console
    prowler kubernetes --kubeconfig-file path
    ```
    ???+ note
        If no `--kubeconfig-file` is provided, Prowler will use the default KubeConfig file location (`~/.kube/config`).

- **In-Cluster Execution**

    To run Prowler inside the cluster, apply the provided YAML configuration to deploy a job in a new namespace:

    ```console
    kubectl apply -f kubernetes/prowler-sa.yaml
    kubectl apply -f kubernetes/job.yaml
    kubectl apply -f kubernetes/prowler-role.yaml
    kubectl apply -f kubernetes/prowler-rolebinding.yaml
    kubectl get pods --namespace prowler-ns --> prowler-XXXXX
    kubectl logs prowler-XXXXX --namespace prowler-ns
    ```

???+ note
    By default, Prowler scans all namespaces in the active Kubernetes context. Use the `--context` flag to specify the context to be scanned and `--namespaces` to restrict scanning to specific namespaces.

## Microsoft 365

Microsoft 365 requires specifying the auth method:

```console
# To use service principal authentication for MSGraph and PowerShell modules
prowler m365 --sp-env-auth

# To use both service principal (for MSGraph) and user credentials (for PowerShell modules)
prowler m365 --env-auth

# To use az cli authentication
prowler m365 --az-cli-auth

# To use browser authentication
prowler m365 --browser-auth --tenant-id "XXXXXXXX"
```

See more details about M365 Authentication in the [Authentication section](../tutorials/microsoft365/authentication.md).

## GitHub

Prowler enables security scanning of your **GitHub account**, including **Repositories**, **Organizations**, and **Applications**.

- **Supported Authentication Methods**

    Authenticate using one of the following methods:

    ```console
    # Personal Access Token (PAT):
    prowler github --personal-access-token pat

    # OAuth App Token:
    prowler github --oauth-app-token oauth_token

    # GitHub App Credentials:
    prowler github --github-app-id app_id --github-app-key app_key
    ```

???+ note
    If no login method is explicitly provided, Prowler will automatically attempt to authenticate using environment variables in the following order of precedence:

    1. `GITHUB_PERSONAL_ACCESS_TOKEN`
    2. `OAUTH_APP_TOKEN`
    3. `GITHUB_APP_ID` and `GITHUB_APP_KEY`

## Infrastructure as Code (IaC)

Prowler's Infrastructure as Code (IaC) provider enables you to scan local or remote infrastructure code for security and compliance issues using [Trivy](https://trivy.dev/). This provider supports a wide range of IaC frameworks, allowing you to assess your code before deployment.

```console
# Scan a directory for IaC files
prowler iac --scan-path ./my-iac-directory

# Scan a remote GitHub repository (public or private)
prowler iac --scan-repository-url https://github.com/user/repo.git

# Authenticate to a private repo with GitHub username and PAT
prowler iac --scan-repository-url https://github.com/user/repo.git \
    --github-username <username> --personal-access-token <token>

# Authenticate to a private repo with OAuth App Token
prowler iac --scan-repository-url https://github.com/user/repo.git \
    --oauth-app-token <oauth_token>

# Specify frameworks to scan (default: all)
prowler iac --scan-path ./my-iac-directory --frameworks terraform kubernetes

# Exclude specific paths
prowler iac --scan-path ./my-iac-directory --exclude-path ./my-iac-directory/test,./my-iac-directory/examples
```

???+ note
    - `--scan-path` and `--scan-repository-url` are mutually exclusive; only one can be specified at a time.
    - For remote repository scans, authentication can be provided via CLI flags or environment variables (`GITHUB_OAUTH_APP_TOKEN`, `GITHUB_USERNAME`, `GITHUB_PERSONAL_ACCESS_TOKEN`). CLI flags take precedence.
    - The IaC provider does not require cloud authentication for local scans.
    - It is ideal for CI/CD pipelines and local development environments.
    - For more details on supported scanners, see the [Trivy documentation](https://trivy.dev/latest/docs/scanner/vulnerability/)

See more details about IaC scanning in the [IaC Tutorial](../tutorials/iac/getting-started-iac.md).

## MongoDB Atlas

Prowler allows you to scan your MongoDB Atlas cloud database deployments for security and compliance issues.

Authentication is done using MongoDB Atlas API key pairs:

```console
# Using command-line arguments
prowler mongodbatlas --atlas-public-key <public_key> --atlas-private-key <private_key>

# Using environment variables
export ATLAS_PUBLIC_KEY=<public_key>
export ATLAS_PRIVATE_KEY=<private_key>
prowler mongodbatlas
```

You can filter scans to specific organizations or projects:

```console
# Scan specific project
prowler mongodbatlas --atlas-project-id <project_id>
```

See more details about MongoDB Atlas Authentication in [MongoDB Atlas Authentication](../tutorials/mongodbatlas/authentication.md).
@@ -2,7 +2,7 @@

 In this page you can find all the details about [Amazon Web Services (AWS)](https://aws.amazon.com/) provider implementation in Prowler.

-By default, Prowler will audit just one account and organization settings per scan. To configure it, follow the [getting started](../index.md#aws) page.
+By default, Prowler will audit just one account and organization settings per scan. To configure it, follow the [AWS getting started guide](../tutorials/aws/getting-started-aws.md).

 ## AWS Provider Classes Architecture

@@ -2,7 +2,7 @@

 In this page you can find all the details about [Microsoft Azure](https://azure.microsoft.com/) provider implementation in Prowler.

-By default, Prowler will audit all the subscriptions that it is able to list in the Microsoft Entra tenant, and tenant Entra ID service. To configure it, follow the [getting started](../index.md#azure) page.
+By default, Prowler will audit all the subscriptions that it is able to list in the Microsoft Entra tenant, and tenant Entra ID service. To configure it, follow the [Azure getting started guide](../tutorials/azure/getting-started-azure.md).

 ## Azure Provider Classes Architecture

@@ -0,0 +1,213 @@
# Check Metadata Guidelines

## Introduction

This guide provides comprehensive guidelines for creating check metadata in Prowler. For basic information on check metadata structure, refer to the [check metadata](./checks.md#metadata-structure-for-prowler-checks) section.

## Check Title Guidelines

### Writing Guidelines

1. **Determine Resource Finding Scope (Singular vs. Plural)**:
    When determining whether to use singular or plural in the check title, examine the code for certain patterns. If the code contains a loop that generates an individual report for each resource, use the singular form. If the code produces a single report that covers all resources collectively, use the plural form. For organization- or account-wide checks, select the scope that best matches the breadth of the evaluation. Additionally, review the `status_extended` field messages in the code, as they often provide clues about whether the check is scoped to individual resources or to groups of resources.
    In short, analyze the detection code to determine whether the check reports on individual resources or on aggregated resources:
    - **Singular**: Use when the check creates one report per resource (e.g., "EC2 instance has IMDSv2 enforced", "S3 bucket does not allow public write access").
    - **Plural**: Use when the check creates one report for all resources together (e.g., "All EC2 instances have IMDSv2 enforced", "S3 buckets do not allow public write access").
2. **Describe the Compliant (*PASS*) State**:
    Always write the title to describe the **desired, compliant state** of the resources. The title should reflect what it looks like when the audited resource is following the check's requirements.
3. **Be Specific and Factual**:
    Include the exact secure configuration being verified. Avoid vague or generic terms like "properly configured".
4. **Avoid Redundant or Action Words**:
    Do not include verbs like "Check", "Verify", "Ensure", or "Monitor". The title is a declarative statement of the secure condition.
5. **Length Limit**:
    Keep the title under 150 characters.

### Common Mistakes to Avoid

- Starting with verbs like "Check", "Verify", "Ensure", "Make sure". Always start with the affected resource instead.
- Being too vague or generic (e.g., "Ensure security groups are properly configured"; "properly configured" is not a clear description of the compliant state).
- Focusing on the non-compliant state instead of the compliant state.
- Using unclear scope and resource identification.

## Check Type Guidelines (AWS Only)

### AWS Security Hub Type Format

AWS Security Hub uses a three-part type taxonomy:

- **Namespace**: The top-level security domain.
- **Category**: The security control family or area.
- **Classifier**: The specific security concern (optional).

A partial path may be defined (e.g., `TTPs` or `TTPs/Defense Evasion` are valid).

### Selection Guidelines

1. **Be Specific**: Use the most specific classifier that accurately describes the check.
2. **Standard Compliance**: Consider if the check relates to specific compliance standards.
3. **Multiple Types**: You can specify multiple types if the check addresses multiple concerns.

## Description Guidelines

### Writing Guidelines

1. **Focus on the Finding**: All fields should address how the finding affects the security posture, rather than the control itself.
2. **Use Natural Language**: Write in simple, clear paragraphs with complete, grammatically correct sentences.
3. **Use Markdown Formatting**: Enhance readability with:
    - Use **bold** for emphasis on key security concepts.
    - Use *italic* for a secondary emphasis. Use it for clarifications, conditions, or optional notes. But don't abuse it.
    - Use `code` formatting for specific configuration values, or technical details. Don't use it for service names or common technical terms.
    - Use one or two line breaks (`\n` or `\n\n`) to separate distinct ideas.
    - Use bullet points (`-`) for listing multiple concepts or actions.
    - Use numbers for listing steps or sequential actions.
4. **Be Concise**: Maximum 400 characters (spaces count). Every word should add value.
5. **Explain What the Finding Means**: Focus on what the security control evaluates and what it means when it passes or fails, but without explicitly stating the pass or fail state.
6. **Be Technical but Clear**: Use appropriate technical terminology while remaining understandable.
7. **Avoid Risk Descriptions**: Do not describe potential risks, threats, or consequences.
8. **CheckTitle and Description can be the same**: If the check is very simple and the title is already clear, you can use the same text for the description.

### Common Mistakes to Avoid

- **Technical Implementation Details**: "The control loops through all instances and calls the describe_instances API...".
- **Vague Descriptions**: "This control verifies proper configuration of resources"; "proper configuration" is not a clear description of the compliant state.
- **Risk Descriptions**: "This could lead to data breaches" or "This poses a security threat".
- **Starting with Verbs**: "Check if...", "Verify...", "Ensure...". Always start with the affected resource instead.
- **References to Pass/Fail States**: Avoid using words like "pass" or "fail".

## Risk Guidelines

### Writing Guidelines

1. **Explain the Cybersecurity Impact**: Focus on how the finding affects confidentiality, integrity, or availability (CIA triad). If the CIA triad does not apply, explain the risk in terms of the organization's business objectives.
2. **Be Specific About Threats**: Clearly state what could happen if this security control is not in place. What attacks or incidents become possible?
3. **Focus on Risk Context**: Explain the specific security implications of the finding, not just generic security risks.
4. **Use Markdown Formatting**: Enhance readability with markdown formatting:
    - Use **bold** for emphasis on key security concepts.
    - Use *italic* for a secondary emphasis. Use it for clarifications, conditions, or optional notes. But don't abuse it.
    - Use `code` formatting for specific configuration values, or technical details. Don't use it for service names or common technical terms.
    - Use one or two line breaks (`\n` or `\n\n`) to separate distinct ideas.
    - Use bullet points (`-`) for listing multiple concepts or actions.
    - Use numbers for listing steps or sequential actions.
5. **Be Concise**: Maximum 400 characters. Make every word count.

### Common Mistakes to Avoid

- **Generic Risks**: "This could lead to security issues" or "Regulatory compliance violations".
- **Technical Implementation Focus**: "The API call might fail and return incorrect results...".
- **Overly Broad Statements**: "This is a serious security risk that could impact everything".
- **Vague Threats**: "This could be exploited by threat actors" without explaining how.

## Recommendation Guidelines

### Writing Guidelines

1. **Provide Actionable Best Practice Guidance**: Explain what should be done to maintain security posture. Focus on preventive measures and proactive security practices.
2. **Be Principle-Based**: Reference established security principles (least privilege, defense in depth, zero trust, separation of duties) where applicable.
3. **Focus on Prevention**: Explain best practices that prevent the security issue from occurring, not just detection or remediation.
4. **Use Markdown Formatting**: Enhance readability with markdown formatting:
    - Use **bold** for emphasis on key security concepts.
    - Use *italic* for a secondary emphasis. Use it for clarifications, conditions, or optional notes. But don't abuse it.
    - Use `code` formatting for specific configuration values, or technical details. Don't use it for service names or common technical terms.
    - Use one or two line breaks (`\n` or `\n\n`) to separate distinct ideas.
    - Use bullet points (`-`) for listing multiple concepts or actions.
    - Use numbers for listing steps or sequential actions.
5. **Be Concise**: Maximum 400 characters.

### Common Mistakes to Avoid

- **Specific Remediation Steps**: "1. Go to the console\n2. Click on settings..." - Focus on principles, not click-by-click instructions.
- **Implementation Details**: "Configure the JSON policy with the following IAM actions..." - Explain what to achieve, not how.
- **Vague Guidance**: "Follow security best practices..." without explaining what those practices are.
- **Resource-Specific Recommendations**: "Enable MFA on user john.doe@example.com" - Keep it general.
- **Missing Context**: Not explaining why the best practice is important for security.

### Good Examples

- *"Avoid exposing sensitive resources directly to the Internet; configure access controls to limit exposure."*
- *"Apply the principle of least privilege when assigning permissions to users and services."*
- *"Regularly review and update your security configurations to align with current best practices."*

## Remediation Code Guidelines

### Critical Requirement

The **fundamental principle** is to focus on the **specific change** that converts the finding from non-compliant to compliant.

It is also important to keep all code examples as short as possible, including only the essential code to fix the issue. Remove any extra configuration, optional parameters, or nice-to-have settings, and add comments to explain the code when possible.

### Common Guidelines for All Code Fields

1. **Be Minimal**: Keep code blocks as short as possible - only include what is absolutely necessary.
2. **Focus on the Fix**: Remove any extra configuration, optional parameters, or nice-to-have settings.
3. **Be Accurate**: Ensure all commands and code are syntactically correct.
4. **Use Markdown Formatting**: Format code properly using code blocks and appropriate syntax highlighting.
5. **Follow Best Practices**: Use the most secure and recommended approaches for each platform.

### CLI Guidelines

- Only provide a single command that directly changes the finding from fail to pass.
- The command must be executable as-is and resolve the security issue completely.
- Use proper command syntax for the provider (AWS CLI, Azure CLI, gcloud, kubectl, etc.).
- Do not use markdown formatting or code blocks - just the raw command.
- Do not include multiple commands, comments, or explanations.
- If the issue cannot be resolved with a single command, leave this field empty.

### Native IaC Guidelines

- **Keep It Minimal**: Only include the specific resource/configuration that fixes the security issue.
- Format as markdown code blocks with proper syntax highlighting.
- Include only the required properties to fix the issue.
- Add comments indicating the critical line(s) that remediate the check.
- Use `example_resource` as the generic name for all resources and IDs.

### Terraform Guidelines

- **Keep It Minimal**: Only include the specific resource/configuration that fixes the security issue.
- Provide valid HCL (HashiCorp Configuration Language) code with an example of a compliant configuration.
- Use the latest Terraform syntax and provider versions.
- Include only the required arguments to fix the issue - skip optional parameters.
- Format as markdown code blocks with `hcl` syntax highlighting.
- Add comments indicating the critical line(s) that remediate the check.
- Use `example_resource` as the generic name for all resources and IDs.
- Skip provider requirements unless critical for the fix.

### Other (Manual Steps) Guidelines

- **Keep It Minimal**: Only include the exact steps needed to fix the security issue.
- Provide step-by-step instructions for manual remediation through web interfaces.
- Use numbered lists for sequential steps.
- Be specific about menu locations, button names, and settings.
- Skip optional configurations or nice-to-have settings.
- Format using markdown for better readability.

## Categories Guidelines

### Selection Guidelines

1. **Be Specific**: Only select categories that directly relate to what the automated control evaluates.
2. **Primary Focus**: Consider the primary security concern the automated control addresses.
3. **Avoid Over-Categorization**: Do not select categories just because they are tangentially related.

### Available Categories

| Category | Definition |
|----------|------------|
| encryption | Ensures data is encrypted in transit and/or at rest, including key management practices |
| internet-exposed | Checks that limit or flag public access to services, APIs, or assets from the Internet |
| logging | Ensures appropriate logging of events, activities, and system interactions for traceability |
| secrets | Manages and protects credentials, API keys, tokens, and other sensitive information |
| resilience | Ensures systems can maintain availability and recover from disruptions, failures, or degradation. Includes redundancy, fault-tolerance, auto-scaling, backup, disaster recovery, and failover strategies |
| threat-detection | Identifies suspicious activity or behaviors using IDS, malware scanning, or anomaly detection |
| trust-boundaries | Enforces isolation or segmentation between different trust levels (e.g., VPCs, tenants, network zones) |
| vulnerabilities | Detects or remediates known software, infrastructure, or config vulnerabilities (e.g., CVEs) |
| cluster-security | Secures Kubernetes cluster components such as API server, etcd, and role-based access |
| container-security | Ensures container images and runtimes follow security best practices |
| node-security | Secures nodes running containers or services |
| gen-ai | Checks related to safe and secure use of generative AI services or models |
| ci-cd | Ensures secure configurations in CI/CD pipelines |
| identity-access | Governs user and service identities, including least privilege, MFA, and permission boundaries |
| email-security | Ensures detection and protection against phishing, spam, spoofing, etc. |
| forensics-ready | Ensures systems are instrumented to support post-incident investigations. Any digital trace or evidence (logs, volume snapshots, memory dumps, network captures, etc.) preserved immutably and accompanied by integrity guarantees, which can be used in a forensic analysis |
| software-supply-chain | Detects or prevents tampering, unauthorized packages, or third-party risks in software supply chain |
| e3 | M365-specific controls enabled by or dependent on an E3 license (e.g., baseline security policies, conditional access) |
| e5 | M365-specific controls enabled by or dependent on an E5 license (e.g., advanced threat protection, audit, DLP, and eDiscovery) |
@@ -8,15 +8,20 @@ Checks are the core component of Prowler. A check is a piece of code designed to

### Creating a Check

The most common high-level steps to create a new check are:

1. Prerequisites:
    - Verify the check does not already exist by searching [Prowler Hub](https://hub.prowler.com) or checking `prowler/providers/<provider>/services/<service>/<check_name_want_to_implement>/`.
    - Ensure the required provider and service exist. If not, follow the [Provider](./provider.md) and [Service](./services.md) documentation to create them.
    - Confirm the service implements all the methods and attributes the check requires (in most cases, you will need to add or modify some methods in the service to get the data you need for the check).
2. Navigate to the service directory. The path should be as follows: `prowler/providers/<provider>/services/<service>`.
3. Create a check-specific folder. The path should follow this pattern: `prowler/providers/<provider>/services/<service>/<check_name_want_to_implement>`. Adhere to the [Naming Format for Checks](#naming-format-for-checks).
4. Populate the folder with files as specified in [File Creation](#file-creation). (A scaffold sketch follows this list.)
5. Run the check locally to ensure it works as expected. You can use the CLI as follows:
    - To ensure the check has been detected by Prowler: `poetry run python prowler-cli.py <provider> --list-checks | grep <check_name>`.
    - To run the check and find possible issues: `poetry run python prowler-cli.py <provider> --log-level ERROR --verbose --check <check_name>`.
6. Create comprehensive tests for the check that cover multiple scenarios, including both PASS (compliant) and FAIL (non-compliant) cases. For detailed information about test structure and implementation guidelines, refer to the [Testing](./unit-testing.md) documentation.
7. Once the check and its corresponding tests work as expected, you can submit a PR to Prowler.

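As a quick illustration of steps 3 and 4, this is a minimal sketch of scaffolding the check folder and files for a hypothetical `ec2_example_check` (the check name is illustrative only):

```python
from pathlib import Path

# Hypothetical check name used for illustration; follow the naming format below.
check = Path("prowler/providers/aws/services/ec2/ec2_example_check")
check.mkdir(parents=True, exist_ok=True)

# The three files every check folder must contain (see File Creation).
(check / "__init__.py").touch()
(check / "ec2_example_check.py").touch()
(check / "ec2_example_check.metadata.json").touch()
```
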
### Naming Format for Checks
@@ -35,7 +40,7 @@ Each check in Prowler follows a straightforward structure. Within the newly crea

- `__init__.py` (empty file) – Ensures Python treats the check folder as a package.
- `<check_name>.py` (code file) – Contains the check logic, following the prescribed format. Please refer to [Prowler's check code structure](./checks.md#prowlers-check-code-structure) for more information.
- `<check_name>.metadata.json` (metadata file) – Defines the check's metadata for contextual information. Please refer to the [check metadata](./checks.md#metadata-structure-for-prowler-checks) for more information.

## Prowler's Check Code Structure
@@ -59,13 +64,19 @@ from prowler.providers.<provider>.services.<service>.<service>_client import <se

```python
# Each check must be implemented as a Python class with the same name as its corresponding file.
# The class must inherit from the Check base class.
class <check_name>(Check):
    """
    Ensure that <resource> meets <security_requirement>.

    This check evaluates whether <specific_condition> to ensure <security_benefit>.
    - PASS: <description_of_compliant_state(s)>.
    - FAIL: <description_of_non_compliant_state(s)>.
    """

    def execute(self):
        """Execute the check logic.

        Returns:
            A list of reports containing the result of the check.
        """
        findings = []
        # Iterate over the target resources using the provider service client
```

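To make the template concrete, here is a minimal sketch of a complete check for a hypothetical `example` AWS service. The service, client, and attribute names are illustrative, and the report class name follows the `CheckReport<Provider>` convention used in this guide; adjust it to the actual class exported by `prowler.lib.check.models`:

```python
from prowler.lib.check.models import Check, CheckReportAWS
from prowler.providers.aws.services.example.example_client import example_client


class example_resource_encryption_enabled(Check):
    """
    Ensure that Example resources have encryption enabled.

    - PASS: The resource has encryption enabled.
    - FAIL: The resource does not have encryption enabled.
    """

    def execute(self):
        findings = []
        # Iterate over the resources collected by the (hypothetical) service client.
        for resource in example_client.resources.values():
            report = CheckReportAWS(metadata=self.metadata(), resource=resource)
            if resource.encryption_enabled:
                report.status = "PASS"
                report.status_extended = f"Example resource {resource.name} has encryption enabled."
            else:
                report.status = "FAIL"
                report.status_extended = f"Example resource {resource.name} does not have encryption enabled."
            findings.append(report)
        return findings
```
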
@@ -147,12 +158,10 @@ else:

Each check **must** populate the report with a unique identifier for the audited resource. These identifiers depend on the provider and the resource being audited. The criteria for each provider are:

- AWS

    - Amazon Resource ID — `report.resource_id`.
        - The resource identifier. This is the name of the resource, the ID of the resource, or a resource path. Some resource identifiers include a parent resource (sub-resource-type/parent-resource/sub-resource) or a qualifier such as a version (resource-type:resource-name:qualifier).
        - If the resource ID cannot be retrieved directly from the audited resource, it can be extracted from the ARN: it is the last part of the ARN after the last slash (`/`) or colon (`:`).
        - If no actual resource to audit exists, this format can be used: `<resource_type>/unknown`.

    - Amazon Resource Name — `report.resource_arn`.
        - The [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html) of the audited entity.
        - If the ARN cannot be retrieved directly from the audited resource, construct a valid ARN using the `resource_id` component as the audited entity. Examples:

@@ -163,32 +172,24 @@ Each check **must** populate the report with an unique identifier for the audite

    - AWS Security Hub — `arn:<partition>:security-hub:<region>:<account-id>:hub/unknown`.
    - Access Analyzer — `arn:<partition>:access-analyzer:<region>:<account-id>:analyzer/unknown`.
    - GuardDuty — `arn:<partition>:guardduty:<region>:<account-id>:detector/unknown`.

- GCP

    - Resource ID — `report.resource_id`.
        - The Resource ID represents the full, [unambiguous path to a resource](https://google.aip.dev/122#full-resource-names), known as the full resource name. Typically, it follows the format: `//{api_service}/{resource_path}`.
        - If the resource ID cannot be retrieved directly from the audited resource, the resource name is used by default.
    - Resource Name — `report.resource_name`.
        - The Resource Name usually refers to the name of a resource within its service.

- Azure

    - Resource ID — `report.resource_id`.
        - The Resource ID represents the full Azure Resource Manager path to a resource, which follows the format: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}`.
    - Resource Name — `report.resource_name`.
        - The Resource Name usually refers to the name of a resource within its service.
        - If the [resource name](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/resource-name-rules) cannot be retrieved directly from the audited resource, the last part of the resource ID can be used.

- Kubernetes

    - Resource ID — `report.resource_id`.
        - The UID of the Kubernetes object. This is a system-generated string that uniquely identifies the object within the cluster for its entire lifetime. See [Kubernetes Object Names and IDs - UIDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids).
    - Resource Name — `report.resource_name`.
        - The name of the Kubernetes object. This is a client-provided string that must be unique for the resource type within a namespace (for namespaced resources) or cluster (for cluster-scoped resources). Names typically follow DNS subdomain or label conventions. See [Kubernetes Object Names and IDs - Names](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names).

- M365

    - Resource ID — `report.resource_id`.
        - If the audited resource has a globally unique identifier such as a `guid`, use it as the `resource_id`.
        - If no `guid` exists, use another unique and relevant identifier for the resource, such as the tenant domain, the internal policy ID, or a representative string following the format `<resource_type>/<name_or_id>`.

@@ -204,9 +205,7 @@ Each check **must** populate the report with an unique identifier for the audite

    - For global configurations:
        - `resource_id`: Tenant domain or representative string (e.g., "userSettings")
        - `resource_name`: Description of the configuration (e.g., "SharePoint Settings")

- GitHub

    - Resource ID — `report.resource_id`.
        - The ID of the GitHub resource. This is a system-generated integer that uniquely identifies the resource within the GitHub platform.
    - Resource Name — `report.resource_name`.

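As a quick illustration of the AWS rule above (deriving `resource_id` from the last ARN segment), here is a minimal sketch; the helper name is illustrative:

```python
import re

def resource_id_from_arn(arn: str) -> str:
    # The resource ID is the last part of the ARN after the final '/' or ':'.
    return re.split(r"[/:]", arn)[-1]

assert resource_id_from_arn("arn:aws:guardduty:us-east-1:123456789012:detector/abc123") == "abc123"
assert resource_id_from_arn("arn:aws:iam::123456789012:role/MyRole") == "MyRole"
```
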
@@ -227,88 +226,174 @@ Below is a generic example of a check metadata file. **Do not include comments i

```json
{
  "Provider": "aws",
  "CheckID": "service_resource_security_setting",
  "CheckTitle": "Service resource has security setting enabled",
  "CheckType": [],
  "ServiceName": "service",
  "SubServiceName": "",
  "ResourceIdTemplate": "",
  "Severity": "medium",
  "ResourceType": "Other",
  "Description": "This check verifies that the service resource has the required **security setting** enabled to protect against potential vulnerabilities.\n\nIt ensures that the resource follows security best practices and maintains proper access controls. The check evaluates whether the security configuration is properly implemented and active.",
  "Risk": "Without proper security settings, the resource may be vulnerable to:\n\n- **Unauthorized access** - Malicious actors could gain entry\n- **Data breaches** - Sensitive information could be compromised\n- **Security threats** - Various attack vectors could be exploited\n\nThis could result in compliance violations and potential financial or reputational damage.",
  "RelatedUrl": "",
  "AdditionalURLs": ["https://example.com/security-documentation", "https://example.com/best-practices"],
  "Remediation": {
    "Code": {
      "CLI": "provider-cli service enable-security-setting --resource-id resource-123",
      "NativeIaC": "```yaml\nType: Provider::Service::Resource\nProperties:\n  SecuritySetting: enabled\n  ResourceId: resource-123\n```",
      "Other": "1. Open the provider management console\n2. Navigate to the service section\n3. Select the resource\n4. Enable the security setting\n5. Save the configuration",
      "Terraform": "```hcl\nresource \"provider_service_resource\" \"example\" {\n  resource_id      = \"resource-123\"\n  security_setting = true\n}\n```"
    },
    "Recommendation": {
      "Text": "Enable security settings on all service resources to ensure proper protection. Regularly review and update security configurations to align with current best practices.",
      "Url": "https://hub.prowler.com/check/service_resource_security_setting"
    }
  },
  "Categories": ["internet-exposed", "secrets"],
  "DependsOn": [],
  "RelatedTo": ["service_resource_security_setting", "service_resource_security_setting_2"],
  "Notes": "This is a generic example check that should be customized for specific provider and service requirements."
}
```

### Metadata Fields and Their Purpose

#### Provider

The Prowler provider related to the check. The name **must** be lowercase and match the provider folder name. For supported providers refer to [Prowler Hub](https://hub.prowler.com/check) or directly to [Prowler Code](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers).

#### CheckID

The unique identifier for the check inside the provider. This field **must** match the check's folder, Python file, and JSON metadata file name. For more information about naming, refer to the [Naming Format for Checks](#naming-format-for-checks) section.

#### CheckTitle

The `CheckTitle` field must be plain text and must clearly and succinctly define **the best practice being evaluated and which resource(s) each finding applies to**. The title should be specific, concise (no more than 150 characters), and reference the relevant resource(s) involved.

**Always write the `CheckTitle` to describe the *PASS* case**, the desired secure or compliant state of the resource(s). This helps ensure that findings are easy to interpret and that the title always reflects the best practice being met.

For detailed guidelines on writing effective check titles, including how to determine singular vs. plural scope and common mistakes to avoid, see [CheckTitle Guidelines](./check-metadata-guidelines.md#check-title-guidelines).

#### CheckType

???+ warning
    This field is only applicable to the AWS provider.

It follows the [AWS Security Hub Types](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-required-attributes.html#Types) format using the pattern `namespace/category/classifier`.

For the complete AWS Security Hub selection guidelines, see [CheckType Guidelines](./check-metadata-guidelines.md#check-type-guidelines-aws-only).

#### ServiceName

The name of the provider service being audited. Must be lowercase and match the service folder name. For supported services refer to [Prowler Hub](https://hub.prowler.com/check) or the [Prowler Code](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers).

#### SubServiceName

This field is in the process of being deprecated and should be **left empty**.

#### ResourceIdTemplate

This field is in the process of being deprecated and should be **left empty**.

#### Severity

The severity of the finding if the check fails. Must be one of: `critical`, `high`, `medium`, `low`, or `informational`, written in lowercase. See [Prowler's Check Severity Levels](#prowlers-check-severity-levels) for details.

#### ResourceType

The type of resource being audited. This field helps categorize and organize findings by resource type for better analysis and reporting. For each provider:

- **AWS**: Use [Security Hub resource types](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-resources.html) or PascalCase CloudFormation types with the `::` separator removed (e.g., in a CloudFormation template the type of an EC2 instance is `AWS::EC2::Instance`, but in the check it should be `AwsEc2Instance`; see the sketch after this list). Use `Other` if none apply.
- **Azure**: Use types from [Azure Resource Graph](https://learn.microsoft.com/en-us/azure/governance/resource-graph/reference/supported-tables-resources), for example: `Microsoft.Storage/storageAccounts`.
- **Google Cloud**: Use [Cloud Asset Inventory asset types](https://cloud.google.com/asset-inventory/docs/asset-types), for example: `compute.googleapis.com/Instance`.
- **Kubernetes**: Use the types shown under `KIND` in `kubectl api-resources`.
- **M365 / GitHub**: Leave empty due to the lack of standardized types.

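As a small illustration of the AWS convention above, here is a sketch of deriving the PascalCase `ResourceType` from a CloudFormation type (the helper name is illustrative):

```python
def cfn_to_resource_type(cfn_type: str) -> str:
    # "AWS::EC2::Instance" -> "AwsEc2Instance": capitalize each part and drop "::".
    return "".join(part.capitalize() for part in cfn_type.split("::"))

assert cfn_to_resource_type("AWS::EC2::Instance") == "AwsEc2Instance"
```
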
#### Description

A concise, natural-language explanation that **clearly describes what the finding means**, focusing on clarity and context rather than technical implementation details. Use simple paragraphs with line breaks if needed, but avoid sections, code blocks, or complex formatting. This field is limited to a maximum of 400 characters.

For detailed writing guidelines and common mistakes to avoid, see [Description Guidelines](./check-metadata-guidelines.md#description-guidelines).

#### Risk

A clear, natural-language explanation of **why this finding poses a cybersecurity risk**. Focus on how it may impact confidentiality, integrity, or availability. If those do not apply, describe any relevant operational or financial risks. Use simple paragraphs with line breaks if needed, but avoid sections, code blocks, or complex formatting. Limit the explanation to 400 characters.

For detailed writing guidelines and common mistakes to avoid, see [Risk Guidelines](./check-metadata-guidelines.md#risk-guidelines).

#### RelatedUrl

*Deprecated*. Use `AdditionalURLs` to add URL references.

#### AdditionalURLs

???+ warning
    URLs must be valid and not repeated.

A list of official documentation URLs for further reading. These should be authoritative sources that provide additional context, best practices, or detailed information about the security control being checked. Prefer official provider documentation, security standards, or well-established security resources. Avoid third-party blogs or unofficial sources unless they are highly reputable and directly relevant.

#### Remediation

Provides both code examples and best-practice recommendations for addressing the security issue.

- **Code**: Contains remediation examples in different formats:
    - **CLI**: Command-line interface commands to make the finding compliant at runtime.
    - **NativeIaC**: Native Infrastructure as Code templates with an example of a compliant configuration. For now this applies to:
        - **AWS**: CloudFormation YAML-formatted code (do not use JSON format).
        - **Azure**: Bicep-formatted code (do not use ARM templates).
    - **Terraform**: HashiCorp Configuration Language (HCL) code with an example of a compliant configuration.
    - **Other**: Manual steps through web interfaces or other tools to make the finding compliant.

For detailed guidelines on writing remediation code, see [Remediation Code Guidelines](./check-metadata-guidelines.md#remediation-code-guidelines).

- **Recommendation**
    - **Text**: Generic best-practice guidance in natural language using Markdown format (maximum 400 characters). For writing guidelines, see [Recommendation Guidelines](./check-metadata-guidelines.md#recommendation-guidelines).
    - **Url**: The [Prowler Hub URL](https://hub.prowler.com/) of the check. This URL is always composed as `https://hub.prowler.com/check/<check_id>`.

#### Categories

One or more functional groupings used for execution filtering (e.g., `internet-exposed`). You can define new categories simply by adding them to this field.

For the complete list of available categories, see [Categories Guidelines](./check-metadata-guidelines.md#categories-guidelines).

#### DependsOn

A list of check IDs such that, if those checks are compliant, this check will also be compliant or will not produce any finding.

#### RelatedTo

A list of IDs of checks that are conceptually related, even if they do not share a technical dependency.

#### Notes

Any additional information not covered in the above fields.

### Python Model Reference

The metadata structure is enforced in code using a Pydantic model. For reference, see the [`CheckMetadata`](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py) model.

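As a quick sketch of what that enforcement looks like in practice (the file path is illustrative, and the model location is the one linked above), a metadata file can be validated by instantiating the model:

```python
import json

from prowler.lib.check.models import CheckMetadata

# Loading the JSON through the Pydantic model raises a validation error
# if any field is missing or malformed.
with open("service_resource_security_setting.metadata.json") as f:
    metadata = CheckMetadata(**json.load(f))
```
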
## Generic Check Patterns and Best Practices

### Common Patterns

- Every check is implemented as a class inheriting from `Check` (from `prowler.lib.check.models`).
- The main logic is implemented in the `execute()` method (the **only method that must be implemented**), which always returns a list of provider-specific report objects (e.g., `CheckReport<Provider>`), one per finding/resource. If there are no findings/resources, return an empty list.
- **Never** use the provider's client directly; instead, use the service client (e.g., `<service>_client`) and iterate over its resources.
- For each resource, create a provider-specific report object, populate it with metadata, resource details, status (`PASS`, `FAIL`, etc.), and a human-readable `status_extended` message.
- Use the `metadata()` method to attach check metadata to each report.
- Checks are designed to be idempotent and stateless: they do not modify resources, only report on their state.

### Best Practices

- Use clear, actionable, and user-friendly language in `status_extended` to explain the result. Always provide enough information to identify the resource.
- Use helper functions/utilities for repeated logic to avoid code duplication. Save them in the `lib` folder of the service.
- Handle exceptions gracefully: catch errors per resource, log them, and continue processing other resources.
- Document the check with class- and function-level docstrings explaining what it does, what it checks, and any caveats or provider-specific behaviors.
- Use type hints for the `execute()` method (e.g., `-> list[CheckReport<Provider>]`) for clarity and static analysis.
- Ensure checks are efficient; avoid excessive nested loops. If the complexity is high, consider refactoring the check.
- Keep the check logic focused: one check = one control/requirement. Avoid combining unrelated logic in a single check.

## Specific Check Patterns

Details for specific providers can be found in the documentation pages named using the pattern `<provider_name>-details`.

@@ -2,7 +2,7 @@

This page details the [Google Cloud Platform (GCP)](https://cloud.google.com/) provider implementation in Prowler.

By default, Prowler will audit all the GCP projects that the authenticated identity can access. To configure it, follow the [GCP getting started guide](../tutorials/gcp/getting-started-gcp.md).

## GCP Provider Classes Architecture
@@ -50,6 +50,49 @@ The GCP provider implementation follows the general [Provider structure](./provi

- **Location:** [`prowler/providers/gcp/lib/`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/gcp/lib/)
- **Purpose:** Helpers for argument parsing, mutelist management, and other cross-cutting concerns.

## Retry Configuration

GCP services implement automatic retry functionality for rate-limiting errors (HTTP 429). This is configured centrally and must be included in all API calls:

### Required Implementation

```python
from prowler.providers.gcp.config import DEFAULT_RETRY_ATTEMPTS

# In discovery.build()
client = discovery.build(
    service, version, credentials=credentials,
    num_retries=DEFAULT_RETRY_ATTEMPTS
)

# In request.execute()
response = request.execute(num_retries=DEFAULT_RETRY_ATTEMPTS)
```

### Configuration

- **Default Value**: 3 attempts (configurable in `prowler/providers/gcp/config.py`)
- **Command-Line Flag**: `--gcp-retries-max-attempts` for runtime configuration
- **Error Types**: HTTP 429 and quota-exceeded errors
- **Backoff Strategy**: Exponential backoff with randomization (sketched below)

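For intuition, exponential backoff with randomization means each retry waits roughly twice as long as the previous one, plus random jitter. A minimal sketch of the delay schedule only (the actual retry behavior is handled by the Google API client via `num_retries`, not by Prowler code like this):

```python
import random

def backoff_delays(attempts: int = 3, base: float = 1.0) -> list[float]:
    # Delay grows as base * 2^attempt, with random jitter to avoid
    # synchronized retries across workers.
    return [base * (2 ** attempt) + random.uniform(0, 1) for attempt in range(attempts)]

for delay in backoff_delays():
    print(f"would sleep {delay:.2f}s before retrying")
```
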
### Example Service Implementation

```python
def _get_instances(self):
    for project_id in self.project_ids:
        try:
            client = discovery.build(
                "compute", "v1", credentials=self.credentials,
                num_retries=DEFAULT_RETRY_ATTEMPTS
            )
            request = client.instances().list(project=project_id)
            response = request.execute(num_retries=DEFAULT_RETRY_ATTEMPTS)
            # Process response...
        except Exception as error:
            logger.error(f"{error.__class__.__name__}: {error}")
```
## Specific Patterns in GCP Services

The generic service pattern is described in the [service page](./services.md#service-structure-and-initialisation). You can find all the currently implemented services in the following locations:

@@ -69,6 +112,7 @@ The best reference to understand how to implement a new service is following the

- Resource discovery and attribute collection can be parallelized using `self.__threading_call__`, typically by region/zone or resource.
- All GCP resources are represented as Pydantic `BaseModel` classes, providing type safety and structured access to resource attributes.
- Every GCP API call is wrapped in a try/except block, and errors are always logged.
- **Retry Configuration**: All `request.execute()` calls must include `num_retries=DEFAULT_RETRY_ATTEMPTS` for automatic retry on rate-limiting errors (HTTP 429).
- Tags and additional attributes that cannot be retrieved from the default call should be collected and stored for each resource using dedicated methods and threading.

## Specific Patterns in GCP Checks
@@ -2,7 +2,7 @@

This page details the [GitHub](https://github.com/) provider implementation in Prowler.

By default, Prowler will audit the GitHub account, scanning all repositories, organizations, and applications that your configured credentials can access. To configure it, follow the [GitHub getting started guide](../tutorials/github/getting-started-github.md).

## GitHub Provider Classes Architecture

@@ -2,7 +2,7 @@

This page details the [Kubernetes](https://kubernetes.io/) provider implementation in Prowler.

By default, Prowler will audit all namespaces in the Kubernetes cluster accessible by the configured context. To configure it, see the [In-Cluster Execution](../tutorials/kubernetes/in-cluster.md) or [Non In-Cluster Execution](../tutorials/kubernetes/outside-cluster.md) guides.

## Kubernetes Provider Classes Architecture

@@ -2,7 +2,7 @@

This page details the [Microsoft 365 (M365)](https://www.microsoft.com/en-us/microsoft-365) provider implementation in Prowler.

By default, Prowler will audit the Microsoft Entra ID tenant and its supported services. To configure it, follow the [M365 getting started guide](../tutorials/microsoft365/getting-started-m365.md).

---

@@ -15,7 +15,7 @@ By default, Prowler will audit the Microsoft Entra ID tenant and its supported s

- **Required modules:**
    - [ExchangeOnlineManagement](https://www.powershellgallery.com/packages/ExchangeOnlineManagement/3.6.0) (≥ 3.6.0)
    - [MicrosoftTeams](https://www.powershellgallery.com/packages/MicrosoftTeams/6.6.0) (≥ 6.6.0)
- If you use Prowler Cloud or the official containers, PowerShell is pre-installed. For local or pip installations, you must install PowerShell and the modules yourself. See [Authentication: Supported PowerShell Versions](../tutorials/microsoft365/authentication.md#supported-powershell-versions) and [Required PowerShell Modules](../tutorials/microsoft365/authentication.md#required-powershell-modules).
- For more details and troubleshooting, see [Use of PowerShell in M365](../tutorials/microsoft365/use-of-powershell.md).

---

@@ -101,6 +101,7 @@ Prowler supports multiple output formats, allowing users to tailor findings pres

```python
finding_dict["DESCRIPTION"] = finding.metadata.Description
finding_dict["RISK"] = finding.metadata.Risk
finding_dict["RELATED_URL"] = finding.metadata.RelatedUrl
finding_dict["ADDITIONAL_URLS"] = unroll_list(finding.metadata.AdditionalURLs)
finding_dict["REMEDIATION_RECOMMENDATION_TEXT"] = (
    finding.metadata.Remediation.Recommendation.Text
)
```

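For context, `unroll_list` flattens the list of URLs into a single delimited string so it fits flat output formats such as CSV. A minimal sketch of the expected behavior (the separator is assumed for illustration):

```python
def unroll_list_sketch(items: list, separator: str = " | ") -> str:
    # Join list entries into one string; an empty list yields an empty string.
    return separator.join(items) if items else ""

print(unroll_list_sketch(["https://example.com/a", "https://example.com/b"]))
# https://example.com/a | https://example.com/b
```
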
@@ -10,6 +10,7 @@ A provider is any platform or service that offers resources, data, or functional

- Software as a Service (SaaS) Platforms (like Microsoft 365)
- Development Platforms (like GitHub)
- Container Orchestration Platforms (like Kubernetes)
- Database-as-a-Service Platforms (like MongoDB Atlas)

For providers supported by Prowler, refer to [Prowler Hub](https://hub.prowler.com/).

@@ -63,6 +64,7 @@ Given the complexity and variability of providers, use existing provider impleme

- [Kubernetes](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/kubernetes/kubernetes_provider.py)
- [Microsoft365](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/microsoft365/microsoft365_provider.py)
- [GitHub](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/github/github_provider.py)
- [MongoDB Atlas](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/mongodbatlas/mongodbatlas_provider.py)

### Basic Provider Implementation: Pseudocode Example

@@ -0,0 +1,210 @@

# Renaming Checks in Prowler

Renaming a check is necessary when aligning it with the Check ID structure, fixing typos, or updating check logic in a way that requires a new name. To rename a check in Prowler, follow these steps.

When changing a Check ID, update the following files:

## Update Check Folder Structure

First, rename the check folder with the new check name.

**Path:** `prowler/providers/<provider>/services/<service>/<check_name>`

**Example:**
```
# Before
prowler/providers/aws/services/inspector2/inspector2_findings_exist/

# After
prowler/providers/aws/services/inspector2/inspector2_active_findings_exist/
```

Next, rename the file that contains the check logic. Inside that file, also rename the class to match the new check name.

**Path:** `prowler/providers/<provider>/services/<service>/<check_name>/<check_name>.py`

**Example:**
```python
# Before
class inspector2_findings_exist(Check):
    def execute(self):
        findings = []
        # ... check logic ...


# After
class inspector2_active_findings_exist(Check):
    def execute(self):
        findings = []
        # ... check logic ...
```

Then, rename the file that contains the check metadata. Inside that file, add the old check name as an alias in the `CheckAliases` field and change the `CheckID` to the new check name.

**Path:** `prowler/providers/<provider>/services/<service>/<check_name>/<check_name>.metadata.json`

**Example:**
```json
{
  "Provider": "aws",
  "CheckID": "inspector2_active_findings_exist",
  "CheckTitle": "Check if Inspector2 active findings exist",
  "CheckAliases": [
    "inspector2_findings_exist"
  ],
  "CheckType": [],
  "ServiceName": "inspector2",
  "SubServiceName": "",
  "ResourceIdTemplate": "arn:aws:inspector2:region:account-id/detector-id",
  "Severity": "medium",
  "ResourceType": "Other",
  "Description": "This check determines if there are any active findings in your AWS account that have been detected by AWS Inspector2.",
  "Risk": "Without using AWS Inspector, you may not be aware of all the security vulnerabilities in your AWS resources.",
  "RelatedUrl": "https://docs.aws.amazon.com/inspector/latest/user/findings-understanding.html",
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "",
      "Other": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/Inspector/amazon-inspector-findings.html",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Review the active findings from Inspector2",
      "Url": "https://docs.aws.amazon.com/inspector/latest/user/what-is-inspector.html"
    }
  },
  "Categories": [],
  "DependsOn": [],
  "RelatedTo": [],
  "Notes": ""
}
```

## Update Test Files

Second, rename the tests folder with the new check name.

**Path:** `tests/providers/<provider>/services/<service>/<check_name>`

**Example:**
```
# Before
tests/providers/aws/services/inspector2/inspector2_findings_exist/

# After
tests/providers/aws/services/inspector2/inspector2_active_findings_exist/
```

Next, rename the test file that contains all the unit tests. Inside that file, rename all appearances of the old check name to the new check name.

**Path:** `tests/providers/<provider>/services/<service>/<check_name>/<check_name>_test.py`

**Example:**
```python
# Before
from prowler.providers.aws.services.inspector2.inspector2_findings_exist.inspector2_findings_exist import (
    inspector2_findings_exist,
)


class Test_inspector2_findings_exist:
    def test_inspector2_no_findings(self):
        # ... test logic ...

    def test_inspector2_with_findings(self):
        # ... test logic ...


# After
from prowler.providers.aws.services.inspector2.inspector2_active_findings_exist.inspector2_active_findings_exist import (
    inspector2_active_findings_exist,
)


class Test_inspector2_active_findings_exist:
    def test_inspector2_no_findings(self):
        # ... test logic ...

    def test_inspector2_with_findings(self):
        # ... test logic ...
```

**Important:** Update all references to the old check name in the test file, including:

- Import statements at the top of the file
- The test class name
- Any function calls to the check
- Any string references to the check name
- Mock patches that reference the check

**Complete example of all changes needed in test files:**
```python
# Before
from prowler.providers.aws.services.inspector2.inspector2_findings_exist.inspector2_findings_exist import (
    inspector2_findings_exist,
)


class Test_inspector2_findings_exist:
    def test_inspector2_no_findings(self):
        # Mock setup
        with mock.patch(
            "prowler.providers.aws.services.inspector2.inspector2_findings_exist.inspector2_findings_exist.inspector2_client",
            inspector2_client,
        ):
            check = inspector2_findings_exist()
            result = check.execute()

            assert len(result) == 1
            assert result[0].status == "PASS"
            assert "No active findings found" in result[0].status_extended


# After
from prowler.providers.aws.services.inspector2.inspector2_active_findings_exist.inspector2_active_findings_exist import (
    inspector2_active_findings_exist,
)


class Test_inspector2_active_findings_exist:
    def test_inspector2_no_findings(self):
        # Mock setup
        with mock.patch(
            "prowler.providers.aws.services.inspector2.inspector2_active_findings_exist.inspector2_active_findings_exist.inspector2_client",
            inspector2_client,
        ):
            check = inspector2_active_findings_exist()
            result = check.execute()

            assert len(result) == 1
            assert result[0].status == "PASS"
            assert "No active findings found" in result[0].status_extended
```

## Update Compliance Mappings

Finally, rename all appearances of the old check name to the new check name inside any compliance framework where the check is mapped:

- `prowler/compliance/<service>/<compliance_where_the_check_is_mapped>.json`

**Example:**
```json
{
  "Framework": "CIS",
  "Version": "2.0",
  "Provider": "AWS",
  "Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance for configuring security options for a subset of Amazon Web Services.",
  "Requirements": [
    {
      "Id": "4.1",
      "Description": "Ensure a log metric filter and alarm exist for unauthorized API calls",
      "Checks": [
        "inspector2_active_findings_exist"
      ],
      "Attributes": [
        {
          "Section": "4 Logging and Monitoring",
          "Profile": "Level 1",
          "AssessmentStatus": "Automated",
          "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms."
        }
      ]
    }
  ]
}
```

The development compliance file may contain examples of the check being renamed. If so, modify this file as well:

- `api/src/backend/api/fixtures/dev/7_dev_compliance.json`

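To avoid missing a reference, it can help to sweep the whole repository for the old name before opening the PR. A small sketch of that sweep (a plain `grep -r inspector2_findings_exist prowler/ tests/ api/` works just as well):

```python
import pathlib

OLD_NAME = "inspector2_findings_exist"

# Print every file under prowler/, tests/, and api/ that still mentions the old name.
for root in ("prowler", "tests", "api"):
    if not pathlib.Path(root).exists():
        continue
    for path in pathlib.Path(root).rglob("*"):
        if path.is_file():
            try:
                if OLD_NAME in path.read_text(encoding="utf-8"):
                    print(path)
            except (UnicodeDecodeError, OSError):
                continue
```
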
@@ -21,6 +21,8 @@ Within this folder the following files are also to be created:

- `<new_service_name>_service.py` – Contains all the logic and API calls of the service.
- `<new_service_name>_client.py` – Contains the initialization of the freshly created service's class so that the checks can use it.

Once the files are created, you can verify that the service has been detected by running the following command: `poetry run python prowler-cli.py <provider> --list-services | grep <new_service_name>`.

## Service Structure and Initialisation

Prowler's service structure is outlined below. To initialise a service, just import its service client in a check.

@@ -75,7 +77,7 @@ class <Service>(ServiceParentClass):

```python
        # String in case the provider's API service name is different.
        super().__init__(__class__.__name__, provider)

        # Create an empty dictionary of items to be gathered, using the unique ID as the dictionary's key, e.g., instances.
        self.<items> = {}

        # If parallelization can be carried out by regions or locations, the function __threading_call__ to be used must be implemented in the Service Parent Class.
```

@@ -160,11 +162,9 @@ class <Service>(ServiceParentClass):

???+ note
    To prevent false findings, when Prowler fails to retrieve items due to Access Denied or similar errors, the affected item's value is set to `None`.

#### Resource Models

Resource models define structured classes used within services to store and process data extracted from API calls. They are defined in the same file as the service class, but outside of the class, usually at the bottom of the file.

Prowler leverages Pydantic's [BaseModel](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel) to enforce data validation.

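A minimal sketch of what such a resource model can look like (the model name and fields are illustrative, not tied to any real service):

```python
from pydantic import BaseModel


class Instance(BaseModel):
    """Illustrative resource model stored by a service."""

    id: str
    name: str
    region: str
    public: bool = False
    tags: list = []
```
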
@@ -227,11 +227,24 @@ from prowler.providers.<provider>.services.<new_service_name>.<new_service_name>

## Provider Permissions in Prowler

Before implementing a new service, verify that Prowler's existing permissions for each provider are sufficient. If additional permissions are required, refer to the relevant documentation and update accordingly.

Provider-Specific Permissions Documentation:

- [AWS](../tutorials/aws/authentication.md#required-permissions)
- [Azure](../tutorials/azure/authentication.md#required-permissions)
- [GCP](../tutorials/gcp/authentication.md#required-permissions)
- [M365](../tutorials/microsoft365/authentication.md#required-permissions)
- [GitHub](../tutorials/github/authentication.md)

## Best Practices

- When available in the provider, use threading or parallelization utilities for all methods that can be parallelized, to maximize performance and reduce scan time.
- Define a Pydantic `BaseModel` for every resource you manage, and use these models for all resource data handling.
- Log every major step (start, success, error) in resource discovery and attribute collection for traceability and debugging; include as much context as possible.
- Catch and log all exceptions, providing detailed context (region, subscription, resource, error type, line number) to aid troubleshooting, as sketched below.
- Use consistent naming for resource containers, unique identifiers, and model attributes to improve code readability and maintainability.
- Add docstrings to every method and comments to explain any service-specific logic, especially where provider APIs behave differently or have quirks.
- Collect and store resource tags and additional attributes to support richer checks and reporting.
- Leverage shared utility helpers for session setup, identifier parsing, and other cross-cutting concerns to avoid code duplication. This kind of code is typically stored in a `lib` folder in the service folder.
- Keep code modular, maintainable, and well-documented for ease of extension and troubleshooting.

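A minimal, self-contained sketch of the context-rich error-logging pattern described above (the region names and the failing call are simulated):

```python
import logging

logger = logging.getLogger(__name__)

def describe_resources(region: str) -> list:
    # Stand-in for a real API call; fails for one region to show the pattern.
    if region == "eu-west-1":
        raise TimeoutError("simulated API timeout")
    return ["resource-1"]

for region in ("us-east-1", "eu-west-1"):
    try:
        resources = describe_resources(region)
    except Exception as error:
        # Log the region, error type, and source line, then keep scanning.
        logger.error(
            f"{region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
        )
        continue
```
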
@@ -39,7 +39,7 @@ To execute the Prowler test suite, install the necessary dependencies listed in

### Prerequisites

If you have not installed Prowler yet, refer to the [developer guide introduction](./introduction.md#getting-the-code-and-installing-all-dependencies).

### Executing Tests

@@ -520,7 +520,7 @@ Execute tests on the service `__init__` to ensure correct information retrieval.

While service tests resemble *Integration Tests*, as they assess how the service interacts with the provider, they ultimately fall under *Unit Tests* due to the use of Moto or custom mock objects.

For detailed guidance on test creation and existing service tests, check the current [AWS checks implementation](https://github.com/prowler-cloud/prowler/tree/master/tests/providers/aws/services).

## GCP

@@ -1,521 +0,0 @@

# Requirements

Prowler has been written in Python using the [AWS SDK (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html#), the [Azure SDK](https://azure.github.io/azure-sdk-for-python/), and the [GCP API Python Client](https://github.com/googleapis/google-api-python-client/).

## AWS

Since Prowler uses AWS credentials under the hood, you can follow any authentication method described [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-precedence).

### Authentication

Make sure you have properly configured the AWS CLI with a valid Access Key and Region, or declare the AWS variables properly (or use an instance profile/role):

```console
aws configure
```

or

```console
export AWS_ACCESS_KEY_ID="ASXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
export AWS_SESSION_TOKEN="XXXXXXXXX"
```

Those credentials must be associated with a user or role that has the proper permissions to run all checks. To make sure, add the following AWS managed policies to the user or role being used:

- `arn:aws:iam::aws:policy/SecurityAudit`
- `arn:aws:iam::aws:policy/job-function/ViewOnlyAccess`

???+ note
    Moreover, some additional read-only permissions are needed for several checks, so make sure you also attach the custom policy [prowler-additions-policy.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-additions-policy.json) to the role you are using. If you want Prowler to send findings to [AWS Security Hub](https://aws.amazon.com/security-hub), make sure you also attach the custom policy [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-security-hub.json).

### Multi-Factor Authentication

If your IAM entity enforces MFA, you can use `--mfa`, and Prowler will ask you to input the following values to get a new session:

- The ARN of your MFA device
- A TOTP (Time-Based One-Time Password)

## Azure

Prowler for Azure supports the following authentication types. To use each one, you need to pass the proper flag to the execution:

- [Service Principal Application](https://learn.microsoft.com/en-us/entra/identity-platform/app-objects-and-service-principals?tabs=browser#service-principal-object) (recommended).
- Current stored AZ CLI credentials.
- Interactive browser authentication.
- [Managed identity](https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview) authentication.

???+ warning
    For Prowler App, only the Service Principal authentication method is supported.

### Service Principal Application authentication

To allow Prowler to assume the service principal application identity and start the scan, you need to configure the following environment variables:

```console
export AZURE_CLIENT_ID="XXXXXXXXX"
export AZURE_TENANT_ID="XXXXXXXXX"
export AZURE_CLIENT_SECRET="XXXXXXX"
```

If you try to execute Prowler with the `--sp-env-auth` flag and those variables are empty or not exported, the execution is going to fail.
Follow the instructions in the [Create Prowler Service Principal](../tutorials/azure/create-prowler-service-principal.md#how-to-create-prowler-service-principal-application) section to create a service principal.

### AZ CLI / Browser / Managed Identity authentication

The other three cases do not need additional configuration: `--az-cli-auth` and `--managed-identity-auth` are automated options. To use `--browser-auth`, the user needs to authenticate against Azure using the default browser to start the scan, and `tenant-id` is also required.

### Needed permissions

Prowler for Azure needs two types of permission scopes to be set:

- **Microsoft Entra ID permissions**: used to retrieve metadata from the identity assumed by Prowler and for specific Entra checks (not mandatory to have access to execute the tool). The permissions required by the tool are the following:
    - `Directory.Read.All`
    - `Policy.Read.All`
    - `UserAuthenticationMethod.Read.All` (used only for the Entra checks related to multi-factor authentication)

???+ note
    You can replace `Directory.Read.All` with `Domain.Read.All`, which is a more restrictive permission, but you won't be able to run the Entra checks related to DirectoryRoles and GetUsers.

- **Subscription scope permissions**: required to launch the checks against your resources and mandatory to launch the tool. It is required to add the following RBAC built-in roles per subscription to the entity that is going to be assumed by the tool:
    - `Reader`
    - `ProwlerRole` (custom role with minimal permissions defined in [prowler-azure-custom-role](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-azure-custom-role.json))

???+ note
    Please notice that the field `assignableScopes` in the JSON custom role file must be changed to the subscription or management group where the role is going to be assigned. The valid formats for the field are `/subscriptions/<subscription-id>` or `/providers/Microsoft.Management/managementGroups/<management-group-id>`.

To assign the permissions, follow the instructions in the [Microsoft Entra ID permissions](../tutorials/azure/create-prowler-service-principal.md#assigning-the-proper-permissions) section and the [Azure subscriptions permissions](../tutorials/azure/subscriptions.md#assign-the-appropriate-permissions-to-the-identity-that-is-going-to-be-assumed-by-prowler) section, respectively.

???+ warning
    Some permissions in `ProwlerRole` are considered **write** permissions, so if you have a `ReadOnly` lock attached to some resources, you may get an error and will not get a finding for that check.

#### Checks that require ProwlerRole

The following checks require the `ProwlerRole` permissions to be executed. If you want to run them, make sure you have assigned the role to the identity that is going to be assumed by Prowler:

- `app_function_access_keys_configured`
- `app_function_ftps_deployment_disabled`

## Google Cloud
|
||||
|
||||
### Authentication
|
||||
|
||||
Prowler will follow the same credentials search as [Google authentication libraries](https://cloud.google.com/docs/authentication/application-default-credentials#search_order):
|
||||
|
||||
1. [GOOGLE_APPLICATION_CREDENTIALS environment variable](https://cloud.google.com/docs/authentication/application-default-credentials#GAC)
|
||||
2. [User credentials set up by using the Google Cloud CLI](https://cloud.google.com/docs/authentication/application-default-credentials#personal)
|
||||
3. [The attached service account, returned by the metadata server](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)

### Needed permissions

Prowler for Google Cloud needs the following permissions to be set:

- **Viewer (`roles/viewer`) IAM role**: granted at the project / folder / org level in order to scan the target projects

- **Project level settings**: you need to have at least one project with the following settings:

    - Identity and Access Management (IAM) API (`iam.googleapis.com`) enabled, either through the [Google Cloud API UI](https://console.cloud.google.com/apis/api/iam.googleapis.com/metrics) or with the gcloud CLI command `gcloud services enable iam.googleapis.com --project <your-project-id>`
    - Service Usage Consumer (`roles/serviceusage.serviceUsageConsumer`) IAM role
    - Set the quota project to be this project, either by running `gcloud auth application-default set-quota-project <project-id>` or by setting an environment variable: `export GOOGLE_CLOUD_QUOTA_PROJECT=<project-id>`

The above settings must be associated with a user or service account.
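
A minimal sketch of granting these with the gcloud CLI, assuming a service account is used (the project IDs and service account email are placeholders):

```console
# Grant the Viewer role on a target project to the scanning identity
gcloud projects add-iam-policy-binding <target-project-id> \
    --member="serviceAccount:<prowler-sa>@<project-id>.iam.gserviceaccount.com" \
    --role="roles/viewer"

# Grant the Service Usage Consumer role on the quota project
gcloud projects add-iam-policy-binding <quota-project-id> \
    --member="serviceAccount:<prowler-sa>@<project-id>.iam.gserviceaccount.com" \
    --role="roles/serviceusage.serviceUsageConsumer"

# Enable the IAM API on the quota project and set it for ADC
gcloud services enable iam.googleapis.com --project <quota-project-id>
gcloud auth application-default set-quota-project <quota-project-id>
```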

???+ note
    By default, `prowler` will scan all accessible GCP projects; use the `--project-ids` flag to specify the projects to be scanned.
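
For instance, to scan only two specific projects (assuming the flag takes a space-separated list of project IDs):

```console
prowler gcp --project-ids <project-id-1> <project-id-2>
```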

## Microsoft 365

Prowler for M365 currently supports the following authentication types:

- [Service Principal Application](https://learn.microsoft.com/en-us/entra/identity-platform/app-objects-and-service-principals?tabs=browser#service-principal-object).
- Service Principal Application and Microsoft User Credentials (**recommended**).
- Stored Azure CLI credentials.
- Interactive browser authentication.

???+ warning
    For Prowler App, only the Service Principal with User Credentials authentication method is supported.

### Service Principal authentication (recommended)

Authentication flag: `--sp-env-auth`

To allow Prowler to assume the service principal identity and start the scan, the following environment variables need to be configured:

```console
export AZURE_CLIENT_ID="XXXXXXXXX"
export AZURE_CLIENT_SECRET="XXXXXXXXX"
export AZURE_TENANT_ID="XXXXXXXXX"
```

If you try to execute Prowler with the `--sp-env-auth` flag and those variables are empty or not exported, the execution will fail.
Follow the instructions in the [Create Prowler Service Principal](../tutorials/microsoft365/getting-started-m365.md#create-the-service-principal-app) section to create a service principal.

If you don't add the external API permissions described in the section mentioned above, you will only be able to run the checks that work through MS Graph, which means you won't be able to run the whole provider.

If you want to scan all the checks from M365 you will need to add the required permissions to the service principal application. Refer to the [Needed permissions](/docs/tutorials/microsoft365/getting-started-m365.md#needed-permissions) section for more information.
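
With the three variables exported, a scan using this method is launched as follows:

```console
# Run the M365 provider assuming the service principal from the environment
prowler m365 --sp-env-auth
```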

### Service Principal and User Credentials authentication

Authentication flag: `--env-auth`

This authentication method follows the same approach as the service principal method but introduces two additional environment variables for user credentials: `M365_USER` and `M365_PASSWORD`.

```console
export AZURE_CLIENT_ID="XXXXXXXXX"
export AZURE_CLIENT_SECRET="XXXXXXXXX"
export AZURE_TENANT_ID="XXXXXXXXX"
export M365_USER="your_email@example.com"
export M365_PASSWORD="examplepassword"
```

These two additional environment variables are **required** in this authentication method to execute the PowerShell modules needed to retrieve information from M365 services. Prowler uses Service Principal authentication to access Microsoft Graph and user credentials to authenticate to the Microsoft PowerShell modules.

- `M365_USER` should be your Microsoft account email using the **assigned domain in the tenant**. This means it must look like `example@YourCompany.onmicrosoft.com` or `example@YourCompany.com`, but it must be the exact domain assigned to that user in the tenant.

    ???+ warning
        If the user is newly created, you need to sign in with that account first, as Microsoft will prompt you to change the password. If you don't complete this step, user authentication will fail because Microsoft marks the initial password as expired.

    ???+ warning
        The user must not be MFA capable. Microsoft does not allow MFA capable users to authenticate programmatically. See the [Microsoft documentation](https://learn.microsoft.com/en-us/entra/identity-platform/scenario-desktop-acquire-token-username-password?tabs=dotnet) for more information.

    ???+ warning
        Using a tenant domain other than the one assigned (even if it belongs to the same tenant) will cause Prowler to fail, as Microsoft authentication will not succeed.

    Ensure you are using the right domain for the user you are trying to authenticate with.

    

- `M365_PASSWORD` must be the user password.

    ???+ note
        Previously an encrypted password was required here; now the plain user password is expected, and Prowler handles the encryption for you.
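
With all five variables exported, the scan is launched the same way, just with the `--env-auth` flag:

```console
# Run the M365 provider with service principal and user credentials
prowler m365 --env-auth
```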

### Interactive Browser authentication

Authentication flag: `--browser-auth`

This authentication method requires the user to authenticate against Azure using the default browser to start the scan; the `--tenant-id` flag is also required.

With these credentials you will only be able to run the checks that work through MS Graph, which means you won't be able to run the whole provider. If you want to scan all the checks from M365 you will need to use the recommended authentication method.

Since this is a delegated permission authentication method, the necessary permissions must be granted to the user, not the application.
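
An invocation with this method looks like the following (the tenant ID is a placeholder):

```console
# Opens the default browser for the interactive sign-in
prowler m365 --browser-auth --tenant-id "<tenant-id>"
```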

### Needed permissions

Prowler for M365 requires different permission scopes depending on the authentication method you choose. The permissions must be configured in Microsoft Entra ID:

#### For Service Principal Authentication (`--sp-env-auth`) - Recommended

When using service principal authentication, you need to add the following **Application Permissions**:

**Microsoft Graph API Permissions:**

- `AuditLog.Read.All`: Required for the Entra service.
- `Directory.Read.All`: Required for all services.
- `Policy.Read.All`: Required for all services.
- `SharePointTenantSettings.Read.All`: Required for the SharePoint service.
- `User.Read` (IMPORTANT: this must be set as **delegated**): Required for the sign-in.

**External API Permissions:**

- `Exchange.ManageAsApp` from the external API `Office 365 Exchange Online`: Required for Exchange PowerShell module app authentication. You also need to assign the `Exchange Administrator` role to the app.
- `application_access` from the external API `Skype and Teams Tenant Admin API`: Required for Teams PowerShell module app authentication.

???+ note
    You can replace `Directory.Read.All` with `Domain.Read.All`, which is a more restrictive permission, but you won't be able to run the Entra checks related to DirectoryRoles and GetUsers.

    > If you do this, you will also need to add the `Organization.Read.All` permission to the service principal application in order to authenticate.

???+ warning
    With service principal only authentication, you can only run checks that work through the MS Graph API. Some checks that require PowerShell modules will not be executed.

#### For Service Principal + User Credentials Authentication (`--env-auth`)

When using service principal with user credentials authentication, you need **both** sets of permissions:

**1. Service Principal Application Permissions**:

- You **will need** all the Microsoft Graph API permissions listed above.
- You **won't need** the External API permissions listed above.

**2. User-Level Permissions**: These are set at the `M365_USER` level, so the user used to run Prowler must have one of the following roles:

- `Global Reader` (recommended): this allows you to read everything needed.
- `Exchange Administrator` and `Teams Administrator`: the user needs both roles, but with these [roles](https://learn.microsoft.com/en-us/exchange/permissions-exo/permissions-exo#microsoft-365-permissions-in-exchange-online) you can access the same information as a Global Reader (since only read access is needed, Global Reader is recommended).

???+ note
    This is the **recommended authentication method** because it allows you to run the full M365 provider, including PowerShell checks, providing complete coverage of all available security checks.

#### For Browser Authentication (`--browser-auth`)

When using browser authentication, permissions are delegated to the user, so the user must have the appropriate permissions rather than the application.

???+ warning
    With browser authentication, you will only be able to run checks that work through the MS Graph API. PowerShell module checks will not be executed.

---

**To assign these permissions and roles**, follow the instructions in the Microsoft Entra ID [permissions](../tutorials/microsoft365/getting-started-m365.md#grant-required-api-permissions) and [roles](../tutorials/microsoft365/getting-started-m365.md#assign-required-roles-to-your-user) sections.

### Supported PowerShell versions

You must have PowerShell installed to run certain M365 checks.
Currently, we support **PowerShell version 7.4 or higher** (7.5 is recommended).

This requirement exists because **PowerShell 5.1** (the version that comes by default on some Windows systems) does not support several cmdlets needed to run the checks properly.
Additionally, earlier [PowerShell cross-platform versions](https://learn.microsoft.com/en-us/powershell/scripting/install/powershell-support-lifecycle?view=powershell-7.5) are no longer under technical support, which may cause unexpected errors.
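
To check which version is installed, you can query `$PSVersionTable` from your shell:

```console
# Should report 7.4 or higher
pwsh -Command '$PSVersionTable.PSVersion'
```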

???+ note
    Installing PowerShell is only needed if you install Prowler from pip or other sources; the SDK and API containers already include PowerShell by default.

Installing PowerShell differs depending on your OS.

- [Windows](https://learn.microsoft.com/es-es/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.5#install-powershell-using-winget-recommended): you will need to update PowerShell to 7.4+ to be able to run Prowler; otherwise some checks will not show findings and the provider may not work as expected. This version of PowerShell is [supported](https://learn.microsoft.com/es-es/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.4#supported-versions-of-windows) on Windows 10, Windows 11, Windows Server 2016 and later versions.

    ```console
    winget install --id Microsoft.PowerShell --source winget
    ```

- [MacOS](https://learn.microsoft.com/es-es/powershell/scripting/install/installing-powershell-on-macos?view=powershell-7.5#install-the-latest-stable-release-of-powershell): installing PowerShell on MacOS requires [brew](https://brew.sh/); once you have it, just run the command below. Pwsh is only supported on macOS 15 (Sequoia) x64 and Arm64, macOS 14 (Sonoma) x64 and Arm64, and macOS 13 (Ventura) x64 and Arm64.

    ```console
    brew install powershell/tap/powershell
    ```

    Once it's installed, run `pwsh` in your terminal to verify it's working.

- Linux: installing PowerShell on Linux depends on the distro you are using:

    - [Ubuntu](https://learn.microsoft.com/es-es/powershell/scripting/install/install-ubuntu?view=powershell-7.5#installation-via-package-repository-the-package-repository): installing PowerShell 7.4+ requires Ubuntu 22.04 or Ubuntu 24.04. The recommended way to install it is to download the package available on PMC. Follow these steps:

        ```console
        ###################################
        # Prerequisites

        # Update the list of packages
        sudo apt-get update

        # Install pre-requisite packages.
        sudo apt-get install -y wget apt-transport-https software-properties-common

        # Get the version of Ubuntu
        source /etc/os-release

        # Download the Microsoft repository keys
        wget -q https://packages.microsoft.com/config/ubuntu/$VERSION_ID/packages-microsoft-prod.deb

        # Register the Microsoft repository keys
        sudo dpkg -i packages-microsoft-prod.deb

        # Delete the Microsoft repository keys file
        rm packages-microsoft-prod.deb

        # Update the list of packages after we added packages.microsoft.com
        sudo apt-get update

        ###################################
        # Install PowerShell
        sudo apt-get install -y powershell

        # Start PowerShell
        pwsh
        ```

    - [Alpine](https://learn.microsoft.com/es-es/powershell/scripting/install/install-alpine?view=powershell-7.5#installation-steps): the only supported version for installing PowerShell 7.4+ on Alpine is Alpine 3.20. The only way to install it is to download the tar.gz package available on [PowerShell GitHub](https://github.com/PowerShell/PowerShell/releases/download/v7.5.0/powershell-7.5.0-linux-musl-x64.tar.gz). Follow these steps:

        ```console
        # Install the requirements
        sudo apk add --no-cache \
            ca-certificates \
            less \
            ncurses-terminfo-base \
            krb5-libs \
            libgcc \
            libintl \
            libssl3 \
            libstdc++ \
            tzdata \
            userspace-rcu \
            zlib \
            icu-libs \
            curl

        sudo apk -X https://dl-cdn.alpinelinux.org/alpine/edge/main add --no-cache \
            lttng-ust \
            openssh-client

        # Download the powershell '.tar.gz' archive
        curl -L https://github.com/PowerShell/PowerShell/releases/download/v7.5.0/powershell-7.5.0-linux-musl-x64.tar.gz -o /tmp/powershell.tar.gz

        # Create the target folder where powershell will be placed
        sudo mkdir -p /opt/microsoft/powershell/7

        # Expand powershell to the target folder
        sudo tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7

        # Set execute permissions
        sudo chmod +x /opt/microsoft/powershell/7/pwsh

        # Create the symbolic link that points to pwsh
        sudo ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh

        # Start PowerShell
        pwsh
        ```

    - [Debian](https://learn.microsoft.com/es-es/powershell/scripting/install/install-debian?view=powershell-7.5#installation-on-debian-11-or-12-via-the-package-repository): installing PowerShell 7.4+ requires Debian 11 or Debian 12. The recommended way to install it is to download the package available on PMC. Follow these steps:

        ```console
        ###################################
        # Prerequisites

        # Update the list of packages
        sudo apt-get update

        # Install pre-requisite packages.
        sudo apt-get install -y wget

        # Get the version of Debian
        source /etc/os-release

        # Download the Microsoft repository GPG keys
        wget -q https://packages.microsoft.com/config/debian/$VERSION_ID/packages-microsoft-prod.deb

        # Register the Microsoft repository GPG keys
        sudo dpkg -i packages-microsoft-prod.deb

        # Delete the Microsoft repository GPG keys file
        rm packages-microsoft-prod.deb

        # Update the list of packages after we added packages.microsoft.com
        sudo apt-get update

        ###################################
        # Install PowerShell
        sudo apt-get install -y powershell

        # Start PowerShell
        pwsh
        ```

    - [RHEL](https://learn.microsoft.com/es-es/powershell/scripting/install/install-rhel?view=powershell-7.5#installation-via-the-package-repository): installing PowerShell 7.4+ requires RHEL 8 or RHEL 9. The recommended way to install it is to download the package available on PMC. Follow these steps:

        ```console
        ###################################
        # Prerequisites

        # Get the version of RHEL
        source /etc/os-release
        if [ ${VERSION_ID%.*} -lt 8 ]
        then majorver=7
        elif [ ${VERSION_ID%.*} -lt 9 ]
        then majorver=8
        else majorver=9
        fi

        # Download the Microsoft RedHat repository package
        curl -sSL -O https://packages.microsoft.com/config/rhel/$majorver/packages-microsoft-prod.rpm

        # Register the Microsoft RedHat repository
        sudo rpm -i packages-microsoft-prod.rpm

        # Delete the downloaded package after installing
        rm packages-microsoft-prod.rpm

        # Update package index files
        sudo dnf update

        # Install PowerShell
        sudo dnf install powershell -y
        ```

- [Docker](https://learn.microsoft.com/es-es/powershell/scripting/install/powershell-in-docker?view=powershell-7.5#use-powershell-in-a-container): the following command downloads the latest stable version of PowerShell:

    ```console
    docker pull mcr.microsoft.com/dotnet/sdk:9.0
    ```

    To start an interactive pwsh shell you just need to run:

    ```console
    docker run -it mcr.microsoft.com/dotnet/sdk:9.0 pwsh
    ```

### Needed PowerShell modules

To obtain the required data for this provider, we use several PowerShell cmdlets.
These cmdlets come from different modules that must be installed.

These modules are installed automatically if you run Prowler with the `--init-modules` flag. This is an example of running Prowler and installing the modules:

```console
python3 prowler-cli.py m365 --verbose --log-level ERROR --env-auth --init-modules
```

If you already have the modules installed, using the flag is not a problem: Prowler automatically checks whether the needed modules are already present.

???+ note
    Prowler installs the modules using `-Scope CurrentUser`.
    If you encounter any issues with services not working after the automatic installation, try installing the modules manually using `-Scope AllUsers` (administrator permissions are required for this).
    The command needed to install a module manually is:
    ```powershell
    Install-Module -Name "ModuleName" -Scope AllUsers -Force
    ```

The required modules are:

- [ExchangeOnlineManagement](https://www.powershellgallery.com/packages/ExchangeOnlineManagement/3.6.0): Minimum version 3.6.0. Required for several checks across Exchange, Defender, and Purview.
- [MicrosoftTeams](https://www.powershellgallery.com/packages/MicrosoftTeams/6.6.0): Minimum version 6.6.0. Required for all Teams checks.
- [MSAL.PS](https://www.powershellgallery.com/packages/MSAL.PS/4.32.0): Required for the Exchange module when using application authentication.
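
To verify that the modules are present and meet the minimum versions, you can list them from your shell, for example:

```console
# List the installed versions of the required modules
pwsh -Command 'Get-Module -ListAvailable ExchangeOnlineManagement, MicrosoftTeams, MSAL.PS'
```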

## GitHub

### Authentication

Prowler supports multiple methods to [authenticate with GitHub](https://docs.github.com/en/rest/authentication/authenticating-to-the-rest-api). These include:

- **Personal Access Token (PAT)**
- **OAuth App Token**
- **GitHub App Credentials**

This flexibility allows you to scan and analyze your GitHub account, including repositories, organizations, and applications, using the method that best suits your use case.
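
As an illustration, a PAT-based scan could look like the sketch below; the flag name is an assumption, so verify the exact option names with `prowler github --help`:

```console
# Scan GitHub using a personal access token (flag name assumed)
prowler github --personal-access-token "<your-pat>"
```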

The provided credentials must have the appropriate permissions to perform all the required checks.

???+ note
    GitHub App Credentials support fewer checks than the other authentication methods.

## Infrastructure as Code (IaC)

Prowler's Infrastructure as Code (IaC) provider enables you to scan local or remote infrastructure code for security and compliance issues using [Checkov](https://www.checkov.io/). This provider supports a wide range of IaC frameworks and requires no cloud authentication for local scans.

### Authentication

- For local scans, no authentication is required.
- For remote repository scans, authentication can be provided via:
    - [**GitHub Username and Personal Access Token (PAT)**](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic)
    - [**GitHub OAuth App Token**](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-fine-grained-personal-access-token)
    - [**Git URL**](https://git-scm.com/docs/git-clone#_git_urls)
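
As a sketch, a local scan could look like the following; the `--scan-path` flag name is an assumption, so verify the exact options with `prowler iac --help`:

```console
# Scan a local directory of IaC files (flag name assumed)
prowler iac --scan-path ./my-iac-directory
```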

### Supported Frameworks

The IaC provider leverages Checkov to support multiple frameworks, including:

- Terraform
- CloudFormation
- Kubernetes
- ARM (Azure Resource Manager)
- Serverless
- Dockerfile
- YAML/JSON (generic IaC)
- Bicep
- Helm
- GitHub Actions, GitLab CI, Bitbucket Pipelines, Azure Pipelines, CircleCI, Argo Workflows
- Ansible
- Kustomize
- OpenAPI
- SAST, SCA (Software Composition Analysis)