Compare commits


109 Commits

Author SHA1 Message Date
Leonardo Azize Martins
c120a31904 Fix: Credential chaining from environment variables @lazize #996f
If profile is not defined, restore the original credentials from environment variables, if they exist, before assume-role
2022-01-21 19:21:40 +01:00
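For context, a minimal sketch of the backup/restore pattern this and several later commits describe. Prowler 2.x is a bash script and restoreInitialAWSCredentials appears further down this log, but backupInitialAWSCredentials and the exact bodies below are assumptions:

```bash
#!/usr/bin/env bash
# Sketch: snapshot the caller's credential environment variables before
# assume-role, then restore them afterwards. If a variable was never set,
# unset it instead of exporting an empty string, so the AWS CLI's default
# credential resolution chain still works. Helper bodies are illustrative.

backupInitialAWSCredentials() {
  INITIAL_AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID:-}"
  INITIAL_AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY:-}"
  INITIAL_AWS_SESSION_TOKEN="${AWS_SESSION_TOKEN:-}"
}

restoreInitialAWSCredentials() {
  if [[ -n "$INITIAL_AWS_ACCESS_KEY_ID" ]]; then
    export AWS_ACCESS_KEY_ID="$INITIAL_AWS_ACCESS_KEY_ID"
    export AWS_SECRET_ACCESS_KEY="$INITIAL_AWS_SECRET_ACCESS_KEY"
    export AWS_SESSION_TOKEN="$INITIAL_AWS_SESSION_TOKEN"
  else
    unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
  fi
}
```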
Leonardo Azize Martins
ed558c857a Add additional action permissions for Glue and Shield Advanced checks @lazize
* Add extra shield action permission

Allows the shield:GetSubscriptionState action

* Add permission actions

Ensure all files that require permission actions include the same set of actions
2022-01-20 16:45:34 +01:00
Toni de la Fuente
d7b360629d Fix: issue #986 2022-01-14 13:10:17 +01:00
Toni de la Fuente
0be3be129c Fix issue #971: set all as INFO instead of FAIL when there is no access to the resource 2022-01-14 12:56:08 +01:00
Toni de la Fuente
e54bd14527 Fix: issue #741 CloudFront and real-time logs 2022-01-14 12:50:06 +01:00
Toni de la Fuente
2e9cd246ad Fix: issues #758 and #984 2022-01-13 15:16:17 +01:00
Toni de la Fuente
2cbb55feb5 Updated .dockerignore with .github/ 2022-01-06 13:45:26 +01:00
Toni de la Fuente
8cee401512 Label version 2.7.0-6January2022 2022-01-06 13:44:46 +01:00
Toni de la Fuente
7b848c04c4 Added FAIL as error handling when SCP prevents queries to regions 2022-01-06 13:43:48 +01:00
Toni de la Fuente
f0e8c79928 Added passed security groups and updated title to check 777 2022-01-06 13:40:19 +01:00
Toni de la Fuente
7a0e180cee Added passed security groups in output to check 778 2022-01-06 13:35:24 +01:00
Toni de la Fuente
9e90bde554 Changed Route53 checks 7152 and 7153 title to clarify 2022-01-06 13:12:24 +01:00
Toni de la Fuente
d65eabcd9e Changed Route53 checks 7152 and 7153 to INFO when no domains found 2022-01-06 13:08:11 +01:00
Toni de la Fuente
d3e50169d9 Merge branch '2.7' of https://github.com/toniblyx/prowler into 2.7 2022-01-05 09:46:43 +01:00
Toni de la Fuente
b60aa38d77 Removed echoes after role chaining fix 2022-01-05 09:46:34 +01:00
Pepe Fagoaga
57e3c5a2b2 docs(templates): Improve bug template with more info (#982) 2022-01-04 11:00:09 +01:00
Toni de la Fuente
99538a8a9c Added Shield actions GetSubscriptionState and DescribeProtection 2021-12-29 12:51:13 +01:00
Toni de la Fuente
4cd025538f Added Shield actions GetSubscriptionState and DescribeProtection 2021-12-29 12:29:39 +01:00
Toni de la Fuente
d1eca877dd Add AWS Advanced Shield protection checks corrections 2021-12-29 11:35:17 +01:00
Toni de la Fuente
70e5335406 Add AWS Advanced Shield protection checks @michael-dickinson-sainsburys
2021-12-29 11:03:12 +01:00
Michael Dickinson
d707f5d994 Include example for global resources 2021-12-23 22:36:53 +00:00
Michael Dickinson
586d1f308c New check 7171 Classic load balancers are protected by AWS Shield Advanced 2021-12-23 22:30:01 +00:00
Michael Dickinson
ab9e40c4dd New check 7170 Application load balancers are protected by AWS Shield Advanced 2021-12-23 22:30:01 +00:00
Michael Dickinson
472afa5a1f New check 7169 Global accelerators are protected by AWS Shield Advanced 2021-12-23 22:30:01 +00:00
Michael Dickinson
122e1c0cf8 New check 7168 Route53 hosted zones are protected by AWS Shield Advanced 2021-12-23 22:30:01 +00:00
Michael Dickinson
4aa4c95d22 New check 7167 Cloudfront distributions are protected by AWS Shield Advanced 2021-12-23 22:30:01 +00:00
Michael Dickinson
7690cb5f98 New check 7166 Elastic IP addresses with associations are protected by AWS Shield Advanced 2021-12-23 22:21:51 +00:00
Toni de la Fuente
751bf805f9 restoreInitialAWSCredentials, unset the credentials if never set
2021-12-22 16:22:48 +01:00
Andrea Di Fabio
0f2e351716 When performing a restoreInitialAWSCredentials, unset the credentials ENV variables if they were never set 2021-12-22 10:13:06 -05:00
Toni de la Fuente
39171b7387 Fixed CIS LEVEL on 7163 through 7165 2021-12-22 15:54:50 +01:00
Toni de la Fuente
8acd861ef9 Improved help usage options -h 2021-12-22 15:31:29 +01:00
Toni de la Fuente
050718c423 Fix checks with comma in fields that break csv @j2clerck
2021-12-22 15:08:02 +01:00
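A rough illustration of the comma fix described here and in the "remove commas" commits below; the helper name and replacement strategy are assumptions (the actual change simply removed the commas):

```bash
#!/usr/bin/env bash
# Sketch: strip commas from a free-text finding message before writing it
# into a CSV row, so embedded commas cannot shift columns.

sanitize_csv_field() {
  echo "${1//,/ }"   # replace each comma with a space
}

MESSAGE='Policy allows s3:GetObject, s3:PutObject on all buckets'
printf '%s,%s\n' "extra123" "$(sanitize_csv_field "$MESSAGE")"
```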
Toni de la Fuente
62eeafec37 Removed unneeded package "file" from Dockerfile @sectoramen
2021-12-22 14:48:42 +01:00
Andrea Di Fabio
693a5dfd09 removed file as it is not needed 2021-12-22 08:40:59 -05:00
Joseph de CLERCK
c5416fdeec remove commas 2021-12-21 17:55:10 -05:00
Toni de la Fuente
180c2f27c5 Added $PROFILE_OPT to CopyToS3 commands @sectoramen
2021-12-21 19:09:37 +01:00
Andrea Di Fabio
5771c78206 Added $PROFILE_OPT to CopyToS3 commands 2021-12-21 12:51:22 -05:00
Toni de la Fuente
8b415ec063 Added -D option to copy to S3 with the initial AWS credentials instead of the assumed ones used by the -B option @sectoramen
2021-12-21 17:22:01 +01:00
Andrea Di Fabio
aa945f3b46 Cosmetic variable name change 2021-12-21 11:16:46 -05:00
Andrea Di Fabio
949c2056ab Added -D option to copy to S3 with the initial AWS credentials 2021-12-21 11:11:53 -05:00
Joseph de CLERCK
050a565440 fix checks with comma issues 2021-12-20 14:38:24 -05:00
Toni de la Fuente
833ad79c3a Improved documentation for install process 2021-12-20 09:45:55 +01:00
Andrea Di Fabio
6cb3d23ed2 fixed bracket 2021-12-19 11:48:32 -05:00
Andrea Di Fabio
947c78cd76 exporting the ENV variables 2021-12-19 11:47:34 -05:00
Andrea Di Fabio
cc9d015fc8 Backup AWS Credentials before AssumeRole and Restore them before CopyToS3 2021-12-19 09:53:20 -05:00
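Taken together with the -D and $PROFILE_OPT commits above, the intended flow looks roughly like this; the stubs and bucket name are hypothetical stand-ins for Prowler's real steps:

```bash
#!/usr/bin/env bash
# Sketch of the ordering: back up credentials, scan under the assumed role,
# restore the initial identity (the -D behavior), then CopyToS3.

backupInitialAWSCredentials()  { :; }  # snapshot AWS_* vars (see earlier sketch)
restoreInitialAWSCredentials() { :; }  # restore or unset them (see earlier sketch)
assume_scan_role()             { :; }  # stub: would export temporary STS credentials
run_checks()                   { :; }  # stub: checks run under the assumed role

backupInitialAWSCredentials
assume_scan_role
run_checks
restoreInitialAWSCredentials
aws s3 cp output/report.csv "s3://my-results-bucket/" ${PROFILE_OPT:-}  # CopyToS3
```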
Toni de la Fuente
a6249c54e0 Added cache purge to Dockerfile 2021-12-16 20:34:00 +01:00
Toni de la Fuente
9a9b933f88 Added upgrade to the RUN 2021-12-16 16:27:09 +01:00
Toni de la Fuente
67adaf522e Updated Dockerfile with AWS cli v2 2021-12-16 15:58:02 +01:00
Toni de la Fuente
dc6718b9d1 Updated Dockerfile to use amazonlinux container @Kirizan
2021-12-16 15:22:02 +01:00
nikirby
bbfd47da2e Updated Dockerfile to use amazonlinux container 2021-12-16 09:16:29 -05:00
Toni de la Fuente
0c7adae3cd Add docker volume example to README.md 2021-12-13 15:20:37 +01:00
Toni de la Fuente
1045fc8330 Fix broken link in README.md @rtcms
2021-12-13 14:36:52 +01:00
Toni de la Fuente
13db49ba64 Merge branch '2.7' into patch-1 2021-12-13 14:36:10 +01:00
RT
70669eb5b5 Fix Broken Link 2021-12-13 14:56:53 +05:30
Toni de la Fuente
142f42d406 Added invalid check or group id to the error message #962 2021-12-09 15:54:43 +01:00
Toni de la Fuente
d48e9f34a5 Added new checks 7164 and 7165 to group extras 2021-12-09 12:23:22 +01:00
Toni de la Fuente
49ea3d8589 Fix #940: handle error when unable to list functions 2021-12-09 12:19:45 +01:00
Toni de la Fuente
78faa8f365 Fix #962 check 7147 ALTERNATE NAME 2021-12-09 11:48:20 +01:00
Toni de la Fuente
c241f0008a Fix #957 check 763 had us-east-1 region hardcoded 2021-12-09 10:01:34 +01:00
Toni de la Fuente
f98f1f8a1d Merge branch '2.7' of https://github.com/toniblyx/prowler into 2.7 2021-12-09 09:54:14 +01:00
Toni de la Fuente
65c95a8354 Fix #963 check 792 to force json in ELB queries 2021-12-09 09:54:05 +01:00
Toni de la Fuente
bfdc88123c New check 7165 DynamoDB: DAX encrypted at rest @Daniel-Peladeau @7thseraph @maisenhe
2021-12-07 15:55:10 +01:00
Daniel Peladeau
ad902746f4 Fix ASFF Resource Type 2021-12-07 08:48:41 -05:00
Daniel Peladeau
15c5409b0b New check 7165 DynamoDB: DAX encrypted at rest @Daniel-Peladeau 2021-12-07 08:42:31 -05:00
Daniel Peladeau
f9d792d677 New check 7165 DynamoDB: DAX encrypted at rest @Daniel-Peladeau 2021-12-03 17:05:04 -05:00
Toni de la Fuente
022a4487f3 Merge branch '2.7' of https://github.com/toniblyx/prowler into 2.7 2021-12-03 16:09:36 +01:00
Toni de la Fuente
0ac1b00044 Added issue templates 2021-12-03 15:40:26 +01:00
Toni de la Fuente
2c6895f31e Added checks extra7160,extra7161,extra7162,extra7163 to group Extras 2021-12-03 15:19:01 +01:00
Toni de la Fuente
fdcc4fb54a Added checks extra7160,extra7161,extra7162,extra7163 to group Extras 2021-12-03 15:15:58 +01:00
Toni de la Fuente
3c3a3f7c54 Fix issue with Security Hub integration when resolving closed findings and there are either a lot of new findings or a lot of resolved findings @Kirizan
2021-12-03 14:53:53 +01:00
Toni de la Fuente
334a8c2b13 New check 7164 Cloudwatch log groups are protected by KMS @maisenhe
2021-12-02 18:36:25 +01:00
Joel Maisenhelder
6ff94b2b60 updated CHECK_RISK 2021-12-02 09:01:54 -06:00
Joel Maisenhelder
e1e57db93e New check 7164 Check if Cloudwatch log groups are protected by AWS KMS @maisenhe 2021-12-01 19:44:34 -06:00
nikirby
4ea4debedd Added line to delete the temp folder after everything is done. 2021-12-01 15:06:09 -05:00
nikirby
43b3ad6447 Adjusted the batch to only do 50 at a time, since 100 caused capacity issues. Also added a check for an edge case: if the number of updated findings was a multiple of the batch size, it would throw an error for attempting to import 0 findings. 2021-12-01 13:29:12 -05:00
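A schematic of the batching logic this commit describes; the echo stands in for the real `aws securityhub batch-import-findings` call and the payloads are placeholders:

```bash
#!/usr/bin/env bash
# Sketch: flush findings in batches of 50, and skip the final flush when the
# total is an exact multiple of the batch size, so the API is never asked to
# import 0 findings.

BATCH_SIZE=50
FINDINGS=( finding1 finding2 finding3 )   # placeholder payloads

batch=()
for finding in "${FINDINGS[@]}"; do
  batch+=( "$finding" )
  if (( ${#batch[@]} == BATCH_SIZE )); then
    echo "importing ${#batch[@]} findings"
    batch=()
  fi
done
if (( ${#batch[@]} > 0 )); then   # guard against the empty-final-batch edge case
  echo "importing ${#batch[@]} findings"
fi
```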
nikirby
5ecf1e2a85 Fixed error that appeared if the number of findings was very high. 2021-12-01 13:25:48 -05:00
Toni de la Fuente
2bed303474 Label 2.7.0-1December2021 for tests 2021-12-01 14:14:36 +01:00
Toni de la Fuente
fa46709860 Fix and clean assume-role to better handle AWS STS CLI errors @jfagoagas
2021-12-01 13:35:05 +01:00
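The error-handling shape these STS commits describe, sketched with a hypothetical role ARN; the credentials JSON goes to a temp file while stderr is captured for the message:

```bash
#!/usr/bin/env bash
# Sketch: capture stderr from `aws sts assume-role` so a failure aborts with
# the CLI's own message instead of continuing with empty credentials.

STS_ERROR=$(aws sts assume-role \
  --role-arn "arn:aws:iam::123456789012:role/ProwlerRole" \
  --role-session-name prowler \
  2>&1 >/tmp/assume-role.json)
if [[ $? -ne 0 ]]; then
  echo "ERROR assuming role: ${STS_ERROR}" >&2
  exit 1
fi
```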
Pepe Fagoaga
61bab1d85e Merge branch '2.7' into fix-aws-sts-handle-errors 2021-12-01 13:29:59 +01:00
Toni de la Fuente
84a6843dda Fix issue #938 assume_role multiple times @halfluke
2021-12-01 12:52:16 +01:00
Toni de la Fuente
0b92aeca4c Merge branch '2.7' into fix_#938_assume_role 2021-12-01 12:47:08 +01:00
root
e5eb066c61 #938 issue assume_role multiple times should be fixed 2021-11-30 15:29:10 -05:00
Toni de la Fuente
593c3b88e8 Docs Readme: fix link for group25 FTR @lopmoris
2021-11-29 19:27:01 +01:00
Israel
f8ea974142 Update README.md
Fix broken link caused by capital letters in the group file name (group25_FTR)
2021-11-29 19:11:10 +01:00
Toni de la Fuente
3c32c995bf Fix group25 FTR @lopmoris
2021-11-29 18:00:05 +01:00
Israel
dd4c3a3b3d Update group25_FTR
When trying to run group 25 (Amazon FTR related security checks) nothing happens. After looking at the code, there is a misconfiguration in two parameters: GROUP_RUN_BY_DEFAULT[9] and GROUP_CHECKS[9]. Updating the values to 25 fixed the issue.
2021-11-29 17:56:04 +01:00
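For reference, a schematic of how these indexed group-file entries line up; field names beyond the two cited in the commit, and the check list, are assumptions based on the Prowler 2.x group-file pattern:

```bash
# groups/group25_ftr (schematic). The commit describes a 9-vs-25 mismatch in
# the two entries below; aligning them with group 25 made the group runnable.
GROUP_ID[25]='group25'
GROUP_NUMBER[25]='25'
GROUP_TITLE[25]='Amazon FTR related security checks'
GROUP_RUN_BY_DEFAULT[25]='N'        # one of the misaligned entries
GROUP_CHECKS[25]='check12,check13'  # the other; check list is illustrative
```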
Pepe Fagoaga
0b9526da1d Merge branch '2.7' into fix-aws-sts-handle-errors 2021-11-29 12:19:48 +01:00
Toni de la Fuente
315b4cc7c9 Fix assume-role: check if `-T` and `-A` options are set together @jfagoagas
2021-11-29 10:56:27 +01:00
Toni de la Fuente
37de1a1330 Docs Readme: `-T` option is not mandatory @jfagoagas
2021-11-29 10:54:57 +01:00
Pepe Fagoaga
63eb184f39 fix(assume-role): Handle AWS STS CLI errors 2021-11-28 18:44:28 +01:00
Pepe Fagoaga
e74658557e fix(assume-role): Handle AWS STS CLI errors 2021-11-28 14:01:26 +01:00
Pepe Fagoaga
43af01e805 docs(Readme): -T option is not mandatory 2021-11-28 13:31:07 +01:00
Pepe Fagoaga
af4b5539e6 fix(assumed-role): Check if -T and -A options are set 2021-11-28 13:14:10 +01:00
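A minimal sketch of that option check, with illustrative variable names; the exact dependency between `-T` and `-A` follows the commit titles:

```bash
#!/usr/bin/env bash
# Sketch: parse -T and -A and refuse to continue if one is given without the
# other, failing fast instead of attempting a broken assume-role.

while getopts ":T:A:" opt; do
  case "$opt" in
    T) T_OPT="$OPTARG" ;;
    A) A_OPT="$OPTARG" ;;
  esac
done

if [[ -n "${T_OPT:-}" && -z "${A_OPT:-}" ]] || [[ -z "${T_OPT:-}" && -n "${A_OPT:-}" ]]; then
  echo "ERROR: options -T and -A must be set together" >&2
  exit 1
fi
```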
Toni de la Fuente
1f27f08489 New check 7163 Secrets Manager key rotation enabled @Daniel-Peladeau
2021-11-25 15:22:35 +01:00
Daniel Peladeau
a101352219 Updating check_extra7163 with requested changes 2021-11-23 22:29:12 -05:00
Toni de la Fuente
34f67f71e4 Smaller fixes in README and multiaccount serverless deployment templates @dlorch
2021-11-23 20:59:44 +01:00
Daniel Lorch
bbf6a0f5d9 Install detect-secrets (e.g. for check_extra742) 2021-11-23 20:08:07 +01:00
Daniel Lorch
10f2234e3d Fix link to quicksight dashboard 2021-11-23 17:26:26 +01:00
Daniel Lorch
cebd5cce3e Update ProwlerRole.yaml to have same permissions as util/org-multi-account/ProwlerRole.yaml 2021-11-23 17:26:14 +01:00
Toni de la Fuente
d45cab2b0a New check 7162 CloudWatch log groups have 365 days retention @Obiakara
2021-11-23 11:12:43 +01:00
Toni de la Fuente
8e793c9060 New check 7161 EFS encryption at rest enabled @rustic
2021-11-23 11:03:23 +01:00
Toni de la Fuente
5ae04717fa New check 7160 Redshift cluster AutomaticVersionUpgrade enabled @jonloza
2021-11-23 11:02:40 +01:00
Jonathan Lozano
49f1060922 New check 7160 Enabled AutomaticVersionUpgrade on Redshift Cluster 2021-11-22 16:06:10 -05:00
Daniel Peladeau
65cf527d5a New check_extra7163 Secrets Manager key rotation enabled 2021-11-21 16:36:49 -05:00
Lee Myers
e7a5d7266c Extra7161 EFS encryption at rest check 2021-11-19 17:28:12 -05:00
Chinedu Obiakara
2af565ead1 changed check title, resource type and service name as well as making the code more dynamic 2021-11-19 13:31:14 -06:00
Chinedu Obiakara
1fd2e36049 fixed code to handle all regions and formatted output 2021-11-19 12:00:15 -06:00
Chinedu Obiakara
68b26700c1 Added check_extra7162 which checks if Log groups have 365 days retention 2021-11-19 09:28:16 -06:00
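These new extra checks share a common file anatomy in Prowler 2.x; below is a hedged sketch of what such a check roughly looks like. The metadata values, query, and helper signatures are assumptions based on that pattern, not copied from the merged file:

```bash
# checks/check_extra7162 (schematic)
CHECK_ID_extra7162="7.162"
CHECK_TITLE_extra7162="[extra7162] Check if CloudWatch log groups have 365 days retention"
CHECK_SCORED_extra7162="NOT_SCORED"
CHECK_TYPE_extra7162="EXTRA"
CHECK_SEVERITY_extra7162="Medium"

extra7162(){
  for regx in $REGIONS; do
    # List each log group with its retention period
    $AWSCLI logs describe-log-groups $PROFILE_OPT --region "$regx" \
      --query 'logGroups[*].[logGroupName,retentionInDays]' --output text |
    while read -r LOG_GROUP RETENTION; do
      if [[ "$RETENTION" == "365" ]]; then
        textPass "$regx: $LOG_GROUP has 365 days retention" "$regx"
      else
        textFail "$regx: $LOG_GROUP retention is $RETENTION days" "$regx"
      fi
    done
  done
}
```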
Lee Myers
c6858f1d0e Extra7161 EFS encryption at rest check 2021-11-18 20:25:25 -05:00
6634 changed files with 19149 additions and 831907 deletions


@@ -1,14 +0,0 @@
{
"repoOwner": "prowler-cloud",
"repoName": "prowler",
"targetPRLabels": [
"backport"
],
"sourcePRLabels": [
"was-backported"
],
"copySourcePRLabels": false,
"copySourcePRReviewers": true,
"prTitle": "{{sourcePullRequest.title}}",
"commitConflicts": true
}

6 .dockerignore Normal file

@@ -0,0 +1,6 @@
.git/
.github/
# Ignore output directories
output/
junit-reports/

154 .env

@@ -1,154 +0,0 @@
#### Important Note ####
# This file is used to store environment variables for the Prowler App.
# For production, it is recommended to use a secure method to store these variables and change the default secret keys.
#### Prowler UI Configuration ####
PROWLER_UI_VERSION="stable"
AUTH_URL=http://localhost:3000
API_BASE_URL=http://prowler-api:8080/api/v1
NEXT_PUBLIC_API_BASE_URL=${API_BASE_URL}
NEXT_PUBLIC_API_DOCS_URL=http://prowler-api:8080/api/v1/docs
AUTH_TRUST_HOST=true
UI_PORT=3000
# Temp URL for feeds need to use actual
RSS_FEED_URL=https://prowler.com/blog/rss
# openssl rand -base64 32
AUTH_SECRET="N/c6mnaS5+SWq81+819OrzQZlmx1Vxtp/orjttJSmw8="
# Google Tag Manager ID
NEXT_PUBLIC_GOOGLE_TAG_MANAGER_ID=""
#### Prowler API Configuration ####
PROWLER_API_VERSION="stable"
# PostgreSQL settings
# If running Django and celery on host, use 'localhost', else use 'postgres-db'
POSTGRES_HOST=postgres-db
POSTGRES_PORT=5432
POSTGRES_ADMIN_USER=prowler_admin
POSTGRES_ADMIN_PASSWORD=postgres
POSTGRES_USER=prowler
POSTGRES_PASSWORD=postgres
POSTGRES_DB=prowler_db
# Celery-Prowler task settings
TASK_RETRY_DELAY_SECONDS=0.1
TASK_RETRY_ATTEMPTS=5
# Valkey settings
# If running Valkey and celery on host, use localhost, else use 'valkey'
VALKEY_HOST=valkey
VALKEY_PORT=6379
VALKEY_DB=0
# API scan settings
# The path to the directory where scan output should be stored
DJANGO_TMP_OUTPUT_DIRECTORY="/tmp/prowler_api_output"
# The maximum number of findings to process in a single batch
DJANGO_FINDINGS_BATCH_SIZE=1000
# The AWS access key to be used when uploading scan output to an S3 bucket
# If left empty, default AWS credentials resolution behavior will be used
DJANGO_OUTPUT_S3_AWS_ACCESS_KEY_ID=""
# The AWS secret key to be used when uploading scan output to an S3 bucket
DJANGO_OUTPUT_S3_AWS_SECRET_ACCESS_KEY=""
# An optional AWS session token
DJANGO_OUTPUT_S3_AWS_SESSION_TOKEN=""
# The AWS region where your S3 bucket is located (e.g., "us-east-1")
DJANGO_OUTPUT_S3_AWS_DEFAULT_REGION=""
# The name of the S3 bucket where scan output should be stored
DJANGO_OUTPUT_S3_AWS_OUTPUT_BUCKET=""
# Django settings
DJANGO_ALLOWED_HOSTS=localhost,127.0.0.1,prowler-api
DJANGO_BIND_ADDRESS=0.0.0.0
DJANGO_PORT=8080
DJANGO_DEBUG=False
DJANGO_SETTINGS_MODULE=config.django.production
# Select one of [ndjson|human_readable]
DJANGO_LOGGING_FORMATTER=human_readable
# Select one of [DEBUG|INFO|WARNING|ERROR|CRITICAL]
# Applies to both Django and Celery Workers
DJANGO_LOGGING_LEVEL=INFO
# Defaults to the maximum available based on CPU cores if not set.
DJANGO_WORKERS=4
# Token lifetime is in minutes
DJANGO_ACCESS_TOKEN_LIFETIME=30
# Token lifetime is in minutes
DJANGO_REFRESH_TOKEN_LIFETIME=1440
DJANGO_CACHE_MAX_AGE=3600
DJANGO_STALE_WHILE_REVALIDATE=60
DJANGO_MANAGE_DB_PARTITIONS=True
# openssl genrsa -out private.pem 2048
DJANGO_TOKEN_SIGNING_KEY="-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDs4e+kt7SnUJek
6V5r9zMGzXCoU5qnChfPiqu+BgANyawz+MyVZPs6RCRfeo6tlCknPQtOziyXYM2I
7X+qckmuzsjqp8+u+o1mw3VvUuJew5k2SQLPYwsiTzuFNVJEOgRo3hywGiGwS2iv
/5nh2QAl7fq2qLqZEXQa5+/xJlQggS1CYxOJgggvLyra50QZlBvPve/AxKJ/EV/Q
irWTZU5lLNI8sH2iZR05vQeBsxZ0dCnGMT+vGl+cGkqrvzQzKsYbDmabMcfTYhYi
78fpv6A4uharJFHayypYBjE39PwhMyyeycrNXlpm1jpq+03HgmDuDMHydk1tNwuT
nEC7m7iNAgMBAAECggEAA2m48nJcJbn9SVi8bclMwKkWmbJErOnyEGEy2sTK3Of+
NWx9BB0FmqAPNxn0ss8K7cANKOhDD7ZLF9E2MO4/HgfoMKtUzHRbM7MWvtEepldi
nnvcUMEgULD8Dk4HnqiIVjt3BdmGiTv46OpBnRWrkSBV56pUL+7msZmMZTjUZvh2
ZWv0+I3gtDIjo2Zo/FiwDV7CfwRjJarRpYUj/0YyuSA4FuOUYl41WAX1I301FKMH
xo3jiAYi1s7IneJ16OtPpOA34Wg5F6ebm/UO0uNe+iD4kCXKaZmxYQPh5tfB0Qa3
qj1T7GNpFNyvtG7VVdauhkb8iu8X/wl6PCwbg0RCKQKBgQD9HfpnpH0lDlHMRw9K
X7Vby/1fSYy1BQtlXFEIPTN/btJ/asGxLmAVwJ2HAPXWlrfSjVAH7CtVmzN7v8oj
HeIHfeSgoWEu1syvnv2AMaYSo03UjFFlfc/GUxF7DUScRIhcJUPCP8jkAROz9nFv
DByNjUL17Q9r43DmDiRsy0IFqQKBgQDvlJ9Uhl+Sp7gRgKYwa/IG0+I4AduAM+Gz
Dxbm52QrMGMTjaJFLmLHBUZ/ot+pge7tZZGws8YR8ufpyMJbMqPjxhIvRRa/p1Tf
E3TQPW93FMsHUvxAgY3MV5MzXFPhlNAKb+akP/RcXUhetGAuZKLubtDCWa55ZQuL
wj2OS+niRQKBgE7K8zUqNi6/22S8xhy/2GPgB1qPObbsABUofK0U6CAGLo6te+gc
6Jo84IyzFtQbDNQFW2Fr+j1m18rw9AqkdcUhQndiZS9AfG07D+zFB86LeWHt4DS4
ymIRX8Kvaak/iDcu/n3Mf0vCrhB6aetImObTj4GgrwlFvtJOmrYnO8EpAoGAIXXP
Xt25gWD9OyyNiVu6HKwA/zN7NYeJcRmdaDhO7B1A6R0x2Zml4AfjlbXoqOLlvLAf
zd79vcoAC82nH1eOPiSOq51plPDI0LMF8IN0CtyTkn1Lj7LIXA6rF1RAvtOqzppc
SvpHpZK9pcRpXnFdtBE0BMDDtl6fYzCIqlP94UUCgYEAnhXbAQMF7LQifEm34Dx8
BizRMOKcqJGPvbO2+Iyt50O5X6onU2ITzSV1QHtOvAazu+B1aG9pEuBFDQ+ASxEu
L9ruJElkOkb/o45TSF6KCsHd55ReTZ8AqnRjf5R+lyzPqTZCXXb8KTcRvWT4zQa3
VxyT2PnaSqEcexWUy4+UXoQ=
-----END PRIVATE KEY-----"
# openssl rsa -in private.pem -pubout -out public.pem
DJANGO_TOKEN_VERIFYING_KEY="-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA7OHvpLe0p1CXpOlea/cz
Bs1wqFOapwoXz4qrvgYADcmsM/jMlWT7OkQkX3qOrZQpJz0LTs4sl2DNiO1/qnJJ
rs7I6qfPrvqNZsN1b1LiXsOZNkkCz2MLIk87hTVSRDoEaN4csBohsEtor/+Z4dkA
Je36tqi6mRF0Gufv8SZUIIEtQmMTiYIILy8q2udEGZQbz73vwMSifxFf0Iq1k2VO
ZSzSPLB9omUdOb0HgbMWdHQpxjE/rxpfnBpKq780MyrGGw5mmzHH02IWIu/H6b+g
OLoWqyRR2ssqWAYxN/T8ITMsnsnKzV5aZtY6avtNx4Jg7gzB8nZNbTcLk5xAu5u4
jQIDAQAB
-----END PUBLIC KEY-----"
# openssl rand -base64 32
DJANGO_SECRETS_ENCRYPTION_KEY="oE/ltOhp/n1TdbHjVmzcjDPLcLA41CVI/4Rk+UB5ESc="
DJANGO_BROKER_VISIBILITY_TIMEOUT=86400
DJANGO_SENTRY_DSN=
# Sentry settings
SENTRY_ENVIRONMENT=local
SENTRY_RELEASE=local
#### Prowler release version ####
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.10.0
# Social login credentials
SOCIAL_GOOGLE_OAUTH_CALLBACK_URL="${AUTH_URL}/api/auth/callback/google"
SOCIAL_GOOGLE_OAUTH_CLIENT_ID=""
SOCIAL_GOOGLE_OAUTH_CLIENT_SECRET=""
SOCIAL_GITHUB_OAUTH_CALLBACK_URL="${AUTH_URL}/api/auth/callback/github"
SOCIAL_GITHUB_OAUTH_CLIENT_ID=""
SOCIAL_GITHUB_OAUTH_CLIENT_SECRET=""
# Single Sign-On (SSO)
SAML_SSO_CALLBACK_URL="${AUTH_URL}/api/auth/callback/saml"
# Lighthouse tracing
LANGSMITH_TRACING=false
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY=""
LANGCHAIN_PROJECT=""

6 .github/CODEOWNERS vendored

@@ -1,6 +0,0 @@
/* @prowler-cloud/sdk
/.github/ @prowler-cloud/sdk
prowler @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
tests @prowler-cloud/sdk @prowler-cloud/detection-and-remediation
api @prowler-cloud/api
ui @prowler-cloud/ui

50 .github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,50 @@
---
name: Bug report
about: Create a report to help us improve
title: "[Bug]: "
labels: ["bug", "triage"]
assignees: ''
---
<!--
Please use this template to create your bug report. By providing as much info as possible you help us understand the issue, reproduce it and resolve it for you quicker. Therefore, take a couple of extra minutes to make sure you have provided all info needed.
PROTIP: record your screen and attach it as a gif to showcase the issue.
- How to record and attach gif: https://bit.ly/2Mi8T6K
-->
**What happened?**
A clear and concise description of what the bug is or what is not working as expected
**How to reproduce it**
Steps to reproduce the behavior:
1. What command are you running?
2. Environment you have, like single account, multi-account, organizations, etc.
3. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots or Logs**
If applicable, add screenshots to help explain your problem.
Also, you can add logs (anonymize them first!). Here a command that may help to share a log
`bash -x ./prowler -options > debug.log 2>&1` then attach here `debug.log`
**From where are you running Prowler?**
Please, complete the following information:
- Resource: [e.g. EC2 instance, Fargate task, Docker container manually, EKS, Cloud9, CodeBuild, workstation, etc.)
- OS: [e.g. Amazon Linux 2, Mac, Alpine, Windows, etc. ]
- AWS-CLI Version [`aws --version`]:
- Prowler Version [`./prowler -V`]:
- Shell and version:
- Others:
**Additional context**
Add any other context about the problem here.


@@ -1,96 +0,0 @@
name: 🐞 Bug Report
description: Create a report to help us improve
labels: ["bug", "status/needs-triage"]
body:
- type: textarea
id: reproduce
attributes:
label: Steps to Reproduce
description: Steps to reproduce the behavior
placeholder: |-
1. What command are you running?
2. Cloud provider you are launching
3. Environment you have, like single account, multi-account, organizations, multi or single subscription, etc.
4. See error
validations:
required: true
- type: textarea
id: expected
attributes:
label: Expected behavior
description: A clear and concise description of what you expected to happen.
validations:
required: true
- type: textarea
id: actual
attributes:
label: Actual Result with Screenshots or Logs
description: If applicable, add screenshots to help explain your problem. Also, you can add logs (anonymize them first!). Here a command that may help to share a log `prowler <your arguments> --log-level ERROR --log-file $(date +%F)_error.log` then attach here the log file.
validations:
required: true
- type: dropdown
id: type
attributes:
label: How did you install Prowler?
options:
- Cloning the repository from github.com (git clone)
- From pip package (pip install prowler)
- From brew (brew install prowler)
- Docker (docker pull toniblyx/prowler)
validations:
required: true
- type: textarea
id: environment
attributes:
label: Environment Resource
description: From where are you running Prowler?
placeholder: |-
1. EC2 instance
2. Fargate task
3. Docker container locally
4. EKS
5. Cloud9
6. CodeBuild
7. Workstation
8. Other(please specify)
validations:
required: true
- type: textarea
id: os
attributes:
label: OS used
description: Which OS are you using?
placeholder: |-
1. Amazon Linux 2
2. MacOS
3. Alpine Linux
4. Windows
5. Other(please specify)
validations:
required: true
- type: input
id: prowler-version
attributes:
label: Prowler version
description: Which Prowler version are you using?
placeholder: |-
prowler --version
validations:
required: true
- type: input
id: pip-version
attributes:
label: Pip version
description: Which pip version are you using?
placeholder: |-
pip --version
validations:
required: true
- type: textarea
id: additional
attributes:
description: Additional context
label: Context
validations:
required: false


@@ -1 +0,0 @@
blank_issues_enabled: false


@@ -1,35 +0,0 @@
name: 💡 Feature Request
description: Suggest an idea for this project
labels: ["feature-request", "status/needs-triage"]
body:
- type: textarea
id: Problem
attributes:
label: New feature motivation
description: Is your feature request related to a problem? Please describe
placeholder: |-
1. A clear and concise description of what the problem is. Ex. I'm always frustrated when
validations:
required: true
- type: textarea
id: Solution
attributes:
label: Solution Proposed
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: Alternatives
attributes:
label: Describe alternatives you've considered
description: A clear and concise description of any alternative solutions or features you've considered.
validations:
required: true
- type: textarea
id: Context
attributes:
label: Additional context
description: Add any other context or screenshots about the feature request here.
validations:
required: false


@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.


@@ -1,3 +0,0 @@
name: "API - CodeQL Config"
paths:
- "api/"


@@ -1,4 +0,0 @@
name: "SDK - CodeQL Config"
paths-ignore:
- "api/"
- "ui/"


@@ -1,3 +0,0 @@
name: "UI - CodeQL Config"
paths:
- "ui/"

120 .github/dependabot.yml vendored

@@ -1,120 +0,0 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
# v5
- package-ecosystem: "pip"
directory: "/"
schedule:
interval: "monthly"
open-pull-requests-limit: 25
target-branch: master
labels:
- "dependencies"
- "pip"
# Dependabot Updates are temporary disabled - 2025/03/19
# - package-ecosystem: "pip"
# directory: "/api"
# schedule:
# interval: "daily"
# open-pull-requests-limit: 10
# target-branch: master
# labels:
# - "dependencies"
# - "pip"
# - "component/api"
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "monthly"
open-pull-requests-limit: 25
target-branch: master
labels:
- "dependencies"
- "github_actions"
# Dependabot Updates are temporary disabled - 2025/03/19
# - package-ecosystem: "npm"
# directory: "/ui"
# schedule:
# interval: "daily"
# open-pull-requests-limit: 10
# target-branch: master
# labels:
# - "dependencies"
# - "npm"
# - "component/ui"
- package-ecosystem: "docker"
directory: "/"
schedule:
interval: "monthly"
open-pull-requests-limit: 25
target-branch: master
labels:
- "dependencies"
- "docker"
# Dependabot Updates are temporary disabled - 2025/04/15
# v4.6
# - package-ecosystem: "pip"
# directory: "/"
# schedule:
# interval: "weekly"
# open-pull-requests-limit: 10
# target-branch: v4.6
# labels:
# - "dependencies"
# - "pip"
# - "v4"
# - package-ecosystem: "github-actions"
# directory: "/"
# schedule:
# interval: "weekly"
# open-pull-requests-limit: 10
# target-branch: v4.6
# labels:
# - "dependencies"
# - "github_actions"
# - "v4"
# - package-ecosystem: "docker"
# directory: "/"
# schedule:
# interval: "weekly"
# open-pull-requests-limit: 10
# target-branch: v4.6
# labels:
# - "dependencies"
# - "docker"
# - "v4"
# Dependabot Updates are temporary disabled - 2025/03/19
# v3
# - package-ecosystem: "pip"
# directory: "/"
# schedule:
# interval: "monthly"
# open-pull-requests-limit: 10
# target-branch: v3
# labels:
# - "dependencies"
# - "pip"
# - "v3"
# - package-ecosystem: "github-actions"
# directory: "/"
# schedule:
# interval: "monthly"
# open-pull-requests-limit: 10
# target-branch: v3
# labels:
# - "dependencies"
# - "github_actions"
# - "v3"

114 .github/labeler.yml vendored

@@ -1,114 +0,0 @@
documentation:
- changed-files:
- any-glob-to-any-file: "docs/**"
provider/aws:
- changed-files:
- any-glob-to-any-file: "prowler/providers/aws/**"
- any-glob-to-any-file: "tests/providers/aws/**"
provider/azure:
- changed-files:
- any-glob-to-any-file: "prowler/providers/azure/**"
- any-glob-to-any-file: "tests/providers/azure/**"
provider/gcp:
- changed-files:
- any-glob-to-any-file: "prowler/providers/gcp/**"
- any-glob-to-any-file: "tests/providers/gcp/**"
provider/kubernetes:
- changed-files:
- any-glob-to-any-file: "prowler/providers/kubernetes/**"
- any-glob-to-any-file: "tests/providers/kubernetes/**"
provider/m365:
- changed-files:
- any-glob-to-any-file: "prowler/providers/m365/**"
- any-glob-to-any-file: "tests/providers/m365/**"
provider/github:
- changed-files:
- any-glob-to-any-file: "prowler/providers/github/**"
- any-glob-to-any-file: "tests/providers/github/**"
provider/iac:
- changed-files:
- any-glob-to-any-file: "prowler/providers/iac/**"
- any-glob-to-any-file: "tests/providers/iac/**"
github_actions:
- changed-files:
- any-glob-to-any-file: ".github/workflows/*"
cli:
- changed-files:
- any-glob-to-any-file: "cli/**"
mutelist:
- changed-files:
- any-glob-to-any-file: "prowler/lib/mutelist/**"
- any-glob-to-any-file: "prowler/providers/aws/lib/mutelist/**"
- any-glob-to-any-file: "prowler/providers/azure/lib/mutelist/**"
- any-glob-to-any-file: "prowler/providers/gcp/lib/mutelist/**"
- any-glob-to-any-file: "prowler/providers/kubernetes/lib/mutelist/**"
- any-glob-to-any-file: "tests/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/aws/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/azure/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/gcp/lib/mutelist/**"
- any-glob-to-any-file: "tests/providers/kubernetes/lib/mutelist/**"
integration/s3:
- changed-files:
- any-glob-to-any-file: "prowler/providers/aws/lib/s3/**"
- any-glob-to-any-file: "tests/providers/aws/lib/s3/**"
integration/slack:
- changed-files:
- any-glob-to-any-file: "prowler/lib/outputs/slack/**"
- any-glob-to-any-file: "tests/lib/outputs/slack/**"
integration/security-hub:
- changed-files:
- any-glob-to-any-file: "prowler/providers/aws/lib/security_hub/**"
- any-glob-to-any-file: "tests/providers/aws/lib/security_hub/**"
- any-glob-to-any-file: "prowler/lib/outputs/asff/**"
- any-glob-to-any-file: "tests/lib/outputs/asff/**"
output/html:
- changed-files:
- any-glob-to-any-file: "prowler/lib/outputs/html/**"
- any-glob-to-any-file: "tests/lib/outputs/html/**"
output/asff:
- changed-files:
- any-glob-to-any-file: "prowler/lib/outputs/asff/**"
- any-glob-to-any-file: "tests/lib/outputs/asff/**"
output/ocsf:
- changed-files:
- any-glob-to-any-file: "prowler/lib/outputs/ocsf/**"
- any-glob-to-any-file: "tests/lib/outputs/ocsf/**"
output/csv:
- changed-files:
- any-glob-to-any-file: "prowler/lib/outputs/csv/**"
- any-glob-to-any-file: "tests/lib/outputs/csv/**"
component/api:
- changed-files:
- any-glob-to-any-file: "api/**"
component/ui:
- changed-files:
- any-glob-to-any-file: "ui/**"
compliance:
- changed-files:
- any-glob-to-any-file: "prowler/compliance/**"
- any-glob-to-any-file: "prowler/lib/outputs/compliance/**"
- any-glob-to-any-file: "tests/lib/outputs/compliance/**"
review-django-migrations:
- changed-files:
- any-glob-to-any-file: "api/src/backend/api/migrations/**"


@@ -1,32 +1 @@
### Context
Please include relevant motivation and context for this PR.
If fixes an issue please add it with `Fix #XXXX`
### Description
Please include a summary of the change and which issue is fixed. List any dependencies that are required for this change.
### Steps to review
Please add a detailed description of how to review this PR.
### Checklist
- Are there new checks included in this PR? Yes / No
- If so, do we need to update permissions for the provider? Please review this carefully.
- [ ] Review if the code is being covered by tests.
- [ ] Review if code is being documented following this specification https://github.com/google/styleguide/blob/gh-pages/pyguide.md#38-comments-and-docstrings
- [ ] Review if backport is needed.
- [ ] Review if is needed to change the [Readme.md](https://github.com/prowler-cloud/prowler/blob/master/README.md)
- [ ] Ensure new entries are added to [CHANGELOG.md](https://github.com/prowler-cloud/prowler/blob/master/prowler/CHANGELOG.md), if applicable.
#### API
- [ ] Verify if API specs need to be regenerated.
- [ ] Check if version updates are required (e.g., specs, Poetry, etc.).
- [ ] Ensure new entries are added to [CHANGELOG.md](https://github.com/prowler-cloud/prowler/blob/master/api/CHANGELOG.md), if applicable.
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.


@@ -1,115 +0,0 @@
name: API - Build and Push containers
on:
push:
branches:
- "master"
paths:
- "api/**"
- "prowler/**"
- ".github/workflows/api-build-lint-push-containers.yml"
# Uncomment the code below to test this action on PRs
# pull_request:
# branches:
# - "master"
# paths:
# - "api/**"
# - ".github/workflows/api-build-lint-push-containers.yml"
release:
types: [published]
env:
# Tags
LATEST_TAG: latest
RELEASE_TAG: ${{ github.event.release.tag_name }}
STABLE_TAG: stable
WORKING_DIRECTORY: ./api
# Container Registries
PROWLERCLOUD_DOCKERHUB_REPOSITORY: prowlercloud
PROWLERCLOUD_DOCKERHUB_IMAGE: prowler-api
jobs:
repository-check:
name: Repository check
runs-on: ubuntu-latest
outputs:
is_repo: ${{ steps.repository_check.outputs.is_repo }}
steps:
- name: Repository check
id: repository_check
working-directory: /tmp
run: |
if [[ ${{ github.repository }} == "prowler-cloud/prowler" ]]
then
echo "is_repo=true" >> "${GITHUB_OUTPUT}"
else
echo "This action only runs for prowler-cloud/prowler"
echo "is_repo=false" >> "${GITHUB_OUTPUT}"
fi
# Build Prowler OSS container
container-build-push:
needs: repository-check
if: needs.repository-check.outputs.is_repo == 'true'
runs-on: ubuntu-latest
defaults:
run:
working-directory: ${{ env.WORKING_DIRECTORY }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Set short git commit SHA
id: vars
run: |
shortSha=$(git rev-parse --short ${{ github.sha }})
echo "SHORT_SHA=${shortSha}" >> $GITHUB_ENV
- name: Login to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build and push container image (latest)
# Comment the following line for testing
if: github.event_name == 'push'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
# Set push: false for testing
push: true
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.LATEST_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.SHORT_SHA }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Build and push container image (release)
if: github.event_name == 'release'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
push: true
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.RELEASE_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.STABLE_TAG }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Trigger deployment
if: github.event_name == 'push'
uses: peter-evans/repository-dispatch@ff45666b9427631e3450c54a1bcbee4d9ff4d7c0 # v3.0.0
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
event-type: prowler-api-deploy
client-payload: '{"sha": "${{ github.sha }}", "short_sha": "${{ env.SHORT_SHA }}"}'


@@ -1,59 +0,0 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: API - CodeQL
on:
push:
branches:
- "master"
- "v5.*"
paths:
- "api/**"
pull_request:
branches:
- "master"
- "v5.*"
paths:
- "api/**"
schedule:
- cron: '00 12 * * *'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'python' ]
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/api-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
category: "/language:${{matrix.language}}"


@@ -1,239 +0,0 @@
name: API - Pull Request
on:
push:
branches:
- "master"
- "v5.*"
paths:
- ".github/workflows/api-pull-request.yml"
- "api/**"
pull_request:
branches:
- "master"
- "v5.*"
paths:
- ".github/workflows/api-pull-request.yml"
- "api/**"
env:
POSTGRES_HOST: localhost
POSTGRES_PORT: 5432
POSTGRES_ADMIN_USER: prowler
POSTGRES_ADMIN_PASSWORD: S3cret
POSTGRES_USER: prowler_user
POSTGRES_PASSWORD: prowler
POSTGRES_DB: postgres-db
VALKEY_HOST: localhost
VALKEY_PORT: 6379
VALKEY_DB: 0
API_WORKING_DIR: ./api
IMAGE_NAME: prowler-api
IGNORE_FILES: |
api/docs/**
api/README.md
api/CHANGELOG.md
jobs:
test:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.12"]
# Service containers to run with `test`
services:
# Label used to access the service container
postgres:
image: postgres
env:
POSTGRES_HOST: ${{ env.POSTGRES_HOST }}
POSTGRES_PORT: ${{ env.POSTGRES_PORT }}
POSTGRES_USER: ${{ env.POSTGRES_USER }}
POSTGRES_PASSWORD: ${{ env.POSTGRES_PASSWORD }}
POSTGRES_DB: ${{ env.POSTGRES_DB }}
# Set health checks to wait until postgres has started
ports:
- 5432:5432
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
valkey:
image: valkey/valkey:7-alpine3.19
env:
VALKEY_HOST: ${{ env.VALKEY_HOST }}
VALKEY_PORT: ${{ env.VALKEY_PORT }}
VALKEY_DB: ${{ env.VALKEY_DB }}
# Set health checks to wait until postgres has started
ports:
- 6379:6379
options: >-
--health-cmd "valkey-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
api/**
.github/workflows/api-pull-request.yml
files_ignore: ${{ env.IGNORE_FILES }}
- name: Replace @master with current branch in pyproject.toml
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
BRANCH_NAME="${GITHUB_HEAD_REF:-${GITHUB_REF_NAME}}"
echo "Using branch: $BRANCH_NAME"
sed -i "s|@master|@$BRANCH_NAME|g" pyproject.toml
- name: Install poetry
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
python -m pip install --upgrade pip
pipx install poetry==2.1.1
- name: Update poetry.lock after the branch name change
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry lock
- name: Update SDK's poetry.lock resolved_reference to latest commit - Only for push events to `master`
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true' && github.event_name == 'push'
run: |
# Get the latest commit hash from the prowler-cloud/prowler repository
LATEST_COMMIT=$(curl -s "https://api.github.com/repos/prowler-cloud/prowler/commits/master" | jq -r '.sha')
echo "Latest commit hash: $LATEST_COMMIT"
# Update the resolved_reference specifically for prowler-cloud/prowler repository
sed -i '/url = "https:\/\/github\.com\/prowler-cloud\/prowler\.git"/,/resolved_reference = / {
s/resolved_reference = "[a-f0-9]\{40\}"/resolved_reference = "'"$LATEST_COMMIT"'"/
}' poetry.lock
# Verify the change was made
echo "Updated resolved_reference:"
grep -A2 -B2 "resolved_reference" poetry.lock
- name: Set up Python ${{ matrix.python-version }}
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: ${{ matrix.python-version }}
cache: "poetry"
- name: Install system dependencies for xmlsec
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
sudo apt-get update
sudo apt-get install -y libxml2-dev libxmlsec1-dev libxmlsec1-openssl pkg-config
- name: Install dependencies
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry install --no-root
poetry run pip list
VERSION=$(curl --silent "https://api.github.com/repos/hadolint/hadolint/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"v([^"]+)".*/\1/' \
) && curl -L -o /tmp/hadolint "https://github.com/hadolint/hadolint/releases/download/v${VERSION}/hadolint-Linux-x86_64" \
&& chmod +x /tmp/hadolint
- name: Poetry check
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry check --lock
- name: Lint with ruff
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run ruff check . --exclude contrib
- name: Check Format with ruff
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run ruff format --check . --exclude contrib
- name: Lint with pylint
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run pylint --disable=W,C,R,E -j 0 -rn -sn src/
- name: Bandit
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run bandit -q -lll -x '*_test.py,./contrib/' -r .
- name: Safety
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
# 76352, 76353, 77323 come from SDK, but they cannot upgrade it yet. It does not affect API
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
run: |
poetry run safety check --ignore 70612,66963,74429,76352,76353,77323,77744,77745
- name: Vulture
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run vulture --exclude "contrib,tests,conftest.py" --min-confidence 100 .
- name: Hadolint
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
/tmp/hadolint Dockerfile --ignore=DL3013
- name: Test with pytest
working-directory: ./api
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run pytest --cov=./src/backend --cov-report=xml src/backend
- name: Upload coverage reports to Codecov
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
with:
flags: api
test-container-build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: api/**
files_ignore: ${{ env.IGNORE_FILES }}
- name: Set up Docker Buildx
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build Container
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.API_WORKING_DIR }}
push: false
tags: ${{ env.IMAGE_NAME }}:latest
outputs: type=docker
cache-from: type=gha
cache-to: type=gha,mode=max


@@ -1,47 +0,0 @@
name: Prowler - Automatic Backport
on:
pull_request_target:
branches: ['master']
types: ['labeled', 'closed']
env:
# The prefix of the label that triggers the backport must not contain the branch name
# so, for example, if the branch is 'master', the label should be 'backport-to-<branch>'
BACKPORT_LABEL_PREFIX: backport-to-
BACKPORT_LABEL_IGNORE: was-backported
jobs:
backport:
name: Backport PR
if: github.event.pull_request.merged == true && !(contains(github.event.pull_request.labels.*.name, 'backport')) && !(contains(github.event.pull_request.labels.*.name, 'was-backported'))
runs-on: ubuntu-latest
permissions:
id-token: write
pull-requests: write
contents: write
steps:
- name: Check labels
id: preview_label_check
uses: agilepathway/label-checker@c3d16ad512e7cea5961df85ff2486bb774caf3c5 # v1.6.65
with:
allow_failure: true
prefix_mode: true
any_of: ${{ env.BACKPORT_LABEL_PREFIX }}
none_of: ${{ env.BACKPORT_LABEL_IGNORE }}
repo_token: ${{ secrets.GITHUB_TOKEN }}
- name: Backport Action
if: steps.preview_label_check.outputs.label_check == 'success'
uses: sorenlouv/backport-github-action@ad888e978060bc1b2798690dd9d03c4036560947 # v9.5.1
with:
github_token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
auto_backport_label_prefix: ${{ env.BACKPORT_LABEL_PREFIX }}
- name: Info log
if: ${{ success() && steps.preview_label_check.outputs.label_check == 'success' }}
run: cat ~/.backport/backport.info.log
- name: Debug log
if: ${{ failure() && steps.preview_label_check.outputs.label_check == 'success' }}
run: cat ~/.backport/backport.debug.log


@@ -1,36 +0,0 @@
name: Prowler - Pull Request Documentation Link
on:
pull_request:
branches:
- 'master'
- 'v3'
paths:
- 'docs/**'
- '.github/workflows/build-documentation-on-pr.yml'
env:
PR_NUMBER: ${{ github.event.pull_request.number }}
jobs:
documentation-link:
name: Documentation Link
runs-on: ubuntu-latest
steps:
- name: Find existing documentation comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
id: find-comment
with:
issue-number: ${{ env.PR_NUMBER }}
comment-author: 'github-actions[bot]'
body-includes: '<!-- prowler-docs-link -->'
- name: Create or update PR comment with the Prowler Documentation URI
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ env.PR_NUMBER }}
body: |
<!-- prowler-docs-link -->
You can check the documentation for this PR here -> [Prowler Documentation](https://prowler-prowler-docs--${{ env.PR_NUMBER }}.com.readthedocs.build/projects/prowler-open-source/en/${{ env.PR_NUMBER }}/)
edit-mode: replace


@@ -1,23 +0,0 @@
name: Prowler - Conventional Commit
on:
pull_request:
types:
- "opened"
- "edited"
- "synchronize"
branches:
- "master"
- "v3"
- "v4.*"
- "v5.*"
jobs:
conventional-commit-check:
runs-on: ubuntu-latest
steps:
- name: conventional-commit-check
id: conventional-commit-check
uses: agenthunt/conventional-commit-checker-action@9e552d650d0e205553ec7792d447929fc78e012b # v2.0.0
with:
pr-title-regex: '^([^\s(]+)(?:\(([^)]+)\))?: (.+)'


@@ -1,67 +0,0 @@
name: Prowler - Create Backport Label
on:
release:
types: [published]
jobs:
create_label:
runs-on: ubuntu-latest
permissions:
contents: write
issues: write
steps:
- name: Create backport label
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
RELEASE_TAG: ${{ github.event.release.tag_name }}
OWNER_REPO: ${{ github.repository }}
run: |
VERSION_ONLY=${RELEASE_TAG#v} # Remove 'v' prefix if present (e.g., v3.2.0 -> 3.2.0)
# Check if it's a minor version (X.Y.0)
if [[ "$VERSION_ONLY" =~ ^[0-9]+\.[0-9]+\.0$ ]]; then
echo "Release ${RELEASE_TAG} (version ${VERSION_ONLY}) is a minor version. Proceeding to create backport label."
TWO_DIGIT_VERSION=${VERSION_ONLY%.0} # Extract X.Y from X.Y.0 (e.g., 5.6 from 5.6.0)
FINAL_LABEL_NAME="backport-to-v${TWO_DIGIT_VERSION}"
FINAL_DESCRIPTION="Backport PR to the v${TWO_DIGIT_VERSION} branch"
echo "Effective label name will be: ${FINAL_LABEL_NAME}"
echo "Effective description will be: ${FINAL_DESCRIPTION}"
# Check if the label already exists
STATUS_CODE=$(curl -s -o /dev/null -w "%{http_code}" -H "Authorization: token ${GITHUB_TOKEN}" "https://api.github.com/repos/${OWNER_REPO}/labels/${FINAL_LABEL_NAME}")
if [ "${STATUS_CODE}" -eq 200 ]; then
echo "Label '${FINAL_LABEL_NAME}' already exists."
elif [ "${STATUS_CODE}" -eq 404 ]; then
echo "Label '${FINAL_LABEL_NAME}' does not exist. Creating it..."
# Prepare JSON data payload
JSON_DATA=$(printf '{"name":"%s","description":"%s","color":"B60205"}' "${FINAL_LABEL_NAME}" "${FINAL_DESCRIPTION}")
CREATE_STATUS_CODE=$(curl -s -o /tmp/curl_create_response.json -w "%{http_code}" -X POST \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token ${GITHUB_TOKEN}" \
--data "${JSON_DATA}" \
"https://api.github.com/repos/${OWNER_REPO}/labels")
CREATE_RESPONSE_BODY=$(cat /tmp/curl_create_response.json)
rm -f /tmp/curl_create_response.json
if [ "$CREATE_STATUS_CODE" -eq 201 ]; then
echo "Label '${FINAL_LABEL_NAME}' created successfully."
else
echo "Error creating label '${FINAL_LABEL_NAME}'. Status: $CREATE_STATUS_CODE"
echo "Response: $CREATE_RESPONSE_BODY"
exit 1
fi
else
echo "Error checking for label '${FINAL_LABEL_NAME}'. HTTP Status: ${STATUS_CODE}"
exit 1
fi
else
echo "Release ${RELEASE_TAG} (version ${VERSION_ONLY}) is not a minor version. Skipping backport label creation."
exit 0
fi


@@ -1,19 +0,0 @@
name: Prowler - Find secrets
on: pull_request
jobs:
trufflehog:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: TruffleHog OSS
uses: trufflesecurity/trufflehog@a05cf0859455b5b16317ee22d809887a4043cdf0 # v3.90.2
with:
path: ./
base: ${{ github.event.repository.default_branch }}
head: HEAD
extra_args: --only-verified


@@ -1,17 +0,0 @@
name: Prowler - PR Labeler
on:
pull_request_target:
branches:
- "master"
- "v3"
- "v4.*"
jobs:
labeler:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@8558fd74291d67161a8a78ce36a881fa63b766a9 # v5.0.0


@@ -1,167 +0,0 @@
name: Prowler - PR Conflict Checker
on:
pull_request:
types:
- opened
- synchronize
- reopened
branches:
- "master"
- "v5.*"
pull_request_target:
types:
- opened
- synchronize
- reopened
branches:
- "master"
- "v5.*"
jobs:
conflict-checker:
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
**
- name: Check for conflict markers
id: conflict-check
run: |
echo "Checking for conflict markers in changed files..."
CONFLICT_FILES=""
HAS_CONFLICTS=false
# Check each changed file for conflict markers
for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
if [ -f "$file" ]; then
echo "Checking file: $file"
# Look for conflict markers
if grep -l "^<<<<<<<\|^=======\|^>>>>>>>" "$file" 2>/dev/null; then
echo "Conflict markers found in: $file"
CONFLICT_FILES="$CONFLICT_FILES$file "
HAS_CONFLICTS=true
fi
fi
done
if [ "$HAS_CONFLICTS" = true ]; then
echo "has_conflicts=true" >> $GITHUB_OUTPUT
echo "conflict_files=$CONFLICT_FILES" >> $GITHUB_OUTPUT
echo "Conflict markers detected in files: $CONFLICT_FILES"
else
echo "has_conflicts=false" >> $GITHUB_OUTPUT
echo "No conflict markers found in changed files"
fi
- name: Add conflict label
if: steps.conflict-check.outputs.has_conflicts == 'true'
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
with:
github-token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
script: |
const { data: labels } = await github.rest.issues.listLabelsOnIssue({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
});
const hasConflictLabel = labels.some(label => label.name === 'has-conflicts');
if (!hasConflictLabel) {
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
labels: ['has-conflicts']
});
console.log('Added has-conflicts label');
} else {
console.log('has-conflicts label already exists');
}
- name: Remove conflict label
if: steps.conflict-check.outputs.has_conflicts == 'false'
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
with:
github-token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
script: |
try {
await github.rest.issues.removeLabel({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
name: 'has-conflicts'
});
console.log('Removed has-conflicts label');
} catch (error) {
if (error.status === 404) {
console.log('has-conflicts label was not present');
} else {
throw error;
}
}
- name: Find existing conflict comment
if: steps.conflict-check.outputs.has_conflicts == 'true'
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
id: find-comment
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-regex: '(⚠️ \*\*Conflict Markers Detected\*\*|✅ \*\*Conflict Markers Resolved\*\*)'
- name: Create or update conflict comment
if: steps.conflict-check.outputs.has_conflicts == 'true'
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ github.event.pull_request.number }}
edit-mode: replace
body: |
⚠️ **Conflict Markers Detected**
This pull request contains unresolved conflict markers in the following files:
```
${{ steps.conflict-check.outputs.conflict_files }}
```
Please resolve these conflicts by:
1. Locating the conflict markers: `<<<<<<<`, `=======`, and `>>>>>>>`
2. Manually editing the files to resolve the conflicts
3. Removing all conflict markers
4. Committing and pushing the changes
- name: Find existing conflict comment when resolved
if: steps.conflict-check.outputs.has_conflicts == 'false'
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
id: find-resolved-comment
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-regex: '(⚠️ \*\*Conflict Markers Detected\*\*|✅ \*\*Conflict Markers Resolved\*\*)'
- name: Update comment when conflicts resolved
if: steps.conflict-check.outputs.has_conflicts == 'false' && steps.find-resolved-comment.outputs.comment-id != ''
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.find-resolved-comment.outputs.comment-id }}
issue-number: ${{ github.event.pull_request.number }}
edit-mode: replace
body: |
✅ **Conflict Markers Resolved**
All conflict markers have been successfully resolved in this pull request.
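For reference, the scan this workflow performs reduces to a few lines of shell. This is a minimal local sketch, assuming origin/master as the base branch and an illustrative marker pattern; the workflow's own file-collection step sits above this excerpt.
```
#!/usr/bin/env bash
# Sketch: flag changed files that still contain merge-conflict markers.
CONFLICT_FILES=""
for file in $(git diff --name-only origin/master...HEAD); do
  if [ -f "$file" ] && grep -qE '^(<{7} |={7}$|>{7} )' "$file"; then
    CONFLICT_FILES="${CONFLICT_FILES}${file} "
  fi
done
if [ -n "$CONFLICT_FILES" ]; then
  echo "Conflict markers detected in files: $CONFLICT_FILES"
else
  echo "No conflict markers found in changed files"
fi
```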


@@ -1,300 +0,0 @@
name: Prowler - Release Preparation
run-name: Prowler Release Preparation for ${{ inputs.prowler_version }}
on:
workflow_dispatch:
inputs:
prowler_version:
description: 'Prowler version to release (e.g., 5.9.0)'
required: true
type: string
env:
PROWLER_VERSION: ${{ github.event.inputs.prowler_version }}
jobs:
prepare-release:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
permissions:
contents: write
pull-requests: write
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
- name: Set up Python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: '3.12'
- name: Install Poetry
run: |
python3 -m pip install --user poetry
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Configure Git
run: |
git config --global user.name "prowler-bot"
git config --global user.email "179230569+prowler-bot@users.noreply.github.com"
- name: Parse version and determine branch
run: |
# Validate version format (reusing pattern from sdk-bump-version.yml)
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
MAJOR_VERSION=${BASH_REMATCH[1]}
MINOR_VERSION=${BASH_REMATCH[2]}
PATCH_VERSION=${BASH_REMATCH[3]}
# Export version components to environment
echo "MAJOR_VERSION=${MAJOR_VERSION}" >> "${GITHUB_ENV}"
echo "MINOR_VERSION=${MINOR_VERSION}" >> "${GITHUB_ENV}"
echo "PATCH_VERSION=${PATCH_VERSION}" >> "${GITHUB_ENV}"
# Determine branch name (format: v5.9)
BRANCH_NAME="v${MAJOR_VERSION}.${MINOR_VERSION}"
echo "BRANCH_NAME=${BRANCH_NAME}" >> "${GITHUB_ENV}"
# Calculate UI version (1.X.X format - matches Prowler minor version)
UI_VERSION="1.${MINOR_VERSION}.${PATCH_VERSION}"
echo "UI_VERSION=${UI_VERSION}" >> "${GITHUB_ENV}"
# Calculate API version (1.X.X format - one minor version ahead)
API_MINOR_VERSION=$((MINOR_VERSION + 1))
API_VERSION="1.${API_MINOR_VERSION}.${PATCH_VERSION}"
echo "API_VERSION=${API_VERSION}" >> "${GITHUB_ENV}"
echo "Prowler version: $PROWLER_VERSION"
echo "Branch name: $BRANCH_NAME"
echo "UI version: $UI_VERSION"
echo "API version: $API_VERSION"
echo "Is minor release: $([ $PATCH_VERSION -eq 0 ] && echo 'true' || echo 'false')"
else
echo "Invalid version syntax: '$PROWLER_VERSION' (must be N.N.N)" >&2
exit 1
fi
- name: Checkout existing branch for patch release
if: ${{ env.PATCH_VERSION != '0' }}
run: |
echo "Patch release detected, checking out existing branch $BRANCH_NAME..."
if git show-ref --verify --quiet "refs/heads/$BRANCH_NAME"; then
echo "Branch $BRANCH_NAME exists locally, checking out..."
git checkout "$BRANCH_NAME"
elif git show-ref --verify --quiet "refs/remotes/origin/$BRANCH_NAME"; then
echo "Branch $BRANCH_NAME exists remotely, checking out..."
git checkout -b "$BRANCH_NAME" "origin/$BRANCH_NAME"
else
echo "ERROR: Branch $BRANCH_NAME should exist for patch release $PROWLER_VERSION"
exit 1
fi
- name: Verify version in pyproject.toml
run: |
CURRENT_VERSION=$(grep '^version = ' pyproject.toml | sed -E 's/version = "([^"]+)"/\1/' | tr -d '[:space:]')
PROWLER_VERSION_TRIMMED=$(echo "$PROWLER_VERSION" | tr -d '[:space:]')
if [ "$CURRENT_VERSION" != "$PROWLER_VERSION_TRIMMED" ]; then
echo "ERROR: Version mismatch in pyproject.toml (expected: '$PROWLER_VERSION_TRIMMED', found: '$CURRENT_VERSION')"
exit 1
fi
echo "✓ pyproject.toml version: $CURRENT_VERSION"
- name: Verify version in prowler/config/config.py
run: |
CURRENT_VERSION=$(grep '^prowler_version = ' prowler/config/config.py | sed -E 's/prowler_version = "([^"]+)"/\1/' | tr -d '[:space:]')
PROWLER_VERSION_TRIMMED=$(echo "$PROWLER_VERSION" | tr -d '[:space:]')
if [ "$CURRENT_VERSION" != "$PROWLER_VERSION_TRIMMED" ]; then
echo "ERROR: Version mismatch in prowler/config/config.py (expected: '$PROWLER_VERSION_TRIMMED', found: '$CURRENT_VERSION')"
exit 1
fi
echo "✓ prowler/config/config.py version: $CURRENT_VERSION"
- name: Verify version in api/pyproject.toml
run: |
CURRENT_API_VERSION=$(grep '^version = ' api/pyproject.toml | sed -E 's/version = "([^"]+)"/\1/' | tr -d '[:space:]')
API_VERSION_TRIMMED=$(echo "$API_VERSION" | tr -d '[:space:]')
if [ "$CURRENT_API_VERSION" != "$API_VERSION_TRIMMED" ]; then
echo "ERROR: API version mismatch in api/pyproject.toml (expected: '$API_VERSION_TRIMMED', found: '$CURRENT_API_VERSION')"
exit 1
fi
echo "✓ api/pyproject.toml version: $CURRENT_API_VERSION"
- name: Verify prowler dependency in api/pyproject.toml
if: ${{ env.PATCH_VERSION != '0' }}
run: |
CURRENT_PROWLER_REF=$(grep 'prowler @ git+https://github.com/prowler-cloud/prowler.git@' api/pyproject.toml | sed -E 's/.*@([^"]+)".*/\1/' | tr -d '[:space:]')
BRANCH_NAME_TRIMMED=$(echo "$BRANCH_NAME" | tr -d '[:space:]')
if [ "$CURRENT_PROWLER_REF" != "$BRANCH_NAME_TRIMMED" ]; then
echo "ERROR: Prowler dependency mismatch in api/pyproject.toml (expected: '$BRANCH_NAME_TRIMMED', found: '$CURRENT_PROWLER_REF')"
exit 1
fi
echo "✓ api/pyproject.toml prowler dependency: $CURRENT_PROWLER_REF"
- name: Verify version in api/src/backend/api/v1/views.py
run: |
CURRENT_API_VERSION=$(grep 'spectacular_settings.VERSION = ' api/src/backend/api/v1/views.py | sed -E 's/.*spectacular_settings.VERSION = "([^"]+)".*/\1/' | tr -d '[:space:]')
API_VERSION_TRIMMED=$(echo "$API_VERSION" | tr -d '[:space:]')
if [ "$CURRENT_API_VERSION" != "$API_VERSION_TRIMMED" ]; then
echo "ERROR: API version mismatch in views.py (expected: '$API_VERSION_TRIMMED', found: '$CURRENT_API_VERSION')"
exit 1
fi
echo "✓ api/src/backend/api/v1/views.py version: $CURRENT_API_VERSION"
- name: Checkout existing release branch for minor release
if: ${{ env.PATCH_VERSION == '0' }}
run: |
echo "Minor release detected (patch = 0), checking out existing branch $BRANCH_NAME..."
if git show-ref --verify --quiet "refs/remotes/origin/$BRANCH_NAME"; then
echo "Branch $BRANCH_NAME exists remotely, checking out..."
git checkout -b "$BRANCH_NAME" "origin/$BRANCH_NAME"
else
echo "ERROR: Branch $BRANCH_NAME should exist for minor release $PROWLER_VERSION. Please create it manually first."
exit 1
fi
- name: Prepare prowler dependency update for minor release
if: ${{ env.PATCH_VERSION == '0' }}
run: |
CURRENT_PROWLER_REF=$(grep 'prowler @ git+https://github.com/prowler-cloud/prowler.git@' api/pyproject.toml | sed -E 's/.*@([^"]+)".*/\1/' | tr -d '[:space:]')
BRANCH_NAME_TRIMMED=$(echo "$BRANCH_NAME" | tr -d '[:space:]')
# Create a temporary branch for the PR
TEMP_BRANCH="update-api-dependency-$BRANCH_NAME_TRIMMED-$(date +%s)"
echo "TEMP_BRANCH=$TEMP_BRANCH" >> $GITHUB_ENV
# Switch back to master and create temp branch
git checkout master
git checkout -b "$TEMP_BRANCH"
# Minor release: update the dependency to use the release branch
echo "Updating prowler dependency from '$CURRENT_PROWLER_REF' to '$BRANCH_NAME_TRIMMED'"
sed -i "s|prowler @ git+https://github.com/prowler-cloud/prowler.git@[^\"]*\"|prowler @ git+https://github.com/prowler-cloud/prowler.git@$BRANCH_NAME_TRIMMED\"|" api/pyproject.toml
# Verify the change was made
UPDATED_PROWLER_REF=$(grep 'prowler @ git+https://github.com/prowler-cloud/prowler.git@' api/pyproject.toml | sed -E 's/.*@([^"]+)".*/\1/' | tr -d '[:space:]')
if [ "$UPDATED_PROWLER_REF" != "$BRANCH_NAME_TRIMMED" ]; then
echo "ERROR: Failed to update prowler dependency in api/pyproject.toml"
exit 1
fi
# Update poetry lock file
echo "Updating poetry.lock file..."
cd api
poetry lock
cd ..
# Commit and push the temporary branch
git add api/pyproject.toml api/poetry.lock
git commit -m "chore(api): update prowler dependency to $BRANCH_NAME_TRIMMED for release $PROWLER_VERSION"
git push origin "$TEMP_BRANCH"
echo "✓ Prepared prowler dependency update to: $UPDATED_PROWLER_REF"
- name: Create Pull Request against release branch
if: ${{ env.PATCH_VERSION == '0' }}
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
branch: ${{ env.TEMP_BRANCH }}
base: ${{ env.BRANCH_NAME }}
title: "chore(api): Update prowler dependency to ${{ env.BRANCH_NAME }} for release ${{ env.PROWLER_VERSION }}"
body: |
### Description
Updates the API prowler dependency for release ${{ env.PROWLER_VERSION }}.
**Changes:**
- Updates `api/pyproject.toml` prowler dependency from `@master` to `@${{ env.BRANCH_NAME }}`
- Updates `api/poetry.lock` file with resolved dependencies
This PR should be merged into the `${{ env.BRANCH_NAME }}` release branch.
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
labels: |
component/api
no-changelog
- name: Extract changelog entries
run: |
set -e
# Function to extract changelog for a specific version
extract_changelog() {
local file="$1"
local version="$2"
local output_file="$3"
if [ ! -f "$file" ]; then
echo "Warning: $file not found, skipping..."
touch "$output_file"
return
fi
# Extract changelog section for this version
awk -v version="$version" '
/^## \[v?'"$version"'\]/ { found=1; next }
found && /^## \[v?[0-9]+\.[0-9]+\.[0-9]+\]/ { found=0 }
found && !/^## \[v?'"$version"'\]/ { print }
' "$file" > "$output_file"
# Remove --- separators
sed -i '/^---$/d' "$output_file"
# Remove trailing empty lines
sed -i '/^$/d' "$output_file"
}
# Extract changelogs
echo "Extracting changelog entries..."
extract_changelog "prowler/CHANGELOG.md" "$PROWLER_VERSION" "prowler_changelog.md"
extract_changelog "api/CHANGELOG.md" "$API_VERSION" "api_changelog.md"
extract_changelog "ui/CHANGELOG.md" "$UI_VERSION" "ui_changelog.md"
# Combine changelogs in order: UI, API, SDK
> combined_changelog.md
if [ -s "ui_changelog.md" ]; then
echo "## UI" >> combined_changelog.md
echo "" >> combined_changelog.md
cat ui_changelog.md >> combined_changelog.md
echo "" >> combined_changelog.md
fi
if [ -s "api_changelog.md" ]; then
echo "## API" >> combined_changelog.md
echo "" >> combined_changelog.md
cat api_changelog.md >> combined_changelog.md
echo "" >> combined_changelog.md
fi
if [ -s "prowler_changelog.md" ]; then
echo "## SDK" >> combined_changelog.md
echo "" >> combined_changelog.md
cat prowler_changelog.md >> combined_changelog.md
echo "" >> combined_changelog.md
fi
echo "Combined changelog preview:"
cat combined_changelog.md
- name: Create draft release
uses: softprops/action-gh-release@72f2c25fcb47643c292f7107632f7a47c1df5cd8 # v2.3.2
with:
tag_name: ${{ env.PROWLER_VERSION }}
name: Prowler ${{ env.PROWLER_VERSION }}
body_path: combined_changelog.md
draft: true
target_commitish: ${{ env.PATCH_VERSION == '0' && 'master' || env.BRANCH_NAME }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Clean up temporary files
run: |
rm -f prowler_changelog.md api_changelog.md ui_changelog.md combined_changelog.md
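The version arithmetic in the parse step above can be sanity-checked locally. A sketch with an illustrative version string, mirroring the branch, UI, and API derivations:
```
# Sketch: reproduce the derived versions for a sample release tag.
PROWLER_VERSION="5.9.2"
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
  MAJOR=${BASH_REMATCH[1]}; MINOR=${BASH_REMATCH[2]}; PATCH=${BASH_REMATCH[3]}
  echo "branch: v${MAJOR}.${MINOR}"         # -> v5.9
  echo "ui:     1.${MINOR}.${PATCH}"        # -> 1.9.2 (tracks the Prowler minor)
  echo "api:    1.$((MINOR + 1)).${PATCH}"  # -> 1.10.2 (one minor ahead)
fi
```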


@@ -1,77 +0,0 @@
name: Prowler - Check Changelog
on:
pull_request:
types: [opened, synchronize, reopened, labeled, unlabeled]
jobs:
check-changelog:
if: contains(github.event.pull_request.labels.*.name, 'no-changelog') == false
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read
pull-requests: write
env:
MONITORED_FOLDERS: "api ui prowler"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Get list of changed files
id: changed_files
run: |
git fetch origin ${{ github.base_ref }}
git diff --name-only origin/${{ github.base_ref }}...HEAD > changed_files.txt
cat changed_files.txt
- name: Check for folder changes and changelog presence
id: check_folders
run: |
missing_changelogs=""
for folder in $MONITORED_FOLDERS; do
if grep -q "^${folder}/" changed_files.txt; then
echo "Detected changes in ${folder}/"
if ! grep -q "^${folder}/CHANGELOG.md$" changed_files.txt; then
echo "No changelog update found for ${folder}/"
missing_changelogs="${missing_changelogs}- \`${folder}\`\n"
fi
fi
done
echo "missing_changelogs<<EOF" >> $GITHUB_OUTPUT
echo -e "${missing_changelogs}" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
- name: Find existing changelog comment
if: github.event.pull_request.head.repo.full_name == github.repository
id: find_comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e #v3.1.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-includes: '<!-- changelog-check -->'
- name: Update PR comment with changelog status
if: github.event.pull_request.head.repo.full_name == github.repository
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
issue-number: ${{ github.event.pull_request.number }}
comment-id: ${{ steps.find_comment.outputs.comment-id }}
edit-mode: replace
body: |
<!-- changelog-check -->
${{ steps.check_folders.outputs.missing_changelogs != '' && format('⚠️ **Changes detected in the following folders without a corresponding update to the `CHANGELOG.md`:**
{0}
Please add an entry to the corresponding `CHANGELOG.md` file to maintain a clear history of changes.', steps.check_folders.outputs.missing_changelogs) || '✅ All necessary `CHANGELOG.md` files have been updated. Great job! 🎉' }}
- name: Fail if changelog is missing
if: steps.check_folders.outputs.missing_changelogs != ''
run: |
echo "ERROR: Missing changelog updates in some folders."
exit 1
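A local dry run of the same check is straightforward; this sketch assumes the base branch has been fetched as origin/master and uses the monitored folders from the workflow env.
```
# Sketch: flag monitored folders changed without a CHANGELOG.md update.
git diff --name-only origin/master...HEAD > changed_files.txt
for folder in api ui prowler; do
  if grep -q "^${folder}/" changed_files.txt && \
     ! grep -q "^${folder}/CHANGELOG.md$" changed_files.txt; then
    echo "Missing changelog update for ${folder}/"
  fi
done
```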


@@ -1,38 +0,0 @@
name: Prowler - Merged Pull Request
on:
pull_request_target:
branches: ['master']
types: ['closed']
jobs:
trigger-cloud-pull-request:
name: Trigger Cloud Pull Request
if: github.event.pull_request.merged == true && github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ github.event.pull_request.merge_commit_sha }}
- name: Set short git commit SHA
id: vars
run: |
shortSha=$(git rev-parse --short ${{ github.event.pull_request.merge_commit_sha }})
echo "SHORT_SHA=${shortSha}" >> $GITHUB_ENV
- name: Trigger pull request
uses: peter-evans/repository-dispatch@ff45666b9427631e3450c54a1bcbee4d9ff4d7c0 # v3.0.0
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
event-type: prowler-pull-request-merged
client-payload: |
{
"PROWLER_COMMIT_SHA": "${{ github.event.pull_request.merge_commit_sha }}",
"PROWLER_COMMIT_SHORT_SHA": "${{ env.SHORT_SHA }}",
"PROWLER_PR_TITLE": ${{ toJson(github.event.pull_request.title) }},
"PROWLER_PR_LABELS": ${{ toJson(github.event.pull_request.labels.*.name) }},
"PROWLER_PR_BODY": ${{ toJson(github.event.pull_request.body) }},
"PROWLER_PR_URL": ${{ toJson(github.event.pull_request.html_url) }}
}
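The toJson() calls above exist to keep arbitrary PR titles and bodies valid JSON. A manual repository_dispatch has the same shape; this sketch uses jq for the escaping, with placeholder OWNER/REPO and environment values (a subset of the payload fields above).
```
# Sketch: hand-rolled repository_dispatch with jq doing the JSON escaping.
PAYLOAD=$(jq -n --arg sha "$COMMIT_SHA" --arg title "$PR_TITLE" \
  '{event_type: "prowler-pull-request-merged",
    client_payload: {PROWLER_COMMIT_SHA: $sha, PROWLER_PR_TITLE: $title}}')
curl -s "https://api.github.com/repos/OWNER/REPO/dispatches" \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  --data "$PAYLOAD"
```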


@@ -1,202 +0,0 @@
name: SDK - Build and Push containers
on:
push:
branches:
# For `v3-latest`
- "v3"
# For `v4-latest`
- "v4.6"
# For `latest`
- "master"
paths-ignore:
- ".github/**"
- "README.md"
- "docs/**"
- "ui/**"
- "api/**"
release:
types: [published]
env:
# AWS Configuration
AWS_REGION_STG: eu-west-1
AWS_REGION_PLATFORM: eu-west-1
AWS_REGION: us-east-1
# Container's configuration
IMAGE_NAME: prowler
DOCKERFILE_PATH: ./Dockerfile
# Tags
LATEST_TAG: latest
STABLE_TAG: stable
# The RELEASE_TAG is set during runtime in releases
RELEASE_TAG: ""
# The PROWLER_VERSION and PROWLER_VERSION_MAJOR are set during runtime in releases
PROWLER_VERSION: ""
PROWLER_VERSION_MAJOR: ""
# TEMPORARY_TAG: temporary
# Python configuration
PYTHON_VERSION: 3.12
# Container Registries
PROWLERCLOUD_DOCKERHUB_REPOSITORY: prowlercloud
PROWLERCLOUD_DOCKERHUB_IMAGE: prowler
jobs:
# Build Prowler OSS container
container-build-push:
# needs: dockerfile-linter
runs-on: ubuntu-latest
outputs:
prowler_version_major: ${{ steps.get-prowler-version.outputs.PROWLER_VERSION_MAJOR }}
prowler_version: ${{ steps.get-prowler-version.outputs.PROWLER_VERSION }}
env:
POETRY_VIRTUALENVS_CREATE: "false"
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Setup Python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Install Poetry
run: |
pipx install poetry==2.*
pipx inject poetry poetry-bumpversion
- name: Get Prowler version
id: get-prowler-version
run: |
PROWLER_VERSION="$(poetry version -s 2>/dev/null)"
echo "PROWLER_VERSION=${PROWLER_VERSION}" >> "${GITHUB_ENV}"
echo "PROWLER_VERSION=${PROWLER_VERSION}" >> "${GITHUB_OUTPUT}"
# Store the Prowler major version, needed only for releases
PROWLER_VERSION_MAJOR="${PROWLER_VERSION%%.*}"
echo "PROWLER_VERSION_MAJOR=${PROWLER_VERSION_MAJOR}" >> "${GITHUB_ENV}"
echo "PROWLER_VERSION_MAJOR=${PROWLER_VERSION_MAJOR}" >> "${GITHUB_OUTPUT}"
case ${PROWLER_VERSION_MAJOR} in
3)
echo "LATEST_TAG=v3-latest" >> "${GITHUB_ENV}"
echo "STABLE_TAG=v3-stable" >> "${GITHUB_ENV}"
;;
4)
echo "LATEST_TAG=v4-latest" >> "${GITHUB_ENV}"
echo "STABLE_TAG=v4-stable" >> "${GITHUB_ENV}"
;;
5)
echo "LATEST_TAG=latest" >> "${GITHUB_ENV}"
echo "STABLE_TAG=stable" >> "${GITHUB_ENV}"
;;
*)
# Fallback if any other version is present
echo "Releasing another Prowler major version, aborting..."
exit 1
;;
esac
- name: Login to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to Public ECR
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: public.ecr.aws
username: ${{ secrets.PUBLIC_ECR_AWS_ACCESS_KEY_ID }}
password: ${{ secrets.PUBLIC_ECR_AWS_SECRET_ACCESS_KEY }}
env:
AWS_REGION: ${{ env.AWS_REGION }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build and push container image (latest)
if: github.event_name == 'push'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
push: true
tags: |
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.LATEST_TAG }}
file: ${{ env.DOCKERFILE_PATH }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Build and push container image (release)
if: github.event_name == 'release'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
# Use local context to get changes
# https://github.com/docker/build-push-action#path-context
context: .
push: true
tags: |
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.PROWLER_VERSION }}
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.PROWLER_VERSION }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.PROWLER_VERSION }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.STABLE_TAG }}
file: ${{ env.DOCKERFILE_PATH }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Push README to Docker Hub (toniblyx)
uses: peter-evans/dockerhub-description@432a30c9e07499fd01da9f8a49f0faf9e0ca5b77 # v4.0.2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
repository: ${{ env.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}
readme-filepath: ./README.md
- name: Push README to Docker Hub (prowlercloud)
uses: peter-evans/dockerhub-description@432a30c9e07499fd01da9f8a49f0faf9e0ca5b77 # v4.0.2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
repository: ${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}
readme-filepath: ./README.md
dispatch-action:
needs: container-build-push
runs-on: ubuntu-latest
steps:
- name: Get latest commit info (latest)
if: github.event_name == 'push'
run: |
LATEST_COMMIT_HASH=$(echo ${{ github.event.after }} | cut -b -7)
echo "LATEST_COMMIT_HASH=${LATEST_COMMIT_HASH}" >> $GITHUB_ENV
- name: Dispatch event (latest)
if: github.event_name == 'push' && needs.container-build-push.outputs.prowler_version_major == '3'
run: |
curl https://api.github.com/repos/${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}/dispatches \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
--data '{"event_type":"dispatch","client_payload":{"version":"v3-latest", "tag": "${{ env.LATEST_COMMIT_HASH }}"}}'
- name: Dispatch event (release)
if: github.event_name == 'release' && needs.container-build-push.outputs.prowler_version_major == '3'
run: |
curl https://api.github.com/repos/${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}/dispatches \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
--data '{"event_type":"dispatch","client_payload":{"version":"release", "tag":"${{ needs.container-build-push.outputs.prowler_version }}"}}'


@@ -1,146 +0,0 @@
name: SDK - Bump Version
on:
release:
types: [published]
env:
PROWLER_VERSION: ${{ github.event.release.tag_name }}
BASE_BRANCH: master
jobs:
bump-version:
name: Bump Version
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Get Prowler version
shell: bash
run: |
if [[ $PROWLER_VERSION =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
MAJOR_VERSION=${BASH_REMATCH[1]}
MINOR_VERSION=${BASH_REMATCH[2]}
FIX_VERSION=${BASH_REMATCH[3]}
# Export version components to GitHub environment
echo "MAJOR_VERSION=${MAJOR_VERSION}" >> "${GITHUB_ENV}"
echo "MINOR_VERSION=${MINOR_VERSION}" >> "${GITHUB_ENV}"
echo "FIX_VERSION=${FIX_VERSION}" >> "${GITHUB_ENV}"
if (( MAJOR_VERSION == 5 )); then
if (( FIX_VERSION == 0 )); then
echo "Minor Release: $PROWLER_VERSION"
# Set up next minor version for master
BUMP_VERSION_TO=${MAJOR_VERSION}.$((MINOR_VERSION + 1)).${FIX_VERSION}
echo "BUMP_VERSION_TO=${BUMP_VERSION_TO}" >> "${GITHUB_ENV}"
TARGET_BRANCH=${BASE_BRANCH}
echo "TARGET_BRANCH=${TARGET_BRANCH}" >> "${GITHUB_ENV}"
# Set up patch version for version branch
PATCH_VERSION_TO=${MAJOR_VERSION}.${MINOR_VERSION}.1
echo "PATCH_VERSION_TO=${PATCH_VERSION_TO}" >> "${GITHUB_ENV}"
VERSION_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
echo "VERSION_BRANCH=${VERSION_BRANCH}" >> "${GITHUB_ENV}"
echo "Bumping to next minor version: ${BUMP_VERSION_TO} in branch ${TARGET_BRANCH}"
echo "Bumping to next patch version: ${PATCH_VERSION_TO} in branch ${VERSION_BRANCH}"
else
echo "Patch Release: $PROWLER_VERSION"
BUMP_VERSION_TO=${MAJOR_VERSION}.${MINOR_VERSION}.$((FIX_VERSION + 1))
echo "BUMP_VERSION_TO=${BUMP_VERSION_TO}" >> "${GITHUB_ENV}"
TARGET_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
echo "TARGET_BRANCH=${TARGET_BRANCH}" >> "${GITHUB_ENV}"
echo "Bumping to next patch version: ${BUMP_VERSION_TO} in branch ${TARGET_BRANCH}"
fi
else
echo "Releasing another Prowler major version, aborting..."
exit 1
fi
else
echo "Invalid version syntax: '$PROWLER_VERSION' (must be N.N.N)" >&2
exit 1
fi
- name: Bump versions in files
run: |
echo "Using PROWLER_VERSION=$PROWLER_VERSION"
echo "Using BUMP_VERSION_TO=$BUMP_VERSION_TO"
set -e
echo "Bumping version in pyproject.toml ..."
sed -i "s|version = \"${PROWLER_VERSION}\"|version = \"${BUMP_VERSION_TO}\"|" pyproject.toml
echo "Bumping version in prowler/config/config.py ..."
sed -i "s|prowler_version = \"${PROWLER_VERSION}\"|prowler_version = \"${BUMP_VERSION_TO}\"|" prowler/config/config.py
echo "Bumping version in .env ..."
sed -i "s|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${PROWLER_VERSION}|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${BUMP_VERSION_TO}|" .env
git --no-pager diff
- name: Create Pull Request
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.TARGET_BRANCH }}
commit-message: "chore(release): Bump version to v${{ env.BUMP_VERSION_TO }}"
branch: "version-bump-to-v${{ env.BUMP_VERSION_TO }}"
title: "chore(release): Bump version to v${{ env.BUMP_VERSION_TO }}"
labels: no-changelog
body: |
### Description
Bump Prowler version to v${{ env.BUMP_VERSION_TO }}
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
- name: Handle patch version for minor release
if: env.FIX_VERSION == '0'
run: |
echo "Using PROWLER_VERSION=$PROWLER_VERSION"
echo "Using PATCH_VERSION_TO=$PATCH_VERSION_TO"
set -e
echo "Bumping version in pyproject.toml ..."
sed -i "s|version = \"${PROWLER_VERSION}\"|version = \"${PATCH_VERSION_TO}\"|" pyproject.toml
echo "Bumping version in prowler/config/config.py ..."
sed -i "s|prowler_version = \"${PROWLER_VERSION}\"|prowler_version = \"${PATCH_VERSION_TO}\"|" prowler/config/config.py
echo "Bumping version in .env ..."
sed -i "s|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${PROWLER_VERSION}|NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${PATCH_VERSION_TO}|" .env
git --no-pager diff
- name: Create Pull Request for patch version
if: env.FIX_VERSION == '0'
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
base: ${{ env.VERSION_BRANCH }}
commit-message: "chore(release): Bump version to v${{ env.PATCH_VERSION_TO }}"
branch: "version-bump-to-v${{ env.PATCH_VERSION_TO }}"
title: "chore(release): Bump version to v${{ env.PATCH_VERSION_TO }}"
labels: no-changelog
body: |
### Description
Bump Prowler version to v${{ env.PATCH_VERSION_TO }}
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
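In brief, the branching logic above: a minor release (patch 0) bumps master to the next minor and seeds the new version branch with a .1 patch, while a patch release only bumps its own branch. A sketch with an illustrative tag:
```
# Sketch: which bumps a given release tag triggers.
PROWLER_VERSION="5.9.0"
IFS=. read -r MAJOR MINOR FIX <<< "$PROWLER_VERSION"
if (( FIX == 0 )); then
  echo "master bumps to ${MAJOR}.$((MINOR + 1)).0"
  echo "v${MAJOR}.${MINOR} bumps to ${MAJOR}.${MINOR}.1"
else
  echo "v${MAJOR}.${MINOR} bumps to ${MAJOR}.${MINOR}.$((FIX + 1))"
fi
```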


@@ -1,67 +0,0 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: SDK - CodeQL
on:
push:
branches:
- "master"
- "v3"
- "v4.*"
- "v5.*"
paths-ignore:
- 'ui/**'
- 'api/**'
- '.github/**'
pull_request:
branches:
- "master"
- "v3"
- "v4.*"
- "v5.*"
paths-ignore:
- 'ui/**'
- 'api/**'
- '.github/**'
schedule:
- cron: '00 12 * * *'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'python' ]
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/sdk-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
category: "/language:${{matrix.language}}"


@@ -1,256 +0,0 @@
name: SDK - Pull Request
on:
push:
branches:
- "master"
- "v3"
- "v4.*"
- "v5.*"
pull_request:
branches:
- "master"
- "v3"
- "v4.*"
- "v5.*"
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12"]
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: ./**
files_ignore: |
.github/**
docs/**
permissions/**
api/**
ui/**
prowler/CHANGELOG.md
README.md
mkdocs.yml
.backportrc.json
.env
docker-compose*
examples/**
.gitignore
- name: Install poetry
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
python -m pip install --upgrade pip
pipx install poetry==2.1.1
- name: Set up Python ${{ matrix.python-version }}
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: ${{ matrix.python-version }}
cache: "poetry"
- name: Install dependencies
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry install --no-root
poetry run pip list
VERSION=$(curl --silent "https://api.github.com/repos/hadolint/hadolint/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"v([^"]+)".*/\1/' \
) && curl -L -o /tmp/hadolint "https://github.com/hadolint/hadolint/releases/download/v${VERSION}/hadolint-Linux-x86_64" \
&& chmod +x /tmp/hadolint
- name: Poetry check
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry check --lock
- name: Lint with flake8
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run flake8 . --ignore=E266,W503,E203,E501,W605,E128 --exclude contrib,ui,api
- name: Checking format with black
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run black --exclude api ui --check .
- name: Lint with pylint
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run pylint --disable=W,C,R,E -j 0 -rn -sn prowler/
- name: Bandit
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run bandit -q -lll -x '*_test.py,./contrib/,./api/,./ui' -r .
- name: Safety
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run safety check --ignore 70612 -r pyproject.toml
- name: Vulture
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run vulture --exclude "contrib,api,ui" --min-confidence 100 .
- name: Dockerfile - Check if Dockerfile has changed
id: dockerfile-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
Dockerfile
- name: Hadolint
if: steps.dockerfile-changed-files.outputs.any_changed == 'true'
run: |
/tmp/hadolint Dockerfile --ignore=DL3013
# Test AWS
- name: AWS - Check if any file has changed
id: aws-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/aws/**
./tests/providers/aws/**
poetry.lock
- name: AWS - Test
if: steps.aws-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/aws --cov-report=xml:aws_coverage.xml tests/providers/aws
# Test Azure
- name: Azure - Check if any file has changed
id: azure-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/azure/**
./tests/providers/azure/**
poetry.lock
- name: Azure - Test
if: steps.azure-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/azure --cov-report=xml:azure_coverage.xml tests/providers/azure
# Test GCP
- name: GCP - Check if any file has changed
id: gcp-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/gcp/**
./tests/providers/gcp/**
poetry.lock
- name: GCP - Test
if: steps.gcp-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/gcp --cov-report=xml:gcp_coverage.xml tests/providers/gcp
# Test Kubernetes
- name: Kubernetes - Check if any file has changed
id: kubernetes-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/kubernetes/**
./tests/providers/kubernetes/**
poetry.lock
- name: Kubernetes - Test
if: steps.kubernetes-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/kubernetes --cov-report=xml:kubernetes_coverage.xml tests/providers/kubernetes
# Test GitHub
- name: GitHub - Check if any file has changed
id: github-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/github/**
./tests/providers/github/**
poetry.lock
- name: GitHub - Test
if: steps.github-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/github --cov-report=xml:github_coverage.xml tests/providers/github
# Test NHN
- name: NHN - Check if any file has changed
id: nhn-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/nhn/**
./tests/providers/nhn/**
poetry.lock
- name: NHN - Test
if: steps.nhn-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/nhn --cov-report=xml:nhn_coverage.xml tests/providers/nhn
# Test M365
- name: M365 - Check if any file has changed
id: m365-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/m365/**
./tests/providers/m365/**
poetry.lock
- name: M365 - Test
if: steps.m365-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/m365 --cov-report=xml:m365_coverage.xml tests/providers/m365
# Test IaC
- name: IaC - Check if any file has changed
id: iac-changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: |
./prowler/providers/iac/**
./tests/providers/iac/**
poetry.lock
- name: IaC - Test
if: steps.iac-changed-files.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/providers/iac --cov-report=xml:iac_coverage.xml tests/providers/iac
# Common Tests
- name: Lib - Test
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/lib --cov-report=xml:lib_coverage.xml tests/lib
- name: Config - Test
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
poetry run pytest -n auto --cov=./prowler/config --cov-report=xml:config_coverage.xml tests/config
# Codecov
- name: Upload coverage reports to Codecov
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
with:
flags: prowler
files: ./aws_coverage.xml,./azure_coverage.xml,./gcp_coverage.xml,./kubernetes_coverage.xml,./github_coverage.xml,./nhn_coverage.xml,./m365_coverage.xml,./lib_coverage.xml,./config_coverage.xml
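Each provider job above is independently runnable; the AWS step, for instance, is equivalent to this local sketch (assumes the Poetry environment the workflow sets up):
```
# Sketch: run one provider's test suite with coverage, as the workflow does.
poetry install --no-root
poetry run pytest -n auto \
  --cov=./prowler/providers/aws \
  --cov-report=xml:aws_coverage.xml \
  tests/providers/aws
```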


@@ -1,98 +0,0 @@
name: SDK - PyPI release
on:
release:
types: [published]
env:
RELEASE_TAG: ${{ github.event.release.tag_name }}
PYTHON_VERSION: 3.11
# CACHE: "poetry"
jobs:
repository-check:
name: Repository check
runs-on: ubuntu-latest
outputs:
is_repo: ${{ steps.repository_check.outputs.is_repo }}
steps:
- name: Repository check
id: repository_check
working-directory: /tmp
run: |
if [[ ${{ github.repository }} == "prowler-cloud/prowler" ]]
then
echo "is_repo=true" >> "${GITHUB_OUTPUT}"
else
echo "This action only runs for prowler-cloud/prowler"
echo "is_repo=false" >> "${GITHUB_OUTPUT}"
fi
release-prowler-job:
runs-on: ubuntu-latest
needs: repository-check
if: needs.repository-check.outputs.is_repo == 'true'
env:
POETRY_VIRTUALENVS_CREATE: "false"
name: Release Prowler to PyPI
steps:
- name: Repository check
working-directory: /tmp
run: |
if [[ "${{ github.repository }}" != "prowler-cloud/prowler" ]]; then
echo "This action only runs for prowler-cloud/prowler"
exit 1
fi
- name: Get Prowler version
run: |
PROWLER_VERSION="${{ env.RELEASE_TAG }}"
case ${PROWLER_VERSION%%.*} in
3)
echo "Releasing Prowler v3 with tag ${PROWLER_VERSION}"
;;
4)
echo "Releasing Prowler v4 with tag ${PROWLER_VERSION}"
;;
5)
echo "Releasing Prowler v5 with tag ${PROWLER_VERSION}"
;;
*)
echo "Releasing another Prowler major version, aborting..."
exit 1
;;
esac
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install dependencies
run: |
pipx install poetry==2.1.1
- name: Setup Python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: ${{ env.PYTHON_VERSION }}
# cache: ${{ env.CACHE }}
- name: Build Prowler package
run: |
poetry build
- name: Publish Prowler package to PyPI
run: |
poetry config pypi-token.pypi ${{ secrets.PYPI_API_TOKEN }}
poetry publish
- name: Replicate PyPI package
run: |
rm -rf ./dist && rm -rf ./build && rm -rf prowler.egg-info
pip install toml
python util/replicate_pypi_package.py
poetry build
- name: Publish prowler-cloud package to PyPI
run: |
poetry config pypi-token.pypi ${{ secrets.PYPI_API_TOKEN }}
poetry publish
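The packaging steps can be rehearsed locally without touching PyPI; Poetry's --dry-run performs everything except the upload. A sketch:
```
# Sketch: build and verify the package without publishing.
pipx install poetry==2.1.1
poetry build                 # writes dist/*.whl and dist/*.tar.gz
poetry publish --dry-run     # checks metadata; the real run needs a PyPI token
```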


@@ -1,68 +0,0 @@
# This is a basic workflow to help you get started with Actions
name: SDK - Refresh AWS services' regions
on:
schedule:
- cron: "0 9 * * 1" # runs at 09:00 UTC every Monday
env:
GITHUB_BRANCH: "master"
AWS_REGION_DEV: us-east-1
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build"
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest
permissions:
id-token: write
pull-requests: write
contents: write
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ env.GITHUB_BRANCH }}
- name: Setup Python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: 3.9 # install the Python version needed
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install boto3
- name: Configure AWS Credentials -- DEV
uses: aws-actions/configure-aws-credentials@b47578312673ae6fa5b5096b330d9fbac3d116df # v4.2.1
with:
aws-region: ${{ env.AWS_REGION_DEV }}
role-to-assume: ${{ secrets.DEV_IAM_ROLE_ARN }}
role-session-name: refresh-AWS-regions-dev
# Runs a single command using the runner's shell
- name: Run a one-line script
run: python3 util/update_aws_services_regions.py
# Create pull request
- name: Create Pull Request
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
author: prowler-bot <179230569+prowler-bot@users.noreply.github.com>
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
commit-message: "feat(regions_update): Update regions for AWS services"
branch: "aws-services-regions-updated-${{ github.sha }}"
labels: "status/waiting-for-revision, severity/low, provider/aws"
title: "chore(regions_update): Changes in regions for AWS services"
body: |
### Description
This PR updates the regions for AWS services.
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
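To reproduce the refresh outside CI, a sketch (requires AWS credentials with read access to the relevant service metadata):
```
# Sketch: regenerate the AWS regions data and inspect the diff.
python -m pip install --upgrade pip boto3
python3 util/update_aws_services_regions.py
git status --short   # review the regenerated file before opening a PR
```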


@@ -1,121 +0,0 @@
name: UI - Build and Push containers
on:
push:
branches:
- "master"
paths:
- "ui/**"
- ".github/workflows/ui-build-lint-push-containers.yml"
# Uncomment the below code to test this action on PRs
# pull_request:
# branches:
# - "master"
# paths:
# - "ui/**"
# - ".github/workflows/ui-build-lint-push-containers.yml"
release:
types: [published]
env:
# Tags
LATEST_TAG: latest
RELEASE_TAG: ${{ github.event.release.tag_name }}
STABLE_TAG: stable
WORKING_DIRECTORY: ./ui
# Container Registries
PROWLERCLOUD_DOCKERHUB_REPOSITORY: prowlercloud
PROWLERCLOUD_DOCKERHUB_IMAGE: prowler-ui
NEXT_PUBLIC_API_BASE_URL: http://prowler-api:8080/api/v1
jobs:
repository-check:
name: Repository check
runs-on: ubuntu-latest
outputs:
is_repo: ${{ steps.repository_check.outputs.is_repo }}
steps:
- name: Repository check
id: repository_check
working-directory: /tmp
run: |
if [[ ${{ github.repository }} == "prowler-cloud/prowler" ]]
then
echo "is_repo=true" >> "${GITHUB_OUTPUT}"
else
echo "This action only runs for prowler-cloud/prowler"
echo "is_repo=false" >> "${GITHUB_OUTPUT}"
fi
# Build Prowler OSS container
container-build-push:
needs: repository-check
if: needs.repository-check.outputs.is_repo == 'true'
runs-on: ubuntu-latest
defaults:
run:
working-directory: ${{ env.WORKING_DIRECTORY }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Set short git commit SHA
id: vars
run: |
shortSha=$(git rev-parse --short ${{ github.sha }})
echo "SHORT_SHA=${shortSha}" >> $GITHUB_ENV
- name: Login to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build and push container image (latest)
# Comment the following line for testing
if: github.event_name == 'push'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
build-args: |
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=${{ env.SHORT_SHA }}
NEXT_PUBLIC_API_BASE_URL=${{ env.NEXT_PUBLIC_API_BASE_URL }}
# Set push: false for testing
push: true
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.LATEST_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.SHORT_SHA }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Build and push container image (release)
if: github.event_name == 'release'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.WORKING_DIRECTORY }}
build-args: |
NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v${{ env.RELEASE_TAG }}
NEXT_PUBLIC_API_BASE_URL=${{ env.NEXT_PUBLIC_API_BASE_URL }}
push: true
tags: |
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.RELEASE_TAG }}
${{ env.PROWLERCLOUD_DOCKERHUB_REPOSITORY }}/${{ env.PROWLERCLOUD_DOCKERHUB_IMAGE }}:${{ env.STABLE_TAG }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Trigger deployment
if: github.event_name == 'push'
uses: peter-evans/repository-dispatch@ff45666b9427631e3450c54a1bcbee4d9ff4d7c0 # v3.0.0
with:
token: ${{ secrets.PROWLER_BOT_ACCESS_TOKEN }}
repository: ${{ secrets.CLOUD_DISPATCH }}
event-type: prowler-ui-deploy
client-payload: '{"sha": "${{ github.sha }}", "short_sha": "${{ env.SHORT_SHA }}"}'


@@ -1,59 +0,0 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: UI - CodeQL
on:
push:
branches:
- "master"
- "v5.*"
paths:
- "ui/**"
pull_request:
branches:
- "master"
- "v5.*"
paths:
- "ui/**"
schedule:
- cron: "00 12 * * *"
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: ["javascript"]
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/ui-codeql-config.yml
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
category: "/language:${{matrix.language}}"


@@ -1,98 +0,0 @@
name: UI - E2E Tests
on:
pull_request:
branches:
- master
- "v5.*"
paths:
- '.github/workflows/ui-e2e-tests.yml'
- 'ui/**'
jobs:
e2e-tests:
if: github.repository == 'prowler-cloud/prowler'
runs-on: ubuntu-latest
env:
AUTH_SECRET: 'fallback-ci-secret-for-testing'
AUTH_TRUST_HOST: true
NEXTAUTH_URL: 'http://localhost:3000'
NEXT_PUBLIC_API_BASE_URL: 'http://localhost:8080/api/v1'
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Start API services
run: |
# Override docker-compose image tag to use latest instead of stable
# This overrides any PROWLER_API_VERSION set in .env file
export PROWLER_API_VERSION=latest
echo "Using PROWLER_API_VERSION=${PROWLER_API_VERSION}"
docker compose up -d api worker worker-beat
- name: Wait for API to be ready
run: |
echo "Waiting for prowler-api..."
timeout=150 # 2.5 minutes max
elapsed=0
while [ $elapsed -lt $timeout ]; do
if curl -s ${NEXT_PUBLIC_API_BASE_URL}/docs >/dev/null 2>&1; then
echo "Prowler API is ready!"
exit 0
fi
echo "Waiting for prowler-api... (${elapsed}s elapsed)"
sleep 5
elapsed=$((elapsed + 5))
done
echo "Timeout waiting for prowler-api to start"
exit 1
- name: Load database fixtures for E2E tests
run: |
docker compose exec -T api sh -c '
echo "Loading all fixtures from api/fixtures/dev/..."
for fixture in api/fixtures/dev/*.json; do
if [ -f "$fixture" ]; then
echo "Loading $fixture"
poetry run python manage.py loaddata "$fixture" --database admin
fi
done
echo "All database fixtures loaded successfully!"
'
- name: Setup Node.js environment
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
with:
node-version: '20.x'
cache: 'npm'
cache-dependency-path: './ui/package-lock.json'
- name: Install UI dependencies
working-directory: ./ui
run: npm ci
- name: Build UI application
working-directory: ./ui
run: npm run build
- name: Cache Playwright browsers
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
id: playwright-cache
with:
path: ~/.cache/ms-playwright
key: ${{ runner.os }}-playwright-${{ hashFiles('ui/package-lock.json') }}
restore-keys: |
${{ runner.os }}-playwright-
- name: Install Playwright browsers
working-directory: ./ui
if: steps.playwright-cache.outputs.cache-hit != 'true'
run: npm run test:e2e:install
- name: Run E2E tests
working-directory: ./ui
run: npm run test:e2e
- name: Upload test reports
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: failure()
with:
name: playwright-report
path: ui/playwright-report/
retention-days: 30
- name: Cleanup services
if: always()
run: |
echo "Shutting down services..."
docker compose down -v || true
echo "Cleanup completed"


@@ -1,65 +0,0 @@
name: UI - Pull Request
on:
push:
branches:
- "master"
- "v5.*"
paths:
- ".github/workflows/ui-pull-request.yml"
- "ui/**"
pull_request:
branches:
- master
- "v5.*"
paths:
- 'ui/**'
env:
UI_WORKING_DIR: ./ui
IMAGE_NAME: prowler-ui
jobs:
test-and-coverage:
runs-on: ubuntu-latest
strategy:
matrix:
os: [ubuntu-latest]
node-version: [20.x]
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Setup Node.js ${{ matrix.node-version }}
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
with:
node-version: ${{ matrix.node-version }}
cache: 'npm'
cache-dependency-path: './ui/package-lock.json'
- name: Install dependencies
working-directory: ./ui
run: npm ci
- name: Run Healthcheck
working-directory: ./ui
run: npm run healthcheck
- name: Build the application
working-directory: ./ui
run: npm run build
test-container-build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build Container
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ env.UI_WORKING_DIR }}
# Always build using `prod` target
target: prod
push: false
tags: ${{ env.IMAGE_NAME }}:latest
outputs: type=docker
build-args: |
NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_test_51LwpXXXX

.gitignore

@@ -5,15 +5,6 @@
[._]ss[a-gi-z]
[._]sw[a-p]
# Python code
__pycache__
venv/
build/
/dist/
*.egg-info/
*/__pycache__/*.pyc
.idea/
# Session
Session.vim
Sessionx.vim
@@ -31,7 +22,7 @@ tags
*.DS_Store
# Prowler output
/output
output/
# Prowler found secrets
secrets-*/
@@ -42,39 +33,8 @@ junit-reports/
# VSCode files
.vscode/
# Cursor files
.cursorignore
.cursor/
terraform-kickstarter/.terraform.lock.hcl
# RooCode files
.roo/
.rooignore
.roomodes
terraform-kickstarter/.terraform/providers/registry.terraform.io/hashicorp/aws/3.56.0/darwin_amd64/terraform-provider-aws_v3.56.0_x5
# Cline files
.cline/
.clineignore
# Terraform
.terraform*
*.tfstate
*.tfstate.*
# .env
ui/.env*
api/.env*
.env.local
# Coverage
.coverage*
.coverage
coverage*
# Node
node_modules
# Persistent data
_data/
# Claude
CLAUDE.md
terraform-kickstarter/terraform.tfstate


@@ -1,127 +0,0 @@
repos:
## GENERAL
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
hooks:
- id: check-merge-conflict
- id: check-yaml
args: ["--unsafe"]
- id: check-json
- id: end-of-file-fixer
- id: trailing-whitespace
- id: no-commit-to-branch
- id: pretty-format-json
args: ["--autofix", --no-sort-keys, --no-ensure-ascii]
## TOML
- repo: https://github.com/macisamuele/language-formatters-pre-commit-hooks
rev: v2.13.0
hooks:
- id: pretty-format-toml
args: [--autofix]
files: pyproject.toml
## BASH
- repo: https://github.com/koalaman/shellcheck-precommit
rev: v0.10.0
hooks:
- id: shellcheck
exclude: contrib
## PYTHON
- repo: https://github.com/myint/autoflake
rev: v2.3.1
hooks:
- id: autoflake
args:
[
"--in-place",
"--remove-all-unused-imports",
"--remove-unused-variable",
]
- repo: https://github.com/timothycrosley/isort
rev: 5.13.2
hooks:
- id: isort
args: ["--profile", "black"]
- repo: https://github.com/psf/black
rev: 24.4.2
hooks:
- id: black
- repo: https://github.com/pycqa/flake8
rev: 7.0.0
hooks:
- id: flake8
exclude: contrib
args: ["--ignore=E266,W503,E203,E501,W605"]
- repo: https://github.com/python-poetry/poetry
rev: 2.1.1
hooks:
- id: poetry-check
name: API - poetry-check
args: ["--directory=./api"]
pass_filenames: false
- id: poetry-lock
name: API - poetry-lock
args: ["--directory=./api"]
pass_filenames: false
- id: poetry-check
name: SDK - poetry-check
args: ["--directory=./"]
pass_filenames: false
- id: poetry-lock
name: SDK - poetry-lock
args: ["--directory=./"]
pass_filenames: false
- repo: https://github.com/hadolint/hadolint
rev: v2.13.0-beta
hooks:
- id: hadolint
args: ["--ignore=DL3013"]
- repo: local
hooks:
- id: pylint
name: pylint
entry: bash -c 'pylint --disable=W,C,R,E -j 0 -rn -sn prowler/'
language: system
files: '.*\.py'
- id: trufflehog
name: TruffleHog
description: Detect secrets in your data.
entry: bash -c 'trufflehog --no-update git file://. --only-verified --fail'
# For running trufflehog in docker, use the following entry instead:
# entry: bash -c 'docker run -v "$(pwd):/workdir" -i --rm trufflesecurity/trufflehog:latest git file:///workdir --only-verified --fail'
language: system
stages: ["pre-commit", "pre-push"]
- id: bandit
name: bandit
description: "Bandit is a tool for finding common security issues in Python code"
entry: bash -c 'bandit -q -lll -x '*_test.py,./contrib/,./.venv/' -r .'
language: system
files: '.*\.py'
- id: safety
name: safety
description: "Safety is a tool that checks your installed dependencies for known security vulnerabilities"
# TODO: Botocore needs urllib3 1.X so we need to ignore these vulnerabilities 77744,77745. Remove this once we upgrade to urllib3 2.X
entry: bash -c 'safety check --ignore 70612,66963,74429,76352,76353,77744,77745'
language: system
- id: vulture
name: vulture
description: "Vulture finds unused code in Python programs."
entry: bash -c 'vulture --exclude "contrib,.venv,api/src/backend/api/tests/,api/src/backend/conftest.py,api/src/backend/tasks/tests/" --min-confidence 100 .'
language: system
files: '.*\.py'


@@ -1,25 +0,0 @@
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
build:
os: "ubuntu-22.04"
tools:
python: "3.11"
jobs:
post_create_environment:
# Install poetry
# https://python-poetry.org/docs/#installing-manually
- python -m pip install poetry
post_install:
# Install dependencies with 'docs' dependency group
# https://python-poetry.org/docs/managing-dependencies/#dependency-groups
# VIRTUAL_ENV needs to be set manually for now.
# See https://github.com/readthedocs/readthedocs.org/pull/11152/
- VIRTUAL_ENV=${READTHEDOCS_VIRTUALENV_PATH} python -m poetry install --only=docs
mkdocs:
configuration: mkdocs.yml
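The same docs environment works locally; a sketch mirroring the Read the Docs jobs above:
```
# Sketch: build and serve the docs with the 'docs' dependency group.
python -m pip install poetry
poetry install --only=docs
poetry run mkdocs serve   # serves mkdocs.yml on http://127.0.0.1:8000
```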


@@ -55,7 +55,7 @@ further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at [support.prowler.com](https://customer.support.prowler.com/servicedesk/customer/portals). All
reported by contacting the project team at community@prowler.cloud. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.


@@ -1,13 +0,0 @@
# Do you want to learn how to...
- Contribute your code or fixes to Prowler
- Create a new check for a provider
- Create a new security compliance framework
- Add a custom output format
- Add a new integration
- Contribute documentation
Want some swag as appreciation for your contribution?
# Prowler Developer Guide
https://docs.prowler.com/projects/prowler-open-source/en/latest/developer-guide/introduction/


@@ -1,61 +0,0 @@
FROM python:3.12.11-slim-bookworm AS build
LABEL maintainer="https://github.com/prowler-cloud/prowler"
LABEL org.opencontainers.image.source="https://github.com/prowler-cloud/prowler"
ARG POWERSHELL_VERSION=7.5.0
# hadolint ignore=DL3008
RUN apt-get update && apt-get install -y --no-install-recommends \
wget libicu72 libunwind8 libssl3 libcurl4 ca-certificates apt-transport-https gnupg \
&& rm -rf /var/lib/apt/lists/*
# Install PowerShell
RUN ARCH=$(uname -m) && \
if [ "$ARCH" = "x86_64" ]; then \
wget --progress=dot:giga https://github.com/PowerShell/PowerShell/releases/download/v${POWERSHELL_VERSION}/powershell-${POWERSHELL_VERSION}-linux-x64.tar.gz -O /tmp/powershell.tar.gz ; \
elif [ "$ARCH" = "aarch64" ]; then \
wget --progress=dot:giga https://github.com/PowerShell/PowerShell/releases/download/v${POWERSHELL_VERSION}/powershell-${POWERSHELL_VERSION}-linux-arm64.tar.gz -O /tmp/powershell.tar.gz ; \
else \
echo "Unsupported architecture: $ARCH" && exit 1 ; \
fi && \
mkdir -p /opt/microsoft/powershell/7 && \
tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7 && \
chmod +x /opt/microsoft/powershell/7/pwsh && \
ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh && \
rm /tmp/powershell.tar.gz
# Add prowler user
RUN addgroup --gid 1000 prowler && \
adduser --uid 1000 --gid 1000 --disabled-password --gecos "" prowler
USER prowler
WORKDIR /home/prowler
# Copy necessary files
COPY prowler/ /home/prowler/prowler/
COPY dashboard/ /home/prowler/dashboard/
COPY pyproject.toml /home/prowler
COPY README.md /home/prowler/
COPY prowler/providers/m365/lib/powershell/m365_powershell.py /home/prowler/prowler/providers/m365/lib/powershell/m365_powershell.py
# Install Python dependencies
ENV HOME='/home/prowler'
ENV PATH="${HOME}/.local/bin:${PATH}"
# hadolint ignore=DL3013
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir poetry
RUN poetry install --compile && \
rm -rf ~/.cache/pip
# Install PowerShell modules
RUN poetry run python prowler/providers/m365/lib/powershell/m365_powershell.py
# Remove deprecated dash dependencies
RUN pip uninstall dash-html-components -y && \
pip uninstall dash-core-components -y
USER prowler
ENTRYPOINT ["poetry", "run", "prowler"]
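Building and running the image locally is a two-liner; mounting AWS credentials is shown as one illustrative option, and arguments after the image name are passed straight to the prowler entrypoint.
```
# Sketch: build the image above and invoke the prowler entrypoint.
docker build -t prowler:local .
docker run --rm -it \
  -v "$HOME/.aws:/home/prowler/.aws:ro" \
  prowler:local -v   # prints the prowler version
```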


@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright @ 2024 Toni de la Fuente
Copyright 2018 Netflix, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -198,4 +198,4 @@
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
limitations under the License.

View File

@@ -0,0 +1,5 @@
```
./prowler -l # to see all available checks and their groups.
./prowler -L # to see all available groups only.
./prowler -l -g groupname # to see checks in a particular group
```

View File

@@ -1,47 +0,0 @@
.DEFAULT_GOAL:=help
##@ Testing
test: ## Test with pytest
rm -rf .coverage && \
pytest -n auto -vvv -s --cov=./prowler --cov-report=xml tests
coverage: ## Show Test Coverage
coverage run --skip-covered -m pytest -v && \
coverage report -m && \
rm -rf .coverage && \
coverage report -m
coverage-html: ## Show Test Coverage
rm -rf ./htmlcov && \
coverage html && \
open htmlcov/index.html
##@ Linting
format: ## Format Code
@echo "Running black..."
black .
lint: ## Lint Code
@echo "Running flake8..."
flake8 . --ignore=E266,W503,E203,E501,W605,E128 --exclude contrib
@echo "Running black... "
black --check .
@echo "Running pylint..."
pylint --disable=W,C,R,E -j 0 prowler util
##@ PyPI
pypi-clean: ## Delete the distribution files
rm -rf ./dist && rm -rf ./build && rm -rf prowler.egg-info
pypi-build: ## Build package
$(MAKE) pypi-clean && \
poetry build
pypi-upload: ## Upload package
python3 -m twine upload --repository pypi dist/*
##@ Help
help: ## Show this help.
@echo "Prowler Makefile"
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_-]+:.*?##/ { printf " \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)

Pipfile
View File

@@ -0,0 +1,13 @@
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
boto3 = ">=1.9.188"
detect-secrets = "==1.0.3"
[requires]
python_version = "3.7"

README.md
View File

@@ -1,338 +1,714 @@
<p align="center">
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/master/docs/img/prowler-logo-black.png#gh-light-mode-only" width="50%" height="50%">
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/master/docs/img/prowler-logo-white.png#gh-dark-mode-only" width="50%" height="50%">
</p>
<p align="center">
<b><i>Prowler</i></b> is the Open Cloud Security platform trusted by thousands to automate security and compliance in any cloud environment. With hundreds of ready-to-use checks and compliance frameworks, Prowler delivers real-time, customizable monitoring and seamless integrations, making cloud security simple, scalable, and cost-effective for organizations of any size.
</p>
<p align="center">
<b>Learn more at <a href="https://prowler.com">prowler.com</a></b>
<img src="https://user-images.githubusercontent.com/3985464/113734260-7ba06900-96fb-11eb-82bc-d4f68a1e2710.png" />
</p>
<p align="center">
<a href="https://goto.prowler.com/slack"><img width="30" height="30" alt="Prowler community on Slack" src="https://github.com/prowler-cloud/prowler/assets/38561120/3c8b4ec5-6849-41a5-b5e1-52bbb94af73a"></a>
<br>
<a href="https://goto.prowler.com/slack">Join our Prowler community!</a>
</p>
<hr>
<p align="center">
<a href="https://goto.prowler.com/slack"><img alt="Slack Shield" src="https://img.shields.io/badge/slack-prowler-brightgreen.svg?logo=slack"></a>
<a href="https://pypi.org/project/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/v/prowler.svg"></a>
<a href="https://pypi.python.org/pypi/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/pyversions/prowler.svg"></a>
<a href="https://pypistats.org/packages/prowler"><img alt="PyPI Downloads" src="https://img.shields.io/pypi/dw/prowler.svg?label=downloads"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/toniblyx/prowler"></a>
<a href="https://gallery.ecr.aws/prowler-cloud/prowler"><img width="120" height=19" alt="AWS ECR Gallery" src="https://user-images.githubusercontent.com/3985464/151531396-b6535a68-c907-44eb-95a1-a09508178616.png"></a>
<a href="https://codecov.io/gh/prowler-cloud/prowler"><img src="https://codecov.io/gh/prowler-cloud/prowler/graph/badge.svg?token=OflBGsdpDl"/></a>
</p>
<p align="center">
<a href="https://github.com/prowler-cloud/prowler/releases"><img alt="Version" src="https://img.shields.io/github/v/release/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler/releases"><img alt="Version" src="https://img.shields.io/github/release-date/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler"><img alt="Contributors" src="https://img.shields.io/github/contributors-anon/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler/issues"><img alt="Issues" src="https://img.shields.io/github/issues/prowler-cloud/prowler"></a>
<a href="https://github.com/prowler-cloud/prowler"><img alt="License" src="https://img.shields.io/github/license/prowler-cloud/prowler"></a>
<a href="https://twitter.com/ToniBlyx"><img alt="Twitter" src="https://img.shields.io/twitter/follow/toniblyx?style=social"></a>
<a href="https://twitter.com/prowlercloud"><img alt="Twitter" src="https://img.shields.io/twitter/follow/prowlercloud?style=social"></a>
</p>
<hr>
<p align="center">
<img align="center" src="/docs/img/prowler-cli-quick.gif" width="100%" height="100%">
</p>
# Prowler - AWS Security Tool
# Description
[![Discord Shield](https://discordapp.com/api/guilds/807208614288818196/widget.png?style=shield)](https://discord.gg/UjSMCVnxSB)
[![Docker Pulls](https://img.shields.io/docker/pulls/toniblyx/prowler)](https://hub.docker.com/r/toniblyx/prowler)
[![aws-ecr](https://user-images.githubusercontent.com/3985464/141164269-8cfeef0f-6b62-4c99-8fe9-4537986a1613.png)](https://gallery.ecr.aws/o4g1s5r6/prowler)
**Prowler** is an open-source security tool designed to assess and enforce security best practices across AWS, Azure, Google Cloud, and Kubernetes. It supports tasks such as security audits, incident response, continuous monitoring, system hardening, forensic readiness, and remediation processes.
## Table of Contents
Prowler includes hundreds of built-in controls to ensure compliance with standards and frameworks, including:
- [Description](#description)
- [Features](#features)
- [High level architecture](#high-level-architecture)
- [Requirements and Installation](#requirements-and-installation)
- [Usage](#usage)
- [Screenshots](#screenshots)
- [Advanced Usage](#advanced-usage)
- [Security Hub integration](#security-hub-integration)
- [CodeBuild deployment](#codebuild-deployment)
- [Whitelist/allowlist or remove FAIL from resources](#whitelist-or-allowlist-or-remove-a-fail-from-resources)
- [Fix](#how-to-fix-every-fail)
- [Troubleshooting](#troubleshooting)
- [Extras](#extras)
- [Forensics Ready Checks](#forensics-ready-checks)
- [GDPR Checks](#gdpr-checks)
- [HIPAA Checks](#hipaa-checks)
- [Trust Boundaries Checks](#trust-boundaries-checks)
- [Multi Account and Continuous Monitoring](util/org-multi-account/README.md)
- [Add Custom Checks](#add-custom-checks)
- [Third Party Integrations](#third-party-integrations)
- [Full list of checks and groups](/LIST_OF_CHECKS_AND_GROUPS.md)
- [License](#license)
- **Industry Standards:** CIS, NIST 800, NIST CSF, and CISA
- **Regulatory Compliance and Governance:** RBI, FedRAMP, and PCI-DSS
- **Frameworks for Sensitive Data and Privacy:** GDPR, HIPAA, and FFIEC
- **Frameworks for Organizational Governance and Quality Control:** SOC2 and GXP
- **AWS-Specific Frameworks:** AWS Foundational Technical Review (FTR) and AWS Well-Architected Framework (Security Pillar)
- **National Security Standards:** ENS (Spanish National Security Scheme)
- **Custom Security Frameworks:** Tailored to your needs
## Description
## Prowler App
Prowler is a command line tool that helps you with AWS security assessment, auditing, hardening and incident response.
Prowler App is a web-based application that simplifies running Prowler across your cloud provider accounts. It provides a user-friendly interface to visualize the results and streamline your security assessments.
It follows guidelines of the CIS Amazon Web Services Foundations Benchmark (49 checks) and has more than 100 additional checks, including ones related to GDPR, HIPAA, PCI-DSS, ISO-27001, FFIEC, SOC2 and others.
![Prowler App](docs/products/img/overview.png)
Read more about [CIS Amazon Web Services Foundations Benchmark v1.2.0 - 05-23-2018](https://d0.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf)
>For more details, refer to the [Prowler App Documentation](https://docs.prowler.com/projects/prowler-open-source/en/latest/#prowler-app-installation)
## Features
## Prowler CLI
More than 200 checks covering security best practices across all AWS regions and most AWS services, related to the following groups:
```console
prowler <provider>
```
![Prowler CLI Execution](docs/img/short-display.png)
- Identity and Access Management [group1]
- Logging [group2]
- Monitoring [group3]
- Networking [group4]
- CIS Level 1 [cislevel1]
- CIS Level 2 [cislevel2]
- Extras *see Extras section* [extras]
- Forensics related group of checks [forensics-ready]
- GDPR [gdpr] Read more [here](#gdpr-checks)
- HIPAA [hipaa] Read more [here](#hipaa-checks)
- Trust Boundaries [trustboundaries] Read more [here](#trust-boundaries-checks)
- Secrets
- Internet exposed resources
- EKS-CIS
- Also includes PCI-DSS, ISO-27001, FFIEC, SOC2, ENS (Esquema Nacional de Seguridad of Spain).
- AWS FTR [FTR] Read more [here](#aws-ftr-checks)
With Prowler you can:
## Prowler Dashboard
- Get a direct colorful or monochrome report
- An HTML, CSV, JUNIT, JSON or JSON ASFF (Security Hub) format report
- Send findings directly to Security Hub
- Run specific checks and groups or create your own
- Check multiple AWS accounts in parallel or sequentially
- And more! Read examples below
```console
prowler dashboard
```
![Prowler Dashboard](docs/products/img/dashboard.png)
## High level architecture
# Prowler at a Glance
> [!Tip]
> For the most accurate and up-to-date information about checks, services, frameworks, and categories, visit [**Prowler Hub**](https://hub.prowler.com).
You can run Prowler from your workstation, an EC2 instance, Fargate or any other container, Codebuild, CloudShell and Cloud9.
![Prowler high level architecture](https://user-images.githubusercontent.com/3985464/109143232-1488af80-7760-11eb-8d83-726790fda592.jpg)
## Requirements and Installation
| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) |
|---|---|---|---|---|
| AWS | 576 | 82 | 36 | 10 |
| GCP | 79 | 13 | 10 | 3 |
| Azure | 162 | 19 | 11 | 4 |
| Kubernetes | 83 | 7 | 5 | 7 |
| GitHub | 17 | 2 | 1 | 0 |
| M365 | 70 | 7 | 3 | 2 |
| NHN (Unofficial) | 6 | 2 | 1 | 0 |
Prowler is written in bash and uses the AWS CLI underneath. It works on Linux and macOS, or on Windows with Cygwin or virtualization. It also requires `jq` and `detect-secrets` to work properly.
> [!Note]
> The numbers in the table are updated periodically.
- Make sure the latest version of the AWS CLI is installed. Prowler works with either v1 or v2, however _the latest v2 is recommended if using new regions, since they require an STS v2 token_. The other components below are also needed, with Python pip already installed.
- For Amazon Linux (`yum` based Linux distributions and AWS CLI v2):
```
sudo yum update -y
sudo yum remove -y awscli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
sudo yum install -y python3 jq git
sudo pip3 install detect-secrets==1.0.3
git clone https://github.com/toniblyx/prowler
```
- For Ubuntu Linux (`apt` based Linux distributions and AWS CLI v2):
```
sudo apt update
sudo apt install python3 python3-pip jq git zip
pip install detect-secrets==1.0.3
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
git clone https://github.com/toniblyx/prowler
```
> NOTE: the Yelp version of detect-secrets is no longer supported; the fork from IBM is maintained now. Use the one mentioned below, or pin the specific Yelp version 1.0.3 to make sure it works as expected (`pip install detect-secrets==1.0.3`):
```sh
pip install "git+https://github.com/ibm/detect-secrets.git@master#egg=detect-secrets"
```
> [!Note]
> Use the following commands to list Prowler's available checks, services, compliance frameworks, and categories:
> - `prowler <provider> --list-checks`
> - `prowler <provider> --list-services`
> - `prowler <provider> --list-compliance`
> - `prowler <provider> --list-categories`
The AWS CLI can also be installed using other methods; refer to the official documentation for more details: <https://aws.amazon.com/cli/>. Note that `detect-secrets` has to be installed using `pip` or `pip3`.
# 💻 Installation
- Once the Prowler repository is cloned, change into the folder and run it:
## Prowler App
```sh
cd prowler
./prowler
```
Prowler App offers flexible installation methods tailored to various environments:
- Since Prowler uses the AWS CLI under the hood, you can follow any authentication method as described [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-precedence). Make sure you have properly configured your AWS CLI with a valid Access Key and Region, or declare the AWS variables properly (or use an instance profile/role):
> For detailed instructions on using Prowler App, refer to the [Prowler App Usage Guide](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/prowler-app/).
```sh
aws configure
```
### Docker Compose
or
**Requirements**
```sh
export AWS_ACCESS_KEY_ID="ASXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
export AWS_SESSION_TOKEN="XXXXXXXXX"
```
* `Docker Compose` installed: https://docs.docker.com/compose/install/.
- Those credentials must be associated with a user or role that has proper permissions to do all checks. To make sure, attach the AWS managed policies SecurityAudit and ViewOnlyAccess to the user or role being used. The policy ARNs are:
**Commands**
```sh
arn:aws:iam::aws:policy/SecurityAudit
arn:aws:iam::aws:policy/job-function/ViewOnlyAccess
```
``` console
curl -LO https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/master/docker-compose.yml
curl -LO https://raw.githubusercontent.com/prowler-cloud/prowler/refs/heads/master/.env
docker compose up -d
```
> Additional permissions needed: to make sure Prowler can scan all services included in the group *Extras*, make sure you also attach the custom policy [prowler-additions-policy.json](https://github.com/toniblyx/prowler/blob/master/iam/prowler-additions-policy.json) to the role you are using. If you want Prowler to send findings to [AWS Security Hub](https://aws.amazon.com/security-hub), make sure you also attach the custom policy [prowler-security-hub.json](https://github.com/toniblyx/prowler/blob/master/iam/prowler-security-hub.json).
## Usage
1. Run the `prowler` command without options (it will use your environment-variable credentials if they exist, or default to the `~/.aws/credentials` file, and run checks over all regions when needed; the default region is us-east-1):
```sh
./prowler
```
Use `-l` to list all available checks and the groups (sections) that reference them. To list all groups use `-L`, and to list the contents of a group use `-l -g <groupname>`.
If you want to avoid installing dependencies, run it using Docker:
```sh
docker run -ti --rm --name prowler --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY --env AWS_SESSION_TOKEN toniblyx/prowler:latest
```
In case you want to get the reports created by Prowler, use the Docker volume option as in the example below:
```sh
docker run -ti --rm -v /your/local/output:/prowler/output --name prowler --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY --env AWS_SESSION_TOKEN toniblyx/prowler:latest -g hipaa -M csv,json,html
```
1. For a custom AWS CLI profile and region, use the following (it will use your custom profile and run checks over all regions when needed):
```sh
./prowler -p custom-profile -r us-east-1
```
1. For a single check use option `-c`:
```sh
./prowler -c check310
```
With Docker:
```sh
docker run -ti --rm --name prowler --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY --env AWS_SESSION_TOKEN toniblyx/prowler:latest "-c check310"
```
or multiple checks separated by comma:
```sh
./prowler -c check310,check722
```
or all checks but some of them:
```sh
./prowler -E check42,check43
```
or for custom profile and region:
```sh
./prowler -p custom-profile -r us-east-1 -c check11
```
or for a group of checks use group name:
```sh
./prowler -g group1 # for iam related checks
```
or exclude some checks in the group:
```sh
./prowler -g group4 -E check42,check43
```
Valid check numbers are based on the AWS CIS Benchmark guide, so 1.1 is check11 and 3.10 is check310
### Regions
By default, Prowler scans all opt-in regions available, which might take a long time depending on the number of resources and regions in use. The same applies to GovCloud and China regions. See the Advanced Usage section below for examples.
Prowler has two parameters related to regions: `-r`, used to query AWS service API endpoints (it uses `us-east-1` by default and is required for GovCloud or China), and `-f`, which filters the regions you want to scan. For example, to scan only Dublin use `-f eu-west-1`; to scan Dublin and Ohio use `-f 'eu-west-1 us-east-2'`, note the single quotes and the space between regions.
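For instance, a quick sketch of both filter cases (the region values are just examples):
```sh
# Scan only Dublin
./prowler -f eu-west-1
# Scan Dublin and Ohio (note the quotes and the space between regions)
./prowler -f 'eu-west-1 us-east-2'
```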
## Screenshots
- Sample screenshot of default console report first lines of command `./prowler`:
<img width="900" src="https://user-images.githubusercontent.com/3985464/141444529-84640bed-be0b-4112-80a2-2a43e3ebf53f.png">
- Sample screenshot of the html output `-M html`:
<img width="900" alt="Prowler html" src="https://user-images.githubusercontent.com/3985464/141443976-41d32cc2-533d-405a-92cb-affc3995d6ec.png">
- Sample screenshot of the Quicksight dashboard, see [quicksight-security-dashboard.workshop.aws](https://quicksight-security-dashboard.workshop.aws/):
<img width="900" alt="Prowler with Quicksight" src="https://user-images.githubusercontent.com/3985464/128932819-0156e838-286d-483c-b953-fda68a325a3d.png">
- Sample screenshot of the junit-xml output in CodeBuild `-M junit-xml`:
<img width="900" src="https://user-images.githubusercontent.com/3985464/113942824-ca382b00-9801-11eb-84e5-d7731548a7a9.png">
### Save your reports
1. If you want to save your report for later analysis, there are different ways, natively supported: text, mono, csv, json, json-asff, junit-xml and html (see the note below for more info):
```sh
./prowler -M csv
```
or with multiple formats at the same time:
```sh
./prowler -M csv,json,json-asff,html
```
or just a group of checks in multiple formats:
```sh
./prowler -g gdpr -M csv,json,json-asff
```
or if you want a sorted and dynamic HTML report do:
```sh
./prowler -M html
```
Now `-M` creates a file inside the prowler `output` directory named `prowler-output-AWSACCOUNTID-YYYYMMDDHHMMSS.format`. You don't have to specify anything else, no pipes, no redirects.
or just saving the output to a file like below:
```sh
./prowler -M mono > prowler-report.txt
```
To generate JUnit report files, include the junit-xml format. This can be combined with any other format. Files are written inside a prowler root directory named `junit-reports`:
```sh
./prowler -M text,junit-xml
```
>Note about output formats to use with `-M`: "text" is the default one with colors, "mono" is like default one but monochrome, "csv" is comma separated values, "json" plain basic json (without comma between lines) and "json-asff" is also json with Amazon Security Finding Format that you can ship to Security Hub using `-S`.
or save your report in an S3 bucket (this only works for text or mono. For csv, json or json-asff it has to be copied afterwards):
```sh
./prowler -M mono | aws s3 cp - s3://bucket-name/prowler-report.txt
```
When generating multiple formats and running using Docker, to retrieve the reports, bind a local directory to the container, e.g.:
```sh
docker run -ti --rm --name prowler --volume "$(pwd)":/prowler/output --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY --env AWS_SESSION_TOKEN toniblyx/prowler:latest -M csv,json
```
1. To perform an assessment based on CIS Profile Definitions you can use cislevel1 or cislevel2 with `-g` flag, more information about this [here, page 8](https://d0.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf):
```sh
./prowler -g cislevel1
```
1. If you want to run Prowler to check multiple AWS accounts in parallel (up to 4 simultaneously via `xargs -P 4`), use the example below; you may want to read the Advanced Usage section below to do the same assuming a role:
```sh
grep -E '^\[([0-9A-Za-z_-]+)\]' ~/.aws/credentials | tr -d '][' | shuf | \
xargs -n 1 -L 1 -I @ -r -P 4 ./prowler -p @ -M csv 2> /dev/null >> all-accounts.csv
```
1. For help about usage run:
```
./prowler -h
```
## Advanced Usage
### Assume Role:
Prowler uses the AWS CLI underneath so it uses the same authentication methods. However, there are a few ways to run Prowler against multiple accounts using the IAM Assume Role feature, depending on each use case. You can set up your custom profile inside `~/.aws/config` with all the needed information about the role to assume, then call it with `./prowler -p your-custom-profile`. Alternatively, you can use `-A 123456789012` and `-R RemoteRoleToAssume`, and Prowler will get temporary credentials using `aws sts assume-role`, set them up as environment variables and run against that given account. To create a role to assume in multiple accounts more easily, either as a CFN Stack or StackSet, look at [this CloudFormation template](iam/create_role_to_assume_cfn.yaml) and adapt it.
```sh
./prowler -A 123456789012 -R ProwlerRole
```
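A minimal sketch of the profile-based alternative in `~/.aws/config` (the profile name, account ID and role name are illustrative):
```sh
cat >> ~/.aws/config <<'EOF'
[profile your-custom-profile]
role_arn = arn:aws:iam::123456789012:role/ProwlerRole
source_profile = default
EOF
./prowler -p your-custom-profile
```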
> Containers are built for `linux/amd64`.
### Configuring Your Workstation for Prowler App
If your workstation's architecture is incompatible, you can resolve this by:
- **Setting the environment variable**: `DOCKER_DEFAULT_PLATFORM=linux/amd64`
- **Using the following flag in your Docker command**: `--platform linux/amd64`
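For example, assuming an arm64 workstation that needs the amd64 images:
```console
DOCKER_DEFAULT_PLATFORM=linux/amd64 docker compose up -d
```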
> Once configured, access the Prowler App at http://localhost:3000. Sign up using your email and password to get started.
### Common Issues with Docker Pull Installation
> [!Note]
If you want to use AWS role assumption (e.g., with the "Connect assuming IAM Role" option), you may need to mount your local `.aws` directory into the container as a volume (e.g., `- "${HOME}/.aws:/home/prowler/.aws:ro"`). There are several ways to configure credentials for Docker containers. See the [Troubleshooting](./docs/troubleshooting.md) section for more details and examples.
You can find more information in the [Troubleshooting](./docs/troubleshooting.md) section.
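As a minimal sketch, assuming a Compose service named `api` runs the scans (check your `docker-compose.yml` for the actual service name), an override file can add the read-only mount:
```console
cat > docker-compose.override.yml <<'EOF'
services:
  api:
    volumes:
      - "${HOME}/.aws:/home/prowler/.aws:ro"
EOF
docker compose up -d
```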
### From GitHub
**Requirements**
* `git` installed.
* `poetry` v2 installed: [poetry installation](https://python-poetry.org/docs/#installation).
* `npm` installed: [npm installation](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).
* `Docker Compose` installed: https://docs.docker.com/compose/install/.
**Commands to run the API**
``` console
git clone https://github.com/prowler-cloud/prowler
cd prowler/api
poetry install
eval $(poetry env activate)
set -a
source .env
docker compose up postgres valkey -d
cd src/backend
python manage.py migrate --database admin
gunicorn -c config/guniconf.py config.wsgi:application
```
> [!IMPORTANT]
> As of Poetry v2.0.0, the `poetry shell` command has been deprecated. Use `poetry env activate` instead for environment activation.
>
> If your Poetry version is below v2.0.0, continue using `poetry shell` to activate your environment.
> For further guidance, refer to the Poetry Environment Activation Guide https://python-poetry.org/docs/managing-environments/#activating-the-environment.
> After completing the setup, access the API documentation at http://localhost:8080/api/v1/docs.
**Commands to run the API Worker**
``` console
git clone https://github.com/prowler-cloud/prowler
cd prowler/api
poetry install
eval $(poetry env activate)
set -a
source .env
cd src/backend
python -m celery -A config.celery worker -l info -E
```
```sh
./prowler -A 123456789012 -R ProwlerRole -I 123456
```
**Commands to run the API Scheduler**
> *NOTE 1 about Session Duration*: By default it gets credentials valid for 1 hour (3600 seconds). Depending on the amount of checks you run and the size of your infrastructure, Prowler may require more than 1 hour to finish. Use option `-T <seconds>` to allow up to 12h (43200 seconds). To allow more than 1h you need to modify *"Maximum CLI/API session duration"* for that particular role, read more [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session).
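For example, to request the maximum 12-hour session (assuming the role's maximum session duration has been raised accordingly):
```sh
./prowler -A 123456789012 -R ProwlerRole -T 43200
```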
``` console
git clone https://github.com/prowler-cloud/prowler
cd prowler/api
poetry install
eval $(poetry env activate)
set -a
source .env
cd src/backend
python -m celery -A config.celery beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
```
> *NOTE 2 about Session Duration*: Bear in mind that if you are using roles assumed by role chaining there is a hard limit of 1 hour, so consider not using role chaining if possible. Read more about that in footnote 1 below the table [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html).
For example, if you want to get only the fails in CSV format from all checks regarding RDS, without banner, from the AWS account 123456789012, assuming the role RemoteRoleToAssume with a fixed session duration of 1h:
```sh
./prowler -A 123456789012 -R RemoteRoleToAssume -T 3600 -b -M csv -q -g rds
```
or with a given External ID:
```sh
./prowler -A 123456789012 -R RemoteRoleToAssume -T 3600 -I 123456 -b -M csv -q -g rds
```
**Commands to run the UI**
### Assume Role across all accounts in AWS Organizations or just a list of accounts:
``` console
git clone https://github.com/prowler-cloud/prowler
cd prowler/ui
npm install
npm run build
npm start
```
If you want to run Prowler, or just a check or a group, across all accounts of an AWS Organization, you can do this:
First get a list of accounts that are not suspended:
```
ACCOUNTS_IN_ORGS=$(aws organizations list-accounts --query 'Accounts[?Status==`ACTIVE`].Id' --output text)
```
Then run Prowler assuming a role (the same in all member accounts) for each account; in this example it just runs one particular check:
```
for accountId in $ACCOUNTS_IN_ORGS; do ./prowler -A $accountId -R RemoteRoleToAssume -c extra79; done
```
Using the same for loop, you can scan a list of accounts with a variable like `ACCOUNTS_LIST='11111111111 2222222222 333333333'`, as sketched below.
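A sketch of that variant (the account IDs are illustrative and the role is reused from the example above):
```
# Scan a fixed list of accounts instead of the whole organization
ACCOUNTS_LIST='11111111111 2222222222 333333333'
for accountId in $ACCOUNTS_LIST; do ./prowler -A "$accountId" -R RemoteRoleToAssume -c extra79; done
```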
### GovCloud
Prowler runs in GovCloud regions as well. To make sure it points to the right API endpoint use `-r` with either `us-gov-west-1` or `us-gov-east-1`. If no region filter is used it will look for resources in both GovCloud regions by default:
```sh
./prowler -r us-gov-west-1
```
> For Security Hub integration see below in Security Hub section.
### Custom folder for custom checks
Flag `-x /my/own/checks` will include any check in that particular directory. To see how to write checks see [Add Custom Checks](#add-custom-checks) section.
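For instance, to load checks from a custom directory and run one of them (the check name here is illustrative):
```sh
./prowler -x /my/own/checks -c extra9999
```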
### Show or log only FAILs
In order to remove noise and get only FAIL findings there is a `-q` flag that makes Prowler show and log only FAILs.
It can be combined with any other option.
Note that it will still show WARNINGs when a resource is excluded, just to take into consideration.
```sh
# -q option combined with -M csv -b
./prowler -q -M csv -b
```
> Once configured, access the Prowler App at http://localhost:3000. Sign up using your email and password to get started.
### Set the entropy limit for detect-secrets
## Prowler CLI
### Pip package
Prowler CLI is available as a project in [PyPI](https://pypi.org/project/prowler-cloud/). Consequently, it can be installed using pip with Python >3.9.1, <3.13:
Set the entropy limit for high-entropy base64 strings with the environment variable `BASE64_LIMIT`. The value must be between 0.0 and 8.0; the default is 4.5.
Set the entropy limit for high-entropy hex strings with the environment variable `HEX_LIMIT`. The value must be between 0.0 and 8.0; the default is 3.0.
```console
pip install prowler
prowler -v
```
```sh
export BASE64_LIMIT=4.5
export HEX_LIMIT=3.0
```
>For further guidance, refer to [https://docs.prowler.com](https://docs.prowler.com/projects/prowler-open-source/en/latest/#prowler-cli-installation)
### Run Prowler using AWS CloudShell
### Containers
An easy way to run Prowler to scan your account is using AWS CloudShell. Read more and learn how to do it [here](util/cloudshell/README.md).
**Available Versions of Prowler CLI**
## Security Hub integration
The following versions of Prowler CLI are available, depending on your requirements:
Since October 30th 2020 (version v2.3RC5), Prowler natively supports, as an **official integration**, sending findings to [AWS Security Hub](https://aws.amazon.com/security-hub). This integration allows Prowler to import its findings to AWS Security Hub. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Firewall Manager, as well as from AWS Partner solutions and from Prowler, for free.
- `latest`: Synchronizes with the `master` branch. Note that this version is not stable.
- `v4-latest`: Synchronizes with the `v4` branch. Note that this version is not stable.
- `v3-latest`: Synchronizes with the `v3` branch. Note that this version is not stable.
- `<x.y.z>` (release): Stable releases corresponding to specific versions. You can find the complete list of releases [here](https://github.com/prowler-cloud/prowler/releases).
- `stable`: Always points to the latest release.
- `v4-stable`: Always points to the latest release for v4.
- `v3-stable`: Always points to the latest release for v3.
Before sending findings to Security Hub, you need to perform the following steps:
1. Since Security Hub is a region-based service, enable it in the region or regions you require. Use the AWS Management Console, or the AWS CLI with this command if you have enough permissions:
- `aws securityhub enable-security-hub --region <region>`.
2. Enable Prowler as a partner integration. Use the AWS Management Console, or the AWS CLI with this command if you have enough permissions:
- `aws securityhub enable-import-findings-for-product --region <region> --product-arn arn:aws:securityhub:<region>::product/prowler/prowler` (change region also inside the ARN).
- Using the AWS Management Console:
![Screenshot 2020-10-29 at 10 26 02 PM](https://user-images.githubusercontent.com/3985464/97634660-5ade3400-1a36-11eb-9a92-4a45cc98c158.png)
3. As mentioned in section "Custom IAM Policy", to allow Prowler to import its findings to AWS Security Hub you need to add the policy below to the role or user running Prowler:
- [iam/prowler-security-hub.json](iam/prowler-security-hub.json)
The container images are available here:
- Prowler CLI:
- [DockerHub](https://hub.docker.com/r/prowlercloud/prowler/tags)
- [AWS Public ECR](https://gallery.ecr.aws/prowler-cloud/prowler)
- Prowler App:
- [DockerHub - Prowler UI](https://hub.docker.com/r/prowlercloud/prowler-ui/tags)
- [DockerHub - Prowler API](https://hub.docker.com/r/prowlercloud/prowler-api/tags)
Once it is enabled, it is as simple as running the command below (for all regions):
### From GitHub
Python >3.9.1, <3.13 is required with pip and Poetry:
``` console
git clone https://github.com/prowler-cloud/prowler
cd prowler
eval $(poetry env activate)
poetry install
python prowler-cli.py -v
```
```sh
./prowler -M json-asff -S
```
> [!IMPORTANT]
> To clone Prowler on Windows, configure Git to support long file paths by running the following command: `git config core.longpaths true`.
or for only one filtered region like eu-west-1:
```sh
./prowler -M json-asff -q -S -f eu-west-1
```
> Note 1: It is recommended to send only fails to Security Hub; that is possible by adding `-q` to the command.
> [!IMPORTANT]
> As of Poetry v2.0.0, the `poetry shell` command has been deprecated. Use `poetry env activate` instead for environment activation.
>
> If your Poetry version is below v2.0.0, continue using `poetry shell` to activate your environment.
> For further guidance, refer to the Poetry Environment Activation Guide https://python-poetry.org/docs/managing-environments/#activating-the-environment.
> Note 2: Since Prowler performs checks in all regions by default, you may need to filter by region when running the Security Hub integration, as shown in the example above. Remember to enable Security Hub in the region or regions you need by calling `aws securityhub enable-security-hub --region <region>`, and run Prowler with the option `-f <region>` (if no region is given it will try to push findings to the hubs of all regions).
# ✏️ High level architecture
> Note 3: to have updated findings in Security Hub you have to run Prowler periodically, once a day or every few hours.
## Prowler App
**Prowler App** is composed of three key components:
Once you send findings for the first time you will be able to see Prowler findings in the Findings section:
- **Prowler UI**: A web-based interface, built with Next.js, providing a user-friendly experience for executing Prowler scans and visualizing results.
- **Prowler API**: A backend service, developed with Django REST Framework, responsible for running Prowler scans and storing the generated results.
- **Prowler SDK**: A Python SDK designed to extend the functionality of the Prowler CLI for advanced capabilities.
![Screenshot 2020-10-29 at 10 29 05 PM](https://user-images.githubusercontent.com/3985464/97634676-66c9f600-1a36-11eb-9341-70feb06f6331.png)
![Prowler App Architecture](docs/img/prowler-app-architecture.png)
### Security Hub in GovCloud regions
## Prowler CLI
To use the Prowler and Security Hub integration in GovCloud there is an additional requirement: `-r` is needed to point the API queries to the right API endpoint. Here is a sample command that sends only failed findings to Security Hub in region `us-gov-west-1`:
```
./prowler -r us-gov-west-1 -f us-gov-west-1 -S -M csv,json-asff -q
```
**Running Prowler**
### Security Hub in China regions
Prowler can be executed across various environments, offering flexibility to meet your needs. It can be run from:
To use the Prowler and Security Hub integration in China regions there is an additional requirement: `-r` is needed to point the API queries to the right API endpoint. Here is a sample command that sends only failed findings to Security Hub in region `cn-north-1`:
```
./prowler -r cn-north-1 -f cn-north-1 -q -S -M csv,json-asff
```
- Your own workstation
## CodeBuild deployment
- A Kubernetes Job
Whether you want to run Prowler once or on a schedule, this template makes it pretty straightforward. It will create a CodeBuild environment and run Prowler directly, leaving all reports in a bucket and also creating a report inside CodeBuild based on the JUnit output from Prowler. Scheduling can be cron-based like `cron(0 22 * * ? *)` or rate-based like `rate(5 hours)`, since CloudWatch Events rules (or EventBridge) are used here.
- Google Compute Engine
The Cloud Formation template that helps you doing that is [here](https://github.com/toniblyx/prowler/blob/master/util/codebuild/codebuild-prowler-audit-account-cfn.yaml).
- Azure Virtual Machines (VMs)
> This is a simple solution to monitor one account. For multiples accounts see [Multi Account and Continuous Monitoring](util/org-multi-account/README.md).
- Amazon EC2 instances
## Whitelist or allowlist or remove a fail from resources
- AWS Fargate or other container platforms
Sometimes you may find resources that are intentionally configured in a certain way that may be a bad practice but is acceptable in your case, for example an S3 bucket open to the internet hosting a website, or a security group with an open port needed in your use case. You can use `-w whitelist_sample.txt` and add your resources as `checkID:resourcename`, as in this command:
- CloudShell
```
./prowler -w whitelist_sample.txt
```
And many more environments.
Whitelist option works along with other options and adds a `WARNING` instead of `INFO`, `PASS` or `FAIL` to any output format except for `json-asff`.
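Following the `checkID:resourcename` convention above, a whitelist file might look like this (the check IDs and resource names are illustrative):
```
extra734:my-public-website-bucket
check42:sg-0123456789abcdef0
```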
![Architecture](docs/img/architecture.png)
## How to fix every FAIL
# Deprecations from v3
Check your report and fix the issues following all specific guidelines per check in <https://d0.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf>
## General
- `Allowlist` now is called `Mutelist`.
- The `--quiet` option has been deprecated. Use the `--status` flag to filter findings based on their status: PASS, FAIL, or MANUAL.
- All findings with an `INFO` status have been reclassified as `MANUAL`.
- The CSV output format is standardized across all providers.
## Troubleshooting
**Deprecated Output Formats**
### STS expired token
The following formats are now deprecated:
- Native JSON has been replaced with JSON in [OCSF](https://schema.ocsf.io/) v1.1.0 format, which is standardized across all providers.
If you are using an STS token for AWS-CLI and your session is expired you probably get this error:
## AWS
```sh
A client error (ExpiredToken) occurred when calling the GenerateCredentialReport operation: The security token included in the request is expired
```
**AWS Flag Deprecation**
To fix it, please renew your token by authenticating again to the AWS API; see the next section below if you use MFA.
The flag --sts-endpoint-region has been deprecated due to the adoption of AWS STS regional tokens.
### Run Prowler with MFA protected credentials
**Sending FAIL Results to AWS Security Hub**
To run Prowler using a profile that requires MFA you just need to get the session token beforehand. Just make sure you use this command:
- To send only FAILS to AWS Security Hub, use one of the following options: `--send-sh-only-fails` or `--security-hub --status FAIL`.
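For instance, a minimal sketch combining those flags with the `prowler <provider>` syntax shown earlier:
```console
prowler aws --security-hub --send-sh-only-fails
```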
```sh
aws --profile <YOUR_AWS_PROFILE> sts get-session-token --duration 129600 --serial-number <ARN_OF_MFA> --token-code <MFA_TOKEN_CODE> --output text
```
Once you get your token you can export it as environment variables:
```sh
export AWS_PROFILE=YOUR_AWS_PROFILE
export AWS_SESSION_TOKEN=YOUR_NEW_TOKEN
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET
export AWS_ACCESS_KEY_ID=YOUR_KEY
```
or manually set up your `~/.aws/credentials` file properly.
There are some helpful tools to save time in this process, like [aws-mfa-script](https://github.com/asagage/aws-mfa-script) or [aws-cli-mfa](https://github.com/sweharris/aws-cli-mfa).
### AWS Managed IAM Policies
[ViewOnlyAccess](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_view-only-user)
- Use case: This user can view a list of AWS resources and basic metadata in the account across all services. The user cannot read resource content or metadata that goes beyond the quota and list information for resources.
- Policy description: This policy grants List*, Describe*, Get*, View*, and Lookup* access to resources for most AWS services. To see what actions this policy includes for each service, see [ViewOnlyAccess Permissions](https://console.aws.amazon.com/iam/home#policies/arn:aws:iam::aws:policy/job-function/ViewOnlyAccess)
[SecurityAudit](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_security-auditor)
- Use case: This user monitors accounts for compliance with security requirements. This user can access logs and events to investigate potential security breaches or potential malicious activity.
- Policy description: This policy grants permissions to view configuration data for many AWS services and to review their logs. To see what actions this policy includes for each service, see [SecurityAudit Permissions](https://console.aws.amazon.com/iam/home#policies/arn:aws:iam::aws:policy/SecurityAudit)
### Custom IAM Policy
[Prowler-Additions-Policy](iam/prowler-additions-policy.json)
Some new and specific checks require Prowler to have more permissions than SecurityAudit and ViewOnlyAccess to work properly. In addition to the AWS managed policies "SecurityAudit" and "ViewOnlyAccess", the user/role you use for checks may need to be granted a custom policy with a few more read-only permissions (mostly to support additional services). Here is an example policy with the additional rights, "Prowler-Additions-Policy" (see the bootstrap script below to set it up):
- [iam/prowler-additions-policy.json](iam/prowler-additions-policy.json)
[Prowler-Security-Hub Policy](iam/prowler-security-hub.json)
Allows Prowler to import its findings to [AWS Security Hub](https://aws.amazon.com/security-hub). More information in [Security Hub integration](#security-hub-integration):
- [iam/prowler-security-hub.json](iam/prowler-security-hub.json)
### Bootstrap Script
Quick bash script to set up a "prowler" IAM user in a "Prowler" group with the "SecurityAudit" and "ViewOnlyAccess" managed policies and the other required permissions (including "Prowler-Additions-Policy"). To run the script below, you need a user with administrative permissions; set `AWS_DEFAULT_PROFILE` to use that account:
```sh
export AWS_DEFAULT_PROFILE=default
export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' | tr -d '"')
aws iam create-group --group-name Prowler
aws iam create-policy --policy-name Prowler-Additions-Policy --policy-document file://$(pwd)/iam/prowler-additions-policy.json
aws iam attach-group-policy --group-name Prowler --policy-arn arn:aws:iam::aws:policy/SecurityAudit
aws iam attach-group-policy --group-name Prowler --policy-arn arn:aws:iam::aws:policy/job-function/ViewOnlyAccess
aws iam attach-group-policy --group-name Prowler --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/Prowler-Additions-Policy
aws iam create-user --user-name prowler
aws iam add-user-to-group --user-name prowler --group-name Prowler
aws iam create-access-key --user-name prowler
unset ACCOUNT_ID AWS_DEFAULT_PROFILE
```
The `aws iam create-access-key` command will output the secret access key and the key id; keep these somewhere safe, and add them to `~/.aws/credentials` with an appropriate profile name to use them with Prowler. This is the only time the secret key will be shown. If you lose it, you will need to generate a replacement.
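For instance, a hypothetical profile entry (the key values shown are placeholders):
```sh
cat >> ~/.aws/credentials <<'EOF'
[prowler]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
EOF
./prowler -p prowler
```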
> [This CloudFormation template](iam/create_role_to_assume_cfn.yaml) may also help you on that task.
## Extras
We are adding additional checks to improve the information gathered from each account. These checks are outside the scope of the CIS benchmark for AWS, but we consider them very helpful for getting to know each AWS account's setup and finding issues in it.
Some of these checks look for publicly facing resources that may not actually be fully public due to other layered controls like S3 bucket policies, security groups or network ACLs.
To list all existing checks in the extras group run the command below:
```sh
./prowler -l -g extras
```
>There are some checks not included in that list; they are experimental or take a long time to run, like `extra759` and `extra760` (which search for secrets in Lambda function variables and code).
To check all extras in one command:
```sh
./prowler -g extras
```
or to run just one of the checks:
```sh
./prowler -c extraNUMBER
```
or to run multiple extras in one go:
```sh
./prowler -c extraNumber,extraNumber
```
# 📖 Documentation
## Forensics Ready Checks
**Documentation Resources**
With this group of checks, Prowler looks at whether each service with logging or audit capabilities has them enabled, to ensure all needed evidence is recorded and collected for an eventual digital forensics investigation in case of an incident. You can also see all groups with `./prowler -L`. The list of checks can be seen in the group file at:
For installation instructions, usage details, tutorials, and the Developer Guide, visit https://docs.prowler.com/
[groups/group8_forensics](groups/group8_forensics)
# 📃 License
The `forensics-ready` group of checks uses existing and extra checks. To get a forensics readiness report, run this command:
**Prowler License Information**
```sh
./prowler -g forensics-ready
```
Prowler is licensed under the Apache License 2.0, as indicated in each file within the repository.
**Obtaining a Copy of the License**
## GDPR Checks
A copy of the License is available at <http://www.apache.org/licenses/LICENSE-2.0>
With this group of checks, Prowler shows results of checks related to GDPR, more information [here](https://github.com/toniblyx/prowler/issues/189). The list of checks can be seen in the group file at:
[groups/group9_gdpr](groups/group9_gdpr)
The `gdpr` group of checks uses existing and extra checks. To get a GDPR report, run this command:
```sh
./prowler -g gdpr
```
## AWS FTR Checks
With this group of checks, Prowler shows results of checks related to the AWS Foundational Technical Review, more information [here](https://apn-checklists.s3.amazonaws.com/foundational/partner-hosted/partner-hosted/CVLHEC5X7.html). The list of checks can be seen in the group file at:
[groups/group25_ftr](groups/group25_FTR)
The `ftr` group of checks uses existing and extra checks. To get an AWS FTR report, run this command:
```sh
./prowler -g ftr
```
## HIPAA Checks
With this group of checks, Prowler shows results of controls related to the "Security Rule" of the Health Insurance Portability and Accountability Act aka [HIPAA](https://www.hhs.gov/hipaa/for-professionals/security/index.html) as defined in [45 CFR Subpart C - Security Standards for the Protection of Electronic Protected Health Information](https://www.law.cornell.edu/cfr/text/45/part-164/subpart-C) within [PART 160 - GENERAL ADMINISTRATIVE REQUIREMENTS](https://www.law.cornell.edu/cfr/text/45/part-160) and [Subpart A](https://www.law.cornell.edu/cfr/text/45/part-164/subpart-A) and [Subpart C](https://www.law.cornell.edu/cfr/text/45/part-164/subpart-C) of PART 164 - SECURITY AND PRIVACY
More information on the original PR is [here](https://github.com/toniblyx/prowler/issues/227).
### Note on Business Associate Addendum's (BAA)
Under the HIPAA regulations, cloud service providers (CSPs) such as AWS are considered business associates. The Business Associate Addendum (BAA) is an AWS contract that is required under HIPAA rules to ensure that AWS appropriately safeguards protected health information (PHI). The BAA also serves to clarify and limit, as appropriate, the permissible uses and disclosures of PHI by AWS, based on the relationship between AWS and our customers, and the activities or services being performed by AWS. Customers may use any AWS service in an account designated as a HIPAA account, but they should only process, store, and transmit protected health information (PHI) in the HIPAA-eligible services defined in the Business Associate Addendum (BAA). For the latest list of HIPAA-eligible AWS services, see [HIPAA Eligible Services Reference](https://aws.amazon.com/compliance/hipaa-eligible-services-reference/).
More information on AWS & HIPAA can be found [here](https://aws.amazon.com/compliance/hipaa-compliance/)
The list of checks shown by this group is as follows; they are mostly relevant for Subsections [164.306 Security standards: General rules](https://www.law.cornell.edu/cfr/text/45/164.306) and [164.312 Technical safeguards](https://www.law.cornell.edu/cfr/text/45/164.312). Prowler is only able to make checks in the spirit of the technical requirements outlined in these Subsections, and cannot cover all procedural controls required. They can be found in the group file at:
[groups/group10_hipaa](groups/group10_hipaa)
The `hipaa` group of checks uses existing and extra checks. To get a HIPAA report, run this command:
```sh
./prowler -g hipaa
```
## Trust Boundaries Checks
### Definition and Terms
The term "trust boundary" is originating from the threat modelling process and the most popular contributor Adam Shostack and author of "Threat Modeling: Designing for Security" defines it as following ([reference](https://adam.shostack.org/uncover.html)):
> Trust boundaries are perhaps the most subjective of all: these represent the border between trusted and untrusted elements. Trust is complex. You might trust your mechanic with your car, your dentist with your teeth, and your banker with your money, but you probably don't trust your dentist to change your spark plugs.
AWS is made to be flexible for service links within and between different AWS accounts, as we all know.
This group of checks helps to analyze a particular AWS account (the subject) for existing links to other AWS accounts across various AWS services, in order to identify untrusted links.
### Run
To give it a quick shot just call:
```sh
./prowler -g trustboundaries
```
### Scenarios
Currently, this check group supports two different scenarios:
1. Single account environment: no action required, the configuration is happening automatically for you.
2. Multi account environment: in case your environment has multiple trusted and known AWS accounts, you may want to append them manually to [groups/group16_trustboundaries](groups/group16_trustboundaries) as a space-separated list in the `GROUP_TRUSTBOUNDARIES_TRUSTED_ACCOUNT_IDS` variable (see the sketch below), then just run Prowler.
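A minimal sketch of that variable (the account IDs are illustrative):
```sh
# In groups/group16_trustboundaries
GROUP_TRUSTBOUNDARIES_TRUSTED_ACCOUNT_IDS='111111111111 222222222222 333333333333'
```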
### Coverage
Current coverage of Amazon Web Service (AWS) taken from [here](https://docs.aws.amazon.com/whitepapers/latest/aws-overview/introduction.html):
| Topic | Service | Trust Boundary |
|---------------------------------|------------|---------------------------------------------------------------------------|
| Networking and Content Delivery | Amazon VPC | VPC endpoints connections ([extra786](checks/check_extra786)) |
| | | VPC endpoints whitelisted principals ([extra787](checks/check_extra787)) |
All ideas or recommendations to extend this group are very welcome [here](https://github.com/toniblyx/prowler/issues/new/choose).
### Detailed Explanation of the Concept
The diagrams depict two common scenarios, single account and multi account environments.
Every circle represents one AWS account.
The dashed line represents the trust boundary, that separates trust and untrusted AWS accounts.
The arrow simply describes the direction of the trust, however the data can potentially flow in both directions.
A single-account environment assumes that only the AWS account subject to this analysis is trusted. However, there is a chance that two VPCs exist within that one AWS account, which are still trusted as a self-reference.
![single-account-environment](/docs/images/prowler-single-account-environment.png)
Multi-account environments assume a minimum of two trusted or known accounts. For this particular example all trusted and known accounts will be tested. Therefore the `GROUP_TRUSTBOUNDARIES_TRUSTED_ACCOUNT_IDS` variable in [groups/group16_trustboundaries](groups/group16_trustboundaries) should include all trusted accounts Account #A, Account #B, Account #C, and Account #D, in order to finally raise Account #E and Account #F as untrusted or unknown.
![multi-account-environment](/docs/images/prowler-multi-account-environment.png)
## Add Custom Checks
In order to add any new check, feel free to create a new extra check in the extras group or another group. To do so, follow these steps:
1. Follow structure in file `checks/check_sample`
2. Name your check with a number part of an existing group or a new one
3. Save changes and run it as `./prowler -c extraNN`
4. Send me a pull request! :)
## Add Custom Groups
1. Follow structure in file `groups/groupN_sample`
1. Name your group with a non existing number
1. Save changes and run it as `./prowler -g extraNN`
1. Send me a pull request! :)
- You can also create a group with only the checks that you want to perform in your company, for instance a group named `group9_mycompany` with only the list of checks that you care about or that your particular compliance requires.
## Third Party Integrations
### Telegram
Javier Pecete has done an awesome job integrating Prowler with Telegram; you can find more details here <https://github.com/i4specete/ServerTelegramBot>
### Cloud Security Suite
The folks at SecurityFTW have added Prowler to their Cloud Security Suite along with other cool security tools <https://github.com/SecurityFTW/cs-suite>
## License
Prowler is licensed as Apache License 2.0 as specified in each file. You may obtain a copy of the License at
<http://www.apache.org/licenses/LICENSE-2.0>
**I'm not affiliated in any way with the CIS organization; I just write and maintain Prowler to help companies around the world make their cloud infrastructure more secure.**
If you want to contact me visit <https://blyx.com/contact> or follow me on Twitter <https://twitter.com/toniblyx> my DMs are open.

View File

@@ -1,65 +0,0 @@
# Security
## Reporting Vulnerabilities
At Prowler, we consider the security of our open source software and systems a top priority. But no matter how much effort we put into system security, there can still be vulnerabilities present.
If you discover a vulnerability, we would like to know about it so we can take steps to address it as quickly as possible. We would like to ask you to help us better protect our users, our clients and our systems.
When reporting vulnerabilities, please consider (1) attack scenario / exploitability, and (2) the security impact of the bug. The following issues are considered out of scope:
- Social engineering support or attacks requiring social engineering.
- Clickjacking on pages with no sensitive actions.
- Cross-Site Request Forgery (CSRF) on unauthenticated forms or forms with no sensitive actions.
- Attacks requiring Man-In-The-Middle (MITM) or physical access to a user's device.
- Previously known vulnerable libraries without a working Proof of Concept (PoC).
- Comma Separated Values (CSV) injection without demonstrating a vulnerability.
- Missing best practices in SSL/TLS configuration.
- Any activity that could lead to the disruption of service (DoS).
- Rate limiting or brute force issues on non-authentication endpoints.
- Missing best practices in Content Security Policy (CSP).
- Missing HttpOnly or Secure flags on cookies.
- Configuration of or missing security headers.
- Missing email best practices, such as invalid, incomplete, or missing SPF/DKIM/DMARC records.
- Vulnerabilities only affecting users of outdated or unpatched browsers (less than two stable versions behind).
- Software version disclosure, banner identification issues, or descriptive error messages.
- Tabnabbing.
- Issues that require unlikely user interaction.
- Improper logout functionality and improper session timeout.
- CORS misconfiguration without an exploitation scenario.
- Broken link hijacking.
- Automated scanning results (e.g., sqlmap, Burp active scanner) that have not been manually verified.
- Content spoofing and text injection issues without a clear attack vector.
- Email spoofing without exploiting security flaws.
- Dead links or broken links.
- User enumeration.
Testing guidelines:
- Do not run automated scanners on other customer projects. Running automated scanners can run up costs for our users. Aggressively configured scanners might inadvertently disrupt services, exploit vulnerabilities, lead to system instability or breaches, and violate the Terms of Service of our upstream providers. Our own security systems won't be able to distinguish hostile reconnaissance from whitehat research. If you wish to run an automated scanner, notify us at support@prowler.com and only run it on your own Prowler App project. Do NOT attack the Prowler projects of other customers.
- Do not take advantage of the vulnerability or problem you have discovered, for example by downloading more data than necessary to demonstrate the vulnerability or deleting or modifying other people's data.
Reporting guidelines:
- File a report through our Support Desk at https://support.prowler.com
- If it is about a lack of a security functionality, please file a feature request instead at https://github.com/prowler-cloud/prowler/issues
- Do provide sufficient information to reproduce the problem, so we will be able to resolve it as quickly as possible.
- If you have further questions and want direct interaction with the Prowler team, please contact us via our Community Slack at goto.prowler.com/slack.
Disclosure guidelines:
- In order to protect our users and customers, do not reveal the problem to others until we have researched, addressed and informed our affected customers.
- If you want to publicly share your research about Prowler at a conference, in a blog or any other public forum, you should share a draft with us for review and approval at least 30 days prior to the publication date. Please note that the following should not be included:
- Data regarding any Prowler user or customer projects.
- Prowler customers' data.
- Information about Prowler employees, contractors or partners.
What we promise:
- We will respond to your report within 5 business days with our evaluation of the report and an expected resolution date.
- If you have followed the instructions above, we will not take any legal action against you in regard to the report.
- We will handle your report with strict confidentiality, and not pass on your personal details to third parties without your permission.
- We will keep you informed of the progress towards resolving the problem.
- In the public information concerning the problem reported, we will give your name as the discoverer of the problem (unless you desire otherwise).
We strive to resolve all problems as quickly as possible, and we would like to play an active role in the ultimate publication on the problem after it is resolved.
---
For more information about our security policies, please refer to our [Security](https://docs.prowler.com/projects/prowler-open-source/en/latest/security/) section in our documentation.

View File

@@ -1,58 +0,0 @@
# Django settings
DJANGO_ALLOWED_HOSTS=localhost,127.0.0.1
DJANGO_BIND_ADDRESS=0.0.0.0
DJANGO_PORT=8000
DJANGO_DEBUG=False
# Select one of [production|devel]
DJANGO_SETTINGS_MODULE=config.django.[production|devel]
# Select one of [ndjson|human_readable]
DJANGO_LOGGING_FORMATTER=[ndjson|human_readable]
# Select one of [DEBUG|INFO|WARNING|ERROR|CRITICAL]
# Applies to both Django and Celery Workers
DJANGO_LOGGING_LEVEL=INFO
DJANGO_WORKERS=4 # Defaults to the maximum available based on CPU cores if not set.
DJANGO_TOKEN_SIGNING_KEY=""
DJANGO_TOKEN_VERIFYING_KEY=""
# Token lifetime is in minutes
DJANGO_ACCESS_TOKEN_LIFETIME=30
DJANGO_REFRESH_TOKEN_LIFETIME=1440
DJANGO_CACHE_MAX_AGE=3600
DJANGO_STALE_WHILE_REVALIDATE=60
DJANGO_SECRETS_ENCRYPTION_KEY=""
# Decide whether to allow Django to manage database table partitions
DJANGO_MANAGE_DB_PARTITIONS=[True|False]
DJANGO_CELERY_DEADLOCK_ATTEMPTS=5
DJANGO_BROKER_VISIBILITY_TIMEOUT=86400
DJANGO_SENTRY_DSN=
# PostgreSQL settings
# If running django and celery on host, use 'localhost', else use 'postgres-db'
POSTGRES_HOST=[localhost|postgres-db]
POSTGRES_PORT=5432
POSTGRES_ADMIN_USER=prowler
POSTGRES_ADMIN_PASSWORD=S3cret
POSTGRES_USER=prowler_user
POSTGRES_PASSWORD=S3cret
POSTGRES_DB=prowler_db
# Valkey settings
# If running django and celery on host, use localhost, else use 'valkey'
VALKEY_HOST=[localhost|valkey]
VALKEY_PORT=6379
VALKEY_DB=0
# Sentry settings
SENTRY_ENVIRONMENT=local
SENTRY_RELEASE=local
# Social login credentials
DJANGO_GOOGLE_OAUTH_CLIENT_ID=""
DJANGO_GOOGLE_OAUTH_CLIENT_SECRET=""
DJANGO_GOOGLE_OAUTH_CALLBACK_URL=""
DJANGO_GITHUB_OAUTH_CLIENT_ID=""
DJANGO_GITHUB_OAUTH_CLIENT_SECRET=""
DJANGO_GITHUB_OAUTH_CALLBACK_URL=""
# Deletion Task Batch Size
DJANGO_DELETION_BATCH_SIZE=5000

View File

@@ -1,228 +0,0 @@
# Prowler API Changelog
All notable changes to the **Prowler API** are documented in this file.
## [1.12.0] (Prowler 5.11.0)
### Added
- Lighthouse support for OpenAI GPT-5 [(#8527)](https://github.com/prowler-cloud/prowler/pull/8527)
- Integration with Amazon Security Hub, enabling sending findings to Security Hub [(#8365)](https://github.com/prowler-cloud/prowler/pull/8365)
- Generate ASFF output for AWS providers with SecurityHub integration enabled [(#8569)](https://github.com/prowler-cloud/prowler/pull/8569)
### Fixed
- GitHub provider always scans user instead of organization when using provider UID [(#8587)](https://github.com/prowler-cloud/prowler/pull/8587)
## [1.11.0] (Prowler 5.10.0)
### Added
- Github provider support [(#8271)](https://github.com/prowler-cloud/prowler/pull/8271)
- Integration with Amazon S3, enabling storage and retrieval of scan data via S3 buckets [(#8056)](https://github.com/prowler-cloud/prowler/pull/8056)
### Fixed
- Avoid sending errors to Sentry in M365 provider when user authentication fails [(#8420)](https://github.com/prowler-cloud/prowler/pull/8420)
---
## [1.10.2] (Prowler v5.9.2)
### Changed
- Optimized queries for resources views [(#8336)](https://github.com/prowler-cloud/prowler/pull/8336)
---
## [v1.10.1] (Prowler v5.9.1)
### Fixed
- Calculate failed findings during scans to prevent heavy database queries [(#8322)](https://github.com/prowler-cloud/prowler/pull/8322)
---
## [v1.10.0] (Prowler v5.9.0)
### Added
- SSO with SAML support [(#8175)](https://github.com/prowler-cloud/prowler/pull/8175)
- `GET /resources/metadata`, `GET /resources/metadata/latest` and `GET /resources/latest` to expose resource metadata and latest scan results [(#8112)](https://github.com/prowler-cloud/prowler/pull/8112)
### Changed
- `/processors` endpoints to post-process findings. Currently, only the Mutelist processor is supported, to allow muting findings.
- Optimized the underlying queries for resources endpoints [(#8112)](https://github.com/prowler-cloud/prowler/pull/8112)
- Optimized include parameters for resources view [(#8229)](https://github.com/prowler-cloud/prowler/pull/8229)
- Optimized overview background tasks [(#8300)](https://github.com/prowler-cloud/prowler/pull/8300)
- `POST /schedules/daily` returns a `409 CONFLICT` if already created [(#8258)](https://github.com/prowler-cloud/prowler/pull/8258)
### Fixed
- Search filter for findings and resources [(#8112)](https://github.com/prowler-cloud/prowler/pull/8112)
- RBAC is now applied to `GET /overviews/providers` [(#8277)](https://github.com/prowler-cloud/prowler/pull/8277)
### Security
- Enhanced password validation to enforce 12+ character passwords with special characters, uppercase, lowercase, and numbers [(#8225)](https://github.com/prowler-cloud/prowler/pull/8225)
---
## [v1.9.1] (Prowler v5.8.1)
### Added
- Custom exception for provider connection errors during scans [(#8234)](https://github.com/prowler-cloud/prowler/pull/8234)
### Changed
- Summary and overview tasks now use a dedicated queue and no longer propagate errors to compliance tasks [(#8214)](https://github.com/prowler-cloud/prowler/pull/8214)
### Fixed
- Scan with no resources will not trigger legacy code for findings metadata [(#8183)](https://github.com/prowler-cloud/prowler/pull/8183)
- Invitation email comparison case-insensitive [(#8206)](https://github.com/prowler-cloud/prowler/pull/8206)
### Removed
- Validation of the provider's secret type during updates [(#8197)](https://github.com/prowler-cloud/prowler/pull/8197)
---
## [v1.9.0] (Prowler v5.8.0)
### Added
- Support GCP Service Account key [(#7824)](https://github.com/prowler-cloud/prowler/pull/7824)
- `GET /compliance-overviews` endpoints to retrieve compliance metadata and specific requirements statuses [(#7877)](https://github.com/prowler-cloud/prowler/pull/7877)
- Lighthouse configuration support [(#7848)](https://github.com/prowler-cloud/prowler/pull/7848)
### Changed
- Reworked `GET /compliance-overviews` to return proper requirement metrics [(#7877)](https://github.com/prowler-cloud/prowler/pull/7877)
- Optional `user` and `password` for M365 provider [(#7992)](https://github.com/prowler-cloud/prowler/pull/7992)
### Fixed
- Scheduled scans are no longer deleted when their daily schedule run is disabled [(#8082)](https://github.com/prowler-cloud/prowler/pull/8082)
---
## [v1.8.5] (Prowler v5.7.5)
### Fixed
- Normalize provider UID to ensure safe and unique export directory paths [(#8007)](https://github.com/prowler-cloud/prowler/pull/8007).
- Blank resource types in `/metadata` endpoints [(#8027)](https://github.com/prowler-cloud/prowler/pull/8027)
---
## [v1.8.4] (Prowler v5.7.4)
### Removed
- Reverted RLS transaction handling and DB custom backend [(#7994)](https://github.com/prowler-cloud/prowler/pull/7994)
---
## [v1.8.3] (Prowler v5.7.3)
### Added
- Database backend to handle already closed connections [(#7935)](https://github.com/prowler-cloud/prowler/pull/7935)
### Changed
- Renamed field encrypted_password to password for M365 provider [(#7784)](https://github.com/prowler-cloud/prowler/pull/7784)
### Fixed
- Transaction persistence with RLS operations [(#7916)](https://github.com/prowler-cloud/prowler/pull/7916)
- Reverted the change `get_with_retry` to use the original `get` method for retrieving tasks [(#7932)](https://github.com/prowler-cloud/prowler/pull/7932)
---
## [v1.8.2] (Prowler v5.7.2)
### Fixed
- Task lookup to use task_kwargs instead of task_args for scan report resolution [(#7830)](https://github.com/prowler-cloud/prowler/pull/7830)
- Kubernetes UID validation to allow valid context names [(#7871)](https://github.com/prowler-cloud/prowler/pull/7871)
- Connection status verification before launching a scan [(#7831)](https://github.com/prowler-cloud/prowler/pull/7831)
- Race condition when creating background tasks [(#7876)](https://github.com/prowler-cloud/prowler/pull/7876)
- Error when modifying or retrieving tenants due to missing user UUID in transaction context [(#7890)](https://github.com/prowler-cloud/prowler/pull/7890)
---
## [v1.8.1] (Prowler v5.7.1)
### Fixed
- Added database index to improve performance on finding lookup [(#7800)](https://github.com/prowler-cloud/prowler/pull/7800)
---
## [v1.8.0] (Prowler v5.7.0)
### Added
- Huge improvements to `/findings/metadata` and resource related filters for findings [(#7690)](https://github.com/prowler-cloud/prowler/pull/7690)
- Improvements to `/overviews` endpoints [(#7690)](https://github.com/prowler-cloud/prowler/pull/7690)
- Queue to perform backfill background tasks [(#7690)](https://github.com/prowler-cloud/prowler/pull/7690)
- New endpoints to retrieve latest findings and metadata [(#7743)](https://github.com/prowler-cloud/prowler/pull/7743)
- Export support for Prowler ThreatScore in M365 [(#7783)](https://github.com/prowler-cloud/prowler/pull/7783)
---
## [v1.7.0] (Prowler v5.6.0)
### Added
- M365 as a new provider [(#7563)](https://github.com/prowler-cloud/prowler/pull/7563)
- `compliance/` folder and ZIP export functionality for all compliance reports [(#7653)](https://github.com/prowler-cloud/prowler/pull/7653)
- API endpoint to fetch and download any specific compliance file by name [(#7653)](https://github.com/prowler-cloud/prowler/pull/7653)
---
## [v1.6.0] (Prowler v5.5.0)
### Added
- Support for developing new integrations [(#7167)](https://github.com/prowler-cloud/prowler/pull/7167)
- HTTP Security Headers [(#7289)](https://github.com/prowler-cloud/prowler/pull/7289)
- New endpoint to get the compliance overviews metadata [(#7333)](https://github.com/prowler-cloud/prowler/pull/7333)
- Support for muted findings [(#7378)](https://github.com/prowler-cloud/prowler/pull/7378)
- Missing fields to API findings and resources [(#7318)](https://github.com/prowler-cloud/prowler/pull/7318)
---
## [v1.5.4] (Prowler v5.4.4)
### Fixed
- Bug with periodic tasks when trying to delete a provider [(#7466)](https://github.com/prowler-cloud/prowler/pull/7466)
---
## [v1.5.3] (Prowler v5.4.3)
### Fixed
- Duplicated scheduled scans handling [(#7401)](https://github.com/prowler-cloud/prowler/pull/7401)
- Environment variable to configure the deletion task batch size [(#7423)](https://github.com/prowler-cloud/prowler/pull/7423)
---
## [v1.5.2] (Prowler v5.4.2)
### Changed
- Refactored deletion logic and implemented retry mechanism for deletion tasks [(#7349)](https://github.com/prowler-cloud/prowler/pull/7349)
---
## [v1.5.1] (Prowler v5.4.1)
### Fixed
- Handle response in case local files are missing [(#7183)](https://github.com/prowler-cloud/prowler/pull/7183)
- Race condition when deleting export files after the S3 upload [(#7172)](https://github.com/prowler-cloud/prowler/pull/7172)
- Handle exception when a provider has no secret in test connection [(#7283)](https://github.com/prowler-cloud/prowler/pull/7283)
---
## [v1.5.0] (Prowler v5.4.0)
### Added
- Social login integration with Google and GitHub [(#6906)](https://github.com/prowler-cloud/prowler/pull/6906)
- API scan report system, now all scans launched from the API will generate a compressed file with the report in OCSF, CSV and HTML formats [(#6878)](https://github.com/prowler-cloud/prowler/pull/6878)
- Configurable Sentry integration [(#6874)](https://github.com/prowler-cloud/prowler/pull/6874)
### Changed
- Optimized `GET /findings` endpoint to improve response time and size [(#7019)](https://github.com/prowler-cloud/prowler/pull/7019)
---
## [v1.4.0] (Prowler v5.3.0)
### Changed
- Daily scheduled scan instances are now created beforehand with `SCHEDULED` state [(#6700)](https://github.com/prowler-cloud/prowler/pull/6700)
- Findings endpoints now require at least one date filter [(#6800)](https://github.com/prowler-cloud/prowler/pull/6800)
- Findings metadata endpoint received a performance improvement [(#6863)](https://github.com/prowler-cloud/prowler/pull/6863)
- Increased the allowed length of the provider UID for Kubernetes providers [(#6869)](https://github.com/prowler-cloud/prowler/pull/6869)
---

View File

@@ -1,76 +0,0 @@
FROM python:3.12.10-slim-bookworm AS build
LABEL maintainer="https://github.com/prowler-cloud/api"
ARG POWERSHELL_VERSION=7.5.0
ENV POWERSHELL_VERSION=${POWERSHELL_VERSION}
# hadolint ignore=DL3008
RUN apt-get update && apt-get install -y --no-install-recommends \
wget \
libicu72 \
gcc \
g++ \
make \
libxml2-dev \
libxmlsec1-dev \
libxmlsec1-openssl \
pkg-config \
libtool \
libxslt1-dev \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
# Install PowerShell
RUN ARCH=$(uname -m) && \
if [ "$ARCH" = "x86_64" ]; then \
wget --progress=dot:giga https://github.com/PowerShell/PowerShell/releases/download/v${POWERSHELL_VERSION}/powershell-${POWERSHELL_VERSION}-linux-x64.tar.gz -O /tmp/powershell.tar.gz ; \
elif [ "$ARCH" = "aarch64" ]; then \
wget --progress=dot:giga https://github.com/PowerShell/PowerShell/releases/download/v${POWERSHELL_VERSION}/powershell-${POWERSHELL_VERSION}-linux-arm64.tar.gz -O /tmp/powershell.tar.gz ; \
else \
echo "Unsupported architecture: $ARCH" && exit 1 ; \
fi && \
mkdir -p /opt/microsoft/powershell/7 && \
tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7 && \
chmod +x /opt/microsoft/powershell/7/pwsh && \
ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh && \
rm /tmp/powershell.tar.gz
# Add prowler user
RUN addgroup --gid 1000 prowler && \
adduser --uid 1000 --gid 1000 --disabled-password --gecos "" prowler
USER prowler
WORKDIR /home/prowler
# Ensure output directory exists
RUN mkdir -p /tmp/prowler_api_output
COPY pyproject.toml ./
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir poetry
ENV PATH="/home/prowler/.local/bin:$PATH"
# Add `--no-root` to avoid installing the current project as a package
RUN poetry install --no-root && \
rm -rf ~/.cache/pip
RUN poetry run python "$(poetry env info --path)/src/prowler/prowler/providers/m365/lib/powershell/m365_powershell.py"
COPY src/backend/ ./backend/
COPY docker-entrypoint.sh ./docker-entrypoint.sh
WORKDIR /home/prowler/backend
# Development image
FROM build AS dev
ENTRYPOINT ["../docker-entrypoint.sh", "dev"]
# Production image
FROM build
ENTRYPOINT ["../docker-entrypoint.sh", "prod"]

View File

@@ -1,335 +0,0 @@
# Description
This repository contains the JSON API and Task Runner components for Prowler, which together provide a complete backend that interacts with the Prowler SDK and is used by the Prowler UI.
# Components
The Prowler API is composed of the following components:
- The JSON API, which is an API built with Django Rest Framework.
- The Celery worker, which is responsible for executing the background tasks that are defined in the JSON API.
- The PostgreSQL database, which is used to store the data.
- The Valkey database, an in-memory datastore used as a message broker for the Celery workers.
## Note about Valkey
[Valkey](https://valkey.io/) is an open source (BSD) high performance key/value datastore.
Valkey exposes a Redis 7.2 compliant API, so any service that exposes the Redis API can be used with the Prowler API.
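Because the API is Redis-compatible, any standard Redis client can talk to Valkey. A quick connectivity sketch using the `redis` Python package (an assumption, not a documented requirement), with host/port matching the `VALKEY_*` defaults from `.env.example`:
```python
# Ping the Valkey broker over the Redis protocol.
import redis

broker = redis.Redis(host="localhost", port=6379, db=0)
print(broker.ping())  # True if the Valkey instance is reachable
```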
# Modify environment variables
Under the root path of the project, you can find a file called `.env.example`. This file shows all the environment variables that the project uses. You *must* create a new file called `.env` and set the values for the variables.
## Local deployment
Keep in mind that if you export the `.env` file for a local deployment, you must do so within the context of the Poetry environment, not before; otherwise, the variables will not be loaded properly.
To do this, you can run:
```console
poetry shell
set -a
source .env
```
# 🚀 Production deployment
## Docker deployment
This method requires `docker` and `docker compose`.
### Clone the repository
```console
# HTTPS
git clone https://github.com/prowler-cloud/api.git
# SSH
git clone git@github.com:prowler-cloud/api.git
```
### Build the base image
```console
docker compose --profile prod build
```
### Run the production service
This command will start the Django production server, the Celery worker, and the Valkey and PostgreSQL databases.
```console
docker compose --profile prod up -d
```
You can access the server at `http://localhost:8080`.
> **NOTE:** the port is different here. When developing with Docker, the port is `8080` to prevent conflicts.
### View the Production Server Logs
To view the logs for any component (e.g., Django, Celery worker), you can use the following command with a wildcard. This command will follow logs for any container that matches the specified pattern:
```console
docker logs -f $(docker ps --format "{{.Names}}" | grep 'api-')
```
## Local deployment
To use this method, you'll need to set up a Python virtual environment (version ">=3.11,<3.13") and keep dependencies updated. Additionally, ensure that `poetry` and `docker compose` are installed.
### Clone the repository
```console
# HTTPS
git clone https://github.com/prowler-cloud/api.git
# SSH
git clone git@github.com:prowler-cloud/api.git
```
### Install all dependencies with Poetry
```console
poetry install
poetry shell
```
### Start the PostgreSQL Database and Valkey
The PostgreSQL database (version 16.3) and Valkey (version 7) are required for the development environment. To make development easier, we have provided a `docker-compose` file that will start these components for you.
**Note:** Make sure to use the specified versions, as there are features in our setup that may not be compatible with older versions of PostgreSQL and Valkey.
```console
docker compose up postgres valkey -d
```
### Deploy Django and the Celery worker
#### Run migrations
For migrations, you need to force the `admin` database router. Assuming you have the correct environment variables and Python virtual environment, run:
```console
cd src/backend
python manage.py migrate --database admin
```
#### Run the Celery worker
```console
cd src/backend
python -m celery -A config.celery worker -l info -E
```
#### Run the Django server with Gunicorn
```console
cd src/backend
gunicorn -c config/guniconf.py config.wsgi:application
```
> By default, the Gunicorn server will try to use as many workers as your machine can handle. You can manually change that in the `src/backend/config/guniconf.py` file.
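For reference, Gunicorn reads its configuration from module-level variables in that file. A minimal, hypothetical sketch of the relevant setting (the real `guniconf.py` in the repository may differ):
```python
# Hypothetical excerpt of a Gunicorn config such as src/backend/config/guniconf.py.
# Gunicorn treats module-level variables in this file as settings.
import multiprocessing

# Scale workers with the available CPU cores (a common Gunicorn heuristic)...
workers = multiprocessing.cpu_count() * 2 + 1

# ...or pin a fixed worker count instead:
# workers = 4
```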
# 🧪 Development guide
## Local deployment
To use this method, you'll need to set up a Python virtual environment (version ">=3.11,<3.13") and keep dependencies updated. Additionally, ensure that `poetry` and `docker compose` are installed.
### Clone the repository
```console
# HTTPS
git clone https://github.com/prowler-cloud/api.git
# SSH
git clone git@github.com:prowler-cloud/api.git
```
### Start the PostgreSQL Database and Valkey
The PostgreSQL database (version 16.3) and Valkey (version 7) are required for the development environment. To make development easier, we have provided a `docker-compose` file that will start these components for you.
**Note:** Make sure to use the specified versions, as there are features in our setup that may not be compatible with older versions of PostgreSQL and Valkey.
```console
docker compose up postgres valkey -d
```
### Install the Python dependencies
> You must have Poetry installed
```console
poetry install
poetry shell
```
### Apply migrations
For migrations, you need to force the `admin` database router. Assuming you have the correct environment variables and Python virtual environment, run:
```console
cd src/backend
python manage.py migrate --database admin
```
### Run the Django development server
```console
cd src/backend
python manage.py runserver
```
You can access the server at `http://localhost:8000`.
All code changes are automatically reloaded by the server.
### Run the Celery worker
```console
python -m celery -A config.celery worker -l info -E
```
The Celery worker does not detect and reload changes in the code, so you need to restart it manually when you make changes.
## Docker deployment
This method requires `docker` and `docker compose`.
### Clone the repository
```console
# HTTPS
git clone https://github.com/prowler-cloud/api.git
# SSH
git clone git@github.com:prowler-cloud/api.git
```
### Build the base image
```console
docker compose --profile dev build
```
### Run the development service
This command will start the Django development server, the Celery worker, and the Valkey and PostgreSQL databases.
```console
docker compose --profile dev up -d
```
You can access the server at `http://localhost:8080`.
All code changes are automatically reloaded by the server.
> **NOTE:** the port is different here. When developing with Docker, the port is `8080` to prevent conflicts.
### View the development server logs
To view the logs for any component (e.g., Django, Celery worker), you can use the following command with a wildcard. This command will follow logs for any container that matches the specified pattern:
```console
docker logs -f $(docker ps --format "{{.Names}}" | grep 'api-')
```
## Applying migrations
For migrations, you need to force the `admin` database router. Assuming you have the correct environment variables and Python virtual environment, run:
```console
poetry shell
cd src/backend
python manage.py migrate --database admin
```
## Apply fixtures
Fixtures are used to populate the database with initial development data.
```console
poetry shell
cd src/backend
python manage.py loaddata api/fixtures/0_dev_users.json --database admin
```
> The default credentials are `dev@prowler.com:Thisisapassword123@` or `dev2@prowler.com:Thisisapassword123@`
## Run tests
Note that the tests will fail if you use the same `.env` file as the development environment.
For best results, run in a new shell with no environment variables set.
```console
poetry shell
cd src/backend
pytest
```
# Custom commands
Django provides a way to create custom commands that can be run from the command line.
> These commands can be found in `prowler/api/src/backend/api/management/commands`
To run a custom command, you need to be in the `prowler/api/src/backend` directory and run:
```console
poetry shell
python manage.py <command_name>
```
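As a rough illustration of the layout such a command follows, a minimal Django management command looks like this (the module below is hypothetical and not part of the repository):
```python
# Hypothetical module: src/backend/api/management/commands/hello.py
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Minimal example of a custom management command."

    def add_arguments(self, parser):
        # Optional CLI flag: python manage.py hello --name Prowler
        parser.add_argument("--name", default="world")

    def handle(self, *args, **options):
        self.stdout.write(f"Hello, {options['name']}!")
```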
## Generate dummy data
```console
python manage.py findings --tenant <TENANT_ID> --findings <NUM_FINDINGS> --resources <NUM_RESOURCES> --batch <TRANSACTION_BATCH_SIZE> --alias <ALIAS>
```
This command creates, for a given tenant, a provider, a scan, and a set of related findings and resources.
> Scan progress and state are updated in real time.
> - 0-33%: Create resources.
> - 33-66%: Create findings.
> - 66%: Create resource-finding mapping.
>
> The last step is required to access the finding details, since the UI needs that mapping to display all the information.
### Example
```console
~/backend $ poetry run python manage.py findings --tenant fffb1893-3fc7-4623-a5d9-fae47da1c528 --findings 25000 --resources 1000 --batch 5000 --alias test-script
Starting data population
Tenant: fffb1893-3fc7-4623-a5d9-fae47da1c528
Alias: test-script
Resources: 1000
Findings: 25000
Batch size: 5000
Creating resources...
100%|███████████████████████| 1/1 [00:00<00:00, 7.72it/s]
Resources created successfully.
Creating findings...
100%|███████████████████████| 5/5 [00:05<00:00, 1.09s/it]
Findings created successfully.
Creating resource-finding mappings...
100%|███████████████████████| 5/5 [00:02<00:00, 1.81it/s]
Resource-finding mappings created successfully.
Successfully populated test data.
```

View File

@@ -1,75 +0,0 @@
#!/bin/sh
apply_migrations() {
echo "Applying database migrations..."
# Fix Inconsistent migration history after adding sites app
poetry run python manage.py check_and_fix_socialaccount_sites_migration --database admin
poetry run python manage.py migrate --database admin
}
apply_fixtures() {
echo "Applying Django fixtures..."
for fixture in api/fixtures/dev/*.json; do
if [ -f "$fixture" ]; then
echo "Loading $fixture"
poetry run python manage.py loaddata "$fixture" --database admin
fi
done
}
start_dev_server() {
echo "Starting the development server..."
poetry run python manage.py runserver 0.0.0.0:"${DJANGO_PORT:-8080}"
}
start_prod_server() {
echo "Starting the Gunicorn server..."
poetry run gunicorn -c config/guniconf.py config.wsgi:application
}
start_worker() {
echo "Starting the worker..."
poetry run python -m celery -A config.celery worker -l "${DJANGO_LOGGING_LEVEL:-info}" -Q celery,scans,scan-reports,deletion,backfill,overview,integrations -E --max-tasks-per-child 1
}
start_worker_beat() {
echo "Starting the worker-beat..."
sleep 15
poetry run python -m celery -A config.celery beat -l "${DJANGO_LOGGING_LEVEL:-info}" --scheduler django_celery_beat.schedulers:DatabaseScheduler
}
manage_db_partitions() {
if [ "${DJANGO_MANAGE_DB_PARTITIONS}" = "True" ]; then
echo "Managing DB partitions..."
# For now we skip the deletion of partitions until we define the data retention policy
# --yes auto approves the operation without the need of an interactive terminal
poetry run python manage.py pgpartition --using admin --skip-delete --yes
fi
}
case "$1" in
dev)
apply_migrations
apply_fixtures
manage_db_partitions
start_dev_server
;;
prod)
apply_migrations
manage_db_partitions
start_prod_server
;;
worker)
start_worker
;;
beat)
start_worker_beat
;;
*)
echo "Usage: $0 {dev|prod|worker|beat}"
exit 1
;;
esac

View File

@@ -1,65 +0,0 @@
# Partitions
## Overview
Partitions are used to split the data in a table into smaller chunks, allowing for more efficient querying and storage.
The Prowler API uses partitions to store findings. The partitions are created based on the UUIDv7 `id` field.
You can use the Prowler API without ever creating additional partitions. This documentation is only relevant if you want to manage partitions to gain additional query performance.
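UUIDv7 identifiers embed a millisecond Unix timestamp in their top 48 bits, which is what makes time-based range partitioning on the `id` column possible. A minimal sketch, assuming the `uuid6` package (already a Prowler API dependency):
```python
# Extract the creation time that a UUIDv7 value carries in its top 48 bits.
from datetime import datetime, timezone

from uuid6 import uuid7

u = uuid7()
unix_ts_ms = u.int >> 80  # the top 48 bits are a millisecond Unix timestamp
created_at = datetime.fromtimestamp(unix_ts_ms / 1000, tz=timezone.utc)
print(u, created_at)  # ids generated in the same month land in the same partition range
```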
### Required Postgres Configuration
There are 3 configuration options that need to be set in the `postgresql.conf` file to get the most performance out of the partitioning:
- `enable_partition_pruning = on` (default is on)
- `enable_partitionwise_join = on` (default is off)
- `enable_partitionwise_aggregate = on` (default is off)
For more information on these options, see the [Postgres documentation](https://www.postgresql.org/docs/current/runtime-config-query.html).
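To confirm the options are active on a running instance, you can query them directly. A minimal sketch using `psycopg2` (already a Prowler API dependency); the connection parameters below are assumptions based on the `.env` defaults:
```python
# Print the three partition-related settings from a running Postgres.
import psycopg2

conn = psycopg2.connect(
    dbname="prowler_db", user="prowler", password="S3cret",
    host="localhost", port=5432,
)
with conn, conn.cursor() as cur:
    for option in (
        "enable_partition_pruning",
        "enable_partitionwise_join",
        "enable_partitionwise_aggregate",
    ):
        cur.execute("SHOW " + option)  # SHOW takes no bind parameters
        print(option, "=", cur.fetchone()[0])
conn.close()
```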
## Partitioning Strategy
The partitioning strategy is defined in the `api.partitions` module. The strategy is responsible for creating and deleting partitions based on the provided configuration.
## Managing Partitions
The application will run without any extra work on your part. If you want to add or delete partitions, you can use the following commands:
To manage the partitions, run `python manage.py pgpartition --using admin`.
This command will generate a list of partitions to create and delete based on the provided configuration.
By default, the command will prompt you to accept the changes before applying them.
```shell
Finding:
+ 2024_nov
name: 2024_nov
from_values: 0192e505-9000-72c8-a47c-cce719d8fb93
to_values: 01937f84-5418-7eb8-b2a6-e3be749e839d
size_unit: months
size_value: 1
+ 2024_dec
name: 2024_dec
from_values: 01937f84-5800-7b55-879c-9cdb46f023f6
to_values: 01941f29-7818-7f9f-b4be-20b05bb2f574
size_unit: months
size_value: 1
0 partitions will be deleted
2 partitions will be created
```
If you choose to apply the partitions, tables will be generated with the following format: `<table_name>_<year>_<month>`.
For more info on the partitioning manager, see https://github.com/SectorLabs/django-postgres-extra
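The `from_values`/`to_values` bounds above are UUIDv7 values whose leading 48 bits encode the month boundary. A standard-library sketch of how such a lower bound can be derived (illustrative only; the partitioning manager computes these for you):
```python
# Build the minimal UUIDv7 value for a given instant: timestamp in the top
# 48 bits, version/variant bits set, everything else zero.
import uuid
from datetime import datetime, timezone

def uuid7_lower_bound(dt: datetime) -> uuid.UUID:
    ms = int(dt.timestamp() * 1000)
    value = ms << 80          # unix_ts_ms occupies bits 127..80
    value |= 0x7 << 76        # version 7
    value |= 0x2 << 62        # RFC 4122 variant
    return uuid.UUID(int=value)

# Matches the '0192e505-9000-7...' prefix of the 2024_nov bound above.
print(uuid7_lower_bound(datetime(2024, 11, 1, tzinfo=timezone.utc)))
```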
### Changing the Partitioning Parameters
There are 4 environment variables that can be used to change the partitioning parameters:
- `DJANGO_MANAGE_DB_PARTITIONS`: Allow Django to manage database partitions. By default, it is set to `False`.
- `FINDINGS_TABLE_PARTITION_MONTHS`: Set the number of months for each partition. Setting this to 1 will create partitions with a size of 1 natural month.
- `FINDINGS_TABLE_PARTITION_COUNT`: Set the number of partitions to create.
- `FINDINGS_TABLE_PARTITION_MAX_AGE_MONTHS`: Set the number of months to keep partitions before deleting them. Setting this to `None` will keep partitions indefinitely.

api/poetry.lock generated

File diff suppressed because it is too large

View File

@@ -1,65 +0,0 @@
[build-system]
build-backend = "poetry.core.masonry.api"
requires = ["poetry-core"]
[project]
authors = [{name = "Prowler Engineering", email = "engineering@prowler.com"}]
dependencies = [
"celery[pytest] (>=5.4.0,<6.0.0)",
"dj-rest-auth[with_social,jwt] (==7.0.1)",
"django==5.1.10",
"django-allauth[saml] (>=65.8.0,<66.0.0)",
"django-celery-beat (>=2.7.0,<3.0.0)",
"django-celery-results (>=2.5.1,<3.0.0)",
"django-cors-headers==4.4.0",
"django-environ==0.11.2",
"django-filter==24.3",
"django-guid==3.5.0",
"django-postgres-extra (>=2.0.8,<3.0.0)",
"djangorestframework==3.15.2",
"djangorestframework-jsonapi==7.0.2",
"djangorestframework-simplejwt (>=5.3.1,<6.0.0)",
"drf-nested-routers (>=0.94.1,<1.0.0)",
"drf-spectacular==0.27.2",
"drf-spectacular-jsonapi==0.5.1",
"gunicorn==23.0.0",
"lxml==5.3.2",
"prowler @ git+https://github.com/prowler-cloud/prowler.git@master",
"psycopg2-binary==2.9.9",
"pytest-celery[redis] (>=1.0.1,<2.0.0)",
"sentry-sdk[django] (>=2.20.0,<3.0.0)",
"uuid6==2024.7.10",
"openai (>=1.82.0,<2.0.0)",
"xmlsec==1.3.14",
"h2 (==4.3.0)"
]
description = "Prowler's API (Django/DRF)"
license = "Apache-2.0"
name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
version = "1.12.0"
[project.scripts]
celery = "src.backend.config.settings.celery"
[tool.poetry.group.dev.dependencies]
bandit = "1.7.9"
coverage = "7.5.4"
django-silk = "5.3.2"
docker = "7.1.0"
freezegun = "1.5.1"
marshmallow = ">=3.15.0,<4.0.0"
mypy = "1.10.1"
pylint = "3.2.5"
pytest = "8.2.2"
pytest-cov = "5.0.0"
pytest-django = "4.8.0"
pytest-env = "1.1.3"
pytest-randomly = "3.15.0"
pytest-xdist = "3.6.1"
ruff = "0.5.0"
safety = "3.2.9"
tqdm = "4.67.1"
vulture = "2.14"

View File

@@ -1,71 +0,0 @@
from allauth.socialaccount.adapter import DefaultSocialAccountAdapter
from django.db import transaction
from api.db_router import MainRouter
from api.db_utils import rls_transaction
from api.models import Membership, Role, Tenant, User, UserRoleRelationship
class ProwlerSocialAccountAdapter(DefaultSocialAccountAdapter):
@staticmethod
def get_user_by_email(email: str):
try:
return User.objects.get(email=email)
except User.DoesNotExist:
return None
def pre_social_login(self, request, sociallogin):
# Link existing accounts with the same email address
email = sociallogin.account.extra_data.get("email")
if sociallogin.provider.id == "saml":
email = sociallogin.user.email
if email:
existing_user = self.get_user_by_email(email)
if existing_user:
sociallogin.connect(request, existing_user)
def save_user(self, request, sociallogin, form=None):
"""
Called after the user data is fully populated from the provider
and is about to be saved to the DB for the first time.
"""
with transaction.atomic(using=MainRouter.admin_db):
user = super().save_user(request, sociallogin, form)
provider = sociallogin.provider.id
extra = sociallogin.account.extra_data
if provider != "saml":
# Handle other providers (e.g., GitHub, Google)
user.save(using=MainRouter.admin_db)
social_account_name = extra.get("name")
if social_account_name:
user.name = social_account_name
user.save(using=MainRouter.admin_db)
tenant = Tenant.objects.using(MainRouter.admin_db).create(
name=f"{user.email.split('@')[0]} default tenant"
)
with rls_transaction(str(tenant.id)):
Membership.objects.using(MainRouter.admin_db).create(
user=user, tenant=tenant, role=Membership.RoleChoices.OWNER
)
role = Role.objects.using(MainRouter.admin_db).create(
name="admin",
tenant_id=tenant.id,
manage_users=True,
manage_account=True,
manage_billing=True,
manage_providers=True,
manage_integrations=True,
manage_scans=True,
unlimited_visibility=True,
)
UserRoleRelationship.objects.using(MainRouter.admin_db).create(
user=user,
role=role,
tenant_id=tenant.id,
)
else:
request.session["saml_user_created"] = str(user.id)
return user

View File

@@ -1,3 +0,0 @@
# from django.contrib import admin
# Register your models here.

View File

@@ -1,12 +0,0 @@
from django.apps import AppConfig
class ApiConfig(AppConfig):
default_auto_field = "django.db.models.BigAutoField"
name = "api"
def ready(self):
from api import signals # noqa: F401
from api.compliance import load_prowler_compliance
load_prowler_compliance()

View File

@@ -1,142 +0,0 @@
from django.core.exceptions import ObjectDoesNotExist
from django.db import transaction
from rest_framework import permissions
from rest_framework.exceptions import NotAuthenticated
from rest_framework.filters import SearchFilter
from rest_framework_json_api import filters
from rest_framework_json_api.views import ModelViewSet
from rest_framework_simplejwt.authentication import JWTAuthentication
from api.db_router import MainRouter
from api.db_utils import POSTGRES_USER_VAR, rls_transaction
from api.filters import CustomDjangoFilterBackend
from api.models import Role, Tenant
from api.rbac.permissions import HasPermissions
class BaseViewSet(ModelViewSet):
authentication_classes = [JWTAuthentication]
required_permissions = []
permission_classes = [permissions.IsAuthenticated, HasPermissions]
filter_backends = [
filters.QueryParameterValidationFilter,
filters.OrderingFilter,
CustomDjangoFilterBackend,
SearchFilter,
]
filterset_fields = []
search_fields = []
ordering_fields = "__all__"
ordering = ["id"]
def initial(self, request, *args, **kwargs):
"""
Sets required_permissions before permissions are checked.
"""
self.set_required_permissions()
super().initial(request, *args, **kwargs)
def set_required_permissions(self):
"""This is an abstract method that must be implemented by subclasses."""
NotImplemented
def get_queryset(self):
raise NotImplementedError
class BaseRLSViewSet(BaseViewSet):
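# NOTE: rls_transaction() uses set_config(..., is_local=TRUE), which is
# transaction-scoped, so dispatch() wraps each request in a single transaction
# to keep the tenant RLS setting active for the whole view body.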
def dispatch(self, request, *args, **kwargs):
with transaction.atomic():
return super().dispatch(request, *args, **kwargs)
def initial(self, request, *args, **kwargs):
# Ideally, this logic would be in the `.setup()` method but DRF view sets don't call it
# https://docs.djangoproject.com/en/5.1/ref/class-based-views/base/#django.views.generic.base.View.setup
if request.auth is None:
raise NotAuthenticated
tenant_id = request.auth.get("tenant_id")
if tenant_id is None:
raise NotAuthenticated("Tenant ID is not present in token")
with rls_transaction(tenant_id):
self.request.tenant_id = tenant_id
return super().initial(request, *args, **kwargs)
def get_serializer_context(self):
context = super().get_serializer_context()
context["tenant_id"] = self.request.tenant_id
return context
class BaseTenantViewset(BaseViewSet):
def dispatch(self, request, *args, **kwargs):
with transaction.atomic():
tenant = super().dispatch(request, *args, **kwargs)
try:
# If the request is a POST, create the admin role
if request.method == "POST":
isinstance(tenant, dict) and self._create_admin_role(tenant.data["id"])
except Exception as e:
self._handle_creation_error(e, tenant)
raise
return tenant
def _create_admin_role(self, tenant_id):
Role.objects.using(MainRouter.admin_db).create(
name="admin",
tenant_id=tenant_id,
manage_users=True,
manage_account=True,
manage_billing=True,
manage_providers=True,
manage_integrations=True,
manage_scans=True,
unlimited_visibility=True,
)
def _handle_creation_error(self, error, tenant):
if tenant.data.get("id"):
try:
Tenant.objects.using(MainRouter.admin_db).filter(
id=tenant.data["id"]
).delete()
except ObjectDoesNotExist:
pass # Tenant might not exist, handle gracefully
def initial(self, request, *args, **kwargs):
if request.auth is None:
raise NotAuthenticated
tenant_id = request.auth.get("tenant_id")
if tenant_id is None:
raise NotAuthenticated("Tenant ID is not present in token")
user_id = str(request.user.id)
with rls_transaction(value=user_id, parameter=POSTGRES_USER_VAR):
return super().initial(request, *args, **kwargs)
class BaseUserViewset(BaseViewSet):
def dispatch(self, request, *args, **kwargs):
with transaction.atomic():
return super().dispatch(request, *args, **kwargs)
def initial(self, request, *args, **kwargs):
# TODO refactor after improving RLS on users
if request.stream is not None and request.stream.method == "POST":
return super().initial(request, *args, **kwargs)
if request.auth is None:
raise NotAuthenticated
tenant_id = request.auth.get("tenant_id")
if tenant_id is None:
raise NotAuthenticated("Tenant ID is not present in token")
with rls_transaction(tenant_id):
self.request.tenant_id = tenant_id
return super().initial(request, *args, **kwargs)

View File

@@ -1,239 +0,0 @@
from types import MappingProxyType
from api.models import Provider
from prowler.config.config import get_available_compliance_frameworks
from prowler.lib.check.compliance_models import Compliance
from prowler.lib.check.models import CheckMetadata
PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE = {}
PROWLER_CHECKS = {}
AVAILABLE_COMPLIANCE_FRAMEWORKS = {}
def get_compliance_frameworks(provider_type: Provider.ProviderChoices) -> list[str]:
"""
Retrieve and cache the list of available compliance frameworks for a specific cloud provider.
This function lazily loads and caches the available compliance frameworks (e.g., CIS, MITRE, ISO)
for each provider type (AWS, Azure, GCP, etc.) on first access. Subsequent calls for the same
provider will return the cached result.
Args:
provider_type (Provider.ProviderChoices): The cloud provider type for which to retrieve
available compliance frameworks (e.g., "aws", "azure", "gcp", "m365").
Returns:
list[str]: A list of framework identifiers (e.g., "cis_1.4_aws", "mitre_attack_azure") available
for the given provider.
"""
global AVAILABLE_COMPLIANCE_FRAMEWORKS
if provider_type not in AVAILABLE_COMPLIANCE_FRAMEWORKS:
AVAILABLE_COMPLIANCE_FRAMEWORKS[provider_type] = (
get_available_compliance_frameworks(provider_type)
)
return AVAILABLE_COMPLIANCE_FRAMEWORKS[provider_type]
def get_prowler_provider_checks(provider_type: Provider.ProviderChoices):
"""
Retrieve all check IDs for the specified provider type.
This function fetches the check metadata for the given cloud provider
and returns an iterable of check IDs.
Args:
provider_type (Provider.ProviderChoices): The provider type
(e.g., 'aws', 'azure') for which to retrieve check IDs.
Returns:
Iterable[str]: An iterable of check IDs associated with the specified provider type.
"""
return CheckMetadata.get_bulk(provider_type).keys()
def get_prowler_provider_compliance(provider_type: Provider.ProviderChoices) -> dict:
"""
Retrieve the Prowler compliance data for a specified provider type.
This function fetches the compliance frameworks and their associated
requirements for the given cloud provider.
Args:
provider_type (Provider.ProviderChoices): The provider type
(e.g., 'aws', 'azure') for which to retrieve compliance data.
Returns:
dict: A dictionary mapping compliance framework names to their respective
Compliance objects for the specified provider.
"""
return Compliance.get_bulk(provider_type)
def load_prowler_compliance():
"""
Load and initialize the Prowler compliance data and checks for all provider types.
This function retrieves compliance data for all supported provider types,
generates a compliance overview template, and populates the global variables
`PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE` and `PROWLER_CHECKS` with read-only mappings
of the compliance templates and checks, respectively.
"""
global PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE
global PROWLER_CHECKS
prowler_compliance = {
provider_type: get_prowler_provider_compliance(provider_type)
for provider_type in Provider.ProviderChoices.values
}
template = generate_compliance_overview_template(prowler_compliance)
PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE = MappingProxyType(template)
PROWLER_CHECKS = MappingProxyType(load_prowler_checks(prowler_compliance))
def load_prowler_checks(prowler_compliance):
"""
Generate a mapping of checks to the compliance frameworks that include them.
This function processes the provided compliance data and creates a dictionary
mapping each provider type to a dictionary where each check ID maps to a set
of compliance names that include that check.
Args:
prowler_compliance (dict): The compliance data for all provider types,
as returned by `get_prowler_provider_compliance`.
Returns:
dict: A nested dictionary where the first-level keys are provider types,
and the values are dictionaries mapping check IDs to sets of compliance names.
"""
checks = {}
for provider_type in Provider.ProviderChoices.values:
checks[provider_type] = {
check_id: set() for check_id in get_prowler_provider_checks(provider_type)
}
for compliance_name, compliance_data in prowler_compliance[
provider_type
].items():
for requirement in compliance_data.Requirements:
for check in requirement.Checks:
try:
checks[provider_type][check].add(compliance_name)
except KeyError:
continue
return checks
def generate_scan_compliance(
compliance_overview, provider_type: str, check_id: str, status: str
):
"""
Update the compliance overview with the status of a specific check.
This function updates the compliance overview by setting the status of the given check
within all compliance frameworks and requirements that include it. It then updates the
requirement status to 'FAIL' if any of its checks have failed, and adjusts the counts
of passed and failed requirements in the compliance overview.
Args:
compliance_overview (dict): The compliance overview data structure to update.
provider_type (str): The provider type (e.g., 'aws', 'azure') associated with the check.
check_id (str): The identifier of the check whose status is being updated.
status (str): The status of the check (e.g., 'PASS', 'FAIL', 'MUTED').
Returns:
None: This function modifies the compliance_overview in place.
"""
for compliance_id in PROWLER_CHECKS[provider_type][check_id]:
for requirement in compliance_overview[compliance_id]["requirements"].values():
if check_id in requirement["checks"]:
requirement["checks"][check_id] = status
requirement["checks_status"][status.lower()] += 1
if requirement["status"] != "FAIL" and any(
value == "FAIL" for value in requirement["checks"].values()
):
requirement["status"] = "FAIL"
compliance_overview[compliance_id]["requirements_status"]["passed"] -= 1
compliance_overview[compliance_id]["requirements_status"]["failed"] += 1
def generate_compliance_overview_template(prowler_compliance: dict):
"""
Generate a compliance overview template for all provider types.
This function creates a nested dictionary structure representing the compliance
overview template for each provider type, compliance framework, and requirement.
It initializes the status of all checks and requirements, and calculates initial
counts for requirements status.
Args:
prowler_compliance (dict): The compliance data for all provider types,
as returned by `get_prowler_provider_compliance`.
Returns:
dict: A nested dictionary representing the compliance overview template,
structured by provider type and compliance framework.
"""
template = {}
for provider_type in Provider.ProviderChoices.values:
provider_compliance = template.setdefault(provider_type, {})
compliance_data_dict = prowler_compliance[provider_type]
for compliance_name, compliance_data in compliance_data_dict.items():
compliance_requirements = {}
requirements_status = {"passed": 0, "failed": 0, "manual": 0}
total_requirements = 0
for requirement in compliance_data.Requirements:
total_requirements += 1
total_checks = len(requirement.Checks)
checks_dict = {check: None for check in requirement.Checks}
req_status_val = "MANUAL" if total_checks == 0 else "PASS"
# Build requirement dictionary
requirement_dict = {
"name": requirement.Name or requirement.Id,
"description": requirement.Description,
"tactics": getattr(requirement, "Tactics", []),
"subtechniques": getattr(requirement, "SubTechniques", []),
"platforms": getattr(requirement, "Platforms", []),
"technique_url": getattr(requirement, "TechniqueURL", ""),
"attributes": [
dict(attribute) for attribute in requirement.Attributes
],
"checks": checks_dict,
"checks_status": {
"pass": 0,
"fail": 0,
"manual": 0,
"total": total_checks,
},
"status": req_status_val,
}
# Update requirements status counts for the framework
if req_status_val == "MANUAL":
requirements_status["manual"] += 1
elif req_status_val == "PASS":
requirements_status["passed"] += 1
# Add requirement to compliance requirements
compliance_requirements[requirement.Id] = requirement_dict
# Build compliance dictionary
compliance_dict = {
"framework": compliance_data.Framework,
"version": compliance_data.Version,
"provider": provider_type,
"description": compliance_data.Description,
"requirements": compliance_requirements,
"requirements_status": requirements_status,
"total_requirements": total_requirements,
}
# Add compliance to provider compliance
provider_compliance[compliance_name] = compliance_dict
return template

View File

@@ -1,29 +0,0 @@
ALLOWED_APPS = ("django", "socialaccount", "account", "authtoken", "silk")
class MainRouter:
default_db = "default"
admin_db = "admin"
def db_for_read(self, model, **hints): # noqa: F841
model_table_name = model._meta.db_table
if model_table_name.startswith("django_") or any(
model_table_name.startswith(f"{app}_") for app in ALLOWED_APPS
):
return self.admin_db
return None
def db_for_write(self, model, **hints): # noqa: F841
model_table_name = model._meta.db_table
if any(model_table_name.startswith(f"{app}_") for app in ALLOWED_APPS):
return self.admin_db
return None
def allow_migrate(self, db, app_label, model_name=None, **hints): # noqa: F841
return db == self.admin_db
def allow_relation(self, obj1, obj2, **hints): # noqa: F841
# Allow relations if both objects are in either "default" or "admin" db connectors
if {obj1._state.db, obj2._state.db} <= {self.default_db, self.admin_db}:
return True
return None

View File

@@ -1,566 +0,0 @@
import re
import secrets
import uuid
from contextlib import contextmanager
from datetime import datetime, timedelta, timezone
from django.conf import settings
from django.contrib.auth.models import BaseUserManager
from django.db import connection, models, transaction
from django_celery_beat.models import PeriodicTask
from psycopg2 import connect as psycopg2_connect
from psycopg2.extensions import AsIs, new_type, register_adapter, register_type
from rest_framework_json_api.serializers import ValidationError
DB_USER = settings.DATABASES["default"]["USER"] if not settings.TESTING else "test"
DB_PASSWORD = (
settings.DATABASES["default"]["PASSWORD"] if not settings.TESTING else "test"
)
DB_PROWLER_USER = (
settings.DATABASES["prowler_user"]["USER"] if not settings.TESTING else "test"
)
DB_PROWLER_PASSWORD = (
settings.DATABASES["prowler_user"]["PASSWORD"] if not settings.TESTING else "test"
)
TASK_RUNNER_DB_TABLE = "django_celery_results_taskresult"
POSTGRES_TENANT_VAR = "api.tenant_id"
POSTGRES_USER_VAR = "api.user_id"
SET_CONFIG_QUERY = "SELECT set_config(%s, %s::text, TRUE);"
@contextmanager
def psycopg_connection(database_alias: str):
psycopg2_connection = None
try:
admin_db = settings.DATABASES[database_alias]
psycopg2_connection = psycopg2_connect(
dbname=admin_db["NAME"],
user=admin_db["USER"],
password=admin_db["PASSWORD"],
host=admin_db["HOST"],
port=admin_db["PORT"],
)
yield psycopg2_connection
finally:
if psycopg2_connection is not None:
psycopg2_connection.close()
@contextmanager
def rls_transaction(value: str, parameter: str = POSTGRES_TENANT_VAR):
"""
Creates a new database transaction that sets the given configuration value for Postgres RLS. It validates
that the value is a valid UUID.
Args:
value (str): Database configuration parameter value.
parameter (str): Database configuration parameter name, by default is 'api.tenant_id'.
"""
with transaction.atomic():
with connection.cursor() as cursor:
try:
# just in case the value is a UUID object
uuid.UUID(str(value))
except ValueError:
raise ValidationError("Must be a valid UUID")
cursor.execute(SET_CONFIG_QUERY, [parameter, value])
yield cursor
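# Illustrative usage (the model name below is hypothetical):
#     with rls_transaction(str(tenant_id)):
#         Finding.objects.filter(scan_id=scan_id).count()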
class CustomUserManager(BaseUserManager):
def create_user(self, email, password=None, **extra_fields):
if not email:
raise ValueError("The email field must be set")
email = self.normalize_email(email)
user = self.model(email=email, **extra_fields)
user.set_password(password)
user.save(using=self._db)
return user
def get_by_natural_key(self, email):
return self.get(email__iexact=email)
def enum_to_choices(enum_class):
"""
This function converts a Python Enum to a list of tuples, where the first element is the value and the second element is the name.
It's for use with Django's `choices` attribute, which expects a list of tuples.
"""
return [(item.value, item.name.replace("_", " ").title()) for item in enum_class]
def one_week_from_now():
"""
Return a datetime object with a date one week from now.
"""
return datetime.now(timezone.utc) + timedelta(days=7)
def generate_random_token(length: int = 14, symbols: str | None = None) -> str:
"""
Generate a random token with the specified length.
"""
_symbols = "23456789ABCDEFGHJKMNPQRSTVWXYZ"
return "".join(secrets.choice(symbols or _symbols) for _ in range(length))
def batch_delete(tenant_id, queryset, batch_size=settings.DJANGO_DELETION_BATCH_SIZE):
"""
Deletes objects in batches and returns the total number of deletions and a summary.
Args:
tenant_id (str): Tenant ID the queryset belongs to.
queryset (QuerySet): The queryset of objects to delete.
batch_size (int): The number of objects to delete in each batch.
Returns:
tuple: (total_deleted, deletion_summary)
"""
total_deleted = 0
deletion_summary = {}
while True:
with rls_transaction(tenant_id, POSTGRES_TENANT_VAR):
# Get a batch of IDs to delete
batch_ids = set(
queryset.values_list("id", flat=True).order_by("id")[:batch_size]
)
if not batch_ids:
# No more objects to delete
break
deleted_count, deleted_info = queryset.filter(id__in=batch_ids).delete()
total_deleted += deleted_count
for model_label, count in deleted_info.items():
deletion_summary[model_label] = deletion_summary.get(model_label, 0) + count
return total_deleted, deletion_summary
def delete_related_daily_task(provider_id: str):
"""
Deletes the periodic task associated with a specific provider.
Args:
provider_id (str): The unique identifier for the provider
whose related periodic task should be deleted.
"""
task_name = f"scan-perform-scheduled-{provider_id}"
PeriodicTask.objects.filter(name=task_name).delete()
def create_objects_in_batches(
tenant_id: str, model, objects: list, batch_size: int = 500
):
"""
Bulk-create model instances in repeated, per-tenant RLS transactions.
Each chunk executes in its own transaction, so no single transaction
grows too large.
Args:
tenant_id (str): UUID string of the tenant under which to set RLS.
model: Django model class whose `.objects.bulk_create()` will be called.
objects (list): List of model instances (unsaved) to bulk-create.
batch_size (int): Maximum number of objects per bulk_create call.
"""
total = len(objects)
for i in range(0, total, batch_size):
chunk = objects[i : i + batch_size]
with rls_transaction(value=tenant_id, parameter=POSTGRES_TENANT_VAR):
model.objects.bulk_create(chunk, batch_size)
def update_objects_in_batches(
tenant_id: str, model, objects: list, fields: list, batch_size: int = 500
):
"""
Bulk-update model instances in repeated, per-tenant RLS transactions.
Each chunk executes in its own transaction, so no single transaction
grows too large.
Args:
tenant_id (str): UUID string of the tenant under which to set RLS.
model: Django model class whose `.objects.bulk_update()` will be called.
objects (list): List of model instances (saved) to bulk-update.
fields (list): List of field names to update.
batch_size (int): Maximum number of objects per bulk_update call.
"""
total = len(objects)
for start in range(0, total, batch_size):
chunk = objects[start : start + batch_size]
with rls_transaction(value=tenant_id, parameter=POSTGRES_TENANT_VAR):
model.objects.bulk_update(chunk, fields, batch_size)
# Postgres Enums
class PostgresEnumMigration:
def __init__(self, enum_name: str, enum_values: tuple):
self.enum_name = enum_name
self.enum_values = enum_values
def create_enum_type(self, apps, schema_editor): # noqa: F841
string_enum_values = ", ".join([f"'{value}'" for value in self.enum_values])
with schema_editor.connection.cursor() as cursor:
cursor.execute(
f"CREATE TYPE {self.enum_name} AS ENUM ({string_enum_values});"
)
def drop_enum_type(self, apps, schema_editor): # noqa: F841
with schema_editor.connection.cursor() as cursor:
cursor.execute(f"DROP TYPE {self.enum_name};")
class PostgresEnumField(models.Field):
def __init__(self, enum_type_name, *args, **kwargs):
self.enum_type_name = enum_type_name
super().__init__(*args, **kwargs)
def db_type(self, connection):
return self.enum_type_name
def from_db_value(self, value, expression, connection): # noqa: F841
return value
def to_python(self, value):
if isinstance(value, EnumType):
return value.value
return value
def get_prep_value(self, value):
if isinstance(value, EnumType):
return value.value
return value
class EnumType:
def __init__(self, value):
self.value = value
def __str__(self):
return self.value
def enum_adapter(enum_obj):
return AsIs(f"'{enum_obj.value}'::{enum_obj.__class__.enum_type_name}")
def get_enum_oid(connection, enum_type_name: str):
with connection.cursor() as cursor:
cursor.execute("SELECT oid FROM pg_type WHERE typname = %s;", (enum_type_name,))
result = cursor.fetchone()
if result is None:
raise ValueError(f"Enum type '{enum_type_name}' not found")
return result[0]
def register_enum(apps, schema_editor, enum_class): # noqa: F841
with psycopg_connection(schema_editor.connection.alias) as connection:
enum_oid = get_enum_oid(connection, enum_class.enum_type_name)
enum_instance = new_type(
(enum_oid,),
enum_class.enum_type_name,
lambda value, cur: value, # noqa: F841
)
register_type(enum_instance, connection)
register_adapter(enum_class, enum_adapter)
def _should_create_index_on_partition(
partition_name: str, all_partitions: bool = False
) -> bool:
"""
Determine if we should create an index on this partition.
Args:
partition_name: The name of the partition (e.g., "findings_2025_aug", "findings_default")
all_partitions: If True, create on all partitions. If False, only current/future partitions.
Returns:
bool: True if index should be created on this partition, False otherwise.
"""
if all_partitions:
return True
# Extract date from partition name if it follows the pattern
# Partition names look like: findings_2025_aug, findings_2025_jul, etc.
date_pattern = r"(\d{4})_([a-z]{3})$"
match = re.search(date_pattern, partition_name)
if not match:
# If we can't parse the date, include it to be safe (e.g., default partition)
return True
try:
year_str, month_abbr = match.groups()
year = int(year_str)
# Map month abbreviations to numbers
month_map = {
"jan": 1,
"feb": 2,
"mar": 3,
"apr": 4,
"may": 5,
"jun": 6,
"jul": 7,
"aug": 8,
"sep": 9,
"oct": 10,
"nov": 11,
"dec": 12,
}
month = month_map.get(month_abbr.lower())
if month is None:
# Unknown month abbreviation, include it to be safe
return True
partition_date = datetime(year, month, 1, tzinfo=timezone.utc)
# Get current month start
now = datetime.now(timezone.utc)
current_month_start = now.replace(
day=1, hour=0, minute=0, second=0, microsecond=0
)
# Include current month and future partitions
return partition_date >= current_month_start
except (ValueError, TypeError):
# If date parsing fails, include it to be safe
return True
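# Behaviour sketch (illustrative; assumes "now" falls in August 2025):
#
#   _should_create_index_on_partition("findings_2025_aug")  # True  (current month)
#   _should_create_index_on_partition("findings_2025_jul")  # False (past month)
#   _should_create_index_on_partition("findings_2025_sep")  # True  (future month)
#   _should_create_index_on_partition("findings_default")   # True  (no date -> safe)
#   _should_create_index_on_partition("findings_2025_jul", all_partitions=True)  # True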
def create_index_on_partitions(
apps, # noqa: F841
schema_editor,
parent_table: str,
index_name: str,
columns: str,
method: str = "BTREE",
where: str = "",
    all_partitions: bool = False,
):
"""
Create an index on existing partitions of `parent_table`.
Args:
parent_table: The name of the root table (e.g. "findings").
index_name: A short name for the index (will be prefixed per-partition).
columns: The parenthesized column list, e.g. "tenant_id, scan_id, status".
method: The index method—BTREE, GIN, etc. Defaults to BTREE.
where: Optional WHERE clause (without the leading "WHERE"), e.g. "status = 'FAIL'".
all_partitions: Whether to create indexes on all partitions or just current/future ones.
Defaults to False (current/future only) to avoid maintenance overhead
on old partitions where the index may not be needed.
Examples:
# Create index only on current and future partitions (recommended for new indexes)
create_index_on_partitions(
apps, schema_editor,
parent_table="findings",
index_name="new_performance_idx",
columns="tenant_id, status, severity",
all_partitions=False # Default behavior
)
# Create index on all partitions (use when migrating existing critical indexes)
create_index_on_partitions(
apps, schema_editor,
parent_table="findings",
index_name="critical_existing_idx",
columns="tenant_id, scan_id",
all_partitions=True
)
"""
with connection.cursor() as cursor:
cursor.execute(
"""
SELECT inhrelid::regclass::text
FROM pg_inherits
WHERE inhparent = %s::regclass
""",
[parent_table],
)
partitions = [row[0] for row in cursor.fetchall()]
where_sql = f" WHERE {where}" if where else ""
for partition in partitions:
if _should_create_index_on_partition(partition, all_partitions):
idx_name = f"{partition.replace('.', '_')}_{index_name}"
sql = (
f"CREATE INDEX CONCURRENTLY IF NOT EXISTS {idx_name} "
f"ON {partition} USING {method} ({columns})"
f"{where_sql};"
)
schema_editor.execute(sql)
def drop_index_on_partitions(
apps, # noqa: F841
schema_editor,
parent_table: str,
index_name: str,
):
"""
Drop the per-partition indexes that were created by create_index_on_partitions.
Args:
parent_table: The name of the root table (e.g. "findings").
index_name: The same short name used when creating them.
"""
with connection.cursor() as cursor:
cursor.execute(
"""
SELECT inhrelid::regclass::text
FROM pg_inherits
WHERE inhparent = %s::regclass
""",
[parent_table],
)
partitions = [row[0] for row in cursor.fetchall()]
for partition in partitions:
idx_name = f"{partition.replace('.', '_')}_{index_name}"
sql = f"DROP INDEX CONCURRENTLY IF EXISTS {idx_name};"
schema_editor.execute(sql)
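# Illustrative migration wiring (assumed, not from the original file): pairing
# the two helpers as forward/reverse operations via functools.partial; the
# table, index name, and columns shown are placeholders.
#
#   from functools import partial
#   from django.db import migrations
#
#   migrations.RunPython(
#       partial(
#           create_index_on_partitions,
#           parent_table="findings",
#           index_name="tenant_status_idx",
#           columns="tenant_id, status",
#       ),
#       reverse_code=partial(
#           drop_index_on_partitions,
#           parent_table="findings",
#           index_name="tenant_status_idx",
#       ),
#   )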
# Postgres enum definition for member role
class MemberRoleEnum(EnumType):
enum_type_name = "member_role"
class MemberRoleEnumField(PostgresEnumField):
def __init__(self, *args, **kwargs):
super().__init__("member_role", *args, **kwargs)
# Postgres enum definition for Provider.provider
class ProviderEnum(EnumType):
enum_type_name = "provider"
class ProviderEnumField(PostgresEnumField):
def __init__(self, *args, **kwargs):
super().__init__("provider", *args, **kwargs)
# Postgres enum definition for Scan.type
class ScanTriggerEnum(EnumType):
enum_type_name = "scan_trigger"
class ScanTriggerEnumField(PostgresEnumField):
def __init__(self, *args, **kwargs):
super().__init__("scan_trigger", *args, **kwargs)
# Postgres enum definition for state
class StateEnum(EnumType):
enum_type_name = "state"
class StateEnumField(PostgresEnumField):
def __init__(self, *args, **kwargs):
super().__init__("state", *args, **kwargs)
# Postgres enum definition for Finding.Delta
class FindingDeltaEnum(EnumType):
enum_type_name = "finding_delta"
class FindingDeltaEnumField(PostgresEnumField):
def __init__(self, *args, **kwargs):
super().__init__("finding_delta", *args, **kwargs)
# Postgres enum definition for Severity
class SeverityEnum(EnumType):
enum_type_name = "severity"
class SeverityEnumField(PostgresEnumField):
def __init__(self, *args, **kwargs):
super().__init__("severity", *args, **kwargs)
# Postgres enum definition for Status
class StatusEnum(EnumType):
enum_type_name = "status"
class StatusEnumField(PostgresEnumField):
def __init__(self, *args, **kwargs):
super().__init__("status", *args, **kwargs)
# Postgres enum definition for Provider secrets type
class ProviderSecretTypeEnum(EnumType):
enum_type_name = "provider_secret_type"
class ProviderSecretTypeEnumField(PostgresEnumField):
def __init__(self, *args, **kwargs):
super().__init__("provider_secret_type", *args, **kwargs)
# Postgres enum definition for Invitation state
class InvitationStateEnum(EnumType):
enum_type_name = "invitation_state"
class InvitationStateEnumField(PostgresEnumField):
def __init__(self, *args, **kwargs):
super().__init__("invitation_state", *args, **kwargs)
# Postgres enum definition for Integration type
class IntegrationTypeEnum(EnumType):
enum_type_name = "integration_type"
class IntegrationTypeEnumField(PostgresEnumField):
def __init__(self, *args, **kwargs):
super().__init__("integration_type", *args, **kwargs)
# Postgres enum definition for Processor type
class ProcessorTypeEnum(EnumType):
enum_type_name = "processor_type"
class ProcessorTypeEnumField(PostgresEnumField):
def __init__(self, *args, **kwargs):
super().__init__("processor_type", *args, **kwargs)

View File

@@ -1,68 +0,0 @@
import uuid
from functools import wraps
from django.db import connection, transaction
from rest_framework_json_api.serializers import ValidationError
from api.db_utils import POSTGRES_TENANT_VAR, SET_CONFIG_QUERY
def set_tenant(func=None, *, keep_tenant=False):
"""
Decorator to set the tenant context for a Celery task based on the provided tenant_id.
This decorator extracts the `tenant_id` from the task's keyword arguments,
and uses it to set the tenant context for the current database session.
    The `tenant_id` is then removed from the kwargs before the task function
    is executed, unless `keep_tenant=True`, in which case it is left in place
    so the task body can read it. If `tenant_id` is not provided, a KeyError
    is raised.
    Args:
        func (function): The Celery task function to be decorated.
        keep_tenant (bool): If True, keep `tenant_id` in the task's kwargs
            instead of removing it. Defaults to False.
    Raises:
        KeyError: If `tenant_id` is not found in the task's keyword arguments.
        ValidationError: If `tenant_id` is not a valid UUID.
Returns:
function: The wrapped function with tenant context set.
Example:
# This decorator MUST be defined the last in the decorator chain
@shared_task
@set_tenant
def some_task(arg1, **kwargs):
# Task logic here
pass
# When calling the task
some_task.delay(arg1, tenant_id="8db7ca86-03cc-4d42-99f6-5e480baf6ab5")
# The tenant context will be set before the task logic executes.
"""
def decorator(func):
@wraps(func)
@transaction.atomic
def wrapper(*args, **kwargs):
try:
if not keep_tenant:
tenant_id = kwargs.pop("tenant_id")
else:
tenant_id = kwargs["tenant_id"]
except KeyError:
raise KeyError("This task requires the tenant_id")
try:
uuid.UUID(tenant_id)
except ValueError:
raise ValidationError("Tenant ID must be a valid UUID")
with connection.cursor() as cursor:
cursor.execute(SET_CONFIG_QUERY, [POSTGRES_TENANT_VAR, tenant_id])
return func(*args, **kwargs)
return wrapper
if func is None:
return decorator
else:
return decorator(func)
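# Illustrative sketch (task name assumed): with keep_tenant=True the tenant_id
# stays in kwargs for the task body, while the RLS session variable is still set.
#
#   @shared_task
#   @set_tenant(keep_tenant=True)
#   def export_findings(**kwargs):
#       tenant_id = kwargs["tenant_id"]  # still available inside the task
#       ...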

View File

@@ -1,98 +0,0 @@
from django.core.exceptions import ValidationError as django_validation_error
from rest_framework import status
from rest_framework.exceptions import APIException
from rest_framework_json_api.exceptions import exception_handler
from rest_framework_json_api.serializers import ValidationError
from rest_framework_simplejwt.exceptions import InvalidToken, TokenError
class ModelValidationError(ValidationError):
def __init__(
self,
detail: str | None = None,
code: str | None = None,
pointer: str | None = None,
status_code: int = 400,
):
super().__init__(
detail=[
{
"detail": detail,
"status": str(status_code),
"source": {"pointer": pointer},
"code": code,
}
]
)
class InvitationTokenExpiredException(APIException):
status_code = status.HTTP_410_GONE
default_detail = "The invitation token has expired and is no longer valid."
default_code = "token_expired"
# Task Management Exceptions (non-HTTP)
class TaskManagementError(Exception):
"""Base exception for task management errors."""
def __init__(self, task=None):
self.task = task
super().__init__()
class TaskFailedException(TaskManagementError):
"""Raised when a task has failed."""
class TaskNotFoundException(TaskManagementError):
"""Raised when a task is not found."""
class TaskInProgressException(TaskManagementError):
"""Raised when a task is running but there's no related Task object to return."""
def __init__(self, task_result=None):
self.task_result = task_result
super().__init__()
# Provider connection errors
class ProviderConnectionError(Exception):
"""Base exception for provider connection errors."""
def custom_exception_handler(exc, context):
if isinstance(exc, django_validation_error):
if hasattr(exc, "error_dict"):
exc = ValidationError(exc.message_dict)
else:
exc = ValidationError(detail=exc.messages[0], code=exc.code)
elif isinstance(exc, (TokenError, InvalidToken)):
if (
hasattr(exc, "detail")
and isinstance(exc.detail, dict)
and "messages" in exc.detail
):
exc.detail["messages"] = [
message_item["message"] for message_item in exc.detail["messages"]
]
return exception_handler(exc, context)
class ConflictException(APIException):
status_code = status.HTTP_409_CONFLICT
default_detail = "A conflict occurred. The resource already exists."
default_code = "conflict"
def __init__(self, detail=None, code=None, pointer=None):
error_detail = {
"detail": detail or self.default_detail,
"status": self.status_code,
"code": self.default_code,
}
if pointer:
error_detail["source"] = {"pointer": pointer}
super().__init__(detail=[error_detail])
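# Illustrative wiring (settings module and dotted path assumed):
# custom_exception_handler is a drop-in for DRF's EXCEPTION_HANDLER hook.
#
#   REST_FRAMEWORK = {
#       "EXCEPTION_HANDLER": "api.exceptions.custom_exception_handler",
#   }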

View File

@@ -1,795 +0,0 @@
from datetime import date, datetime, timedelta, timezone
from dateutil.parser import parse
from django.conf import settings
from django.db.models import Q
from django_filters.rest_framework import (
BaseInFilter,
BooleanFilter,
CharFilter,
ChoiceFilter,
DateFilter,
FilterSet,
UUIDFilter,
)
from rest_framework_json_api.django_filters.backends import DjangoFilterBackend
from rest_framework_json_api.serializers import ValidationError
from api.db_utils import (
FindingDeltaEnumField,
InvitationStateEnumField,
ProviderEnumField,
SeverityEnumField,
StatusEnumField,
)
from api.models import (
ComplianceRequirementOverview,
Finding,
Integration,
Invitation,
Membership,
PermissionChoices,
Processor,
Provider,
ProviderGroup,
ProviderSecret,
Resource,
ResourceTag,
Role,
Scan,
ScanSummary,
SeverityChoices,
StateChoices,
StatusChoices,
Task,
User,
)
from api.rls import Tenant
from api.uuid_utils import (
datetime_to_uuid7,
transform_into_uuid7,
uuid7_end,
uuid7_range,
uuid7_start,
)
from api.v1.serializers import TaskBase
class CustomDjangoFilterBackend(DjangoFilterBackend):
def to_html(self, _request, _queryset, _view):
"""Override this method to use the Browsable API in dev environments.
This disables the HTML render for the default filter.
"""
return None
def get_filterset_class(self, view, queryset=None):
# Check if the view has 'get_filterset_class' method
if hasattr(view, "get_filterset_class"):
return view.get_filterset_class()
# Fallback to the default implementation
return super().get_filterset_class(view, queryset)
class UUIDInFilter(BaseInFilter, UUIDFilter):
pass
class CharInFilter(BaseInFilter, CharFilter):
pass
class ChoiceInFilter(BaseInFilter, ChoiceFilter):
pass
class CommonFindingFilters(FilterSet):
# We filter providers from the scan in findings
provider = UUIDFilter(field_name="scan__provider__id", lookup_expr="exact")
provider__in = UUIDInFilter(field_name="scan__provider__id", lookup_expr="in")
provider_type = ChoiceFilter(
choices=Provider.ProviderChoices.choices, field_name="scan__provider__provider"
)
provider_type__in = ChoiceInFilter(
choices=Provider.ProviderChoices.choices, field_name="scan__provider__provider"
)
provider_uid = CharFilter(field_name="scan__provider__uid", lookup_expr="exact")
provider_uid__in = CharInFilter(field_name="scan__provider__uid", lookup_expr="in")
provider_uid__icontains = CharFilter(
field_name="scan__provider__uid", lookup_expr="icontains"
)
provider_alias = CharFilter(field_name="scan__provider__alias", lookup_expr="exact")
provider_alias__in = CharInFilter(
field_name="scan__provider__alias", lookup_expr="in"
)
provider_alias__icontains = CharFilter(
field_name="scan__provider__alias", lookup_expr="icontains"
)
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
uid = CharFilter(field_name="uid")
delta = ChoiceFilter(choices=Finding.DeltaChoices.choices)
status = ChoiceFilter(choices=StatusChoices.choices)
severity = ChoiceFilter(choices=SeverityChoices)
impact = ChoiceFilter(choices=SeverityChoices)
muted = BooleanFilter(
help_text="If this filter is not provided, muted and non-muted findings will be returned."
)
resources = UUIDInFilter(field_name="resource__id", lookup_expr="in")
region = CharFilter(method="filter_resource_region")
region__in = CharInFilter(field_name="resource_regions", lookup_expr="overlap")
region__icontains = CharFilter(
field_name="resource_regions", lookup_expr="icontains"
)
service = CharFilter(method="filter_resource_service")
service__in = CharInFilter(field_name="resource_services", lookup_expr="overlap")
service__icontains = CharFilter(
field_name="resource_services", lookup_expr="icontains"
)
resource_uid = CharFilter(field_name="resources__uid")
resource_uid__in = CharInFilter(field_name="resources__uid", lookup_expr="in")
resource_uid__icontains = CharFilter(
field_name="resources__uid", lookup_expr="icontains"
)
resource_name = CharFilter(field_name="resources__name")
resource_name__in = CharInFilter(field_name="resources__name", lookup_expr="in")
resource_name__icontains = CharFilter(
field_name="resources__name", lookup_expr="icontains"
)
resource_type = CharFilter(method="filter_resource_type")
resource_type__in = CharInFilter(field_name="resource_types", lookup_expr="overlap")
resource_type__icontains = CharFilter(
field_name="resources__type", lookup_expr="icontains"
)
# Temporarily disabled until we implement tag filtering in the UI
# resource_tag_key = CharFilter(field_name="resources__tags__key")
# resource_tag_key__in = CharInFilter(
# field_name="resources__tags__key", lookup_expr="in"
# )
# resource_tag_key__icontains = CharFilter(
# field_name="resources__tags__key", lookup_expr="icontains"
# )
# resource_tag_value = CharFilter(field_name="resources__tags__value")
# resource_tag_value__in = CharInFilter(
# field_name="resources__tags__value", lookup_expr="in"
# )
# resource_tag_value__icontains = CharFilter(
# field_name="resources__tags__value", lookup_expr="icontains"
# )
# resource_tags = CharInFilter(
# method="filter_resource_tag",
# lookup_expr="in",
# help_text="Filter by resource tags `key:value` pairs.\nMultiple values may be "
# "separated by commas.",
# )
def filter_resource_service(self, queryset, name, value):
return queryset.filter(resource_services__contains=[value])
def filter_resource_region(self, queryset, name, value):
return queryset.filter(resource_regions__contains=[value])
def filter_resource_type(self, queryset, name, value):
return queryset.filter(resource_types__contains=[value])
def filter_resource_tag(self, queryset, name, value):
overall_query = Q()
for key_value_pair in value:
tag_key, tag_value = key_value_pair.split(":", 1)
overall_query |= Q(
resources__tags__key__icontains=tag_key,
resources__tags__value__icontains=tag_value,
)
return queryset.filter(overall_query).distinct()
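# Illustrative request sketch (endpoint path assumed): with the JSON:API filter
# backend these declarations surface as filter[...] query parameters, using
# dots for lookups, e.g.
#
#   GET /api/v1/findings?filter[provider_type.in]=aws,gcp
#       &filter[severity]=critical&filter[region]=eu-west-1&filter[muted]=false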
class TenantFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
class Meta:
model = Tenant
fields = {
"name": ["exact", "icontains"],
"inserted_at": ["date", "gte", "lte"],
"updated_at": ["gte", "lte"],
}
class MembershipFilter(FilterSet):
date_joined = DateFilter(field_name="date_joined", lookup_expr="date")
role = ChoiceFilter(choices=Membership.RoleChoices.choices)
class Meta:
model = Membership
fields = {
"tenant": ["exact"],
"role": ["exact"],
"date_joined": ["date", "gte", "lte"],
}
class ProviderFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
connected = BooleanFilter()
provider = ChoiceFilter(choices=Provider.ProviderChoices.choices)
class Meta:
model = Provider
fields = {
"provider": ["exact", "in"],
"id": ["exact", "in"],
"uid": ["exact", "icontains", "in"],
"alias": ["exact", "icontains", "in"],
"inserted_at": ["gte", "lte"],
"updated_at": ["gte", "lte"],
}
filter_overrides = {
ProviderEnumField: {
"filter_class": CharFilter,
},
}
class ProviderRelationshipFilterSet(FilterSet):
provider_type = ChoiceFilter(
choices=Provider.ProviderChoices.choices, field_name="provider__provider"
)
provider_type__in = ChoiceInFilter(
choices=Provider.ProviderChoices.choices, field_name="provider__provider"
)
provider_uid = CharFilter(field_name="provider__uid", lookup_expr="exact")
provider_uid__in = CharInFilter(field_name="provider__uid", lookup_expr="in")
provider_uid__icontains = CharFilter(
field_name="provider__uid", lookup_expr="icontains"
)
provider_alias = CharFilter(field_name="provider__alias", lookup_expr="exact")
provider_alias__in = CharInFilter(field_name="provider__alias", lookup_expr="in")
provider_alias__icontains = CharFilter(
field_name="provider__alias", lookup_expr="icontains"
)
class ProviderGroupFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
class Meta:
model = ProviderGroup
fields = {
"id": ["exact", "in"],
"name": ["exact", "in"],
"inserted_at": ["gte", "lte"],
"updated_at": ["gte", "lte"],
}
class ScanFilter(ProviderRelationshipFilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
completed_at = DateFilter(field_name="completed_at", lookup_expr="date")
started_at = DateFilter(field_name="started_at", lookup_expr="date")
next_scan_at = DateFilter(field_name="next_scan_at", lookup_expr="date")
trigger = ChoiceFilter(choices=Scan.TriggerChoices.choices)
state = ChoiceFilter(choices=StateChoices.choices)
state__in = ChoiceInFilter(
field_name="state", choices=StateChoices.choices, lookup_expr="in"
)
class Meta:
model = Scan
fields = {
"provider": ["exact", "in"],
"name": ["exact", "icontains"],
"started_at": ["gte", "lte"],
"next_scan_at": ["gte", "lte"],
"trigger": ["exact"],
}
class TaskFilter(FilterSet):
name = CharFilter(field_name="task_runner_task__task_name", lookup_expr="exact")
name__icontains = CharFilter(
field_name="task_runner_task__task_name", lookup_expr="icontains"
)
state = ChoiceFilter(
choices=StateChoices.choices, method="filter_state", lookup_expr="exact"
)
task_state_inverse_mapping_values = {
v: k for k, v in TaskBase.state_mapping.items()
}
def filter_state(self, queryset, name, value):
if value not in StateChoices:
raise ValidationError(
f"Invalid provider value: '{value}'. Valid values are: "
f"{', '.join(StateChoices)}"
)
return queryset.filter(
task_runner_task__status=self.task_state_inverse_mapping_values[value]
)
class Meta:
model = Task
fields = []
class ResourceTagFilter(FilterSet):
class Meta:
model = ResourceTag
fields = {
"key": ["exact", "icontains"],
"value": ["exact", "icontains"],
}
search = ["text_search"]
class ResourceFilter(ProviderRelationshipFilterSet):
tag_key = CharFilter(method="filter_tag_key")
tag_value = CharFilter(method="filter_tag_value")
tag = CharFilter(method="filter_tag")
tags = CharFilter(method="filter_tag")
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
scan = UUIDFilter(field_name="provider__scan", lookup_expr="exact")
scan__in = UUIDInFilter(field_name="provider__scan", lookup_expr="in")
class Meta:
model = Resource
fields = {
"provider": ["exact", "in"],
"uid": ["exact", "icontains"],
"name": ["exact", "icontains"],
"region": ["exact", "icontains", "in"],
"service": ["exact", "icontains", "in"],
"type": ["exact", "icontains", "in"],
"inserted_at": ["gte", "lte"],
"updated_at": ["gte", "lte"],
}
def filter_queryset(self, queryset):
if not (self.data.get("scan") or self.data.get("scan__in")) and not (
self.data.get("updated_at")
or self.data.get("updated_at__date")
or self.data.get("updated_at__gte")
or self.data.get("updated_at__lte")
):
raise ValidationError(
[
{
"detail": "At least one date filter is required: filter[updated_at], filter[updated_at.gte], "
"or filter[updated_at.lte].",
"status": 400,
"source": {"pointer": "/data/attributes/updated_at"},
"code": "required",
}
]
)
gte_date = (
parse(self.data.get("updated_at__gte")).date()
if self.data.get("updated_at__gte")
else datetime.now(timezone.utc).date()
)
lte_date = (
parse(self.data.get("updated_at__lte")).date()
if self.data.get("updated_at__lte")
else datetime.now(timezone.utc).date()
)
if abs(lte_date - gte_date) > timedelta(
days=settings.FINDINGS_MAX_DAYS_IN_RANGE
):
raise ValidationError(
[
{
"detail": f"The date range cannot exceed {settings.FINDINGS_MAX_DAYS_IN_RANGE} days.",
"status": 400,
"source": {"pointer": "/data/attributes/updated_at"},
"code": "invalid",
}
]
)
return super().filter_queryset(queryset)
def filter_tag_key(self, queryset, name, value):
return queryset.filter(Q(tags__key=value) | Q(tags__key__icontains=value))
def filter_tag_value(self, queryset, name, value):
return queryset.filter(Q(tags__value=value) | Q(tags__value__icontains=value))
def filter_tag(self, queryset, name, value):
# We won't know what the user wants to filter on just based on the value,
# and we don't want to build special filtering logic for every possible
# provider tag spec, so we'll just do a full text search
return queryset.filter(tags__text_search=value)
class LatestResourceFilter(ProviderRelationshipFilterSet):
tag_key = CharFilter(method="filter_tag_key")
tag_value = CharFilter(method="filter_tag_value")
tag = CharFilter(method="filter_tag")
tags = CharFilter(method="filter_tag")
class Meta:
model = Resource
fields = {
"provider": ["exact", "in"],
"uid": ["exact", "icontains"],
"name": ["exact", "icontains"],
"region": ["exact", "icontains", "in"],
"service": ["exact", "icontains", "in"],
"type": ["exact", "icontains", "in"],
}
def filter_tag_key(self, queryset, name, value):
return queryset.filter(Q(tags__key=value) | Q(tags__key__icontains=value))
def filter_tag_value(self, queryset, name, value):
return queryset.filter(Q(tags__value=value) | Q(tags__value__icontains=value))
def filter_tag(self, queryset, name, value):
# We won't know what the user wants to filter on just based on the value,
# and we don't want to build special filtering logic for every possible
# provider tag spec, so we'll just do a full text search
return queryset.filter(tags__text_search=value)
class FindingFilter(CommonFindingFilters):
scan = UUIDFilter(method="filter_scan_id")
scan__in = UUIDInFilter(method="filter_scan_id_in")
inserted_at = DateFilter(method="filter_inserted_at", lookup_expr="date")
inserted_at__date = DateFilter(method="filter_inserted_at", lookup_expr="date")
inserted_at__gte = DateFilter(
method="filter_inserted_at_gte",
help_text=f"Maximum date range is {settings.FINDINGS_MAX_DAYS_IN_RANGE} days.",
)
inserted_at__lte = DateFilter(
method="filter_inserted_at_lte",
help_text=f"Maximum date range is {settings.FINDINGS_MAX_DAYS_IN_RANGE} days.",
)
class Meta:
model = Finding
fields = {
"id": ["exact", "in"],
"uid": ["exact", "in"],
"scan": ["exact", "in"],
"delta": ["exact", "in"],
"status": ["exact", "in"],
"severity": ["exact", "in"],
"impact": ["exact", "in"],
"check_id": ["exact", "in", "icontains"],
"inserted_at": ["date", "gte", "lte"],
"updated_at": ["gte", "lte"],
}
filter_overrides = {
FindingDeltaEnumField: {
"filter_class": CharFilter,
},
StatusEnumField: {
"filter_class": CharFilter,
},
SeverityEnumField: {
"filter_class": CharFilter,
},
}
def filter_resource_type(self, queryset, name, value):
return queryset.filter(resource_types__contains=[value])
def filter_resource_region(self, queryset, name, value):
return queryset.filter(resource_regions__contains=[value])
def filter_resource_service(self, queryset, name, value):
return queryset.filter(resource_services__contains=[value])
def filter_queryset(self, queryset):
if not (self.data.get("scan") or self.data.get("scan__in")) and not (
self.data.get("inserted_at")
or self.data.get("inserted_at__date")
or self.data.get("inserted_at__gte")
or self.data.get("inserted_at__lte")
):
raise ValidationError(
[
{
"detail": "At least one date filter is required: filter[inserted_at], filter[inserted_at.gte], "
"or filter[inserted_at.lte].",
"status": 400,
"source": {"pointer": "/data/attributes/inserted_at"},
"code": "required",
}
]
)
gte_date = (
datetime.strptime(self.data.get("inserted_at__gte"), "%Y-%m-%d").date()
if self.data.get("inserted_at__gte")
else datetime.now(timezone.utc).date()
)
lte_date = (
datetime.strptime(self.data.get("inserted_at__lte"), "%Y-%m-%d").date()
if self.data.get("inserted_at__lte")
else datetime.now(timezone.utc).date()
)
if abs(lte_date - gte_date) > timedelta(
days=settings.FINDINGS_MAX_DAYS_IN_RANGE
):
raise ValidationError(
[
{
"detail": f"The date range cannot exceed {settings.FINDINGS_MAX_DAYS_IN_RANGE} days.",
"status": 400,
"source": {"pointer": "/data/attributes/inserted_at"},
"code": "invalid",
}
]
)
return super().filter_queryset(queryset)
# Convert filter values to UUIDv7 values for use with partitioning
def filter_scan_id(self, queryset, name, value):
try:
value_uuid = transform_into_uuid7(value)
start = uuid7_start(value_uuid)
end = uuid7_end(value_uuid, settings.FINDINGS_TABLE_PARTITION_MONTHS)
except ValidationError as validation_error:
detail = str(validation_error.detail[0])
raise ValidationError(
[
{
"detail": detail,
"status": 400,
"source": {"pointer": "/data/relationships/scan"},
"code": "invalid",
}
]
)
return (
queryset.filter(id__gte=start).filter(id__lt=end).filter(scan_id=value_uuid)
)
def filter_scan_id_in(self, queryset, name, value):
try:
uuid_list = [
transform_into_uuid7(value_uuid)
for value_uuid in value
if value_uuid is not None
]
start, end = uuid7_range(uuid_list)
except ValidationError as validation_error:
detail = str(validation_error.detail[0])
raise ValidationError(
[
{
"detail": detail,
"status": 400,
"source": {"pointer": "/data/relationships/scan"},
"code": "invalid",
}
]
)
if start == end:
return queryset.filter(id__gte=start).filter(scan_id__in=uuid_list)
else:
return (
queryset.filter(id__gte=start)
.filter(id__lt=end)
.filter(scan_id__in=uuid_list)
)
def filter_inserted_at(self, queryset, name, value):
datetime_value = self.maybe_date_to_datetime(value)
start = uuid7_start(datetime_to_uuid7(datetime_value))
end = uuid7_start(datetime_to_uuid7(datetime_value + timedelta(days=1)))
return queryset.filter(id__gte=start, id__lt=end)
def filter_inserted_at_gte(self, queryset, name, value):
datetime_value = self.maybe_date_to_datetime(value)
start = uuid7_start(datetime_to_uuid7(datetime_value))
return queryset.filter(id__gte=start)
def filter_inserted_at_lte(self, queryset, name, value):
datetime_value = self.maybe_date_to_datetime(value)
end = uuid7_start(datetime_to_uuid7(datetime_value + timedelta(days=1)))
return queryset.filter(id__lt=end)
@staticmethod
def maybe_date_to_datetime(value):
dt = value
if isinstance(value, date):
dt = datetime.combine(value, datetime.min.time(), tzinfo=timezone.utc)
return dt
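# Illustrative sketch of the UUIDv7 pruning trick used above (helpers come from
# api.uuid_utils, imported at the top of this file): a date filter becomes an
# id range, letting Postgres prune partitions keyed by UUIDv7 primary keys.
#
#   day = datetime(2024, 10, 18, tzinfo=timezone.utc)
#   start = uuid7_start(datetime_to_uuid7(day))
#   end = uuid7_start(datetime_to_uuid7(day + timedelta(days=1)))
#   Finding.objects.filter(id__gte=start, id__lt=end)  # one day of findings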
class LatestFindingFilter(CommonFindingFilters):
class Meta:
model = Finding
fields = {
"id": ["exact", "in"],
"uid": ["exact", "in"],
"delta": ["exact", "in"],
"status": ["exact", "in"],
"severity": ["exact", "in"],
"impact": ["exact", "in"],
"check_id": ["exact", "in", "icontains"],
}
filter_overrides = {
FindingDeltaEnumField: {
"filter_class": CharFilter,
},
StatusEnumField: {
"filter_class": CharFilter,
},
SeverityEnumField: {
"filter_class": CharFilter,
},
}
class ProviderSecretFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
provider = UUIDFilter(field_name="provider__id", lookup_expr="exact")
class Meta:
model = ProviderSecret
fields = {
"name": ["exact", "icontains"],
}
class InvitationFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
expires_at = DateFilter(field_name="expires_at", lookup_expr="date")
state = ChoiceFilter(choices=Invitation.State.choices)
state__in = ChoiceInFilter(choices=Invitation.State.choices, lookup_expr="in")
class Meta:
model = Invitation
fields = {
"email": ["exact", "icontains"],
"inserted_at": ["date", "gte", "lte"],
"updated_at": ["date", "gte", "lte"],
"expires_at": ["date", "gte", "lte"],
"inviter": ["exact"],
}
filter_overrides = {
InvitationStateEnumField: {
"filter_class": CharFilter,
}
}
class UserFilter(FilterSet):
date_joined = DateFilter(field_name="date_joined", lookup_expr="date")
class Meta:
model = User
fields = {
"name": ["exact", "icontains"],
"email": ["exact", "icontains"],
"company_name": ["exact", "icontains"],
"date_joined": ["date", "gte", "lte"],
"is_active": ["exact"],
}
class RoleFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
updated_at = DateFilter(field_name="updated_at", lookup_expr="date")
permission_state = ChoiceFilter(
choices=PermissionChoices.choices, method="filter_permission_state"
)
def filter_permission_state(self, queryset, name, value):
return Role.filter_by_permission_state(queryset, value)
class Meta:
model = Role
fields = {
"id": ["exact", "in"],
"name": ["exact", "in"],
"inserted_at": ["gte", "lte"],
"updated_at": ["gte", "lte"],
}
class ComplianceOverviewFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
scan_id = UUIDFilter(field_name="scan_id")
region = CharFilter(field_name="region")
class Meta:
model = ComplianceRequirementOverview
fields = {
"inserted_at": ["date", "gte", "lte"],
"compliance_id": ["exact", "icontains"],
"framework": ["exact", "iexact", "icontains"],
"version": ["exact", "icontains"],
"region": ["exact", "icontains", "in"],
}
class ScanSummaryFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
provider_id = UUIDFilter(field_name="scan__provider__id", lookup_expr="exact")
provider_type = ChoiceFilter(
field_name="scan__provider__provider", choices=Provider.ProviderChoices.choices
)
provider_type__in = ChoiceInFilter(
field_name="scan__provider__provider", choices=Provider.ProviderChoices.choices
)
region = CharFilter(field_name="region")
class Meta:
model = ScanSummary
fields = {
"inserted_at": ["date", "gte", "lte"],
"region": ["exact", "icontains", "in"],
}
class ServiceOverviewFilter(ScanSummaryFilter):
def is_valid(self):
# Check if at least one of the inserted_at filters is present
inserted_at_filters = [
self.data.get("inserted_at"),
self.data.get("inserted_at__gte"),
self.data.get("inserted_at__lte"),
]
if not any(inserted_at_filters):
raise ValidationError(
{
"inserted_at": [
"At least one of filter[inserted_at], filter[inserted_at__gte], or "
"filter[inserted_at__lte] is required."
]
}
)
return super().is_valid()
class IntegrationFilter(FilterSet):
inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
integration_type = ChoiceFilter(choices=Integration.IntegrationChoices.choices)
integration_type__in = ChoiceInFilter(
choices=Integration.IntegrationChoices.choices,
field_name="integration_type",
lookup_expr="in",
)
class Meta:
model = Integration
fields = {
"inserted_at": ["date", "gte", "lte"],
}
class ProcessorFilter(FilterSet):
processor_type = ChoiceFilter(choices=Processor.ProcessorChoices.choices)
processor_type__in = ChoiceInFilter(
choices=Processor.ProcessorChoices.choices,
field_name="processor_type",
lookup_expr="in",
)

View File

@@ -1,41 +0,0 @@
[
{
"model": "api.user",
"pk": "8b38e2eb-6689-4f1e-a4ba-95b275130200",
"fields": {
"password": "pbkdf2_sha256$870000$Z63pGJ7nre48hfcGbk5S0O$rQpKczAmijs96xa+gPVJifpT3Fetb8DOusl5Eq6gxac=",
"last_login": null,
"name": "Devie Prowlerson",
"email": "dev@prowler.com",
"company_name": "Prowler Developers",
"is_active": true,
"date_joined": "2024-09-17T09:04:20.850Z"
}
},
{
"model": "api.user",
"pk": "b6493a3a-c997-489b-8b99-278bf74de9f6",
"fields": {
"password": "pbkdf2_sha256$870000$Z63pGJ7nre48hfcGbk5S0O$rQpKczAmijs96xa+gPVJifpT3Fetb8DOusl5Eq6gxac=",
"last_login": null,
"name": "Devietoo Prowlerson",
"email": "dev2@prowler.com",
"company_name": "Prowler Developers",
"is_active": true,
"date_joined": "2024-09-18T09:04:20.850Z"
}
},
{
"model": "api.user",
"pk": "6d4f8a91-3c2e-4b5a-8f7d-1e9c5b2a4d6f",
"fields": {
"password": "pbkdf2_sha256$870000$Z63pGJ7nre48hfcGbk5S0O$rQpKczAmijs96xa+gPVJifpT3Fetb8DOusl5Eq6gxac=",
"last_login": null,
"name": "E2E Test User",
"email": "e2e@prowler.com",
"company_name": "Prowler E2E Tests",
"is_active": true,
"date_joined": "2024-01-01T00:00:00.850Z"
}
}
]

View File

@@ -1,69 +0,0 @@
[
{
"model": "api.tenant",
"pk": "12646005-9067-4d2a-a098-8bb378604362",
"fields": {
"inserted_at": "2024-03-21T23:00:00Z",
"updated_at": "2024-03-21T23:00:00Z",
"name": "Tenant1"
}
},
{
"model": "api.tenant",
"pk": "0412980b-06e3-436a-ab98-3c9b1d0333d3",
"fields": {
"inserted_at": "2024-03-21T23:00:00Z",
"updated_at": "2024-03-21T23:00:00Z",
"name": "Tenant2"
}
},
{
"model": "api.membership",
"pk": "2b0db93a-7e0b-4edf-a851-ea448676b7eb",
"fields": {
"user": "8b38e2eb-6689-4f1e-a4ba-95b275130200",
"tenant": "0412980b-06e3-436a-ab98-3c9b1d0333d3",
"role": "owner",
"date_joined": "2024-09-19T11:03:59.712Z"
}
},
{
"model": "api.membership",
"pk": "797d7cee-abc9-4598-98bb-4bf4bfb97f27",
"fields": {
"user": "8b38e2eb-6689-4f1e-a4ba-95b275130200",
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"role": "owner",
"date_joined": "2024-09-19T11:02:59.712Z"
}
},
{
"model": "api.membership",
"pk": "dea37563-7009-4dcf-9f18-25efb41462a7",
"fields": {
"user": "b6493a3a-c997-489b-8b99-278bf74de9f6",
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"role": "member",
"date_joined": "2024-09-19T11:03:59.712Z"
}
},
{
"model": "api.tenant",
"pk": "7c8f94a3-e2d1-4b3a-9f87-2c4d5e6f1a2b",
"fields": {
"inserted_at": "2024-01-01T00:00:00Z",
"updated_at": "2024-01-01T00:00:00Z",
"name": "E2E Test Tenant"
}
},
{
"model": "api.membership",
"pk": "9b1a2c3d-4e5f-6789-abc1-23456789def0",
"fields": {
"user": "6d4f8a91-3c2e-4b5a-8f7d-1e9c5b2a4d6f",
"tenant": "7c8f94a3-e2d1-4b3a-9f87-2c4d5e6f1a2b",
"role": "owner",
"date_joined": "2024-01-01T00:00:00.000Z"
}
}
]

View File

@@ -1,193 +0,0 @@
[
{
"model": "api.provider",
"pk": "37b065f8-26b0-4218-a665-0b23d07b27d9",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-08-01T17:20:27.050Z",
"updated_at": "2024-08-01T17:20:27.050Z",
"provider": "gcp",
"uid": "a12322-test321",
"alias": "gcp_testing_2",
"connected": null,
"connection_last_checked_at": null,
"metadata": {}
}
},
{
"model": "api.provider",
"pk": "8851db6b-42e5-4533-aa9e-30a32d67e875",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-08-01T17:19:42.453Z",
"updated_at": "2024-08-01T17:19:42.453Z",
"provider": "gcp",
"uid": "a12345-test123",
"alias": "gcp_testing_1",
"connected": null,
"connection_last_checked_at": null,
"metadata": {}
}
},
{
"model": "api.provider",
"pk": "b85601a8-4b45-4194-8135-03fb980ef428",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-08-01T17:19:09.556Z",
"updated_at": "2024-08-01T17:19:09.556Z",
"provider": "aws",
"uid": "123456789020",
"alias": "aws_testing_2",
"connected": null,
"connection_last_checked_at": null,
"metadata": {}
}
},
{
"model": "api.provider",
"pk": "baa7b895-8bac-4f47-b010-4226d132856e",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-08-01T17:20:16.962Z",
"updated_at": "2024-08-01T17:20:16.962Z",
"provider": "gcp",
"uid": "a12322-test123",
"alias": "gcp_testing_3",
"connected": null,
"connection_last_checked_at": null,
"metadata": {}
}
},
{
"model": "api.provider",
"pk": "d7c7ea89-d9af-423b-a364-1290dcad5a01",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-08-01T17:18:58.132Z",
"updated_at": "2024-08-01T17:18:58.132Z",
"provider": "aws",
"uid": "123456789015",
"alias": "aws_testing_1",
"connected": null,
"connection_last_checked_at": null,
"metadata": {}
}
},
{
"model": "api.provider",
"pk": "1b59e032-3eb6-4694-93a5-df84cd9b3ce2",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-08-06T16:03:26.176Z",
"updated_at": "2024-08-06T16:03:26.176Z",
"provider": "azure",
"uid": "8851db6b-42e5-4533-aa9e-30a32d67e875",
"alias": "azure_testing",
"connected": null,
"connection_last_checked_at": null,
"metadata": {},
"scanner_args": {}
}
},
{
"model": "api.provider",
"pk": "26e55a24-cb2c-4cef-ac87-6f91fddb2c97",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-08-06T16:03:07.037Z",
"updated_at": "2024-08-06T16:03:07.037Z",
"provider": "kubernetes",
"uid": "kubernetes-test-12345",
"alias": "k8s_testing",
"connected": null,
"connection_last_checked_at": null,
"metadata": {},
"scanner_args": {}
}
},
{
"model": "api.provider",
"pk": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:45:26.352Z",
"updated_at": "2024-10-18T11:16:23.533Z",
"provider": "aws",
"uid": "106908755759",
"alias": "real testing aws provider",
"connected": true,
"connection_last_checked_at": "2024-10-18T11:16:23.503Z",
"metadata": {},
"scanner_args": {}
}
},
{
"model": "api.provider",
"pk": "7791914f-d646-4fe2-b2ed-73f2c6499a36",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:45:26.352Z",
"updated_at": "2024-10-18T11:16:23.533Z",
"provider": "kubernetes",
"uid": "gke_lucky-coast-419309_us-central1_autopilot-cluster-2",
"alias": "k8s_testing_2",
"connected": true,
"connection_last_checked_at": "2024-10-18T11:16:23.503Z",
"metadata": {},
"scanner_args": {}
}
},
{
"model": "api.providersecret",
"pk": "11491b47-75ae-4f71-ad8d-3e630a72182e",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-11T08:03:05.026Z",
"updated_at": "2024-10-11T08:04:47.033Z",
"name": "GCP static secrets",
"secret_type": "static",
"_secret": "Z0FBQUFBQm5DTndmZW9KakRZUHM2UHhQN2V3RzN0QmM1cERham8yMHp5cnVTT0lzdGFyS1FuVmJXUlpYSGsyU0cxR3RMMEdQYXlYMUVsaWtqLU1OZWlaVUp6OFREYlotZTVBY3BuTlZYbm9YcUJydzAxV2p5dkpLamI1Y2tUYzA0MmJUNWxsNTBRM0E1SDRCa0pPQWVlb05YU3dfeUhkLTRmOEh3dGczOGh1ZGhQcVdZdVAtYmtoSWlwNXM4VGFoVmF3dno2X1hrbk5GZjZTWjVuWEdEZUFXeHJSQjEzbTlVakhNdzYyWTdiVEpvUEc2MTNpRzUtczhEank1eGI0b3MyMlAyaGN6dlByZmtUWHByaDNUYWFqYS1tYnNBUkRKTzBacFNSRjFuVmd5bUtFUEJhd1ZVS1ZDd2xSUV9PaEtLTnc0XzVkY2lhM01WTjQwaWdJSk9wNUJSXzQ4RUNQLXFPNy1VdzdPYkZyWkVkU3RyQjVLTS1MVHN0R3k4THNKZ2NBNExaZnl3Q1EwN2dwNGRsUXptMjB0LXUzTUpzTDE2Q1hmS0ZSN2g1ZjBPeV8taFoxNUwxc2FEcktXX0dCM1IzeUZTTHNiTmNxVXBvNWViZTJScUVWV2VYTFQ4UHlid21PY1A0UjdNMGtERkZCd0lLMlJENDMzMVZUM09DQ0twd1N3VHlZd09XLUctOWhYcFJIR1p5aUlZeEUzejc2dWRYdGNsd0xOODNqRUFEczhSTWNtWU0tdFZ1ZTExaHNHUVYtd0Zxdld1LTdKVUNINzlZTGdHODhKeVVpQmRZMHRUNTJRRWhwS1F1Y3I2X2Iwc0c1NHlXSVRLZWxreEt0dVRnOTZFMkptU2VMS1dWXzdVOVRzMUNUWXM2aFlxVDJXdGo3d2cxSVZGWlI2ZWhIZzZBcEl4bEJ6UnVHc0RYWVNHcjFZUHI5ZUYyWG9rSlo0QUVSUkFCX3h2UmtJUTFzVXJUZ25vTmk2VzdoTTNta05ucmNfTi0yR1ZxN1E2MnZJOVVKOGxmMXMzdHMxVndmSVhQbUItUHgtMVpVcHJwMU5JVHJLb0Y1aHV5OEEwS0kzQkEtcFJkdkRnWGxmZnprNFhndWg1TmQyd09yTFdTRmZ3d2ZvZFUtWXp4a2VYb3JjckFIcE13MDUzX0RHSnlzM0N2ZE5IRzJzMXFMc0k4MDRyTHdLZFlWOG9SaFF0LU43Ynd6VFlEcVNvdFZ0emJEVk10aEp4dDZFTFNFNzk0UUo2WTlVLWRGYm1fanZHaFZreHBIMmtzVjhyS0xPTk9fWHhiVTJHQXZwVlVuY3JtSjFUYUdHQzhEaHFNZXhwUHBmY0kxaUVrOHo4a0FYOTdpZVJDbFRvdFlQeWo3eFZHX1ZMZ1Myc3prU3o2c3o2eXNja1U4N0Y1T0d1REVjZFRGNTByUkgyemVCSjlQYkY2bmJ4YTZodHB0cUNzd2xZcENycUdsczBIaEZPbG1jVUlqNlM2cEE3aGpVaWswTzBDLVFGUHM5UHhvM09saWNtaDhaNVlsc3FZdktKeWlheDF5OGhTODE2N3JWamdTZG5Fa3JSQ2ZUSEVfRjZOZXdreXRZLTBZRFhleVFFeC1YUzc0cWhYeEhobGxvdnZ3Rm15WFlBWXp0dm1DeTA5eExLeEFRRXVRSXBXdTNEaWdZZ3JDenItdDhoZlFiTzI0SGZ1c01FR1FNaFVweVBKR1YxWGRUMW1Mc2JVdW9raWR6UHk2ZTBnS05pV3oyZVBjREdkY3k4ZHZPUWE5S281MkJRSHF3NnpTclZ5bl90bk1wUEh6Tkp5dXlDcE5paWRqcVhxRFVObWIzRldWOGJ2aC1CRHZpbFZrb0hjNGpCMm5POGRiS2lETUpMLUVfQlhCdTZPLW9USW1LTFlTSF9zRUJYZ1NKeFFEQjNOR215ZXJDbkFndmcxWl9rWlk9",
"provider": "8851db6b-42e5-4533-aa9e-30a32d67e875"
}
},
{
"model": "api.providersecret",
"pk": "40191ad5-d8c2-40a9-826d-241397626b68",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-10T11:11:44.515Z",
"updated_at": "2024-10-11T07:59:56.102Z",
"name": "AWS static secrets",
"secret_type": "static",
"_secret": "Z0FBQUFBQm5DTnI4Y1RyV19UWEJzc3kzQUExcU5tdlQzbFVLeDdZMWd1MzkwWkl2UF9oZGhiVEJHVWpSMXV4MjYyN3g2OVpvNVpkQUQ3S0VGaGdQLTFhQWE3MkpWZUt2cnVhODc4d3FpY3FVZkpwdHJzNUJPeFRwZ3N4bGpPZTlkNWRNdFlwTHU3aTNWR3JjSzJwLWRITHdfQWpXb1F0c1l3bVFxbnFrTEpPTGgxcnF1VUprSzZ5dGRQU2VGYmZhTTlwbVpsNFBNWlFhVW9RbjJyYnZ5N0oweE5kV0ZEaUdpUUpNVExOa3oyQ2dNREVSenJ0TEFZc0RrRWpXNUhyMmtybGNLWDVOR0FabEl4QVR1bkZyb2hBLWc1MFNIekVyeXI0SmVreHBjRnJ1YUlVdXpVbW9JZkk0aEgxYlM1VGhSRlhtcS14YzdTYUhXR2xodElmWjZuNUVwaHozX1RVTG1QWHdPZWd4clNHYnAyOTBsWEl5UU83RGxZb0RKWjdadjlsTmJtSHQ0Yl9uaDJoODB0QV9sWmFYbFAxcjA1bmhNVlNqc2xEeHlvcUJFbVZvY250ZENnMnZLT1psb1JDclB3WVR6NGdZb2pzb3U4Ny04QlB0UTZub0dMOXZEUTZEcVJhZldCWEZZSDdLTy02UVZqck5zVTZwS3pObGlOejNJeHUzbFRabFM2V2xaekZVRjZtX3VzZlplendnOWQzT01WMFd3ejNadHVlTFlqRGR2dk5Da29zOFYwOUdOaEc4OHhHRnJFMmJFMk12VDNPNlBBTGlsXy13cUM1QkVYb0o1Z2U4ZXJnWXpZdm1sWjA5bzQzb2NFWC1xbmIycGZRbGtCaGNaOWlkX094UUNNampwbkZoREctNWI4QnZRaE8zM3BEQ1BwNzA1a3BzOGczZXdIM2s1NHFGN1ZTbmJhZkc4RVdfM0ZIZU5udTBYajd1RGxpWXZpRWdSMmhHa2RKOEIzbmM0X2F1OGxrN2p6LW9UVldDOFVpREoxZ1UzcTBZX19OQ0xJb0syWlhNSlQ4MzQwdzRtVG94Y01GS3FMLV95UVlxOTFORk8zdjE5VGxVaXdhbGlzeHdoYWNzazZWai1GUGtUM2gzR0ZWTTY4SThWeVFnZldIaklOTTJqTTg1VkhEYW5wNmdEVllXMmJCV2tpVmVYeUV2c0E1T00xbHJRNzgzVG9wb0Q1cV81UEhqYUFsQ2p1a0VpRDVINl9SVkpyZVRNVnVXQUxwY3NWZnJrNmRVREpiLWNHYUpXWmxkQlhNbWhuR1NmQ1BaVDlidUxCWHJMaHhZbk1FclVBaEVZeWg1ZlFoenZzRHlKbV8wa3lmMGZrd3NmTDZjQkE0UXNSUFhpTWtUUHBrX29BVzc4QzEtWEJIQW1GMGFuZVlXQWZIOXJEamloeGFCeHpYMHNjMFVfNXpQdlJfSkk2bzFROU5NU0c1SHREWW1nbkFNZFZ0UjdPRGdjaF96RGplY1hjdFFzLVR6MTVXYlRjbHIxQ2JRejRpVko5NWhBU0ZHR3ZvczU5elljRGpHRTdIc0FsSm5fUHEwT1gtTS1lN3M3X3ZZRnlkYUZoZXRQeEJsZlhLdFdTUzU1NUl4a29aOWZIdTlPM0Fnak1xYWVkYTNiMmZXUHlXS2lwUVBZLXQyaUxuRmtQNFFieE9SVmdZVW9WTHlzbnBPZlNIdGVHOE1LNVNESjN3cGtVSHVpT1NJWHE1ZzNmUTVTOC0xX3NGSmJqU19IbjZfQWtMRG1YNUQtRy13TUJIZFlyOXJkQzFQbkdZVXVzM2czbS1HWHFBT1pXdVd3N09tcG82SVhnY1ZtUWxqTEg2UzJCUmllb2pweVN2aGwwS1FVRUhjNEN2amRMc3MwVU4zN3dVMWM5Slg4SERtenFaQk1yMWx0LWtxVWtLZVVtbU4yejVEM2h6TEt0RGdfWE09",
"provider": "b85601a8-4b45-4194-8135-03fb980ef428"
}
},
{
"model": "api.providersecret",
"pk": "ed89d1ea-366a-4d12-a602-f2ab77019742",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-10T11:11:44.515Z",
"updated_at": "2024-10-11T07:59:56.102Z",
"name": "Azure static secrets",
"secret_type": "static",
"_secret": "Z0FBQUFBQm5DTnI4Y1RyV19UWEJzc3kzQUExcU5tdlQzbFVLeDdZMWd1MzkwWkl2UF9oZGhiVEJHVWpSMXV4MjYyN3g2OVpvNVpkQUQ3S0VGaGdQLTFhQWE3MkpWZUt2cnVhODc4d3FpY3FVZkpwdHJzNUJPeFRwZ3N4bGpPZTlkNWRNdFlwTHU3aTNWR3JjSzJwLWRITHdfQWpXb1F0c1l3bVFxbnFrTEpPTGgxcnF1VUprSzZ5dGRQU2VGYmZhTTlwbVpsNFBNWlFhVW9RbjJyYnZ5N0oweE5kV0ZEaUdpUUpNVExOa3oyQ2dNREVSenJ0TEFZc0RrRWpXNUhyMmtybGNLWDVOR0FabEl4QVR1bkZyb2hBLWc1MFNIekVyeXI0SmVreHBjRnJ1YUlVdXpVbW9JZkk0aEgxYlM1VGhSRlhtcS14YzdTYUhXR2xodElmWjZuNUVwaHozX1RVTG1QWHdPZWd4clNHYnAyOTBsWEl5UU83RGxZb0RKWjdadjlsTmJtSHQ0Yl9uaDJoODB0QV9sWmFYbFAxcjA1bmhNVlNqc2xEeHlvcUJFbVZvY250ZENnMnZLT1psb1JDclB3WVR6NGdZb2pzb3U4Ny04QlB0UTZub0dMOXZEUTZEcVJhZldCWEZZSDdLTy02UVZqck5zVTZwS3pObGlOejNJeHUzbFRabFM2V2xaekZVRjZtX3VzZlplendnOWQzT01WMFd3ejNadHVlTFlqRGR2dk5Da29zOFYwOUdOaEc4OHhHRnJFMmJFMk12VDNPNlBBTGlsXy13cUM1QkVYb0o1Z2U4ZXJnWXpZdm1sWjA5bzQzb2NFWC1xbmIycGZRbGtCaGNaOWlkX094UUNNampwbkZoREctNWI4QnZRaE8zM3BEQ1BwNzA1a3BzOGczZXdIM2s1NHFGN1ZTbmJhZkc4RVdfM0ZIZU5udTBYajd1RGxpWXZpRWdSMmhHa2RKOEIzbmM0X2F1OGxrN2p6LW9UVldDOFVpREoxZ1UzcTBZX19OQ0xJb0syWlhNSlQ4MzQwdzRtVG94Y01GS3FMLV95UVlxOTFORk8zdjE5VGxVaXdhbGlzeHdoYWNzazZWai1GUGtUM2gzR0ZWTTY4SThWeVFnZldIaklOTTJqTTg1VkhEYW5wNmdEVllXMmJCV2tpVmVYeUV2c0E1T00xbHJRNzgzVG9wb0Q1cV81UEhqYUFsQ2p1a0VpRDVINl9SVkpyZVRNVnVXQUxwY3NWZnJrNmRVREpiLWNHYUpXWmxkQlhNbWhuR1NmQ1BaVDlidUxCWHJMaHhZbk1FclVBaEVZeWg1ZlFoenZzRHlKbV8wa3lmMGZrd3NmTDZjQkE0UXNSUFhpTWtUUHBrX29BVzc4QzEtWEJIQW1GMGFuZVlXQWZIOXJEamloeGFCeHpYMHNjMFVfNXpQdlJfSkk2bzFROU5NU0c1SHREWW1nbkFNZFZ0UjdPRGdjaF96RGplY1hjdFFzLVR6MTVXYlRjbHIxQ2JRejRpVko5NWhBU0ZHR3ZvczU5elljRGpHRTdIc0FsSm5fUHEwT1gtTS1lN3M3X3ZZRnlkYUZoZXRQeEJsZlhLdFdTUzU1NUl4a29aOWZIdTlPM0Fnak1xYWVkYTNiMmZXUHlXS2lwUVBZLXQyaUxuRmtQNFFieE9SVmdZVW9WTHlzbnBPZlNIdGVHOE1LNVNESjN3cGtVSHVpT1NJWHE1ZzNmUTVTOC0xX3NGSmJqU19IbjZfQWtMRG1YNUQtRy13TUJIZFlyOXJkQzFQbkdZVXVzM2czbS1HWHFBT1pXdVd3N09tcG82SVhnY1ZtUWxqTEg2UzJCUmllb2pweVN2aGwwS1FVRUhjNEN2amRMc3MwVU4zN3dVMWM5Slg4SERtenFaQk1yMWx0LWtxVWtLZVVtbU4yejVEM2h6TEt0RGdfWE09",
"provider": "1b59e032-3eb6-4694-93a5-df84cd9b3ce2"
}
},
{
"model": "api.providersecret",
"pk": "ae48ecde-75cd-4814-92ab-18f48719e5d9",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:45:26.412Z",
"updated_at": "2024-10-18T10:45:26.412Z",
"name": "Valid AWS Credentials",
"secret_type": "static",
"_secret": "Z0FBQUFBQm5FanhHa3dXS0I3M2NmWm56SktiaGNqdDZUN0xQU1QwUi15QkhLZldFUmRENk1BXzlscG9JSUxVSTF5ekxuMkdEanlJNjhPUS1VSV9wVTBvU2l4ZnNGOVJhYW93RC1LTEhmc2pyOTJvUWwyWnpFY19WN1pRQk5IdDYwYnBDQnF1eU9nUzdwTGU3QU5qMGFyX1E4SXdpSk9paGVLcVpOVUhwb3duaXgxZ0ZxME5Pcm40QzBGWEZKY2lmRVlCMGFuVFVzemxuVjVNalZVQ2JsY2ZqNWt3Z01IYUZ0dk92YkdtSUZ5SlBvQWZoVU5DWlRFWmExNnJGVEY4Q1Bnd2VJUW9TSWdRcG9rSDNfREQwRld3Q1RYVnVYWVJLWWIxZmpsWGpwd0xQM0dtLTlYUjdHOVhhNklLWXFGTHpFQUVyVmNhYW9CU0tocGVyX3VjMkVEcVdjdFBfaVpsLTBzaUxrWTlta3dpelNtTG9xYVhBUHUzNUE4RnI1WXdJdHcxcFVfaG1XRHhDVFBKamxJb1FaQ2lsQ3FzRmxZbEJVemVkT1E2aHZfbDJqWDJPT3ViOWJGYzQ3eTNWNlFQSHBWRDFiV2tneDM4SmVqMU9Bd01TaXhPY2dmWG5RdENURkM2b2s5V3luVUZQcnFKNldnWEdYaWE2MnVNQkEwMHd6cUY5cVJkcGw4bHBtNzhPeHhkREdwSXNEc1JqQkxUR1FYRTV0UFNwbVlVSWF5LWgtbVhJZXlPZ0Q4cG9HX2E0Qld0LTF1TTFEVy1XNGdnQTRpLWpQQmFJUEdaOFJGNDVoUVJnQ25YVU5DTENMaTY4YmxtYWJFRERXTjAydVN2YnBDb3RkUE0zSDRlN1A3TXc4d2h1Wmd0LWUzZEcwMUstNUw2YnFyS2Z0NEVYMXllQW5GLVBpeU55SkNhczFIeFhrWXZpVXdwSFVrTDdiQjQtWHZJdERXVThzSnJsT2FNZzJDaUt6Y2NXYUZhUlo3VkY0R1BrSHNHNHprTmxjYmp1TXVKakRha0VtNmRFZWRmZHJWdnRCOVNjVGFVWjVQM3RwWWl4SkNmOU1pb2xqMFdOblhNY3Y3aERpOHFlWjJRc2dtRDkzZm1Qc29wdk5OQmJPbGk5ZUpGM1I2YzRJN2gxR3FEMllXR1pma1k0emVqSjZyMUliMGZsc3NfSlVDbGt4QzJTc3hHOU9FRHlZb09zVnlvcDR6WC1uclRSenI0Yy13WlFWNzJWRkwydjhmSjFZdnZ5X3NmZVF6UWRNMXo5STVyV3B0d09UUlFtOURITGhXSDVIUl9zYURJc05KWUNxekVyYkxJclNFNV9leEk4R2xsMGJod3lYeFIwaXR2dllwLTZyNWlXdDRpRkxVYkxWZFdvYUhKck5aeElBZUtKejNKS2tYVW1rTnVrRjJBQmdlZmV6ckozNjNwRmxLS1FaZzRVTTBZYzFFYi1idjBpZkQ3bWVvbEdRZXJrWFNleWZmSmFNdG1wQlp0YmxjWDV5T0tEbHRsYnNHbjRPRjl5MkttOUhRWlJtd1pmTnY4Z1lPRlZoTzFGVDdTZ0RDY1ByV0RndTd5LUNhcHNXUnNIeXdLMEw3WS1tektRTWFLQy1zakpMLWFiM3FOakE1UWU4LXlOX2VPbmd4MTZCRk9OY3Z4UGVDSWxhRlg4eHI4X1VUTDZZM0pjV0JDVi1UUjlTUl85cm1LWlZ0T1dzU0lpdWUwbXgtZ0l6eHNSNExRTV9MczJ6UkRkVElnRV9Rc0RoTDFnVHRZSEFPb2paX200TzZiRzVmRE5hOW5CTjh5Qi1WaEtueEpqRzJDY1luVWZtX1pseUpQSE5lQ0RrZ05EbWo5cU9MZ0ZkcXlqUll4UUkyejRfY2p4RXdEeC1PS1JIQVNUcmNIdkRJbzRiUktMWEQxUFM3aGNzeVFWUDdtcm5xNHlOYUU9",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555"
}
}
]

View File

@@ -1,256 +0,0 @@
[
{
"model": "api.scan",
"pk": "0191e280-9d2f-71c8-9b18-487a23ba185e",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"provider": "37b065f8-26b0-4218-a665-0b23d07b27d9",
"trigger": "manual",
"name": "test scan 1",
"state": "completed",
"unique_resource_count": 1,
"duration": 5,
"scanner_args": {
"checks_to_execute": ["accessanalyzer_enabled"]
},
"inserted_at": "2024-09-01T17:25:27.050Z",
"started_at": "2024-09-01T17:25:27.050Z",
"updated_at": "2024-09-01T17:25:27.050Z",
"completed_at": "2024-09-01T17:25:32.050Z"
}
},
{
"model": "api.scan",
"pk": "01920573-aa9c-73c9-bcda-f2e35c9b19d2",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"provider": "b85601a8-4b45-4194-8135-03fb980ef428",
"trigger": "manual",
"name": "test aws scan 2",
"state": "completed",
"unique_resource_count": 1,
"duration": 20,
"scanner_args": {
"checks_to_execute": ["accessanalyzer_enabled"]
},
"inserted_at": "2024-09-02T17:24:27.050Z",
"started_at": "2024-09-02T17:24:27.050Z",
"updated_at": "2024-09-02T17:24:27.050Z",
"completed_at": "2024-09-01T17:24:37.050Z"
}
},
{
"model": "api.scan",
"pk": "01920573-ea5b-77fd-a93f-1ed2ae12f728",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"provider": "baa7b895-8bac-4f47-b010-4226d132856e",
"trigger": "manual",
"name": "test gcp scan",
"state": "completed",
"unique_resource_count": 10,
"duration": 10,
"scanner_args": {
"checks_to_execute": ["cloudsql_instance_automated_backups"]
},
"inserted_at": "2024-09-02T19:26:27.050Z",
"started_at": "2024-09-02T19:26:27.050Z",
"updated_at": "2024-09-02T19:26:27.050Z",
"completed_at": "2024-09-01T17:26:37.050Z"
}
},
{
"model": "api.scan",
"pk": "01920573-ea5b-77fd-a93f-1ed2ae12f728",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"provider": "b85601a8-4b45-4194-8135-03fb980ef428",
"trigger": "manual",
"name": "test aws scan",
"state": "completed",
"unique_resource_count": 1,
"duration": 35,
"scanner_args": {
"checks_to_execute": ["accessanalyzer_enabled"]
},
"inserted_at": "2024-09-02T19:27:27.050Z",
"started_at": "2024-09-02T19:27:27.050Z",
"updated_at": "2024-09-02T19:27:27.050Z",
"completed_at": "2024-09-01T17:27:37.050Z"
}
},
{
"model": "api.scan",
"pk": "c281c924-23f3-4fcc-ac63-73a22154b7de",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"provider": "b85601a8-4b45-4194-8135-03fb980ef428",
"trigger": "scheduled",
"name": "test scheduled aws scan",
"state": "available",
"scanner_args": {
"checks_to_execute": ["cloudformation_stack_outputs_find_secrets"]
},
"scheduled_at": "2030-09-02T19:20:27.050Z",
"inserted_at": "2024-09-02T19:24:27.050Z",
"updated_at": "2024-09-02T19:24:27.050Z"
}
},
{
"model": "api.scan",
"pk": "25c8907c-b26e-4ec0-966b-a1f53a39d8e6",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"provider": "b85601a8-4b45-4194-8135-03fb980ef428",
"trigger": "scheduled",
"name": "test scheduled aws scan 2",
"state": "available",
"scanner_args": {
"checks_to_execute": [
"accessanalyzer_enabled",
"cloudformation_stack_outputs_find_secrets"
]
},
"scheduled_at": "2030-08-02T19:31:27.050Z",
"inserted_at": "2024-09-02T19:38:27.050Z",
"updated_at": "2024-09-02T19:38:27.050Z"
}
},
{
"model": "api.scan",
"pk": "25c8907c-b26e-4ec0-966b-a1f53a39d8e6",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"provider": "baa7b895-8bac-4f47-b010-4226d132856e",
"trigger": "scheduled",
"name": "test scheduled gcp scan",
"state": "available",
"scanner_args": {
"checks_to_execute": [
"cloudsql_instance_automated_backups",
"iam_audit_logs_enabled"
]
},
"scheduled_at": "2030-07-02T19:30:27.050Z",
"inserted_at": "2024-09-02T19:29:27.050Z",
"updated_at": "2024-09-02T19:29:27.050Z"
}
},
{
"model": "api.scan",
"pk": "25c8907c-b26e-4ec0-966b-a1f53a39d8e6",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"provider": "1b59e032-3eb6-4694-93a5-df84cd9b3ce2",
"trigger": "scheduled",
"name": "test scheduled azure scan",
"state": "available",
"scanner_args": {
"checks_to_execute": [
"aks_cluster_rbac_enabled",
"defender_additional_email_configured_with_a_security_contact"
]
},
"scheduled_at": "2030-08-05T19:32:27.050Z",
"inserted_at": "2024-09-02T19:29:27.050Z",
"updated_at": "2024-09-02T19:29:27.050Z"
}
},
{
"model": "api.scan",
"pk": "01929f3b-ed2e-7623-ad63-7c37cd37828f",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"name": "real scan 1",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"trigger": "manual",
"state": "completed",
"unique_resource_count": 19,
"progress": 100,
"scanner_args": {
"checks_to_execute": ["accessanalyzer_enabled"]
},
"duration": 7,
"scheduled_at": null,
"inserted_at": "2024-10-18T10:45:57.678Z",
"updated_at": "2024-10-18T10:46:05.127Z",
"started_at": "2024-10-18T10:45:57.909Z",
"completed_at": "2024-10-18T10:46:05.127Z"
}
},
{
"model": "api.scan",
"pk": "6dd8925f-a52d-48de-a546-d2d90db30ab1",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"name": "real scan azure",
"provider": "1b59e032-3eb6-4694-93a5-df84cd9b3ce2",
"trigger": "manual",
"state": "completed",
"unique_resource_count": 20,
"progress": 100,
"scanner_args": {
"checks_to_execute": [
"accessanalyzer_enabled",
"account_security_contact_information_is_registered"
]
},
"duration": 4,
"scheduled_at": null,
"inserted_at": "2024-10-18T11:16:21.358Z",
"updated_at": "2024-10-18T11:16:26.060Z",
"started_at": "2024-10-18T11:16:21.593Z",
"completed_at": "2024-10-18T11:16:26.060Z"
}
},
{
"model": "api.scan",
"pk": "4ca7ce89-3236-41a8-a369-8937bc152af5",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"name": "real scan k8s",
"provider": "7791914f-d646-4fe2-b2ed-73f2c6499a36",
"trigger": "manual",
"state": "completed",
"unique_resource_count": 20,
"progress": 100,
"scanner_args": {
"checks_to_execute": [
"accessanalyzer_enabled",
"account_security_contact_information_is_registered"
]
},
"duration": 4,
"scheduled_at": null,
"inserted_at": "2024-10-18T11:16:21.358Z",
"updated_at": "2024-10-18T11:16:26.060Z",
"started_at": "2024-10-18T11:16:21.593Z",
"completed_at": "2024-10-18T11:16:26.060Z"
}
},
{
"model": "api.scan",
"pk": "01929f57-c0ee-7553-be0b-cbde006fb6f7",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"name": "real scan 2",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"trigger": "manual",
"state": "completed",
"unique_resource_count": 20,
"progress": 100,
"scanner_args": {
"checks_to_execute": [
"accessanalyzer_enabled",
"account_security_contact_information_is_registered"
]
},
"duration": 4,
"scheduled_at": null,
"inserted_at": "2024-10-18T11:16:21.358Z",
"updated_at": "2024-10-18T11:16:26.060Z",
"started_at": "2024-10-18T11:16:21.593Z",
"completed_at": "2024-10-18T11:16:26.060Z"
}
}
]

View File

@@ -1,322 +0,0 @@
[
{
"model": "api.resource",
"pk": "0234477d-0b8e-439f-87d3-ce38dff3a434",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:04.772Z",
"updated_at": "2024-10-18T11:16:24.466Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root",
"name": "",
"region": "eu-south-2",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'2':9C '112233445566':4A 'accessanalyzer':10 'arn':1A 'aws':2A 'eu':7C 'eu-south':6C 'iam':3A 'other':11 'root':5A 'south':8C"
}
},
{
"model": "api.resource",
"pk": "17ce30a3-6e77-42a5-bb08-29dfcad7396a",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:04.882Z",
"updated_at": "2024-10-18T11:16:24.533Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root2",
"name": "",
"region": "eu-west-1",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'1':9C '112233445566':4A 'accessanalyzer':10 'arn':1A 'aws':2A 'eu':7C 'eu-west':6C 'iam':3A 'other':11 'root':5A 'west':8C"
}
},
{
"model": "api.resource",
"pk": "1f9de587-ba5b-415a-b9b0-ceed4c6c9f32",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:05.091Z",
"updated_at": "2024-10-18T11:16:24.637Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root3",
"name": "",
"region": "ap-northeast-2",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'2':9C '112233445566':4A 'accessanalyzer':10 'ap':7C 'ap-northeast':6C 'arn':1A 'aws':2A 'iam':3A 'northeast':8C 'other':11 'root':5A"
}
},
{
"model": "api.resource",
"pk": "29b35668-6dad-411d-bfec-492311889892",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:05.008Z",
"updated_at": "2024-10-18T11:16:24.600Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root4",
"name": "",
"region": "us-west-2",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'2':9C '112233445566':4A 'accessanalyzer':10 'arn':1A 'aws':2A 'iam':3A 'other':11 'root':5A 'us':7C 'us-west':6C 'west':8C"
}
},
{
"model": "api.resource",
"pk": "30505514-01d4-42bb-8b0c-471bbab27460",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T11:16:26.014Z",
"updated_at": "2024-10-18T11:16:26.023Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root5",
"name": "",
"region": "us-east-1",
"service": "account",
"type": "Other",
"text_search": "'1':9C '112233445566':4A 'account':10 'arn':1A 'aws':2A 'east':8C 'iam':3A 'other':11 'root':5A 'us':7C 'us-east':6C"
}
},
{
"model": "api.resource",
"pk": "372932f0-e4df-4968-9721-bb4f6236fae4",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:04.848Z",
"updated_at": "2024-10-18T11:16:24.516Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root6",
"name": "",
"region": "eu-west-3",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'3':9C '112233445566':4A 'accessanalyzer':10 'arn':1A 'aws':2A 'eu':7C 'eu-west':6C 'iam':3A 'other':11 'root':5A 'west':8C"
}
},
{
"model": "api.resource",
"pk": "3a37d124-7637-43f6-9df7-e9aa7ef98c53",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:04.979Z",
"updated_at": "2024-10-18T11:16:24.585Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root7",
"name": "",
"region": "sa-east-1",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'1':9C '112233445566':4A 'accessanalyzer':10 'arn':1A 'aws':2A 'east':8C 'iam':3A 'other':11 'root':5A 'sa':7C 'sa-east':6C"
}
},
{
"model": "api.resource",
"pk": "3c49318e-03c6-4f12-876f-40451ce7de3d",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:05.072Z",
"updated_at": "2024-10-18T11:16:24.630Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root8",
"name": "",
"region": "ap-southeast-2",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'2':9C '112233445566':4A 'accessanalyzer':10 'ap':7C 'ap-southeast':6C 'arn':1A 'aws':2A 'iam':3A 'other':11 'root':5A 'southeast':8C"
}
},
{
"model": "api.resource",
"pk": "430bf313-8733-4bc5-ac70-5402adfce880",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:04.994Z",
"updated_at": "2024-10-18T11:16:24.593Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root9",
"name": "",
"region": "eu-north-1",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'1':9C '112233445566':4A 'accessanalyzer':10 'arn':1A 'aws':2A 'eu':7C 'eu-north':6C 'iam':3A 'north':8C 'other':11 'root':5A"
}
},
{
"model": "api.resource",
"pk": "78bd2a52-82f9-45df-90a9-4ad78254fdc4",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:05.055Z",
"updated_at": "2024-10-18T11:16:24.622Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root10",
"name": "",
"region": "ap-northeast-1",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'1':9C '112233445566':4A 'accessanalyzer':10 'ap':7C 'ap-northeast':6C 'arn':1A 'aws':2A 'iam':3A 'northeast':8C 'other':11 'root':5A"
}
},
{
"model": "api.resource",
"pk": "7973e332-795e-4a74-b4d4-a53a21c98c80",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:04.896Z",
"updated_at": "2024-10-18T11:16:24.542Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root11",
"name": "",
"region": "us-east-2",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'2':9C '112233445566':4A 'accessanalyzer':10 'arn':1A 'aws':2A 'east':8C 'iam':3A 'other':11 'root':5A 'us':7C 'us-east':6C"
}
},
{
"model": "api.resource",
"pk": "8ca0a188-5699-436e-80fd-e566edaeb259",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:04.938Z",
"updated_at": "2024-10-18T11:16:24.565Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root12",
"name": "",
"region": "ca-central-1",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'1':9C '112233445566':4A 'accessanalyzer':10 'arn':1A 'aws':2A 'ca':7C 'ca-central':6C 'central':8C 'iam':3A 'other':11 'root':5A"
}
},
{
"model": "api.resource",
"pk": "8fe4514f-71d7-46ab-b0dc-70cef23b4d13",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:04.965Z",
"updated_at": "2024-10-18T11:16:24.578Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root13",
"name": "",
"region": "eu-west-2",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'2':9C '112233445566':4A 'accessanalyzer':10 'arn':1A 'aws':2A 'eu':7C 'eu-west':6C 'iam':3A 'other':11 'root':5A 'west':8C"
}
},
{
"model": "api.resource",
"pk": "9ab35225-dc7c-4ebd-bbc0-d81fb5d9de77",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:04.909Z",
"updated_at": "2024-10-18T11:16:24.549Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root14",
"name": "",
"region": "ap-south-1",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'1':9C '112233445566':4A 'accessanalyzer':10 'ap':7C 'ap-south':6C 'arn':1A 'aws':2A 'iam':3A 'other':11 'root':5A 'south':8C"
}
},
{
"model": "api.resource",
"pk": "9be26c1d-adf0-4ba8-9ca9-c740f4a0dc4e",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:04.863Z",
"updated_at": "2024-10-18T11:16:24.524Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root15",
"name": "",
"region": "eu-central-2",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'2':9C '112233445566':4A 'accessanalyzer':10 'arn':1A 'aws':2A 'central':8C 'eu':7C 'eu-central':6C 'iam':3A 'other':11 'root':5A"
}
},
{
"model": "api.resource",
"pk": "ba108c01-bcad-44f1-b211-c1d8985da89d",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:05.110Z",
"updated_at": "2024-10-18T11:16:24.644Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root16",
"name": "",
"region": "ap-northeast-3",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'3':9C '112233445566':4A 'accessanalyzer':10 'ap':7C 'ap-northeast':6C 'arn':1A 'aws':2A 'iam':3A 'northeast':8C 'other':11 'root':5A"
}
},
{
"model": "api.resource",
"pk": "dc6cfb5d-6835-4c7b-9152-c18c734a6eaa",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:05.038Z",
"updated_at": "2024-10-18T11:16:24.615Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root17",
"name": "",
"region": "eu-central-1",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'1':9C '112233445566':4A 'accessanalyzer':10 'arn':1A 'aws':2A 'central':8C 'eu':7C 'eu-central':6C 'iam':3A 'other':11 'root':5A"
}
},
{
"model": "api.resource",
"pk": "e0664164-cfda-44a4-b743-acee1c69386c",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:04.924Z",
"updated_at": "2024-10-18T11:16:24.557Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root18",
"name": "",
"region": "us-west-1",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'1':9C '112233445566':4A 'accessanalyzer':10 'arn':1A 'aws':2A 'iam':3A 'other':11 'root':5A 'us':7C 'us-west':6C 'west':8C"
}
},
{
"model": "api.resource",
"pk": "e1929daa-a984-4116-8131-492a48321dba",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:05.023Z",
"updated_at": "2024-10-18T11:16:24.607Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:iam::112233445566:root19",
"name": "",
"region": "ap-southeast-1",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'1':9C '112233445566':4A 'accessanalyzer':10 'ap':7C 'ap-southeast':6C 'arn':1A 'aws':2A 'iam':3A 'other':11 'root':5A 'southeast':8C"
}
},
{
"model": "api.resource",
"pk": "e37bb1f1-1669-4bb3-be86-e3378ddfbcba",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"inserted_at": "2024-10-18T10:46:04.952Z",
"updated_at": "2024-10-18T11:16:24.571Z",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"uid": "arn:aws:access-analyzer:us-east-1:112233445566:analyzer/ConsoleAnalyzer-83b66ad7-d024-454e-b851-52d11cc1cf7c",
"name": "",
"region": "us-east-1",
"service": "accessanalyzer",
"type": "Other",
"text_search": "'1':9A,15C '112233445566':10A 'access':4A 'access-analyzer':3A 'accessanalyzer':16 'analyzer':5A 'analyzer/consoleanalyzer-83b66ad7-d024-454e-b851-52d11cc1cf7c':11A 'arn':1A 'aws':2A 'east':8A,14C 'other':17 'us':7A,13C 'us-east':6A,12C"
}
}
]

File diff suppressed because it is too large


@@ -1,180 +0,0 @@
[
{
"model": "api.providergroup",
"pk": "3fe28fb8-e545-424c-9b8f-69aff638f430",
"fields": {
"name": "first_group",
"inserted_at": "2024-11-13T11:36:19.503Z",
"updated_at": "2024-11-13T11:36:19.503Z",
"tenant": "12646005-9067-4d2a-a098-8bb378604362"
}
},
{
"model": "api.providergroup",
"pk": "525e91e7-f3f3-4254-bbc3-27ce1ade86b1",
"fields": {
"name": "second_group",
"inserted_at": "2024-11-13T11:36:25.421Z",
"updated_at": "2024-11-13T11:36:25.421Z",
"tenant": "12646005-9067-4d2a-a098-8bb378604362"
}
},
{
"model": "api.providergroup",
"pk": "481769f5-db2b-447b-8b00-1dee18db90ec",
"fields": {
"name": "third_group",
"inserted_at": "2024-11-13T11:36:37.603Z",
"updated_at": "2024-11-13T11:36:37.603Z",
"tenant": "12646005-9067-4d2a-a098-8bb378604362"
}
},
{
"model": "api.providergroupmembership",
"pk": "13625bd3-f428-4021-ac1b-b0bd41b6e02f",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"provider": "1b59e032-3eb6-4694-93a5-df84cd9b3ce2",
"provider_group": "3fe28fb8-e545-424c-9b8f-69aff638f430",
"inserted_at": "2024-11-13T11:55:17.138Z"
}
},
{
"model": "api.providergroupmembership",
"pk": "54784ebe-42d2-4937-aa6a-e21c62879567",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"provider_group": "3fe28fb8-e545-424c-9b8f-69aff638f430",
"inserted_at": "2024-11-13T11:55:17.138Z"
}
},
{
"model": "api.providergroupmembership",
"pk": "c8bd52d5-42a5-48fe-8e0a-3eef154b8ebe",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"provider": "15fce1fa-ecaa-433f-a9dc-62553f3a2555",
"provider_group": "525e91e7-f3f3-4254-bbc3-27ce1ade86b1",
"inserted_at": "2024-11-13T11:55:41.237Z"
}
},
{
"model": "api.role",
"pk": "3f01e759-bdf9-4a99-8888-1ab805b79f93",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"name": "admin_test",
"manage_users": true,
"manage_account": true,
"manage_billing": true,
"manage_providers": true,
"manage_integrations": true,
"manage_scans": true,
"unlimited_visibility": true,
"inserted_at": "2024-11-20T15:32:42.402Z",
"updated_at": "2024-11-20T15:32:42.402Z"
}
},
{
"model": "api.role",
"pk": "845ff03a-87ef-42ba-9786-6577c70c4df0",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"name": "first_role",
"manage_users": true,
"manage_account": true,
"manage_billing": true,
"manage_providers": true,
"manage_integrations": false,
"manage_scans": false,
"unlimited_visibility": true,
"inserted_at": "2024-11-20T15:31:53.239Z",
"updated_at": "2024-11-20T15:31:53.239Z"
}
},
{
"model": "api.role",
"pk": "902d726c-4bd5-413a-a2a4-f7b4754b6b20",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"name": "third_role",
"manage_users": false,
"manage_account": false,
"manage_billing": false,
"manage_providers": false,
"manage_integrations": false,
"manage_scans": true,
"unlimited_visibility": false,
"inserted_at": "2024-11-20T15:34:05.440Z",
"updated_at": "2024-11-20T15:34:05.440Z"
}
},
{
"model": "api.roleprovidergrouprelationship",
"pk": "57fd024a-0a7f-49b4-a092-fa0979a07aaf",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"role": "3f01e759-bdf9-4a99-8888-1ab805b79f93",
"provider_group": "3fe28fb8-e545-424c-9b8f-69aff638f430",
"inserted_at": "2024-11-20T15:32:42.402Z"
}
},
{
"model": "api.roleprovidergrouprelationship",
"pk": "a3cd0099-1c13-4df1-a5e5-ecdfec561b35",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"role": "3f01e759-bdf9-4a99-8888-1ab805b79f93",
"provider_group": "481769f5-db2b-447b-8b00-1dee18db90ec",
"inserted_at": "2024-11-20T15:32:42.402Z"
}
},
{
"model": "api.roleprovidergrouprelationship",
"pk": "cfd84182-a058-40c2-af3c-0189b174940f",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"role": "3f01e759-bdf9-4a99-8888-1ab805b79f93",
"provider_group": "525e91e7-f3f3-4254-bbc3-27ce1ade86b1",
"inserted_at": "2024-11-20T15:32:42.402Z"
}
},
{
"model": "api.userrolerelationship",
"pk": "92339663-e954-4fd8-98fb-8bfe15949975",
"fields": {
"tenant": "12646005-9067-4d2a-a098-8bb378604362",
"role": "3f01e759-bdf9-4a99-8888-1ab805b79f93",
"user": "8b38e2eb-6689-4f1e-a4ba-95b275130200",
"inserted_at": "2024-11-20T15:36:14.302Z"
}
},
{
"model": "api.role",
"pk": "a5b6c7d8-9e0f-1234-5678-90abcdef1234",
"fields": {
"tenant": "7c8f94a3-e2d1-4b3a-9f87-2c4d5e6f1a2b",
"name": "e2e_admin",
"manage_users": true,
"manage_account": true,
"manage_billing": true,
"manage_providers": true,
"manage_integrations": true,
"manage_scans": true,
"unlimited_visibility": true,
"inserted_at": "2024-01-01T00:00:00.000Z",
"updated_at": "2024-01-01T00:00:00.000Z"
}
},
{
"model": "api.userrolerelationship",
"pk": "f1e2d3c4-b5a6-9876-5432-10fedcba9876",
"fields": {
"tenant": "7c8f94a3-e2d1-4b3a-9f87-2c4d5e6f1a2b",
"role": "a5b6c7d8-9e0f-1234-5678-90abcdef1234",
"user": "6d4f8a91-3c2e-4b5a-8f7d-1e9c5b2a4d6f",
"inserted_at": "2024-01-01T00:00:00.000Z"
}
}
]

File diff suppressed because one or more lines are too long


@@ -1,80 +0,0 @@
from django.contrib.sites.models import Site
from django.core.management.base import BaseCommand
from django.db import DEFAULT_DB_ALIAS, connection, connections, transaction
from django.db.migrations.recorder import MigrationRecorder
def table_exists(table_name):
with connection.cursor() as cursor:
cursor.execute(
"""
SELECT EXISTS (
SELECT 1 FROM information_schema.tables
WHERE table_name = %s
)
""",
[table_name],
)
return cursor.fetchone()[0]
class Command(BaseCommand):
help = "Fix migration inconsistency between socialaccount and sites"
def add_arguments(self, parser):
parser.add_argument(
"--database",
default=DEFAULT_DB_ALIAS,
help="Specifies the database to operate on.",
)
def handle(self, *args, **options):
db = options["database"]
connection = connections[db]
recorder = MigrationRecorder(connection)
applied = set(recorder.applied_migrations())
has_social = ("socialaccount", "0001_initial") in applied
with connection.cursor() as cursor:
cursor.execute(
"""
SELECT EXISTS (
SELECT FROM information_schema.tables
WHERE table_name = 'django_site'
);
"""
)
site_table_exists = cursor.fetchone()[0]
if has_social and not site_table_exists:
self.stdout.write(
f"Detected inconsistency in '{db}'. Creating 'django_site' table manually..."
)
with transaction.atomic(using=db):
with connection.schema_editor() as schema_editor:
schema_editor.create_model(Site)
recorder.record_applied("sites", "0001_initial")
recorder.record_applied("sites", "0002_alter_domain_unique")
self.stdout.write(
"Fixed: 'django_site' table created and migrations registered."
)
# Ensure the relationship table also exists
if not table_exists("socialaccount_socialapp_sites"):
self.stdout.write(
"Detected missing 'socialaccount_socialapp_sites' table. Creating manually..."
)
with connection.schema_editor() as schema_editor:
from allauth.socialaccount.models import SocialApp
schema_editor.create_model(
SocialApp._meta.get_field("sites").remote_field.through
)
self.stdout.write(
"Fixed: 'socialaccount_socialapp_sites' table created."
)
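
For reference, a minimal invocation sketch. The module name of this management command is suppressed in this diff, so the command name used below is an assumption; Django derives it from the file name under management/commands/.

from django.core.management import call_command

# Assumed command name; adjust to the actual file name, which this diff does not show.
# The "database" keyword maps to the --database option defined in add_arguments.
call_command("fix_social_sites_migrations", database="default")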


@@ -1,285 +0,0 @@
import random
from datetime import datetime, timezone
from math import ceil
from uuid import uuid4
from django.core.management.base import BaseCommand
from tqdm import tqdm
from api.db_utils import rls_transaction
from api.models import (
Finding,
Provider,
Resource,
ResourceFindingMapping,
ResourceScanSummary,
Scan,
StatusChoices,
)
from prowler.lib.check.models import CheckMetadata
class Command(BaseCommand):
help = "Populates the database with test data for performance testing."
def add_arguments(self, parser):
parser.add_argument(
"--tenant",
type=str,
required=True,
help="Tenant id for which the data will be populated.",
)
parser.add_argument(
"--resources",
type=int,
required=True,
help="The number of resources to create.",
)
parser.add_argument(
"--findings",
type=int,
required=True,
help="The number of findings to create.",
)
parser.add_argument(
"--batch", type=int, required=True, help="The batch size for bulk creation."
)
parser.add_argument(
"--alias",
type=str,
required=False,
help="Optional alias for the provider and scan",
)
def handle(self, *args, **options):
tenant_id = options["tenant"]
num_resources = options["resources"]
num_findings = options["findings"]
batch_size = options["batch"]
alias = options["alias"] or "Testing"
uid_token = str(uuid4())
self.stdout.write(self.style.NOTICE("Starting data population"))
self.stdout.write(self.style.NOTICE(f"\tTenant: {tenant_id}"))
self.stdout.write(self.style.NOTICE(f"\tAlias: {alias}"))
self.stdout.write(self.style.NOTICE(f"\tResources: {num_resources}"))
self.stdout.write(self.style.NOTICE(f"\tFindings: {num_findings}"))
self.stdout.write(self.style.NOTICE(f"\tBatch size: {batch_size}\n\n"))
# Resource metadata
possible_regions = [
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2",
"ca-central-1",
"eu-central-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"ap-southeast-1",
"ap-southeast-2",
"ap-northeast-1",
"ap-northeast-2",
"ap-south-1",
"sa-east-1",
]
possible_services = []
possible_types = []
bulk_check_metadata = CheckMetadata.get_bulk(provider="aws")
for check_metadata in bulk_check_metadata.values():
if check_metadata.ServiceName not in possible_services:
possible_services.append(check_metadata.ServiceName)
if (
check_metadata.ResourceType
and check_metadata.ResourceType not in possible_types
):
possible_types.append(check_metadata.ResourceType)
with rls_transaction(tenant_id):
provider, _ = Provider.all_objects.get_or_create(
tenant_id=tenant_id,
provider="aws",
connected=True,
uid=str(random.randint(100000000000, 999999999999)),
defaults={
"alias": alias,
},
)
with rls_transaction(tenant_id):
scan = Scan.all_objects.create(
tenant_id=tenant_id,
provider=provider,
name=alias,
trigger="manual",
state="executing",
progress=0,
started_at=datetime.now(timezone.utc),
)
scan_state = "completed"
try:
# Create resources
resources = []
for i in range(num_resources):
resources.append(
Resource(
tenant_id=tenant_id,
provider_id=provider.id,
uid=f"testing-{uid_token}-{i}",
name=f"Testing {uid_token}-{i}",
region=random.choice(possible_regions),
service=random.choice(possible_services),
type=random.choice(possible_types),
inserted_at="2024-10-01T00:00:00Z",
)
)
num_batches = ceil(len(resources) / batch_size)
self.stdout.write(self.style.WARNING("Creating resources..."))
for i in tqdm(range(0, len(resources), batch_size), total=num_batches):
with rls_transaction(tenant_id):
Resource.all_objects.bulk_create(resources[i : i + batch_size])
self.stdout.write(self.style.SUCCESS("Resources created successfully.\n\n"))
with rls_transaction(tenant_id):
scan.progress = 33
scan.save()
# Create Findings
findings = []
possible_deltas = ["new", "changed", None]
possible_severities = ["critical", "high", "medium", "low"]
findings_resources_mapping = []
for i in range(num_findings):
severity = random.choice(possible_severities)
check_id = random.randint(1, 1000)
assigned_resource_num = random.randint(0, len(resources) - 1)
assigned_resource = resources[assigned_resource_num]
findings_resources_mapping.append(assigned_resource_num)
findings.append(
Finding(
tenant_id=tenant_id,
scan=scan,
uid=f"testing-{uid_token}-{i}",
delta=random.choice(possible_deltas),
check_id=f"check-{check_id}",
status=random.choice(list(StatusChoices)),
severity=severity,
impact=severity,
raw_result={},
check_metadata={
"checktitle": f"Test title for check {check_id}",
"risk": f"Testing risk {uid_token}-{i}",
"provider": "aws",
"severity": severity,
"categories": ["category1", "category2", "category3"],
"description": "This is a random description that should not matter for testing purposes.",
"servicename": assigned_resource.service,
"resourcetype": assigned_resource.type,
},
resource_types=[assigned_resource.type],
resource_regions=[assigned_resource.region],
resource_services=[assigned_resource.service],
inserted_at="2024-10-01T00:00:00Z",
)
)
num_batches = ceil(len(findings) / batch_size)
self.stdout.write(self.style.WARNING("Creating findings..."))
for i in tqdm(range(0, len(findings), batch_size), total=num_batches):
with rls_transaction(tenant_id):
Finding.all_objects.bulk_create(findings[i : i + batch_size])
self.stdout.write(self.style.SUCCESS("Findings created successfully.\n\n"))
with rls_transaction(tenant_id):
scan.progress = 66
scan.save()
# Create ResourceFindingMapping
mappings = []
scan_resource_cache: set[tuple] = set()
for index, finding_instance in enumerate(findings):
resource_instance = resources[findings_resources_mapping[index]]
mappings.append(
ResourceFindingMapping(
tenant_id=tenant_id,
resource=resource_instance,
finding=finding_instance,
)
)
scan_resource_cache.add(
(
str(resource_instance.id),
resource_instance.service,
resource_instance.region,
resource_instance.type,
)
)
num_batches = ceil(len(mappings) / batch_size)
self.stdout.write(
self.style.WARNING("Creating resource-finding mappings...")
)
for i in tqdm(range(0, len(mappings), batch_size), total=num_batches):
with rls_transaction(tenant_id):
ResourceFindingMapping.objects.bulk_create(
mappings[i : i + batch_size]
)
self.stdout.write(
self.style.SUCCESS(
"Resource-finding mappings created successfully.\n\n"
)
)
with rls_transaction(tenant_id):
scan.progress = 99
scan.save()
self.stdout.write(self.style.WARNING("Creating finding filter values..."))
resource_scan_summaries = [
ResourceScanSummary(
tenant_id=tenant_id,
scan_id=str(scan.id),
resource_id=resource_id,
service=service,
region=region,
resource_type=resource_type,
)
for resource_id, service, region, resource_type in scan_resource_cache
]
num_batches = ceil(len(resource_scan_summaries) / batch_size)
for i in tqdm(
range(0, len(resource_scan_summaries), batch_size),
total=num_batches,
):
with rls_transaction(tenant_id):
ResourceScanSummary.objects.bulk_create(
resource_scan_summaries[i : i + batch_size],
ignore_conflicts=True,
)
self.stdout.write(
self.style.SUCCESS("Finding filter values created successfully.\n\n")
)
except Exception as e:
self.stdout.write(self.style.ERROR(f"Failed to populate test data: {e}"))
scan_state = "failed"
finally:
scan.completed_at = datetime.now(timezone.utc)
scan.duration = int(
(datetime.now(timezone.utc) - scan.started_at).total_seconds()
)
scan.progress = 100
scan.state = scan_state
scan.unique_resource_count = num_resources
with rls_transaction(tenant_id):
scan.save()
self.stdout.write(self.style.NOTICE("Successfully populated test data."))


@@ -1,49 +0,0 @@
import logging
import time
from config.custom_logging import BackendLogger
def extract_auth_info(request) -> dict:
if getattr(request, "auth", None) is not None:
tenant_id = request.auth.get("tenant_id", "N/A")
user_id = request.auth.get("sub", "N/A")
else:
tenant_id, user_id = "N/A", "N/A"
return {"tenant_id": tenant_id, "user_id": user_id}
class APILoggingMiddleware:
"""
Middleware for logging API requests.
This middleware logs, for each API request, the tenant and user identifiers, HTTP method, path, query parameters, response status code, and request duration.
Args:
get_response (Callable): A callable to get the response, typically the next middleware or view.
"""
def __init__(self, get_response):
self.get_response = get_response
self.logger = logging.getLogger(BackendLogger.API)
def __call__(self, request):
request_start_time = time.time()
response = self.get_response(request)
duration = time.time() - request_start_time
auth_info = extract_auth_info(request)
self.logger.info(
"",
extra={
"user_id": auth_info["user_id"],
"tenant_id": auth_info["tenant_id"],
"method": request.method,
"path": request.path,
"query_params": request.GET.dict(),
"status_code": response.status_code,
"duration": duration,
},
)
return response
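
Middleware like this takes effect only when listed in Django's MIDDLEWARE setting. A sketch, assuming the class lives at api.middleware.APILoggingMiddleware (the dotted path is not shown in this diff):

# settings.py (sketch)
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    # ... the rest of the middleware stack ...
    "api.middleware.APILoggingMiddleware",  # assumed dotted path to the class above
]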

File diff suppressed because it is too large


@@ -1,23 +0,0 @@
from django.conf import settings
from django.db import migrations
from api.db_utils import DB_PROWLER_USER
DB_NAME = settings.DATABASES["default"]["NAME"]
class Migration(migrations.Migration):
dependencies = [
("api", "0001_initial"),
("token_blacklist", "0012_alter_outstandingtoken_user"),
]
operations = [
migrations.RunSQL(
f"""
GRANT SELECT, INSERT, UPDATE, DELETE ON token_blacklist_blacklistedtoken TO {DB_PROWLER_USER};
GRANT SELECT, INSERT, UPDATE, DELETE ON token_blacklist_outstandingtoken TO {DB_PROWLER_USER};
GRANT SELECT, DELETE ON django_admin_log TO {DB_PROWLER_USER};
"""
),
]


@@ -1,23 +0,0 @@
# Generated by Django 5.1.1 on 2024-12-20 13:16
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0002_token_migrations"),
]
operations = [
migrations.RemoveConstraint(
model_name="provider",
name="unique_provider_uids",
),
migrations.AddConstraint(
model_name="provider",
constraint=models.UniqueConstraint(
fields=("tenant_id", "provider", "uid", "is_deleted"),
name="unique_provider_uids",
),
),
]


@@ -1,248 +0,0 @@
# Generated by Django 5.1.1 on 2024-12-05 12:29
import uuid
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models
import api.rls
class Migration(migrations.Migration):
dependencies = [
("api", "0003_update_provider_unique_constraint_with_is_deleted"),
]
operations = [
migrations.CreateModel(
name="Role",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("name", models.CharField(max_length=255)),
("manage_users", models.BooleanField(default=False)),
("manage_account", models.BooleanField(default=False)),
("manage_billing", models.BooleanField(default=False)),
("manage_providers", models.BooleanField(default=False)),
("manage_integrations", models.BooleanField(default=False)),
("manage_scans", models.BooleanField(default=False)),
("unlimited_visibility", models.BooleanField(default=False)),
("inserted_at", models.DateTimeField(auto_now_add=True)),
("updated_at", models.DateTimeField(auto_now=True)),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "roles",
},
),
migrations.CreateModel(
name="RoleProviderGroupRelationship",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "role_provider_group_relationship",
},
),
migrations.CreateModel(
name="UserRoleRelationship",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "role_user_relationship",
},
),
migrations.AddField(
model_name="roleprovidergrouprelationship",
name="provider_group",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.providergroup"
),
),
migrations.AddField(
model_name="roleprovidergrouprelationship",
name="role",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.role"
),
),
migrations.AddField(
model_name="role",
name="provider_groups",
field=models.ManyToManyField(
related_name="roles",
through="api.RoleProviderGroupRelationship",
to="api.providergroup",
),
),
migrations.AddField(
model_name="userrolerelationship",
name="role",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.role"
),
),
migrations.AddField(
model_name="userrolerelationship",
name="user",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL
),
),
migrations.AddField(
model_name="role",
name="users",
field=models.ManyToManyField(
related_name="roles",
through="api.UserRoleRelationship",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AddConstraint(
model_name="roleprovidergrouprelationship",
constraint=models.UniqueConstraint(
fields=("role_id", "provider_group_id"),
name="unique_role_provider_group_relationship",
),
),
migrations.AddConstraint(
model_name="roleprovidergrouprelationship",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_roleprovidergrouprelationship",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.AddConstraint(
model_name="userrolerelationship",
constraint=models.UniqueConstraint(
fields=("role_id", "user_id"), name="unique_role_user_relationship"
),
),
migrations.AddConstraint(
model_name="userrolerelationship",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_userrolerelationship",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.AddConstraint(
model_name="role",
constraint=models.UniqueConstraint(
fields=("tenant_id", "name"), name="unique_role_per_tenant"
),
),
migrations.AddConstraint(
model_name="role",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_role",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.CreateModel(
name="InvitationRoleRelationship",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
(
"invitation",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.invitation"
),
),
(
"role",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.role"
),
),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "role_invitation_relationship",
},
),
migrations.AddConstraint(
model_name="invitationrolerelationship",
constraint=models.UniqueConstraint(
fields=("role_id", "invitation_id"),
name="unique_role_invitation_relationship",
),
),
migrations.AddConstraint(
model_name="invitationrolerelationship",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_invitationrolerelationship",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.AddField(
model_name="role",
name="invitations",
field=models.ManyToManyField(
related_name="roles",
through="api.InvitationRoleRelationship",
to="api.invitation",
),
),
]


@@ -1,44 +0,0 @@
from django.db import migrations
from api.db_router import MainRouter
def create_admin_role(apps, schema_editor):
Tenant = apps.get_model("api", "Tenant")
Role = apps.get_model("api", "Role")
User = apps.get_model("api", "User")
UserRoleRelationship = apps.get_model("api", "UserRoleRelationship")
for tenant in Tenant.objects.using(MainRouter.admin_db).all():
admin_role, _ = Role.objects.using(MainRouter.admin_db).get_or_create(
name="admin",
tenant=tenant,
defaults={
"manage_users": True,
"manage_account": True,
"manage_billing": True,
"manage_providers": True,
"manage_integrations": True,
"manage_scans": True,
"unlimited_visibility": True,
},
)
users = User.objects.using(MainRouter.admin_db).filter(
membership__tenant=tenant
)
for user in users:
UserRoleRelationship.objects.using(MainRouter.admin_db).get_or_create(
user=user,
role=admin_role,
tenant=tenant,
)
class Migration(migrations.Migration):
dependencies = [
("api", "0004_rbac"),
]
operations = [
migrations.RunPython(create_admin_role),
]


@@ -1,15 +0,0 @@
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0005_rbac_missing_admin_roles"),
]
operations = [
migrations.AddField(
model_name="finding",
name="first_seen_at",
field=models.DateTimeField(editable=False, null=True),
),
]


@@ -1,25 +0,0 @@
# Generated by Django 5.1.5 on 2025-01-28 15:03
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0006_findings_first_seen"),
]
operations = [
migrations.AddIndex(
model_name="scan",
index=models.Index(
fields=["tenant_id", "provider_id", "state", "inserted_at"],
name="scans_prov_state_insert_idx",
),
),
migrations.AddIndex(
model_name="scansummary",
index=models.Index(
fields=["tenant_id", "scan_id"], name="scan_summaries_tenant_scan_idx"
),
),
]


@@ -1,64 +0,0 @@
import json
from datetime import datetime, timedelta, timezone
import django.db.models.deletion
from django.db import migrations, models
from django_celery_beat.models import PeriodicTask
from api.db_utils import rls_transaction
from api.models import Scan, StateChoices
def migrate_daily_scheduled_scan_tasks(apps, schema_editor):
for daily_scheduled_scan_task in PeriodicTask.objects.filter(
task="scan-perform-scheduled"
):
task_kwargs = json.loads(daily_scheduled_scan_task.kwargs)
tenant_id = task_kwargs["tenant_id"]
provider_id = task_kwargs["provider_id"]
current_time = datetime.now(timezone.utc)
scheduled_time_today = datetime.combine(
current_time.date(),
daily_scheduled_scan_task.start_time.time(),
tzinfo=timezone.utc,
)
if current_time < scheduled_time_today:
next_scan_date = scheduled_time_today
else:
next_scan_date = scheduled_time_today + timedelta(days=1)
with rls_transaction(tenant_id):
Scan.objects.create(
tenant_id=tenant_id,
name="Daily scheduled scan",
provider_id=provider_id,
trigger=Scan.TriggerChoices.SCHEDULED,
state=StateChoices.SCHEDULED,
scheduled_at=next_scan_date,
scheduler_task_id=daily_scheduled_scan_task.id,
)
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0007_scan_and_scan_summaries_indexes"),
("django_celery_beat", "0019_alter_periodictasks_options"),
]
operations = [
migrations.AddField(
model_name="scan",
name="scheduler_task",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
to="django_celery_beat.periodictask",
),
),
migrations.RunPython(migrate_daily_scheduled_scan_tasks),
]
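
The next_scan_date logic above keeps today's slot when it has not passed yet and otherwise rolls over to tomorrow. A worked example with illustrative times:

from datetime import datetime, timedelta, timezone

# Illustrative values only, mirroring the migration's scheduling logic.
current_time = datetime(2025, 2, 1, 9, 0, tzinfo=timezone.utc)
scheduled_time_today = datetime(2025, 2, 1, 6, 0, tzinfo=timezone.utc)
next_scan_date = (
    scheduled_time_today
    if current_time < scheduled_time_today
    else scheduled_time_today + timedelta(days=1)
)
# 09:00 UTC is past the 06:00 UTC slot, so next_scan_date is 2025-02-02 06:00 UTC.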


@@ -1,22 +0,0 @@
# Generated by Django 5.1.5 on 2025-02-07 09:42
import django.core.validators
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0008_daily_scheduled_tasks_update"),
]
operations = [
migrations.AlterField(
model_name="provider",
name="uid",
field=models.CharField(
max_length=250,
validators=[django.core.validators.MinLengthValidator(3)],
verbose_name="Unique identifier for the provider, set by the provider",
),
),
]


@@ -1,109 +0,0 @@
from functools import partial
from django.db import connection, migrations
def create_index_on_partitions(
apps, schema_editor, parent_table: str, index_name: str, index_details: str
):
with connection.cursor() as cursor:
cursor.execute(
"""
SELECT inhrelid::regclass::text
FROM pg_inherits
WHERE inhparent = %s::regclass;
""",
[parent_table],
)
partitions = [row[0] for row in cursor.fetchall()]
# Iterate over partitions and create index concurrently.
# Note: PostgreSQL does not allow CONCURRENTLY inside a transaction,
# so we need atomic = False for this migration.
for partition in partitions:
sql = (
f"CREATE INDEX CONCURRENTLY IF NOT EXISTS {partition.replace('.', '_')}_{index_name} ON {partition} "
f"{index_details};"
)
schema_editor.execute(sql)
def drop_index_on_partitions(apps, schema_editor, parent_table: str, index_name: str):
with schema_editor.connection.cursor() as cursor:
cursor.execute(
"""
SELECT inhrelid::regclass::text
FROM pg_inherits
WHERE inhparent = %s::regclass;
""",
[parent_table],
)
partitions = [row[0] for row in cursor.fetchall()]
# Iterate over partitions and drop index concurrently.
for partition in partitions:
partition_index = f"{partition.replace('.', '_')}_{index_name}"
sql = f"DROP INDEX CONCURRENTLY IF EXISTS {partition_index};"
schema_editor.execute(sql)
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0009_increase_provider_uid_maximum_length"),
]
operations = [
migrations.RunPython(
partial(
create_index_on_partitions,
parent_table="findings",
index_name="findings_tenant_and_id_idx",
index_details="(tenant_id, id)",
),
reverse_code=partial(
drop_index_on_partitions,
parent_table="findings",
index_name="findings_tenant_and_id_idx",
),
),
migrations.RunPython(
partial(
create_index_on_partitions,
parent_table="findings",
index_name="find_tenant_scan_idx",
index_details="(tenant_id, scan_id)",
),
reverse_code=partial(
drop_index_on_partitions,
parent_table="findings",
index_name="find_tenant_scan_idx",
),
),
migrations.RunPython(
partial(
create_index_on_partitions,
parent_table="findings",
index_name="find_tenant_scan_id_idx",
index_details="(tenant_id, scan_id, id)",
),
reverse_code=partial(
drop_index_on_partitions,
parent_table="findings",
index_name="find_tenant_scan_id_idx",
),
),
migrations.RunPython(
partial(
create_index_on_partitions,
parent_table="findings",
index_name="find_delta_new_idx",
index_details="(tenant_id, id) where delta = 'new'",
),
reverse_code=partial(
drop_index_on_partitions,
parent_table="findings",
index_name="find_delta_new_idx",
),
),
]
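
To make the per-partition SQL concrete: for a hypothetical partition named findings_default, the first create_index_on_partitions call above would emit the statement sketched here (real partition names depend on the partitioning scheme):

# Reproduces the f-string from create_index_on_partitions with assumed values.
partition = "findings_default"  # hypothetical partition name
index_name = "findings_tenant_and_id_idx"
index_details = "(tenant_id, id)"
sql = (
    f"CREATE INDEX CONCURRENTLY IF NOT EXISTS {partition.replace('.', '_')}_{index_name} "
    f"ON {partition} {index_details};"
)
# -> CREATE INDEX CONCURRENTLY IF NOT EXISTS
#    findings_default_findings_tenant_and_id_idx ON findings_default (tenant_id, id);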


@@ -1,49 +0,0 @@
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0010_findings_performance_indexes_partitions"),
]
operations = [
migrations.AddIndex(
model_name="finding",
index=models.Index(
fields=["tenant_id", "id"], name="findings_tenant_and_id_idx"
),
),
migrations.AddIndex(
model_name="finding",
index=models.Index(
fields=["tenant_id", "scan_id"], name="find_tenant_scan_idx"
),
),
migrations.AddIndex(
model_name="finding",
index=models.Index(
fields=["tenant_id", "scan_id", "id"], name="find_tenant_scan_id_idx"
),
),
migrations.AddIndex(
model_name="finding",
index=models.Index(
condition=models.Q(("delta", "new")),
fields=["tenant_id", "id"],
name="find_delta_new_idx",
),
),
migrations.AddIndex(
model_name="resourcetagmapping",
index=models.Index(
fields=["tenant_id", "resource_id"], name="resource_tag_tenant_idx"
),
),
migrations.AddIndex(
model_name="resource",
index=models.Index(
fields=["tenant_id", "service", "region", "type"],
name="resource_tenant_metadata_idx",
),
),
]


@@ -1,15 +0,0 @@
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0011_findings_performance_indexes_parent"),
]
operations = [
migrations.AddField(
model_name="scan",
name="output_location",
field=models.CharField(blank=True, max_length=200, null=True),
),
]


@@ -1,35 +0,0 @@
# Generated by Django 5.1.5 on 2025-03-03 15:46
from functools import partial
from django.db import migrations
from api.db_utils import IntegrationTypeEnum, PostgresEnumMigration, register_enum
from api.models import Integration
IntegrationTypeEnumMigration = PostgresEnumMigration(
enum_name="integration_type",
enum_values=tuple(
integration_type[0]
for integration_type in Integration.IntegrationChoices.choices
),
)
class Migration(migrations.Migration):
atomic = False
dependencies = [
("api", "0012_scan_report_output"),
]
operations = [
migrations.RunPython(
IntegrationTypeEnumMigration.create_enum_type,
reverse_code=IntegrationTypeEnumMigration.drop_enum_type,
),
migrations.RunPython(
partial(register_enum, enum_class=IntegrationTypeEnum),
reverse_code=migrations.RunPython.noop,
),
]


@@ -1,131 +0,0 @@
# Generated by Django 5.1.5 on 2025-03-03 15:46
import uuid
import django.db.models.deletion
from django.db import migrations, models
import api.db_utils
import api.rls
from api.rls import RowLevelSecurityConstraint
class Migration(migrations.Migration):
dependencies = [
("api", "0013_integrations_enum"),
]
operations = [
migrations.CreateModel(
name="Integration",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
("updated_at", models.DateTimeField(auto_now=True)),
("enabled", models.BooleanField(default=False)),
("connected", models.BooleanField(blank=True, null=True)),
(
"connection_last_checked_at",
models.DateTimeField(blank=True, null=True),
),
(
"integration_type",
api.db_utils.IntegrationTypeEnumField(
choices=[
("amazon_s3", "Amazon S3"),
("saml", "SAML"),
("aws_security_hub", "AWS Security Hub"),
("jira", "JIRA"),
("slack", "Slack"),
]
),
),
("configuration", models.JSONField(default=dict)),
("_credentials", models.BinaryField(db_column="credentials")),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={"db_table": "integrations", "abstract": False},
),
migrations.AddConstraint(
model_name="integration",
constraint=RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_integration",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.CreateModel(
name="IntegrationProviderRelationship",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4,
editable=False,
primary_key=True,
serialize=False,
),
),
("inserted_at", models.DateTimeField(auto_now_add=True)),
(
"integration",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
to="api.integration",
),
),
(
"provider",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.provider"
),
),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "integration_provider_mappings",
"constraints": [
models.UniqueConstraint(
fields=("integration_id", "provider_id"),
name="unique_integration_provider_rel",
),
],
},
),
migrations.AddConstraint(
model_name="IntegrationProviderRelationship",
constraint=RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_integrationproviderrelationship",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
migrations.AddField(
model_name="integration",
name="providers",
field=models.ManyToManyField(
blank=True,
related_name="integrations",
through="api.IntegrationProviderRelationship",
to="api.provider",
),
),
]


@@ -1,26 +0,0 @@
# Generated by Django 5.1.5 on 2025-03-25 11:29
from django.db import migrations, models
import api.db_utils
class Migration(migrations.Migration):
dependencies = [
("api", "0014_integrations"),
]
operations = [
migrations.AddField(
model_name="finding",
name="muted",
field=models.BooleanField(default=False),
),
migrations.AlterField(
model_name="finding",
name="status",
field=api.db_utils.StatusEnumField(
choices=[("FAIL", "Fail"), ("PASS", "Pass"), ("MANUAL", "Manual")]
),
),
]


@@ -1,32 +0,0 @@
# Generated by Django 5.1.5 on 2025-03-31 10:46
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("api", "0015_finding_muted"),
]
operations = [
migrations.AddField(
model_name="finding",
name="compliance",
field=models.JSONField(blank=True, default=dict, null=True),
),
migrations.AddField(
model_name="resource",
name="details",
field=models.TextField(blank=True, null=True),
),
migrations.AddField(
model_name="resource",
name="metadata",
field=models.TextField(blank=True, null=True),
),
migrations.AddField(
model_name="resource",
name="partition",
field=models.TextField(blank=True, null=True),
),
]


@@ -1,32 +0,0 @@
# Generated by Django 5.1.7 on 2025-04-16 08:47
from django.db import migrations
import api.db_utils
class Migration(migrations.Migration):
dependencies = [
("api", "0016_finding_compliance_resource_details_and_more"),
]
operations = [
migrations.AlterField(
model_name="provider",
name="provider",
field=api.db_utils.ProviderEnumField(
choices=[
("aws", "AWS"),
("azure", "Azure"),
("gcp", "GCP"),
("kubernetes", "Kubernetes"),
("m365", "M365"),
],
default="aws",
),
),
migrations.RunSQL(
"ALTER TYPE provider ADD VALUE IF NOT EXISTS 'm365';",
reverse_sql=migrations.RunSQL.noop,
),
]


@@ -1,81 +0,0 @@
# Generated by Django 5.1.7 on 2025-05-05 10:01
import uuid
import django.db.models.deletion
import uuid6
from django.db import migrations, models
import api.rls
class Migration(migrations.Migration):
dependencies = [
("api", "0017_m365_provider"),
]
operations = [
migrations.CreateModel(
name="ResourceScanSummary",
fields=[
(
"id",
models.BigAutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("scan_id", models.UUIDField(db_index=True, default=uuid6.uuid7)),
("resource_id", models.UUIDField(db_index=True, default=uuid.uuid4)),
("service", models.CharField(max_length=100)),
("region", models.CharField(max_length=100)),
("resource_type", models.CharField(max_length=100)),
(
"tenant",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
),
),
],
options={
"db_table": "resource_scan_summaries",
"indexes": [
models.Index(
fields=["tenant_id", "scan_id", "service"],
name="rss_tenant_scan_svc_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "region"],
name="rss_tenant_scan_reg_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "resource_type"],
name="rss_tenant_scan_type_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "region", "service"],
name="rss_tenant_scan_reg_svc_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "service", "resource_type"],
name="rss_tenant_scan_svc_type_idx",
),
models.Index(
fields=["tenant_id", "scan_id", "region", "resource_type"],
name="rss_tenant_scan_reg_type_idx",
),
],
"unique_together": {("tenant_id", "scan_id", "resource_id")},
},
),
migrations.AddConstraint(
model_name="resourcescansummary",
constraint=api.rls.RowLevelSecurityConstraint(
"tenant_id",
name="rls_on_resourcescansummary",
statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
),
),
]

Some files were not shown because too many files have changed in this diff.