Compare commits


219 Commits

Author SHA1 Message Date
Fennerr
bcfdcbde30 Resolved some conflicts 2024-02-15 15:56:07 +02:00
Sergio Garcia
2f50aaa9c1 resolve conflicts 2024-01-16 11:16:11 +01:00
Nacho Rivera
537081a0f6 feat(AwsProvider): include new structure for AWS provider (#3252)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2024-01-16 10:51:15 +01:00
Sergio Garcia
2eb774bbc9 chore(manual status): change INFO to MANUAL status (#3254) 2024-01-16 10:45:00 +01:00
Sergio Garcia
5419117842 feat(status): add --status flag (#3238) 2024-01-16 10:44:39 +01:00
Sergio Garcia
e72831d428 feat(kubernetes): add Kubernetes provider (#3226)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-01-16 10:43:22 +01:00
Sergio Garcia
217b8ad250 fix(gcp): fix error in generating compliance (#3201) 2024-01-16 10:42:34 +01:00
Sergio Garcia
09b4548445 feat(compliance): execute all compliance by default (#3003)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-01-16 10:42:31 +01:00
Nacho Rivera
0d96583769 feat(CloudProvider): introduce global provider Azure&GCP (#3069) 2024-01-16 10:41:11 +01:00
Sergio Garcia
722fe0a1bc chore(sts-endpoint): deprecate --sts-endpoint-region (#3046)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-01-16 10:39:14 +01:00
Sergio Garcia
445821eceb feat(mute list): change allowlist to mute list (#3039)
Co-authored-by: Nacho Rivera <nachor1992@gmail.com>
2024-01-16 10:39:11 +01:00
Nacho Rivera
c3d129a4b2 chore(update): rebase from master (#3067)
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: r3drun3 <simone.ragonesi@sighup.io>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: John Mastron <14130495+mtronrd@users.noreply.github.com>
Co-authored-by: John Mastron <jmastron@jpl.nasa.gov>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: github-actions <noreply@github.com>
Co-authored-by: simone ragonesi <102741679+R3DRUN3@users.noreply.github.com>
Co-authored-by: Johnny Lu <johnny2lu@gmail.com>
Co-authored-by: Vajrala Venkateswarlu <59252985+venkyvajrala@users.noreply.github.com>
Co-authored-by: Ignacio Dominguez <ignacio.dominguez@zego.com>
2024-01-16 10:36:42 +01:00
Nacho Rivera
36fc575e40 feat(AwsProvider): include new structure for AWS provider (#3252)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2024-01-15 16:55:53 +01:00
Sergio Garcia
24efb34d91 chore(manual status): change INFO to MANUAL status (#3254) 2024-01-09 18:08:00 +01:00
Sergio Garcia
c08e244c95 feat(status): add --status flag (#3238) 2024-01-09 11:35:44 +01:00
Sergio Garcia
c2f8980f1f feat(kubernetes): add Kubernetes provider (#3226)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-01-09 10:31:51 +01:00
Fennerr
028d29b8ff Added rich to poetry dependencies 2023-12-21 08:43:57 +02:00
Fennerr
b976cab926 Added rich to poetry dependencies 2023-12-21 08:43:47 +02:00
Fennerr
197a08ab94 Added --only-logs and some reordering 2023-12-21 08:34:49 +02:00
Fennerr
0d97780ade cleaned up execution manager,live display. Added metaclass 2023-12-20 23:22:18 +02:00
Fennerr
f2f922d7e8 fixed decorator to correctly handle args 2023-12-20 12:10:34 +02:00
Fennerr
606b4b5a66 merged threading progress 2023-12-20 11:58:39 +02:00
Fennerr
132056f4c1 some more progress 2023-12-20 11:52:32 +02:00
Fennerr
4845d6033b added progress decorator 2023-12-20 11:48:40 +02:00
Fennerr
57550e6984 initial switch 2023-12-20 11:42:48 +02:00
Fennerr
040b780af7 WIP: improved layout 2023-12-20 00:14:53 +02:00
Fennerr
abaa7855d7 Pull rebase from master 2023-12-19 21:55:18 +02:00
Fennerr
e9c6b35698 WIP: added verbose results and timer 2023-12-19 21:54:26 +02:00
Nacho Rivera
78505cb0a8 chore(sqs_...not_publicly_accessible): less restrictive condition test (#3211)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-19 16:53:19 +01:00
Fennerr
c92740869f WIP: centered results table 2023-12-19 14:13:59 +02:00
dependabot[bot]
f8d77d9a30 build(deps): bump google-auth-httplib2 from 0.1.1 to 0.2.0 (#3207)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 13:05:30 +01:00
Fennerr
49003fae08 WIP: added results table 2023-12-19 13:52:54 +02:00
Sergio Garcia
1a4887f028 chore(regions_update): Changes in regions for AWS services. (#3209)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-19 12:39:19 +01:00
dependabot[bot]
71042b5919 build(deps): bump mkdocs-material from 9.4.14 to 9.5.2 (#3206)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 12:39:10 +01:00
dependabot[bot]
435976800a build(deps-dev): bump moto from 4.2.11 to 4.2.12 (#3205)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 10:14:04 +01:00
dependabot[bot]
18f4c7205b build(deps-dev): bump coverage from 7.3.2 to 7.3.3 (#3204)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 08:55:14 +01:00
dependabot[bot]
06eeefb8bf build(deps-dev): bump pylint from 3.0.2 to 3.0.3 (#3203)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 08:30:45 +01:00
Fennerr
01f3c8656c WIP: improved layout 2023-12-18 23:33:43 +02:00
Sergio Garcia
1737d7cf42 fix(gcp): fix UnknownApiNameOrVersion error (#3202) 2023-12-18 14:32:33 +01:00
Fennerr
ba705406ff Moved all the check execution logic into execution manager 2023-12-18 14:37:41 +02:00
Fennerr
d8101acc9c Moved all the check execution logic into execution manager 2023-12-18 14:37:14 +02:00
dependabot[bot]
cd03fa6d46 build(deps): bump jsonschema from 4.18.0 to 4.20.0 (#3057)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-18 13:00:43 +01:00
Sergio Garcia
0ef85b3dee fix(gcp): fix error in generating compliance (#3201) 2023-12-18 12:10:58 +01:00
Sergio Garcia
a10a73962e chore(regions_update): Changes in regions for AWS services. (#3200) 2023-12-18 07:21:18 +01:00
Pepe Fagoaga
99d6fee7a0 fix(iam): Handle NoSuchEntity in list_group_policies (#3197) 2023-12-15 14:04:59 +01:00
Nacho Rivera
c8831f0f50 chore(s3 bucket input validation): validates input bucket (#3198) 2023-12-15 13:37:41 +01:00
Pepe Fagoaga
fdeb523581 feat(securityhub): Send only FAILs but storing all in the output files (#3195) 2023-12-15 13:31:55 +01:00
Fennerr
126acc046a Added execution manager and live display 2023-12-15 12:28:25 +02:00
Sergio Garcia
9a868464ee chore(regions_update): Changes in regions for AWS services. (#3196)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-15 10:15:54 +01:00
Alexandros Gidarakos
051ec75e01 docs(cloudshell): Update AWS CloudShell installation steps (#3192)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-14 08:35:23 +01:00
Alexandros Gidarakos
fc3909491a docs(cloudshell): Add missing steps to workaround (#3191) 2023-12-14 08:18:24 +01:00
Fennerr
f324f27016 updated ui redesign implementation 2023-12-13 23:00:12 +02:00
Sergio Garcia
93a2431211 feat(compliance): execute all compliance by default (#3003)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-13 17:31:39 +01:00
Pepe Fagoaga
2437fe270c docs(cloudshell): Add workaround to clone from github (#3190) 2023-12-13 17:19:30 +01:00
Nacho Rivera
c937b193d0 fix(apigw_restapi_auth check): add method auth testing (#3183) 2023-12-13 16:20:09 +01:00
Fennerr
8b5c995486 fix(lambda): memory leakage with lambda function code (#3167)
Co-authored-by: Justin Moorcroft <justin.moorcroft@mwrcybersec.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-13 15:15:13 +01:00
Fennerr
5b80082491 merged issue-2516 2023-12-13 15:17:13 +02:00
Pepe Fagoaga
2ca4656ef9 fix(lambda): Do not use function.code 2023-12-13 13:53:36 +01:00
Pepe Fagoaga
cb4de850e9 fix(lambda): Do not use function.code 2023-12-13 13:51:53 +01:00
Fennerr
92e0d74055 Keeping the code seperate from the function obj 2023-12-13 14:51:17 +02:00
Fennerr
578b21f424 Fixed error log message 2023-12-13 14:36:59 +02:00
Fennerr
85c44f01c5 Initial progress 2023-12-13 14:23:49 +02:00
Pepe Fagoaga
fb5d6cfd7e refactor(lambda): fetch code 2023-12-13 13:16:13 +01:00
Pepe Fagoaga
1b3f830623 test(lambda): fix tests 2023-12-13 13:15:23 +01:00
Sergio Garcia
4410f2a582 chore(regions_update): Changes in regions for AWS services. (#3189)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-13 10:32:10 +01:00
Nacho Rivera
1fe74937c1 feat(CloudProvider): introduce global provider Azure&GCP (#3069) 2023-12-12 18:05:17 +01:00
Sergio Garcia
6ee016e577 chore(sts-endpoint): deprecate --sts-endpoint-region (#3046)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-12 17:13:50 +01:00
Sergio Garcia
f7248dfb1c feat(mute list): change allowlist to mute list (#3039)
Co-authored-by: Nacho Rivera <nachor1992@gmail.com>
2023-12-12 16:57:52 +01:00
Fennerr
0481435846 Made use of service thread_pool 2023-12-12 16:10:34 +02:00
Fennerr
bbb816868e docs(aws): Added debug information to inspect retries in API calls (#3186)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-12 14:07:33 +01:00
Fennerr
5554e2be1b Merge branch 'master' into issue-2516 2023-12-12 14:58:10 +02:00
Fennerr
e97e2e84fc Merge branch 'master' of https://github.com/prowler-cloud/prowler 2023-12-12 14:57:53 +02:00
Fennerr
19f38dbb63 Modified logging statements 2023-12-12 14:25:29 +02:00
Fennerr
2441cca810 fix(threading): Improved threading for the AWS Service (#3175)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-12 12:50:26 +01:00
Sergio Garcia
3c3dfb380b fix(gcp): improve logging messages (#3185) 2023-12-12 12:38:50 +01:00
Nacho Rivera
0f165f0bf0 chore(actions): add prowler 4.0 branch to actions (#3184) 2023-12-12 11:40:01 +01:00
Fennerr
06d9eccebd Added threading 2023-12-12 12:22:38 +02:00
Sergio Garcia
7fcff548eb chore(regions_update): Changes in regions for AWS services. (#3182)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-12 10:28:01 +01:00
dependabot[bot]
8fa7b9ba00 build(deps-dev): bump docker from 6.1.3 to 7.0.0 (#3180)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 10:27:49 +01:00
dependabot[bot]
b101e15985 build(deps-dev): bump bandit from 1.7.5 to 1.7.6 (#3179)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 09:53:03 +01:00
dependabot[bot]
b4e412a37f build(deps-dev): bump pylint from 3.0.2 to 3.0.3 (#3181)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 09:33:27 +01:00
dependabot[bot]
ac0e2bbdb2 build(deps): bump google-api-python-client from 2.109.0 to 2.110.0 (#3178)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 08:07:30 +01:00
Sergio Garcia
ba16330e20 feat(cognito): add Amazon Cognito service (#3060) 2023-12-11 14:35:00 +01:00
Pepe Fagoaga
c9cb9774c6 fix(aws_regions): Get enabled regions (#3095) 2023-12-11 14:09:39 +01:00
Pepe Fagoaga
7b5b14dbd0 refactor(cloudwatch): simplify logic (#3172) 2023-12-11 11:23:24 +01:00
Fennerr
bd13973cf5 docs(parallel-execution): Combining the output files (#3096) 2023-12-11 11:11:53 +01:00
Fennerr
a7f8656e89 chore(elb): Improve status in elbv2_insecure_ssl_ciphers (#3169) 2023-12-11 11:04:37 +01:00
Sergio Garcia
1be52fab06 chore(ens): do not apply recomendation type to score (#3058) 2023-12-11 10:53:26 +01:00
Pepe Fagoaga
c9baff1a7f fix(generate_regional_clients): Global is not needed anymore (#3162) 2023-12-11 10:50:15 +01:00
Fennerr
5dfd8460be Improved threading for EC2 2023-12-11 11:06:32 +02:00
Pepe Fagoaga
d1bc68086d fix(access-analyzer): Handle ValidationException (#3165) 2023-12-11 09:40:12 +01:00
Pepe Fagoaga
44a4c0670b fix(cloudtrail): Handle UnsupportedOperationException (#3166) 2023-12-11 09:38:23 +01:00
Pepe Fagoaga
4785056740 fix(elasticache): Handle CacheClusterNotFound (#3174) 2023-12-11 09:37:01 +01:00
Pepe Fagoaga
694aa448a4 fix(s3): Handle NoSuchBucket in the service (#3173) 2023-12-11 09:36:26 +01:00
Sergio Garcia
ee215b1ced chore(regions_update): Changes in regions for AWS services. (#3168) 2023-12-11 08:04:48 +01:00
Fennerr
f71052bcfe Update awslambda_service.py
fixed blank space + removed the if statement in __init__ which I should have previously removed
2023-12-06 09:19:44 +02:00
Fennerr
7bfdb8c1f3 Update awslambda_service.py
removed function param - it is not needed
2023-12-05 22:18:36 +02:00
Justin Moorcroft
dedb03cc6e initial fix for issue #2516 2023-12-05 22:11:32 +02:00
Nacho Rivera
018e87884c test(audit_info): missing workspace test (#3164) 2023-12-05 16:05:39 +01:00
Nacho Rivera
a81cbbc325 test(audit_info): refactor iam (#3163) 2023-12-05 15:59:53 +01:00
Pepe Fagoaga
3962c9d816 test(audit_info): refactor acm, account and access analyzer (#3097) 2023-12-05 15:09:14 +01:00
Pepe Fagoaga
e187875da5 test(audit_info): refactor guardduty (#3160) 2023-12-05 15:00:46 +01:00
Pepe Fagoaga
f0d1a799a2 test(audit_info): refactor cloudtrail (#3111) 2023-12-05 14:59:42 +01:00
Pepe Fagoaga
5452d535d7 test(audit_info): refactor ec2 (#3132) 2023-12-05 14:58:58 +01:00
Pepe Fagoaga
7a776532a8 test(aws_account_id): refactor (#3161) 2023-12-05 14:58:42 +01:00
Nacho Rivera
e704d57957 test(audit_info): refactor inspector2 (#3159) 2023-12-05 14:19:40 +01:00
Pepe Fagoaga
c9a6eb5a1a test(audit_info): refactor globalaccelerator (#3154) 2023-12-05 14:13:02 +01:00
Pepe Fagoaga
c071812160 test(audit_info): refactor glue (#3158) 2023-12-05 14:12:44 +01:00
Pepe Fagoaga
3f95ad9ada test(audit_info): refactor glacier (#3153) 2023-12-05 14:09:04 +01:00
Nacho Rivera
250f59c9f5 test(audit_info): refactor kms (#3157) 2023-12-05 14:05:56 +01:00
Nacho Rivera
c17bbea2c7 test(audit_info): refactor macie (#3156) 2023-12-05 13:59:08 +01:00
Nacho Rivera
0262f8757a test(audit_info): refactor neptune (#3155) 2023-12-05 13:48:32 +01:00
Nacho Rivera
dbc2c481dc test(audit_info): refactor networkfirewall (#3152) 2023-12-05 13:20:52 +01:00
Pepe Fagoaga
e432c39eec test(audit_info): refactor fms (#3151) 2023-12-05 13:18:28 +01:00
Pepe Fagoaga
7383ae4f9c test(audit_info): refactor elbv2 (#3148) 2023-12-05 13:18:06 +01:00
Pepe Fagoaga
d217e33678 test(audit_info): refactor emr (#3149) 2023-12-05 13:17:42 +01:00
Nacho Rivera
d1daceff91 test(audit_info): refactor opensearch (#3150) 2023-12-05 13:17:28 +01:00
Nacho Rivera
dbbd556830 test(audit_info): refactor organizations (#3147) 2023-12-05 12:59:22 +01:00
Nacho Rivera
d483f1d90f test(audit_info): refactor rds (#3146) 2023-12-05 12:51:22 +01:00
Nacho Rivera
80684a998f test(audit_info): refactor redshift (#3144) 2023-12-05 12:42:08 +01:00
Pepe Fagoaga
0c4f0fde48 test(audit_info): refactor elb (#3145) 2023-12-05 12:41:37 +01:00
Pepe Fagoaga
071115cd52 test(audit_info): refactor elasticache (#3142) 2023-12-05 12:41:11 +01:00
Nacho Rivera
9136a755fe test(audit_info): refactor resourceexplorer2 (#3143) 2023-12-05 12:28:38 +01:00
Nacho Rivera
6ff864fc04 test(audit_info): refactor route53 (#3141) 2023-12-05 12:28:12 +01:00
Nacho Rivera
828a6f4696 test(audit_info): refactor s3 (#3140) 2023-12-05 12:13:21 +01:00
Pepe Fagoaga
417aa550a6 test(audit_info): refactor eks (#3139) 2023-12-05 12:07:41 +01:00
Pepe Fagoaga
78ffc2e238 test(audit_info): refactor efs (#3138) 2023-12-05 12:07:21 +01:00
Pepe Fagoaga
c9f22db1b5 test(audit_info): refactor ecs (#3137) 2023-12-05 12:07:01 +01:00
Pepe Fagoaga
41da560b64 test(audit_info): refactor ecr (#3136) 2023-12-05 12:06:42 +01:00
Nacho Rivera
b49e0b95f7 test(audit_info): refactor shield (#3131) 2023-12-05 11:40:42 +01:00
Nacho Rivera
50ef2729e6 test(audit_info): refactor sagemaker (#3135) 2023-12-05 11:40:19 +01:00
Nacho Rivera
6a901bb7de test(audit_info): refactor secretsmanager (#3134) 2023-12-05 11:33:54 +01:00
Nacho Rivera
f0da63c850 test(audit_info): refactor shub (#3133) 2023-12-05 11:33:34 +01:00
Nacho Rivera
b861c1dd3c test(audit_info): refactor sns (#3128) 2023-12-05 11:05:27 +01:00
Nacho Rivera
45faa2e9e8 test(audit_info): refactor sqs (#3130) 2023-12-05 11:05:05 +01:00
Pepe Fagoaga
b2e1eed684 test(audit_info): refactor dynamodb (#3129) 2023-12-05 10:59:26 +01:00
Pepe Fagoaga
4018221da6 test(audit_info): refactor drs (#3127) 2023-12-05 10:59:09 +01:00
Pepe Fagoaga
28ec3886f9 test(audit_info): refactor documentdb (#3126) 2023-12-05 10:58:48 +01:00
Pepe Fagoaga
ed323f4602 test(audit_info): refactor dlm (#3124) 2023-12-05 10:58:31 +01:00
Pepe Fagoaga
f72d360384 test(audit_info): refactor directoryservice (#3123) 2023-12-05 10:58:09 +01:00
Nacho Rivera
682bba452b test(audit_info): refactor ssm (#3125) 2023-12-05 10:45:15 +01:00
Nacho Rivera
e2ce5ae2af test(audit_info): refactor ssmincidents (#3122) 2023-12-05 10:38:09 +01:00
Nacho Rivera
039a0da69e tests(audit_info): refactor trustedadvisor (#3120) 2023-12-05 10:30:54 +01:00
Pepe Fagoaga
c9ad12b87e test(audit_info): refactor config (#3121) 2023-12-05 10:30:13 +01:00
Pepe Fagoaga
094be2e2e6 test(audit_info): refactor codeartifact (#3117) 2023-12-05 10:17:08 +01:00
Pepe Fagoaga
1b3029d833 test(audit_info): refactor codebuild (#3118) 2023-12-05 10:17:02 +01:00
Nacho Rivera
d00d5e863b tests(audit_info): refactor vpc (#3119) 2023-12-05 10:16:51 +01:00
Pepe Fagoaga
3d19e89710 test(audit_info): refactor cloudwatch (#3116) 2023-12-05 10:04:45 +01:00
Pepe Fagoaga
247cd6fc44 test(audit_info): refactor cloudfront (#3110) 2023-12-05 10:04:07 +01:00
Pepe Fagoaga
ba244c887f test(audit_info): refactor cloudformation (#3105) 2023-12-05 10:03:50 +01:00
Pepe Fagoaga
f77d92492a test(audit_info): refactor backup (#3104) 2023-12-05 10:03:32 +01:00
Pepe Fagoaga
1b85af95c0 test(audit_info): refactor athena (#3101) 2023-12-05 10:03:11 +01:00
Pepe Fagoaga
9236f5d058 test(audit_info): refactor autoscaling (#3102) 2023-12-05 10:02:54 +01:00
dependabot[bot]
39ba8cd230 build(deps-dev): bump freezegun from 1.2.2 to 1.3.1 (#3109)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-05 09:51:57 +01:00
Nacho Rivera
e67328945f test(audit_info): refactor waf (#3115) 2023-12-05 09:51:37 +01:00
Nacho Rivera
bcee2b0b6d test(audit_info): refactor wafv2 (#3114) 2023-12-05 09:51:20 +01:00
Nacho Rivera
be9a1b2f9a test(audit_info): refactor wellarchitected (#3113) 2023-12-05 09:40:31 +01:00
dependabot[bot]
4f9c2aadc2 build(deps-dev): bump moto from 4.2.10 to 4.2.11 (#3108)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-05 09:34:13 +01:00
Pepe Fagoaga
25d419ac7f test(audit_info): refactor appstream (#3100) 2023-12-05 09:33:53 +01:00
Pepe Fagoaga
57cfb508f1 test(audit_info): refactor apigateway (#3098) 2023-12-05 09:33:20 +01:00
Pepe Fagoaga
c88445f90d test(audit_info): refactor apigatewayv2 (#3099) 2023-12-05 09:32:31 +01:00
Nacho Rivera
9b6d6c3a42 test(audit_info): refactor workspaces (#3112) 2023-12-05 09:32:13 +01:00
Pepe Fagoaga
d26c1405ce test(audit_info): refactor awslambda (#3103) 2023-12-05 09:18:23 +01:00
dependabot[bot]
4bb35ab92d build(deps): bump slack-sdk from 3.26.0 to 3.26.1 (#3107)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-05 08:39:26 +01:00
dependabot[bot]
cdd983aa04 build(deps): bump google-api-python-client from 2.108.0 to 2.109.0 (#3106)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-05 08:12:57 +01:00
Nacho Rivera
e83ce86eb3 fix(docs): typo in reporting/csv (#3094) 2023-12-04 10:20:57 +01:00
Nacho Rivera
bcc590a3ee chore(actions): not launch linters for mkdocs.yml (#3093) 2023-12-04 09:57:18 +01:00
Fennerr
5fdffb93d1 docs(parallel-execution): How to execute it in parallel (#3091)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-04 09:48:46 +01:00
Nacho Rivera
db20b2c04f fix(docs): csv fields (#3092)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-04 09:46:20 +01:00
Nacho Rivera
4e037c0f43 fix(send_to_s3_bucket): don't kill exec when fail (#3088) 2023-12-01 13:25:59 +01:00
Nacho Rivera
fdcc2ac5cb revert(clean local dirs): delete clean local dirs output feature (#3087) 2023-12-01 12:26:59 +01:00
William
9099bd79f8 fix(vpc_different_regions): Handle if there are no VPC (#3081)
Co-authored-by: William Brady <will@crofton.cloud>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-01 11:44:23 +01:00
Pepe Fagoaga
a01683d8f6 refactor(severities): Define it in one place (#3086) 2023-12-01 11:39:35 +01:00
Pepe Fagoaga
6d2b2a9a93 refactor(load_checks_to_execute): Refactor function and add tests (#3066) 2023-11-30 17:41:14 +01:00
Sergio Garcia
de4166bf0d chore(regions_update): Changes in regions for AWS services. (#3079) 2023-11-29 11:21:06 +01:00
dependabot[bot]
1cbef30788 build(deps): bump cryptography from 41.0.4 to 41.0.6 (#3078)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-29 08:17:34 +01:00
Nacho Rivera
89c6e27489 fix(trustedadvisor): handle missing checks dict key (#3075) 2023-11-28 10:37:24 +01:00
Sergio Garcia
f74ffc530d chore(regions_update): Changes in regions for AWS services. (#3074)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-28 10:22:29 +01:00
dependabot[bot]
441d4d6a38 build(deps-dev): bump moto from 4.2.9 to 4.2.10 (#3073)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-28 09:57:56 +01:00
dependabot[bot]
3c6b9d63a6 build(deps): bump slack-sdk from 3.24.0 to 3.26.0 (#3072) 2023-11-28 09:21:46 +01:00
dependabot[bot]
254d8616b7 build(deps-dev): bump pytest-xdist from 3.4.0 to 3.5.0 (#3071) 2023-11-28 09:06:23 +01:00
dependabot[bot]
d3bc6fda74 build(deps): bump mkdocs-material from 9.4.10 to 9.4.14 (#3070)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-28 08:46:49 +01:00
Nacho Rivera
e4a5d9376f fix(clean local output dirs): change function description (#3068)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-11-27 14:55:34 +01:00
Nacho Rivera
856afb3966 chore(update): rebase from master (#3067)
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: r3drun3 <simone.ragonesi@sighup.io>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: John Mastron <14130495+mtronrd@users.noreply.github.com>
Co-authored-by: John Mastron <jmastron@jpl.nasa.gov>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: github-actions <noreply@github.com>
Co-authored-by: simone ragonesi <102741679+R3DRUN3@users.noreply.github.com>
Co-authored-by: Johnny Lu <johnny2lu@gmail.com>
Co-authored-by: Vajrala Venkateswarlu <59252985+venkyvajrala@users.noreply.github.com>
Co-authored-by: Ignacio Dominguez <ignacio.dominguez@zego.com>
2023-11-27 13:58:45 +01:00
Nacho Rivera
523605e3e7 fix(set_azure_audit_info): assign correct logging when no auth (#3063) 2023-11-27 11:00:22 +01:00
Nacho Rivera
ed33fac337 fix(gcp provider): move generate_client for consistency (#3064) 2023-11-27 10:31:40 +01:00
Sergio Garcia
bf0e62aca5 chore(regions_update): Changes in regions for AWS services. (#3065)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-27 10:30:12 +01:00
Nacho Rivera
60c0b79b10 fix(outputs): initialize_file_descriptor is called dynamically (#3050) 2023-11-21 16:05:26 +01:00
Sergio Garcia
f9d2e7aa93 chore(regions_update): Changes in regions for AWS services. (#3059)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-21 11:07:08 +01:00
dependabot[bot]
0646748e24 build(deps): bump google-api-python-client from 2.107.0 to 2.108.0 (#3056)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-21 09:31:25 +01:00
dependabot[bot]
f6408e9df7 build(deps-dev): bump moto from 4.2.8 to 4.2.9 (#3055)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-21 08:14:00 +01:00
dependabot[bot]
5769bc815c build(deps): bump mkdocs-material from 9.4.8 to 9.4.10 (#3054)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-21 07:51:27 +01:00
dependabot[bot]
5a3e3e9b1f build(deps): bump slack-sdk from 3.23.0 to 3.24.0 (#3053) 2023-11-21 07:31:15 +01:00
Pepe Fagoaga
26cbafa204 fix(deps): Add missing jsonschema (#3052) 2023-11-20 18:41:39 +01:00
Sergio Garcia
d14541d1de fix(json-ocsf): add profile only for AWS provider (#3051) 2023-11-20 17:00:36 +01:00
Sergio Garcia
3955ebd56c chore(python): update python version constraint <3.12 (#3047) 2023-11-20 14:49:09 +01:00
Ignacio Dominguez
e212645cf0 fix(codeartifact): solve dependency confusion check (#2999)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-11-20 14:48:46 +01:00
Sergio Garcia
db9c1c24d3 chore(moto): install all moto dependencies (#3048) 2023-11-20 13:44:53 +01:00
Vajrala Venkateswarlu
0a305c281f feat(custom_checks_metadata): Add checks metadata overide for severity (#3038)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-11-20 10:44:47 +01:00
Sergio Garcia
43c96a7875 chore(regions_update): Changes in regions for AWS services. (#3045)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-20 10:15:32 +01:00
Sergio Garcia
3a93aba7d7 chore(release): update Prowler Version to 3.11.3 (#3044)
Co-authored-by: github-actions <noreply@github.com>
2023-11-16 17:07:14 +01:00
Sergio Garcia
3d563356e5 fix(json): check if profile is None (#3043) 2023-11-16 13:52:07 +01:00
Johnny Lu
9205ef30f8 fix(securityhub): findings not being imported or archived in non-aws partitions (#3040)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-11-16 11:27:28 +01:00
Sergio Garcia
19c2dccc6d chore(regions_update): Changes in regions for AWS services. (#3042)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-16 11:09:41 +01:00
Sergio Garcia
8f819048ed chore(release): update Prowler Version to 3.11.2 (#3037)
Co-authored-by: github-actions <noreply@github.com>
2023-11-15 09:07:57 +01:00
Sergio Garcia
3a3bb44f11 fix(GuardDuty): only execute checks if GuardDuty enabled (#3028) 2023-11-14 14:14:05 +01:00
Nacho Rivera
f8e713a544 feat(azure regions): support non default azure region (#3013)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-11-14 13:17:48 +01:00
Pepe Fagoaga
573f1eba56 fix(securityhub): Use enabled_regions instead of audited_regions (#3029) 2023-11-14 12:57:54 +01:00
simone ragonesi
a36be258d8 chore: modify latest version msg (#3036)
Signed-off-by: r3drun3 <simone.ragonesi@sighup.io>
2023-11-14 12:11:55 +01:00
Sergio Garcia
690ec057c3 fix(ec2_securitygroup_not_used): check if security group is associated (#3026) 2023-11-14 12:03:01 +01:00
dependabot[bot]
2681feb1f6 build(deps): bump azure-storage-blob from 12.18.3 to 12.19.0 (#3034)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 11:47:42 +01:00
Sergio Garcia
e662adb8c5 chore(regions_update): Changes in regions for AWS services. (#3035)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-14 11:47:24 +01:00
Sergio Garcia
c94bd96c93 chore(args): make compatible severity and services arguments (#3024) 2023-11-14 11:26:53 +01:00
dependabot[bot]
6d85433194 build(deps): bump alive-progress from 3.1.4 to 3.1.5 (#3033)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 09:41:32 +01:00
dependabot[bot]
7a6092a779 build(deps): bump google-api-python-client from 2.106.0 to 2.107.0 (#3032)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 09:16:00 +01:00
dependabot[bot]
4c84529aed build(deps-dev): bump pytest-xdist from 3.3.1 to 3.4.0 (#3031)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 08:48:02 +01:00
Sergio Garcia
512d3e018f chore(accessanalyzer): include service in allowlist_non_default_regions (#3025) 2023-11-14 08:00:17 +01:00
dependabot[bot]
c6aff985c9 build(deps-dev): bump moto from 4.2.7 to 4.2.8 (#3030)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 07:54:34 +01:00
Sergio Garcia
7fadf31a2b chore(release): update Prowler Version to 3.11.1 (#3021)
Co-authored-by: github-actions <noreply@github.com>
2023-11-10 12:53:07 +01:00
718 changed files with 17865 additions and 17084 deletions

View File

@@ -13,10 +13,10 @@ name: "CodeQL"
on:
  push:
-    branches: [ "master", prowler-2, prowler-3.0-dev ]
+    branches: [ "master", "prowler-4.0-dev" ]
  pull_request:
    # The branches below must be a subset of the branches above
-    branches: [ "master" ]
+    branches: [ "master", "prowler-4.0-dev" ]
  schedule:
    - cron: '00 12 * * *'

View File

@@ -4,9 +4,11 @@ on:
  push:
    branches:
      - "master"
+     - "prowler-4.0-dev"
  pull_request:
    branches:
      - "master"
+     - "prowler-4.0-dev"
jobs:
  build:
    runs-on: ubuntu-latest
@@ -26,6 +28,7 @@ jobs:
            README.md
            docs/**
            permissions/**
+           mkdocs.yml
      - name: Install poetry
        if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
        run: |

View File

@@ -102,7 +102,7 @@ All the checks MUST fill the `report.status` and `report.status_extended` with t
- Status -- `report.status`
- `PASS` --> If the check is passing against the configured value.
- `FAIL` --> If the check is failing against the configured value.
-- `INFO` --> This value cannot be used unless a manual operation is required in order to determine if the `report.status` is whether `PASS` or `FAIL`.
+- `MANUAL` --> This value can only be used when a manual operation is required in order to determine whether the `report.status` is `PASS` or `FAIL`.
- Status Extended -- `report.status_extended`
- MUST end in a dot `.`
- MUST include the service audited with the resource and a brief explanation of the result generated, e.g.: `EC2 AMI ami-0123456789 is not public.`
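As a rough sketch of these rules (not Prowler's actual check code; `ami_id` and `is_public` are illustrative inputs):
```python
def build_status(ami_id: str, is_public: bool) -> tuple[str, str]:
    # PASS when the AMI is private, FAIL otherwise
    status = "PASS" if not is_public else "FAIL"
    # The extended status names the service and resource and ends in a dot
    status_extended = f"EC2 AMI {ami_id} is {'public' if is_public else 'not public'}."
    return status, status_extended
```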

View File

@@ -136,26 +136,16 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
=== "AWS CloudShell"
-Prowler can be easely executed in AWS CloudShell but it has some prerequsites to be able to to so. AWS CloudShell is a container running with `Amazon Linux release 2 (Karoo)` that comes with Python 3.7, since Prowler requires Python >= 3.9 we need to first install a newer version of Python. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
+After the migration of AWS CloudShell from Amazon Linux 2 to Amazon Linux 2023 [[1]](https://aws.amazon.com/about-aws/whats-new/2023/12/aws-cloudshell-migrated-al2023/) [[2]](https://docs.aws.amazon.com/cloudshell/latest/userguide/cloudshell-AL2023-migration.html), there is no longer a need to manually compile Python 3.9 as it's already included in AL2023. Prowler can thus be easily installed following the Generic method of installation via pip. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
_Requirements_:
-* First install all dependences and then Python, in this case we need to compile it because there is not a package available at the time this document is written:
-```
-sudo yum -y install gcc openssl-devel bzip2-devel libffi-devel
-wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tgz
-tar zxf Python-3.9.16.tgz
-cd Python-3.9.16/
-./configure --enable-optimizations
-sudo make altinstall
-python3.9 --version
-cd
-```
+* Open AWS CloudShell `bash`.
_Commands_:
-* Once Python 3.9 is available we can install Prowler from pip:
```
-pip3.9 install prowler
+pip install prowler
prowler -v
```

View File

@@ -37,7 +37,3 @@ If your IAM entity enforces MFA you can use `--mfa` and Prowler will ask you to
- ARN of your MFA device
- TOTP (Time-Based One-Time Password)
-## STS Endpoint Region
-If you are using Prowler in AWS regions that are not enabled by default you need to use the argument `--sts-endpoint-region` to point the AWS STS API calls `assume-role` and `get-caller-identity` to the non-default region, e.g.: `prowler aws --sts-endpoint-region eu-south-2`.

View File

@@ -32,3 +32,14 @@ Prowler's AWS Provider uses the Boto3 [Standard](https://boto3.amazonaws.com/v1/
- Retry attempts on nondescriptive, transient error codes. Specifically, these HTTP status codes: 500, 502, 503, 504.
- Any retry attempt will include an exponential backoff by a base factor of 2 for a maximum backoff time of 20 seconds.
## Notes for validating retry attempts
If you are making changes to Prowler and want to validate whether requests are being retried or given up on, you can take the following approach:
* Run prowler with `--log-level DEBUG` and `--log-file debuglogs.txt`
* Search for retry attempts using `grep -i 'Retry needed' debuglogs.txt`
This is based on the [AWS documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html#checking-retry-attempts-in-your-client-logs), which states that if a retry is performed, you will see a message starting with "Retry needed".
You can determine the total number of calls made using `grep -i 'Sending http request' debuglogs.txt | wc -l`
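Putting the above together, a quick way to compare retried requests against total requests sent (using the same log file name as above):
```shell
prowler aws --log-level DEBUG --log-file debuglogs.txt
# Count retried requests, then total requests sent
grep -ci 'Retry needed' debuglogs.txt
grep -ci 'Sending http request' debuglogs.txt
```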

View File

@@ -1,26 +1,26 @@
# AWS CloudShell
-Prowler can be easily executed in AWS CloudShell but it has some prerequisites to be able to to so. AWS CloudShell is a container running with `Amazon Linux release 2 (Karoo)` that comes with Python 3.7, since Prowler requires Python >= 3.9 we need to first install a newer version of Python. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
-- First install all dependences and then Python, in this case we need to compile it because there is not a package available at the time this document is written:
-```
-sudo yum -y install gcc openssl-devel bzip2-devel libffi-devel
-wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tgz
-tar zxf Python-3.9.16.tgz
-cd Python-3.9.16/
-./configure --enable-optimizations
-sudo make altinstall
-python3.9 --version
-cd
-```
-- Once Python 3.9 is available we can install Prowler from pip:
-```
-pip3.9 install prowler
-```
-- Now enjoy Prowler:
-```
+## Installation
+After the migration of AWS CloudShell from Amazon Linux 2 to Amazon Linux 2023 [[1]](https://aws.amazon.com/about-aws/whats-new/2023/12/aws-cloudshell-migrated-al2023/) [[2]](https://docs.aws.amazon.com/cloudshell/latest/userguide/cloudshell-AL2023-migration.html), there is no longer a need to manually compile Python 3.9 as it's already included in AL2023. Prowler can thus be easily installed following the Generic method of installation via pip. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
+```shell
+pip install prowler
prowler -v
prowler
```
-- To download the results from AWS CloudShell, select Actions -> Download File and add the full path of each file. For the CSV file it will be something like `/home/cloudshell-user/output/prowler-output-123456789012-20221220191331.csv`
+## Download Files
+To download the results from AWS CloudShell, select Actions -> Download File and add the full path of each file. For the CSV file it will be something like `/home/cloudshell-user/output/prowler-output-123456789012-20221220191331.csv`
## Clone Prowler from Github
The limited storage that AWS CloudShell provides for the user's home directory causes issues when installing the poetry dependencies to run Prowler from GitHub. Here is a workaround:
```shell
git clone https://github.com/prowler-cloud/prowler.git
cd prowler
pip install poetry
mkdir /tmp/pypoetry
poetry config cache-dir /tmp/pypoetry
poetry shell
poetry install
python prowler.py -v
```

View File

@@ -23,14 +23,6 @@ prowler aws -R arn:aws:iam::<account_id>:role/<role_name>
prowler aws -T/--session-duration <seconds> -I/--external-id <external_id> -R arn:aws:iam::<account_id>:role/<role_name>
```
-## STS Endpoint Region
-If you are using Prowler in AWS regions that are not enabled by default you need to use the argument `--sts-endpoint-region` to point the AWS STS API calls `assume-role` and `get-caller-identity` to the non-default region, e.g.: `prowler aws --sts-endpoint-region eu-south-2`.
-> Since v3.11.0, Prowler uses a regional token in STS sessions so it can scan all AWS regions without needing the `--sts-endpoint-region` argument.
-> Make sure that you have enabled the AWS Region you want to scan in BOTH AWS Accounts (assumed role account and account from which you assume the role).
## Role MFA
If your IAM Role has MFA configured you can use `--mfa` along with `-R`/`--role <role_arn>` and Prowler will ask you to input the following values to get a new temporary session for the IAM Role provided:

View File

@@ -0,0 +1,16 @@
# Use non default Azure regions
Microsoft provides clouds for compliance with regional laws, which are available for your use.
By default, Prowler uses the `AzureCloud` cloud, which is the commercial one (you can list all the available clouds with `az cloud list --output table`).
At the time of writing, the available Azure clouds for the different regions are the following:
- AzureCloud
- AzureChinaCloud
- AzureUSGovernment
- AzureGermanCloud
If you want to change the default one you must include the flag `--azure-region`, e.g.:
```console
prowler azure --az-cli-auth --azure-region AzureChinaCloud
```
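For reference, the listing command mentioned above shows which cloud names are valid values for the flag:
```console
az cloud list --output table
```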

View File

@@ -1,5 +1,18 @@
# Compliance
-Prowler allows you to execute checks based on requirements defined in compliance frameworks.
+Prowler allows you to execute checks based on requirements defined in compliance frameworks. By default, it will execute and give you an overview of the status of each compliance framework:
<img src="../img/compliance.png"/>
> You can find CSVs containing detailed compliance results inside the compliance folder within Prowler's output folder.
## Execute Prowler based on Compliance Frameworks
Prowler can analyze your environment based on a specific compliance framework to get more details; to do so, you can use the `--compliance` option:
```sh
prowler <provider> --compliance <compliance_framework>
```
Standard results will be shown, and additionally the framework information, as in the sample below for CIS AWS 1.5. For details, a CSV file is generated as well.
<img src="../img/compliance-cis-sample1.png"/>
## List Available Compliance Frameworks
In order to see which compliance frameworks are covered by Prowler, you can use option `--list-compliance`:
@@ -10,9 +23,12 @@ Currently, the available frameworks are:
- `cis_1.4_aws`
- `cis_1.5_aws`
- `cis_2.0_aws`
- `cisa_aws`
- `ens_rd2022_aws`
- `aws_audit_manager_control_tower_guardrails_aws`
- `aws_foundational_security_best_practices_aws`
- `aws_well_architected_framework_reliability_pillar_aws`
- `aws_well_architected_framework_security_pillar_aws`
- `cisa_aws`
- `fedramp_low_revision_4_aws`
@@ -22,6 +38,9 @@ Currently, the available frameworks are:
- `gxp_eu_annex_11_aws`
- `gxp_21_cfr_part_11_aws`
- `hipaa_aws`
- `iso27001_2013_aws`
- `mitre_attack_aws`
- `nist_800_53_revision_4_aws`
- `nist_800_53_revision_5_aws`
- `nist_800_171_revision_2_aws`
@@ -38,7 +57,6 @@ prowler <provider> --list-compliance-requirements <compliance_framework(s)>
```
Example for the first requirements of CIS 1.5 for AWS:
```
Listing CIS 1.5 AWS Compliance Requirements:
@@ -71,15 +89,6 @@ Requirement Id: 1.5
```
-## Execute Prowler based on Compliance Frameworks
-As we mentioned, Prowler can be execute to analyse you environment based on a specific compliance framework, to do it, you can use option `--compliance`:
-```sh
-prowler <provider> --compliance <compliance_framework>
-```
-Standard results will be shown and additionally the framework information as the sample below for CIS AWS 1.5. For details a CSV file has been generated as well.
-<img src="../img/compliance-cis-sample1.png"/>
## Create and contribute adding other Security Frameworks
This information is part of the Developer Guide and can be found here: https://docs.prowler.cloud/en/latest/tutorials/developer-guide/.

View File

@@ -29,10 +29,10 @@ The following list includes all the AWS checks with configurable variables that
| `organizations_delegated_administrators` | `organizations_trusted_delegated_administrators` | List of Strings |
| `ecr_repositories_scan_vulnerabilities_in_latest_image` | `ecr_repository_vulnerability_minimum_severity` | String |
| `trustedadvisor_premium_support_plan_subscribed` | `verify_premium_support_plans` | Boolean |
-| `config_recorder_all_regions_enabled` | `allowlist_non_default_regions` | Boolean |
-| `drs_job_exist` | `allowlist_non_default_regions` | Boolean |
-| `guardduty_is_enabled` | `allowlist_non_default_regions` | Boolean |
-| `securityhub_enabled` | `allowlist_non_default_regions` | Boolean |
+| `config_recorder_all_regions_enabled` | `mute_non_default_regions` | Boolean |
+| `drs_job_exist` | `mute_non_default_regions` | Boolean |
+| `guardduty_is_enabled` | `mute_non_default_regions` | Boolean |
+| `securityhub_enabled` | `mute_non_default_regions` | Boolean |
## Azure
@@ -50,8 +50,8 @@ The following list includes all the AWS checks with configurable variables that
aws:
# AWS Global Configuration
-# aws.allowlist_non_default_regions --> Allowlist Failed Findings in non-default regions for GuardDuty, SecurityHub, DRS and Config
-allowlist_non_default_regions: False
+# aws.mute_non_default_regions --> Mute Failed Findings in non-default regions for GuardDuty, SecurityHub, DRS and Config
+mute_non_default_regions: False
# AWS IAM Configuration
# aws.iam_user_accesskey_unused --> CIS recommends 45 days
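A customized configuration file like the one above can then be supplied at run time; assuming Prowler's standard `--config-file` flag, for example:
```console
prowler aws --config-file ./custom_config.yaml
```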

View File

@@ -0,0 +1,43 @@
# Custom Checks Metadata
In certain organizations, the severity of specific checks might differ from the default values defined in the check's metadata. For instance, while `s3_bucket_level_public_access_block` could be deemed `critical` for some organizations, others might assign a different severity level.
The custom metadata option offers a means to override the default metadata set by Prowler.
You can utilize `--custom-checks-metadata-file` followed by the path to your custom checks metadata YAML file.
## Available Fields
The check metadata fields that can be overridden are listed as follows:
- Severity
## File Syntax
This feature is available for all the providers supported in Prowler since the metadata format is common between all the providers. The following is the YAML format for the custom checks metadata file:
```yaml title="custom_checks_metadata.yaml"
CustomChecksMetadata:
  aws:
    Checks:
      s3_bucket_level_public_access_block:
        Severity: high
      s3_bucket_no_mfa_delete:
        Severity: high
  azure:
    Checks:
      storage_infrastructure_encryption_is_enabled:
        Severity: medium
  gcp:
    Checks:
      compute_instance_public_ip:
        Severity: critical
```
## Usage
Executing the following command will assess all checks and generate a report while overriding the metadata for those checks:
```sh
prowler <provider> --custom-checks-metadata-file <path/to/custom/metadata>
```
This customization feature enables organizations to tailor the severity of specific checks based on their unique requirements, providing greater flexibility in security assessment and reporting.
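As a hypothetical sketch of what such an override amounts to (this is not Prowler's actual implementation; the `CheckID` key and function name are illustrative):
```python
import yaml

def apply_custom_severity(check_metadata: dict, custom_file: str, provider: str) -> dict:
    # Load the CustomChecksMetadata document shown above
    with open(custom_file) as f:
        custom = yaml.safe_load(f)["CustomChecksMetadata"]
    # Look up an override for this provider and check, if any
    override = custom.get(provider, {}).get("Checks", {}).get(check_metadata["CheckID"], {})
    if "Severity" in override:
        check_metadata["Severity"] = override["Severity"]
    return check_metadata
```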

Binary file not shown. (New image added: 93 KiB.)

View File

Binary image changed (not shown). Before: 10 KiB. After: 10 KiB.

View File

Binary image changed (not shown). Before: 94 KiB. After: 94 KiB.

View File

@@ -8,7 +8,7 @@ There are different log levels depending on the logging information that is desi
- **DEBUG**: It will show low-level logs from Python.
- **INFO**: It will show all the API calls that are being invoked by the provider.
-- **WARNING**: It will show all resources that are being **allowlisted**.
+- **WARNING**: It will show all resources that are being **muted**.
- **ERROR**: It will show any errors, e.g., not authorized actions.
- **CRITICAL**: The default log level. If a critical log appears, it will **exit** Prowler's execution.
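For example, to capture muted-resource warnings and anything more severe in a file, combining flags used elsewhere in these docs:
```console
prowler aws --log-level WARNING --log-file prowler.log
```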

View File

@@ -9,10 +9,10 @@ Execute Prowler in verbose mode (like in Version 2):
```console
prowler <provider> --verbose
```
-## Show only Fails
-Prowler can only display the failed findings:
+## Filter findings by status
+Prowler can filter the findings by their status:
```console
-prowler <provider> -q/--quiet
+prowler <provider> --status [PASS, FAIL, MANUAL]
```
## Disable Exit Code 3
Prowler does not trigger exit code 3 with failed checks:

View File

@@ -1,19 +1,19 @@
-# Allowlisting
+# Mute Listing
Sometimes you may find resources that are intentionally configured in a certain way that may be a bad practice but is acceptable in your case, for example an AWS S3 Bucket open to the internet hosting a web site, or an AWS Security Group with an open port needed in your use case.
-Allowlist option works along with other options and adds a `WARNING` instead of `INFO`, `PASS` or `FAIL` to any output format.
+Mute List option works along with other options and adds a `MUTED` instead of `MANUAL`, `PASS` or `FAIL` to any output format.
-You can use `-w`/`--allowlist-file` with the path of your allowlist yaml file, but first, let's review the syntax.
+You can use `-w`/`--mutelist-file` with the path of your mutelist yaml file, but first, let's review the syntax.
-## Allowlist Yaml File Syntax
+## Mute List Yaml File Syntax
### Account, Check and/or Region can be * to apply for all the cases.
### Resources and tags are lists that can have either Regex or Keywords.
### Tags is an optional list that matches on tuples of 'key=value' and are "ANDed" together.
### Use an alternation Regex to match one of multiple tags with "ORed" logic.
### For each check you can except Accounts, Regions, Resources and/or Tags.
-########################### ALLOWLIST EXAMPLE ###########################
-Allowlist:
+########################### MUTE LIST EXAMPLE ###########################
+Mute List:
  Accounts:
    "123456789012":
      Checks:
@@ -79,10 +79,10 @@ You can use `-w`/`--allowlist-file` with the path of your allowlist yaml file, b
Tags:
- "environment=prod" # Will ignore every resource except in account 123456789012 except the ones containing the string "test" and tag environment=prod
-## Allowlist specific regions
-If you want to allowlist/mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w allowlist.yaml`:
+## Mute specific regions
+If you want to mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w mutelist.yaml`:
-Allowlist:
+Mute List:
  Accounts:
    "*":
      Checks:
@@ -93,50 +93,50 @@ If you want to allowlist/mute failed findings only in specific regions, create a
          Resources:
            - "*"
-## Default AWS Allowlist
-Prowler provides you a Default AWS Allowlist with the AWS Resources that should be allowlisted such as all resources created by AWS Control Tower when setting up a landing zone.
-You can execute Prowler with this allowlist using the following command:
+## Default AWS Mute List
+Prowler provides you a Default AWS Mute List with the AWS Resources that should be muted such as all resources created by AWS Control Tower when setting up a landing zone.
+You can execute Prowler with this mutelist using the following command:
```sh
-prowler aws --allowlist prowler/config/aws_allowlist.yaml
+prowler aws --mutelist prowler/config/aws_mutelist.yaml
```
-## Supported Allowlist Locations
+## Supported Mute List Locations
-The allowlisting flag supports the following locations:
+The mutelisting flag supports the following locations:
### Local file
-You will need to pass the local path where your Allowlist YAML file is located:
+You will need to pass the local path where your Mute List YAML file is located:
```
-prowler <provider> -w allowlist.yaml
+prowler <provider> -w mutelist.yaml
```
### AWS S3 URI
-You will need to pass the S3 URI where your Allowlist YAML file was uploaded to your bucket:
+You will need to pass the S3 URI where your Mute List YAML file was uploaded to your bucket:
```
-prowler aws -w s3://<bucket>/<prefix>/allowlist.yaml
+prowler aws -w s3://<bucket>/<prefix>/mutelist.yaml
```
-> Make sure that the used AWS credentials have s3:GetObject permissions in the S3 path where the allowlist file is located.
+> Make sure that the used AWS credentials have s3:GetObject permissions in the S3 path where the mutelist file is located.
### AWS DynamoDB Table ARN
-You will need to pass the DynamoDB Allowlist Table ARN:
+You will need to pass the DynamoDB Mute List Table ARN:
```
prowler aws -w arn:aws:dynamodb:<region_name>:<account_id>:table/<table_name>
```
1. The DynamoDB Table must have the following String keys:
<img src="../img/allowlist-keys.png"/>
<img src="../img/mutelist-keys.png"/>
- The Allowlist Table must have the following columns:
- Accounts (String): This field can contain either an Account ID or an `*` (which applies to all the accounts that use this table as an allowlist).
- The Mute List Table must have the following columns:
- Accounts (String): This field can contain either an Account ID or an `*` (which applies to all the accounts that use this table as an mutelist).
- Checks (String): This field can contain either a Prowler Check Name or an `*` (which applies to all the scanned checks).
- Regions (List): This field contains a list of regions where this allowlist rule is applied (it can also contains an `*` to apply all scanned regions).
- Resources (List): This field contains a list of regex expressions that applies to the resources that are wanted to be allowlisted.
- Tags (List): -Optional- This field contains a list of tuples in the form of 'key=value' that applies to the resources tags that are wanted to be allowlisted.
- Exceptions (Map): -Optional- This field contains a map of lists of accounts/regions/resources/tags that are wanted to be excepted in the allowlist.
- Regions (List): This field contains a list of regions where this mutelist rule is applied (it can also contains an `*` to apply all scanned regions).
- Resources (List): This field contains a list of regex expressions that applies to the resources that are wanted to be muted.
- Tags (List): -Optional- This field contains a list of tuples in the form of 'key=value' that applies to the resources tags that are wanted to be muted.
- Exceptions (Map): -Optional- This field contains a map of lists of accounts/regions/resources/tags that are wanted to be excepted in the mutelist.
The following example will allowlist all resources in all accounts for the EC2 checks in the regions `eu-west-1` and `us-east-1` with the tags `environment=dev` and `environment=prod`, except the resources containing the string `test` in the account `012345678912` and region `eu-west-1` with the tag `environment=prod`:
The following example will mute all resources in all accounts for the EC2 checks in the regions `eu-west-1` and `us-east-1` with the tags `environment=dev` and `environment=prod`, except the resources containing the string `test` in the account `012345678912` and region `eu-west-1` with the tag `environment=prod`:
<img src="../img/allowlist-row.png"/>
<img src="../img/mutelist-row.png"/>
> Make sure that the used AWS credentials have `dynamodb:PartiQLSelect` permissions in the table.
@@ -151,7 +151,7 @@ prowler aws -w arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME
Make sure that the credentials that Prowler uses can invoke the Lambda Function:
```
-- PolicyName: GetAllowList
+- PolicyName: GetMuteList
  PolicyDocument:
    Version: '2012-10-17'
    Statement:
@@ -160,14 +160,14 @@ Make sure that the credentials that Prowler uses can invoke the Lambda Function:
Resource: arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME
```
-The Lambda Function can then generate an Allowlist dynamically. Here is the code an example Python Lambda Function that
-generates an Allowlist:
+The Lambda Function can then generate a Mute List dynamically. Here is the code of an example Python Lambda Function that
+generates a Mute List:
```
def handler(event, context):
    checks = {}
    checks["vpc_flow_logs_enabled"] = { "Regions": [ "*" ], "Resources": [ "" ], Optional("Tags"): [ "key:value" ] }
-    al = { "Allowlist": { "Accounts": { "*": { "Checks": checks } } } }
+    al = { "Mute List": { "Accounts": { "*": { "Checks": checks } } } }
    return al
```

View File

@@ -0,0 +1,187 @@
# Parallel Execution
The strategy used here will be to execute Prowler once per service. You can modify this approach as per your requirements.
This can help with really large accounts, but please be aware of AWS API rate limits:
1. **Service-Specific Limits**: Each AWS service has its own rate limits. For instance, Amazon EC2 might have different rate limits for launching instances versus making API calls to describe instances.
2. **API Rate Limits**: Most of the rate limits in AWS are applied at the API level. Each API call to an AWS service counts towards the rate limit for that service.
3. **Throttling Responses**: When you exceed the rate limit for a service, AWS responds with a throttling error. In AWS SDKs, these are typically represented as `ThrottlingException` or `RateLimitExceeded` errors.
For information on Prowler's retrier configuration please refer to this [page](https://docs.prowler.cloud/en/latest/tutorials/aws/boto3-configuration/).
> Note: You might need to increase the `--aws-retries-max-attempts` parameter from the default value of 3. The retrier follows an exponential backoff strategy.
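For example, a single-service run with a higher retry ceiling might look like this (values are illustrative):
```console
prowler aws --aws-retries-max-attempts 10 -s s3
```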
## Linux
Generate a list of services that Prowler supports, and populate this info into a file:
```bash
prowler aws --list-services | awk -F"- " '{print $2}' | sed '/^$/d' > services
```
If you would like to skip scanning certain services, remove them from this file.
Then create a new Bash script file `parallel-prowler.sh` and add the following contents. Update the `profile` variable to the AWS CLI profile you want to run Prowler with.
```bash
#!/bin/bash

# Change these variables as needed
profile="your_profile"
account_id=$(aws sts get-caller-identity --profile "${profile}" --query 'Account' --output text)

echo "Executing in account: ${account_id}"

# Maximum number of concurrent processes
MAX_PROCESSES=5

# Loop through the services
while read service; do
    echo "$(date '+%Y-%m-%d %H:%M:%S'): Starting job for service: ${service}"

    # Run the command in the background
    (prowler -p "$profile" -s "$service" -F "${account_id}-${service}" --ignore-unused-services --only-logs; echo "$(date '+%Y-%m-%d %H:%M:%S') - ${service} has completed") &

    # Check if we have reached the maximum number of processes
    while [ $(jobs -r | wc -l) -ge ${MAX_PROCESSES} ]; do
        # Wait for a second before checking again
        sleep 1
    done
done < ./services

# Wait for all background processes to finish
wait
echo "All jobs completed"
```
Output will be stored in the `output/` folder that is in the same directory from which you executed the script.
## Windows
Generate a list of services that Prowler supports, and populate this info into a file:
```powershell
prowler aws --list-services | ForEach-Object {
    # Capture lines that are likely service names
    if ($_ -match '^\- \w+$') {
        $_.Trim().Substring(2)
    }
} | Where-Object {
    # Filter out empty or null lines
    $_ -ne $null -and $_ -ne ''
} | Set-Content -Path "services"
```
If you would like to skip scanning certain services, remove them from this file.
Then create a new PowerShell script file `parallel-prowler.ps1` and add the following contents. Update the `$profile` variable to the AWS CLI profile you want to run Prowler with.
Change any parameters you would like when calling Prowler in the `Start-Job -ScriptBlock` section. Note that you need to keep the `--only-logs` parameter; otherwise an encoding issue occurs when trying to render the progress bar and Prowler won't execute successfully.
```powershell
$profile = "your_profile"
$account_id = Invoke-Expression -Command "aws sts get-caller-identity --profile $profile --query 'Account' --output text"
Write-Host "Executing Prowler in $account_id"
# Maximum number of concurrent jobs
$MAX_PROCESSES = 5
# Read services from a file
$services = Get-Content -Path "services"
# Array to keep track of started jobs
$jobs = @()
foreach ($service in $services) {
# Start the command as a job
$job = Start-Job -ScriptBlock {
prowler -p ${using:profile} -s ${using:service} -F "${using:account_id}-${using:service}" --ignore-unused-services --only-logs
$endTimestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
Write-Output "${endTimestamp} - $using:service has completed"
}
$jobs += $job
Write-Host "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') - Starting job for service: $service"
# Check if we have reached the maximum number of jobs
while (($jobs | Where-Object { $_.State -eq 'Running' }).Count -ge $MAX_PROCESSES) {
Start-Sleep -Seconds 1
# Check for any completed jobs and receive their output
$completedJobs = $jobs | Where-Object { $_.State -eq 'Completed' }
foreach ($completedJob in $completedJobs) {
Receive-Job -Job $completedJob -Keep | ForEach-Object { Write-Host $_ }
$jobs = $jobs | Where-Object { $_.Id -ne $completedJob.Id }
Remove-Job -Job $completedJob
}
}
}
# Check for any remaining completed jobs
$remainingCompletedJobs = $jobs | Where-Object { $_.State -eq 'Completed' }
foreach ($remainingJob in $remainingCompletedJobs) {
Receive-Job -Job $remainingJob -Keep | ForEach-Object { Write-Host $_ }
Remove-Job -Job $remainingJob
}
Write-Host "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') - All jobs completed"
```
Output will be stored in `C:\Users\YOUR-USER\Documents\output\`
## Combining the output files
Guidance is provided for the CSV file format. From the output directory, execute either of the following Bash or PowerShell scripts. Each collects the output from the CSV files, includes the header from the first file only, and writes the result as `CombinedCSV.csv` in the current working directory.
There is no logic for selecting which CSV files are combined. If you have additional CSV files from other actions, such as running a quick inventory, move them out of the current (or any nested) directory, or move the output you want to combine into its own folder and run the script from there.
```bash
#!/bin/bash
# Initialize a variable to indicate the first file
firstFile=true
# Find all CSV files and loop through them
find . -name "*.csv" -print0 | while IFS= read -r -d '' file; do
if [ "$firstFile" = true ]; then
# For the first file, keep the header
cat "$file" > CombinedCSV.csv
firstFile=false
else
# For subsequent files, skip the header
tail -n +2 "$file" >> CombinedCSV.csv
fi
done
```
```powershell
# Get all CSV files from current directory and its subdirectories
$csvFiles = Get-ChildItem -Recurse -Filter "*.csv"
# Initialize a variable to track if it's the first file
$firstFile = $true
# Combined results
$combinedCsv = @()
# Loop through each CSV file (Prowler v3 CSVs use ';' as the delimiter)
foreach ($file in $csvFiles) {
    if ($firstFile) {
        # For the first file, keep the header and change the flag
        $combinedCsv = Import-Csv -Path $file.FullName -Delimiter ';'
        $firstFile = $false
    } else {
        # Import-Csv already consumes the header row, so subsequent
        # records can be appended directly without skipping anything
        $combinedCsv += Import-Csv -Path $file.FullName -Delimiter ';'
    }
}
# Export the combined data to a new CSV file
$combinedCsv | Export-Csv -Path "CombinedCSV.csv" -Delimiter ';' -NoTypeInformation
```
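A cross-platform alternative is a small Python sketch using only the standard library (it assumes the default Prowler v3 semicolon delimiter):
```python
import csv
import glob

with open("CombinedCSV.csv", "w", newline="") as out:
    writer = None
    for path in glob.glob("**/*.csv", recursive=True):
        if path.endswith("CombinedCSV.csv"):
            continue  # avoid folding the combined file into itself on re-runs
        with open(path, newline="") as f:
            reader = csv.reader(f, delimiter=";")
            header = next(reader, None)
            if header is None:
                continue  # empty file
            if writer is None:
                # Keep the header from the first non-empty file only
                writer = csv.writer(out, delimiter=";")
                writer.writerow(header)
            writer.writerows(reader)  # header row already consumed
```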
## TODO: Additional Improvements
Some services need to instantiate another service to perform a check. For instance, `cloudwatch` will instantiate Prowler's `iam` service to perform the `cloudwatch_cross_account_sharing_disabled` check. When the `iam` service is instantiated, its `__init__` function pulls all the information required for that service. This is an opportunity to improve the above scripts: by grouping dependent services together, cross-referenced services such as `iam` are not repeatedly instantiated. A complete mapping between these services still needs further investigation, but the following cross-references have been noted:
* inspector2 needs lambda and ec2
* cloudwatch needs iam
* dlm needs ec2
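As a starting point, a minimal sketch of such grouping, using only the (assumed incomplete) cross-references noted above:
```python
# Cross-service dependencies noted so far; assumed incomplete
DEPENDS_ON = {
    "inspector2": {"lambda", "ec2"},
    "cloudwatch": {"iam"},
    "dlm": {"ec2"},
}

def group_services(services):
    """Batch each service together with its known dependencies."""
    batches, seen = [], set()
    for service in services:
        if service in seen:
            continue
        batch = {service} | DEPENDS_ON.get(service, set())
        seen |= batch
        batches.append(sorted(batch))
    return batches

print(group_services(["cloudwatch", "iam", "dlm", "s3"]))
# [['cloudwatch', 'iam'], ['dlm', 'ec2'], ['s3']]
```
Each batch could then be passed to a single Prowler invocation via `-s`, so the shared dependency is loaded once per batch.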

View File

@@ -43,46 +43,71 @@ Hereunder is the structure for each of the supported report formats by Prowler:
![HTML Output](../img/output-html.png)
### CSV
The CSV format has a set of columns common to all providers, followed by provider-specific columns.
The common columns are the following:
- ASSESSMENT_START_TIME
- FINDING_UNIQUE_ID
- PROVIDER
- CHECK_ID
- CHECK_TITLE
- CHECK_TYPE
- STATUS
- STATUS_EXTENDED
- SERVICE_NAME
- SUBSERVICE_NAME
- SEVERITY
- RESOURCE_TYPE
- RESOURCE_DETAILS
- RESOURCE_TAGS
- DESCRIPTION
- RISK
- RELATED_URL
- REMEDIATION_RECOMMENDATION_TEXT
- REMEDIATION_RECOMMENDATION_URL
- REMEDIATION_RECOMMENDATION_CODE_NATIVEIAC
- REMEDIATION_RECOMMENDATION_CODE_TERRAFORM
- REMEDIATION_RECOMMENDATION_CODE_CLI
- REMEDIATION_RECOMMENDATION_CODE_OTHER
- COMPLIANCE
- CATEGORIES
- DEPENDS_ON
- RELATED_TO
- NOTES
And then the provider-specific columns:
#### AWS
- PROFILE
- ACCOUNT_ID
- ACCOUNT_NAME
- ACCOUNT_EMAIL
- ACCOUNT_ARN
- ACCOUNT_ORG
- ACCOUNT_TAGS
- REGION
- RESOURCE_ID
- RESOURCE_ARN
#### AZURE
- TENANT_DOMAIN
- SUBSCRIPTION
- RESOURCE_ID
- RESOURCE_NAME
#### GCP
- PROJECT_ID
- LOCATION
- RESOURCE_ID
- RESOURCE_NAME
> Since Prowler v3 the CSV column delimiter is the semicolon (`;`)
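
For example, a minimal Python sketch for parsing the findings with that delimiter (`findings.csv` is a placeholder for your actual output file):
```python
import csv

with open("findings.csv", newline="") as f:
    for row in csv.DictReader(f, delimiter=";"):
        if row["STATUS"] == "FAIL":
            print(row["CHECK_ID"], row["SEVERITY"], row["STATUS_EXTENDED"])
```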
### JSON

View File

@@ -36,10 +36,12 @@ nav:
- Slack Integration: tutorials/integrations.md
- Configuration File: tutorials/configuration_file.md
- Logging: tutorials/logging.md
- Allowlist: tutorials/allowlist.md
- Mute List: tutorials/mutelist.md
- Check Aliases: tutorials/check-aliases.md
- Custom Metadata: tutorials/custom-checks-metadata.md
- Ignore Unused Services: tutorials/ignore-unused-services.md
- Pentesting: tutorials/pentesting.md
- Parallel Execution: tutorials/parallel-execution.md
- Developer Guide: developer-guide/introduction.md
- AWS:
- Authentication: tutorials/aws/authentication.md
@@ -56,6 +58,7 @@ nav:
- Boto3 Configuration: tutorials/aws/boto3-configuration.md
- Azure:
- Authentication: tutorials/azure/authentication.md
- Non default clouds: tutorials/azure/use-non-default-cloud.md
- Subscriptions: tutorials/azure/subscriptions.md
- Google Cloud:
- Authentication: tutorials/gcp/authentication.md

poetry.lock (generated): file diff suppressed because it is too large.

View File

@@ -6,13 +6,12 @@ import sys
from colorama import Fore, Style
from prowler.lib.banner import print_banner
from prowler.config.config import get_available_compliance_frameworks
from prowler.lib.check.check import (
bulk_load_checks_metadata,
bulk_load_compliance_frameworks,
exclude_checks_to_run,
exclude_services_to_run,
execute_checks,
list_categories,
list_checks_json,
list_services,
@@ -26,15 +25,20 @@ from prowler.lib.check.check import (
)
from prowler.lib.check.checks_loader import load_checks_to_execute
from prowler.lib.check.compliance import update_checks_metadata_with_compliance
from prowler.lib.check.custom_checks_metadata import (
parse_custom_checks_metadata_file,
update_checks_metadata,
)
from prowler.lib.check.managers import ExecutionManager
from prowler.lib.cli.parser import ProwlerArgumentParser
from prowler.lib.logger import logger, set_logging_config
from prowler.lib.outputs.compliance import display_compliance_table
from prowler.lib.outputs.compliance.compliance import display_compliance_table
from prowler.lib.outputs.html import add_html_footer, fill_html_overview_statistics
from prowler.lib.outputs.json import close_json
from prowler.lib.outputs.outputs import extract_findings_statistics
from prowler.lib.outputs.slack import send_slack_message
from prowler.lib.outputs.summary_table import display_summary_table
from prowler.providers.aws.aws_provider import get_available_aws_service_regions
from prowler.lib.ui.live_display import live_display
from prowler.providers.aws.lib.s3.s3 import send_to_s3_bucket
from prowler.providers.aws.lib.security_hub.security_hub import (
batch_send_to_security_hub,
@@ -42,12 +46,16 @@ from prowler.providers.aws.lib.security_hub.security_hub import (
resolve_security_hub_previous_findings,
verify_security_hub_integration_enabled_per_region,
)
from prowler.providers.common.allowlist import set_provider_allowlist
from prowler.providers.common.audit_info import (
set_provider_audit_info,
set_provider_execution_parameters,
)
from prowler.providers.common.clean import clean_provider_local_output_directories
from prowler.providers.common.common import (
get_global_provider,
set_global_provider_object,
)
from prowler.providers.common.mutelist import set_provider_mutelist
from prowler.providers.common.outputs import set_provider_output_options
from prowler.providers.common.quick_inventory import run_provider_quick_inventory
@@ -68,13 +76,19 @@ def prowler():
checks_folder = args.checks_folder
severities = args.severity
compliance_framework = args.compliance
custom_checks_metadata_file = args.custom_checks_metadata_file
if not args.no_banner:
print_banner(args)
live_display.initialize(args)
# if not args.no_banner:
# print_banner(args)
# We treat the compliance framework as another output format
if compliance_framework:
args.output_modes.extend(compliance_framework)
# If no input compliance framework, set all
else:
args.output_modes.extend(get_available_compliance_frameworks(provider))
# Set Logger configuration
set_logging_config(args.log_level, args.log_file, args.only_logs)
@@ -97,9 +111,19 @@ def prowler():
bulk_compliance_frameworks = bulk_load_compliance_frameworks(provider)
# Complete checks metadata with the compliance framework specification
update_checks_metadata_with_compliance(
bulk_checks_metadata = update_checks_metadata_with_compliance(
bulk_compliance_frameworks, bulk_checks_metadata
)
# Update checks metadata if the --custom-checks-metadata-file is present
custom_checks_metadata = None
if custom_checks_metadata_file:
custom_checks_metadata = parse_custom_checks_metadata_file(
provider, custom_checks_metadata_file
)
bulk_checks_metadata = update_checks_metadata(
bulk_checks_metadata, custom_checks_metadata
)
if args.list_compliance:
print_compliance_frameworks(bulk_compliance_frameworks)
sys.exit()
@@ -134,6 +158,7 @@ def prowler():
# Set the audit info based on the selected provider
audit_info = set_provider_audit_info(provider, args.__dict__)
set_global_provider_object(args)
# Import custom checks from folder
if checks_folder:
@@ -158,12 +183,12 @@ def prowler():
# Sort final check list
checks_to_execute = sorted(checks_to_execute)
# Parse Allowlist
allowlist_file = set_provider_allowlist(provider, audit_info, args)
# Parse Mute List
mutelist_file = set_provider_mutelist(provider, audit_info, args)
# Set output options based on the selected provider
audit_output_options = set_provider_output_options(
provider, args, audit_info, allowlist_file, bulk_checks_metadata
provider, args, audit_info, mutelist_file, bulk_checks_metadata
)
# Run the quick inventory for the provider if available
@@ -173,10 +198,16 @@ def prowler():
# Execute checks
findings = []
if len(checks_to_execute):
findings = execute_checks(
checks_to_execute, provider, audit_info, audit_output_options
execution_manager = ExecutionManager(
checks_to_execute,
provider,
audit_info,
audit_output_options,
custom_checks_metadata,
)
findings = execution_manager.execute_checks()
else:
logger.error(
"There are no checks to execute. Please, check your input arguments"
@@ -238,16 +269,20 @@ def prowler():
f"{Style.BRIGHT}\nSending findings to AWS Security Hub, please wait...{Style.RESET_ALL}"
)
# Verify where AWS Security Hub is enabled
global_provider = get_global_provider()
aws_security_enabled_regions = []
security_hub_regions = (
get_available_aws_service_regions("securityhub", audit_info)
global_provider.get_available_aws_service_regions("securityhub")
if not audit_info.audited_regions
else audit_info.audited_regions
)
for region in security_hub_regions:
# Save the regions where AWS Security Hub is enabled
if verify_security_hub_integration_enabled_per_region(
region, audit_info.audit_session
audit_info.audited_partition,
region,
audit_info.audit_session,
audit_info.audited_account,
):
aws_security_enabled_regions.append(region)
@@ -287,8 +322,12 @@ def prowler():
provider,
)
if compliance_framework and findings:
for compliance in compliance_framework:
if findings:
compliance_overview = False
if not compliance_framework:
compliance_overview = True
compliance_framework = get_available_compliance_frameworks(provider)
for compliance in sorted(compliance_framework):
# Display compliance table
display_compliance_table(
findings,
@@ -296,6 +335,11 @@ def prowler():
compliance,
audit_output_options.output_filename,
audit_output_options.output_directory,
compliance_overview,
)
if compliance_overview:
print(
f"\nDetailed compliance results are in {Fore.YELLOW}{audit_output_options.output_directory}/compliance/{Style.RESET_ALL}\n"
)
# If custom checks were passed, remove the modules

View File

@@ -1,4 +1,4 @@
Allowlist:
Mute List:
Accounts:
"*":
########################### AWS CONTROL TOWER ###########################

View File

@@ -3,8 +3,8 @@
### Tags is an optional list that matches on tuples of 'key=value', which are "ANDed" together.
### Use an alternation regex to match one of multiple tags with "ORed" logic.
### For each check you can exempt Accounts, Regions, Resources and/or Tags.
########################### ALLOWLIST EXAMPLE ###########################
Allowlist:
########################### MUTE LIST EXAMPLE ###########################
Mute List:
Accounts:
"123456789012":
Checks:
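
For illustration, the matching semantics described in the comments above can be sketched in Python as follows (this is not Prowler's actual implementation; the tag strings are hypothetical):
```python
import re

# Hypothetical flattened resource tags and a Tags filter from a mute list
resource_tags = "environment=dev|project=prowler"
tags_filter = ["environment=(dev|staging)", "project=prowler"]

# Every entry must match (AND); alternation inside an entry gives OR
is_muted = all(re.search(tag, resource_tags) for tag in tags_filter)
print(is_muted)  # True
```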

View File

@@ -11,7 +11,7 @@ from prowler.lib.logger import logger
timestamp = datetime.today()
timestamp_utc = datetime.now(timezone.utc).replace(tzinfo=timezone.utc)
prowler_version = "3.11.1"
prowler_version = "3.11.3"
html_logo_url = "https://github.com/prowler-cloud/prowler/"
html_logo_img = "https://user-images.githubusercontent.com/3985464/113734260-7ba06900-96fb-11eb-82bc-d4f68a1e2710.png"
square_logo_img = "https://user-images.githubusercontent.com/38561120/235905862-9ece5bd7-9aa3-4e48-807a-3a9035eb8bfb.png"
@@ -22,13 +22,22 @@ gcp_logo = "https://user-images.githubusercontent.com/38561120/235928332-eb4accd
orange_color = "\033[38;5;208m"
banner_color = "\033[1;92m"
# Severities
valid_severities = ["critical", "high", "medium", "low", "informational"]
# Statuses
finding_statuses = ["PASS", "FAIL", "MANUAL"]
# Compliance
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
def get_available_compliance_frameworks():
def get_available_compliance_frameworks(provider=None):
available_compliance_frameworks = []
for provider in ["aws", "gcp", "azure"]:
providers = ["aws", "gcp", "azure"]
if provider:
providers = [provider]
for provider in providers:
with os.scandir(f"{actual_directory}/../compliance/{provider}") as files:
for file in files:
if file.is_file() and file.name.endswith(".json"):
@@ -47,7 +56,6 @@ aws_services_json_file = "aws_regions_by_service.json"
# gcp_zones_json_file = "gcp_zones.json"
default_output_directory = getcwd() + "/output"
output_file_timestamp = timestamp.strftime("%Y%m%d%H%M%S")
timestamp_iso = timestamp.isoformat(sep=" ", timespec="seconds")
csv_file_suffix = ".csv"
@@ -70,7 +78,9 @@ def check_current_version():
if latest_version != prowler_version:
return f"{prowler_version_string} (latest is {latest_version}, upgrade for the latest features)"
else:
return f"{prowler_version_string} (it is the latest version, yay!)"
return (
f"{prowler_version_string} (You are running the latest version, yay!)"
)
except requests.RequestException:
return f"{prowler_version_string}"
except Exception:

View File

@@ -2,10 +2,10 @@
aws:
# AWS Global Configuration
# aws.allowlist_non_default_regions --> Set to True to allowlist failed findings in non-default regions for GuardDuty, SecurityHub, DRS and Config
allowlist_non_default_regions: False
# If you want to allowlist/mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w allowlist.yaml`:
# Allowlist:
# aws.mute_non_default_regions --> Set to True to mute failed findings in non-default regions for GuardDuty, SecurityHub, DRS and Config
mute_non_default_regions: False
# If you want to mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w mutelist.yaml`:
# Mute List:
# Accounts:
# "*":
# Checks:
@@ -92,3 +92,6 @@ azure:
# GCP Configuration
gcp:
# Kubernetes Configuration
kubernetes:

View File

@@ -0,0 +1,19 @@
CustomChecksMetadata:
aws:
Checks:
s3_bucket_level_public_access_block:
Severity: high
s3_bucket_no_mfa_delete:
Severity: high
azure:
Checks:
storage_infrastructure_encryption_is_enabled:
Severity: medium
gcp:
Checks:
compute_instance_public_ip:
Severity: critical
kubernetes:
Checks:
apiserver_anonymous_requests:
Severity: low
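
For illustration, a minimal sketch of reading this file, which is passed to Prowler via `--custom-checks-metadata-file` (assumes PyYAML is installed):
```python
import yaml  # PyYAML

with open("prowler/config/custom_checks_metadata_example.yaml") as f:
    metadata = yaml.safe_load(f)["CustomChecksMetadata"]

# Severity override for one AWS check from the example above
print(metadata["aws"]["Checks"]["s3_bucket_no_mfa_delete"]["Severity"])  # high
```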

View File

@@ -15,13 +15,13 @@ def print_banner(args):
"""
print(banner)
if args.verbose or args.quiet:
if args.verbose:
print(
f"""
Color code for results:
- {Fore.YELLOW}INFO (Information){Style.RESET_ALL}
- {Fore.YELLOW}MANUAL (Manual check){Style.RESET_ALL}
- {Fore.GREEN}PASS (Recommended value){Style.RESET_ALL}
- {orange_color}WARNING (Ignored by allowlist){Style.RESET_ALL}
- {orange_color}MUTED (Muted by mute list){Style.RESET_ALL}
- {Fore.RED}FAIL (Fix required){Style.RESET_ALL}
"""
)

View File

@@ -10,17 +10,19 @@ from pkgutil import walk_packages
from types import ModuleType
from typing import Any
from alive_progress import alive_bar
from colorama import Fore, Style
import prowler
from prowler.config.config import orange_color
from prowler.lib.check.compliance_models import load_compliance_framework
from prowler.lib.check.custom_checks_metadata import update_check_metadata
from prowler.lib.check.managers import ExecutionManager
from prowler.lib.check.models import Check, load_check_metadata
from prowler.lib.logger import logger
from prowler.lib.outputs.outputs import report
from prowler.lib.ui.live_display import live_display
from prowler.lib.utils.utils import open_file, parse_json_file
from prowler.providers.aws.lib.allowlist.allowlist import allowlist_findings
from prowler.providers.aws.lib.mutelist.mutelist import mutelist_findings
from prowler.providers.common.common import get_global_provider
from prowler.providers.common.models import Audit_Metadata
from prowler.providers.common.outputs import Provider_Output_Options
@@ -106,14 +108,20 @@ def exclude_services_to_run(
# Load checks from checklist.json
def parse_checks_from_file(input_file: str, provider: str) -> set:
checks_to_execute = set()
with open_file(input_file) as f:
json_file = parse_json_file(f)
"""parse_checks_from_file returns a set of checks read from the given file"""
try:
checks_to_execute = set()
with open_file(input_file) as f:
json_file = parse_json_file(f)
for check_name in json_file[provider]:
checks_to_execute.add(check_name)
for check_name in json_file[provider]:
checks_to_execute.add(check_name)
return checks_to_execute
return checks_to_execute
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
# Load checks from custom folder
@@ -309,7 +317,7 @@ def print_checks(
def parse_checks_from_compliance_framework(
compliance_frameworks: list, bulk_compliance_frameworks: dict
) -> list:
"""Parse checks from compliance frameworks specification"""
"""parse_checks_from_compliance_framework returns a set of checks from the given compliance_frameworks"""
checks_to_execute = set()
try:
for framework in compliance_frameworks:
@@ -416,6 +424,7 @@ def execute_checks(
provider: str,
audit_info: Any,
audit_output_options: Provider_Output_Options,
custom_checks_metadata: Any,
) -> list:
# List to store all the check's findings
all_findings = []
@@ -423,8 +432,10 @@ def execute_checks(
services_executed = set()
checks_executed = set()
global_provider = get_global_provider()
# Initialize the Audit Metadata
audit_info.audit_metadata = Audit_Metadata(
global_provider.audit_metadata = Audit_Metadata(
services_scanned=0,
expected_checks=checks_to_execute,
completed_checks=0,
@@ -461,6 +472,7 @@ def execute_checks(
audit_info,
services_executed,
checks_executed,
custom_checks_metadata,
)
all_findings.extend(check_findings)
@@ -483,46 +495,57 @@ def execute_checks(
print(
f"{Style.BRIGHT}Executing {checks_num} {check_noun}, please wait...{Style.RESET_ALL}\n"
)
with alive_bar(
total=len(checks_to_execute),
ctrl_c=False,
bar="blocks",
spinner="classic",
stats=False,
enrich_print=False,
) as bar:
for check_name in checks_to_execute:
# Recover service from check name
service = check_name.split("_")[0]
bar.title = (
f"-> Scanning {orange_color}{service}{Style.RESET_ALL} service"
execution_manager = ExecutionManager(provider, checks_to_execute)
total_checks = execution_manager.total_checks_per_service()
completed_checks = {service: 0 for service in total_checks}
service_findings = []
for service, check_name in execution_manager.execute_checks():
try:
check_findings = execute(
service,
check_name,
provider,
audit_output_options,
audit_info,
services_executed,
checks_executed,
custom_checks_metadata,
)
try:
check_findings = execute(
service,
check_name,
provider,
audit_output_options,
audit_info,
services_executed,
checks_executed,
)
all_findings.extend(check_findings)
all_findings.extend(check_findings)
service_findings.extend(check_findings)
# Update the completed checks count
completed_checks[service] += 1
# If the check does not exist in the provider or is from another provider
except ModuleNotFoundError:
logger.error(
f"Check '{check_name}' was not found for the {provider.upper()} provider"
)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
bar()
bar.title = f"-> {Fore.GREEN}Scan completed!{Style.RESET_ALL}"
# Check if all checks for the service are completed
if completed_checks[service] == total_checks[service]:
# All checks for the service are completed
# Add a summary table or perform other actions
live_display.add_results_for_service(service, service_findings)
# Clear service_findings
service_findings = []
# If the check does not exist in the provider or is from another provider
except ModuleNotFoundError:
logger.error(
f"Check '{check_name}' was not found for the {provider.upper()} provider"
)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return all_findings
def create_check_service_dict(checks_to_execute):
output = {}
for check_name in checks_to_execute:
service = check_name.split("_")[0]
if service not in output.keys():
output[service] = []
output[service].append(check_name)
return output
def execute(
service: str,
check_name: str,
@@ -531,7 +554,9 @@ def execute(
audit_info: Any,
services_executed: set,
checks_executed: set,
custom_checks_metadata: Any,
):
global_provider = get_global_provider()
# Import check module
check_module_path = (
f"prowler.providers.{provider}.services.{service}.{check_name}.{check_name}"
@@ -541,21 +566,25 @@ def execute(
check_to_execute = getattr(lib, check_name)
c = check_to_execute()
# Update check metadata to reflect that in the outputs
if custom_checks_metadata and custom_checks_metadata["Checks"].get(c.CheckID):
c = update_check_metadata(c, custom_checks_metadata["Checks"][c.CheckID])
# Run check
check_findings = run_check(c, audit_output_options)
# Update Audit Status
services_executed.add(service)
checks_executed.add(check_name)
audit_info.audit_metadata = update_audit_metadata(
audit_info.audit_metadata, services_executed, checks_executed
global_provider.audit_metadata = update_audit_metadata(
global_provider.audit_metadata, services_executed, checks_executed
)
# Allowlist findings
if audit_output_options.allowlist_file:
check_findings = allowlist_findings(
audit_output_options.allowlist_file,
audit_info.audited_account,
# Mute List findings
if audit_output_options.mutelist_file:
check_findings = mutelist_findings(
audit_output_options.mutelist_file,
global_provider.audited_account,
check_findings,
)
@@ -598,22 +627,32 @@ def update_audit_metadata(
)
def recover_checks_from_service(service_list: list, provider: str) -> list:
checks = set()
service_list = [
"awslambda" if service == "lambda" else service for service in service_list
]
for service in service_list:
modules = recover_checks_from_provider(provider, service)
if not modules:
logger.error(f"Service '{service}' does not have checks.")
def recover_checks_from_service(service_list: list, provider: str) -> set:
"""
Recover all checks from the selected provider and service
else:
for check_module in modules:
# Recover check name and module name from import path
# Format: "providers.{provider}.services.{service}.{check_name}.{check_name}"
check_name = check_module[0].split(".")[-1]
# If the service is present in the group list passed as parameters
# if service_name in group_list: checks_from_arn.add(check_name)
checks.add(check_name)
return checks
Returns a set of checks from the given services
"""
try:
checks = set()
service_list = [
"awslambda" if service == "lambda" else service for service in service_list
]
for service in service_list:
service_checks = recover_checks_from_provider(provider, service)
if not service_checks:
logger.error(f"Service '{service}' does not have checks.")
else:
for check in service_checks:
# Recover check name and module name from import path
# Format: "providers.{provider}.services.{service}.{check_name}.{check_name}"
check_name = check[0].split(".")[-1]
# If the service is present in the group list passed as parameters
# if service_name in group_list: checks_from_arn.add(check_name)
checks.add(check_name)
return checks
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)

View File

@@ -0,0 +1,48 @@
import ast
import os
import pathlib
from prowler.lib.logger import logger
class ImportFinder(ast.NodeVisitor):
def __init__(self, provider):
self.imports = set()
self.provider = provider
def visit_ImportFrom(self, node):
if node.module and f"prowler.providers.{self.provider}.services" in node.module:
for name in node.names:
if "_client" in name.name:
self.imports.add(name.name)
self.generic_visit(node)
def analyze_check_file(file_path, provider):
# Parse the check file
with open(file_path, "r") as file:
node = ast.parse(file.read(), filename=file_path)
finder = ImportFinder(provider)
finder.visit(node)
return list(finder.imports)
def get_dependencies_for_checks(provider, checks_dict):
current_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
prowler_dir = current_directory.parent.parent
check_dependencies = {}
for service_name, checks in checks_dict.items():
check_dependencies[service_name] = {}
for check_name in checks:
relative_path = f"providers/{provider}/services/{service_name}/{check_name}/{check_name}.py"
check_file_path = prowler_dir / relative_path
if not check_file_path.exists():
logger.error(
f"{check_name} does not exist at {relative_path}! Cannot determine service dependencies"
)
continue
clients = analyze_check_file(str(check_file_path), provider)
check_dependencies[service_name][check_name] = clients
return check_dependencies

View File

@@ -1,5 +1,6 @@
from colorama import Fore, Style
from prowler.config.config import valid_severities
from prowler.lib.check.check import (
parse_checks_from_compliance_framework,
parse_checks_from_file,
@@ -10,7 +11,6 @@ from prowler.lib.logger import logger
# Generate the list of checks to execute
# PENDING Test for this function
def load_checks_to_execute(
bulk_checks_metadata: dict,
bulk_compliance_frameworks: dict,
@@ -22,69 +22,93 @@ def load_checks_to_execute(
categories: set,
provider: str,
) -> set:
"""Generate the list of checks to execute based on the cloud provider and input arguments specified"""
checks_to_execute = set()
"""Generate the list of checks to execute based on the cloud provider and the input arguments given"""
try:
# Local subsets
checks_to_execute = set()
check_aliases = {}
check_severities = {key: [] for key in valid_severities}
check_categories = {}
# Handle if there are checks passed using -c/--checks
if check_list:
for check_name in check_list:
checks_to_execute.add(check_name)
# First, loop over the bulk_checks_metadata to extract the needed subsets
for check, metadata in bulk_checks_metadata.items():
# Aliases
for alias in metadata.CheckAliases:
check_aliases[alias] = check
# Handle if there are some severities passed using --severity
elif severities:
for check in bulk_checks_metadata:
# Check check's severity
if bulk_checks_metadata[check].Severity in severities:
checks_to_execute.add(check)
# Severities
if metadata.Severity:
check_severities[metadata.Severity].append(check)
# Handle if there are checks passed using -C/--checks-file
elif checks_file:
try:
# Categories
for category in metadata.Categories:
if category not in check_categories:
check_categories[category] = []
check_categories[category].append(check)
# Handle if there are checks passed using -c/--checks
if check_list:
for check_name in check_list:
checks_to_execute.add(check_name)
# Handle if there are some severities passed using --severity
elif severities:
for severity in severities:
checks_to_execute.update(check_severities[severity])
if service_list:
checks_to_execute = (
recover_checks_from_service(service_list, provider)
& checks_to_execute
)
# Handle if there are checks passed using -C/--checks-file
elif checks_file:
checks_to_execute = parse_checks_from_file(checks_file, provider)
except Exception as e:
logger.error(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
# Handle if there are services passed using -s/--services
elif service_list:
checks_to_execute = recover_checks_from_service(service_list, provider)
# Handle if there are services passed using -s/--services
elif service_list:
checks_to_execute = recover_checks_from_service(service_list, provider)
# Handle if there are compliance frameworks passed using --compliance
elif compliance_frameworks:
try:
# Handle if there are compliance frameworks passed using --compliance
elif compliance_frameworks:
checks_to_execute = parse_checks_from_compliance_framework(
compliance_frameworks, bulk_compliance_frameworks
)
except Exception as e:
logger.error(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
# Handle if there are categories passed using --categories
elif categories:
for cat in categories:
for check in bulk_checks_metadata:
# Check check's categories
if cat in bulk_checks_metadata[check].Categories:
checks_to_execute.add(check)
# Handle if there are categories passed using --categories
elif categories:
for category in categories:
checks_to_execute.update(check_categories[category])
# If there are no checks passed as argument
else:
try:
# If there are no checks passed as argument
else:
# Get all check modules to run with the specific provider
checks = recover_checks_from_provider(provider)
except Exception as e:
logger.error(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
else:
for check_info in checks:
# Recover check name from import path (last part)
# Format: "providers.{provider}.services.{service}.{check_name}.{check_name}"
check_name = check_info[0]
checks_to_execute.add(check_name)
# Get Check Aliases mapping
check_aliases = {}
for check, metadata in bulk_checks_metadata.items():
for alias in metadata.CheckAliases:
check_aliases[alias] = check
# Check Aliases
checks_to_execute = update_checks_to_execute_with_aliases(
checks_to_execute, check_aliases
)
return checks_to_execute
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
def update_checks_to_execute_with_aliases(
checks_to_execute: set, check_aliases: dict
) -> set:
"""update_checks_to_execute_with_aliases returns the checks_to_execute updated using the check aliases."""
# Verify if any input check is an alias of another check
for input_check in checks_to_execute:
if (
@@ -97,5 +121,4 @@ def load_checks_to_execute(
print(
f"\nUsing alias {Fore.YELLOW}{input_check}{Style.RESET_ALL} for check {Fore.YELLOW}{check_aliases[input_check]}{Style.RESET_ALL}...\n"
)
return checks_to_execute

View File

@@ -0,0 +1,77 @@
import sys
import yaml
from jsonschema import validate
from prowler.config.config import valid_severities
from prowler.lib.logger import logger
custom_checks_metadata_schema = {
"type": "object",
"properties": {
"Checks": {
"type": "object",
"patternProperties": {
".*": {
"type": "object",
"properties": {
"Severity": {
"type": "string",
"enum": valid_severities,
}
},
"required": ["Severity"],
"additionalProperties": False,
}
},
"additionalProperties": False,
}
},
"required": ["Checks"],
"additionalProperties": False,
}
def parse_custom_checks_metadata_file(provider: str, custom_checks_metadata_file):
"""parse_custom_checks_metadata_file returns the custom_checks_metadata object if it is valid, otherwise aborts the execution returning the ValidationError."""
try:
with open(custom_checks_metadata_file) as f:
custom_checks_metadata = yaml.safe_load(f)["CustomChecksMetadata"][provider]
validate(custom_checks_metadata, schema=custom_checks_metadata_schema)
return custom_checks_metadata
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
)
sys.exit(1)
def update_checks_metadata(bulk_checks_metadata, custom_checks_metadata):
"""update_checks_metadata returns the bulk_checks_metadata with the check's metadata updated based on the custom_checks_metadata provided."""
try:
# Update checks metadata from CustomChecksMetadata file
for check, custom_metadata in custom_checks_metadata["Checks"].items():
check_metadata = bulk_checks_metadata.get(check)
if check_metadata:
bulk_checks_metadata[check] = update_check_metadata(
check_metadata, custom_metadata
)
return bulk_checks_metadata
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
)
sys.exit(1)
def update_check_metadata(check_metadata, custom_metadata):
"""update_check_metadata updates the check_metadata fields present in the custom_metadata and returns the updated version of the check_metadata. If some field is not present or valid the check_metadata is returned with the original fields."""
try:
if custom_metadata:
for attribute in custom_metadata:
try:
setattr(check_metadata, attribute, custom_metadata[attribute])
except ValueError:
pass
finally:
return check_metadata

View File

@@ -0,0 +1,369 @@
import importlib
import os
import sys
import traceback
from types import ModuleType
from typing import Any, Set
from colorama import Fore, Style
from prowler.lib.check.check_to_client_mapper import get_dependencies_for_checks
from prowler.lib.check.custom_checks_metadata import update_check_metadata
from prowler.lib.check.models import Check
from prowler.lib.logger import logger
from prowler.lib.outputs.outputs import report
from prowler.lib.ui.live_display import live_display
from prowler.providers.aws.lib.mutelist.mutelist import mutelist_findings
from prowler.providers.common.common import get_global_provider
from prowler.providers.common.models import Audit_Metadata
from prowler.providers.common.outputs import Provider_Output_Options
class ExecutionManager:
def __init__(
self,
checks_to_execute: list,
provider: str,
audit_info: Any,
audit_output_options: Provider_Output_Options,
custom_checks_metadata: Any,
):
self.checks_to_execute = checks_to_execute
self.provider = provider
self.audit_info = audit_info
self.audit_output_options = audit_output_options
self.custom_checks_metadata = custom_checks_metadata
self.live_display = live_display
self.live_display.start()
self.loaded_clients = {} # defaultdict(lambda: False)
self.check_dict = self.create_check_service_dict(checks_to_execute)
self.check_dependencies = get_dependencies_for_checks(provider, self.check_dict)
self.remaining_checks = self.initialize_remaining_checks(
self.check_dependencies
)
self.services_queue = self.initialize_services_queue(self.check_dependencies)
# For tracking the executed services and checks
self.services_executed: Set[str] = set()
self.checks_executed: Set[str] = set()
# Initialize the Audit Metadata
self.audit_info.audit_metadata = Audit_Metadata(
services_scanned=0,
expected_checks=self.checks_to_execute,
completed_checks=0,
audit_progress=0,
)
def update_tracking(self, service: str, check: str):
self.services_executed.add(service)
self.checks_executed.add(check)
@staticmethod
def initialize_remaining_checks(check_dependencies):
remaining_checks = {}
for service, checks in check_dependencies.items():
for check_name, clients in checks.items():
remaining_checks[(service, check_name)] = clients
return remaining_checks
@staticmethod
def initialize_services_queue(check_dependencies):
return list(check_dependencies.keys())
@staticmethod
def create_check_service_dict(checks_to_execute):
output = {}
for check_name in checks_to_execute:
service = check_name.split("_")[0]
if service not in output.keys():
output[service] = []
output[service].append(check_name)
return output
def total_checks_per_service(self):
"""Returns a dictionary with the total number of checks for each service."""
total_checks = {}
for service, checks in self.check_dict.items():
total_checks[service] = len(checks)
return total_checks
def find_next_service(self):
# Prioritize services that use already loaded clients
for service in self.services_queue:
checks = self.check_dependencies[service]
if any(
client in self.loaded_clients
for check in checks.values()
for client in check
):
return service
return None if not self.services_queue else self.services_queue[0]
@staticmethod
def import_check(check_path: str) -> ModuleType:
"""
Imports an input check using its path
When importing a module using importlib.import_module, it's loaded and added to the sys.modules cache.
This means that the module remains in memory and is not garbage collected immediately after use, as it's still referenced in sys.modules.
This behavior is intentional, as importing modules can be a costly operation, and keeping them in memory allows for faster re-use.
release_check deletes this reference if it is no longer required by any of the remaining checks
"""
lib = importlib.import_module(f"{check_path}")
return lib
# Imports service clients, and tracks if it needs to be imported
def import_client(self, client_name):
if not self.loaded_clients.get(client_name):
# Dynamically import the client
module_name, _ = client_name.rsplit("_", 1)
client_module = importlib.import_module(
f"prowler.providers.{self.provider}.services.{module_name}.{client_name}"
)
self.loaded_clients[client_name] = client_module
def release_clients(self, completed_check_clients):
for client_name in completed_check_clients:
# Determine if any of the remaining checks still require the client
if not any(
client == client_name
for check in self.remaining_checks
for client in self.remaining_checks[check]
):
# Delete the reference to the client for this object
del self.loaded_clients[client_name]
module_name, _ = client_name.rsplit("_", 1)
# Delete the reference to the client in sys.modules
del sys.modules[
f"prowler.providers.aws.services.{module_name}.{client_name}"
]
def generate_checks(self):
"""
This is a generator function, which will:
* Determine the next service whose checks will be executed
* Load all the clients which are required by the checks into memory (init them)
* Yield the service and check name, 1-by-1, to be used within execute_checks
* Pass the completed checks to release_clients to determine if the clients that were required by the check are no longer needed, and can be garbage collected
It will complete the checks for a service, before moving onto the next one
It uses find_next_service to prioritize the next service based on if any of that service's checks require a client that has already been loaded
"""
while self.remaining_checks:
current_service = self.find_next_service()
if not current_service:
# Execution has completed, return
break
# Remove the service from the services_queue
self.services_queue.remove(current_service)
checks = self.check_dependencies[current_service]
clients_for_service = list(
set(client for client_list in checks.values() for client in client_list)
)
for client in clients_for_service:
self.live_display.add_client_init_section(client)
self.import_client(client)
# Add the display component
total_checks = len(self.check_dict[current_service])
self.live_display.add_service_section(current_service, total_checks)
for check_name, clients_for_check in checks.items():
yield current_service, check_name
self.live_display.increment_check_progress()
self.live_display.increment_overall_check_progress()
del self.remaining_checks[(current_service, check_name)]
self.release_clients(clients_for_check)
self.live_display.increment_overall_service_progress()
def execute_checks(self) -> list:
# List to store all the check's findings
all_findings = []
# Services and checks executed for the Audit Status
global_provider = get_global_provider()
# Initialize the Audit Metadata
global_provider.audit_metadata = Audit_Metadata(
services_scanned=0,
expected_checks=self.checks_to_execute,
completed_checks=0,
audit_progress=0,
)
if os.name != "nt":
try:
from resource import RLIMIT_NOFILE, getrlimit
# Check ulimit for the maximum system open files
soft, _ = getrlimit(RLIMIT_NOFILE)
if soft < 4096:
logger.warning(
f"Your session file descriptors limit ({soft} open files) is below 4096. We recommend to increase it to avoid errors. Solve it running this command `ulimit -n 4096`. For more info visit https://docs.prowler.cloud/en/latest/troubleshooting/"
)
except Exception as error:
logger.error("Unable to retrieve ulimit default settings")
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
# Execution with the --only-logs flag
if self.audit_output_options.only_logs:
for service, check_name in self.generate_checks():
try:
check_findings = self.execute(service, check_name)
all_findings.extend(check_findings)
# If the check does not exist in the provider or is from another provider
except ModuleNotFoundError:
logger.error(
f"Check '{check_name}' was not found for the {self.provider.upper()} provider"
)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
# Default execution
total_checks = self.total_checks_per_service()
self.live_display.add_overall_progress_section(
total_checks_dict=total_checks
)
# For tracking when a service is completed
completed_checks = {service: 0 for service in total_checks}
service_findings = []
for service, check_name in self.generate_checks():
try:
check_findings = self.execute(
service,
check_name,
)
all_findings.extend(check_findings)
service_findings.extend(check_findings)
# Update the completed checks count
completed_checks[service] += 1
# Check if all checks for the service are completed
if completed_checks[service] == total_checks[service]:
# All checks for the service are completed
# Add a summary table or perform other actions
live_display.add_results_for_service(service, service_findings)
# Clear service_findings
service_findings = []
# If the check does not exist in the provider or is from another provider
except ModuleNotFoundError:
logger.error(
f"Check '{check_name}' was not found for the {self.provider.upper()} provider"
)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
self.live_display.hide_service_section()
return all_findings
def execute(
self,
service: str,
check_name: str,
):
try:
# Import check module
check_module_path = f"prowler.providers.{self.provider}.services.{service}.{check_name}.{check_name}"
lib = self.import_check(check_module_path)
# Recover functions from check
check_to_execute = getattr(lib, check_name)
c = check_to_execute()
# Update check metadata to reflect that in the outputs
if self.custom_checks_metadata and self.custom_checks_metadata[
"Checks"
].get(c.CheckID):
c = update_check_metadata(
c, self.custom_checks_metadata["Checks"][c.CheckID]
)
# Run check
check_findings = self.run_check(c, self.audit_output_options)
# Update Audit Status
self.update_tracking(service, check_name)
self.update_audit_metadata()
# Mutelist findings
if self.audit_output_options.mutelist_file:
check_findings = mutelist_findings(
self.audit_output_options.mutelist_file,
self.audit_info.audited_account,
check_findings,
)
# Report the check's findings
report(check_findings, self.audit_output_options, self.audit_info)
if os.environ.get("PROWLER_REPORT_LIB_PATH"):
try:
logger.info("Using custom report interface ...")
lib = os.environ["PROWLER_REPORT_LIB_PATH"]
outputs_module = importlib.import_module(lib)
custom_report_interface = getattr(outputs_module, "report")
custom_report_interface(
check_findings, self.audit_output_options, self.audit_info
)
except Exception:
sys.exit(1)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return check_findings
@staticmethod
def run_check(check: Check, output_options: Provider_Output_Options) -> list:
findings = []
if output_options.verbose:
print(
f"\nCheck ID: {check.CheckID} - {Fore.MAGENTA}{check.ServiceName}{Fore.YELLOW} [{check.Severity}]{Style.RESET_ALL}"
)
logger.debug(f"Executing check: {check.CheckID}")
try:
findings = check.execute()
except Exception as error:
if not output_options.only_logs:
print(
f"Something went wrong in {check.CheckID}, please use --log-level ERROR"
)
logger.error(
f"{check.CheckID} -- {error.__class__.__name__}[{traceback.extract_tb(error.__traceback__)[-1].lineno}]: {error}"
)
finally:
return findings
def update_audit_metadata(self):
"""update_audit_metadata returns the audit_metadata updated with the new status
Updates the given audit_metadata using the length of the services_executed and checks_executed
"""
try:
self.audit_info.audit_metadata.services_scanned = len(
self.services_executed
)
self.audit_info.audit_metadata.completed_checks = len(self.checks_executed)
self.audit_info.audit_metadata.audit_progress = (
100
* len(self.checks_executed)
/ len(self.audit_info.audit_metadata.expected_checks)
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
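
For illustration, the import/release pattern the `ExecutionManager` docstrings describe can be reduced to a standalone sketch (using `json` as a stand-in for a service client module):
```python
import importlib
import sys

module_name = "json"  # stand-in for e.g. an iam_client module
client = importlib.import_module(module_name)  # cached in sys.modules
assert module_name in sys.modules

# Once no remaining check needs the client, drop both references so the
# module becomes eligible for garbage collection (and a fresh re-import)
del client
del sys.modules[module_name]
print(module_name in sys.modules)  # False
```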

View File

@@ -2,10 +2,13 @@ import os
import sys
from abc import ABC, abstractmethod
from dataclasses import dataclass
from functools import wraps
from pydantic import BaseModel, ValidationError
from pydantic.main import ModelMetaclass
from prowler.lib.logger import logger
from prowler.lib.ui.live_display import live_display
class Code(BaseModel):
@@ -57,9 +60,29 @@ class Check_Metadata_Model(BaseModel):
Compliance: list = None
class Check(ABC, Check_Metadata_Model):
class CheckMeta(ModelMetaclass):
"""
Dynamically decorates the execute function of all subclasses of the Check class
By making CheckMeta inherit from ModelMetaclass, it ensures that all features provided by Pydantic's BaseModel (such as data validation, serialization, and so forth) are preserved. CheckMeta just adds additional behavior (decorator application) on top of the existing features.
This also works because ModelMetaclass inherits from ABCMeta, as does the ABC class (its got to do with how metaclasses work when applying it to a class that inherits from other classes that have a metaclass).
The primary role of CheckMeta is to automatically apply a decorator to the execute method of subclasses. This behavior does not conflict with the typical responsibilities of ModelMetaclass
"""
def __new__(cls, name, bases, dct):
if "execute" in dct and not getattr(
dct["execute"], "__isabstractmethod__", False
):
dct["execute"] = Check.update_title_with_findings_decorator(dct["execute"])
return super(CheckMeta, cls).__new__(cls, name, bases, dct)
class Check(ABC, Check_Metadata_Model, metaclass=CheckMeta):
"""Prowler Check"""
title_bar_task: int = None
progress_task: int = None
def __init__(self, **data):
"""Check's init function. Calls the CheckMetadataModel init."""
# Parse the Check's metadata file
@@ -72,6 +95,43 @@ class Check(ABC, Check_Metadata_Model):
# Calls parents init function
super().__init__(**data)
self.live_display_enabled = False
service_section = live_display.get_service_section()
if service_section:
self.live_display_enabled = True
self.title_bar_task = service_section.title_bar.add_task(
f"{self.CheckTitle}...", start=False
)
def increment_task_progress(self):
if self.live_display_enabled:
current_section = live_display.get_service_section()
current_section.task_progress.update(self.progress_task, advance=1)
def start_task(self, message, count):
if self.live_display_enabled:
current_section = live_display.get_service_section()
self.progress_task = current_section.task_progress.add_task(
description=message, total=count, visible=True
)
def update_title_with_findings(self, findings):
if self.live_display_enabled:
current_section = live_display.get_service_section()
# current_section.task_progress.remove_task(self.progress_task)
total_failed = len(
[report for report in findings if report.status == "FAIL"]
)
total_checked = len(findings)
if total_failed == 0:
message = f"{self.CheckTitle} [pass]All resources passed ({total_checked})[/pass]"
else:
message = f"{self.CheckTitle} [fail]{total_failed}/{total_checked} failed![/fail]"
current_section.title_bar.update(
task_id=self.title_bar_task, description=message
)
def metadata(self) -> dict:
"""Return the JSON representation of the check's metadata"""
return self.json()
@@ -80,6 +140,24 @@ class Check(ABC, Check_Metadata_Model):
def execute(self):
"""Execute the check's logic"""
@staticmethod
def update_title_with_findings_decorator(func):
"""
Decorator to update the title bar in the live_display with findings after executing a check.
"""
@wraps(func)
def wrapper(check_instance, *args, **kwargs):
# Execute the original check's logic
findings = func(check_instance, *args, **kwargs)
# Update the title bar with the findings
check_instance.update_title_with_findings(findings)
return findings
return wrapper
@dataclass
class Check_Report:
@@ -146,6 +224,22 @@ class Check_Report_GCP(Check_Report):
self.location = ""
@dataclass
class Check_Report_Kubernetes(Check_Report):
# TODO change class name to CheckReportKubernetes
"""Contains the Kubernetes Check's finding information."""
resource_name: str
resource_id: str
namespace: str
def __init__(self, metadata):
super().__init__(metadata)
self.resource_name = ""
self.resource_id = ""
self.namespace = ""
# Testing Pending
def load_check_metadata(metadata_file: str) -> Check_Metadata_Model:
"""load_check_metadata loads and parse a Check's metadata file"""

View File

@@ -7,6 +7,8 @@ from prowler.config.config import (
check_current_version,
default_config_file_path,
default_output_directory,
valid_severities,
finding_statuses,
)
from prowler.providers.common.arguments import (
init_providers_parser,
@@ -49,6 +51,7 @@ Detailed documentation at https://docs.prowler.cloud
self.__init_exclude_checks_parser__()
self.__init_list_checks_parser__()
self.__init_config_parser__()
self.__init_custom_checks_metadata_parser__()
# Init Providers Arguments
init_providers_parser(self)
@@ -114,10 +117,10 @@ Detailed documentation at https://docs.prowler.cloud
"Outputs"
)
common_outputs_parser.add_argument(
"-q",
"--quiet",
action="store_true",
help="Store or send only Prowler failed findings",
"--status",
nargs="+",
help=f"Filter by the status of the findings {finding_statuses}",
choices=finding_statuses,
)
common_outputs_parser.add_argument(
"-M",
@@ -220,11 +223,11 @@ Detailed documentation at https://docs.prowler.cloud
group.add_argument(
"-s", "--services", nargs="+", help="List of services to be executed."
)
group.add_argument(
common_checks_parser.add_argument(
"--severity",
nargs="+",
help="List of severities to be executed [informational, low, medium, high, critical]",
choices=["informational", "low", "medium", "high", "critical"],
help=f"List of severities to be executed {valid_severities}",
choices=valid_severities,
)
group.add_argument(
"--compliance",
@@ -286,3 +289,15 @@ Detailed documentation at https://docs.prowler.cloud
default=default_config_file_path,
help="Set configuration file path",
)
def __init_custom_checks_metadata_parser__(self):
# CustomChecksMetadata
custom_checks_metadata_subparser = (
self.common_providers_parser.add_argument_group("Custom Checks Metadata")
)
custom_checks_metadata_subparser.add_argument(
"--custom-checks-metadata-file",
nargs="?",
default=None,
help="Path for the custom checks metadata YAML file. See example prowler/config/custom_checks_metadata_example.yaml for reference and format. See more in https://docs.prowler.cloud/en/latest/tutorials/custom-checks-metadata/",
)

View File

@@ -401,7 +401,8 @@ def display_compliance_table(
"Bajo": 0,
}
if finding.status == "FAIL":
fail_count += 1
if attribute.Tipo != "recomendacion":
fail_count += 1
marcos[marco_categoria][
"Estado"
] = f"{Fore.RED}NO CUMPLE{Style.RESET_ALL}"

View File

@@ -0,0 +1,55 @@
from csv import DictWriter
from prowler.config.config import timestamp
from prowler.lib.outputs.models import (
Check_Output_CSV_AWS_Well_Architected,
generate_csv_fields,
)
from prowler.lib.utils.utils import outputs_unix_timestamp
def write_compliance_row_aws_well_architected_framework(
file_descriptors, finding, compliance, output_options, audit_info
):
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
compliance_output = compliance_output.lower().replace("-", "_")
csv_header = generate_csv_fields(Check_Output_CSV_AWS_Well_Architected)
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_AWS_Well_Architected(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_Name=attribute.Name,
Requirements_Attributes_WellArchitectedQuestionId=attribute.WellArchitectedQuestionId,
Requirements_Attributes_WellArchitectedPracticeId=attribute.WellArchitectedPracticeId,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_SubSection=attribute.SubSection,
Requirements_Attributes_LevelOfRisk=attribute.LevelOfRisk,
Requirements_Attributes_AssessmentMethod=attribute.AssessmentMethod,
Requirements_Attributes_Description=attribute.Description,
Requirements_Attributes_ImplementationGuidanceUrl=attribute.ImplementationGuidanceUrl,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_writer.writerow(compliance_row.__dict__)

View File

@@ -0,0 +1,36 @@
from prowler.lib.outputs.compliance.cis_aws import generate_compliance_row_cis_aws
from prowler.lib.outputs.compliance.cis_gcp import generate_compliance_row_cis_gcp
from prowler.lib.outputs.csv import write_csv
def write_compliance_row_cis(
file_descriptors,
finding,
compliance,
output_options,
audit_info,
input_compliance_frameworks,
):
compliance_output = "cis_" + compliance.Version + "_" + compliance.Provider.lower()
# Only with the version of CIS that was selected
if compliance_output in str(input_compliance_frameworks):
for requirement in compliance.Requirements:
for attribute in requirement.Attributes:
if compliance.Provider == "AWS":
(compliance_row, csv_header) = generate_compliance_row_cis_aws(
finding,
compliance,
requirement,
attribute,
output_options,
audit_info,
)
elif compliance.Provider == "GCP":
(compliance_row, csv_header) = generate_compliance_row_cis_gcp(
finding, compliance, requirement, attribute, output_options
)
write_csv(
file_descriptors[compliance_output], csv_header, compliance_row
)
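The dispatcher above writes a row only when its normalized key appears in the requested frameworks. A quick sketch of how that key is formed for CIS 1.5 on AWS (values illustrative):

# mirrors the key built at the top of write_compliance_row_cis
version, provider = "1.5", "AWS"
compliance_output = "cis_" + version + "_" + provider.lower()
assert compliance_output == "cis_1.5_aws"  # matches both the CLI framework name and the file descriptor key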

View File

@@ -0,0 +1,34 @@
from prowler.config.config import timestamp
from prowler.lib.outputs.models import Check_Output_CSV_AWS_CIS, generate_csv_fields
from prowler.lib.utils.utils import outputs_unix_timestamp
def generate_compliance_row_cis_aws(
finding, compliance, requirement, attribute, output_options, audit_info
):
compliance_row = Check_Output_CSV_AWS_CIS(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(output_options.unix_timestamp, timestamp),
Requirements_Id=requirement.Id,
Requirements_Description=requirement.Description,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_Profile=attribute.Profile,
Requirements_Attributes_AssessmentStatus=attribute.AssessmentStatus,
Requirements_Attributes_Description=attribute.Description,
Requirements_Attributes_RationaleStatement=attribute.RationaleStatement,
Requirements_Attributes_ImpactStatement=attribute.ImpactStatement,
Requirements_Attributes_RemediationProcedure=attribute.RemediationProcedure,
Requirements_Attributes_AuditProcedure=attribute.AuditProcedure,
Requirements_Attributes_AdditionalInformation=attribute.AdditionalInformation,
Requirements_Attributes_References=attribute.References,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(Check_Output_CSV_AWS_CIS)
return compliance_row, csv_header

View File

@@ -0,0 +1,35 @@
from prowler.config.config import timestamp
from prowler.lib.outputs.models import Check_Output_CSV_GCP_CIS, generate_csv_fields
from prowler.lib.utils.utils import outputs_unix_timestamp
def generate_compliance_row_cis_gcp(
finding, compliance, requirement, attribute, output_options
):
compliance_row = Check_Output_CSV_GCP_CIS(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
ProjectId=finding.project_id,
Location=finding.location.lower(),
AssessmentDate=outputs_unix_timestamp(output_options.unix_timestamp, timestamp),
Requirements_Id=requirement.Id,
Requirements_Description=requirement.Description,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_Profile=attribute.Profile,
Requirements_Attributes_AssessmentStatus=attribute.AssessmentStatus,
Requirements_Attributes_Description=attribute.Description,
Requirements_Attributes_RationaleStatement=attribute.RationaleStatement,
Requirements_Attributes_ImpactStatement=attribute.ImpactStatement,
Requirements_Attributes_RemediationProcedure=attribute.RemediationProcedure,
Requirements_Attributes_AuditProcedure=attribute.AuditProcedure,
Requirements_Attributes_AdditionalInformation=attribute.AdditionalInformation,
Requirements_Attributes_References=attribute.References,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
ResourceName=finding.resource_name,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(Check_Output_CSV_GCP_CIS)
return compliance_row, csv_header

View File

@@ -0,0 +1,472 @@
import sys
from colorama import Fore, Style
from tabulate import tabulate
from prowler.config.config import orange_color
from prowler.lib.check.models import Check_Report
from prowler.lib.logger import logger
from prowler.lib.outputs.compliance.aws_well_architected_framework import (
write_compliance_row_aws_well_architected_framework,
)
from prowler.lib.outputs.compliance.cis import write_compliance_row_cis
from prowler.lib.outputs.compliance.ens_rd2022_aws import (
write_compliance_row_ens_rd2022_aws,
)
from prowler.lib.outputs.compliance.generic import write_compliance_row_generic
from prowler.lib.outputs.compliance.iso27001_2013_aws import (
write_compliance_row_iso27001_2013_aws,
)
from prowler.lib.outputs.compliance.mitre_attack_aws import (
write_compliance_row_mitre_attack_aws,
)
def add_manual_controls(
output_options, audit_info, file_descriptors, input_compliance_frameworks
):
try:
# Check if MANUAL control was already added to output
if "manual_check" in output_options.bulk_checks_metadata:
manual_finding = Check_Report(
output_options.bulk_checks_metadata["manual_check"].json()
)
manual_finding.status = "MANUAL"
manual_finding.status_extended = "Manual check"
manual_finding.resource_id = "manual_check"
manual_finding.resource_name = "Manual check"
manual_finding.region = ""
manual_finding.location = ""
manual_finding.project_id = ""
fill_compliance(
output_options,
manual_finding,
audit_info,
file_descriptors,
input_compliance_frameworks,
)
del output_options.bulk_checks_metadata["manual_check"]
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def get_check_compliance_frameworks_in_input(
check_id, bulk_checks_metadata, input_compliance_frameworks
):
"""get_check_compliance_frameworks_in_input returns a list of Compliance for the given check if the compliance framework is present in the input compliance to execute"""
check_compliances = []
if bulk_checks_metadata and bulk_checks_metadata[check_id]:
for compliance in bulk_checks_metadata[check_id].Compliance:
compliance_name = ""
if compliance.Version:
compliance_name = (
compliance.Framework.lower()
+ "_"
+ compliance.Version.lower()
+ "_"
+ compliance.Provider.lower()
)
else:
compliance_name = (
compliance.Framework.lower() + "_" + compliance.Provider.lower()
)
if compliance_name.replace("-", "_") in input_compliance_frameworks:
check_compliances.append(compliance)
return check_compliances
def fill_compliance(
output_options, finding, audit_info, file_descriptors, input_compliance_frameworks
):
try:
# We have to retrieve all the check's compliance requirements and keep the ones matching the input frameworks
check_compliances = get_check_compliance_frameworks_in_input(
finding.check_metadata.CheckID,
output_options.bulk_checks_metadata,
input_compliance_frameworks,
)
for compliance in check_compliances:
if compliance.Framework == "ENS" and compliance.Version == "RD2022":
write_compliance_row_ens_rd2022_aws(
file_descriptors, finding, compliance, output_options, audit_info
)
elif compliance.Framework == "CIS":
write_compliance_row_cis(
file_descriptors,
finding,
compliance,
output_options,
audit_info,
input_compliance_frameworks,
)
elif (
"AWS-Well-Architected-Framework" in compliance.Framework
and compliance.Provider == "AWS"
):
write_compliance_row_aws_well_architected_framework(
file_descriptors, finding, compliance, output_options, audit_info
)
elif (
compliance.Framework == "ISO27001"
and compliance.Version == "2013"
and compliance.Provider == "AWS"
):
write_compliance_row_iso27001_2013_aws(
file_descriptors, finding, compliance, output_options, audit_info
)
elif (
compliance.Framework == "MITRE-ATTACK"
and compliance.Version == ""
and compliance.Provider == "AWS"
):
write_compliance_row_mitre_attack_aws(
file_descriptors, finding, compliance, output_options, audit_info
)
else:
write_compliance_row_generic(
file_descriptors, finding, compliance, output_options, audit_info
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def display_compliance_table(
findings: list,
bulk_checks_metadata: dict,
compliance_framework: str,
output_filename: str,
output_directory: str,
compliance_overview: bool,
):
try:
if "ens_rd2022_aws" == compliance_framework:
marcos = {}
ens_compliance_table = {
"Proveedor": [],
"Marco/Categoria": [],
"Estado": [],
"Alto": [],
"Medio": [],
"Bajo": [],
"Opcional": [],
}
pass_count = fail_count = 0
for finding in findings:
check = bulk_checks_metadata[finding.check_metadata.CheckID]
check_compliances = check.Compliance
for compliance in check_compliances:
if (
compliance.Framework == "ENS"
and compliance.Provider == "AWS"
and compliance.Version == "RD2022"
):
for requirement in compliance.Requirements:
for attribute in requirement.Attributes:
marco_categoria = (
f"{attribute.Marco}/{attribute.Categoria}"
)
# Check if Marco/Categoria exists
if marco_categoria not in marcos:
marcos[marco_categoria] = {
"Estado": f"{Fore.GREEN}CUMPLE{Style.RESET_ALL}",
"Opcional": 0,
"Alto": 0,
"Medio": 0,
"Bajo": 0,
}
if finding.status == "FAIL":
fail_count += 1
marcos[marco_categoria][
"Estado"
] = f"{Fore.RED}NO CUMPLE{Style.RESET_ALL}"
elif finding.status == "PASS":
pass_count += 1
if attribute.Nivel == "opcional":
marcos[marco_categoria]["Opcional"] += 1
elif attribute.Nivel == "alto":
marcos[marco_categoria]["Alto"] += 1
elif attribute.Nivel == "medio":
marcos[marco_categoria]["Medio"] += 1
elif attribute.Nivel == "bajo":
marcos[marco_categoria]["Bajo"] += 1
# Add results to table
for marco in sorted(marcos):
ens_compliance_table["Proveedor"].append(compliance.Provider)
ens_compliance_table["Marco/Categoria"].append(marco)
ens_compliance_table["Estado"].append(marcos[marco]["Estado"])
ens_compliance_table["Opcional"].append(
f"{Fore.BLUE}{marcos[marco]['Opcional']}{Style.RESET_ALL}"
)
ens_compliance_table["Alto"].append(
f"{Fore.LIGHTRED_EX}{marcos[marco]['Alto']}{Style.RESET_ALL}"
)
ens_compliance_table["Medio"].append(
f"{orange_color}{marcos[marco]['Medio']}{Style.RESET_ALL}"
)
ens_compliance_table["Bajo"].append(
f"{Fore.YELLOW}{marcos[marco]['Bajo']}{Style.RESET_ALL}"
)
if fail_count + pass_count < 1:
print(
f"\nThere are no resources for {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}.\n"
)
else:
print(
f"\nEstado de Cumplimiento de {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}:"
)
overview_table = [
[
f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) NO CUMPLE{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) CUMPLE{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
if not compliance_overview:
print(
f"\nResultados de {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}:"
)
print(
tabulate(
ens_compliance_table,
headers="keys",
tablefmt="rounded_grid",
)
)
print(
f"{Style.BRIGHT}* Solo aparece el Marco/Categoria que contiene resultados.{Style.RESET_ALL}"
)
print(
f"\nResultados detallados de {compliance_framework.upper()} en:"
)
print(
f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework}.csv\n"
)
elif "cis_" in compliance_framework:
sections = {}
cis_compliance_table = {
"Provider": [],
"Section": [],
"Level 1": [],
"Level 2": [],
}
pass_count = fail_count = 0
for finding in findings:
check = bulk_checks_metadata[finding.check_metadata.CheckID]
check_compliances = check.Compliance
for compliance in check_compliances:
if (
compliance.Framework == "CIS"
and compliance.Version in compliance_framework
):
for requirement in compliance.Requirements:
for attribute in requirement.Attributes:
section = attribute.Section
# Check if Section exists
if section not in sections:
sections[section] = {
"Status": f"{Fore.GREEN}PASS{Style.RESET_ALL}",
"Level 1": {"FAIL": 0, "PASS": 0},
"Level 2": {"FAIL": 0, "PASS": 0},
}
if finding.status == "FAIL":
fail_count += 1
elif finding.status == "PASS":
pass_count += 1
if attribute.Profile == "Level 1":
if finding.status == "FAIL":
sections[section]["Level 1"]["FAIL"] += 1
else:
sections[section]["Level 1"]["PASS"] += 1
elif attribute.Profile == "Level 2":
if finding.status == "FAIL":
sections[section]["Level 2"]["FAIL"] += 1
else:
sections[section]["Level 2"]["PASS"] += 1
# Add results to table
sections = dict(sorted(sections.items()))
for section in sections:
cis_compliance_table["Provider"].append(compliance.Provider)
cis_compliance_table["Section"].append(section)
if sections[section]["Level 1"]["FAIL"] > 0:
cis_compliance_table["Level 1"].append(
f"{Fore.RED}FAIL({sections[section]['Level 1']['FAIL']}){Style.RESET_ALL}"
)
else:
cis_compliance_table["Level 1"].append(
f"{Fore.GREEN}PASS({sections[section]['Level 1']['PASS']}){Style.RESET_ALL}"
)
if sections[section]["Level 2"]["FAIL"] > 0:
cis_compliance_table["Level 2"].append(
f"{Fore.RED}FAIL({sections[section]['Level 2']['FAIL']}){Style.RESET_ALL}"
)
else:
cis_compliance_table["Level 2"].append(
f"{Fore.GREEN}PASS({sections[section]['Level 2']['PASS']}){Style.RESET_ALL}"
)
if fail_count + pass_count < 1:
print(
f"\nThere are no resources for {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}.\n"
)
else:
print(
f"\nCompliance Status of {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Framework:"
)
overview_table = [
[
f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
if not compliance_overview:
print(
f"\nFramework {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Results:"
)
print(
tabulate(
cis_compliance_table,
headers="keys",
tablefmt="rounded_grid",
)
)
print(
f"{Style.BRIGHT}* Only sections containing results appear.{Style.RESET_ALL}"
)
print(
f"\nDetailed results of {compliance_framework.upper()} are in:"
)
print(
f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework}.csv\n"
)
elif "mitre_attack" in compliance_framework:
tactics = {}
mitre_compliance_table = {
"Provider": [],
"Tactic": [],
"Status": [],
}
pass_count = fail_count = 0
for finding in findings:
check = bulk_checks_metadata[finding.check_metadata.CheckID]
check_compliances = check.Compliance
for compliance in check_compliances:
if (
"MITRE-ATTACK" in compliance.Framework
and compliance.Version in compliance_framework
):
for requirement in compliance.Requirements:
for tactic in requirement.Tactics:
if tactic not in tactics:
tactics[tactic] = {"FAIL": 0, "PASS": 0}
if finding.status == "FAIL":
fail_count += 1
tactics[tactic]["FAIL"] += 1
elif finding.status == "PASS":
pass_count += 1
tactics[tactic]["PASS"] += 1
# Add results to table
tactics = dict(sorted(tactics.items()))
for tactic in tactics:
mitre_compliance_table["Provider"].append(compliance.Provider)
mitre_compliance_table["Tactic"].append(tactic)
if tactics[tactic]["FAIL"] > 0:
mitre_compliance_table["Status"].append(
f"{Fore.RED}FAIL({tactics[tactic]['FAIL']}){Style.RESET_ALL}"
)
else:
mitre_compliance_table["Status"].append(
f"{Fore.GREEN}PASS({tactics[tactic]['PASS']}){Style.RESET_ALL}"
)
if fail_count + pass_count < 1:
print(
f"\nThere are no resources for {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}.\n"
)
else:
print(
f"\nCompliance Status of {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Framework:"
)
overview_table = [
[
f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
if not compliance_overview:
print(
f"\nFramework {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Results:"
)
print(
tabulate(
mitre_compliance_table,
headers="keys",
tablefmt="rounded_grid",
)
)
print(
f"{Style.BRIGHT}* Only sections containing results appear.{Style.RESET_ALL}"
)
print(
f"\nDetailed results of {compliance_framework.upper()} are in:"
)
print(
f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework}.csv\n"
)
else:
pass_count = fail_count = 0
for finding in findings:
check = bulk_checks_metadata[finding.check_metadata.CheckID]
check_compliances = check.Compliance
for compliance in check_compliances:
if (
compliance.Framework.upper()
in compliance_framework.upper().replace("_", "-")
and compliance.Version in compliance_framework.upper()
and compliance.Provider in compliance_framework.upper()
):
for requirement in compliance.Requirements:
for attribute in requirement.Attributes:
if finding.status == "FAIL":
fail_count += 1
elif finding.status == "PASS":
pass_count += 1
if fail_count + pass_count < 1:
print(
f"\nThere are no resources for {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL}.\n"
)
else:
print(
f"\nCompliance Status of {Fore.YELLOW}{compliance_framework.upper()}{Style.RESET_ALL} Framework:"
)
overview_table = [
[
f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
if not compliance_overview:
print(f"\nDetailed results of {compliance_framework.upper()} are in:")
print(
f" - CSV: {output_directory}/compliance/{output_filename}_{compliance_framework}.csv\n"
)
except Exception as error:
logger.critical(
f"{error.__class__.__name__}:{error.__traceback__.tb_lineno} -- {error}"
)
sys.exit(1)
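A worked example of the name normalization used by get_check_compliance_frameworks_in_input above, applying only the string rules shown in that function (lowercase, join with underscores, dashes replaced by underscores) and assuming framework metadata of the form shown:

framework, version, provider = "AWS-Well-Architected-Framework-Security-Pillar", "", "AWS"
if version:
    name = framework.lower() + "_" + version.lower() + "_" + provider.lower()
else:
    name = framework.lower() + "_" + provider.lower()
assert name.replace("-", "_") == "aws_well_architected_framework_security_pillar_aws"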

View File

@@ -0,0 +1,45 @@
from csv import DictWriter
from prowler.config.config import timestamp
from prowler.lib.outputs.models import Check_Output_CSV_ENS_RD2022, generate_csv_fields
from prowler.lib.utils.utils import outputs_unix_timestamp
def write_compliance_row_ens_rd2022_aws(
file_descriptors, finding, compliance, output_options, audit_info
):
compliance_output = "ens_rd2022_aws"
csv_header = generate_csv_fields(Check_Output_CSV_ENS_RD2022)
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_ENS_RD2022(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_IdGrupoControl=attribute.IdGrupoControl,
Requirements_Attributes_Marco=attribute.Marco,
Requirements_Attributes_Categoria=attribute.Categoria,
Requirements_Attributes_DescripcionControl=attribute.DescripcionControl,
Requirements_Attributes_Nivel=attribute.Nivel,
Requirements_Attributes_Tipo=attribute.Tipo,
Requirements_Attributes_Dimensiones=",".join(attribute.Dimensiones),
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_writer.writerow(compliance_row.__dict__)

View File

@@ -0,0 +1,51 @@
from csv import DictWriter
from prowler.config.config import timestamp
from prowler.lib.outputs.models import (
Check_Output_CSV_Generic_Compliance,
generate_csv_fields,
)
from prowler.lib.utils.utils import outputs_unix_timestamp
def write_compliance_row_generic(
file_descriptors, finding, compliance, output_options, audit_info
):
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
compliance_output = compliance_output.lower().replace("-", "_")
csv_header = generate_csv_fields(Check_Output_CSV_Generic_Compliance)
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_Generic_Compliance(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_SubSection=attribute.SubSection,
Requirements_Attributes_SubGroup=attribute.SubGroup,
Requirements_Attributes_Service=attribute.Service,
Requirements_Attributes_Soc_Type=attribute.Soc_Type,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_writer.writerow(compliance_row.__dict__)

View File

@@ -0,0 +1,53 @@
from csv import DictWriter
from prowler.config.config import timestamp
from prowler.lib.outputs.models import (
Check_Output_CSV_AWS_ISO27001_2013,
generate_csv_fields,
)
from prowler.lib.utils.utils import outputs_unix_timestamp
def write_compliance_row_iso27001_2013_aws(
file_descriptors, finding, compliance, output_options, audit_info
):
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
compliance_output = compliance_output.lower().replace("-", "_")
csv_header = generate_csv_fields(Check_Output_CSV_AWS_ISO27001_2013)
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
requirement_name = requirement.Name
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_AWS_ISO27001_2013(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Name=requirement_name,
Requirements_Description=requirement_description,
Requirements_Attributes_Category=attribute.Category,
Requirements_Attributes_Objetive_ID=attribute.Objetive_ID,
Requirements_Attributes_Objetive_Name=attribute.Objetive_Name,
Requirements_Attributes_Check_Summary=attribute.Check_Summary,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_writer.writerow(compliance_row.__dict__)

View File

@@ -0,0 +1,66 @@
from csv import DictWriter
from prowler.config.config import timestamp
from prowler.lib.outputs.models import (
Check_Output_MITRE_ATTACK,
generate_csv_fields,
unroll_list,
)
from prowler.lib.utils.utils import outputs_unix_timestamp
def write_compliance_row_mitre_attack_aws(
file_descriptors, finding, compliance, output_options, audit_info
):
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
compliance_output = compliance_output.lower().replace("-", "_")
csv_header = generate_csv_fields(Check_Output_MITRE_ATTACK)
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
requirement_name = requirement.Name
attributes_aws_services = ""
attributes_categories = ""
attributes_values = ""
attributes_comments = ""
for attribute in requirement.Attributes:
attributes_aws_services += attribute.AWSService + "\n"
attributes_categories += attribute.Category + "\n"
attributes_values += attribute.Value + "\n"
attributes_comments += attribute.Comment + "\n"
compliance_row = Check_Output_MITRE_ATTACK(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Name=requirement_name,
Requirements_Tactics=unroll_list(requirement.Tactics),
Requirements_SubTechniques=unroll_list(requirement.SubTechniques),
Requirements_Platforms=unroll_list(requirement.Platforms),
Requirements_TechniqueURL=requirement.TechniqueURL,
Requirements_Attributes_AWSServices=attributes_aws_services,
Requirements_Attributes_Categories=attributes_categories,
Requirements_Attributes_Values=attributes_values,
Requirements_Attributes_Comments=attributes_comments,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_writer.writerow(compliance_row.__dict__)

View File

@@ -0,0 +1,10 @@
from csv import DictWriter
def write_csv(file_descriptor, headers, row):
csv_writer = DictWriter(
file_descriptor,
fieldnames=headers,
delimiter=";",
)
csv_writer.writerow(row.__dict__)
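A minimal usage sketch for the write_csv helper above, importing it from prowler.lib.outputs.csv (the path used elsewhere in this change) and using a hypothetical two-field row object:

from dataclasses import dataclass

from prowler.lib.outputs.csv import write_csv

@dataclass
class Row:  # hypothetical row model; real callers pass Check_Output_CSV_* instances
    Provider: str
    Status: str

with open("cis_example.csv", "w", newline="") as fd:
    write_csv(fd, ["Provider", "Status"], Row(Provider="gcp", Status="FAIL"))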

View File

@@ -12,8 +12,6 @@ from prowler.config.config import (
from prowler.lib.logger import logger
from prowler.lib.outputs.html import add_html_header
from prowler.lib.outputs.models import (
Aws_Check_Output_CSV,
Azure_Check_Output_CSV,
Check_Output_CSV_AWS_CIS,
Check_Output_CSV_AWS_ISO27001_2013,
Check_Output_CSV_AWS_Well_Architected,
@@ -21,19 +19,19 @@ from prowler.lib.outputs.models import (
Check_Output_CSV_GCP_CIS,
Check_Output_CSV_Generic_Compliance,
Check_Output_MITRE_ATTACK,
Gcp_Check_Output_CSV,
generate_csv_fields,
)
from prowler.lib.utils.utils import file_exists, open_file
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
from prowler.providers.common.outputs import get_provider_output_model
from prowler.providers.gcp.lib.audit_info.models import GCP_Audit_Info
def initialize_file_descriptor(
filename: str,
output_mode: str,
audit_info: AWS_Audit_Info,
audit_info: Any,
format: Any = None,
) -> TextIOWrapper:
"""Open/Create the output file. If needed include headers or the required format"""
@@ -75,27 +73,15 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
for output_mode in output_modes:
if output_mode == "csv":
filename = f"{output_directory}/{output_filename}{csv_file_suffix}"
if isinstance(audit_info, AWS_Audit_Info):
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Aws_Check_Output_CSV,
)
if isinstance(audit_info, Azure_Audit_Info):
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Azure_Check_Output_CSV,
)
if isinstance(audit_info, GCP_Audit_Info):
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Gcp_Check_Output_CSV,
)
output_model = get_provider_output_model(
audit_info.__class__.__name__
)
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
output_model,
)
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "json":
@@ -123,7 +109,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
elif isinstance(audit_info, GCP_Audit_Info):
if output_mode == "cis_2.0_gcp":
filename = f"{output_directory}/{output_filename}_cis_2.0_gcp{csv_file_suffix}"
filename = f"{output_directory}/compliance/{output_filename}_cis_2.0_gcp{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename, output_mode, audit_info, Check_Output_CSV_GCP_CIS
)
@@ -138,7 +124,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "ens_rd2022_aws":
filename = f"{output_directory}/{output_filename}_ens_rd2022_aws{csv_file_suffix}"
filename = f"{output_directory}/compliance/{output_filename}_ens_rd2022_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
@@ -148,14 +134,14 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "cis_1.5_aws":
filename = f"{output_directory}/{output_filename}_cis_1.5_aws{csv_file_suffix}"
filename = f"{output_directory}/compliance/{output_filename}_cis_1.5_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename, output_mode, audit_info, Check_Output_CSV_AWS_CIS
)
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "cis_1.4_aws":
filename = f"{output_directory}/{output_filename}_cis_1.4_aws{csv_file_suffix}"
filename = f"{output_directory}/compliance/{output_filename}_cis_1.4_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename, output_mode, audit_info, Check_Output_CSV_AWS_CIS
)
@@ -165,7 +151,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
output_mode
== "aws_well_architected_framework_security_pillar_aws"
):
filename = f"{output_directory}/{output_filename}_aws_well_architected_framework_security_pillar_aws{csv_file_suffix}"
filename = f"{output_directory}/compliance/{output_filename}_aws_well_architected_framework_security_pillar_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
@@ -178,7 +164,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
output_mode
== "aws_well_architected_framework_reliability_pillar_aws"
):
filename = f"{output_directory}/{output_filename}_aws_well_architected_framework_reliability_pillar_aws{csv_file_suffix}"
filename = f"{output_directory}/compliance/{output_filename}_aws_well_architected_framework_reliability_pillar_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
@@ -188,7 +174,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "iso27001_2013_aws":
filename = f"{output_directory}/{output_filename}_iso27001_2013_aws{csv_file_suffix}"
filename = f"{output_directory}/compliance/{output_filename}_iso27001_2013_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
@@ -198,7 +184,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "mitre_attack_aws":
filename = f"{output_directory}/{output_filename}_mitre_attack_aws{csv_file_suffix}"
filename = f"{output_directory}/compliance/{output_filename}_mitre_attack_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
@@ -209,14 +195,26 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
else:
# Generic Compliance framework
filename = f"{output_directory}/{output_filename}_{output_mode}{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_CSV_Generic_Compliance,
)
file_descriptors.update({output_mode: file_descriptor})
if (
isinstance(audit_info, AWS_Audit_Info)
and "aws" in output_mode
or (
isinstance(audit_info, Azure_Audit_Info)
and "azure" in output_mode
)
or (
isinstance(audit_info, GCP_Audit_Info)
and "gcp" in output_mode
)
):
filename = f"{output_directory}/compliance/{output_filename}_{output_mode}{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_CSV_Generic_Compliance,
)
file_descriptors.update({output_mode: file_descriptor})
except Exception as error:
logger.error(

View File
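The hunk above replaces three per-provider isinstance branches with a single lookup through get_provider_output_model. A plausible sketch of such a resolver, assuming it derives the model name from the audit-info class name; the real implementation in prowler/providers/common/outputs.py may differ:

import importlib

def get_provider_output_model(audit_info_class_name: str):
    # "AWS_Audit_Info" -> "Aws_Check_Output_CSV", "GCP_Audit_Info" -> "Gcp_Check_Output_CSV"
    model_name = f"{audit_info_class_name.split('_')[0].capitalize()}_Check_Output_CSV"
    return getattr(importlib.import_module("prowler.lib.outputs.models"), model_name)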

@@ -21,6 +21,7 @@ from prowler.lib.utils.utils import open_file
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
from prowler.providers.gcp.lib.audit_info.models import GCP_Audit_Info
from prowler.providers.kubernetes.lib.audit_info.models import Kubernetes_Audit_Info
def add_html_header(file_descriptor, audit_info):
@@ -169,11 +170,11 @@ def add_html_header(file_descriptor, audit_info):
def fill_html(file_descriptor, finding, output_options):
try:
row_class = "p-3 mb-2 bg-success-custom"
if finding.status == "INFO":
if finding.status == "MANUAL":
row_class = "table-info"
elif finding.status == "FAIL":
row_class = "table-danger"
elif finding.status == "WARNING":
elif finding.status == "MUTED":
row_class = "table-warning"
file_descriptor.write(
f"""
@@ -338,8 +339,9 @@ def add_html_footer(output_filename, output_directory):
def get_aws_html_assessment_summary(audit_info):
try:
if isinstance(audit_info, AWS_Audit_Info):
if not audit_info.profile:
audit_info.profile = "ENV"
profile = (
audit_info.profile if audit_info.profile is not None else "default"
)
if isinstance(audit_info.audited_regions, list):
audited_regions = " ".join(audit_info.audited_regions)
elif not audit_info.audited_regions:
@@ -361,7 +363,7 @@ def get_aws_html_assessment_summary(audit_info):
</li>
<li class="list-group-item">
<b>AWS-CLI Profile:</b> """
+ audit_info.profile
+ profile
+ """
</li>
<li class="list-group-item">
@@ -521,6 +523,53 @@ def get_gcp_html_assessment_summary(audit_info):
sys.exit(1)
def get_kubernetes_html_assessment_summary(audit_info):
try:
if isinstance(audit_info, Kubernetes_Audit_Info):
return (
"""
<div class="col-md-2">
<div class="card">
<div class="card-header">
Kubernetes Assessment Summary
</div>
<ul class="list-group list-group-flush">
<li class="list-group-item">
<b>Kubernetes Context:</b> """
+ audit_info.context["name"]
+ """
</li>
</ul>
</div>
</div>
<div class="col-md-4">
<div class="card">
<div class="card-header">
Kubernetes Credentials
</div>
<ul class="list-group list-group-flush">
<li class="list-group-item">
<b>Kubernetes Cluster:</b> """
+ audit_info.context["context"]["cluster"]
+ """
</li>
<li class="list-group-item">
<b>Kubernetes User:</b> """
+ audit_info.context["context"]["user"]
+ """
</li>
</ul>
</div>
</div>
"""
)
except Exception as error:
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
def get_assessment_summary(audit_info):
"""
get_assessment_summary gets the HTML assessment summary for the provider
@@ -531,6 +580,7 @@ def get_assessment_summary(audit_info):
# AWS_Audit_Info --> aws
# GCP_Audit_Info --> gcp
# Azure_Audit_Info --> azure
# Kubernetes_Audit_Info --> kubernetes
provider = audit_info.__class__.__name__.split("_")[0].lower()
# Dynamically get the Provider quick inventory handler

View File
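The new Kubernetes summary above indexes into the active kubeconfig context; a context of the shape it expects looks roughly like this (values illustrative):

# illustrative kubeconfig context matching the lookups in get_kubernetes_html_assessment_summary
context = {
    "name": "my-context",
    "context": {
        "cluster": "my-cluster",
        "user": "my-user",
    },
}
assert context["context"]["cluster"] == "my-cluster"
assert context["context"]["user"] == "my-user"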

@@ -31,6 +31,7 @@ from prowler.lib.outputs.models import (
unroll_dict_to_list,
)
from prowler.lib.utils.utils import hash_sha512, open_file, outputs_unix_timestamp
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
def fill_json_asff(finding_output, audit_info, finding, output_options):
@@ -115,8 +116,8 @@ def generate_json_asff_status(status: str) -> str:
json_asff_status = "PASSED"
elif status == "FAIL":
json_asff_status = "FAILED"
elif status == "WARNING":
json_asff_status = "WARNING"
elif status == "MUTED":
json_asff_status = "MUTED"
else:
json_asff_status = "NOT_AVAILABLE"
@@ -155,6 +156,11 @@ def fill_json_ocsf(audit_info, finding, output_options) -> Check_Output_JSON_OCS
aws_org_uid = ""
account = None
org = None
profile = ""
if isinstance(audit_info, AWS_Audit_Info):
profile = (
audit_info.profile if audit_info.profile is not None else "default"
)
if (
hasattr(audit_info, "organizations_metadata")
and audit_info.organizations_metadata
@@ -249,9 +255,7 @@ def fill_json_ocsf(audit_info, finding, output_options) -> Check_Output_JSON_OCS
original_time=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
profiles=[audit_info.profile]
if hasattr(audit_info, "organizations_metadata")
else [],
profiles=[profile],
)
compliance = Compliance_OCSF(
status=generate_json_ocsf_status(finding.status),
@@ -289,7 +293,7 @@ def generate_json_ocsf_status(status: str):
json_ocsf_status = "Success"
elif status == "FAIL":
json_ocsf_status = "Failure"
elif status == "WARNING":
elif status == "MUTED":
json_ocsf_status = "Other"
else:
json_ocsf_status = "Unknown"
@@ -303,7 +307,7 @@ def generate_json_ocsf_status_id(status: str):
json_ocsf_status_id = 1
elif status == "FAIL":
json_ocsf_status_id = 2
elif status == "WARNING":
elif status == "MUTED":
json_ocsf_status_id = 99
else:
json_ocsf_status_id = 0
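Taken together, the renamed branches above give the following OCSF status pairings. A quick sanity check, assuming the two helpers are imported from prowler.lib.outputs.json and that the PASS branch (cut off in the hunk) mirrors the status helper:

from prowler.lib.outputs.json import generate_json_ocsf_status, generate_json_ocsf_status_id

expected = {"PASS": ("Success", 1), "FAIL": ("Failure", 2), "MUTED": ("Other", 99), "MANUAL": ("Unknown", 0)}
for status, (name, code) in expected.items():
    assert (generate_json_ocsf_status(status), generate_json_ocsf_status_id(status)) == (name, code)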

View File

@@ -10,10 +10,19 @@ from prowler.config.config import prowler_version, timestamp
from prowler.lib.check.models import Remediation
from prowler.lib.logger import logger
from prowler.lib.utils.utils import outputs_unix_timestamp
from prowler.providers.aws.lib.audit_info.models import AWS_Organizations_Info
from prowler.providers.aws.lib.audit_info.models import AWSOrganizationsInfo
def get_check_compliance(finding, provider, output_options):
def get_check_compliance(finding, provider, output_options) -> dict:
"""get_check_compliance returns a map with the compliance framework as key and the requirements where the finding's check is present.
Example:
{
"CIS-1.4": ["2.1.3"],
"CIS-1.5": ["2.1.3"],
}
"""
try:
check_compliance = {}
# We have to retrieve all the check's compliance requirements
@@ -76,6 +85,18 @@ def generate_provider_output_csv(
)
finding_output = output_model(**data)
if provider == "kubernetes":
data["resource_id"] = finding.resource_id
data["resource_name"] = finding.resource_name
data["namespace"] = finding.namespace
data[
"finding_unique_id"
] = f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.namespace}-{finding.resource_id}"
data["compliance"] = unroll_dict(
get_check_compliance(finding, provider, output_options)
)
finding_output = output_model(**data)
if provider == "aws":
data["profile"] = audit_info.profile
data["account_id"] = audit_info.audited_account
@@ -348,6 +369,16 @@ class Gcp_Check_Output_CSV(Check_Output_CSV):
resource_name: str = ""
class Kubernetes_Check_Output_CSV(Check_Output_CSV):
"""
Kubernetes_Check_Output_CSV generates a finding's output in CSV format for the Kubernetes provider.
"""
namespace: str = ""
resource_id: str = ""
resource_name: str = ""
def generate_provider_output_json(
provider: str, finding, audit_info, mode: str, output_options
):
@@ -452,7 +483,7 @@ class Aws_Check_Output_JSON(Check_Output_JSON):
Profile: str = ""
AccountId: str = ""
OrganizationsInfo: Optional[AWS_Organizations_Info]
OrganizationsInfo: Optional[AWSOrganizationsInfo]
Region: str = ""
ResourceId: str = ""
ResourceArn: str = ""
@@ -478,7 +509,7 @@ class Azure_Check_Output_JSON(Check_Output_JSON):
class Gcp_Check_Output_JSON(Check_Output_JSON):
"""
Gcp_Check_Output_JSON generates a finding's output in JSON format for the AWS provider.
Gcp_Check_Output_JSON generates a finding's output in JSON format for the GCP provider.
"""
ProjectId: str = ""
@@ -490,6 +521,19 @@ class Gcp_Check_Output_JSON(Check_Output_JSON):
super().__init__(**metadata)
class Kubernetes_Check_Output_JSON(Check_Output_JSON):
"""
Kubernetes_Check_Output_JSON generates a finding's output in JSON format for the Kubernetes provider.
"""
ResourceId: str = ""
ResourceName: str = ""
Namespace: str = ""
def __init__(self, **metadata):
super().__init__(**metadata)
class Check_Output_MITRE_ATTACK(BaseModel):
"""
Check_Output_MITRE_ATTACK generates a finding's output in CSV MITRE ATTACK format.

View File

@@ -4,7 +4,10 @@ from colorama import Fore, Style
from prowler.config.config import available_compliance_frameworks, orange_color
from prowler.lib.logger import logger
from prowler.lib.outputs.compliance import add_manual_controls, fill_compliance
from prowler.lib.outputs.compliance.compliance import (
add_manual_controls,
fill_compliance,
)
from prowler.lib.outputs.file_descriptors import fill_file_descriptors
from prowler.lib.outputs.html import fill_html
from prowler.lib.outputs.json import fill_json_asff, fill_json_ocsf
@@ -17,15 +20,17 @@ from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
def stdout_report(finding, color, verbose, is_quiet):
def stdout_report(finding, color, verbose, status):
if finding.check_metadata.Provider == "aws":
details = finding.region
if finding.check_metadata.Provider == "azure":
details = finding.check_metadata.ServiceName
if finding.check_metadata.Provider == "gcp":
details = finding.location.lower()
if finding.check_metadata.Provider == "kubernetes":
details = finding.namespace.lower()
if verbose and not (is_quiet and finding.status != "FAIL"):
if verbose and (not status or finding.status in status):
print(
f"\t{color}{finding.status}{Style.RESET_ALL} {details}: {finding.status_extended}"
)
@@ -57,28 +62,35 @@ def report(check_findings, output_options, audit_info):
# Print findings by stdout
color = set_report_color(finding.status)
stdout_report(
finding, color, output_options.verbose, output_options.is_quiet
finding, color, output_options.verbose, output_options.status
)
if file_descriptors:
# Check if --quiet to only add fails to outputs
if not (finding.status != "FAIL" and output_options.is_quiet):
if any(
compliance in output_options.output_modes
for compliance in available_compliance_frameworks
):
fill_compliance(
output_options,
finding,
audit_info,
file_descriptors,
# Check if --status is enabled and if the filter applies
if (
not output_options.status
or finding.status in output_options.status
):
input_compliance_frameworks = list(
set(output_options.output_modes).intersection(
available_compliance_frameworks
)
)
add_manual_controls(
output_options,
audit_info,
file_descriptors,
)
fill_compliance(
output_options,
finding,
audit_info,
file_descriptors,
input_compliance_frameworks,
)
add_manual_controls(
output_options,
audit_info,
file_descriptors,
input_compliance_frameworks,
)
# AWS specific outputs
if finding.check_metadata.Provider == "aws":
@@ -140,7 +152,7 @@ def report(check_findings, output_options, audit_info):
file_descriptors["json-ocsf"].write(",")
else: # No service resources in the whole account
color = set_report_color("INFO")
color = set_report_color("MANUAL")
if output_options.verbose:
print(f"\t{color}INFO{Style.RESET_ALL} There are no resources")
# Separator between findings and bar
@@ -165,12 +177,12 @@ def set_report_color(status: str) -> str:
color = Fore.RED
elif status == "ERROR":
color = Fore.BLACK
elif status == "WARNING":
elif status == "MUTED":
color = orange_color
elif status == "INFO":
elif status == "MANUAL":
color = Fore.YELLOW
else:
raise Exception("Invalid Report Status. Must be PASS, FAIL, ERROR or WARNING")
raise Exception("Invalid Report Status. Must be PASS, FAIL, ERROR or MUTED")
return color
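With the move from is_quiet to --status, the stdout filter above reduces to one predicate: print when verbose is on and either no status filter is set or the finding's status is in it. A restatement with a few checks (hypothetical helper, not part of the diff):

def should_print(verbose: bool, status_filter: list, finding_status: str) -> bool:
    # mirrors: verbose and (not status or finding.status in status)
    return verbose and (not status_filter or finding_status in status_filter)

assert should_print(True, [], "PASS")            # no filter: every finding prints
assert should_print(True, ["FAIL"], "FAIL")
assert not should_print(True, ["FAIL"], "PASS")  # filtered out by --status FAIL
assert not should_print(False, [], "FAIL")       # verbose off: nothing prints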

View File

@@ -39,6 +39,9 @@ def display_summary_table(
elif provider == "gcp":
entity_type = "Project ID/s"
audited_entities = ", ".join(audit_info.project_ids)
elif provider == "kubernetes":
entity_type = "Context"
audited_entities = audit_info.context["name"]
if findings:
current = {

View File

@@ -0,0 +1,485 @@
import os
import pathlib
from datetime import timedelta
from time import time
from rich.align import Align
from rich.console import Console, Group
from rich.layout import Layout
from rich.live import Live
from rich.padding import Padding
from rich.panel import Panel
from rich.progress import (
BarColumn,
MofNCompleteColumn,
Progress,
TextColumn,
TimeElapsedColumn,
TimeRemainingColumn,
)
from rich.rule import Rule
from rich.table import Table
from rich.text import Text
from rich.theme import Theme
from prowler.config.config import prowler_version, timestamp
from prowler.providers.aws.models import AWSIdentityInfo, AWSAssumeRole
# Defines a subclass of Live for creating and managing the live display in the CLI
class LiveDisplay(Live):
def __init__(self, *args, **kwargs):
# Load a theme for the console display from a file
theme = self.load_theme_from_file()
super().__init__(renderable=None, console=Console(theme=theme), *args, **kwargs)
self.sections = {} # Stores different sections of the layout
self.enabled = False # Flag to enable or disable the live display
# Sets up the layout of the live display
def make_layout(self):
"""
Defines the layout.
Making sections invisible so the default Layout metadata doesn't show before content is added
Text(" ") is to stop the layout metadata from rendering before the layout is updated with real content
client_and_service handles client init (when importing clients) and service check execution
"""
self.layout = Layout(name="root")
# Split layout into intro, overall progress, and main sections
self.layout.split(
Layout(name="intro", ratio=3, minimum_size=9),
Layout(Text(" "), name="overall_progress", minimum_size=5),
Layout(name="main", ratio=10),
)
# Further split intro layout into body and creds sections
self.layout["intro"].split_row(
Layout(name="body", ratio=3),
Layout(name="creds", ratio=2, visible=False),
)
# Split main layout into client_and_service and results sections
self.layout["main"].split_row(
Layout(
Text(" "), name="client_and_service", ratio=3
), # For client_init and service
Layout(name="results", ratio=2, visible=False),
)
# Loads a theme from a YAML file located in the same directory as this file
def load_theme_from_file(self):
# Loads theme.yaml from the same folder as this file
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
with open(f"{actual_directory}/theme.yaml") as f:
theme = Theme.from_file(f)
return theme
# Initializes the layout and sections based on CLI arguments
def initialize(self, args):
# A way to get around passing args to LiveDisplay when it is initialized
# This is so that the live_display object can be initialized in this file and imported to other parts of prowler
self.cli_args = args
self.enabled = not args.only_logs
if self.enabled:
# Initialize layout
self.make_layout()
# Apply layout
self.update(self.layout)
# Add Intro section
intro_layout = self.layout["intro"]
intro_section = IntroSection(args, intro_layout)
self.sections["intro"] = intro_section
# Start live display
self.start()
# Adds AWS credentials to the display
def print_aws_credentials(self, aws_identity_info: AWSIdentityInfo, assumed_role_info: AWSAssumeRole):
# Adds the AWS credentials to the display - will need to extend to gcp and azure
# Create a new function for gcp and azure in this class that will call a function in the intro_section class
intro_section = self.sections["intro"]
intro_section.add_aws_credentials(aws_identity_info, assumed_role_info)
# Adds and manages the overall progress section
def add_overall_progress_section(self, total_checks_dict):
overall_progress_section = OverallProgressSection(total_checks_dict)
overall_progress_layout = self.layout["overall_progress"]
overall_progress_layout.update(overall_progress_section)
overall_progress_layout.visible = True
self.sections["overall_progress"] = overall_progress_section
# Add results section
self.add_results_section()
# Wrapper function to increment the overall progress
def increment_overall_check_progress(self):
# Called by ExecutionManager
if self.enabled:
section = self.sections["overall_progress"]
section.increment_check_progress()
# Wrapper function to increment the progress for the current service
def increment_overall_service_progress(self):
# Called by ExecutionManager
if self.enabled:
section = self.sections["overall_progress"]
section.increment_service_progress()
# Adds and manages the results section
def add_results_section(self):
# Initializes the results section
results_layout = self.layout["results"]
results_section = ResultsSection()
results_layout.update(results_section)
results_layout.visible = True
self.sections["results"] = results_section
def add_results_for_service(self, service_name, service_findings):
# Adds rows to the Service Check Results table
if self.enabled:
results_section = self.sections["results"]
results_section.add_results_for_service(service_name, service_findings)
# Client Init Section
def add_client_init_section(self, service_name):
# Used to track progress of client init process
if self.enabled:
client_init_section = ClientInitSection(service_name)
self.sections["client_and_service"] = client_init_section
self.layout["client_and_service"].update(client_init_section)
self.layout["client_and_service"].visible = True
# Service Section
def add_service_section(self, service_name, total_checks):
# Used to create the ServiceSection when checks start to execute (after clients have been imported)
if self.enabled:
service_section = ServiceSection(service_name, total_checks)
self.sections["client_and_service"] = service_section
self.layout["client_and_service"].update(service_section)
def increment_check_progress(self):
if self.enabled:
service_section = self.sections["client_and_service"]
service_section.increment_check_progress()
# Misc
def get_service_section(self):
# Used by Check
if self.enabled:
return self.sections["client_and_service"]
def get_client_init_section(self):
# Used by AWSService
if self.enabled:
return self.sections["client_and_service"]
def hide_service_section(self):
# To hide the last service after execution has completed
self.layout["client_and_service"].visible = False
def print_message(self, message):
# Not used yet
self.console.print(message)
# The following classes (ServiceSection, ClientInitSection, IntroSection, OverallProgressSection, ResultsSection)
# are used to define different sections of the live display, each with its own layout, progress bars,
class ServiceSection:
def __init__(self, service_name, total_checks) -> None:
self.service_name = service_name
self.total_checks = total_checks
self.renderables = self.create_service_section()
self.start_check_progress()
def __rich__(self):
return Padding(self.renderables, (2, 2))
def create_service_section(self):
# Create the progress components
self.check_progress = Progress(
TextColumn("[bold]{task.description}"),
BarColumn(bar_width=None),
MofNCompleteColumn(),
transient=False, # Optional: set True if you want the progress bar to disappear after completion
)
# Used to add titles that don't need progress bars
self.title_bar = Progress(
TextColumn("[progress.description]{task.description}"), transient=True
)
# Progress Bar for Service Init and Checks
self.task_progress = Progress(
TextColumn("[progress.description]{task.description}"),
BarColumn(bar_width=None),
MofNCompleteColumn(),
TimeElapsedColumn(),
TimeRemainingColumn(),
transient=True,
)
return Group(
Panel(
Group(
self.check_progress,
Rule(style="bold blue"),
self.title_bar,
Rule(style="bold blue"),
self.task_progress,
),
title=f"Service: {self.service_name}",
),
)
def start_check_progress(self):
self.check_progress_task_id = self.check_progress.add_task(
"Checks executed", total=self.total_checks
)
def increment_check_progress(self):
self.check_progress.update(self.check_progress_task_id, advance=1)
class ClientInitSection:
def __init__(self, client_name) -> None:
self.client_name = client_name
self.renderables = self.create_client_init_section()
def __rich__(self):
return Padding(self.renderables, (2, 2))
def create_client_init_section(self):
# Progress Bar for Checks
self.task_progress_bar = Progress(
TextColumn("[progress.description]{task.description}"),
BarColumn(bar_width=None),
MofNCompleteColumn(),
TimeElapsedColumn(),
TimeRemainingColumn(),
transient=True,
)
return Group(
Panel(
Group(
self.task_progress_bar,
),
title=f"Intializing {self.client_name.replace('_', ' ')}",
),
)
class IntroSection:
def __init__(self, args, layout: Layout) -> None:
self.body_layout = layout["body"]
self.creds_layout = layout["creds"]
self.renderables = []
self.title = f"Prowler v{prowler_version}"
if not args.no_banner:
self.create_banner(args)
def __rich__(self):
return Group(*self.renderables)
def create_banner(self, args):
banner_text = f"""[banner_color] _
_ __ _ __ _____ _| | ___ _ __
| '_ \| '__/ _ \ \ /\ / / |/ _ \ '__|
| |_) | | | (_) \ V V /| | __/ |
| .__/|_| \___/ \_/\_/ |_|\___|_|v{prowler_version}
|_|[/banner_color][banner_blue]the handy cloud security tool[/banner_blue]
[info]Date: {timestamp.strftime('%Y-%m-%d %H:%M:%S')}[/info]
"""
if args.verbose:
banner_text += """
Color code for results:
- [info]INFO (Information)[/info]
- [pass]PASS (Recommended value)[/pass]
- [orange_color]WARNING (Ignored by mutelist)[/orange_color]
- [fail]FAIL (Fix required)[/fail]
"""
self.renderables.append(banner_text)
self.body_layout.update(Group(*self.renderables))
self.body_layout.visible = True
def add_aws_credentials(self, aws_identity_info: AWSIdentityInfo, assumed_role_info: AWSAssumeRole):
# Beautify audited regions, and set to "all" if there is no filter region
regions = (
", ".join(aws_identity_info.audited_regions)
if aws_identity_info.audited_regions is not None
else "all"
)
# Beautify audited profile, and set to "default" if there is no profile set
profile = aws_identity_info.profile if aws_identity_info.profile is not None else "default"
content = Text()
content.append(
"This report is being generated using credentials below:\n\n", style="bold"
)
content.append("AWS-CLI Profile: ", style="bold")
content.append(f"[{profile}]\n", style="info")
content.append("AWS Filter Region: ", style="bold")
content.append(f"[{regions}]\n", style="info")
content.append("AWS Account: ", style="bold")
content.append(f"[{aws_identity_info.account}]\n", style="info")
content.append("UserId: ", style="bold")
content.append(f"[{aws_identity_info.user_id}]\n", style="info")
content.append("Caller Identity ARN: ", style="bold")
content.append(f"[{aws_identity_info.identity_arn}]\n", style="info")
# If a role has been assumed, print the Assumed Role ARN
if assumed_role_info.role_arn is not None:
content.append("Assumed Role ARN: ", style="bold")
content.append(f"[{assumed_role_info.role_arn}]\n", style="info")
self.creds_layout.update(content)
self.creds_layout.visible = True
class OverallProgressSection:
def __init__(self, total_checks_dict: dict) -> None:
self.start_time = time() # Start the timer
self.renderables = self.create_renderable(total_checks_dict)
def __rich__(self):
elapsed_time = self.total_time_taken()
return Group(*self.renderables, f"Total time taken: {elapsed_time}")
def total_time_taken(self):
elapsed_seconds = int(time() - self.start_time)
elapsed_time = timedelta(seconds=elapsed_seconds)
return elapsed_time
def create_renderable(self, total_checks_dict):
services_num = len(total_checks_dict) # number of keys == number of services
checks_num = sum(total_checks_dict.values())
plural_string = "checks"
singular_string = "check"
check_noun = plural_string if checks_num > 1 else singular_string
# Create the progress bar
self.overall_progress_bar = Progress(
TextColumn("[bold]{task.description}"),
BarColumn(bar_width=None),
MofNCompleteColumn(),
transient=False, # Optional: set True if you want the progress bar to disappear after completion
)
# Create the Services Completed task, to track the number of services completed
self.service_progress_task_id = self.overall_progress_bar.add_task(
"Services completed", total=services_num
)
# Create the Checks Completed task, to track the number of checks completed across all services
self.check_progress_task_id = self.overall_progress_bar.add_task(
"Checks executed", total=checks_num
)
content = Text()
content.append(
f"Executing {checks_num} {check_noun} across {services_num} services, please wait...\n",
style="bold",
)
return [content, self.overall_progress_bar]
def increment_check_progress(self):
self.overall_progress_bar.update(self.check_progress_task_id, advance=1)
def increment_service_progress(self):
self.overall_progress_bar.update(self.service_progress_task_id, advance=1)
class ResultsSection:
def __init__(self, verbose=True):
self.verbose = verbose
self.table = Table(title="Service Check Results")
self.table.add_column("Service", justify="left")
if self.verbose:
self.severities = ["critical", "high", "medium", "low"]
# Add columns for each severity level when verbose, report on the count of fails per severity per service
for severity in self.severities:
styled_header = (
f"[{severity.lower()}]{severity.capitalize()}[/{severity.lower()}]"
)
self.table.add_column(styled_header, justify="center")
else:
# Dynamically track the statuses and report the status counts for each service
self.status_columns = set(["PASS", "FAIL"])
self.service_findings = {} # Dictionary to store findings for each service
# Dictionary to map plain statuses to their stylized forms
self.status_headers = {
"FAIL": "[fail]Fail[/fail]",
"PASS": "[pass]Pass[/pass]",
}
# Add the initial columns with styling
for status, header in self.status_headers.items():
self.table.add_column(header, justify="center")
def add_results_for_service(self, service_name, service_findings):
if self.verbose:
# Count fails per severity
severity_counts = {severity: 0 for severity in self.severities}
for finding in service_findings:
if finding.status == "FAIL":
severity_counts[finding.check_metadata.Severity] += 1
# Add row with severity counts
row = [service_name] + [
str(severity_counts[severity]) for severity in self.severities
]
self.table.add_row(*row)
else:
# Update the dictionary with the new findings
status_counts = {report.status: 0 for report in service_findings}
for report in service_findings:
status_counts[report.status] += 1
self.service_findings[service_name] = status_counts
# Update status_columns and table columns
self.status_columns.update(status_counts.keys())
for status in self.status_columns:
if status not in self.status_headers:
# [{status.lower()}] is for the styling (defined in theme.yaml)
# If new status, add it to status_headers and table
styled_header = (
f"[{status.lower()}]{status.capitalize()}[/{status.lower()}]"
)
self.status_headers[status] = styled_header
self.table.add_column(styled_header, justify="center")
# Update the table with findings for all services
self._update_table()
def _update_table(self):
# Used when verbose = False
# Clear existing rows
self.table.rows.clear()
# Add updated rows for all services
for service, counts in self.service_findings.items():
row = [service]
# Iterate in status_headers insertion order so counts line up with the table columns
for status in self.status_headers:
count = counts.get(status, 0)
percentage = (
f"{(count / sum(counts.values()) * 100):.2f}%" if counts else "0%"
)
row.append(f"{count} ({percentage})")
self.table.add_row(*row)
def __rich__(self):
# This method allows the ResultsSection to be directly rendered by Rich
if not self.table.rows:
return Text("")
return Padding(Align.center(self.table), (0, 2))
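A hypothetical call showing how the verbose table is fed; SimpleNamespace stands in for Prowler's real finding objects:

from types import SimpleNamespace

findings = [
    SimpleNamespace(status="FAIL", check_metadata=SimpleNamespace(Severity="high")),
    SimpleNamespace(status="PASS", check_metadata=SimpleNamespace(Severity="low")),
]
section = ResultsSection(verbose=True)
section.add_results_for_service("s3", findings)
# Adds the row: s3 | 0 | 1 | 0 | 0 (critical, high, medium, low fails)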
# Create an instance of LiveDisplay to import elsewhere (ExecutionManager, the checks, the services)
live_display = LiveDisplay(vertical_overflow="visible")

prowler/lib/ui/theme.yaml

@@ -0,0 +1,16 @@
[styles]
info = yellow1
warning = dark_orange
fail = bold red
pass = bold green
banner_blue = dodger_blue3 bold
banner_color = bold green
orange_color = dark_orange
critical = bold bright_red
high = bold red
medium = bold dark_orange
low = bold yellow1
# style names must be lower case, start with a letter, and only contain letters or the characters ".", "-", "_".
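Rich's Theme.read parses exactly this INI-style [styles] section, so the theme can be loaded as below; a minimal sketch assuming the file lives at the path shown above:

from rich.console import Console
from rich.theme import Theme

theme = Theme.read("prowler/lib/ui/theme.yaml")  # parses the [styles] section above
console = Console(theme=theme)
console.print("2 findings", style="fail")  # rendered bold red per the theme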


@@ -11,7 +11,7 @@ from prowler.lib.check.check import list_modules, recover_checks_from_service
from prowler.lib.logger import logger
from prowler.lib.utils.utils import open_file, parse_json_file
from prowler.providers.aws.config import AWS_STS_GLOBAL_ENDPOINT_REGION
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info, AWSAssumeRole
from prowler.providers.aws.lib.credentials.credentials import create_sts_session
@@ -109,7 +109,7 @@ class AWS_Provider:
def assume_role(
session: session.Session,
assumed_role_info: AWS_Assume_Role,
assumed_role_info: AWSAssumeRole,
sts_endpoint_region: str = None,
) -> dict:
try:
@@ -152,23 +152,31 @@ def input_role_mfa_token_and_code() -> tuple[str]:
def generate_regional_clients(
service: str, audit_info: AWS_Audit_Info, global_service: bool = False
service: str,
audit_info: AWS_Audit_Info,
) -> dict:
"""generate_regional_clients returns a dict with the following format for the given service:
Example:
{"eu-west-1": boto3_service_client}
"""
try:
regional_clients = {}
service_regions = get_available_aws_service_regions(service, audit_info)
# Check if it is global service to gather only one region
if global_service:
if service_regions:
if audit_info.profile_region in service_regions:
service_regions = [audit_info.profile_region]
service_regions = service_regions[:1]
for region in service_regions:
# Get the regions enabled for the account and get the intersection with the service available regions
if audit_info.enabled_regions:
enabled_regions = service_regions.intersection(audit_info.enabled_regions)
else:
enabled_regions = service_regions
for region in enabled_regions:
regional_client = audit_info.audit_session.client(
service, region_name=region, config=audit_info.session_config
)
regional_client.region = region
regional_clients[region] = regional_client
return regional_clients
except Exception as error:
logger.error(
@@ -176,6 +184,22 @@ def generate_regional_clients(
)
def get_aws_enabled_regions(audit_info: AWS_Audit_Info) -> set:
"""get_aws_enabled_regions returns a set of enabled AWS regions"""
# EC2 Client to check enabled regions
service = "ec2"
default_region = get_default_region(service, audit_info)
ec2_client = audit_info.audit_session.client(service, region_name=default_region)
enabled_regions = set()
# With AllRegions=False we only get the enabled regions for the account
for region in ec2_client.describe_regions(AllRegions=False).get("Regions", []):
enabled_regions.add(region.get("RegionName"))
return enabled_regions
def get_aws_available_regions():
try:
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
@@ -216,6 +240,8 @@ def get_checks_from_input_arn(audit_resources: list, provider: str) -> set:
service = "efs"
elif service == "logs":
service = "cloudwatch"
elif service == "cognito":
service = "cognito-idp"
# Check if Prowler has checks in service
try:
list_modules(provider, service)
@@ -267,17 +293,18 @@ def get_regions_from_audit_resources(audit_resources: list) -> set:
return audited_regions
def get_available_aws_service_regions(service: str, audit_info: AWS_Audit_Info) -> list:
def get_available_aws_service_regions(service: str, audit_info: AWS_Audit_Info) -> set:
# Get json locally
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
with open_file(f"{actual_directory}/{aws_services_json_file}") as f:
data = parse_json_file(f)
# Check if it is a subservice
json_regions = data["services"][service]["regions"][audit_info.audited_partition]
if audit_info.audited_regions: # Check for input aws audit_info.audited_regions
regions = list(
set(json_regions).intersection(audit_info.audited_regions)
) # Get common regions between input and json
json_regions = set(
data["services"][service]["regions"][audit_info.audited_partition]
)
# Check for input aws audit_info.audited_regions
if audit_info.audited_regions:
# Get common regions between input and json
regions = json_regions.intersection(audit_info.audited_regions)
else: # Get all regions from json of the service and partition
regions = json_regions
return regions
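The filtering in generate_regional_clients and get_available_aws_service_regions above reduces to set intersections; a standalone sketch with hypothetical region sets:

json_regions = {"eu-west-1", "us-east-1", "us-west-2"}  # service regions from the JSON
audited_regions = {"eu-west-1", "eu-central-1"}         # -f/--region input
enabled_regions = {"eu-west-1", "us-east-1"}            # regions enabled for the account

regions = json_regions.intersection(audited_regions) if audited_regions else json_regions
regions = regions.intersection(enabled_regions) if enabled_regions else regions
print(regions)  # {'eu-west-1'}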


@@ -0,0 +1,539 @@
import os
import pathlib
import sys
from argparse import Namespace
from typing import Any, Optional
from boto3 import client, session
from botocore.config import Config
from botocore.credentials import RefreshableCredentials
from botocore.session import get_session
from colorama import Fore, Style
from prowler.config.config import aws_services_json_file
from prowler.lib.check.check import list_modules, recover_checks_from_service
from prowler.lib.ui.live_display import live_display
from prowler.lib.logger import logger
from prowler.lib.utils.utils import open_file, parse_json_file
from prowler.providers.aws.config import (
AWS_STS_GLOBAL_ENDPOINT_REGION,
BOTO3_USER_AGENT_EXTRA,
)
from prowler.providers.aws.models import (
AWSOrganizationsInfo,
AWSCredentials,
AWSAssumeRole,
AWSAssumeRoleConfiguration,
AWSIdentityInfo,
AWSSession,
)
from prowler.providers.aws.lib.arn.arn import parse_iam_credentials_arn
from prowler.providers.aws.lib.credentials.credentials import (
create_sts_session,
validate_AWSCredentials,
)
from prowler.providers.aws.lib.organizations.organizations import (
get_organizations_metadata,
)
from prowler.providers.common.provider import Provider
class AwsProvider(Provider):
session: AWSSession = AWSSession(
session=None, session_config=None, original_session=None
)
identity: AWSIdentityInfo = AWSIdentityInfo(
account=None,
account_arn=None,
user_id=None,
partition=None,
identity_arn=None,
profile=None,
profile_region=None,
audited_regions=[],
)
assumed_role: AWSAssumeRoleConfiguration = AWSAssumeRoleConfiguration(
assumed_role_info=AWSAssumeRole(
role_arn=None,
session_duration=None,
external_id=None,
mfa_enabled=False,
),
assumed_role_credentials=AWSCredentials(
aws_access_key_id=None,
aws_session_token=None,
aws_secret_access_key=None,
expiration=None,
),
)
organizations_metadata: AWSOrganizationsInfo = AWSOrganizationsInfo(
account_details_email=None,
account_details_name=None,
account_details_arn=None,
account_details_org=None,
account_details_tags=None,
)
audit_resources: Optional[Any]
audit_metadata: Optional[Any]
audit_config: dict = {}
mfa_enabled: bool = False
ignore_unused_services: bool = False
def __init__(self, arguments: Namespace):
logger.info("Setting AWS provider ...")
# Parse input arguments
# Assume Role Options
input_role = getattr(arguments, "role", None)
input_session_duration = getattr(arguments, "session_duration", None)
input_external_id = getattr(arguments, "external_id", None)
# STS Endpoint Region
sts_endpoint_region = getattr(arguments, "sts_endpoint_region", None)
# MFA Configuration (false by default)
input_mfa = getattr(arguments, "mfa", None)
input_profile = getattr(arguments, "profile", None)
input_regions = getattr(arguments, "region", None)
organizations_role_arn = getattr(arguments, "organizations_role", None)
# Set the maximum retries for the standard retrier config
aws_retries_max_attempts = getattr(arguments, "aws_retries_max_attempts", None)
# Set if unused services must be ignored
ignore_unused_services = getattr(arguments, "ignore_unused_services", None)
# Build the Boto3 session configuration (retries and user agent)
self.session.session_config = self.__set_session_config__(
aws_retries_max_attempts
)
# Set ignore unused services
self.ignore_unused_services = ignore_unused_services
# Start populating AWS identity object
self.identity.profile = input_profile
self.identity.audited_regions = input_regions
# We need to create an original session using the regular auth path (creds, profile, etc.)
logger.info("Generating original session ...")
self.session.session = self.setup_session(input_mfa)
# After the session is created, validate it
logger.info("Validating credentials ...")
caller_identity = validate_AWSCredentials(
self.session.session, input_regions, sts_endpoint_region
)
logger.info("Credentials validated")
logger.info(f"Original caller identity UserId: {caller_identity['UserId']}")
logger.info(f"Original caller identity ARN: {caller_identity['Arn']}")
# Set values of AWS identity object
self.identity.account = caller_identity["Account"]
self.identity.identity_arn = caller_identity["Arn"]
self.identity.user_id = caller_identity["UserId"]
self.identity.partition = parse_iam_credentials_arn(
caller_identity["Arn"]
).partition
self.identity.account_arn = (
f"arn:{self.identity.partition}:iam::{self.identity.account}:root"
)
# save original session
self.session.original_session = self.session.session
# time for checking role assumption
if input_role:
# session will be the assumed one
self.session.session = self.setup_assumed_session(
input_role,
input_external_id,
input_mfa,
input_session_duration,
sts_endpoint_region,
)
logger.info("Audit session is the new session created assuming role")
# Check if organizations info is going to be retrieved
if organizations_role_arn:
logger.info(
f"Getting organizations metadata for account {organizations_role_arn}"
)
# session will be the assumed one with organizations permissions
self.session.session = self.setup_assumed_session(
organizations_role_arn,
input_external_id,
input_mfa,
input_session_duration,
sts_endpoint_region,
)
self.organizations_metadata = get_organizations_metadata(
self.identity.account, self.assumed_role.assumed_role_credentials
)
logger.info("Organizations metadata retrieved")
if self.session.session.region_name:
self.identity.profile_region = self.session.session.region_name
else:
self.identity.profile_region = "us-east-1"
if not getattr(arguments, "only_logs", None):
self.print_credentials()
# Parse Scan Tags
if getattr(arguments, "resource_tags", None):
input_resource_tags = arguments.resource_tags
self.audit_resources = self.get_tagged_resources(input_resource_tags)
# Parse Input Resource ARNs (only override if provided, so tagged resources are kept)
if getattr(arguments, "resource_arn", None):
self.audit_resources = arguments.resource_arn
def setup_session(self, input_mfa: bool):
logger.info("Creating regular session ...")
# Input MFA only if a role is not going to be assumed
if input_mfa and not self.assumed_role.assumed_role_info.role_arn:
mfa_ARN, mfa_TOTP = self.__input_role_mfa_token_and_code__()
get_session_token_arguments = {
"SerialNumber": mfa_ARN,
"TokenCode": mfa_TOTP,
}
sts_client = client("sts")
session_credentials = sts_client.get_session_token(
**get_session_token_arguments
)
return session.Session(
aws_access_key_id=session_credentials["Credentials"]["AccessKeyId"],
aws_secret_access_key=session_credentials["Credentials"][
"SecretAccessKey"
],
aws_session_token=session_credentials["Credentials"]["SessionToken"],
profile_name=self.identity.profile,
)
else:
return session.Session(
profile_name=self.identity.profile,
)
def setup_assumed_session(
self,
input_role: str,
input_external_id: str,
input_mfa: str,
session_duration: int,
sts_endpoint_region: str,
):
logger.info("Creating assumed session ...")
# Store information about the role that is going to be assumed
self.assumed_role.assumed_role_info.role_arn = input_role
self.assumed_role.assumed_role_info.session_duration = session_duration
self.assumed_role.assumed_role_info.external_id = input_external_id
self.assumed_role.assumed_role_info.mfa_enabled = input_mfa
# Check if role arn is valid
try:
# This returns the ARN already parsed into an object, for when its fields are needed
role_arn_parsed = parse_iam_credentials_arn(
self.assumed_role.assumed_role_info.role_arn
)
except Exception as error:
logger.critical(f"{error.__class__.__name__} -- {error}")
sys.exit(1)
else:
logger.info(f"Assuming role {self.assumed_role.assumed_role_info.role_arn}")
# Assume the role
assumed_role_response = self.__assume_role__(
self.session.session,
sts_endpoint_region,
)
logger.info("Role assumed")
# Set the info needed to create a session with an assumed role
self.assumed_role.assumed_role_credentials = AWSCredentials(
aws_access_key_id=assumed_role_response["Credentials"]["AccessKeyId"],
aws_session_token=assumed_role_response["Credentials"]["SessionToken"],
aws_secret_access_key=assumed_role_response["Credentials"][
"SecretAccessKey"
],
expiration=assumed_role_response["Credentials"]["Expiration"],
)
# Set identity parameters
self.identity.account = role_arn_parsed.account_id
self.identity.partition = role_arn_parsed.partition
self.identity.account_arn = (
f"arn:{self.identity.partition}:iam::{self.identity.account}:root"
)
# From botocore we can use RefreshableCredentials class, which has an attribute (refresh_using)
# that needs to be a method without arguments that retrieves a new set of fresh credentials
# assuming the role again. -> https://github.com/boto/botocore/blob/098cc255f81a25b852e1ecdeb7adebd94c7b1b73/botocore/credentials.py#L395
assumed_refreshable_credentials = RefreshableCredentials(
access_key=self.assumed_role.assumed_role_credentials.aws_access_key_id,
secret_key=self.assumed_role.assumed_role_credentials.aws_secret_access_key,
token=self.assumed_role.assumed_role_credentials.aws_session_token,
expiry_time=self.assumed_role.assumed_role_credentials.expiration,
refresh_using=self.refresh_credentials,
method="sts-assume-role",
)
# Here we need the botocore session since it needs to use refreshable credentials
assumed_botocore_session = get_session()
assumed_botocore_session._credentials = assumed_refreshable_credentials
assumed_botocore_session.set_config_variable(
"region", self.identity.profile_region
)
return session.Session(
profile_name=self.identity.profile,
botocore_session=assumed_botocore_session,
)
# Refresh credentials method using assume role
# This method is invoked by botocore by name (without arguments), so it cannot accept any
# https://github.com/boto/botocore/blob/098cc255f81a25b852e1ecdeb7adebd94c7b1b73/botocore/credentials.py#L570
def refresh_credentials(self) -> dict:
logger.info("Refreshing assumed credentials...")
response = self.__assume_role__(self.session.session, None)
# Keys must match what RefreshableCredentials expects from refresh_using
refreshed_credentials = dict(
access_key=response["Credentials"]["AccessKeyId"],
secret_key=response["Credentials"]["SecretAccessKey"],
token=response["Credentials"]["SessionToken"],
expiry_time=response["Credentials"]["Expiration"].isoformat(),
)
return refreshed_credentials
def print_credentials(self):
live_display.print_aws_credentials(self.identity, self.assumed_role.assumed_role_info)
def generate_regional_clients(
self, service: str, global_service: bool = False
) -> dict:
try:
regional_clients = {}
service_regions = self.get_available_aws_service_regions(service)
# Check if it is a global service to gather only one region
if global_service:
if service_regions:
if self.identity.profile_region in service_regions:
service_regions = [self.identity.profile_region]
service_regions = service_regions[:1]
for region in service_regions:
regional_client = self.session.session.client(
service, region_name=region, config=self.session.session_config
)
regional_client.region = region
regional_clients[region] = regional_client
return regional_clients
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def get_available_aws_service_regions(self, service: str) -> list:
# Get json locally
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
with open_file(f"{actual_directory}/{aws_services_json_file}") as f:
data = parse_json_file(f)
# Check if it is a subservice
json_regions = data["services"][service]["regions"][self.identity.partition]
if (
self.identity.audited_regions
): # Check for input aws audit_info.audited_regions
regions = list(
set(json_regions).intersection(self.identity.audited_regions)
) # Get common regions between input and json
else: # Get all regions from json of the service and partition
regions = json_regions
return regions
def get_aws_available_regions():
try:
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
with open_file(f"{actual_directory}/{aws_services_json_file}") as f:
data = parse_json_file(f)
regions = set()
for service in data["services"].values():
for partition in service["regions"]:
for item in service["regions"][partition]:
regions.add(item)
return list(regions)
except Exception as error:
logger.error(f"{error.__class__.__name__}: {error}")
return []
def get_checks_from_input_arn(audit_resources: list, provider: str) -> set:
"""get_checks_from_input_arn gets the list of checks from the input arns"""
checks_from_arn = set()
is_subservice_in_checks = False
# Handle if there are audit resources so only their services are executed
if audit_resources:
services_without_subservices = ["guardduty", "kms", "s3", "elb", "efs"]
service_list = set()
sub_service_list = set()
for resource in audit_resources:
service = resource.split(":")[2]
sub_service = resource.split(":")[5].split("/")[0].replace("-", "_")
# WAF services do not have checks
if service != "wafv2" and service != "waf":
# Parse services when they are different in the ARNs
if service == "lambda":
service = "awslambda"
elif service == "elasticloadbalancing":
service = "elb"
elif service == "elasticfilesystem":
service = "efs"
elif service == "logs":
service = "cloudwatch"
# Check if Prowler has checks in service
try:
list_modules(provider, service)
except ModuleNotFoundError:
# Service is not supported
pass
else:
service_list.add(service)
# Get subservices to execute only applicable checks
if service not in services_without_subservices:
# Parse some specific subservices
if service == "ec2":
if sub_service == "security_group":
sub_service = "securitygroup"
if sub_service == "network_acl":
sub_service = "networkacl"
if sub_service == "image":
sub_service = "ami"
if service == "rds":
if sub_service == "cluster_snapshot":
sub_service = "snapshot"
sub_service_list.add(sub_service)
else:
sub_service_list.add(service)
checks = recover_checks_from_service(service_list, provider)
# Filter only checks with audited subservices
for check in checks:
if any(sub_service in check for sub_service in sub_service_list):
if not (sub_service == "policy" and "password_policy" in check):
checks_from_arn.add(check)
is_subservice_in_checks = True
if not is_subservice_in_checks:
checks_from_arn = checks
# Return final checks list
return sorted(checks_from_arn)
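The service and subservice extraction above splits the ARN on ":" and "/"; an illustrative example with a made-up resource:

arn = "arn:aws:ec2:eu-west-1:112233445566:security-group/sg-0a1b2c3d"  # hypothetical
service = arn.split(":")[2]                                      # "ec2"
sub_service = arn.split(":")[5].split("/")[0].replace("-", "_")  # "security_group"
print(service, sub_service)  # later mapped to the "securitygroup" subservice for ec2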
def get_regions_from_audit_resources(audit_resources: list) -> set:
"""get_regions_from_audit_resources gets the regions from the audit resources arns"""
audited_regions = set()
for resource in audit_resources:
region = resource.split(":")[3]
if region:
audited_regions.add(region)
return audited_regions
def get_tagged_resources(self, input_resource_tags: list):
"""
get_tagged_resources returns a list of the resources that are going to be scanned based on the given input tags
"""
try:
resource_tags = []
tagged_resources = []
for tag in input_resource_tags:
key = tag.split("=")[0]
value = tag.split("=")[1]
resource_tags.append({"Key": key, "Values": [value]})
# Get Resources with resource_tags for all regions
for regional_client in self.generate_regional_clients(
"resourcegroupstaggingapi"
).values():
try:
get_resources_paginator = regional_client.get_paginator(
"get_resources"
)
for page in get_resources_paginator.paginate(
TagFilters=resource_tags
):
for resource in page["ResourceTagMappingList"]:
tagged_resources.append(resource["ResourceARN"])
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
sys.exit(1)
else:
return tagged_resources
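The Key=Value parsing above produces the TagFilters shape expected by the Resource Groups Tagging API; a sketch with hypothetical tags:

input_resource_tags = ["environment=dev", "owner=security"]  # hypothetical --resource-tags input
resource_tags = [
    {"Key": tag.split("=")[0], "Values": [tag.split("=")[1]]} for tag in input_resource_tags
]
print(resource_tags)
# [{'Key': 'environment', 'Values': ['dev']}, {'Key': 'owner', 'Values': ['security']}]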
def get_default_region(self, service: str) -> str:
"""get_default_region gets the default region based on the profile and audited service regions"""
service_regions = self.get_available_aws_service_regions(service)
default_region = (
self.get_global_region()
) # global region of the partition when all regions are audited and there is no profile region
if self.identity.profile_region in service_regions:
# return profile region only if it is audited
default_region = self.identity.profile_region
# return first audited region if specific regions are audited
elif self.identity.audited_regions:
default_region = self.identity.audited_regions[0]
return default_region
def get_global_region(self) -> str:
"""get_global_region gets the global region based on the audited partition"""
global_region = "us-east-1"
if self.identity.partition == "aws-cn":
global_region = "cn-north-1"
elif self.identity.partition == "aws-us-gov":
global_region = "us-gov-east-1"
elif "aws-iso" in self.identity.partition:
global_region = "aws-iso-global"
return global_region
def __input_role_mfa_token_and_code__(self) -> tuple[str, str]:
"""__input_role_mfa_token_and_code__ asks for the AWS MFA ARN and TOTP code and returns them."""
mfa_ARN = input("Enter ARN of MFA: ")
mfa_TOTP = input("Enter MFA code: ")
return (mfa_ARN.strip(), mfa_TOTP.strip())
def __set_session_config__(self, aws_retries_max_attempts: int):
session_config = Config(
retries={"max_attempts": 3, "mode": "standard"},
user_agent_extra=BOTO3_USER_AGENT_EXTRA,
)
if aws_retries_max_attempts:
# Create the new config
config = Config(
retries={
"max_attempts": aws_retries_max_attempts,
"mode": "standard",
},
)
# Merge the new retry configuration into the default one
session_config = session_config.merge(config)
return session_config
def __assume_role__(
self,
session,
sts_endpoint_region: str,
) -> dict:
try:
assume_role_arguments = {
"RoleArn": self.assumed_role.assumed_role_info.role_arn,
"RoleSessionName": "ProwlerAsessmentSession",
"DurationSeconds": self.assumed_role.assumed_role_info.session_duration,
}
# Set the info to assume the role from the partition, account and role name
if self.assumed_role.assumed_role_info.external_id:
assume_role_arguments[
"ExternalId"
] = self.assumed_role.assumed_role_info.external_id
if self.assumed_role.assumed_role_info.mfa_enabled:
mfa_ARN, mfa_TOTP = self.__input_role_mfa_token_and_code__()
assume_role_arguments["SerialNumber"] = mfa_ARN
assume_role_arguments["TokenCode"] = mfa_TOTP
# Set the STS Endpoint Region
if sts_endpoint_region is None:
sts_endpoint_region = AWS_STS_GLOBAL_ENDPOINT_REGION
sts_client = create_sts_session(session, sts_endpoint_region)
assumed_credentials = sts_client.assume_role(**assume_role_arguments)
except Exception as error:
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
else:
return assumed_credentials
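A hypothetical instantiation sketch; the attribute names mirror the getattr calls in __init__ above, and valid AWS credentials are required for the STS validation to succeed:

from argparse import Namespace

args = Namespace(
    role=None, session_duration=3600, external_id=None, sts_endpoint_region=None,
    mfa=False, profile="default", region=["eu-west-1"], organizations_role=None,
    aws_retries_max_attempts=3, ignore_unused_services=False, only_logs=True,
    resource_tags=None, resource_arn=None,
)
provider = AwsProvider(args)
print(provider.identity.account, provider.identity.profile_region)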


@@ -498,17 +498,6 @@
]
}
},
"appfabric": {
"regions": {
"aws": [
"ap-northeast-1",
"eu-west-1",
"us-east-1"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"appflow": {
"regions": {
"aws": [
@@ -808,7 +797,10 @@
"cn-north-1",
"cn-northwest-1"
],
"aws-us-gov": []
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
}
},
"artifact": {
@@ -1069,6 +1061,17 @@
]
}
},
"b2bi": {
"regions": {
"aws": [
"us-east-1",
"us-east-2",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"backup": {
"regions": {
"aws": [
@@ -1489,6 +1492,7 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
@@ -1715,6 +1719,7 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
@@ -2087,14 +2092,18 @@
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
@@ -2187,15 +2196,49 @@
"aws-us-gov": []
}
},
"cognito-identity": {
"cognito": {
"regions": {
"aws": [
"af-south-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ca-central-1",
"eu-central-1",
"eu-north-1",
"eu-south-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": [
"us-gov-west-1"
]
}
},
"cognito-identity": {
"regions": {
"aws": [
"af-south-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ca-central-1",
"eu-central-1",
"eu-north-1",
@@ -2222,12 +2265,14 @@
"cognito-idp": {
"regions": {
"aws": [
"af-south-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ca-central-1",
"eu-central-1",
"eu-north-1",
@@ -2316,15 +2361,22 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
@@ -2510,6 +2562,15 @@
]
}
},
"cost-optimization-hub": {
"regions": {
"aws": [
"us-east-1"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"costexplorer": {
"regions": {
"aws": [
@@ -2873,6 +2934,7 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
@@ -2959,6 +3021,7 @@
"cn-northwest-1"
],
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
}
@@ -2996,7 +3059,10 @@
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
}
},
"ds": {
@@ -3387,6 +3453,42 @@
]
}
},
"eks-auth": {
"regions": {
"aws": [
"af-south-1",
"ap-east-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"elastic-inference": {
"regions": {
"aws": [
@@ -3633,6 +3735,7 @@
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ca-central-1",
"eu-central-1",
"eu-north-1",
@@ -3640,6 +3743,7 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
@@ -3660,18 +3764,23 @@
"emr-serverless": {
"regions": {
"aws": [
"af-south-1",
"ap-east-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ca-central-1",
"eu-central-1",
"eu-north-1",
"eu-south-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
@@ -4689,6 +4798,7 @@
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-central-2",
@@ -4698,6 +4808,7 @@
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
@@ -4796,6 +4907,40 @@
]
}
},
"inspector-scan": {
"regions": {
"aws": [
"af-south-1",
"ap-east-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ca-central-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-south-1",
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
}
},
"inspector2": {
"regions": {
"aws": [
@@ -5613,6 +5758,44 @@
]
}
},
"launch-wizard": {
"regions": {
"aws": [
"af-south-1",
"ap-east-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ca-central-1",
"eu-central-1",
"eu-north-1",
"eu-south-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2"
],
"aws-cn": [
"cn-north-1",
"cn-northwest-1"
],
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
}
},
"launchwizard": {
"regions": {
"aws": [
@@ -5726,6 +5909,7 @@
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
@@ -5809,6 +5993,7 @@
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
@@ -6220,14 +6405,17 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-north-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -6658,6 +6846,7 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
@@ -6727,6 +6916,7 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
@@ -7103,8 +7293,11 @@
"regions": {
"aws": [
"ap-northeast-1",
"ap-northeast-2",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ca-central-1",
"eu-central-1",
"eu-west-1",
"eu-west-2",
@@ -7374,7 +7567,10 @@
"us-west-1",
"us-west-2"
],
"aws-cn": [],
"aws-cn": [
"cn-north-1",
"cn-northwest-1"
],
"aws-us-gov": []
}
},
@@ -7799,6 +7995,20 @@
]
}
},
"redshift-serverless": {
"regions": {
"aws": [
"ap-south-1",
"ca-central-1",
"eu-west-3",
"us-west-1"
],
"aws-cn": [
"cn-north-1"
],
"aws-us-gov": []
}
},
"rekognition": {
"regions": {
"aws": [
@@ -7823,6 +8033,16 @@
]
}
},
"repostspace": {
"regions": {
"aws": [
"eu-central-1",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"resiliencehub": {
"regions": {
"aws": [
@@ -8127,7 +8347,10 @@
"cn-north-1",
"cn-northwest-1"
],
"aws-us-gov": []
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
}
},
"route53-recovery-readiness": {
@@ -8860,6 +9083,7 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
@@ -9690,6 +9914,21 @@
]
}
},
"thinclient": {
"regions": {
"aws": [
"ap-south-1",
"ca-central-1",
"eu-central-1",
"eu-west-1",
"eu-west-2",
"us-east-1",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"timestream": {
"regions": {
"aws": [
@@ -9727,10 +9966,14 @@
"tnb": {
"regions": {
"aws": [
"ap-northeast-2",
"ap-southeast-2",
"ca-central-1",
"eu-central-1",
"eu-north-1",
"eu-south-2",
"eu-west-3",
"sa-east-1",
"us-east-1",
"us-west-2"
],
@@ -10404,6 +10647,7 @@
"eu-central-1",
"eu-west-1",
"eu-west-2",
"il-central-1",
"sa-east-1",
"us-east-1",
"us-west-2"


@@ -1,4 +1,5 @@
from argparse import ArgumentTypeError, Namespace
from re import search
from prowler.providers.aws.aws_provider import get_aws_available_regions
from prowler.providers.aws.lib.arn.arn import arn_type
@@ -26,12 +27,6 @@ def init_parser(self):
help="ARN of the role to be assumed",
# Pending ARN validation
)
aws_auth_subparser.add_argument(
"--sts-endpoint-region",
nargs="?",
default=None,
help="Specify the AWS STS endpoint region to use. Read more at https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html",
)
aws_auth_subparser.add_argument(
"--mfa",
action="store_true",
@@ -84,6 +79,11 @@ def init_parser(self):
action="store_true",
help="Skip updating previous findings of Prowler in Security Hub",
)
aws_security_hub_subparser.add_argument(
"--send-sh-only-fails",
action="store_true",
help="Send only Prowler failed findings to SecurityHub",
)
# AWS Quick Inventory
aws_quick_inventory_subparser = aws_parser.add_argument_group("Quick Inventory")
aws_quick_inventory_subparser.add_argument(
@@ -99,6 +99,7 @@ def init_parser(self):
"-B",
"--output-bucket",
nargs="?",
type=validate_bucket,
default=None,
help="Custom output bucket, requires -M <mode> and it can work also with -o flag.",
)
@@ -106,6 +107,7 @@ def init_parser(self):
"-D",
"--output-bucket-no-assume",
nargs="?",
type=validate_bucket,
default=None,
help="Same as -B but do not use the assumed role credentials to put objects to the bucket, instead uses the initial credentials.",
)
@@ -117,15 +119,16 @@ def init_parser(self):
default=None,
help="Shodan API key used by check ec2_elastic_ip_shodan.",
)
# Allowlist
allowlist_subparser = aws_parser.add_argument_group("Allowlist")
allowlist_subparser.add_argument(
# Mute List
mutelist_subparser = aws_parser.add_argument_group("Mute List")
mutelist_subparser.add_argument(
"-w",
"--allowlist-file",
"--mutelist-file",
nargs="?",
default=None,
help="Path for allowlist yaml file. See example prowler/config/aws_allowlist.yaml for reference and format. It also accepts AWS DynamoDB Table or Lambda ARNs or S3 URIs, see more in https://docs.prowler.cloud/en/latest/tutorials/allowlist/",
help="Path for mutelist yaml file. See example prowler/config/aws_mutelist.yaml for reference and format. It also accepts AWS DynamoDB Table or Lambda ARNs or S3 URIs, see more in https://docs.prowler.cloud/en/latest/tutorials/mutelist/",
)
# Based Scans
aws_based_scans_subparser = aws_parser.add_argument_group("AWS Based Scans")
aws_based_scans_parser = aws_based_scans_subparser.add_mutually_exclusive_group()
@@ -184,3 +187,13 @@ def validate_arguments(arguments: Namespace) -> tuple[bool, str]:
return (False, "To use -I/-T options -R option is needed")
return (True, "")
def validate_bucket(bucket_name):
"""validate_bucket validates that the input bucket_name is valid"""
if search("(?!(^xn--|.+-s3alias$))^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$", bucket_name):
return bucket_name
else:
raise ArgumentTypeError(
"Bucket name must be valid (https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html)"
)
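A few illustrative inputs for validate_bucket; the bucket names are hypothetical:

for name in ("my-logs-bucket", "Invalid_Bucket", "xn--reserved"):
    try:
        print(validate_bucket(name), "accepted")
    except ArgumentTypeError as error:
        print(name, "rejected:", error)
# Only my-logs-bucket passes; the other two fail the regex above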


@@ -2,7 +2,7 @@ from boto3 import session
from botocore.config import Config
from prowler.providers.aws.config import BOTO3_USER_AGENT_EXTRA
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info, AWSAssumeRole
# Default Current Audit Info
current_audit_info = AWS_Audit_Info(
@@ -25,7 +25,7 @@ current_audit_info = AWS_Audit_Info(
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=AWS_Assume_Role(
assumed_role_info=AWSAssumeRole(
role_arn=None,
session_duration=None,
external_id=None,
@@ -38,4 +38,5 @@ current_audit_info = AWS_Audit_Info(
audit_metadata=None,
audit_config=None,
ignore_unused_services=False,
enabled_regions=set(),
)


@@ -1,4 +1,4 @@
from dataclasses import dataclass
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Optional
@@ -7,7 +7,7 @@ from botocore.config import Config
@dataclass
class AWS_Credentials:
class AWSCredentials:
aws_access_key_id: str
aws_session_token: str
aws_secret_access_key: str
@@ -15,7 +15,7 @@ class AWS_Credentials:
@dataclass
class AWS_Assume_Role:
class AWSAssumeRole:
role_arn: str
session_duration: int
external_id: str
@@ -23,7 +23,7 @@ class AWS_Assume_Role:
@dataclass
class AWS_Organizations_Info:
class AWSOrganizationsInfo:
account_details_email: str
account_details_name: str
account_details_arn: str
@@ -44,12 +44,13 @@ class AWS_Audit_Info:
audited_partition: str
profile: str
profile_region: str
credentials: AWS_Credentials
credentials: AWSCredentials
mfa_enabled: bool
assumed_role_info: AWS_Assume_Role
assumed_role_info: AWSAssumeRole
audited_regions: list
audit_resources: list
organizations_metadata: AWS_Organizations_Info
audit_metadata: Optional[Any] = None
organizations_metadata: AWSOrganizationsInfo
audit_metadata: Optional[Any]
audit_config: Optional[dict] = None
ignore_unused_services: bool = False
enabled_regions: set = field(default_factory=set)


@@ -8,16 +8,12 @@ from prowler.providers.aws.config import AWS_STS_GLOBAL_ENDPOINT_REGION
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
def validate_aws_credentials(
def validate_AWSCredentials(
session: session, input_regions: list, sts_endpoint_region: str = None
) -> dict:
try:
# For a valid STS GetCallerIdentity we have to use the right AWS Region
# Check if the --sts-endpoint-region is set
if sts_endpoint_region is not None:
aws_region = sts_endpoint_region
# If there is no region passed with -f/--region/--filter-region
elif input_regions is None or len(input_regions) == 0:
if input_regions is None or len(input_regions) == 0:
# If you have a region configured in your AWS config or credentials file
if session.region_name is not None:
aws_region = session.region_name
@@ -42,7 +38,7 @@ def validate_aws_credentials(
return caller_identity
def print_aws_credentials(audit_info: AWS_Audit_Info):
def print_AWSCredentials(audit_info: AWS_Audit_Info):
# Beautify audited regions, set "all" if there is no filter region
regions = (
", ".join(audit_info.audited_regions)


@@ -9,7 +9,7 @@ from schema import Optional, Schema
from prowler.lib.logger import logger
from prowler.lib.outputs.models import unroll_tags
allowlist_schema = Schema(
mutelist_schema = Schema(
{
"Accounts": {
str: {
@@ -32,38 +32,38 @@ allowlist_schema = Schema(
)
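An illustrative mute list file matching the schema above and the "Mute List" key read by parse_mutelist_file below; the account ID, check name, and values are hypothetical:

Mute List:
  Accounts:
    "112233445566":
      Checks:
        "s3_bucket_public_access":
          Regions:
            - "eu-west-1"
          Resources:
            - "my-logs-bucket"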
def parse_allowlist_file(audit_info, allowlist_file):
def parse_mutelist_file(audit_info, mutelist_file):
try:
# Check if file is a S3 URI
if re.search("^s3://([^/]+)/(.*?([^/]+))$", allowlist_file):
bucket = allowlist_file.split("/")[2]
key = ("/").join(allowlist_file.split("/")[3:])
if re.search("^s3://([^/]+)/(.*?([^/]+))$", mutelist_file):
bucket = mutelist_file.split("/")[2]
key = ("/").join(mutelist_file.split("/")[3:])
s3_client = audit_info.audit_session.client("s3")
allowlist = yaml.safe_load(
mutelist = yaml.safe_load(
s3_client.get_object(Bucket=bucket, Key=key)["Body"]
)["Allowlist"]
)["Mute List"]
# Check if file is a Lambda Function ARN
elif re.search(r"^arn:(\w+):lambda:", allowlist_file):
lambda_region = allowlist_file.split(":")[3]
elif re.search(r"^arn:(\w+):lambda:", mutelist_file):
lambda_region = mutelist_file.split(":")[3]
lambda_client = audit_info.audit_session.client(
"lambda", region_name=lambda_region
)
lambda_response = lambda_client.invoke(
FunctionName=allowlist_file, InvocationType="RequestResponse"
FunctionName=mutelist_file, InvocationType="RequestResponse"
)
lambda_payload = lambda_response["Payload"].read()
allowlist = yaml.safe_load(lambda_payload)["Allowlist"]
mutelist = yaml.safe_load(lambda_payload)["Mute List"]
# Check if file is a DynamoDB ARN
elif re.search(
r"^arn:aws(-cn|-us-gov)?:dynamodb:[a-z]{2}-[a-z-]+-[1-9]{1}:[0-9]{12}:table\/[a-zA-Z0-9._-]+$",
allowlist_file,
mutelist_file,
):
allowlist = {"Accounts": {}}
table_region = allowlist_file.split(":")[3]
mutelist = {"Accounts": {}}
table_region = mutelist_file.split(":")[3]
dynamodb_resource = audit_info.audit_session.resource(
"dynamodb", region_name=table_region
)
dynamo_table = dynamodb_resource.Table(allowlist_file.split("/")[1])
dynamo_table = dynamodb_resource.Table(mutelist_file.split("/")[1])
response = dynamo_table.scan(
FilterExpression=Attr("Accounts").is_in(
[audit_info.audited_account, "*"]
@@ -80,8 +80,8 @@ def parse_allowlist_file(audit_info, allowlist_file):
)
dynamodb_items.update(response["Items"])
for item in dynamodb_items:
# Create allowlist for every item
allowlist["Accounts"][item["Accounts"]] = {
# Create mutelist for every item
mutelist["Accounts"][item["Accounts"]] = {
"Checks": {
item["Checks"]: {
"Regions": item["Regions"],
@@ -90,24 +90,24 @@ def parse_allowlist_file(audit_info, allowlist_file):
}
}
if "Tags" in item:
allowlist["Accounts"][item["Accounts"]]["Checks"][item["Checks"]][
mutelist["Accounts"][item["Accounts"]]["Checks"][item["Checks"]][
"Tags"
] = item["Tags"]
if "Exceptions" in item:
allowlist["Accounts"][item["Accounts"]]["Checks"][item["Checks"]][
mutelist["Accounts"][item["Accounts"]]["Checks"][item["Checks"]][
"Exceptions"
] = item["Exceptions"]
else:
with open(allowlist_file) as f:
allowlist = yaml.safe_load(f)["Allowlist"]
with open(mutelist_file) as f:
mutelist = yaml.safe_load(f)["Mute List"]
try:
allowlist_schema.validate(allowlist)
mutelist_schema.validate(mutelist)
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- Allowlist YAML is malformed - {error}[{error.__traceback__.tb_lineno}]"
f"{error.__class__.__name__} -- Mute List YAML is malformed - {error}[{error.__traceback__.tb_lineno}]"
)
sys.exit(1)
return allowlist
return mutelist
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
@@ -115,27 +115,27 @@ def parse_allowlist_file(audit_info, allowlist_file):
sys.exit(1)
def allowlist_findings(
allowlist: dict,
def mutelist_findings(
mutelist: dict,
audited_account: str,
check_findings: [Any],
):
# Check if finding is allowlisted
# Check if finding is muted
for finding in check_findings:
if is_allowlisted(
allowlist,
if is_muted(
mutelist,
audited_account,
finding.check_metadata.CheckID,
finding.region,
finding.resource_id,
unroll_tags(finding.resource_tags),
):
finding.status = "WARNING"
finding.status = "MUTED"
return check_findings
def is_allowlisted(
allowlist: dict,
def is_muted(
mutelist: dict,
audited_account: str,
check: str,
finding_region: str,
@@ -143,31 +143,30 @@ def is_allowlisted(
finding_tags,
):
try:
allowlisted_checks = {}
# By default is not allowlisted
is_finding_allowlisted = False
# First set account key from allowlist dict
if audited_account in allowlist["Accounts"]:
allowlisted_checks = allowlist["Accounts"][audited_account]["Checks"]
muted_checks = {}
# By default is not muted
is_finding_muted = False
# First set account key from mutelist dict
if audited_account in mutelist["Accounts"]:
muted_checks = mutelist["Accounts"][audited_account]["Checks"]
# If there is a *, it affects to all accounts
# This cannot be elif since in the case of * and single accounts we
# want to merge allowlisted checks from * to the other accounts check list
if "*" in allowlist["Accounts"]:
checks_multi_account = allowlist["Accounts"]["*"]["Checks"]
allowlisted_checks.update(checks_multi_account)
# Test if it is allowlisted
if is_allowlisted_in_check(
allowlisted_checks,
# want to merge muted checks from * to the other accounts check list
if "*" in mutelist["Accounts"]:
checks_multi_account = mutelist["Accounts"]["*"]["Checks"]
muted_checks.update(checks_multi_account)
# Test if it is muted
if is_muted_in_check(
muted_checks,
audited_account,
check,
finding_region,
finding_resource,
finding_tags,
):
is_finding_allowlisted = True
is_finding_muted = True
return is_finding_allowlisted
return is_finding_muted
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
@@ -175,8 +174,8 @@ def is_allowlisted(
sys.exit(1)
def is_allowlisted_in_check(
allowlisted_checks,
def is_muted_in_check(
muted_checks,
audited_account,
check,
finding_region,
@@ -184,15 +183,15 @@ def is_allowlisted_in_check(
finding_tags,
):
try:
# Default value is not allowlisted
is_check_allowlisted = False
# Default value is not muted
is_check_muted = False
for allowlisted_check, allowlisted_check_info in allowlisted_checks.items():
for muted_check, muted_check_info in muted_checks.items():
# map lambda to awslambda
allowlisted_check = re.sub("^lambda", "awslambda", allowlisted_check)
muted_check = re.sub("^lambda", "awslambda", muted_check)
# Check if the finding is excepted
exceptions = allowlisted_check_info.get("Exceptions")
exceptions = muted_check_info.get("Exceptions")
if is_excepted(
exceptions,
audited_account,
@@ -203,40 +202,36 @@ def is_allowlisted_in_check(
# Break loop and return the default value since it is excepted
break
allowlisted_regions = allowlisted_check_info.get("Regions")
allowlisted_resources = allowlisted_check_info.get("Resources")
allowlisted_tags = allowlisted_check_info.get("Tags")
muted_regions = muted_check_info.get("Regions")
muted_resources = muted_check_info.get("Resources")
muted_tags = muted_check_info.get("Tags")
# If there is a *, it affects to all checks
if (
"*" == allowlisted_check
or check == allowlisted_check
or re.search(allowlisted_check, check)
"*" == muted_check
or check == muted_check
or re.search(muted_check, check)
):
allowlisted_in_check = True
allowlisted_in_region = is_allowlisted_in_region(
allowlisted_regions, finding_region
)
allowlisted_in_resource = is_allowlisted_in_resource(
allowlisted_resources, finding_resource
)
allowlisted_in_tags = is_allowlisted_in_tags(
allowlisted_tags, finding_tags
muted_in_check = True
muted_in_region = is_muted_in_region(muted_regions, finding_region)
muted_in_resource = is_muted_in_resource(
muted_resources, finding_resource
)
muted_in_tags = is_muted_in_tags(muted_tags, finding_tags)
# For a finding to be allowlisted requires the following set to True:
# - allowlisted_in_check -> True
# - allowlisted_in_region -> True
# - allowlisted_in_tags -> True or allowlisted_in_resource -> True
# For a finding to be muted requires the following set to True:
# - muted_in_check -> True
# - muted_in_region -> True
# - muted_in_tags -> True or muted_in_resource -> True
# - excepted -> False
if (
allowlisted_in_check
and allowlisted_in_region
and (allowlisted_in_tags or allowlisted_in_resource)
muted_in_check
and muted_in_region
and (muted_in_tags or muted_in_resource)
):
is_check_allowlisted = True
is_check_muted = True
return is_check_allowlisted
return is_check_muted
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
@@ -244,12 +239,12 @@ def is_allowlisted_in_check(
sys.exit(1)
def is_allowlisted_in_region(
allowlisted_regions,
def is_muted_in_region(
mutelist_regions,
finding_region,
):
try:
return __is_item_matched__(allowlisted_regions, finding_region)
return __is_item_matched__(mutelist_regions, finding_region)
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
@@ -257,9 +252,9 @@ def is_allowlisted_in_region(
sys.exit(1)
def is_allowlisted_in_tags(allowlisted_tags, finding_tags):
def is_muted_in_tags(muted_tags, finding_tags):
try:
return __is_item_matched__(allowlisted_tags, finding_tags)
return __is_item_matched__(muted_tags, finding_tags)
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
@@ -267,9 +262,9 @@ def is_allowlisted_in_tags(allowlisted_tags, finding_tags):
sys.exit(1)
def is_allowlisted_in_resource(allowlisted_resources, finding_resource):
def is_muted_in_resource(muted_resources, finding_resource):
try:
return __is_item_matched__(allowlisted_resources, finding_resource)
return __is_item_matched__(muted_resources, finding_resource)
except Exception as error:
logger.critical(


@@ -3,12 +3,12 @@ import sys
from boto3 import client
from prowler.lib.logger import logger
from prowler.providers.aws.lib.audit_info.models import AWS_Organizations_Info
from prowler.providers.aws.lib.audit_info.models import AWSOrganizationsInfo
def get_organizations_metadata(
metadata_account: str, assumed_credentials: dict
) -> AWS_Organizations_Info:
) -> AWSOrganizationsInfo:
try:
organizations_client = client(
"organizations",
@@ -30,7 +30,7 @@ def get_organizations_metadata(
account_details_tags = ""
for tag in list_tags_for_resource["Tags"]:
account_details_tags += tag["Key"] + ":" + tag["Value"] + ","
organizations_info = AWS_Organizations_Info(
organizations_info = AWSOrganizationsInfo(
account_details_email=organizations_metadata["Account"]["Email"],
account_details_name=organizations_metadata["Account"]["Name"],
account_details_arn=organizations_metadata["Account"]["Arn"],


@@ -1,8 +1,11 @@
def is_account_only_allowed_in_condition(
condition_statement: dict, source_account: str
def is_condition_block_restrictive(
condition_statement: dict, source_account: str, is_cross_account_allowed=False
):
"""
is_account_only_allowed_in_condition parses the IAM Condition policy block and returns True if the source_account passed as argument is within, False if not.
is_condition_block_restrictive parses the IAM Condition policy block and, by default, returns True if the source_account passed as argument is within, False if not.
If the argument is_cross_account_allowed is True it tests whether the Condition block includes any of the allowed operators, returning True if it does, False if not.
@param condition_statement: dict with an IAM Condition block, e.g.:
{
@@ -54,13 +57,19 @@ def is_account_only_allowed_in_condition(
condition_statement[condition_operator][value],
list,
):
# if there is an arn/account without the source account -> we do not consider it safe
# here by default we assume is true and look for false entries
is_condition_key_restrictive = True
for item in condition_statement[condition_operator][value]:
if source_account not in item:
is_condition_key_restrictive = False
break
# if cross account is not allowed check for each condition block looking for accounts
# different than default
if not is_cross_account_allowed:
# if there is an arn/account without the source account -> we do not consider it safe
# here by default we assume is true and look for false entries
for item in condition_statement[condition_operator][value]:
if source_account not in item:
is_condition_key_restrictive = False
break
if is_condition_key_restrictive:
is_condition_valid = True
if is_condition_key_restrictive:
is_condition_valid = True
@@ -70,10 +79,13 @@ def is_account_only_allowed_in_condition(
condition_statement[condition_operator][value],
str,
):
if (
source_account
in condition_statement[condition_operator][value]
):
if is_cross_account_allowed:
is_condition_valid = True
else:
if (
source_account
in condition_statement[condition_operator][value]
):
is_condition_valid = True
return is_condition_valid
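A hypothetical call sketch, assuming the operator and key filtering elsewhere in the function accepts a StringEquals / aws:SourceAccount block:

condition = {"StringEquals": {"aws:SourceAccount": "112233445566"}}  # hypothetical block
print(is_condition_block_restrictive(condition, "112233445566"))        # True: only the source account
print(is_condition_block_restrictive(condition, "999988887777"))        # False: a different account
print(is_condition_block_restrictive(condition, "999988887777", True))  # True: cross-account tolerated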


@@ -1,5 +1,3 @@
import sys
from prowler.config.config import (
csv_file_suffix,
html_file_suffix,
@@ -41,10 +39,9 @@ def send_to_s3_bucket(
s3_client.upload_file(file_name, output_bucket_name, object_name)
except Exception as error:
logger.critical(
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
def get_s3_object_path(output_directory: str) -> str:


@@ -14,20 +14,22 @@ def prepare_security_hub_findings(
findings: [], audit_info: AWS_Audit_Info, output_options, enabled_regions: []
) -> dict:
security_hub_findings_per_region = {}
# Create a key per region
for region in audit_info.audited_regions:
# Create a key per audited region
for region in enabled_regions:
security_hub_findings_per_region[region] = []
for finding in findings:
# We don't send the INFO findings to AWS Security Hub
if finding.status == "INFO":
# We don't send the MANUAL findings to AWS Security Hub
if finding.status == "MANUAL":
continue
# We don't send findings to not enabled regions
if finding.region not in enabled_regions:
continue
# Handle quiet mode
if output_options.is_quiet and finding.status != "FAIL":
# Handle status filters, if any
if output_options.status and finding.status not in output_options.status:
continue
# Get the finding region
@@ -47,8 +49,10 @@ def prepare_security_hub_findings(
def verify_security_hub_integration_enabled_per_region(
partition: str,
region: str,
session: session.Session,
aws_account_number: str,
) -> bool:
f"""verify_security_hub_integration_enabled returns True if the {SECURITY_HUB_INTEGRATION_NAME} is enabled for the given region. Otherwise returns false."""
prowler_integration_enabled = False
@@ -62,7 +66,8 @@ def verify_security_hub_integration_enabled_per_region(
security_hub_client.describe_hub()
# Check if Prowler integration is enabled in Security Hub
if "prowler/prowler" not in str(
security_hub_prowler_integration_arn = f"arn:{partition}:securityhub:{region}:{aws_account_number}:product-subscription/{SECURITY_HUB_INTEGRATION_NAME}"
if security_hub_prowler_integration_arn not in str(
security_hub_client.list_enabled_products_for_import()
):
logger.error(


@@ -1,10 +1,16 @@
import threading
from concurrent.futures import ThreadPoolExecutor, as_completed
from functools import wraps
from prowler.lib.logger import logger
from prowler.lib.ui.live_display import live_display
from prowler.providers.aws.aws_provider import (
generate_regional_clients,
get_default_region,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.aws.aws_provider_new import AwsProvider
MAX_WORKERS = 10
class AWSService:
@@ -12,21 +18,22 @@ class AWSService:
- AWS Regional Clients
- Shared information like the account ID and ARN, the AWS partition and the checks audited
- AWS Session
- Thread pool for the __threading_call__
- Also handles if the AWS Service is Global
"""
def __init__(self, service: str, audit_info: AWS_Audit_Info, global_service=False):
def __init__(self, service: str, provider: AwsProvider, global_service=False):
# Audit Information
self.audit_info = audit_info
self.audited_account = audit_info.audited_account
self.audited_account_arn = audit_info.audited_account_arn
self.audited_partition = audit_info.audited_partition
self.audit_resources = audit_info.audit_resources
self.audited_checks = audit_info.audit_metadata.expected_checks
self.audit_config = audit_info.audit_config
self.provider = provider
self.audited_account = provider.identity.account
self.audited_account_arn = provider.identity.account_arn
self.audited_partition = provider.identity.partition
self.audit_resources = provider.audit_resources
self.audited_checks = provider.audit_metadata.expected_checks
self.audit_config = provider.audit_config
# AWS Session
self.session = audit_info.audit_session
self.session = provider.session.session
# We receive the service using __class__.__name__ or the service name in lowercase
# e.g.: AccessAnalyzer --> we need a lowercase string, so service.lower()
@@ -34,24 +41,105 @@ class AWSService:
# Generate Regional Clients
if not global_service:
self.regional_clients = generate_regional_clients(
self.service, audit_info, global_service
self.regional_clients = provider.generate_regional_clients(
self.service, global_service
)
# Get a single region and client if the service needs it (e.g. AWS Global Service)
# We cannot include this within an else because some services need both the regional_clients
# and a single client, like S3
self.region = get_default_region(self.service, audit_info)
self.region = provider.get_default_region(self.service)
self.client = self.session.client(self.service, self.region)
# Thread pool for __threading_call__
self.thread_pool = ThreadPoolExecutor(max_workers=MAX_WORKERS)
self.live_display_enabled = False
# Progress bar to add tasks to
service_init_section = live_display.get_client_init_section()
if service_init_section:
# Only set when the live display is active (--only-logs not used)
self.task_progress_bar = service_init_section.task_progress_bar
self.progress_tasks = []
# For use in other functions
self.live_display_enabled = True
def __get_session__(self):
return self.session
def __threading_call__(self, call):
threads = []
for regional_client in self.regional_clients.values():
threads.append(threading.Thread(target=call, args=(regional_client,)))
for t in threads:
t.start()
for t in threads:
t.join()
def __threading_call__(self, call, iterator=None, *args, **kwargs):
# Use the provided iterator, or default to self.regional_clients
items = iterator if iterator is not None else self.regional_clients.values()
# Determine the total count for logging
item_count = len(items)
# Trim leading and trailing underscores from the call's name
call_name = call.__name__.strip("_")
# Add Capitalization
call_name = " ".join([x.capitalize() for x in call_name.split("_")])
# Print a message based on the call's name, and whether it is regional or processing a list of items
if iterator is None:
logger.info(
f"{self.service.upper()} - Starting threads for '{call_name}' function across {item_count} regions..."
)
else:
logger.info(
f"{self.service.upper()} - Starting threads for '{call_name}' function to process {item_count} items..."
)
if self.live_display_enabled:
# Setup the progress bar
task_id = self.task_progress_bar.add_task(
f"- {call_name}...", total=item_count, task_type="Service"
)
self.progress_tasks.append(task_id)
# Submit tasks to the thread pool
futures = [
self.thread_pool.submit(call, item, *args, **kwargs) for item in items
]
# Wait for all tasks to complete
for future in as_completed(futures):
try:
future.result() # Raises exceptions from the thread, if any
if self.live_display_enabled:
# Update the progress bar
self.task_progress_bar.update(task_id, advance=1)
except Exception:
# Handle exceptions if necessary
pass  # Exceptions are logged within the called function; add extra handling here if needed
# Make the task disappear once completed
# self.progress.remove_task(task_id)
@staticmethod
def progress_decorator(func):
"""
Decorator to update the progress bar before and after a function call.
To be used for methods within global services, which do not make use of the __threading_call__ function
"""
@wraps(func)
def wrapper(self, *args, **kwargs):
# Trim leading and trailing underscores from the call's name
func_name = func.__name__.strip("_")
# Add Capitalization
func_name = " ".join([x.capitalize() for x in func_name.split("_")])
if self.live_display_enabled:
task_id = self.task_progress_bar.add_task(
f"- {func_name}...", total=1, task_type="Service"
)
self.progress_tasks.append(task_id)
result = func(self, *args, **kwargs) # Execute the function
if self.live_display_enabled:
self.task_progress_bar.update(task_id, advance=1)
# self.task_progress_bar.remove_task(task_id) # Uncomment if you want to remove the task on completion
return result
return wrapper
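For orientation, a minimal sketch (not part of this diff) of how a service class consumes the reworked base class: the Example service and its methods are hypothetical, while AWSService, the provider argument, and the __threading_call__ iterator semantics come from the code above.

from prowler.providers.aws.lib.service.service import AWSService

class Example(AWSService):
    def __init__(self, provider):
        # Call AWSService's __init__ with the new AwsProvider object
        super().__init__(__class__.__name__, provider)
        self.widgets = []
        # No iterator: one thread per regional client (self.regional_clients.values())
        self.__threading_call__(self.__list_widgets__)
        # Explicit iterator: one thread per collected resource
        self.__threading_call__(self.__describe_widget__, self.widgets)

    def __list_widgets__(self, regional_client):
        # Populate self.widgets from one region (hypothetical)
        pass

    def __describe_widget__(self, widget):
        # Enrich a single widget, resolving its client via self.regional_clients (hypothetical)
        pass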

View File

@@ -0,0 +1,54 @@
from dataclasses import dataclass
from datetime import datetime
from boto3 import session
from botocore.config import Config
@dataclass
class AWSOrganizationsInfo:
account_details_email: str
account_details_name: str
account_details_arn: str
account_details_org: str
account_details_tags: str
@dataclass
class AWSCredentials:
aws_access_key_id: str
aws_session_token: str
aws_secret_access_key: str
expiration: datetime
@dataclass
class AWSAssumeRole:
role_arn: str
session_duration: int
external_id: str
mfa_enabled: bool
@dataclass
class AWSAssumeRoleConfiguration:
assumed_role_info: AWSAssumeRole
assumed_role_credentials: AWSCredentials
@dataclass
class AWSIdentityInfo:
account: str
account_arn: str
user_id: str
partition: str
identity_arn: str
profile: str
profile_region: str
audited_regions: list
@dataclass
class AWSSession:
session: session.Session
session_config: Config
original_session: None
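A hedged sketch of wiring these dataclasses together; every value below is illustrative, not taken from the diff.

from boto3 import session
from botocore.config import Config

aws_session = AWSSession(
    session=session.Session(region_name="eu-west-1"),
    session_config=Config(retries={"max_attempts": 3, "mode": "standard"}),
    original_session=None,
)
identity = AWSIdentityInfo(
    account="123456789012",
    account_arn="arn:aws:iam::123456789012:root",
    user_id="AIDAEXAMPLE",
    partition="aws",
    identity_arn="arn:aws:iam::123456789012:user/auditor",
    profile="default",
    profile_region="eu-west-1",
    audited_regions=["eu-west-1"],
)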

View File

@@ -1,6 +1,6 @@
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
from prowler.providers.aws.services.accessanalyzer.accessanalyzer_service import (
AccessAnalyzer,
)
from prowler.providers.common.common import get_global_provider
accessanalyzer_client = AccessAnalyzer(current_audit_info)
accessanalyzer_client = AccessAnalyzer(get_global_provider())

View File

@@ -19,17 +19,23 @@ class accessanalyzer_enabled(Check):
f"IAM Access Analyzer {analyzer.name} is enabled."
)
elif analyzer.status == "NOT_AVAILABLE":
report.status = "FAIL"
report.status_extended = (
f"IAM Access Analyzer in account {analyzer.name} is not enabled."
)
else:
report.status = "FAIL"
report.status_extended = (
f"IAM Access Analyzer {analyzer.name} is not active."
)
if analyzer.status == "NOT_AVAILABLE":
report.status = "FAIL"
report.status_extended = f"IAM Access Analyzer in account {analyzer.name} is not enabled."
else:
report.status = "FAIL"
report.status_extended = (
f"IAM Access Analyzer {analyzer.name} is not active."
)
if (
accessanalyzer_client.audit_config.get(
"mute_non_default_regions", False
)
and analyzer.region != accessanalyzer_client.region
):
report.status = "MUTED"
findings.append(report)
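A minimal sketch of the muting rule above, with a stand-in audit_config dict (the real one is loaded from Prowler's configuration file) and an assumed default region; only the "mute_non_default_regions" key is taken from the check.

audit_config = {"mute_non_default_regions": True}  # stand-in values
default_region = "us-east-1"

def resolve_status(status: str, analyzer_region: str) -> str:
    # Findings outside the default region are muted when the flag is set
    if audit_config.get("mute_non_default_regions", False) and analyzer_region != default_region:
        return "MUTED"
    return status

print(resolve_status("FAIL", "eu-west-1"))  # -> MUTED
print(resolve_status("FAIL", "us-east-1"))  # -> FAIL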

View File

@@ -10,9 +10,9 @@ from prowler.providers.aws.lib.service.service import AWSService
################## AccessAnalyzer
class AccessAnalyzer(AWSService):
def __init__(self, audit_info):
def __init__(self, provider):
# Call AWSService's __init__
super().__init__(__class__.__name__, audit_info)
super().__init__(__class__.__name__, provider)
self.analyzers = []
self.__threading_call__(self.__list_analyzers__)
self.__list_findings__()
@@ -85,21 +85,36 @@ class AccessAnalyzer(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
# TODO: We need to include ListFindingsV2
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/accessanalyzer/client/list_findings_v2.html
def __list_findings__(self):
logger.info("AccessAnalyzer - Listing Findings per Analyzer...")
try:
for analyzer in self.analyzers:
if analyzer.status == "ACTIVE":
regional_client = self.regional_clients[analyzer.region]
list_findings_paginator = regional_client.get_paginator(
"list_findings"
try:
if analyzer.status == "ACTIVE":
regional_client = self.regional_clients[analyzer.region]
list_findings_paginator = regional_client.get_paginator(
"list_findings"
)
for page in list_findings_paginator.paginate(
analyzerArn=analyzer.arn
):
for finding in page["findings"]:
analyzer.findings.append(Finding(id=finding["id"]))
except ClientError as error:
if error.response["Error"]["Code"] == "ValidationException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
for page in list_findings_paginator.paginate(
analyzerArn=analyzer.arn
):
for finding in page["findings"]:
analyzer.findings.append(Finding(id=finding["id"]))
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File

@@ -1,4 +1,4 @@
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
from prowler.providers.aws.services.account.account_service import Account
from prowler.providers.common.common import get_global_provider
account_client = Account(current_audit_info)
account_client = Account(get_global_provider())

View File

@@ -10,6 +10,6 @@ class account_maintain_current_contact_details(Check):
report.region = account_client.region
report.resource_id = account_client.audited_account
report.resource_arn = account_client.audited_account_arn
report.status = "INFO"
report.status_extended = "Manual check: Login to the AWS Console. Choose your account name on the top right of the window -> My Account -> Contact Information."
report.status = "MANUAL"
report.status_extended = "Login to the AWS Console. Choose your account name on the top right of the window -> My Account -> Contact Information."
return [report]

View File

@@ -10,6 +10,6 @@ class account_security_contact_information_is_registered(Check):
report.region = account_client.region
report.resource_id = account_client.audited_account
report.resource_arn = account_client.audited_account_arn
report.status = "INFO"
report.status_extended = "Manual check: Login to the AWS Console. Choose your account name on the top right of the window -> My Account -> Alternate Contacts -> Security Section."
report.status = "MANUAL"
report.status_extended = "Login to the AWS Console. Choose your account name on the top right of the window -> My Account -> Alternate Contacts -> Security Section."
return [report]

View File

@@ -10,6 +10,6 @@ class account_security_questions_are_registered_in_the_aws_account(Check):
report.region = account_client.region
report.resource_id = account_client.audited_account
report.resource_arn = account_client.audited_account_arn
report.status = "INFO"
report.status_extended = "Manual check: Login to the AWS Console as root. Choose your account name on the top right of the window -> My Account -> Configure Security Challenge Questions."
report.status = "MANUAL"
report.status_extended = "Login to the AWS Console as root. Choose your account name on the top right of the window -> My Account -> Configure Security Challenge Questions."
return [report]

View File

@@ -9,9 +9,9 @@ from prowler.providers.aws.lib.service.service import AWSService
class Account(AWSService):
def __init__(self, audit_info):
def __init__(self, provider):
# Call AWSService's __init__
super().__init__(__class__.__name__, audit_info)
super().__init__(__class__.__name__, provider)
self.number_of_contacts = 4
self.contact_base = self.__get_contact_information__()
self.contacts_billing = self.__get_alternate_contact__("BILLING")

View File

@@ -1,4 +1,4 @@
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
from prowler.providers.aws.services.acm.acm_service import ACM
from prowler.providers.common.common import get_global_provider
acm_client = ACM(current_audit_info)
acm_client = ACM(get_global_provider())

View File

@@ -10,13 +10,13 @@ from prowler.providers.aws.lib.service.service import AWSService
################## ACM
class ACM(AWSService):
def __init__(self, audit_info):
def __init__(self, provider):
# Call AWSService's __init__
super().__init__(__class__.__name__, audit_info)
super().__init__(__class__.__name__, provider)
self.certificates = []
self.__threading_call__(self.__list_certificates__)
self.__describe_certificates__()
self.__list_tags_for_certificate__()
self.__threading_call__(self.__describe_certificates__, self.certificates)
self.__threading_call__(self.__list_tags_for_certificate__, self.certificates)
def __list_certificates__(self, regional_client):
logger.info("ACM - Listing Certificates...")
@@ -59,33 +59,29 @@ class ACM(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __describe_certificates__(self):
logger.info("ACM - Describing Certificates...")
def __describe_certificates__(self, certificate):
try:
for certificate in self.certificates:
regional_client = self.regional_clients[certificate.region]
response = regional_client.describe_certificate(
CertificateArn=certificate.arn
)["Certificate"]
if (
response["Options"]["CertificateTransparencyLoggingPreference"]
== "ENABLED"
):
certificate.transparency_logging = True
regional_client = self.regional_clients[certificate.region]
response = regional_client.describe_certificate(
CertificateArn=certificate.arn
)["Certificate"]
if (
response["Options"]["CertificateTransparencyLoggingPreference"]
== "ENABLED"
):
certificate.transparency_logging = True
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __list_tags_for_certificate__(self):
logger.info("ACM - List Tags...")
def __list_tags_for_certificate__(self, certificate):
try:
for certificate in self.certificates:
regional_client = self.regional_clients[certificate.region]
response = regional_client.list_tags_for_certificate(
CertificateArn=certificate.arn
)["Tags"]
certificate.tags = response
regional_client = self.regional_clients[certificate.region]
response = regional_client.list_tags_for_certificate(
CertificateArn=certificate.arn
)["Tags"]
certificate.tags = response
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File

@@ -1,4 +1,4 @@
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
from prowler.providers.aws.services.apigateway.apigateway_service import APIGateway
from prowler.providers.common.common import get_global_provider
apigateway_client = APIGateway(current_audit_info)
apigateway_client = APIGateway(get_global_provider())

View File

@@ -1,7 +1,7 @@
{
"Provider": "aws",
"CheckID": "apigateway_restapi_authorizers_enabled",
"CheckTitle": "Check if API Gateway has configured authorizers.",
"CheckTitle": "Check if API Gateway has configured authorizers at api or method level.",
"CheckAliases": [
"apigateway_authorizers_enabled"
],
@@ -13,7 +13,7 @@
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"Severity": "medium",
"ResourceType": "AwsApiGatewayRestApi",
"Description": "Check if API Gateway has configured authorizers.",
"Description": "Check if API Gateway has configured authorizers at api or method level.",
"Risk": "If no authorizer is enabled anyone can use the service.",
"RelatedUrl": "",
"Remediation": {

View File

@@ -13,12 +13,41 @@ class apigateway_restapi_authorizers_enabled(Check):
report.resource_id = rest_api.name
report.resource_arn = rest_api.arn
report.resource_tags = rest_api.tags
# if there are no authorizers at api level and the resources have no methods (default case) ->
report.status = "FAIL"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have an authorizer configured at api level."
if rest_api.authorizer:
report.status = "PASS"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has an authorizer configured."
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has an authorizer configured at api level"
else:
report.status = "FAIL"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have an authorizer configured."
# when there is no api-level authorizer, check whether every resource method has its own authorizer
resources_have_methods = False
all_methods_authorized = True
resource_paths_with_unauthorized_methods = []
for resource in rest_api.resources:
# if the resource has methods, check whether they all have an authorizer configured
if resource.resource_methods:
resources_have_methods = True
for (
http_method,
authorization_method,
) in resource.resource_methods.items():
if authorization_method == "NONE":
all_methods_authorized = False
unauthorized_method = (
resource.path + " -> " + http_method
)
resource_paths_with_unauthorized_methods.append(
unauthorized_method
)
# if at least one resource has methods and all of them are authorized
if all_methods_authorized and resources_have_methods:
report.status = "PASS"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has all methods authorized"
# if at least one resource has methods but some of them are not authorized -> list them
elif not all_methods_authorized:
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have authorizers at api level and the following paths and methods are unauthorized: {'; '.join(resource_paths_with_unathorized_methods)}."
findings.append(report)
return findings

View File

@@ -9,14 +9,15 @@ from prowler.providers.aws.lib.service.service import AWSService
################## APIGateway
class APIGateway(AWSService):
def __init__(self, audit_info):
def __init__(self, provider):
# Call AWSService's __init__
super().__init__(__class__.__name__, audit_info)
super().__init__(__class__.__name__, provider)
self.rest_apis = []
self.__threading_call__(self.__get_rest_apis__)
self.__get_authorizers__()
self.__get_rest_api__()
self.__get_stages__()
self.__threading_call__(self.__get_rest_apis__, self.rest_apis)
self.__threading_call__(self.__get_authorizers__, self.rest_apis)
self.__threading_call__(self.__get_rest_api__, self.rest_apis)
self.__threading_call__(self.__get_stages__, self.rest_apis)
self.__threading_call__(self.__get_resources__, self.rest_apis)
def __get_rest_apis__(self, regional_client):
logger.info("APIGateway - Getting Rest APIs...")
@@ -42,60 +43,93 @@ class APIGateway(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_authorizers__(self):
logger.info("APIGateway - Getting Rest APIs authorizer...")
def __get_authorizers__(self, rest_api):
try:
for rest_api in self.rest_apis:
regional_client = self.regional_clients[rest_api.region]
authorizers = regional_client.get_authorizers(restApiId=rest_api.id)[
"items"
]
if authorizers:
rest_api.authorizer = True
regional_client = self.regional_clients[rest_api.region]
authorizers = regional_client.get_authorizers(restApiId=rest_api.id)[
"items"
]
if authorizers:
rest_api.authorizer = True
except Exception as error:
logger.error(f"{error.__class__.__name__}: {error}")
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_rest_api__(self):
logger.info("APIGateway - Describing Rest API...")
def __get_rest_api__(self, rest_api):
try:
for rest_api in self.rest_apis:
regional_client = self.regional_clients[rest_api.region]
rest_api_info = regional_client.get_rest_api(restApiId=rest_api.id)
if rest_api_info["endpointConfiguration"]["types"] == ["PRIVATE"]:
rest_api.public_endpoint = False
regional_client = self.regional_clients[rest_api.region]
rest_api_info = regional_client.get_rest_api(restApiId=rest_api.id)
if rest_api_info["endpointConfiguration"]["types"] == ["PRIVATE"]:
rest_api.public_endpoint = False
except Exception as error:
logger.error(f"{error.__class__.__name__}: {error}")
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_stages__(self):
logger.info("APIGateway - Getting stages for Rest APIs...")
def __get_stages__(self, rest_api):
try:
for rest_api in self.rest_apis:
regional_client = self.regional_clients[rest_api.region]
stages = regional_client.get_stages(restApiId=rest_api.id)
for stage in stages["item"]:
waf = None
logging = False
client_certificate = False
if "webAclArn" in stage:
waf = stage["webAclArn"]
if "methodSettings" in stage:
if stage["methodSettings"]:
logging = True
if "clientCertificateId" in stage:
client_certificate = True
arn = f"arn:{self.audited_partition}:apigateway:{regional_client.region}::/restapis/{rest_api.id}/stages/{stage['stageName']}"
rest_api.stages.append(
Stage(
name=stage["stageName"],
arn=arn,
logging=logging,
client_certificate=client_certificate,
waf=waf,
tags=[stage.get("tags")],
regional_client = self.regional_clients[rest_api.region]
stages = regional_client.get_stages(restApiId=rest_api.id)
for stage in stages["item"]:
waf = None
logging = False
client_certificate = False
if "webAclArn" in stage:
waf = stage["webAclArn"]
if "methodSettings" in stage:
if stage["methodSettings"]:
logging = True
if "clientCertificateId" in stage:
client_certificate = True
arn = f"arn:{self.audited_partition}:apigateway:{regional_client.region}::/restapis/{rest_api.id}/stages/{stage['stageName']}"
rest_api.stages.append(
Stage(
name=stage["stageName"],
arn=arn,
logging=logging,
client_certificate=client_certificate,
waf=waf,
tags=[stage.get("tags")],
)
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_resources__(self, rest_api):
try:
regional_client = self.regional_clients[rest_api.region]
get_resources_paginator = regional_client.get_paginator("get_resources")
for page in get_resources_paginator.paginate(restApiId=rest_api.id):
for resource in page["items"]:
id = resource["id"]
resource_methods = []
methods_auth = {}
for resource_method in resource.get("resourceMethods", {}).keys():
resource_methods.append(resource_method)
for resource_method in resource_methods:
if resource_method != "OPTIONS":
method_config = regional_client.get_method(
restApiId=rest_api.id,
resourceId=id,
httpMethod=resource_method,
)
auth_type = method_config["authorizationType"]
methods_auth.update({resource_method: auth_type})
rest_api.resources.append(
PathResourceMethods(
path=resource["path"], resource_methods=methods_auth
)
)
except Exception as error:
logger.error(f"{error.__class__.__name__}: {error}")
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
class Stage(BaseModel):
@@ -107,6 +141,11 @@ class Stage(BaseModel):
tags: Optional[list] = []
class PathResourceMethods(BaseModel):
path: str
resource_methods: dict
class RestAPI(BaseModel):
id: str
arn: str
@@ -116,3 +155,4 @@ class RestAPI(BaseModel):
public_endpoint: bool = True
stages: list[Stage] = []
tags: Optional[list] = []
resources: list[PathResourceMethods] = []
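To make the shape concrete, an illustrative pair of PathResourceMethods instances and the walk the check above performs over them; the paths are made up, and "NONE" and "AWS_IAM" are standard API Gateway authorization types.

login = PathResourceMethods(path="/login", resource_methods={"POST": "NONE"})
orders = PathResourceMethods(path="/orders", resource_methods={"GET": "AWS_IAM"})

for resource in [login, orders]:
    for http_method, auth in resource.resource_methods.items():
        if auth == "NONE":
            print(f"{resource.path} -> {http_method} is unauthorized")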

View File

@@ -1,6 +1,6 @@
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
from prowler.providers.aws.services.apigatewayv2.apigatewayv2_service import (
ApiGatewayV2,
)
from prowler.providers.common.common import get_global_provider
apigatewayv2_client = ApiGatewayV2(current_audit_info)
apigatewayv2_client = ApiGatewayV2(get_global_provider())

View File

@@ -9,13 +9,13 @@ from prowler.providers.aws.lib.service.service import AWSService
################## ApiGatewayV2
class ApiGatewayV2(AWSService):
def __init__(self, audit_info):
def __init__(self, provider):
# Call AWSService's __init__
super().__init__(__class__.__name__, audit_info)
super().__init__(__class__.__name__, provider)
self.apis = []
self.__threading_call__(self.__get_apis__)
self.__get_authorizers__()
self.__get_stages__()
self.__threading_call__(self.__get_apis__, self.apis)
self.__threading_call__(self.__get_authorizers__, self.apis)
self.__threading_call__(self.__get_stages__, self.apis)
def __get_apis__(self, regional_client):
logger.info("APIGatewayv2 - Getting APIs...")
@@ -41,36 +41,32 @@ class ApiGatewayV2(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_authorizers__(self):
logger.info("APIGatewayv2 - Getting APIs authorizer...")
def __get_authorizers__(self, api):
try:
for api in self.apis:
regional_client = self.regional_clients[api.region]
authorizers = regional_client.get_authorizers(ApiId=api.id)["Items"]
if authorizers:
api.authorizer = True
regional_client = self.regional_clients[api.region]
authorizers = regional_client.get_authorizers(ApiId=api.id)["Items"]
if authorizers:
api.authorizer = True
except Exception as error:
logger.error(
f"{error.__class__.__name__}:{error.__traceback__.tb_lineno} -- {error}"
)
def __get_stages__(self):
logger.info("APIGatewayv2 - Getting stages for APIs...")
def __get_stages__(self, api):
try:
for api in self.apis:
regional_client = self.regional_clients[api.region]
stages = regional_client.get_stages(ApiId=api.id)
for stage in stages["Items"]:
logging = False
if "AccessLogSettings" in stage:
logging = True
api.stages.append(
Stage(
name=stage["StageName"],
logging=logging,
tags=[stage.get("Tags")],
)
regional_client = self.regional_clients[api.region]
stages = regional_client.get_stages(ApiId=api.id)
for stage in stages["Items"]:
logging = False
if "AccessLogSettings" in stage:
logging = True
api.stages.append(
Stage(
name=stage["StageName"],
logging=logging,
tags=[stage.get("Tags")],
)
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}:{error.__traceback__.tb_lineno} -- {error}"

View File

@@ -1,4 +1,4 @@
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
from prowler.providers.aws.services.appstream.appstream_service import AppStream
from prowler.providers.common.common import get_global_provider
appstream_client = AppStream(current_audit_info)
appstream_client = AppStream(get_global_provider())

View File

@@ -9,12 +9,12 @@ from prowler.providers.aws.lib.service.service import AWSService
################## AppStream
class AppStream(AWSService):
def __init__(self, audit_info):
def __init__(self, provider):
# Call AWSService's __init__
super().__init__(__class__.__name__, audit_info)
super().__init__(__class__.__name__, provider)
self.fleets = []
self.__threading_call__(self.__describe_fleets__)
self.__list_tags_for_resource__()
self.__threading_call__(self.__list_tags_for_resource__, self.fleets)
def __describe_fleets__(self, regional_client):
logger.info("AppStream - Describing Fleets...")
@@ -50,15 +50,13 @@ class AppStream(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __list_tags_for_resource__(self):
logger.info("AppStream - List Tags...")
def __list_tags_for_resource__(self, fleet):
try:
for fleet in self.fleets:
regional_client = self.regional_clients[fleet.region]
response = regional_client.list_tags_for_resource(
ResourceArn=fleet.arn
)["Tags"]
fleet.tags = [response]
regional_client = self.regional_clients[fleet.region]
response = regional_client.list_tags_for_resource(ResourceArn=fleet.arn)[
"Tags"
]
fleet.tags = [response]
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File

@@ -1,4 +1,4 @@
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
from prowler.providers.aws.services.athena.athena_service import Athena
from prowler.providers.common.common import get_global_provider
athena_client = Athena(current_audit_info)
athena_client = Athena(get_global_provider())

View File

@@ -9,14 +9,18 @@ from prowler.providers.aws.lib.service.service import AWSService
################## Athena
class Athena(AWSService):
def __init__(self, audit_info):
def __init__(self, provider):
# Call AWSService's __init__
super().__init__(__class__.__name__, audit_info)
super().__init__(__class__.__name__, provider)
self.workgroups = {}
self.__threading_call__(self.__list_workgroups__)
self.__get_workgroups__()
self.__list_query_executions__()
self.__list_tags_for_resource__()
self.__threading_call__(self.__get_workgroups__, self.workgroups.values())
self.__threading_call__(
self.__list_query_executions__, self.workgroups.values()
)
self.__threading_call__(
self.__list_tags_for_resource__, self.workgroups.values()
)
def __list_workgroups__(self, regional_client):
logger.info("Athena - Listing WorkGroups...")
@@ -44,86 +48,65 @@ class Athena(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_workgroups__(self):
logger.info("Athena - Getting WorkGroups...")
def __get_workgroups__(self, workgroup):
try:
for workgroup in self.workgroups.values():
try:
wg = self.regional_clients[workgroup.region].get_work_group(
WorkGroup=workgroup.name
)
wg = self.regional_clients[workgroup.region].get_work_group(
WorkGroup=workgroup.name
)
wg_configuration = wg.get("WorkGroup").get("Configuration")
self.workgroups[
workgroup.arn
].enforce_workgroup_configuration = wg_configuration.get(
"EnforceWorkGroupConfiguration", False
)
wg_configuration = wg.get("WorkGroup").get("Configuration")
self.workgroups[
workgroup.arn
].enforce_workgroup_configuration = wg_configuration.get(
"EnforceWorkGroupConfiguration", False
)
# We include an empty EncryptionConfiguration to handle if the workgroup does not have encryption configured
encryption = (
wg_configuration.get(
"ResultConfiguration",
{"EncryptionConfiguration": {}},
)
.get(
"EncryptionConfiguration",
{"EncryptionOption": ""},
)
.get("EncryptionOption")
)
# We include an empty EncryptionConfiguration to handle if the workgroup does not have encryption configured
encryption = (
wg_configuration.get(
"ResultConfiguration",
{"EncryptionConfiguration": {}},
)
.get(
"EncryptionConfiguration",
{"EncryptionOption": ""},
)
.get("EncryptionOption")
)
if encryption in ["SSE_S3", "SSE_KMS", "CSE_KMS"]:
encryption_configuration = EncryptionConfiguration(
encryption_option=encryption, encrypted=True
)
self.workgroups[
workgroup.arn
].encryption_configuration = encryption_configuration
except Exception as error:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
if encryption in ["SSE_S3", "SSE_KMS", "CSE_KMS"]:
encryption_configuration = EncryptionConfiguration(
encryption_option=encryption, encrypted=True
)
self.workgroups[
workgroup.arn
].encryption_configuration = encryption_configuration
except Exception as error:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
f"{workgroup.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __list_query_executions__(self):
logger.info("Athena - Listing Queries...")
def __list_query_executions__(self, workgroup):
try:
for workgroup in self.workgroups.values():
try:
queries = (
self.regional_clients[workgroup.region]
.list_query_executions(WorkGroup=workgroup.name)
.get("QueryExecutionIds", [])
)
if queries:
workgroup.queries = True
except Exception as error:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
queries = (
self.regional_clients[workgroup.region]
.list_query_executions(WorkGroup=workgroup.name)
.get("QueryExecutionIds", [])
)
if queries:
workgroup.queries = True
except Exception as error:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
f"{workgroup.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __list_tags_for_resource__(self):
logger.info("Athena - Listing Tags...")
def __list_tags_for_resource__(self, workgroup):
try:
for workgroup in self.workgroups.values():
try:
regional_client = self.regional_clients[workgroup.region]
workgroup.tags = regional_client.list_tags_for_resource(
ResourceARN=workgroup.arn
)["Tags"]
except Exception as error:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
regional_client = self.regional_clients[workgroup.region]
workgroup.tags = regional_client.list_tags_for_resource(
ResourceARN=workgroup.arn
)["Tags"]
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File

@@ -12,7 +12,7 @@ class athena_workgroup_encryption(Check):
# Only check for enabled and used workgroups (has recent queries)
if (
workgroup.state == "ENABLED" and workgroup.queries
) or not athena_client.audit_info.ignore_unused_services:
) or not athena_client.provider.ignore_unused_services:
report = Check_Report_AWS(self.metadata())
report.region = workgroup.region
report.resource_id = workgroup.name

View File

@@ -12,7 +12,7 @@ class athena_workgroup_enforce_configuration(Check):
# Only check for enabled and used workgroups (has recent queries)
if (
workgroup.state == "ENABLED" and workgroup.queries
) or not athena_client.audit_info.ignore_unused_services:
) or not athena_client.provider.ignore_unused_services:
report = Check_Report_AWS(self.metadata())
report.region = workgroup.region
report.resource_id = workgroup.name

View File

@@ -1,4 +1,4 @@
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
from prowler.providers.aws.services.autoscaling.autoscaling_service import AutoScaling
from prowler.providers.common.common import get_global_provider
autoscaling_client = AutoScaling(current_audit_info)
autoscaling_client = AutoScaling(get_global_provider())

View File

@@ -7,9 +7,9 @@ from prowler.providers.aws.lib.service.service import AWSService
################## AutoScaling
class AutoScaling(AWSService):
def __init__(self, audit_info):
def __init__(self, provider):
# Call AWSService's __init__
super().__init__(__class__.__name__, audit_info)
super().__init__(__class__.__name__, provider)
self.launch_configurations = []
self.__threading_call__(self.__describe_launch_configurations__)
self.groups = []

View File

@@ -1,4 +1,4 @@
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
from prowler.providers.aws.services.awslambda.awslambda_service import Lambda
from prowler.providers.common.common import get_global_provider
awslambda_client = Lambda(current_audit_info)
awslambda_client = Lambda(get_global_provider())

View File

@@ -8,7 +8,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
class awslambda_function_invoke_api_operations_cloudtrail_logging_enabled(Check):
def execute(self):
findings = []
for function in awslambda_client.functions.values():
functions = awslambda_client.functions.values()
self.start_task("Processing functions...", len(functions))
for function in functions:
report = Check_Report_AWS(self.metadata())
report.region = function.region
report.resource_id = function.name
@@ -49,5 +51,7 @@ class awslambda_function_invoke_api_operations_cloudtrail_logging_enabled(Check)
report.status_extended = f"Lambda function {function.name} is recorded by CloudTrail trail {trail.name}."
break
findings.append(report)
self.increment_task_progress()
self.update_title_with_findings(findings)
return findings
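The start_task / increment_task_progress / update_title_with_findings calls are the live-display hooks this branch adds to checks; a generic sketch of the pattern follows, with example_check, example_client, and its resources attribute as hypothetical stand-ins.

class example_check(Check):
    def execute(self):
        findings = []
        resources = example_client.resources.values()  # hypothetical client
        self.start_task("Processing resources...", len(resources))
        for resource in resources:
            report = Check_Report_AWS(self.metadata())
            # ... populate and evaluate the report ...
            findings.append(report)
            self.increment_task_progress()
        self.update_title_with_findings(findings)
        return findings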

View File

@@ -11,57 +11,92 @@ from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_
class awslambda_function_no_secrets_in_code(Check):
def execute(self):
findings = []
for function in awslambda_client.functions.values():
if function.code:
report = Check_Report_AWS(self.metadata())
report.region = function.region
report.resource_id = function.name
report.resource_arn = function.arn
report.resource_tags = function.tags
if awslambda_client.functions:
functions = awslambda_client.functions.values()
self.start_task("Processing functions...", len(functions))
for function, function_code in awslambda_client.__get_function_code__():
if function_code:
report = Check_Report_AWS(self.metadata())
report.region = function.region
report.resource_id = function.name
report.resource_arn = function.arn
report.resource_tags = function.tags
report.status = "PASS"
report.status_extended = (
f"No secrets found in Lambda function {function.name} code."
)
with tempfile.TemporaryDirectory() as tmp_dir_name:
function.code.code_zip.extractall(tmp_dir_name)
# List all files
files_in_zip = next(os.walk(tmp_dir_name))[2]
secrets_findings = []
for file in files_in_zip:
secrets = SecretsCollection()
with default_settings():
secrets.scan_file(f"{tmp_dir_name}/{file}")
detect_secrets_output = secrets.json()
if detect_secrets_output:
for (
file_name
) in (
detect_secrets_output.keys()
): # Appears that only 1 file is being scanned at a time, so could rework this
output_file_name = file_name.replace(
f"{tmp_dir_name}/", ""
)
secrets_string = ", ".join(
[
f"{secret['type']} on line {secret['line_number']}"
for secret in detect_secrets_output[file_name]
]
)
secrets_findings.append(
f"{output_file_name}: {secrets_string}"
)
report.status = "PASS"
report.status_extended = (
f"No secrets found in Lambda function {function.name} code."
)
with tempfile.TemporaryDirectory() as tmp_dir_name:
function_code.code_zip.extractall(tmp_dir_name)
# List all files
files_in_zip = next(os.walk(tmp_dir_name))[2]
secrets_findings = []
for file in files_in_zip:
secrets = SecretsCollection()
with default_settings():
secrets.scan_file(f"{tmp_dir_name}/{file}")
detect_secrets_output = secrets.json()
if detect_secrets_output:
for (
file_name
) in (
detect_secrets_output.keys()
): # Appears that only 1 file is being scanned at a time, so could rework this
output_file_name = file_name.replace(
f"{tmp_dir_name}/", ""
)
secrets_string = ", ".join(
[
f"{secret['type']} on line {secret['line_number']}"
for secret in detect_secrets_output[
file_name
]
]
)
secrets_findings.append(
f"{output_file_name}: {secrets_string}"
)
report.status = "PASS"
report.status_extended = (
f"No secrets found in Lambda function {function.name} code."
)
with tempfile.TemporaryDirectory() as tmp_dir_name:
function_code.code_zip.extractall(tmp_dir_name)
# List all files
files_in_zip = next(os.walk(tmp_dir_name))[2]
secrets_findings = []
for file in files_in_zip:
secrets = SecretsCollection()
with default_settings():
secrets.scan_file(f"{tmp_dir_name}/{file}")
detect_secrets_output = secrets.json()
if detect_secrets_output:
for (
file_name
) in (
detect_secrets_output.keys()
): # Appears that only 1 file is being scanned at a time, so could rework this
output_file_name = file_name.replace(
f"{tmp_dir_name}/", ""
)
secrets_string = ", ".join(
[
f"{secret['type']} on line {secret['line_number']}"
for secret in detect_secrets_output[
file_name
]
]
)
secrets_findings.append(
f"{output_file_name}: {secrets_string}"
)
if secrets_findings:
final_output_string = "; ".join(secrets_findings)
report.status = "FAIL"
# report.status_extended = f"Potential {'secrets' if len(secrets_findings)>1 else 'secret'} found in Lambda function {function.name} code. {final_output_string}."
if len(secrets_findings) > 1:
report.status_extended = f"Potential secrets found in Lambda function {function.name} code -> {final_output_string}."
else:
report.status_extended = f"Potential secret found in Lambda function {function.name} code -> {final_output_string}."
# break // Don't break as there may be additional findings
findings.append(report)
if secrets_findings:
final_output_string = "; ".join(secrets_findings)
report.status = "FAIL"
report.status_extended = f"Potential {'secrets' if len(secrets_findings) > 1 else 'secret'} found in Lambda function {function.name} code -> {final_output_string}."
findings.append(report)
self.increment_task_progress()
self.update_title_with_findings(findings)
return findings
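For reference, the detect-secrets calls the check relies on can be exercised standalone; the file path below is illustrative.

from detect_secrets import SecretsCollection
from detect_secrets.settings import default_settings

secrets = SecretsCollection()
with default_settings():
    secrets.scan_file("extracted_lambda/handler.py")

# .json() maps each scanned file to the secrets found in it
for file_name, found in secrets.json().items():
    for secret in found:
        print(f"{file_name}: {secret['type']} on line {secret['line_number']}")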

View File

@@ -12,6 +12,8 @@ from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_
class awslambda_function_no_secrets_in_variables(Check):
def execute(self):
findings = []
functions = awslambda_client.functions.values()
self.start_task("Processing functions...", len(functions))
for function in functions:
report = Check_Report_AWS(self.metadata())
report.region = function.region
@@ -52,5 +54,6 @@ class awslambda_function_no_secrets_in_variables(Check):
os.remove(temp_env_data_file.name)
findings.append(report)
self.increment_task_progress()
self.update_title_with_findings(findings)
return findings

Some files were not shown because too many files have changed in this diff.